Commit 09225b2: initial commit
Amal Feriani committed Jun 9, 2023 (root commit, 0 parents)
Showing 48 changed files with 5,378 additions and 0 deletions.
Binary file added .assets/banner.png
41 changes: 41 additions & 0 deletions .github/workflows/python-package.yml
@@ -0,0 +1,41 @@
# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python

name: CI

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:

    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.8"]

    steps:
      - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v3
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install flake8 pytest
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
          pip install -e .
      - name: Lint with flake8
        run: |
          # stop the build if there are Python syntax errors or undefined names
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
      - name: Test with pytest
        run: |
          pytest
129 changes: 129 additions & 0 deletions .gitignore
@@ -0,0 +1,129 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/
11 changes: 11 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,11 @@
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.3.0
    hooks:
      - id: check-yaml
      - id: end-of-file-fixer
      - id: trailing-whitespace
  - repo: https://github.com/psf/black
    rev: 22.10.0
    hooks:
      - id: black
135 changes: 135 additions & 0 deletions README.md
@@ -0,0 +1,135 @@
<div align='center'>
<p align='center'>
<img width='50%' src='./.assets/CeBed_back.png' />
</p>


![Continuous Integration](https://github.com/SAIC-MONTREAL/CeBed/actions/workflows/python-package.yml/badge.svg)
[![License: CC BY-NC 4.0](https://img.shields.io/badge/License-CC_BY--NC_4.0-blue.svg)](https://creativecommons.org/licenses/by-nc/4.0/)
[![codestyle](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

</div>

Channel estimation test bed (CeBed) is a suite of implementations and benchmarks for OFDM channel estimation in TensorFlow.

The goal of CeBed is to unify and facilitate the replication, refinement and design of new deep channel estimators. CeBed can also serve as a baseline for building new projects and for comparing against existing algorithms in the literature. Adding a new dataset or model to the package is simple, and we welcome contributions that update or add algorithms and datasets.

For now, CeBed provides a simple interface to train and evaluate various deep channel estimation models.

# Setup

<details open>

Clone repo and install the requirements in a [**Python>=3.8.0**](https://www.python.org/) environment.
```bash
git clone https://github.com/SAIC-MONTREAL/CeBed
cd CeBed
pip install -e .
```
</details>


# Using CeBed
## Datasets
### Sionna dataset
<details>
For now, CeBed uses the link-level simulator [Sionna](https://nvlabs.github.io/sionna/) for data generation. CeBed provides an interface to generate datasets using different channel models, system parameters, pilot patterns, etc.

Here is an example of generating a `SISO` dataset using one SNR level (0 dB by default):
```bash
python scripts/generate_datasets_from_sionna.py --size 10000 --num_rx_antennas 1 --path_loss
```
The generated dataset contains:
- `x`: The transmitted symbols, a complex tensor with shape `[batch_size, num_tx, num_tx_ant, num_ofdm_symbols, num_subcarriers]`
- `h`: The channel impulse response, a complex tensor with shape `[batch_size, num_rx, num_rx_ant, num_tx, num_tx_ant, num_ofdm_symbols, num_subcarriers]`
- `y`: The received symbols, a complex tensor with shape `[batch_size, num_rx, num_rx_ant, num_ofdm_symbols, num_subcarriers]`
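
As a quick sanity check after generation, the tensor shapes can be inspected directly. The sketch below assumes the script saves the three tensors as NumPy arrays under the keys `x`, `h` and `y` in a single `.npz` file; the file name, location and on-disk format here are hypothetical and may differ in your setup:

```python
import numpy as np

# Hypothetical path: point this at wherever generate_datasets_from_sionna.py wrote the data.
data = np.load("data/sionna_siso/dataset.npz")

x, h, y = data["x"], data["h"], data["y"]
print("x:", x.shape, x.dtype)  # expect [batch_size, num_tx, num_tx_ant, num_ofdm_symbols, num_subcarriers]
print("h:", h.shape, h.dtype)  # expect [batch_size, num_rx, num_rx_ant, num_tx, num_tx_ant, num_ofdm_symbols, num_subcarriers]
print("y:", y.shape, y.dtype)  # expect [batch_size, num_rx, num_rx_ant, num_ofdm_symbols, num_subcarriers]
```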

Here is another example showing how to generate a multi-domain dataset where each SNR level is a different domain:
```bash
python scripts/generate_datasets_from_sionna.py --size 10000 --scenario umi --num_rx_antennas 1 --path_loss --num_domains 5 --start_ds 0 --end_ds 25
```
</details>

### Custom dataset
<details>
It is easy to add a new dataset to CeBed. The dataset can be generated offline using any link-level simulator like MATLAB.

Please check the tutorial in [notebooks/custom_dataset.ipynb](notebooks/custom_dataset.ipynb), which details how to use CeBed with your own dataset.
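
For data simulated in MATLAB, one possible bridge is to export the arrays to a `.mat` file and rearrange them into the tensor layout shown above. This is only a rough sketch under that assumption; the file and variable names below are hypothetical, and the exact layout CeBed expects is documented in the tutorial notebook:

```python
import numpy as np
from scipy.io import loadmat

# Hypothetical file and variable names from a MATLAB link-level simulation.
mat = loadmat("my_channel_sim.mat")

# Cast to complex64 and arrange to match the Sionna dataset layout, e.g.
# y: [batch_size, num_rx, num_rx_ant, num_ofdm_symbols, num_subcarriers]
y = mat["rx_symbols"].astype(np.complex64)
h = mat["channel_response"].astype(np.complex64)

np.savez("custom_dataset.npz", y=y, h=h)
```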
</details>

## Training

<details>
<summary>Single model training</summary>

The command below trains and evaluates a single model:
```bash
python scripts/train.py --experiment_name EXPERIMENT_NAME --seed SEED --data_dir DATADIR --epochs 100 --dataset_name SionnaOfflineMD --model_name ReEsNet --input_type low
```
</details>

<details>
<summary>Model hyperparameters</summary>

The model hyperparameters are defined in `yaml` files under [hyperparams](./hyperparams).
Make sure that the `EXPERIMENT_NAME` exists in the yaml file of each model you would like to train.
Here is an example configuration for the [ReEsNet model](./hyperparams/ReEsNet.yaml):
```yaml
MyExperimentName:
default:
hidden_size: 16
input_type: low
kernel_size: 3
lr: 0.001
n_blocks: 4
upsamling_mode: deconv
```
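
For reference, a file in this format can be read with PyYAML as below. This is only an illustration of the structure, not necessarily how CeBed loads its configs internally:

```python
import yaml

# Each top-level key is an experiment name; "default" holds the base hyperparameters.
with open("hyperparams/ReEsNet.yaml") as f:
    experiments = yaml.safe_load(f)

hparams = experiments["MyExperimentName"]["default"]
print(hparams["lr"], hparams["hidden_size"])  # 0.001 16
```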
</details>

<details>
<summary>Benchmarking all models</summary>

To reproduce the benchmarking results from our paper:
```bash
python scripts/benchmark.py --seed SEED --data_dir DATADIR --epochs 100 --experiment_name EXPERIMENT_NAME --gpus GPU_IDS
```

**Note**: The model inputs and outputs are expected to have the shape `[batch_size, num_ofdm_symbols, num_ofdm_subcarriers, num_channels]`, where `num_channels = num_rx_ant*num_tx*2`.
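
To illustrate that layout, the snippet below shows one way a complex channel tensor could be rearranged into real-valued model inputs. It assumes the factor of 2 in `num_channels` comes from stacking real and imaginary parts; both the helper and that interpretation are assumptions for illustration, not code from the repository:

```python
import numpy as np

def to_model_input(h):
    """Hypothetical helper: h is complex with shape
    [batch_size, num_rx_ant, num_tx, num_ofdm_symbols, num_subcarriers]."""
    h = np.transpose(h, (0, 3, 4, 1, 2))     # [batch, symbols, subcarriers, rx_ant, tx]
    h = np.stack([h.real, h.imag], axis=-1)  # split complex into two real channels
    b, s, c = h.shape[:3]
    return h.reshape(b, s, c, -1)            # [batch, symbols, subcarriers, rx_ant*tx*2]
```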

</details>

## Evaluation
<details>
**Evaluate a trained model**

To evaluate a model trained with CeBed:
```bash
python scripts/evaluate.py PATH_TO_MODEL
```

**Evaluate model and baselines**

You can provide a list of baselines to compare the model against:
```bash
python scripts/evaluate.py PATH_TO_MODEL LS LMMSE ALMMSE
```
</details>


# Citation
If you use our code, please cite our work.
```bibtex
@article{cebed,
  author = {Amal Feriani and Di Wu and Steve Liu and Greg Dudek},
  title = {CeBed: A Benchmark for Deep Data-Driven OFDM Channel Estimation},
  url = {https://github.com/SAIC-MONTREAL/cebed.git},
  year = {2023}
}
```

# License

The code is licensed under the [Creative Commons Attribution 4.0 License (CC BY)](https://creativecommons.org/licenses/by/4.0/).
1 change: 1 addition & 0 deletions cebed/__init__.py
@@ -0,0 +1 @@
__version__ = "1.0"