
Minor tune ups: codespell'ing (fixes + tox + CI (github actions)), remove of unintended to be committed 2 files (#239)

* Remove some outputs from scheduler which should not be in git

introduced in 2f7c826

* Add tox.ini with a codespell invocation to check for typos (run with -w to fix them)

* [DATALAD RUNCMD] Run codespell with -w to fix typos

=== Do not change lines below ===
{
 "chain": [],
 "cmd": "tox -e codespell -- -w",
 "exit": 0,
 "extra_inputs": [],
 "inputs": [],
 "outputs": [],
 "pwd": "."
}
^^^ Do not change lines above ^^^

* Add GitHub Actions workflow (lint) which would run codespell

You might later add other linters into the same workflow, so I called it lint.
yarikoptic committed May 3, 2023
1 parent ecb539f commit 2a83f7c
Showing 11 changed files with 61 additions and 26 deletions.
26 changes: 26 additions & 0 deletions .github/workflows/lint.yml
@@ -0,0 +1,26 @@
name: Linters

on:
- push
- pull_request

jobs:
  lint:
    runs-on: ubuntu-latest

    steps:
    - name: Set up environment
      uses: actions/checkout@v3
      with:  # no need for the history
        fetch-depth: 1
    - name: Set up Python
      uses: actions/setup-python@v4
      with:
        python-version: '3.7'
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        python -m pip install --upgrade tox
    - name: Run linters
      run: |
        tox -e codespell
2 changes: 1 addition & 1 deletion docs/example.rst
@@ -173,7 +173,7 @@ to detect potential curation errors using ``cubids-validate``.
$ cubids-validate BIDS_Dataset_DataLad v0 --sequential
- .. note:: The use of the ``--sequential`` flag forces the validator to treat each participant as its own BIDS dataset. This can be helpful for identifying heterogenous elements, but can be slowed down by extremely large datasets.
+ .. note:: The use of the ``--sequential`` flag forces the validator to treat each participant as its own BIDS dataset. This can be helpful for identifying heterogeneous elements, but can be slowed down by extremely large datasets.

This command produces the following tsv:

6 changes: 3 additions & 3 deletions docs/usage.rst
@@ -116,7 +116,7 @@ Use ``cubids-group`` to generate your dataset's Key Groups and Parameter Groups:
This will output four files, including the summary and files tsvs described above,
prefixed by the second argument ``v0``.

- Appplying changes
+ Applying changes
------------------

The ``cubids-apply`` program provides an easy way for users to manipulate their datasets.
@@ -136,7 +136,7 @@ Groups—e.g., every Parameter Group except the Dominant one. Specifically, CuBI
all non-dominant Parameter Group to include VARIANT* in their acquisition field where * is the reason
the Parameter Group varies from the Dominant Group. For example, when CuBIDS encounters a Parameter
Group with a repetition time that varies from the one present in the Dominant Group, it will automatically
- suggest renaming all scans in that Variant Group to include ``acquisition-VARIANTRepetitionTime`` in thier
+ suggest renaming all scans in that Variant Group to include ``acquisition-VARIANTRepetitionTime`` in their
filenames. When the user runs ``cubids-apply``, filenames will get renamed according to the auto-generated
names in the “Rename Key Group” column in the summary.tsv

@@ -146,7 +146,7 @@ Deleting a mistake
To remove files in a Parameter Group from your BIDS data, you simply set the ``MergeInto`` value
to ``0``. We see in our data that there is a strange scan that has a ``RepetitionTime`` of 12.3
seconds and is also variant with respect to EffectiveEchoSpacing and EchoTime. We elect to remove this scan from
- our dataset becasuse we do not want these parameters to affect our analyses.
+ our dataset because we do not want these parameters to affect our analyses.
To remove these files from your BIDS data, add a ``0`` to ``MergeInto`` and save the new tsv as ``v0_edited_summary.tsv``

.. csv-table:: Pre Apply Groupings with Deletion Requested
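The deletion itself is a one-cell edit in the summary tsv. A minimal pandas sketch of the step the docs describe, assuming the grouping output is named v0_summary.tsv and carries RepetitionTime and MergeInto columns as in the tables above:

```python
# Sketch only (not part of this commit): flag a Parameter Group for deletion
# by setting MergeInto to 0, then save the edited copy for cubids-apply.
import pandas as pd

summary = pd.read_csv("v0_summary.tsv", sep="\t")  # assumed file name

# Hypothetical selector: the strange group with a 12.3 s RepetitionTime.
mask = summary["RepetitionTime"] == 12.3
summary.loc[mask, "MergeInto"] = 0

summary.to_csv("v0_edited_summary.tsv", sep="\t", index=False)
```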
2 changes: 1 addition & 1 deletion notebooks/Fieldmaps.ipynb
@@ -38,7 +38,7 @@
"\n",
"from pkg_resources import resource_filename as pkgrf \n",
"\n",
"# returns stirng path to testdata\n",
"# returns string path to testdata\n",
"TEST_DATA = pkgrf(\"cubids\", \"testdata\")\n",
"\n",
"# should give you the full path \n",
4 changes: 2 additions & 2 deletions notebooks/HTML_param_groups.ipynb
@@ -37,7 +37,7 @@
"from cubids import CuBIDS\n",
"from pkg_resources import resource_filename as pkgrf \n",
"\n",
"# returns stirng path to testdata\n",
"# returns string path to testdata\n",
"TEST_DATA = pkgrf(\"cubids\", \"testdata\")\n",
"\n",
"# should give you the full path \n",
@@ -411,7 +411,7 @@
"import pathlib \n",
"\n",
"# @Params\n",
"# - path: a string contianing the path to the bids directory inside which we want to change files \n",
"# - path: a string containing the path to the bids directory inside which we want to change files \n",
"# @Returns\n",
"# - HTML report of acquisitions and their parameter groups \n",
"\n",
8 changes: 4 additions & 4 deletions notebooks/JSON_PoC_read_write.ipynb
@@ -143,7 +143,7 @@
"sample_data.keys()\n",
"sample_data.get('SliceTiming')\n",
"SliceTime = sample_data.get('SliceTiming') #the way you can snatch things out of a dictionary \n",
"#if dict doens't have the key it will return none vs. error\n",
"#if dict doesn't have the key it will return none vs. error\n",
"\n",
"if SliceTime: \n",
" sample_data.update({\"SliceTime%03d\"%SliceNum : time for SliceNum, time in enumerate(SliceTime)})\n",
@@ -166,7 +166,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"the next one might not have slice timing but you concatonate the next row -- if the file doesn't have slice timing it fills with NaN and if it doesn't then google! \n",
"the next one might not have slice timing but you concatenate the next row -- if the file doesn't have slice timing it fills with NaN and if it doesn't then google! \n",
"\n",
"rglob to get all the files in the bids tree then load it with json.load "
]
@@ -228,7 +228,7 @@
"source": [
"3. Checking that the sidecare will write valid JSON files \n",
"\n",
"In order to do this, we use the json.dumps function as it will turn the python object into a JSON string, and therefore, will wirte a valid JSON file always. \n",
"In order to do this, we use the json.dumps function as it will turn the python object into a JSON string, and therefore, will write a valid JSON file always. \n",
"\n",
"Note: same as the previous chunk of code, this was written for a single .json file and therefore is commentend out "
]
@@ -349,7 +349,7 @@
" #print(s_path)\n",
" file_tree = open(s_path)\n",
" example_data = json.load(file_tree)\n",
" SliceTime = example_data.get('SliceTiming') #the way you can snatch things out of a dictionary #if dict doens't have the key it will return none vs. error\n",
" SliceTime = example_data.get('SliceTiming') #the way you can snatch things out of a dictionary #if dict doesn't have the key it will return none vs. error\n",
" if SliceTime: \n",
" example_data.update({\"SliceTime%03d\"%SliceNum : time for SliceNum, time in enumerate(SliceTime)})\n",
" del example_data['SliceTiming']\n",
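Pieced together, the cells this notebook diff touches amount to the following sketch. The bids_dir root and sidecar paths are hypothetical; the SliceTime%03d key scheme, the None-instead-of-error .get() behavior, and the json.dumps round-trip come from the notebook itself:

```python
# Expand each sidecar's SliceTiming array into numbered SliceTimeNNN keys,
# collect one row per sidecar (missing keys become NaN when tabulated), and
# write metadata back through json.dumps so the file stays valid JSON.
import json
from pathlib import Path

import pandas as pd

rows = []
for sidecar in Path("bids_dir").rglob("*.json"):  # assumed dataset root
    data = json.loads(sidecar.read_text())
    slice_times = data.get("SliceTiming")  # returns None, not an error, if absent
    if slice_times:
        data.update({"SliceTime%03d" % i: t for i, t in enumerate(slice_times)})
        del data["SliceTiming"]
        sidecar.write_text(json.dumps(data, indent=4))  # always valid JSON
    rows.append(data)

# DataFrame aligns rows on keys; files without SliceTiming get NaN columns.
df = pd.DataFrame(rows)
```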
6 changes: 3 additions & 3 deletions notebooks/PofC_Key_Values2.ipynb
@@ -133,7 +133,7 @@
"#initialize list\n",
"\n",
"for file in all_files:\n",
"#for each file in the list, parse the information into a dictionary and add it to the list we just initalized\n",
"#for each file in the list, parse the information into a dictionary and add it to the list we just initialized\n",
" result = parse_file_entities(file) \n",
" \n",
" entities.append(result)\n",
@@ -158,7 +158,7 @@
"\n",
"# loop through files to create a bigger dictionary of discrete keys, adding each value to a list \n",
"dictionary = {}\n",
"# initalize a new dictionary\n",
"# initialize a new dictionary\n",
"for e in entities:\n",
"# for each dictionary in the list we created above \n",
" for k,v in e.items():\n",
@@ -228,7 +228,7 @@
}
],
"source": [
"#make a new dictionary with KEYS: BIDS entities (ie: subject, session, etc) and VALUES: dicionaries of ID's and instances\n",
"#make a new dictionary with KEYS: BIDS entities (ie: subject, session, etc) and VALUES: dictionaries of ID's and instances\n",
"\n",
"new_dictionary = {}\n",
"counter = 0 \n",
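The entity-tallying loop in this notebook condenses to a short sketch, assuming pybids is installed and file_list holds BIDS file paths; parse_file_entities is the real pybids helper the notebook imports, while file_list and the Counter bookkeeping are illustrative:

```python
# Parse each filename into its BIDS entities, then tally how often every
# value of every entity (subject, session, task, ...) occurs.
from collections import Counter, defaultdict

from bids.layout import parse_file_entities

file_list = ["sub-01/ses-1/func/sub-01_ses-1_task-rest_bold.nii.gz"]  # example

counts = defaultdict(Counter)  # entity -> {value: number of files}
for path in file_list:
    for entity, value in parse_file_entities(path).items():
        counts[entity][value] += 1

# e.g. counts["subject"] == Counter({"01": 1})
```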
10 changes: 5 additions & 5 deletions notebooks/metadata_image_param.ipynb
@@ -143,7 +143,7 @@
"sample_data.keys()\n",
"sample_data.get('SliceTiming')\n",
"SliceTime = sample_data.get('SliceTiming') #the way you can snatch things out of a dictionary \n",
"#if dict doens't have the key it will return none vs. error\n",
"#if dict doesn't have the key it will return none vs. error\n",
"\n",
"if SliceTime: \n",
" sample_data.update({\"SliceTime%03d\"%SliceNum : time for SliceNum, time in enumerate(SliceTime)})\n",
@@ -166,7 +166,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"the next one might not have slice timing but you concatonate the next row -- if the file doesn't have slice timing it fills with NaN and if it doesn't then google! \n",
"the next one might not have slice timing but you concatenate the next row -- if the file doesn't have slice timing it fills with NaN and if it doesn't then google! \n",
"\n",
"rglob to get all the files in the bids tree then load it with json.load "
]
@@ -228,7 +228,7 @@
"source": [
"3. Checking that the sidecare will write valid JSON files \n",
"\n",
"In order to do this, we use the json.dumps function as it will turn the python object into a JSON string, and therefore, will wirte a valid JSON file always. \n",
"In order to do this, we use the json.dumps function as it will turn the python object into a JSON string, and therefore, will write a valid JSON file always. \n",
"\n",
"Note: same as the previous chunk of code, this was written for a single .json file and therefore is commentend out "
]
@@ -452,7 +452,7 @@
" example_data = json.load(file_tree)\n",
" wanted_keys = example_data.keys() & IMAGING_PARAMS\n",
" example_data = {key: example_data[key] for key in wanted_keys} \n",
" SliceTime = example_data.get('SliceTiming') #the way you can snatch things out of a dictionary #if dict doens't have the key it will return none vs. error\n",
" SliceTime = example_data.get('SliceTiming') #the way you can snatch things out of a dictionary #if dict doesn't have the key it will return none vs. error\n",
" if SliceTime: \n",
" example_data.update({\"SliceTime%03d\"%SliceNum : [time] for SliceNum, time in enumerate(SliceTime)})\n",
" del example_data['SliceTiming']\n",
@@ -467,7 +467,7 @@
"\n",
"\n",
"#create dataframe of unique rows \n",
"#bids entities filter in the cubids class to fileter through the files \n",
"#bids entities filter in the cubids class to filter through the files \n",
"#loop over , get metadata, and put into the dataframe \n",
"\n",
"\n",
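The key-filtering idea in this notebook (wanted_keys = example_data.keys() & IMAGING_PARAMS, then a dataframe of unique rows) looks roughly like the sketch below. The IMAGING_PARAMS set here is a stand-in subset, not the notebook's full list, and bids_dir is an assumed root:

```python
# Keep only a whitelist of imaging parameters from each sidecar, then reduce
# the collected rows to the distinct parameter combinations.
import json
from pathlib import Path

import pandas as pd

IMAGING_PARAMS = {"RepetitionTime", "EchoTime", "FlipAngle", "SliceTiming"}

rows = []
for sidecar in Path("bids_dir").rglob("*.json"):
    data = json.loads(sidecar.read_text())
    row = {}
    for key in data.keys() & IMAGING_PARAMS:  # set intersection on dict keys
        value = data[key]
        # lists (e.g. SliceTiming) are unhashable; tuples keep drop_duplicates happy
        row[key] = tuple(value) if isinstance(value, list) else value
    rows.append(row)

# One row per sidecar; drop_duplicates leaves the unique parameter groups.
unique = pd.DataFrame(rows).drop_duplicates()
```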
12 changes: 6 additions & 6 deletions notebooks/rename_files_work.ipynb
@@ -56,7 +56,7 @@
"\n",
"from pkg_resources import resource_filename as pkgrf \n",
"\n",
"# returns stirng path to testdata\n",
"# returns string path to testdata\n",
"TEST_DATA = pkgrf(\"cubids\", \"testdata\")\n",
"\n",
"# should give you the full path \n",
@@ -262,7 +262,7 @@
"outputs": [],
"source": [
"# @Params\n",
"# - path: a string contianing the path to the directory inside which we want to change files \n",
"# - path: a string containing the path to the directory inside which we want to change files \n",
"# - pattern: the substring of the file we would like to replace\n",
"# - replacement: the substring that will replace \"pattern\"\n",
"# @Returns\n",
@@ -296,7 +296,7 @@
"import pathlib \n",
"\n",
"# @Params\n",
"# - path: a string contianing the path to the directory inside which we want to change files \n",
"# - path: a string containing the path to the directory inside which we want to change files \n",
"# - pattern: the substring of the file we would like to replace\n",
"# - replacement: the substring that will replace \"pattern\"\n",
"# @Returns\n",
@@ -320,7 +320,7 @@
"import pathlib \n",
"\n",
"# @Params\n",
"# - path: a string contianing the path to the bids directory inside which we want to change files \n",
"# - path: a string containing the path to the bids directory inside which we want to change files \n",
"# - pattern: the substring of the file we would like to replace\n",
"# - replacement: the substring that will replace \"pattern\"\n",
"# @Returns\n",
@@ -2321,8 +2321,8 @@
"# in BIDS, want to replace everything up to the BIDS root \n",
"# don't want to replace all filenames up to the BIDS root \n",
"\n",
"# could have a rename subject funciton and a rename session function\n",
"# also have a rename files funciton \n",
"# could have a rename subject function and a rename session function\n",
"# also have a rename files function \n",
"\n",
"# wants a single function that lets you replace any part of the string \n",
"\n",
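The "single function that lets you replace any part of the string" that these cells converge on could look like the sketch below; the function name and exact semantics are an illustration under the @Params contract quoted above (path, pattern, replacement), not the notebook's final API:

```python
# Rename every file or directory under `path` whose name contains `pattern`,
# replacing only that substring of the name.
from pathlib import Path


def rename_files(path, pattern, replacement):
    """Replace `pattern` with `replacement` in matching names under `path`."""
    # Reverse-sorted so deeper paths are renamed before their parents,
    # keeping child paths valid while we walk.
    for old in sorted(Path(path).rglob(f"*{pattern}*"), reverse=True):
        old.rename(old.with_name(old.name.replace(pattern, replacement)))


# e.g. rename_files("bids_dir", "acq-HASC55AP", "acq-HASC55")  # hypothetical
```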
2 changes: 1 addition & 1 deletion tests/test_bond.py
@@ -817,7 +817,7 @@ def _add_ext_files(img_path):


def _edit_a_json(json_file):
"""Open a json file, write somthing to it and save it to the same name."""
"""Open a json file, write something to it and save it to the same name."""
with open(json_file, "r") as metadatar:
metadata = json.load(metadatar)

9 changes: 9 additions & 0 deletions tox.ini
@@ -0,0 +1,9 @@
[tox]
envlist = codespell

[testenv:codespell]
skip_install = true
deps =
    codespell~=2.0
commands =
    codespell -D- --skip "_version.py,*.pem,*.json" {posargs} cubids docs notebooks tests
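Reproducing the check locally is the same invocation the DATALAD RUNCMD record above captured. A trivial sketch, assuming tox is on PATH:

```python
# Run the codespell tox environment, passing -w so typos are rewritten in place.
import subprocess

subprocess.run(["tox", "-e", "codespell", "--", "-w"], check=True)
```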
