Merge pull request #71 from CBroz1/dev
Add pre-commit & utilities for pytests
kushalbakshi authored Jan 11, 2023
2 parents 4e2a412 + b291be6 commit f4312b3
Showing 23 changed files with 230 additions and 92 deletions.
16 changes: 16 additions & 0 deletions .markdownlint.yaml
@@ -0,0 +1,16 @@
# Markdown Linter configuration for docs
# https://github.com/DavidAnson/markdownlint
# https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md
MD009: false # permit trailing spaces
MD007: false # List indenting - permit 4 spaces
MD013:
line_length: "88" # Line length limits
tables: false # disable for tables
headings: false # disable for headings
MD030: false # Number of spaces after a list
MD033: # HTML elements allowed
allowed_elements:
- "br"
MD034: false # Permit bare URLs
MD031: false # Spacing w/code blocks. Conflicts with `??? Note` and code tab styling
MD046: false # Spacing w/code blocks. Conflicts with `??? Note` and code tab styling
56 changes: 56 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,56 @@
default_stages: [commit, push]
exclude: (^.github/|^docs/|^images/)

repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-yaml
- id: check-added-large-files # prevent giant files from being committed
- id: requirements-txt-fixer
- id: mixed-line-ending
args: ["--fix=lf"]
description: Forces to replace line ending by the UNIX 'lf' character.

# black
- repo: https://github.com/psf/black
rev: 22.12.0
hooks:
- id: black
- id: black-jupyter
args:
- --line-length=88

# isort
- repo: https://github.com/pycqa/isort
rev: 5.11.2
hooks:
- id: isort
args: ["--profile", "black"]
description: Sorts imports in an alphabetical order

# flake8
- repo: https://github.com/pycqa/flake8
rev: 4.0.1
hooks:
- id: flake8
args: # arguments to configure flake8
# making isort line length compatible with black
- "--max-line-length=88"
- "--max-complexity=18"
- "--select=B,C,E,F,W,T4,B9"

# these are errors that will be ignored by flake8
# https://www.flake8rules.com/rules/{code}.html
- "--ignore=E203,E501,W503,W605"
# E203 - Colons should not have any space before them.
# Needed for list indexing
# E501 - Line lengths are recommended to be no greater than 79 characters.
# Needed as we conform to 88
# W503 - Line breaks should occur after the binary operator.
# Needed because not compatible with black
# W605 - a backslash-character pair that is not a valid escape sequence now
# generates a DeprecationWarning. This will eventually become a SyntaxError.
# Needed because we use \d as an escape sequence
18 changes: 14 additions & 4 deletions CHANGELOG.md
@@ -1,8 +1,16 @@
# Changelog

Observes [Semantic Versioning](https://semver.org/spec/v2.0.0.html) standard and [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) convention.
Observes [Semantic Versioning](https://semver.org/spec/v2.0.0.html) standard and
[Keep a Changelog](https://keepachangelog.com/en/1.0.0/) convention.

## [0.5.0] - 2023-01-09

+ Remove - `recursive_search` function
+ Add - pre-commit checks to the repo to observe flake8, black, isort
+ Add - `value_to_bool` and `QuietStdOut` utilities

## [0.4.2] - 2022-12-16

+ Update - PrairieView loader checks for multi-plane vs single-plane scans.

## [0.4.1] - 2022-12-15
@@ -17,10 +25,10 @@ Observes [Semantic Versioning](https://semver.org/spec/v2.0.0.html) standard and

## [0.3.0] - 2022-10-7

+ Add - Function `prairieviewreader` to parse metadata from Bruker PrarieView acquisition system
+ Add - Function `prairieviewreader` to parse metadata from Bruker PrarieView acquisition
system
+ Update - Changelog with tag links


## [0.2.1] - 2022-07-13

+ Add - Adopt `black` formatting
@@ -33,7 +41,8 @@ Observes [Semantic Versioning](https://semver.org/spec/v2.0.0.html) standard and
+ Add - Function `run_caiman` to trigger CNMF algorithm.
+ Add - Function `ingest_csv_to_table` to insert data from CSV files into tables.
+ Add - Function `recursive_search` to search through nested dictionary for a key.
+ Add - Function `upload_to_dandi` to upload Neurodata Without Borders file to the DANDI platform.
+ Add - Function `upload_to_dandi` to upload Neurodata Without Borders file to the DANDI
platform.
+ Update - Remove `extras_require` feature to allow this package to be published to PyPI.

## [0.1.0a1] - 2022-01-12
@@ -44,6 +53,7 @@ Observes [Semantic Versioning](https://semver.org/spec/v2.0.0.html) standard and

+ Add - Readers for: `ScanImage`, `Suite2p`, `CaImAn`.

[0.5.0]: https://github.com/datajoint/element-interface/releases/tag/0.5.0
[0.4.2]: https://github.com/datajoint/element-interface/releases/tag/0.4.2
[0.4.1]: https://github.com/datajoint/element-interface/releases/tag/0.4.1
[0.4.0]: https://github.com/datajoint/element-interface/releases/tag/0.4.0
4 changes: 3 additions & 1 deletion CONTRIBUTING.md
@@ -1,3 +1,5 @@
# Contribution Guidelines

This project follows the [DataJoint Contribution Guidelines](https://docs.datajoint.io/python/community/02-Contribute.html). Please reference the link for more full details.
This project follows the
[DataJoint Contribution Guidelines](https://datajoint.com/docs/community/contribute/).
Please reference the link for full details.
2 changes: 1 addition & 1 deletion LICENSE
@@ -18,4 +18,4 @@ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
SOFTWARE.
2 changes: 1 addition & 1 deletion README.md
@@ -7,5 +7,5 @@ corresponding database tables that can be combined with other Elements to assemb
fully functional pipeline. Element Interface is home to a number of utilities that make
this possible.

Installation and usage instructions can be found at the
Installation and usage instructions can be found at the
[Element documentation](https://datajoint.com/docs/elements/element-interface).
25 changes: 25 additions & 0 deletions cspell.json
@@ -0,0 +1,25 @@
// cSpell Settings
//https://github.com/streetsidesoftware/vscode-spell-checker
{
"version": "0.2", // Version of the setting file. Always 0.2
"language": "en", // language - current active spelling language
"enabledLanguageIds": [
"markdown",
"yaml"
],
// flagWords - list of words to be always considered incorrect
// This is useful for offensive words and common spelling errors.
// For example "hte" should be "the"
"flagWords": [],
"allowCompoundWords": true,
"ignorePaths": [
],
"words": [
"isort",
"Bruker",
"Neurodata",
"Prairie",
"CNMF",
"deconvolution"
]
}
2 changes: 1 addition & 1 deletion docs/mkdocs.yaml
@@ -43,7 +43,7 @@ nav:
# HOST_UID=$(id -u) docker compose -f docs/docker-compose.yaml up --build
# ```
# 02. Site analytics depend on a local environment variable GOOGLE_ANALYTICS_KEY
# You can find this in LastPass or declare with any string to suprress errors
# You can find this in LastPass or declare with any string to suppress errors
# 03. The API section will pull docstrings.
# A. Follow google styleguide e.g.,
# https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html
2 changes: 1 addition & 1 deletion docs/src/citation.md
@@ -8,4 +8,4 @@ Resource Identifier (RRID).
Neurophysiology. bioRxiv. 2021 Jan 1. doi: https://doi.org/10.1101/2021.03.30.437358

+ DataJoint Elements ([RRID:SCR_021894](https://scicrunch.org/resolver/SCR_021894)) -
Element Interface (version {{ PATCH_VERSION }})
Element Interface (version {{ PATCH_VERSION }})
42 changes: 23 additions & 19 deletions docs/src/concepts.md
@@ -11,26 +11,30 @@ across other packages, without causing issues in the respective Element.

### General utilities

`utils.find_full_path` and `utils.find_root_directory` are used
across many Elements and Workflows to allow for the flexibility of providing
`utils.find_full_path` and `utils.find_root_directory` are used
across many Elements and Workflows to allow for the flexibility of providing
one or more root directories in the user's config, and extrapolating from a relative
path at runtime.
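The resolution logic described above can be sketched as follows. This is an illustrative reimplementation, not the Element's actual code: the name `find_full_path` and its purpose come from these docs, while the body and error message are assumptions.

```python
from pathlib import Path


def find_full_path(root_directories, relative_path):
    """Illustrative sketch: resolve a relative path against one or more
    configured root directories, returning the first candidate that exists."""
    relative_path = Path(relative_path)
    if relative_path.is_absolute() and relative_path.exists():
        return relative_path  # already a full path; nothing to resolve
    for root in map(Path, root_directories):
        candidate = root / relative_path
        if candidate.exists():
            return candidate
    raise FileNotFoundError(
        f"{relative_path} not found under any of: {list(root_directories)}"
    )
```

Because the user's config may list several roots, the first root containing the relative path wins; a path that resolves under no root raises immediately rather than failing later at load time.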

`utils.ingest_csv_to_table` is used across workflow examples to ingest from sample data from
local CSV files into sets of manual tables. While researchers may wish to manually
`utils.ingest_csv_to_table` is used across workflow examples to ingest sample data
from local CSV files into sets of manual tables. While researchers may wish to manually
insert for day-to-day operations, it helps to have a more complete dataset when learning
how to use various Elements.
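The pattern can be illustrated roughly as below. The signature is an assumption, and the real function operates on DataJoint manual tables rather than the generic table objects used here.

```python
import csv


def ingest_csv_to_table(csv_paths, tables, skip_duplicates=True):
    """Illustrative sketch: read each CSV as a list of row dicts and insert
    them into the paired table object (anything exposing an ``insert`` method)."""
    for csv_path, table in zip(csv_paths, tables):
        with open(csv_path, newline="") as f:
            rows = list(csv.DictReader(f))  # one dict per CSV row, keyed by header
        table.insert(rows, skip_duplicates=skip_duplicates)
```

Pairing each CSV with its destination table lets one call seed several manual tables at once when standing up an example workflow.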

`utils.str_to_bool` converts a set of strings to boolean True or False. It mirrors the
equivalent function in Python's `distutils`, which will be removed in future Python versions.
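A minimal sketch of such a converter, modeled on `distutils.util.strtobool` (the accepted strings below are those of `distutils`; the Element's own implementation may differ in detail):

```python
def str_to_bool(value):
    """Illustrative sketch mirroring distutils.util.strtobool:
    map common truthy/falsy strings to True/False."""
    if not value:
        return False  # None or empty string -> False
    value = str(value).lower()
    if value in ("y", "yes", "t", "true", "on", "1"):
        return True
    if value in ("n", "no", "f", "false", "off", "0"):
        return False
    raise ValueError(f"Invalid truth value: {value!r}")
```

Unlike `bool("false")`, which is True because any non-empty string is truthy, this maps the string's meaning, which is what config files and environment variables need.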

### Suite2p

This Element provides functions to independently run Suite2p's motion correction,
segmentation, and deconvolution steps. These functions currently work for single plane
tiff files. If one is running all Suite2p pre-processing steps concurrently, these functions
are not required and one can run `suite2p.run_s2p()`. The wrapper functions here were developed primarily because `run_s2p` cannot individually
run deconvolution using the `spikedetect` flag (
tiff files. If one is running all Suite2p pre-processing steps concurrently, these
functions are not required and one can run `suite2p.run_s2p()`. The wrapper functions
here were developed primarily because `run_s2p` cannot individually run deconvolution
using the `spikedetect` flag (
[Suite2p Issue #718](https://github.com/MouseLand/suite2p/issues/718)).

**Requirements**
Requirements:

- [ops dictionary](https://suite2p.readthedocs.io/en/latest/settings.html)

@@ -42,13 +46,13 @@

### PrairieView Reader

This Element provides a function to read the PrairieView Scanner's metadata
file. The PrairieView software generates one `.ome.tif` imaging file per frame acquired. The
metadata for all frames is contained in one `.xml` file. This function locates the `.xml`
file and generates a dictionary necessary to populate the DataJoint ScanInfo and
Field tables. PrairieView works with resonance scanners with a single field,
does not support bidirectional x and y scanning, and the `.xml` file does not
contain ROI information.
This Element provides a function to read the PrairieView Scanner's metadata file. The
PrairieView software generates one `.ome.tif` imaging file per frame acquired. The
metadata for all frames is contained in one `.xml` file. This function locates the
`.xml` file and generates a dictionary necessary to populate the DataJoint ScanInfo and
Field tables. PrairieView works with resonance scanners with a single field, does not
support bidirectional x and y scanning, and the `.xml` file does not contain ROI
information.
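The flow can be sketched roughly as below. This is not the Element's actual reader: the element and attribute names (`PVStateValue`, `key`, `value`) are assumptions about the PrairieView XML schema, and the real function extracts many more fields for the ScanInfo and Field tables.

```python
import xml.etree.ElementTree as ET
from pathlib import Path


def read_prairieview_metadata(scan_dir):
    """Illustrative sketch: locate the single .xml metadata file that
    accompanies the per-frame .ome.tif files and collect key/value pairs."""
    xml_file = next(Path(scan_dir).glob("*.xml"))  # one .xml describes all frames
    root = ET.parse(xml_file).getroot()
    return {
        elem.attrib["key"]: elem.attrib.get("value")
        for elem in root.iter("PVStateValue")
        if "key" in elem.attrib
    }
```

Since the metadata lives in one `.xml` per scan rather than per frame, a single parse yields the values needed to populate the downstream tables.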

## Element Architecture

@@ -58,14 +62,14 @@ module.
- Acquisition packages: [ScanImage](../api/element_interface/scanimage_utils)
- Analysis packages:

- Suite2p [loader](../api/element_interface/suite2p_loader) and [trigger](../api/element_interface/suite2p_trigger)
- CaImAn [loader](../api/element_interface/caiman_loader) and [trigger](../api/element_interface/run_caiman)
- Suite2p [loader](../api/element_interface/suite2p_loader) and [trigger](../api/element_interface/suite2p_trigger)

- CaImAn [loader](../api/element_interface/caiman_loader) and [trigger](../api/element_interface/run_caiman)

- Data upload: [DANDI](../api/element_interface/dandi/)

## Roadmap

Further development of this Element is community driven. Upon user requests and based
on guidance from the Scientific Steering Group, we will add additional features to
this Element.
this Element.
16 changes: 8 additions & 8 deletions element_interface/caiman_loader.py
@@ -1,12 +1,12 @@
import h5py
import caiman as cm
import scipy
import numpy as np
from datetime import datetime
import os
import pathlib
from tqdm import tqdm
from datetime import datetime

import caiman as cm
import h5py
import numpy as np
import scipy
from tqdm import tqdm

_required_hdf5_fields = [
"/motion_correction/reference_image",
@@ -122,7 +122,7 @@ def extract_masks(self) -> dict:
mask_xpix, mask_ypix, mask_zpix, inferred_trace, dff, spikes
"""
if self.params.motion["is3D"]:
raise NotImplemented(
raise NotImplementedError(
"CaImAn mask extraction for volumetric data not yet implemented"
)

@@ -166,8 +166,8 @@ def _process_scanimage_tiff(scan_filenames, output_dir="./"):
Read ScanImage TIFF - reshape into volumetric data based on scanning depths/channels
Save new TIFF files for each channel - with shape (frame x height x width x depth)
"""
from tifffile import imsave
import scanreader
from tifffile import imsave

# ------------ CaImAn multi-channel multi-plane tiff file ------------
for scan_filename in tqdm(scan_filenames):
1 change: 1 addition & 0 deletions element_interface/dandi.py
@@ -1,5 +1,6 @@
import os
import subprocess

from dandi.download import download
from dandi.upload import upload

9 changes: 5 additions & 4 deletions element_interface/extract_loader.py
@@ -1,7 +1,8 @@
import os
import numpy as np
from pathlib import Path
from datetime import datetime
from pathlib import Path

import numpy as np


class EXTRACT_loader:
@@ -18,7 +19,7 @@ def __init__(self, extract_dir: str):

try:
extract_file = next(Path(extract_dir).glob("*_extract_output.mat"))
except StopInteration:
except StopInteration: # noqa F821
raise FileNotFoundError(
f"EXTRACT output result file is not found at {extract_dir}."
)
@@ -31,7 +32,7 @@ def __init__(self, extract_dir: str):

def load_results(self):
"""Load the EXTRACT results
Returns:
masks (dict): Details of the masks identified with the EXTRACT segmentation package.
"""
7 changes: 3 additions & 4 deletions element_interface/extract_trigger.py
@@ -1,8 +1,7 @@
import os
from typing import Union
from pathlib import Path
from textwrap import dedent
from datetime import datetime
from typing import Union


class EXTRACT_trigger:
@@ -11,11 +10,11 @@ class EXTRACT_trigger:
% Load Data
data = load('{scanfile}');
M = data.M;
% Input Paramaters
config = struct();
{parameters_list_string}
% Run EXTRACT
output = extractor(M, config);
save('{output_fullpath}', 'output');