Developer’s guide#

Note

The https://learn.scientific-python.org webpages are an extremely valuable learning resource for Python software developers. The reader is referred to them for any details not covered in the following guide.

The following rules and conventions have been established for package development and are enforced throughout the entire code base. Merge requests that do not comply with these directives will be rejected.

To start developing dspeed, fork the remote repository to your personal GitHub account (see About Forks). If you have not set up your ssh keys on the computer you will be working on, please follow GitHub’s instructions. Once you have your own fork, you can clone it via (replace “yourusername” with your GitHub username):

$ git clone git@github.com:yourusername/dspeed.git

All extra tools needed to develop dspeed are listed as optional dependencies and can be installed via pip by running:

$ cd dspeed
$ pip install -e '.[all]'  # single quotes are not needed on bash

Important

Pip’s --editable | -e flag lets you install the package in “developer mode”, meaning that any change to the source code is directly propagated to the installed package and importable in scripts.

Tip

It is strongly recommended to work inside a virtual environment, which guarantees reproducibility and isolation. For more details, see learn.scientific-python.org.
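
For example, a virtual environment can be created and activated with the standard library venv module (the directory name .venv below is just a common choice):

$ python -m venv .venv         # create the environment
$ source .venv/bin/activate    # activate it (bash/zsh syntax)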

Code style#

  • All functions and methods (arguments and return types) must be type-annotated. Type annotations for variables like class attributes are also highly appreciated.

  • Messaging to the user is managed through the logging module. Do not add print() statements. To make a logging object available in a module, add this:

    import logging
    log = logging.getLogger(__name__)
    

    at the top. In general, try to keep the number of logging.debug() calls low and use informative messages. logging.info() calls should be reserved for messages from high-level routines (like dspeed.dsp.build_dsp()) and kept very sporadic. Good code is never too verbose.

  • If an error condition leading to undefined behavior occurs, raise an exception. Try to find the most suitable among the built-in exceptions, otherwise raise RuntimeError("message"). Do not raise Warnings; use logging.warning() instead and do not abort the execution.

  • Warning messages (for problems that do not lead to undefined behavior) must be emitted through logging.warning() calls, as illustrated in the sketch after this list.
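
The following sketch ties these conventions together: a module-level logger, fully type-annotated arguments and return value, a logging.warning() call for a recoverable problem and a built-in exception for undefined behavior. The function and its name are hypothetical and only serve as an illustration:

import logging

import numpy as np

log = logging.getLogger(__name__)


def scale_waveform(w_in: np.ndarray, factor: float) -> np.ndarray:
    """Scale a waveform by a constant factor."""
    if factor == 0:
        # recoverable problem: warn the user, do not abort
        log.warning("scaling factor is zero, the output will be all zeros")
    if w_in.ndim != 1:
        # undefined behavior ahead: raise the most suitable built-in exception
        raise ValueError("w_in must be a one-dimensional array")
    return w_in * factor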

A set of pre-commit hooks is configured to make sure that dspeed coherently follows standard coding style conventions. The pre-commit tool is able to identify common style problems and automatically fix them, wherever possible. Configured hooks are listed in the .pre-commit-config.yaml file at the project root folder. They are run remotely on the GitHub repository through the pre-commit bot, but should also be run locally before submitting a pull request:

$ cd dspeed
$ pip install '.[test]'
$ pre-commit run --all-files  # analyse the source code and fix it wherever possible
$ pre-commit install          # install a Git pre-commit hook (strongly recommended)

For a more comprehensive guide, check out the learn.scientific-python.org documentation about code style.

Testing#

  • The dspeed test suite is available below tests/. We use pytest to run tests and analyze their output. As a starting point to learn how to write good tests, reading the relevant learn.scientific-python.org webpage is recommended.

  • Unit tests are automatically run on a remote server (currently through GitHub Actions) for every push event and pull request to the remote Git repository. Every pull request must pass all tests before being approved for merging. Running the test suite is simple:

    $ cd dspeed
    $ pip install '.[test]'
    $ pytest
    
  • Additionally, pull request authors are required to provide tests with sufficient code coverage for every proposed change or addition. If necessary, high-level functional tests should be updated. We currently rely on codecov.io to keep track of test coverage. A local report, which must be inspected before submitting pull requests, can be generated by running:

    $ pytest --cov=dspeed
    

Testing Numba-Wrapped Functions#

When using Numba to vectorize Python functions, it is the Numba-compiled version, not the pure Python one, that gets tested by default. To also exercise the pure Python version, we need to unwrap the Numba function. For the various processors in dspeed.processors, this means that testing and triggering the code coverage requires this unwrapping.

Within the testing suite, we use the @pytest.fixture() decorator to provide a helper function called compare_numba_vs_python that can be used in any test. This function runs both the Numba and pure Python versions of a function, asserts that they are equal up to floating point precision, and returns the output value.
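
Such a fixture would typically live in the test suite’s conftest.py. The following is only a minimal sketch of how it might be implemented, assuming a processor with a single output whose pure Python kernel takes the output array as its last argument:

import inspect

import numpy as np
import pytest


@pytest.fixture()
def compare_numba_vs_python():
    def core(func, *inputs):
        # run the Numba-compiled version, which allocates its own output
        out_numba = func(*inputs)
        # run the pure Python kernel, which writes into an explicit output array
        out_python = np.empty_like(out_numba)
        inspect.unwrap(func)(*inputs, out_python)
        # both versions must agree up to floating point precision (NaNs included)
        assert np.allclose(out_numba, out_python, equal_nan=True)
        return out_numba

    return core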

As an example, we show a snippet from the test for dspeed.processors.fixed_time_pickoff(), a processor which uses the @numba.guvectorize() decorator.

def test_fixed_time_pickoff(compare_numba_vs_python):
    """Testing function for the fixed_time_pickoff processor."""

    len_wf = 20

    # test for nan if w_in has a nan
    w_in = np.ones(len_wf)
    w_in[4] = np.nan
    assert np.isnan(compare_numba_vs_python(fixed_time_pickoff, w_in, 1, ord("i")))

In the assertion that the output is what we expect, we use compare_numba_vs_python(fixed_time_pickoff, w_in, 1, ord("i")) in place of fixed_time_pickoff(w_in, 1, ord("i")). In general, the replacement to make is func(*inputs) becomes compare_numba_vs_python(func, *inputs).

Note that when testing for raised errors, it is recommended to instead run the function twice: once with the Numba version, and once with the pure Python version obtained through the inspect.unwrap() function. We again show a snippet from the test for dspeed.processors.fixed_time_pickoff() below. The required imports are included in the snippet for completeness.

import inspect

import numpy as np
import pytest

from dspeed.dsp.errors import DSPFatal
from dspeed.dsp.processors import fixed_time_pickoff

def test_fixed_time_pickoff(compare_numba_vs_python):
    "skipping parts of function..."
    # test for DSPFatal errors being raised
    # noninteger t_in with integer interpolation
    with pytest.raises(DSPFatal):
        w_in = np.ones(len_wf)
        fixed_time_pickoff(w_in, 1.5, ord("i"))

    with pytest.raises(DSPFatal):
        a_out = np.empty(len_wf)
        inspect.unwrap(fixed_time_pickoff)(w_in, 1.5, ord("i"), a_out)

In this case, the general idea is to use pytest.raises() twice, once with func(*inputs), and again with inspect.unwrap(func)(*inputs).

Testing Factory Functions that Return Numba-Wrapped Functions#

In addition to the processors discussed in the previous section, we also have processors that are first initialized through a factory function, which then returns a callable Numba-wrapped function. In this case, the function is tested in a slightly different way to ensure full code coverage when using compare_numba_vs_python, as the function signature is generally different.

As an example, we show a snippet from the test for dspeed.processors.dwt.discrete_wavelet_transform(), a processor which uses a factory function to return a function wrapped by the @numba.guvectorize() decorator.

import numpy as np
import pytest

from dspeed.dsp.errors import DSPFatal
from dspeed.dsp.processors import discrete_wavelet_transform

def test_discrete_wavelet_transform(compare_numba_vs_python):
    """Testing function for the discrete_wavelet_transform processor."""

    # set up values to use for each test case
    len_wf_in = 16
    wave_type = 'haar'
    level = 2
    len_wf_out = 4

    # ensure the DSPFatal is raised for a negative level
    with pytest.raises(DSPFatal):
        discrete_wavelet_transform(wave_type, -1)

    # ensure that a valid input gives the expected output
    w_in = np.ones(len_wf_in)
    w_out = np.empty(len_wf_out)
    w_out_expected = np.ones(len_wf_out) * 2**(level / 2)

    dwt_func = discrete_wavelet_transform(wave_type, level)
    assert np.allclose(
        compare_numba_vs_python(dwt_func, w_in, w_out),
        w_out_expected,
    )
    ## rest of test function is truncated in this example

In this case, the error is raised outside of the Numba-wrapped function, so we only need to test for it once. To compare the calculated values with the expectation, we must initialize the output array and include it in the list of inputs used in the comparison. This differs from the previous section: here the output values are updated in place.

Documentation#

We adopt best practices in writing and maintaining dspeed’s documentation. When contributing to the project, make sure to implement the following:

  • Documentation should be exclusively available on the project website dspeed.readthedocs.io. No READMEs or GitHub/LEGEND wiki pages should be written.

  • Pull request authors are required to provide sufficient documentation for every proposed change or addition.

  • Documentation for functions, classes, modules and packages should be provided as docstrings along with the respective source code (see the sketch after this list). Docstrings are automatically converted to HTML as part of the dspeed package API documentation.

  • General guides, comprehensive tutorials or other high-level documentation (e.g. referring to how separate parts of the code interact between each other) must be provided as separate pages in docs/source/ and linked in the table of contents.

  • Jupyter notebooks should be added to the main Git repository below docs/source/notebooks.

  • Before submitting a pull request, contributors are required to build the documentation locally and resolve any warnings or errors.
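
As an illustration of the docstring item above, a minimal documented function might look like the following sketch (the function is hypothetical and the NumPy docstring convention is assumed here; follow the style already used in the code base):

import numpy as np


def baseline_subtract(w_in: np.ndarray, baseline: float) -> np.ndarray:
    """Subtract a constant baseline from a waveform.

    Parameters
    ----------
    w_in
        the input waveform.
    baseline
        the baseline value to subtract from each sample.

    Returns
    -------
    numpy.ndarray
        the baseline-subtracted waveform.
    """
    return w_in - baseline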

Writing documentation#

We adopt the following guidelines for writing documentation:

Building documentation#

Scripts and tools to build documentation are located below docs/. To build documentation, sphinx and a couple of additional Python packages are required. You can get all the needed dependencies by running:

$ cd dspeed
$ pip install '.[docs]'

Pandoc is also required to render Jupyter notebooks. To build documentation, run the following commands:

$ cd docs
$ make clean
$ make

Documentation can then be displayed by opening build/html/index.html with a web browser. Documentation for the dspeed website is built and deployed by Read the Docs.

Versioning#

Collaborators with push access to the GitHub repository who wish to release a new project version must follow these procedures:

  • Semantic versioning is adopted. The version string uses the MAJOR.MINOR.PATCH format.

  • To release a new minor or major version, the following procedure should be followed (a worked example is sketched after this list):

    1. A new branch with name releases/vMAJOR.MINOR (note the v) containing the code at the intended stage is created

    2. The commit is tagged with a descriptive message: git tag vMAJOR.MINOR.0 -m 'short descriptive message here' (note the v)

    3. Changes are pushed to the remote:

      $ git push origin releases/vMAJOR.MINOR
      $ git push origin refs/tags/vMAJOR.MINOR.0
      
  • To release a new patch version, the following procedure should be followed:

    1. A commit with the patch is created on the relevant release branch releases/vMAJOR.MINOR

    2. The commit is tagged: git tag vMAJOR.MINOR.PATCH (note the v)

    3. Changes are pushed to the remote:

      $ git push origin releases/vMAJOR.MINOR
      $ git push origin refs/tags/vMAJOR.MINOR.PATCH
      
  • To upload the release to the Python Package Index, a new release must be created through the GitHub interface, associated with the newly created tag. Use of the “Generate release notes” option is recommended.
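
As a worked example, releasing a hypothetical new minor version 1.2 and, later on, a patch on top of it would look like this:

$ # release a new minor version v1.2.0
$ git checkout -b releases/v1.2
$ git tag v1.2.0 -m 'short descriptive message here'
$ git push origin releases/v1.2
$ git push origin refs/tags/v1.2.0
$ # later on, release a patch version v1.2.1 from the same branch
$ git checkout releases/v1.2
$ # ...commit the patch on this branch...
$ git tag v1.2.1
$ git push origin releases/v1.2
$ git push origin refs/tags/v1.2.1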