Running and writing tests

Todo

Finish documenting the procedure for running and writing tests.

Finesse 3 contains a large suite of tests. These are run automatically when changes are made to the code and pushed to the git repository, but can also be run manually with pytest.

Pytest

Pytest is a framework for writing tests and provides a number of useful features over the standard library's unittest. Pytest strongly encourages the use of "fixtures": functions that set up test resources and provide them to the tests that request them. For more information, refer to the Pytest documentation.
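As a minimal sketch of how fixtures work (the names here are illustrative, not part of Finesse):

```python
import pytest

@pytest.fixture
def laser_power():
    # Set up a test resource; the value here is purely illustrative.
    return 1.0

def test_power_is_positive(laser_power):
    # Pytest injects the fixture's return value by matching the
    # argument name to the fixture name.
    assert laser_power > 0
```

Because the fixture is requested by name, many tests can share the same setup code without duplicating it.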

Pytest is installed as part of the development dependencies for Finesse 3.

Running the tests provided with Finesse 3

Pytest provides a command-line interface (CLI) for executing tests. To run all of the tests, run the following command from the project root directory:

pytest tests

Pytest's CLI can run combinations of tests or individual tests by specifying subdirectories or by using flags such as -k to filter tests by name. For instance, unit tests can be run with pytest tests/unit. See pytest -h for more details.
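For example, assuming the layout described above, subsets of tests can be selected like this (the -k expression is an illustrative filter, not a real test name):

```
# Run only the unit tests
pytest tests/unit

# Run tests whose names match an expression
pytest tests -k "beamsplitter"

# Stop after the first failure, with verbose output
pytest tests -x -v
```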

Types of test

There are three broad categories of test in the tests directory: unit, integration and validation. These are described in the following sections.

Unit tests

Unit tests verify the behaviour of atomic units, such as single functions, methods or classes, for given sets of inputs. Each test should exercise only one function, method or class at a time.
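A sketch of a unit test for a single function (the helper here is hypothetical, not real Finesse code):

```python
def fractional_loss(r, t):
    # Hypothetical helper: loss is whatever power is neither
    # reflected nor transmitted.
    return 1.0 - r - t

def test_fractional_loss():
    # One unit (a single function) tested for given sets of inputs.
    assert fractional_loss(0.5, 0.25) == 0.25
    assert abs(fractional_loss(0.9, 0.1)) < 1e-12
```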

Integration tests

Integration tests verify the behaviour of entire subsystems of the code. They generally do not need mock objects, because they test real objects used together.
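A generic sketch of the idea, not using the real Finesse API: several real objects are wired together and tested as a whole, with no mocks.

```python
class Tokenizer:
    """A small, real collaborator: splits text into tokens."""
    def tokenize(self, text):
        return text.split()

class WordCounter:
    """Depends on a Tokenizer; here a real one is used, not a mock."""
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer

    def count(self, text):
        return len(self.tokenizer.tokenize(text))

def test_counter_with_real_tokenizer():
    # Integration-style test: both objects are real and used together.
    counter = WordCounter(Tokenizer())
    assert counter.count("mirror beamsplitter laser") == 3
```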

Validation tests

Validation tests check the correctness of the high-level outputs from Finesse, such as its predicted interferometer behaviour. Validation tests can, for example, compare scripts using Finesse to analytical models, compare the results of two separate Finesse scripts, or compare Finesse to other simulation tools.
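The pattern, sketched here with a generic numerical example rather than a real Finesse model, is to compare a computed result against an analytical prediction within a tolerance:

```python
import math

def central_difference(f, x, h=1e-6):
    # Stand-in for a "simulation": a numerical result whose
    # analytical answer is known in closed form.
    return (f(x + h) - f(x - h)) / (2 * h)

def test_derivative_matches_analytic_model():
    # Validation-style test: numerical output vs the analytical model.
    numerical = central_difference(math.sin, 1.0)
    analytical = math.cos(1.0)
    assert math.isclose(numerical, analytical, rel_tol=1e-8)
```

Note the use of a relative tolerance: validation tests compare floating-point results, so exact equality is almost never the right check.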

The validation test directory also contains IPython notebooks. These define more complex validation tests which check behaviour against analytical models. Every notebook in the validation directory is executed on a per-commit basis by BrumSoftTest. See 000_example_validation_notebook.ipynb for an example.

Writing tests

Before starting, it is useful to take a look at existing tests and the Pytest documentation to get a feel for what good (and bad) tests look like. More guidance for particular types of test is given in the following sections.

Writing unit tests

In order to test the functionality of a single unit, it is sometimes necessary to use mock objects to mimic the behaviour of other functions, methods or classes required for the operation of the unit under test.
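For example, Python's built-in unittest.mock can stand in for a collaborator so that only the unit under test is exercised (the function and attribute names here are illustrative):

```python
from unittest import mock

def readout_power(detector):
    # Unit under test; it depends on some detector-like collaborator.
    return 2.0 * detector.read()

def test_readout_power_with_mock_detector():
    # The real detector is replaced by a mock whose behaviour we control,
    # so the test exercises only readout_power itself.
    fake = mock.Mock()
    fake.read.return_value = 1.5
    assert readout_power(fake) == 3.0
    fake.read.assert_called_once()
```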

Which should I write tests for: concrete classes or parents?

There is a somewhat "philosophical" choice to make when it comes to unit testing of classes that inherit behaviour from other classes. A common example in Finesse 3 is components, which ultimately descend from ModelElement, which requires the name parameter. Beamsplitter and Mirror descend from ModelElement via Surface. Do we test ModelElement's name getter and setter directly (perhaps via a mock object, if abstract), or test every component's getter and setter individually? Do we test Surface for the parameters it provides to Beamsplitter and Mirror (e.g. R, T and L)?

The preferred approach in Finesse 3 is, in most cases, to test the concrete classes directly. This results in some duplicated tests, but it ensures that any behaviour overridden in child classes is properly tested, and that the classes used by end users are the ones being tested directly. An antidote to code duplication in such tests is to use pytest fixtures. Individual tests that break this guidance should ideally explain why in their docstrings.

An example of applying the same tests to children of a parent class is in /tests/unit/components/. The test classes TestBeamsplitter and TestMirror, in test_beamsplitter.py and test_mirror.py respectively, inherit from TestSurface, which defines many tests specific to surfaces. To get the tests defined in TestSurface to run on Beamsplitter and Mirror objects, all that TestBeamsplitter and TestMirror have to do is define component fixtures in their own modules.
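Stripped of Finesse specifics, the pattern looks like this (class and attribute names are illustrative, not the real Finesse test classes):

```python
import pytest

class SurfaceTests:
    # Tests shared by all surface-like components. Each subclass must
    # provide a `component` fixture. Naming the base class so it does
    # not start with "Test" stops pytest collecting it on its own.
    def test_has_name(self, component):
        assert component.name

    def test_reflectivity_in_range(self, component):
        assert 0 <= component.R <= 1

class TestFakeMirror(SurfaceTests):
    # Inherits every test above; only the fixture is defined here.
    @pytest.fixture
    def component(self):
        # A stand-in object; a real test would construct a Mirror here.
        class FakeMirror:
            name = "m1"
            R = 0.99
        return FakeMirror()
```

Each subclass supplies its own concrete object via the fixture, and pytest runs the full set of inherited tests against it.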