Usage

Sybil works by discovering a series of documents as part of the test runner integration. These documents are then parsed into a set of non-overlapping regions. When the tests are run, each region is turned into an example that is evaluated in the document’s namespace. The examples are evaluated in the order in which they appear in the document. If an example does not evaluate as expected, a test failure is reported and Sybil continues on to evaluate the remaining examples in the document.
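
For example, a document under test might contain doctest examples like the following (a made-up illustration). Because all of the examples in a document share its namespace and are evaluated in order, the second example can use the name defined by the first:

    Setting up some data:

    >>> counts = {'apples': 3}

    Later in the same document, the earlier name is still available:

    >>> counts['apples']
    3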

To use Sybil, you need to pick the integration for your project’s test runner and then configure appropriate parsers for the examples in your project’s documentation and source code.

It’s worth checking the Patterns of Use to see if the pattern required for your project is covered there.

Test runner integration

Sybil is used by driving it through a test runner, with each example presented as a test. The following test runners are currently supported:

pytest

Please use the pytest integration.
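
With pytest, the integration is typically configured in a conftest.py alongside the documents you want checked. As a minimal sketch, assuming reStructuredText documents checked with the doctest and Python code block parsers:

    # conftest.py
    from sybil import Sybil
    from sybil.parsers.rest import DocTestParser, PythonCodeBlockParser

    pytest_collect_file = Sybil(
        parsers=[DocTestParser(), PythonCodeBlockParser()],
        patterns=['*.rst'],
    ).pytest()

Running pytest will then collect each example in the matching documents and report it as a passing or failing test.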

unittest

Please use the unittest integration.
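
The unittest integration works by providing a load_tests function. As a minimal sketch, again assuming reStructuredText documents and the doctest parser:

    # test_docs.py
    from sybil import Sybil
    from sybil.parsers.rest import DocTestParser

    load_tests = Sybil(
        parsers=[DocTestParser()],
        patterns=['*.rst'],
    ).unittest()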

Twisted’s trial

Please use the unittest integration.

Parsers

Sybil parsers extract examples from source files and turn them into parsed examples with evaluators that can check whether they are correct. The parsers available depend on the source language of the files containing the examples you wish to check:

  • For reStructuredText, typically .rst or .txt files, see ReST Parsers.

  • For Markdown, typically .md files, CommonMark, GitHub Flavored Markdown, MyST and other flavours are supported.

  • For Python source code, typically .py files, the choice depends on the markup used in the docstrings; both the ReST parsers and MyST parsers will work. The source files are presented as PythonDocument instances that import the document’s source file as a Python module, making names within it available in the document’s namespace. A configuration sketch covering Markdown and Python source files follows this list.
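
As a sketch of how the parser choice follows the source language, and assuming the rest and markdown parser modules shown below, a project might configure one Sybil for its Markdown documentation and another for examples in the docstrings of its Python source:

    from sybil import Sybil
    from sybil.parsers.markdown import PythonCodeBlockParser as MarkdownCodeBlockParser
    from sybil.parsers.rest import DocTestParser, PythonCodeBlockParser

    # Examples in fenced python code blocks in .md files.
    markdown_sybil = Sybil(
        parsers=[MarkdownCodeBlockParser()],
        patterns=['*.md'],
    )

    # Doctest and code block examples in the docstrings of .py files.
    python_sybil = Sybil(
        parsers=[DocTestParser(), PythonCodeBlockParser()],
        patterns=['*.py'],
    )

Each instance can then be wired into whichever test runner integration the project uses.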

It’s also relatively easy to develop your own parsers.
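
As a rough sketch, and assuming Sybil's Region class and evaluators that receive each parsed example, a parser for a hypothetical sum-check directive might look something like this:

    import re

    from sybil import Region

    # Hypothetical directive: a line such as ".. sum-check:: 1 2 3"
    # asserts that 1 + 2 equals 3.
    SUM_CHECK = re.compile(r'^\.\. sum-check:: (\d+) (\d+) (\d+)$', re.MULTILINE)

    def evaluate_sum_check(example):
        # example.parsed holds whatever the parser stored in the Region.
        x, y, expected = example.parsed
        assert x + y == expected, f'{x} + {y} != {expected}'

    def parse_sum_checks(document):
        # A parser is a callable that takes a document and yields
        # non-overlapping Regions, each with its parsed value and evaluator.
        for match in SUM_CHECK.finditer(document.text):
            parsed = tuple(int(group) for group in match.groups())
            yield Region(match.start(), match.end(), parsed, evaluate_sum_check)

The parser can then be included in the parsers list passed to Sybil.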