API Reference#

This reference is organised into sections relating to the different things you’re likely to be doing when using Sybil.

Sybils#

class sybil.Sybil(parsers: Sequence[Callable[[sybil.Document], Iterable[sybil.Region]]], pattern: str | None = None, patterns: Sequence[str] = (), exclude: str | None = None, excludes: Sequence[str] = (), filenames: Collection[str] = (), path: str = '.', setup: Callable[[Dict[str, Any]], None] | None = None, teardown: Callable[[Dict[str, Any]], None] | None = None, fixtures: Sequence[str] = (), encoding: str = 'utf-8', document_types: Mapping[str | None, Type[Document]] | None = None)#

An object to provide test runner integration for discovering examples in documentation and ensuring they are correct.

Parameters:
  • parsers – A sequence of callables. See Parsers.

  • path

    The path in which source files are found, relative to the path of the Python source file in which this class is instantiated. Absolute paths can also be passed.

    Note

    This is ignored when using the pytest integration.

  • pattern – An optional pattern used to match source files that will be parsed for examples.

  • patterns – An optional sequence of patterns used to match source paths that will be parsed for examples.

  • exclude – An optional pattern for source file names that will be excluded when looking for examples.

  • excludes – An optional sequence of patterns for source paths that will be excluded when looking for examples.

  • filenames – An optional collection of file names that, if found anywhere within the root path or its sub-directories, will be parsed for examples.

  • setup – An optional callable that will be called once before any examples from a Document are evaluated. If provided, it is called with the document’s namespace.

  • teardown – An optional callable that will be called after all the examples from a Document have been evaluated. If provided, it is called with the document’s namespace.

  • fixtures – An optional sequence of strings specifying the names of fixtures to be requested when using the pytest integration. The fixtures will be inserted into the document’s namespace before any examples for that document are evaluated. All scopes of fixture are supported.

  • encoding – An optional string specifying the encoding to be used when decoding documentation source files.

  • document_types – A mapping of file extension to Document subclass such that custom evaluation can be performed per document type.

__add__(other: Sybil) SybilCollection#

Sybil instances can be concatenated into a SybilCollection.

pytest() Callable[[Path, Any], Any]#

The helper method for when you use pytest.

unittest() Callable[[Any, Any, str | None], Any]#

The helper method for when you use unittest.

class sybil.sybil.SybilCollection(iterable=(), /)#

When Sybil instances are concatenated, the collection returned can be used in the same way as a single Sybil.

This allows multiple configurations to be used in a single test run.

pytest() Callable[[Path, Any], Any]#

The helper method for when you use pytest.

unittest() Callable[[Any, Any, str | None], Any]#

The helper method for when you use unittest.

Documents#

class sybil.Document(text: str, path: str)#

This is Sybil’s representation of a documentation source file. It will be instantiated by Sybil and provided to each parser in turn.

Different types of document can be handled by subclassing to provide the required evaluation. The required file extensions, such as '.py', can then be mapped to these subclasses using Sybil's document_types parameter.

text: str#

This is the text of the documentation source file.

path: str#

This is the absolute path of the documentation source file.

namespace: Dict[str, Any]#

This dictionary is the namespace in which all examples parsed from this document will be evaluated.

classmethod parse(path: str, *parsers: Callable[[sybil.Document], Iterable[sybil.Region]], encoding: str = 'utf-8') Document#

Read the text from the supplied path and parse it into a document using the supplied parsers.

line_column(position: int) str#

Return a line and column location in this document based on a character position.

find_region_sources(start_pattern: Pattern[str], end_pattern: Pattern[str]) Iterator[Tuple[Match[str], Match[str], str]]#

This helper method can be used to extract source text for regions based on the two regular expressions provided.

It will yield a tuple of (start_match, end_match, source) for each occurrence of start_pattern in the document’s text that is followed by an occurrence of end_pattern. The matches will be provided as match objects, while the source is provided as a string.

push_evaluator(evaluator: Callable[[sybil.Example], str | None]) None#

Push an Evaluator onto this document’s stack of evaluators if it is not already in that stack.

When evaluating an Example, any evaluators in the stack will be tried in order, starting with the most recently pushed. If an evaluator raises a NotEvaluated exception, then the next evaluator in the stack will be attempted.

If the stack is empty or all evaluators present raise NotEvaluated, then the example’s evaluator will be used. This is the most common case!

pop_evaluator(evaluator: Callable[[sybil.Example], str | None]) None#

Pop an Evaluator off this document’s stack of evaluators. If it is not present in that stack, the method does nothing.

class sybil.document.PythonDocument(text: str, path: str)#

A Document type that imports the document’s source file as a Python module, making names within it available in the document’s namespace.

import_document(example: Example) None#

Imports the document’s source file as a Python module when the first Example from it is evaluated.

class sybil.document.PythonDocStringDocument(text: str, path: str)#

A PythonDocument subclass that only considers the text of docstrings in the document’s source.

classmethod parse(path: str, *parsers: Callable[[sybil.Document], Iterable[sybil.Region]], encoding: str = 'utf-8') Document#

Read the text from the supplied path to a Python source file and parse any docstrings it contains into a document using the supplied parsers.

Regions#

class sybil.Region(start: int, end: int, parsed: Any = None, evaluator: Callable[[sybil.Example], str | None] | None = None, lexemes: Dict[str, Any] | None = None)#

Parsers should yield instances of this class for each example they discover in a documentation source file.

Parameters:
  • start – The character position at which the example starts in the Document.

  • end – The character position at which the example ends in the Document.

  • parsed – The parsed version of the example.

  • evaluator – The callable to use to evaluate this example and check if it is as it should be.

start: int#

The start of this region within the document’s text.

end: int#

The end of this region within the document’s text.

parsed: Any#

The parsed version of this region. This only needs to have meaning to the evaluator.

evaluator: Evaluator | None#

The Evaluator for this region.

lexemes: LexemeMapping#

The lexemes extracted from the region.

adjust(lexed: Region, lexeme: Lexeme) None#

Adjust the start and end of this region based on the provided Lexeme and the Region that lexeme came from.

class sybil.Lexeme(text: str, offset: int, line_offset: int)#

Bases: str

Where needed, this can store both the text of the lexeme and its line offset relative to the line number of the example that contains it.

Lexing#

sybil.typing.Lexer#

The signature for a lexer. See Developing your own parsers. Lexers must not set parsed or evaluator on the Region instances they return.

alias of Callable[[sybil.Document], Iterable[sybil.Region]]

sybil.typing.LexemeMapping#

Mappings used to store lexemes for a Region.

alias of Dict[str, Any]

class sybil.parsers.abstract.lexers.BlockLexer(start_pattern: Pattern[str], end_pattern_template: str, mapping: Dict[str, str] | None = None)#

This is a base class useful for any Lexer that must handle block-style languages such as reStructuredText or Markdown.

It yields a sequence of Region objects for each case where the start_pattern matches. A source Lexeme is created from the text between the end of the start pattern and the start of the end pattern.

Parameters:
  • start_pattern – This is used to match the start of the block. Any named groups will be returned in the lexemes dict of resulting Region objects. If a prefix named group forms part of the match, this will be template substituted into the end_pattern_template before it is compiled.

  • end_pattern_template – This is used to match the end of any block found by the start_pattern. It is templated with any prefix group from the start_pattern Match and len_prefix, the length of that prefix, before being compiled into a Pattern.

  • mapping – If provided, this is used to rename lexemes from the keys in the mapping to their values. Only mapped lexemes will be returned in any Region objects.

Parsing#

sybil.typing.Parser#

The signature for a parser. See Developing your own parsers.

alias of Callable[[sybil.Document], Iterable[sybil.Region]]

class sybil.parsers.abstract.codeblock.AbstractCodeBlockParser(lexers: Sequence[Callable[[sybil.Document], Iterable[sybil.Region]]], language: str | None = None, evaluator: Callable[[sybil.Example], str | None] | None = None)#

An abstract parser for use when evaluating blocks of code.

Parameters:
  • lexers – A sequence of Lexer objects that will be applied in turn to each Document that is parsed. The Region objects returned by these lexers must have both an arguments string, containing the language of the lexed region, and a source Lexeme containing the source code of the lexed region.

  • language – The language that this parser should look for. Lexed regions which don’t have this language in their arguments lexeme will be ignored.

  • evaluator – The evaluator to use for evaluating code blocks in the specified language. You can also override the evaluate() method below.

evaluate(example: Example) str | None#

The Evaluator used for regions yielded by this parser can be provided by implementing this method.

class sybil.parsers.abstract.doctest.DocTestStringParser(evaluator: ~sybil.evaluators.doctest.DocTestEvaluator = <sybil.evaluators.doctest.DocTestEvaluator object>)#

This isn’t a true Parser in that it must be called with a str containing the doctest example’s source and the file name that the example came from.

evaluator: DocTestEvaluator#

The evaluator to use for any doctests found in the supplied source string.

__call__(string: str, name: str) Iterable[Region]#

This will yield sybil.Region objects for any doctest examples found in the supplied string with the evaluator supplied to its constructor and the file name supplied.

Each section starting with a >>> will form a separate region.

class sybil.parsers.abstract.skip.AbstractSkipParser(lexers: Sequence[Callable[[sybil.Document], Iterable[sybil.Region]]])#

An abstract parser for skipping subsequent examples.

Parameters:

lexers – A sequence of Lexer objects that will be applied in turn to each Document that is parsed.

class sybil.parsers.abstract.clear.AbstractClearNamespaceParser(lexers: Sequence[Callable[[sybil.Document], Iterable[sybil.Region]]])#

An abstract parser for clearing the namespace.

ReST Parsing and Lexing#

class sybil.parsers.rest.lexers.DirectiveLexer(directive: str, arguments: str = '', mapping: Dict[str, str] | None = None)#

A BlockLexer for ReST directives that extracts the following lexemes:

  • directive as a str.

  • arguments as a str.

  • source as a Lexeme.

Parameters:
  • directive – a str containing a regular expression pattern to match directive names.

  • arguments – a str containing a regular expression pattern to match directive arguments.

  • mapping – If provided, this is used to rename lexemes from the keys in the mapping to their values. Only mapped lexemes will be returned in any Region objects.

class sybil.parsers.rest.lexers.DirectiveInCommentLexer(directive: str, arguments: str = '', mapping: Dict[str, str] | None = None)#

A BlockLexer for faux ReST directives in comments such as:

.. not-really-a-directive: some-argument

  Source here...

It extracts the following lexemes:

  • directive as a str.

  • arguments as a str.

  • source as a Lexeme.

Parameters:
  • directive – a str containing a regular expression pattern to match directive names.

  • arguments – a str containing a regular expression pattern to match directive arguments.

  • mapping – If provided, this is used to rename lexemes from the keys in the mapping to their values. Only mapped lexemes will be returned in any Region objects.

class sybil.parsers.rest.CaptureParser#

A Parser for captures.

class sybil.parsers.rest.CodeBlockParser(language: str | None = None, evaluator: Callable[[sybil.Example], str | None] | None = None)#

A Parser for code block examples.

Parameters:
  • language – The language that this parser should look for.

  • evaluator – The evaluator to use for evaluating code blocks in the specified language. You can also override the evaluate() method below.

evaluate(example: Example) str | None#

The Evaluator used for regions yielded by this parser can be provided by implementing this method.

class sybil.parsers.rest.PythonCodeBlockParser(future_imports: Sequence[str] = ())#

A Parser for Python code block examples.

Parameters:

future_imports – An optional sequence of strings that will be turned into from __future__ import ... statements and prepended to the code in each of the examples found by this parser.

class sybil.parsers.rest.DocTestParser(optionflags: int = 0)#

A Parser for doctest examples.

Parameters:

optionflags – doctest option flags to use when evaluating the examples found by this parser.

class sybil.parsers.rest.DocTestDirectiveParser(optionflags: int = 0)#

A Parser for doctest directives.

Parameters:

optionflags – doctest option flags to use when evaluating the examples found by this parser.

class sybil.parsers.rest.SkipParser#

A Parser for skip instructions.

class sybil.parsers.rest.ClearNamespaceParser#

A Parser for clear-namespace instructions.

Markdown Parsing and Lexing#

class sybil.parsers.markdown.lexers.RawFencedCodeBlockLexer(info_pattern: Pattern[str] = re.compile('$\\n', re.MULTILINE), mapping: Dict[str, str] | None = None)#

A Lexer for Markdown fenced code blocks allowing flexible lexing of the whole info line along with more complicated prefixes.

The following lexemes are extracted:

  • source as a Lexeme.

  • any other named groups specified in info_pattern as strings.

Parameters:
  • info_pattern – a re.Pattern to match the info line and any required prefix that follows it.

  • mapping – If provided, this is used to rename lexemes from the keys in the mapping to their values. Only mapped lexemes will be returned in any Region objects.

class sybil.parsers.markdown.lexers.FencedCodeBlockLexer(language: str, mapping: Dict[str, str] | None = None)#

A Lexer for Markdown fenced code blocks where a language is specified. RawFencedCodeBlockLexer can be used if the whole info line, or a more complicated prefix, is required.

The following lexemes are extracted:

  • language as a str.

  • source as a Lexeme.

Parameters:
  • language – a str containing a regular expression pattern to match language names.

  • mapping – If provided, this is used to rename lexemes from the keys in the mapping to their values. Only mapped lexemes will be returned in any Region objects.

class sybil.parsers.markdown.lexers.DirectiveInHTMLCommentLexer(directive: str, arguments: str = '.*?', mapping: Dict[str, str] | None = None)#

A BlockLexer for faux directives in HTML-style Markdown comments such as:

<!--- not-really-a-directive: some-argument

    Source here...

--->

It extracts the following lexemes:

  • directive as a str.

  • arguments as a str.

  • source as a Lexeme.

Parameters:
  • directive – a str containing a regular expression pattern to match directive names.

  • arguments – a str containing a regular expression pattern to match directive arguments.

  • mapping – If provided, this is used to rename lexemes from the keys in the mapping to their values. Only mapped lexemes will be returned in any Region objects.

class sybil.parsers.markdown.CodeBlockParser(language: str | None = None, evaluator: Callable[[sybil.Example], str | None] | None = None)#

A Parser for code block examples.

Parameters:
  • language – The language that this parser should look for.

  • evaluator – The evaluator to use for evaluating code blocks in the specified language. You can also override the evaluate() method below.

evaluate(example: Example) str | None#

The Evaluator used for regions yielded by this parser can be provided by implementing this method.

class sybil.parsers.markdown.PythonCodeBlockParser(future_imports: Sequence[str] = (), doctest_optionflags: int = 0)#

A Parser for Python code block examples.

Parameters:
  • future_imports – An optional list of strings that will be turned into from __future__ import ... statements and prepended to the code in each of the examples found by this parser.

  • doctest_optionflags – doctest option flags to use when evaluating the doctest examples found by this parser.

class sybil.parsers.markdown.SkipParser#

A Parser for skip instructions.

class sybil.parsers.markdown.ClearNamespaceParser#

A Parser for clear-namespace instructions.

MyST Parsing and Lexing#

class sybil.parsers.myst.lexers.DirectiveLexer(directive: str, arguments: str = '.*', mapping: Dict[str, str] | None = None)#

A Lexer for MyST directives such as:

```{directivename} arguments
---
key1: val1
key2: val2
---
This is
directive content
```

The following lexemes are extracted:

  • directive as a str.

  • arguments as a str.

  • source as a Lexeme.

Parameters:
  • directive – a str containing a regular expression pattern to match directive names.

  • arguments – a str containing a regular expression pattern to match directive arguments.

  • mapping – If provided, this is used to rename lexemes from the keys in the mapping to their values. Only mapped lexemes will be returned in any Region objects.

class sybil.parsers.myst.lexers.DirectiveInPercentCommentLexer(directive: str, arguments: str = '.*', mapping: Dict[str, str] | None = None)#

A BlockLexer for faux MyST directives in %-style Markdown comments such as:

% not-really-a-directive: some-argument
%
%     Source here...

It extracts the following lexemes:

  • directive as a str.

  • arguments as a str.

  • source as a Lexeme.

Parameters:
  • directive – a str containing a regular expression pattern to match directive names.

  • arguments – a str containing a regular expression pattern to match directive arguments.

  • mapping – If provided, this is used to rename lexemes from the keys in the mapping to their values. Only mapped lexemes will be returned in any Region objects.

class sybil.parsers.myst.CodeBlockParser(language: str | None = None, evaluator: Callable[[sybil.Example], str | None] | None = None)#

A Parser for code block examples.

Parameters:
  • language – The language that this parser should look for.

  • evaluator – The evaluator to use for evaluating code blocks in the specified language. You can also override the evaluate() method below.

evaluate(example: Example) str | None#

The Evaluator used for regions yielded by this parser can be provided by implementing this method.

class sybil.parsers.myst.PythonCodeBlockParser(future_imports: Sequence[str] = (), doctest_optionflags: int = 0)#

A Parser for Python code block examples.

Parameters:
  • future_imports – An optional list of strings that will be turned into from __future__ import ... statements and prepended to the code in each of the examples found by this parser.

  • doctest_optionflags – doctest option flags to use when evaluating the doctest examples found by this parser.

class sybil.parsers.myst.DocTestDirectiveParser(optionflags: int = 0)#

A Parser for doctest directive examples.

Parameters:

optionflags – doctest option flags to use when evaluating the examples found by this parser.

class sybil.parsers.myst.SkipParser#

A Parser for skip instructions.

class sybil.parsers.myst.ClearNamespaceParser#

A Parser for clear-namespace instructions.

Evaluation#

class sybil.Example(document: Document, line: int, column: int, region: Region, namespace: Dict[str, Any])#

This represents a particular example from a documentation source file. It is assembled from the Document and Region the example comes from and is passed to the region’s evaluator.

document: Document#

The Document from which this example came.

path: str#

The absolute path of the Document.

line: int#

The line number at which this example occurs in the Document.

column: int#

The column number at which this example occurs in the Document.

region: Region#

The Region from which this example came.

start: int#

The character position at which this example starts in the Document.

end: int#

The character position at which this example ends in the Document.

parsed: Any#

The version of this example provided by the parser that yielded the Region containing it.

namespace: Dict[str, Any]#

The namespace of the document from which this example came.

class sybil.example.NotEvaluated#

An exception that can be raised by an Evaluator previously pushed onto the document to indicate that it is not evaluating the current example and that a previously pushed evaluator, or the Region evaluator if no others have been pushed, should be used to evaluate the Example instead.

sybil.typing.Evaluator#

The signature for an evaluator. See Developing your own parsers.

class sybil.evaluators.doctest.DocTestEvaluator(optionflags: int = 0)#

The Evaluator to use for Regions yielded by a DocTestStringParser.

Parameters:

optionflags – doctest option flags to use when evaluating examples.