Type-safe functional pipeline composition with support for tuple unpacking.

This module implements a generic pipeline builder that preserves full callable signatures using Python 3.12+ generic syntax (`[**P]`, `[*Ts]`).
For now the package is not published to a package index, but it can be installed from source. For example, with uv, declare it in your `pyproject.toml` like this:

```toml
[project]
# ...
dependencies = ["pypipeline"]

[tool.uv.sources]
pypipeline = { git = "https://github.com/ihorh/pypipeline" }
```
- **Composable functions** – chain arbitrary callables with `.then()` while retaining static type inference for arguments and return values.
- **Tuple-unpacking pipes** – when a stage returns a tuple, the next stage can automatically receive it as multiple positional arguments by specifying `result_unpack="tuple"`.
- **Strict typing** – uses `ParamSpec` and type-var tuples to propagate argument and return types through every composition step. Type checkers infer the `(args, kwargs)` of the initial function and the final return type of the chain (see the sketch after this list).
- **Dictionary-unpacking** – see the *Tuple and Dictionary Unpacking* section of the module docs.
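To give a rough idea of how such typing works, here is a minimal sketch of signature-preserving composition using the Python 3.12 generic syntax. It is an illustration only, not the library's actual implementation:

```python
from collections.abc import Callable

def compose_sketch[**P, R, S](f: Callable[P, R], g: Callable[[R], S]) -> Callable[P, S]:
    """Illustration only: compose f then g, preserving f's full signature via ParamSpec."""
    def composed(*args: P.args, **kwargs: P.kwargs) -> S:
        return g(f(*args, **kwargs))
    return composed
```

A type checker sees `compose_sketch(f, g)` as a callable with exactly `f`'s parameters and `g`'s return type, which is what lets every composition step stay fully typed.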
- **`Pipeline`** – entry point for building a new pipeline chain. Supports `.then(f)` (and the corresponding shorthand operator `>>`) as well as `.then(f, result_unpack="tuple")`.
- **`compose`** – compose two callables `f` and `g` as `g(f(*args, **kwargs))` in a type-safe way.
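For example (a quick sketch; it assumes `compose` is exported from the package root, and `inc`/`double` are made-up helpers):

```python
from pypipeline import compose  # assumption: top-level export

def inc(x: int) -> int:
    return x + 1

def double(x: int) -> int:
    return x * 2

h = compose(inc, double)  # h(x) is double(inc(x))
assert h(3) == 8
```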
See the module docstrings in `__init__.py` and `_pipeline.py`. For more examples, see the tests.
```python
def f1(x: int, y: int) -> tuple[int, int]:
    return x + 1, y + 1

def f2(a: int, b: int) -> int:
    return a * b

pipeline = Pipeline().then(f1, result_unpack="tuple").then(f2)
pipeline.call(2, 3)  # result is 12
```

or, even better:

```python
pipeline = Pipeline() >> f1 >> f2
pipeline.call(2, 3)  # result is 12
```

or, using an unstable feature, the `|` operator:
```python
result = Pipeline() >> f1 >> f2 | (2, 3)  # result should still be 12
```

A more realistic example assembles a text-preprocessing pipeline from a configuration object (`config`, `StopWordsLoader`, and `TextMiddleware` come from the surrounding application code):

```python
from collections.abc import Sequence
from functools import partial, reduce

def sw_remover(tokens: Sequence[str], sw_loader: StopWordsLoader) -> Sequence[str]:
    """Drop tokens that appear in the loaded stop-word set."""
    return [t for t in tokens if t.lower() not in sw_loader()]

# Normalize config: a single middleware becomes a one-element list.
txt_mws = config.text_middlewares
txt_mws = [txt_mws] if isinstance(txt_mws, TextMiddleware) else txt_mws or []
tokenizer = config.tokenizer_factory()
sw_loaders = config.stopwords_loaders or []

pipeline = Pipeline()
if config.text_preprocessor:
    pipeline = pipeline >> config.text_preprocessor
# Chain each middleware's `before` hook in declaration order.
pipeline = reduce(lambda p, mw: p >> mw, [txtmw.before for txtmw in txt_mws], pipeline)
pipeline = pipeline >> tokenizer.tokenize
# One stop-word removal stage per configured loader.
pipeline = reduce(lambda p, mw: p >> mw, [partial(sw_remover, sw_loader=swl) for swl in sw_loaders], pipeline)
if config.remove_punctuation:
    pipeline = pipeline >> (lambda tokens: [t for t in tokens if t not in config.remove_punctuation])
pipeline = pipeline >> tokenizer.detokenize
# The `after` hooks run in reverse order, mirroring the `before` hooks.
pipeline = reduce(lambda p, mw: p >> mw, [txtmw.after for txtmw in reversed(txt_mws)], pipeline)
```

This library is primarily designed for statically-typed functional composition experiments, not for production runtime optimisation.
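For reference, invoking the assembled pipeline looks like the earlier examples (the input string here is made up):

```python
clean_text = pipeline.call("The QUICK brown fox, jumped!")  # illustrative input
```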
MIT License