Add q16-fixed-exp: a Q16.16 fixed-point exp() library for Cortex-M (C + ASM) #25
This library implements fixed_mul(), fixed_exp() (for x ≥ 0), and fixed_exp_signed() (full signed support) in Q16.16 format, entirely in integer math: no FPU, no divisions. It uses:
- Binary decomposition of the fractional part (11-bit resolution) for sub-0.1 % accuracy on positive exponents (see the C sketch after this list)
- LUT + one-point linear interpolation for negative exponents (≤ 0.1 % error over [0, 10])
- A tiny lookup table (~324 B) and just one extra multiply per call for negative arguments
- Both portable C and hand-optimized ARM Cortex-M3 assembly
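For reviewers, here is a minimal C sketch of the binary-decomposition idea for x ≥ 0; it is an illustration of the technique, not the code shipped in this PR. The value x is split into its set bits, and e^x is accumulated as a product of precomputed e^(2^k) constants, one Q16.16 multiply per set bit. The type name q16_t, the table exp2k_lut, the function name q16_exp_sketch, and the rounded constants are my own and may differ from the library's fixed_exp(); the negative-exponent LUT + interpolation path is not shown.

```c
#include <stdint.h>

typedef int32_t q16_t;                  /* Q16.16 signed fixed point (illustrative name) */
#define Q16_ONE ((q16_t)1 << 16)        /* 1.0 in Q16.16 */

/* Q16.16 multiply: widen to 64 bits, then drop the extra 16 fraction bits. */
static inline q16_t fixed_mul(q16_t a, q16_t b)
{
    return (q16_t)(((int64_t)a * (int64_t)b) >> 16);
}

/* e^(2^k) in Q16.16 for k = 3..0 (integer bits) and k = -1..-11 (11 fractional
 * bits). Constants are my own rounding, not taken from the PR. */
static const q16_t exp2k_lut[15] = {
    195360063,  /* e^8        */
      3578144,  /* e^4        */
       484249,  /* e^2        */
       178145,  /* e^1        */
       108051,  /* e^(1/2)    */
        84150,  /* e^(1/4)    */
        74262,  /* e^(1/8)    */
        69763,  /* e^(1/16)   */
        67616,  /* e^(1/32)   */
        66568,  /* e^(1/64)   */
        66050,  /* e^(1/128)  */
        65793,  /* e^(1/256)  */
        65664,  /* e^(1/512)  */
        65600,  /* e^(1/1024) */
        65568,  /* e^(1/2048) */
};

/* Binary-decomposition exp() sketch for x >= 0 (x and result in Q16.16).
 * Since x is the sum of 2^k over its set bits, e^x is the product of the
 * matching e^(2^k) constants. Above x ~ 10.4, e^x no longer fits in Q16.16. */
q16_t q16_exp_sketch(q16_t x)
{
    q16_t result = Q16_ONE;

    /* bit 19 of the raw value is 2^3 = 8; bit 5 is 2^-11. */
    for (int i = 0, bit = 19; i < 15; i++, bit--) {
        if (x & ((q16_t)1 << bit))
            result = fixed_mul(result, exp2k_lut[i]);
    }
    return result;
}
```

As a quick host-side sanity check with these constants: q16_exp_sketch(163840) (x = 2.5 in Q16.16) returns raw 798394, i.e. ≈ 12.18253, versus e^2.5 ≈ 12.18249.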
Based on O. W. Jackson’s TechRxiv paper “A Fixed-Point Binary Decomposition Method for Efficient Exponential Approximation in Embedded Systems” (May 2025):
https://www.techrxiv.org/users/921611/articles/1293706-a-fixed-point-binary-decomposition-method-for-efficient-exponential-approximation-in-embedded-systems
GitHub: https://github.com/you/q16-fixed-exp