Releases: mudler/LocalAI
v2.10.0
LocalAI v2.10.0 Release Notes
Excited to announce the release of LocalAI v2.10.0! This version introduces significant changes, including breaking changes, numerous bug fixes, exciting new features, dependency updates, and more. Here's a summary of what's new:
Breaking Changes 🛠
- The `trust_remote_code` setting in the model's YAML config file is now also consumed by the AutoGPTQ and transformers backends as an enhanced security measure, thanks to @dave-gray101's contribution (#1799). If your model relied on the old behavior and you are sure of what you are doing, set `trust_remote_code: true` in the YAML config file.
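As a minimal sketch (the file name, model name, and model identifier below are placeholders, not shipped defaults), opting back into the old behavior could look like this:

```bash
# Hypothetical example: create a model config that re-enables remote code execution.
# Only do this if you fully trust the custom code shipped with the model.
mkdir -p models
cat > models/my-model.yaml <<'EOF'
name: my-model                 # placeholder model name
backend: transformers          # the setting also applies to the AutoGPTQ backend
parameters:
  model: some-org/some-model   # placeholder Hugging Face model identifier
trust_remote_code: true        # opt back into the pre-2.10.0 behavior
EOF
```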
Bug Fixes 🐛
- Various fixes have been implemented to enhance the stability and performance of LocalAI:
- SSE no longer omits empty `finish_reason` fields, for better compatibility with the OpenAI API, fixed by @mudler (#1745).
- Functions now correctly handle scenarios with no results, also addressed by @mudler (#1758).
- A Command Injection Vulnerability has been fixed by @ouxs-19 (#1778).
- OpenCL-based builds for llama.cpp have been restored, thanks to @cryptk's efforts (#1828, #1830).
- An issue with the OSX build of `default.metallib` has been resolved, which should now allow running the llama.cpp backend on Apple arm64, fixed by @dave-gray101 (#1837).
Exciting New Features 🎉
- LocalAI continues to evolve with several new features:
- Ongoing implementation of the assistants API, making great progress thanks to community contributions, including an initial implementation by @christ66 (#1761).
- Addition of diffusers/transformers support for Intel GPUs - you can now generate images and use the `transformers` backend on Intel GPUs as well, implemented by @mudler (#1746).
- Introduction of bitsandbytes quantization for the transformers backend, along with a fix for a transformers backend error on CUDA, by @fakezeta (#1823).
- Compatibility layers for Elevenlabs and OpenAI TTS, enhancing text-to-speech capabilities: LocalAI is now compatible with the Elevenlabs and OpenAI TTS APIs, thanks to @mudler (#1834).
- vLLM now supports `stream: true`! This feature was introduced by @golgeek (#1749); a usage sketch follows below.
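To illustrate, here is a hedged sketch of a streaming request against the OpenAI-compatible chat endpoint (the model name is a placeholder for whatever model you have configured on the vLLM backend):

```bash
# Hypothetical example: stream tokens from a vLLM-backed model as they are generated.
# Replace "my-vllm-model" with the name of a model configured for the vllm backend.
curl -N http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "my-vllm-model",
    "messages": [{"role": "user", "content": "Write a haiku about local inference"}],
    "stream": true
  }'
```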
Dependency Updates 👒
- Our continuous effort to keep dependencies up-to-date includes multiple updates to `ggerganov/llama.cpp`, `donomii/go-rwkv.cpp`, `mudler/go-stable-diffusion`, and others, ensuring that LocalAI is built on the latest and most secure libraries.
Other Changes
- Several internal changes have been made to improve the development process and documentation, including updates to integration guides, stress reduction on self-hosted runners, and more.
Details of What's Changed
Breaking Changes 🛠
- feat(autogpt/transformers): consume `trust_remote_code` by @dave-gray101 in #1799
Bug fixes 🐛
- fix(sse): do not omit empty finish_reason by @mudler in #1745
- fix(functions): handle correctly when there are no results by @mudler in #1758
- fix(tests): re-enable tests after code move by @mudler in #1764
- Fix Command Injection Vulnerability by @ouxs-19 in #1778
- fix: the correct BUILD_TYPE for OpenCL is clblas (with no t) by @cryptk in #1828
- fix: missing OpenCL libraries from docker containers during clblas docker build by @cryptk in #1830
- fix: osx build default.metallib by @dave-gray101 in #1837
Exciting New Features 🎉
- fix: vllm - use AsyncLLMEngine to allow true streaming mode by @golgeek in #1749
- refactor: move remaining api packages to core by @dave-gray101 in #1731
- Bump vLLM version + more options when loading models in vLLM by @golgeek in #1782
- feat(assistant): Initial implementation of assistants api by @christ66 in #1761
- feat(intel): add diffusers/transformers support by @mudler in #1746
- fix(config): set better defaults for inferencing by @mudler in #1822
- fix(docker-compose): update docker compose file by @mudler in #1824
- feat(model-help): display help text in markdown by @mudler in #1825
- feat: Add Bitsandbytes quantization for transformer backend enhancement #1775 and fix: Transformer backend error on CUDA #1774 by @fakezeta in #1823
- feat(tts): add Elevenlabs and OpenAI TTS compatibility layer by @mudler in #1834
- feat(embeddings): do not require to be configured by @mudler in #1842
👒 Dependencies
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1752
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1753
- deps(llama.cpp): update by @mudler in #1759
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1756
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1767
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1772
- ⬆️ Update donomii/go-rwkv.cpp by @localai-bot in #1771
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1779
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1789
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1791
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1794
- depedencies(sentencentranformers): update dependencies by @TwinFinz in #1797
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1801
- ⬆️ Update mudler/go-stable-diffusion by @localai-bot in #1802
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1805
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1811
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1827
Other Changes
- ci: add stablediffusion to release by @sozercan in #1757
- Update integrations.md by @Joshhua5 in #1765
- ci: reduce stress on self-hosted runners by @mudler in #1776
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1785
- Revert "feat(assistant): Initial implementation of assistants api" by @mudler in #1790
- Edit links in readme and integrations page by @lunamidori5 in #1796
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1813
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1816
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1818
- fix(doc/examples): set defaults to mirostat by @mudler in #1820
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1821
- fix: OSX Build Files for llama.cpp by @dave-gray101 in #1836
- ⬆️ Update go-skynet/go-llama.cpp by @localai-bot in #1835
- docs(transformers): add docs section about transformers by @mudler in #1841
- ⬆️ Update mudler/go-piper by @localai-bot in #1844
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1840
New Contributors
- @golgeek made their first contribution in #1749
- @Joshhua5 made their first contribution in #1765
- @ouxs-19 made their first contribution in #1778
- @TwinFinz made their first contribution in #1797
- @cryptk made their first contribution in #1828
- @fakezeta made their first contribution in #1823
Thank you to all contributors and users for your continued support and feedback, making LocalAI better with each release!
Full Changelog: v2.9.0...v2.10.0
v2.9.0
This release brings many enhancements, fixes, and a special thanks to the community for the amazing work and contributions!
We now have sycl images for Intel GPUs, ROCm images for AMD GPUs, and much more:
- You can find the AMD GPU image tags among the available container images - look for `hipblas`. For example, `master-hipblas-ffmpeg-core`. Thanks to @fenfir for this nice contribution!
- Intel GPU images are tagged with `sycl` and come in two flavors, `sycl-f16` and `sycl-f32`. For example, `master-sycl-f16`. Work is in progress to also support diffusers and transformers on Intel GPUs.
- Thanks to @christ66's first efforts in supporting the Assistants API, we are planning to support the Assistants API! Stay tuned for more!
- LocalAI now supports the Tools API endpoint - it also supports the (now deprecated) functions API call as usual, and there is now support for SSE with function calling. See the sketch after this list and #1726 for more.
- Support for Gemma models - did you hear? Google released open models, and LocalAI already supports them!
- Thanks to @dave-gray101's efforts in #1728 to refactor parts of the code, we are soon going to support more ways to interface with LocalAI, not only a RESTful API!
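As a sketch of what a Tools API request can look like (the model name and the get_current_weather function below are illustrative placeholders, not part of this release):

```bash
# Hypothetical example: let the model decide whether to call a tool,
# following the OpenAI-style tools schema that LocalAI mirrors.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "my-model",
    "messages": [{"role": "user", "content": "What is the weather like in Rome?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": { "city": { "type": "string" } },
          "required": ["city"]
        }
      }
    }],
    "tool_choice": "auto"
  }'
```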
Support the project
First off, a massive thank you to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!
And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.
Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsorship program can make a big difference.
Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy
Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome together!
Thanks a ton, and here's to more exciting times ahead with LocalAI! 🚀
What's Changed
Bug fixes 🐛
Exciting New Features 🎉
- Build docker container for ROCm by @fenfir in #1595
- feat(tools): support Tool calls in the API by @mudler in #1715
- Initial implementation of upload files api. by @christ66 in #1703
- feat(tools): Parallel function calling by @mudler in #1726
- refactor: move part of api packages to core by @dave-gray101 in #1728
- deps(llama.cpp): update, support Gemma models by @mudler in #1734
👒 Dependencies
- deps(llama.cpp): update by @mudler in #1714
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1740
Other Changes
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1718
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1705
- Update README.md by @lunamidori5 in #1739
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1750
New Contributors
- @fenfir made their first contribution in #1595
- @christ66 made their first contribution in #1703
- @blob42 made their first contribution in #1730
Full Changelog: v2.8.2...v2.9.0
v2.8.2
v2.8.1
This is a patch release, mostly containing minor patches and bugfixes from 2.8.0.
Most importantly, it contains a bugfix for #1333, which caused the llama.cpp backend to get stuck in some cases when the model starts to hallucinate, as well as fixes to the Python-based backends.
Spread the word!
First off, a massive thank you to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!
And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.
Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsorship program can make a big difference.
Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy
Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome together!
Thanks a ton, and here's to more exciting times ahead with LocalAI! 🚀
What's Changed
Bug fixes 🐛
- fix(vall-e-x): Fix voice cloning by @mudler in #1696
- fix: drop unused code by @mudler in #1697
- fix(llama.cpp): disable infinite context shifting by @mudler in #1704
- fix(llama.cpp): downgrade to a known working version by @mudler in #1706
- fix(python): pin exllama2 by @mudler in #1711
Exciting New Features 🎉
- feat(tts): respect YAMLs config file, add sycl docs/examples by @mudler in #1692
- ci: add cuda builds to release by @sozercan in #1702
Other Changes
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1693
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1694
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1698
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1700
Full Changelog: v2.8.0...v2.8.1
v2.8.0
This release adds support for Intel GPUs and deprecates the old ggml-based backends, which are by now superseded by llama.cpp (which now supports more architectures out-of-the-box). See also #1651.
Images are now based on Ubuntu 22.04 LTS instead of Debian bullseye.
Intel GPUs
There are now images tagged with "sycl", available in two flavors, sycl-f16 and sycl-f32, indicating f16 or f32 support.
For example, to start phi-2 with an Intel GPU, it is enough to use the container image like this:
docker run -e DEBUG=true -ti -v $PWD/models:/build/models -p 8080:8080 -v /dev/dri:/dev/dri --rm quay.io/go-skynet/local-ai:master-sycl-f32-ffmpeg-core phi-2
Note
First off, a massive thank you to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!
And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.
Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsorship program can make a big difference.
Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy
Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome, together.
Thanks a ton, and here's to more exciting times ahead with LocalAI! 🚀
What's Changed
Exciting New Features 🎉
- feat(sycl): Add support for Intel GPUs with sycl (#1647) by @mudler in #1660
- Drop old falcon backend (deprecated) by @mudler in #1675
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1678
- Drop ggml-based gpt2 and starcoder (supported by llama.cpp) by @mudler in #1679
- fix(Dockerfile): sycl dependencies by @mudler in #1686
- feat: Use ubuntu as base for container images, drop deprecated ggml-transformers backends by @mudler in #1689
👒 Dependencies
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1656
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1665
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1669
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1673
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1683
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1688
- ⬆️ Update mudler/go-stable-diffusion by @localai-bot in #1674
Other Changes
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1661
- feat(mamba): Add bagel-dpo-2.8b by @richiejp in #1671
- fix (docs): fixed broken links `github/` -> `github.com/` by @Wansmer in #1672
- Fix HTTP links in README.md by @vfiftyfive in #1677
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1681
- ci: cleanup worker before run by @mudler in #1685
- Revert "fix(Dockerfile): sycl dependencies" by @mudler in #1687
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1691
New Contributors
- @richiejp made their first contribution in #1671
- @Wansmer made their first contribution in #1672
- @vfiftyfive made their first contribution in #1677
Full Changelog: v2.7.0...v2.8.0
v2.7.0
This release adds LLM support to the transformers backend as well!
For instance, you can now run codellama-7b with transformers with:
docker run -ti -p 8080:8080 --gpus all localai/localai:v2.7.0-cublas-cuda12 codellama-7b
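If you want to wire a model to the transformers backend yourself instead of using the embedded short-hand, a rough sketch of a model config (the file name and model identifier are placeholder assumptions) could be:

```bash
# Hypothetical example: a minimal YAML config pointing a model at the transformers backend.
cat > models/codellama-7b.yaml <<'EOF'
name: codellama-7b
backend: transformers
parameters:
  model: codellama/CodeLlama-7b-hf   # placeholder Hugging Face model identifier
EOF
```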
More examples are available in the quickstart: https://localai.io/basics/getting_started/#running-models.
Note: as llama.cpp has ongoing changes that could possibly cause breakage, this release does not include the changes from ggerganov/llama.cpp#5138 (future versions will).
What's Changed
Bug fixes 🐛
Exciting New Features 🎉
- feat(transformers): support also text generation by @mudler in #1630
- transformers: correctly load automodels by @mudler in #1643
- feat(startup): fetch model definition remotely by @mudler in #1654
👒 Dependencies
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1642
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1644
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1652
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1655
Other Changes
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1632
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1631
Full Changelog: v2.6.1...v2.7.0
v2.6.1
This is a patch release containing bug-fixes around parallel request support with llama.cpp models.
What's Changed
Bug fixes 🐛
- fix(llama.cpp): Enable parallel requests by @tauven in #1616
- fix(llama.cpp): enable cont batching when parallel is set by @mudler in #1622
Exciting New Features 🎉
👒 Dependencies
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1623
Other Changes
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1619
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1620
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1626
New Contributors
Full Changelog: v2.6.0...v2.6.1
v2.6.0
What's Changed
Bug fixes 🐛
- move BUILD_GRPC_FOR_BACKEND_LLAMA logic to makefile: errors in this section now immediately fail the build by @dionysius in #1576
- prepend built binaries in PATH for BUILD_GRPC_FOR_BACKEND_LLAMA by @dionysius in #1593
Exciting New Features 🎉
- minor: replace shell pwd in Makefile with CURDIR for better windows compatibility by @dionysius in #1571
- Makefile: allow to build without GRPC_BACKENDS by @mudler in #1607
- feat: 🐍 add mamba support by @mudler in #1589
- feat(extra-backends): Improvements, adding mamba example by @mudler in #1618
👒 Dependencies
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1567
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1568
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1573
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1578
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1583
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1587
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1590
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1594
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1599
Other Changes
- Moving the how tos to self hosted by @lunamidori5 in #1574
- docs: missing golang requirement for local build for debian by @dionysius in #1596
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1597
- docs/examples: enhancements by @mudler in #1572
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1604
- Update README.md by @lunamidori5 in #1601
- docs: re-use original permalinks by @mudler in #1610
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1612
- Expanded and interlinked Docker documentation by @jamesbraza in #1614
- Modernized LlamaIndex integration by @jamesbraza in #1613
New Contributors
- @dionysius made their first contribution in #1571
Full Changelog: v2.5.1...v2.6.0
v2.5.1
Patch release to create `/build/models` in the container images.
What's Changed
Other Changes
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1562
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1563
Full Changelog: v2.5.0...v2.5.1
v2.5.0
What's Changed
This release adds more embedded models and shrinks image sizes.
You can now run `phi-2` (see here for the full list) locally by starting LocalAI with:
docker run -ti -p 8080:8080 localai/localai:v2.5.0-ffmpeg-core phi-2
LocalAI now accepts as arguments a list of model short-hands and/or URLs pointing to valid YAML files. A popular way to host those files is GitHub gists.
For instance, you can run `llava` by starting `local-ai` with:
docker run -ti -p 8080:8080 localai/localai:v2.5.0-ffmpeg-core https://raw.githubusercontent.com/mudler/LocalAI/master/embedded/models/llava.yaml
Exciting New Features 🎉
👒 Dependencies
- deps(conda): use transformers-env with vllm,exllama(2) by @mudler in #1554
- deps(conda): use transformers environment with autogptq by @mudler in #1555
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1558
Other Changes
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1557
Full Changelog: v2.4.1...v2.5.0