v2.15.0
LocalAI v2.15.0!
Hey awesome people! I'm happy to announce the release of LocalAI version 2.15.0! This update introduces several significant improvements and features, enhancing usability, functionality, and user experience across the board. Dive into the key highlights below, and don't forget to check out the full changelog for more detailed updates.
WebUI Upgrades: Turbocharged!
Vision API Integration
The Chat WebUI now seamlessly integrates with the Vision API, making it easier to test image-processing models directly in the browser. It is a simple, hackable interface, under 400 lines of code built with Alpine.js and HTMX!
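Under the hood, a vision request follows the OpenAI-compatible chat format, with an image content part alongside the text. A minimal sketch of building such a payload; the model name, image URL, and server address are assumptions, and the body is only constructed here, not sent:

```python
import json

def vision_payload(model, prompt, image_url):
    """Return a chat/completions body mixing a text part and an image part."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

body = vision_payload("llava", "What is in this picture?", "https://example.com/cat.png")
# POST this JSON to your LocalAI instance, e.g. http://localhost:8080/v1/chat/completions
print(json.dumps(body, indent=2))
```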
System Prompts in Chat
System prompts can now be set in the WebUI chat, guiding interactions more intuitively and making our chat interface smarter and more responsive.
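The system prompt set in the WebUI maps onto the standard OpenAI-style system role in the request body. A minimal sketch, where the model name and messages are assumptions for illustration:

```python
import json

def chat_payload(model, system_prompt, user_message):
    """Return a chat/completions body led by a guiding system prompt."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

body = chat_payload(
    "llama3-instruct-coder",
    "You are a concise coding assistant.",
    "Write hello world in Go.",
)
print(json.dumps(body))
```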
Revamped Welcome Page
New to LocalAI or haven't installed any models yet? No worries! The updated welcome page now guides users through the model installation process, ensuring you're set up and ready to go without any hassle. This is a great first step for newcomers - thanks for your precious feedback!
Background Operations Indicator
Don't get lost with our new background operations indicator on the WebUI, which shows when tasks are running in the background.
Filter Models by Tag and Category
As our model gallery balloons, you can now effortlessly sift through models by tag and category, making finding what you need a breeze.
Single Binary Release
LocalAI is expanding into offering single binary releases, simplifying the deployment process and making it easier to get LocalAI up and running on any system.
For the moment these condensed builds disable the AVX and SSE instruction sets for maximum compatibility. We are also planning to include CUDA builds.
Expanded Model Gallery
This release introduces several exciting new models to our gallery, such as 'Soliloquy', 'tess', 'moondream2', 'llama3-instruct-coder' and 'aurora', enhancing the diversity and capability of our AI offerings. Our selection of one-click-install models is growing! We carefully pick models from the most trending ones on Hugging Face. Feel free to submit your requests in a GitHub issue, hop into our Discord, contribute by hosting your own gallery, or even add models directly to LocalAI!
Want to share your model configurations and customizations? See the docs: https://localai.io/docs/getting-started/customize-model/
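Gallery models can also be installed programmatically through LocalAI's model-apply endpoint, as described in the docs. A hedged sketch that only builds the request without sending it; the host address and model id are assumptions:

```python
import json
import urllib.request

def apply_model_request(base_url, model_id):
    """Build (but do not send) a POST request for LocalAI's /models/apply endpoint."""
    data = json.dumps({"id": model_id}).encode()
    return urllib.request.Request(
        f"{base_url}/models/apply",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = apply_model_request("http://localhost:8080", "moondream2")
# With a LocalAI instance running, urllib.request.urlopen(req) would start the install.
print(req.get_method(), req.full_url)
```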
Let's Make Some Noise!
A gigantic THANK YOU to everyone who's contributed: your feedback, bug squashing, and feature suggestions are what make LocalAI shine. To all our heroes out there supporting other users and sharing their expertise, you're the real MVPs!
Remember, LocalAI thrives on community support, not big corporate bucks. If you love what we're building, show some love! A shoutout on social (@LocalAI_OSS and @mudler_it on twitter/X), joining our sponsors, or simply starring us on GitHub makes all the difference.
Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy
Thanks a ton, and enjoy this release!
What's Changed
Bug fixes
- fix(webui): correct documentation URL for text2img by @mudler in #2233
- fix(ux): fix small glitches by @mudler in #2265
Exciting New Features
- feat: update ROCM and use smaller image by @cryptk in #2196
- feat(llama.cpp): do not specify backends to autoload and add llama.cpp variants by @mudler in #2232
- fix(webui): display small navbar with smaller screens by @mudler in #2240
- feat(startup): show CPU/GPU information with --debug by @mudler in #2241
- feat(single-build): generate single binaries for releases by @mudler in #2246
- feat(webui): ux improvements by @mudler in #2247
- fix: OpenVINO winograd always disabled by @fakezeta in #2252
- UI: flag trust_remote_code to users // favicon support by @dave-gray101 in #2253
- feat(ui): prompt for chat, support vision, enhancements by @mudler in #2259
Models
- fix(gallery): hermes-2-pro-llama3 models checksum changed by @Nold360 in #2236
- models(gallery): add moondream2 by @mudler in #2237
- models(gallery): add llama3-llava by @mudler in #2238
- models(gallery): add llama3-instruct-coder by @mudler in #2242
- models(gallery): update poppy porpoise by @mudler in #2243
- models(gallery): add lumimaid by @mudler in #2244
- models(gallery): add openbiollm by @mudler in #2245
- gallery: Added some OpenVINO models by @fakezeta in #2249
- models(gallery): Add Soliloquy by @mudler in #2260
- models(gallery): add tess by @mudler in #2266
- models(gallery): add lumimaid variant by @mudler in #2267
- models(gallery): add kunocchini by @mudler in #2268
- models(gallery): add aurora by @mudler in #2270
- models(gallery): add tiamat by @mudler in #2269
Documentation and examples
- docs: updated Transformer parameters description by @fakezeta in #2234
- Update readme: add ShellOracle to community integrations by @djcopley in #2254
- Add missing Homebrew dependencies by @michaelmior in #2256
Dependencies
- ⬆️ Update docs version mudler/LocalAI by @localai-bot in #2228
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2229
- ⬆️ Update ggerganov/whisper.cpp by @localai-bot in #2230
- build(deps): bump tqdm from 4.65.0 to 4.66.3 in /examples/langchain/langchainpy-localai-example in the pip group across 1 directory by @dependabot in #2231
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2239
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2251
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2255
- ⬆️ Update ggerganov/llama.cpp by @localai-bot in #2263
Other Changes
- test: check the response URL during image gen in app_test.go by @dave-gray101 in #2248
New Contributors
- @Nold360 made their first contribution in #2236
- @djcopley made their first contribution in #2254
- @michaelmior made their first contribution in #2256
Full Changelog: v2.14.0...v2.15.0