Tired of waiting for official translations? Wish you could continue the story right after the anime or manga ends?
LexiconForge is your gateway to the world of web novels. It is an interface that lets you translate chapters from almost any source (let me know and I can add support), in any language, to any language!
This is a passion project, born of how much I love getting lost in these worlds.
🎮 Use it instantly: lexicon-forge.vercel.app is live for anyone to read and translate right now.
🌟 Patreon concierge: Subscribers get 1:1 onboarding, priority feature requests, bug fixes, and a complimentary $10 API credit to start reading. Join here.
LexiconForge is more than just a translator; it's a power tool for readers.
- 👍 / 👎 on any line to teach the model your taste and steadily improve every chapter.
- 🔒 Privacy-first architecture keeps your API keys and translation history on-device.
- 🧠 Bring your own favorite model: Gemini, Claude, DeepSeek, OpenRouter, Flux, and more are all supported.
- ❓ Use the question emoji to generate cultural footnotes and etymology explanations on demand.
- 🎨 Summon bespoke illustrations by reacting to your favorite scene with the art emoji.
- ✏️ Tap the edit button to surgically refine the AI's output before saving it.
- 🔍 Compare against trusted fan translations inline, toggling between AI, raw, and fan versions; use Settings to control whether fan translations are sent as reference to the AI or kept purely for comparison.
- 📦 Export polished EPUBs for offline reading once you've curated the perfect translation.
- 🎛️ Experiment with prompts, OST generation, img2img steering, session analytics, and more quality-of-life tools built for deep reading.
🌐 Multi-Site Support: Currently supports 5 major web novel platforms:
- Kakuyomu (kakuyomu.jp) - Japanese light novels
- Syosetu (ncode.syosetu.com) - Japanese web novels
- Dxmwx (dxmwx.org) - Chinese web novels
- Kanunu (kanunu8.com, kanunu.net) - Chinese literature archive
- NovelCool (novelcool.com) - Multi-language novel platform
🔄 Intelligent CORS Proxy System: 10+ redundant proxy servers with automatic health monitoring and failover for reliable content fetching.
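A minimal sketch of what such a failover loop can look like. The `fetchViaProxy` helper, the failure-count map, and all names here are illustrative assumptions, not LexiconForge's actual implementation:

```typescript
// Ordered proxy failover sketch: prefer proxies with the fewest recent
// failures, try each in turn, and record failures for future ranking.
type ProxyHealth = Map<string, number>; // proxy URL -> consecutive failures

export function rankProxies(proxies: string[], health: ProxyHealth): string[] {
  // Fewest recent failures first; stable sort preserves list order on ties.
  return [...proxies].sort(
    (a, b) => (health.get(a) ?? 0) - (health.get(b) ?? 0)
  );
}

export async function fetchWithFailover(
  targetUrl: string,
  proxies: string[],
  health: ProxyHealth,
  fetchViaProxy: (proxy: string, url: string) => Promise<string>
): Promise<string> {
  for (const proxy of rankProxies(proxies, health)) {
    try {
      const body = await fetchViaProxy(proxy, targetUrl);
      health.set(proxy, 0); // success resets this proxy's failure count
      return body;
    } catch {
      health.set(proxy, (health.get(proxy) ?? 0) + 1); // note failure, try next
    }
  }
  throw new Error(`All ${proxies.length} proxies failed for ${targetUrl}`);
}
```

The ranking step is what "health monitoring" buys you: a proxy that keeps failing naturally drops to the back of the queue without ever being permanently blacklisted.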
⚡ Smart Preloading: Background fetching of upcoming chapters for seamless reading (configurable 0-50 chapters ahead).
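The look-ahead logic reduces to a small planning function: given the current chapter and the configured window, fetch whatever is not already cached or in flight. The function name and signature below are illustrative, not the app's real API:

```typescript
// Sketch of preload planning: enumerate the look-ahead window and
// deduplicate against chapters that are already cached or being fetched.
export function planPreload(
  currentChapter: number,
  lookAhead: number,          // user setting, e.g. 0-50 chapters ahead
  alreadyLoaded: Set<number>  // cached or in-flight chapter numbers
): number[] {
  const toFetch: number[] = [];
  for (let n = currentChapter + 1; n <= currentChapter + lookAhead; n++) {
    if (!alreadyLoaded.has(n)) toFetch.push(n); // skip duplicates
  }
  return toFetch;
}
```

Keeping the planner pure (no fetching inside) makes rate limiting easy: the caller can drain the returned list through whatever throttle it likes.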
🧭 Navigation Memory: Intelligently balances disk and RAM usage so the app never slows your computer down.
- 🔑 Multi-Provider Support: Use your own API keys for Gemini, Claude, DeepSeek, or OpenRouter. You control your usage and data. If you need help, contact an admin in the @webnovels group to get an API key that works!
- 🤖 22+ AI Models: Access the latest generation of AI models across all providers to find your perfect translator. Quality and style vary across models and prompt combinations.
- 🔮 Coming Soon: Direct OpenAI integration (requires backend proxy for security).
- 🎛️ Fine-Tuned Control: Adjust temperature (creativity), context depth (0-5 previous chapters), and model-specific settings.
- 💰 Real-Time Cost Tracking: Obsessive focus on cost-efficiency. See exactly how much each translation costs, down to the fraction of a cent, with 2025 pricing.
- 🛑 Cancelable Requests: Click the red spinner to abort in-flight translations instantly.
- ✅ Structure Guarantees: Built-in validation for illustration and footnote markers keeps body text and JSON aligned.
- 🎯 Fan Translation Control: Toggle whether fan translations are sent to the AI as reference (Settings → General → "Include Fan Translation as Reference"). When enabled (default), the AI uses fan translations as ground truth to improve quality. When disabled, test pure translation quality with only raw text and previous chapters; fan translations remain available for side-by-side comparison.
- 💬 Text Selection Feedback: Select any text and rate it 👍, 👎, or ❓ to teach the AI your preferences.
- ❓ Smart Explanations: Click the ? emoji on selected text to generate detailed footnotes explaining translation choices, cultural context, or literary techniques.
- 🎨 Illustration Generation: Click the 🎨 emoji on selected passages to automatically generate contextual illustration prompts that capture key story moments.
- ✍️ Prompt Template Library: Create, save, and manage custom system prompts for different novel types (Wuxia, Romance, Technical, etc.).
- 🔄 Amendment Proposals: AI suggests prompt improvements based on your feedback patterns.
- 📝 Inline Annotations: Collaborative feedback system with comments and rating history.
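The Structure Guarantees item above amounts to a consistency check between markers in the body text and entries in the structured JSON. A hypothetical sketch of such a check; the `[ILLUSTRATION-n]` marker syntax is an assumption for illustration, not the app's real schema:

```typescript
// Verify that illustration markers in the translated text and the entries
// declared in the structured JSON agree exactly: no orphaned markers in
// the text, no unreferenced entries in the JSON.
export function validateMarkers(
  translatedText: string,
  illustrationIds: string[] // ids declared in the JSON payload
): { missingFromJson: string[]; missingFromText: string[] } {
  const inText = new Set(
    [...translatedText.matchAll(/\[ILLUSTRATION-(\d+)\]/g)].map((m) => m[1])
  );
  const inJson = new Set(illustrationIds);
  return {
    missingFromJson: [...inText].filter((id) => !inJson.has(id)),
    missingFromText: [...inJson].filter((id) => !inText.has(id)),
  };
}
```

Running a check like this before saving a translation is what keeps body text and JSON from silently drifting apart when a model drops or invents a marker.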
- 🖼️ Advanced AI Image Generation: Bring pivotal story moments to life with cutting-edge image generation:
  - Multi-Model Support: Flux models (PiAPI), Imagen 3.0/4.0, and Gemini image generation
  - 21 LoRA Style Models: XLabs (7) and CivitAI (14) collections for artistic transformation: anime, realism, cyberpunk, art deco, and more
  - img2img with Steering Images: Guide generation with reference images for consistent character/scene styling
  - Advanced Controls: Negative prompts, guidance scale (1.5-5.0), and LoRA strength tuning (0.1-2.0)
  - Smart Context Placement: AI automatically places illustration markers at key story moments
  - Collapsible Interface: Advanced controls hidden by default for distraction-free reading
- 📖 Professional EPUB Export: Generate beautiful e-books with:
  - Comprehensive translation statistics and cost breakdowns
  - Provider/model usage analytics across your entire library
  - Embedded AI-generated illustrations with captions
  - Customizable acknowledgments and project descriptions
- 💾 Complete Data Ownership: Export/import your entire session as JSON. Your reading history, translations, feedback, and settings belong to you.
- 🎧 Scene Music & Cues: Generate background music or ambient tracks from style prompts
- 🧩 Two Modes: `txt2audio` (from a text prompt) and `audio2audio` (style transfer)
- 🎛️ Style Presets: Curated prompts (Dark Cinematic, Strategist's Gambit, etc.)
- 💰 Cost Awareness: Provider-reported durations and simple cost estimates
- 🔒 Opt-In: Works with your PiAPI key; entirely client-side
- 🏗️ Dual-Tier Architecture: Instant UI updates (Zustand) + unlimited persistent storage (IndexedDB) for the best of both worlds.
- 🔄 Session Persistence: Survive browser crashes and restarts. Your progress is never lost.
- 📊 Professional Statistics: Detailed breakdowns of token usage, costs, translation time, and model performance across your entire library.
- 🚀 Smart Preloading: Configurable background fetching (0-10 chapters ahead) with intelligent rate limiting and deduplication.
- 🎯 Advanced Navigation: Smart URL mapping, browser history integration, and cross-session chapter hydration.
- 🔧 Developer-Friendly Debugging: Optional console logging system to monitor translation performance and troubleshoot issues.
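The Dual-Tier Architecture item above follows a classic write-through pattern: reads hit a synchronous in-memory map so the UI never blocks, while every write is mirrored to an async persistence layer. A generic sketch with a pluggable store, not the app's actual Zustand/IndexedDB wiring:

```typescript
// Write-through dual-tier store sketch: a hot in-memory tier for instant
// reads, with every write mirrored to an async persistence backend
// (IndexedDB in a browser; any key-value store satisfies the interface).
interface Persistence {
  put(key: string, value: unknown): Promise<void>;
}

export class DualTierStore<T> {
  private memory = new Map<string, T>(); // hot tier: synchronous reads

  constructor(private disk: Persistence) {}

  get(key: string): T | undefined {
    return this.memory.get(key); // UI reads never wait on disk
  }

  async set(key: string, value: T): Promise<void> {
    this.memory.set(key, value);     // update the hot tier first
    await this.disk.put(key, value); // then write through to storage
  }
}
```

Because the hot tier is updated before the write-through, a read issued immediately after a write sees the new value even if persistence is still in flight.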
The easiest way to start is with the official hosted version on Vercel. No installation required!
➡️ Click here to launch LexiconForge
If you'd like a guided setup, tailored prompts, or bespoke feature development, hop onto the Patreon; I'll work with you directly to craft the perfect reading experience.
Want to run your own instance? It's easy.
- Clone the repository.
- Install dependencies: `npm install`
- Add your API keys to a new `.env.local` file:

```
VITE_GEMINI_API_KEY=your_gemini_key_here
VITE_DEEPSEEK_API_KEY=your_deepseek_key_here
VITE_CLAUDE_API_KEY=your_claude_key_here
VITE_OPENROUTER_API_KEY=your_openrouter_key # Access to 100+ models including GPT-4o
VITE_PIAPI_API_KEY=your_piapi_key_here # For Flux models and LoRA
# Note: Direct OpenAI support coming soon (requires backend proxy)
```

- Run the app: `npm run dev`
If you have reference fan translations (e.g., from human translators), you can merge them into an exported session JSON:
```
npm run merge-fan-translations path/to/session.json path/to/fan-translations/ [output.json]
```

What this does:
- Matches fan translation files by chapter number (e.g., `chapter-256.txt` → chapter 256)
- Adds them to the session as a `fanTranslation` field for each chapter
- Prints merge coverage statistics (how many chapters got fan translations)
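The chapter matching above boils down to extracting a number from each filename. A sketch, assuming the script only accepts the `chapter-<n>.txt` pattern (the real script may accept more):

```typescript
// Extract a chapter number from a fan-translation filename such as
// "chapter-256.txt"; returns null when the name doesn't match the pattern.
export function chapterNumberFromFilename(filename: string): number | null {
  const match = filename.match(/chapter-(\d+)\.txt$/i);
  return match ? parseInt(match[1], 10) : null;
}
```

Files that return `null` would simply be skipped, which is why the script prints coverage statistics afterwards: they tell you how many chapters actually matched.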
Fan translations unlock:
- Side-by-side comparison: Toggle between AI, raw, and fan versions while reading
- AI reference mode: When "Include Fan Translation as Reference" is enabled (Settings → General), the AI uses fan translations as ground truth to improve quality
- Quality benchmarking: Disable the reference mode to test how well the AI translates from raw text alone, using fan translations purely for comparison
For detailed technical information, see the Project Structure & Technical Details.
- Settings Reference: `docs/Settings.md`
- Environment Variables: `docs/EnvVars.md`
- Providers & Models: `docs/Providers.md`
- Image/Illustrations: see Rich Media section above
- Audio Generation: `docs/Audio.md`
- Workers & Batch Jobs: `docs/Workers.md`
- Data Schemas (Translation/Session): `docs/Schemas.md`
- EPUB Export & Templates: `docs/EPUB.md`
- Architecture Decisions (ADRs): `docs/` and `docs/adr/`
- Chrome Extension (BookToki scraper): `chrome_extension/README.md`
Have a question, a feature request, or want to see what's next? The project is fully open source; hack on it with me, or just hang out with other readers.
- Join our Telegram Group: Get help, suggest new site adapters, and chat with other users at @webnovels.
- Patreon concierge: Become a patron for bespoke support, new feature prototypes, and API credits.
- You can also try readomni and let me know whether you like it more than LexiconForge, and why!
LexiconForge is a passion project. If you find it useful, please consider supporting its continued development.
- Donate via Ethereum: `adityaarpitha.eth`
- Sponsor ongoing work: Patreon link
- Contributing Guide: `CONTRIBUTING.md`
- Debugging Flags: `docs/Debugging.md`
- Prompt Configuration: `config/PROMPT_DOCUMENTATION.md`