memU-ui is the web frontend for MemU, designed to give developers an intuitive, visual interface. Through a graphical dashboard, users can browse, query, and manage the memory data of their agents. It connects to the memU-server API for real-time data display and operations. memU-ui can be deployed locally or in private environments, and supports one-command startup via Docker.
- Core Algorithm 👉 memU: https://github.com/NevaMind-AI/memU
- Full backend for local deployment 👉 memU-server: https://github.com/NevaMind-AI/memU-server
- One call = response + memory 👉 memU Response API: https://memu.pro/docs#responseapi
- Try memU instantly 👉 https://app.memu.so/quick-start
Star memU-ui to get notified about new releases and join our growing community of AI developers building intelligent agents with persistent memory.
💬 Join our Discord community: https://discord.gg/memu
```bash
# Make sure memu-server is running first.
#
# Quick start with Docker:
#
# docker pull nevamindai/memu-server:latest
# export OPENAI_API_KEY=your-openai-api-key
# docker run --rm -p 8000:8000 -e OPENAI_API_KEY=$OPENAI_API_KEY nevamindai/memu-server:latest

# Install dependencies and start the memU-ui dev server:
npm i
npm run dev
```
- Docker image provided
- Launch the frontend with a single command
- Fully compatible with memU-server API
- Always in sync with memU feature updates
(Some features planned for future releases)
- View memory submission records
- Query and track retrieval records
- Visualize LLM token usage
- Login and registration for multi-user environments
- Role-based access control (Developer / Admin / Regular User)
- Configure access scope and permissions from the frontend
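The planned role-based access control could be modeled roughly as follows. This is a hedged sketch only: the role names, permission strings, and `can` helper are assumptions based on the roadmap above, not shipped memU-ui code.

```typescript
// Illustrative RBAC sketch: each role maps to a set of permissions.
// Names below are assumptions, not memU-ui's actual schema.
type Role = "admin" | "developer" | "user";
type Permission = "read_memories" | "write_memories" | "manage_users";

const rolePermissions: Record<Role, Permission[]> = {
  admin: ["read_memories", "write_memories", "manage_users"],
  developer: ["read_memories", "write_memories"],
  user: ["read_memories"],
};

// Check whether a role grants a given permission.
function can(role: Role, permission: Permission): boolean {
  return rolePermissions[role].includes(permission);
}
```

A frontend would consult a check like `can(currentRole, "write_memories")` before rendering write controls.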
Most memory systems in current LLM pipelines rely heavily on explicit modeling, requiring manual definition and annotation of memory categories. This limits AI’s ability to truly understand memory and makes it difficult to support diverse usage scenarios.
MemU offers a flexible and robust alternative, inspired by hierarchical storage architecture in computer systems. It progressively transforms heterogeneous input data into queryable and interpretable textual memory.
Its core architecture consists of three layers: Resource Layer → Memory Item Layer → MemoryCategory Layer.
- Resource Layer: Multimodal raw data warehouse
- Memory Item Layer: Discrete extracted memory units
- MemoryCategory Layer: Aggregated textual memory units
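The three layers above can be sketched as a simple data model. The interfaces and field names here are illustrative assumptions, not memU's actual schema; the point is that each layer keeps a back-reference to the layer below it, which is what enables full traceability.

```typescript
// Resource Layer: multimodal raw data warehouse.
interface Resource {
  id: string;
  modality: "text" | "image" | "audio";
  payload: string; // raw content, or a reference to it
}

// Memory Item Layer: discrete memory units extracted from resources.
interface MemoryItem {
  id: string;
  resourceId: string; // trace back to the raw resource
  content: string;
}

// MemoryCategory Layer: aggregated, queryable textual memory.
interface MemoryCategory {
  name: string;
  itemIds: string[]; // trace back to the constituent items
  summary: string;
}

// Aggregate items into a category while preserving back-references,
// so a category traces to items and items trace to raw resources.
function aggregate(name: string, items: MemoryItem[]): MemoryCategory {
  return {
    name,
    itemIds: items.map((i) => i.id),
    summary: items.map((i) => i.content).join(" "),
  };
}
```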
- Full Traceability: Track from raw data → items → documents and back
- Memory Lifecycle: Memorization → Retrieval → Self-evolution
- Two Retrieval Methods:
- RAG-based: Fast embedding vector search
- LLM-based: Direct file reading with deep semantic understanding
- Self-Evolving: Adapts memory structure based on usage patterns
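The RAG-based retrieval path can be illustrated with a minimal similarity search: rank stored memory embeddings by cosine similarity to a query embedding and return the top matches. This is a sketch under assumed names (`EmbeddedMemory`, `retrieve`), not memU's actual retrieval API.

```typescript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// A stored memory with a precomputed embedding (hypothetical shape).
interface EmbeddedMemory {
  text: string;
  embedding: number[];
}

// Return the k memories most similar to the query embedding.
function retrieve(query: number[], memories: EmbeddedMemory[], k: number): EmbeddedMemory[] {
  return [...memories]
    .sort((x, y) => cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding))
    .slice(0, k);
}
```

The LLM-based path skips the vector index entirely: the memory files are passed to the model as text, trading latency for deeper semantic understanding.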
By contributing to memU-ui, you agree that your contributions will be licensed under the AGPL-3.0 License.
For more information, please contact [email protected].
- GitHub Issues: Report bugs, request features, and track development.
- Discord: Get real-time support, chat with the community, and stay updated.
- X (Twitter): Follow for updates, AI insights, and key announcements.