# Spring CLI Chatbot

A simple, configuration-driven command-line chatbot using Spring Boot and LangChain4j. This MVP demonstrates building a flexible AI application where LLM providers and parameters can be easily swapped via external configuration.
## Features

- Interactive CLI-based chat interface.
- Configuration-Driven: Switch LLM providers (Ollama, OpenAI, AWS Bedrock) and models via `application.properties`.
- Flexible Parameters: Adjust model parameters (temperature, max tokens, etc.) via configuration.
- Provider Support: Includes LangChain4j integrations for Ollama, OpenAI (incl. Azure), and AWS Bedrock.
- Conversation Memory: Maintains conversation history using a configurable `ChatMemory`.
- Spring Boot: Built with Spring Boot for dependency injection and configuration.
- Docker-Ready: Optimized multi-stage Docker build supporting external configuration.
## Prerequisites

- Docker installed on your system.
- Java 21 JDK and Maven 3.8+ (for building locally).
- Provider access (depending on configuration):
  - Ollama: A running Ollama instance with the desired models pulled (e.g., `ollama pull gemma3:1b`).
  - OpenAI: An OpenAI API key (or an Azure OpenAI endpoint/key).
  - AWS Bedrock: Configured AWS credentials and Bedrock model access.
## Project Structure

```
spring-cli-chatbot/
├── src/
│   └── main/
│       ├── java/
│       │   └── org/pennmedicine/predictivehealthcare/chatbot/cli/
│       │       ├── ChatBotApplication.java    # Spring Boot main class & CLI runner
│       │       ├── config/
│       │       │   └── ChatConfiguration.java # Defines the ChatMemory bean
│       │       └── service/
│       │           └── ChatService.java       # Core chat logic
│       └── resources/
│           └── application.properties         # Default/base configuration
├── Dockerfile                                 # Builds the application container
├── pom.xml                                    # Maven dependencies and build config
└── README.md                                  # This file
```
(Other files like `my-config.properties` and `ollama_example.md` are for local configuration/documentation.)
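To make the structure concrete, here is a minimal sketch of the core chat logic in `ChatService.java`. It is an illustration, not the actual source: it assumes a pre-1.0 LangChain4j API (`ChatLanguageModel.generate`; newer releases rename this to `ChatModel.chat`) and an injected `ChatMemory` bean.

```java
package org.pennmedicine.predictivehealthcare.chatbot.cli.service;

import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.SystemMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.memory.ChatMemory;
import dev.langchain4j.model.chat.ChatLanguageModel;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;

@Service
public class ChatService {

    private final ChatLanguageModel model;
    private final ChatMemory memory;

    public ChatService(ChatLanguageModel model,
                       ChatMemory memory,
                       @Value("${chatbot.system-prompt}") String systemPrompt) {
        this.model = model;
        this.memory = memory;
        // Seed the conversation with the configured system prompt.
        memory.add(SystemMessage.from(systemPrompt));
    }

    public String chat(String userInput) {
        // Record the user's turn, send the full history to the model,
        // then record the model's reply so context carries forward.
        memory.add(UserMessage.from(userInput));
        AiMessage reply = model.generate(memory.messages()).content();
        memory.add(reply);
        return reply.text();
    }
}
```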
## Configuration

Configuration is managed via standard Spring Boot properties files (either `src/main/resources/application.properties` or an external file mounted in Docker).

Key Configuration Steps:
1. Exclude Unused Providers: Required if `pom.xml` includes multiple provider dependencies. Use `spring.autoconfigure.exclude` to prevent startup errors from providers missing required configuration (like API keys).

   ```properties
   # Example: Exclude OpenAI and AWS when using Ollama
   spring.autoconfigure.exclude=\
     dev.langchain4j.openai.spring.AutoConfig,\
     dev.langchain4j.aws.spring.AutoConfig
   ```

   (To enable a provider later, remove its class from this list in your active properties file.)
2. Select Active Provider: Set the primary provider to use.

   ```properties
   # Options: ollama, open-ai, bedrock
   langchain4j.chat-model.provider=ollama
   ```
3. Configure Active Provider: Set the specific parameters for the selected provider.

   - Ollama Example:

     ```properties
     langchain4j.ollama.chat-model.base-url=${OLLAMA_BASE_URL:http://localhost:11434}
     langchain4j.ollama.chat-model.model-name=gemma3:1b
     langchain4j.ollama.chat-model.temperature=0.7
     langchain4j.ollama.chat-model.num-predict=2000
     langchain4j.ollama.chat-model.timeout=90s
     ```

   - OpenAI Example:

     ```properties
     langchain4j.open-ai.chat-model.api-key=${OPENAI_API_KEY:}
     langchain4j.open-ai.chat-model.model-name=gpt-4o-mini
     langchain4j.open-ai.chat-model.max-tokens=2000
     langchain4j.open-ai.chat-model.timeout=90s
     ```

   - AWS Bedrock Example:

     ```properties
     langchain4j.aws.region=us-east-1
     langchain4j.bedrock.chat-model.model=anthropic.claude-3-sonnet-20240229-v1:0
     langchain4j.bedrock.chat-model.max-tokens=2000
     langchain4j.bedrock.chat-model.timeout=90s
     ```
4. Configure Memory & Common Settings (the `ChatMemory` bean behind these settings is sketched after this list):

   ```properties
   langchain4j.chat-memory.type=message_window
   langchain4j.chat-memory.max-messages=20
   chatbot.system-prompt=You are a helpful AI assistant configured via Spring Boot.
   logging.level.org.pennmedicine.predictivehealthcare.chatbot.cli=INFO
   ```
Note on External Config: When running with Docker, create a local properties file (e.g., `my-config.properties`) with the desired provider selection, exclusions, and parameters, then mount it (see below).
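As referenced in step 4, here is a minimal sketch of how `ChatConfiguration.java` might turn the memory settings into a `ChatMemory` bean. The property binding shown (`@Value` on `langchain4j.chat-memory.max-messages`) is an assumption for illustration; the real class may wire things differently.

```java
package org.pennmedicine.predictivehealthcare.chatbot.cli.config;

import dev.langchain4j.memory.ChatMemory;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ChatConfiguration {

    // Sliding-window memory: keeps the most recent N messages and
    // evicts older ones, bounding the context sent to the model.
    @Bean
    public ChatMemory chatMemory(
            @Value("${langchain4j.chat-memory.max-messages:20}") int maxMessages) {
        return MessageWindowChatMemory.withMaxMessages(maxMessages);
    }
}
```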
## Running Locally (Maven)

1. Configure `src/main/resources/application.properties` (incl. exclusions).
2. Run: `./mvnw spring-boot:run` (or `mvn spring-boot:run`).
## Running with Docker

1. Build the Image:

   ```bash
   docker build -t spring-cli-chatbot .
   ```
2. Prepare Local Config: Create/edit a local properties file (e.g., `my-config.properties`) with your desired provider selection, parameters, and `spring.autoconfigure.exclude` settings.

3. Run: Mount the config file and provide necessary environment variables (like API keys).
   - Ollama Example (Local, Mac/Win): (See `ollama_example.md` for a sample `my-config.properties` using `host.docker.internal`.)

     ```bash
     docker run -it --rm \
       -v "$(pwd)/my-config.properties:/app/config/application.properties" \
       spring-cli-chatbot
     ```
   - OpenAI Example:

     ```bash
     docker run -it --rm \
       -v "$(pwd)/my-config.properties:/app/config/application.properties" \
       -e OPENAI_API_KEY="your_actual_openai_api_key" \
       spring-cli-chatbot
     ```
   - AWS Bedrock Example:

     ```bash
     docker run -it --rm \
       -v "$(pwd)/my-config.properties:/app/config/application.properties" \
       -e AWS_ACCESS_KEY_ID="YOUR_AWS_KEY" \
       -e AWS_SECRET_ACCESS_KEY="YOUR_AWS_SECRET" \
       -e AWS_REGION="us-east-1" \
       spring-cli-chatbot
     ```
## Usage

After application startup (the CLI loop behind these steps is sketched below):

1. See the `Chatbot Ready!` message.
2. Type your message at the `You:` prompt and press Enter.
3. View the AI's response.
4. Type `exit`, `quit`, or `bye` to end.
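A hedged sketch of how `ChatBotApplication.java` might implement this loop with a `CommandLineRunner`; the actual class may differ, and `ChatService.chat` here refers to the service sketch shown earlier.

```java
package org.pennmedicine.predictivehealthcare.chatbot.cli;

import java.util.Scanner;
import java.util.Set;
import org.pennmedicine.predictivehealthcare.chatbot.cli.service.ChatService;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class ChatBotApplication {

    private static final Set<String> EXIT_WORDS = Set.of("exit", "quit", "bye");

    public static void main(String[] args) {
        SpringApplication.run(ChatBotApplication.class, args);
    }

    // Starts the read-eval-print loop once the Spring context is ready.
    @Bean
    CommandLineRunner cli(ChatService chatService) {
        return args -> {
            Scanner scanner = new Scanner(System.in);
            System.out.println("Chatbot Ready!");
            while (true) {
                System.out.print("You: ");
                String input = scanner.nextLine().trim();
                if (EXIT_WORDS.contains(input.toLowerCase())) {
                    break;
                }
                System.out.println("AI: " + chatService.chat(input));
            }
        };
    }
}
```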
## Building and Running the JAR

1. Run `./mvnw clean package` (or `mvn clean package`).
2. This creates `target/spring-cli-chatbot.jar`.
3. Run:

   ```bash
   java -jar target/spring-cli-chatbot.jar
   ```

   (Optionally add `--spring.config.location=file:/path/to/your/config.properties`.)

Ensure the properties file in use includes the necessary `spring.autoconfigure.exclude` settings.
## Future Enhancements

- Web Interface: Expose `ChatService` via Spring Web REST controllers (see the sketch below).
- Advanced LangChain4j: RAG, function calling/tools, etc.
- Multiple Named Models: Use Spring profiles or `@Qualifier`.
- Production: Health checks, structured logging.
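For the web-interface idea, a speculative sketch of exposing `ChatService` over REST; the controller, endpoint path, and request/response records are all hypothetical and would require `spring-boot-starter-web` on the classpath.

```java
package org.pennmedicine.predictivehealthcare.chatbot.cli.web;

import org.pennmedicine.predictivehealthcare.chatbot.cli.service.ChatService;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller: delegates each request to the existing ChatService.
@RestController
public class ChatController {

    private final ChatService chatService;

    public ChatController(ChatService chatService) {
        this.chatService = chatService;
    }

    // Simple request/response records keep the wire format explicit.
    public record ChatRequest(String message) {}
    public record ChatResponse(String reply) {}

    @PostMapping("/api/chat")
    public ChatResponse chat(@RequestBody ChatRequest request) {
        return new ChatResponse(chatService.chat(request.message()));
    }
}
```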
## Key Dependencies

- Spring Boot Starter
- LangChain4j Core (`langchain4j`)
- LangChain4j Integrations:
  - `langchain4j-ollama-spring-boot-starter`
  - `langchain4j-open-ai-spring-boot-starter`
  - `langchain4j-bedrock`
- Logback (via Spring Boot Starter)
## License

This project is licensed under the MIT License. See the LICENSE file for details.