
Spring Boot CLI Chatbot with LangChain4J

A simple, configuration-driven command-line chatbot using Spring Boot and LangChain4J. This MVP demonstrates building a flexible AI application where LLM providers and parameters can be easily swapped via external configuration.

Features

  • Interactive CLI: Simple terminal-based chat interface.
  • Configuration-Driven: Switch LLM providers (Ollama, OpenAI, AWS Bedrock) and models via application.properties.
  • Flexible Parameters: Adjust model parameters (temperature, max tokens, etc.) via configuration.
  • Provider Support: Includes LangChain4j integrations for Ollama, OpenAI (incl. Azure), and AWS Bedrock.
  • Conversation Memory: Maintains conversation history using a configurable ChatMemory.
  • Spring Boot Foundation: Dependency injection and externalized configuration out of the box.
  • Docker-Ready: Optimized multi-stage Docker build supporting external configuration.

Prerequisites

  • Docker installed on your system.
  • Java 21 JDK and Maven 3.8+ (for building locally).
  • Provider Access (depending on configuration):
    • Ollama: Running Ollama instance with desired models (e.g., ollama pull gemma3:1b).
    • OpenAI: OpenAI API key (or Azure OpenAI endpoint/key).
    • AWS Bedrock: Configured AWS credentials and Bedrock model access.

Project Structure

spring-cli-chatbot/
├── src/
│   └── main/
│       ├── java/
│       │   └── org/pennmedicine/predictivehealthcare/chatbot/cli/
│       │       ├── ChatBotApplication.java     # Spring Boot Main Class & CLI Runner
│       │       ├── config/
│       │       │   └── ChatConfiguration.java  # Defines ChatMemory Bean
│       │       └── service/
│       │           └── ChatService.java        # Core chat logic
│       └── resources/
│           └── application.properties         # Default/base configuration
├── Dockerfile                                 # Builds the application container
├── pom.xml                                    # Maven dependencies and build config
└── README.md                                  # This file

(Other files like my-config.properties and ollama_example.md are for local configuration/documentation).

Configuration (application.properties)

Configuration is managed via standard Spring Boot properties files (either src/main/resources/application.properties or an external file mounted in Docker).

Key Configuration Steps:

  1. Exclude Unused Providers: Required if pom.xml includes multiple provider dependencies. Use spring.autoconfigure.exclude to prevent startup errors from providers missing required configuration (like API keys).

    # Example: Exclude OpenAI and AWS when using Ollama
    spring.autoconfigure.exclude=\
      dev.langchain4j.openai.spring.AutoConfig,\
      dev.langchain4j.aws.spring.AutoConfig

    (To enable a provider later, remove its class from this list in your active properties file).

  2. Select Active Provider: Set the primary provider to use.

    # Options: ollama, open-ai, bedrock
    langchain4j.chat-model.provider=ollama

  3. Configure Active Provider: Set the specific parameters for the selected provider.

    • Ollama Example:
      langchain4j.ollama.chat-model.base-url=${OLLAMA_BASE_URL:http://localhost:11434}
      langchain4j.ollama.chat-model.model-name=gemma3:1b
      langchain4j.ollama.chat-model.temperature=0.7
      langchain4j.ollama.chat-model.num-predict=2000
      langchain4j.ollama.chat-model.timeout=90s
    • OpenAI Example:
      langchain4j.open-ai.chat-model.api-key=${OPENAI_API_KEY:}
      langchain4j.open-ai.chat-model.model-name=gpt-4o-mini
      langchain4j.open-ai.chat-model.max-tokens=2000
      langchain4j.open-ai.chat-model.timeout=90s
    • AWS Bedrock Example:
      langchain4j.aws.region=us-east-1
      langchain4j.bedrock.chat-model.model=anthropic.claude-3-sonnet-20240229-v1:0
      langchain4j.bedrock.chat-model.max-tokens=2000
      langchain4j.bedrock.chat-model.timeout=90s

  4. Configure Memory & Common Settings:

    langchain4j.chat-memory.type=message_window
    langchain4j.chat-memory.max-messages=20
    chatbot.system-prompt=You are a helpful AI assistant configured via Spring Boot.
    logging.level.org.pennmedicine.predictivehealthcare.chatbot.cli=INFO
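
The chat-memory properties above are consumed by the ChatMemory bean defined in ChatConfiguration.java. As a rough illustration only (the exact wiring in this repo may differ), such a bean could be defined with LangChain4j's MessageWindowChatMemory:

```java
import dev.langchain4j.memory.ChatMemory;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ChatConfiguration {

    // Sliding-window memory sized from langchain4j.chat-memory.max-messages
    // (defaulting to 20 if the property is absent).
    @Bean
    public ChatMemory chatMemory(
            @Value("${langchain4j.chat-memory.max-messages:20}") int maxMessages) {
        return MessageWindowChatMemory.withMaxMessages(maxMessages);
    }
}
```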

Note on External Config: When running with Docker, create a local properties file (e.g., my-config.properties) with the desired provider selection, exclusions, and parameters, then mount it (see below).
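
Putting the pieces together, a complete my-config.properties for the Ollama-in-Docker case might look like the following (illustrative values assembled from the examples above; host.docker.internal applies to Docker Desktop on Mac/Windows, and the exclusion class names should match the dependencies actually present in your pom.xml):

```properties
# my-config.properties — example for Ollama reached from inside a container
spring.autoconfigure.exclude=\
  dev.langchain4j.openai.spring.AutoConfig,\
  dev.langchain4j.aws.spring.AutoConfig

langchain4j.chat-model.provider=ollama
langchain4j.ollama.chat-model.base-url=http://host.docker.internal:11434
langchain4j.ollama.chat-model.model-name=gemma3:1b
langchain4j.ollama.chat-model.temperature=0.7
langchain4j.ollama.chat-model.num-predict=2000
langchain4j.ollama.chat-model.timeout=90s

langchain4j.chat-memory.type=message_window
langchain4j.chat-memory.max-messages=20
chatbot.system-prompt=You are a helpful AI assistant configured via Spring Boot.
```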

Building and Running

Building from Source (Local)

  1. Configure src/main/resources/application.properties (incl. exclusions).
  2. Run: ./mvnw spring-boot:run (or mvn spring-boot:run).

Building the Docker Image

docker build -t spring-cli-chatbot .

Running the Docker Container (Recommended)

  1. Prepare Local Config: Create/edit a local properties file (e.g., my-config.properties) with your desired provider selection, parameters, and spring.autoconfigure.exclude settings.

  2. Run: Mount the config file and provide necessary environment variables (like API keys).

    • Ollama Example (Local, Mac/Win): (See ollama_example.md for a sample my-config.properties using host.docker.internal)

      docker run -it --rm \
        -v "$(pwd)/my-config.properties:/app/config/application.properties" \
        spring-cli-chatbot
    • OpenAI Example:

      docker run -it --rm \
        -v "$(pwd)/my-config.properties:/app/config/application.properties" \
        -e OPENAI_API_KEY="your_actual_openai_api_key" \
        spring-cli-chatbot
    • AWS Bedrock Example:

      docker run -it --rm \
        -v "$(pwd)/my-config.properties:/app/config/application.properties" \
        -e AWS_ACCESS_KEY_ID="YOUR_AWS_KEY" \
        -e AWS_SECRET_ACCESS_KEY="YOUR_AWS_SECRET" \
        -e AWS_REGION="us-east-1" \
        spring-cli-chatbot

Usage

After application startup:

  1. See the Chatbot Ready! message.
  2. Type your message at the You: prompt and press Enter.
  3. View the AI's response.
  4. Type exit, quit, or bye to end.
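
The read-eval loop described above can be sketched with plain JDK classes (hypothetical names; the real logic lives in ChatBotApplication and delegates to ChatService):

```java
import java.util.Locale;
import java.util.Scanner;

/** Illustrative sketch of the CLI loop; not the repo's actual implementation. */
public class ChatLoopSketch {

    /** exit, quit, and bye (any case, surrounding whitespace ignored) end the session. */
    static boolean isExitCommand(String input) {
        String s = input.trim().toLowerCase(Locale.ROOT);
        return s.equals("exit") || s.equals("quit") || s.equals("bye");
    }

    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        System.out.println("Chatbot Ready!");
        while (true) {
            System.out.print("You: ");
            if (!scanner.hasNextLine()) break;       // end of input stream
            String input = scanner.nextLine();
            if (isExitCommand(input)) break;
            // In the real application this line would call ChatService instead:
            System.out.println("AI: (response would come from ChatService)");
        }
        System.out.println("Goodbye!");
    }
}
```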

Building Executable JAR (Alternative)

  1. Run ./mvnw clean package (or mvn clean package).
  2. This creates target/spring-cli-chatbot.jar.
  3. Run: java -jar target/spring-cli-chatbot.jar. To use an external properties file, add --spring.config.location=file:/path/to/your/config.properties.

Ensure the used properties file includes the necessary spring.autoconfigure.exclude settings.

Future Considerations

  • Web Interface: Expose ChatService via Spring Web REST controllers.
  • Advanced LangChain4j: RAG, function calling/tools, etc.
  • Multiple Named Models: Use Spring profiles or @Qualifier.
  • Production: Health checks, structured logging.

Dependencies

  • Spring Boot Starter
  • LangChain4j Core (langchain4j)
  • LangChain4j Integrations:
    • langchain4j-ollama-spring-boot-starter
    • langchain4j-open-ai-spring-boot-starter
    • langchain4j-bedrock
  • Logback (via Spring Boot Starter)

License

This project is licensed under the MIT License. See the LICENSE file for details.
