15 changes: 15 additions & 0 deletions .coveragerc
@@ -0,0 +1,15 @@
[run]
source = src/llama_prompt_ops
omit =
    */tests/*
    */site-packages/*
    setup.py

[report]
exclude_lines =
    pragma: no cover
    def __repr__
    raise NotImplementedError
    if __name__ == .__main__.:
    pass
    raise ImportError
189 changes: 189 additions & 0 deletions frontend/README.md
@@ -0,0 +1,189 @@
# Llama Prompt Ops - Frontend

A modern React frontend interface for [llama-prompt-ops](https://github.com/meta-llama/llama-prompt-ops), providing an intuitive web interface for prompt optimization workflows.

## Features

- **Prompt Enhancement**: Optimize prompts for better performance with Llama models
- **Prompt Migration**: Migrate prompts between different model architectures
- **Real-time Optimization**: Monitor optimization progress with live updates
- **Dataset Management**: Upload and manage datasets for optimization
- **Configuration Management**: Flexible configuration for different optimization strategies
- **Clean UI**: Modern, accessible interface with Meta's design language

## Technology Stack

- **Frontend**: React 18 + TypeScript
- **UI Components**: Radix UI + shadcn/ui
- **Styling**: Tailwind CSS with Meta/Facebook design system
- **Build Tool**: Vite
- **Backend**: FastAPI with llama-prompt-ops integration

## Quick Start

### Prerequisites

- **Node.js 18+** and npm
- **Python 3.8+** (for backend)
- **OpenRouter API Key** (get one at [OpenRouter](https://openrouter.ai/))

### Installation

1. **Clone the repository**
```bash
git clone https://github.com/meta-llama/llama-prompt-ops.git
cd llama-prompt-ops/frontend
```

2. **Install frontend dependencies**
```bash
npm install
```

3. **Set up backend environment**
```bash
cd backend
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -r requirements.txt
```

4. **Configure environment variables**

Create a `.env` file in the `frontend/backend` directory:
```bash
# In frontend/backend/.env
OPENROUTER_API_KEY=your_openrouter_api_key_here
OPENAI_API_KEY=your_openai_api_key_here # Optional: for fallback enhance feature
```
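The backend reads these values at startup. A minimal stdlib-only sketch of how a `.env` file like the one above can be loaded (an assumption for illustration — the actual `main.py` may use a library such as python-dotenv instead):

```python
import os

def load_env_file(path: str = ".env") -> dict:
    """Parse simple KEY=value lines from a .env file (stdlib-only sketch).

    Blank lines and lines starting with '#' are skipped; inline comments
    after a value are stripped. The real backend may rely on python-dotenv.
    """
    env = {}
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                env[key.strip()] = value.split("#", 1)[0].strip()
    except FileNotFoundError:
        pass  # fall back to whatever is already in the process environment
    return env

# Seed os.environ without overwriting keys already exported in the shell
for key, value in load_env_file().items():
    os.environ.setdefault(key, value)
```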

### Running the Application

#### Option 1: Use the Development Script (Recommended)
```bash
# From the frontend directory
chmod +x start-dev.sh
./start-dev.sh
```

#### Option 2: Manual Start
```bash
# Terminal 1: Start backend
cd backend
source venv/bin/activate
python -m uvicorn main:app --reload --port 8000

# Terminal 2: Start frontend
cd ..
npm run dev
```

The application will be available at:
- **Frontend**: http://localhost:8080
- **Backend API**: http://localhost:8000

### First Run

1. **Upload a dataset**: Click "Manage Dataset" and upload a JSON file with your training data
2. **Configure optimization**: Select your preferred model, metrics, and optimization strategy
3. **Enter your prompt**: Paste your existing prompt in the text area
4. **Click "Optimize"**: Watch the real-time progress and get your optimized prompt!

## Dataset Format

Upload JSON files in this format:
```json
[
  {
    "question": "Your input query here",
    "answer": "Expected response here"
  },
  {
    "question": "Another input query",
    "answer": "Another expected response"
  }
]
```
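Before uploading, it can help to sanity-check a dataset file locally. A small validation sketch based on the format above (the backend's own validation may be stricter):

```python
import json

REQUIRED_KEYS = {"question", "answer"}

def validate_dataset(text: str) -> list:
    """Check that `text` is a JSON array of {"question", "answer"} objects.

    Returns the parsed list on success; raises ValueError with a
    descriptive message otherwise.
    """
    data = json.loads(text)
    if not isinstance(data, list) or not data:
        raise ValueError("dataset must be a non-empty JSON array")
    for i, item in enumerate(data):
        if not isinstance(item, dict):
            raise ValueError(f"entry {i} is not an object")
        missing = REQUIRED_KEYS - item.keys()
        if missing:
            raise ValueError(f"entry {i} is missing keys: {sorted(missing)}")
    return data

# Example: validate_dataset(open("my_dataset.json").read())
```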

## Troubleshooting

### Common Issues

**Backend won't start:**
- Ensure you've activated the virtual environment
- Check that all requirements are installed: `pip install -r requirements.txt`
- Verify your API keys are set in the `.env` file

**Frontend can't connect to backend:**
- Make sure the backend is running on port 8000
- Check browser console for CORS errors
- Verify the backend URL in the frontend code

**Optimization fails:**
- Check that you've uploaded a valid dataset
- Verify your OpenRouter API key is correct
- Ensure your dataset has the expected format

**Port already in use:**
- Kill existing processes: `pkill -f "uvicorn|vite"` (pkill takes an extended regex, so use a bare `|` for alternation)
- Or use different ports in the configuration

## Development

### Frontend Development

```bash
# Start with hot reload
npm run dev

# Build for production
npm run build

# Preview production build
npm run preview

# Lint code
npm run lint
```

### Backend Development

```bash
# Start with auto-reload
uvicorn main:app --reload --port 8000

# Run with debug logging
uvicorn main:app --reload --port 8000 --log-level debug
```

## Project Structure

```
frontend/
├── backend/                  # FastAPI backend
│   ├── main.py               # API server
│   ├── requirements.txt      # Python dependencies
│   └── uploaded_datasets/    # Dataset storage
├── src/
│   ├── components/           # React components
│   │   ├── ui/               # Reusable UI components
│   │   ├── ConfigurationPanel.tsx
│   │   ├── PromptInput.tsx
│   │   └── ...
│   ├── context/              # React context
│   ├── hooks/                # Custom hooks
│   └── pages/                # Page components
├── package.json
└── start-dev.sh              # Development startup script
```

## Contributing

1. Follow the existing code style and patterns
2. Add tests for new features
3. Update documentation for any changes
4. Ensure the application builds and runs successfully

## License

This project is licensed under the same terms as llama-prompt-ops.
79 changes: 79 additions & 0 deletions frontend/backend/README.md
@@ -0,0 +1,79 @@
# Llama Prompt Ops Frontend Backend

This is a FastAPI backend for the llama-prompt-ops frontend interface. It provides API endpoints for optimizing prompts using OpenAI's GPT models and the llama-prompt-ops library.

## Setup

1. Create a virtual environment:
```bash
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
```

2. Install dependencies:
```bash
pip install -r requirements.txt
```

3. Set up your environment variables:
- Copy `.env.example` to `.env` (if available)
- Add your OpenAI API key and OpenRouter API key to the `.env` file

## Running the Server

Start the FastAPI server with:
```bash
uvicorn main:app --reload --port 8000
```

The API will be available at http://localhost:8000

## API Endpoints

### POST /api/enhance-prompt

Enhances a prompt using OpenAI's GPT model.

**Request Body:**
```json
{
  "prompt": "Your prompt text here"
}
```

**Response:**
```json
{
  "optimizedPrompt": "Enhanced prompt text"
}
```

### POST /api/migrate-prompt

Optimizes a prompt using the llama-prompt-ops library.

**Request Body:**
```json
{
  "prompt": "Your prompt text here",
  "config": {
    "taskModel": "Llama 3.3 70B",
    "proposerModel": "Llama 3.1 8B",
    "optimizer": "MiPro",
    "dataset": "Q&A",
    "metrics": "Exact Match",
    "useLlamaTips": true
  }
}
```

**Response:**
```json
{
  "optimizedPrompt": "Optimized prompt text"
}
```
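A minimal Python client sketch for this endpoint, using only the stdlib. The field names mirror the request and response bodies documented above; the host and port assume the local dev setup:

```python
import json
import urllib.request

def build_migrate_request(prompt: str, config: dict) -> bytes:
    """Serialize the request body documented above."""
    return json.dumps({"prompt": prompt, "config": config}).encode("utf-8")

def migrate_prompt(prompt: str, config: dict,
                   base_url: str = "http://localhost:8000") -> str:
    """POST to /api/migrate-prompt and return the optimized prompt."""
    req = urllib.request.Request(
        f"{base_url}/api/migrate-prompt",
        data=build_migrate_request(prompt, config),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["optimizedPrompt"]

# The config values below are the ones shown in the request example above
example_config = {
    "taskModel": "Llama 3.3 70B",
    "proposerModel": "Llama 3.1 8B",
    "optimizer": "MiPro",
    "dataset": "Q&A",
    "metrics": "Exact Match",
    "useLlamaTips": True,
}
# With the server running: migrate_prompt("Your prompt text here", example_config)
```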

## Integration with llama-prompt-ops

This backend serves as a development interface for the llama-prompt-ops library, providing web API access to prompt optimization features. When this frontend is eventually integrated into the main llama-prompt-ops repository, this backend functionality will be incorporated into the library's core API structure.