
Commit 2247066 (parent 4625339), committed Sep 5, 2023

update readme + add frontend & backend folders

File tree

149 files changed: +18,082 −2 lines


.devcontainer/devcontainer.json (+10)

```json
{
  "image": "mcr.microsoft.com/devcontainers/universal:2",
  "features": {
    "ghcr.io/devcontainers-contrib/features/pipx-package:1": {},
    "ghcr.io/devcontainers-contrib/features/poetry:2": {},
    "ghcr.io/warrenbuckley/codespace-features/sqlite:1": {},
    "ghcr.io/devcontainers/features/docker-in-docker:2": {},
    "ghcr.io/devcontainers/features/aws-cli:1": {}
  }
}
```

README.md (+48 −2)

@@ -1,2 +1,48 @@

Removed:

-# sec-insights
-A real world full-stack application using LlamaIndex

Added:

# SEC Insights

SEC Insights uses the Retrieval-Augmented Generation (RAG) capabilities of LlamaIndex to answer questions about SEC 10-K & 10-Q documents.

You can start using the application now at [secinsights.ai](https://www.secinsights.ai/)
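At its core, a RAG application retrieves the document chunks most relevant to a question and passes them to an LLM as context. As a minimal, self-contained sketch of that retrieval step (toy bag-of-words "embeddings" in place of the learned vectors a real system like this one would use; all names here are illustrative, not from the repo):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use learned vectors."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], k: int = 1) -> list[str]:
    """Rank document chunks by similarity to the question (the 'R' in RAG)."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "Revenue for fiscal 2022 increased 9% year over year.",
    "The board of directors held four meetings during the year.",
]
top = retrieve("How much did revenue increase?", chunks)
# The top-ranked chunk would then be sent to the LLM as context for the answer.
```

In the real application this role is filled by LlamaIndex with PGVector as the vector store; the sketch only illustrates the ranking idea.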
## Why did we make this?

As RAG applications increasingly move from prototype to production, we thought our developer community would find it valuable to have a complete example of a working real-world RAG application. SEC Insights works as well locally as it does in the cloud, and it comes with many product features that are immediately applicable to most RAG applications.

## Product Features

- Chat-based document Q&A against a pool of documents
- Citation of the source data that the LLM response was based on
- PDF viewer with highlighting of citations
- Token-level streaming of LLM responses via [Server-Sent Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events)
- Streaming of reasoning steps (sub-questions) within chat
## Dev Features

- Infrastructure-as-code for deploying directly to [Vercel](https://vercel.com/) & [Render](https://render.com/)
- Robust local environment setup making use of [LocalStack](https://localstack.cloud/) & [Docker](https://www.docker.com/) Compose
- Monitoring & profiling provided by [Sentry](https://sentry.io/welcome/)
- Load testing provided by [Loader.io](https://loader.io/)
- A variety of Python scripts for REPL-based interaction & data management
## Tech Stack

- Frontend
  - React/Next.js
  - Tailwind CSS
- Backend
  - FastAPI
  - Docker
  - SQLAlchemy
  - OpenAI
  - PGVector
  - LlamaIndex 🦙
- Infrastructure
  - Render.com
    - Backend hosting
    - Postgres 15
  - Vercel
    - Frontend hosting
  - AWS
    - CloudFront
    - S3
## Usage

See the `README.md` files in the `frontend/` & `backend/` folders for each app's setup instructions.

## 💡 Contributing

We remain very open to contributions! We look forward to seeing the ideas and improvements the LlamaIndex community can provide.
