This is a Next.js project bootstrapped with `create-next-app`.
In this project, I work with the Edge Runtime and retrieval-augmented generation (RAG). The stack includes:
- Next.js
- TypeScript
- TanStack Query
- Clerk Auth
- Drizzle ORM + Neon DB
- Stripe
- PineconeDB
- LangChain
- OpenAI
- Vercel AI SDK
First, run the development server:
```bash
npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev
```
Open http://localhost:3000 with your browser to see the result.
You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.
This project uses `next/font` to automatically optimize and load Geist, a new font family for Vercel.
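For reference, here is a minimal sketch of how Geist can be wired up with `next/font` in the root layout. The import path shown (`next/font/google`) is an assumption based on recent Next.js templates; older setups load Geist from the standalone `geist` package instead.

```ts
// app/layout.tsx — minimal sketch; assumes Geist is available via next/font/google
import { Geist } from "next/font/google";
import type { ReactNode } from "react";

// next/font downloads the font at build time and self-hosts it,
// so the browser makes no extra font request at runtime.
const geist = Geist({ subsets: ["latin"] });

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en" className={geist.className}>
      <body>{children}</body>
    </html>
  );
}
```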
You can directly apply changes to your database using the `drizzle-kit push` command. This is a convenient method for quickly testing new schema designs or modifications in a local development environment, allowing for rapid iterations without the need to manage migration files:

```bash
npx drizzle-kit push
```
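For context, `push` reads the Drizzle schema defined in TypeScript and diffs it against the live database. The table and column names below are a hypothetical sketch, not the project's actual schema:

```ts
// src/db/schema.ts — minimal sketch; table and column names are hypothetical
import { pgTable, serial, text, timestamp } from "drizzle-orm/pg-core";

// `drizzle-kit push` compares this definition with the connected database
// and applies the necessary DDL directly, without generating migration files.
export const chats = pgTable("chats", {
  id: serial("id").primaryKey(),
  pdfName: text("pdf_name").notNull(),
  pdfUrl: text("pdf_url").notNull(),
  createdAt: timestamp("created_at").notNull().defaultNow(),
});
```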
You can browse your database using the `drizzle-kit studio` command. This will open a web-based interface that allows you to interact with your database, view data, and run queries:

```bash
npx drizzle-kit studio
```
- Drizzle ORM: focuses on defining and interacting with the database schema using TypeScript.
- Drizzle Kit: provides tools for managing database schema changes and migrations, and offers additional utilities for database interaction (such as `push` and `studio` above).
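Both tools are driven by a shared config file. The following is a minimal `drizzle.config.ts` sketch for a Neon/Postgres setup; the file paths, env var name, and exact options are assumptions, and drizzle-kit's config format has changed between versions:

```ts
// drizzle.config.ts — minimal sketch; paths and env var name are assumptions
import { defineConfig } from "drizzle-kit";

export default defineConfig({
  schema: "./src/db/schema.ts", // where the Drizzle ORM table definitions live
  out: "./drizzle",             // where generated migrations are written
  dialect: "postgresql",        // Neon speaks the Postgres protocol
  dbCredentials: {
    url: process.env.DATABASE_URL!,
  },
});
```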
Vector embeddings are a way to represent data as points in a multidimensional space, where similar data points cluster together. This compact representation captures the semantic relationships and similarities between data points, making it possible to perform mathematical operations and comparisons on the data.
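To make "comparisons on the data" concrete: similarity between two embeddings is typically measured with cosine similarity. The snippet below is a generic TypeScript sketch, not code from this repo:

```ts
// Cosine similarity between two embedding vectors: ~1 means very similar, ~0 unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```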
In Chatly, we use vector embeddings to enable AI-powered interactions with PDF documents. Retrieval-augmented generation works like this (a code sketch of the full flow follows the list):
- Obtain the PDF: get hold of the PDF the user wants to chat with.
- Split and segment: break the PDF into smaller chunks; LangChain handles this step.
- Vectorize and embed: turn each chunk into an embedding vector.
- Store vectors: upsert the embeddings into PineconeDB.
- Accept a query: the user asks a question about the PDF.
- Embed the query: convert the question into an embedding with the same model.
- Query PineconeDB: look up the vectors most similar to the query embedding.
- Extract metadata: pull the original text stored as metadata on the matching vectors.
- Feed metadata into OpenAI: place that text into an OpenAI prompt as context for the answer.
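The sketch below ties these steps together end to end. It is illustrative rather than the project's actual code: it assumes recent versions of the `openai`, `@pinecone-database/pinecone`, and LangChain packages, and the index name, model names, and metadata shape are invented for the example.

```ts
// rag-sketch.ts — illustrative only; package APIs, names, and models are assumptions
import OpenAI from "openai";
import { Pinecone } from "@pinecone-database/pinecone";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const index = pinecone.index("chatly-pdfs"); // hypothetical index name

async function embed(text: string): Promise<number[]> {
  const res = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: text,
  });
  return res.data[0].embedding;
}

// Ingest: split the PDF text, embed each chunk, and upsert the vectors into Pinecone.
export async function ingestPdf(pdfId: string, pdfText: string) {
  const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200 });
  const chunks = await splitter.splitText(pdfText);

  const vectors = await Promise.all(
    chunks.map(async (chunk, i) => ({
      id: `${pdfId}-${i}`,
      values: await embed(chunk),
      metadata: { text: chunk, pdfId }, // the original text travels as metadata
    }))
  );
  await index.upsert(vectors);
}

// Ask: embed the question, retrieve similar chunks, and feed them to the model as context.
export async function askPdf(question: string) {
  const queryVector = await embed(question);
  const results = await index.query({
    vector: queryVector,
    topK: 5,
    includeMetadata: true,
  });

  const context = results.matches
    .map((m) => (m.metadata as { text: string }).text)
    .join("\n---\n");

  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: `Answer using only this context:\n${context}` },
      { role: "user", content: question },
    ],
  });
  return completion.choices[0].message.content;
}
```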
This process allows the AI to understand and retrieve relevant information from the document, making it possible to chat with the AI about the content of the PDF.
To learn more about Next.js, take a look at the following resources:
- Next.js Documentation - learn about Next.js features and API.
- Learn Next.js - an interactive Next.js tutorial.
You can check out the Next.js GitHub repository - your feedback and contributions are welcome!
The easiest way to deploy your Next.js app is to use the Vercel Platform from the creators of Next.js.
Check out our Next.js deployment documentation for more details.