Commit 62dabc0 (parent: 5eed70b)

Showing 39 changed files with 1,118 additions and 205 deletions.
The updated Rspress site configuration:

```ts
import { defineConfig } from 'rspress/config';
import { pluginFontOpenSans } from 'rspress-plugin-font-open-sans';
import * as path from 'node:path';

export default defineConfig({
  root: 'src',
  base: '/byorg-ai/',
  title: 'byorg.ai',
  icon: '/img/favicon.ico',
  description: 'TypeScript framework for writing chatbot applications.',
  logo: {
    light: '/img/logo_mono_light.svg',
    dark: '/img/logo_mono_dark.svg',
  },
  globalStyles: path.join(__dirname, 'src/styles/index.css'),
  themeConfig: {
    enableContentAnimation: true,
    enableScrollToTop: true,
    outlineTitle: 'Contents',
    footer: {
      message: `Copyright © ${new Date().getFullYear()} Callstack Open Source`,
    },
    socialLinks: [
      {
        icon: 'github',
        mode: 'link',
        content: 'https://github.com/callstack/byorg-ai',
      },
    ],
  },
  plugins: [pluginFontOpenSans()],
});
```
Six files were deleted in this commit.
# About byorg.ai

## Introduction

byorg.ai is a framework designed for rapid development and deployment of AI assistants within companies and organizations.

## Supported Integrations

- Slack
- Discord

byorg.ai supports a wide range of large language models (LLMs) via the Vercel [AI SDK](https://sdk.vercel.ai/docs/introduction). You can host byorg.ai applications on various cloud platforms or local environments. We provide examples for some popular hosting options.
# Chat Model

## Providers and Adapter

You can use any AI provider supported by Vercel’s [AI SDK](https://sdk.vercel.ai/providers/ai-sdk-providers). This includes both LLM-as-a-service providers like OpenAI, Anthropic, and others, as well as locally hosted LLMs. We are also open to extending support to other types of chat models, such as LangChain’s [runnables](https://js.langchain.com/docs/how_to/streaming).

### Providers Examples

```js
import { createOpenAI } from '@ai-sdk/openai';

const openAiProvider = createOpenAI({
  apiKey: 'your-api-key',
  compatibility: 'strict',
});
```

After instantiating the provider client, wrap it with our `VercelChatModelAdapter` class:

```js
import { VercelChatModelAdapter } from '@callstack/byorg-core';

// `openAiModel` is a language model obtained from the provider above,
// e.g. `const openAiModel = openAiProvider('gpt-4o');`.
const openAiChatModel = new VercelChatModelAdapter({
  languageModel: openAiModel,
});
```
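If you are curious what the adapter buys you, the pattern itself is simple: it hides a provider-specific client behind the single interface the rest of the app consumes. The following is a self-contained sketch of the idea only, not byorg's actual `VercelChatModelAdapter`; the `complete` method and output shape are assumptions made for illustration:

```js
// Generic adapter sketch: the app only ever calls `generateResponse`,
// regardless of which provider-specific client is wrapped inside.
class ChatModelAdapter {
  constructor(languageModel) {
    this.languageModel = languageModel;
  }

  async generateResponse(messages) {
    // Delegate to the wrapped model and normalize its output shape.
    const text = await this.languageModel.complete(messages);
    return { role: 'assistant', content: text };
  }
}
```

Swapping providers then means constructing the adapter with a different wrapped model, while the calling code stays unchanged.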
Now that the `chatModel` is ready, let’s discuss the `systemPrompt` function.
# Context

The `context` object holds information about the currently processed message. It allows you to modify the behavior of your assistant at runtime or alter the message processing flow.

`Context` can be modified by [middlewares](./plugins.md) during the message processing flow to implement highly flexible logic or rules (e.g., authentication, RAG, etc.).

### Properties in Context

```js
export type RequestContext = {
  /** All messages from the given conversation */
  messages: Message[];

  /** Convenience reference to the last `messages` item, which is the latest `UserMessage`. */
  lastMessage: UserMessage;

  /** Declarations of tools for the AI assistant */
  tools: ApplicationTool[];

  /** Storage with references to documents mentioned in the conversation */
  references: ReferenceStorage;

  /** IDs of users who are part of the conversation */
  resolvedEntities: EntityInfo;

  /** Function for generating a system prompt */
  systemPrompt: () => Promise<string> | string;

  /**
   * Receives partial response updates when response streaming is used.
   * Note: setting this option will switch underlying assistant calls to the streaming format.
   */
  onPartialResponse?: (text: string) => void;

  /** Measures and marks for performance tracking */
  performance: PerformanceTimeline;

  /** Container for additional custom properties */
  extras: MessageRequestExtras;
};
```

To add typing for your custom properties to the context, create a file with the type definition and override the typing:

```js
declare module '@callstack/byorg-core' {
  interface MessageRequestExtras {
    // Here you can add your own properties
    example?: string;
    messagesCount?: number;
    isAdmin?: boolean;
  }
}

export {};
```

:::warning
All custom properties must be optional, as the current context creation does not support default values for custom objects.
:::

After setting extras, you can access them from the context object:

```js
export const systemPrompt = (context: RequestContext): Promise<string> | string => {
  if (context.extras.isAdmin) {
    return `You are currently talking to an admin.`;
  }

  return `You are talking to a user with regular permissions.`;
};
```
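A flag like `isAdmin` is typically populated by a plugin's middleware before the handler runs. Below is a self-contained sketch of that idea; `ADMIN_IDS` and the `senderId` lookup are hypothetical and should be adapted to however your app identifies users:

```js
// Hypothetical set of privileged user ids (illustration only).
const ADMIN_IDS = new Set(['U123', 'U456']);

const permissionsPlugin = {
  name: 'permissions',
  middleware: async (context, next) => {
    // Enrich the context before the rest of the chain (and systemPrompt) runs.
    const userId = context.lastMessage.senderId;
    context.extras.isAdmin = ADMIN_IDS.has(userId);

    // Continue the middleware chain with the enriched context.
    return next();
  },
};
```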
Next, we’ll explore the concept of `plugins` to understand how to modify the `context`.
# Error handling

The error handler in byorg.ai is responsible for processing error objects and returning messages that are sent back to the user. You can customize the error handling by providing your own error handler function. This allows you to define specific reactions to errors and deliver appropriate feedback to users.

```js
function handleError(error: unknown): SystemResponse {
  logger.error('Unhandled error:', error);

  return {
    role: 'system',
    content: 'There was a problem with Assistant. Please try again later or contact administrator.',
    error,
  };
}

const app = createApp({
  chatModel,
  systemPrompt,
  errorHandler: handleError,
});
```

By implementing a custom error handler, you can tailor the user experience by providing meaningful responses to errors encountered within the byorg framework.
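An error handler can also branch on the error's type to give more specific feedback. The sketch below is self-contained and illustrative; `RateLimitError` is a hypothetical error class, not part of byorg.ai:

```js
// Hypothetical error class used only for this illustration.
class RateLimitError extends Error {}

function handleError(error) {
  if (error instanceof RateLimitError) {
    // Specific, actionable message for a known error category.
    return {
      role: 'system',
      content: 'The assistant is handling too many requests right now. Please try again in a minute.',
      error,
    };
  }

  // Fallback for anything unexpected.
  return {
    role: 'system',
    content: 'There was a problem with Assistant. Please try again later or contact administrator.',
    error,
  };
}
```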
# Performance

To test your application's performance, you can use the performance object available in the context.

```js
const slowPlugin = {
  name: 'slow-plugin',
  middleware: async (context, next): Promise<MessageResponse> => {
    context.performance.markStart('SlowPluginPerformance');
    await slowFunction();
    context.performance.markEnd('SlowPluginPerformance');

    // Continue middleware chain
    return next();
  },
};
```

After collecting your performance data, you can access it through the same performance object. Performance tracking requires all processing to complete, so it uses an effect instead of middleware, as effects run after the response is finalized.

```js
const analyticsPlugin = {
  name: 'analytics',
  effects: [analyticsEffect],
};

async function analyticsEffect(context: RequestContext, response: MessageResponse): Promise<void> {
  console.log(context.performance.getMeasureTotal('SlowPluginPerformance'));
}
```

## Measures vs Marks

This concept is inspired by the [Web Performance API](https://developer.mozilla.org/en-US/docs/Web/API/Performance). Marks are named points that the performance tool uses to measure execution time. For instance, if you have a tool for your AI and want to evaluate its performance, it might be triggered multiple times by the AI; for that reason, a single mark can be part of multiple measures. A measure is constructed from two marks: `start` and `end`.

:::info
You can also access all marks and measures using `getMarks` and `getMeasures`.
:::
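To make the relationship between marks and measures concrete, here is a minimal, self-contained sketch of the idea (not byorg's actual `PerformanceTimeline` implementation): starting and ending a mark with the same name multiple times produces multiple measures, and the total sums all of them.

```js
// Minimal marks/measures timeline. `now` is injectable so the sketch is testable.
function createTimeline(now = () => Date.now()) {
  const starts = new Map(); // name -> pending start timestamps (FIFO)
  const measures = []; // completed { name, duration } entries

  return {
    markStart(name) {
      const pending = starts.get(name) ?? [];
      pending.push(now());
      starts.set(name, pending);
    },
    markEnd(name) {
      const pending = starts.get(name);
      if (!pending || pending.length === 0) {
        throw new Error(`markEnd('${name}') without a matching markStart`);
      }
      // Pair this end with the earliest unmatched start of the same name.
      measures.push({ name, duration: now() - pending.shift() });
    },
    getMeasures() {
      return [...measures];
    },
    getMeasureTotal(name) {
      return measures
        .filter((m) => m.name === name)
        .reduce((sum, m) => sum + m.duration, 0);
    },
  };
}
```

With this model, a tool that runs twice yields two measures under the same name, and the total reflects the combined time across both runs.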
## Default measures

Byorg automatically gathers performance data. Middleware measures are collected in two separate phases: before handling the response and after it.

```js
export const PerformanceMarks = {
  processMessages: 'processMessages',
  middlewareBeforeHandler: 'middleware:beforeHandler',
  middlewareAfterHandler: 'middleware:afterHandler',
  chatModel: 'chatModel',
  toolExecution: 'toolExecution',
  errorHandler: 'errorHandler',
} as const;
```