
docs #14

Merged
merged 13 commits into from
Nov 29, 2024
3 changes: 3 additions & 0 deletions docs/package.json
Original file line number Diff line number Diff line change
Expand Up @@ -13,5 +13,8 @@
"rspress": "^1.35.1",
"ts-node": "^10.9.2",
"typescript": "^5.6.3"
},
"dependencies": {
"rspress-plugin-font-open-sans": "^1.0.0"
}
}
27 changes: 25 additions & 2 deletions docs/rspress.config.ts
Original file line number Diff line number Diff line change
@@ -1,9 +1,32 @@
import { defineConfig } from 'rspress/config';
import { pluginFontOpenSans } from 'rspress-plugin-font-open-sans';
import * as path from 'node:path';

export default defineConfig({
root: 'src',
base: '/byorg-ai/',
title: 'byorg-ai',
title: 'byorg.ai',
icon: '/img/favicon.ico',
description: 'TypeScript framework for writing chatbot applications.',
plugins: [],
logo: {
light: '/img/logo_mono_light.svg',
dark: '/img/logo_mono_dark.svg',
},
globalStyles: path.join(__dirname, 'src/styles/index.css'),
themeConfig: {
enableContentAnimation: true,
enableScrollToTop: true,
outlineTitle: 'Contents',
footer: {
message: `Copyright © ${new Date().getFullYear()} Callstack Open Source`,
},
socialLinks: [
{
icon: 'github',
mode: 'link',
content: 'https://github.com/callstack/byorg-ai',
},
],
},
plugins: [pluginFontOpenSans()],
});
5 changes: 0 additions & 5 deletions docs/src/_meta.json
Original file line number Diff line number Diff line change
Expand Up @@ -3,10 +3,5 @@
"text": "Docs",
"link": "/docs/about",
"activeMatch": "^/docs/"
},
{
"text": "API",
"link": "/api/about",
"activeMatch": "^/api/"
}
]
23 changes: 0 additions & 23 deletions docs/src/api/_meta.json

This file was deleted.

3 changes: 0 additions & 3 deletions docs/src/api/about.md

This file was deleted.

7 changes: 0 additions & 7 deletions docs/src/api/core/_meta.json

This file was deleted.

3 changes: 0 additions & 3 deletions docs/src/api/core/index.md

This file was deleted.

7 changes: 0 additions & 7 deletions docs/src/api/slack/_meta.json

This file was deleted.

3 changes: 0 additions & 3 deletions docs/src/api/slack/index.md

This file was deleted.

10 changes: 2 additions & 8 deletions docs/src/docs/_meta.json
Original file line number Diff line number Diff line change
Expand Up @@ -17,14 +17,8 @@
},
{
"type": "dir",
"name": "slack",
"label": "Slack",
"collapsed": true
},
{
"type": "dir",
"name": "discord",
"label": "Discord",
"name": "integrations",
"label": "Integrations",
"collapsed": true
}
]
13 changes: 11 additions & 2 deletions docs/src/docs/about.md
Original file line number Diff line number Diff line change
@@ -1,3 +1,12 @@
# About byorg-ai
# About byorg.ai

This is main section about byorg-ai Framework
## Introduction

byorg.ai is a framework designed for rapid development and deployment of AI assistants within companies and organizations.

## Supported Integrations

- Slack
- Discord

byorg.ai supports a wide range of large language models (LLMs) via the Vercel [AI SDK](https://sdk.vercel.ai/docs/introduction). You can host byorg.ai applications on various cloud platforms or local environments. We provide examples for some popular hosting options.
40 changes: 40 additions & 0 deletions docs/src/docs/core/_meta.json
Original file line number Diff line number Diff line change
Expand Up @@ -3,5 +3,45 @@
"type": "file",
"name": "usage",
"label": "Usage"
},
{
"type": "file",
"name": "chat-model",
"label": "Chat Model"
},
{
"type": "file",
"name": "system-prompt",
"label": "System Prompt"
},
{
"type": "file",
"name": "context",
"label": "Context"
},
{
"type": "file",
"name": "plugins",
"label": "Plugins"
},
{
"type": "file",
"name": "tools",
"label": "Tools"
},
{
"type": "file",
"name": "references",
"label": "References"
},
{
"type": "file",
"name": "performance",
"label": "Performance"
},
{
"type": "file",
"name": "error-handling",
"label": "Error Handling"
}
]
28 changes: 28 additions & 0 deletions docs/src/docs/core/chat-model.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,28 @@
# Chat Model

## Providers and Adapter

You can use any AI provider supported by Vercel’s [AI SDK](https://sdk.vercel.ai/providers/ai-sdk-providers). This includes both LLM-as-a-service providers like OpenAI, Anthropic, and others, as well as locally hosted LLMs. We are also open to extending support to other types of chat models, such as LangChain’s [runnables](https://js.langchain.com/docs/how_to/streaming).

### Providers Examples

```js
import { createOpenAI } from '@ai-sdk/openai';

const openAiProvider = createOpenAI({
  apiKey: 'your-api-key',
  compatibility: 'strict',
});
```

After instantiating the provider client, wrap it with our `VercelChatModelAdapter` class:

```js
import { VercelChatModelAdapter } from '@callstack/byorg-core';

const openAiChatModel = new VercelChatModelAdapter({
  // 'gpt-4o' is an example model id; use any model your provider supports
  languageModel: openAiProvider('gpt-4o'),
});
```

Now that the `chatModel` is ready, let’s discuss the `systemPrompt` function.
74 changes: 74 additions & 0 deletions docs/src/docs/core/context.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,74 @@
# Context

The `context` object holds information about the currently processed message. It allows you to modify the behavior of your assistant at runtime or alter the message processing flow.

`Context` can be modified by [middlewares](./plugins.md) during the message processing flow to implement highly flexible logic or rules (e.g., authentication, RAG, etc.).

### Properties in Context

```js
export type RequestContext = {
  /** All messages from the given conversation */
  messages: Message[];

  /** Convenience reference to the last `messages` item, which is the latest `UserMessage`. */
  lastMessage: UserMessage;

  /** Declarations of tools for the AI assistant */
  tools: ApplicationTool[];

  /** Storage with references to documents mentioned in the conversation */
  references: ReferenceStorage;

  /** IDs of users who are part of the conversation */
  resolvedEntities: EntityInfo;

  /** Function for generating a system prompt */
  systemPrompt: () => Promise<string> | string;

  /**
   * Receives partial response updates when response streaming is enabled.
   * Note: setting this option will switch underlying assistant calls to streaming format.
   */
  onPartialResponse?: (text: string) => void;

  /** Measures and marks for performance tracking */
  performance: PerformanceTimeline;

  /** Container for additional custom properties */
  extras: MessageRequestExtras;
};
```

To add typing for your custom properties to the context, create a file with the type definition and override the typing.

```js
declare module '@callstack/byorg-core' {
  interface MessageRequestExtras {
    // Here you can add your own properties
    example?: string;
    messagesCount?: number;
    isAdmin?: boolean;
  }
}

export {};
```

:::warning
All custom properties must be optional, as the current context creation does not support default values for custom objects.
:::

After setting extras, you can access them from the context object:

```js
export const systemPrompt = (context: RequestContext): Promise<string> | string => {
  if (context.extras.isAdmin) {
    return `You are currently talking to an admin.`;
  }

  return `You are talking to a user with regular permissions.`;
};
```
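Extras are typically populated by a middleware before the system prompt or handlers read them. Below is a minimal, self-contained sketch of that flow; the `auth` plugin, the `ADMIN_IDS` list, and the `senderId` field are hypothetical examples, and the context is mocked so the snippet runs standalone:

```javascript
// Hypothetical middleware that sets `context.extras.isAdmin` for later steps.
const ADMIN_IDS = new Set(['U123']); // example data, not part of byorg

const authPlugin = {
  name: 'auth',
  middleware: async (context, next) => {
    // The field name `senderId` is an assumption for this sketch.
    context.extras.isAdmin = ADMIN_IDS.has(context.lastMessage.senderId);
    // Continue middleware chain
    return next();
  },
};

// Mocked context and `next` to illustrate the flow outside a real app:
const context = { extras: {}, lastMessage: { senderId: 'U123' } };
authPlugin.middleware(context, async () => ({ role: 'assistant', content: 'ok' }));
console.log(context.extras.isAdmin); // true
```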

Next, we’ll explore the concept of `plugins` to understand how to modify the `context`.
23 changes: 23 additions & 0 deletions docs/src/docs/core/error-handling.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,23 @@
# Error handling

The error handler in byorg.ai is responsible for processing error objects and returning messages that are sent back to the user. You can customize the error handling by providing your own error handler function. This allows you to define specific reactions to errors and deliver appropriate feedback to users.

```js
function handleError(error: unknown): SystemResponse {
  // Replace with your preferred logger
  console.error('Unhandled error:', error);

  return {
    role: 'system',
    content: 'There was a problem with the assistant. Please try again later or contact an administrator.',
    error,
  };
}

const app = createApp({
  chatModel,
  systemPrompt,
  errorHandler: handleError,
});
```
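The handler receives whatever your code throws, so it can also branch on known error types to give users more specific feedback. A self-contained sketch (the `RateLimitError` class is a hypothetical example, not part of byorg):

```javascript
// Hypothetical domain-specific error type thrown somewhere in your app.
class RateLimitError extends Error {}

function handleError(error) {
  if (error instanceof RateLimitError) {
    return {
      role: 'system',
      content: 'The assistant is handling too many requests. Please try again in a minute.',
      error,
    };
  }

  // Fallback for unrecognized errors.
  return {
    role: 'system',
    content: 'There was a problem with the assistant. Please try again later.',
    error,
  };
}

console.log(handleError(new RateLimitError('429')).content);
// "The assistant is handling too many requests. Please try again in a minute."
```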

By implementing a custom error handler, you can tailor the user experience by providing meaningful responses to errors encountered within the byorg framework.
61 changes: 61 additions & 0 deletions docs/src/docs/core/performance.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,61 @@
# Performance

To test your application's performance, you can use the performance object available in the context.

```js
const slowPlugin = {
  name: 'slow-plugin',
  middleware: async (context, next): Promise<MessageResponse> => {
    context.performance.markStart("SlowPluginPerformance");
    await slowFunction();
    context.performance.markEnd("SlowPluginPerformance");

    // Continue middleware chain
    return next();
  },
};
```

After collecting your performance data, you can access it through the same performance object. Because performance tracking requires all processing to complete, read the data in an effect rather than a middleware: effects run after the response is finalized.

```js
const analyticsPlugin = {
  name: 'analytics',
  effects: [analyticsEffect],
};

async function analyticsEffect(context: RequestContext, response: MessageResponse): Promise<void> {
  console.log(context.performance.getMeasureTotal("SlowPluginPerformance"));
}
```

## Measures vs Marks


This concept is inspired by the [Web Performance API](https://developer.mozilla.org/en-US/docs/Web/API/Performance). Marks are essentially named sequences that the performance tool uses to measure execution time. For instance, if you have a tool for your AI and want to evaluate its performance, you might find it triggered multiple times by the AI. Therefore, a single mark can be part of multiple measures. A measure is constructed using two marks: `start` and `end`.
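To make these semantics concrete, here is a toy model of a timeline; this is an illustration of the mark/measure behavior described above, not byorg's actual `PerformanceTimeline` implementation:

```javascript
// Toy timeline: each markStart/markEnd pair with the same name yields one measure.
class ToyTimeline {
  constructor() {
    this.starts = new Map();
    this.measures = [];
  }

  markStart(name) {
    this.starts.set(name, Date.now());
  }

  markEnd(name) {
    const duration = Date.now() - this.starts.get(name);
    this.measures.push({ name, duration });
  }

  getMeasures(name) {
    return this.measures.filter((m) => m.name === name);
  }

  getMeasureTotal(name) {
    return this.getMeasures(name).reduce((sum, m) => sum + m.duration, 0);
  }
}

const timeline = new ToyTimeline();

// A tool triggered twice by the AI produces two measures under one name:
timeline.markStart('toolExecution');
timeline.markEnd('toolExecution');
timeline.markStart('toolExecution');
timeline.markEnd('toolExecution');

console.log(timeline.getMeasures('toolExecution').length); // 2
```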

:::info
You can also access all marks and measures using `getMarks` and `getMeasures`.
:::

## Default measures

Byorg automatically gathers performance data. Middleware measures are collected in two separate phases: before handling the response and after it.

```js
export const PerformanceMarks = {
  processMessages: 'processMessages',
  middlewareBeforeHandler: 'middleware:beforeHandler',
  middlewareAfterHandler: 'middleware:afterHandler',
  chatModel: 'chatModel',
  toolExecution: 'toolExecution',
  errorHandler: 'errorHandler',
} as const;
```
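For instance, an analytics effect could format a report from these built-in measure names. In this sketch the `context.performance` object is mocked so the snippet runs standalone; only the `getMeasureTotal` call shown earlier is assumed:

```javascript
// Default measure names, as listed above.
const PerformanceMarks = {
  processMessages: 'processMessages',
  middlewareBeforeHandler: 'middleware:beforeHandler',
  middlewareAfterHandler: 'middleware:afterHandler',
  chatModel: 'chatModel',
  toolExecution: 'toolExecution',
  errorHandler: 'errorHandler',
};

// Builds a one-line-per-measure timing report from the context.
function formatTimings(context) {
  return Object.values(PerformanceMarks)
    .map((mark) => `${mark}: ${context.performance.getMeasureTotal(mark)}ms`)
    .join('\n');
}

// Mocked context for illustration:
const context = {
  performance: { getMeasureTotal: (mark) => (mark === 'chatModel' ? 1200 : 0) },
};
console.log(formatTimings(context)); // includes a "chatModel: 1200ms" line
```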