I'm building a NuxtJS application using Vercel's AI SDK to interact with OpenAI's Assistants API. When the assistant references files in its responses, it generates citations in this format: 【4:2†Source】.
I need to replace these random citation markers (e.g. 4:2†) with sequential footnote references (e.g. [1], [2], etc.) so users can easily map citations to references.
Here's my current setup using the useAssistant hook:
Sample output:

According to the document 【4:2†Source】, the deadline is tomorrow. The budget information in 【7:1†Source】 shows...
Desired output:
According to the document [1], the deadline is tomorrow. The budget information in [2] shows...
References:
[1] file name title
[2] file name title from
I believe I need to:

1. Intercept the assistant's response
2. Use regex to find and replace the citation format
3. Build a references list by retrieving each cited file by ID
4. Update the message content
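Steps 1–3 above can be sketched as a single pass over the message text. This is a minimal sketch: the function name is hypothetical, and the regex assumes markers always look like `【digits:digits†label】`.

```javascript
// Hypothetical helper: rewrite 【4:2†Source】-style markers as sequential
// footnotes, collecting a marker -> footnote-number map for a references list.
function renumberCitations(text) {
  const refs = new Map(); // citation id (e.g. "4:2") -> footnote number
  const transformed = text.replace(/【(\d+:\d+)†[^】]*】/g, (_match, id) => {
    // First occurrence of a marker gets the next sequential number;
    // repeats of the same marker reuse it.
    if (!refs.has(id)) refs.set(id, refs.size + 1);
    return `[${refs.get(id)}]`;
  });
  return { transformed, refs };
}
```

The returned `refs` map can then drive the "References:" block, e.g. by resolving each cited file's name server-side (the OpenAI Node SDK exposes `openai.files.retrieve(fileId)` for that, assuming you have the file IDs from the annotations).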
Where in the Vercel AI SDK's AssistantResponse flow should I implement this transformation? Should I modify the message content when appending new messages or create a computed property for displaying the transformed content?
What I've tried:
I've looked through the AI SDK documentation but haven't found clear guidance on message content transformation. I considered using a computed property but worry about maintaining the reference list state across messages.
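One way to sidestep the cross-message state worry is to scope footnote numbering per message and derive the display content on the fly, rather than mutating stored messages. A sketch under that assumption (the helper is hypothetical; `messages` would be the array returned by `useAssistant`):

```javascript
// Hypothetical per-message transform: footnote numbering restarts for each
// assistant message, so no reference-list state is shared across messages.
function transformMessages(messages) {
  return messages.map((m) => {
    if (m.role !== 'assistant') return m;
    const refs = new Map();
    const content = m.content.replace(/【(\d+:\d+)†[^】]*】/g, (_match, id) => {
      if (!refs.has(id)) refs.set(id, refs.size + 1);
      return `[${refs.get(id)}]`;
    });
    // Return a copy so the original message objects stay untouched
    return { ...m, content };
  });
}

// In a Nuxt component this pairs naturally with a computed property:
// const displayMessages = computed(() => transformMessages(messages.value));
```

Because the transform is pure and derived, re-renders stay consistent even as new messages stream in.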
Any help would be appreciated!
Full code:
```javascript
try {
  // Create and handle the assistant response stream
  const response = AssistantResponse(
    { threadId, messageId: createdMessage.id },
    async ({ forwardStream }) => {
      // Get both the run stream and message stream
      const stream = openai.beta.threads.runs
        .stream(threadId, {
          assistant_id: assistantId,
          // Include file search results in the step details
          include: [
            'step_details.tool_calls[*].file_search.results[*].content'
          ],
          // https://platform.openai.com/docs/api-reference/assistants/modifyAssistant
          tools: [
            {
              type: 'file_search',
              file_search: {
                ranking_options: {
                  // Ranking between 0.0 and 1.0, with 1.0 the highest. A
                  // higher threshold constrains results to more relevant
                  // chunks, at the cost of possibly dropping relevant ones.
                  score_threshold: 0.5,
                  ranker: 'auto'
                }
              }
            }
          ]
        })
        .on('messageDelta', (event) => {
          const content = event.content?.[0]
          if (content?.type === 'text' && content.text?.value != null) {
            console.log('content: ', content)
          }
        })
        .on('messageDone', async (event) => {
          // Fetch and log the run steps to access the file_search results
          try {
            const run = await openai.beta.threads.runs.steps.list(
              threadId,
              event.run_id
            )
            // Find the step with tool_calls; it may be absent if no tool ran
            const toolCallStep = run.data.find(
              (step) => step.type === 'tool_calls'
            )
            console.log(
              'toolCallStep: ',
              toolCallStep?.step_details?.tool_calls?.[0]?.file_search?.results
            )
          } catch (error) {
            console.error('Error fetching run steps:', error)
          }
          if (event.content[0].type === 'text') {
            const { text } = event.content[0]
            if (text.annotations) {
              const citationsText = text.annotations.map((annotation) => {
                return `[${annotation.text}] (${annotation.file_citation.file_id})`
              })
              // Join the citations with blank lines between them; note that
              // Array.prototype.join returns a new string, so its result
              // must be used (it was previously discarded)
              res.write(
                formatAssistantStreamPart(
                  'text',
                  '\n\n### Citations\n' + citationsText.join('\n\n')
                )
              )
            }
            console.log(
              'event content text annotations: ',
              event.content[0].text.annotations
            )
          }
        })
      await forwardStream(stream)
    }
  )
  // Get the stream from the Response object
  const stream = response.body
  // Set appropriate headers for streaming
  res.writeHead(200, {
    'Content-Type': 'text/plain; charset=utf-8',
    'Transfer-Encoding': 'chunked'
  })
  // Pipe the stream to the response
  if (stream) {
    for await (const chunk of stream) {
      res.write(chunk)
    }
  }
  res.end()
} catch (error) {
  console.error('Assistant error:', error)
  res.status(500).send({
    error:
      'An error occurred while processing your request in assistant function.',
    details: error.message
  })
}
```