This repository has been archived by the owner on Aug 14, 2021. It is now read-only.

advanced-usage.md

Advanced Usage



Main Components

The main components of runtime-client-js you should understand are the RuntimeClientFactory and the RuntimeClient. We briefly introduce each component here and provide more detail in the following sections.

RuntimeClientFactory

The RuntimeClientFactory is a factory class used to create RuntimeClient instances, each sharing the configuration passed into the factory itself.

For example, the RuntimeClientFactory accepts a versionID identifying the Voiceflow app we want to start a conversation with. Say the versionID has the value fishandchips. Any RuntimeClient we construct with this particular factory will then contact the same Voiceflow app with the versionID of fishandchips.

RuntimeClient

The RuntimeClient is an object that represents one instance of a Voiceflow app. This is the main interface you use to interact with the app, advance the conversation session, and get a response. You never construct a RuntimeClient directly; use a RuntimeClientFactory instead.

Statefulness of RuntimeClient

A RuntimeClient instance is a stateful object that represents some Voiceflow (VF) application. It has interaction methods such as .sendText() which produce side-effects that modify the RuntimeClient's internal state, which represents the state of the current conversation session (which we will define shortly).

Conversation Session

We frequently refer to a conversation session in the documentation. A conversation session is an ongoing execution of the Voiceflow app.

The RuntimeClient is said to store the current state of the conversation session. The most recent Context object returned by an interaction method, such as .start() or .sendText(), also contains the state of the current conversation session.

Typically, a conversation session begins when you call .start() and it is said to have terminated when some context returned by a subsequent interaction method returns true for .isEnding(). For example:

const context1 = await app.start(); // start a new conversation session
console.log(context1.isEnding()); // prints 'false' so conversation session hasn't ended

const context2 = await app.sendText(userInput); // advance the conversation
console.log(context2.isEnding()); // prints "false" so conversation session hasn't ended

const context3 = await app.sendText(userInput); // advance the conversation
console.log(context3.isEnding()); // prints "true" so conversation session has ended!

Alternatively, the current conversation session can end if we call .start() to start a new session from the beginning.

Interaction Methods

An interaction method is any method of RuntimeClient which sends a request to our runtime servers. Interaction methods transition the conversation session and produce side-effects on RuntimeClient.

An interaction method sends the current internal state of RuntimeClient to our runtime servers. The servers compute the next state of the Voiceflow application and send it back to the RuntimeClient; when the response arrives, the RuntimeClient updates its internal state to the new application state.

This process of sending a request to the runtime servers, computing the next state, and storing it in RuntimeClient's internal storage is referred to as starting/advancing the conversation (session), depending on what side-effect is produced.
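This request/response cycle can be sketched with a small mock (purely illustrative: `fakeRuntimeServer` and `MockRuntimeClient` below stand in for our runtime API and the real client, and are not part of the SDK):

```javascript
// Simplified mock of the interaction-method cycle (not the real client internals).
// `fakeRuntimeServer` stands in for Voiceflow's runtime API.
const fakeRuntimeServer = (state, userInput) => ({
  state: { turn: (state.turn || 0) + 1, lastInput: userInput },
  trace: [{ type: 'speak', payload: { message: `You said: ${userInput}` } }],
});

class MockRuntimeClient {
  constructor() {
    this.state = { turn: 0 };
  }

  sendText(userInput) {
    // 1. send the current internal state (plus the user input) to the server
    const response = fakeRuntimeServer(this.state, userInput);
    // 2. store the next application state computed by the server
    this.state = response.state;
    // 3. hand the traces back to the caller as part of the "context"
    return { getTrace: () => response.trace };
  }
}
```

Each call replaces the client's internal state, which is exactly why interaction methods are described as producing side-effects on the RuntimeClient.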

The list of interaction methods is as follows:

.start()

  • DESC: Starts the conversation session and runs the application until it requests user input, at which point, the method returns the current context. If this is called while a conversation session is ongoing, then it terminates the current session and starts a new conversation session from the beginning.
  • ARG:
    • None
  • RETURNS:
    • Context - A context representing the current application state
  • ASSUMPTIONS
    • This is callable at any time.
const context = await runtimeClient.start()

.sendText(userInput)

  • DESC: Advances the current conversation session based on the user's input and then runs the application until it requests user input, at which point, the method returns the current context.
  • ARG:
    • userInput - string - The user's response.
  • RETURNS:
    • Context - A context representing the current application state.
  • ASSUMPTIONS
    • Callable only if RuntimeClient has an ongoing conversation session. That is, runtimeClient.getContext().isEnding() is false. If there is no ongoing conversation session, then this call throws an exception.
const context = await runtimeClient.sendText("I would like a large cheeseburger with sprite");

.sendIntent(intentName, entities)

  • DESC: Advances the conversation session based on an intent being invoked; make sure that the intentName exists in the interaction model of your Voiceflow project. This bypasses NLP/NLU resolution and is useful for explicitly triggering certain conversation paths. The method returns the current context.
  • ARG:
    • intentName - string - The name of the intent that was matched
    • entities - Entity[] - An Entity has the following properties
      • name - The name of the slot associated with the intent.
      • value - The value which was assigned to the slot.
    • query - string - The user input that triggered the intent
  • RETURNS:
    • Context - A context representing the current application state.
  • ASSUMPTIONS
    • Callable only if RuntimeClient has an ongoing conversation session. That is, runtimeClient.getContext().isEnding() is false. If there is no ongoing conversation session, then this call throws an exception.
const context = await client.sendIntent('order_pizza', [{ name: 'size', value: 'small' }], 'I want a small pizza');

Events

The RuntimeClient has an event system that notifies the developer of any changes in the RuntimeClient's data.

Event Types

Trace Events occur when the RuntimeClient receives a response from our Runtime servers. For each trace that the RuntimeClient receives, a corresponding event for that trace is triggered.

The full list of events is listed below.

  • TraceType.X - During an interaction method call, when a specific trace of type X is being processed, there is a corresponding event that is fired, e.g., if SpeakTrace is received then the TraceType.SPEAK event is triggered.
  • TraceEvent.GENERAL - Triggered when any trace is being processed.
  • TraceEvent.BEFORE_PROCESSING - Triggered before any TraceType.X event is fired.
  • TraceEvent.AFTER_PROCESSING - Triggered after all TraceType.X events are fired and handled.

Moreover, Trace Events are guaranteed to occur in the order of the trace response. For example, if the RuntimeClient received a list containing BlockTrace, SpeakTrace, DebugTrace, SpeakTrace in that order, then the following events will occur in this exact order:

  • TraceEvent.BEFORE_PROCESSING
  • TraceType.BLOCK
  • TraceEvent.GENERAL
  • TraceType.SPEAK - Corresponds with the first SpeakTrace in the list
  • TraceEvent.GENERAL
  • TraceType.DEBUG
  • TraceEvent.GENERAL
  • TraceType.SPEAK - Corresponds with the second SpeakTrace in the list
  • TraceEvent.GENERAL
  • TraceEvent.AFTER_PROCESSING

Since Trace Events occur in the order of the trace response, their handlers also execute in that order.
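This dispatch order can be illustrated with a plain event-emitter sketch (a simplified stand-in, not the library's internals; the string event names here are illustrative placeholders for the TraceType and TraceEvent values):

```javascript
// Simplified sketch of trace-event dispatch order (not the library's internals).
const handlers = {};
const on = (event, handler) => {
  (handlers[event] = handlers[event] || []).push(handler);
};
const emit = (event, ...args) => {
  (handlers[event] || []).forEach((h) => h(...args));
};

const fired = [];
on('BEFORE_PROCESSING', () => fired.push('BEFORE_PROCESSING'));
on('speak', () => fired.push('TraceType.SPEAK'));
on('GENERAL', () => fired.push('GENERAL'));
on('AFTER_PROCESSING', () => fired.push('AFTER_PROCESSING'));

// Dispatch a response of [SpeakTrace, DebugTrace] the way the client would:
const traces = [{ type: 'speak' }, { type: 'debug' }];
emit('BEFORE_PROCESSING');
for (const trace of traces) {
  emit(trace.type, trace); // the TraceType.X event for this trace...
  emit('GENERAL', trace);  // ...followed by TraceEvent.GENERAL
}
emit('AFTER_PROCESSING');
```

After this runs, `fired` contains BEFORE_PROCESSING first, then one TraceType event per trace (here only the SpeakTrace has a specific handler) interleaved with GENERAL, and AFTER_PROCESSING last.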

Event Handlers

.on(event, handler)

  • DESC: Registers the handler to fire whenever the specified event occurs
  • ARG:
    • event - TraceType | TraceEvent - The name of the event to listen for
    • handler - Function - The handler for the event. The specific function signature depends on event
      • (trace: T, context: Context) => void - Handles TraceType.X, TraceEvent.GENERAL
        • trace - T - The trace object that triggered the current event. The type T varies depending on the event that was triggered. If the event is a specific trace type, like TraceType.SPEAK, then T is type of that trace, SpeakTrace. If the event is TraceEvent.GENERAL, then T is type GeneralTrace
        • context - A Context representing the current application state
      • (context: Context) => void - Handles TraceEvent.BEFORE_PROCESSING, TraceEvent.AFTER_PROCESSING
        • context - A Context representing the current application state
  • RETURNS:
    • None
rclient.on(TraceType.SPEAK, (trace, context) => {    // register a handler for SpeakTraces only
  console.log(trace.payload.message);                // handlers fire in the order traces arrive
});
rclient.on(TraceEvent.GENERAL, (trace, context) => { // register a handler for any GeneralTrace
  console.log(trace);
});
await rclient.start();                               // triggers the handlers as traces are received

.onSpeak(handler)

  • DESC: Registers the handler to fire whenever a TraceType.SPEAK event occurs. Similar convenience methods exist for the other TraceTypes.
  • ARG:
    • handler - (trace: T, context: Context) => void - The handler for the event.
      • trace - T - The SpeakTrace that triggered the event
      • context - A Context representing the current application state
  • RETURNS
    • None
rclient.onSpeak((trace, context) => {  // register a handler for SpeakTraces only
  console.log(trace.payload.message);
});

.off(event, handler)

  • DESC: Removes the handler from the list of event listeners.
  • ARG:
    • event - TraceType | TraceEvent - The name of the event, whose listener we must remove.
    • handler - Function - The handler of the event to remove.
  • RETURNS
    • None
const dummy = (trace) => {
  console.log(trace.payload.message)
};
rclient.on(TraceType.SPEAK, dummy);
rclient.off(TraceType.SPEAK, dummy);

Asynchronous Event Handlers

Note that event handlers can be asynchronous. Since traces are processed sequentially, you can create a delay between the handling of each trace by awaiting a promise with a timeout. This is helpful for implementing UI logic that staggers the rendering of text responses.

rclient.on(TraceType.SPEAK, async (trace, context) => {
  // Unpack the data from the `.payload`
  const { payload: { message, src } } = trace;

  // Add the response text to the store, so it triggers a UI update.
  myStore.traces.push(message);

  // Construct an HTMLAudioElement to speak out the response text.
  const audio = new Audio(src);
  await new Promise((res) => audio.addEventListener('loadedmetadata', res));

  // Play the audio and wait until it finishes before handling the next SpeakTrace.
  audio.play();
  await new Promise((res) => setTimeout(res, audio.duration * 1000));
});
await rclient.start();

Context

For even more detail and control, interaction methods all return a Context object. The Context is a snapshot of the Voiceflow application's state and includes data such as variable values.

const context1 = await chatbot.start();
const context2 = await chatbot.sendText(userInput);

As described in "Statefulness of RuntimeClient", interaction methods replace RuntimeClient's internal copy of the conversation session state. More specifically, interaction methods create a new Context object, and previous Context objects are never modified.

Therefore, we can access past application states through past Contexts. This means you can build a history of Context objects and implement time-travelling capabilities in your chatbot.
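One minimal way to build such a history (a sketch; `contextHistory` and `track` are illustrative helpers, not part of the SDK):

```javascript
// Sketch: record each Context so past application states stay accessible.
const contextHistory = [];

const track = (context) => {
  contextHistory.push(context);
  return context; // pass the context through so `track` can wrap interaction calls
};

// Illustrative usage with a RuntimeClient named `app`:
// track(await app.start());
// track(await app.sendText('hello'));
// const previousContext = contextHistory[contextHistory.length - 2];
```

Because interaction methods never mutate previous Contexts, every entry in the array remains an accurate snapshot of its point in the conversation.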

The Context object has a handful of methods to expose its internal data. We will describe a subset of them below.

.getTrace()

Returns a list of traces representing the Voiceflow app's response. You can use this to manually access the response from a Voiceflow app. However, we recommend using the event-system instead if you want to handle the response's data.

const response = context.getTrace();
response.forEach((trace) => {
  if (trace.type === TraceType.SPEAK) {
    console.log(trace.payload.message);
  } else if (trace.type === TraceType.DEBUG) {
    errorLogger.log(trace.payload.message);
  }
});

.isEnding()

Returns true if the application state wrapped by the Context is the last state before the corresponding conversation session ended. Returns false otherwise.

This method is mainly used to detect when RuntimeClient's current conversation session has ended and that the next valid interaction method is .start() to start a new conversation from beginning.

let context;
do {
  const userInput = await frontend.getUserInput(); // listen for a user response
  context = await app.sendText(userInput);         // send the response to the app
  frontend.display(context.getTrace());            // display the response, if any
} while (!context.isEnding());                     // check if the current conversation is over
terminateApp();                                    // perform any cleanup

.getChips()

The .getChips() method returns a list of suggestion chips. If you are unfamiliar with this terminology, a suggestion chip is simply a suggested response that the user can send to a voice interface.

Suggestion chips can be passed into UI buttons. When the user presses one of these buttons, the button can trigger a click handler which automatically sends the suggested response on the user's behalf. An example illustrating this is shown below:

const chips = context.getChips();
// => [{ name: "I would like a pizza", ... }, { name: "I would like a hamburger", ... }]

const createOnClickSuggestion = (chosenSuggestion) => async () => {
  const context = await chatbot.sendText(chosenSuggestion);
};

chips.forEach(({ name }) => {
  frontend.addButton({
    text: name,
    callback: createOnClickSuggestion(name)
  });
});

You can also check our samples for a working implementation of suggestion chips on the browser.

Configuration

The RuntimeClientFactory accepts configurations which it will apply to RuntimeClient instances it constructs. In particular, there is a dataConfig option for managing the data returned by Context.getTrace() for all Contexts produced by a RuntimeClient. To summarize, there are two options currently available:

  1. tts - Default value is false. Set to true to enable text-to-speech functionality. Any returned SpeakTrace will contain an additional src property: a URL to an .mp3 audio file that speaks out the trace text.
  2. stripSSML - Default value is true. Set to false to disable the Context's SSML sanitization and return the full text string with the SSML included. This may be useful if you want to use your own TTS system.

The Samples section has some working code demonstrating some of the configuration options. Also, see the subsections below for how to access the data exposed by dataConfig options.

const factory = new RuntimeClientFactory({
  versionID: 'XXXXXXXXXXXXXXXXX',
  apiKey: 'VF.XXXXXX.XXXXXXXXX',
  dataConfig: {
    tts: true,
    stripSSML: false,
  },
});
const app = factory.createClient();

tts

Once you have set tts to true, you can access the TTS audio file through payload.src on a SpeakTrace, as shown below.

const speakTrace = context.getTrace()[0];        // assume the first element is a SpeakTrace
const audio = new Audio(speakTrace.payload.src); // HTMLAudioElement
audio.play();

stripSSML

When this is set to false, the message string returned by a SpeakTrace will contain the SSML that you added through Voiceflow Creator.

console.log(context.getTrace());
/* prints out the following:
[
  {
    "type": "speak",
    "payload": {
      "message": "<voice name=\"Alexa\">Welcome to Voiceflow Pizza! </voice>"
    }
  },
  {
    "type": "debug",
    "payload": {
      "message": "matched with Intent 'Fallback'"
    }
  }
]
*/
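If you keep the SSML for your own TTS system but still need a plain-text transcript, one naive approach (a hedged sketch; production SSML may warrant a real XML parser) is to strip the tags with a regex:

```javascript
// Naive SSML tag stripper (a sketch, not part of the SDK).
function stripSsmlTags(ssml) {
  return ssml.replace(/<[^>]+>/g, '').trim();
}
```

For example, applying it to the message above yields the bare text "Welcome to Voiceflow Pizza!".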

Variables

Getters

Voiceflow projects have variables that are modified as the app is executing. You can access the variable state at a particular point in time through context.variables. Recall that a Context is a snapshot of app state, so the value of .variables at one particular Context is the value of the variables at some previous fixed point in time.

  • .get(variableName) - Used to retrieve a single variable value
  • .getAll() - Returns an object containing all variables
  • .getKeys() - Returns a list of variable names
const context = await app.sendText('I would like a large cheeseburger');

const name = context.variables.get('name');

const allVariables = context.variables.getAll();
const name = allVariables.name;

const keys = context.variables.getKeys();

Setters

You can also set variables through a Context

  • .set(variableName, value) - Sets variableName to have the given value
  • .setMany(map) - Sets all variables which appear as keys in map to the corresponding values in map.
context.variables.set('name', 'Jean-Luc Picard');
context.variables.setMany({
  name: 'Jean-Luc Picard',
  age: 52,
});

WARNING: This is an unsafe feature and you should know what you're doing before using it.

If you want to set variables to affect the result of the next interaction, then you should set the variables of the most recent Context returned by an interaction. Interaction methods will return a reference to the RuntimeClient's current internal Context object, which will be used for the next state transition.

Recall that each Context returned by the RuntimeClient is a snapshot of the Voiceflow app state at some point in time. Setting the variables on context1 will not affect variables values on context2.

Additionally, if you want to implement time-travelling and keep a record of past Contexts, then do not use a setter, as it will modify any past Contexts that you call the setter on, thus, leaving your record in a misleading state.
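If you do keep a history, record deep copies of variable values rather than live references (a sketch; `snapshotVariables` is an illustrative helper, and it assumes variable values are JSON-serializable):

```javascript
// Sketch: deep-copy variable values so later .set() calls can't rewrite your record.
function snapshotVariables(variables) {
  return JSON.parse(JSON.stringify(variables));
}

// Illustrative usage: record a copy instead of the live object.
// history.push(snapshotVariables(context.variables.getAll()));
```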

Enabling Stricter Typing

The Runtime Client is implemented in TypeScript and has strict types on all of its methods. The .variables submodule can also be configured to support stricter typing.

To do this, you must supply a variable schema to the RuntimeClientFactory. Once you do, variable methods like .get() will deduce the variable type based on the variable name you pass in as an argument (see below).

Since Voiceflow apps are loaded in at runtime, it is impossible for the RuntimeClient to deduce the types of variables for you, when you compile from TypeScript. It is up to you to define what types you expect to receive and to ensure your Voiceflow app will only send back what you expect.

export type VFVariablesSchema = {
  age: number;
  name: string;
};

const factory = new RuntimeClientFactory<VFVariablesSchema>({
  versionID: 'some-version-id',
  apiKey: 'VF.XXXXXX.XXXXXXXXX'
});
const app = factory.createClient();

const context = await app.start();

const name = context.variables.get('name'); // return value is inferred to be a "string"
context.variables.set('name', 12); // TypeError! expected a "number" not a "string"

Multiple Applications

You can integrate any number of different Voiceflow applications to your project, simply by constructing multiple RuntimeClientFactory instances, then constructing the RuntimeClient with .createClient().

NOTE: If you are integrating the Voiceflow app on the backend, we do not recommend creating a disposable chatbot with .createClient() to serve each request. This approach will not persist the conversation session between requests, and trying to overcome this by persisting the chatbot object is not scalable. To integrate runtime-client-js on your backend, see Backend Usage.

import RuntimeClientFactory from '@voiceflow/runtime-client-factory';

const customerSupportBotFactory = new RuntimeClientFactory({
  versionID: 'support-bot-1-id',
  apiKey: 'VF.XXXXXX.XXXXXXXXX'
});
const supportBot1 = customerSupportBotFactory.createClient();
const supportBot2 = customerSupportBotFactory.createClient(); // independent from supportBot1

const orderBotFactory = new RuntimeClientFactory({
  versionID: 'order-bot-id',
  apiKey: 'VF.XXXXXX.XXXXXXXXX'
});
const orderBot = orderBotFactory.createClient();

Backend Usage

Problem

In the backend, we may want to create a RuntimeClient to service a request from our clients. So far, this document has mainly described how to use the RuntimeClient on the frontend, initialized as a stateful global object. In the backend, however, this approach does not work.

Ideally, we don't want to persist a RuntimeClient for every client that sends requests to our backend. That approach would not scale, because each RuntimeClient instance consumes memory: 1,000,000 active users would mean 1,000,000 active RuntimeClient objects in our backend process.

// Our factory
const factory = new RuntimeClientFactory({
  versionID: 'XXXX',
  apiKey: 'VF.XXXXXX.XXXXXXXXX'
});

// Our collection of RuntimeClients
const runtimeClients = {};

// An endpoint in Express
app.get('/', async (req, res) => {
  if (!runtimeClients[req.userID]) {
    // BAD PRACTICE - Will consume a significant amount of memory if # of users grows
    runtimeClients[req.userID] = factory.createClient();
  }

  const context = await runtimeClients[req.userID].sendText(req.userInput);

  return context.getTrace();
});

However, we can't simply deallocate the RuntimeClient after the current request and construct a new one for the next request. Each RuntimeClient holds the conversation session, and deallocating it would lose that information, so any input the user provided, such as their name, would be gone. Moreover, a new RuntimeClient created for the next request would start the conversation again from the beginning!

// An endpoint in Express
app.get('/', async (req, res) => {
  // WRONG - This will start the app from the beginning at every request
  const client = factory.createClient();

  const context = await client.sendText(req.userInput);

  return context.getTrace();
});

Solution

The .createClient() method can accept an additional state object, which solves the problem of using the RuntimeClient on the backend. Its behavior differs depending on the value of state:

  1. If state is undefined, then createClient() behaves as before and creates an entirely new RuntimeClient
  2. If state is a valid Voiceflow application State, then createClient() creates a RuntimeClient with the provided state, thus, regenerating the same chatbot from a previous request.

After each request, you can extract the current RuntimeClient state by calling context.toJSON().state. Then, you can store this state in a database such as MongoDB. When the next request comes in, read the conversation state for that particular user from DB, then wrap the state with a RuntimeClient by calling .createClient(state). This approach allows you to persist a client's conversation session between requests.

app.post('/:userID', async (req, res) => {
  const { userID } = req.params;
  const { userInput } = req.body;

  // pull the current conversation session of the user from our DB
  const state = await db.read(userID);

  // if `state` is `undefined` then allocate a new client
  const client = runtimeClientFactory.createClient(state);

  // send the next user request
  const context = await client.sendText(userInput);

  // check if we need to cleanup the conversation session
  if (context.isEnding()) {
    await db.delete(userID);
  } else {
    await db.insert(userID, context.toJSON().state);
  }

  // send the traces
  res.send(context.getTrace());
});

Conceptually, the RuntimeClient can be used on the frontend as a stateful global object. In the backend, you should think of the RuntimeClient as a disposable wrapper around an independent state object, which you can use to perform operations on that state.

For a full-working sample demonstrating this technique, see here.

Best Practices

Sending data over Voiceflow interactions

Keep in mind that the State object of a Voiceflow application will contain the values of any Voiceflow variables. We strongly recommend not embedding sensitive information in Voiceflow variables or in any of your Voiceflow app's responses: the State is transmitted over HTTP requests to our runtime servers.

API Keys

API Keys should not be directly embedded in your application, especially if your source code is public on a website like GitHub. Voiceflow API Keys should be kept in your environment variables, then loaded onto your application in your build process.
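A minimal sketch of that practice is shown below (the environment variable name VOICEFLOW_API_KEY and the `loadApiKey` helper are illustrative, not mandated by the SDK):

```javascript
// Sketch: load the API key from the environment instead of hard-coding it.
function loadApiKey(env = process.env) {
  const key = env.VOICEFLOW_API_KEY;
  if (!key || !key.startsWith('VF.')) {
    throw new Error('VOICEFLOW_API_KEY is missing or malformed');
  }
  return key;
}

// Illustrative usage:
// const factory = new RuntimeClientFactory({ versionID: '...', apiKey: loadApiKey() });
```

Failing fast on a missing key at startup is usually preferable to a confusing authentication error later.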

Trace Types

A GeneralTrace is an object which represents one piece of the overall response from a Voiceflow app. Specialized traces like SpeakTrace are a sub-type of the more abstract GeneralTrace super-type, as shown below.

export type GeneralTrace = EndTrace | SpeakTrace | ChoiceTrace | FlowTrace | StreamTrace | BlockTrace | DebugTrace | VisualTrace | AudioTrace;

All trace objects have a type and payload property, but differ in what the value of type and payload is. Shown below is a type that describes the common structure of trace objects. NOTE: the Trace type isn't actually declared in the package and is only shown for illustration.

type Trace<T extends TraceType, P> = {
  type: T;
  payload: P;
};
// e.g. type SpeakTrace = Trace<TraceType.SPEAK, { message: string; src: string }>

In TypeScript, the string enum called TraceType is exported by this package and you can use it to quickly access the trace type string. A list of the available trace types is shown below.

enum TraceType {
  END = 'end',
  FLOW = 'flow',
  SPEAK = 'speak',
  AUDIO = 'audio',
  BLOCK = 'block',
  DEBUG = 'debug',
  CHOICE = 'choice',
  VISUAL = 'visual',
}

For each of the specialized trace types, we describe the trace's purpose and payload structure below.

SpeakTrace

  • PURPOSE: Contains the "real" response of the voice interface. Corresponds to a Speak Step on Voiceflow.
  • PAYLOAD:
    • message - The text representation of the response from the voice interface. We strip any SSML that you may have added to the response on Voiceflow. To keep the SSML, see the stripSSML configuration option.
    • src - A URL to an audio file that voices out the message. This property contains valid data only if the tts configuration option is set to true.
    • voice - Only appears if type is "message" and tts is enabled. This property is the name of the voice assistant you chose to read out the Speak Step text.
type P = {
  message: string;
  src?: string | null;
  voice?: string;
};

AudioTrace

  • PURPOSE: Contains the "real" response of the Voice interface. Corresponds to an Audio Step on Voiceflow
  • PAYLOAD:
    • src - This property is a URL to an audio-file that contains the response.
    • message - An SSML representation of the audio file being played. This is generally less useful than src.
type P = {
  src: string;
  message: string;
};

DebugTrace

  • PURPOSE: Contains a message that describes the control flow of the Voiceflow app, e.g., which intents were matched and which blocks the app moved to.
  • PAYLOAD:
    • message - A message illustrating the Voiceflow App's control flow. Intended only to be seen by the developers.
type P = {
  message: string;
};

VisualTrace

  • PURPOSE: Contains the data used by the Visual Step to display images.
  • PAYLOAD:
    • image - URL to the image asset being displayed.
    • device - What device the Visual Step is meant to be displayed on.
    • dimensions - Your custom dimensions, if any.
    • canvasVisibility - If you've toggled "Actual Size" on the Voiceflow Creator this attribute will have the value "full". Otherwise, if you toggled "Small", then this attribute will have the value "cropped".
    • visualType - Our internal code supports other visuals systems like APL. However, this is not relevant to a General Project, so you should ignore this property.
export declare enum DeviceType {
  MOBILE = 'mobile',
  TABLET = 'tablet',
  DESKTOP = 'desktop',
  SMART_WATCH = 'smart_watch',
  TELEVISION = 'television',
  IN_CAR_DISPLAY = 'in_car_display',
  ECHO_SPOT = 'echo_spot',
  ECHO_SHOW_8 = 'echo_show_8',
  ECHO_SHOW_10 = 'echo_show_10',
  FIRE_HD_8 = 'fire_hd_8',
  FIRE_HD_10 = 'fire_hd_10',
  FIRE_TV_CUBE = 'fire_tv_cube',
  GOOGLE_NEST_HUB = 'google_nest_hub',
}

type P = {
  image: string | null;
  device: DeviceType | null;
  dimensions: null | { width: number; height: number };
  canvasVisibility: 'full' | 'cropped';
  visualType: 'image';
};

ChoiceTrace

  • PURPOSE: Contains suggested responses that the user can make. Only appears at the end of a list of traces returned by the app. We recommend using .getChips() to access the suggested responses, rather than processing this trace manually.
  • PAYLOAD:
type P = {
  choices: { intent?: string; name: string }[];
};

ExitTrace

  • PURPOSE: Indicates if the Voiceflow app has terminated or not. Only appears at the end of a list of traces returned by the app. We recommend using .isEnding() to determine if the conversation is over, rather than processing this trace manually.
  • PAYLOAD: The payload is undefined

FlowTrace

  • PURPOSE: Indicates that the Voiceflow app has switched into a flow. This might be useful for debugging.
  • PAYLOAD:
    • diagramID - The ID of the Flow the app is stepping into.
type P = {
  diagramID: string;
};

BlockTrace

  • PURPOSE: Indicates that the Voiceflow app has entered a block.
  • PAYLOAD:
    • blockID - The ID of the block that the app is stepping into.
type P = {
  blockID: string;
};

Runtime

As the name suggests, runtime-client-js interfaces with a Voiceflow "runtime" server. You can check out our runtime SDK for building runtime servers. Modifying the runtime allows for extensive customization of bot behavior and integrations.

By default, the client will use the Voiceflow hosted runtime at https://general-runtime.voiceflow.com. To configure the client to consume your custom runtime, use the endpoint configuration option shown below. This option changes the target URL of the runtime server that RuntimeClient instances send their requests to.

const factory = new RuntimeClientFactory({
  versionID: '5fa2c62c71d4fa0007f7881b',
  apiKey: 'VF.3fs98h2f09.asd9020jis128',
  endpoint: 'https://localhost:4000', // change to a local endpoint or your company's production servers
});