Built-In AI

Preview

This SDK is currently in preview and is not yet available to the general public.

You can easily develop AI-powered nodes using the built-in AI. It works by leveraging the runtime AI that the user has configured, which means that you as a developer do not need to be concerned with running a specific AI model yourself.

context.ai(<parameters>);

Runtime AI

As a developer, when you call context.ai, you can choose the class of AI you want to run. Depending on what the user of the node specifies, the corresponding AI will be used. For example, one user might configure their environment to use Gemini Flash for speed, OpenAI o1 for reasoning, and Anthropic Claude for state-of-the-art (sota) quality. Another user might configure Ollama to provide all of these models. When you specify a class of model, your node runs on whichever AI the user has configured for that class in their environment.

Alternatively, you may specify the exact model that your node requires. The user running your node will then be prompted to set up that AI model.
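The resolution described above can be sketched as follows. The mapping and the resolveModel helper are hypothetical illustrations, not part of the SDK; the real runtime resolves models from the user's environment configuration.

```typescript
type ModelClass = "speed" | "sota" | "reasoning";

// Hypothetical user configuration: each model class maps to the model
// the user has chosen for it in their environment.
const userConfig: Record<ModelClass, string> = {
  speed: "gemini-flash",   // e.g. Gemini Flash for speed
  reasoning: "openai-o1",  // e.g. OpenAI o1 for reasoning
  sota: "claude",          // e.g. Anthropic Claude for sota
};

// Sketch of how the runtime might pick a model for a context.ai call.
function resolveModel(params: { model_class?: ModelClass; model?: string }): string {
  // Requesting a specific model overrides the class-based lookup
  // (and would prompt the user to set that model up).
  if (params.model) return params.model;
  return userConfig[params.model_class ?? "speed"];
}
```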

Parameters

  • model_class: the class of model to use: speed | sota | reasoning
  • preferred_model: the model(s) that are preferred
  • model: the specific model to run this node with
  • prompt: the prompt to generate a response for
  • system: the system prompt
  • temperature: the temperature
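The parameter names above can be collected into a TypeScript shape. This is a sketch inferred from the list; the exact types and optionality are assumptions, as the SDK's own typings are not shown here.

```typescript
type ModelClass = "speed" | "sota" | "reasoning";

// Assumed shape of the object passed to context.ai(...).
interface AIParameters {
  model_class?: ModelClass;            // class of model to use
  preferred_model?: string | string[]; // preferred model(s)
  model?: string;                      // a specific required model
  prompt: string;                      // prompt to generate a response for
  system?: string;                     // system prompt
  temperature?: number;                // sampling temperature
}
```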

Response

  • total_duration: time spent generating the response
  • load_duration: time spent in nanoseconds loading the model
  • prompt_eval_count: number of tokens in the prompt
  • prompt_eval_duration: time spent in nanoseconds evaluating the prompt
  • eval_count: number of tokens in the response
  • eval_duration: time in nanoseconds spent generating the response
  • context: an encoding of the conversation used in this response; it can be sent with the next request to maintain conversational memory
  • response: empty if the response was streamed; otherwise contains the full response
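The response fields above can likewise be sketched as a TypeScript shape. Field names come from the list; the types are assumptions, and the encoding of context is left opaque since the documentation does not specify it.

```typescript
// Assumed shape of the object returned by context.ai(...).
interface AIResponse {
  total_duration: number;       // time spent generating the response
  load_duration: number;        // nanoseconds spent loading the model
  prompt_eval_count: number;    // number of tokens in the prompt
  prompt_eval_duration: number; // nanoseconds spent evaluating the prompt
  eval_count: number;           // number of tokens in the response
  eval_duration: number;        // nanoseconds spent generating the response
  context: unknown;             // opaque conversation encoding; pass it to the
                                // next request to keep conversational memory
  response: string;             // full response, or "" if streamed
}
```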

Example

properties.yaml
prompt:
  datatype: text
  direction: input
response:
  datatype: text
  direction: output
src/main.ts
context.subscribe("prompt", (text) => {
  // Generate a summary using whichever model the user has configured.
  const summary = context.ai({
    prompt: `Summarize this: ${text}`,
    temperature: 0.5,
  });
  // Write the full (non-streamed) response to the output property.
  context.set("response", summary.response);
});