Built-In AI
This SDK is currently in preview and is not yet available to the general public.
You can easily develop AI-powered nodes using the built-in AI. It works by leveraging the runtime AI that the user has configured, which means that as a developer you do not need to worry about running a specific AI model.
context.ai(<parameters>);
Runtime AI
As a developer, when you call context.ai, you can choose the class of AI you want to run. The runtime then uses whichever AI the user of the node has configured for that class. For example, one user may configure their environment to use Gemini Flash for speed, OpenAI o1 for reasoning, and Anthropic Claude for state-of-the-art (sota) quality, while another user may configure Ollama to serve the models. When you specify a class of model, your node runs on the specific AI that the user has configured for their environment.
Alternatively, you may specify the exact model that your node needs to run. The user of your node will then be prompted to set up that AI model.
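To make the selection behavior concrete, here is a minimal mock sketch of how the runtime might resolve a request. The `classMap`, the model names, and the mock `ai` function are all illustrative assumptions; the real `context.ai` is provided by the SDK runtime.

```javascript
// Minimal mock of model resolution. All names here are illustrative
// assumptions, not part of the documented API.
const classMap = {
  speed: "gemini-flash",
  reasoning: "openai-o1",
  sota: "anthropic-claude",
};

const mockContext = {
  ai({ model_class, model, prompt }) {
    // A pinned `model` takes priority; otherwise the user's configured
    // model for the requested class is used.
    const chosen = model || classMap[model_class];
    if (!chosen) throw new Error("No model configured; user setup required.");
    return { response: `(${chosen}) ${prompt}` };
  },
};

// Declaring only a class lets the user decide which model actually runs.
const byClass = mockContext.ai({ model_class: "speed", prompt: "Hi" });
// Pinning a specific model bypasses the class mapping.
const pinned = mockContext.ai({ model: "llama3.2", prompt: "Hi" });
```

In a real environment, a pinned model that the user has not configured would trigger the setup flow described above rather than resolving silently.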
Parameters
model_class: the class of model to use (speed | sota | reasoning)
preferred_model: the model(s) that is preferred
model: the specific model to run this node with
prompt: the prompt to generate a response for
system: the system prompt
temperature: the temperature
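As a hedged illustration of the parameter shape, the object below uses every documented field; all of the values are invented for the example and are not defaults.

```javascript
// Illustrative parameter object for context.ai; every value here is an
// example, not a documented default.
const params = {
  model_class: "reasoning",               // speed | sota | reasoning
  preferred_model: ["openai-o1"],         // model(s) preferred if available
  prompt: "Why is the sky blue?",         // text to generate a response for
  system: "You are a concise assistant.", // system prompt
  temperature: 0.2,                       // sampling temperature
};
```

A call would then simply be `context.ai(params)`.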
Response
total_duration: time spent generating the response
load_duration: time spent in nanoseconds loading the model
prompt_eval_count: number of tokens in the prompt
prompt_eval_duration: time spent in nanoseconds evaluating the prompt
eval_count: number of tokens in the response
eval_duration: time in nanoseconds spent generating the response
context: an encoding of the conversation used in this response; this can be sent in the next request to keep a conversational memory
response: empty if the response was streamed; if not streamed, this will contain the full response
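The `context` field is what enables multi-turn memory: pass the value returned by one call into the next request. The mock runtime below is a sketch of that chaining only; it is named `runtime` to avoid colliding with the response field, and its internals are assumptions, not the SDK's behavior.

```javascript
// Hedged sketch of conversational memory via the `context` response field.
// This mock just accumulates prompts; the real runtime returns an opaque
// encoding of the conversation.
const runtime = {
  ai({ prompt, context }) {
    const memory = (context || []).concat(prompt);
    return { response: `seen ${memory.length} turn(s)`, context: memory };
  },
};

const first = runtime.ai({ prompt: "My name is Ada." });
// Feed the returned `context` back in to continue the same conversation.
const second = runtime.ai({
  prompt: "What is my name?",
  context: first.context,
});
```

Omitting `context` starts a fresh conversation with no memory of prior turns.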
Example
prompt:
  datatype: text
  direction: input
response:
  datatype: text
  direction: output
context.subscribe("text", (text) => {
  // Generate a summary using whichever AI the user has configured.
  const summary = context.ai({
    prompt: `Summarize this: ${text}`,
    temperature: 0.5,
  });
  // The call was not streamed, so `summary.response` holds the full text.
  context.set("summary", summary.response);
});