Our OpenAI integration lets you perform AI-powered tasks such as summarizing text, answering questions, generating images, fine-tuning models, and much more.
Once you have set up an OpenAI client, you can add it to your job and start using the provided tasks:
```ts
client.defineJob({
  id: "openai-job",
  name: "OpenAI Job",
  version: "1.0.0",
  trigger: invokeTrigger(),
  integrations: {
    openai, // Add the OpenAI client as an integration
  },
  run: async (payload, io, ctx) => {
    // Now you can access it through the io object
    const completion = await io.openai.chat.completions.create("completion", {
      model: "gpt-3.5-turbo",
      messages: [
        {
          role: "user",
          content: "Create a good programming joke about background jobs",
        },
      ],
    });
  },
});
```
As you can see above, we’ve replicated the API of the OpenAI TypeScript SDK, with one crucial difference: each task takes a Task Cache Key as its first parameter. We’ve also added a few convenience methods that make it easier to work with the OpenAI API, especially in a serverless environment. For example, you can run a Chat Completion task in the background with backgroundCreate():
```ts
const completion = await io.openai.chat.completions.backgroundCreate("completion", {
  model: "gpt-3.5-turbo",
  messages: [
    {
      role: "user",
      content: "Create a good programming joke about background jobs",
    },
  ],
});
```
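Because the Task Cache Key identifies each task within a run, keys must be unique when you call the same task more than once in a single job. A minimal sketch of one way to build stable, unique keys in a loop (the `taskKey` helper is hypothetical, not part of the SDK):

```typescript
// Hypothetical helper: derive a stable, unique cache key for each
// invocation of the same task within one job run.
function taskKey(base: string, index: number): string {
  return `${base}-${index}`;
}

const prompts = [
  "Create a joke about message queues",
  "Create a joke about retries",
];

// Each prompt gets its own deterministic key, e.g. "completion-0", "completion-1",
// so re-runs of the job resume from cached results instead of repeating work.
const keys = prompts.map((_, i) => taskKey("completion", i));
```

Deterministic keys matter because runs can be resumed: if a key changes between attempts, the cached task result cannot be matched and the task executes again.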
See our full task reference below:
Chat Completions
Given a list of messages comprising a conversation, the model will return a response
Assistants (Beta)
Build assistants that can call models and use tools to perform tasks
Files
Upload files to use with assistants and fine-tuning
Images
Given a prompt and/or an input image, the model will generate a new image
Fine Tuning Jobs
Manage fine-tuning jobs to tailor a model to your specific training data
Models
List and describe the various models available in the API
Completions (Legacy)
Given a prompt, the model will return one or more predicted completions