Quick Start

In this tutorial, we'll walk through three simple steps to get started with Agenta:

  1. Create a prompt in the web UI
  2. Deploy it to an environment
  3. Integrate it with your codebase using the Agenta SDK

1. Create a prompt

Go to the app overview and click on Create a Prompt. You can choose between:

  • Completion Prompt: For single-turn LLM applications (question answering, text generation, classification)
  • Chat Application: For multi-turn applications like chatbots

2. Commit your changes

After making changes to your prompt configuration:

  1. Click the Commit button
  2. Choose to commit to the default variant or create a new one
info

Variants are like Git branches, allowing you to experiment with different configurations. Each variant is versioned, with each version having its own commit number and being immutable.

3. Deploy to an environment

When you're satisfied with your prompt:

  1. Navigate to the Registry page
  2. Select the revision you want to deploy
  3. Click the Deploy button
  4. Choose the environment (production, development, or staging)
  5. Add optional deployment notes
tip

You can deploy a variant from the Playground, the Registry, or directly from the Deployments page.

note

Most changes made while iterating on a variant are experimental and not immediately deployed to production. This separation allows you to experiment freely before pushing changes to live environments. Environments are versioned separately from variants, enabling rollbacks if needed.

Animation showing the deployment process in Agenta

4. Integrate with your code

Access your deployed prompt using either the Agenta Python SDK or the API directly.

First, import and initialize the Agenta SDK:

import os
import agenta as ag

# os.environ["AGENTA_API_KEY"] = "YOUR_AGENTA_API_KEY"
# os.environ["AGENTA_HOST"] = "https://cloud.agenta.ai"  # defaults to Agenta Cloud; set only when self-hosting
ag.init()

Fetch your prompt configuration from the registry:

config = ag.ConfigManager.get_from_registry(
    app_slug="your-app-slug",
    environment_slug="production"
)
tip

For asynchronous applications, use aget_from_registry instead.
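
A minimal async sketch, assuming aget_from_registry accepts the same arguments as its synchronous counterpart:

import asyncio
import agenta as ag

ag.init()

async def fetch_config():
    # Await the config fetch so it doesn't block the event loop
    return await ag.ConfigManager.aget_from_registry(
        app_slug="your-app-slug",
        environment_slug="production"
    )

config = asyncio.run(fetch_config())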

The response is a JSON object containing your complete prompt configuration:

{
  "prompt": {
    "messages": [
      {
        "role": "system",
        "content": "You are an expert in geography"
      },
      {
        "role": "user",
        "content": "What is the capital of {{country}}? "
      }
    ],
    "input_keys": [
      "country"
    ],
    "llm_config": {
      "model": "gpt-3.5-turbo",
      "tools": [],
      "top_p": 0.2,
      "max_tokens": 257,
      "temperature": 0.2,
      "presence_penalty": -1.7,
      "frequency_penalty": -1.5,
      "response_format": {
        "type": "json_schema",
        "json_schema": {
          "name": "MySchema",
          "schema": {
            "type": "object",
            "properties": {}
          },
          "strict": false,
          "description": "A description of the schema"
        }
      }
    },
    "template_format": "curly"
  }
}
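
You can read individual fields from this dictionary before building your request. A quick sketch based on the structure above:

# Inspect a few fields of the fetched configuration
model = config["prompt"]["llm_config"]["model"]   # "gpt-3.5-turbo"
input_keys = config["prompt"]["input_keys"]       # ["country"]
print(f"Deployed model: {model}, expected inputs: {input_keys}")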

Use the helper class PromptTemplate to format your prompt and convert it to OpenAI-compatible parameters:

from agenta.sdk.types import PromptTemplate  # PromptTemplate ships with the Agenta SDK
import openai

prompt = PromptTemplate(**config["prompt"]).format(country="France")

client = openai.OpenAI()
response = client.chat.completions.create(
    **prompt.to_openai_kwargs()
)
info

Model names follow LiteLLM naming conventions: provider/model (e.g., cohere/command-light), provider/type/model (e.g., openrouter/google/palm-2-chat-bison), or just model for OpenAI models (e.g., gpt-3.5-turbo). Custom models use the format your_custom_provider_name/adapter/model (e.g., my_bedrock/bedrock/llama-3.1-8b-instant).
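
Because these names follow LiteLLM conventions, one option for calling non-OpenAI providers is to route the same keyword arguments through LiteLLM instead of the OpenAI client. This is a sketch, not part of the tutorial; it assumes the litellm package is installed and the provider's API key is set in your environment:

import litellm

# LiteLLM resolves the provider from the "provider/model" prefix
# in the model name (e.g. "cohere/command-light").
response = litellm.completion(**prompt.to_openai_kwargs())
print(response.choices[0].message.content)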

info

For simpler observability and cost tracking, Agenta also offers an endpoint to directly call LLMs with your prompt configuration. Learn more in the proxy LLM calls section.

Next Steps

Congratulations! You've created a prompt, deployed it to an environment, and integrated it with your codebase.

To continue your journey with Agenta: