Frequently Asked Questions
Does Agenta work with TypeScript?
Yes, Agenta works with TypeScript, though the level of native support varies by feature.
The same applies to other languages such as JavaScript, Java, and Go: Agenta's API-first approach enables integration with virtually any programming language.
Current TypeScript Support
- Prompt Management
While we don't currently have a native TypeScript SDK, you can fully leverage our prompt management features through direct API calls. All our APIs are documented in our API Reference.
This allows you to create prompts, fetch variants, and programmatically manage them without needing the Python SDK.
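Since there is no native TypeScript SDK, fetching a prompt configuration is just an HTTP call. The sketch below is a minimal illustration, not the definitive API: the endpoint path, query parameters, and the Bearer authorization scheme are assumptions to verify against the API Reference for your Agenta version.

```typescript
// Sketch: fetching a prompt configuration from Agenta over plain HTTP.
// The route and auth scheme below are assumptions — check the API Reference.

const AGENTA_HOST = "https://cloud.agenta.ai"; // or your self-hosted URL

// Build the (hypothetical) config-fetch URL for an app/variant pair.
function configUrl(appSlug: string, variantSlug: string): string {
  const params = new URLSearchParams({
    app_slug: appSlug,
    variant_slug: variantSlug,
  });
  return `${AGENTA_HOST}/api/variants/configs/fetch?${params}`;
}

// Fetch a prompt variant's configuration; the response shape depends on
// the API version, so it is typed as `unknown` here.
async function fetchPromptConfig(
  appSlug: string,
  variantSlug: string,
): Promise<unknown> {
  const res = await fetch(configUrl(appSlug, variantSlug), {
    headers: { Authorization: `Bearer ${process.env.AGENTA_API_KEY}` },
  });
  if (!res.ok) throw new Error(`Agenta API returned ${res.status}`);
  return res.json();
}
```

The same pattern (build URL, attach the API key, parse JSON) applies to creating prompts and managing variants programmatically.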
- Observability
Agenta is fully OpenTelemetry-compliant. You can auto-instrument your TypeScript application with OpenTelemetry-compatible integrations such as OpenLLMetry, which works well with TypeScript projects.
We have documentation on setting up OpenTelemetry with your API key.
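As a rough sketch, a Node.js tracing setup that exports spans to Agenta over OTLP/HTTP could look like the following. The collector URL and the Authorization header format are assumptions; use the values from the OpenTelemetry setup documentation for your deployment.

```typescript
// Sketch: export OpenTelemetry traces from a TypeScript app to Agenta.
// Run this file before (or at the top of) your application entry point.
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({
    // Assumed OTLP endpoint — replace with the URL from the Agenta docs.
    url: "https://cloud.agenta.ai/api/otlp/v1/traces",
    // Assumed header format — the docs specify the exact scheme.
    headers: { Authorization: `ApiKey ${process.env.AGENTA_API_KEY}` },
  }),
});

sdk.start();
```

With the SDK started, spans emitted by instrumentations such as OpenLLMetry are shipped to Agenta automatically.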
- Evaluation
For evaluation, any prompts you create within the UI can be evaluated there. However, for more complex workflows like agentic applications, we currently only support Python for creating custom workflows in the playground.
Workaround: You can create a Python wrapper that calls your TypeScript endpoint and add it to Agenta. This lets subject matter experts run evaluations from the UI.
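A minimal sketch of such a wrapper is shown below, using only the standard library. The service URL, request payload, and the `answer` response field are placeholders for whatever your TypeScript endpoint actually exposes; registering the function with Agenta is done separately via the Python SDK.

```python
# Sketch: a thin Python wrapper around a TypeScript service, so the
# workflow can be added to Agenta and evaluated from the UI.
# The URL, payload shape, and response field are placeholders.
import json
import os
import urllib.request

TS_SERVICE_URL = os.environ.get("TS_SERVICE_URL", "http://localhost:3000/generate")


def build_payload(prompt: str) -> dict:
    """Request body the TypeScript endpoint expects (assumed shape)."""
    return {"prompt": prompt}


def generate(prompt: str) -> str:
    """Forward the prompt to the TypeScript endpoint and return its answer."""
    req = urllib.request.Request(
        TS_SERVICE_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["answer"]  # "answer" field is an assumption
```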
If you need help with the integration, create an issue on GitHub.
What LLM providers does Agenta support?
Agenta works with almost any provider, including:
- OpenAI
- Anthropic
- Cohere
- OpenRouter
- Anyscale
- Perplexity AI
- TogetherAI
- DeepInfra
- Aleph Alpha
- Groq
- Gemini
- Mistral
- Ollama
In addition, you can add any OpenAI-compatible endpoint, including self-hosted and custom models (for instance via Ollama). You can also dynamically add new models to any provider already listed in the playground, such as OpenRouter, Anthropic, Gemini, Cohere, and others.
You can learn more about setting up different models in the documentation.