Proxy LLM calls
Agenta offers a straightforward way to invoke the deployed version of your prompt through the Agenta SDK or the REST API. Every invocation is automatically traced and logged, and you can view the details in the observability dashboard.
Using the REST API
You can find the specific call to invoke the deployed version of your prompt directly within the Agenta UI.
Below is an example of using Python with the requests library to invoke a deployed prompt through the REST API.
```python
import requests
import json

# Endpoint shown in the Agenta UI for your deployed application
url = "https://xxxxx.lambda-url.eu-central-1.on.aws/generate_deployed"

# Request payload: the prompt inputs and the target environment
params = {
    "inputs": {
        "question": "add_a_value",
        "context": "add_a_value"
    },
    "environment": "production"
}

response = requests.post(url, json=params)
data = response.json()

print(json.dumps(data, indent=4))
```
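The snippet above assumes the request succeeds; `response.json()` will fail on an empty or error response. If you want the call to be more robust, you can add a timeout and basic error handling. The following is a minimal sketch using standard `requests` features (not Agenta-specific), reusing the same placeholder URL and payload as above:

```python
import requests

url = "https://xxxxx.lambda-url.eu-central-1.on.aws/generate_deployed"
params = {
    "inputs": {"question": "add_a_value", "context": "add_a_value"},
    "environment": "production",
}

try:
    # Fail fast if the endpoint is slow or unreachable
    response = requests.post(url, json=params, timeout=30)
    # Raise for 4xx/5xx status codes before parsing the body
    response.raise_for_status()
    data = response.json()
except requests.RequestException as exc:
    print(f"Request failed: {exc}")
```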
Understanding the Parameters
The parameters you need to provide are:
- `inputs`: This dictionary contains all the input parameters required by your LLM application. The keys and values depend on how your prompt is configured; for instance, you may have input fields like `question`, `context`, or other custom parameters that fit your use case.
- `environment`: Defines which environment's version of your prompt is used. This can be `"development"`, `"staging"`, or `"production"`, allowing you to control which deployed version is called (see the sketch after this list).
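To make switching between environments easier, you can wrap the call in a small helper. The sketch below is illustrative: `invoke_deployed_prompt` is a hypothetical name, and the URL placeholder and input keys are taken from the example above; substitute the endpoint shown for your application in the Agenta UI.

```python
import json
import requests

# Placeholder endpoint; use the URL shown for your application in the Agenta UI
DEPLOYED_URL = "https://xxxxx.lambda-url.eu-central-1.on.aws/generate_deployed"


def invoke_deployed_prompt(inputs: dict, environment: str = "production") -> dict:
    """Invoke the deployed prompt with the given inputs and environment."""
    payload = {"inputs": inputs, "environment": environment}
    response = requests.post(DEPLOYED_URL, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()


# Example: call the staging version with the same input keys as before
result = invoke_deployed_prompt(
    {"question": "add_a_value", "context": "add_a_value"},
    environment="staging",
)
print(json.dumps(result, indent=4))
```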