
Configure Evaluators

This guide shows you how to configure evaluators for your LLM application.

What are evaluators?

Evaluators are functions that assess the output of an LLM application.

Evaluators typically take as input:

  • The output of the LLM application
  • (Optional) The reference answer (i.e., expected output or ground truth)
  • (Optional) The inputs to the LLM application
  • Any other relevant data, such as context

Evaluators return either a float or a boolean value.
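
Conceptually, an evaluator is just a function. The sketch below is illustrative only (the function names, parameters, and scoring logic are assumptions for this guide, not Agenta's evaluator interface); it shows one boolean evaluator and one float evaluator that compare the application output against a reference answer.

```python
# Illustrative sketch only -- not Agenta's actual evaluator interface.
# An evaluator receives the LLM output (and optionally the reference
# answer and the original inputs) and returns a float or a boolean.

def exact_match_evaluator(output: str, reference: str) -> bool:
    # Boolean evaluator: True if the output matches the reference exactly.
    return output.strip() == reference.strip()


def token_overlap_evaluator(output: str, reference: str) -> float:
    # Float evaluator: fraction of reference tokens that appear in the output.
    reference_tokens = set(reference.lower().split())
    if not reference_tokens:
        return 0.0
    output_tokens = set(output.lower().split())
    return len(reference_tokens & output_tokens) / len(reference_tokens)
```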

Figure showing the inputs and outputs of an evaluator.

Configuring evaluators

To create a new evaluator, click on the Configure Evaluators button in the Evaluations view.

The configure evaluators button in agenta.

Selecting evaluators

Agenta offers a growing list of pre-built evaluators suitable for most use cases. We also provide options for creating custom evaluators (by writing your own Python function) or using webhooks for evaluation.

Available Evaluators
| Evaluator Name | Use Case | Type | Description |
| --- | --- | --- | --- |
| Exact Match | Classification/Entity Extraction | Pattern Matching | Checks if the output exactly matches the expected result. |
| Contains JSON | Classification/Entity Extraction | Pattern Matching | Ensures the output contains valid JSON. |
| Regex Test | Classification/Entity Extraction | Pattern Matching | Checks if the output matches a given regex pattern. |
| JSON Field Match | Classification/Entity Extraction | Pattern Matching | Compares specific fields within JSON data. |
| JSON Diff Match | Classification/Entity Extraction | Similarity Metrics | Compares generated JSON with a ground truth JSON based on schema or values. |
| Similarity Match | Text Generation / Chatbot | Similarity Metrics | Compares generated output with expected using Jaccard similarity. |
| Semantic Similarity Match | Text Generation / Chatbot | Semantic Analysis | Compares the meaning of the generated output with the expected result. |
| Starts With | Text Generation / Chatbot | Pattern Matching | Checks if the output starts with a specified prefix. |
| Ends With | Text Generation / Chatbot | Pattern Matching | Checks if the output ends with a specified suffix. |
| Contains | Text Generation / Chatbot | Pattern Matching | Checks if the output contains a specific substring. |
| Contains Any | Text Generation / Chatbot | Pattern Matching | Checks if the output contains any of a list of substrings. |
| Contains All | Text Generation / Chatbot | Pattern Matching | Checks if the output contains all of a list of substrings. |
| Levenshtein Distance | Text Generation / Chatbot | Similarity Metrics | Calculates the Levenshtein distance between output and expected result. |
| LLM-as-a-judge | Text Generation / Chatbot | LLM-based | Sends outputs to an LLM model for critique and evaluation. |
| RAG Faithfulness | RAG / Text Generation / Chatbot | LLM-based | Evaluates if the output is faithful to the retrieved documents in RAG workflows. |
| RAG Context Relevancy | RAG / Text Generation / Chatbot | LLM-based | Measures the relevancy of retrieved documents to the given question in RAG. |
| Custom Code Evaluation | Custom Logic | Custom | Allows users to define their own evaluator in Python. |
| Webhook Evaluator | Custom Logic | Custom | Sends output to a webhook for external evaluation. |
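
To give a feel for the last row, here is a minimal sketch of a webhook evaluator endpoint. The payload and response fields shown (output, correct_answer, score) are assumptions for illustration; refer to the webhook evaluator's documentation page for the exact request and response schema Agenta uses.

```python
# Minimal sketch of a webhook evaluator endpoint.
# The payload and response fields are assumed for illustration -- check the
# webhook evaluator documentation for the exact schema.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class EvaluationRequest(BaseModel):
    output: str          # the LLM application's output (assumed field name)
    correct_answer: str  # the reference answer (assumed field name)


@app.post("/evaluate")
def evaluate(req: EvaluationRequest):
    # Return a score between 0 and 1; here, a simple case-insensitive match.
    is_match = req.output.strip().lower() == req.correct_answer.strip().lower()
    return {"score": 1.0 if is_match else 0.0}
```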

Screen for selecting an evaluator.

Evaluators' settings

Each evaluator comes with its own settings. For instance, in the screen below, the JSON Field Match evaluator requires you to specify which field in the output JSON to consider for evaluation. You'll find detailed information about these parameters on each evaluator's documentation page.
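
As a rough illustration of what that setting controls, the snippet below is a conceptual sketch (not Agenta's implementation) of a JSON field match: parse the output as JSON, pull out the configured field, and compare it to the reference value. The field name "answer" is only an example.

```python
import json


def json_field_match(output: str, reference: str, field: str = "answer") -> bool:
    # Conceptual sketch: parse the output as JSON, extract the configured
    # field, and compare it to the reference answer.
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return False
    if not isinstance(data, dict):
        return False
    return str(data.get(field, "")).strip() == reference.strip()
```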

Screen for configuring an evaluator.

Mapping the evaluator's inputs to the LLM data

Evaluators need to know which parts of the data contain the output and the reference answer. Most evaluators allow you to configure this mapping, typically by specifying the name of the column in the test set that contains the reference answer.

For more sophisticated evaluators, such as RAG evaluators (available only in cloud and enterprise versions), you need to define more complex mappings (see figure below).


Configuring the evaluator is done by mapping the evaluator inputs to the generation data:

Figure showing how RAGAS faithfulness evaluator is configured in agenta.
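
As a rough illustration (the field names and trace paths below are hypothetical and depend on how your application is instrumented), a RAG Faithfulness evaluator typically needs a question, the retrieved contexts, and the generated answer, each mapped to a part of the generation data:

```python
# Hypothetical mapping for a RAG Faithfulness evaluator: each evaluator
# input points to a field in the generation data / trace. The trace paths
# shown here are examples only -- yours will depend on your instrumentation.
rag_faithfulness_mapping = {
    "question": "trace.retriever.inputs.query",
    "contexts": "trace.retriever.outputs.documents",
    "answer": "trace.generator.outputs.message",
}
```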