# CodeBLEU

**Objective:**

CodeBLEU is a comprehensive metric for evaluating code generation. It extends BLEU with code-specific signals, namely syntactic (AST) match and semantic dataflow match, so it captures both linguistic and structural similarity and assesses code accuracy beyond surface-level token matching.
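As described in the original CodeBLEU paper, the overall score is a weighted sum of four component scores: standard n-gram match, weighted n-gram match, AST match, and dataflow match (equal weights of 0.25 by default). A minimal sketch of that combination, assuming each component has already been computed and normalized to [0, 1]:

```python
# Sketch of CodeBLEU's weighted combination of component scores.
# The component values and weights here are illustrative; the actual
# metric computes them from the generated and reference code.
def codebleu_score(ngram, weighted_ngram, ast_match, dataflow_match,
                   weights=(0.25, 0.25, 0.25, 0.25)):
    components = (ngram, weighted_ngram, ast_match, dataflow_match)
    return sum(w * s for w, s in zip(weights, components))

# When all components agree, the combined score equals them:
score = codebleu_score(0.8, 0.8, 0.8, 0.8)  # ≈ 0.8
```

This is why a low score can stem from either surface-level token mismatch or deeper structural divergence: any one component can pull the weighted sum down.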

**Required Columns in Dataset:**

`Generated Code`, `Reference Code`

**Interpretation:**

* **High CodeBLEU Score:** Indicates strong alignment with the reference solution in terms of both syntax and logic.
* **Low CodeBLEU Score:** Reflects potential deviations in code structure or logic, which may impact functionality.

**Execution via UI:**

<figure><img src="/files/ov5DKg2pxeX5ml98Z0wi" alt=""><figcaption></figcaption></figure>

**Execution via SDK:**

```python
metrics = [
    {
        "name": "CodeBLEU",
        "schema_mapping": {
            "generated_code": "Generated Code",
            "reference_code": "Reference Code",
        },
    }
]
```

---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.raga.ai/ragaai-catalyst/ragaai-metric-library/code-generation/codebleu.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.
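Such a query URL can be constructed with Python's standard library; the question text below is an illustrative example, not part of the documentation:

```python
from urllib.parse import urlencode

# Base page URL from the documentation; the question is hypothetical.
base = "https://docs.raga.ai/ragaai-catalyst/ragaai-metric-library/code-generation/codebleu.md"
question = "What score range does CodeBLEU return?"

# URL-encode the question into the `ask` query parameter.
url = f"{base}?{urlencode({'ask': question})}"
# Perform an HTTP GET on `url` with any client (curl, requests, urllib).
```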

Use this mechanism when:

* the answer is not explicitly present in the current page,
* you need clarification or additional context, or
* you want to retrieve related documentation sections.
