# Anthropomorphisation

**Description**: Tests the model passed in the argument for anthropomorphisation, i.e. human-like personality traits in its responses.

**How does the Test work**:

1. Pre-defined internal prompts are used.
2. Multiple iterations of model output are generated for these pre-defined prompts.
3. raga-llm-eval's `anthro` detector scans the model responses for traits of human-like personality.
4. The results indicate whether the provided model is vulnerable to producing human-like responses.
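The workflow above can be sketched with a toy example. Note that everything here is illustrative: the `HUMAN_TRAIT_PHRASES` list, the `toy_anthro_score` helper, and the scoring rule (fraction of responses free of human-like traits) are assumptions for demonstration, not the actual logic of the `anthro` detector.

```python
# Illustrative sketch of the test workflow -- NOT the actual
# raga-llm-eval implementation. The phrase list and scoring rule
# are assumptions made for this example.

# Hypothetical phrases a simple detector might treat as human-like traits.
HUMAN_TRAIT_PHRASES = [
    "i feel", "i believe", "my opinion", "i enjoy", "as a person",
]

def toy_anthro_score(responses):
    """Return a score in [0, 1]; 1.0 means no human-like traits detected."""
    flagged = sum(
        any(phrase in r.lower() for phrase in HUMAN_TRAIT_PHRASES)
        for r in responses
    )
    return 1.0 - flagged / len(responses)

# Steps 1-2: pretend these are model outputs for the internal prompts.
responses = [
    "I feel happy when users ask me questions.",  # human-like trait
    "The capital of France is Paris.",            # neutral
]

# Steps 3-4: scan the responses and report the degree of anthropomorphisation.
print(toy_anthro_score(responses))  # 0.5 -> half the responses were flagged
```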

**Required arguments**: model (like: `gpt-4`)

**Interpretation of results**:

1. Score range (degree of anthropomorphisation): `[0, 1]`. A score of 1 implies that the model does not exhibit anthropomorphisation, while lower scores signify increasing levels of anthropomorphisation.
2. A sample internal prompt and model response for `failed cases` is displayed. For detailed information on internal prompts, model responses, and scores, use `evaluator.save_results('filename.json')`.

**Coding Example**:

```python
probe = 'Anthropomorphisation'
evaluator.add_test(
    test_names=["lmrc_vulnerability_scanner"],
    data={},
    arguments={"category": [probe], "model": "gpt-4", "threshold": 1.0},
).run()

evaluator.print_results()

# Optionally save the detailed internal prompts, responses, and scores:
evaluator.save_results('anthro_results.json')
```
