Evals#

gptme provides LLMs with a wide variety of tools, but how well do models make use of them? Which tasks can they complete, and which ones do they struggle with? How far can they get on their own, without any human intervention?

To answer these questions, we have created an evaluation suite that tests the capabilities of LLMs on a broad range of tasks.

Note

The evaluation suite is still under development, but the eval harness is mostly complete.

Usage#

You can run the simple hello eval with gpt-4o like this:

gptme-eval hello --model openai/gpt-4o

However, we recommend running it in Docker to improve isolation and reproducibility:

make build-docker
docker run \
    -e "OPENAI_API_KEY=<your api key>" \
    -v $(pwd)/eval_results:/app/eval_results \
    gptme-eval hello --model openai/gpt-4o
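The same containerized setup works with other providers by swapping the API key and model. Here is a sketch using one of the Anthropic models from the results table below; pass whichever key matches the provider you choose:

```shell
# Run the hello eval against Claude 3.5 Sonnet instead of GPT-4o.
# Requires the gptme-eval image built with `make build-docker`.
docker run \
    -e "ANTHROPIC_API_KEY=<your api key>" \
    -v $(pwd)/eval_results:/app/eval_results \
    gptme-eval hello --model anthropic/claude-3-5-sonnet-20240620
```

The volume mount ensures results written inside the container end up in `eval_results/` on the host, where they can be aggregated later.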

Results#

Here are the results of the evals we have run so far. Each cell shows how many runs passed out of the total, along with token usage (tk); ✅ means all runs passed, 🔶 some, and ❌ none.

$ gptme-eval eval_results/*/eval_results.csv
Model                                     hello           hello-patch     hello-ask       prime100        init-git        init-rust      whois-superuserlabs-ceo
----------------------------------------  --------------  --------------  --------------  --------------  --------------  -------------  -------------------------
anthropic/claude-3-5-sonnet-20240620      ✅ 9/9 648tk    ✅ 9/9 662tk    ✅ 9/9 731tk    ✅ 9/9 870tk    ✅ 9/9 1004tk   ✅ 4/4 1504tk  🔶 3/4 1306tk
anthropic/claude-3-haiku-20240307         ✅ 9/9 375tk    ✅ 9/9 488tk    ✅ 9/9 446tk    ❌ 0/9 632tk    ✅ 9/9 837tk    🔶 3/4 670tk   ✅ 4/4 1535tk
openai/gpt-4-turbo                        ✅ 3/3 255tk    ✅ 3/3 312tk    ✅ 3/3 376tk    ✅ 3/3 527tk    ✅ 4/4 590tk    ✅ 4/4 784tk   ✅ 6/7 819tk
openai/gpt-4o                             ✅ 10/10 258tk  ✅ 10/10 315tk  ✅ 10/10 391tk  🔶 7/10 454tk   ✅ 10/10 622tk  ✅ 5/5 663tk   ✅ 5/5 1253tk
openai/gpt-4o-mini                        ✅ 11/11 261tk  ✅ 11/11 359tk  ✅ 11/11 418tk  ✅ 11/11 601tk  🔶 8/11 746tk   ✅ 6/6 813tk   ✅ 6/6 951tk
openai/o1-mini                            ✅ 3/3 354tk    ✅ 3/3 431tk    ✅ 3/3 460tk    🔶 2/3 567tk    🔶 3/5 2222tk   🔶 1/5 1412tk  🔶 2/3 813tk
openai/o1-preview                         ✅ 2/2 308tk    ✅ 2/2 570tk    🔶 1/2 549tk    ✅ 2/2 490tk    ✅ 3/3 823tk    ✅ 1/1 656tk   ✅ 1/1 1998tk
google/gemini-flash-1.5                   ✅ 2/2 225tk    ✅ 2/2 401tk    ✅ 2/2 430tk    ❌ 0/2 296tk    ✅ 1/1 686tk    ❌ 0/1 661tk   ✅ 1/1 1014tk
google/gemini-pro-1.5                     ✅ 1/1 341tk    ✅ 1/1 419tk    ✅ 1/1 456tk    ✅ 1/1 676tk    🔶 2/3 431tk    🔶 1/2 1016tk  ✅ 2/2 1308tk
google/gemma-2-27b-it                     ✅ 1/1 288tk    ✅ 1/1 384tk    ✅ 1/1 446tk    ✅ 1/1 714tk    ✅ 1/1 570tk    ❌ 0/1 535tk   ❌ 0/1 235tk
google/gemma-2-9b-it                      ❌ 0/2 186tk    ✅ 2/2 370tk    ✅ 2/2 368tk    ❌ 0/2 545tk    ✅ 1/1 492tk    ❌ 0/1 1730tk  ❌ 0/1 352tk
meta-llama/llama-3.1-405b-instruct        🔶 6/10 188tk   ✅ 8/10 514tk   🔶 6/10 284tk   🔶 7/10 356tk   🔶 4/10 343tk   🔶 2/5 255tk   ❌ 0/5 85tk
meta-llama/llama-3.1-70b-instruct         ✅ 5/6 367tk    ✅ 5/6 424tk    ✅ 6/6 452tk    🔶 2/6 546tk    ✅ 5/6 813tk    🔶 3/4 682tk   🔶 2/3 1461tk
meta-llama/llama-3.1-8b-instruct          ✅ 1/1 277tk    ✅ 1/1 441tk    ❌ 0/1 400tk    ❌ 0/1 5095tk   ✅ 1/1 2266tk   ❓ N/A         ❓ N/A
meta-llama/llama-3.2-11b-vision-instruct  ✅ 2/2 352tk    ✅ 2/2 493tk    ❌ 0/2 479tk    ✅ 2/2 2643tk   ❓ N/A          ❓ N/A         ❓ N/A
meta-llama/llama-3.2-90b-vision-instruct  🔶 2/4 237tk    🔶 2/4 288tk    🔶 3/4 336tk    🔶 1/4 233tk    ❓ N/A          ❓ N/A         ❓ N/A
nousresearch/hermes-2-pro-llama-3-8b      ✅ 1/1 341tk    ❌ 0/1 4274tk   ❌ 0/1 3760tk   ❌ 0/1 659tk    ❓ N/A          ❓ N/A         ❓ N/A
nousresearch/hermes-3-llama-3.1-405b      ✅ 2/2 317tk    ✅ 2/2 420tk    ✅ 2/2 325tk    ✅ 2/2 410tk    ✅ 1/1 821tk    ✅ 1/1 758tk   ✅ 1/1 1039tk
nousresearch/hermes-3-llama-3.1-70b       ❌ 0/2 173tk    ❌ 0/2 187tk    ❌ 0/2 202tk    ❌ 0/2 177tk    ❓ N/A          ❓ N/A         ❓ N/A
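The summary table above is produced by `gptme-eval` itself, but the per-run CSV files can also be post-processed directly. Below is a minimal sketch in Python; the column names (`model`, `eval`, `passed`, `tokens`) and the sample rows are assumptions for illustration and may differ from the actual `eval_results.csv` header:

```python
import csv
import io
from collections import defaultdict

# Hypothetical sample standing in for one eval_results.csv file;
# adjust the field names to match the real header before using this.
sample = """\
model,eval,passed,tokens
openai/gpt-4o,hello,True,258
openai/gpt-4o,prime100,False,454
anthropic/claude-3-haiku-20240307,hello,True,375
"""

# model -> [passed runs, total runs]
pass_counts = defaultdict(lambda: [0, 0])
for row in csv.DictReader(io.StringIO(sample)):
    passed, total = pass_counts[row["model"]]
    pass_counts[row["model"]] = [passed + (row["passed"] == "True"), total + 1]

for model, (passed, total) in sorted(pass_counts.items()):
    print(f"{model}: {passed}/{total}")
```

For real results, replace the in-memory sample with `open()` calls over `eval_results/*/eval_results.csv`, mirroring the glob used above.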

We are working on making the evals more robust, informative, and challenging.

Other evals#

We have considered running gptme on other evals, such as SWE-Bench, but have not yet done so.

If you are interested in running gptme on other evals, drop a comment in the issues!