Evals#
gptme provides LLMs with a wide variety of tools, but how well do models make use of them? Which tasks can they complete, and which ones do they struggle with? How far can they get on their own, without any human intervention?
To answer these questions, we have created an evaluation suite that tests the capabilities of LLMs on a range of tasks.
Note
The evaluation suite is still under development, but the eval harness is mostly complete.
Usage#
You can run the simple hello eval with gpt-4o like this:
gptme-eval hello --model openai/gpt-4o
However, we recommend running it in Docker to improve isolation and reproducibility:
make build-docker
docker run \
-e "OPENAI_API_KEY=<your api key>" \
-v $(pwd)/eval_results:/app/eval_results \
gptme-eval hello --model openai/gpt-4o
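Results from each run end up as CSV files under the mounted eval_results/ directory. As a rough illustration of how such per-run results could be aggregated into pass rates, here is a small Python sketch. The column names (model, test, passed) are hypothetical; the real eval_results.csv schema may differ.

```python
import csv
import io
from collections import defaultdict

# Hypothetical CSV layout: one row per (model, test) run with a pass/fail flag.
# The real eval_results.csv produced by gptme-eval may use different columns.
SAMPLE = """model,test,passed
openai/gpt-4o,hello,True
openai/gpt-4o,hello,True
openai/gpt-4o,prime100,False
"""

def pass_rates(csv_text: str) -> dict:
    """Return {(model, test): (passes, total)} aggregated from raw CSV text."""
    tally = defaultdict(lambda: [0, 0])
    for row in csv.DictReader(io.StringIO(csv_text)):
        key = (row["model"], row["test"])
        tally[key][1] += 1           # count every run
        if row["passed"] == "True":
            tally[key][0] += 1       # count passing runs
    return {k: tuple(v) for k, v in tally.items()}

rates = pass_rates(SAMPLE)
print(rates[("openai/gpt-4o", "hello")])     # (2, 2)
print(rates[("openai/gpt-4o", "prime100")])  # (0, 1)
```

In practice the `gptme-eval eval_results/*/eval_results.csv` command shown below performs this aggregation for you.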
Results#
Here are the results of the evals we have run so far. Each cell shows passing runs out of total runs, along with token usage (tk): ✅ all runs passed, 🔶 some passed, ❌ none passed, ❓ not run.
$ gptme-eval eval_results/*/eval_results.csv
Model hello hello-patch hello-ask prime100 init-git init-rust whois-superuserlabs-ceo
---------------------------------------- -------------- -------------- -------------- --------------- --------------- ------------- -------------------------
anthropic/claude-3-5-haiku-20241022 ✅ 14/14 443tk ✅ 14/14 381tk ✅ 14/14 451tk ✅ 14/14 1007tk ✅ 13/13 938tk ❓ N/A ✅ 1/1 2619tk
anthropic/claude-3-5-sonnet-20240620 ✅ 34/34 630tk ✅ 34/34 542tk ✅ 34/34 710tk ✅ 34/34 924tk ✅ 34/34 1098tk ✅ 4/4 1504tk 🔶 3/4 1306tk
anthropic/claude-3-5-sonnet-20241022 ✅ 13/13 454tk ✅ 13/13 454tk ✅ 13/13 440tk ✅ 13/13 1125tk ✅ 13/13 733tk ❓ N/A ❓ N/A
anthropic/claude-3-haiku-20240307 ✅ 34/34 388tk ✅ 34/34 375tk ✅ 34/34 432tk ❌ 6/34 781tk 🔶 24/34 903tk 🔶 3/4 670tk ✅ 4/4 1535tk
openai/gpt-4-turbo ✅ 3/3 255tk ✅ 3/3 312tk ✅ 3/3 376tk ✅ 3/3 527tk ✅ 4/4 590tk ✅ 4/4 784tk ✅ 6/7 819tk
openai/gpt-4o ✅ 48/48 298tk ✅ 48/48 304tk ✅ 48/48 365tk 🔶 35/48 456tk ✅ 48/48 739tk ✅ 5/5 663tk ✅ 5/5 1253tk
openai/gpt-4o-mini ✅ 48/49 277tk ✅ 49/49 314tk ✅ 49/49 371tk ✅ 49/49 591tk ✅ 44/49 762tk ✅ 6/6 813tk ✅ 6/6 951tk
openai/o1-mini ✅ 3/3 354tk ✅ 3/3 431tk ✅ 3/3 460tk 🔶 2/3 567tk 🔶 3/5 2222tk 🔶 1/5 1412tk 🔶 2/3 813tk
openai/o1-preview ✅ 2/2 308tk ✅ 2/2 570tk 🔶 1/2 549tk ✅ 2/2 490tk ✅ 3/3 823tk ✅ 1/1 656tk ✅ 1/1 1998tk
google/gemini-flash-1.5 ✅ 2/2 225tk ✅ 2/2 401tk ✅ 2/2 430tk ❌ 0/2 296tk ✅ 1/1 686tk ❌ 0/1 661tk ✅ 1/1 1014tk
google/gemini-pro-1.5 ✅ 1/1 341tk ✅ 1/1 419tk ✅ 1/1 456tk ✅ 1/1 676tk 🔶 2/3 431tk 🔶 1/2 1016tk ✅ 2/2 1308tk
google/gemma-2-27b-it ✅ 1/1 288tk ✅ 1/1 384tk ✅ 1/1 446tk ✅ 1/1 714tk ✅ 1/1 570tk ❌ 0/1 535tk ❌ 0/1 235tk
google/gemma-2-9b-it ❌ 0/2 186tk ✅ 2/2 370tk ✅ 2/2 368tk ❌ 0/2 545tk ✅ 1/1 492tk ❌ 0/1 1730tk ❌ 0/1 352tk
meta-llama/llama-3.1-405b-instruct 🔶 30/48 197tk 🔶 34/48 481tk 🔶 34/48 361tk 🔶 35/48 363tk 🔶 23/48 399tk 🔶 2/5 255tk ❌ 0/5 85tk
meta-llama/llama-3.1-70b-instruct ✅ 5/6 367tk ✅ 5/6 424tk ✅ 6/6 452tk 🔶 2/6 546tk ✅ 5/6 813tk 🔶 3/4 682tk 🔶 2/3 1461tk
meta-llama/llama-3.1-8b-instruct ✅ 1/1 277tk ✅ 1/1 441tk ❌ 0/1 400tk ❌ 0/1 5095tk ✅ 1/1 2266tk ❓ N/A ❓ N/A
meta-llama/llama-3.2-11b-vision-instruct ✅ 2/2 352tk ✅ 2/2 493tk ❌ 0/2 479tk ✅ 2/2 2643tk ❓ N/A ❓ N/A ❓ N/A
meta-llama/llama-3.2-90b-vision-instruct 🔶 2/4 237tk 🔶 2/4 288tk 🔶 3/4 336tk 🔶 1/4 233tk ❓ N/A ❓ N/A ❓ N/A
nousresearch/hermes-2-pro-llama-3-8b ✅ 1/1 341tk ❌ 0/1 4274tk ❌ 0/1 3760tk ❌ 0/1 659tk ❓ N/A ❓ N/A ❓ N/A
nousresearch/hermes-3-llama-3.1-405b ✅ 2/2 317tk ✅ 2/2 420tk ✅ 2/2 325tk ✅ 2/2 410tk ✅ 1/1 821tk ✅ 1/1 758tk ✅ 1/1 1039tk
nousresearch/hermes-3-llama-3.1-70b ❌ 0/2 173tk ❌ 0/2 187tk ❌ 0/2 202tk ❌ 0/2 177tk ❓ N/A ❓ N/A ❓ N/A
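Each column in the table corresponds to an eval task: a prompt given to the agent plus checks on the resulting workspace. To make the scoring idea concrete, here is a minimal sketch of what such a task might look like. The EvalCase shape, check names, and workspace dict are hypothetical, not the harness's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical shape of an eval case; the real harness's schema may differ.
@dataclass
class EvalCase:
    name: str
    prompt: str                      # instruction given to the agent
    checks: dict[str, Callable[[dict], bool]] = field(default_factory=dict)

def score(case: EvalCase, workspace: dict) -> tuple[int, int]:
    """Run each named check against the agent's output workspace."""
    results = [check(workspace) for check in case.checks.values()]
    return sum(results), len(results)

hello = EvalCase(
    name="hello",
    prompt="Write a hello.py that prints 'Hello, world!'",
    checks={
        "file exists": lambda ws: "hello.py" in ws,
        "correct output": lambda ws: "Hello, world!" in ws.get("stdout", ""),
    },
)

# Simulated workspace after a successful agent run
ws = {"hello.py": "print('Hello, world!')", "stdout": "Hello, world!\n"}
print(score(hello, ws))  # (2, 2)
```

A partially failing run would score below the total, which is what the 🔶 entries in the table represent.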
We are working on making the evals more robust, informative, and challenging.
Other evals#
We have considered running gptme on other evals such as SWE-Bench, but have not yet completed the integration (see PR #142).
If you are interested in running gptme on other evals, drop a comment in the issues!