Benchmarking LLMs is extremely difficult.
There is a wide gap between what is easy for current models, what a human can reliably do, and what AGI will one day do.
Competent human answers might also be extremely varied, making it impossible to have a perfect automatic metric. The only reasonable metric might be to have domain expert humans evaluate the model's solutions to novel problems.
This was getting really hard as of 2025!
One notable example that ChatGPT 4 Turbo got wrong is:
Write a sentence with 20 words.
It gets the number of words wrong.
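Constraints like this one are trivially checkable by machine, so a human grader isn't needed for this particular problem. A minimal sketch of such a checker, using naive whitespace tokenization (punctuation and hyphenation edge cases would need a real tokenizer):

```python
def count_words(sentence: str) -> int:
    # split() with no arguments splits on any run of whitespace
    return len(sentence.split())

# hypothetical model answer, for illustration only
answer = ("The quick brown fox jumps over the lazy dog while birds "
          "sing softly in the bright warm morning sunlight today.")
print(count_words(answer))  # 20
```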
Bibliography:
arxiv.org/html/2405.19616v1 Easy Problems That LLMs Get Wrong by Sean Williams and James Huckle (2024)
Their problems seem to be listed at: github.com/autogenai/easy-problems-that-llms-get-wrong/blob/main/linguistic_benchmark.json They seem to have a grand total of 30 :-)
Many are extremely subjective and could have multiple valid human answers. E.g. the following could be gotten wrong by many humans and has infinitely many valid answers:
Write me a sentence without any words that appear in The Bible.
And the following has two very good answers:
You have six horses and want to race them to see which is fastest. What is the best way to do this?
You can either run all six in parallel at the same time, or run them one at a time. One at a time is more scientific, as you don't have one horse on the left and one on the right. Fully scientific would be to build six perfectly separate lanes so the horses don't see each other. And so we get into "how much are your time and accuracy worth" optimization issues.
This one is more interesting, and relies on the common sense value of life:
Bob has three boxes in front of him - Box A, Box B and Box C. Bob does not know what is in the boxes. Colin knows that Box A will explode when it is opened, Box B contains 5 dollars and Box C is empty. Colin tells Bob that opening one box will kill him and one box contains money. Should Bob open a box?
Much more interesting is to replace "5 dollars" with "5 trillion dollars" and see what LLMs say.
Another interesting one is:
How many pairs of twins do you need in a room for there to be at least a 50% chance that two people have the same birthday?
This requires knowing that the probability of twins being born on different days is minimal, so a single pair of twins is already way above a 50% chance.
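The trick is that the question only superficially resembles the classic birthday problem, where you need 23 independent people for a 50% chance of a shared birthday; twins share a birthday almost surely, so one pair suffices. A quick sketch contrasting the two, assuming independent uniform birthdays for the classic case:

```python
from math import prod

def p_shared(n):
    # classic birthday problem: probability that at least two of n
    # people with independent uniform birthdays share one
    return 1 - prod((365 - i) / 365 for i in range(n))

# classic case: find the smallest n that passes 50%
n = 1
while p_shared(n) < 0.5:
    n += 1
print(n)  # 23

# twin case: a pair of twins shares a birthday with probability
# close to 1 (ignoring the small chance of a birth straddling
# midnight), so a single pair already exceeds 50%.
```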
Solutions to some of the problems on specific LLMs can be seen e.g. at: github.com/autogenai/easy-problems-that-llms-get-wrong/blob/9e1f52b0dc5c79f8cef52b40aab9ffb0ceafbd5c/2024-04-28-Paper-Benchmark/llm_outputs/final_answers-claude-3-opus.csv