We use the term "automatic programming" to mean "generating code from natural language".
The ultimate high-level version of which is of course to program entirely in natural language, which is basically the goal of artificial general intelligence, especially according to The Employment Test definition of AGI.
The term has not always had that sense; the quote "automatic programming has always been a euphemism for programming in a higher-level language than was then available to the programmer" sums it up.
But in the current AI boom, this is the sense that matters, so that's what we will go with.
Basically, they require users to hand-code a metric and provide a program skeleton with some regions of the code marked as replaceable, and then the system focuses on rewriting those marked regions to optimize the metric.
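A minimal sketch of what such an input might look like (the marker comments, function names and the toy metric below are made up for illustration, not their actual API):

import random

# Hypothetical skeleton: only the region between the markers is evolved,
# everything else stays fixed.
def place_points(n):
    # EVOLVE-BLOCK-START
    # Naive initial placement: the system keeps mutating this region.
    return [(random.random(), random.random()) for _ in range(n)]
    # EVOLVE-BLOCK-END

# Hypothetical hand-coded metric, higher is better: the system runs each
# candidate program and keeps the variants that improve this score.
def evaluate(points):
    return min(
        ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
        for i, (x1, y1) in enumerate(points)
        for x2, y2 in points[i + 1:]
    )

print(evaluate(place_points(10)))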
All the novel results they announced were in constraint satisfaction or optimization problems. Their results are still awesome, but it's not very different from AlphaGo-style systems.
Appears to be a very small number of newly created problems: 164 in total.
The tests are present in a gzip inside the Git repo: github.com/openai/human-eval/blob/master/data/HumanEval.jsonl.gz
To get a quick overview of the problems with jq, after decompressing the file:
gunzip -k HumanEval.jsonl.gz
jq -r '"==== \(.task_id) \(.entry_point)\n\(.prompt)"' <HumanEval.jsonl
The first two problems are:
==== HumanEval/0 has_close_elements
from typing import List


def has_close_elements(numbers: List[float], threshold: float) -> bool:
    """ Check if in given list of numbers, are any two numbers closer to each other than
    given threshold.
    >>> has_close_elements([1.0, 2.0, 3.0], 0.5)
    False
    >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)
    True
    """

==== HumanEval/1 separate_paren_groups
from typing import List


def separate_paren_groups(paren_string: str) -> List[str]:
    """ Input to this function is a string containing multiple groups of nested parentheses. Your goal is to
    separate those group into separate strings and return the list of those.
    Separate groups are balanced (each open brace is properly closed) and not nested within each other
    Ignore any spaces in the input string.
    >>> separate_paren_groups('( ) (( )) (( )( ))')
    ['()', '(())', '(()())']
    """
So we understand that each task gives the model an empty function with a docstring as input, and the model has to fill in the function body.
The paper also shows that there can be other defined functions besides the one you have to implement.
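Each JSONL entry also carries test and entry_point fields, where test defines a check function, so checking a completion boils down to something like the following sketch (simplified: the real harness runs this untrusted code in an isolated process with timeouts; the hand-written body below is mine, not from the dataset):

import json

def run_one(problem, completion):
    # Assemble: signature + docstring, then the model-written body,
    # then the dataset's tests, then actually call them.
    program = (
        problem["prompt"]
        + completion
        + "\n" + problem["test"]
        + "\ncheck(" + problem["entry_point"] + ")"
    )
    try:
        exec(program, {})  # danger: runs arbitrary code, sandbox in real use
        return True
    except Exception:
        return False

# Usage: a hand-written body for HumanEval/0.
with open("HumanEval.jsonl") as f:
    problem = json.loads(f.readline())
body = """    for i, a in enumerate(numbers):
        for b in numbers[i + 1:]:
            if abs(a - b) < threshold:
                return True
    return False
"""
print(run_one(problem, body))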
This one focuses on improving the speed of important numerical algorithms as compared to popular implementations.
The general pattern can be seen by observing one of the optimizations, e.g.: algotune.io/aes_gcm_encryption_anthropic_claude-opus-4-1-20250805.html which shows the chat that the system had.
They define an OS-like interface for editing files and running commands right on the prompt, and at each stage tell the model how many credits are left for the given API and what the speedup was. Amazing. Each task has a $1 budget per provider. Their software then parses commands out of the LLM output and sends formatted responses back. Quite amazing that it works at all.
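A hedged sketch of what such a driver loop might look like (the RUN: command syntax and the bookkeeping are invented for illustration; their actual protocol differs in its details):

import re
import subprocess

BUDGET = 1.00  # dollars per provider per task

def drive(llm_call):
    # llm_call(prompt) -> (reply_text, dollar_cost) is assumed given.
    spent = 0.0
    feedback = "Task: speed up solver.py. Budget left: $1.00."
    while spent < BUDGET:
        reply, cost = llm_call(feedback)
        spent += cost
        # Parse a command out of the free-form LLM output, e.g. a line
        # like "RUN: python benchmark.py".
        match = re.search(r"^RUN: (.+)$", reply, re.MULTILINE)
        if not match:
            feedback = "No command found. Budget left: $%.2f" % (BUDGET - spent)
            continue
        result = subprocess.run(match.group(1), shell=True,
                                capture_output=True, text=True)
        # Send a formatted response back with the remaining budget.
        feedback = "Output:\n%s\nBudget left: $%.2f" % (
            result.stdout, BUDGET - spent)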
All pieces of code seem to be in Python, and the speedups come mainly from reaching for more advanced external computing libraries: compiling with Cython, or calling faster external libraries that are pre-compiled or more parallel. So it is not that impressive from a purely algorithmic point of view, but it is not bad either.
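For example, a typical win of this kind is just moving an interpreted loop into a pre-compiled library call (an illustrative toy, not one of their actual tasks):

import numpy as np

def dot_python(a, b):
    # Baseline: pure-Python loop, interpreted element by element.
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

def dot_numpy(a, b):
    # Same algorithm, but the loop runs in pre-compiled C inside NumPy,
    # typically orders of magnitude faster.
    return float(np.dot(a, b))

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)
assert np.isclose(dot_python(a, b), dot_numpy(a, b))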
Correctness is checked automatically by comparing the output of the optimized solution to that of the original non-optimized one, likely on some set of inputs.
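Presumably something along these lines (a sketch of the general idea, not their actual harness):

import numpy as np

def is_correct(reference_solve, optimized_solve, n_trials=10):
    # Run both implementations on the same random inputs and allow
    # for small floating point differences.
    for seed in range(n_trials):
        rng = np.random.default_rng(seed)
        problem = rng.random((100, 100))
        if not np.allclose(reference_solve(problem), optimized_solve(problem)):
            return False
    return True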
Their most interesting subset, the -hard one, appears to be present at: huggingface.co/datasets/bigcode/bigcodebench-hard in Parquet format. OMG why.
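At least the Parquet loads trivially with the Hugging Face datasets library (assuming the dataset keeps its current layout):

from datasets import load_dataset

# Downloads the Parquet files and exposes them as regular datasets,
# one per split; print to inspect the split names and columns.
ds = load_dataset("bigcode/bigcodebench-hard")
print(ds)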
By Princeton people.
This one aims to solve GitHub issues. It appears to contain 2,294 real-world GitHub issues and their corresponding pull requests.
Evaluation is simply based on "does the pull request make some pre-written failing test cases pass".
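So roughly, per issue (a sketch of the idea; the real harness sets up a per-repository environment at the issue's base commit and also checks that previously passing tests don't break):

import subprocess

def evaluate(repo_dir, model_patch, failing_tests):
    # Apply the model-generated patch to the repository...
    subprocess.run(["git", "apply", "-"], input=model_patch,
                   text=True, cwd=repo_dir, check=True)
    # ...and re-run the pre-written failing tests: the issue counts as
    # resolved if they now pass.
    result = subprocess.run(["python", "-m", "pytest", *failing_tests],
                            cwd=repo_dir)
    return result.returncode == 0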
The dataset appears to be at: huggingface.co/datasets/princeton-nlp/SWE-bench in Parquet format.
Tasks from Upwork.
