Proud to announce that our paper "Automatic Unit Test Data Generation and Actor-Critic Reinforcement Learning for Code Synthesis" has been accepted to Findings of EMNLP 2023.
This is joint work with Matthieu Zimmer, Gerasimos Lampouras, Derrick Goh Xin Deik, and Ignacio Iacobacci.

Code Synthesis, the generation of programming language code from a natural language description, is a challenging problem for language models.
Various Reinforcement Learning methods have been proposed to improve the performance of pretrained models.
One approach is to use functional tests (Unit Tests) as the reward signal; however, this requires data consisting of (i) NL problem prompts and (ii) varied unit tests for each problem to assess functional correctness, and such data is often unavailable. Some such datasets exist, but they are limited in size and contain (relatively) simple problems.
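
For illustration only (a made-up instance, not taken from any existing dataset), one such test-annotated problem could look like this:

    # Hypothetical test-annotated problem: an NL prompt, a function signature,
    # and unit tests that define functional correctness for the RL reward.
    problem = {
        "prompt": "Return the sum of all even numbers in the list xs.",
        "signature": "def sum_even(xs):",
        "unit_tests": [
            "assert sum_even([1, 2, 3, 4]) == 6",
            "assert sum_even([]) == 0",
            "assert sum_even([7, 9]) == 0",
        ],
    }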

We show how to programmatically derive new training data for functional-test-based Code Synthesis RL, by automatically generating tests in a strongly typed language (Java) and converting them to a weakly typed language (Python). This allows us to generate arbitrary amounts of test-annotated data.
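
As a rough illustrative sketch of the conversion idea (not our actual pipeline; the assertion format, function name, and regex are all invented for this example), a JUnit-style assertion produced by an automatic Java test generator can be mapped to a Python assert:

    import re

    def junit_assert_to_python(java_assertion: str) -> str:
        """Simplified sketch: map 'assertEquals(expected, call(args));' to
        'assert call(args) == expected'. A real converter must also translate
        Java literals, types, and collections into their Python equivalents."""
        match = re.match(r"assertEquals\((.+),\s*(\w+\(.*\))\);?", java_assertion.strip())
        if match is None:
            raise ValueError(f"unsupported assertion: {java_assertion}")
        expected, call = match.groups()
        return f"assert {call} == {expected}"

    print(junit_assert_to_python('assertEquals("olleh", reverseString("hello"));'))
    # -> assert reverseString("hello") == "olleh"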

We then introduce a straightforward yet effective REINFORCE-based Actor-Critic RL approach (sketched below) that uses Unit Test-annotated data to tune a function-level Code Synthesis LM.
Crucially, we find that keeping the Critic in sync with the Policy yields better results than pretraining and freezing the Critic.
Using our automatically generated data for augmentation further improves model performance.
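
To make the training scheme concrete, here is a minimal sketch of one REINFORCE-with-baseline step of the kind described above. It is a simplification, not our exact implementation: policy_model, critic, and run_tests are assumed stand-ins (a code LM with sample/log_probs methods returning PyTorch-style tensors, a value head, and a sandboxed test harness returning the fraction of unit tests passed). The point it illustrates is that the critic is updated on the same batch as the policy rather than pretrained and frozen.

    def actor_critic_step(policy_model, critic, tokenizer, run_tests, optimizer,
                          prompt, unit_tests):
        """One REINFORCE-with-baseline update driven by a unit test reward (sketch)."""
        # Sample a candidate program from the current policy.
        sample = policy_model.sample(prompt)                 # assumed API
        log_probs = policy_model.log_probs(prompt, sample)   # assumed API: per-token log-probs

        # Reward: the fraction of the problem's unit tests the sampled program passes,
        # obtained by executing them in a sandbox (run_tests stands in for that harness).
        reward = run_tests(tokenizer.decode(sample), unit_tests)

        # The critic predicts the expected reward for this prompt and serves as the baseline.
        value = critic(prompt)
        advantage = reward - value.detach()

        policy_loss = -(advantage * log_probs.sum())
        critic_loss = (value - reward) ** 2   # critic trained jointly, staying in sync with the policy

        (policy_loss + critic_loss).backward()
        optimizer.step()
        optimizer.zero_grad()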

Preprint available at arxiv.org/abs/2310.13669; code and model will be made available.

Automatic Unit Test Data Generation and Actor-Critic Reinforcement Learning for Code Synthesis

The advent of large pre-trained language models in the domain of Code Synthesis has shown remarkable performance on various benchmarks, treating the problem of Code Generation in a fashion similar to Natural Language Generation, trained with a Language Modelling (LM) objective. In addition, the property of programming language code being precisely evaluable with respect to its semantics -- through the use of Unit Tests to check its functional correctness -- lends itself to using Reinforcement Learning (RL) as a further training paradigm. Previous work has shown that RL can be applied as such to improve models' coding capabilities; however, such RL-based methods rely on a reward signal based on defined Unit Tests, which are much harder to obtain compared to the huge crawled code datasets used in LM objectives. In this work, we present a novel approach to automatically obtain data consisting of function signatures and associated Unit Tests, suitable for RL training of Code Synthesis models. We also introduce a simple yet effective Actor-Critic RL training scheme and show that it, in conjunction with automatically generated training data, improves a pre-trained code language model's performance by up to 9.9% over the original underlying code synthesis LM, and up to 4.3% over RL-based models trained with standard PPO or CodeRL.


Thrilled to announce the Regular Expression Inference Challenge (REIC), with Mojtaba Valizadeh, Ignacio Iacobacci, and Martin Berger.

REI is a supervised machine learning (ML) and program synthesis task, and poses the problem of finding minimal regular expressions from examples: Given two finite sets of strings P and N and a cost function cost(⋅), the task is to generate an expression r that accepts all strings in P and rejects all strings in N, while no other such expression r' exists with cost(r') < cost(r).
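
A toy instance of our own (not taken from the challenge data): with P = {"00", "01"} and N = {"10", "11"}, the expression 0. accepts all of P and rejects all of N; whether it is also minimal depends on the cost function. Checking consistency of a candidate is simple; below, plain regex length stands in for the challenge's parameterised cost function:

    import re

    def is_consistent(candidate: str, P: set, N: set) -> bool:
        """True iff the regex fully matches every string in P and none in N."""
        pattern = re.compile(candidate)

        def accepts(s: str) -> bool:
            return pattern.fullmatch(s) is not None

        return all(accepts(s) for s in P) and not any(accepts(s) for s in N)

    P = {"00", "01"}
    N = {"10", "11"}
    cost = len  # simplification; the challenge defines its own cost function
    print(is_consistent("0.", P, N), cost("0."))   # True 2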

Turns out, this sort of inference seems to be really hard for current deep learning (DL) approaches. Prompting StarChat-beta -- a SOTA large LM for code with 15.5B parameters -- yields extremely poor results.
Even a fully supervised 300M-parameter model, which we call ReGPT, produces precise and minimal expressions only around 14% of the time.

Check out our preprint on arXiv: arxiv.org/abs/2308.07899
The challenge is available on CodaLab: codalab.lisn.upsaclay.fr/compe

We formally define the problem, and provide training and validation data, as well as starter code for all our baselines.

We invite researchers anywhere to participate in tackling our challenge.

The Regular Expression Inference Challenge

We propose \emph{regular expression inference (REI)} as a challenge for code/language modelling, and the wider machine learning community. REI is a supervised machine learning (ML) and program synthesis task, and poses the problem of finding minimal regular expressions from examples: Given two finite sets of strings $P$ and $N$ and a cost function $\text{cost}(\cdot)$, the task is to generate an expression $r$ that accepts all strings in $P$ and rejects all strings in $N$, while no other such expression $r'$ exists with $\text{cost}(r')<\text{cost}(r)$. REI has advantages as a challenge problem: (i) regular expressions are well-known, widely used, and a natural idealisation of code; (ii) REI's asymptotic worst-case complexity is well understood; (iii) REI has a small number of easy to understand parameters (e.g.~$P$ or $N$ cardinality, string lengths of examples, or the cost function); this lets us easily finetune REI-hardness; (iv) REI is an unsolved problem for deep learning based ML. Recently, an REI solver was implemented on GPUs, using program synthesis techniques. This enabled, for the first time, fast generation of minimal expressions for complex REI instances. Building on this advance, we generate and publish the first large-scale datasets for REI, and devise and evaluate several initial heuristic and machine learning baselines. We invite the community to participate and explore ML methods that learn to solve REI problems. We believe that progress in REI directly translates to code/language modelling.


so, I guess I'm ... what's going on, Mastodon?
I mostly post about nothing and just lurk, and I doubt that'll change during this ...
