Abstract:
Large Language Models (LLMs) have been shown to capture the syntax, semantics, and structure of programming languages, enabling the generation of accurate code for similar test cases through Few-Shot Learning (FSL) and prompt engineering. Although LLMs perform exceptionally well on small context-length inputs, they struggle to produce accurate results on large context-length inputs and out-of-distribution datasets, limiting their applicability to large-scale code generation tasks. Our work focuses on large-scale code generation from a custom dataset using LLMs. This research explored several small, open-source, state-of-the-art LLMs under various configurations of temperature and other hyper-parameters. With pre-trained LLMs and no hyper-parameter tuning, code generation accuracy was below 20%. By implementing Retrieval-Augmented Generation (RAG) to retrieve contextually relevant examples, the accuracy of the generated code improved to 65%–70% correctness, based on expert evaluations. A framework for reviewing the generated code, called ‘LLM Judge’, was developed to identify correctness, issues, and areas for improvement. By iteratively generating and refining code based on feedback from the ‘LLM Judge’, the accuracy of the generated code improved to 75%–80% by the end of the second iteration. These results highlight the potential of LLMs to automate test code generation. This work reduces the time required to write custom test-automation code from an average of two days to a few hours, thereby simplifying the development process for engineers.
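
The abstract describes a retrieve-generate-judge-refine pipeline. The following is a minimal sketch of that loop under stated assumptions; every name here (retrieve_examples, generate_code, judge_code, MAX_ITERATIONS) is a hypothetical placeholder, not the paper's actual implementation, and the LLM calls are stubbed out.

```python
# Illustrative sketch of the RAG + 'LLM Judge' feedback loop described in the abstract.
# All function names and values below are assumptions for illustration only.

MAX_ITERATIONS = 2  # the abstract reports gains by the end of the second iteration


def retrieve_examples(task: str, corpus: list[str], k: int = 3) -> list[str]:
    """Stand-in for RAG retrieval of contextually relevant examples."""
    return corpus[:k]  # a real system would rank by embedding similarity


def generate_code(task: str, examples: list[str], feedback: str = "") -> str:
    """Stand-in for the code-generation LLM call."""
    return f"# code for: {task}\n# examples used: {len(examples)}\n# feedback: {feedback}"


def judge_code(task: str, code: str) -> tuple[bool, str]:
    """Stand-in for the 'LLM Judge' reviewing correctness, issues, and improvements."""
    return False, "add error handling"  # (is_correct, feedback)


def generate_with_feedback(task: str, corpus: list[str]) -> str:
    """Retrieve examples, generate code, then refine it using judge feedback."""
    examples = retrieve_examples(task, corpus)
    code = generate_code(task, examples)
    for _ in range(MAX_ITERATIONS):
        ok, feedback = judge_code(task, code)
        if ok:
            break
        code = generate_code(task, examples, feedback)
    return code


if __name__ == "__main__":
    print(generate_with_feedback("automate login test", ["example_1", "example_2"]))
```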