Universitätsbibliothek Heidelberg
Status: Bibliographic entry

Availability
Location: ---
Copies: ---
heiBIB
Online resource
Authors: Le, Kim Tuyen [author]
Andrzejak, Artur [author]
Title: Rethinking AI code generation
Subtitle: a one-shot correction approach based on user feedback
Statement of responsibility: Kim Tuyen Le, Artur Andrzejak
Online year: 2024
Year: 12 July 2024
Extent: 42 pages
Illustrations: illustrations
Notes: Viewed on 10 Dec 2024
Source title: Contained in: Automated software engineering
Source place: Dordrecht [et al.]: Springer Science + Business Media B.V, 1994
Source year: 2024
Source volume/issue: 31(2024), 2, article ID 60, pages 1-42
Source ISSN: 1573-7535
Abstract: Code generation has become an integral feature of modern IDEs, attracting significant attention. Notable approaches like GitHub Copilot and TabNine have been proposed to tackle this task. However, these tools may shift code writing tasks towards code reviewing, which involves modifications by users. Despite the advantages of user feedback, user responses remain transient and lack persistence across interaction sessions. This is attributed to the inherent characteristics of generative AI models, which require explicit re-training to integrate new data. Additionally, the non-deterministic and unpredictable nature of AI-powered models limits thorough examination of their unforeseen behaviors. We propose a methodology named One-shot Correction to mitigate these issues in natural language to code translation models with no additional re-training. We utilize decomposition techniques to break down code translation into sub-problems. The final code is constructed from code snippets for each query chunk, extracted from user feedback or selectively generated by a generative model. Our evaluation indicates comparable or improved performance compared to other models. Moreover, the methodology offers straightforward and interpretable approaches, which enable in-depth examination of unexpected results and facilitate insights for potential enhancements. We also illustrate that user feedback can substantially improve code translation models without re-training. Ultimately, we develop a preliminary GUI application to demonstrate the utility of our methodology in simplifying customization and assessment of suggested code for users.
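The abstract describes the methodology only at a high level. As a purely illustrative aid (not taken from the paper), the following Python sketch shows one way the described pipeline could look: decompose a natural-language query into chunks, reuse a stored user correction for a chunk if one exists, and otherwise fall back to a generative model, with no re-training involved. All identifiers (FeedbackStore, decompose_query, translate) are hypothetical.

    # Illustrative sketch only; names and decomposition logic are assumptions,
    # not the authors' implementation.
    from typing import Callable, Dict, List, Optional


    class FeedbackStore:
        """Keeps user-corrected snippets per query chunk so a single correction
        ("one shot") can persist across sessions (assumption)."""

        def __init__(self) -> None:
            self._corrections: Dict[str, str] = {}

        def record(self, chunk: str, corrected_code: str) -> None:
            # Store a user's corrected snippet under a normalized chunk key.
            self._corrections[chunk.strip().lower()] = corrected_code

        def lookup(self, chunk: str) -> Optional[str]:
            return self._corrections.get(chunk.strip().lower())


    def decompose_query(query: str) -> List[str]:
        # Placeholder decomposition: split on "and"/commas; the paper presumably
        # uses a more principled technique to form query chunks.
        parts = query.replace(" and ", ",").split(",")
        return [p.strip() for p in parts if p.strip()]


    def translate(query: str,
                  store: FeedbackStore,
                  generate_snippet: Callable[[str], str]) -> str:
        """Assemble the final code chunk by chunk: a stored user correction wins,
        otherwise fall back to the generative model -- no re-training needed."""
        snippets = []
        for chunk in decompose_query(query):
            snippets.append(store.lookup(chunk) or generate_snippet(chunk))
        return "\n".join(snippets)


    if __name__ == "__main__":
        store = FeedbackStore()
        store.record("sort the list descending", "items.sort(reverse=True)")
        fake_model = lambda chunk: f"# generated code for: {chunk}"
        print(translate("read numbers from a file and sort the list descending",
                        store, fake_model))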
DOI: 10.1007/s10515-024-00451-y
URL: Please note: This is a bibliographic entry. Full-text access for members of the university is available only if the corresponding journal/collected volume is covered by a subscription or if it is an open-access title.

Free of charge: Full text: https://doi.org/10.1007/s10515-024-00451-y
DOI: https://doi.org/10.1007/s10515-024-00451-y
Medium: Online resource
Language: English
Subject headings: Artificial Intelligence
 Code interpretation
 Code translation
 Large Language Models
 One-shot correction
 User feedback
K10plus-PPN: 191180555X
Links: → Journal

Permanent link to this title (bookmarkable): https://katalog.ub.uni-heidelberg.de/titel/69282408