
AI: post transformers

Evaluating Large Language Models Trained on Code

07 Sep 2025

Description

This July 2021 paper documents the development and evaluation of OpenAI's Codex models, large language models specialized in code generation, particularly synthesizing Python functions from docstrings. The authors introduce HumanEval, a hand-written dataset designed to assess the functional correctness of generated code through unit tests, a more robust metric than traditional match-based scores such as BLEU. The paper compares the performance of various Codex iterations, including supervised fine-tuned versions (Codex-S), against other models such as GPT-3, demonstrating significant improvements in pass rates with increased model size and number of generated samples. It also explores the limitations, broader impacts, and potential hazards of these models, discussing over-reliance, misalignment, economic implications for the labor market, and security concerns related to generating vulnerable or biased code. Finally, the paper touches on Codex-D, a model for generating docstrings from code, and emphasizes the need for continued research into safe and responsible AI deployment.

Sources:
https://arxiv.org/pdf/2107.03374
https://github.com/openai/human-eval
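The functional-correctness metric the paper reports is pass@k: the probability that at least one of k samples drawn from n generated samples passes the unit tests. A minimal sketch of the unbiased estimator from the paper (the official human-eval repository computes it with a numerically stable product; this direct binomial form is an illustrative simplification):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate, given n generated samples
    of which c passed the unit tests."""
    if n - c < k:
        # Fewer than k incorrect samples: every draw of k
        # must contain at least one correct sample.
        return 1.0
    # 1 - P(all k drawn samples are incorrect)
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 200 samples, 20 correct -> pass@1 = 0.1
print(pass_at_k(200, 20, 1))
```

With k = 1 this reduces to the plain pass fraction c/n; larger k rewards generating many diverse samples, which is why the paper's pass rates improve with more samples per problem.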


