AI: post transformers

BLEU: Automatic Machine Translation Evaluation

10 Sep 2025

Description

This July 2002 paper introduced BLEU (Bilingual Evaluation Understudy), an automatic and inexpensive method for evaluating machine translation (MT) quality. It highlights the limitations of human evaluation, such as its high cost and turnaround time, and proposes BLEU as a quick, language-independent alternative that correlates strongly with human judgment. The core idea is to measure the "closeness" of a machine translation to one or more human reference translations using a modified n-gram precision metric combined with a brevity penalty. The paper details the mathematical formulation of the BLEU score, evaluates it on both human and machine translations, and demonstrates its correlation with human assessment across several languages.

Source: https://dl.acm.org/doi/10.3115/1073083.1073135
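To make the two ingredients named above concrete, here is a minimal sketch of a sentence-level BLEU computation with uniform n-gram weights: clipped (modified) n-gram precision plus the brevity penalty. The paper itself defines the score over an entire test corpus, and the function names and whitespace tokenization here are assumptions for illustration, not taken from the episode.

```python
import math
from collections import Counter


def modified_precision(candidate, references, n):
    """Clipped n-gram counts: each candidate n-gram is credited at most as
    many times as it occurs in any single reference translation."""
    cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
    max_ref = Counter()
    for ref in references:
        ref_counts = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        for gram, count in ref_counts.items():
            max_ref[gram] = max(max_ref[gram], count)
    clipped = sum(min(count, max_ref[gram]) for gram, count in cand.items())
    return clipped, sum(cand.values())


def bleu(candidate, references, max_n=4):
    """Sentence-level BLEU sketch: geometric mean of modified 1..max_n-gram
    precisions, scaled by the brevity penalty."""
    candidate = candidate.split()
    references = [r.split() for r in references]
    log_precisions = []
    for n in range(1, max_n + 1):
        clipped, total = modified_precision(candidate, references, n)
        if clipped == 0:
            return 0.0  # any zero precision drives the geometric mean to zero
        log_precisions.append(math.log(clipped / total))
    c = len(candidate)
    # Effective reference length: the reference length closest to the candidate's.
    r = min((len(ref) for ref in references), key=lambda length: abs(length - c))
    brevity_penalty = 1.0 if c > r else math.exp(1 - r / c)
    return brevity_penalty * math.exp(sum(log_precisions) / max_n)


print(bleu("the cat is on the mat",
           ["the cat is on the mat", "there is a cat on the mat"]))  # 1.0
```

The clipping step prevents a candidate from gaming precision by repeating high-frequency reference words, while the brevity penalty keeps very short candidates from scoring well on precision alone.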
