INTEGRATED STUDY OF AUTOMATED TRANSLATION QUALITY AND PHRASEOLOGICAL EQUIVALENCE IN ENGLISH-UZBEK TRANSLATION


Shukurova Yulduz Yaxshimurotovna

Abstract

Machine translation (MT) of highly idiomatic text remains a formidable challenge, especially for low-resource, agglutinative languages such as Uzbek. This paper presents a comprehensive study integrating (1) MT quality evaluation methods and (2) the translation of English phraseological units (idioms, proverbs, phrasal verbs) into Uzbek. We review structural, cultural, and semantic issues in translating English idiomatic expressions into Uzbek, and evaluate how current MT systems (e.g., Google Translate, ChatGPT) handle them. Methodologically, we detail automatic metrics (BLEU, METEOR), human evaluation (fluency/adequacy ratings, post-editing effort), and comparative translation strategies (literal, idiomatic, calque, paraphrase, and others). Our analysis includes tables contrasting machine outputs with human translations for representative idioms. We discuss the impact of Uzbek's agglutinative morphology and cultural specificity on translation accuracy. Finally, we offer recommendations for improving Uzbek MT - including richer phraseological corpora, better morphological processing, and integration of cultural knowledge - and suggest how phraseological insights can be embedded in MT models.



How to Cite

INTEGRATED STUDY OF AUTOMATED TRANSLATION QUALITY AND PHRASEOLOGICAL EQUIVALENCE IN ENGLISH-UZBEK TRANSLATION. (2026). Journal of Multidisciplinary Sciences and Innovations, 5(01), 1044-1050. https://doi.org/10.55640/

References

1. Banerjee, S., & Lavie, A. (2005). METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for MT.

2. Papineni, K., Roukos, S., Ward, T., & Zhu, W. (2002). BLEU: A method for automatic evaluation of machine translation. Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), 311-318.

3. Snover, M., Madnani, N., Dorr, B. J., & Schwartz, R. (2009). Fluency, adequacy, or HTER? Exploring different human judgments with a tunable MT metric. Proceedings of the Fourth Workshop on Statistical Machine Translation, 259-268.

4. Microsoft. (2025). What is a BLEU score? Microsoft Azure Documentation. Retrieved 2025 from https://learn.microsoft.com

5. Abdurashetona, A. M., Rashidova, U., & Sobirova, M. (2025). The issue of translating idioms between Uzbek and English in natural language processing. AIP Conference Proceedings, 3377(1), 070002.

6.Yaxshimurotovna, S. Y. (2025). CULTURAL-CONNOTATIVE FEATURES OF PHRASEOLOGICAL UNITS IN DIFFERENT LANGUAGES AND THEIR INTERPRETATION THROUGH ARTIFICIAL INTELLIGENCE. SHOKH LIBRARY, 1(13).