INTEGRATED STUDY OF AUTOMATED TRANSLATION QUALITY AND PHRASEOLOGICAL EQUIVALENCE IN ENGLISH-UZBEK TRANSLATION
Abstract
Machine translation (MT) of highly idiomatic text remains a formidable challenge, especially for low-resource, agglutinative languages such as Uzbek. This paper presents a comprehensive study integrating (1) MT quality evaluation methods and (2) the translation of English phraseological units (idioms, proverbs, phrasal verbs) into Uzbek. We review structural, cultural, and semantic issues in translating English idiomatic expressions into Uzbek and evaluate how current MT systems (e.g., Google Translate, ChatGPT) handle them. Methodologically, we detail automatic metrics (BLEU, METEOR), human evaluation (fluency/adequacy ratings, post-editing effort), and comparative translation strategies (literal, idiomatic, calque, paraphrase, etc.). Our analysis includes tables contrasting machine outputs with human translations for representative idioms. We discuss the impact of Uzbek's agglutinative morphology and cultural specificity on translation accuracy. Finally, we offer recommendations for improving Uzbek MT, including richer phraseological corpora, better morphological processing, and integration of cultural knowledge, and suggest how phraseological insights can be embedded in MT models.
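To make the BLEU metric mentioned in the abstract concrete, the following is a minimal, smoothed sketch of sentence-level BLEU in Python (clipped n-gram precision with a brevity penalty). It is an illustration of the metric's standard definition, not the exact implementation used in the study; the function name, smoothing constant, and the Uzbek example sentences in the usage note are illustrative assumptions.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference, hypothesis, max_n=4):
    """Minimal smoothed BLEU: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    ref, hyp = reference.split(), hypothesis.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hyp, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each hypothesis n-gram count by its count in the reference
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        # Tiny floor avoids log(0) when no n-grams match (simple smoothing)
        log_precisions.append(math.log(max(overlap, 1e-9) / total))
    # Brevity penalty: punish hypotheses shorter than the reference
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(sum(log_precisions) / max_n)
```

A literal MT rendering of an idiom scores near zero against an idiomatic human reference, which is one reason the paper pairs automatic metrics with human adequacy judgments: for the (illustrative) reference "chelaklab yomg'ir yog'yapti" a word-for-word output like "mushuklar va itlar yomg'ir yog'moqda" shares almost no n-grams, while an identical hypothesis scores 1.0.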
This work is licensed under a Creative Commons Attribution 4.0 International License (CC-BY). Authors retain the copyright of their manuscripts, and all Open Access articles are disseminated under the terms of the CC-BY license, which permits unrestricted use, distribution, and reproduction in any medium, provided that the original work is appropriately cited. The use of general descriptive names, trade names, trademarks, and so forth in this publication, even if not specifically identified, does not imply that these names are not protected by the relevant laws and regulations.
References
1. Banerjee, S., & Lavie, A. (2005). METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for MT.
2. Papineni, K., Roukos, S., Ward, T., & Zhu, W.-J. (2002). BLEU: A method for automatic evaluation of machine translation. Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), 311-318.
3. Snover, M., Madnani, N., Dorr, B. J., & Schwartz, R. (2009). Fluency, adequacy, or HTER? Exploring different human judgments with a tunable MT metric. Proceedings of the Fourth Workshop on Statistical Machine Translation, 259-268.
4. Microsoft. (2025). What is a BLEU score? Microsoft Azure documentation. Retrieved 2025, from https://learn.microsoft.com
5. Abdurashetona, A. M., Rashidova, U., & Sobirova, M. (2025). The issue of translating idioms between Uzbek and English in natural language processing. AIP Conference Proceedings, 3377(1), 070002.
6. Yaxshimurotovna, S. Y. (2025). Cultural-connotative features of phraseological units in different languages and their interpretation through artificial intelligence. Shokh Library, 1(13).