
Cambridge IELTS 19 Academic Reading Test 3 Passage 3

Estimated reading time: 5 minutes 23 seconds

Is the era of artificial speech translation upon us?

Once the stuff of science fiction, technology that enables people to talk using different languages is now here. But how effective is it?

Noise, Alex Waibel tells me, is one of the major challenges that artificial speech translation has to meet. A device may be able to recognize speech in a laboratory, or a meeting room, but will struggle to cope with the kind of background noise I can hear in my office surrounding Professor Waibel as he speaks to me from Kyoto station in Japan. I'm struggling to follow him in English, on a scratchy line that reminds me we are nearly 10,000 kilometers apart, and that distance is still an obstacle to communication even if you're speaking the same language, as we are. We haven't reached the future yet. If we had, Waibel would have been able to speak more comfortably in his native German and I would have been able to hear his words in English.

At Karlsruhe Institute of Technology, where he is a professor of computer science, Waibel and his colleagues already give lectures in German that their students can follow in English via an electronic translator. The system generates text that students can read on their laptops or phones, so the process is somewhat similar to subtitling. It helps that lecturers speak clearly, don’t have to compete with background chatter, and say much the same thing each year.

The idea of artificial speech translation has been around for a long time. Douglas Adams’ science fiction novel, The Hitchhiker’s Guide to the Galaxy, published in 1979, featured a life form called the ‘Babel fish’ which, when placed in the ear, enabled a listener to understand any language in the universe. It came to represent one of those devices that technology enthusiasts dream of long before they become practically realizable, like TVs flat enough to hang on walls: objects that we once could only dream of having but that are now commonplace. Now devices that look like prototype Babel fish have started to appear, riding a wave of advances in artificial translation and voice recognition.

At this stage, however, they seem to be regarded as eye-catching novelties rather than steps towards what Waibel calls ‘making a language-transparent society.’ They tend to be domestic devices or applications suitable for hotel check-ins, for example, providing a practical alternative to speaking traveler’s English. The efficiency of the translator is less important than the social function. However, ‘Professionals are less inclined to be patient in a conversation,’ founder and CEO at Waverly Labs, Andrew Ochoa, observes. To redress this, Waverly is now preparing a new model for professional applications, which entails performance improvements in speech recognition, translation accuracy and the time it takes to deliver the translated speech.

For a conversation, both speakers need to have devices called Pilots (translator earpieces) in their ears. ‘We find that there’s a barrier with sharing one of the earphones with a stranger,’ says Ochoa. That can’t have been totally unexpected. The problem would be solved if earpiece translators became sufficiently prevalent that strangers would be likely to already have their own in their ears. Whether that happens, and how quickly, will probably depend not so much on the earpieces themselves, but on the prevalence of voice-controlled devices and artificial translation in general.

Waibel highlights the significance of certain Asian nations, noting that voice translation has really taken off in countries such as Japan with a range of systems. There is still a long way to go, though. A translation system needs to be simultaneous, like the translator’s voice speaking over the foreign politician being interviewed on the TV, rather than in sections that oblige speakers to pause after every few remarks and wait for the translation to be delivered. It needs to work offline, for situations where internet access isn’t possible, and to address apprehensions about the amount of private speech data accumulating in the cloud, having been sent to servers for processing.

Systems not only need to cope with physical challenges such as noise, they will also need to be socially aware by addressing people in the right way. Some cultural traditions demand solemn respect for academic status, for example, and it is only polite to respect this. Etiquette-sensitive artificial translators could relieve people of the need to know these differing cultural norms. At the same time, they might help to preserve local customs, slowing the spread of habits associated with international English, such as its readiness to get on first-name terms.

Professors and other professionals will not outsource language awareness to software, though. If the technology matures into seamless, ubiquitous artificial speech translation, it will actually add value to language skills. Whether it will help people conduct their family lives or relationships is open to question, though one noteworthy possibility is that it could overcome the language barriers that often arise between generations after migration, leaving children and their grandparents without a shared language.

Whatever uses it is put to, though, it will never be as good as the real thing. Even if voice-morphing technology simulates the speaker’s voice, their lip movements won’t match, and they will look like they are in a dubbed movie. The contrast will underline the value of shared languages, and the value of learning them. Sharing a language can promote a sense of belonging and community, as with the international scientists who use English as a lingua franca, where their predecessors used Latin. Though the practical need for a common language will diminish, the social value of sharing one will persist. And software will never be a substitute for the subtle but vital understanding that comes with knowledge of a language.

Question: What is one major challenge that artificial speech translation faces?
High costs of the devices
Lack of user interest
Background noise
Limited language support
Explanation: The passage states that noise is one of the major challenges that artificial speech translation has to meet, indicating that background noise hinders the effectiveness of such technology.

This exercise aims to help you comprehend English without mental translation to your native language. It also helps improve your reading speed.

For beginners: Start at 150-250 words per minute (WPM) and gradually increase as you become comfortable.

For advanced learners: Challenge yourself with speeds of 300-400 WPM or higher to further enhance your reading skills.

Adjust the speed as needed and remember: Understanding is just as important as speed!

