By Ryan Bury, English in-house translator
Whether you’re chatting with Siri, Cortana or Alexa, modern technology invariably gives users the chance to – quite literally – find their voice. And it’s no different in the language services industry.
Automatic speech recognition (ASR) has been around for over half a century under various guises, and flexing your vocal cords to ‘write’ a text is nothing new. But where does the potential of this particular productivity tool fit within the strategy of a translation company?
The numbers game
There’s no doubt that under the right circumstances, ASR can dramatically boost that magical words/day figure, with anecdotal evidence at a recent ITI workshop suggesting that 10,000 words per day wasn’t out of reach.
Naturally, even a fraction of this increase would be music to the ears of company bosses, with enormous potential for gains both in terms of output and, of course, profit.
Health concerns have also played a major role in the increased popularity of ASR technology, with the avoidance of tens of thousands of keyboard taps per day a particular plus point for translators and their handiwork.
Indeed, ergonomic solutions of this kind have made everyday life much more comfortable for employees at companies such as STP. But is actually integrating this tool into the workflow trickier than it might appear?
Striking the balance
For a translation company that already uses a plethora of tools, ASR must find a valuable and supportive place within the existing production process.
If translators have translation memory content and/or machine translation suggestions as a starting point, they need to be able to weave ASR into their workflow seamlessly to truly benefit from its productivity-boosting potential. And it mustn’t hinder any gains that would otherwise have been made through traditional TM leverage or MTPE.
With this in mind, the adoption of ASR – or even ASRPE, ASRMTPE, or any other such Scrabble-worthy acronym that might emerge – is no small balancing act.
A translation company must also consider the logistics of introducing ASR into its toolkit and ask whether it can truly prosper in an office-based environment.
Background noise can prove distracting not only for the translator, but also for the tool itself. So is it possible to have several voices translating at once and still retain the productivity benefits on offer?
Then there’s the old cliché of the introverted translator. For those who work best when fully immersed in a text, with minimal outside distractions, the switch to a noisy workspace – that notorious enemy of concentration – could be a troublesome one.
These days, of course, remote working is becoming ever more common, and linking these two phenomena could well pay dividends. As a remote worker myself, I can safely say that – notwithstanding the occasional dog bark – it is generally much easier to make a home environment suitable for ASR than an open-plan office.
Evidently, there is a great deal for any LSP to consider with regard to voice recognition, but those productivity-related whispers may ultimately prove impossible to silence.
This post first appeared in the December 2016 edition of STP’s Icebreaker newsletter.