A common piece of wisdom says that great art is a mirror held up to nature, and that truth shows up just as plainly in technological advances as it does in the fine arts. From insect-inspired robots to the prominence of the Fibonacci sequence in architecture, many great inventions come from closely studying nature and applying what we learn to our own creations.
This same principle has also greatly advanced the state of the art in artificial intelligence (AI) over the past decade. Yes, easier access to high-powered compute and growing investment from top enterprises have gone a long way toward unlocking AI’s potential, but one of the most substantial breakthroughs remains indebted to the work of neuroscientists. We are, of course, referring to artificial neural networks, a family of machine learning (ML) algorithms that draws inspiration from the multilayered architecture of the human brain.
When DeepMind’s AlphaGo beat one of the world’s best Go players in 2016, a feat previously thought well beyond the reach of even the most sophisticated computer systems, neural networks began to garner international attention. Before long, the technology we now call deep learning would be applied to everything from facial recognition to automatic speech recognition (ASR).
While the inner workings of these machine learning algorithms require graduate-level data science study to fully grasp, anyone can understand deep learning in broad strokes. With that in mind, let’s dive into how Rev’s research team applies deep learning to create the world’s most accurate ASR solution.
A Brief History of ASR Technology
To understand where we are now, we need to look at where we’ve been. Computer scientists have long been fascinated with listening and talking machines, partly because of the notion that a conversationally competent computer is the hallmark of a truly intelligent machine and partly because they’re just plain useful. Until very recently, however, our attempts required a Frankenstein approach that was fragile, bulky, and difficult to scale.
Before deep learning for speech recognition and related applications, speech scientists built separate models to handle different parts of the speech recognition process. An acoustic model mapped the audio signal to likely phonetic units, a pronunciation model mapped those units to candidate words (accounting for variations among speakers with different accents and dialects), and a language model picked the most probable word sequence.
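To make that hand-off concrete, here’s a toy sketch of how such a staged pipeline fits together. The three functions are trivial stand-ins rather than real models, but they show how each stage depends entirely on the output of the one before it, so a mistake anywhere cascades downstream.

```python
# Toy illustration of the old multi-stage ASR pipeline. Each function is a
# trivial stand-in for a separately trained model; an error in one stage
# flows straight into the next.

def acoustic_model(audio_features):
    # Real systems mapped acoustic features to phonetic-unit probabilities.
    return ["h", "eh", "l", "ow"]

def pronunciation_model(phones):
    # Real systems used a lexicon to map phone sequences to candidate words.
    return ["hello", "hollow"]

def language_model(word_candidates):
    # Real systems scored candidates and kept the most probable word sequence.
    return word_candidates[0]

def recognize(audio_features):
    return language_model(pronunciation_model(acoustic_model(audio_features)))

print(recognize(audio_features=None))  # -> "hello"
```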
This method had some problems. An error in any one model would throw off the entire system, often leaving us to search through the weeds to find its source. And since the rules of language were usually hard-coded into the algorithms, the system was inflexible. Combine the ever-evolving nature of language with the immense variation in how people speak, and we were left with a system that was sometimes good enough but that often left users frustrated and disappointed.
Deep learning enables us to overcome these challenges by letting us train a single, end-to-end (E2E) model that encapsulates the entire processing pipeline. “The appeal of end-to-end ASR architectures,” explains NVIDIA’s developer documentation, is that such a model can “simply take an audio input and give a textual output, in which all components of the architecture are trained together towards the same goal…a much easier pipeline to handle!”
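Open-source frameworks expose exactly this kind of single-call interface. As an illustration (not part of Rev’s own stack), the sketch below uses the Hugging Face Transformers pipeline with a publicly available wav2vec 2.0 checkpoint; the audio file path is a placeholder.

```python
# Minimal end-to-end ASR sketch: one model takes raw audio in and returns text,
# with no separate acoustic, pronunciation, or language-model stages to maintain.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="facebook/wav2vec2-base-960h",  # publicly available English checkpoint
)

result = asr("meeting_recording.wav")  # placeholder path to a local audio file
print(result["text"])
```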
Besides solving many of the issues that plagued earlier ASR systems, deep learning models bring other advantages. They’re faster to train. They don’t need as many resources to run, which opens up new possibilities for deployment. They’re also much better at recognizing dialects, accents, and multiple languages.
One of the most effective approaches to speech recognition uses supervised learning, which trains the neural network on labeled data. For instance, if we wanted to train a model to tell the difference between cats and dogs, we would show it pictures that a human had labeled as either cat or dog. For ASR, this means training the model on audio paired with its ground-truth transcript.
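To make that training setup concrete, here’s a deliberately tiny PyTorch sketch of one supervised update using CTC loss, a common objective for end-to-end ASR. The model, feature dimensions, and data are random placeholders, not anything from Rev’s pipeline.

```python
# Toy supervised training step for end-to-end ASR with CTC loss (PyTorch).
# Random tensors stand in for real audio features and transcript labels.
import torch
import torch.nn as nn

vocab_size = 29          # e.g. 26 letters + space + apostrophe + CTC blank (index 0)
feature_dim = 80         # e.g. 80-dim log-mel features per audio frame
batch, time_steps, target_len = 4, 100, 20

# Deliberately small stand-in "acoustic encoder": features -> per-frame letter scores.
model = nn.Sequential(nn.Linear(feature_dim, 256), nn.ReLU(), nn.Linear(256, vocab_size))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
ctc_loss = nn.CTCLoss(blank=0)

features = torch.randn(batch, time_steps, feature_dim)             # "audio"
targets = torch.randint(1, vocab_size, (batch, target_len))        # "ground-truth transcript"
input_lengths = torch.full((batch,), time_steps, dtype=torch.long)
target_lengths = torch.full((batch,), target_len, dtype=torch.long)

log_probs = model(features).log_softmax(dim=-1).transpose(0, 1)    # (time, batch, vocab) for CTC
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()
optimizer.step()
print(f"CTC loss: {loss.item():.3f}")
```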
An ML model is only as good as the data we feed it. “We know that most acoustic modeling methods with deep neural network topologies are data hungry and more effective with supervised large datasets (with manually transcribed descriptions),” writes a group of researchers for the Institute of Electrical and Electronics Engineers (IEEE). Most ASR developers have limited access to such datasets, which gives Rev’s ASR technology a competitive advantage: because we’ve long provided human transcription services, we have access to an enormous pool of carefully labeled data.
Quality data makes all the difference.
Conclusion: Implications and Applications
When we combine deep learning and speech recognition, a new world of possibilities opens up. One of the most exciting implications is that these advances bring better ASR to a device near you. Instead of having to run multiple models on powerful computers in the cloud, we can run deep learning ASR on smartphones and Internet of Things (IoT) devices.
This, in turn, allows developers to create more sophisticated voice UIs, which are easier, faster, and more accessible than options such as touch screens and keyboards. From using voice to navigate menus in a mobile app to talking to a voice assistant to control smart home devices like lights and thermostats, we’re just beginning to see the many innovative ways to use Rev’s speech-to-text API.
Going forward, deep learning will enable ASR to adapt more easily to variations among speakers, handle different regional dialects, and expand into more languages. This will prove invaluable for use cases like call center optimization, translation apps, and streaming ASR for live-captioning TV shows, video calls, and more.
ASR can take your application to the next level, but mastering deep learning for speech recognition and acquiring enough data to make it work takes a major investment. We’ve made that investment, and you can leverage it with Rev.ai. If you know how to code, you can build a custom voice application with our speech-to-text API.
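As a starting point, here’s a minimal sketch of that flow using the rev_ai Python SDK: submit an audio URL, wait for the job to finish, then fetch the transcript. The access token and audio URL are placeholders, and the exact method and status names should be confirmed against the current Rev.ai documentation.

```python
# Minimal sketch of transcribing audio with Rev.ai's Python SDK (pip install rev_ai).
# Token and URL are placeholders; confirm method/status names against the current docs.
import time

from rev_ai import apiclient
from rev_ai.models import JobStatus

client = apiclient.RevAiAPIClient("YOUR_ACCESS_TOKEN")            # placeholder token

job = client.submit_job_url("https://example.com/meeting.mp3")    # placeholder audio URL

# Poll until the job leaves the in-progress state (a webhook is better in production).
while client.get_job_details(job.id).status == JobStatus.IN_PROGRESS:
    time.sleep(5)

print(client.get_transcript_text(job.id))
```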