Speech Recognition Trends to Watch: Responsible AI
Technology, especially AI, is a powerful tool. That’s why Rev is committed to developing responsible AI.
Gazing at the horizon, there’s no shortage of excitement in technology: the promise of a more interconnected world, greater opportunities, and the sheer wonder of what’s going to be possible next. No single force is driving this movement. Of course, Automated Speech Recognition (ASR) will be a key player, but this application fits into a greater narrative that includes everything from augmented reality (AR) to quantum computing to the nearly endless uses of artificial intelligence (AI).
Using technology to create a better world, however, is harder than just developing the tech. There’s a lot that can go wrong, and in truth, a lot has already gone wrong. While most tech companies would rather pull the wool over their customers’ eyes—your eyes—we’re facing these issues head-on because we want to live in a world where tech helps instead of harms. We’re playing the long game.
Tech, especially AI, is a powerful tool. Think of it like fire. We harness fire to reap the rewards, yet it can just as easily come back to burn us.
Dr. Cathy O’Neil, data scientist and author, has a name for these 21st-century munitions: Weapons of Math Destruction. In her book of the same name, she explains that WMDs “punish the poor and the oppressed in our society, while making the rich richer.” Despite often being developed with good intentions, these algorithms increase inequality and threaten democracy.
We cannot afford to be complicit. As the techlash intensifies, more people are recognizing the harms that arise from surveillance, biased machine learning (ML), and socioeconomic structures that let powerful corporations render our lived experience as “big data,” package it into prediction products, and auction our behavior to targeted advertisers.
We’re not giving up. As technologists, we’re builders. Just as we devote ourselves to creating the most accurate end-to-end streaming ASR models, we’re committed to developing responsible AI.
What is Responsible AI and Why Should We Care?
Simply put, responsible AI is ethically guided tech (and it’s one of the biggest AI trends right now). The Alan Turing Institute defines AI ethics as “a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.”
Before the moral relativists take up arms against the inherent ambiguities in the language of “widely accepted standards,” let’s set a few things straight. Invading a person’s basic right to privacy is wrong. Propagating racism and other forms of discrimination and inequality is wrong. Threatening our democracy is wrong.
Even if we disagree about what’s right and wrong, one bottom line is hard to dispute: judging by today’s largest companies by market cap, abusive AI practices make for good business in the short term. Some of those companies may pay lip service to responsible AI, but firing the leader of your Ethical AI team for speaking out about bias tells a different story.
The tide is turning in the court of public opinion, and companies that lead the way in responsible AI are poised to be the biggest winners from the fallout.
“Responsible AI should be seen as a potential source of competitive advantage,” concludes a report by The Economist Intelligence Unit (EIU) on the business case for responsible AI. “Firms that shift their AI development process to align with more responsible practices are likely to see reduced medium- and long-term downside risks associated with challenges such as dealing with a data breach. However, this study shows that the benefits of responsible AI actually extend far beyond risk management…Among EIU executive survey respondents, 90 percent agree that the potential long-term benefits and cost savings associated with implementing responsible AI far outweigh the initial costs.”
The Business Case for Responsible AI
The EIU report cites several reasons. First, responsible AI helps companies attract and retain top talent. For instance, following the 2018 Facebook-Cambridge Analytica scandal, Facebook’s acceptance rate for full-time offers among graduates of top US universities fell to between 35 and 55 percent by May 2019, down from roughly 85 percent just six months earlier.
Second, it’s good for brand image and customer engagement. A Capgemini market research report found that 55 percent of surveyed consumers would purchase more products if a company’s AI was perceived to be ethical, while 34 percent would stop interacting with a company altogether if its AI interactions resulted in ethical issues.
Third, responsible tech companies see increased pricing power. A Nielsen report found that 66 percent of consumers are willing to pay more for sustainable, socially responsible, and ethically designed goods. This holds just as true for digital applications as it does for fair-trade coffee.
Lastly, these practices lead to enhanced product quality and broader revenue streams. When companies build technologies with less bias and greater protections, they attract and retain more users.
“The Age of Privacy has arrived,” concludes the Cisco 2021 Data Privacy Benchmark Study. It found that privacy budgets doubled in 2020 to an average of $2.4 million, that the average organization saw returns of 1.9 times its privacy spending, and that organizations with more mature privacy practices are seeing higher-than-average business benefits.
Conclusion: Responsibility by Design
As these trends continue throughout 2021 and beyond, we expect responsible AI to follow a similar trajectory to cybersecurity. In the early days of networking and software development, few people worried about security. However, as the cost of data breaches became apparent and we saw the effects of cyberattacks on critical infrastructure, the industry shifted.
Cybersecurity budgets ballooned, and IT professionals codified their hard-learned lessons into best practices like security by design: building security into tech from the very beginning rather than bolting it on later.
Artificial intelligence applications like ASR are undergoing a similar reckoning, and that brings us to our final takeaway. We advocate for responsibility by design. That means working diligently to eliminate bias, respecting privacy in data collection and usage, and creating tech that complies with regulations like GDPR and CCPA.
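To make “responsibility by design” a little more concrete, here is a minimal sketch of what privacy-aware transcript handling could look like in code. Everything in it is an illustrative assumption on our part: the PII patterns, the 30-day retention window, and the helper functions are hypothetical, not Rev’s actual API or data policy, and real PII detection requires far more robust methods than a few regular expressions.

```python
import re
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative patterns only; production PII detection needs far more robust
# methods (named-entity recognition, locale-aware formats, human review).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Hypothetical retention window; real limits depend on your legal obligations.
RETENTION = timedelta(days=30)


@dataclass
class Transcript:
    text: str
    created_at: datetime
    consent_given: bool  # explicit consent recorded at collection time


def redact_pii(text: str) -> str:
    """Replace recognizable PII with typed placeholders before storage."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text


def store(transcript: Transcript, storage: list) -> None:
    """Privacy by design: refuse data collected without consent and never
    persist the raw, unredacted text."""
    if not transcript.consent_given:
        raise ValueError("No consent recorded; transcript will not be stored.")
    storage.append(
        Transcript(
            text=redact_pii(transcript.text),
            created_at=transcript.created_at,
            consent_given=True,
        )
    )


def purge_expired(storage: list, now: datetime) -> list:
    """Enforce the retention policy instead of keeping data indefinitely."""
    return [t for t in storage if now - t.created_at < RETENTION]


if __name__ == "__main__":
    db: list = []
    store(
        Transcript(
            text="Reach me at 555-123-4567 or jane@example.com about the order.",
            created_at=datetime.now(timezone.utc),
            consent_given=True,
        ),
        db,
    )
    print(db[0].text)
    # Reach me at [PHONE REDACTED] or [EMAIL REDACTED] about the order.

    remaining = purge_expired(db, datetime.now(timezone.utc) + timedelta(days=31))
    print(len(remaining))  # 0: the transcript is purged once the retention window passes
```

The point of the sketch is the shape of the pipeline, not the particulars: consent is checked before anything is stored, raw text is redacted before it is persisted, and retention limits are enforced automatically rather than left to chance.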
Whether you’re a developer who plans to leverage Rev’s speech-to-text APIs, a CIO working to improve your organization’s privacy posture, or just a concerned citizen like us, we’re looking forward to continuing this conversation with you. Get in touch.