November 22, 2019
4 Minute Read
By definition, trust is the “firm belief in the reliability, truth or ability of someone or something”.
As humans, we build trust with others so they’ll believe us when we say we’ll do things. When we think about how artificial intelligence is created, the process of training an algorithm is in essence the very same act of building trust. The machine learning model is fed knowledge from a set of data, and the more reliable the information shared, the more reliable the AI becomes.
Here’s a look at a few points when it comes to building trust in artificial intelligence, including the need for high-quality training data at scale.
An article by IBM concluded that 30 AI scientists agree, “building trust in AI will require a significant effort to instill in it a sense of morality, operate in full transparency and provide education about the opportunities it will create for business and consumers.”
Fear and anxiety regarding AI stem from an all too familiar fear of the unknown. By exploring ways to instill human values in AI, we can make this technology more relatable to the humans living alongside it.
This TED Talk by scientist and philosopher Grady Booch highlights some of our most common fears and explains how we can teach (not program) artificial intelligence to share our values.
A New York Times article suggests that in order to build artificial intelligence we can trust, the technology needs to understand three things: time, space and causality. The article further explains that achieving this requires a shift away from today's dominant approach to AI training, i.e., deep learning and statistical pattern-matching, toward building AI systems that grasp these concepts from the start.
This is an interesting concept to explore; however, a more immediate first step toward building trust in artificial intelligence is finding ethically sourced, unbiased data to train the algorithms.
An article from Forbes featuring our CEO, Leila Janah, talks about how separating identifying data from a dataset before training can help prevent bias from creeping in, resulting in higher-quality data.
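To make the idea concrete, here is a minimal sketch of that separation step in Python. The records, field names and the choice of which fields count as "identifying" are all hypothetical, for illustration only; they are not from the Forbes article or any Samasource pipeline.

```python
# Hypothetical records; the field names are illustrative assumptions.
records = [
    {"name": "Ana", "zip_code": "94103", "income": 52000, "approved": 1},
    {"name": "Ben", "zip_code": "10001", "income": 48000, "approved": 0},
]

# Fields we treat as identifying and withhold from model training.
IDENTIFYING = {"name", "zip_code"}

def split_identifiers(row):
    """Separate identifying fields from the features used for training."""
    ident = {k: v for k, v in row.items() if k in IDENTIFYING}
    feats = {k: v for k, v in row.items() if k not in IDENTIFYING}
    return ident, feats

# Identifiers are kept aside (e.g., for auditing); only features are trained on.
identifiers, features = zip(*(split_identifiers(r) for r in records))
```

The model never sees the identifying fields, which removes one obvious channel through which demographic or personal attributes could bias its decisions.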
Whether an AI algorithm will succeed or fail depends on the quality of data it is trained on, and the more diverse and representative the training dataset, the better equipped the algorithm will be to reach trustworthy decisions.
A McKinsey Global Institute briefing on the promise and challenge of the age of artificial intelligence shared that AI frequently fails to live up to the high expectations and hype surrounding it.
Organizations like the Partnership on AI exist to establish best practices for AI technologies and to advance the public's understanding of AI, but a more accurate public picture of how AI is applied can start with something as simple as more varied depictions of AI in movies and media.
Artificial intelligence is more than just robots: it's the code running in the background of smart algorithms, the tech making VR glasses possible and the machine vision guiding self-driving cars. AI is at the core of so many modern innovations, yet it's still often misunderstood or taken for granted.
As humans and machines continue to work together to solve some of the world’s most pressing challenges, policymaking, industry best practices and scientific advancements will play a major role in building trust and confidence in AI systems.
Joseph is the Marketing Analyst and Community Manager at Samasource. Using his conversational moderating style, Joseph creates engaging experiences for the company's online communities. He also assists the global marketing team with research, content creation, strategic planning and report building.