Trust is the glue that holds enterprises and processes together, and lately, more of that trust is being delegated to artificial intelligence. How much decision-making can and should be entrusted to the machines? We often trust AI recommendations for books related to the ones we have purchased. We are learning to trust AI to help guide our trucks and cars, issuing warnings and applying the brakes in traffic. Our call-center staff trust AI-generated recommendations to upsell the customers they have on the line. We let AI move our most valuable customers to the head of the queue. But how trustworthy is AI? Maybe more, maybe less trustworthy than we perceive it to be; it depends on the situation.
That’s the conclusion drawn by Chiara Longoni and Luca Cian in a recent analysis published in Harvard Business Review. Consumers, for example, “tend to believe AI is more competent at making recommendations when they are seeking functional or practical offerings.” But they prefer human judgment “when they are more interested in an offering’s experiential or sensory features.”
When it comes to corporate decision-making, at least one in four executives responding to a survey released by SAS, Accenture Applied Intelligence, Intel, and Forbes Insights reported having to manually intervene to override an AI-generated decision. Still, a majority are happy with the results of their AI efforts and intend to keep moving forward. Close to three-fourths of executives, 74%, recognize that close oversight of AI is essential, the survey also shows. (I was part of the team that designed and analyzed the study as part of my work with Forbes Insights.)
Longoni and Cian explored consumer trust in AI in a series of experiments involving 3,000 consumers. Among their conclusions: “Simply offering AI assistance won’t necessarily lead to more successful transactions. In fact, there are cases when AI’s suggestions and recommendations are helpful and cases when they might be detrimental.”
They call reliance on AI’s recommendations a “word-of-machine effect,” which stems from a belief that AI systems are more competent than humans at dispensing advice on “utilitarian qualities,” such as selecting hair-care products. That belief does not necessarily match reality, however; humans are just as capable of assisting with such choices. “Vice versa, AI is not necessarily less competent than humans at assessing and evaluating ‘hedonic’ attributes,” meaning those involving sensory experiences. “AI selects flower arrangements for 1-800-Flowers and creates new flavors for food companies such as McCormick.”
Leveraging the best of both worlds may be the best approach to building trustworthy AI. “Even though it is clear that consumer confidence in AI assistance is higher when searching for products that are utilitarian (e.g., computers and dishwashers), this does not mean that companies offering products that promise more hedonic experiences (e.g., fragrances, food, and wine) are out of luck when it comes to using AI recommenders,” Longoni and Cian conclude. “In fact, we found that people embrace AI’s recommendations as long as AI works in partnership with humans. For instance, in one experiment, we framed AI as augmented intelligence that enhances and supports human recommenders rather than replacing them. The AI-human hybrid recommender fared as well as the human-only recommender even when experiential and sensory considerations were important.”
There are even situations where deploying AI is akin to swatting a fly with a cannon. In a recent article in Entrepreneur, Ganes Kesari explains why AI may simply be overkill for many business problems. “A majority of business problems can be solved by simple analysis,” he points out. “Only a fraction of businesses really need AI. With AI capability getting democratized, it can be tempting to use it for every business problem.”
Plus, Kesari adds, AI often requires large volumes of data — the right data at that. “AI has a huge data appetite and it needs hundreds of thousands of data points for basic tasks such as detecting pictures. This data must be cleaned and prepared in a specific format to teach AI. Unfortunately, a high volume of quality, labeled data is not a luxury that every organization can afford.”
The key is to set expectations about AI appropriately. It is not a magical force that will lift businesses to new heights of profitability, as many vendors suggest. Importantly, it needs to be trustworthy to both corporate decision-makers and consumers. Consider this a work in progress.
Source: Artificial Intelligence Has Yet To Break The Trust Barrier