More than half of tech users (54 percent) would like to adopt artificial intelligence (AI) capabilities in their workflow processes but remain hesitant due to mistrust of the technology, a new report shows.
The study by audit and consultancy firm KPMG indicates that the mistrust in AI systems chiefly stems from safety and ethical concerns, which, in turn, give rise to moderate acceptance and a coexistence of tempered optimism and worry.
“The concern about the safety and security of AI and its impact on people helps explain why a little over half (54 percent) of people are wary about trusting AI systems, reporting either ambivalence or an unwillingness to trust,” writes KPMG.
“Only 46 percent are willing to trust AI systems.”
The report further indicates that people’s trust in AI varies depending on the application of the technology, with generative AI tools and AI use in human resources enjoying similar levels of trust.
AI’s use in healthcare, however, was found to be the most trusted application, buoyed by the direct benefit that more precise medical diagnoses and treatments afford people, combined with generally high levels of trust in medical professionals.
“These findings reinforce that people’s trust of AI systems is contextual and can depend on the use case and their confidence in the organisation that is deploying the AI system,” says the audit firm.
The perceived trustworthiness of AI systems has declined over time, from 63 percent of people viewing AI systems as trustworthy in 2022 to the current levels.
This, according to the report, demonstrates that many are feeling less positive about the ability of AI systems to provide accurate and reliable output, and be safe, secure and ethical to use.
Other risk concerns raised by users in the study include loss of human connection, deskilling and job losses, privacy, misinformation and disinformation, unequal access, environmental impact as well as potential risks of bias from AI.
Fifty-seven percent of those sampled disagreed or were unsure that current regulations, laws and safeguards are sufficient to make AI use safe and protect people from harm.
The trust and acceptance of AI systems was found to be consistently lower in advanced economies compared to emerging ones, reflecting the accelerated uptake of the technology in the latter.
In a general overview, the report indicates that people feel a range of emotions about AI applications, with the majority feeling optimistic and excited while also worried, demonstrating a degree of emotional ambivalence.
“People in emerging economies report more positive emotions toward AI and a clear divergence between positive and negative sentiment,” states the report.
“Optimism and excitement are dominant emotions in emerging economies, experienced by 74 percent to 82 percent of people. Significantly fewer (56 percent) feel worried.”
In contrast, people in advanced economies feel both worried and optimistic in almost equal measure (61 percent to 64 percent), with just over half (51 percent) feeling excited.
The KPMG findings closely mirror those of a recent AI security study by cybersecurity firm Check Point Software Technologies, which warned that hackers are increasingly exploiting AI tools to boost the efficiency, scale and impact of their operations.
According to Check Point’s report, cybercriminals are closely monitoring trends in AI adoption, noting that each time a new large language model (LLM) is released to the public, underground actors move with speed to explore its potential for abuse.
The firm further reported that ChatGPT and OpenAI’s API are currently the most popular tools among malicious actors, but others like Google Gemini, Microsoft Copilot, and Anthropic’s Claude are gaining traction.
The Communications Authority of Kenya (CA) had earlier, in October last year, sounded the alarm over a rise in AI-powered cyberattacks, in a pronouncement that would presumably fuel further mistrust in the technology.
“Cybercriminals are increasingly using AI-enabled attacks to enhance the efficiency and magnitude of their operations,” said CA Director-General David Mugonyi at the time.
“They leverage AI and machine learning to automate the creation of phishing emails and other types of social engineering.”