From both a personal and business standpoint, human reliance on artificial intelligence continues to grow. This Forbes article suggests that, although these systems are useful, we should be cautious about placing our full trust in them: it is still “up in the air” whether they are fed accurate data, kept up to date, and free of biases.
According to a 2022 IBM survey of over 7,000 business organizations around the world, 35% of companies use AI in their business, an increase of four percentage points from 2021. Of those, 54% reported cost savings and efficiencies, 53% reported network improvements, and 48% reported better customer experiences. However, many of the organizations eagerly implementing AI have not taken all of the steps necessary to ensure that their AI is trustworthy. Trust, as we know, is crucial: the same survey reported that 84% of AI users said that “being able to explain how their AI arrives at different decisions is important to their business.” Yet a majority of these organizations have not taken steps to reduce bias within their AI systems or to ensure that their AI-powered decisions are explainable.
For many companies, AI ethics is a relatively new topic, leaving doubts about where to begin with relevant skills and training; moreover, companies may lack the capacity or budget to invest in it. Specifically, the main challenges associated with increasing trust in AI include:
- 63% – lack of skills and training
- 60% – AI governance inconsistent across all environments
- 59% – lack of AI strategy
- 57% – lack of explainable AI outcomes
- 57% – AI vendors without “explainability features”
- 56% – lack of regulatory guidance from Government entities
- 56% – inherent bias in data models
However, Forbes reports that companies that have already deployed AI are more likely to understand and value trustworthiness. This should encourage organizations to take steps toward increasing trust in AI and “safeguarding data privacy.”