A recent index from Deloitte shows that more than 75% of organizations will move into operationalizing AI technologies by the end of 2024. This article from the Harvard Business Review highlights the challenges of shifting from piloting to operationalizing AI within organizations, and how to make this transition smoother, safer, and faster.

AI is most effective in an organization when it is operationalized at scale. This means that AI is deeply and widely integrated within the business, used at all levels of business processes and by a wide array of talent. Building AI into an organization's core product or service, and providing the entire business with the tools, knowledge, and skills to operationalize AI in everyday work, is critical.

However, scaling AI is challenging, and as AI is scaled, problems can scale as well. Organizations undertaking this challenge have begun to adopt a new discipline, Machine Learning Operations (MLOps), which "seek to establish best practices and tools to facilitate rapid, safe, and efficient development and operationalization of AI". Implementing MLOps requires an organization to invest time and resources in its processes, people, and tools so that AI is integrated effectively into business practices.

 

Processes

Here, the HBR suggests that organizations "standardize how you build and operationalize models". This means establishing a clear set of stages for developing and integrating AI to "streamline development, implementation, and refinement of models". For instance, "data scientists prepare the data, create features, train the model, tune its parameters, and validate that it works; software engineers and IT operationalize it, monitoring the output and performance continually to ensure the model works robustly in production," with a governance team overseeing the process to "ensure that the AI model being built is sound from an ethics and compliance standpoint". It is crucial to build AI models in a repeatable manner with a clear process of next steps towards operationalization. Many organizations "fall into the trap" of creating brand new AI models each time, causing "expensive and slow-to-remedy failures". Instead, departments across an organization should collaborate to define a process for AI development and provide tools to support the integration of that process. The "hand-off" points (e.g., from data scientists to software engineers) are also critical; outlining how these hand-offs should play out, so that teams can work independently without disruption, will contribute to smooth transitions.
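To make the idea of a standardized, repeatable workflow with explicit hand-off points concrete, here is a minimal sketch in Python. The stage names and the trivial "model" are hypothetical illustrations, not from the article; the point is only that each stage has a defined contract and that a model crosses the hand-off to operations only after validation.

```python
# Illustrative sketch of a standardized model workflow with an explicit
# hand-off point between data science and IT/engineering stages.
# All names and the toy "model" are hypothetical.

def prepare_data(raw):
    """Data-science stage: clean the raw records (here, drop missing values)."""
    return [r for r in raw if r is not None]

def train_model(data):
    """Data-science stage: fit a trivial 'model' (a mean predictor)."""
    return {"mean": sum(data) / len(data)}

def validate(model, holdout, tolerance=1.0):
    """Data-science stage: confirm the model works before hand-off."""
    error = abs(model["mean"] - sum(holdout) / len(holdout))
    return error <= tolerance

def operationalize(model):
    """IT/engineering stage: only validated models cross the hand-off point."""
    return {"status": "deployed", "model": model}

def pipeline(raw, holdout):
    """Run the same stages in the same order every time -- the repeatable process."""
    data = prepare_data(raw)
    model = train_model(data)
    if not validate(model, holdout):
        raise ValueError("model failed validation; not handed off")
    return operationalize(model)

result = pipeline([1.0, 2.0, 3.0, None], [2.5, 1.5])
print(result["status"])  # deployed
```

In a real organization each stage would be owned by a different team and tracked by MLOps tooling, but the structure is the same: a fixed sequence of stages with a validation gate guarding the hand-off.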

 

People

As previously mentioned, operationalizing AI effectively requires a "variety of unique skill sets" across many departments in an organization. For instance, "a data scientist creates algorithmic models that can accurately and consistently predict behaviour, while an ML engineer optimizes, packages, and integrates research models into products and monitors their quality on an ongoing basis". This means that to successfully scale AI, business leaders should create specialized teams that can focus on the projects to which their skills are best suited. Two main team structures have emerged over the years: the "pod model" and the "Centre of Excellence (COE)". In the "pod model", AI product development is undertaken by a small team of data scientists, machine learning engineers, and software engineers, and is best suited for fast execution. However, the "pod model" can lead to "knowledge silos". In the COE model, organizations pool their data science experts into a central group whose members are assigned to product teams based on requirements and resource availability. Governance teams have been shown to be most effective when they exist outside of "pods" or COEs.

 

Tools

The production of AI/ML models uses a completely different set of tools than IT or governance does, making a repeatable workflow difficult to establish as each department works with its own distinct toolset. The HBR suggests that when choosing MLOps tools, a leader should consider: (1) interoperability; (2) whether they are data science- and IT-friendly; (3) collaboration; and (4) governance.

 

The companies that can implement and scale AI smartly are the ones unlocking its full potential in their businesses.

 

Read the full article on Harvard Business Review.