Microsoft has released its Responsible AI Standard, a framework that guides how the company builds artificial intelligence systems. The framework is meant to make Microsoft's AI practices easier to understand and thereby build trust, both within the organization and with its customers. Unlike typical high-level principles, the guide provides "specific, actionable guidance."
AI systems are the product of many decisions made by the people who build and deploy them. Microsoft argues that it is critical to "proactively guide these decisions" toward better outcomes, which in practice means upholding values such as fairness, safety, privacy, and transparency, among others.
The framework outlines goals that AI development teams must strive to attain. Each goal is broken down into a set of requirements, the concrete steps teams must follow to ensure outcomes are met throughout the system's lifetime. Available tools and practices are then matched to each goal so that teams have the resources they need to succeed.
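To make that goal-to-requirements-to-tools structure concrete, here is a minimal sketch in Python. The field names and sample entries are illustrative assumptions, not text taken from Microsoft's Standard.

```python
from dataclasses import dataclass, field


@dataclass
class Goal:
    """One Responsible AI goal, the requirements teams must meet for it,
    and the tools or practices matched to it. Illustrative only."""
    name: str
    requirements: list[str] = field(default_factory=list)  # steps teams follow over the system's lifetime
    tools: list[str] = field(default_factory=list)          # resources matched to the goal


# A hypothetical entry showing how one goal might be filled in.
fairness_quality_of_service = Goal(
    name="Fairness: quality of service",
    requirements=[
        "Identify demographic groups the system may serve unevenly",
        "Measure system performance for each identified group",
        "Mitigate and re-measure any gaps found",
    ],
    tools=[
        "disaggregated evaluation datasets",
        "error-analysis dashboards",
    ],
)

if __name__ == "__main__":
    print(f"{fairness_quality_of_service.name}: "
          f"{len(fairness_quality_of_service.requirements)} requirements, "
          f"{len(fairness_quality_of_service.tools)} matched tools")
```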
As discussed in previous posts, AI systems can exacerbate existing biases and inequities. For instance, a 2020 academic study from Stanford found that speech-to-text technology produced nearly twice as many errors for Black users as for white users, and that the system's testing protocols did not account for speech diversity. This brought to light the need for Microsoft to rethink how it collects data from communities, and to do so going forward in a more engaging, inclusive, and appropriate manner. Microsoft is taking similar action with its facial recognition tools.
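The disparity the study describes is a gap in word error rate (WER) between speaker groups. The sketch below shows the kind of disaggregated check involved, computing WER per group and comparing the results; the transcripts and group labels are made-up placeholders, not data from the study.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance between reference and hypothesis,
    divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)


# Hypothetical (reference transcript, system output) pairs for two groups.
samples = {
    "group_a": [("turn the lights off please", "turn the light of please")],
    "group_b": [("turn the lights off please", "turn the lights off please")],
}

rates = {
    group: sum(word_error_rate(ref, hyp) for ref, hyp in pairs) / len(pairs)
    for group, pairs in samples.items()
}
print(rates)  # a large gap between groups signals a quality-of-service harm
```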
For AI systems to be trustworthy, they need to offer valid solutions to the problems they are built to solve. In the case of its Azure Face capabilities, Microsoft has decided not to provide "open-ended API access to technology that can scan people's faces and purport to infer their emotional states based on their facial expressions or movements," citing the lack of scientific consensus around the definition of "emotions."
Overall, Microsoft's updated Responsible AI Standard is more actionable, concrete, and transparent than its predecessor, with clear ways to identify, measure, and help mitigate the harms associated with AI tools. Microsoft also notes that the Standard will continue to evolve as new issues arise and its journey with AI continues.
Read Microsoft’s blog post about the framework and the full guide.