EY today announced EY Trusted AI, the first solution designed to help enterprises quantify the impact and trustworthiness of artificial intelligence (AI) systems.
The EY Trusted AI platform, enabled by Microsoft Azure, offers users an integrated approach to evaluating, monitoring and quantifying the impact and trustworthiness of AI. The platform uses advanced analytical models to evaluate the technical design of an AI system, measuring risk drivers that include its objective, underlying technologies, technical operating environment and level of autonomy relative to human oversight, and produces a technical score.
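EY has not published the internals of this scoring model, but the description suggests a weighted combination of the named risk drivers. The sketch below is purely illustrative: the driver names, weights and 0-100 scale are assumptions for the example, not EY's actual methodology.

```python
# Purely illustrative: EY has not disclosed its actual scoring model.
# Driver names, weights and the 0-100 scale are all assumptions.

RISK_DRIVER_WEIGHTS = {
    "objective": 0.25,               # what the system is built to do
    "underlying_technologies": 0.25, # e.g. deep learning vs. rule-based
    "operating_environment": 0.20,   # where, and on what data, it runs
    "autonomy_vs_oversight": 0.30,   # autonomy relative to human review
}

def technical_score(driver_ratings: dict[str, float]) -> float:
    """Combine per-driver risk ratings (0 = low risk, 1 = high risk)
    into one weighted technical score on a 0-100 scale."""
    weighted = sum(RISK_DRIVER_WEIGHTS[name] * rating
                   for name, rating in driver_ratings.items())
    return round(100 * weighted, 1)

# Example: a highly autonomous deep-learning agent in an open environment
print(technical_score({
    "objective": 0.6,
    "underlying_technologies": 0.8,
    "operating_environment": 0.7,
    "autonomy_vs_oversight": 0.9,
}))  # -> 76.0
```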
“Trust must be a front-line consideration, rather than a box to check after an AI system goes live. Unlike traditional software, which can be tested, patched and fixed, a neural network trained on biased data may be impossible to fix, and the entire investment could be lost. The EY Trusted AI conceptual framework was launched last year, and this offering now builds on it to help organizations worldwide build trust in and derive sustained value from AI,” says Vladislav Severa, partner at EY, adding: “At EY, we apply our proprietary applications based on the principles of artificial intelligence to help clients in various ways, from increasing customer satisfaction to optimising inventory. In using these tools, we place ever greater emphasis on ethics. We are therefore pleased to introduce a tool that helps quantify developed solutions with the aim of minimising the risks of artificial intelligence, thereby supporting their real-world application and acceptance within companies.”
The new platform provides insights to users such as AI developers, executive sponsors and risk practitioners. The technical score is then subject to a complex multiplier based on the impact on users, taking into account unintended consequences such as social and ethical implications. An evaluation of governance and control maturity acts as a further mitigating factor, reducing residual risk. The risk scoring model is based on the EY Trusted AI framework, which is being used to help enterprises understand and plan for these new risks, which could otherwise undermine products, brands, relationships and reputations.
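Again as an illustration only, one plausible reading of this description is that residual risk equals the technical score scaled up by the impact multiplier and scaled down by governance and control maturity. The function below is a hypothetical sketch of that arithmetic, not EY's proprietary model.

```python
# Hypothetical sketch; the actual EY risk model is proprietary.
def residual_risk(technical: float,
                  impact_multiplier: float,
                  governance_maturity: float) -> float:
    """Scale the technical score by an impact multiplier (> 1 when
    unintended social or ethical consequences raise the stakes), then
    reduce the result by governance and control maturity
    (0 = no controls, 1 = fully mature controls)."""
    inherent = technical * impact_multiplier
    return round(inherent * (1 - governance_maturity), 1)

# Example: technical score 76.0, high user impact, moderately mature controls
print(residual_risk(76.0, impact_multiplier=1.4, governance_maturity=0.5))
# -> 53.2
```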
An interactive, web-based interface guides users through a series of schematic and assessment tools to build the risk profile of an AI agent. User-friendly visualizations give a quick snapshot of relative risk scores across an AI portfolio, with drill-down capabilities that reveal additional detail.
A key benefit of the EY Trusted AI platform is its ability to perform dynamic risk management, forecasting the impact on risk when an AI component changes, such as an AI agent’s functional capabilities or level of autonomy. This allows for a better understanding of an AI agent’s risk profile and fosters fact-based evaluation of systems against the organization’s risk tolerance.
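As a minimal sketch of what such a what-if forecast could look like, the following recomputes the hypothetical technical score from the earlier example after a single component change, here the agent's level of autonomy. All names, weights and ratings remain assumptions.

```python
# Illustrative "what-if" forecast using the same hypothetical weights
# and scoring as the earlier sketch.
WEIGHTS = {"objective": 0.25, "underlying_technologies": 0.25,
           "operating_environment": 0.20, "autonomy_vs_oversight": 0.30}

def technical_score(ratings: dict[str, float]) -> float:
    return round(100 * sum(WEIGHTS[k] * v for k, v in ratings.items()), 1)

baseline = {"objective": 0.6, "underlying_technologies": 0.8,
            "operating_environment": 0.7, "autonomy_vs_oversight": 0.5}

# Forecast the effect of raising the agent's level of autonomy
proposed = {**baseline, "autonomy_vs_oversight": 0.9}

before, after = technical_score(baseline), technical_score(proposed)
print(f"technical score: {before} -> {after} ({after - before:+.1f})")
# -> technical score: 64.0 -> 76.0 (+12.0)
```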
“Helping customers focus on the ethical use of AI as they build new solutions or infuse their existing solutions with AI is one of the core principles of Microsoft’s approach. A key component of our Microsoft Azure cloud platform is enabling the creation of applications and services using artificial intelligence by any developer or data scientist across a wide range of scenarios. The EY Trusted AI platform, enabled by Azure, is an important step in helping enterprises build their AI systems with the trust and security that are so essential to successful deployment,” says Steve Guggenheimer, Corporate Vice President, AI Business, Microsoft Corporation.
Offered as a standalone or managed service, the EY Trusted AI platform is built on an open-source architecture to facilitate rapid deployment. The teams plan to develop the capability regularly through updates that add new AI risk metrics, measurement techniques and continuous monitoring tools. For more information on EY and artificial intelligence, visit https://www.ey.com/en_gl/ai.