Vigeo Eiris launches its rating on the extent to which companies consider the impacts of Artificial Intelligence
Press Releases - 09/09/2019
In line with its exclusive methodology for rating sustainability risk and performance, Vigeo Eiris will examine the extent to which companies are committed to the responsible use of Artificial Intelligence.
The rating will focus on the commitments of companies in sectors involved in the design and/or use of Artificial Intelligence to establish clear, documented and verifiable principles, objectives and procedures for preventing the negative externalities of Artificial Intelligence and for managing its social and societal impacts.
This rating will be conducted within the framework of a new criterion in which questions will be specified and weighted according to the sector, value chain, nature of products and services, and size of the company under review. A score on the management of risks and the responsible integration of AI will be consolidated for investors and asset managers, as well as for companies using the solicited rating (Sustainability Rating).
The agency’s approach is to measure the degree to which companies, by complying with the principles of action defined by international public standards (UN, ILO, OECD, etc.), manage to control the risks likely to affect their capacity to create value and to report on their contribution to sustainability. In this respect, Artificial Intelligence is a major material issue in relation to the social responsibility of market players. Ongoing technological change will have systemic consequences on the volume, content and quality of jobs, as well as on skills, income levels, access to healthcare and societal acceptance of ongoing economic and social transformation.
These considerations represent both material risk and growth potential to which investors and asset managers will have to pay increasing attention.
Responsible integration of Artificial Intelligence is now a component of corporate social responsibility
On 22 May, 42 OECD member and partner countries officially adopted the first intergovernmental set of principles on Artificial Intelligence. The signatories undertake to uphold and promote robustness, security, fairness, reliability and trust as international standards. The OECD Principles on Artificial Intelligence were formally adopted at the annual Ministerial Council Meeting, devoted this year to the “digital transition for sustainable development”. They were developed by a multidisciplinary group of more than 50 experts drawn from governments, academia, business, civil society, international bodies, the tech community and trade unions.
The OECD Principles on Artificial Intelligence are structured around five recommendations for public policy and international cooperation that are relevant to companies and investors involved in the design, operation and financing of AI systems:
1. Serve the interests of individuals and the planet by promoting inclusive growth, sustainable development and well-being.
2. Respect the rule of law, human rights, democratic values and diversity, accompanied by appropriate safeguards – allowing, for example, human intervention when necessary – in order to achieve a just and equitable society.
3. Ensure transparency and responsible disclosure of information related to AI systems so that people know when they interact with such systems and can challenge the outcomes.
4. Ensure the robustness, reliability and security of AI systems throughout their life cycle, and the ongoing identification, assessment and control of related risks.
5. Make those organisations and individuals responsible for the development and operation of AI systems accountable for their proper functioning and compliance with the above principles.
The OECD Principles on Artificial Intelligence complement the Ethics Guidelines for Trustworthy AI issued by the European Union’s High-Level Expert Group on Artificial Intelligence.
These principles provide guidance for questioning the commitments, business models and processes that issuers (of equity, bonds and loans) deploy to prepare for the transition: anticipating risks, reducing threats and seizing the opportunities that Artificial Intelligence is expected to present for the well-being and cohesion of human societies, for the rights and conditions of employment, and for corporate competitiveness and the functioning of markets.
The AI Principles, like the broader body of OECD Guidelines, constitute an internationally recognised, legitimate and enforceable reference point for defining, advancing and evaluating the social responsibility of states, companies and investors alike.
For more information on the OECD and artificial intelligence: www.oecd.org/going-digital/ai/