[BMDV] – The seven leading industrialized nations of the West (G7) – Germany, France, the UK, Italy, Japan, Canada and the USA – have agreed on common guidelines for the development of artificial intelligence.
The “Code of Conduct” is aimed at developers of advanced AI systems and calls on them to take measures to promote safe and trustworthy AI. These include early identification and mitigation of risks, transparency about the capabilities and limitations of AI, and the labeling of AI-generated content. The Code of Conduct has now been published together with a declaration by the G7 heads of state and government.
Eleven principles for trustworthy and safe AI
The G7 declaration is aimed at all organizations that develop advanced AI systems such as generative AI. Organizations should respect democracy, human rights and the rule of law during development and not create systems that undermine these values.
Organizations should take eleven principles into account. These include adequate risk precautions from the development stage onward: appropriate measures should be taken to prevent AI from becoming a threat to the safety, health or democratic values of a society. The use of AI should be accompanied by regular transparency reports that, among other things, point out safety risks. In addition, AI-generated content should be made recognizable, for example through watermarks. The G7 also call on developers of advanced AI systems to handle data responsibly, protect personal data and safeguard copyrights.
The Code of Conduct published by the G7 is to be developed further on the basis of specific requirements. The aim is for companies to commit voluntarily to applying the principles. The Code of Conduct also provides important guidance for the legislation planned in individual countries. In Europe, the EU Commission is currently working with the member states and the European Parliament on its own AI regulation; a first draft is expected by the end of the year.
The G7 Code of Conduct has been published in English.