As of November 1, 2023, 47 countries had endorsed this political declaration.
Some ethical principles that can be deduced from the political declaration are:
- Respect for international law: Military use of AI should comply with applicable international law, especially international humanitarian law, and protect civilians and civilian objects in armed conflict.
- Human responsibility and accountability: Military use of AI should be subject to human oversight and control, and those who develop, deploy, or use military AI should be accountable for their actions.
- Transparency and auditability: Military AI capabilities should be developed and deployed with clear and well-defined uses, and their methodologies, data sources, design procedures, and documentation should be transparent and auditable.
- Safety and security: Military AI capabilities should be tested and assured for safety, security, and effectiveness, and should have safeguards to mitigate the risks of failures, unintended consequences, and malicious use.
- Minimization of bias and accidents: Military use of AI should avoid or reduce unintended bias and accidents that could harm human dignity, rights, or interests.
Analyzing this political declaration against the DoD's five ethical principles of Responsible AI (RAI), namely responsible, equitable, traceable, reliable, and governable, the contents of the declaration can be categorized as follows:
- Responsible: The declaration emphasizes the need for ethical, responsible, and lawful use of military AI capabilities, and the need for accountability and human oversight in the development, deployment, and use of such capabilities. It also calls for careful consideration of risks and benefits, and the minimization of unintended bias and accidents. These contents reflect the principle of responsible AI, which requires that AI systems are developed and used in a way that respects human dignity, rights, and values.
- Equitable: The declaration acknowledges the potential of military AI capabilities to enhance the implementation of international humanitarian law and to improve the protection of civilians and civilian objects in armed conflict. It also urges states to take proactive steps to minimize unintended bias in military AI capabilities. These contents reflect the principle of equitable AI, which requires that AI systems are designed and used in a way that is fair, inclusive, and non-discriminatory.
- Traceable: The declaration stresses the importance of transparency and auditability in the development, deployment, and use of military AI capabilities. It also requires states to ensure that military AI capabilities are developed with methodologies, data sources, design procedures, and documentation that are transparent to and auditable by their relevant defense personnel. These contents reflect the principle of traceable AI, which requires that AI systems are developed and used in a way that is understandable, explainable, and verifiable.
- Reliable: The declaration highlights the need for safety, security, and effectiveness of military AI capabilities, and the need for appropriate and rigorous testing and assurance within their well-defined uses and across their entire life-cycles. It also requires states to implement appropriate safeguards to mitigate risks of failures in military AI capabilities, such as the ability to detect and avoid unintended consequences and the ability to respond, for example by disengaging or deactivating deployed systems, when such systems demonstrate unintended behavior. These contents reflect the principle of reliable AI, which requires that AI systems are developed and used in a way that is robust, resilient, and trustworthy.
- Governable: The declaration emphasizes the need for senior officials to effectively and appropriately oversee the development and deployment of military AI capabilities with high-consequence applications, and the need for relevant personnel to exercise appropriate care in the development, deployment, and use of military AI capabilities. It also requires states to ensure that personnel who use or approve the use of military AI capabilities are trained so they sufficiently understand the capabilities and limitations of those systems in order to make appropriate context-informed judgments on the use of those systems and to mitigate the risk of automation bias. These contents reflect the principle of governable AI, which requires that AI systems are developed and used in a way that is subject to human control and intervention.
The political declaration on the responsible use of military AI capabilities also contains some ethical principles that are not explicitly covered by the DoD's five ethical principles for AI. These include:
- Transparency and auditability: States should ensure that military AI capabilities are developed with methodologies, data sources, design procedures, and documentation that are transparent to and auditable by their relevant defense personnel.
- Training and education: States should ensure that personnel who use or approve the use of military AI capabilities are trained so they sufficiently understand the capabilities and limitations of those systems in order to make appropriate context-informed judgments on the use of those systems and to mitigate the risk of automation bias.
- International cooperation and engagement: States should pursue continued discussions among the endorsing States on how military AI capabilities are developed, deployed, and used responsibly and lawfully; promote the effective implementation of these measures and refine these measures or establish additional measures that the endorsing States find appropriate; and further engage the rest of the international community to promote these measures, including in other fora on related subjects, and without prejudice to ongoing discussions on related subjects in other fora.