US defence releases ethical guidelines for AI use in weaponry
Priya Wadhwa
10x Industry


But there is a catch!

Most of the negative publicity that artificial intelligence (AI) receives relates to its unethical and uncontrollable use in warfare. Some of the world's leading experts have called for restrictions on the use of AI, owing to the technology's potential for misuse.

World leaders are, of course, taking this seriously. The World Economic Forum has already set up committees to guide the implementation of artificial intelligence. And now the US Department of Defense (DoD) has published ethical guidelines on the use of artificial intelligence in weapons.

The guidelines were developed by the Defense Innovation Board, comprising leaders in AI, blockchain and related fields working in industry, academia and think tanks. The board conducted an extensive study, beginning in July 2018, which included discussions with experts, public listening sessions, stakeholder interviews and more. It also ran practical exercises with subject matter experts and the intelligence community.

The five ethical principles, approved unanimously by the board, are as follows:

  1. Responsible: The guidelines say that people should exercise appropriate levels of judgment and remain responsible for the development, deployment, use, and outcomes of AI systems. While this is vague, the paper suggests that there is no one-size-fits-all model in defence.

    The guidelines read, “Humans are the subjects of legal rights and obligations, and as such, they are the entities that are responsible under the law. AI systems are tools, and they have no legal or moral agency that accords them rights, duties, privileges, claims, powers, immunities, or status. However, the use of AI systems to perform various tasks means that the lines of accountability for the decision to design, develop, and deploy such systems need to be clear to maintain human responsibility. With increasing reliance on AI systems, where system components may be machine learned, it may be increasingly difficult to estimate when a system is acting outside of its domain of use or is failing. In these instances, responsibility mechanisms will become increasingly important.”

  2. Equitable: The guidelines suggest that the DoD should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons.

  3. Traceable: The DoD’s AI engineering discipline should be sufficiently advanced that technical experts possess an appropriate understanding of the technology, its development processes and the operational methods of its AI systems.

  4. Reliable: AI systems should have an explicit, well-defined domain of use, and the safety, security, and robustness of such systems should be tested and assured across their entire life cycle within that domain of use.

  5. Governable: DoD AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and to disengage or deactivate deployed systems that demonstrate unintended escalatory or other behavior.

While these principles have been published, it is important to note that they are meant to address only the new ethical questions posed by AI: not new projects, just new ethical dilemmas. The processes and projects currently in place remain covered by the department's existing ethics frameworks, which are based on the U.S. Constitution, Title 10 of the U.S. Code, the Law of War, existing international treaties and longstanding DoD norms and values.