War and Artificial Intelligence
The use of artificial intelligence in war threatens humanity, yet there are no rules or ethics governing its use.
In the wars between Russia and Ukraine, Israel and Hamas, and Israel and Iran, the use of autonomous weapons equipped with artificial intelligence is growing rapidly. The use of artificial intelligence in weapons, however, raises concerns for humanity.
Unmanned aerial vehicles, better known as drones, are the spearhead of attacks to defeat opponents. The victims are not only soldiers and military infrastructure; many civilians and public facilities are also hit. Humans are treated merely as combat targets, not as individuals with all their humanity.
Several wars in recent years have demonstrated the increasingly massive use of artificial intelligence (AI) in autonomous weapon systems by warring nations. However, to date, there has been no consensus among countries on regulating or developing ethics for the use of AI for military purposes.
Artificial intelligence (AI) is a branch of computer science focused on solving problems related to human cognition, the process of acquiring knowledge. These systems are based on algorithms that imitate human intelligence, from recognition and learning to creation.
Artificial intelligence systems rely on the algorithms built into them. With a simple algorithm, a computer or machine only responds to the questions or problems posed by humans. AI algorithms, however, are generally based on machine learning (ML) systems, which can create their own instructions from the data and experience they accumulate.
The progress, or intelligence, of a machine learning system is largely determined by its ability to keep "learning" from the data it has. Because of this, even when the input given to the machine is the same, the output is not always the same, making it difficult to predict. This is what distinguishes simple algorithms from machine learning algorithms.
The hallmark of a machine learning system is its ability to provide solutions without explicit commands or human intervention. That is why such systems are often described as a "black box": even when the input data are known, it is difficult to retrospectively explain the output the machine produces.
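The difference between a fixed rule and a learning system can be sketched with a toy example (entirely hypothetical code, not any real system): a simple algorithm always gives the same answer for the same input, while two copies of the same learner, trained on different data, can answer the same input differently.

```python
def simple_algorithm(x):
    # A fixed rule: the same input always yields the same output.
    return "threat" if x > 0.5 else "safe"

class TinyLearner:
    """A toy learner: its decision boundary is shaped by the data it
    has seen, so its behavior depends on its 'experience'."""
    def __init__(self):
        self.threshold = 0.5

    def learn(self, value, label):
        # Nudge the decision threshold toward the labelled example.
        if label == "threat":
            self.threshold = min(self.threshold, value)
        else:
            self.threshold = max(self.threshold, value)

    def predict(self, x):
        return "threat" if x >= self.threshold else "safe"

a, b = TinyLearner(), TinyLearner()
a.learn(0.3, "threat")  # one machine's experience
b.learn(0.7, "safe")    # another machine's experience

print(simple_algorithm(0.4))            # always "safe"
print(a.predict(0.4), b.predict(0.4))   # "threat" vs "safe": same input, different outputs
```

Even in this tiny sketch, explaining *why* a given learner answered as it did requires knowing its entire training history, which is the essence of the "black box" problem at scale.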
On October 6, 2023, the International Committee of the Red Cross (ICRC) noted that artificial intelligence is already being used for military purposes in at least three areas: improving autonomous weapons, supporting information and cyber warfare operations, and speeding up military decision-making.
Autonomous weapons
The application of AI in autonomous weapons has drawn particular attention because of its serious impact on humans and humanity. Artificial intelligence can be used to attack a person, vehicle, or facility directly, without human intervention.
According to Neil Davison, a scientific and policy adviser at the ICRC, writing on the ICRC website on July 26, 2022, humans can activate autonomous weapons, but they do not know specifically who or what the weapon will target, nor when and where it will attack.
Autonomous weapons work with sensors and software that match what the sensors detect against a profile of the target environment. An autonomous weapon can detect a military vehicle or a person's movement, and it is that movement which triggers the attack. The attack is not determined by whoever holds or controls the weapon.
It is this process of deciding when to attack that raises concern, because humans are barely involved in the use of violence. As a result, autonomous weapons become difficult to control.
There is no guarantee that the attacked vehicles or people are truly from the military and not civilians. Even if the target is indeed military, there is no guarantee that nearby civilians or facilities will be spared from the impact of the attack.
This situation, according to Jordan Richard Schoenherr, assistant professor of psychology at Concordia University, Canada, writing in The Conversation on December 10, 2023, leads to total war: a war of all against all.
In such a total war, the line between civilians and civil or military infrastructure becomes blurred. The warring parties come to treat everyone, including women and children, and every public facility, including hospitals and other humanitarian facilities, as legitimate targets.
Information war
Beyond weapons, artificial intelligence is also widely used in military decision-making. AI-based computers can analyze and combine data, and even draw conclusions, to identify and assess the behavior of a person or object. AI can also quickly and efficiently predict future actions and situations.
The conclusions drawn by these learning systems can be used as recommendations for military operations: determining who to attack and when, and even suggesting the use of nuclear weapons. This makes many people worry that the militarization of AI will have major consequences for international humanitarian law and the ethics of war.
Although such concerns have surfaced in several recent wars, some argue the opposite: decision-making by machines is advantageous because machines are better able to comply with laws and ethics and to avoid civilian casualties.
Artificial intelligence is also widely used as a tool to win the information war to support attacks. AI also becomes a key support in cyber wars to cripple the enemy's strength and facilities.
Artificial intelligence and machine learning can automatically search for vulnerabilities in enemy systems to be exploited, as well as detect weaknesses in their own systems. When cyberattacks occur, AI can automatically retaliate against the opponent's information system. This effort can minimize the impact of cyberattacks on the community or civil infrastructure.
In addition, in war, the winner is not solely determined by how many enemies can be defeated. The winner of the war is the group that is able to control the information circulating in the wider community.
A successful information war can obscure the real facts of a conflict and even win support from communities in countries not directly involved in the fighting.
Information warfare has long been an inseparable part of armed conflicts. The digital battlefield and artificial intelligence have changed the way information and disinformation are created and disseminated.
AI-supported systems have produced a lot of fake content in the form of text, photos, audio, and video. AI can also change the nature and scale of information manipulation and its real-world impacts.
Governance
Although the militarization of artificial intelligence has significant impacts on humanity, security, and global warfare, the mechanisms for governing its use and mitigating its potential risks are not yet available.
Tshilidzi Marwala, Rector of the United Nations University (UNU), wrote on the UNU website on July 24, 2023, about the difficulty of regulating the military use of artificial intelligence. Because the technology is highly complex and develops rapidly, existing rules quickly become difficult to apply and enforce.
Existing governance of artificial intelligence is generally based on past events, not on what may happen in the future. The rapid development of AI means that rules must be dynamic and able to adapt to evolving situations.
On the other hand, international cooperation in developing regulations for the use of artificial intelligence is also difficult to establish due to the lack of consensus among countries, especially those that are advanced in military and AI industries. Moreover, the dual nature of artificial intelligence, which can be utilized for both civilian and military purposes, complicates the process of regulating AI.
Regulating the ethics of artificial intelligence is also complex. Many questions about its use remain unanswered, such as how autonomous weapons distinguish between soldiers and civilians, or who is responsible when AI-powered weapons unintentionally damage civilian facilities or kill civilians.
Beyond that lies a fundamental, still unanswered question: is it ethical to delegate a decision of life and death over a person, even an enemy or someone considered evil, to a machine? Machines originally created to ease human life would thus become a means of killing humans.
We cannot retreat from AI; its use will ultimately be inevitable. The question is to what extent humans can restrain themselves so that the technology they create does not diminish their humanity.