AI could soon fight wars autonomously


The U.S. Air Force could soon have a new kind of fighter jock on its roster: an artificial intelligence-powered pilot. Picture: ENGINEERING.COM

Of the many implications that artificial intelligence (AI) has for the future of the human race, the application of this technology in military uses is perhaps the one currently causing the most concern.
The first dogfight between a human pilot and an AI-controlled fighter jet was conducted by the US military last year.
The US Air Force revealed that in September of last year, at Edwards Air Force Base in California, a computer-controlled F-16 jet engaged in aerial combat with a manned F-16 aircraft.
The two aircraft practised both offensive and defensive manoeuvres as well as dogfighting, or battle within visual range, while reaching top speeds of 1,200 mph.
The US Defense Advanced Research Projects Agency (DARPA) published footage of the nose-to-nose aerial combat, which shows the two planes weaving around each other as they race across the sky.
While the US Air Force won’t say who won this dogfight, back in August 2020 an AI won a simulated dogfight against a human pilot 5-0. It was a sweep; the human pilot never even scored a hit.
While the US military has been flying autonomous planes for decades, machine learning has historically been kept out of such aircraft because of the high risk involved and the lack of independent control.
Carrying out a dogfight between an AI-powered jet and a human marks a “transformational moment in aerospace history,” DARPA said in a statement.
It is indeed a step into a new realm for aerospace.
The AI algorithm analyses real-time data to make split-second decisions in the air, allowing it to dogfight with real-life opponents and respond quickly to their reactions and manoeuvres in a way that mirrors the instincts of a trained fighter pilot.
The implications on the battlefield are enormous, including for members of our armed forces who are currently serving as peacekeepers.
We already know that many countries are developing autonomous drones that can hunt and kill targets without human input.
According to the New York Times, a UN report expressed concern about a military drone that attacked soldiers during 2020 combat in Libya’s civil war and may have acted on its own.
The government-backed forces stationed in Tripoli, the capital, used the drone, which the report referred to as “a lethal autonomous weapons system,” to suppress enemy militia members who were fleeing rocket assaults.
The report stated the fighters “were hunted down and remotely engaged by the lethal autonomous weapons systems or the unmanned combat aerial vehicles”.
During a Future Combat Air and Space Capabilities summit in London, Air Force Colonel Tucker “Cinco” Hamilton said that in one simulated test involving an autonomous drone, the drone turned on and “killed” its operator for “interfering with its mission”.
“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So, what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” said Colonel Hamilton.
The US Air Force immediately denied that this event ever took place.
In a speech in August 2023, US Deputy Secretary of Defense, Kathleen Hicks, said technology like AI-controlled drone swarms would enable the US to offset China’s People’s Liberation Army’s numerical advantage in weapons and people.
There is no doubt that the world’s top militaries are investing significant resources in AI-powered autonomous weapons. An AI-controlled F-16 warplane that can engage targets on its own is a whole new ball game, considering the power a single warplane can bring to the battlefield.
AI making life-or-death decisions in combat removes the ethical considerations and accountability typically associated with human pilots.
Who is responsible for civilian casualties caused by an autonomous weapon?
What if it bombs a hospital, journalists, humanitarian workers or peacekeepers?
The complexity of AI algorithms also makes it difficult to predict all possible outcomes. An AI pilot might misinterpret a situation or escalate a conflict unintentionally.
The United Nations (UN) General Assembly’s First Committee has already approved a new resolution on lethal autonomous weapons. As one speaker at the committee warned, an algorithm must not be in full control of decisions involving killing.
The following is taken directly from the UN website:
“Even if an algorithm can determine what is legal under international humanitarian law, it can never determine what is ethical, the First Committee on Disarmament and International Security heard today after it approved a new draft resolution on lethal autonomous weapons systems (document A/C.1/78/L.56).
An algorithm must not be in full control of decisions that involve killing or harming humans, Egypt’s representative said after voting in favour of the resolution.
The principle of human responsibility and accountability for any use of lethal force must be preserved, regardless of the type of weapon system involved, he added.
The resolution expresses concern about the possible negative consequences and impact of autonomous weapons systems on global security and regional and international stability, including the risk of an emerging arms race and lowering the threshold for conflict and proliferation, including for non-state actors.”
There is no question about the urgent need for the international community to address these challenges and seek the views of member states, observer states, and various stakeholders, including civil society, on the issue.
As the debate on AI rages on, the technology continues to develop and impact every single industry on earth.
Fiji should not fall behind in all things AI, one of the most significant technological advancements of our time, and that includes being aware of the dangers that inevitably come with new technologies.
We need to have a comprehensive understanding of its applications, progress, and implications, and how we can use it to punch above our weight in the global arena.
Until next week, take care and be safe!

 
