A virtual pilot can exploit a fighter's full tactical capabilities, continuously learning until it can shoot down its human counterpart.
In August, the Pentagon's Defense Advanced Research Projects Agency (DARPA) held a simulated close-range air combat trial called AlphaDogfight. In the contest, a top US pilot with the call sign "Banger" wore a virtual-reality headset and used a simulated cockpit to fly an F-16 in close-range combat against a fighter controlled by artificial intelligence (AI) software from the company Heron Systems.
The AI-controlled fighter quickly defeated Banger's F-16 in the first four rounds. The American pilot tried changing tactics in the final round, but still failed to land a single shot and was shot down by the AI shortly afterward. Experts cite several reasons why the US pilot lost to the artificial intelligence, and warn of potential risks in using AI for military purposes.
According to military analyst Sebastien Roblin, the AI pilot exceeded human limits in the test by repeatedly executing precise attacks within very short windows. It also pushed the airframe to its performance limits, since an AI does not suffer from g-force overload during complex maneuvers the way a human does.
Heron Systems' AI uses deep reinforcement learning (DRL), in which a computer runs combat simulations nonstop. Initially, the AI pilot learned only how to keep the fighter from crashing into the ground. After about four billion simulated engagements, however, it had mastered the maneuvering skills of close-range aerial combat.
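Heron Systems has not published its training code, and its system uses deep neural networks far beyond what fits here. As a rough illustration of the same trial-and-error loop described above (simulate, reward, update), the sketch below uses tabular Q-learning on a toy "don't crash" task; the environment, reward values, and all names are invented for illustration only.

```python
import random

random.seed(0)

# Toy stand-in for a flight simulator: the agent's only goal is to
# keep its altitude above zero. Purely illustrative, not Heron
# Systems' actual environment.
ALTITUDES = range(11)          # discrete altitude levels 0..10
ACTIONS = (-1, 0, 1)           # descend, hold, climb

def step(alt, action):
    """Apply an action plus random turbulence; crashing ends the episode."""
    alt = max(0, min(10, alt + action + random.choice((-1, 0))))
    crashed = alt == 0
    reward = -100 if crashed else 1   # survive a timestep: +1, crash: -100
    return alt, reward, crashed

# Tabular Q-learning: much simpler than deep RL, but the same loop
# of simulated episodes driving incremental value updates.
Q = {(a, act): 0.0 for a in ALTITUDES for act in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(2000):            # "nonstop" simulated sorties
    alt = 5
    for _ in range(50):
        if random.random() < epsilon:  # explore occasionally
            act = random.choice(ACTIONS)
        else:                          # otherwise exploit the best known move
            act = max(ACTIONS, key=lambda x: Q[(alt, x)])
        new_alt, reward, crashed = step(alt, act)
        best_next = max(Q[(new_alt, x)] for x in ACTIONS)
        Q[(alt, act)] += alpha * (reward + gamma * best_next - Q[(alt, act)])
        alt = new_alt
        if crashed:
            break

# After training, the learned policy prefers climbing when dangerously low,
# mirroring how the real AI first learned simply not to hit the ground.
print(max(ACTIONS, key=lambda x: Q[(1, x)]))
```

The reward shaping here (a small bonus for surviving, a large penalty for crashing) is what lets the agent discover "avoid the ground" before anything else, the same staged learning the article describes.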
Humans can try to exploit the AI's limitations, as Banger did against the computer in the final round. However, AI pilots can quickly learn from their failures and improve their combat skills on their own.
The test focused on close-range gun combat, so the organizers could not challenge the AI with complex tasks such as using long-range sensors and missiles. These are the factors that can decide an air battle before the two sides even come within visual range. In addition, the one-on-one format of the simulation is far from real combat, where each side may be supported by dozens of surface ships and air-defense units.
AI's machine-learning approach also has major limitations. AI pilots can have difficulty working in teams, and limited training data can cause the learned model to produce suboptimal results when it faces an unfamiliar situation.
According to Roblin, dependence on AI also carries risks. Most unmanned aerial vehicles (UAVs) today are remotely controlled by humans, combined with automatic algorithms that avoid collisions and return the aircraft to base when the link to the ground control station is lost. Remote control, however, prevents a UAV from maneuvering at high speed and attacking accurately.
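The return-to-base fallback mentioned above is, at its core, a small piece of decision logic. A minimal sketch of such a failsafe might look like the following; the mode names, timeout threshold, and function are hypothetical and not drawn from any real UAV autopilot.

```python
from enum import Enum

class Mode(Enum):
    REMOTE = "remote-controlled"          # normal operation under a human pilot
    RETURN_TO_BASE = "return to base"     # autonomous fallback on link loss

# Illustrative threshold: how long the control link may stay silent
# before the aircraft assumes it has been lost.
LINK_TIMEOUT_S = 5.0

def choose_mode(seconds_since_last_packet: float) -> Mode:
    """Fall back to autonomous return-to-base when the control link is lost."""
    if seconds_since_last_packet > LINK_TIMEOUT_S:
        return Mode.RETURN_TO_BASE
    return Mode.REMOTE

print(choose_mode(0.3).value)   # prints "remote-controlled"
print(choose_mode(12.0).value)  # prints "return to base"
```

The round-trip latency of that human-in-the-loop link is exactly what Roblin says keeps remotely piloted UAVs from maneuvering and attacking as quickly as an onboard AI could.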
The main trend in modern militaries is to use manned fighters to coordinate AI-equipped UAVs on specific high-risk tasks such as swarm attacks and suppression of enemy air defenses. Such systems are always at risk of jamming, which can strip them of combat capability or even let them fall into enemy hands.
Future autonomous war machines also raise moral concerns, since they lack human emotions. A UAV equipped with facial-recognition AI could make fatal errors and attack indiscriminately.
"The AlphaDogfight trial shows that AI outperforms humans at fast, accurate computation in a contest with clear rules. Whether it can do the same in high-tech war remains a big question mark," Roblin said.
Duy Son (according to CNBC)