Artificial intelligence is no longer the stuff of science‑fiction films but the quiet engine of modern militaries. From the networked battlefield and drones to simulations, it is changing how commanders lead, fight, and decide. But along with the benefits come serious risks and ethical questions that cannot be avoided.
From the Networked Battlefield to Decision Superiority
Technology has flowed back and forth among the civilian sector, the space industry, and the military for decades. Algorithms we now call AI have long helped detect and destroy aerial targets, and today's militaries are built on data, computing power, and the cloud. The key is no longer just air or naval superiority, but the ability to decide faster and take effective action.
The lecture outlined the full spectrum of AI use: from command-and-control systems, through intelligence and reconnaissance with satellites and tactical drones, to autonomous weapons and target prioritization. In cyber defense and offense, AI guards, attacks, and quietly harvests information for further decisions. In logistics it optimizes supply and movement, and in training it enables safe simulations from a single weapons platform all the way up to alliance-level staffs. Artificial intelligence also helps develop new materials and translate civilian ideas into military applications.
What We Gain and What We Risk
The advantages are tangible: better and faster information assessment improves decision quality. Unit effectiveness grows, and autonomous systems reduce risk to one’s own soldiers, for example in minefield reconnaissance or emergency resupply. More precise targeting reduces consumption of expensive munitions and minimizes strikes on decoy targets. Training in a synthetic environment is cheaper and saves lives.
But the risks are just as numerous: models can produce different outputs from similar inputs, confusing operational planners. The 'black box' nature of the decision-making becomes a problem wherever the proportionality and legitimacy of using force must be demonstrated after the fact. There is a danger of misidentification and bias, as well as dangerous 'learning' from bad examples, where a system starts treating a civilian object as a legitimate target. Hallucinations and false positives appear as well.
Ethics, Rules, and a Possible Future
The ethical dilemmas are fundamental: the loss of human control over the use of force, unclear accountability for errors and civilian casualties, and the strain placed on international humanitarian law. The speaker also mentioned systems that propose targets while humans merely approve them, such as Habsora or Lavender, which push the boundary toward mechanized killing. Such practices collide with the question of human dignity and whether machines may decide over life and death. This is why practitioners call for transparency and explainability, which today's AI does not provide.
The 'Skynet' question is not just sci‑fi but a reminder that what is still missing, above all, is artificial general intelligence; by some estimates it could arrive within 5 to 10 years. At the same time, ethical constraints remain insufficient, increasing the risk that automated systems get caught in a loop of rapid reactions and mutual misinterpretation. In the better scenario, such escalation can be halted and rules agreed for the use of AI in the military. In the worse one, the spiral tightens toward destruction, which is why now is the time to set the rules and adhere to them.