A panel discussion with experts from banks, consulting firms, and security vendors showed that artificial intelligence is becoming part of cyber defense, but its deployment is being slowed by data governance and regulation. Language models can make analysts’ work easier, but attackers use them as well. The key will be sensible internal rules, verification of outputs, and phased pilots.
How banks and integrators are experimenting with AI
Banks are still holding off on public tools like ChatGPT, mainly over questions about what data leaves the organization, where it goes, and how that holds up legally. The priority is cloud encryption, contractual arrangements, and precise rules of use; tools like Copilot or GitHub require agreed-upon terms. Some teams are already testing initial internal use cases exclusively on internal data, with clearly defined filters and constraints. In incident analysis they work mainly with metadata and pseudonymized data; detailed evidence is collected and kept locally.
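For illustration only, a minimal Python sketch of the kind of pseudonymization described above: direct identifiers in an incident record are replaced with keyed hashes, and only metadata would leave the local environment. The field names, the HMAC scheme, and the record structure are assumptions for the example, not a description of any bank's actual tooling.

```python
import hashlib
import hmac
import json

# Secret key kept on-premises; never shared with the model provider.
PSEUDONYM_KEY = b"replace-with-a-locally-managed-secret"

def pseudonymize(value: str) -> str:
    """Deterministically replace an identifier with a keyed hash."""
    return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def strip_incident_record(record: dict) -> dict:
    """Keep only metadata; pseudonymize direct identifiers before any external analysis."""
    return {
        "timestamp": record["timestamp"],
        "event_type": record["event_type"],
        "src_ip": pseudonymize(record["src_ip"]),
        "user": pseudonymize(record["user"]),
        # Raw payloads and full evidence stay in local storage only.
    }

if __name__ == "__main__":
    raw = {
        "timestamp": "2024-05-06T10:32:00Z",
        "event_type": "failed_login",
        "src_ip": "192.0.2.17",
        "user": "j.novak",
        "payload": "<reference to locally stored evidence>",
    }
    print(json.dumps(strip_incident_record(raw), indent=2))
```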
Consulting and integration firms, such as Aliter, are embedding AI elements into deployed products while also developing internal tools for process and document management. They leverage their own development teams and aim to optimize internal procedures as well. In the defense sector they run into stricter regulations, so development “in their own sandbox” is preferred and usage is confined to a limited, thoroughly controlled environment. Real deployments here move more cautiously, with an emphasis on data sovereignty.
Virtual assistants for the SOC: help and limits
The most immediate trend is using large language models as assistants for security analysts and administrators. They can review configurations, flag errors, suggest priorities, and speed up manual information lookup for complex security products. Such tools can shorten both implementation and monitoring time, provided the prompts and context are well set. At the same time, control must be maintained: verify outputs and check whether the recommendations actually fit the given environment.
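As a hedged illustration of such an assistant, the sketch below asks a language model to review a short firewall configuration and flag risky rules. The OpenAI Python SDK, the model name, and the prompt are assumptions made for the example; the panel's caveat applies unchanged: the output is a draft for an analyst to verify, not an instruction to follow.

```python
# Illustrative "analyst assistant" call: ask a language model to review a
# firewall configuration and flag likely mistakes. The SDK and model name are
# assumptions; in practice such a call would run against an internally
# approved deployment under agreed terms, not an arbitrary public endpoint.
from openai import OpenAI

client = OpenAI()  # expects an approved endpoint and API key in the environment

firewall_config = """
allow tcp any -> 10.0.0.0/8 port 22
allow tcp any -> any port 3389
deny  ip  any -> any
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "You are a security assistant. Review the configuration, "
                       "flag risky rules, and explain why. Do not invent rules "
                       "that are not present.",
        },
        {"role": "user", "content": firewall_config},
    ],
)

# The recommendation is treated as a draft: an analyst still has to confirm
# that it fits this particular environment before anything is changed.
print(response.choices[0].message.content)
```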
Attackers are already using AI for more convincing phishing campaigns, so the role of junior analysts will shift from scripting toward verification and hands-on incident analysis. The panelists pointed out that AI is “just” a statistical tool: it may handle 99.9% of cases, but you still have to account for the exceptions. Protection must therefore be multilayered and combine multiple systems rather than relying on a single model. It is also important to protect training data and continuously check what the model generates, because the machine itself will not eliminate these risks.
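To make the “multilayered” point concrete, here is a small illustrative sketch, not anything the panelists described: a phishing verdict combines a hypothetical model score with independent rule-based signals, so a case the model misclassifies can still be caught by another layer. All thresholds and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class EmailSignals:
    model_score: float        # 0.0-1.0 from a phishing classifier (hypothetical)
    spf_pass: bool            # result of an SPF check
    known_bad_domain: bool    # sender domain on a local blocklist
    url_mismatch: bool        # link text differs from actual target

def verdict(signals: EmailSignals) -> str:
    # Layer 1: hard rules that fire regardless of the model.
    if signals.known_bad_domain or (signals.url_mismatch and not signals.spf_pass):
        return "quarantine"
    # Layer 2: the statistical model, with a band that escalates to a human.
    if signals.model_score >= 0.9:
        return "quarantine"
    if signals.model_score >= 0.6:
        return "send to analyst review"
    return "deliver"

# Even with a low model score, the rule-based layer catches this message.
print(verdict(EmailSignals(model_score=0.42, spf_pass=False,
                           known_bad_domain=False, url_mismatch=True)))
```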
When will AI become the key to cybersecurity?
Predictions vary: in one view, AI will become a key part of the toolset within a year, mainly thanks to increased productivity and better information retrieval. Others speak of a two- to three-year horizon for routine, narrowly defined tasks. Still others see three to five years and warn that the “truthfulness” of models must first be addressed and hallucinations minimized. They agree, however, that the ability to ask the right questions and rigorously verify outputs will be as important as the technology itself.
Complex tasks, such as a cohesive view of large volumes of security logs, may take longer – estimates of seven to eight years were mentioned. Turbulence among major players and in investment can speed up or slow down the pace, but the direction is clear. AI will assist more and more; in some places it will replace basic routine, elsewhere it will remain under human supervision. In practice, the decisive factors will be data-handling rules, regulations, and the willingness to verify, rather than blindly trusting what appears on the screen.