Explainable AI: an asset to improve the relationship between people and AI

Explainability can strengthen the human-AI relationship and help improve the performance of security teams by combining natural language processing (NLP) with threat analysis.

Cybersecurity is no longer a problem at human scale. Digital environments and security requirements have become too complex for people to manage on their own, yet manage them we must. An attacker only needs to succeed once, while security teams have to succeed every time.

Organizations now hold far too much data for people to track and anticipate every cyberattack. It is impossible to manually review all of an organization's security logs and write static detections. Cybersecurity professionals need support in this mission.

In a context where AI has become essential to cybersecurity, Explainable Artificial Intelligence (XAI) can be a major asset for security professionals and managers.

Cybersecurity specialists are skeptical by nature. Before they can trust a system they are using, they need to understand it. Artificial Intelligence (AI) and human teams need to work together to defend against attackers using ever more sophisticated techniques. While AI advances can help security teams optimize their performance, progress is not limited to developing advanced mathematical algorithms alone. People need to be able to operate and control their systems and understand how AI affects them.

The focus here is on XAI, a concept that runs counter to the "black box" so often associated with AI algorithms. In cybersecurity, a black box is a system that ingests data and produces results without revealing its inner workings. These decisions are often delivered without explanation, and with security now a central concern of corporate boards, teams must be informed of AI's expected effects, its potential biases, and its actions; they cannot be left without understanding its ins and outs.

XAI remedies this situation by ensuring that IT security professionals can look "under the hood" of the black box and understand the choices made by technology (and especially AI). XAI provides detailed reasoning to clarify AI decisions. To build the necessary trust, people need to remain in control and understand the AI decision-making process. It is not about questioning every single decision, but about being able to drill into the decision-making process when necessary. This ability is essential when investigating cyber incidents and determining what action to take.

Now, it is no longer enough to know that a security event occurred: security teams need to understand how and why.

It is impossible to identify a vulnerability or the underlying cause of an attack without knowing how a cyber attacker was able to breach defenses, or why the AI blocked the threat. So how is XAI implemented in cybersecurity terms?

XAI presents data to people in a form that is accessible and, wherever possible, secure. AI-generated results are expressed in simple, straightforward language and paired with visual aids. Processes and procedures that allow users to understand and trust the results and data generated by machine learning will soon be at the heart of Security Operations Centers (SOCs).

XAI helps people understand the different levels of decision making, from the highest to the lowest level of abstraction. By programming AI to explain the micro-decisions it makes on a daily basis, teams are freed to focus on macro-decisions, which affect the entire business and require context.
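As a minimal sketch of what explaining a micro-decision can look like, the toy function below scores a single connection and reports which features pushed the score over the threshold. The feature names, weights, and threshold are all hypothetical, not any real product's detection model.

```python
# Toy micro-decision with an attached explanation.
# Feature names and weights are illustrative assumptions.

def explain_alert(features, weights, threshold=0.5):
    """Score an event and report which features drove the verdict."""
    contributions = {name: features[name] * weights[name] for name in weights}
    score = sum(contributions.values())
    verdict = "blocked" if score >= threshold else "allowed"
    # Rank features by how much each one contributed to the final score.
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    reasons = [f"{name} contributed {value:.2f}"
               for name, value in ranked if value > 0]
    return verdict, score, reasons

features = {"rare_external_domain": 1.0, "unusual_hour": 1.0,
            "data_volume_anomaly": 0.0}
weights = {"rare_external_domain": 0.4, "unusual_hour": 0.2,
           "data_volume_anomaly": 0.3}

verdict, score, reasons = explain_alert(features, weights)
print(verdict, round(score, 2))  # blocked 0.6
for reason in reasons:
    print("-", reason)
```

The point is the returned `reasons` list: instead of a bare verdict, the analyst gets a ranked account of why the decision was made, which they can accept or drill into.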

The use of natural language processing (NLP) in the analysis of threat data illustrates this trend. Combined with sophisticated AI threat analysis and autonomous response, NLP helps make sense of the data and can "write" comprehensive reports. These reports can explain the entire course of an attack, step by step, from the initial stages through its progression, including the corrective measures required. In some cases, NLP can also be applied to existing frameworks, such as the widely used MITRE ATT&CK model, to help express results in a way that adds value to security analysts' workflows, even for seasoned practitioners.
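The step-by-step reporting described above can be sketched very simply: take a sequence of raw detections, tag each with a MITRE ATT&CK technique ID, and emit one plain-language line per step. The detection names, device names, and the small mapping table are illustrative assumptions (the two technique IDs, T1071 and T1048, are real ATT&CK entries).

```python
# Hypothetical sketch: turning raw detections into an analyst-friendly
# narrative tagged with MITRE ATT&CK technique IDs.

ATTACK_MAP = {
    # Illustrative detection names mapped to real ATT&CK techniques.
    "beaconing_to_rare_domain": ("T1071", "Application Layer Protocol"),
    "large_outbound_transfer": ("T1048", "Exfiltration Over Alternative Protocol"),
}

def narrate(detections):
    """detections: list of (device, detection_name) in chronological order."""
    lines = []
    for step, (device, detection) in enumerate(detections, start=1):
        tech_id, tech_name = ATTACK_MAP.get(detection, ("?", "Unmapped technique"))
        lines.append(f"Step {step}: {device} triggered '{detection}' "
                     f"(ATT&CK {tech_id}: {tech_name}).")
    return "\n".join(lines)

report = narrate([
    ("laptop-042", "beaconing_to_rare_domain"),
    ("laptop-042", "large_outbound_transfer"),
])
print(report)
```

A real system would generate the sentences with a language model and a much richer taxonomy; the sketch only shows the shape of the output an analyst would read.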

NLP may even expose the assumptions behind a cyberattack, explaining the "how" in addition to the "what". It not only breaks down threat analysis and the corresponding response actions in a simple, easily digestible way, but also informs teams on how to prevent these threats from recurring.

Security leaders are not the only ones recognizing the importance of XAI: regulators are also aware of the risks of AI training methods.

Typically, AI is trained on large and sensitive datasets, which can be shared by multiple teams and organizations in different regions of the world, complicating oversight. To make life easier for organizations and regulators facing these complex issues, XAI needs to be generalized and harmonized, bringing transparency and objectivity and, ultimately, improving the robustness of AI.

Organizations also need to take steps to ensure AI benefits human teams, making them more efficient and robust. If biases or inaccuracies are detected in algorithms, organizations can rely on XAI to identify where those biases arise and take steps to reduce them, in addition to understanding the processes behind its decisions.
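One simple way to spot where a bias might be forming is to compare a model's alert rate across groups of devices or users; a large, unexplained gap is a cue to inspect the model's inputs. The group names and counts below are made up for illustration.

```python
# Minimal bias-check sketch: compare alert rates across device groups.
# Groups and event counts are illustrative assumptions.

def alert_rate_by_group(events):
    """events: list of (group, alerted) pairs -> alert rate per group."""
    totals, alerts = {}, {}
    for group, alerted in events:
        totals[group] = totals.get(group, 0) + 1
        alerts[group] = alerts.get(group, 0) + (1 if alerted else 0)
    return {g: alerts[g] / totals[g] for g in totals}

events = [("hq_devices", True), ("hq_devices", False), ("hq_devices", False),
          ("remote_devices", True), ("remote_devices", True),
          ("remote_devices", False)]

rates = alert_rate_by_group(events)
for group, rate in sorted(rates.items()):
    print(f"{group}: {rate:.0%} alert rate")
```

Here remote devices alert at twice the rate of headquarters devices; that alone does not prove bias, but it tells the team exactly where to look.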

By identifying and correcting these biases, AI helps eliminate the challenges human teams will face in the future, rather than magnifying them. For AI algorithms to truly strengthen security defenses, the people behind the AI need to understand its decisions through explanation.


By Max Heinemeyer, VP of Cyber Innovation at Darktrace

Also read:

> “Thanks to AI we have created an immune system similar to human but for networks and systems”

> How Darktrace, a European startup, is transforming AI and cybersecurity

> Cybersecurity: AI brings scalability to whitelisting

> AI to take the lead in the hunt against cyberpirates
