A conversation with Richard Marko, Chief Executive Officer, ESET, Bratislava
Led by: Patrick Tucker, Technology Editor, Defence One, Washington, D.C.
In recent years, both cybercrime and AI have advanced considerably. Phishing emails have become harder to recognize, and attributing a cyber attack still requires proof. It is widely accepted that human beings are the weakest point of any defence system, so it is no wonder that developing and deploying AI has become something of an arms race. Yet we must bear in mind that it is humans who train the AI, so the AI remains dependent on human intelligence. Sometimes AI fails to recognize very simple wrongdoing that people commit. What AI is better at is memory and the processing of data. Because we cannot fully understand how an AI reaches its decisions and conclusions, there is considerable hesitance and doubt surrounding neural-network technology. Mr Marko also stresses that the technology is not hard to obtain, but using it correctly and efficiently is neither obvious nor easy.
We also observe trends in legislation and regulation. The EU is trying to identify tasks for which AI may not, or cannot, be used, and big-tech companies are adopting ethical principles. However, Mr Marko reminds us that cybercrime is not bound by regulations and that these ethical standards need to be scrutinized in practice. As he puts it, "balancing ethical aspects and business is a tricky thing." It is worth noting that the way AI is applied in commercial practice can also serve as a model for how it might be applied to identify the most vulnerable populations. Here, the EU needs to ensure that there is sufficient research and development in cybersecurity, and that it possesses the tools to boost it further.