Richard Marko has been CEO of ESET since January 2011; before that, he was the company’s Chief Technology Officer from 2008 to 2010. As CEO he has overseen the deployment of ESET’s new generation of products for consumers and businesses, and the global growth ESET has achieved as a result.
Richard was one of the co-designers of ESET’s award-winning antivirus system and its proprietary scanning engine, first released in 1995. In 1998, as Chief Software Architect, he co-designed the NOD32 technology behind ESET’s next generation of products, and in 2007 he oversaw the launch of ESET’s new flagship consumer product, ESET Smart Security. He also developed ESET’s Advanced Heuristics, which in 2002 delivered a technological breakthrough in malware detection. Thanks in part to this technology, ESET is now a world leader in IT security.
Richard Marko began his career at ESET while still a student at the Faculty of Electrical Engineering and Information Systems at Košice Technical University, in Slovakia; he graduated in 1996 with a Master of Science with Distinction, and his dissertation won the Rector’s Award.
What are the benefits and limitations of using AI in cybersecurity? What needs to be done to prevent AI from being manipulated, or to ensure that biometric data are not used for surveillance? AI systems are increasingly used in critical sectors (transportation, health, law enforcement, and military technology) – what should be the key priority for policymakers to ensure the security of these systems?
Since the start of the COVID-19 pandemic, the number of cyberattacks has increased dramatically. With data security under greater threat than ever, more and more companies are turning to AI for digital protection. A new wave of AI-powered solutions can keep malicious actors on their toes while giving IT teams much-needed relief. However, AI is far from a universal remedy for security incidents in the digital space, both because of the inherent limitations of AI at its current stage and because malicious actors are increasingly using AI themselves to carry out attacks.