Patrick Tucker is the science and technology editor for Defense One, where he writes on information technology, AI, big data, predictive analytics, cybernetics, nanotechnology, cybersecurity, invention, climate change and climate change mitigation, demography, social media, public policy, and all the ways that emerging technology influences national security. Previously, he was deputy editor of The Futurist magazine and director of communications for the non-profit World Future Society for nearly a decade. His writing has appeared in Slate, Salon, The Atlantic, Quartz, MIT Technology Review, National Journal, The Wilson Quarterly, The Johns Hopkins Magazine, and the Utne Reader, and as part of Discovery Channel broadcasts and special features, among other outlets.
He is also the author of The Naked Future: What Happens in a World That Anticipates Your Every Move (Current, 2014).
He has been quoted as a futurist in The Washington Post, The New York Times, the Chicago Tribune, PC Magazine, Laptop, Elle Canada, Wired.com, SmartMoney.com, Voice of America, and the Discovery Channel. He has been a guest on programs such as CBS Sunday Morning with Charles Osgood and Science Fantastic with Michio Kaku, and he makes regular appearances on MSNBC, FOX, CNN, and other cable news outlets to offer analysis on the intersection of emerging technology and national security.
He holds a master’s degree in writing from Johns Hopkins.
Where is the future of innovative satellite design heading? How can the international community curb the growth of space junk to prevent satellite damage? What are the major problems facing the satellite industry, and what role does the private sector have to play in overcoming them? What does the democratization of the satellite industry look like?
How can the international community strengthen regulatory frameworks for the ethical use of AI in theatre operations? Should some emerging-tech weapons, such as strategic nano weapons, be categorized as weapons of mass destruction? How much should governments invest in research and development on human augmentation and human-machine teaming?
What are the benefits and limitations of using AI in cybersecurity? What needs to be done to prevent AI from being manipulated, and to ensure that biometric data are not used for surveillance? AI systems are increasingly used in critical sectors such as transportation, health, law enforcement, and military technology – what should policymakers prioritize to ensure the security of these systems?
Since the start of the COVID-19 pandemic, the number of cyberattacks has increased dramatically. With data security under greater threat than ever, more and more companies are turning to AI for digital protection. A new wave of AI-powered solutions can keep malicious actors on their toes while giving IT teams much-needed relief. However, AI is far from an invincible solution for all security incidents in the digital space, both because of the inherent limitations of AI at its current stage and because malicious actors are increasingly using AI themselves to carry out attacks.