Michael Rogers: Why AI security is a priority now
Editor’s Note: This is the second of a 3-part series offering insights into artificial intelligence and the future of business distilled from a presentation to Vistage members earlier this year.
It’s no secret that artificial intelligence continues to incorporate itself into the lives of business leaders with each passing day. Analysis of the latest Vistage CEO Confidence Index survey revealed that 63% of small business CEOs reported they are adjusting their technology budgets to boost their AI capabilities in the coming year.
But as the use of AI increases, so too does the risk of cyber attacks, says futurist Michael Rogers, who recently completed a two-year residency at The New York Times.
At a recent Vistage event, Rogers spoke with CEOs and business owners about the potential risks AI can pose to companies and what actions they should take to prevent data loss, corruption and other dangers.
More in this series
Part 1: 4 key insights about AI from a futurist
Part 3: 5 questions about AI and the future of business
Cyber attacks: Potential for significant loss
Emerging technologies bring both threats and possibilities, says Rogers. AI can be used to craft spear-phishing and ransomware attacks, which succeed in part because the AI can draw on troves of personal information.
“[Imagine] the ChatGPT robot that has had access to all the information in the company and also co-workers’ Facebook pages and other sources of personal information,” Rogers says. “It can put together one of these spear-phishing emails asking you to transfer money into a bank account you’ve never heard of before. And it works a lot.”
That threat has led companies to focus on ways to identify cybercriminals, including using AI itself to root out attacks, Rogers says.
“It takes some pretty bright researchers to tear down a company’s website and figure out how someone can be fooled into thinking an email came from an employee,” he says. “Now ChatGPT can do that really fast.”
Fortunately, “ethical hackers” are doing some pretty revolutionary things to understand and mitigate these risks. Machine learning can be used to detect malicious traffic on a network, accelerating response time or heading off an attack altogether.
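Rogers doesn’t detail how such detection works, but the core idea is flagging traffic that deviates sharply from a network’s normal baseline. A minimal, illustrative sketch of that principle — far simpler than the machine-learning systems used in practice, with hypothetical host names and request counts — might look like this:

```python
from statistics import mean, stdev

def flag_anomalous_hosts(request_counts, threshold=3.0):
    """Flag hosts whose traffic volume deviates sharply from the norm.

    request_counts: dict mapping host -> requests per minute (illustrative).
    Hosts more than `threshold` standard deviations above the mean
    are flagged as potentially malicious.
    """
    counts = list(request_counts.values())
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # all hosts behave identically; nothing stands out
    return [host for host, c in request_counts.items()
            if (c - mu) / sigma > threshold]

# Five hosts with typical traffic, plus one flooding the network
traffic = {"10.0.0.1": 42, "10.0.0.2": 38, "10.0.0.3": 45,
           "10.0.0.4": 40, "10.0.0.5": 41, "attacker": 900}
print(flag_anomalous_hosts(traffic, threshold=2.0))  # → ['attacker']
```

Real network-defense tools replace the z-score with trained models over many features (packet sizes, destinations, timing), but the design choice is the same: learn what “normal” looks like, then surface outliers fast enough to respond.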
Protecting your information with a digital identity
Another preventative measure currently being explored is creating a firm digital identity, or “digital ID,” for people in the virtual world, Rogers says.
This would address the problem of multiple identities and logins, each of which gives cybercriminals another opportunity to gain information. And given the ubiquity and importance of the virtual world, it makes sense to have a way to verify who people are, Rogers says.
“If we require driver’s licenses on federal highways, why not on the Internet?” Rogers adds. “There’s a problem created by everyone having 67 different identities and logins, and it’s really an insecure system. So the digital ID in some countries will be mandatory.”
Estonia is a pioneer in this area, Rogers says. The country has successfully implemented a secure system for centralized health information accessible through unique digital cards. “It’s detailed to the level of ‘Do you want to be buried or cremated?’” he says.
“It’s a lot of information and the security of that has never been broken,” he adds. “Estonia is probably under the most concerted constant cyber attacks from real professionals in Russia, and it’s never been compromised. So I think real [digital] identities are on the way.”
The need for cyber risk insurance
Rogers adds that it is likely that insurance companies will become increasingly involved in the issue of cyber risk insurance. While they are already involved to some extent, many companies have been hesitant to take out cyber risk insurance due to the uncertainty surrounding what is and isn’t covered.
Insurers themselves are not entirely sure, as the potential damages from cyber risk can vary widely, Rogers says, from minor losses to significant harm to a company’s professional reputation.
However, there are two main factors driving change in this area. Firstly, boards of directors are increasingly aware of the significant liability they face from cyber risk and are asking why they are not insured against it. Secondly, insurers are seeing the potential profits they are missing out on by not offering cyber risk insurance.
What we are likely to see, says Rogers, is insurers beginning to write cyber risk policies that cover AI to some extent. However, companies must meet certain standards of data security and data hygiene before insurers will cover them.
This process is similar to how workers’ compensation was established in the early 20th century, Rogers says, with insurers taking on the task rather than letting the government set regulations. “It worked really well,” he adds. “And I think we’re going to see the same thing here with insurance.”
The future of AI’s ‘true genius’ is still unclear
Rogers also shared a thought on the potential threat of AI to our society, mentioning current discussions about slowing down the development of artificial intelligence not only by experts in the field but also within the federal government.
Certainly, copyright and intellectual property protection is another discussion that concerns those in the legal field and is a focus of the U.S. Copyright Office. Vistage speaker Amy B. Goldsmith shared in our expert roundtable the risks of creating images and content with AI and whether that output is protected IP.
The uncertainty and speculation around AI reminded Rogers of when recombinant DNA was introduced in the ’70s. This created widespread panic among the public, he recalls, with fears of creating “designer babies” or accidentally spreading deadly diseases. On the flip side, some people believed that genetic engineering would revolutionize society and allow us to create wonders such as “pork chops growing on trees,” he says.
However, both sides were wrong, Rogers says. For all the speculation, blustering and protesting during that era, neither the disasters nor the great promises happened. “We discovered that genetics and DNA are far more complex than we initially thought, and we are still trying to understand it,” he says.
Ultimately, Rogers believes that both the business world and humanity will go through the same learning curve with AI as we delve more into its capabilities and find out just how much it can synthesize and learn on its own.
“It’s not going to be a simple drive to true genius AI,” he says. “We’re going to discover, ‘Oh, consciousness is actually complicated.’”
Category: Technology
Tags: AI, cybersecurity