Michael Rogers: 5 questions about AI and the future of business
Editor’s Note: This is the last of a 3-part series offering insights into artificial intelligence and the future of business distilled from a presentation to Vistage members earlier this year.
Michael Rogers, who recently completed a two-year stint as the futurist-in-residence for The New York Times, predicts that artificial intelligence will continue to penetrate all levels of business in the future. As a result, it will be the responsibility of CEOs and business leaders to identify how AI can help distinguish their companies from their competitors.
However, he also cautioned that while it’s important to drive innovation, leaders need to ensure that key ethical elements are in place to prevent abuse and guarantee a level playing field for everyone.
Those elements include:
- Transparency: This is crucial in AI, and vendors should be asked how they make their AI’s decisions understandable to humans.
- Human oversight: Humans will need to stay in the loop so that AI decisions can be appealed.
- Privacy: As AI is deployed across more businesses, the potential for privacy violations grows.
- Intellectual property rights: IP has become a big issue in recent years, particularly within the creative community.
At a recent Vistage event, Rogers spoke with CEOs and business owners about what’s next for AI, answering their questions and providing insights on what lies ahead.
More in this series
Part 1: 4 key insights about AI from a futurist
Part 2: Why AI security is a priority now
Q: If you’re completely new to AI and its applications, what would you recommend in terms of resources to learn more about AI and the things that are going on?
A: One of my favorite sources for raising the social issues while staying firmly grounded in the technology is MIT Technology Review, and they’ve got a great website that does a lot more. When I look at all the reporting on technology and society, I always end up coming back to them. The Economist is also pretty good; they did a long piece in one of their quarterly technology reports that was excellent and really well done.
Q: So you said that empathy, collaboration, and creative problem-solving are uniquely human. I’m curious to hear your thoughts on what that means for the future of work.
A: I have a feeling that we will automate quite a bit in the workplace, and it’s going to be a very, very touchy situation. Economists have always said technology eliminates some jobs and creates new ones. But the more I talk to people who know this stuff, the more I find that while they agree, they don’t quite see what the new jobs are. What do you do with a thousand customer service representatives if you only need, say, a hundred at most to back up the robot calls?
So I think one of the jobs of management will be to think of what those new jobs are because they do exist. There’s got to be this interface between the real and the virtual, and it’s a human who’s going to make the most of that.
It boils down to three things:
Empathetic communication. Fast forward 5 to 6 years, and an insurance customer service bot will be able to answer all your questions, come up with a rating and present various premium options. It will be very satisfying. It will take all the time you want, even explaining the history of insurance to you if you ask; no question is too much. But it will take a human to call back, ask a few questions about your family, share a little about their own, and say, “Maybe you would want to bump that policy up from $500,000 to $1 million.” That’s where the human comes in. That’s empathetic communication.
Open-ended problem-solving. AI is going to be great if you decide to build a new parking lot in your city. In 5 to 6 years, you could hand the job to an urban planning AI bot, and it would be brilliant. It would go through all your county and city records. It would pull out traffic patterns. It would look at real estate prices. It would do heat maps of traffic flows, and it would come back and say, “Here is the best place to put a parking lot.” It would never come back and say, “Why do you want a parking lot in the first place? Isn’t there a better idea?” And of course, there is.
Collaboration. The same thing applies to collaboration. Those are the skills we need to figure out how to apply to our businesses because those skills are uniquely human, and people who are just relying on AI and computers — and there will be people who do that — will not have that same advantage.
Q: Could AI be the end of the internet? As with any new technology, what are we supposed to trust?
A: I think we will always have a thing that is like the internet. But if you mean trust? We didn’t have much trust on the internet to begin with. The really big trainwreck I see coming first is the 2024 election; that’s when we’re going to see every kind of fakery you can do. And it’s just incredible what you can do. Hopefully, that will drive things like regulation.
There are ideas, for example, that it should be the law (this is the kind of thing Europe can get away with more than we can) that if you create an AI-generated picture, or significantly modify one, it has to carry a watermark. There has to be some indication that it’s been modified.
I would say this is going to push the trust question totally over the edge. And we may well come up with a solution. Winston Churchill once said that you could always rely on Americans to do the right thing after they have tried every other alternative.
Q: Do you see AI breaking into the federal government, or will politics keep it at bay?
A: The public sector has never been quick to adopt technology, to say the least. I mean, I remember during the Obama administration, I got optimistic because he actually put IT people in charge and it looked like we were going to see some real upgrades, but we still are chugging along on older software. So I think the whole adoption of AI will be very slow in the public sector.
But I agree with you completely that it would be useful. They’re going to run into the same problem a unionized organization would: how do you lay people off if you’re replacing their jobs? That’s a little more delicate in government, perhaps, but there’s just such inertia in the public sector. I talk to public sector groups often about all the amazing things that could be done, and everyone agrees, but the money and the will don’t seem to be there.
So I have a feeling AI will creep in slowly. It’ll be there, though.
Q: What are your thoughts on the robots that are produced by Boston Dynamics? Their work is funded in part by the government, and the applicability of robots like that to the military scares me.
A: I think that’s a huge question. And again, people are already talking about how there should be regulation against harming humans, sort of Isaac Asimov style. The Boston Dynamics stuff is terrifying. They’re way ahead of anything else in terms of tactility, the ability to run, the ability to maneuver. They’re still pretty expensive, which is a good thing.
So the thing to consider is, where do you draw the line? Right now we probably already have AI systems that could potentially launch missiles that kill people. If you analyzed our whole drone armamentarium and things like that, it’s already built in there. So robots with the capacity to kill are not something new or unheard of; it’s a concept that has been talked about quite a bit.
As for the notion that AI at some point unleashes a nuclear war, that’s another one that just doesn’t make sense to me. It would be irresponsible to give an AI a finger on the button, and I don’t think any engineers would ever do that. Sure, it’s possible, but I don’t see why it would happen.
Related Resources
AI roundtable: Best practices for small businesses [Webinar on demand]