September 25, 2023

Regulating AI
with Juliette Powell & Art Kleiner

As AI continues to develop at an unprecedented rate, the laws and regulations surrounding the technology have seemingly fallen behind the times. But can governments be trusted to keep AI in check?


This week on "The Futurists," Juliette Powell and Art Kleiner discuss how the same technology used to find human slaves can also be used to enslave them. It's less about the technology and more about us as humans and what we choose to do with it.

Welcome to "The Futurists." I'm your host, Brett King, and joining me in the hosting seat today is the lovely Katie King, Miss Metaverse. Welcome back, Katie. Hey, great to be back.

So we're going to get into AI today. One of the questions we're working through in this alignment phase, as we figure out where artificial intelligence fits in the world, is what role different organizations play in setting controls and regulation. Who should be regulating this? I wanted to get into this with a couple of authors who've just released a new book, "The AI Dilemma: Seven Principles for Responsible Technology." Let's bring into the show NYU professors and AI aficionados Juliette Powell and Art Kleiner. Welcome to "The Futurists." Thank you.


Did you both meet at NYU, or was that how you came in contact? We were introduced years ago by our mutual mentor, Napier Collyns, who sadly passed away this year. He had a vision and felt that Art and I would do some good work together. In 2007, Napier insisted that I attend Art Kleiner's class on scenarios, and I never looked back. We started working together a few years later, while Art was at PwC publishing a magazine called "Strategy+Business." A few years after that, as I was finishing my dissertation, Art read it and said, "Aha, you have a book here," and here we are.


Very good. Nice. This is not your first book, obviously. You've done "The Age of Heretics" and "Who Really Matters: The Core Group Theory of Power, Privilege, and Success." You were also the editor of "Strategy+Business." Was it an e-magazine, or was it a print magazine? Gradually, people stopped reading paper and started reading online, so now it's online.


How long have you been at NYU? I've been there quite a number of years, on and off. Most of that time, I've actually been teaching a class on the future. It's "Future of Digital Media," and clearly now it's more and more the future of AI. I think it's probably reasonable to say that you can't be a futurist today without being at least fairly competent in talking about AI: if not how AI works, then at a minimum the implications of AI, because so much of our future is going to be changed or impacted by artificial intelligence. It's also interesting that, as futurists, there are a lot of common themes that I find with the work other futurists are doing, even in my most recent book. I see a lot of alignment there. But AI is sort of different from anything else, isn't it? Maybe I'll kick off with you. What makes AI different in terms of the way society has to think about it?


First of all, it's an amplifying function, an amplifying instrument, for whatever people are going to do. If you're looking at the future of AI, you're really looking at the future of decision-makers in business, government, everywhere. That future is now the same as it ever was, but faster, broader, and affecting more people. When you're looking at the future of AI, you're looking at technology in its broadest range of possibilities. For instance, just a week ago, Gary Marcus proposed... Yeah, I saw that. ...that maybe generative AI is a dud, that it doesn't go anywhere. I don't understand Gary. For someone who's such an AI champion, he's awfully negative about AI at the moment. But I know it's not AI as such, necessarily. It's just this current generation and how we're too reliant on it or too accepting of it right now.


But he made some good points, you know, in terms of the regulatory framework and so forth. One of the things you raise in the book fairly quickly is the problem of governments: what governments might want to do with artificial intelligence, and then the issue of regulating it so it doesn't do damage to society. This is obviously part of what you have in the title of the book, "The AI Dilemma," but can I ask you for a little more detail behind that theme, the AI dilemma and what it actually is? Juliette, do you want to kick that off?


Sure. I mean, I think the AI dilemma was actually explained to us by an 80-year-old woman we met very unexpectedly at a funeral. She was looking for a distraction and asked if she could read the book in advance, and we sent it to her. She got back to us with this lovely email in which she said, "If I understand correctly, the AI dilemma is that, in the right hands, the technology can be beneficial to all of us, and in the wrong hands, it could really cause some serious harm." She nailed it. She said it in a way that anybody on the planet can understand. It's a great technology, an inspiring technology, and it gives me a great deal of hope. But at the same time, the same technology that we use, for example, to find human slaves is the very technology that we use to enslave them in many cases. So it's less about the technology and more about us as humans and what we choose to do with it.


I think much of the discussion around the need to regulate artificial intelligence comes from the fact that it depends on whose hands the technology is in and what they aim to do with it. Many of the applications we hear about are commercial applications we find on our phones, laptops, and so on. AI already underpins modern society as we know it in the G8 countries, and great responsibility comes with that.


Different countries are trying to look at regulation from different perspectives. So far, the United States has been less proactive compared to other places. The EU's risk-based approach makes sense, especially given its emphasis on human rights. Here in the United States, our laws are more focused on ownership and property. A lot of the lawsuits related to OpenAI's GPT-3 involve intellectual property. However, we'll likely see more issues around harm to humans in the future.


Well, of course, one of the things that you guys get into, to some extent, although I haven't reached the second half of the book yet, is the impact on work. When you look at the implications of AI more broadly in terms of work, you hear economists make the argument that these technologies have, in the past, created more jobs than they've destroyed. But for the last 60 years, science fiction has been portraying a future where robots do all these jobs, from vacuuming homes to helping with homework. The intent of AI has always been to take human jobs, so the ultimate implications of AI are likely to fundamentally change the relationship of work in society.


Is it really taking human jobs, or is it helping humans elevate their jobs in different ways? It's supposed to be a tool. In scenario work, we try to distinguish between the things we know for sure (predetermined elements) and the things we don't know.



One of the things we know for sure is that the number of young people seeking work is going to decrease overall worldwide but increase in some emerging economies, especially in Africa. Another thing we know for sure is that Moore's Law will continue for a while, so the power and capacity of technology will increase. One thing we don't know for sure is whether there will be visible productivity gains; we still don't know that. If there are productivity gains, we don't know how different decision makers will react. We might see some countries mandate a two-day work week or try to ban AI to keep humans employed. There's a range of potential outcomes, and if these things happen, we know for sure there will be a black market for AI.


We need to temper our predictions with the awareness that while there are reasons to be scared, there are also reasons to be optimistic, and much depends on what we create.


As for globalizing regulation on AI, there is an aspect of regulation that will likely need to become a global standard due to the profound impact of AI. GDPR serves as a precedent. Even big tech companies are playing catch-up with GDPR, which came out in 2018. No company is fully prepared for the AI Act as it stands. There's also the issue of self-regulation, where tech companies claim they will self-regulate or help shape government regulations. The question is, can individuals effectively fight back if AI discriminates against them or causes harm in various situations, such as car accidents involving self-driving cars? The lawsuits surrounding AI are a testament to the uncertainty and potential harm AI can bring.


The fact that AI-generated art can't be copyrighted in the US raises questions about copyright in the age of AI and automation. Katie, did you have a question on this topic?


One of my close friends, Jonathan Askin, who's an internet lawyer at Brooklyn Law School, brought a case to the Supreme Court, which was ultimately dismissed. The core idea was to grant AI the right to own its own copyright. Jonathan's perspective was that if the United States didn't allow this, other countries would; it's not just about the current case but the larger picture of future ownership. We often find ourselves in a geopolitical game when discussing AI. It goes beyond technology, focusing on the power dynamics. This applies not only to individual data and copyright but also to AI's role in serving humanity as a whole, raising larger questions.


Regarding the biggest misuse cases of AI today, I have two concerns, and I'll leave the third one to you. Two areas that worry me the most are the use of AI in education and medicine. In the United States, there's a shortage of medical professionals, exacerbated by the pandemic. We also face a shortage of teachers at all levels. Using AI for education and medical purposes raises concerns about personal data protection. Another aspect that concerns me is the impact of distance learning, not just in developed countries but worldwide, including places like Africa, where access to education has been limited. Relying on unproven technology for teaching and medical diagnosis is risky. It's one thing to use AI with proper supervision and a well-defined corpus, and another to rely on it without oversight, hoping it won't make errors. However, we have seen AI perform well in tasks like cancer diagnosis, provided it has the right data and proper supervision.


Art, you've mentioned predictive analytics, which is the most immediate and serious concern, even more so than the others. The track record of automated systems in education is worrisome, where kids get placed in the wrong tracks due to misinterpreted data. Automated systems create this issue, and humans often simply rubber-stamp the decisions.


We discussed a case in the Netherlands, often seen as an enlightened country, which involved taking children away from parents who had been flagged as risks in the government benefits system, and which disproportionately affected minority groups. When I watched "Minority Report" over a decade ago, I thought these things might happen, but with awareness. We've talked to physicists about whether AI might be put in charge of nuclear weapon decisions without human oversight due to the technology's deductive nature. Sadly, it's a path of least resistance, and we're often too busy to oversee it. As for me, well, as they say in the movies, "Come with me if you want to live." Now, for the lightning round:


Juliette, when did you first hear about the concept of artificial intelligence? Back in the '80s, when I read about it in the Whole Earth Catalog. Marvin Minsky spoke about it at the TED conference.


Art, what do you think is the most influential or consequential technology we've ever invented? Fire, because it helped us survive. And, of course, the wheel.


Art, is there a futurist, researcher, entrepreneur, or mentor who has had a significant influence on your career? A few people come to mind: Stewart Brand, Napier Collyns, and Pierre Wack.


Juliette, is there a science fiction story that represents the future you hope for? I'd say "Foundation" by Isaac Asimov. The second season is on Apple TV.


Art, how about you? Kim Stanley Robinson's works, especially the Mars Trilogy and his New York series about a flooded city. Also, Thornton Wilder.


And with that, we'll take a quick break. You're listening to "The Futurists," and we'll be right back to discuss the AI dilemma with Juliette Powell and Art Kleiner. Stay tuned!


Provoke Media is proud to sponsor, produce, and support "The Futurists" podcast. Provoke FM is a global podcast network and content creation company with the world's leading fintech podcast and radio show, "Breaking Banks." They also have spin-off podcasts like "Breaking Banks Europe," "Breaking Banks Asia Pacific," and "The Fintech 5." They produce the official Finovate podcast, "Tech on Reg," "Emerge Everywhere" (the podcast of the Financial Health Network), and "NextGen Banker." For more information about all their podcasts, you can visit Provoke FM or check out "Breaking Banks," the world's number one fintech podcast and radio show.


Welcome back to "The Futurists." Before the break, NYU professors and AI experts Juliette Powell and Art Kleiner were talking to us about the AI dilemma and their seven principles for responsible technology. Now, let's dive into the heart of the matter. You both seem fairly optimistic about the future of AI, and we'd like to know why.


Art: I tend to believe in Anne Frank's words that people are basically good at heart. I also believe in what poet Anne Herbert once said: "Why hasn't there been a nuclear war yet? Because everybody has that little thing they want to do tomorrow." I have faith in the innate goodness and the survival instinct of humans. Tightly coupled systems may be more vulnerable, but in the long run, reality is loosely coupled, and that's what prevails.


Juliette: I share Art's optimism and believe in the inherent goodness of people. AI provides us with the tools to make government more efficient and effective, reduce bureaucracy, and improve healthcare. It can be a force for good, but its impact depends on how it's wielded. The optimistic side is that more people now have access to the technology and can understand its effects on their lives and decisions. It empowers individuals, which gives me hope.


Host: The potential for control mechanisms and autocratic government styles is a theme you explore. The idea that AI can help reduce government and bureaucracy is appealing, but it requires political will and legislative change. How do you see this evolving in the next 20-30 years?


Juliette: Artificial intelligence is a versatile tool, and how it's used depends on those in power. Some may use it to reduce government bureaucracy, while others may leverage it for global dominance. What makes me optimistic is that more people have access to the technology, allowing them to understand its impact. This democratization of AI empowers individuals to engage and shape its use.


Art: The impact of AI on government and bureaucracy will depend on the political will and direction of those in power. AI can streamline processes and reduce bureaucracy, but it can also be used to control and manipulate. The key is empowering people to understand and influence AI's use.


So, the future of AI depends on how individuals and societies navigate its applications and implications.



I think many people are still trying to grasp what AI is and why they should even care; for some, "data" means nothing more than their phone data plan. But I'm quite serious: we're now discussing digital twins and empowering individuals. That means more people will start their own companies using these tools, and others will enter politics. Many Americans aren't content with their current politicians, so AI candidates might offer an alternative.


We were recently asked about AI in a religious context. We discovered various AI figures like AI Gods, AI Jesus, AI Ganesh, and AI Shiva. While these are distractions, the real power lies with those who have access to and know how to manipulate technology, especially in the next political cycle.


Art, you've been teaching about the future and policy-setting. How will AI change the way we approach policymaking? Who do you mean by "we"?


Art: When you mention "we," it's a complex issue. There's no single decision-making body. Many people with different interests clash, and history is shaped by these conflicts. The European Union's regulators have sorted AI into four risk tiers: minimal-risk AI; limited-risk AI, which is acceptable as long as it discloses that it's AI; high-risk AI, like self-driving cars; and unacceptable-risk AI, such as nuclear weapons control. They have set policies that include auditing and bans. It's the alpha release of a policy that will either set the tone worldwide or face a backlash.


In this model, anyone wanting to do business in the European Union, regardless of their other activities, must comply. It's the beginning of a policy framework that will influence the global landscape.


Then, from there, we will have feedback: humanity's feedback and decision. Well, China's regulatory ecosystem and the EU's share quite a bit in common, so there seems to be more consensus than we might think. Their societies are focused on the collective good, not necessarily trying to guarantee individual rights in the same way.


Katie, you wanted to follow up, right?


Katie: Well, I mean, how far behind is the US compared to the EU when it comes to regulation and AI? Are we really far behind? Are regulators even aware of what's going on? The last time I watched a hearing about this, they were discussing TikTok and Facebook, and they seemed to know very little about technology, algorithms, or social media. How can we trust them to get involved in this space? What can we do to help bring them up to speed?


Juliette: I think the politicians are asking themselves the same question. The lack of understanding we've seen on television in recent years has been embarrassing. Senate committee members often struggle to grasp the technology as it is today, let alone what it will be in the future. That's why big tech companies are working with the US government to shape regulations that balance protecting citizens and fostering innovation.


The lack of education is a significant concern. Gary Marcus and others have signed a petition calling for a six-month moratorium on advanced AI development and research. They also propose creating a global governing body responsible for regulating AI across all use cases worldwide. Whether the world is ready for such an organization is a larger question.


I don't think we've seen a global consensus in my lifetime, even before considering climate impacts. AI, social media, and climate all require a global approach. Aristotle's view that the human species should thrive together is applicable here. But we struggle even with basic decisions, as shown by the Moral Machine experiment, where people can't agree on who should survive in certain scenarios involving self-driving cars.


Art: Yes, trust is a crucial aspect of this discussion. Government regulation and responsible use of AI raise questions about whether we can trust government to regulate AI wisely. We haven't used the word "trust" yet, but that's the core of the issue. Can we trust any entity, whether it's China's or Europe's approach or others, to govern AI effectively? Our original book title was "Who Watches the Watch Robots," emphasizing the need for oversight.


That is a big issue. How far does trust go? Do we trust business? Well, we do if we're the entrepreneur. But how far down the rabbit hole do we go? Do we trust the engineers? Again, we do. Do we trust people to overcome the least-common-denominator approaches built into the structures we live in? The one thing counting in our favor is that if we don't develop that trust, that approach to working together, it's the species itself that's at risk. But trust has to be earned. If we trust, and our trust is misplaced, that's not a good outcome either.


Interviewer: Trust, alright. Well, let's get a bit sci-fi, because we're running up to the end of the show and we like to get a bit futuristic. So, obviously, I'm going to start with you, as the futurist of record in this conversation. I want to look out to the 2040-2050 timeframe. Tell me how you imagine artificial intelligence will be integrated into society at that point.


Juliette: We know three things for sure. We know that the average age of humanity will be older, that hardware will be faster and cheaper, and that there will still be pressure from climate change and environmental crises. With that in mind, people will learn to use technology more effectively, for better and worse. It takes about 10 years for that to happen, so we'll have very high levels of acumen with AI in solving problems; whose problems, and who benefits, we don't know yet. I don't think there's such a thing as an optimistic or pessimistic future. I think it's a future of agreed-upon guidelines and rules that people break all the time. Our future is what we make it.


Interviewer: Alright. So, how can people find the book and find out more about "The AI Dilemma" and keep in touch with what you guys are writing and talking about?


Juliette: You can find us at kleinerpowell.com. The book is there, our AI advisory work is there, along with bios on both of us and some of the work we've done for clients. If you want to reach out to me directly, it's juliettepowell.com. I'm also on LinkedIn. Art is very active on LinkedIn too, and we'd be happy to talk with you anytime.


Interviewer: Excellent. We didn't have enough time this episode; we didn't even talk about universal basic income (UBI). But if we're going to talk about AI taking over, we need to talk about UBI. We can't finish without that. This is the extended episode now.


Juliette: If you want to talk about UBI, we can do it.


Art: We're ready.


Interviewer: This is fantastic. So, let's ask about that, because we've never had a technology that can simultaneously affect such a wide range of jobs so quickly. That's why I think at some point you're either going to have massive technological unemployment or you'll have UBI; it's a sort of binary choice. But what do you guys think?


Juliette: I don't really think the choice is binary at all. I think both options will likely be on the table. The pandemic was a test of what individuals would do with free money arriving while there were set limits on what they could and couldn't do physically. I think a lot of governments collected a lot of data toward that possibility, especially in North America. Ultimately, if individuals can own their personal data, we will have more trusted frameworks where people have the ability to weigh in on what their data should be used for.


If we all agree that climate change is an issue and that we want to use our collective data in that way, we have the technology to do that today, but we don't own our data, so it's up to others to make those larger decisions for us. We can cure cancer today, we have enough data to do that, absolutely, but again, is this a priority so far? Not yet. We could have been on 100% renewable energy by now if we had made that decision back in the '70s.


Yes, so I do think that talking about these things much more openly is at least the first step in making any kind of change, and it's a change that's needed. I think that there are a lot of asymmetries of power that will change once people start monetizing their data or telling large organizations, "Do you want to monetize mine? Sure, you can negotiate with me and my 100,000 best friends over there, and I'll tell you how much it's going to cost you." Great, I like it.


Well, thank you both for joining the show. If you're listening, check out "The AI Dilemma: Seven Principles for Responsible Technology" wherever good books are sold. NYU professors Juliette Powell and Art Kleiner, thank you for joining us on The Futurists.


That's it for The Futurists this week. We'll be back with you next week. If you enjoyed the show, make sure you tweet it out or send it to your friends or make a comment on it. Give us a review; all of that helps people find the content. It seems like it's working because we're having some really tremendous growth after crossing the half-million downloads mark earlier this year. So keep in touch, listen out for the next episode. We are going to have Kim Stanley Robinson on in September, so looking forward to that one. Art, you can maybe listen in on that one as well; it's going to be a lot of fun. But until then, we will see you next week. Of course, because we will see you in the future.


That's it for The Futurists this week. If you liked the show, we sure hope you did, please subscribe and share it with people in your community. And don't forget to leave us a five-star review. That really helps other people find the show. You can ping us anytime on Instagram and Twitter at @futuristpodcast for the folks you'd like to see on the show or the questions you'd like us to ask. Thanks for joining, and as always, we'll see you in the future.

