2023-09-21

Living with Super Intelligent AI

with

Roman Yampolskiy

This week we interview Dr. Roman Yampolskiy, a renowned specialist in Artificial Intelligence. We delve into the likely path that AI will take over the coming years, and how Artificial General Intelligence and then Super Intelligent AIs might change the course of human history, and life on our planet. How far away are AIs that would be as capable and as intelligent as humans? It may be much closer than you think.



This week on The Futurists: "Well, we used to try to understand how to solve a particular problem and then create an engineering project to solve it. The latest success comes from: we have no idea how it works. It's a big black box. We throw lots of big data at it and huge compute, and the more compute we throw at it, the better it does. It surprises us. It learns things we didn't actually expect it to learn, but it works. It works cross-domain and, as I said, it surprises us with new skills from the same data."

[Music]

Welcome

Welcome back to The Futurists, featuring myself, Brett King, and Robert Tercek. We're both futurists, but we like to interview the thought leaders, the thinkers, the sci-fi authors, the engineers, the practitioners, the superforecasters that are actually building the world of the future. One of the topics that comes up all the time when we're talking about the future is that some people are afraid of the future. The future can be scary; it can be a place that seems like a threat to folks. And candidly, news media doesn't help.

Artificial Intelligence as a Threat

Hollywood doesn't help. The stories that we tell ourselves in mass media usually frame the future as something that's vaguely scary or specifically very scary, kind of a dystopian notion. And one of the mythologies that's been promoted a lot in the press and in popular media is this idea of artificial intelligence as a threat: artificial intelligence running amok, artificial intelligence stealing our jobs, and so forth. We've heard that again and again and again.

The T-1000, yes.

Yeah, that's right, all the way back to 2001: A Space Odyssey. But it's a theme that keeps coming up. There are dozens of books; if you look on Amazon you'll see dozens of books about the war against the robots, or the robots are going to steal your job, and so forth. Well, there's some validity to being concerned about that.

Introduction

I think we tend to lose sight of the fact that artificial intelligence can be an incredibly powerful tool if it's managed properly. So for today, continuing in our ongoing theme of practical futurism, I thought it'd be helpful to bring in someone who's actually focused on this very subject. Roman Yampolskiy is a professor of computer science at the University of Louisville in Kentucky, but that's not all: he's the author of a number of books focused on artificial intelligence, AI safety and security, and the topic of superintelligence, and these are some of the things I definitely want to get into. So welcome to the show, Roman. It's great to have you here. Thank you for joining us.

Thank you so much for inviting me.

Now, what we'd love to get from you today is your perspective on where things stand now. Can you tell us about the state of the art in machine learning, or machine intelligence?

State of the Art in Machine Learning

So it's not static; it's changing very quickly. In fact, in the last month probably about five or six papers came out, each one establishing a new state of the art. Just this week there was a paper claiming to produce a somewhat general artificial intelligence capable of hundreds of different skills: game playing, robot manipulation. So I don't know what the state of the art is. We were about to get into artificial general intelligence, so nothing will happen again for a couple of years.

You know, that's a topic that comes up every few years. They say that AI is a subject that kind of has a permanent sunrise, because topics arise but they never quite come to fruition. What's your personal take on AGI? Do you think that's right around the corner? Some people say it will never happen. What's the probability of AGI happening in the next 10 years?

What's the Probability of AI in the Next 10 Years?

Well, now there is definitely an insane take on it; maybe 10 to 20 years is reasonable. More and more people are saying five to seven. That seems very, very optimistic, or pessimistic, depending on how you perceive it. But if the rate of acceleration we've seen in the last month continues, then maybe, because prediction markets are now saying it's 2036; it was about 2045 a month ago. Right? It shaved like seven years in a month.

And for those who aren't familiar with AGI, can you give us an explanation of the difference between narrow AI and a general artificial intelligence?

AI vs. General AI

Sure. So we used to try to make AI for a specific purpose. You had a spell checker, you had a chess-playing program; you knew exactly what it was doing. It was good at one thing, maybe very good, but it had no other capabilities, and you couldn't reuse that code for anything else. General AI is capable of doing good work in multiple domains. So just like a human being can drive a car and play chess and thousands of other things, a general AI would also be capable of doing many things and learning new skills in multiple domains.

And would a general AI be something more like a human personality? Is that an aspect of it, or is that a different dimension altogether?

AI Personality

Well, if by personality you mean certain errors and bugs it makes while performing its work, yes, they would have definite personalities.

That's a great question. Let me just ask you about that transition you talked about. We had the expert systems, like IBM's Deep Blue challenging Garry Kasparov in chess and so forth, but we've made enormous progress in recent times using machine learning. So what was the breakthrough, from an artificial intelligence perspective, that took us from those very heavily code-based, expert-system-type strategies to machines that actually can take in information and learn, with backward propagation and all of those capabilities? What was the turning point there?

The Turning Point

Well, we used to try to understand how to solve a particular problem and then create an engineering project to solve it. So for an expert system, we would find a human expert in a specific domain, and there would be a knowledge
extraction process, through interviews or something like that, and we would try to encode that into a machine: a very time-intensive, very difficult process. The latest success comes from: we have no idea how it works. It's a big black box. We throw lots of big data at it and huge compute, and the more compute we throw at it, the better it does. It surprises us; it learns things we didn't actually expect it to learn. But it works, it works cross-domain, and as I said, it surprises us with new skills from the same data.

Let's talk about that black box, because inside the black box is the unknown. We're not quite sure how these systems actually arrive at their conclusions. What are some of the reasons we should be concerned about the black box? What are the fears there?

Fears of AI

So I have a few papers published this year and last year about impossibility results in AI safety, and those include unpredictability: we don't know what the system is going to do. We may have some idea about the general direction it will take, but not the specifics of its decisions. And unexplainability: we have no idea how those decisions were made, and the system is unable to explain it to us either. Either the explanation is so complex we would never get it, or it's a simplification, not very honest, kind of what we tell kids when they ask uncomfortable questions.

And how do we differentiate that? How do we evaluate the system if we know that it's making decisions based on something we can't comprehend, that's unknown to us, but yet we start to see outcomes that demonstrate some sort of bias, or maybe an undesirable outcome? There are plenty of examples of algorithmic bias already at work, let alone artificial intelligence, so this is a known risk. How do we govern for that? How do we detect it, and then how do we correct for it?

Problems with AI

So if you immediately detect problems, that's easy. The difficult problem is if it works beautifully, you see no problems, and then you deploy it, and then it surprises you a month later. That's the difficult one, and we don't know how to do it. We have the same problem with humans, right? We do lie detectors, we do background checks, we do religion, morality, and then we betray you anyway. It's the same neural network, just faster, with more data.

So this is actually quite interesting. If you look back at AI theory, we get back to Rosenblatt and the perceptrons, and the effort to try and create these neural circuits that mimic human brain function. And there's obviously been work in terms of neural chipsets and other things like that. But are we at a point now where we recognize that the trajectory for machine learning, AGI, and AI in general is diverging from that human model, from the neural network piece, or are we still trying to essentially build machines that replicate the way humans think?

Neuromorphic Chips

I think we learn a lot from neuroscience, and we borrow everything they discover, and it seems to improve the performance of those systems. The similarities are very strong, including in terms of the errors and biases we observe. So not only do they perform as well or better, but they also fail in sort of predictable human ways a lot of the time.

And do you think that AI chipsets are going to have to support that sort of software-based perceptron approach, or is it just about compute power, as you said?

Well, there is an advantage to having neuromorphic chips, which can learn and adapt as a native neural network, but it seems even without them we're getting excellent performance in many domains.

Excellent. Tell us about your work on safety and security for artificial intelligence. This is a big area of focus for you. Tell us the things that you're concerned about and the methods you're using to explore that, learn about it, and maybe govern security in this space.

What Are Your Concerns?

Right. So almost everyone in AI works on developing more capable systems, either in a specific domain or in general. Very few people, as a percentage of the total number of researchers, try to understand what problems we might encounter with those systems, especially as they become more and more capable. The things I mentioned as our desirable tools for controlling those systems would be being able to predict what they do, explain what they do, verify their software, and verify their overall architecture and goal systems. And for all of those there are already well-known, established problems. Some are just impossible to accomplish; others we don't know how to do yet. Maybe we'll discover how to do them, in theory at least, maybe not in principle.

So what I do is I usually try to understand the historical context: how do intelligent systems fail? I have a paper surveying historical accidents and trying to find trends in them. Everyone has trends showing exponential improvement, but there is a similar trend in terms of bugs and failures and the impacts from those problems. Can we predict the type of bug we'll have with your service or product based on what you expect it will be doing? Well, yeah, most of the time we can. So it's about anticipating problems and trying to see whether there are actually, possibly, some solutions for addressing them.

One of the things that is a weakness globally in terms of AI right now is clear regulation when it comes to ethics and a lot of those behavioral elements. I think part of this stems from the fact that when we are talking about artificial intelligence, many governments think of it as a future problem rather than a today problem, and we spend a lot of time debating whether AI is going to impact the workforce and things like that, when we should be investing more time in helping human society transition to incorporating artificial intelligence. When you're having these conversations at a senior level in government and so forth, how do you articulate that need to think about the much
longer-term implications of AI, and how do you guide organizations like this in terms of policy setting and absorbing artificial intelligence into society?

How Do You Govern AI?

Well, it's not even obvious that governance would help. If we look at our technologies where politicians tried to interfere, computer viruses and spam, simply making those things illegal didn't really solve anything. It may actually make things worse if you limit what the good guys can do, while the bad guys just go to a different country to develop even more unsafe products. So even that is still research: how do you govern advanced intelligent systems? For narrow AI, for predictable, deterministic systems, yes, you can have very specific requirements. OK, employment system: you cannot have racial bias. This is easy to check; we can look at the data. This is doable. But this is tool AI, something humans use in the office to make work easier. If we switch to general, agent AI, then all those standard methods of testing and checking no longer apply, and we are not at the level where we can monitor live or evaluate the performance of those systems.

Now, some people hearing that might say, well, then we ought not to pursue this, because we don't even know how to anticipate the potential problems, let alone recognize or correct for them, so perhaps it would be better not to do it at all. What are some of the arguments for continuing to pursue it, to continue to develop it?

Benefits of AI

Well, the benefits, if we do it right, if we get a friendly superintelligence: a tremendous economic benefit in the trillions of dollars of physical and cognitive labor, scientific research, curing all sorts of diseases. The unknown unknowns in terms of benefits would be great. Also, you can't stop this. The financial incentives are so high that it's simply not meaningful to say "and now Google, Facebook, and Apple all stop doing new research; let's see what happens." It's just another impossibility result in AI: you cannot ban it.

Oh, so in a way it's similar to cryptocurrency, where there are plenty of critics, and some people say we should ban cryptocurrency, but the proponents say, how can you stop it? It's already out there. You might stop it in one jurisdiction or one country, but that just means that innovation will divert to another location and continue. So you think it's a similar thing?

AI and Smart Contracts

It's very similar, and you can actually see exact parallels. A smart contract, as it becomes more advanced, is exactly the AI safety problem: how do you control something ahead of time, not knowing how it's going to be used, not knowing what sensory data it's relying on? That is AI safety today.

The smart contracts of today are relatively primitive, and that has something to do with the amount of compute power in the distributed computer and the blockchain. But already we're starting to see a large number of DAOs, distributed autonomous organizations, decentralized organizations, AI-based corporations, actually. Right? That's where it's leading; that's what it seems to me. We spoke to Wolf Call just a couple of weeks ago on this topic, and he shared some thoughts about AI governance. It's an interesting notion to have an AI corporation, or a cloud robot if you will, that manages people. Roman, what's the prospect for that? What timeline do you envision for a business being managed, or business decisions being managed, by an artificial intelligence?

AI and Business Decisions

I have a paper about that. Legally it's possible today: you can have corporate personhood, and through that loophole grant legal status to an AI. So it may not be the smartest CEO, but you can definitely get it done today. You mentioned that modern smart contracts are very primitive, but that doesn't stop them from failing every week, and if it's only a 10-million-dollar loss we kind of go, that's nothing; last week it was 600 million. So if even this primitive kind of bunch-of-if-statements contract cannot be verified, what hope do we have for something which truly controls a lot of cyber infrastructure, the whole economy, military response?

That's actually a very good way to introduce people to the problems we are truly concerned about. He just brought up a whole range of other possible scenarios, including the idea of robotic warfare. We're kind of on the brink of that right now. We're seeing it happen in the conflict in Ukraine, where there are robotic systems or semi-autonomous systems being deployed. And this is where it starts to get very scary for people, because we wonder: wait, if we start to go with full-on autonomous military systems, what happens if those get out of control? Is that the kind of problem that you're concerned with? Are you focused on that at all?

Autonomous Military Systems

It is another difficult challenge, specifically malevolent-by-design intelligence, where in addition to all the standard problems, bugs, and misaligned systems, you have a malicious payload: the system is designed to kill people by definition. So it's problematic, but I think it's not as big a challenge as unexpected, surprise failures in systems which actually impact billions of people in terms of just daily life: the electrical grid, the airline industry. That would be even more impactful than a crazy drone just starting to fire in the wrong direction.

Maybe people are unaware how broadly distributed this technology already is. I mentioned the electrical grid and other infrastructure, and of course AI in financial transactions and in banking. Can you give us a state-of-the-art overview: where is artificial intelligence deployed today in the economy?

Where Is AI Deployed Today?

You would think we would never be crazy enough to surrender control to machines, but we did it before we even became human-level intelligent. Most stock market trades are done by bots, something like 85 to 90 percent, probably, at least. Almost everything is automated, to the point where you cannot
manually take over; the system is just too complex, so you have to rely on it. We see it with some recent crashes of airplanes: the autopilot is so complex that no one fully exercised or understood all the code behind it, so the pilots have to either trust it completely or try to wrestle control out of the hands of the autopilot, and we've seen accidents happen. So pretty much every industry today has a significant amount of software making decisions.

Roman, I'm interested in how you got into artificial intelligence. What inspired you to invest your time and become an expert in this field? Because obviously you are one of the top guys in the space.

What Inspired You to Become an Expert

I was very passionate about improving the world. I saw a lot of benefit from this technology if we can get help, again, with scientific research, with engineering. I was not thinking about the side effects at the time, so I was very optimistic. I looked over my statement applying to PhD programs and I was like, that's so cute. Look at that guy. That's cool.

What's your first memory, your first awareness of artificial intelligence, or your first interaction with AI?

Video games, since I was a very young child. That was a big part of my life, and I was initially interested in designing games, creating more interesting AI characters for them.

Well, I think right now we should probably take a break. You're listening to The Futurists with Brett King and me, Robert Tercek, and our guest today is Dr. Roman Yampolskiy. Dr. Yampolskiy is an expert in artificial intelligence, and just after this break we're going to get into the big topic of superintelligence.

[Music]

Welcome to Breaking Banks, the number one global fintech radio show and podcast. I'm Brett King. And I'm Jason Henrichs. Every week since 2013 we have explored the personalities, startups, innovators, and industry players driving disruption in financial services, from incumbents to unicorns, and from cutting-edge technology to the people using it to help create a more innovative, inclusive, and healthy financial future. I'm JP Nichols, and this is Breaking Banks.

[Music]

Hi, welcome back to The Futurists. This week Brett King and I are interviewing Dr. Roman Yampolskiy, an expert in artificial intelligence and AI safety and security. Now, one of the things that you've focused on is this thing called the AI control problem. Roman, can you tell us a little bit about what the control problem is?

Types of Control

That's a great question. There is still some debate about what specifically it means. I have a paper where I try to formally define different types of control, and I arrived at four different types. One is direct control, where you just give orders to the robot. Very simple, and we all know how it fails; it's kind of the genie problem with wishes: you wish for something, then you go, "That's terrible, I wish for that to be undone," and now you have one wish left and you're worried. The exact opposite of it is the ideal advisor, where the system is smarter than you. It knows better what you want; you don't have to wish for anything, it just takes care of things for you. You're not really in control, but you might be happy. And then there are two kinds of hybrid, where it sort of knows what you want, but it still needs you to give orders, verify them, and so on. And for each one there seems to be some level of problems: either the orders backfire, or you're just not in control anymore; you are a pet in a zoo, and there is a zookeeper, and they know exactly what to feed you. So depending on how you want to see it, it may be somewhat unsolvable.

So that's the AI control problem, but it doesn't sound like the problem has a great solution.

From what I see, it doesn't. And then I list all the ingredients a possible solution would have, from political science, economics, mathematics, computer science, every sub-domain you can think of. There are well-known, proven impossibility results in those fields. So no, we cannot all agree on a common voting procedure; no, there is not a way to allocate resources fairly without bias; and so on. So if all the ingredients are impossible, it's very unlikely that we can mix them together and get a possible solution.

Now, that seems to be kind of a risky scenario from my viewpoint, because what we know is there's a bit of a race happening, right? There's a kind of arms race in artificial intelligence, not just between private corporations that are putting resources into it, but also at a national level, the U.S. as an example.

That's right.

Certain countries see artificial intelligence as a national priority, and when you have an arms race going on, most people are focused on making advances, finding new things, pushing innovation forward. And I think if you're a voice saying, well, hang on, slow down, we should double-check, let's put some control measures in place, you might find your voice is in the wilderness and not many people are listening to you. Do you have that experience, or do you find that people are actually receptive to these questions about control of AI and safety?

Are People Receptive?

Well, there are two camps, right? Those who are already convinced, and they would love to see progress put on some sort of standby, a moratorium: let us see what's going on, figure it out. And those who don't see any problems with the technology they develop, so they are trying to get there first, as fast as they can. There is also an interesting variant where people agree that there are problems but still try to get there as fast as they can, and that one I just cannot crack. It must be beyond my human intelligence.

You know, when we look at China, they've had significant progress because they've got such large data pools to call upon in respect to human behavior, for machine learning and things like that. We're seeing them make significant progress in autonomous transportation and so forth. What sort of advantage does China have in respect to
seed machine learning capabilities and the fact that from a societal perspective they seem much moreinterested in infusing artificial intelligence at a society level how doyou think that will play out in term terms of china's economics over the next 20 or 30 yearsChinas economicsso they have an advantage i think in terms of less regulation and being more willing to experiment on people sothey'll deploy and get data without concern for privacy it's definitely an advantage when youneed more data i don't think they are at the forefront in terms of innovativealgorithms they are very good at taking existing approach and scaling it deploying it and obviously you can seefrom their economy they are doing really well in all of those approaches but i don't think they arethe first in terms of deploying completely novel methodsyou know and you're going to get obviously chinese ais that have been trained in china and you're going to getyou know us-based ais from the tech giants here and so forth um you knowwhere where do you get the problems in terms of trying to take those models offshore or do youthink that um this is sort of going to disappear with global data models feeding this or do you get the are yougoing to get regional and national flavors of ai that don't only tend to work uh you know in those in thosegeographies well if you train on local data you're going to get local bias and localpreferences and local common sense right so what is considered very standard in one country maybe a fancyfor criminal and another and we'll definitely see it initially again all of it until we get togeneral beyond human level performance with tools after that i don't think borders will make much of a differencefor that the on a related note there's a concern when you look at this from outside ofcountries like china and the united states that are at the forefront of developing artificial intelligence whenyou look at this from the perspective of the global south there's a concern about a 
kind of data colonialism where data assets are beingaccumulated by countries in the global north and those are those those data assetsare used to train artificial intelligence that is then unleashed on the countries in the south and that's aform of artificial intelligence imperialism if you will do you see that as a possibility is thatanother kind of risk that we should be concerned with this kind of data colonialism and ai imperialismthere is a lot of what i would say current problems uh algorithmic bias technologicalunemployment everything you just mentioned there are great people working on it iam not one of them my work is mostly about future systemsand human level and beyond not because those problems are not important but because the impact fromthem is limited maybe economic hardship maybe something else but each one of them isvery unlikely to exterminate all of humanity whereas with super intelligenceit's a possibility existential risk is something i'm very concerned about sothat's what i concentrate on well then since you bring up the topic tell us what is super intelligence giveus a definition there so it's a system projected to be better than any human in any domainor humanity as a collective so it would be a better chess player better driver better artistessentially it would dominate in terms of its intelligence in all domains we're interested in and beyond thatokay so we've seen over the course of the past 20 years machine intelligence systems or machinelearning systems that could surpass humans in one dimension in one domain one after another but what you're nowproposing is that that we might end up with an ai relatively soon it sounds like that can surpasshuman capacity in every domain is that what you're talking about when you say super intelligence right exactly andSuper Intelligent Systemsspecifically the domains of concern of science and engineering it would be better at designing next generation ofintelligent systems so is that is that going to 
be you know do we do that on an aggregation basistaking like for example you know what tesla's doing with full self-driving andyou know what what um boeing is doing with uh you know or or these uh you know pilotless dronesare doing with you know for flying cars and and um gpt-3 with natural languageprocessing do we sort of take all of these individual competencies and aggregate it up or do you work on asuper intelligence as a as a unique approach to ai singularlyAggregationi think aggregation doesn't work it's very hard to add chess playing knowledge to car driving knowledge so it's asingle system trained to be generally competent and we see it with the latest large models they are pre-trained on alot of different data and then with a few examples master new domains so i think that's the approach and i thinkthat's what we see humans are capable of doing as well we don't just glue another human so you can play chess now we'regonna train the same guy to do that so who who who's working on on the superintelligent frameworks well then obviously yourself i don't work on it i'm trying to slow itdown uh obviously not every company would call it that but in terms of what theyactually do deepmind is definitely working on solving intelligence google company uh open ai is another very bigplayer and everyone else is kind of trying to get into that space but maybe a few steps behindwhat are some of the physical uh infrastructure requirements for super intelligence because we know that evenwith the large learning models like gpt3 that there's a tremendous computationalcost that you need a lot of infrastructure just to run a that's a relatively domain specific uh model butnow we're talking about super intelligence so i would imagine that you need a much bigger infrastructure to support it talk a little bit about theprocessing power the compute power the energy consumption all right so there are estimates basedHuman brain emulationon estimated capacity of human brain if we can get 
same level of computationwe expect same level of performance and actually somewhat more efficient because biological neurons are slowso if we can get to that level and that's what a lot of predictions from ray kurzweil are based on when will weget to human brain emulation capability emulation for 8 billion brains and that tells you okay2023 is when we'll have enough compute to do it it's also possible that we'll find more efficientways to train the same model and there is some indication then that's happening so more efficient in terms of justcompute in terms of electricity necessary so it may actually be a much morereasonable process also what really makes a difference is the difference between training it and deploying it itmay be expensive to train it at me cost a trillion dollars but then it's almost free to deploy it and you quickly recoupall that cost from actually using that super intelligent systemlet's talk about data collection and data assets because one of the things that's quietly been happening over the past decadeis uh that we've been deploying sensors and iot all over the world in manyhidden places i think most people are unaware that when they walk around with a smartphonethey're carrying something that's loaded with sensors and even this apple watch has about a dozen sensors on it and soin our daily lives we're throwing off tremendous amounts of data smog but that data doesn't just disappearinto the ether it's actually been collected by unseen systems so has there been a significant changenow with the collection of data assets has that accelerated the learning process is that something that ishelping drive this movement towards agi i think there is at least understanding that that's a very valuable thing tohave it's not just a side effect you delete it's probably the main product of most companies eventually so there isdefinitely interest in preserving it collecting it and even more interestingly a lot of datawhich was previously safe and 
encrypted may become not encrypted, with some breakthroughs in quantum computing and, again, more powerful intelligence. So we might have access to things you assumed were very private, in the future, to learn about you specifically.

Tell us a little bit more about quantum, if you don't mind, because this is a burning topic. Brett and I are keen on the great advances being made there right now. We've been tracking the progress of that, and it's even deployed in some respects; there are tools developers can start to use, APIs to develop in quantum computing. And there are concerns that encryption models will be rendered irrelevant. Other people say, well, there'll be quantum encryption, so it's just a chess game; there'll be a new form. How do you see quantum computing affecting artificial intelligence, and perhaps accelerating it?

AI and quantum computing

So it may help us train systems quicker, but it's also possible we don't need quantum computing; we could get there with standard von Neumann architecture. I think a lot more impact would be on cryptography, and cryptocurrency specifically, and on privacy of existing communications. So switching to quantum encryption may help in the future, but it doesn't help you preserve the last 30 years of your communications, purchasing history, and so on, and that's where the danger is.

We're starting to see foreign governments try to acquire even encrypted data, exactly for this reason: later on we'll be able to decrypt it and get access to all the secrets.

So that encrypted data from the past, as you say, the last 30 years, once it's decrypted using quantum techniques, that might be a basis for training an AI and bringing it up to speed.

It may be another source of interesting private information about most humans, which can be used for some additional manipulation.

The quantum itself, you don't feel it has a significant input into the development of AI as we think about it
today.

It may. We don't have advanced enough quantum computers yet. It seems that the human brain is probably not relying heavily on quantum effects, and all we want is to get to this human level of generality, and then it will kind of self-improve enough to get us to any level of performance feasible.

So this brings us to the point of consciousness, really, right? There's obviously talk about the quantum mind and how consciousness might work at a subatomic level, and things like that. But when we talk about consciousness from an AI perspective, a lot of the AI that we're talking about now is mimicking human behavior, because it's learned that through machine learning. But do you think machines, and particularly the super intelligent machines that you've studied and researched, will they be conscious in the same way we think of human consciousness? Or are we talking about effectively an alien consciousness here, a different form of thinking and thought processing?

So unless we think humans have some magical component beyond the physical, a soul or something like that, then obviously any emulation of the human brain, of a human as a whole, would have the same property. So I would be surprised if that was not copied over. And yes, just as you would have super intelligence, you would have super consciousness: a system capable of experiencing something much more complex, maybe from different sensory modalities. So these qualities humans have, "this tastes good, this looks good," that system could have something for source code: "this compiles beautifully," I don't know. It seems like it's likely.

I have a paper where I try to understand what those qualia are, and I kind of map them onto bugs and errors in your computational process. So what makes you unique and special is how your hardware and your algorithms combine to interpret streams of data you're getting from the world, and that's your experience. Those unique bugs are what differentiate you and me, and since we know AI definitely has bugs and
sensory problems, they may already be, at some very primitive level, experiencing those unique qualia as well.

Roman, what I'd like you to do right now is take us on a journey into the future, 30 to 50 years out, where super intelligence is in fact integrated into our society. How do you think we will live with AI? Will we need to compete with AI, like Elon suggests, enhancing, augmenting our own intelligence so that we can keep up with these machines? Or do you see that society will adapt to having super intelligence embedded in our world?

Will super intelligence keep us around

Well, I think the moment we get super intelligence, the main character in that movie changes. We are no longer the main character; we don't really have much to contribute. So the real question is, will the super intelligence decide to keep us around for some reason? What might that reason be, and in what state would it keep us around, if at all? It may happen much sooner than 50 years from now. It can happen, and I think there is aicountdown.com or something like that, and it's at six years right now, so it may be a much sooner situation. I don't see what an unaugmented human has to contribute, and if we do upload ourselves, speed it up, integrate with machines, I don't see what we contribute. We become a biological bottleneck, and it makes sense to remove us from this hybrid system, because again, we don't have much to offer. I don't keep my old iPhone taped to my head because maybe it's still useful for something; I just remove it, because it really is just not necessary.

So how do you think we incentivize these super intelligences, then, to keep us around?

What is super intelligence

I don't have that answer. I don't know what super intelligences are into, what they prefer, what their motivations will be. A lot of safety and security research right now is trying to shape that motivation into having a bias towards liking humans. We're trying to remove all sorts of race bias,
gender bias, but we want to instill this "like squishy biological humans" bias: treat them well. And we don't know how to do it properly, because all those definitions, what is a human, what does it mean to treat them well, are not well defined.

This reminds me of Isaac Asimov's three laws of robotics. Turns out he had some foresight. In the remaining time, talk to us a little bit about methodology, because we're always interested in giving our listeners some practical advice about how to think about the future. One of the things I understand from listening to you is that a lot of the work that you do involves writing papers where you're processing a lot of information, and then you formulate it into a constructive narrative that might help other people understand it. But one other thing I noticed is that you collaborate: in your book on AI safety and security, you collaborated with 47 different researchers to create that big book. So talk about the collaboration process, because I think some of the people listening might think that you work in isolation, that you're not necessarily collaborating across disciplines.

Deploying ideas

No, it's definitely a team effort. A single person can never accomplish much in science, at least in terms of deploying ideas. There are a lot of different chapters; it's an edited book, and I tried to get kind of half and half. Half of the book is very big names, famous people, superstars who at any point in history were concerned with that topic, from the first paper on the singularity to Kurzweil's predictions about super intelligence. I tried to kind of bring out their concerns. They were independent chapters; they didn't collaborate in the sense of agreeing on what to say, but they all seem to express that we are likely to hit this novel level of performance, and that may not be all pure good. And then the second part of the book is kind of younger researchers, not super famous names, maybe yet, who are proposing different technical solutions to
those problems. And again, they're looking at very different subsets, say safety of industrial robots, or social media manipulation by machines. So, again, very diverse. I tried to do more in terms of covering different areas, just to show those who are not convinced yet that it's a legitimate area of research. There are a lot of open problems, and the impact is cross-domain; it's not just computer science. There are chapters in economics, chapters in philosophy. So it's definitely something worth looking at if you have not been exploring this area of investigation.

And one of the interesting things I noticed when I was looking on Amazon for your book on AI safety and security is that the reviews talk about how practical this book is. It might seem entirely theoretical, the way we've described it, but many of the folks who've written reviews are working on artificial intelligence; they're doing artificial intelligence research and deployment, and they have your book handy, and they say they make more reference to that particular volume than any other book on their bookshelf. So that's quite a nice compliment.

Yeah, I think so, from the Amazon reviewers, who can be quite harsh, as we all know.

Well, this has been an interesting conversation. Roman Yampolskiy, thank you very kindly for taking the time to share with us your perspectives on artificial intelligence and the looming super intelligence. I was not aware that we were moving that quickly towards this scenario, so it's exciting and a little bit scary to hear about it. Thank you very much for joining us on The Futurists.

Outro

This has been a pleasure chatting with you. Thanks for inviting me. We should do it again in a few years, see how wrong I got it.

Yeah, or how right you did. Dr. Yampolskiy, how do people find out more about your work on super intelligence, and how can they follow you?

They can follow me on Twitter, they can follow me on Facebook. Don't follow me home.

What's your Twitter ID?

It's @romanyam, Y-A-M. If you google my name, it shows up. All my papers are on Google Scholar; you can download them for free.

We'll make sure to tweet that out. Fantastic. Well, we really appreciate you joining us, and thank you for lending your expertise. It was a very detailed and fascinating conversation.

You're listening to The Futurists, with myself and Robert Tercek. We just interviewed Dr. Roman Yampolskiy on super intelligence and a whole range of things. If you liked this topic, or you liked this interview, please share it, and please share with us what you'd like to cover next on The Futurists: who you'd like us to talk to about building a better future for humanity. And don't forget to leave a review of the podcast on your favorite podcast channel, where you downloaded it. But for now, make sure to keep listening, and we'll be back with you next week. Until then, we'll see you in the future.

[Music]

Well, that's it for The Futurists this week. If you liked the show, and we sure hope you did, please subscribe and share it with people in your community, and don't forget to leave us a five-star review; that really helps other people find the show. And you can ping us anytime on Instagram and Twitter at @futuristpodcast with the folks that you'd like to see on the show, or the questions you'd like us to ask. Thanks for joining, and as always, we'll see you in the future.
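Editor's note: the "harvest now, decrypt later" risk raised in the quantum computing segment above can be illustrated with a short toy sketch. Everything in it is illustrative, not from the episode: the key is absurdly small, and brute-force trial division stands in for Shor's algorithm, which is what a large quantum computer would actually run against real-sized (2048-bit) RSA keys.

```python
# Toy sketch of "harvest now, decrypt later" with a deliberately tiny RSA key.
# All numbers are illustrative; real keys are ~2048 bits, and only a large
# quantum computer running Shor's algorithm could factor those.

# Key generation: n = p * q, with p and q kept secret.
p, q = 61, 53
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent, derived from the secret primes

# Today: an eavesdropper records ciphertext it cannot yet read.
message = 42
ciphertext = pow(message, e, n)

# Years later: factoring n (trivial trial division here, Shor's algorithm at
# real key sizes) recovers the primes, the private key, and the plaintext.
def factor(n):
    f = next(i for i in range(2, n) if n % i == 0)
    return f, n // f

p2, q2 = factor(n)
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
recovered = pow(ciphertext, d2, n)
print(recovered)  # prints 42: the stored ciphertext is readable after all
```

The point of the sketch is the timeline Yampolskiy describes: the ciphertext is recorded while the key is still safe, and only later, once factoring becomes feasible, does thirty years of stored traffic become readable. Switching to quantum-safe encryption going forward does nothing for what was already harvested.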
