Super Forecasting

with Regina Joseph

In episode #4 of The Futurists, Regina Joseph talks us through the mechanics of the methodology known as Superforecasting. From her work with Pytho, Sibylink, the National Science Foundation, the Intelligence Advanced Research Projects Activity (IARPA) and beyond, we learn how forecasting can extend well into the future through human collective-intelligence techniques and statistical analysis. What does it take to see the trends of the future emerging when the world seems chaotic, disruptive and unpredictable? Follow @superforecastr

Regina Joseph is a political scientist, systems designer and creative director with a record of award-winning technical products & platforms. Her analytical work assists governments and multinational organizations. Her corporate work spans the world, from Sony to Hearst to Liberty Global. Her entrepreneurial work includes such technical advances as the creation of Blender, the world’s first digital magazine. Joseph is a top-ranked Good Judgment Project (GJP) Superforecaster, and her research in prediction & decision-making has delivered patents, platforms & cognitive training protocols for prediction and analysis. Her training programs are official curriculum coursework for European governments. Joseph is a published author; her work has appeared in outlets including The New York Times, Forbes and Reuters, as well as in academic journals.

Regina Joseph is co-founder of Pytho.io, which delivers better foresight skills and tools to the private and public sectors, training people to become sharper forecasters through diagnostic and assistive methods and techniques.

Regina’s co-founder is Pavel Atanasov, a decision scientist with a product-development bent.

Decision Science is the collection of quantitative techniques used to inform decision-making at the individual and population levels. It includes decision analysis, risk analysis, cost-benefit and cost-effectiveness analysis, constrained optimization, simulation modeling, and behavioral decision theory, as well as parts of operations research, microeconomics, statistical inference, management control, cognitive and social psychology, and computer science.
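To make the flavor of this toolkit concrete, here is a minimal decision-analysis sketch in Python. The scenario, options, probabilities and payoffs are invented for illustration, not drawn from the episode; it simply shows the simplest formal decision rule, comparing options by expected value.

    # Minimal decision-analysis sketch: compare options by expected value.
    # Options, probabilities and payoffs are hypothetical illustrations.

    options = {
        "launch_now":  [(0.6, 120_000), (0.4, -50_000)],  # (probability, payoff)
        "wait_a_year": [(0.8, 70_000), (0.2, -10_000)],
    }

    def expected_value(outcomes):
        """Probability-weighted average payoff of one option."""
        return sum(p * payoff for p, payoff in outcomes)

    for name, outcomes in options.items():
        print(f"{name}: expected value = {expected_value(outcomes):,.0f}")

    # Pick the option with the highest expected value; real analyses layer
    # on risk attitudes, constraints and sensitivity checks.
    best = max(options, key=lambda k: expected_value(options[k]))
    print("choose:", best)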

Regina is also the founder of Sibylink, a consultancy at the nexus of decision-making, foresight and information design.

The School of International Futures teaches, mentors and champions individuals and organisations to use applied, participatory foresight and longer-term planning for public good.

IARPA, the Intelligence Advanced Research Projects Activity, is an organization within the Office of the Director of National Intelligence. IARPA takes real risks, solves hard problems, and invests in high-risk/high-payoff research that has the potential to provide the USA with an overwhelming intelligence advantage.

Philip E. Tetlock is a political scientist and professor at the University of Pennsylvania. He is the author of Expert Political Judgment, which explores what constitutes good judgment in predicting future events and looks at why experts are often wrong in their forecasts.

Aggregative Contingent Estimation (ACE) was a program at IARPA. The stated goals of ACE were “to dramatically enhance the accuracy, precision, and timeliness of intelligence forecasts for a broad range of event types, through the development of advanced techniques that elicit, weight, and combine the judgments of many intelligence analysts.”
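As a rough illustration of what “elicit, weight, and combine” can mean in practice, here is a minimal Python sketch: it pools several analysts’ probability estimates with hypothetical accuracy-based weights, then extremizes the pooled result, a transform reported in the ACE-era aggregation research. All forecasts, weights and the exponent below are invented assumptions, not the program’s actual method.

    # Sketch of combining many analysts' probability judgments.
    # Forecasts and weights are hypothetical; a > 1 pushes the pooled
    # probability away from 0.5 via the odds (extremization).

    forecasts = [0.60, 0.70, 0.55, 0.80]   # P(event) from four analysts
    weights   = [0.30, 0.30, 0.15, 0.25]   # e.g. derived from past accuracy

    pooled = sum(w * p for w, p in zip(weights, forecasts)) / sum(weights)

    def extremize(p, a=2.5):
        """Sharpen a pooled probability by raising its odds to the power a."""
        odds = (p / (1 - p)) ** a
        return odds / (1 + odds)

    print(f"weighted mean: {pooled:.2f}")      # ~0.67 with these inputs
    print(f"extremized:    {extremize(pooled):.2f}")  # ~0.86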

Hybrid Human and AI Decision Making

Daniel Kahneman is a psychologist and economist notable for his work on the psychology of judgment and decision-making, as well as behavioral economics, for which he was awarded the 2002 Nobel Memorial Prize in Economic Sciences. He is the author of Thinking, Fast and Slow, whose main thesis is the dichotomy between two modes of thought: fast, intuitive “System 1” and slow, deliberative “System 2”.

XR, AR, VR, MR

02:02 Where did the term “Super Forecasting” come from?

04:00 Why wasn’t the intelligence community able to foresee 9/11?

04:25 Who is Prof. Philip Tetlock, author of Expert Political Judgment?

05:45 What is the Good Judgment Project?

06:11 Why can experts be bad at predicting the future?

07:04 What makes a good futurist or superforecaster?

10:21 Understanding human behaviour when making predictions

10:44 Hybrid decision making with human and artificial intelligence

12:32 The importance of historical base rates for forecasting

16:15 Why temporal scope is critical to forecasting

20:22 Forecasting starts with systemic inductive reasoning

25:00 Expertise can lead to blind spots when forecasting

25:44 Why do experts get things so wrong?

This week on The Futurists: “We’re entering into a world where we have to hybridize our decision-making with machine intelligence. And so when we think about human intelligence, whether that’s at the individual level or at the collective level, where are the advantages that we have on the human side that can actually outperform machine intelligence? Because I think that is a monolithic idea that people generally have, that machine intelligence will always outperform human intelligence, that it is necessarily better.” “Well, it is good to identify where we add value in the future system, right?” “Exactly. But you can’t do that unless you measure it. You simply cannot do that unless you establish certain benchmarks.”

Hello and welcome to The Futurists, where we’re interested in talking to the people who anticipate, influence and invent the future. I’m Robert Tercek. And I’m Brett King. Welcome to our podcast. Today our guest is a polymath, someone who’s skilled in a number of different fields, but fields that are really relevant to this topic of forecasting and anticipating the future. Our guest today is Regina Joseph. She is a superforecaster, and we’ll get into exactly what that means in just a minute. She’s also a cognitive science researcher, and she’s launched a number of interesting projects. So in this show we’ll talk a little bit about how she got into superforecasting, and then later we’ll get into some of the new things she’s working on, because they’re super relevant to this idea of futurists. Well, welcome to the show, Regina. Thank you so much for having me. It’s great to see you guys, and it’s a great pleasure to be here.

Okay, so let’s start off with the question everybody’s probably wondering about: what the heck is a superforecaster? Do you have to wear a cape? I know, we get that all the time, and I do have a cape somewhere, but probably not the one most people have in their heads. The term superforecaster actually comes from a research program that began in 2011 and ran for four years, until 2015, and it was funded by the intelligence community in the United States: IARPA, which stands for the Intelligence Advanced Research Projects Activity. It’s basically the research and development arm of the intelligence community, the Office of the Director of National Intelligence, and they run high-risk, high-payoff experiments that are basically designed to avoid surprise. Many people are familiar with DARPA; DARPA is the analogous division for the Department of Defense, and IARPA is for the intelligence community. DARPA was really behind the development of what we now know as the internet, so the kinds of things that trickle down into most people’s lives are developed maybe 10, sometimes 15 or 20 years earlier through these kinds of R&D agencies. They put a lot of money into helping to develop experimental research. They are the future, right? They’re in the business of funding it.

What problem was IARPA trying to solve when they set up the forecasting program? At the time of IARPA’s creation, around 2006, there was a lot of concern that the type of global conflicts the United States was experiencing, the types of global outcomes people around the world were witnessing, didn’t seem to be affected by the intelligence and information that was coming to experts like the people working in the intelligence community. So why weren’t we able to detect 9/11? Why weren’t we able to detect the fall of the Berlin Wall? Why weren’t we able to do something about, even recently, the evacuation of Afghanistan? There’s always a question of whether the experts who are deployed to make predictions or forecasts about future outcomes are the best people that you have.

So IARPA invited several teams to start making forecasts and predictions, but one team did particularly well, as I recall. That’s right, and a lot of the inspiration for that research from the beginning came from a book written by Philip Tetlock, who’s a professor at the University of Pennsylvania. He wrote a book called Expert Political Judgment, and basically what that book was was a serial examination, over many years, of millions of forecasts made by pundits, experts, people who have the sort of public legitimacy and authority to claim expertise over a particular area of interest. What he did in that book was examine whether there is a correlation between expertise and accuracy in prediction, especially around politics and economics. That book came out in 2006, and IARPA was interested in thinking, maybe we should test that out. Like DARPA, they build their experimental programs around competitions, where people are invited to submit proposals to a question postulated by IARPA or DARPA. Teams are invited to make these proposals, and then there’s a process of selection where maybe three or four teams will get to compete against each other. Phil Tetlock had his own team called the Good Judgment Project, and that was not only one of the original four teams that was part of the ACE program when it began in 2011, but it also became the winning team, and I was a member.

He proved his thesis. Yes, and then some. Before this, I think there was a certain amount of skepticism, both in the public sector and the private sector. Certainly some experts are very good at prediction, but it turns out that people are often surprised by the lack of direct correlation, in certain cases, between that type of expert credentialing and the accuracy of the predictions. In fact, I think Tetlock’s team wasn’t even composed of so-called experts; it was composed of people who had better thinking habits, whom he had attracted and then sifted through to find the people with the best thinking skills. You know quite a lot about that. Can you tell us about the attributes of somebody who’s considered a superforecaster? What makes them a superforecaster?

Sure. Well, Phil and his wife Barb Mellers, one of the co-principal investigators of that experiment, were looking at identifying some of the psychometrics, the traits related to the propensity for being good at forecasting. A lot of the focus was, well, the people in the Good Judgment Project, Phil and Barb’s team, were not experts. That’s not entirely true; some of us actually do this for a living. I was already working at a think tank doing exactly this, building out a futures division to make geopolitical predictions. Right, an expert in futurism, but you weren’t necessarily an expert in defense or intelligence or foreign policy. It was one of the largest foreign policy think tanks in Europe, so that was exactly what I was doing, and I already had credentials around that. And there were also a few other superforecasters who similarly had that background. But in general, many of the people who were involved in the experiment and who became identified as superforecasters were people who were not working in the world of making predictions about geopolitical and economic situations. What was interesting was the commonality behind the traits we share. Most of us are kind of numbers nerds. We did very well on the types of psychometric assessments like number series or Berlin Numeracy: tests that measure how good you are at detecting patterns in numbers, how good you are at assigning numeric probabilities and making numeric estimations about things rather than using words.

That also speaks to a long-standing concern within the intelligence community about the quality of a prediction. It’s one thing to say whether you think an outcome X is likely or unlikely, but there’s a lot of wiggle room between what “likely” means to Brett versus what “likely” means to Robert. Some of the side experimentation during the ACE program period, done by Phil and Barb and others on the team, looked at quantifying what those words mean. And when you do that, you realize there’s so much ambiguity that you’re actually leaving something like a 20 percent difference in accuracy on the table if you’re using words rather than numbers. So that was a big part of understanding how we communicate this, and what kind of process we use to enable better predictive assessment.

So Regina, you’ve mentioned psychometrics and the statistical side of things, but often when we’re trying to predict the future, or look at how our systems are going to respond to it, we’re looking at individual human behavior or collective human behavior. How much of the art of this is understanding the behavior of humans, and the historical precedents that can impose themselves on a forecasting model? That’s a really great question, because it leads to a lot of the current work we’re dealing with now, years after the end of the ACE program. We’re entering into a world where we have to hybridize our decision-making with machine intelligence. And so when we think about human intelligence, whether at the individual level or at the collective level, where are the advantages that we have on the human side that can actually outperform machine intelligence? Because I think that is a monolithic idea that people generally have: that machine intelligence will always outperform human intelligence, that it is necessarily better. Well, it is good to identify where we add value in the future system. Exactly. But you can’t do that unless you measure it. You simply cannot do that unless you establish certain benchmarks.

So going back to your point, one of the things I’ve been involved in, and that I’m doing in my current research, is looking at what Daniel Kahneman identified. He won the Nobel Prize for his work in decision theory, which is also one of the cornerstone sets of ideas that powered the ACE program and a lot of the anticipatory intelligence concepts that have come from it. It’s this idea that when you are trying to make a decision about something, most people are going to make a gut decision. Most people are not going to take a step back and think, yes, but what are the statistical realities about whether something is going to happen or not? Right, most people default to their personal view; they make a gut decision and then they look for evidence to support their decision. So they’re going with confirmation bias. Correct. But what Kahneman was looking at was this idea known as outside-view thinking: at the statistical level, what is the historical rate of occurrence of an event that you could use to build a more accurate prediction about things that have yet to happen? Being able to identify a base rate is one of the keystones of what I work on in my research, and what we’ve seen in our work, especially in establishing a systemic process, is that the base rate is everything. How you find it, how you present it to the end user, how you can adjust away from it: all of those are absolutely critical factors in sharpening the accuracy of a forecast or a prediction.

So give us a for-instance, because this is pretty high level. What’s an example of a base rate? Let’s say we wanted to forecast a scenario about tech adoption, maybe VR and AR and XR and these new technologies that are coming. We could formulate all kinds of predictions, but they would be based on anecdotal stuff: reports in the press, maybe some numbers about headsets that have been sold. As you’re saying, that’s sort of a trap, because we’re likely to make an inaccurate forecast based on this randomly assembled data. You’re proposing a base rate. So what would the base rate be for something like that? How would we go about constructing a better model for making a forecast?

Well, VR is a great example. I’ve been doing work in VR since the early 90s, so this is something that has been around and in development for decades. And yet right before CES, the Consumer Electronics Show, one of the biggest tech shows in the world, there are always predictive roundups. Every newspaper has them. They ask a bunch of experts and pundits: what do you think is going to be the hot thing at CES this year? And it’s a bunch of people spitballing ideas about whatever they think is hot. Let’s get real: those end-of-year forecasts about technology are BS. For a period of about 10 years, from around 2013 up until even a few years ago, I would get a call year after year from newspapers like The Guardian saying, what do you think? Everybody says VR is going to be really hot. And I said, this is exactly what everybody’s been saying every single year. If it wasn’t hot then, and you still haven’t sold through a sort of minimum viable level of uptake in American households at a certain price-point level, you’re always going to be in this setup where, yeah, VR is going to be the next big thing.

Okay, but hang on a sec, I’m sorry to bust in here. If someone were to say, well, VR is one of those industries where the sun never really rises, so my prediction for next year is that VR is also going to be disappointing, they would have been right for the last five or six years, because there’s been a lot of hype. Since 2017, every year it’s like, oh, this is the year of VR, and that doesn’t actually happen. But they’d be wrong eventually, right? It might be this year that they’d be wrong, because Facebook is going to sell 10 million units of the Oculus Quest 2, which is a pretty good headset, really the first decent headset. And 10 million is a pretty important benchmark; that’s sort of Sony PlayStation level. When the first version of the PlayStation came out, that was a big turning point in the game console business. So, narrowly defined, you could say things are starting to trend differently now.

Okay, let’s get back to the base rate. What’s our base rate in that scenario, where we’re trying to guess next year exactly? You’re going to be looking for a couple of different things. First of all, there’s the temporal scope problem. This is one of the biggest issues people have in getting good at forecasting: being able to actually determine the time period in which a future event will occur. Typically, when pundits make a forecast, they’ll say, well, there’s going to be a recession. Yeah, eventually. But if you’re making it in year X, and by year X nothing has happened, you weren’t very sensitive to the temporal scope of your prediction. If it happens 15 or 20 years later, then it doesn’t really matter. So temporal scope is a big problem. Part of the secret of extracting better accuracy is not just about the forecast; it’s about the questions that you ask. That is one of the areas we specialize in too, this idea of question generation. Asking the right questions is a form of meta-forecasting. You have to be able to formulate a question in a way that it’s still going to matter a year from now, two years from now, three years from now. It should allow the forecast to remain relevant. So if somebody says, yeah, VR is going to be a big hit this Christmas... Robert, you just posited something which could be molded into a testable question: by Christmas of 2021, will Facebook sell 10 million VR headsets? That’s a great question to posit. Now the answer becomes, what’s your forecast? And this is where you need to start to integrate the idea of the base rate: okay, let me think about that for a moment. How many VR headsets got sold last year? How many got sold the year before that? At Pytho, my research partner Pavel Atanasov and I talk about what we call the base-rate rule of ten: take ten incremental units of prior history whereby you can establish some kind of a pattern. The answer for 2020 is 5.5 million units. So there you go. If Facebook is projecting 10 million, and we’re looking at numbers that look perhaps quite different from that projection, then your job as a forecaster is to start making granular adjustments between previous rates of occurrence, the estimate somebody makes about a future outcome, and all the other factual bits of information, all the variables and parameters you need to take into account to refine that granular adjustment.

Let’s say you get a base rate: we have X number of VR headsets sold as of the time of this question. Okay, that’s a good data point to have. But then, if you’re going to refine that judgment, what you want to think about is: what are the price points of these headsets? What is the ratio of the price point of the headset to the actual sell-through of the headset, and how often has that happened every time a new headset gets released into the marketplace? So game consoles and smartphones could be a proxy there, for seeing what the price point is. I would be looking at that data as well. I would be looking at comparative rates of uptake in certain types of devices and gear that have some kind of similar function, entertainment or knowledge development; look at the rates of adoption for those products and look for similarities. Are they made by the same manufacturers? Where is the sell-through in different regions of the world? So you have to apply a very specific process of systemic reasoning. You want to develop some inductive reasoning about the problem: you’re trying to think through all of the elements you need to put into the mix in order to understand the problem and generate a reasonable response.

So Regina, in terms of that base rate, how much of that involves you actually learning about the domain to successfully come up with it, versus using analogies from other forecasts you’ve made? Again, a great question, because that goes right to the heart of the idea of expertise, and we are dealing with that problem very directly in the current research we do, called Human Forest. I’m going to answer your question with an example. Right now our research focuses on the area of clinical trial transitions for new drugs. A really good thing to be investigating right now. Exactly, and we did it for very temporal reasons: we’re in the middle of a pandemic, and there’s a lot of change happening in the traditional rollout of how a drug goes from development to approval to marketing in the public domain. So what happens in that space? Lots of people assume that if you set up an environment where you ask people to make predictions on “will drug X transition from Phase 2 to Phase 3 testing” or “will drug Y receive clinical approval to market by date X” (these are all great things you can forecast on), the people who would perform best in that type of contest would likely be people who work in the life sciences: clinicians, doctors, pharmaceutical executives, researchers. That would be a very logical and rational expectation on the part of most people. The reality is that that’s not what we see in the research, and that has a lot to do with how you present the information, how you format it, and what kinds of additive information you offer to people. What we’ve seen is that our non-experts, laypeople with zero professional background in the life sciences, biology, medicine or medical research, actually outperformed experts. And this goes back to the foundational work Phil has done establishing that there are a lot of reasons why experts don’t always get it right, and much of that has to do with certain psychological issues. If you are in a small field of experts, consensus matters, because there is reputational risk if you go out on a limb and get it wrong. Regina, let’s hold off on that for a second, because I want to make sure we delve deep into the blind spots and the cognitive biases after we go to break. This is super interesting stuff. For those of you who just joined us, we’re talking to Regina Joseph, the cognitive scientist and superforecaster from the School of International Futures. We’ll be right back after this break.
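Regina’s “base-rate rule of ten” and the granular adjustments she describes above lend themselves to a small sketch. Everything below (the historical unit figures, the adjustment names and sizes) is a hypothetical placeholder, not real headset data; the point is the shape of the reasoning: anchor on roughly ten increments of history, then adjust away from the anchor one variable at a time.

    # Sketch of base-rate anchoring for the VR-headset question discussed above.
    # Unit figures are hypothetical placeholders, NOT real sales data.

    prior_years_sales_m = [0.3, 0.5, 0.9, 1.2, 1.8, 2.3, 3.0, 3.7, 4.5, 5.5]

    # Base rate: what do ~10 increments of history say about year-on-year growth?
    growth = [b / a for a, b in zip(prior_years_sales_m, prior_years_sales_m[1:])]
    typical_growth = sum(growth) / len(growth)

    anchored_next_year = prior_years_sales_m[-1] * typical_growth
    print(f"history-anchored estimate: {anchored_next_year:.1f}M units")

    # Granular adjustments away from the anchor, one per variable named above.
    # Signs and sizes are illustrative judgment calls.
    adjustments = {
        "lower price point than prior models":   +0.20,
        "console-like launch-cycle bump":        +0.10,
        "limited availability in some regions":  -0.05,
    }
    adjusted = anchored_next_year * (1 + sum(adjustments.values()))
    print(f"after adjustments: {adjusted:.1f}M units, versus a 10M claim")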

Welcome to Breaking Banks, the number one global fintech radio show and podcast. I’m Brett King. And I’m Jason Henrichs. Every week since 2013 we’ve explored the personalities, startups, innovators and industry players driving disruption in financial services, from incumbents to unicorns, and from cutting-edge technology to the people using it to help create a more innovative, inclusive and healthy financial future. I’m JP Nicols, and this is Breaking Banks.

Hi, and welcome back. You’re listening to The Futurists with Brett King and me, Robert Tercek, and today our guest is Regina Joseph. She’s a cognitive science researcher with an expertise in forecasting, a super relevant topic for our show. In the previous half we were talking about methodologies and some of the background: how a group of forecasters, led by a psychologist named Philip Tetlock, started to notice something really important, which is that expertise doesn’t always mean you’re going to make accurate forecasts. Sometimes expertise brings with it a bunch of institutional blind spots, and when you start to measure the results, really track the forecasts over time, what we can see is that sometimes experts are no better or more accurate than flipping a coin. In fact, a 50/50 track record is pretty good for most experts. One of the people who established that early on is Daniel Kahneman, the Nobel Prize-winning psychologist and researcher into cognitive bias, and of course, famously, the author of Thinking, Fast and Slow. Now Regina, in this part of the show, what I want to talk about, and I think Brett and I are both keenly interested in this, is how people get it wrong. How do these people who are so smart, so steeped in the information and so knowledgeable about the subject matter, have blind spots? What are the cognitive biases that get in their way? I’ve seen it so often in the banking space, for example. I was just in Switzerland last week with the Swiss Bankers Association, and I’m meeting bankers who say, no, no, no, people are always going to prefer to do banking with a human, and I’m like, that’s not even true now, let alone in the future. So how do we get around those blind spots, Regina?

Well, I think overconfidence is one of the most common problems that people who wish to make forecasts need to overcome. And certainly Kahneman identified this: everybody thinks they’re good at forecasting. The natural tendency we have is to think our decision-making is pretty good, our process is pretty good. The reality is that when you put that to the test, most people are pretty lousy. It’s like people who trade stocks. They always tell you about the stocks they picked that went up, but they never talk about the stocks where they blew it, where they got it completely wrong. And over time, in their own mind, because they’re telling that story over and over again, what they start to believe is that all of their picks are good, that they’re quite good at this. It’s really astonishing to me that we blind ourselves by repeating this story over and over again. One of the things Daniel Kahneman did so well was reveal the heuristics (I’m not sure I’m saying that exactly right), this idea that there are mental shortcuts we take, because it’s quite difficult to actually think about your thought process, so we always take a shortcut. Some of those heuristics include the availability heuristic, where you use the most recent example as your baseline, a kind of fake baseline, for predicting the next thing that’s going to occur, even though there’s not enough historical continuity there for that to be valid. Another one we talked about a moment ago is confirmation bias, where we make a decision about what’s going to happen and then search for information that supports our opinion rather than disproves it. It’s like the opposite of the scientific method. Regina, can you tell us a little bit more about cognitive bias?

Well, I think the simplest way to talk about it is that we all have them, and even when you are aware of them, they’re very, very difficult to mitigate. So it does require really understanding how to identify them, and most people really don’t know how. If you ask somebody to identify a general bias that factors into their thinking, most people would hem and haw about what that even means. So just being able to identify the common biases all of us are subject to (confirmation bias, overconfidence, hindsight bias), just knowing the types of mental actions we typically undertake when making decisions, that’s a good start.

One of the things I found so striking in both Kahneman’s work and Philip Tetlock’s book Superforecasting is the part about political beliefs or political convictions. So here’s a question for the audience to consider: who do you think would make a more accurate forecast about a political election? Someone who’s a very staunch advocate for one party or the other, extremely committed to politics and particularly to a particular party, or someone who’s relatively neutral? That’s a good question to ask, and I know you know the answer, Regina, so tell us about that and how it actually works out.

Yeah, I think the 2016 election was a fantastic use case for examining that, and my research partner and I did exactly that. We published a story in The Washington Post about who got it least wrong, because everybody got it wrong, right? All the pundits were wrong; a hundred percent of them were completely incorrect. Not a hundred percent, but it was pretty close. I tell you, I got it wrong. Me too. Most people did. If you want a funny story: I’m a native New Yorker, and in the work that I do, I work a lot with governments in Europe. I was in the office of a senior political adviser, and we were talking about the upcoming 2016 election, and he said, well, you’re from New York, you know Hillary’s going to win, right? And I said, well, I just cannot possibly imagine otherwise; and this was my New York native bias coming out. I thought there was simply no way that Donald Trump could become the 45th president of this country, because certainly people would come to see him in the same light that we in New York see him; the majority of New Yorkers see him in a very particular light. So I was saying this to the adviser, and both of us were sort of shaking our heads thinking, yeah, it’s very unlikely that he’s going to win. About three weeks after I had that conversation came the FBI announcement about going back in to investigate Hillary’s records. I changed my forecast on that, but not close enough to the end, not close enough to the temporal scope, to get my score to a really good level. That was a clear example of my bias in presuming that everybody would see things the way I, a native New Yorker with long experience of Donald Trump, would see them. That was clearly wrong, and it was a good lesson for me. And when I saw this adviser a few months later, he said, boy, we both flubbed that one. I said, yeah, we certainly flubbed that one. Even my adjustment towards the end was not significant enough to really make it a good forecast.

You’re bringing up a very good point, which is that where you are physically on the planet is actually going to affect your perspective. So you’re in New York; I’m here in the super liberal bubble of Los Angeles, and so we have a distorted view. It’s hard to see it, but we do live with a distorted view, because everybody around us thinks the same way. And by the way, the same thing’s true in the red states. In the red states it’s unthinkable that Donald Trump could have lost the most recent election, because everybody around them was pretty much a fan, and so they saw widespread signs of support. So one of the things you have to get in the habit of is checking your geographic place: how is that blinding you, who you’re with, the people you surround yourself with? One of the ways to counter cognitive bias is to align yourself with people who can challenge your thought process, and I know you work with a business partner, Pavel, for that very reason. Two brains are better than one, and even in the superforecasting technique, it’s always a group. So talk a little bit about how other people can help us see our own blind spots.

Yeah. Most decision-making takes place in groups, in teams, so learn what a good process is for arriving at some kind of predictive insight when you have to do it with a bunch of people. What we know is that, yes, diversity of thought makes the collective intelligence more powerful in many cases. But there is also a step you need to take before you get to that collective discussion, which is independent estimation. Even if you’re operating in a team, it’s better that every single individual within that group makes their own independent estimate about what they think is going to happen before they start talking with other people, so that they are not allowing potential groupthink bias, or anchoring bias, or other types of biases, to creep in. If I’m working, and this happens a lot, I often find myself the only woman in a room, or one of very few women in a room, so there are biases associated with that before I even open my mouth; there are going to be perceptions about that. So how do you circumvent those kinds of biases, those kinds of problems that arise in the accuracy space? The first step is: make independent estimations.

This also comes back to team selection, Regina. You’ve talked about this in terms of the forecasting teams for IARPA and so forth, but how do you go through that process of team selection? Do you purposely pick people without domain expertise, for example? At the experimental level, we do random assignment. To maintain the ability to really detect whether or not our systems are working, we usually do not pre-select people into certain groups. However, in some of the research we’re doing, it’s not about who we want in specific cohorts; what we’re looking at is how the system applies to certain types of samples. So in our case, we have one group made up of superforecasters, people who have been shown to be very consistently accurate over an extended period of time. That’s one discrete group. We have another discrete group of life-sciences professionals, people who are experts in this field. And then we have people who are total laypeople: zero forecasting experience and zero biomedical experience.

But those folks actually have some traits in common. They read widely, they read eclectically, they don’t hold very fixed political beliefs, as I alluded to in the previous comment. They’re flexible thinkers. And I think it’s also important for people to understand: they’re not fixated on their prediction. They’re perfectly willing to change it. As you explained with the 2016 election, superforecasters are quite comfortable saying, oh, new information has come in, and therefore we’re going to modulate, we’re going to moderate and change our forecast a little bit based on that new information. Some people are too fixated, too rigid: I made my forecast and I’m going to stick to it. And that’s almost a recipe for going wrong. So even if these folks are not experts in a particular subject matter, they do share some common traits, even if they come from divergent backgrounds. Yeah, and I think the key thing is that it’s a trainable skill. Superforecasters have a propensity to do this naturally, but for people who don’t, you can teach them.

Well, in fact, that’s what you’re doing now. Can you talk a little bit about your program? Because this is a good opportunity for those who are listening to learn that this is actually something you can improve. Tell us about it. Thanks. Basically, since 2012 I’ve been developing training programs for people who don’t have this natural superforecaster propensity but who need to be able to make good decisions and good forecasts: teaching them a step-by-step process for how to be a better forecaster, how to think like someone who makes forecasts professionally, how to do it well. And much of it is really about practice. You need to get them into an environment where they can just make a forecast, because most people have never done that before, not in a disciplined way. I mean, we make forecasts every day: we decide what clothes to wear, whether or not to bring an umbrella. People do that on an intuitive level, but they don’t think about their process. And one of the keys to becoming a better forecaster is to start to expose your thought process and become familiar with it. This is the idea of thinking about thinking, and all the authors we’ve mentioned so far write deeply about this, because it takes a great deal of skill. It’s also very hard for people to figure out how to think about their thought process. Is that one of the things you train people on, Regina? Yes. Metacognition, or thinking about thinking, is essential. Metacognition, I love that. So what we’re really trying to get people to do is take a step back, go into that System 2 mode of thought that Daniel Kahneman talks about, and be careful about where they might fall down in their process, and where they can boost it. One of the things that is so important for us, both as researchers and as people whose job is making people better forecasters and providing highly accurate forecasts, is getting people used to framing it differently. It goes back a little bit to what you were saying, Robert: when I describe this for people who’ve never done forecasting before, the first part is to get them to see that every single decision that we make, every single one, is a bet on a future outcome. It’s a bet on something that is yet to happen. So everything we do, decision-wise, could ultimately be regarded as a forecast; we just don’t think about it that way.

That’s a great way to put it. The science fiction author David Brin, who someday we’ll get on the show, is very bold about saying, I’ll place a bet on that. He’s perfectly willing to put his money where his mouth is about his forecasts, and he’s always challenging people on social media to do the same. It’s a good idea. And one of the groups of people that Tetlock identified as consistently very good at this are people who invest in stocks. The people who have been successful with that, again, tune their bets. They’re constantly adjusting their position based on new information; they don’t just buy and hold all the time. So there’s a certain set of skills that can be taught, and that’s an interesting thing. For those who are interested, what URL should they go to, Regina, to find out about learning how to be a better forecaster? The easiest way is to go to www.pytho.io, and if you look at ARETE, that’s where you can sign up, and we can send you more information about our training programs and about our research. There are a variety of things we can offer, whether people want to look at it more from the scientific side or just want to learn and develop the skill. If they go to www.pytho.io they can find a lot of information about that, and you can also reach me on Twitter or LinkedIn; I think we can probably put the addresses up as well.

Great, people can get all of that on social media. Cool. So, Regina, there were a couple of things you talked about before we started recording that were super interesting. In the remaining time, tell us about the anticipatory intelligence movement, the workshop, and what you’re doing with the National Science Foundation. So there’s certainly a group of people who have been working on these problems in a variety of different places, coming at it from different perspectives, but still maintaining the same focus on a lot of what we’ve discussed in this last hour: the work of people like Phil Tetlock and Barb Mellers and Daniel Kahneman, how we interpret what they learned in their research, and how we’re adapting it as time moves forward. Again, what we’re looking at in our work has a lot to do with the hybridization of human intelligence with machine intelligence, and this has a lot of potential ramifications for our safety, our security, our knowledge. I think we’ve seen certain examples where it doesn’t go right. The key, and this is the really hard part, is that so much of this is about giving people the sense of learning a process, learning a high-quality system, and that has a lot to do with developing a taste for nuance. That is a tough thing for people to develop, and there are a lot of factors that make it harder.

A taste for nuance, you’re saying. Give us an example. When the pandemic hit last year, there was so much confusion, so much chaos, that people were focusing a lot of attention on certain types of drugs simply because they were in the press a lot. You would see those brand names, those company names. Before that, most people would never talk about something like recombinant DNA, or CRISPR gene editing, or genomic sequencing. People talk about it now, but back then there was such a lack of nuance around what was happening, mostly because of fear and panic. Disinformation campaigns, people recommending drinking bleach and so forth; there was just a lot of bad information, and it was hard for people to sort through it. And the scientific process takes a long time; the FDA moved notoriously slowly to come out with any kind of pronouncement, so people weren’t sure where to get their guidance. That’s a big issue: the media environment around us influences us. If you see ten headlines saying that Mark Zuckerberg thinks Facebook will be the metaverse, it wouldn’t be surprising for most people to conclude that that’s probably going to happen, even if it’s just a press release, even if it’s just a concerted press push. It’s widely understood in politics, but it’s also true for companies. Companies are trying to craft a perception about where they’re going to be in the future. They’re trying to influence shareholders and the stock market, and so they use these sorts of media campaigns to influence people’s thinking. A big part of the work that you do is to illuminate that and show people: hey, your media diet is going to influence the thoughts inside your head, and eventually those ideas are going to take root. You might start to believe them, whether or not they’re based on any kind of fact.

You mentioned the pandemic, and I have to ask you this question, because it’s been on my mind. Nothing was easier to forecast, literally nothing, than COVID-19. For 20 years we’d been hearing from everybody in the field of epidemiology that there was going to be another global breakout of some kind of highly communicable disease. And we had very well-developed plans on how to tackle it. That’s exactly right, there were teams ready, although they had been kind of deactivated in the US in some cases. But it’s been around for a long time; Laurie Nadal wrote a book called The Next Plague in, like, 2003 or 2004, and there were books that came out just a year before. One scientist had even said, look, it’ll be a coronavirus, it’ll come from a bat in the Wuhan area. This doesn’t require a rocket scientist; you could just read what was already published and understand this was coming. So here we had a case where there were excellent forecasts available, but the people in charge ignored them. So what’s that called? I think of it as the Cassandra syndrome, where you’ve got somebody outside the temple telling you exactly what’s going to happen: don’t go there, it’s going to be bad. And then Agamemnon is like, nope, send the ships off to Troy, we’re going. So what do you think of that?

I love that we’ve got so many Greco-Roman, classical references flowing through this conversation. I think that is one small part of the problem. But as to the reasons why leaders and decision-makers don’t follow through on the copious amounts of evidence or data sitting in front of them to make the right decision, there are so many variables that affect that actual decision-making process. Yes, the Cassandra complex is part of it, but there are other things: political liabilities, personal incentives, whether greed or power. So it’s not just about the cognitive factors at stake, like overconfidence; there are so many layers to a decision, especially at a high level. What we try to do is decompose that. If we’re looking back at forecasts that went wrong, what are all the possible pathways where the decision-making took the wrong side of the fork? That’s a very complex process, and part of what we’re thinking about now is how to make getting through that process a little easier. But I often find that in decision-making at a high level, it comes down to the person who has the most money, the most power, the most seniority. They do what they want, and they’re often not as easily influenced as you might think.

Fantastic. Well, maybe let me just finish with one question so we can wrap this up, Regina. If you’re talking to the average person out there today, could you give them a list of actions to take, or ways to change their lifestyle, so that they’re better placed for the future? I think the first thing is to be informed. Being well informed really is the cornerstone of being a good forecaster, or just making good decisions, and you have to keep updating yourself on that information. It’s not enough to know a fact one day and then just let it sit until you have to make a decision years later. Read eclectically, across many different disciplines. People get stuck in a bubble, right? They get a habit, and then it gets reinforced. Okay, go ahead, I’m sorry, I’m interrupting you. Oh no: be diverse in your thought. So: be well informed, be diverse in your thought, and have a process. And that can start very easily with something as simple as learning how to make a decision table, where you are basically evaluating the trade-offs. If I have to make a decision, where are the trade-offs? Then you just have to score which trade-offs are the worst ones, which are the least acceptable adverse trade-offs you would have to make. Just being able to do that is a good start. What we do is try to provide processes by which people can learn how to do that stuff quickly and easily. And the more you practice it, the more you put yourself in an environment where you are testing yourself and tracking yourself. Because again, much of what we talk about comes back to this: there are lots of people out there in the world who say they’re futurists, but if they aren’t tracking themselves in an environment where they can definitively say, yes, for the last 10 years I have had a consistent track record of accurate prediction, here’s my Brier score, this is my performance in this year and this year and this year... until you do that, I think we fall into the trap of calling people futurists who are probably not that accurate, not that good at it. So when we talk about the track-record issue, I think we need to be putting more futurists in an environment where: okay, can you put your money where your mouth is?

You’re absolutely right, Regina. We’re going to need a lot more futurists in the future, or at least future-minded people, who we’re trying to reach with this program, because the world is changing fast, and it’s really important for people to develop their own methodology for navigating through that fast-changing world. Now folks, you’re listening to The Futurists, and our guest today has been Regina Joseph. She is a superforecaster; she was actually one of the top-performing superforecasters in that IARPA program we talked about at the beginning of the show. She’s also a cognitive science researcher and a geopolitical analyst, and you can learn more about her at her website, p-y-t-h-o dot io, where you can also take a course and learn how to become a superforecaster yourself. You’ve been watching or listening to The Futurists with Brett King and me, Robert Tercek, and we will see you in the future. Well, that’s it for The Futurists this week. If you liked the show, and we sure hope you did, please subscribe and share it with people in your community, and don’t forget to leave us a five-star review; that really helps other people find the show. And you can ping us anytime on Instagram and Twitter at @futuristpodcast with the folks you’d like to see on the show or the questions you’d like us to ask. Thanks for joining, and as always, we’ll see you in the future.
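The Brier score Regina cites as the honest track-record metric is simple to compute. Here is a minimal sketch with made-up forecasts and outcomes: lower is better, 0.0 is perfect, and a permanent 50/50 hedge scores 0.25 on binary questions.

    # Minimal Brier-score sketch for binary questions; example data is made up.
    # Brier = mean squared error between forecast probability and outcome (0/1).

    forecasts = [0.80, 0.30, 0.65, 0.90, 0.20]   # P(event occurs)
    outcomes  = [1,    0,    0,    1,    0]      # what actually happened

    brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
    print(f"Brier score: {brier:.3f}")   # 0.0 = perfect; 0.25 = always saying 50%

    # Tracking this over years of resolved questions is the "track record"
    # that separates measured forecasters from unscored punditry.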

Breaking Banks
Hosted by Brett King, Jason Henrichs & JP Nicols
The #1 global fintech radio show and podcast. Every week we explore the personalities, startups, innovators, and industry players driving disruption in financial services; from incumbents to unicorns, and from the latest cutting-edge technology to the people who are using it to help create a more innovative, inclusive and healthy financial future.

https://provoke.fm/show/breaking-banks/

Entering into a world where we have to hybridize our decision-making with machine intelligence, we need to think about where the advantages are on the human side that can outperform machines.