Physicist, Startup Founder, Blogger, Dad

Saturday, May 19, 2018

Deep State Update


It's been clear for well over a year now that the Obama DOJ-FBI-CIA used massive surveillance powers (FISA warrant, and before that, national security letters and illegal contractor access to intelligence data) against the Trump campaign. In addition to SIGINT (signals intelligence, such as email or phone intercepts), we now know that HUMINT (spies, informants) was also used.

Until recently one could still be called a conspiracy theorist by the clueless for stating the facts in the paragraph above. But a few days ago the NYTimes and WaPo finally gave up (in an effort to shape the narrative ahead of the imminent DOJ Inspector General report(s) and other document releases) and admitted that all of these things actually happened. The justification advanced by the lying press is that it was all motivated by fear of Russian interference, not by any partisan political motive for the Obama administration to investigate the opposition party during a presidential election.

If the Times and Post were dead wrong a year ago, what makes you think they are correct now?

Here are the two recent NYTimes propaganda articles:

F.B.I. Used Informant to Investigate Russia Ties to Campaign, Not to Spy, as Trump Claims


Code Name Crossfire Hurricane: The Secret Origins of the Trump Investigation

Don't believe in the Deep State? Here is a 1983 Times article about dirty tricks HUMINT spook Stefan Halper (he's the CIA-FBI informant described in the recent articles above). Much more at the left of center Intercept.

Why doesn't Trump just fire Sessions/Rosenstein/Mueller or declassify all the docs?

For example, declassifying the first FISA application would show, as claimed by people like Chuck Grassley and Trey Gowdy, who have read the unredacted original, that it largely depends on the fake Steele Dossier, and that the application failed to conform to the required Woods procedures.

The reason for Trump's restraint is still not widely understood. There is and has always been strong GOP opposition to his candidacy and presidency ("Never Trumpers"). The anti-Trump, pro-immigration wing of his party would likely support impeachment under the right conditions. To these ends, the Mueller probe keeps Trump weak enough that he will do their bidding (lower taxes, help corporations and super-wealthy oligarchs) without straying too far from the bipartisan globalist agenda (pro-immigration, anti-nativism, anti-nationalism). If Trump were to push back too hard on the Deep State conspiracy against him, he would risk attack from his own party.

I believe Trump's strategy is to let the DOJ Inspector General process work its way through this mess -- there are several more reports coming, including one on the Hillary email investigation (draft available for DOJ review now; will be public in a few weeks), and another on FISA abuse and surveillance of the Trump campaign. The OIG is working with a DOJ prosecutor (John Huber, Utah) on criminal referrals emerging from the investigation. Former Comey deputy Andrew McCabe has already been referred for possible criminal charges due to the first OIG report. I predict more criminal referrals of senior DOJ/FBI figures in the coming months. Perhaps they will even get to former CIA Director Brennan (pictured at top), who seems to have lied under oath about his knowledge of the Steele dossier.

Trump may be saving his gunpowder for later, and if he has to expend some, it will be closer to the midterm elections in the fall.


Note added: For those who are not tracking this closely, one of the reasons the Halper story is problematic for the bad guys is explained in The Intercept:
... the New York Times reported in December of last year that the FBI investigation into possible ties between the Trump campaign and Russia began when George Papadopoulos drunkenly boasted to an Australian diplomat about Russian dirt on Hillary Clinton. It was the disclosure of this episode by the Australians that “led the F.B.I. to open an investigation in July 2016 into Russia’s attempts to disrupt the election and whether any of President Trump’s associates conspired,” the NYT claimed.

But it now seems clear that Halper’s attempts to gather information for the FBI began before that. “The professor’s interactions with Trump advisers began a few weeks before the opening of the investigation, when Page met the professor at the British symposium,” the Post reported. While it’s not rare for the FBI to gather information before formally opening an investigation, Halper’s earlier snooping does call into question the accuracy of the NYT’s claim that it was the drunken Papadopoulos ramblings that first prompted the FBI’s interest in these possible connections. And it suggests that CIA operatives, apparently working with at least some factions within the FBI, were trying to gather information about the Trump campaign earlier than had been previously reported.
Hmm... so what made the CIA/FBI assign Halper to probe Trump campaign staffers in the first place? It seems the previously advanced BS story for the start of the anti-Trump investigation needs some help...

Friday, May 18, 2018

Digital Cash in China



WSJ: "Are they ahead of us here?"

UK Expat in Shenzhen: "It's a strange realization, but Yes."

Thursday, May 17, 2018

Exponential growth in compute used for AI training


The chart shows the total amount of compute, in petaflop/s-days, used in training (e.g., optimizing an objective function in a high-dimensional space). This exponential trend is likely to continue for some time -- leading to qualitative advances in machine intelligence.
AI and Compute (OpenAI blog): ... since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.5 month-doubling time (by comparison, Moore’s Law had an 18-month doubling period). Since 2012, this metric has grown by more than 300,000x (an 18-month doubling period would yield only a 12x increase). Improvements in compute have been a key component of AI progress, so as long as this trend continues, it’s worth preparing for the implications of systems far outside today’s capabilities.

... Three factors drive the advance of AI: algorithmic innovation, data (which can be either supervised data or interactive environments), and the amount of compute available for training. Algorithmic innovation and data are difficult to track, but compute is unusually quantifiable, providing an opportunity to measure one input to AI progress. Of course, the use of massive compute sometimes just exposes the shortcomings of our current algorithms. But at least within many current domains, more compute seems to lead predictably to better performance, and is often complementary to algorithmic advances.

...We see multiple reasons to believe that the trend in the graph could continue. Many hardware startups are developing AI-specific chips, some of which claim they will achieve a substantial increase in FLOPS/Watt (which is correlated to FLOPS/$) over the next 1-2 years. ...
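To make the arithmetic in the excerpt concrete, here is a minimal back-of-the-envelope check (my own sketch in Python; the roughly 64-month window, 2012 to mid-2017, is an assumption about the period the OpenAI analysis covers):

    months = 64                  # assumed window: 2012 to mid-2017
    fast_doubling = 3.5          # months per doubling, AI training compute
    slow_doubling = 18.0         # months per doubling, Moore's-Law benchmark

    fast_growth = 2 ** (months / fast_doubling)
    slow_growth = 2 ** (months / slow_doubling)

    print(f"3.5-month doubling: ~{fast_growth:,.0f}x")   # on the order of 300,000x
    print(f"18-month doubling:  ~{slow_growth:.0f}x")    # roughly 12x

The two printed numbers reproduce the ~300,000x versus ~12x contrast quoted above.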

Tuesday, May 15, 2018

AGI in the Alps: Schmidhuber in Bloomberg


A nice profile of AI researcher Jürgen Schmidhuber in Bloomberg. I first met Schmidhuber at SciFoo some years ago. See also Deep Learning in Nature.
Bloomberg: ... Schmidhuber’s dreams of an AGI began in Bavaria. The middle-class son of an architect and a teacher, he grew up worshipping Einstein and aspired to go a step further. “As a teenager, I realized that the grandest thing that one could do as a human is to build something that learns to become smarter than a human,” he says while downing a latte. “Physics is such a fundamental thing, because it’s about the nature of the world and how the world works, but there is one more thing that you can do, which is build a better physicist.”

This goal has been Schmidhuber’s all-consuming obsession for four decades. His younger brother, Christof, remembers taking long family drives through the Alps with Jürgen philosophizing away in the back seat. “He told me that you can build intelligent robots that are smarter than we are,” Christof says. “He also said that you could rebuild a brain atom by atom, and that you could do it using copper wires instead of our slow neurons as the connections. Intuitively, I rebelled against this idea that a manufactured brain could mimic a human’s feelings and free will. But eventually, I realized he was right.” Christof went on to work as a researcher in nuclear physics before settling into a career in finance.

... AGI is far from inevitable. At present, humans must do an incredible amount of handholding to get AI systems to work. Translations often stink, computers mistake hot dogs for dachshunds, and self-driving cars crash. Schmidhuber, though, sees an AGI as a matter of time. After a brief period in which the company with the best one piles up a great fortune, he says, the future of machine labor will reshape societies around the world.

“In the not-so-distant future, I will be able to talk to a little robot and teach it to do complicated things, such as assembling a smartphone just by show and tell, making T-shirts, and all these things that are currently done under slavelike conditions by poor kids in developing countries,” he says. “Humans are going to live longer, healthier, happier, and easier lives, because lots of jobs that are now demanding on humans are going to be replaced by machines. Then there will be trillions of different types of AIs and a rapidly changing, complex AI ecology expanding in a way where humans cannot even follow.” ...
Schmidhuber has annoyed many of his colleagues in AI by insisting on proper credit assignment for groundbreaking work done in earlier decades. Because neural networks languished in obscurity through the 1980s and 1990s, many theoretical ideas developed then do not get the recognition today that they deserve.

Schmidhuber points out that machine learning is itself based on accurate credit assignment. Good learning algorithms assign higher weights to features or signals that correctly predict outcomes, and lower weights to those that are not predictive. His analogy between science itself and machine learning is often lost on critics.
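As a toy illustration of that point (my own sketch, not Schmidhuber's formulation): gradient descent on a logistic model performs exactly this kind of credit assignment, growing the weights on inputs that predict the outcome and shrinking the weights on noise inputs toward zero.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 1000, 5
    X = rng.normal(size=(n, d))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # only features 0 and 1 carry signal

    w = np.zeros(d)
    lr = 0.5
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # logistic prediction
        w -= lr * X.T @ (p - y) / n        # gradient step: credit/blame flows to each feature

    print(np.round(w, 2))                  # large weights on features 0 and 1, near zero elsewhere

After a few hundred steps the weights on the two informative features dominate, while the noise features receive essentially zero credit.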

What is still missing on the road to AGI:
... Ancient algorithms running on modern hardware can already achieve superhuman results in limited domains, and this trend will accelerate. But current commercial AI algorithms are still missing something fundamental. They are no self-referential general purpose learning algorithms. They improve some system’s performance in a given limited domain, but they are unable to inspect and improve their own learning algorithm. They do not learn the way they learn, and the way they learn the way they learn, and so on (limited only by the fundamental limits of computability). As I wrote in the earlier reply: "I have been dreaming about and working on this all-encompassing stuff since my 1987 diploma thesis on this topic." However, additional algorithmic breakthroughs may be necessary to make this a practical reality.

Sunday, May 13, 2018

Feynman 100 at Caltech


https://feynman100.caltech.edu

AI, AGI, and ANI in The New Yorker


A good long read in The New Yorker on AI, AGI, and all that. Note the article appears in the section "Dept. of Speculation" :-)
How Frightened Should We Be of A.I.?

Precisely how and when will our curiosity kill us? I bet you’re curious. A number of scientists and engineers fear that, once we build an artificial intelligence smarter than we are, a form of A.I. known as artificial general intelligence, doomsday may follow. Bill Gates and Tim Berners-Lee, the founder of the World Wide Web, recognize the promise of an A.G.I., a wish-granting genie rubbed up from our dreams, yet each has voiced grave concerns. Elon Musk warns against “summoning the demon,” envisaging “an immortal dictator from which we can never escape.” Stephen Hawking declared that an A.G.I. “could spell the end of the human race.” Such advisories aren’t new. In 1951, the year of the first rudimentary chess program and neural network, the A.I. pioneer Alan Turing predicted that machines would “outstrip our feeble powers” and “take control.” In 1965, Turing’s colleague Irving Good pointed out that brainy devices could design even brainier ones, ad infinitum: “Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” It’s that last clause that has claws.

Many people in tech point out that artificial narrow intelligence, or A.N.I., has grown ever safer and more reliable—certainly safer and more reliable than we are. (Self-driving cars and trucks might save hundreds of thousands of lives every year.) For them, the question is whether the risks of creating an omnicompetent Jeeves would exceed the combined risks of the myriad nightmares—pandemics, asteroid strikes, global nuclear war, etc.—that an A.G.I. could sweep aside for us.

The assessments remain theoretical, because even as the A.I. race has grown increasingly crowded and expensive, the advent of an A.G.I. remains fixed in the middle distance. In the nineteen-forties, the first visionaries assumed that we’d reach it in a generation; A.I. experts surveyed last year converged on a new date of 2047. A central tension in the field, one that muddies the timeline, is how “the Singularity”—the point when technology becomes so masterly it takes over for good—will arrive. Will it come on little cat feet, a “slow takeoff” predicated on incremental advances in A.N.I., taking the form of a data miner merged with a virtual-reality system and a natural-language translator, all uploaded into a Roomba? Or will it be the Godzilla stomp of a “hard takeoff,” in which some as yet unimagined algorithm is suddenly incarnated in a robot overlord?

A.G.I. enthusiasts have had decades to ponder this future, and yet their rendering of it remains gauzy: we won’t have to work, because computers will handle all the day-to-day stuff, and our brains will be uploaded into the cloud and merged with its misty sentience, and, you know, like that. ...

Thursday, May 10, 2018

Google Duplex and the (short) Turing Test

Click this link and listen to the brief conversation. No cheating! Which speaker is human and which is a robot?

I wrote about a "strong" version of the Turing Test in this old post from 2004:
When I first read about the Turing test as a kid, I thought it was pretty superficial. I even wrote some silly programs which would respond to inputs, mimicking conversation. Over short periods of time, with an undiscerning tester, computers can now pass a weak version of the Turing test. However, one can define the strong version as taking place over a long period of time, and with a sophisticated tester. Were I administering the test, I would try to teach the second party something (such as quantum mechanics) and watch carefully to see whether it could learn the subject and eventually contribute something interesting or original. Any machine that could do so would, in my opinion, have to be considered intelligent.
AI isn't ready to pass the strong Turing Test yet. But humans will become increasingly unsure about the machine intelligences proliferating in the world around them.

The key to all AI advances is to narrow the scope of the problem so that the machine can deal with it. Optimization/Learning in lower dimensional spaces is much easier than in high dimensional spaces. In sufficiently narrow situations (specific tasks, abstract games of strategy, etc.), machines are already better than humans.
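A quick numerical illustration of the dimensionality point (a toy example of my own, not from the post): with a fixed search budget, random search finds near-optimal solutions in two dimensions but fails badly in one hundred.

    import numpy as np

    rng = np.random.default_rng(0)
    budget = 10_000                                      # fixed number of random samples
    for d in (2, 10, 100):
        samples = rng.uniform(-1, 1, size=(budget, d))
        best = np.min(np.sum(samples**2, axis=1))        # minimize ||x||^2; true optimum is 0
        print(f"d={d:3d}  best value found: {best:.4f}")

The true minimum is zero in every dimension, but the best value the same budget can find degrades rapidly as d grows, which is why narrowing the scope of the problem matters so much.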

Google AI Blog:
Google Duplex: An AI System for Accomplishing Real-World Tasks Over the Phone

...Today we announce Google Duplex, a new technology for conducting natural conversations to carry out “real world” tasks over the phone. The technology is directed towards completing specific tasks, such as scheduling certain types of appointments. For such tasks, the system makes the conversational experience as natural as possible, allowing people to speak normally, like they would to another person, without having to adapt to a machine.

One of the key research insights was to constrain Duplex to closed domains, which are narrow enough to explore extensively. Duplex can only carry out natural conversations after being deeply trained in such domains. It cannot carry out general conversations.

Here are examples of Duplex making phone calls (using different voices)...
I switched from iOS to Android in the last year because I could see that Google Assistant was much better than Siri and was starting to have very intriguing capabilities!


Friday, May 04, 2018

FT podcasts on US-China competition and AI

Two recent FT podcasts:

China and the US fight for AI supremacy (17min)
In the race to develop artificial intelligence technology, American engineers have long had an edge but access to vast amounts of data may prove to be China's secret weapon. Louise Lucas and Richard Waters report on the contest for supremacy in one of this century’s most important technologies.

Gideon Rachman: The dawn of the Chinese century (FT Big Picture podcast, 25min)

See also Machine intelligence threatens overpriced aircraft carriers.

Tuesday, May 01, 2018

Gary Shteyngart on Mike Novogratz and Wesley Yang on Jordan Peterson

Two excellent longform articles. Both highly recommended.

One lesson from Jordan Peterson's recent meteoric rise: the self-help market will never saturate.
Wesley Yang profile of Jordan Peterson (Esquire):

...The encouragement that the fifty-five-year-old psychology professor offers to his audiences takes the form of a challenge. To “take on the heaviest burden that you can bear.” To pursue a “voluntary confrontation with the tragedy and malevolence of being.” To seek a strenuous life spent “at the boundary between chaos and order.” Who dares speak of such things without nervous, self-protective irony? Without snickering self-effacement?

“It’s so sad,” he says. “Every time I go to these talks, guys come up and say, ‘Wow, you know, it’s working.’ And I think, Well, yeah. No kidding! Nobody ever fucking told you that.”

"...When he says, ‘Life is suffering,’ that resonates very deeply. You can tell he’s not bullshitting us."
This is a profile of a guy I happen to have met recently at a fancy event (thx for cigars, Mike!), but it's also a reflection on the evolution (or not) of finance over the last few decades.
Novelist Gary Shteyngart on Mike Novogratz (New Yorker):

... And yet the majority of the hedge funders I befriended were not living happier or more interesting lives than my friends who had been exiled from the city. They had devoted their intellects and energies to winning a game that seemed only to diminish the players. One book I was often told to read was “Reminiscences of a Stock Operator,” first published in 1923. Written by Edwin Lefèvre, the novel follows a stockbroker named Lawrence Livingston, widely believed to be based on Jesse Livermore, a colorful speculator who rose from the era of street-corner bucket shops. I was astounded by how little had changed between the days of ticker tape and our own world of derivatives and flash trading, but a facet that none of the book’s Wall Street fans had mentioned was the miserableness of its protagonist. Livingston dreams of fishing off the Florida coast, preferably in his new yacht, but he keeps tacking back up to New York for one more trade. “Trading is addictive,” Novogratz told me at the Princeton reunion. “All these guys get addicted.” Livermore fatally shot himself in New York’s Sherry-Netherland Hotel in 1940.

... Novogratz had described another idea to me, one several magnitudes more audacious—certainly more institutional, and potentially more durable—than a mere half-a-billion-dollar hedge fund. He wanted to launch a publicly traded merchant bank solely for cryptocurrencies, which, with characteristic immodesty, he described as “the Goldman Sachs of crypto,” and was calling Galaxy Digital. “I’m either going to look like a genius or an idiot,” he said.

... On the day we met at his apartment, a regulatory crackdown in China, preceded by one announced in South Korea, was pushing the price of bitcoin down. (It hasn’t returned to its December high, and is currently priced at around seven thousand dollars.) Meanwhile, it appeared that hedge funds, many of which had ended 2016 either ailing or dead, were reporting their best returns in years. After six years of exploring finance, I concluded that, despite the expertise and the intelligence on display, nobody really knows anything. “In two years, this will be a big business,” Novogratz said, of Galaxy Digital. “Or it won’t be.”

Saturday, April 28, 2018

A Brief History of the (Near) Future: How AI and Genomics Will Change What It Means To Be Human

I'll be giving the talk below to an audience of oligarchs in Los Angeles next week. This is a video version I made for fun. It cuts off at 17min even though the whole talk is ~25min, because my team noticed that I gave away some sensitive information :-( 

The slides are here.



A Brief History of the (Near) Future: How AI and Genomics Will
Change What It Means To Be Human


AI and Genomics are certain to have huge impacts on markets, health, society, and even what it means to be human. These are not two independent trends; they interact in important ways, as I will explain. Computers now outperform humans on most narrowly-defined tasks, such as face recognition, voice recognition, Chess, and Go. Using AI methods in genomic prediction, we can, for example, estimate the height of a human based on DNA alone, plus or minus an inch. Almost a million babies are born each year via IVF, and it is now possible to make nontrivial predictions about them (even about their cognitive ability) from embryo genotyping. I will describe how AI, Genomics, and AI+Genomics will evolve in the coming decades.

Short Bio: Stephen Hsu is VP for Research and Professor of Theoretical Physics at Michigan State University. He is also a researcher in computational genomics and founder of several Silicon Valley startups, ranging from information security to biotech.

Friday, April 27, 2018

Keepin' it real with UFC fighter Kevin Lee (JRE podcast)



A great ~20 minutes starting at ~1:01 with UFC lightweight (155 lb) contender Kevin Lee. Lee talks about self-confidence, growing up in an all-black part of Detroit, not knowing any white people his age until attending college, and getting started in wrestling and MMA. If you don't believe early environment affects life outcomes you are crazy...

They also discuss Ability vs Practice: the 10,000-hour rule is BS, in wrestling and MMA as with anything else. Lee was a world class fighter by his early twenties, having had no martial arts training until starting wrestling at age 16. He has surpassed other athletes who have had intensive training in boxing, kickboxing, wrestling, and jiujitsu since childhood. It will be interesting to see him face Khabib Nurmagomedov, who has been trained, almost since birth, in wrestling, judo, and combat sambo. (His father is a famous coach and former competitor in Dagestan.)

Here are some highlights from Lee's recent domination of Edson Barboza.

Wednesday, April 18, 2018

New Statesman: "like it or not, the debate about whether genes affect intelligence is over"

Science writer Philip Ball, a longtime editor at Nature, writes a sensible article about the implications of rapidly improving genomic prediction for cognitive ability.
Philip Ball is a freelance science writer. He worked previously at Nature for over 20 years, first as an editor for physical sciences (for which his brief extended from biochemistry to quantum physics and materials science) and then as a Consultant Editor. His writings on science for the popular press have covered topical issues ranging from cosmology to the future of molecular biology.

Philip is the author of many popular books on science, including works on the nature of water, pattern formation in the natural world, colour in art, the science of social and political philosophy, the cognition of music, and physics in Nazi Germany.

... Philip has a BA in Chemistry from the University of Oxford and a PhD in Physics from the University of Bristol.
I recommend the whole article -- perhaps it will stimulate a badly needed discussion of this rapidly advancing area of science.
The IQ trap: how the study of genetics could transform education (New Statesman)

The study of the genes which affect intelligence could revolutionise education. But, haunted by the spectre of eugenics, the science risks being lost in a political battle.

... Researchers are now becoming confident enough to claim that the information available from sequencing a person’s genome – the instructions encoded in our DNA that influence our physical and behavioural traits – can be used to make predictions about their potential to achieve academic success. “The speed of this research has surprised me,” says the psychologist Kathryn Asbury of the University of York, “and I think that it is probable that pretty soon someone – probably a commercial company – will start to try to sell it in some way.” Asbury believes “it is vital that we have regulations in place for the use of genetic information in education and that we prepare legal, social and ethical cases for how it could and should be used.”

... Some kids pick things up in a flash, others struggle with the basics. This doesn’t mean it’s all in their genes: no one researching genes and intelligence denies that a child’s environment can play a big role in educational attainment. Of course kids with supportive, stimulating families and motivated peers have an advantage, while in some extreme cases the effects of trauma or malnutrition can compromise brain development.

... Robert Plomin of King’s College London, one of the leading experts on the genetic basis of intelligence, and his colleague Sheila Walker. They surveyed almost 2,000 primary school teachers and parents about their perceptions of genetic influence on a number of traits, including intelligence, and found that on the whole, both teachers and parents rated genetics as being just as important as the environment. This was despite the fact that 80 per cent of the teachers said there was no mention of genetics in their training. Plomin and Walker concluded that educators do seem to accept that genes influence intelligence.

Kathryn Asbury supports that view. When her PhD student Madeline Crosswaite investigated teachers’ beliefs about intelligence, Asbury says she found that “teachers, on average, believe that genetic factors are at least as important as environmental factors” and say they are “open to a role for genetic information in education one day, and that they would like to know more”.

... But now it’s possible to look directly at people’s genomes: to read the molecular code (sequence) of large proportions of an individual’s DNA. Over the past decade the cost of genome sequencing has fallen sharply, making it possible to look more directly at how genes correlate with intelligence. The data both from twin studies and DNA analysis are unambiguous: intelligence is strongly heritable. Typically around 50 per cent of variations in intelligence between individuals can be ascribed to genes, although these gene-induced differences become markedly more apparent as we age. As Ritchie says: like it or not, the debate about whether genes affect intelligence is over.

... Genome-wide polygenic scores can now be used to make such predictions about intelligence. They’re not really reliable at the moment, but will surely become better as the sample sizes for genome-wide studies increase. They will always be about probabilities, though: “Mrs Larkin, there is a 67 per cent chance that your son will be capable of reaching the top 10 per cent of GCSE grades.” Such exam results were indeed the measure Plomin and colleagues used for one recent study of genome-based prediction. They found that there was a stronger correlation between GPS and GCSE results for extreme outcomes – for particularly high or low marks.

... Using GPSs from nearly 5,000 pupils, the report assesses how exam results from different types of school – non-selective state, selective state grammar, and private – are correlated with gene-based estimates of ability for the different pupil sets. The results might offer pause for thought among parents stumping up eyewatering school fees: the distribution of exam results at age 16 could be almost wholly explained by heritable differences, with less than 1 per cent being due to the type of schooling received. In other words, as far as academic achievement is concerned, selective schools seem to add next to nothing to the inherent abilities of their pupils. ...
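For readers unfamiliar with the mechanics behind these genome-wide polygenic scores (GPS): a score is just a weighted sum over genetic variants, with each person's allele counts multiplied by per-variant effect sizes estimated in a genome-wide association study. A schematic sketch using simulated numbers (purely illustrative; these are not real GWAS weights):

    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_snps = 5, 10_000
    genotypes = rng.integers(0, 3, size=(n_people, n_snps))   # 0/1/2 copies of each effect allele
    effect_sizes = rng.normal(0.0, 0.01, size=n_snps)         # per-SNP weights (simulated GWAS output)

    polygenic_scores = genotypes @ effect_sizes                # one score per person
    print(np.round(polygenic_scores, 2))

The resulting score is then interpreted probabilistically, exactly as in the "Mrs Larkin" example in the excerpt above.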

Monday, April 16, 2018

The Genetics of Human Behavior (The Insight podcast)



Intelligence researcher Stuart Ritchie interviewed by genomicists Razib Khan and Spencer Wells. Highly recommended! Thanks to a commenter for the link.

Sunday, April 15, 2018

Sweet Tweet Treats

For mysterious reasons, this old tweet has attracted almost 200k impressions in the last day or so:




If you like that tweet, this one might be of interest as well:



I'm always amazed that so many people have strong opinions on topics like Nature vs Nurture, How the World Works, How Civilization Advances (or does not), without having examined the evidence.

Friday, April 13, 2018

Evolution of Venture Capital: SV + Asia dominate


The comparison to the dot-com bubble of 2000 is probably not appropriate, as the global pool of startup innovation is an order of magnitude larger now.
WSJ: Silicon Valley Powered American Tech Dominance—Now It Has a Challenger

An exclusive WSJ analysis shows how venture-capital investment from Asia is skyrocketing, threatening to shift power over innovation ...

Tuesday, April 10, 2018

SenseTime: most valuable AI startup in the world?



Scientific publications of the founder and CEO Li Xu.
Bloomberg Technology: SenseTime Group Ltd. has raised $600 million from Alibaba Group Holding Ltd. and other investors at a valuation of more than $3 billion, becoming the world’s most valuable artificial intelligence startup.

The company, which specializes in systems that analyze faces and images on an enormous scale, said it closed a Series C round in recent months in which Singaporean state investment firm Temasek Holdings Pte and retailer Suning.com Co. also participated. SenseTime didn’t outline individual investments, but Alibaba was said to have sought the biggest stake in the three-year-old startup. ...

Tuesday, April 03, 2018

AlphaGo documentary



Highly recommended -- covers the matches with European Go Champion Fan Hui and 18-time World Champion Lee Sedol. It conveys the human side of the story, both of the AlphaGo team and of the Go champions who "represented the human species" in yet another (losing) struggle against machine intelligence. Some of the most effective scenes depict how human experts react to (anthropomorphize) the workings of a complex but deterministic algorithm.
Wikipedia: After his fourth-game victory, Lee was overjoyed: "I don't think I've ever felt so good after winning just one game. I remember when I said I will win all or lose just one game in the beginning. ... However, since I won after losing 3 games in a row, I am so happy. I will never exchange this win for anything in the world." ... After the last game, however, Lee was saddened: "I failed. I feel sorry that the match is over and it ended like this. I wanted it to end well." He also confessed that "As a professional Go player, I never want to play this kind of match again. I endured the match because I accepted it."
I wonder how Lee feels now knowing that much stronger programs exist than the version he lost to, 4-1. His victory in game 4 seemed to be largely due to some internal problems with (that version of) AlphaGo. I was told confidentially that the DeepMind researchers had found huge problems with AlphaGo after the Lee Sedol match -- whole lines of play on which it performed poorly. This was partially responsible for the long delay before (an improved version of) AlphaGo reappeared to defeat Ke Jie 3-0, and post a 60-0 record against Go professionals.
Wikipedia: ... Ke Jie stated that "After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong... I would go as far as to say not a single human has touched the edge of the truth of Go."

In this video interview, Ke Jie says "I think human beings may only beat AlphaGo if we undergo a gene mutation to greatly enlarge our brain capacities..."  ;-)
Last year I was on an AI panel with Garry Kasparov, who was defeated by Deep Blue in 1997. (Most people forget that Kasparov won the first match in 1996, 4-2.) Like Lee, Kasparov can still become emotional when talking about his own experience as the champion representing humanity.

It took another 20 years for human Go play to be surpassed by machines. But the pace of progress is accelerating now...
Wikipedia: In a paper released on arXiv on 5 December 2017, DeepMind claimed that it generalized AlphaGo Zero's approach into a single AlphaZero algorithm, which achieved within 24 hours a superhuman level of play in the games of chess, shogi, and Go by defeating world-champion programs, Stockfish, Elmo, and 3-day version of AlphaGo Zero in each case.
Some time ago DeepMind talked about releasing internals of AlphaGo to help experts explore how it "chunks" the game. Did this ever happen? It might give real insight to scholars of the game who want to "touch the edge of the truth of Go" :-)

Sunday, March 25, 2018

Outlier selection via noisy genomic predictors


We recently used machine learning techniques to build polygenic predictors for a number of complex traits. One of these traits is bone density, for which the predictor correlates r ≈ 0.45 with actual bone density. This is far from perfect, but good enough to identify outliers, as illustrated above.

The figures above show the actual bone density distribution of individuals who are in the top or bottom 5 percent for predictor score. You can see that people with low/high scores are overwhelmingly likely to be below/above average on the phenotype, with a good chance of being in the extreme left/right tail of the distribution.

If, for example, very low bone density elevates likelihood of osteoporosis or fragile bones, then individuals with low polygenic score would have increased risk for those medical conditions and should receive extra care and additional monitoring as they age.

Similarly, if one had a cognitive ability predictor with r ≈ 0.45, the polygenic score would allow the identification of individuals likely to be well below or above average in ability.
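A simulation sketch of the same point (my own toy bivariate-normal model, not the actual data behind the figures above): even with r ≈ 0.45, the bottom 5 percent of predictor scores are heavily enriched for genuinely low phenotype values.

    import numpy as np

    rng = np.random.default_rng(0)
    n, r = 1_000_000, 0.45
    score = rng.normal(size=n)                                         # standardized polygenic score
    phenotype = r * score + np.sqrt(1 - r**2) * rng.normal(size=n)     # Corr(score, phenotype) = r

    low_tail = phenotype[score < np.quantile(score, 0.05)]             # bottom 5% of predictor score
    print("fraction below the phenotype mean:    ", np.mean(low_tail < 0))
    print("fraction in the bottom phenotype decile:", np.mean(low_tail < np.quantile(phenotype, 0.10)))

Most of the low scorers land below the population mean, and a disproportionate share land in the extreme left tail, just as in the bone density figures above. The same arithmetic would apply to a cognitive ability predictor of similar accuracy.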

I predict this will be the case relatively soon. Much sooner than most people think ;-)

Here is a recent talk I gave at MSU: Genomic Prediction of Complex Traits

Saturday, March 24, 2018

Public Troubled by Deep State (Monmouth Poll)

If you use the term Deep State in the current political climate you are liable to be declared a right wing conspiracy nut. But it was Senator Chuck Schumer who warned Trump (on Rachel Maddow's show) that
“Let me tell you, you take on the intelligence community, they have six ways from Sunday at getting back at you,”
In 2014 it was Senator Dianne Feinstein who accused the CIA (correctly, it turns out) of spying on Congressional staffers working for the Intelligence Committee. Anyone who is paying attention now knows that the Obama FBI/DOJ used massive government surveillance powers against the Trump team during and after the election. (Title 1 FISA warrant granted against Carter Page allowed queries against intercepted and stored communications with prior associates, including US citizens...) Had Trump lost the election none of this would have ever come to light.

If this is not a Deep State, then what is?
Monmouth University: A majority of the American public believe that the U.S. government engages in widespread monitoring of its own citizens and worry that the U.S. government could be invading their own privacy. The Monmouth University Poll also finds a large bipartisan majority who feel that national policy is being manipulated or directed by a “Deep State” of unelected government officials. Americans of color on the center and left and NRA members on the right are among those most worried about the reach of government prying into average citizens’ lives.

Just over half of the public is either very worried (23%) or somewhat worried (30%) about the U.S. government monitoring their activities and invading their privacy. There are no significant partisan differences – 57% of independents, 51% of Republicans, and 50% of Democrats are at least somewhat worried the federal government is monitoring their activities. Another 24% of the American public are not too worried and 22% are not at all worried.

Fully 8-in-10 believe that the U.S. government currently monitors or spies on the activities of American citizens, including a majority (53%) who say this activity is widespread and another 29% who say such monitoring happens but is not widespread. Just 14% say this monitoring does not happen at all. There are no substantial partisan differences in these results.

“This is a worrisome finding. The strength of our government relies on public faith in protecting our freedoms, which is not particularly robust. And it’s not a Democratic or Republican issue. These concerns span the political spectrum,” said Patrick Murray, director of the independent Monmouth University Polling Institute.

Few Americans (18%) say government monitoring or spying on U.S. citizens is usually justified, with most (53%) saying it is only sometimes justified. Another 28% say this activity is rarely or never justified. Democrats (30%) and independents (31%) are somewhat more likely than Republicans (21%) to say government monitoring of U.S. citizens is rarely or never justified.

Turning to the Washington political infrastructure as a whole, 6-in-10 Americans (60%) feel that unelected or appointed government officials have too much influence in determining federal policy. Just 26% say the right balance of power exists between elected and unelected officials in determining policy. Democrats (59%), Republicans (59%) and independents (62%) agree that appointed officials hold too much sway in the federal government.

“We usually expect opinions on the operation of government to shift depending on which party is in charge. But there’s an ominous feeling by Democrats and Republicans alike that a ‘Deep State’ of unelected operatives are pulling the levers of power,” said Murray.

Few Americans (13%) are very familiar with the term “Deep State;” another 24% are somewhat familiar, while 63% say they are not familiar with this term. However, when the term is described as a group of unelected government and military officials who secretly manipulate or direct national policy, nearly 3-in-4 (74%) say they believe this type of apparatus exists in Washington. This includes 27% who say it definitely exists and 47% who say it probably exists. Only 1-in-5 say it does not exist (16% probably not and 5% definitely not). Belief in the probable existence of a Deep State comes from more than 7-in-10 Americans in each partisan group, although Republicans (31%) and independents (33%) are somewhat more likely than Democrats (19%) to say that the Deep State definitely exists.

Friday, March 23, 2018

Genetics and Group Differences: David Reich (Harvard) in NYTimes

Harvard geneticist David Reich writes below in the New York Times. The prospect that human ancestry clusters ("races") might differ in allele frequencies, leading to quantifiable group differences, has been looming for a long time. Reich writes
I am worried that well-meaning people who deny the possibility of substantial biological differences among human populations are digging themselves into an indefensible position.
See Metric on the space of genomes (2007), Human genetic variation and Lewontin's fallacy in pictures (2008), and What's New Since Montagu? (2014).
How Genetics Is Changing Our Understanding of ‘Race’

By David Reich

March 23, 2018

In 1942, the anthropologist Ashley Montagu published “Man’s Most Dangerous Myth: The Fallacy of Race,” an influential book that argued that race is a social concept with no genetic basis. A classic example often cited is the inconsistent definition of “black.” In the United States, historically, a person is “black” if he has any sub-Saharan African ancestry; in Brazil, a person is not “black” if he is known to have any European ancestry. If “black” refers to different people in different contexts, how can there be any genetic basis to it?

Beginning in 1972, genetic findings began to be incorporated into this argument. That year, the geneticist Richard Lewontin published an important study of variation in protein types in blood. He grouped the human populations he analyzed into seven “races” — West Eurasians, Africans, East Asians, South Asians, Native Americans, Oceanians and Australians — and found that around 85 percent of variation in the protein types could be accounted for by variation within populations and “races,” and only 15 percent by variation across them. To the extent that there was variation among humans, he concluded, most of it was because of “differences between individuals.”

In this way, a consensus was established that among human populations there are no differences large enough to support the concept of “biological race.” Instead, it was argued, race is a “social construct,” a way of categorizing people that changes over time and across countries.

It is true that race is a social construct. It is also true, as Dr. Lewontin wrote, that human populations “are remarkably similar to each other” from a genetic point of view.

But over the years this consensus has morphed, seemingly without questioning, into an orthodoxy. The orthodoxy maintains that the average genetic differences among people grouped according to today’s racial terms are so trivial when it comes to any meaningful biological traits that those differences can be ignored.

The orthodoxy goes further, holding that we should be anxious about any research into genetic differences among populations. The concern is that such research, no matter how well-intentioned, is located on a slippery slope that leads to the kinds of pseudoscientific arguments about biological difference that were used in the past to try to justify the slave trade, the eugenics movement and the Nazis’ murder of six million Jews.

I have deep sympathy for the concern that genetic discoveries could be misused to justify racism. But as a geneticist I also know that it is simply no longer possible to ignore average genetic differences among “races.”

Groundbreaking advances in DNA sequencing technology have been made over the last two decades. These advances enable us to measure with exquisite accuracy what fraction of an individual’s genetic ancestry traces back to, say, West Africa 500 years ago — before the mixing in the Americas of the West African and European gene pools that were almost completely isolated for the last 70,000 years. With the help of these tools, we are learning that while race may be a social construct, differences in genetic ancestry that happen to correlate to many of today’s racial constructs are real.

Recent genetic studies have demonstrated differences across populations not just in the genetic determinants of simple traits such as skin color, but also in more complex traits like bodily dimensions and susceptibility to diseases. For example, we now know that genetic factors help explain why northern Europeans are taller on average than southern Europeans, why multiple sclerosis is more common in European-Americans than in African-Americans, and why the reverse is true for end-stage kidney disease.

I am worried that well-meaning people who deny the possibility of substantial biological differences among human populations are digging themselves into an indefensible position, one that will not survive the onslaught of science. I am also worried that whatever discoveries are made — and we truly have no idea yet what they will be — will be cited as “scientific proof” that racist prejudices and agendas have been correct all along, and that those well-meaning people will not understand the science well enough to push back against these claims.

This is why it is important, even urgent, that we develop a candid and scientifically up-to-date way of discussing any such differences, instead of sticking our heads in the sand and being caught unprepared when they are found.

To get a sense of what modern genetic research into average biological differences across populations looks like, consider an example from my own work. Beginning around 2003, I began exploring whether the population mixture that has occurred in the last few hundred years in the Americas could be leveraged to find risk factors for prostate cancer, a disease that occurs 1.7 times more often in self-identified African-Americans than in self-identified European-Americans. This disparity had not been possible to explain based on dietary and environmental differences, suggesting that genetic factors might play a role.

Self-identified African-Americans turn out to derive, on average, about 80 percent of their genetic ancestry from enslaved Africans brought to America between the 16th and 19th centuries. My colleagues and I searched, in 1,597 African-American men with prostate cancer, for locations in the genome where the fraction of genes contributed by West African ancestors was larger than it was elsewhere in the genome. In 2006, we found exactly what we were looking for: a location in the genome with about 2.8 percent more African ancestry than the average.

When we looked in more detail, we found that this region contained at least seven independent risk factors for prostate cancer, all more common in West Africans. Our findings could fully account for the higher rate of prostate cancer in African-Americans than in European-Americans. We could conclude this because African-Americans who happen to have entirely European ancestry in this small section of their genomes had about the same risk for prostate cancer as random Europeans.

Did this research rely on terms like “African-American” and “European-American” that are socially constructed, and did it label segments of the genome as being probably “West African” or “European” in origin? Yes. Did this research identify real risk factors for disease that differ in frequency across those populations, leading to discoveries with the potential to improve health and save lives? Yes.

While most people will agree that finding a genetic explanation for an elevated rate of disease is important, they often draw the line there. Finding genetic influences on a propensity for disease is one thing, they argue, but looking for such influences on behavior and cognition is another.

But whether we like it or not, that line has already been crossed. A recent study led by the economist Daniel Benjamin compiled information on the number of years of education from more than 400,000 people, almost all of whom were of European ancestry. After controlling for differences in socioeconomic background, he and his colleagues identified 74 genetic variations that are over-represented in genes known to be important in neurological development, each of which is incontrovertibly more common in Europeans with more years of education than in Europeans with fewer years of education.

It is not yet clear how these genetic variations operate. A follow-up study of Icelanders led by the geneticist Augustine Kong showed that these genetic variations also nudge people who carry them to delay having children. So these variations may be explaining longer times at school by affecting a behavior that has nothing to do with intelligence.

This study has been joined by others finding genetic predictors of behavior. One of these, led by the geneticist Danielle Posthuma, studied more than 70,000 people and found genetic variations in more than 20 genes that were predictive of performance on intelligence tests.

Is performance on an intelligence test or the number of years of school a person attends shaped by the way a person is brought up? Of course. But does it measure something having to do with some aspect of behavior or cognition? Almost certainly. And since all traits influenced by genetics are expected to differ across populations (because the frequencies of genetic variations are rarely exactly the same across populations), the genetic influences on behavior and cognition will differ across populations, too.

You will sometimes hear that any biological differences among populations are likely to be small, because humans have diverged too recently from common ancestors for substantial differences to have arisen under the pressure of natural selection. This is not true. The ancestors of East Asians, Europeans, West Africans and Australians were, until recently, almost completely isolated from one another for 40,000 years or longer, which is more than sufficient time for the forces of evolution to work. Indeed, the study led by Dr. Kong showed that in Iceland, there has been measurable genetic selection against the genetic variations that predict more years of education in that population just within the last century.

To understand why it is so dangerous for geneticists and anthropologists to simply repeat the old consensus about human population differences, consider what kinds of voices are filling the void that our silence is creating. Nicholas Wade, a longtime science journalist for The New York Times, rightly notes in his 2014 book, “A Troublesome Inheritance: Genes, Race and Human History,” that modern research is challenging our thinking about the nature of human population differences. But he goes on to make the unfounded and irresponsible claim that this research is suggesting that genetic factors explain traditional stereotypes.

One of Mr. Wade’s key sources, for example, is the anthropologist Henry Harpending, who has asserted that people of sub-Saharan African ancestry have no propensity to work when they don’t have to because, he claims, they did not go through the type of natural selection for hard work in the last thousands of years that some Eurasians did. There is simply no scientific evidence to support this statement. Indeed, as 139 geneticists (including myself) pointed out in a letter to The New York Times about Mr. Wade’s book, there is no genetic evidence to back up any of the racist stereotypes he promotes.

Another high-profile example is James Watson, the scientist who in 1953 co-discovered the structure of DNA, and who was forced to retire as head of the Cold Spring Harbor Laboratories in 2007 after he stated in an interview — without any scientific evidence — that research has suggested that genetic factors contribute to lower intelligence in Africans than in Europeans.

At a meeting a few years later, Dr. Watson said to me and my fellow geneticist Beth Shapiro something to the effect of “When are you guys going to figure out why it is that you Jews are so much smarter than everyone else?” He asserted that Jews were high achievers because of genetic advantages conferred by thousands of years of natural selection to be scholars, and that East Asian students tended to be conformist because of selection for conformity in ancient Chinese society. (Contacted recently, Dr. Watson denied having made these statements, maintaining that they do not represent his views; Dr. Shapiro said that her recollection matched mine.)

What makes Dr. Watson’s and Mr. Wade’s statements so insidious is that they start with the accurate observation that many academics are implausibly denying the possibility of average genetic differences among human populations, and then end with a claim — backed by no evidence — that they know what those differences are and that they correspond to racist stereotypes. They use the reluctance of the academic community to openly discuss these fraught issues to provide rhetorical cover for hateful ideas and old racist canards.

This is why knowledgeable scientists must speak out. If we abstain from laying out a rational framework for discussing differences among populations, we risk losing the trust of the public and we actively contribute to the distrust of expertise that is now so prevalent. We leave a vacuum that gets filled by pseudoscience, an outcome that is far worse than anything we could achieve by talking openly.

If scientists can be confident of anything, it is that whatever we currently believe about the genetic nature of differences among populations is most likely wrong. For example, my laboratory discovered in 2016, based on our sequencing of ancient human genomes, that “whites” are not derived from a population that existed from time immemorial, as some people believe. Instead, “whites” represent a mixture of four ancient populations that lived 10,000 years ago and were each as different from one another as Europeans and East Asians are today.

So how should we prepare for the likelihood that in the coming years, genetic studies will show that many traits are influenced by genetic variations, and that these traits will differ on average across human populations? It will be impossible — indeed, anti-scientific, foolish and absurd — to deny those differences.

For me, a natural response to the challenge is to learn from the example of the biological differences that exist between males and females. The differences between the sexes are far more profound than those that exist among human populations, reflecting more than 100 million years of evolution and adaptation. Males and females differ by huge tracts of genetic material — a Y chromosome that males have and that females don’t, and a second X chromosome that females have and males don’t.

Most everyone accepts that the biological differences between males and females are profound. In addition to anatomical differences, men and women exhibit average differences in size and physical strength. (There are also average differences in temperament and behavior, though there are important unresolved questions about the extent to which these differences are influenced by social expectations and upbringing.)

How do we accommodate the biological differences between men and women? I think the answer is obvious: We should both recognize that genetic differences between males and females exist and we should accord each sex the same freedoms and opportunities regardless of those differences.

It is clear from the inequities that persist between women and men in our society that fulfilling these aspirations in practice is a challenge. Yet conceptually it is straightforward. And if this is the case with men and women, then it is surely the case with whatever differences we may find among human populations, the great majority of which will be far less profound.

An abiding challenge for our civilization is to treat each human being as an individual and to empower all people, regardless of what hand they are dealt from the deck of life. Compared with the enormous differences that exist among individuals, differences among populations are on average many times smaller, so it should be only a modest challenge to accommodate a reality in which the average genetic contributions to human traits differ.

It is important to face whatever science will reveal without prejudging the outcome and with the confidence that we can be mature enough to handle any findings. Arguing that no substantial differences among human populations are possible will only invite the racist misuse of genetics that we wish to avoid.

David Reich is a professor of genetics at Harvard and the author of the forthcoming book “Who We Are and How We Got Here: Ancient DNA and the New Science of the Human Past,” from which this article is adapted.

This is a recent Reich lecture with the same title as his forthcoming book.


Thursday, March 22, 2018

The Automated Physicist: Experimental Particle Physics in the Era of AI



My office will be recording some of the most interesting of the many talks that happen at MSU. I will post some of my favorites here on the blog. See the MSU Research channel on YouTube for more! Audio for this video isn't great, but we are making improvements in our process / workflow for capturing these presentations.

In this video, high energy physicist Harrison B. Prosper (Florida State University) discusses the history of AI/ML, Deep Learning, and applications to LHC physics and beyond.
The Automated Physicist: Experimental Particle Physics in the Age of AI

Abstract: After a broad brush review of the history of machine learning (ML), followed by a brief introduction to the state-of-the-art, I discuss the goals of researchers at the Large Hadron Collider and how machine learning is being used to help reach those goals. Inspired by recent breakthroughs in artificial intelligence (AI), such as Google's AlphaGoZero, I end with speculative ideas about how the exponentially improving ML/AI technology may, or may not, be helpful in particle physics research over the next few decades.
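As a concrete (and purely illustrative) example of the bread-and-butter ML task Prosper describes -- separating signal events from background using a handful of kinematic features -- here is a minimal sketch with invented features and synthetic data; nothing here is taken from the talk itself:

```python
# Illustrative sketch (not from the talk): signal vs. background classification,
# a standard ML task at the LHC. Features and data below are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Toy "kinematic" features: pretend signal events cluster at higher invariant
# mass and missing transverse energy than background (numbers are invented).
signal     = np.column_stack([rng.normal(125, 10, n), rng.normal(60, 20, n)])
background = np.column_stack([rng.normal(100, 30, n), rng.normal(40, 25, n)])

X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = GradientBoostingClassifier().fit(X_train, y_train)
print("ROC AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```

In practice the features would be reconstructed physics quantities and the classifier output would feed into a statistical analysis, but the basic supervised-learning structure is the same.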

Wednesday, March 21, 2018

The Face of the Deep State: John Brennan perjury


Just for fun, Google John Brennan perjury and follow the trail. Here is former CIA Director Brennan raging at President Trump:
Here is The Guardian, charging Brennan with lying about CIA spying on the Senate in 2014. What do Democrat Senators Feinstein and Wyden think of Brennan's credibility? No need to guess, just keep reading.
Guardian: CIA director John Brennan lied to you and to the Senate. Fire him. (2014)

As reports emerged Thursday that an internal investigation by the Central Intelligence Agency’s inspector general found that the CIA “improperly” spied on US Senate staffers when researching the CIA’s dark history of torture, it was hard to conclude anything but the obvious: John Brennan blatantly lied to the American public. Again.

“The facts will come out,” Brennan told NBC News in March after Senator Dianne Feinstein issued a blistering condemnation of the CIA on the Senate floor, accusing his agency of hacking into the computers used by her intelligence committee’s staffers. “Let me assure you the CIA was in no way spying on [the committee] or the Senate,” he said.

After the CIA inspector general’s report completely contradicted Brennan’s statements, it now appears Brennan was forced to privately apologize to intelligence committee chairs in a “tense” meeting earlier this week. Other Senators on Thursday pushed for Brennan to publicly apologize and called for an independent investigation. Sen. Ron Wyden said it well:

Ron Wyden (@RonWyden)
@CIA broke into Senate computer files. Then tried to have Senate staff prosecuted. Absolutely unacceptable in a democracy.

July 31, 2014
Here is Brennan, under oath, claiming no knowledge of the origins of the Steele dossier or whether it was used in a FISA application -- May 23, 2017! Credible?



See also How NSA Tracks You (Bill Binney).

Wednesday, March 14, 2018

Stephen Hawking (1942-2018)


Roger Penrose writes in the Guardian, providing a scientifically precise summary of Hawking's accomplishments as a physicist (worth reading in full at the link). Penrose and Hawking collaborated to produce important singularity theorems in general relativity in the late 1960s.

Here is a nice BBC feature: A Brief History of Stephen Hawking. The photo above was taken at Hawking's Oxford graduation in 1962.
Stephen Hawking – obituary by Roger Penrose

... This radiation coming from black holes that Hawking predicted is now, very appropriately, referred to as Hawking radiation. For any black hole that is expected to arise in normal astrophysical processes, however, the Hawking radiation would be exceedingly tiny, and certainly unobservable directly by any techniques known today. But he argued that very tiny black holes could have been produced in the big bang itself, and the Hawking radiation from such holes would build up into a final explosion that might be observed. There appears to be no evidence for such explosions, showing that the big bang was not so accommodating as Hawking wished, and this was a great disappointment to him.

These achievements were certainly important on the theoretical side. They established the theory of black-hole thermodynamics: by combining the procedures of quantum (field) theory with those of general relativity, Hawking established that it is necessary also to bring in a third subject, thermodynamics. They are generally regarded as Hawking’s greatest contributions. That they have deep implications for future theories of fundamental physics is undeniable, but the detailed nature of these implications is still a matter of much heated debate.

... He also provided reasons for suspecting that the very rules of quantum mechanics might need modification, a viewpoint that he seemed originally to favour. But later (unfortunately, in my own opinion) he came to a different view, and at the Dublin international conference on gravity in July 2004, he publicly announced a change of mind (thereby conceding a bet with the Caltech physicist John Preskill) concerning his originally predicted “information loss” inside black holes.
Notwithstanding Hawking's premature 2004 capitulation to Preskill, information loss in black hole evaporation remains an open question in fundamental physics, more than four decades after Hawking first recognized the problem in 1975. I read Hawking's 1975 paper as a graduate student, but with little understanding. I am embarrassed to say that I did not know a single person (student or faculty member) at Berkeley at the time (late 1980s) who was familiar with Hawking's arguments and who appreciated the deep implications of the results. This was true of most of theoretical physics -- despite the fact that even Hawking's popular book A Brief History of Time (1988) gives a simple version of the paradox. The importance of Hawking's observation only became clear to the broader community somewhat later, perhaps largely due to people like John Preskill and Lenny Susskind.
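To put a number on Penrose's remark that Hawking radiation from astrophysically realistic black holes is "exceedingly tiny": the Hawking temperature is T_H = ħc³ / (8πGMk_B), which for a solar-mass black hole comes out to roughly 6 × 10⁻⁸ K, far colder than the 2.7 K cosmic microwave background. A back-of-the-envelope check (standard constants; my own sketch, not from the obituary):

```python
# Back-of-the-envelope Hawking temperature: T_H = hbar c^3 / (8 pi G M k_B).
# Standard SI constants; M is one solar mass.
import math

hbar  = 1.0546e-34   # J s
c     = 2.9979e8     # m / s
G     = 6.674e-11    # m^3 kg^-1 s^-2
k_B   = 1.3807e-23   # J / K
M_sun = 1.989e30     # kg

T_H = hbar * c**3 / (8 * math.pi * G * M_sun * k_B)
print(f"Hawking temperature of a solar-mass black hole: {T_H:.1e} K")
# ~6e-8 K -- vastly colder than the 2.7 K CMB, hence unobservable in practice.
```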

I have only two minor recollections to share about Hawking. The first, from my undergraduate days, is really more about Gell-Mann: Gell-Mann, Feynman, Hawking. The second is from a small meeting on the black hole information problem, at the Institut Henri Poincaré in Paris in 2008. (My slides.) At the conference dinner I helped to carry Hawking and his motorized chair -- very heavy! -- into a fancy Paris restaurant (fancy Paris restaurants are not, by and large, handicapped accessible). Over dinner I met Hawking's engineer -- the man who maintained the chair and its computer voice/controller system. He traveled everywhere with Hawking's entourage and had many interesting stories to tell. For example, Hawking's computer system was quite antiquated, but he refused to upgrade to something more advanced because he had grown used to it. The entourage required to keep Hawking going was rather large (nurses, engineer, driver, spouse), expensive, and, as you can imagine, had its own internal dramas.

Saturday, March 10, 2018

Risk, Uncertainty, and Heuristics



Risk = the space of outcomes and their probabilities are known. Uncertainty = the probabilities are not known, and even the space of possible outcomes may not be known. Heuristic rules are contrasted with optimization algorithms such as maximization of expected utility.
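A toy illustration of the contrast (my own sketch, not from the talk or the references below): under risk the probabilities are given, so one can maximize expected utility directly; under uncertainty a simple heuristic such as maximin -- choose the option with the best worst case -- needs no probabilities at all.

```python
# Toy contrast: expected-utility maximization (risk: probabilities known) vs.
# a maximin heuristic (uncertainty: probabilities unknown). Payoffs are invented.
options = {
    "safe bet": [50, 50, 50],    # payoff in each of three possible states
    "gamble":   [0, 20, 200],
}

# Risk: state probabilities are known, so maximize expected utility.
probs = [0.2, 0.3, 0.5]
expected = {name: sum(p * u for p, u in zip(probs, payoffs))
            for name, payoffs in options.items()}
choice_under_risk = max(expected, key=expected.get)

# Uncertainty: probabilities unknown; the maximin heuristic ranks by worst case.
choice_under_uncertainty = max(options, key=lambda name: min(options[name]))

print("Expected utilities:", expected)                         # {'safe bet': 50.0, 'gamble': 106.0}
print("Choice under risk:", choice_under_risk)                 # gamble
print("Choice under uncertainty:", choice_under_uncertainty)   # safe bet
```

The two rules can disagree, as here: the expected-utility maximizer takes the gamble, while the maximin heuristic sticks with the safe bet because it never does badly.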

See also Bounded Cognition and Risk, Ambiguity, and Decision (Ellsberg).

Here's a well-known 2007 paper by Gigerenzer et al.
Helping Doctors and Patients Make Sense of Health Statistics

Gigerenzer G, Gaissmaier W, Kurz-Milcke E, Schwartz LM, Woloshin S.

Many doctors, patients, journalists, and politicians alike do not understand what health statistics mean or draw wrong conclusions without noticing. Collective statistical illiteracy refers to the widespread inability to understand the meaning of numbers. For instance, many citizens are unaware that higher survival rates with cancer screening do not imply longer life, or that the statement that mammography screening reduces the risk of dying from breast cancer by 25% in fact means that 1 less woman out of 1,000 will die of the disease. We provide evidence that statistical illiteracy (a) is common to patients, journalists, and physicians; (b) is created by nontransparent framing of information that is sometimes an unintentional result of lack of understanding but can also be a result of intentional efforts to manipulate or persuade people; and (c) can have serious consequences for health. The causes of statistical illiteracy should not be attributed to cognitive biases alone, but to the emotional nature of the doctor-patient relationship and conflicts of interest in the healthcare system. The classic doctor-patient relation is based on (the physician's) paternalism and (the patient's) trust in authority, which make statistical literacy seem unnecessary; so does the traditional combination of determinism (physicians who seek causes, not chances) and the illusion of certainty (patients who seek certainty when there is none). We show that information pamphlets, Web sites, leaflets distributed to doctors by the pharmaceutical industry, and even medical journals often report evidence in nontransparent forms that suggest big benefits of featured interventions and small harms. Without understanding the numbers involved, the public is susceptible to political and commercial manipulation of their anxieties and hopes, which undermines the goals of informed consent and shared decision making. What can be done? We discuss the importance of teaching statistical thinking and transparent representations in primary and secondary education as well as in medical school. Yet this requires familiarizing children early on with the concept of probability and teaching statistical literacy as the art of solving real-world problems rather than applying formulas to toy problems about coins and dice. A major precondition for statistical literacy is transparent risk communication. We recommend using frequency statements instead of single-event probabilities, absolute risks instead of relative risks, mortality rates instead of survival rates, and natural frequencies instead of conditional probabilities. Psychological research on transparent visual and numerical forms of risk communication, as well as training of physicians in their use, is called for. Statistical literacy is a necessary precondition for an educated citizenship in a technological democracy. Understanding risks and asking critical questions can also shape the emotional climate in a society so that hopes and anxieties are no longer as easily manipulated from outside and citizens can develop a better-informed and more relaxed attitude toward their health.
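To make the abstract's mammography example concrete: a "25% relative risk reduction" and "1 less woman out of 1,000" describe the same effect, just framed differently. The sketch below assumes a baseline of about 4 breast-cancer deaths per 1,000 unscreened women, chosen so that the numbers match the abstract's statement; treat the baseline as illustrative.

```python
# Relative vs. absolute risk framing, using the mammography example from the
# abstract above. Baseline chosen so that a 25% relative reduction corresponds
# to "1 less woman out of 1,000" -- treat the numbers as illustrative.
deaths_without_screening_per_1000 = 4.0
relative_risk_reduction = 0.25

deaths_with_screening_per_1000 = deaths_without_screening_per_1000 * (1 - relative_risk_reduction)
absolute_risk_reduction_per_1000 = deaths_without_screening_per_1000 - deaths_with_screening_per_1000

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")
print(f"Absolute risk reduction: {absolute_risk_reduction_per_1000:.0f} per 1,000 women")
# "25% lower risk" and "1 in 1,000" describe the same effect; only the framing differs.
```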
