Rats in a casino

From Adam Alter’s Irresistible: Why We Can’t Stop Checking, Scrolling, Clicking and Watching: Juice refers to the layer of surface feedback that sits above the game’s rules. It isn’t essential to the game, but it’s essential to the game’s success. Without juice, the same game loses its charm. Think of candies replaced by gray bricks and none of the reinforcing sights and sounds that make the game fun. ... Juice is effective in part because it triggers very primitive parts of the brain.

Greg Ip's Foolproof: Why Safety Can Be Dangerous and How Danger Makes Us Safe

Greg Ip’s framework in Foolproof: Why Safety Can Be Dangerous and How Danger Makes Us Safe is the contrast between what he calls the ecologists and engineers. Engineers seek to use the sum of our human knowledge to make us safer and the world more stable. Ecologists recognise that the world is complex and that people adapt, meaning that many of our solutions will have unintended consequences that can be worse than the problems we are trying to solve.

Does presuming you can take a person's organs save lives?

I’ve pointed out several times on this blog the confused story about organ donation arising from Johnson and Goldstein’s Do Defaults Save Lives? (ungated pdf). Even greats such as Daniel Kahneman are not immune from misinterpreting what is going on. Again, here’s Dan Ariely explaining the paper: One of my favorite graphs in all of social science is the following plot from an inspiring paper by Eric Johnson and Daniel Goldstein.

Cathy O'Neil's Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy

In her interesting Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Cathy O’Neil defines Weapons of Math Destruction based on three criteria - opacity, unfairness and scale. Opacity makes it hard to assess the fairness of mathematical models (I’ll use the term algorithms throughout most of this post), and it can facilitate (or might even be a key component of) an algorithm’s effectiveness when the algorithm relies on naive subjects.

Is it irrational?

Over at Behavioral Scientist magazine my second article, Rationalizing the ‘Irrational’, is up. In the article I suggest that an evolutionary biology lens can give us some insight into what drives people’s actions. By understanding someone’s actual objectives, we are better able to determine whether their actions are likely to achieve their goals. Are they behaving “rationally”? Although the major thread of the article is evolutionary, in some ways that is not the main point.

Garry Kasparov's Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins

In preparation for my recent column in The Behavioral Scientist, which opened with the story of world chess champion Garry Kasparov’s defeat by the computer Deep Blue, I read Kasparov’s recently released Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins. Despite the title and Kasparov’s interesting observations on the computer-human relationship, Deep Thinking is more a history of man versus machine in chess than a deep analysis of human or machine intelligence.

Humans vs algorithms

My first column over at the Behavioral Scientist is live. The column is an attempt to bring together two potentially conflicting stories. The first is that the best decisions result from humans and machines working together. This is encapsulated in the story of freestyle chess, whereby the best software is trumped by a human-computer team. The second is the deep literature on whether humans or algorithms make better decisions, starting with Paul Meehl’s classic Clinical Versus Statistical Prediction.

The "effect is too large" heuristic

Daniel Lakens writes: I was listening to a recent Radiolab episode on blame and guilt, where the guest Robert Sapolsky mentioned a famous study [by Danziger and friends] on judges handing out harsher sentences before lunch than after lunch. The idea is that their mental resources deplete over time, and they stop thinking carefully about their decision – until having a bite replenishes their resources. The study is well-known, and often (as in the Radiolab episode) used to argue how limited free will is, and how much of our behavior is caused by influences outside of our own control.

Behavioral Scientist is live

The folks at ideas42, the Center for Decision Research, and the Behavioral Science and Policy Association have kicked off a new online magazine, The Behavioral Scientist. I am one of the founding columnists, and it looks like I am part of a pretty good lineup. My first column should appear in late July. You can sign up to the Behavioral Scientist email edition on the homepage, or follow on Twitter.

Gerd Gigerenzer, Peter Todd and the ABC Research Group's Simple Heuristics That Make Us Smart

I have recommended Gerd Gigerenzer, Peter Todd and the ABC Research Group’s Simple Heuristics That Make Us Smart enough times on this blog that I figured it was time to post a synopsis or review. Having re-read it for the first time in five or so years, I can say it will still be high on my recommended reading list. It provides a nice contrast to the increasing use of complex machine learning algorithms for decision making, although that same increasing use makes some parts of the book seem a touch dated.