Neurohumbug
by Ken Arneson
2012-09-25 19:25

Early in my life, I really didn’t have any sort of vision for a career. I just kind of drifted towards whatever opportunities came to me. I had an aptitude for computers, partly because my dad, who was an electronics technician, understood that they were the Next Big Thing. In 1980, he bought a TI-99/4, hoping that I would fiddle with it and learn from it. I did. And so as I grew up, the opportunities that fell into my lap happened to be with computers, because whenever there was some computer stuff that needed to be done, I seemed to be the guy who could figure it out.

Then in 1994, I was asked to set up a web server. Immediately, I knew. It was like walking up a big hill while staring at your feet the whole time, and then you reach the top, see the view, and suddenly realize the world is a whole lot bigger than your feet. The Internet was going to be huge. It was going to be exciting. I decided I would bet my career on it.

I was far from the only one who understood that the Internet was a Big Deal. Looking back on it now, it’s clear that I was right THAT the Internet would be huge. It’s also clear that neither I nor anyone else had any idea whatsoever HOW it would be huge.

And so the dot-com bubble came and burst, and there were plenty of Pets.com and Webvan.com examples, where my generation made all sorts of big bets on the THAT, and completely missed on the HOW. The Internet would indeed change our lives, but it wasn’t going to be by giving us new ways to sell dog food.

* * *

About 10 years ago, I came to a similar epiphany with neuroscience. I had taken a class at UC Berkeley in the late ’80s that was primarily about aesthetics. The class asked: what made this work of art a classic, but that one forgotten? The question stuck with me for years, but I never could find an answer that made any sense to me. Then one day in the early 2000s it struck me that the answer wasn’t in the artwork; it was in the brain’s interpretation of the artwork. So I googled the word “neuroaesthetics”, wondering if there was such a thing. It turned out that an International Conference on Neuroesthetics was being held in Berkeley just a few months later. I decided to attend.

I discovered that neuroaesthetics is a baby science, one where everyone, including me, was excited THAT we could try to understand art from a scientific point of view, but where no one really had any clue as to HOW understanding the brain would help us understand art. It seemed to me like working on a jigsaw puzzle without knowing yet what the picture is. You start out by looking at this detail and that one, and seeing if any of the pieces fit together at all.

It’s taken about 10 years, but now people are trying to take this brain research and attach it to their existing models of human activity, to see how it changes the picture we thought we were looking at. Some of these attempts will probably turn out to be the equivalent of attaching the Internet to dog food. But we don’t learn that these things don’t work until we try and fail. Watching this process unfold is as interesting to me as watching the dot-com craze play itself out.

And like any craze, the bubble will eventually pop. Perhaps the first sign of that pop came when the leading journalist covering this neurofever, Jonah Lehrer, was caught committing various forms of plagiarism and fabrication. Since then, a natural backlash has arisen against trying to apply brain research to all these forms of human activity. The most scathing attack came a couple of weeks ago from Steven Poole in the New Statesman:

An intellectual pestilence is upon us. Shop shelves groan with books purporting to explain, through snazzy brain-imaging studies, not only how thoughts and emotions function, but how politics and religion work, and what the correct answers are to age-old philosophical controversies. The dazzling real achievements of brain research are routinely pressed into service for questions they were never designed to answer. This is the plague of neuroscientism – aka neurobabble, neurobollocks, or neurotrash – and it’s everywhere.

Indeed, there are flaws in many of these models that use brain studies for supporting evidence. I’m especially skeptical of those that rely on brain scans showing the brain “lighting up” in response to this or that stimulus. That’s like trying to understand how a computer works by noting the noise the hard drive makes when it spins. It can tell you a little bit about how a computer works, but not nearly enough to build an accurate model from.

I am also suspicious of any model that claims there are “4 kinds of X” or “7 different Y”, such as Jonathan Haidt’s five, now six, moral foundations. In computer programming, there’s an axiom that you design for cases of 0, 1 or N. You make sure your program can handle it when there’s no data. If there’s one specific thing you’re trying to solve, it’s OK to write something that handles that one specific case. But if you’re going to be handling more than one case, then you abstract your program to a level that can handle ANY number of cases, not just the cases you know about. Otherwise, any time some new situation comes up, you have to write a whole new program, as the sketch below illustrates. So I find it hard to believe that these specific six moral foundations, and only these six, are wired into our brains.
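To make that axiom concrete, here is a minimal Python sketch. The function and foundation names are mine, invented purely for illustration; they are not anything from Haidt’s actual model.

    # Hypothetical sketch of the 0, 1, N design axiom.

    # Brittle: hardwired to exactly six cases. Proposing a seventh
    # foundation means rewriting this function and every caller.
    def moral_score_six(care, fairness, loyalty, authority, sanctity, liberty):
        return care + fairness + loyalty + authority + sanctity + liberty

    # Abstracted to N: handles zero, one, or any number of foundations,
    # so new cases cost nothing.
    def moral_score(foundations):
        return sum(foundations.values())

    print(moral_score({}))                         # 0 cases -> 0
    print(moral_score({"care": 2}))                # 1 case  -> 2
    print(moral_score({"care": 2, "liberty": 1}))  # N cases -> 3

The dict-based version is the 0, 1, N design: it survives the discovery of a seventh foundation without modification, which is exactly the property a fixed list of six lacks.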

So Poole has a good point. We really don’t know enough about the brain yet to draw any grand conclusions from it with much confidence.

* * *

“Essentially, all models are wrong, but some are useful.”
George Box

But at the same time, if we didn’t use what little knowledge of the brain we have, we’d still be asking and trying to answer the same questions about ourselves. Only we’d be doing it without this added scientific information. What we had in fields like aesthetics before this explosion in brain research was not really a science at all. It was mostly just academic, jargony humbug.

It’s like condemning the entirety of the Internet because Webvan.com was a disaster. Yes, there were a lot of crap businesses at the beginning of the Internet, and there are a lot of crap theories at the beginning of neuroscience. But that’s part of the process. Until we can exactly replicate a human brain from scratch, everything is just an imperfect model.

Some of these models will be more useful than others. Today’s models may be deeply flawed, but they’ll be less flawed than yesterday’s. And upon a few of these models, the Googles and Facebooks and Twitters of neuroscience will be born, the models of the human mind that we find truly useful. I see no reason to give up on that vision.
