Two thoughts on AI and genius

Roger Federer’s neural net

Here’s a paragraph that caught my attention when I was re-reading one of David Foster Wallace’s late essays, “Roger Federer as Religious Experience,” a couple of evenings ago:

Successfully returning a hard-served tennis ball requires what’s sometimes called “the kinesthetic sense,” meaning the ability to control the body and its artificial extensions through complex and very quick systems of tasks. English has a whole cloud of terms for various parts of this ability: feel, touch, form, proprioception, coordination, hand-eye coordination, kinesthesia, grace, control, reflexes, and so on. For promising junior players, refining the kinesthetic sense is the main goal of the extreme daily practice regimens we often hear about. The training here is both muscular and neurological. Hitting thousands of strokes, day after day, develops the ability to do by “feel” what cannot be done by regular conscious thought. Repetitive practice like this often looks tedious or even cruel to an outsider, but the outsider can’t feel what’s going on inside the player — tiny adjustments, over and over, and a sense of each change’s effects that gets more and more acute even as it recedes from normal consciousness.

It made clear to me something that’s probably been clear to others for some time: the distinction between savantism and genius, and how that informs how we think about machine intelligence.

The savant has an excess of natural talent but no acquired talent. Certainly being born on the far right of various bell curves is a prerequisite for playing top-level international tennis, and probably for performing at that high a level at anything, but it is not enough on its own. True genius synthesises freakish natural ability with a lifetime’s worth of experience to create a second sort of talent: the ability to act and react in ways that are impossible to reach except by something like rote, no matter how much natural skill one possesses.

Is this not like how top-level machine learning works, by endless refinement of instinct? Earlier conceptions of superhuman AI suppose a kind of dominance by sheer computing power, the ability to calculate and model the world as it happens. But computers have had speed and memory inconceivable to humans for years and still sucked at some basic tasks, particularly the ones where something like genius is possible. The machine learning paradigm spurns mere savantism in favour of an approach far closer to what we think of as human genius.

The mind of Amazon

In March 2016, Google DeepMind’s Go-playing program AlphaGo beat Lee Sedol 4–1 in a five-game match, the first time any Go program had beaten a human professional of the top rank (9-dan). Shortly afterwards the program won 60 out of 60 games against professionals on an online Go server, and in May 2017 it beat world champion Ke Jie three games to zero. Afterwards Ke commented that “AlphaGo is improving too fast, [it] is like a different player this year compared to last year.”

What’s staggering is that before this decade no Go program in the world could compete with even a decent amateur, let alone a professional. The first computer victory against a human professional did not come until 2015, when AlphaGo beat Fan Hui—think about that: in the space of a few years Go programs have gone from ‘unable to beat a professional’ to ‘on a par with the best players’ to ‘better than the world champion’ to ‘so good it’s playing a fundamentally different game than humans’. Apparently when humans now play the program they can’t understand its moves at all: nothing makes sense, the machine never seems to be building a winning position, until suddenly the game turns and the human’s position is revealed to be hopeless. This is kind of awe-inspiring when confined to a Go board, and (to me) worrying outside of it. I thought those worries were a long way off.

However. Last month I went to a meeting in which the sales people updated us on the business’s relationship with Amazon (which, as you can imagine, is a big chunk of sales for a publishing company). Understanding how wholesalers decide to buy stock and predicting future ordering patterns is a major concern for Sales, and this is particularly difficult in the case of Amazon for a number of reasons: its sheer scale means that no single person is ultimately responsible for buying policy, and policy decisions are taken a long way away from their effects; and it is a prestigious and competitive company, so while our sales people stay in role a long time, Amazon’s buyers rotate rapidly.

It is a cliché that large organisations are in some ways like minds, with individual workers as neurons. Being far vaster than the average large corporation, the mind of Amazon is correspondingly more complicated and inscrutable. I often think of Sales’ interactions with Amazon as an oracle trying to interpret a capricious God.1 True understanding is impossible, but the skilled seer can, sometimes, correctly interpret the pattern of entrails across the spreadsheet’s cells.

Or so it used to be. In this particular meeting our sales people were very happy because Amazon’s ordering is way up on last year. Why? they asked Amazon’s buyers. The buyers didn’t know, but not for the usual cog-in-a-wheel reason: it turns out Amazon has turned over demand planning to machine learning algorithms, the same technologies that govern AlphaGo and others.2 The organisation-as-a-mind metaphor is replaced with the very real question of what we call the entity that’s making Amazon’s buying choices now. There’s no reason to believe, so far as I can see, that this entity must prove any less adept (relative to humans) at the buying-and-selling game than AlphaGo has at the world’s deepest board game.

Which, for me, raises an obvious question: if Amazon sees itself as in competition with its suppliers to get the most product from them at the lowest price, what happens next year when it makes its superhuman game-changing move?

  1. The other metaphor I turn to is an abusive relationship, but let’s keep things plausibly professional here. ↩︎

  2. The person explaining this to us said: “I don’t know, I expect you guys understand this a lot better than me, but the gist is that the parameters for the ordering algorithms aren’t set by humans—their algorithms write their algorithms”.
    (The algorithms can be thought of as the BRAIN of the computer.) ↩︎


© Tom Harris 2015–2018.
