Views undoubtedly diverge the most over the term “artificial intelligence.”
Callner One definition I like starts with machine learning as I described it. On top of that, there’s an added ability to simulate something and thus to formulate predictions and, finally, to make a decision based on all that. I think that’s very similar to the way humans learn. Aside from being a physicist, I’m also a musician, so I know firsthand how important it is to practice the right sequence of notes in a melody. If I repeatedly play it incorrectly, that forms memory traces in the brain. On stage, under the spotlight, in front of an audience, I’m then guaranteed to play the wrong note because it has become etched in my mind.
Kaufmann The theory of neural networks that you mentioned in the context of deep learning is arguably the best theory that we have at present to explain that. We neuroscientists say: “What fires together, wires together.” Neurons that fire at the same time in reaction to something form preferential connections with each other in the brain.
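Kaufmann’s “what fires together, wires together” rule can be written down as a Hebbian weight update. The sketch below is a toy illustration of that principle only (the network size, activity pattern, and learning rate are all invented for the example), not a model either speaker describes:

```python
# Minimal Hebbian learning sketch: the connection between two neurons is
# strengthened in proportion to how often they are active at the same time
# ("what fires together, wires together").

def hebbian_update(weights, activity, learning_rate=0.1):
    """Strengthen weights[i][j] by the co-activity of neurons i and j."""
    n = len(activity)
    for i in range(n):
        for j in range(n):
            if i != j:
                weights[i][j] += learning_rate * activity[i] * activity[j]
    return weights

# Three neurons: 0 and 1 repeatedly fire together, neuron 2 stays silent.
w = [[0.0] * 3 for _ in range(3)]
for _ in range(10):
    w = hebbian_update(w, [1.0, 1.0, 0.0])

print(w[0][1])  # the 0-1 connection has grown
print(w[0][2])  # the 0-2 connection has not
```

Repeated co-activation is what makes the 0–1 connection preferential, mirroring the memory traces formed by repeated (correct or incorrect) practice.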
Callner And that’s exactly what we’re able to model. In “my” automatic parameter calibration, the parameters would be the connections between neurons. For example, there are methods by which we can teach a computer to recognize a cat. We work backwards from the image of a cat to the individual pixels. The more images we input, the better we can calibrate the parameters and the stronger the connections between the artificial neurons become.
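Callner’s “automatic parameter calibration” can be sketched, at its smallest scale, as fitting the weights of a single artificial neuron to labeled examples by gradient descent. This is a toy illustration of the principle only; the two “pixel” features, the labels, and the learning rate are all invented, and a real cat-recognition network has millions of such parameters:

```python
# Toy parameter calibration: adjust one neuron's weights so its output
# matches labeled examples. Scaled up, the same principle calibrates the
# connections of an image-recognition network.

def train(examples, labels, steps=1000, lr=0.1):
    """Calibrate weights w and bias b by gradient descent on squared error."""
    n = len(examples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(steps):
        for x, y in zip(examples, labels):
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            # Each parameter moves a small step against the error:
            # the more examples, the better the calibration.
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Two "pixel" features; label 1.0 ("cat") when the first pixel is bright.
X = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
y = [1.0, 1.0, 0.0, 0.0]
w, b = train(X, y)
print(w, b)  # calibrated parameters
```

After training, the neuron’s output for a bright-first-pixel input is close to 1.0 and close to 0.0 otherwise; feeding in more labeled images tightens that calibration further.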
Kaufmann But how successful are we really at recreating the structure of the human brain in a computer? To me, the brain still possesses something magical. Neither better statistics nor faster computers enhance the quality of artificial intelligence, in my opinion. I also don’t place much stock in Big Data. Do I really need 300 million images of cats to be able to tell them apart from cows beyond a doubt? Intelligence to me means, to a far greater extent, learning from Small Data. A toddler sees a cat once and knows what a cat is for the rest of their life. Don’t get me wrong, I’m unshakably confident that we’ll one day be able to create artificial intelligence on par with the human mind. After all, research groups around the world, like the Swiss Mindfire Foundation, are attempting to crack the brain code.
Why is that worth pursuing? Do I need to understand the brain in its entirety to create smart applications?
Kaufmann We have to specify more precisely what we’re talking about here. Even though automation and the quest for artificial intelligence are often conflated, there’s a difference between them. Humans have always sought automation. Back in ancient times, Archimedes’ screw made it easier for people to pump water. Today we automate by digitizing – thanks to machine learning, Big Data, etc. That has occasionally yielded some spectacular results. But we don’t have to crack the brain code to be able to automate. I’m trying to crack it for other reasons. I want to understand, for instance, how humans plan, sometimes even beyond their own life spans. I don’t have to understand human biology to do that, by the way. Loosely along the lines of Leonardo da Vinci, I don’t want to replicate a bird, I want to build a flying machine. I’m interested in the principle of human intelligence.