Today's Elites

Wednesday, August 10, 2011

Rotten Ideology



How Computational Complexity Will Revolutionize Philosophy

The theory of computation has had a profound influence on philosophical thinking. But computational complexity theory is about to have an even bigger effect, argues one computer scientist.


KFC 08/10/2011


Since the 1930s, the theory of computation has profoundly influenced philosophical thinking about topics such as the theory of the mind, the nature of mathematical knowledge and the prospect of machine intelligence. In fact, it's hard to think of an idea that has had a bigger impact on philosophy.

And yet there is an even bigger philosophical revolution waiting in the wings. The theory of computing is a philosophical minnow compared to the potential of another theory that is currently dominating thinking about computation.

At least, this is the view of Scott Aaronson, a computer scientist at the Massachusetts Institute of Technology. Today, he puts forward a persuasive argument that computational complexity theory will transform philosophical thinking about a range of topics such as the nature of mathematical knowledge, the foundations of quantum mechanics and the problem of artificial intelligence.

Computational complexity theory is concerned with the question of how the resources needed to solve a problem scale with some measure of the problem size, call it n. There are essentially two answers. Either the resources grow reasonably slowly, like n, n^2 or some other polynomial function of n. Or they grow unreasonably quickly, like 2^n, 10000^n or some other exponential function of n.
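To make that distinction concrete, here is a small Python sketch (not from Aaronson's essay; the one-nanosecond-per-step figure is just an assumption chosen for scale) comparing how a polynomial cost such as n^2 and an exponential cost such as 2^n translate into running times:

# A rough illustration (not from Aaronson's essay) of polynomial vs. exponential scaling.
# Assumes, purely for scale, that a single computational step takes one nanosecond.

STEP_TIME_SECONDS = 1e-9          # hypothetical cost of one step
AGE_OF_UNIVERSE_SECONDS = 4.3e17  # roughly 13.8 billion years

for n in (10, 50, 100):
    poly_steps = n ** 2   # "reasonable" scaling: a polynomial such as n^2
    expo_steps = 2 ** n   # "unreasonable" scaling: an exponential such as 2^n
    print(f"n = {n}:")
    print(f"  n^2 steps take about {poly_steps * STEP_TIME_SECONDS:.1e} seconds")
    print(f"  2^n steps take about {expo_steps * STEP_TIME_SECONDS:.1e} seconds "
          f"({expo_steps * STEP_TIME_SECONDS / AGE_OF_UNIVERSE_SECONDS:.1e} "
          f"times the age of the Universe)")

Even at a billion steps per second, the exponential column overtakes the age of the Universe somewhere around n of roughly 90, which is the sense in which such problems are effectively impossible rather than merely slow.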

So while the theory of computing can tell us whether something is computable or not, computational complexity theory tells us whether it can be achieved in a few seconds or whether it'll take longer than the lifetime of the Universe.

That's hugely significant. As Aaronson puts it: "Think, for example, of the difference between reading a 400-page book and reading every possible such book, or between writing down a thousand-digit number and counting to that number."

He acknowledges that it's easy to imagine that once we know whether something is computable or not, the question of how long it takes is merely one of engineering rather than philosophy. But he then goes on to show how the ideas behind computational complexity can extend philosophical thinking in many areas.

Take the problem of artificial intelligence and the question of whether computers can ever think like humans. Roger Penrose famously argues that they can't in his book The Emperor's New Mind. He says that whatever a computer can do using fixed formal rules, it will never be able to 'see' the consistency of its own rules. Humans, on the other hand, can see this consistency.

One way to measure the difference between a human and computer is with a Turing test. The idea is that if we cannot tell the difference between the responses given by a computer and a human, then there is no measurable difference.

But imagine a computer that records all conversations it hears between humans. Over time, this computer will build up a considerable database that it can use to make conversation. If it is asked a question, it looks up the question in its database and reproduces the answer given by a real human.

In this way, a computer with a big enough look up table can always have a conversation that is essentially indistinguishable from one that humans would have.

"So if there is a fundamental obstacle to computers passing the Turing Test, then it is not to be found in computability theory," says Aaronson.

Instead, a more fruitful way forward is to think about the computational complexity of the problem. He points out that while the database (or look up table) approach "works," it requires computational resources that grow exponentially with the length of the conversation.
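To see why, consider a toy sketch of the look up table idea (the dialogue and numbers below are hypothetical, purely for illustration): the bot can only reply to conversation histories it has already recorded verbatim, and the number of possible histories, and hence the size of the table, grows exponentially with the length of the conversation.

# Toy sketch of the look up table approach (hypothetical, for illustration only).
# The bot answers by matching the *entire* conversation so far against stored
# transcripts; anything it has never recorded verbatim, it cannot answer.

lookup_table = {
    ("Hello.",): "Hi there.",
    ("Hello.", "Hi there.", "How are you?"): "Fine, thanks. And you?",
}

def respond(history):
    """Return the canned reply for an exact conversation history, if recorded."""
    return lookup_table.get(tuple(history), "<no recorded conversation matches>")

print(respond(["Hello."]))                                   # found in the table
print(respond(["Hello.", "Hi there.", "How are you?"]))      # found in the table
print(respond(["Hello.", "Hi there.", "What is justice?"]))  # never recorded

# The catch: with even a modest vocabulary of V possible sentences, covering
# every exchange of length L needs on the order of V**L entries.
V, L = 1000, 10
print(f"Entries needed for length-{L} exchanges: about {V**L:.1e}")

That V**L figure is the exponential blow-up in question: the table "works" in principle but could never fit inside any physically realistic machine.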

Aaronson points out that this leads to a powerful new way to think about the problem of AI. He says that Penrose could say that even though the look up table approach is possible in principle, it is effectively impractical because of the huge computational resources it requires.

By this argument, the difference between humans and machines is essentially one of computational complexity.

That's an interesting new line of thought and just one of many that Aaronson explores in detail in this essay. Of course, he acknowledges the limitations of computational complexity theory. Many of the fundamental tenets of the theory, such as P ≠ NP, are unproven; and many of the ideas only apply to serial, deterministic Turing machines, rather than the messier kind of computing that occurs in nature.

But he says these criticisms do not allow philosophers (or anybody else) to arbitrarily dismiss the arguments of complexity theory. Indeed, many of these criticisms raise interesting philosophical questions in themselves.

Computational complexity theory is a relatively new discipline that builds on advances made in the 70s, 80s and 90s. And that's why its biggest impacts are yet to come.

Aaronson points us in the direction of some of them in an essay that is thought provoking, entertaining and highly readable. If you have an hour or two to spare, it's worth a read.




If the readers of this review are not able, of their own accord, to comprehend the unbridgeable gulf that separates the human creative mind from any possible mechanical device, they are hopelessly lost (or perhaps dead) souls.

7 comments:

  1. Teedeepee 4:23 AM

    Thingumbob, count me as a hopelessly lost (or perhaps dead) soul then, whatever "soul" means.

    Stating that the brain is nothing more than a biological computational device, and that it can therefore be subjected to studying, understanding, modeling, and ultimately replication using biological or inert components, trivializes neither its immense complexity nor what makes human creativity and sentience stand out in the animal kingdom.

    Granted, we are probably decades away, if not more, from understanding and modeling the electrochemical interactions in the human brain with enough precision to simulate a mind and its emergent properties (such as creativity) well enough to pass a rigorous Turing test.

    But what is your basis for saying that the gulf is unbridgeable, save for some mystical property of the brain? If DNA contains the software required to assemble a fully-functioning brain in just a few months, why would this process escape the reach of a (future) mechanical device?

    And I don't think this would belittle the nature of human intelligence - for it is human intelligence itself which would have given rise to this new artificial intelligence, an enormous feat if there ever was one.

  2. Nicholas of Cusa, the collaborator of Leonardo DaVinci's mentor Luca Pacioli and Toscanelli who inspired Columbus, is often labeled a mystic by the so-called school of reductionists who embrace formal logic. However, in truth Cusa's method of hypothesizing a higher hypothesis is the basis for Kepler's and Einstein's principle of universal gravitation...You would do well to peruse Cusa's works such as Of Learned Ignorance and The Non Other. It is a mystery to me how the steadfast belief that human creativity is merely some mechanistic formula is not immediately recognized as laughably groundless.

  3. robma 5:21 AM

    I don't agree with your view. Your unbridgeable gulf is there to be bridged. Imagine a neuronal network as big as our brain, with sensors and actuators attached, which has 30 years to learn in real-world conditions. It is probably still some decades away but nevertheless technically possible. Imagine further that this machine is constantly trying to optimize its input and output to satisfy whatever merit functions were initially set, and I'm quite sure it will be creative. The reason for this is that neuronal networks, and also our brain, represent their knowledge in a very different way than it appears to our consciousness. This kind of representation makes it easily possible for new ideas to emerge, because it does not represent the original input as it was.

  4. The human mind is qualitatively and fundamentally distinct from all other orders of life (let alone any merely mechanical device) in the biosphere. Human creativity willfully produces changes, through scientific progress, that effect increases in the energy flux density, or reducing power, of society. This places the individual human mind in a harmonic historical relationship with the past and future generations of the necessary conatus of humanity. It is an outright imbecilic and fraudulent absurdity, at best, to posit that any mechanical device could be creative in this sense.

  5. robma 3:17 AM

    To clarify: a mechanical device isn't the thing I asked you to imagine. It will be more like a robot with a huge neural network as its main control center, probably no less mechanical than a biological organism. It will be optimized for learning in a wide sense. When I worked on optimization with evolutionary strategies, I came to understand that human imagination is very limited when it comes to huge numbers of anything. Our brains are just not good with these. Because of this it seems reasonable to assume it cannot be achieved technically. But I think that's only because there is a lack of knowledge about how, for example, the brain works. And I'm convinced that we do not have to understand the brain fully before creativity will be shown in such a device. By the way, what would be a good proof of creativity? What is genuinely creative for you?

  6. naasking 3:18 AM

    Human creativity willfully produces changes, through scientific progress, that effect increases in the energy flux density, or reducing power, of society.

    The nature of will is irrelevant to the question of whether a process is mechanical or not, because will itself would then also be mechanical. Evolution by natural selection produces exactly the progress that you ascribe to human creativity, and yet you would surely disqualify it because it is not "willful". Except "willfulness" has no concrete definition, and is unfalsifiable.

    It is an outright imbecilic and fraudulent absurdity, at best, to posit that any mechanical device could be creative in this sense.

    Let us suppose that you are correct for a moment, and that consciousness is not a mechanistic process. How then do you explain that a non-mechanistic phenomenon exists in what otherwise appears to be a fully mechanistic universe? You must posit that some aspect of this universe is special in not being mechanistic, but somehow only exhibits this special aspect in the consciousness of creatures, of which you happen to be a member.

    In contrast, the mechanistic explanation is simply that our subjective experience, our intuitive assumption that our "will" is free, that we are "creative" divorced from external stimulus, is biased and in fact untrue. That the universe works contrary to our intuition has been demonstrated many times, so there is significant precedent supporting this explanation. It will simply be yet another way in which our assumptions from subjective experience are violated.

    Occam's razor prefers the mechanistic explanation, since your explanation requires additional assumptions which are not justified. Until such evidence comes to light, I'm afraid your argument is "imbecilic and fraudulent absurdity".

  7. Since you seem to be trying to prove to me that you are in essence no different than a mere dead mechanical device, I must perforce give the field to you and admit to you your point... recognizing, of course, that it would be pointless to attempt to argue with a confessed soulless thing such as yourself. But for those still sentient and potentially creative beings who may read this, I hope I have amply demonstrated to your satisfaction the fatal flaw in the ludicrous conceit of "artificial intelligence." And with that I will close.

