Thread: Will strong AI ever be achieved?

  1. #1
    Nrblex (like Gandalf in a way)
    Registered: Jul 2009
    Posts: 844

    Will strong AI ever be achieved?

    Loki had suggested I do something on AI theory. I wasn't really sure where to go with this, especially since I don't know how all up in this everybody else is, so I figured I'd go with something fairly basic.

    Strong AI is an artificial intelligence that would match or exceed human intelligence. Not everybody agrees on what that would require. Some follow the Alan Turing school of thought, which holds that if something appears to be intelligent, then it is, since appearances are the only way we can measure intelligence.

    John Searle viewed it differently, however. His thought experiment was to imagine himself in a room whose only contact with the outside world is through pieces of paper with Chinese writing on them that are slipped into the room. He cannot read Chinese, but he has a set of rules that lets him manipulate characters based on which symbols appear on the pieces of paper, so that he appears to be responding to the messages he receives. But since he is only executing a formula without any understanding of its significance--he's just a human carrying out a computer program--he will never actually understand Chinese, even if he appears to.
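
    To make the setup concrete, here's a toy sketch of the kind of procedure Searle imagines (the symbols and the rule table are made up for illustration): every step is pure pattern matching, and nothing in it requires knowing what any symbol means.

        # Toy "Chinese room": a rule table pairing incoming symbols with replies.
        # The entries are placeholders; the point is that the procedure only
        # checks the shape of the symbols, never their meaning.
        RULES = {
            "你好吗": "我很好",
            "你会说中文吗": "会一点",
        }

        def respond(message: str) -> str:
            # Look the message up and hand back the prescribed squiggles.
            return RULES.get(message, "对不起")  # default reply when no rule matches

        print(respond("你好吗"))  # looks like conversation from outside the room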

    There are a lot of other philosophical takes on the matter, but I think those are two good starts.

    My feeling on it is closer to Turing's. If you appear to understand Chinese and I can carry on a conversation with you in Chinese characters, then as far as I'm concerned you speak and understand Chinese. And if you can convince me you're intelligent, I have no reason to believe otherwise. All that matters is the results.

  2. #2
    Zuul (The Queen)
    Registered: Mar 2009
    Location: Wisconsin
    Posts: 9,908

    Oh Searle. My old arch-nemesis. While I agree there's no real measure of intelligence except what can be observed, the problems with the Chinese room argument run deeper than that. I find it almost embarrassing that this really terrible argument gets pulled up again and again as if it proves something.

    The basis of the idea is that computers are programmed to follow rules. Any computer program can therefore be carried out by a human being following those rules without understanding the meaning behind them, and since a human being following those rules wouldn't understand the program (or what the program is reacting to), Searle concludes that the computer is also incapable of ever understanding.

    There is a fallacy in cognitive theories that is almost exactly like this argument, and if people theorizing about AI spent more time studying brains and less time arguing about programming, they would have recognized it a long time ago. It's called a homunculus argument. It assumes that because a brain follows rules, someone (a "homunculus" within the brain) must be following those rules. But then the homunculus must have a homunculus within him to make him operate by those rules, and that second homunculus has his own rules, etc., etc., ad nauseam.

    The problem with the Chinese room is that it is just another homunculus. Instead of sitting in a brain, he's in a room. But he's still just following rules. While this is a view of AI programming that was held in the '50s and '60s, luckily we're moving beyond that to a probabilistic approach. Searle views all programming as relying upon rules and claims that any attempt to program is going to run into the Chinese room problem. But there are rules and then there are rules. While I wouldn't claim that, say, Google is self-aware, it's also not picking up a script and following it A, B, and C to always come up with D. The results change depending on where you are, what everyone else in the world is doing, and a long list of information Google has on you. Sure, there are "rules" involved, but not in the way Searle is envisioning them.
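
    To gesture at the difference with a toy example (the signals and weights here are completely invented and have nothing to do with how Google actually ranks anything): a fixed script maps the same input to the same output forever, while a probabilistic ranker combines evidence that shifts with context, so the outcome isn't baked into any one rule.

        import math

        # Hypothetical context-sensitive ranker with invented signals and weights.
        def score(relevance, locality, popularity):
            # Weighted log-combination: nudge any signal and the ranking can flip.
            return (1.5 * math.log(relevance)
                    + 0.7 * math.log(locality)
                    + 0.3 * math.log(popularity))

        candidates = {
            "result_a": (0.9, 0.2, 0.8),   # very relevant, but not local
            "result_b": (0.7, 0.9, 0.5),   # a bit less relevant, very local
        }
        ranked = sorted(candidates, key=lambda c: score(*candidates[c]), reverse=True)
        print(ranked)  # the "answer" depends on context, not a fixed A-B-C-D script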

    And that's where strong AI is going to come from. I don't have that much faith in the people attempting brain simulations and things like that, because copying biology isn't the important part. It's not the hardware the mind runs on that's important, but the software.

    Since computers are symbolic systems, Searle believes they are incapable of "understanding" anything, because they're forever manipulating symbols, not reality. Bullshit. That's what human minds do. When I look at my computer, I don't see a computer. I see the concept of it, built off of a million different assumptions and past experiences. As referenced in this excellent article, the ability to infer that most birds fly, that most flying birds have a wingspan of a certain ratio to their body size, and that penguins are birds but have wings of the wrong ratio and so are less likely to fly is something that is programmable. And that's the first step to understanding.
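
    Here's a rough sketch of the kind of inference I mean, with all of the numbers invented: start from the default that most birds fly, then let the wingspan-to-body ratio drag that probability down for a penguin-shaped bird.

        # Toy default reasoning about flight; priors and likelihoods are made up.
        P_FLIES_GIVEN_BIRD = 0.9          # default assumption: most birds fly

        # How often a small wingspan-to-body ratio shows up...
        P_SMALL_WINGS_GIVEN_FLIES = 0.05  # ...among flying birds (rarely)
        P_SMALL_WINGS_GIVEN_NOT = 0.8     # ...among flightless birds (usually)

        def p_flies_given_small_wings():
            # Bayes' rule: update the "birds fly" default on the wing-ratio evidence.
            prior = P_FLIES_GIVEN_BIRD
            num = P_SMALL_WINGS_GIVEN_FLIES * prior
            den = num + P_SMALL_WINGS_GIVEN_NOT * (1 - prior)
            return num / den

        print(round(p_flies_given_small_wings(), 3))  # 0.36: the 0.9 default drops below even odds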

    Will we get there in the next ten years? I'm not sure. But we're going to get there and it's going to appear to come out of nowhere simply because most people won't realize that we're already stumbling in the right direction.
