
Machines Can’t Learn (Universals): The Abacus As Brain Part II

Read Our Intellects Are Not Computers: The Abacus As Brain Part I first.

Machines can learn, all right. But they can’t learn like us. Machines cannot apprehend universals, ideas of truth and falsity, and logical connections. We can.

Here’s a recent headline: Google’s New AI Is Better at Creating AI Than the Company’s Engineers. The article under it is breathless and full of wonder, as these things tend to be. Its last sentence (from which I’ll allow you to infer the theme of the rest, something a computer cannot do) is “The potential of AI then draws to mind the title of another sci-fi film: limitless.” Alas, the promises implied here will not be kept.

Let’s return to our abacus, to which extra levers and beads have been added so that it can translate, à la Searle, from Chinese to Nepali. The beads of our giant abacus are put into a position so that one state—the full and complete picture of the beads and slides and whatever other steam-driven gears we might have in there at any moment—represents (we say) a Chinese word and its equivalent Nepali word. Whenever the abacus input is slid into the position such that the beads represent this Chinese word, other beads are shifted so that its output (some portion of the abacus) is said to represent the equivalent Nepali word.
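To see how thin this kind of “translation” is, here is a minimal sketch in Python (the word pairs are invented placeholders, since the original names no real vocabulary): the whole operation is a stored mapping from one state to another, nothing more.

```python
# A minimal sketch of the abacus-as-translator. Each "learned" word pair
# is nothing but a stored state; the entries are invented placeholders.
BEAD_STATES = {
    "chinese_word_1": "nepali_word_1",
    "chinese_word_2": "nepali_word_2",
}

def translate(chinese_word: str) -> str:
    """Shift the 'output beads' into the state paired with the input state."""
    return BEAD_STATES.get(chinese_word, "<no stored state>")

print(translate("chinese_word_1"))  # -> nepali_word_1
```

The machine’s entire “knowledge” of Nepali is the table; erase the table and the “knowledge” vanishes with it.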

This state, or rather this pair of states, is a form of learning, if you like. The abacus, the purely mechanical contraption, has “learned” in this very weak sense how to translate. But don’t forget that each “learned” word is just a position of beads.

Obviously, we can extend this “learning” to include more words, and even sentences. We can have the abacus grow (add more beads and slides) such that texts it has never seen before are also translated. To do this, we can “train” the abacus to form states that tell of translation success and failure, say by sliding a bead up for every success, and down for every failure. And we can “try out” various possible states via some purely mechanical process to try to maximize this reward of beads. (There is a great deal of silliness talked about random checking of states, and Monte Carlo, and so on, all of which is false. No matter what kind of process you use to run through states, it is purely mechanical in the end. There is no “randomness” to it. But it will do little harm here to believe in “randomness” if you want. An article on this is coming soon.)
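Here is a toy version of that purely mechanical “training” loop, sketched in Python under invented assumptions (a made-up target state and scoring rule). Note that even the “random” flipping is, once seeded, a deterministic procedure, which is the point made in the parenthesis above.

```python
import random

# A toy version of the purely mechanical "training" described above.
# The target state and the scoring rule are invented for illustration.
TARGET = [1, 0, 1, 1, 0]      # the bead positions we will call "success"

def reward(state):
    """Slide a bead up for every position that matches the target."""
    return sum(1 for s, t in zip(state, TARGET) if s == t)

state = [0, 0, 0, 0, 0]
rng = random.Random(0)        # seeded: the "randomness" is itself mechanical
for _ in range(100):
    candidate = state[:]
    candidate[rng.randrange(len(state))] ^= 1   # flip one bead
    if reward(candidate) >= reward(state):      # keep states scoring no worse
        state = candidate

print(state, reward(state))   # typically ends at TARGET, knowing nothing
```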

Well, this is “machine learning”. Our machine has “learned.” That is, it will form certain states in response to certain stimuli, i.e. inputs. Two things should be perfectly clear. One, the machine does not know, in the sense of understand or apprehend, what it is doing. It is just a bunch of beads, levers, and slides: a tremendous pile of wood. Two, this wooden computer is no different from the machine-learning electronic computers on which AI systems are being built. The electronic ones are just faster and smaller. But we learned last time that speed of computation does not bring us to knowing, i.e. to intellection.

Our abacus is a neural net, if you like fancy marketing language, but it isn’t rational. It does not have an intellect or will. Understanding or apprehension is more than just a position of beads on slides. Prove this to yourself in the following way.

Your intellect will agree that, for natural numbers A, B, and C, if A > B, and B > C, then A > C. Now, within severe limitations, we can build an abacus that either tests this for any three natural numbers, or represents, as with the Chinese characters, the “theorem” in beads. Either way, we have a pile of wood, and in neither case does that pile of wood know the truth of the theorem. And in the first case we’re limited by space and cost, since we cannot build an abacus big enough to test very large numbers. Our intellects, by contrast, can test any numbers short of infinity instantaneously.
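For the first case, here is a Python sketch of what the bead-testing machine actually does: it grinds through finitely many triples, bounded by an arbitrary size N (a stand-in for the machine’s physical limits), and can never pronounce on the rest.

```python
# Mechanical "testing" of the theorem: the machine can only ever check
# finitely many cases, bounded by its size (here an arbitrary N).
N = 50

def check_transitivity(n: int) -> bool:
    """Verify that a > b and b > c imply a > c for all triples below n."""
    return all(a > c
               for a in range(n)
               for b in range(n)
               for c in range(n)
               if a > b and b > c)

print(check_transitivity(N))  # True -- but only for these cases, no others
```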

Suppose you think that knowing, i.e. apprehension of universals like our theorem, is a purely material process. Then it should be possible to build this knowing-unit on top of the rest of the abacus. Let’s do this. I don’t know how it would work, and neither do you, but I know what it will look like. It will be another collection of beads and slides, indistinguishable from the other beads and slides. And the whole will just be—what else?—an enormous pile of wood.

It is no limitation that our abacus only takes one state at a time. So do our brains, albeit the phase space of our brains is huge and that of the abacus small. Still, as we said last time, this is a thought experiment (which computers can’t do), and so size is no limitation. The abacus, since it will be powered by steam and wooden gears, won’t run fast, but speed is not of the essence, again as we already proved.

We can go on adding beads and slides, but no matter what, in the end all we have is a pile of wood. There will be no understanding in it. It can’t learn to apprehend, though it can “learn” any non-intellective task. If we add wheels to our abacus, it can, say, simulate an animal, and respond to stimuli of all sorts. But it will never know truth. It can register stimuli and report on its internal states, but it will never know, and can never be said to have free will. It will never be self-aware in the sense that we are. It is just a pile of wood.

The translation algorithm, and the algorithms we design to do whatever task our abacus might do, turn out to be just positions of beads and slides. And the same is true of the “deep learning”, “machine learning”, and AI built into electric abacuses. A neural net (i.e. a parameterized non-linear regression) is just an abacus with the beads/parameters set a certain way. The meta-algorithm that puts the neural net beads in this certain way is itself just a set of beads/parameters set a certain way. And so on for any algorithm that can be represented.
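A minimal sketch of that claim, with invented data and a single-neuron “net”: the net is a fixed non-linear formula whose adjustable numbers are the beads, and the gradient-descent meta-algorithm that sets them is itself only another fixed arithmetic rule, applied over and over.

```python
import math

# The "net": a parameterized non-linear regression, i.e. beads set a certain way.
def net(x, w, b):
    return math.tanh(w * x + b)

# Invented training data, for illustration only.
data = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.8)]

# The meta-algorithm that sets the beads: gradient descent on squared error,
# itself nothing but a fixed arithmetic rule applied repeatedly.
w, b, lr = 0.1, 0.0, 0.1
for _ in range(500):
    for x, y in data:
        pred = net(x, w, b)
        grad = 2 * (pred - y) * (1 - pred ** 2)   # chain rule through tanh
        w -= lr * grad * x
        b -= lr * grad

print(w, b)   # the final bead positions; no understanding anywhere in them
```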

This is not to say the positions of those beads, whether made of wood or of electrical states, are not useful to us in some way. Of course they often are. But so is a lawnmower useful, yet no one would make the mistake of thinking it possessed of an intellect. There are strong, unbreachable limits on the abacus, no matter from what material it is built. The abacus can have no experience of the infinite. We can—and do.

Every time we apprehend a universal, we experience the infinite, for a universal is true everywhere and everywhen; universals are eternal. This is a commonplace in mathematics, where the infinite appears on a routine basis (as it did when we knew, given the stated premises, that A > C). Now whether or not a facility for experiencing the infinite is necessary to possess intellect and will (and I think it is), it is certainly true that we can speak of the infinite, in its various kinds and flavors and properties, and no machine can. Of course, a machine can be built that has beads which we say mean this or that infinity, but the machine cannot be said to understand or grasp this, as shown.

An abacus that is self-aware and possessed of an intellect and will is absurd. That it is made of wood and could therefore only chug along at a sedate pace made this absurdity stark. There is something mystical about an electric machine, though, even when the person building the machine knows where every transistor goes.

To many, computers “feel” alive. We tell ourselves stories where they are, just as we used to tell stories of intellective mice and birds.

