
Proof Cause Is In The Mind And Not In The Data


Pick something that happened. Doesn’t matter what it is, as long as it happened. Something caused this thing to happen; which is to say, something actual turned the potential (of the thing to happen) to actuality.

Now suppose you want to design a clever algorithm, as clever as you like, to discover the cause of this thing (in all four aspects of cause, or even just the efficient cause). You’re too busy to do it yourself, so you farm out the duty to a computer.

I will take, as my example, the death of Napoleon. One afternoon he was spry, sipping his grand cru and planning his momentous second comeback, and the next morning he was smelling like week-old brie. You are free to substitute an event of your own liking.

Plug into the computer, or a diagram in the computer, or whatever you like, THE EVENT.

Now press “GO” or “ACTIVATE” or whatever it is that launches the electronic beastie into action.

What will be the result?

If you said nothing, you have said much. For you have said your “artificial intelligence” algorithm cannot discern cause. Which is saying a bunch. Indeed, more than a bunch, because you have proven lifeless algorithms cannot discover cause at all.

End of proof.

“Very funny, Briggs. Most amusing. But you know you left out the most important element.”

I did? What’s that?

“The data. No algorithm can work without data. It’s the data from which the cause is extracted.”

Data? Which data is that?

“Why, the data related to the event your algorithm is focused on.”

Say, you might be right. Okay, here’s some data. The other day I was given a small bottle of gin. In the shape of a Dutch house in delft blue. You weren’t supposed to drink it, but I did. In defense, I wasn’t told until after I drank it that I shouldn’t have.

“What in the name of Yorick’s skull are you talking about? That’s not data. You have to use real data. Something that’s related to your event. What’s this Dutch gin house have to do with that?”

Well, you know what Napoleon did in Holland. And what’s my choice have to do with anything? We want the algorithm to figure out the cause, not me. Shouldn’t it be the business of the algorithm to identify the data it needs to show cause?

“I’m not sure. That’s a tall order.”

An infinite one, or practically so. Everything that's ever happened, in the order it happened, is data. That's a lot of data. That tall order is thus not only tall, but impossible, too, since everything that's ever happened wasn't, for the most part, measured. And even if it was (by us men), no device could store all this data or manipulate it.

“Of course not! Why in the world are you bringing in infinity and all this other silly business? You can be obtuse, Briggs. No, no. The data we want are those measurements related to the event you picked.”

Related? But don’t you mean by related those measures which are the cause of the event, or which are not the direct causes, but incidental ones, perhaps measures caused by the event itself, or measures that caused the cause of the event, and that sort of thing? Those measures which a prominent writer called in his award-eligible book (chap. 9) “the causal path”?

“They sound like it, yes.”

Then since it is you who have partial or full knowledge of the full or partial cause of the event, or of other events in the causal path of the event itself, isn't it you and not the algorithm that is discerning the cause? Any steps you take to limit the data available to the algorithm in effect make the algorithm's finding of cause (or correlation) a self-fulfilling prophecy. Not putting in my gin means you are doing all the work, not the algorithm. It means you have figured out the cause and not the algorithm. That makes the cause in your mind and not in the data, doesn't it?

“Perhaps.”

The best any algorithm can do is find prominent correlations, which may or may not be directly related to the cause itself, using whatever rules of "correlation" you pre-specify. Your algorithm is doing what it was told in the same way as your toaster. These correlations will be better or worse depending on your understanding of the cause and therefore of what "data" you feed your algorithm. The only way we know these data are related to the cause, or are the cause, is because we have a power algorithms can never have, which is the extraction and understanding of universals.
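To make the toaster comparison concrete, here is a minimal sketch, in Python, of what such an "algorithm" actually does. Every variable name and every number below is invented purely for illustration; nothing here claims to be the method of any real cause-finding software. The point is that the machine only ranks variables a human pre-selected, by a correlation rule a human pre-specified.

```python
# A minimal sketch (hypothetical data and variable names) of a
# correlation-ranking "algorithm": it ranks pre-specified variables
# by a pre-specified rule. The human chose the event, the variables,
# and the rule; the machine only does arithmetic.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation -- one arbitrary, pre-specified rule of 'correlation'."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# THE EVENT, coded as a series of measurements the human decided were
# relevant. All numbers are made up for illustration.
event = [0, 0, 0, 1, 1, 1, 1, 0]

# Only "data related to the event" -- variables the human already suspects
# lie on the causal path -- ever get in. The gin in the Dutch house never
# had a chance.
candidates = {
    "arsenic_exposure": [0, 0, 1, 1, 1, 1, 1, 0],
    "stomach_pain":     [0, 1, 0, 1, 1, 1, 1, 1],
    "weather_on_elba":  [1, 0, 1, 0, 1, 0, 1, 0],
}

# The algorithm's entire contribution: sort by |correlation|.
ranked = sorted(
    ((name, pearson(xs, event)) for name, xs in candidates.items()),
    key=lambda pair: abs(pair[1]),
    reverse=True,
)

for name, r in ranked:
    print(f"{name:18s} r = {r:+.2f}")

# Whatever tops this list is a correlation, not a cause. Calling it a
# cause requires the understanding the human brought to the variable list.
```

Whatever comes out on top was, in effect, put there by whoever drew up the candidate list.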

“I guess.”

And all that is even before we consider predictive ability or, more devastating to your cause (get it? get it?), under-determination, Duhem, Quine, and all that. The idea is that even if we think we have grasped the correct universal, and have indeed used our algorithm to make perfect predictions, we may be in error, and another, better explanation may be the truly true cause.

“That seems to follow.”

Then it also follows that the only reason we think algorithms can find cause is because we forgot the cause of causes, or rather the cause of comprehending causes, which is our own minds.

Note that this explanation, which is a proof, does not explain why most use algorithms in the hope of finding "causes" of repeated events, or events which are claimed to be repeated. That's a whole 'nother story, which involves, at the end, abandoning the notion that probability is a real thing.

