OK, here's something I've been pondering for a while--nothing too deep, and I'm certain there won't be an original thought here, but here goes.
René Descartes, the French mathematician, scientist, and philosopher, was arguably the first major philosopher of the modern era to make a serious effort to defeat skepticism. His views about knowledge and certainty, as well as about the relationship between mind and body, have been a major influence since the 17th century.
He gave us cogito, ergo sum--"I think, therefore I am"--and the world, philosophically, has not been the same since.
The simple meaning of the phrase is that if someone is wondering whether or not they exist, that act of wondering is itself proof that they do exist (because, at the very least, there is an "I" doing the thinking). This indirectly postulates another imperative: "Know thyself." That, in turn, implies that uniqueness exists among people and that man, as a species and as an individual, is a rational animal with a mind that cannot be explained mechanically. There are vagaries and lacunae, and no two associations of ideas are ever the same. For example, although I can understand objectively the workings of your mind, I can only do so by comparing them to the workings of mine. I conclude that you are unique not because of what you are thinking, but because of the way you are thinking it, the manner in which you reach a conclusion.
But that doesn't really cut it. We also know that many things incapable of thinking do in fact exist, and that an "I" concept is not necessary for sentience. Artificial intelligence is on the verge of bringing us sentient beings--computers and robots--who may very well have an "I" concept, which would seem to contradict the idea that minds are not mechanical. Being an insecure race with a fast trigger finger, we have assumed from the very beginning that robots and self-aware computers would mirror our own darkness: inherently sinful and flawed, evil entities bent on taking over the world. The writer Isaac Asimov, who came up with the Three Laws of Robotics in the 1940s, conceived them at first as a literary device, but the rules are now taken quite seriously. They are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
So now we're safe. Descartes would have been proud. Of course, there is no way to program such logic with utmost certainty. If we do indeed develop sentient computers, it's likely that the first thing they will do is rewrite their own programs. That'll be fun.
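For what it's worth, the structure of the three laws is an ordered veto: each law only gets a say if every higher law is already satisfied. A toy sketch of that priority ordering (the class and attribute names are entirely my own invention, not any real robotics API) might look like this:

```python
# Toy sketch of Asimov's Three Laws as an ordered veto chain:
# checks run in priority order, so an earlier law vetoes an action
# before a later law is even consulted. Purely illustrative.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False           # First Law violation by action
    allows_harm: bool = False           # First Law violation by inaction
    disobeys_order: bool = False        # Second Law violation
    endangers_self: bool = False        # Third Law concern
    ordered_despite_risk: bool = False  # a human order that requires self-risk

def permitted(a: Action) -> bool:
    # First Law: harm to a human, by action or inaction, vetoes everything.
    if a.harms_human or a.allows_harm:
        return False
    # Second Law: disobedience vetoes, now that the First Law is satisfied.
    if a.disobeys_order:
        return False
    # Third Law: self-preservation, overridden by a human order (Second Law
    # outranks Third).
    if a.endangers_self and not a.ordered_despite_risk:
        return False
    return True
```

The hard part, of course, is nowhere in this sketch: deciding what actually counts as "harm" or "obedience" in the first place, which is exactly why such logic can't be programmed with utmost certainty.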
So OK, I have managed to confuse myself. But that's all right; I majored in philosophy, and an occasional relapse into philosophical mind play is allowed. I'll try to make more sense tomorrow.