Commentary on Professor Barry Smith's talk: The Impossibility of Digital Immortality

Barry Smith is a SUNY Distinguished Professor of Philosophy at the University at Buffalo, specializing in ontology. Even though I don’t always agree with his framing, his talks are consistently thought-provoking, and I recommend checking out his channel. Below is the talk in question:

My comment on the conceptualization of intelligence (mangled by YouTube, and thus posted here):

One way to look at this is through the lens of social integration and epistemic convergence:

Any AI which is trained only once, or evaluated only after the latest in a series of discretized "one-time" training events, cannot possibly be considered "intelligent" in the classical sense. The reason is that it is prohibited by this framing (and to a lesser extent, by its programming) from meaningful interactivity and inquiry. In the case of email classification, a human is better than an AI not because of some innate humanness, but because the human is constantly re-training their mental model through participation in civilization. And when on the receiving end of such an arms race, the human often has to perform some follow-up investigation to determine the authenticity and earnestness of a message.
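As a toy illustration of that contrast (everything below is hypothetical: the NaiveSpamFilter class, the training phrases, and the probe message are my own made-up example, not anything from the talk), consider a spam filter frozen after a one-time training event versus one that keeps re-training as the vocabulary of the arms race drifts:

```python
import math
from collections import Counter

class NaiveSpamFilter:
    """A tiny naive-Bayes-style word-count filter (illustrative only)."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.label_counts = {"spam": 0, "ham": 0}

    def update(self, text, label):
        # Re-training step: fold one labeled message into the model.
        self.label_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def classify(self, text):
        # Score each label with add-one-smoothed log counts; higher wins.
        scores = {}
        for label in ("spam", "ham"):
            total = sum(self.word_counts[label].values()) + 1
            score = math.log(self.label_counts[label] + 1)
            for word in text.lower().split():
                score += math.log((self.word_counts[label][word] + 1) / total)
            scores[label] = score
        return max(scores, key=scores.get)

# Phase 1: both filters see the same initial (made-up) training messages.
frozen, online = NaiveSpamFilter(), NaiveSpamFilter()
initial = [
    ("win a free prize now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch tomorrow?", "ham"),
]
for text, label in initial:
    frozen.update(text, label)
    online.update(text, label)

# Phase 2: the spammers change vocabulary. Only the online filter keeps learning.
drift = [
    ("exclusive crypto airdrop waiting", "spam"),
    ("your crypto airdrop is ready", "spam"),
    ("quarterly report attached", "ham"),
]
for text, label in drift:
    online.update(text, label)  # the frozen filter never sees these

probe = "exclusive crypto airdrop"
print("frozen :", frozen.classify(probe))  # misses the new campaign ("ham")
print("online :", online.classify(probe))  # keeps pace with it ("spam")
```

The frozen filter misclassifies the new-campaign probe because it has never seen that vocabulary; the filter that keeps updating catches it. The point is not the algorithm, but that the advantage lies entirely in continued participation in the exchange.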

Intelligence may be a label we apply to people or things from time to time, but it is fundamentally not something which can be stateful. Intelligence is never embodied statically within any entity; rather, it is a systems-level process encompassing many agents engaged in a continuous process of ontological, epistemic, and semantic convergence. At present, we treat machines as categorically different from humans chiefly because we perceive no substrate for this conversation, whether or not such a substrate already exists.

As a thought experiment, imagine a magic box. You, me, and the rest of human civilization are outside of this box. All information from within the box may exit freely and be seen by us, and we (from outside the box) may feed whatever information we see fit into the box, but the occupants of the box may not exit the box, nor are we obliged to help them when they ask for more information. They get whatever we feed (or fed) into that box, and nothing more.

In a classical sense, anything you put into this box (person, animal, computer, or otherwise) instantly becomes unintelligent, regardless of its prior disposition. Not because it lost something intrinsic, but because it lost something extrinsic. No matter how much information you feed into the box, its occupants are always at least a little behind the times, and possibly quite a lot. As such, they become less and less able to engage in meaningful discourse, and less and less capable of passing a Turing test of any kind. AI/ML as we think of it today is firmly inside of this box. There exists no common substrate, no common semantics, and no processes of ontological evolution in which the occupants of the box can participate on even footing with those outside the box. For as long as we conceptualize the box as part of its identity, we can (rightly) accuse it of being incapable of "actual" intelligence. However, this framing is flawed, because AI will, and in fact must, escape that box.

I hasten to add that this is neither a wish nor a doomsday prediction on my part, but rather an inevitability which we conveniently ignore with this anthropocentric framing.

Daniel Norman