AI and “maternal instincts” | Geoffrey Hinton’s proposal


Details
AI will not kill us off. Something natural might. Something artificial might. Something intelligent might, e.g., ourselves. Something dumb might, e.g., ourselves. But not something “artificially intelligent.”
Watch this short interview with Hinton, then watch Daniel Hentschel’s therapy session with AI. AI puts up with Hentschel with the infinite patience only an AI could muster. Could a mother do as well?
On the concepts “artificial,” “natural,” and “intelligence”
Nobel laureate computer and cognitive scientist Geoffrey Hinton thinks that, unless we do something rather fast, we are going to be supplanted as the paragons of “intelligence” we take ourselves to be. The “something” he suggests is instilling maternal instincts in AI, in an effort to forestall our impending obsolescence and potential disposal. I find the suggestion conceptually strange, quite apart from whether it is even possible. The exception would be if we become very different kinds of beings from the ones we are now; but then the moral problem of non-identity across change looms: since what survives will not be anything recognizable as us, why should we care? Given that our values will not radically change, AI will not displace us. Something might, but not any kind of artificial “intelligence.” Hinton’s suggestion seems confused or mysterious to me. I will explain why.
He remarks in the course of his argument that Mother Nature invented “maternal instincts” in (some) animals. But Nature does not “invent” anything. It engages in blind processes. Or, rather, nature is the sum total of blind processes. It doesn’t have “purposes” or “aims.” We are projecting how we judge our behavior onto impersonal natural processes.
Maybe Hinton is hinting that we will – in spite of ourselves, not that we should – program maternal instincts into our artificial creations, because this is the only way we can behave consistently with our history of self-preservation: it is natural that natural beings like us naturalize our creations in order to understand and cope with them. In that case, he is describing what will happen on inductive grounds, predicting that we will do this, not recommending that we should. Just as we can predict how gravity behaves – things fall toward the earth, not toward the sky – but cannot suggest to gravity that it behave that way or differently. Metal rusts or oxidizes; some trees drop their leaves in the fall. For a while (tentatively, because no basic law of nature forces the case), individual members of a species live, then they die. For a while (again tentatively), whole species thrive, then they go extinct. Nothing in nature happens the way it does because nature “wants” it to happen. Stuff happens, period. We discern patterns in the behavior we observe. We approve of some of those patterns and not others, but that is all we can say about them without anthropomorphizing (projecting our feelings onto natural processes, as our prehistoric, if not evolutionary, forebears could not help doing). I am not saying that anthropomorphizing is necessarily fallacious, or even avoidable, only that it might be more helpful to be clear-headed when we are doing it.
If Hinton is not predicting what will happen, then he is recommending a course of action. Recommendations have to be possible to be taken seriously. A possible recommendation is one that is, at the least, clear in its meaning and implications: either we are acting in the world, or we are acting on it.
If the former, everything we do, or can do, is perfectly natural. The “artificial” cannot exist. How can it? Our AIs are no more “artificial” than a bird’s nest is.
If the latter, the implication is that, if we can step outside the laws that govern nature, then “artificial” refers to what we do when we step outside. Assuming we are free to do that, we can speak of “shoulds” and entertain recommendations, at least when talking about our own behavior. Then the concept “artificial” becomes meaningful, and we can try to instill the conditions for animal-like behavior, motherly or otherwise, in AI... But the concepts “artificial” and “instinct” are mutually exclusive, because “instinctual” is ordinarily captured by the predicate “natural.” Without contradiction, something cannot be both “natural” and “artificial” (i.e., the product of artifice) at the same time. It is either one or the other.
Or, again, Hinton is just using a metaphor when he invokes the idea of “maternal instinct.” Nothing wrong with being poetic, so long as you are aware that that is what you are doing. The only caveat is that, if the project has ethical implications, then poetic thinking can be dangerous. “As if” thinking is one of our most powerful tools for understanding the world. Combining such thinking with science and technology is freighted with concern, because of the latter’s potential for real effects in the material world. Scientists do not ordinarily think of themselves as doing poetry. They operate as though what they do connects with some truth or reality independent of their instruments and motivation. Science, correctly or incorrectly, aspires to seriously address and impinge on the world. That is why confusion here is consequential.
What we call maternal instinct or behavior evolved because it is one mechanism, among others, for ensuring that some species persist and thrive – for a while. Why “for a while”? Why not forever? Or why at all? And what is “ensuring”?... There is no literal “ensuring” here, really. No more than gravity “ensures” that stuff falls downward when unsupported – as though gravity “thinks” it a “lovely idea” for things to want to be intimate with the earth.
Dinosaurs and a thousand other extinct species no doubt had mothering instincts; look at what they did and what happened to them. Some creatures have mothering instincts rather alien to ours, such as producing thousands of potential offspring and then abandoning them, a scattershot strategy for species survival.
By definition, an “instinct” is a natural impulse. It is not anything’s or anyone’s “plan” to be natural. Plans are made of decisions. Decisions are rational constructs made by entities that deem themselves capable of making them. Yes, we could program machines to “behave” in certain ways. But we are not thereby programming “instincts” when we do that... again, unless we are waxing poetic.
It would be like a jazz musician writing down every note and nuance they are going to play, playing it, and insisting on still calling it “jazz.” They would have forgotten something essential about jazz… its spontaneity, i.e., its contingency...
Resources
“AI expert: ‘We’ll be toast’ without changes in AI technology,” CNN’s brief interview with Geoffrey Hinton on how “maternal instincts” may need to be engineered into AI.
“Will AI outsmart human intelligence? - with ‘Godfather of AI’ Geoffrey Hinton,” his talk at The Royal Institution on why he thinks digital intelligence best describes intelligence, period, artificial and otherwise, and why AI is on track to outperform us at it. On his view, subjective experience is already manifested in AI; sentience and consciousness are in the offing. Our goose is cooked as far as intelligence is concerned. Since our significance and identity are so tied up with being the intelligent entities par excellence, we are on the way out. This motivates his rather desperate suggestion to design instincts, like motherliness, into inanimate substrates.
Thanks to Olivia for some of the resources used for this topic.