Finding the Key: Rote Memory, Perception of Meaning, and Neural Networks

What we remember of what we have learned can depend on how well it was encoded when we learned it.  For example, if we learn a word list for a test, we might be able to recall some of the words after half an hour or so, but we can recognize more of the words we missed if we are given a list of words to choose from, some of which were on the list.

So, if I learn a list of words—say, “cat, tree, chair, piano, table, box, pail, clock, glasses, radio, door”—and then say as many as I remember after half an hour or so, I might remember “cat, tree, piano, box, radio, door.”  Then, if I am shown a list that contains all the original words along with a number of other words that were not on the list as distractors, I might recognize “chair, table, pail, clock, glasses.”  The words I remembered without any prompting would be the ones that were better encoded when I first learned them, and the words I didn’t remember at first but recognized from the list of possible words would be the ones that were not so well encoded.
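The logic of this recall-versus-recognition split can be sketched with simple set operations. This is only an illustration using the word lists from the example above; the distractor words are made up for the sketch.

```python
# Illustrative sketch of the recall vs. recognition example.
# The studied and recalled lists come from the text; the
# distractors are hypothetical, invented for this sketch.

studied = {"cat", "tree", "chair", "piano", "table", "box",
           "pail", "clock", "glasses", "radio", "door"}

# Words produced in free recall after a delay (the well-encoded ones)
recalled = {"cat", "tree", "piano", "box", "radio", "door"}

# Recognition test: the studied words mixed with distractors
distractors = {"lamp", "spoon", "river", "hat", "candle"}
test_list = studied | distractors

# Words missed in free recall but picked out on the recognition test
# (the more weakly encoded ones)
recognized_only = (studied - recalled) & test_list

print(sorted(recognized_only))
# → ['chair', 'clock', 'glasses', 'pail', 'table']
```

Free recall plus recognition together recover the whole studied list; the split between the two sets is what the encoding-strength account explains.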

Now, encoding, like any other learning process, requires the formation of neural networks: tens or hundreds of thousands of neurons or more, distributed across various parts of the brain, interconnecting and forming the memory.  The words that were better encoded, the ones that I remembered without prompting, had neural networks (called “neural nets” for short) with more neurons, perhaps involving more parts of the brain.  The ones that I didn’t remember but could recognize had fewer neurons in their encoding networks, perhaps distributed over a smaller portion of the brain.

That’s a rote memory example.  Now let’s think about the understanding of meaning in language, by looking at a teaching story.  I first came across this story in the Nasrudin stories gathered and edited by Idries Shah, but I’ve since come across it in business, educational, and other contexts, without Nasrudin appearing as a character, which is an interesting example of cultural assimilation.  Nasrudin is down on his hands and knees outside when his friend comes along and asks what he’s looking for.  “I’ve lost my key.”  So his friend helps him look, but they can’t find it.  “Are you sure you lost it here?” asks the friend.  “No, I lost it at home,” says Nasrudin.  “Then why are you looking for it here?” asks his friend.  “There is more light here,” answers Nasrudin.

Now, this story can be meaningful in a lot of different contexts.  For example, in a military, business, or educational context, it can mean that conventional thinking is not going to solve the problem of how to succeed in a particular operation or project, or of what the military calls “lessons learned”: understanding, after the fact, what went wrong.  I use this story in my work—in psychotherapy, diagnostic evaluation, consultation, and supervision—to indicate that the way my client or student has been thinking about a problem may not lead to a solution.

What happens in the brain when a little story like this, a set of words that may not have much meaning for us except as a sort of joke about how stupid people can be, suddenly illuminates an important problem in human thought and behavior that has direct application in our own lives?  Surely there is a huge expansion, extension, and interconnection of neural networks, recruiting many more neurons and involving much more of the brain than the original encoding of the story did.  Thus, the perception of meaning in metaphor can be understood as a significant event within the brain.

(See “links,” under the “resources” menu, for collections of Nasrudin stories.)