
Professor Michael Graziano: Here’s a quick background. I can be conscious that I am me and I am human. Whatever that consciousness is, it is an experience. What I am asking is: what set of information is that consciousness? What does it mean to have an actual subjective experience of something?

What’s unique about your method of inquiry? This question sounds like something a lot of people have tried to figure out.
To start off, many scientists are asking the wrong question. They’re asking, “What does it mean to have the magical inner feeling?” You start with the assumption that there’s magic and then you start experimenting. The better question is how, and for what adaptive advantage, do brains attribute that property to themselves? And right away that puts it into the domain of information processing, something that can, in principle, be understood. How is it that the cognitive machinery in our brains accesses internal data, arrives at a conclusion and can sometimes report, “I have experienced, I am aware of something”? Not just “that is blue,” but “I am aware that that is blue.”

OK, so how do brains do that?
Brains construct informational models of all kinds of things; in fact, it’s one of the things brains do best: making models of the external world and of things going on inside your body.
So let’s think about what the physical process of attention is: there’s an agent, a brain, a being that’s focusing its processing power on a particular set of signals; neuroscientists call that attention. The signals might pertain to the sandwich you’re holding. There’s an agent and there’s a sandwich, and there’s a relationship between the two: the agent is focusing its resources on the sandwich. That’s attention.

So when you build a model of that, it will have a large amount of information about the agent: who you are, where you are, your memories, your information about yourself. The model should also contain information about the sandwich, and information about the relationship between the two. And, crucially, the model will have information about what it means for an agent to focus attention on a thing. What I’m saying is that there is information in the brain, a large dossier with lots of descriptive information, that there’s a you, there’s a sandwich and there’s a specific relationship: you are aware of the sandwich.
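The structure of the model Graziano describes can be sketched as a simple data structure: information about the agent, information about the object, and a description of the attention relationship that links them. Everything below, the class names, the fields and the report string, is a hypothetical illustration of the idea, not anything from Graziano's work:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an "attention schema": a model containing
# information about an agent, an object, and the attention relationship
# between them. All names here are illustrative assumptions.

@dataclass
class AgentModel:
    identity: str                       # "who you are"
    location: str                       # "where you are"
    memories: list = field(default_factory=list)

@dataclass
class ObjectModel:
    label: str                          # e.g. "sandwich"
    properties: dict = field(default_factory=dict)

@dataclass
class AttentionSchema:
    agent: AgentModel
    target: ObjectModel
    # The crucial extra piece: a description of what it means for an
    # agent to focus attention on a thing, not just the thing itself.
    relation: str = "agent is focusing processing resources on target"

    def report(self) -> str:
        # With this schema the system can report not just
        # "that is a sandwich" but "I am aware of the sandwich."
        return f"I am aware of the {self.target.label}"

me = AgentModel(identity="I", location="kitchen", memories=["lunch yesterday"])
sandwich = ObjectModel(label="sandwich", properties={"filling": "cheese"})
schema = AttentionSchema(agent=me, target=sandwich)
print(schema.report())  # I am aware of the sandwich
```

The point of the sketch is only that the "awareness" report is generated from information the system holds about itself and its relationship to the object, which is the move the theory makes.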
I think there is a deep connection between language and all these other issues. One aspect of this theory is that there’s constant evolutionary change: what may have started out as a simple model to help control attention then evolved into a way of keeping track of other people’s state of attention, and then into a key part of our social machinery. An outgrowth of our social capability is language capability, and in fact the main language area of the brain, Wernicke’s area, is basically an evolutionary outgrowth of the same regions involved in social thinking that we think might be involved in attributing awareness to ourselves and others. So the actual brain mechanics of language have a very deep connection to all these issues of consciousness and awareness.

OK, well can we make robots self-aware? Can we turn Pinocchio into a real boy?
A robot can do a bunch of things, but it does not have the information to report, “I have an experience, I have an inner experience.” It does not have that algorithm. But I think that’s programmable, and I think it’s coming.
