
Properties of Consciousness e-mail exchange

Mike and Robert,

Thanx for the valuable discussion on Friday during Quest – prior to this week's meeting I would like to stimulate some virtual interaction. You guys seem to be bothered by the computational nature of the properties I was emphasizing in characterizing the engineering aspects of consciousness – recall we were emphasizing the list we discussed at IARPA – gists, links, narratives, …

Would a list that also included the material in my last slide in that deck address some of your concerns? (A rough engineering sketch of these properties follows the list below.)

• Continuity – unbroken thread (with ‘feeling’ of past, present and future) – cohesive narrative (non-causal – time is a quale)
• Unity – diversity of sensory data BUT ‘experiences’, memories, beliefs and thoughts are experienced as one person – as a unity
• Embodiment – mind is embodied and body is embedded, 'feel' anchored in our body (the idea that you can't model a priori all that will be encountered, or the form sensory experiences will take)
• Sense of free will – ‘feel’ in charge of our actions, I can wiggle my finger (recently thinking link sets may offset a lot of what appears to be free will)
• Reflection – ‘aware’ of itself (places ‘self’ in world model)
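To make the engineering intent of this list concrete, here is a minimal, hypothetical sketch – the class and field names are illustrative only, not taken from the slide deck – of how the five properties might be exposed as explicit state in a quest agent:

```python
# Hypothetical sketch: the five listed properties as explicit agent state.
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class AgentSelfModel:
    # Continuity: an unbroken narrative thread over past, present and expected future
    narrative: List[Dict[str, Any]] = field(default_factory=list)
    # Unity: diverse sensory streams fused into a single experienced state
    fused_state: Dict[str, Any] = field(default_factory=dict)
    # Embodiment: the body/sensor configuration the 'mind' is anchored in
    body_schema: Dict[str, Any] = field(default_factory=dict)
    # Sense of free will: actions the agent attributes to its own choices
    own_actions: List[str] = field(default_factory=list)
    # Reflection: the agent's world model, into which it places a model of itself
    world_model: Dict[str, Any] = field(default_factory=dict)

    def reflect(self) -> None:
        """Place a snapshot of the agent itself inside its own world model."""
        self.world_model["self"] = {
            "narrative_length": len(self.narrative),
            "recent_actions": self.own_actions[-5:],
        }
```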

Recall the point of the discussion – answer the Nagel challenge:

• May be possible to attack the gap between subjective and objective
• At present we can only think about the subjective character of experience by imagination (taking up a point of view of the experiential subject)
• QUEST CHALLENGE: NAGEL: devise a new method – an objective phenomenology not dependent on imagination or empathy (although it wouldn't capture everything – its goal would be to capture a description, in part, of the subjective character of experiences IN A FORM COMPREHENSIBLE TO BEINGS INCAPABLE OF HAVING THOSE EXPERIENCES)

The issue is to address the fundamentally subjective nature of consciousness:

• Aspects of subjective experience that lend themselves to this kind of objective description are sought as candidates (I’ve provided a list of some candidates as my response to the challenge)
• Any physical theory of mind will require more thought on the general problem of subjective and objective

Capt amerika

Hi Steve,

I first must say that I really enjoyed the discussion last Friday–it was much fun!

Note that what I am bothered by is the notion that consciousness per se has any causal effect. Rather, what I think is more likely is that consciousness is simply a reflection of what the underlying neural substrate is doing. So, as a very simple analogy, I see consciousness as like a kind of display attached to the hood of a car, with the display showing some of the inner workings of the engine under the hood. While the display may show the rate of fuel being injected into the engine, the display itself has no causal influence on the engine–it simply displays the inner workings.

Another way to say it is that consciousness is neither necessary nor sufficient for the generation of human behavior, as many studies on brain stimulation, visual illusions, implicit learning, etc., have shown.

As Mike said on Friday, if I understood him, all of the functional aspects that you attribute to consciousness are really computational in nature, thus those aspects can be understood in a way that divorces them from consciousness per se.

In your list of terms, where you mention continuity and 'feeling' of past, present and future, unity of experiences, memories, beliefs and thoughts, embodiment of mind, sense of free will, and reflection, all of these can be taken to be a conscious reflection of underlying neural computation wherein the conscious reflection itself has no causal influence–it is simply an epiphenomenon that reflects or displays what is happening underneath in our nervous system.

Mike–?

Robert.

Robert Earl Patterson, Ph.D.

I've heard this argument from every engineer also – the really cool thing is there is NO way to know – your conjecture is as flimsy as mine – BUT – the really, really cool thing is we can use a set of properties (whether they are really the key aspects of consciousness OR whether they are really just a view provided by consciousness of some underlying processes) to engineer solutions in a manner that we haven't attempted in the past – and we can test to see if the result is more 'robust' – whether it can achieve 'autonomy' in a real way versus the current automated solutions –

So again I ask the team – take the Nagel challenge – assume for a minute that there is something in the construct of consciousness that, if we could characterize it, would provide us with an insight into ourselves but, more importantly for our day jobs, provide us a path to engineer solutions that we have not yet been able to achieve.

Capt amerika

It is not that I am bothered by the computational nature of the
properties; it is that I think they are the essence. These properties
are what is important (from a computational perspective, not from a
phenomenological perspective).

The overall game is called: Understand the most complex computational
system yet discovered. This system is embedded in a box that cannot be
opened. The system itself is thought to be a multi-sided geometrical
shape, with different types of computation occurring near the different
sides.

The traditional way of playing the game is to shoot marbles into the box
from one side and see where they come out. The basic idea is that the
marble will hit one of the surfaces and, based upon the geometrical
form, bounce out in another direction. From playing many such games,
one attempts to infer the structure of the system. (This is also known
as traditional experimental psychology [or the game of 20 questions
with nature]).

Another way to play the game is to find a computational system that
solves some class of problems and then claim that this system
(rule-based systems, neural nets, system dynamics models, etc) is
exactly what is in the box. In this version, one does not shoot marbles
into the box, but instead one solves a variety of simple problems that
are only vaguely related to the things the box does (i.e., minimally
related to actual cognition).

A newer way to play the game is to take pictures of what is in the box.
At first the pictures were of low resolution and only imaged the
structure for a moment. Newer cameras see more detail, and have begun
to see how the internal structure appears to become active over short
time periods. (i.e., recall that we are not yet sure which neurons
actually process information in the brain)

However, there is a problem with all these approaches: They tell you
nothing about what the system in the box is experiencing. Now, we must
be careful here, we think the box is experiencing something, but we are
not really sure. You see, we have a box like the one we are studying,
and since we feel pain, happiness, boredom, etc., we surmise that the
box we are studying must have the same sorts of feelings (except in the
case of Brian, who we all know is actually RH’s latest robot model).
(and yes, I know, he thinks he has feelings, and the latest software has
actually given him a sense of free will, but we all know that those
feelings are actually an epiphenomenon; I mean we built it [i.e., him]).

Now there are, of course, historical (i.e., older) versions of this
game. My favorite is called introspection. In this version you do not
try to get into the box from the outside, but rather from the inside.
We start by trying to describe our internal states and comparing them to
others (i.e., scientists) who are trying to describe their states. Now,
we do not know if the red we see is the same as the red they see (i.e.,
they might be color blind or have some other visual disorder). And we
are even less certain that what we feel is the same as what they feel,
when, for example, they see (choose your favorite movie star, or
minister, or family member [to include dogs] and insert here).

Introspection might be able to help you form hypotheses about what types
of functional capabilities the system possesses. You would still have
to check them out through other studies to see if these capabilities were
real (and not just imagined epiphenomena). One way we frequently test
out ideas is to create a sophisticated simulation (like Brian) and see
if the capability actually contributes to simulating intelligence.
Several of the functional capabilities, such as gist, links
(associations) and narratives, hold promise for incorporation into a
model and testing. Perhaps they will make the model appear more intelligent.
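A minimal sketch of what such a test structure could look like, assuming gists are compressed summaries of experiences, links are associations between gists, and narratives are chains of linked gists (all class and method names below are illustrative, not from the discussion):

```python
# Hypothetical sketch: gists, links (associations) and narratives as a small
# associative memory that a test model could incorporate.
from dataclasses import dataclass, field
from typing import Dict, List, Set


@dataclass
class Gist:
    label: str                                    # compressed summary of an experience
    links: Set[str] = field(default_factory=set)  # labels of associated gists


class NarrativeMemory:
    def __init__(self) -> None:
        self.gists: Dict[str, Gist] = {}

    def add_gist(self, label: str) -> None:
        self.gists.setdefault(label, Gist(label))

    def link(self, a: str, b: str) -> None:
        # Record a symmetric association (link) between two gists.
        self.add_gist(a)
        self.add_gist(b)
        self.gists[a].links.add(b)
        self.gists[b].links.add(a)

    def narrative(self, start: str, length: int = 5) -> List[str]:
        # Follow links greedily from a starting gist to form a simple narrative chain.
        self.add_gist(start)
        chain = [start]
        while len(chain) < length:
            candidates = sorted(self.gists[chain[-1]].links - set(chain))
            if not candidates:
                break
            chain.append(candidates[0])
        return chain
```

For example, linking 'stove' to 'burn' and 'burn' to 'pain' and asking for a narrative from 'stove' yields the chain ['stove', 'burn', 'pain'].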

Now I think it is important as we continue this discussion not to let on
to Steve that he is actually a dead piece of meat existing in Dr A.
Rogers’s lab at (choose your favorite university and insert here). Adam
does a great job linking the brain of the late Dr. S. Rogers into the
virtual reality simulation and stimulating it. When we are in the
virtual world it seems as if Steve really thinks he is alive (and
learning [and not just being reprogrammed overnight]). The techniques
Adam has developed are really quite impressive.

Now as to whether we can “understand” Steve, and he us, as I have said
previously, it depends on the architectures involved. If the
architectures are similar, and you invoke the same brain states (or
virtual state in the case of Steve), then I think one can argue that
there is some level of “understanding”. Not in a phenomenological sense
(i.e., does Steve really feel; although he and Brian think they do), but
in a computational sense. That is, if we have flipped the bits in
similar architectures into similar positions, that has to count for
something. Right?

In summary, to quote the learned Dr. R. E. Patterson “all of the
functional aspects that you attribute to consciousness are really
computational in nature, thus those aspects can be understood in a way
that divorces them from consciousness per se”.

Mike

What if the phenomenological perspective is part of the computational solution?

Your rendition of 'the game' is one I used to hear special k recite – Nagel in fact addresses it in his article – I termed it the Nagel challenge:

At present we are completely unequipped to think about the subjective character of experience without relying on the imagination—without taking up the point of view of the experiential subject. This should be regarded as a challenge to form new concepts and devise a new method—an objective phenomenology not dependent on empathy or the imagination. Though presumably it would not capture everything, its goal would be to describe, at least in part, the subjective character of experiences in a form comprehensible to beings incapable of having those experiences.
We would have to develop such a phenomenology to describe the sonar experiences of bats; but it would also be possible to begin with humans. One might try, for example, to develop concepts that could be used to explain to a person blind from birth what it was like to see. One would reach a blank wall eventually, but it should be possible to devise a method of expressing in objective terms much more than we can at present, and with much greater precision. The loose intermodal analogies—for example, ‘Red is like the sound of a trumpet’—which crop up in discussions of this subject are of little use. That should be clear to anyone who has both heard a trumpet and seen red. But structural features of perception might be more accessible to objective description, even though something would be left out. And concepts alternative to those we learn in the first person may enable us to arrive at a kind of understanding even of our own experience which is denied us by the very ease of description and lack of distance that subjective concepts afford.
Apart from its own interest, a phenomenology that is in this sense objective may permit questions about the physical basis of experience to assume a more intelligible form. Aspects of subjective experience that admitted this kind of objective description might be better candidates for objective explanations of a more familiar sort. But whether or not this guess is correct, it seems unlikely that any physical theory of mind can be contemplated until more thought has been given to the general problem of subjective and objective. Otherwise we cannot even pose the mind-body problem without sidestepping it.

With respect to your comment on introspection – I love that one – in one of our discussions we pointed out that introspection is just one form of perception, and since we had already posited that all perception is a 'simulation' rather than a physically accurate representation of the world, we had to acknowledge that basing an approach on introspection is inherently flawed!

And with respect to your comment about the 'Steve' illusion – special k's favorite similar comment was the holodeck cube used to trap Professor James Moriarty in a Star Trek: The Next Generation episode – you can never know you are in such an environment.

Lastly – with respect to Robert's point – "all of the functional aspects that you attribute to consciousness are really computational in nature, thus those aspects can be understood in a way that divorces them from consciousness per se" – I posit that consciousness per se is nothing more mystical than what we will define, at least for our purposes. I will leave it to the mystics in the world to work out the differences between where we land and what humans experience – that will not be my concern. I'm an engineer – I seek a simple truth: are there engineering characteristics of consciousness (AND sys1 computational characteristics that we can blend together) that we can define and then embed in quest agents to make computer aides that humans can better align to?

Thanx to all for the continued interactions – please keep it coming –

Capt amerika

Hi Steve, Mike, and Other Folks,

The only other statement I would make is that my claim that
consciousness is an epiphenomenon, and not causal in nature, is not as
flimsy as the counterclaim made by Capt amerika–remember, all the
evidence (visual illusions, brain stimulation, etc.) shows that
consciousness is neither necessary nor sufficient for behavior–and in
order to claim that consciousness is causal for behavior, one must show
that it is necessary or sufficient for behavior. So my point of view is,
I believe, less flimsy…

Cheers,

Robert.

I have never bought into the zombie argument – that is, you can make a replicant that can do all that a conscious critter can but doesn't experience qualia – untestable, I agree.

But I also disagree with the point that consciousness doesn't impact behavior – the studies we've reviewed clearly demonstrate the enormous power of processing below the conscious level but have not demonstrated that consciousness is not necessary for replicating robust decision making versus the unexpected query.

Great discussion –

Is anyone aware of a study that demonstrates a Damasio-like result with the card games but without replicating patterns – that is, for unexpected queries?

Another twist would be the efficiency or lack thereof in retraining sys1 without consciousness

I’m open to other theories that might be testable too
Capt amerika

Hi Steve,

Regarding the idea of retraining sys 1 without consciousness: according to the literature, sys 1 operates without consciousness (do you mean sys 2?).

I have attached a paper of mine that is currently in press in the journal Human Factors, which shows how to train sys 1 without consciousness, and a few other papers you might find interesting.

Robert

No, I really mean sys1 – one of our proposed 'purposes of consciousness' was an efficient means to program / reprogram sys1 –

So the idea was that by 'experiencing' in sys2 you formulate a representation into a pattern-recognition formalism so it can be responded to more efficiently (without consciousness) the next time you are exposed to it –

So take driving a car – while learning you are straining hard just to keep it on the road – then after a while your conscious bandwidth can be divided to allow talking on a cell phone, changing the music and fixing your makeup – your sys1 is doing all the heavy lifting that, while learning, was dominated by sys2.
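A minimal sketch of that idea, assuming sys2 is a slow deliberative function and sys1 is a cache of compiled pattern–response pairs (the class and function names are illustrative only, not part of the original exchange):

```python
# Hypothetical sketch: 'conscious' sys2 deliberation whose outcomes are compiled
# into a fast sys1 pattern->response table, as in the learning-to-drive example.
from typing import Callable, Dict, Hashable


class DualProcessAgent:
    def __init__(self, deliberate: Callable[[Hashable], str]) -> None:
        self.deliberate = deliberate                   # sys2: slow, effortful reasoning
        self.sys1_patterns: Dict[Hashable, str] = {}   # sys1: fast, cached responses

    def respond(self, situation: Hashable) -> str:
        # If sys1 already recognizes the pattern, respond without 'consciousness'.
        if situation in self.sys1_patterns:
            return self.sys1_patterns[situation]
        # Otherwise sys2 'experiences' the situation and works out a response ...
        action = self.deliberate(situation)
        # ... which is compiled into sys1 for efficient reuse next time.
        self.sys1_patterns[situation] = action
        return action


# The first encounter with "curve ahead" takes the slow sys2 path; later
# encounters are handled by the cached sys1 pattern.
agent = DualProcessAgent(deliberate=lambda s: f"slow down and steer through {s}")
print(agent.respond("curve ahead"))   # sys2 path
print(agent.respond("curve ahead"))   # sys1 path
```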

Capt amerika

OK, yes, this taking over of sys 2 functions by sys 1 after practice is very well established (see the Schneider and Shiffrin paper)–but note that this is NOT the only way sys 1 gets "programmed"–sys 1 also learns via implicit learning of statistical regularities in the environment that are never processed by sys 2.
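A minimal sketch of that kind of implicit statistical learning, reduced here to counting transition regularities in a stimulus stream with no explicit (sys 2-style) rule being formed – the function name and data are illustrative only:

```python
# Hypothetical sketch: sys 1-style implicit learning as accumulation of
# transition statistics from mere exposure, with no explicit rule involved.
from collections import defaultdict
from typing import Dict, List


def learn_transitions(stream: List[str]) -> Dict[str, Dict[str, float]]:
    counts: Dict[str, Dict[str, int]] = defaultdict(lambda: defaultdict(int))
    for prev, curr in zip(stream, stream[1:]):
        counts[prev][curr] += 1
    # Normalize counts into transition probabilities (the learned 'regularities').
    return {
        prev: {sym: n / sum(nxt.values()) for sym, n in nxt.items()}
        for prev, nxt in counts.items()
    }


# The regularity "A is usually followed by B" emerges from exposure alone.
print(learn_transitions(["A", "B", "A", "B", "A", "C", "A", "B"]))
```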

Here are two other papers you might like–

Robert.

Two more cents from Mike –

We spend a lot of time worrying/wondering about consciousness, but it might be more productive to try to really define computationally/functionally what sys1 and sys2 actually do, and how they work together computationally.

Cheers,

Mike
