
Archive for February, 2012

Weekly QUEST Discussion Topics, Feb 24

February 23, 2012

QUEST Discussion Topics and News Feb 24

This week will again focus on ‘alignment’. We had a series of interactions via email that can be reviewed on the blog. We will start by allowing people to voice the positions they took in those emails. Then Capt Amerika will discuss the application of the concept of alignment to agents engaged in a joint activity. This requires that all involved in the coordinated joint activity work to enable three key characteristics: interpredictability, common ground, and the ability to redirect. We will use the articles by Gary Klein, “Common Ground and Coordination in Joint Activity” and “Ten Challenges for Making Automation a ‘Team Player’ in Joint Human-Agent Activity,” to frame our discussion. We also need to agree on the pieces of the representations and communications required to achieve ‘joint activity’, so that our math team can formulate the common math framework for human and computer participants in the joint activity. We will consider both sys1 and sys2 alignment in the joint activity paradigm.

Also note that next Friday, March 2, Capt Amerika will be on the road, so if someone wants to use the opportunity to present something, let us know soon; otherwise we will cancel the meeting.


Email discussion on Alignment

February 22, 2012

It seems to me there is another piece to the alignment issue that I’ve not addressed. That is, when I use the term alignment I’m implying there are axes in my representation that I’m attempting to associate with aspects of another agent’s internal representation to accomplish some inference task.

Capt Amerika

Currently, Jared, Kirk and I are using “information sharing” in a generic way. I believe part of the “information” can be the internal representation. But agent B has to transform its internal representation into a form that agent A can understand/read/interpret (choose a word that makes sense). If agent B cannot transform it (that is, no transformation exists for agent B) then B is not aligned with A for the task in question.

Ox

Following up on Mark’s comments — if agent B’s internal representation with respect to a given inference task is aligned (in the sense that Steve mentions) with his representation of agent A’s representation, then this transformation of information into something that agent A can (presumably) easily digest should be more efficient. So it seems that there are two pieces here to alignment — one that only depends on the agent sharing the information (how well does his own representation match up with his approximation of the other agent’s representation), and another that requires looking at both agents (how “good” of a theory of mind does one agent have with respect to another).

The trust issue seems huge here also — agent B might be perfectly aligned with agent A and providing exactly the right information, but if agent A does not trust agent B then one would not expect agent A to act on agent B’s information, rendering it useless. I’m not sure that I agree with Mike that we need different terms for human/human, human/machine and machine/machine trust. I’m a mathematician, so all agents are spheres, or something like that….

Jared
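
A minimal Python sketch of Jared’s two pieces, purely illustrative (the set-based representations and the overlap-style similarity are my placeholders, not anything the group has agreed to):

    # Hypothetical sketch: Jared's two pieces of alignment.
    # Representations are toy sets of propositions; similarity() stands in
    # for whatever comparison measure the math team eventually defines.

    def similarity(rep_a, rep_b):
        """Crude overlap score between two representations, in [0, 1]."""
        if not rep_a and not rep_b:
            return 1.0
        return len(rep_a & rep_b) / len(rep_a | rep_b)

    def piece_one(own_rep, own_model_of_other):
        """Depends only on the sharing agent: how well its own representation
        matches its approximation of the other agent's representation."""
        return similarity(own_rep, own_model_of_other)

    def piece_two(own_model_of_other, other_actual_rep):
        """Requires looking at both agents: how good the theory of mind is."""
        return similarity(own_model_of_other, other_actual_rep)

The second score is the one no single agent can compute on its own, which is why it shows up again later in the thread as something that can only be probed through interaction.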

Afterthought on alignment… What do you call a situation where X number of operators are internally aligned on a task or mission, but not aligned with the “real” world?

Human error.

Mike

Good point – but my position would be that agents who are mutually aligned can sometimes do the right thing even when their representation is NOT physically correct –

I haven’t been using the word alignment to capture the ‘truth’ – that is, the mutual information between the representation and the situation in the world – but following your thought process I would have to call the case where the team is all mutually aligned, and the representation they have high agreement on is in fact based on a situational representation that is NOT correct, but they do the right action in terms of improving their performance = LUCK; but if they do the wrong action = human error

Capt Amerika
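
As a toy illustration of the taxonomy above (hypothetical names, nothing more than a restatement of the email in Python):

    def classify_outcome(mutually_aligned, representation_matches_world, action_helped):
        """Label the case Capt Amerika describes; other cases are left open."""
        if mutually_aligned and not representation_matches_world:
            return "luck" if action_helped else "human error"
        return "not covered in this email"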

After a couple of dog walks (I’m grand-dog sitting this weekend for Adam) – I retract my email below,

To eventually take the math to the issue of a Theory of Knowledge I have to account for the ‘truth’ (probably what Terry was emphasizing and I didn’t get it) – but by truth I have to modulate the concept to be those aspects of reality that can be posed into a situation that is applicable for impacting the current inference task – if god were solving this inference task, this is the physically real situation she would use to accomplish that –

So going down this path Mike’s ‘human-error’ is a measure of the mis-alignment of the agent or team of agents from god’s representation of that relevant situation

So, the walk-aways: although the word alignment might be taken on an axis-by-axis basis, I’m hoping we will use it associated only with combinations of axes of the representation = situations – so the degree of alignment (or not) will be associated with the similarity of the situation-based representations

So now back to my homework – reading another Klein article he sent after the meeting yesterday AND thinking about an example to drive home ‘alignment’

Capt Amerika
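
One hedged way to write down the ‘mis-alignment from god’s representation’ idea above, in notation that is mine rather than the group’s:

    \mathrm{misalign}(A, s) \;=\; d\big( R_A(s),\; R_{\mathrm{god}}(s) \big)

where R_A(s) is agent A’s representation of the situation s pertinent to the current inference task, R_god(s) is the (in practice unknowable) ground-truth situational representation, and d is whatever distance the math team adopts. Mike’s ‘human error’ would then be a team whose members have small pairwise distances to each other but a large distance to R_god(s).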

A few thoughts that I had this weekend:

Tasks (or joint activities if you like) are just specialized cases of
situations — and only exist in the representation of a particular agent.
Each agent involved in a joint activity will have a different understanding
of what the overall goals are, and what his/her/its particular role is. Thus
alignment (even if restricted to inference tasks) cannot be defined without
reference to the agents’ varying roles in the activity (think of the
asymmetric information flow in Steve’s breast cancer work). Alignment seems
to be something to the effect of being able to optimally complete one’s own
role in a joint activity based on being able to accurately represent the
other agents involved in the activity.

That is, the docs were aligned to the machines because they were able to
accurately represent when the machines would succeed/fail and then use this
information correctly. On the other hand, the machines were aligned to the
doctors because they could accurately predict what information the doctors
really needed (on say microcalcifications) and then share this information.
The doctors were not sharing any information with the machine and were not
helping the machine to make any decisions, because that was not the role of
the doctor in that joint activity — this should not mean that the doctors
were not aligned with the machines.

If the joint activity is an agent trying to be aligned to the environment,
then the role of the agent is just to understand the environment and so
alignment just means a good model of the environment. I’m not sure if the
idea of a god representation is really all that useful — is this ever
knowable? Perhaps I misunderstood that part of your comments.

Jared

Inference tasks can be a joint activity – we would like to consider alignment to be associated with a joint activity in general – where joint activity has the characteristics of the Klein article (maybe I should revisit that this week in QUEST) – but among those characteristics is ‘interpredictability’ – that is based on ‘common ground’ – which is based on a representation of the other agent’s ‘pertinent knowledge’ = pertinent for the joint activity being considered, pertinent beliefs and pertinent assumptions — I was thinking ‘alignment’ would be our measurement of that common ground —

Perfect alignment would result in, as you say, my optimal use of what you as an agent are providing me for the given task, because I can map it – using my representation of your pertinent knowledge, beliefs and assumptions – into implications in my own representation of the data to be considered to accomplish the given inference task

With respect to my statement about god’s representation – I was thinking I wanted to address Mike’s point of ‘aligning’ to the environment – if I was going to restrict my use of the word alignment, as above, to be between two agents, then what is the second agent when I’m concerned about my representation and reality – so I could call that truth, but to hold it in the same formalism I added ‘god’ – so if god is truth then she has the representation that is reality – and I can attempt to align with her representation – and all the above still fits – so I could talk about the alignment of my representation to the world – then I could talk about what in that world I need my representation to have good alignment with for a given inference task

Capt Amerika

Actually, it was Terry’s point that got all this started. I just picked up
on it.

In general, I agree with Jared, and disagree with Steve. I do not see how
you can say “situation awareness” is different from “joint activity”. A
representation is a representation.

Further, there is not just one representation for any situation, but many;
and they change from moment to moment. That is, alignment is dynamic.

The challenge is how to talk about differences in representations in a
meaningful fashion.

The other dimension (i.e., other than alignment) that has (partially)
dropped out from Friday is objective versus subjective knowledge of the
world and situations.

Notionally (because I have never gotten a firm answer on this), let’s say
that Sys1 maintains a representation of the world, and Sys2 creates and
maintains a representation of agent goals and tasks. Further Sys2 can
maintain two representations, one of its own state, and one of another’s
state (via its simulation capability).

In this situation, Sys1’s representation is probably more objective than
Sys2’s. It is not completely objective because we know that humans scan the
environment in different ways and pick up different information. Further,
one can easily imagine a situation where three individuals are looking at
the world using different sensors (e.g., infrared, radar, and human
eyeball). If it is dark out, it is obvious they will have different
(subjective) representations of the “objective” world.

Information in Sys2 (or perhaps even Sys1) is a mapping into a “meaning space”. The information in Sys1 leads to (triggers, creates) mappings from physical objects to denotations and connotations that are divorced from the details of the physical objects. The meaning space emphasizes connections between things (objects) and how these sets of objects may evolve in a temporal sense: how they can be used to accomplish goals, or how they might impede goals.

If I understand Steve, he wants to limit the term alignment to comparisons between representations in what I am calling the meaning space and what he is calling the joint activity space. That’s OK, but there are still “alignment” issues between those representations and the “objective” world.
Imagine a situation where there are several individuals performing a task
and two individuals are using different sensors (or the same type of sensors
with different characteristics, i.e., radars with different capabilities).
There are going to be joint activity alignment problems in these situations.

I would favor a more general approach that measured alignment across a range
of representations and times.

Mike
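
A hypothetical data-structure sketch of Mike’s notional split (nothing here is settled; the field names are mine):

    # Hypothetical sketch of the notional split above: Sys1 holds a
    # (sensor-dependent, hence still somewhat subjective) world
    # representation; Sys2 holds goals/tasks plus two state models, one of
    # itself and one of another agent via its simulation capability.

    from dataclasses import dataclass, field

    @dataclass
    class Sys1:
        world_model: dict = field(default_factory=dict)  # shaped by the agent's sensors

    @dataclass
    class Sys2:
        goals_and_tasks: dict = field(default_factory=dict)
        own_state: dict = field(default_factory=dict)
        simulated_other_state: dict = field(default_factory=dict)  # theory-of-mind simulation

    @dataclass
    class Agent:
        sys1: Sys1 = field(default_factory=Sys1)
        sys2: Sys2 = field(default_factory=Sys2)

The radar/infrared/eyeball example in the email is then just three agents whose Sys1 world models were filled in by different sensors, so alignment problems can arise before the meaning space is even reached.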

I don’t say situational awareness is different from joint activity – I just needed to account for the ‘objective situation’ – the reality of the world – if I want to discuss alignment with that, I needed to capture it in an agent formalism to keep with the joint activity thread –

Completely agree alignment is dynamic

With respect to objective versus subjective – I would maintain even a sys1 ‘objective representation’ is subjective, as it was developed from the critter’s/agent’s unique experiences and sensors … – but I do like the point of breaking out the differences in aligning with sys1 versus sys2 – and I really like trying to capture the idea of where we are talking about alignment occurring – in sys2 – although again I think RH has plenty of examples of team training where we teach teams of responders to react reflexively to each other – an example of alignment at the sys1 level – I’ve modeled your representation and encountered it so often that I push it down to sys1 for quick, efficient responses

With respect to your other sys1 / sys2 question – sys2 has to maintain a representation (simulation) of the world – not just sys1 – it is the backcloth that it uses to weave its narrative – it, as you point out, is subjective and can allow many entries that are imagined versus measured (they are inferred to exist to allow stability, consistency and usefulness)

I also do like the flexibility of alignment over space and time

Really good discussion –

Capt Amerika

Steve,

I think that a way to get around your problem is to concede that alignment itself is subjective — that is, it depends on the person/thing doing the measuring of the difference between an agent’s representation and reality. In this setup, the measuring agent would just use her (subjective!) world representation as the “truth” and there would be no need for an objective representation of reality.

Jared
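
In sketch form (my notation, not Jared’s), the proposal is that every alignment score carries an index for the agent doing the measuring:

    \mathrm{align}_C(A, \mathrm{world}) \;=\; \mathrm{sim}\big( \hat{R}^{C}_{A},\; R^{C}_{\mathrm{world}} \big)

where \hat{R}^{C}_{A} is agent C’s estimate of A’s representation and R^{C}_{world} is C’s own (subjective) representation of reality, standing in as ‘truth’; no objective god’s-eye representation is needed.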

Again thanx for keeping this going – I like the ongoing discussion

With respect to the captain being too eager – I believe his last input had the potential attackers very close and closing on his position – he wasn’t aligned with reality, nor with the ped crew’s representation, and for that matter probably not with the air crew’s – had he been, I suspect he would not have said hit them

The newness of ‘alignment’ in our discussions is our tenet that suggests that alignment improves performance, since the alignment results in a more efficient reduction of uncertainty in the decision-making agent’s representation: the agent is able to better assimilate pertinent inputs from the agent it is aligned to, and that other agent, being engaged in a joint activity (to include the basic compact – they are working together …), is doing its best to help in the inference task
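
A hedged way to state that tenet formally, in my notation only: write the deciding agent A’s uncertainty about the quantity S the inference task needs as a conditional entropy, and treat alignment as what makes B’s input cheap to assimilate:

    \Delta H \;=\; H\big(S \mid R_A\big) \;-\; H\big(S \mid R_A,\; \phi_{A \leftarrow B}(m_B)\big)

where m_B is the pertinent input agent B provides, and \phi_{A\leftarrow B} is A’s mapping of that input through its model of B’s pertinent knowledge, beliefs and assumptions. The tenet is then that better alignment yields a larger expected \Delta H per input, i.e. a more efficient reduction of uncertainty in the decision-making agent’s representation.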

In this discussion I’m not trying to fix the human-to-human chain of joint activity – but I’m trying to understand the computer tool to human agent relationship within the context of the current morass of humans trying to do inference tasks like the LA Times story – and recall it is my conjecture that at the end of the day we only have hope if we can put all the pieces (humans and computers) into a common framework – so the discussion about human-to-human alignment is relevant to my end goal

With respect to your paragraph:

So what is alignment to me? Alignment is the assessment of my theory of mind of the other given his/her theory of mind of me. To have a conversation with someone, in my mind, is to continue to evaluate that assessment so we can discuss them (information exchanging). The attached file is meant to represent it. But alignment does not necessarily need to end with agreement or trust. I may fully understand Person or Agent X (because I am fully aligned to him/her) but I still don’t agree with or trust him/her.

I’m ok with parts of where you were going – yes, alignment is a measure of my alignment to another agent’s representation – thus the theory of mind idea is correct – and yes, as part of that representation it includes my assessment of whether that other agent has modeled me correctly or not – and yes, I agree alignment does NOT imply agreement or trust –

I do like keeping separate for now the sources of error – external sensing / phenomenology sources of error and internal representational sources of error (what you are calling internal noise sources)

Capt Amerika

I do agree alignment is an internal measure of some agent – thus subjective – I can never know how much I’m aligned with anything else – whether it is another agent or the world – thus, as Brian suggests, I’m always dinking around with my interactions with the other agent / the world to refine my alignment – ‘if this means that to the other agent, then if I give them this stimulus they will respond this way — oh shit, they didn’t — that must mean this doesn’t mean that; let me try this stimulus …’

You can imagine going through the same interactions with the world – as I refine my alignment with reality – that is what my grandson ‘Boo’ does — he experiments, then refines the experiments – that is what dating couples do – they put in stimuli and measure responses, attempting to generate a good enough model of the person to determine if they are worth marrying – Anne did this with her fiancé (hopefully her model is correct)

The world continually changes, as do the people in it – so as Mike points out, alignment never ends

Capt Amerika


Weekly QUEST Discussion Topics and News, Feb 17th

February 16, 2012

QUEST Discussion Topics and News Feb 17

This week will again focus on ‘alignment’. Specifically we want to have a discussion on improving Processing, Exploitation, Analysis, Production and Dissemination (PAD). Traditional views of this process focus on the PAD cell working somewhat in a vacuum, versus making PAD a ‘joint activity’ with the consumer of the products that are being disseminated, as well as a joint activity of the cell members and the tools we are providing them. A joint activity approach is completely consistent with what we have defined as Mission Driven PCPAD. This requires that all involved in the coordinated joint activity work to enable three key characteristics: interpredictability, common ground, and the ability to redirect. We will use the article by Gary Klein, “Common Ground and Coordination in Joint Activity,” to frame our discussion. Please keep in mind that we are interested in our computer aides also being participants in the joint activity, and specifically in how we can accomplish that. Then there is the need to agree on the pieces of the representations and communications required to achieve ‘joint activity’, so that our math team can formulate the common math framework for human and computer participants in the joint activity.

Klein2004_Common ground and coordination in joint activity

Weekly QUEST Discussion Topics and News, Feb 10

February 9, 2012

QUEST Discussion Topics and News Feb 10

This week, we are excited to turn the stage over to some respected colleagues working on the challenge of Flexible Autonomy. Andy Rice and Bruce Preiss from RY and RH (Layered Sensing Exploitation Division) will lead the discussion. Below please find a more in-depth summary from Andy.

“Advances in real-time processing and expert algorithms have made automation of analysis possible today. However, most analysts have little confidence in autonomous analysis. This effort seeks to find a way to introduce ‘trusted’ autonomy to analysts to reduce analyst workload and increase the quality of their products. The utility and influence of automatic algorithms spans the spectrum from no involvement to full control. Each analyst, mission, system, and task has unique needs and requirements; therefore, flexibility is needed to satisfy this broad operating space. This concept has been coined ‘Flexible Autonomy’. As a first pass through this flexible autonomy space, the team will be focusing on multiple-target tracking in a wide-area-motion-imagery (WAMI) context. Specifically, the track-stitching function will be analyzed to determine the state of the art in automated algorithms, the human analyst’s use of this product will be researched, and a prototype flexible autonomy WAMI track stitcher will be developed.”
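
As a rough illustration only (none of this reflects the team’s actual design), a flexible-autonomy track stitcher could expose a single confidence threshold that moves work between the algorithm and the analyst:

    # Hypothetical sketch: tracklet pairs the algorithm is confident about
    # are stitched automatically; the rest are queued for the analyst.
    # The threshold is the 'autonomy knob' spanning the spectrum from
    # no algorithm involvement to full algorithm control.

    def stitch_tracklets(candidate_pairs, score_fn, autonomy_threshold=0.8):
        """candidate_pairs: iterable of (tracklet_a, tracklet_b) tuples;
        score_fn: returns a match confidence in [0, 1]."""
        auto_stitched, needs_analyst = [], []
        for a, b in candidate_pairs:
            confidence = score_fn(a, b)
            if confidence >= autonomy_threshold:
                auto_stitched.append((a, b, confidence))
            else:
                needs_analyst.append((a, b, confidence))
        return auto_stitched, needs_analyst

Sweeping autonomy_threshold from above 1.0 (nothing is stitched automatically) down to 0.0 (everything is) walks the no-involvement-to-full-control spectrum described above.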

Next 711 HPW “Tech Talk” – 9 March 2012: “Intuitive Decision Making”

February 6, 2012


All –

The second Tech Talk is scheduled
for 9 March 2012, 1000-1100, in the Bldg. 441 Auditorium. This talk will be
broadcast via VTC to the Tri-Service Research Lab (TSRL) for those located
at Ft Sam Houston TX.

Dr Robert Patterson, Senior Research Psychologist, assigned to the Human
Effectiveness Directorate’s Training Division (RHA) will talk about
“Intuitive Decision Making,” as he describes below:

“Intuitive decision making refers to the making of decisions based upon
situational pattern recognition. This type of decision making is largely
unconscious, fast, relatively effortless, and less affected by stress than
decision making based on conscious deliberation. The cognitive systems that
support intuitive decision making are relatively primitive, and humans
engage in this type of decision making virtually every day of their lives. It
is therefore surprising that intuitive decision making has attracted little
experimental research over the years. I am attempting to fill this gap in
understanding by studying the underlying cognitive mechanisms of this type
of decision making. This, in turn, should lead to the development of methods
for training intuitive decision making in various Air Force applications. To
date, we have found that implicit learning can be a means for developing
intuitive decision making provided that a wide range of patterns and
environmental cues are experienced during training. I am hopeful that the
findings discovered by this program of research can be extended to a number
of Air Force applications where training decision making based on
situational patterns could be important, such as patterns of network
activity that might emanate from a cyber attack, patterns of signals coming
from sensor feeds within the context of ISR, or patterns of biological
motion created by someone wearing a bomb vest.”

Weekly QUEST Discussion Topics and News, Feb 3

February 3, 2012

We have a couple of discussion topics lined up for this week’s meeting, while we do our best to reschedule the presentation from Dr. Brown on Active Authentication.

1. Brian Tsou will present to the group a proposed experiment that he has been working up with help from Dr. Grimalia and Trevor Bihl. We will share afterwards any presented material that he is comfortable making public.
2. Math discussion of alignment, particularly some recent work/discussions on fusion, including some new definitions to pass by the group. Dr. Oxley and his math team will likely lead this section of the discussion.
3. As a follow-on to the second topic, we may hit the group with a new proposed definition for ‘context’, including what kinds of agents can take in and share context, as well as how it affects their internal representation compared to other stimuli.

QUEST Discussion Topics and News Feb 3