
Email discussion on Alignment

Seems to me there is another piece to the alignment issue that I’ve not
addressed. That is, when I use the term alignment I’m implying there are
axes in my representation that I’m attempting to associate with aspects of
another agent’s internal representation in order to accomplish some inference

Capt amerika

Currently, Jared, Kirk and I are using “information sharing” in a generic way. I believe part of the “information” can be the internal representation. But agent B has to transform its internal representation into a form that agent A can understand/read/interpret (choose a word that makes sense). If agent B cannot transform it (that is, no transformation exists for agent B) then B is not aligned with A for the task in question.


Following up on Mark’s comments — if agent B’s internal representation with respect to a given inference task is aligned (in the sense that Steve mentions) with his representation of agent A’s representation, then this transformation of information into something that agent A can (presumably) easily digest should be more efficient. So it seems that there are two pieces to alignment — one that depends only on the agent sharing the information (how well does his own representation match up with his approximation of the other agent’s representation), and another that requires looking at both agents (how “good” a theory of mind does one agent have with respect to another).
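These two pieces can even be given a toy numerical reading. A minimal sketch, assuming each representation can be summarized as a vector of weights over shared situation features (the vectors and the cosine-similarity choice are illustrative assumptions, not anything from the thread):

```python
import math

def cosine_similarity(u, v):
    """Similarity between two representation vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Agent B's own representation of the task-relevant situation (invented numbers).
b_own = [0.9, 0.1, 0.4]
# B's model of A's representation -- B's "theory of mind" of A.
b_model_of_a = [0.8, 0.2, 0.5]
# A's actual representation (unknowable to B, but usable by an outside analyst).
a_actual = [0.7, 0.3, 0.6]

# Piece 1: depends only on B -- how close is B's own view to his model of A?
piece1 = cosine_similarity(b_own, b_model_of_a)
# Piece 2: requires both agents -- how good is B's theory of mind of A?
piece2 = cosine_similarity(b_model_of_a, a_actual)
```

Piece 1 is computable by B alone; piece 2 is only computable from outside, which foreshadows the later point that alignment is subjective.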

The trust issue seems huge here also — agent B might be perfectly aligned with agent A and providing exactly the right information, but if agent A does not trust agent B then one would not expect agent A to act on agent B’s information, rendering it useless. I’m not sure that I agree with Mike that we need different terms for human/human, human/machine and machine/machine trust. I’m a mathematician, so all agents are spheres, or something like that….


After thought on alignment… What do you call a situation where X number
of operators are internally aligned on a task or mission, but not aligned
with the “real” world?

Human error.


Good point – but my position would be that agents who are mutually aligned can
sometimes do the right thing even when their representation is NOT physically
correct –

I haven’t been using the word alignment to capture the ‘truth’ that is the
mutual information between the representation and the situation in the world
– but following your thought process, I would have to call the case where the
team is all mutually aligned, their representation has high mutual agreement
but is in fact based on a situational representation that is NOT correct, and
yet they do the right action in terms of improving their performance = LUCK;
but if they do the wrong action = human error

Capt amerika

After a couple of dog walks (I’m grand-dog sitting this weekend for Adam) – I retract my email below.

To eventually take the math to the issue of a Theory of Knowledge, I have to account for ‘truth’ (probably what Terry was emphasizing and I didn’t get it) – but by truth I have to modulate the concept to mean those aspects of reality that can be posed as a situation applicable to the current inference task – if god were solving this inference task, this is the physically real situation she would use to accomplish it –

So going down this path, Mike’s ‘human error’ is a measure of the misalignment of the agent or team of agents from god’s representation of that relevant situation.

So the walk-aways: although the word alignment might be taken on an axis-by-axis basis, I’m hoping we will use it only in association with combinations of axes of the representation = situations — so the degree of alignment will be associated with the similarity of the situation-based representations
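One toy way to make “degree of alignment as similarity of situation-based representations” concrete – a sketch under the assumption that each agent’s representation can be summarized as a probability distribution over the same candidate situations (total variation distance is my choice here, not something from the thread):

```python
def misalignment(p, q):
    """Total-variation distance between two distributions over the same
    candidate situations: 0.0 = perfectly aligned, 1.0 = maximally misaligned."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# Each entry is one agent's belief weight on a candidate situation (invented).
agent_x = [0.7, 0.2, 0.1]
agent_y = [0.6, 0.3, 0.1]
degree = 1.0 - misalignment(agent_x, agent_y)  # 0.9: highly aligned
```

Comparing one agent’s distribution to a “god” distribution over the same situations would give the alignment-to-reality reading discussed below, in the same formalism.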

So now back to my homework – reading another Klein article he sent after the meeting yesterday AND thinking about an example to drive home ‘alignment’

Capt amerika

A few thoughts that I had this weekend:

Tasks (or joint activities if you like) are just specialized cases of
situations — and only exist in the representation of a particular agent.
Each agent involved in a joint activity will have a different understanding
of what the overall goals are, and what his/her/its particular role is. Thus
alignment (even if restricted to inference tasks) cannot be defined without
reference to the agents’ varying roles in the activity (think of the
asymmetric information flow in Steve’s breast cancer work). Alignment seems
to be something to the effect of being able to optimally complete one’s own
role in a joint activity based on being able to accurately represent the
other agents involved in the activity.

That is, the docs were aligned to the machines because they were able to
accurately represent when the machines would succeed/fail and then use this
information correctly. On the other hand, the machines were aligned to the
doctors because they could accurately predict what information the doctors
really needed (on say microcalcifications) and then share this information.
The doctors were not sharing any information with the machine and were not
helping the machine to make any decisions, because that was not the role of
the doctor in that joint activity — this should not mean that the doctors
were not aligned with the machines.

If the joint activity is an agent trying to be aligned to the environment,
then the role of the agent is just to understand the environment and so
alignment just means a good model of the environment. I’m not sure if the
idea of a god representation is really all that useful — is this ever
knowable? Perhaps I misunderstood that part of your comments.


Inference tasks can be a joint activity – we would like to consider
alignment to be associated with a joint activity in general – where a joint
activity has the characteristics of the Klein article (maybe I should
revisit that this week in quest) – but among those characteristics is
‘interpredictability’ – which is based on ‘common ground’ – which in turn is
based on a representation of the other agent’s pertinent knowledge (pertinent
for the joint activity being considered), pertinent beliefs and pertinent
assumptions — I was thinking ‘alignment’ would be our measurement of that
common ground —

Perfect alignment would result in, as you say, my optimal use of what you as
an agent are providing me for the given task, because I can map it – using my
representation of your pertinent knowledge, beliefs and assumptions – into
implications within my own representation of the data to be considered to
accomplish the given inference task

With respect to my statement about god’s representation – I was thinking I
wanted to address Mike’s point of ‘aligning’ to the environment – if I were
going to restrict my use of the word alignment, as above, to be between two
agents, then what is the second agent when I’m concerned about my
representation and reality? I could call that truth, but to hold it in the
same formalism I added ‘god’ – so if god is truth then she has the
representation that is reality – and I can attempt to align with her
representation – and all the above still fits – so I could talk about the
alignment of my representation to the world – and then about what in that
world I need my representation to have good alignment with for a given
inference task

Capt amerika

Actually, it was Terry’s point that got all this started. I just picked up
on it.

In general, I agree with Jared, and disagree with Steve. I do not see how
you can say “situation awareness” is different from “joint activity”. A
representation is a representation.

Further, there is not just one representation for any situation, but many;
and they change from moment to moment. That is, alignment is dynamic.

The challenge is how to talk about differences in representations in a
meaningful fashion.

The other dimension (i.e., other than alignment) that has (partially)
dropped out from Friday is objective versus subjective knowledge of the
world and situations.

Notionally (because I have never gotten a firm answer on this), let’s say
that Sys1 maintains a representation of the world, and Sys2 creates and
maintains a representation of agent goals and tasks. Further Sys2 can
maintain two representations, one of its own state, and one of another’s
state (via its simulation capability).

In this situation, Sys1’s representation is probably more objective than
Sys2’s. It is not completely objective because we know that humans scan the
environment in different ways and pick up different information. Further,
one can easily imagine a situation where three individuals are looking at
the world using different sensors (e.g., infrared, radar, and human
eyeball). If it is dark out, it is obvious they will have different
(subjective) representations of the “objective” world.

Information in Sys2 (or perhaps even Sys1) is a mapping into a “meaning
space”. The information in Sys1 leads to (triggers, creates) mappings from
physical objects to denotations and connotations that are divorced from the
details of the physical objects. The meaning space emphasizes connections
between things (objects) and how these sets of objects may evolve in a
temporal sense: how they can be used to accomplish goals, or how they might
impede goals.

If I understand Steve, he wants to limit the term alignment to comparisons
between representations in the (what I am calling the) meaning space and he
is calling the joint activity space. That’s OK, but there are still
“alignment” issues between those representations and the “objective” world.
Imagine a situation where there are several individuals performing a task
and two individuals are using different sensors (or the same type of sensors
with different characteristics, i.e., radars with different capabilities).
There are going to be joint activity alignment problems in these situations.

I would favor a more general approach that measured alignment across a range
of representations and times.


I don’t say situational awareness is different from joint activity – I just needed to account for the ‘objective situation’ – the reality of the world – and if I want to discuss alignment with that, I needed to capture it in an agent formalism to keep with the joint activity thread –

Completely agree alignment is dynamic

With respect to objective versus subjective – I would maintain even a sys1 ‘objective representation’ is subjective, as it was developed from the critter/agent’s unique experiences and sensors… – but I do like the point of breaking out the differences in aligning with sys1 versus sys2 – and I really like trying to capture the idea of where the alignment is occurring – in sys2, although again I think RH has plenty of examples of team training where we teach teams of responders to react reflexively to each other – an example of alignment at the sys1 level – I’ve modeled your representation and encounter it so often that I push it down to sys1 for quick, efficient responses

With respect to your other sys1 / sys2 question – sys2 has to maintain a representation (simulation) of the world, not just sys1 – it is the backcloth that sys2 uses to weave its narrative – and, as you point out, it is subjective and can allow many entries that are imagined versus measured (they are inferred to exist to allow stability, consistency and usefulness)

I also do like the flexibility of alignment over space and time

Really good discussion –

Capt amerika


I think that a way to get around your problem is to concede that alignment itself is subjective — that is, it depends on the person/thing doing the measuring of the difference between an agent’s representation and reality. In this setup, the measuring agent would just use her (subjective!) world representation as the “truth” and there would be no need for an objective representation of reality.


Again thanx for keeping this going – I like the ongoing discussion

With respect to the captain being too eager – I believe his last input had the potential attackers very close and closing on his position – he wasn’t aligned with reality, nor with the ped crew’s representation, and for that matter probably not with the air crew’s – had he been, I suspect he would not have said hit them

The newness of ‘alignment’ in our discussions is our tenet suggesting that alignment improves performance: alignment results in a more efficient reduction of uncertainty in the decision-making agent’s representation, because that agent is able to better assimilate pertinent inputs from the agent it is aligned to, and that other agent, being engaged in a joint activity (to include the basic compact – they are working together…), is doing its best to help in the inference task
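The “more efficient reduction of uncertainty” tenet can be illustrated with Shannon entropy. A hedged sketch: the binary hypothesis space and the particular posterior numbers below are invented for illustration only:

```python
import math

def entropy(dist):
    """Shannon entropy, in bits, of a belief distribution over hypotheses."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Decision-making agent's belief over two hypotheses before any input.
prior = [0.5, 0.5]
# After assimilating an input from a well-aligned agent (correctly interpreted).
posterior_aligned = [0.9, 0.1]
# After the same input from a poorly aligned agent (partially misread).
posterior_misaligned = [0.6, 0.4]

# Uncertainty removed per message -- the aligned channel removes more.
reduction_aligned = entropy(prior) - entropy(posterior_aligned)
reduction_misaligned = entropy(prior) - entropy(posterior_misaligned)
```

On these toy numbers the aligned exchange removes roughly half a bit of uncertainty, the misaligned one only a few hundredths of a bit: same message, different efficiency.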

In this discussion I’m not trying to fix the human-to-human chain of joint activity – rather, I’m trying to understand the computer tool to human agent link within the context of the current morass of humans trying to do inference tasks like the LA Times story – and recall it is my conjecture that at the end of the day we only have hope if we can put all the pieces (humans and computers) into a common framework — so the discussion about human-to-human alignment is relevant to my end goal

With respect to your paragraph:

So what is alignment to me? Alignment is the assessment of my theory of mind of the other given his/her theory of mind of me. To have a conversation with someone, in my mind, is to continue to evaluate that assessment so we can discuss them (information exchanging). The attached file is meant to represent it. But alignment does not necessarily need to end with agreement or trust. I may fully understand Person or Agent X (because I am fully aligned to him/her) but I still don’t agree with or trust him/her.

I’m OK with parts of where you were going – yes, alignment is a measure of my alignment to another agent’s representation – thus the theory of mind idea is correct – and yes, as part of that representation, it includes my assessment of whether that other agent has modeled me correctly or not – and yes, I agree alignment does NOT imply agreement or trust —

I do like keeping the sources of error separate for now – external sensing / phenomenology sources of error and internal representational sources of error, what you are calling internal noise sources

Capt amerika

I do agree alignment is an internal measure of some agent – thus subjective
– I can never know how much I’m aligned with anything else, whether it is
another agent or the world – thus, as Brian suggests, I’m always dinking
around with my interactions with the other agent / the world to refine my
alignment – ‘if this means that to the other agent, then if I give them this
stimulus they will respond this way — oh shit, they didn’t — that must mean
this doesn’t mean that; let me try this stimulus…’
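That ‘try a stimulus, check the response, revise’ cycle can be sketched as a predict-observe-update loop. Everything below (the dictionary model, the stimuli, the replace-on-surprise update rule) is an invented illustration of the idea, not a claim about how it should actually be formalized:

```python
def refine_alignment(model, interactions):
    """Nudge my model of the other agent toward what I actually observe.

    model: my current guess at the other agent's response to each stimulus.
    interactions: (stimulus, observed_response) pairs from real interaction.
    """
    for stimulus, observed in interactions:
        predicted = model.get(stimulus)
        if predicted != observed:
            # "oh shit, they didn't" -- revise the model on surprise.
            model[stimulus] = observed  # simplest possible update rule
    return model

# My model of another agent, then two rounds of interaction.
my_model = {"joke": "laugh", "hard question": "silence"}
observed = [("joke", "laugh"), ("hard question", "thoughtful answer")]
my_model = refine_alignment(my_model, observed)
# my_model now reflects the surprising response to "hard question".
```

The same loop works unchanged when the “other agent” is the world itself, which is the point of the next paragraph.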

You can imagine going through the same interactions with the world, as I
refine my alignment with reality – that is what my grandson ‘Boo’ does — he
experiments, then refines the experiments – that is what dating couples do –
they put in stimuli and measure responses, attempting to generate a good
enough model of the person to determine if they are worth marrying – Anne
did this with her fiancé (hopefully her model is correct)

The world continually changes, as do the people in it – so as Mike points out,
alignment never ends

Capt amerika
