
Virtual Discussion following the ‘QUEST for Flexible Autonomy’ presentation

Everyone

Thanks for including me. The discussions look interesting; I wish I were there. The paradox you point out is the crux of the issue. And it is huge.

In an application that is close to me, S&T can give the user more data.  But in fact, more data makes the user less effective.  What he needs is more actionable info, which is usually a combination of technology outputs, schmoozed together with SME human judgment… not easy to do today, so the user isn’t getting it!  What he gets is even more data than he got yesterday… makes him even less effective than he was yesterday… and sours him on S&T!  The user has come to view S&T as part of the problem, not part of the solution.

I suppose the clearest real example of that is a non-AF one: 9/11. S&T was providing the user with data that could have prevented 9/11. The problem: the data was not in the form of actionable info, so it was lost, mixed into the tidal wave of other, meaningless data being piped to the user. The solution after 9/11? Collect even MORE data on more people and give THAT to the user.

We need to focus less on ‘automated dot generation’ and more on ‘automated dot CONNECTION’.
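
(To make "automated dot connection" concrete, here is a minimal sketch, in Python, of one way to read it: link raw records that share an identifier and surface the connected clusters for an analyst, instead of handing over the records one by one. All field names and values below are invented.)

```python
from collections import defaultdict

# Minimal sketch: "dot connection" as record linkage. Records (the
# "dots") are linked whenever they share an identifying attribute;
# the connected clusters are the candidate actionable items.

records = [
    {"id": "r1", "name": "Alpha", "phone": "555-0101"},
    {"id": "r2", "name": "Bravo", "phone": "555-0101"},  # shares phone with r1
    {"id": "r3", "name": "Bravo", "phone": "555-0199"},  # shares name with r2
    {"id": "r4", "name": "Delta", "phone": "555-0142"},  # an isolated dot
]

# Index record ids by each (field, value) pair to find shared identifiers.
by_value = defaultdict(list)
for rec in records:
    for field in ("name", "phone"):
        by_value[(field, rec[field])].append(rec["id"])

# Build an adjacency list: an edge between any two records sharing a value.
adj = defaultdict(set)
for ids in by_value.values():
    for a in ids:
        for b in ids:
            if a != b:
                adj[a].add(b)

# Depth-first search to report connected components -- the clusters an
# analyst would review, instead of four seemingly unrelated records.
seen = set()
for rec in records:
    if rec["id"] in seen:
        continue
    stack, cluster = [rec["id"]], []
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        cluster.append(node)
        stack.extend(adj[node] - seen)
    print(sorted(cluster))  # prints ['r1', 'r2', 'r3'] and then ['r4']
```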

V/R
Guns Atoll

Charles Sadowski Jr., Contractor

Chuck

Well said!  I agree totally.  What the user needs is the ability to ingest/create actionable information.

Remember that when all you have is a hammer, everything looks like a nail. Data is the hammer today.

Kevin Priddy

Hi Kevin and Chuck,

I have been following your thread of thought and agree totally with your arguments. I have the following to add:

What we need, I think, is the ability to package information in a way that lets the human chunk it into meaningful units and patterns. As George Miller noted in a famous study in the 1950s, humans can chunk many low-bit pieces of information into a smaller number of higher-bit pieces, a process driven largely by meaning. de Groot in the 1960s showed that master chess players can reproduce a complicated chess position almost exactly after it was exposed to them for only 5 seconds, yet the chess masters were no better than novices when trying to reproduce a board of randomly placed pieces. It was the meaningful arrangement of the pieces in the former situation that allowed the chess masters to encode the positions of the pieces in meaningful chunks and patterns and thus remember them all. Follow-up work by Chase and Simon in the 1970s showed that methods could be developed to actually measure the size and number of the chunks the chess masters employed in reproducing various chess arrangements from memory.

Thus the key, I think, to handling the explosion of information is to get a handle on how to present it in meaningful ways that allow individuals to easily chunk it in working memory and thereby encode it into long-term memory for later use. Methods developed by Chase and Simon might be useful for designing such a research project.
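
(As a rough illustration of the Chase and Simon pause-based measurement: during recall, pauses between successive items longer than about two seconds were treated as chunk boundaries. The sketch below applies that criterion to an invented recall record; the timestamps and piece placements are made up.)

```python
# Sketch of the Chase & Simon chunk-measurement idea: during recall,
# a pause between successive items longer than a threshold (~2 s in
# their 1973 chess studies) is treated as a chunk boundary.

PAUSE_THRESHOLD = 2.0  # seconds between successively recalled items

def segment_chunks(recall_times, items):
    """Split a recalled sequence into chunks at long pauses."""
    chunks, current = [], [items[0]]
    for prev_t, t, item in zip(recall_times, recall_times[1:], items[1:]):
        if t - prev_t > PAUSE_THRESHOLD:
            chunks.append(current)  # long pause -> start a new chunk
            current = []
        current.append(item)
    chunks.append(current)
    return chunks

# Eight recalled chess-piece placements with recall timestamps (seconds).
items = ["Ke1", "Qd1", "Ra1", "Rh1", "Pe4", "Pd4", "Nf3", "Bc4"]
times = [0.0, 0.4, 0.9, 1.3, 4.1, 4.6, 7.2, 7.8]

for chunk in segment_chunks(times, items):
    print(len(chunk), chunk)
# 4 ['Ke1', 'Qd1', 'Ra1', 'Rh1']   <- the back rank, recalled as one chunk
# 2 ['Pe4', 'Pd4']                 <- the center pawns
# 2 ['Nf3', 'Bc4']                 <- the developed pieces
```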

Cheers,

Robert Patterson.

Robert

Thank you for the insight.  You’ve now destroyed my notion of Fischer having superhuman memory recall.  Maybe it’s true, but I suspect the studies you highlighted would debunk the theory.  I agree that we scientists and engineers need to discover a way to chunk the data into manageable “information chunks.”  We all know that humans can only handle a limited number of chunks (7 ± 2) at one time, but those chunks can be extremely complicated in content.  That’s probably why the chess masters could do well on non-random arrangements of chess pieces.

I look forward to working with everyone on finding better ways for us to present/represent information to the warfighter.

Kevin Priddy

Great meeting, guys. What you are discussing in this chain is one of the S&T gaps that we will point out in the Flexible Autonomy talk: we need a theory of alignment. We will define alignment as we did in today’s meeting, i.e., the ability of one agent to provide ‘context’ to another agent, and we will define context as the interagent communication that improves performance (see Oxley pubs)…
Capt Amerika
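
(A toy sketch of the alignment definition above, assuming nothing beyond what was said in this thread: agent B performs a task, agent A can pass it a "context" message, and alignment is read off as B's performance improvement with versus without A's context. All agents, thresholds, and data here are hypothetical.)

```python
# Toy sketch of "context as interagent communication": agent B labels
# an ambiguous sensor reading; agent A can pass a context message that
# disambiguates it. The point is only that alignment shows up as B's
# performance improvement with versus without A's context.

def agent_a_context(scene):
    """Agent A shares what it knows about the scene."""
    return {"region": scene["region"]}

def agent_b_classify(reading, context=None):
    """Agent B labels a reading; the middle band is ambiguous alone."""
    if reading < 0.4:
        return "benign"
    if reading > 0.7:
        return "threat"
    # Ambiguous band: fall back on context when it is available.
    if context and context.get("region") == "hostile":
        return "threat"
    return "benign"

scenes = [
    {"reading": 0.55, "region": "hostile", "truth": "threat"},
    {"reading": 0.60, "region": "friendly", "truth": "benign"},
    {"reading": 0.80, "region": "hostile", "truth": "threat"},
]

for use_context in (False, True):
    correct = sum(
        agent_b_classify(s["reading"],
                         agent_a_context(s) if use_context else None)
        == s["truth"]
        for s in scenes
    )
    print(f"context={use_context}: {correct}/{len(scenes)} correct")
# context=False: 2/3 correct
# context=True: 3/3 correct
```
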
Hello all,

I’ve been working the other end of this problem for a few years and have some ideas about how to proceed. As part of the Robust Decision Making project, our team is trying to develop a cognitive model of the sensemaking process in the reverse engineering problem domain. One of the main differences between this and some of the early reasoning work on blocks world, cryptanalysis, or Tower of Hanoi problems is the amount (and role) of background knowledge and the amount of state in the problem environment, which forces the problem-solver to select and focus attention on the portion of that state which is relevant.
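
(For illustration, a minimal sketch of that kind of relevance-driven attention selection: score each element of a large problem state against the current goal and attend only to the top few. The reverse-engineering flavor, tags, and scores are all invented; a real model would use far richer relevance cues.)

```python
# Minimal sketch of goal-driven attention selection over a large
# problem state: score each state element by its overlap with the
# current goal and attend only to the top k elements.

def relevance(goal_terms, element):
    """Crude relevance score: terms shared between goal and element."""
    return len(goal_terms & element["tags"])

state = [
    {"addr": "0x401000", "tags": {"call", "network", "socket"}},
    {"addr": "0x401080", "tags": {"loop", "string", "copy"}},
    {"addr": "0x401140", "tags": {"network", "encrypt", "send"}},
    {"addr": "0x4011a0", "tags": {"ui", "window"}},
]

goal = {"network", "send"}  # what the analyst is currently pursuing
k = 2

# Attend to the k most relevant elements; ignore the rest of the state.
focus = sorted(state, key=lambda e: relevance(goal, e), reverse=True)[:k]
for element in focus:
    print(element["addr"], relevance(goal, element))
# 0x401140 2   <- matches both goal terms
# 0x401000 1   <- matches "network" only
```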

I have a conceptual theory of how this happens, but I’m trying to develop a more robust theory through empirical analysis of reverse engineers performing the task (my prospectus/research plan for that work is in review right now).

One thing that I’d like from the group, if possible, is to gather some additional operational use cases that share these features, around which I could organize a cognitive task analysis. I understand the same type of abstraction problems exist in cyber, intelligence analysis, and target recognition, but I don’t have access to anyone who actually performs those tasks. Would anyone care to help me make these connections back to operational contacts?

Thanks and have a great Friday!
Adam B.

Adam,
When I was at the University of Minnesota, there was a professor in the Carlson School of Management who was very much into decision sciences and computational models of the way people did things, so that he could analyze them to find flaws, explain why and predict when and where people-based systems were going to have problems, and work out how to avoid those problems. His work included both the doctor-patient domain, for the diagnosis and treatment of type 2 diabetes, and the auditor-business domain, for determining what kinds of situations would lead auditors to miss mistakes and misleading accounting documents, plus probably a lot more.

His name is Paul Johnson – Carlson School of Management @ UMN

http://www.csom.umn.edu/faculty-research/faculty.aspx?x500=johns021
http://www.bmhi.umn.edu/aboutihi/people/faculty/pjohnson/index.htm

Tell him I said hi!

-Brett Borghetti
