Archive

Archive for September, 2019

Weekly QuEST Discussion Topics, 27 Sept

September 26, 2019 Leave a comment

QuEST 27 Sept 2019

We want to start this week by discussing the definition of ‘task’.

Based on the work of Jared / Ox / George:

A task (or goal) is an agent-centric desired future state or sequence of states of the agent’s representation.

We want to work through some examples they’ve thought about: physical manipulation of objects, prediction of future events, data classification, and a question-answering system.  To have this discussion, imagine a task-specifying agent and an executing agent (both could be the same agent but often are not).  Note that the definition above works for both specifying and executing the task.
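As a concrete (and entirely hypothetical) illustration of the definition, here is a minimal Python sketch of how a task might be expressed purely in terms of desired states of the executing agent’s representation; the names and the toy ‘block on table’ example are our own, not from the Jared / Ox / George work.

from dataclasses import dataclass
from typing import Callable, Dict, List

# A "state" is whatever structure the agent uses internally; for this sketch it
# is just a dict of named attributes of the agent's representation.
State = Dict[str, object]

def default_match(current: State, desired: State) -> bool:
    # A desired state is satisfied when the representation agrees on every
    # attribute the specifier cared to mention.
    return all(current.get(k) == v for k, v in desired.items())

@dataclass
class Task:
    """An agent-centric desired future state (or sequence of states) of the
    executing agent's representation."""
    desired_states: List[State]
    satisfied: Callable[[State, State], bool] = default_match

    def is_complete(self, trajectory: List[State]) -> bool:
        # Done when the representation passes through each desired state, in order.
        remaining = iter(trajectory)
        return all(any(self.satisfied(s, d) for s in remaining)
                   for d in self.desired_states)

# The specifying agent hands this to the executing agent (they may be the same agent).
task = Task(desired_states=[{"block_on_table": True}])
print(task.is_complete([{"block_on_table": False}, {"block_on_table": True}]))  # True

The same structure serves both roles: the specifying agent constructs desired_states, and the executing agent checks its own representation trajectory against them.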

Then, continuing the discussion from last week on the series from the DeepMind Podcast – we have found some interesting links in the show notes:

https://deepmind.com/blog/article/welcome-to-the-deepmind-podcast

The series reminded Cap of a conversation with John Searle – some of the material is covered in Lecture One, which we went through last week:

REITH LECTURES 1984: Minds, Brains and Science

John Searle

Lecture 1: A Froth on Reality

The mind-brain problem – this week we want to continue some of that discussion – with assistance from ‘Old Mike’.

His other lectures in this series are fascinating, and this week we want to hit some key ideas from them – including ‘beer cans and meat machines’ and ‘grandmother knew best’.

From the ‘beer cans and meat machines’ lecture:

The prevailing view in philosophy, psychology and artificial intelligence is one which emphasizes the analogies between the functioning of the human brain and the functioning of digital computers. According to the most extreme version of this view, the brain is just a digital computer and the mind is just a computer program. One might summarize this view—I call it ‘strong artificial intelligence’, or ‘strong AI’— by saying that the mind is to the brain as the program is to the computer hardware….

Our internal mental states, by definition, have certain sorts of contents. If I’m thinking about Kansas City or wishing that I had a cold beer to drink or wondering if there will be a fall in interest rates, in each case my mental state has a certain mental content in addition to whatever formal features it might have. That is, even if my thoughts occur to me in strings of symbols there must be more to the thought than the abstract strings, because strings themselves can’t have any meaning. If my thoughts are to be about anything, then the strings must have a meaning which makes the thoughts about those things. In a word, the mind has more than a syntax, it has semantics,…

I want to conclude this lecture by putting together the thesis of the last lecture and the thesis of this one. Both of these theses can be stated very simply. And indeed, I’m going to state them with perhaps excessive crudeness. But if we put them together, I think we get a quite powerful conception of the relations of minds, brains and computers. And the argument has a very simple logical structure, so you can see whether it’s valid or invalid.

  1. Brains cause minds.

Well, of course, that’s really too crude. What we mean by that is that mental processes that we consider to constitute a mind are caused, that is, entirely caused, by processes going on inside the brain. But let’s be crude; let’s just write that down as three words—brains cause minds. And that’s just a fact about how the world works. Now let’s write proposition number two:

  2. Syntax is not sufficient for semantics.

That proposition is a conceptual truth. It just articulates our distinction between the notions of what is purely formal and what has content. Now, to these two propositions – that brains cause minds and that syntax is not sufficient for semantics – let’s add a third and a fourth:

  3. Computer programs are entirely defined by their formal, or syntactical, structure.

That proposition, I take it, is true by definition; it’s part of what we mean by the notion of a computer program. Now let’s add proposition four:

  4. Minds have mental contents; specifically, they have semantic contents.

And that, I take it, is just an obvious fact about how our minds work. Now, from these four premises, we can draw our first conclusion; and it follows obviously from two, three, and four; namely:

This is conclusion 1: No computer program by itself is sufficient to give a system a mind. Programs, in short, are not minds, and they are not by themselves sufficient for having minds.

That’s a very powerful conclusion, because it means that the project of trying to create minds solely by designing programs is doomed from the start. This is a purely formal, or logical, result from a set of axioms which are agreed to by all (or nearly all) of the disputants concerned. That is, even the most hardcore enthusiast for artificial intelligence agrees that in fact, as a matter of biology, brain processes cause mental states, and they agree that programs are defined purely syntactically. But if you put these conclusions together with certain other things that we know, then it follows immediately that the project of strong AI is incapable of fulfilment.

However, once we’ve got these axioms, let’s see what else we can derive. Here’s a second conclusion:

Conclusion 2: The way that brain functions cause minds cannot be solely in virtue of running a computer program. And this second conclusion follows from the first premise, as well as from our first conclusion. That is from the fact that brains cause minds and that programs are not enough to do the job, it follows that the way that brains cause minds can’t be solely by running a computer program. Now that, also, I think is an important result because it has the consequence that the brain is not, or at least is not just, a digital computer. We saw earlier that anything can be trivially described as if it were a digital computer, and brains are no exception. But the importance of this conclusion is that the computational properties of the brain are simply not enough to explain its functioning to produce mental states. And indeed, that ought to seem a commonsense scientific conclusion to us anyway, because all it does is remind us of the fact that brains are biological engines; that biology matters. It’s not, as several people in artificial intelligence have claimed, it’s not just an irrelevant fact about the mind that it happens to be realised in human brains.

Now, from our first premise, we can also derive a third conclusion:

Conclusion 3: Anything else that caused minds would have to have causal powers at least equivalent to those of the brain. And this third conclusion is a trivial consequence of our first premise: it’s a bit like saying that if my petrol engine drives my car at 75 miles per hour, then any diesel engine that was capable of doing that would have to have a power output at least equivalent to that of my petrol engine. Of course, some other system might cause mental processes using entirely different chemical or biochemical features from those that the brain in fact uses. It might turn out that there are beings on other planets or in some other solar system that have mental states and yet use an entirely different biochemistry from ours. Suppose that Martians arrived on earth, and we concluded that they had mental states. But suppose that when their heads were opened up, it was discovered that all they had inside was green slime. Well, still, that green slime, if it functioned to produce consciousness and all the rest of their mental life, would have to have causal powers equal to those of the human brain. But now, from our first conclusion, that programs are not enough, and our third conclusion, that any other system would have to have causal powers equal to the brain, conclusion four follows immediately:

Conclusion 4: For any artefact that we might build which had mental states equivalent to human mental states, the implementation of a computer program would not by itself be sufficient. Rather, the artefact would have to have powers equivalent to the powers of the human brain.

The upshot of this entire discussion I believe is to remind us of something that we’ve known all along: namely, mental states are biological phenomena. Consciousness, intentionality, subjectivity, and mental causation are all a part of our biological life history, along with growth, reproduction, the secretion of bile and digestion.
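Since Searle invites us to check whether the argument is valid, here is one possible way (our notation, not Searle’s) to write the core step – conclusion 1 from premises 2–4 – in predicate-logic form:

% Syn(x): x is defined purely syntactically; Sem(x): x suffices for semantic content;
% Mind(x): x suffices to give a system a mind; P(x): x is a computer program.
\begin{align*}
  &(2)\;\; \forall x\,(\mathrm{Syn}(x) \rightarrow \neg\,\mathrm{Sem}(x)) && \text{syntax is not sufficient for semantics}\\
  &(3)\;\; \forall x\,(P(x) \rightarrow \mathrm{Syn}(x)) && \text{programs are purely syntactic}\\
  &(4)\;\; \forall x\,(\mathrm{Mind}(x) \rightarrow \mathrm{Sem}(x)) && \text{minds have semantic contents}\\
  &(C1)\; \forall x\,(P(x) \rightarrow \neg\,\mathrm{Mind}(x)) && \text{by chaining (3), (2), and the contrapositive of (4)}
\end{align*}

On this reading the inference is valid; the philosophical weight rests on whether premises 2–4 are true under these formalizations.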

And from the lecture on ‘grandmother knew best’:

We feel perfectly confident in saying things like ‘Basil voted for the Tories because he liked Mrs Thatcher’s handling of the Falklands affair’, but we have no idea how to go about saying things like ‘Basil voted for the Tories because of a condition of his hypothalamus.’ That is, we have common sense explanations of people’s behaviour in mental terms, in terms of their desires, wishes, fears, hopes, and so on. And we suppose that there must also be a neurophysiological sort of explanation of people’s behaviour in terms of processes in their brains. The trouble is that the first of these sorts of explanation works well enough in practice but is not scientific. Whereas the second is certainly scientific but we have no idea how to make it work in practice.

Now, that leaves us apparently with a gap, a gap between the brain and the mind. And some of the greatest intellectual efforts of the 20th century have been attempts to fill this gap, to get a science of human behaviour which was not just commonsense grandmother psychology, but was not scientific neurophysiology either.

Up to the present time, without exception, the gap-filling efforts have been failures. Behaviourism was the most spectacular failure, but in my lifetime I have lived through exaggerated claims made on behalf of, and eventually disappointed by, games theory, cybernetics, information theory, structuralism, socio-biology, and a bunch of others. To anticipate a bit, I am going to claim that all the gap-filling efforts fail because there isn’t any gap to fill….

Another area of interest that crossed our desks this week is associated with multi-agent solutions where the agent pool is a mixture of human agents and computer bot agents.  Our AACO effort will eventually have to address this area.  A recent work that addresses some issues in this space is:

Why Build an Assistant in Minecraft?

Arthur Szlam et al

arXiv:1907.09273v2 [cs.AI] 25 Jul 2019

In this document we describe a rationale for a research program aimed at building an open “assistant” in the game Minecraft, in order to make progress on the problems of natural language understanding and learning from dialogue.

A somewhat related article on OpenAI’s work to teach a group of agents to play hide-and-seek:

https://www.technologyreview.com/s/614325/open-ai-algorithms-learned-tool-use-and-cooperation-after-hide-and-seek-games/

OpenAI’s agents evolved to exhibit complex behaviors, suggesting a promising approach for developing more sophisticated artificial intelligence.

… Researchers at OpenAI, the San Francisco–based for-profit AI research lab, are now testing a hypothesis: if you could mimic that kind of competition in a virtual world, would it also give rise to much more sophisticated artificial intelligence?


The experiment builds on two existing ideas in the field: multi-agent learning, the idea of placing multiple algorithms in competition or coordination to provoke emergent behaviors, and reinforcement learning, the specific machine-learning technique that learns to achieve a goal through trial and error….

In a new paper released today, OpenAI has now revealed its initial results. Through playing a simple game of hide and seek hundreds of millions of times, two opposing teams of AI agents developed complex hiding and seeking strategies that involved tool use and collaboration. The research also offers insight into OpenAI’s dominant research strategy: to dramatically scale existing AI techniques to see what properties emerge.
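To make the two ingredients named above concrete, here is a toy sketch (ours; not OpenAI’s environment or algorithm) of multi-agent trial-and-error learning: a one-step hide-and-seek game in which a hider and a seeker each learn independently from their own rewards.

import random
from collections import defaultdict

ACTIONS = ["rock", "bush"]  # the two hypothetical hiding spots

def choose(q, eps=0.1):
    # Epsilon-greedy action selection: mostly exploit, occasionally explore.
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[a])

def train(episodes=50_000, lr=0.05):
    q_hider, q_seeker = defaultdict(float), defaultdict(float)
    for _ in range(episodes):
        h, s = choose(q_hider), choose(q_seeker)
        r_seeker = 1.0 if h == s else -1.0  # zero-sum: seeker wins if it searches the right spot
        # Each agent updates only its own value estimates from its own reward.
        q_seeker[s] += lr * (r_seeker - q_seeker[s])
        q_hider[h] += lr * (-r_seeker - q_hider[h])
    return dict(q_hider), dict(q_seeker)

print(train())

Because the two learners adapt against each other, neither spot stays reliably better for long – a very small-scale version of the competitive pressure the article credits for the emergent behaviours.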

Another area of interest that crossed our desks is neuroscience-inspired AI:

Neuroscience-Inspired Artificial Intelligence

Demis Hassabis

The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history. In more recent times, however, communication and collaboration between the two fields has become less commonplace.

In this article, we argue that better understanding biological brains could play a vital role in building intelligent machines. We survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals. We conclude by highlighting shared themes that may be key for advancing future research in both fields.

Neuron 95, July 19, 2017. © 2017 Published by Elsevier Inc.

Categories: Uncategorized

Weekly QuEST Discussion Topics, 20 Sept

September 19, 2019 Leave a comment

QuEST 20 Sept 2019

Cap and Polie have been listening to a series from the DeepMind Podcast – and have found some interesting links in the show notes:

https://deepmind.com/blog/article/welcome-to-the-deepmind-podcast

with the first four episodes additional links at:

Episode 1: https://deepmind.com/blog/article/podcast-episode-1-ai-and-neuroscience-the-virtuous-circle

Episode 2: https://deepmind.com/blog/article/podcast-episode-2-go-to-zero

Episode 3: https://deepmind.com/blog/article/podcast-episode-3-life-is-like-a-game

Episode 4: https://deepmind.com/blog/article/podcast-episode-4-ai-robot

One of the rabbit holes we went down reminded Cap of a conversation with John Searle – some of the material is covered in:

REITH LECTURES 1984: Minds, Brains and Science

John Searle

Lecture 1: A Froth on Reality

At the moment, the biggest problem is this: we have a certain commonsense picture of ourselves as human beings which is very hard to square with our overall ‘scientific’ conception of the physical world. We think of ourselves as conscious, free, mindful, rational agents in a world that science tells us consists entirely of mindless, meaningless physical particles. Now, how can we square these two conceptions?

How, for example, can it be both the case that the world contains nothing but unconscious physical particles, and yet that it also contains consciousness? How can a mechanical universe contain intentionalistic human beings – human beings that can represent the world to themselves? How can an essentially meaningless world contain meanings? …

 

In this first lecture, I want to plunge right in to what many philosophers think of as the hardest problem of all: what’s the relation of our minds to the rest of the universe? This, I am sure you will recognise, is the traditional mind-body or mind-brain problem. In its contemporary version it usually takes the form: how does the mind relate to the brain?

 

*** this was my early exposure to a model for the ‘micro-to-macro’ problem

To summarise: in my view, the mind and the body interact, but they are not two different things since mental phenomena are features of the brain. One way to characterise this solution to the mind-body problem is to see it as an assertion of both physicalism and mentalism. Suppose we define ‘naive physicalism’ to be the view that all that exists in the world are physical particles with their properties and relations. The power of the physical model of reality is so great that it’s hard to see how we can seriously challenge naive physicalism. And now let’s define ‘naïve mentalism’ to be the view that mental phenomena really exist. There really are mental states; some of them are conscious; many have intentionality; they all have subjectivity; and many of them function causally in determining physical events in the world. The thesis of this first Reith lecture can now be stated quite simply. Naïve mentalism and naive physicalism are perfectly consistent with each other. Indeed, as far as we know anything about how the world works, they’re not only consistent, they’re both true.

 

His other lectures in this series are fascinating – including ‘beer cans and meat machines’ and ‘grandmother knew best’.

From the ‘beer cans and meat machines’ lecture:

In my last lecture, I provided at least the outlines of a solution to the so-called ‘mind-body problem’. Mental processes are caused by the behaviour of elements of the brain. At the same time, they’re realised in the structure that’s made up of those elements. Now, I think this answer is consistent with standard biological approaches to biological phenomena. However, it’s very much a minority point of view. The prevailing view in philosophy, psychology and artificial intelligence is one which emphasises the analogies between the functioning of the human brain and the functioning of digital computers. According to the most extreme version of this view, the brain is just a digital computer and the mind is just a computer program. One might summarise this view—I call it ‘strong artificial intelligence’, or ‘strong AI’— by saying that the mind is to the brain as the program is to the computer hardware….

Our internal mental states, by definition, have certain sorts of contents. If I’m thinking about Kansas City or wishing that I had a cold beer to drink or wondering if there will be a fall in interest rates, in each case my mental state has a certain mental content in addition to whatever formal features it might have. That is, even if my thoughts occur to me in strings of symbols there must be more to the thought than the abstract strings, because strings themselves can’t have any meaning. If my thoughts are to be about anything, then the strings must have a meaning which makes the thoughts about those things. In a word, the mind has more than a syntax, it has semantics,…

 

I want to conclude this lecture by putting together the thesis of the last lecture and the thesis of this one. Both of these theses can be stated very simply. And indeed, I’m going to state them with perhaps excessive crudeness. But if we put them together, I think we get a quite powerful conception of the relations of minds, brains and computers. And the argument has a very simple logical structure, so you can see whether it’s valid or invalid.

  1. Brains cause minds.

Well, of course, that’s really too crude. What we mean by that is that mental processes that we consider to constitute a mind are caused, that is, entirely caused, by processes going on inside the brain. But let’s be crude; let’s just write that down as three words—brains cause minds. And that’s just a fact about how the world works. Now let’s write proposition number two:

  2. Syntax is not sufficient for semantics.

That proposition is a conceptual truth. It just articulates our distinction between the notions of what is purely formal and what has content. Now, to these two propositions – that brains cause minds and that syntax is not sufficient for semantics – let’s add a third and a fourth:

  3. Computer programs are entirely defined by their formal, or syntactical, structure.

That proposition, I take it, is true by definition; it’s part of what we mean by the notion of a computer program. Now let’s add proposition four:

  4. Minds have mental contents; specifically, they have semantic contents.

And that, I take it, is just an obvious fact about how our minds work. Now, from these four premises, we can draw our first conclusion; and it follows obviously from two, three, and four; namely:

 

This is conclusion 1: No computer program by itself is sufficient to give a system a mind. Programs, in short, are not minds, and they are not by themselves sufficient for having minds.

That’s a very powerful conclusion, because it means that the project of trying to create minds solely by designing programs is doomed from the start. This is a purely formal, or logical, result from a set of axioms which are agreed to by all (or nearly all) of the disputants concerned. That is, even the most hardcore enthusiast for artificial intelligence agrees that in fact, as a matter of biology, brain processes cause mental states, and they agree that programs are defined purely syntactically. But if you put these conclusions together with certain other things that we know, then it follows immediately that the project of strong AI is incapable of fulfilment.

However, once we’ve got these axioms, let’s see what else we can derive. Here’s a second conclusion:

Conclusion 2: The way that brain functions cause minds cannot be solely in virtue of running a computer program. And this second conclusion follows from the first premise, as well as from our first conclusion. That is from the fact that brains cause minds and that programs are not enough to do the job, it follows that the way that brains cause minds can’t be solely by running a computer program. Now that, also, I think is an important result because it has the consequence that the brain is not, or at least is not just, a digital computer. We saw earlier that anything can be trivially described as if it were a digital computer, and brains are no exception. But the importance of this conclusion is that the computational properties of the brain are simply not enough to explain its functioning to produce mental states. And indeed, that ought to seem a commonsense scientific conclusion to us anyway, because all it does is remind us of the fact that brains are biological engines; that biology matters. It’s not, as several people in artificial intelligence have claimed, it’s not just an irrelevant fact about the mind that it happens to be realised in human brains.

Now, from our first premise, we can also derive a third conclusion:

Conclusion 3: Anything else that caused minds would have to have causal powers at least equivalent to those of the brain. And this third conclusion is a trivial consequence of our first premise: it’s a bit like saying that if my petrol engine drives my car at 75 miles per hour, then any diesel engine that was capable of doing that would have to have a power output at least equivalent to that of my petrol engine. Of course, some other system might cause mental processes using entirely different chemical or biochemical features from those that the brain in fact uses. It might turn out that there are beings on other planets or in some other solar system that have mental states and yet use an entirely different biochemistry from ours. Suppose that Martians arrived on earth, and we concluded that they had mental states. But suppose that when their heads were opened up, it was discovered that all they had inside was green slime. Well, still, that green slime, if it functioned to produce consciousness and all the rest of their mental life, would have to have causal powers equal to those of the human brain. But now, from our first conclusion, that programs are not enough, and our third conclusion, that any other system would have to have causal powers equal to the brain, conclusion four follows immediately:

Conclusion 4: For any artefact that we might build which had mental states equivalent to human mental states, the implementation of a computer program would not by itself be sufficient. Rather, the artefact would have to have powers equivalent to the powers of the human brain.

The upshot of this entire discussion I believe is to remind us of something that we’ve known all along: namely, mental states are biological phenomena. Consciousness, intentionality, subjectivity, and mental causation are all a part of our biological life history, along with growth, reproduction, the secretion of bile and digestion.

 

And from the lecture on ‘grandmother knew best’:

 

We feel perfectly confident in saying things like ‘Basil voted for the Tories because he liked Mrs Thatcher’s handling of the Falklands affair’, but we have no idea how to go about saying things like ‘Basil voted for the Tories because of a condition of his hypothalamus.’ That is, we have common sense explanations of people’s behaviour in mental terms, in terms of their desires, wishes, fears, hopes, and so on. And we suppose that there must also be a neurophysiological sort of explanation of people’s behaviour in terms of processes in their brains. The trouble is that the first of these sorts of explanation works well enough in practice but is not scientific. Whereas the second is certainly scientific but we have no idea how to make it work in practice.

Now, that leaves us apparently with a gap, a gap between the brain and the mind. And some of the greatest intellectual efforts of the 20th century have been attempts to fill this gap, to get a science of human behaviour which was not just commonsense grandmother psychology, but was not scientific neurophysiology either.

Up to the present time, without exception, the gap-filling efforts have been failures. Behaviourism was the most spectacular failure, but in my lifetime I have lived through exaggerated claims made on behalf of, and eventually disappointed by, games theory, cybernetics, information theory, structuralism, socio-biology, and a bunch of others. To anticipate a bit, I am going to claim that all the gap-filling efforts fail because there isn’t any gap to fill….

 

Another area of interest that crossed our desks this week is associated with multi-agent solutions where the agent pool is a mixture of human agents and computer bot agents.  Our AACO effort will eventually have to address this area.  A recent work that addresses some issues in this space is:

Why Build an Assistant in Minecraft?

Arthur Szlam et al

arXiv:1907.09273v2 [cs.AI] 25 Jul 2019

In this document we describe a rationale for a research program aimed at building an open “assistant” in the game Minecraft, in order to make progress on the problems of natural language understanding and learning from dialogue.

 

A somewhat related article on OpenAI’s work to teach a group of agents to play hide-and-seek:

https://www.technologyreview.com/s/614325/open-ai-algorithms-learned-tool-use-and-cooperation-after-hide-and-seek-games/

OpenAI’s agents evolved to exhibit complex behaviors, suggesting a promising approach for developing more sophisticated artificial intelligence.

Researchers at OpenAI, the San Francisco–based for-profit AI research lab, are now testing a hypothesis: if you could mimic that kind of competition in a virtual world, would it also give rise to much more sophisticated artificial intelligence?

The experiment builds on two existing ideas in the field: multi-agent learning, the idea of placing multiple algorithms in competition or coordination to provoke emergent behaviors, and reinforcement learning, the specific machine-learning technique that learns to achieve a goal through trial and error….

In a new paper released today, OpenAI has now revealed its initial results. Through playing a simple game of hide and seek hundreds of millions of times, two opposing teams of AI agents developed complex hiding and seeking strategies that involved tool use and collaboration. The research also offers insight into OpenAI’s dominant research strategy: to dramatically scale existing AI techniques to see what properties emerge.

Another area of interest that crossed our desks is neuroscience-inspired AI:

Neuroscience-Inspired Artificial Intelligence

Demis Hassabis

The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history. In more recent times, however, communication and collaboration between the two fields has become less commonplace.

In this article, we argue that better understanding biological brains could play a vital role in building intelligent machines. We survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals. We conclude by highlighting shared themes that may be key for advancing future research in both fields.

 

Categories: Uncategorized

Weekly QuEST Discussion Topics, 13 Sept

September 12, 2019 Leave a comment

Last week Dr. Rogers posed the question – How do we handle a query posed as natural language, perform inference/reasoning over our knowledge representation, and return what is deemed a satisfactory answer?

This week, we will continue the discussion by getting down to the basics of knowledge representation and inference.  We will discuss the benefits and limitations of commonly used knowledge representations. We will also explore methods for going beyond basic data extraction to perform inference over uncertain data. What limitations do these methods have and what considerations should we keep in mind as we develop these algorithms in ACT3?

Categories: Uncategorized

Weekly QuEST Discussion Topics, 6 Sept

September 5, 2019 Leave a comment

QuEST Sept 6, 2019

We want to lay some groundwork this week for next week’s presentation by our colleague, Karleigh.  She will ground us in considerations for representations that impact our ability to make inferences, to facilitate correct extraction of meaning and thus correct responses – including how to do this at scale.

We will start by revisiting some of the prior QuEST discussions, for example:

The Unreasonable Effectiveness of Data – by Alon Halevy, Peter Norvig, and Fernando Pereira, Google

  • Eugene Wigner’s article “The Unreasonable Effectiveness of Mathematics in the Natural Sciences” [1] examines why so much of physics can be neatly explained with simple mathematical formulas such as f = ma or E = mc².
  •  Meanwhile, sciences that involve human beings rather than elementary particles have proven more resistant to elegant mathematics.

Since the goal of QuEST is:

THE ULTIMATE GOAL of a theory of consciousness is a simple and elegant set of fundamental laws, analogous to the fundamental laws of physics.

Structural Coherence (interaction to ensure stable, consistent and useful representation)

Situation based processing (situations as variables) – fundamental unit of conscious cognition (narratives)

Conscious representations of situations are done via simulation (cognitively decoupled – imagined past, present and future in the form of a cohesive narrative)

We’ve hit this limitation head on:

  • Lesson of web-scale learning: use available large-scale data rather than hoping for annotated data that isn’t available.  But invariably, simple models and a lot of data trump more elaborate models based on less data. (** less data AND human insight **)
  • With a corpus of thousands of photos, the results were poor. But once they accumulated millions of photos, the same algorithm performed quite well.
  • We know that the number of grammatical English sentences is theoretically infinite and the number of possible 2-Mbyte photos is 256^2,000,000.
  • However, in practice we humans care to make only a finite number of distinctions.
  • For many tasks, once we have a billion or so examples, we essentially have a closed set that represents (or at least approximates) what we need, without generative rules.

Natural language processing comes down to:

  • In reality, three orthogonal problems arise:

–     choosing a representation language,

–     encoding a model in that language,

–     performing inference on the model.

  • In the 1980s and 90s, it became fashionable to

–     use finite state machines as the representation language,

–     use counting and smoothing over a large corpus to encode a model,

–     and use simple Bayesian statistics as the inference method (see the small sketch after this list).
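As a deliberately tiny illustration of that 1980s/90s recipe – count over a corpus, smooth, then do simple Bayesian (Markov chain-rule) inference – here is a bigram language-model sketch; the two-sentence corpus is invented for the example.

import math
from collections import Counter

corpus = ["the cat sat on the mat", "the dog sat on the rug"]  # stand-in for a large corpus

# 1) Count unigrams (as contexts) and bigrams.
unigrams, bigrams, vocab = Counter(), Counter(), set()
for sentence in corpus:
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    vocab.update(tokens)
    unigrams.update(tokens[:-1])
    bigrams.update(zip(tokens, tokens[1:]))

# 2) Smooth: add-one (Laplace) smoothing gives unseen bigrams non-zero probability.
def p(word, prev):
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + len(vocab))

# 3) Infer: score a sentence with the chain rule P(w1..wn) = prod P(wi | wi-1).
def log_prob(sentence):
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    return sum(math.log(p(w, prev)) for prev, w in zip(tokens, tokens[1:]))

print(log_prob("the cat sat on the rug"))  # familiar word order
print(log_prob("rug the on sat cat the"))  # scrambled order scores lower

Swapping out the representation (e.g., first-order predicates), the encoding (e.g., learned weights), or the inference method gives the other combinations discussed next.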

Statistical relational learning as representational language

  • But many other combinations are possible, and in the 2000s, many are being tried.
  • For example, Lise Getoor and Ben Taskar collect work on statistical relational learning—that is, representation languages that are powerful enough to represent relations between objects (such as first-order logic) but that have a sound, probabilistic definition that allows models to be built by statistical learning.

Semantic Web versus Semantic Interpretation

  • The Semantic Web is a convention for formal representation languages that lets software services interact with each other “without needing artificial intelligence.”11
  • The problem of understanding human speech and writing – the semantic interpretation problem – is quite different from the problem of software service interoperability.

–     Semantic interpretation deals with imprecise, ambiguous natural languages,

–     whereas service interoperability deals with making data precise enough that the programs operating on the data will function effectively.

  • Unfortunately, the fact that the word “semantic” appears in both “Semantic Web” and “semantic interpretation“ means that the two problems have often been conflated, causing needless and endless consternation and confusion.
  • The “semantics” in Semantic Web services is embodied in the code that implements those services in accordance with the specifications expressed by the relevant ontologies and attached informal documentation. ** defines all acceptable interactions **
  • The “semantics” in semantic interpretation of natural languages is instead embodied in human cognitive and cultural processes whereby linguistic expression elicits expected responses and expected changes in cognitive state.
  • Because of a huge shared cognitive and cultural context, linguistic expression can be highly ambiguous and still often be understood correctly.  ** the reason is simulation generating the best cohesive narrative disambiguates **

There is also the more recent discussion on the ‘common sense’ concept paper by Gunning:

Machine Common Sense
Concept Paper
David Gunning
DARPA/I2O

  • … including the taxonomy of approaches shown in Figure 1 below.
  • Shortly after co-founding the field of AI in the 1950’s, John McCarthy speculated that programs with common sense could be developed using formal logic [2].
  • This suggestion led to a variety of efforts to develop logic‐based approaches to commonsense reasoning (e.g., situation calculus [3], naïve physics [4], default reasoning [5], non‐monotonic logics [6], description logics [7], and qualitative reasoning [8]), less formal knowledge‐based approaches (e.g., frames [9], and scripts [10]), and a number of efforts to create logic‐based ontologies (e.g., WordNet [11], VerbNet [12], SUMO [13], YAGO [14], DOLCE [15], and hundreds of smaller ontologies on the Semantic Web [16]).

And from our Kabrisky lecture:

Many computer agents only process information they can ‘see’

  • Word-based algorithms are limited by the fact that they can process only the information that they can ‘see’.

  • As human text processors, we do not have such limitations, as every word we see activates a cascade of semantically related concepts, relevant episodes, and sensory experiences, all of which enable the completion of complex tasks (such as word-sense disambiguation, textual entailment, and semantic role labeling) in a quick and effortless way.

→ exformation comes to mind

We’ve often suggested this is the idea of ‘context’

Consciousness as context engine

  • Goal for use of context is to generate more useful meaning of a stimulus, for example in object or situation recognition (correct assignment of object / situation labels requires consideration of other objects / prior-future situations / other sensory information; the model seems to fit if the context is used to disambiguate between multiple competing alternatives / narratives)

  • Attempting to generate semantic meta-data bottom up only is ill-posed

  • Sources of context:

–     Learning from training (co-occurrence – can be from other agents)

–     Pre-programmed in (Google Sets examples)

–     Derived information (includes the agent’s current and prior informational states, including: environment (city, weather, location, orientation, proximity, change of proximity, time), the user’s own activity, the user’s own physiological states)

  • One reason context can be important to consider is the statement:

–     Total reliance on sensor data is metaphorically equivalent to trying to solve a set of equations when there exist more unknowns than equations (a small worked example follows this list)

  • If our goal is the automated generation of semantic meta-data, then it will require some means to incorporate context

  • Context provides the means to ‘situate’ new sensory representations

  • Context and Big Data – are current approaches to Big Data looking to account for just one aspect of context – co-occurrence?

  • If so, can we look at another value-added path for QuEST to provide a way to incorporate other aspects of context (like relevant domain knowledge, other sensory paths)?

  • Another topic is the relationship of current proposed means to use context and compliance with QuEST tenets – context provides the means to ‘situate’ new sensory representations – it is all the other stuff in the representation that is being experienced – thus situating a representation is a big step towards QuEST compliance.
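A small worked example of the ‘more unknowns than equations’ point (toy numbers of our own):

% Sensor data alone gives one equation in two unknowns -- underdetermined.
% Context (prior knowledge, another sensory path, co-occurrence statistics)
% acts as an extra constraint, and the interpretation becomes determined.
\begin{align*}
  \text{sensing alone:}\qquad & x + y = 4 && \Rightarrow\ \text{infinitely many candidate interpretations}\\
  \text{sensing + context }(x = 3):\qquad & x + y = 4,\ x = 3 && \Rightarrow\ y = 1\ \text{(a single situated interpretation)}
\end{align*}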

And since this is all about ‘alignment of agents’ – the prior discussion associated with Turing and code breaking and the Imitation Game:

THE PREMISE OF THE PIECE IS THAT ALAN TURING HAD AN APPROACH TO PROBLEM SOLVING – HE USED THAT APPROACH TO CRACK THE NAZI CODE – HE USED THAT APPROACH TO INVENT THE IMITATION GAME – THROUGH THAT ASSOCIATION WE WILL GET BETTER INSIGHT INTO WHAT THE IMPLICATIONS OF THE IMITATION GAME ARE FOR COMING UP WITH A BETTER CAPTCHA, A BETTER APPROACH TO ‘TRUST’, AND AN AUTISM DETECTION SCHEME – and a unique approach to intent from activity (malware), or our interest in the ‘fragmentation problem’ (100 vendors, hundreds of packages, none of which talk to each other)

** AND Conversational AI

  • I PROPOSE to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’.
  • QuEST:  The question really is: can machines (performing agents) generate a meaning of a query that is acceptable to the interrogator (evaluating agent) – acceptable from the perspective of whether this could be the meaning a human would generate (and since it is impossible to know what a human thinks, all you can really do is measure with respect to a task)?  If so, we would conclude they think (where thinking is more a statement that it is human-like in the meaning it generated – **as estimated from performance on a task like conversational AI)!

Think – QuEST – manipulation of the representation – manipulation associated with a task is reasoning

The ‘what Turing meant to say’ discussions converged on ‘alignment’ of agents’ representations.

In those discussions we went down the path of word vector embeddings:

  • Continuous space language models have recently demonstrated outstanding results across a variety of tasks.

  • In this paper, we examine the vector-space word representations that are implicitly learned by the input-layer weights.

  • We find that these representations are surprisingly good at capturing syntactic and semantic regularities in language, and that each relationship is characterized by a relation-specific vector offset.

  • This allows vector-oriented reasoning based on the offsets between words.

  • For example, the male/female relationship is automatically learned, and with the induced vector representations, “King – Man + Woman” results in a vector very close to “Queen.”

  • We demonstrate that the word vectors capture syntactic regularities by means of syntactic analogy questions (provided with this paper), and are able to correctly answer almost 40% of the questions.

  • We demonstrate that the word vectors capture semantic regularities by using the vector offset method to answer SemEval-2012 Task 2 questions.

Remarkably, this method outperforms the best previous systems.
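A minimal sketch of the vector-offset reasoning described above; the 2-D vectors are invented for illustration (real embeddings are learned from a corpus and have hundreds of dimensions):

import numpy as np

# Toy embeddings chosen so that king - man + woman lands on queen.
vectors = {
    "king":  np.array([0.90, 0.80]),
    "man":   np.array([0.50, 0.20]),
    "woman": np.array([0.45, 0.75]),
    "queen": np.array([0.85, 1.35]),
    "apple": np.array([-0.70, 0.10]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest(target, exclude):
    # Return the vocabulary word (outside `exclude`) whose vector is closest to target.
    return max((w for w in vectors if w not in exclude),
               key=lambda w: cosine(vectors[w], target))

# Vector-offset analogy: "man is to king as woman is to ?"
target = vectors["king"] - vectors["man"] + vectors["woman"]
print(nearest(target, exclude={"king", "man", "woman"}))  # -> queen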

One of the really interesting discussions we had was:

  • How do we formalize this problem – assume there is a world – there is physics – there are photons – they have characteristics – observables by an agent’s sensors – the agent uses those observables to generate a representation to capture some aspect of the physical world – that transformation from sensory observations to an internal representation is guided by an objective function – the mechanism used by the agent to accomplish that (to generate the representation) may be in common with some other agents – and the sensors are similar to other agents – and the objective function is in common – what can we say? (a notation sketch follows below)

This is obviously relevant to our multi-agent / semantics concerns
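One hedged way (our notation) to write that formalization down:

% w: world state; S_i: agent i's sensors; f_{theta_i}: agent i's representation-building
% mechanism; L: the objective the representation is meant to serve.
\begin{align*}
  o_i &= S_i(w) && \text{observables available to agent } i\\
  r_i &= f_{\theta_i}(o_i) && \text{agent } i\text{'s internal representation}\\
  \theta_i^{*} &= \arg\min_{\theta_i}\ \mathbb{E}_{w}\!\left[ L\!\left(f_{\theta_i}(S_i(w)),\, w\right)\right] && \text{representation shaped by the common objective}
\end{align*}

If the sensors $S_i \approx S_j$, the mechanisms $f$ are of the same family, and the objective $L$ is shared, the question above becomes how close $r_i$ and $r_j$ end up – i.e., how well the agents’ representations align.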

And that brings us back to the article that is our closest alligator – extracting a representation of the content of documents:

Extracting Semantic Relations for Scholarly Knowledge Base Construction

Rabah A. Al-Zaidy et al

College of Information Science and Technology

Pennsylvania State University

  • The problem of information extraction from scientific articles, found as PDF documents in large digital repositories, is gaining more attention as the amount of research findings continues to grow. We propose a system to extract semantic relations among entities in scholarly articles by making use of external syntactic patterns and an iterative learner. While information extraction from scholarly documents has been studied before, it has been focused mainly on the abstract and keywords.
  • Our method extracts semantic entities as concepts and instances along with their attributes from the full body text of documents.
  • We extract two types of relationships between concepts in the text using an iterative learning algorithm (a generic sketch of this style of learner follows the abstract).
  • External data sources from the web such as the Microsoft concept graph, as well as query logs, are utilized to evaluate the quality of the extracted concepts and relations.
  • The concepts are used to construct a scientific taxonomy covering the research content of the documents. To evaluate the system we apply our approach on a set of 10k scholarly documents and conduct several evaluations to show the effectiveness of the proposed methods.
  • We show that our system obtains a 23% improvement in precision over existing web IE tools when they are applied to scholarly documents.
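A generic sketch of the iterative (bootstrapping) style of pattern learner the abstract describes – not the authors’ actual algorithm; the sentences, seed pair, and patterns are invented:

import re

# Toy 'body text' sentences (invented).
sentences = [
    "support vector machine is a type of classifier",
    "random forest is a type of classifier",
    "a classifier such as random forest handles tabular data",
    "a classifier such as naive bayes handles text",
]

# Seed is-a relation and an initial syntactic pattern.
relations = {("support vector machine", "classifier")}
patterns = [r"^(?P<inst>.+) is a type of (?P<concept>.+)$"]

for _ in range(3):  # iterate: extract with current patterns, then induce new patterns
    for s in sentences:
        for p in patterns:
            m = re.match(p, s)
            if m:
                relations.add((m["inst"].strip(), m["concept"].strip()))
    # Pattern induction is hard-coded here; a real system would generalize the
    # contexts in which already-known pairs co-occur, and score the candidate
    # patterns/pairs (e.g., against an external concept graph or query logs).
    new_pattern = r"^a (?P<concept>.+) such as (?P<inst>.+?) handles .+$"
    if new_pattern not in patterns:
        patterns.append(new_pattern)

print(relations)  # grows from the single seed to all three instance/concept pairs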
Categories: Uncategorized