Archive

Archive for May, 2015

No QuEST Meeting this week, 29 May

There will NOT be a meeting this week; unfortunately Capt Amerika will be travelling. This week's discussion was to cover two topics. First, a recent news article suggested that Baidu has made a breakthrough that surpassed previous performance by Google.

http://www.nydailynews.com/news/world/chinese-search-big-baidu-unveils-advanced-ai-article-1.2220947

Chinese search big Baidu unveils what it calls the world’s smartest artificial intelligence

BY COLTER HETTICH

NEW YORK DAILY NEWS

Wednesday, May 13, 2015, 2:25 PM


Watch out, Google and Microsoft: Baidu is coming for you in the artificial intelligence race.

Chinese web search giant Baidu unveiled its latest technology Monday, saying it had taken the lead in the global race for true artificial intelligence.

Minwa, the company’s supercomputer, scanned more than 1 million images and taught itself to sort them into about 1,000 categories — and did so with 95.42% accuracy, the company claims, adding that no other computer has completed the task at that same level.

Google’s system scored a 95.2% and Microsoft’s, a 95.06%, Baidu said.

All three companies’ computers, however, exceed human performance.

The concept of “deep learning,” or self-learning, algorithms is not unique to Minwa. Yet Baidu seems to have the upper hand and is not slowing down: the company has announced plans to build an even faster computer in the next 2 years, one capable of 7 quadrillion calculations per second.
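For context, the accuracy figures quoted above are typically computed as top-k accuracy on a labeled benchmark: a prediction counts as correct if the true label appears among the model's k highest-scored classes. A minimal sketch of that metric, using made-up scores rather than Baidu's data or code:

```python
# Hypothetical sketch of how a top-5 accuracy figure like the 95.42%
# quoted above is computed. Scores and labels below are illustrative.

def top_k_accuracy(score_rows, true_labels, k=5):
    """Fraction of examples whose true label is among the k highest-scored classes."""
    hits = 0
    for scores, label in zip(score_rows, true_labels):
        # indices of the k classes with the highest scores
        top_k = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
        if label in top_k:
            hits += 1
    return hits / len(true_labels)

# Three toy examples over 6 classes
scores = [
    [0.1, 0.5, 0.2, 0.05, 0.1, 0.05],     # true class 1: ranked first, hit
    [0.4, 0.1, 0.1, 0.1, 0.2, 0.1],       # true class 3: within top 5, hit
    [0.9, 0.02, 0.02, 0.02, 0.02, 0.02],  # true class 5: outside top 5, miss
]
labels = [1, 3, 5]
print(top_k_accuracy(scores, labels, k=5))
```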

Detailed results of Baidu’s report can be viewed at: http://arxiv.org/pdf/1501.02876v3.pdf

http://www.technologyreview.com/news/537436/baidus-artificial-intelligence-supercomputer-beats-google-at-image-recognition/

Baidu’s Artificial-Intelligence Supercomputer Beats Google at Image Recognition

A supercomputer specialized for the machine-learning technique known as deep learning could help software understand us better.

Why It Matters

Deep learning has produced breakthroughs in speech, image, and face recognition and could transform how we relate to computers.

Chinese search company Baidu built this computer to accelerate its artificial-intelligence research.

Chinese search giant Baidu says it has invented a powerful supercomputer that brings new muscle to an artificial-intelligence technique giving software more power to understand speech, images, and written language.

The new computer, called Minwa and located in Beijing, has 72 powerful processors and 144 graphics processors, known as GPUs. Late Monday, Baidu released a paper claiming that the computer had been used to train machine-learning software that set a new record for recognizing images, beating a previous mark set by Google.

“Our company is now leading the race in computer intelligence,” said Ren Wu, a Baidu scientist working on the project, speaking at the Embedded Vision Summit on Tuesday.

Minwa’s computational power would probably put it among the 300 most powerful computers in the world if it weren’t specialized for deep learning, said Wu. “I think this is the fastest supercomputer dedicated to deep learning,” he said. “We have great power in our hands—much greater than our competitors.”

Computing power matters in the world of deep learning, which has produced breakthroughs in speech, image, and face recognition and improved the image-search and speech-recognition services offered by Google and Baidu.

The technique is a souped-up version of an approach first established decades ago, in which data is processed by a network of artificial neurons that manage information in ways loosely inspired by biological brains. Deep learning involves using larger neural networks than before, arranged in hierarchical layers, and training them with significantly larger collections of data, such as photos, text documents, or recorded speech.
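The layered structure described here can be sketched in a few lines: each layer is a weighted sum of its inputs followed by a nonlinearity, and layers are stacked. This is an illustrative toy (random weights, no training), not any company's actual system:

```python
# Toy sketch of a layered ("deep") network: data flows through stacked
# layers of artificial neurons, each a weighted sum plus a nonlinearity.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """A common nonlinearity: zero out negative values."""
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass input x through a list of (weights, bias) layers."""
    for W, b in layers:
        x = relu(x @ W + b)
    return x

# Toy network: 8 inputs -> 16 hidden units -> 4 outputs
layers = [
    (rng.standard_normal((8, 16)), np.zeros(16)),
    (rng.standard_normal((16, 4)), np.zeros(4)),
]
x = rng.standard_normal(8)
out = forward(x, layers)
print(out.shape)  # (4,)
```

Training such a network means adjusting the weights from data, which is where the computational power discussed above comes in.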


The second topic was associated with a news article from this week – the concept of ‘thought vectors’ as the next breakthrough in computational intelligence.

http://www.theguardian.com/science/2015/may/21/google-a-step-closer-to-developing-machines-with-human-like-intelligence

Google a step closer to developing machines with human-like intelligence

Algorithms developed by Google, designed to encode thoughts, could lead to computers with ‘common sense’ within a decade, says leading AI scientist

Joaquin Phoenix and his virtual girlfriend in the film Her. Professor Hinton thinks that there’s no reason why computers couldn’t become our friends, or even flirt with us. Photograph: Allstar/Warner Bros/Sportsphoto Ltd.

Hannah Devlin Science correspondent

Computers will have developed “common sense” within a decade and we could be counting them among our friends not long afterwards, one of the world’s leading AI scientists has predicted.

Professor Geoff Hinton, who was hired by Google two years ago to help develop intelligent operating systems, said that the company is on the brink of developing algorithms with the capacity for logic, natural conversation and even flirtation.

The researcher told the Guardian that Google is working on a new type of algorithm designed to encode thoughts as sequences of numbers – something he described as “thought vectors”.

Although the work is at an early stage, he said there is a plausible path from the current software to a more sophisticated version that would have something approaching human-like capacity for reasoning and logic. “Basically, they’ll have common sense.”

The idea that thoughts can be captured and distilled down to cold sequences of digits is controversial, Hinton said. “There’ll be a lot of people who argue against it, who say you can’t capture a thought like that,” he added. “But there’s no reason why not. I think you can capture a thought by a vector.”
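The core intuition, that meaning can live in a vector so that similarity between thoughts becomes geometric closeness, can be illustrated with a toy example. The 3-d word vectors below are invented for illustration; real systems learn them from large amounts of data:

```python
# Toy illustration of the "thought vector" idea: reduce a sentence to a
# fixed-length vector (here, by averaging made-up word vectors) and
# compare sentences with cosine similarity.
import math

# Hand-picked, illustrative word vectors (not learned embeddings)
word_vecs = {
    "cat":   [0.9, 0.1, 0.0],
    "dog":   [0.8, 0.2, 0.0],
    "sits":  [0.1, 0.9, 0.1],
    "runs":  [0.0, 0.8, 0.2],
    "stock": [0.0, 0.1, 0.9],
    "falls": [0.1, 0.0, 0.8],
}

def sentence_vector(sentence):
    """Average the word vectors -- a crude stand-in for a learned encoder."""
    vecs = [word_vecs[w] for w in sentence.split()]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

v1 = sentence_vector("cat sits")
v2 = sentence_vector("dog runs")
v3 = sentence_vector("stock falls")
print(cosine(v1, v2) > cosine(v1, v3))  # the two animal sentences are closer
```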

Hinton, who is due to give a talk at the Royal Society in London on Friday, believes that the “thought vector” approach will help crack two of the central challenges in artificial intelligence: mastering natural, conversational language, and the ability to make leaps of logic.

He painted a picture of the near-future in which people will chat with their computers, not only to extract information, but for fun – reminiscent of the film, Her, in which Joaquin Phoenix falls in love with his intelligent operating system.

“It’s not that far-fetched,” Hinton said. “I don’t see why it shouldn’t be like a friend. I don’t see why you shouldn’t grow quite attached to them.”

In the past two years, scientists have already made significant progress in overcoming this challenge.

Categories: Uncategorized

No QuEST Meeting today, May 22

No QuEST meeting due to the Memorial Day weekend family day on Friday

Have a safe weekend

news summary (20)

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 15 May

QuEST 15 May 2015

There are three vectors I’ve used my QuEST bandwidth on this week – I would like to discuss them and elicit feedback on the ideas with respect to implications for QuEST solutions:

Five Examples of Big Data Analytics and the Future of ISR

By Jon A. ‘Doc’ Kimminau

  • When we talk about U.S. Air Force intelligence, surveillance, and reconnaissance in 2023, we often depict it graphically as beginning with a global array of sensors that produces a variety of data absorbed in a cloud, from which multisource and all-source analysts produce decision advantage for both national and combatant decision makers.
  • Big data analytics is at the core of this vision, and its impacts to intelligence analysts and the way they execute their mission will be multifaceted.
  • Big data analytics offers the potential to revolutionize how analysis supports our warfighters and national decision makers with intelligence—the decision advantage in national security.

This revolution extends across the spectrum of intelligence analysis activity—from discovery and assessment, to explanation and anticipation, to delivery.

We’ve discussed big data several times, including the limitation of not being able to generate ‘meaning’ – but this is a good article that informs us how the Pentagon views what they expect out of big-data breakthroughs. We need to be able to articulate the gaps to achieving the goals in this paper.

Deptula: ‘Combat cloud’ is ‘new face of long-range strike’

BY BRADLEY PENISTON

Lt. Gen. (ret.) David Deptula, the USAF’s first deputy chief of staff for ISR and now dean of the Air Force Association’s Mitchell Institute, headed a panel at AFA’s annual conference yesterday. Here are his remarks:

As we’ve heard throughout the conference so far, our challenges are the growing complexity of our security environment; the declining share of resources available for defense; and our changing strategy to meet the two.

What I’d like to do is expand on these issues from a couple of related perspectives. One that emphasizes that we’re at a turning point in the character of warfare as a result of technology, and two, that’s good timing because we’re not going to be able to afford the last-century paradigm of warfare where our objective was to put as many of America’s sons and daughters into harm’s way as quickly as possible.

However, to achieve the objectives of attaining superior U.S. warfighting capability at less cost will require more than new technology, adjusting manpower, or altering the number or type of widgets we operate. It will require applying concepts of operation enabled by information age capabilities in new ways. Information-centric, interdependent, and functionally integrated operations are the keys to future military success.

This paper provides us a vector that we can use to better understand how to position QuEST solutions. The future is not one of single-purpose monolithic intelligent solutions but a ‘cloud’ of capabilities that have to self-form and self-heal to accomplish desired effects. This is very consistent with our recent discussions on parsing spaces like offensive cyber into tasks and looking at how autonomy can be achieved.

Dramatica Theory Application

On World Problems

By Melanie Anne Phillips

Introduction to Dramatica Theory and Applications

The Dramatica Theory of Story is a model of the mind’s problem solving processes which has been successfully employed for seventeen years in the analysis and construction of fictional stories ranging from major Hollywood productions to novels, stage plays and television programs.

Software based on the Dramatica Theory is built around an interactive Story Engine which implements the problem-solving model as a method of determining the meaning and impact of data sets and of predicting motivations and actions based on potentials inherent in the data.

This is achieved by creating a Storyform – essentially, a schematic of the problem solving processes at work, their interactions, their outcomes, and the future course they will take.

The Dramatica system and its problem-solving algorithms can be applied with equal success to the analysis of real-world situations as well, specifically in determining the motivations behind the actions of a target group and in the prediction of their future actions and potentials for action.

I still can’t gather enough information to understand how this works – therefore I don’t yet trust it – BUT – I do like the area of generating narratives from sparse data – so I want to use this material to restate that thrust/gap.

news summary (19)

Categories: Uncategorized

Weekly QuEST Discussion Topics, 8 May

8 May 2015 QuEST meeting:

The Capt Amerika would like to flip through some more information from a recent review of technology trends: the WEBBMEDIA Group 2015 trend report (provided to us by our colleague Andres R.). Although the topics themselves are of general interest, one of the discussion points we would like to emphasize is the relationship of these trends to QuEST. For example:

The next example from the technology trends document:

SMART VIRTUAL PERSONAL ASSISTANTS (SVPAS)

Second year on the list

2015 Tech Trends | webbmediagroup.com | © 2014 Webbmedia Group

Key Insight

SVPAs made our list last year because they were just beginning to enter the market as stand-alone mobile apps. (Others call this technology “predictive applications” or “predictive intelligence.”) They used semantic and natural language processing, mined data from our calendars, email and contact lists and used the last few minutes of our behavior to anticipate the next 10 seconds of our thinking in order to help consumers manage daily tasks, finances, diet and more. In 2015, we will see SVPA technology become a key part of emerging platforms and devices.

One company on our 2013 trends list, Expect Labs, has just transitioned its beta MindMeld app into an intelligent SVPA interface for any app, device or website to use. The lessons/approach provide insight into our recent attempt to work speech recognition into some of our environments to help analysts.

A couple of other examples are worth looking at for guidance into our interest in providing decision aides – Cue, Emu, Donna

COGNITIVE COMPUTING

Third year on the list

Key Insight

This trend evolved from a key idea in our 2013 trend report: anticipatory computing. Cognitive computing systems use natural language processing and artificial intelligence in order to understand our intentions.

Examples

In his seminal 1950 paper, computer scientist Alan Turing asked “Can machines think?” According to IBM, the answer is yes. And pretty soon, faster than humans. IBM’s cognitive computing platform Watson is best known for eviscerating the reigning human Jeopardy champions in 2011. Prepare to hear a lot about Watson in 2015. IBM hasn’t built a clever computer gimmick; it’s built a revolutionary cognitive computing platform capable of learning, adapting and proposing solutions to extremely difficult problems. Hospitals are using Watson to advise on seemingly impossible cases. Watson will be built into customer service workflows, to learn about our individual needs and respond with exactly the right information when we need it.

We’ve previously in the QuEST meetings reviewed Watson but it is worth the bandwidth to return to how was Watson engineered – it provides a great case study and possibly some general rules that QuEST should keep in mind.

THE AI BEHIND WATSON — THE TECHNICAL ARTICLE

The 2010 Fall Issue of AI Magazine includes an article on “Building Watson: An Overview of the DeepQA Project,” written by the IBM Watson Research Team, led by David Ferrucci. Read about this exciting project in the most detailed technical article available.

What’s Next

IBM is now developing advanced data-centric supercomputing systems that will embed compute power everywhere data resides in a system, which means a convergence of analytics, modeling, visualization, and simulation, driving new insights at very fast speeds. In 2014, it announced the SyNapse chip, which processes information using a network of more than one million “neurons” that communicate via a system of electrical spikes. In other words, just like our brains. New research shows robots transitioning from basic computational or productivity assistants to machines capable of creating unique forms of music or even evolving an entirely new language. We expect to see Watson’s cognitive computing power continuing to inspire developers and data scientists alike, who will begin to adapt this technology in a wide variety of ways in 2015. One possibility: Watson could be a boon for those working with difficult customers who can list the many, many things they dislike but can never articulate exactly what they do want.
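The spiking “neurons” mentioned above are often modeled, in their simplest form, as leaky integrate-and-fire units. The sketch below is a generic textbook model with illustrative parameters, not IBM's chip design:

```python
# Leaky integrate-and-fire neuron: input current charges a membrane
# potential that leaks over time; the neuron emits a spike (1) when the
# potential crosses a threshold, then resets. Parameters are illustrative.

def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Return a spike train (0/1 per time step) for a stream of input currents."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # leak, then integrate input
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# Weak inputs accumulate until a spike; a pause lets the potential leak away
print(lif_neuron([0.3, 0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))  # [0, 0, 0, 1, 0, 0, 1]
```

Hardware like SyNapse wires large numbers of such units together so that spikes, rather than clocked arithmetic, carry the information.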

We’ve kept up with the SyNapse work with our Roman colleagues – but I also wanted to review a recent article on the HP effort to redefine computing – their big bet –

Machine Dreams

To rescue its struggling business, Hewlett-Packard is making a long-shot bid to change the fundamentals of how computers work.

The performance of computers, especially those that handle huge amounts of data, is limited by designs that date back decades

We will then flip through the other topics in the Tech Trends report – for SA – to see if there are other threads people would like to run to ground –

The next topic is a sequence of articles/reports/books on ‘A Theory of Story’ – the interest here goes back to our interest in narratives – narratives provide a means to situate – the information I would like to look at comes from Phillips and Huntley – Dramatica

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 1 May

1 May 2015 QuEST meeting:

The Capt Amerika would like to flip through some information from a recent review of technology trends: the WEBBMEDIA Group 2015 trend report (provided to us by our colleague Andres R.). Although the topics themselves are of general interest, one of the discussion points we would like to emphasize is the relationship of these trends to QuEST. For example:

ALGORITHMS

First year on the list

KEY INSIGHT

At its essence, an algorithm is simply a set of rules or processes that must be followed in order to solve a problem. For thousands of years (Euclid’s algorithm is 2,500 years old!) algorithms have been used to increase speed and efficiency, and they’ve been applied to assist with our everyday tasks. In the coming year, we’ll see the launch of services using algorithms to create stunning designs, to curate the news and even to target voters for individual messaging in close political districts. We’ll see the rise of public algorithm exchanges. We will also begin questioning the ethics of how algorithms can be used, and we’ll scrutinize the tendency of some algorithms to go awry.
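Euclid's algorithm, the report's oldest example, is short enough to quote in full:

```python
# Euclid's algorithm for the greatest common divisor, essentially
# unchanged after ~2,500 years: repeatedly replace the larger number
# with the remainder of the division until the remainder is zero.

def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```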

Project Dreamcatcher from Autodesk

Algorithmic Design

Project Dreamcatcher from Autodesk is the next wave of computational design systems. While it doesn’t replace a designer herself, it does give her the ability to feed a project’s design requirements, constraints and exemplars into Dreamcatcher, whose algorithm will then return possible design concepts. If you’ve ever been in a meeting when a few people offer up an app they’d like to emulate, while others prefer a different user interface, algorithmic design systems can take the best of both, combine them into one and then help you refine the favored design.
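The generate-and-filter loop such systems embody can be caricatured in a few lines: enumerate candidate designs, keep those meeting the stated constraints, and rank by an objective. The design space, constraints, and objective below are invented for illustration and have nothing to do with Dreamcatcher's actual internals:

```python
# Toy generate-and-filter design loop: enumerate candidates, filter by
# designer-supplied constraints, pick the best by an objective function.
from itertools import product

# Candidate shelf designs: (width_cm, depth_cm, shelf_count)
candidates = product(range(60, 121, 20), range(20, 41, 10), range(2, 6))

def feasible(w, d, n):
    """Constraints supplied by the designer: footprint and height limits."""
    return w * d <= 3600 and n * 25 <= 120

def storage(w, d, n):
    """Objective: maximize usable shelf area."""
    return w * d * n

best = max((c for c in candidates if feasible(*c)), key=lambda c: storage(*c))
print(best)  # (120, 30, 4)
```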

Algorithm Marketplaces

Long ago, developers realized that everyone wins when knowledge is freely exchanged. As a result, communities of developers are offering up their algorithms in emerging algorithm marketplaces. Algorithmia is building a sort of Amazon for algorithms, where developers can upload their work to the cloud and receive payment when others pay to access it. DataXu offers a marketplace for its proprietary algorithms. Meanwhile GitHub, the code-sharing network built around Git (the version-control system created by Linux creator Linus Torvalds), will continue to grow.

Algorithmic Curation  ** I’d like to discuss some of the issues on this topic – intimately related to our needs to provide information for decision makers**

Algorithmic curation is a process that automatically determines what content should be displayed or hidden and how it should be presented to your audience. Facebook’s NewsFeed already uses an algorithm to curate all the posts created in your network to serve only the content it thinks will engage you most. It has deployed a new service, FB Techwire, across its network to surface embeddable news stories for media organizations. Google and Yahoo news will continue to refine their algorithms, which use our online behaviors to determine which content to show. In 2016 and beyond, we expect to see algorithms curating news content not just based on our interests, but also on our most recent behavior. Rather than delivering a full breaking news story to our mobile phones, algorithms will deliver the “waiting in line at Starbucks” version of that story, a more in-depth longread to our tablets, and a video version of that story once we’re in front of our connected TVs. As a result, news organizations and other content producers have thrilling opportunities in the year ahead to supercharge and personalize content in ways we have never seen before. (See also: Consumer > Device.)
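As a discussion aid, the basic shape of such a curation algorithm can be sketched as an interest-match score decayed by recency. The field names, weights, and half-life below are illustrative assumptions, not any platform's actual feed algorithm:

```python
# Toy curation scorer: rank items by how well their topics match a user's
# interests, weighted by how recently they were posted.
import math

def score(item, interests, now, half_life_hours=6.0):
    """Interest match decayed by age: newer, more relevant items rank higher."""
    match = sum(interests.get(topic, 0.0) for topic in item["topics"])
    age_hours = (now - item["posted"]) / 3600.0
    decay = math.exp(-math.log(2) * age_hours / half_life_hours)
    return match * decay

interests = {"ai": 1.0, "sports": 0.2}
now = 1_000_000.0
feed = [
    {"id": "a", "topics": ["ai"], "posted": now - 1 * 3600},      # fresh AI story
    {"id": "b", "topics": ["ai"], "posted": now - 24 * 3600},     # day-old AI story
    {"id": "c", "topics": ["sports"], "posted": now - 1 * 3600},  # fresh sports story
]
ranked = sorted(feed, key=lambda it: score(it, interests, now), reverse=True)
print([it["id"] for it in ranked])  # ['a', 'c', 'b']
```

Even this toy raises the issues flagged above for decision makers: whoever sets the weights and the decay decides what the audience sees.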

As an alternative / follow on discussion I would like to discuss an article on Social Curation – by Duh et al – ‘Creating Stories:  Social Curation of Twitter Messages’

Abstract … ”Social curation” has recently emerged as a promising new framework for organizing and adding value to social media, complementing the traditional methods of algorithmic search and aggregation. For example, web services like Togetter and Storify empower users to collect and organize tweets to form stories that are pertinent, memorable, and easy to read. While social curation services are gaining popularity, little academic research has studied the phenomenon. In this work, we perform one of the first analyses of a large corpus of social curation data. We seek to understand why and how people curate tweets. We then present a machine learning system for assisting these social curators. Our system suggests new tweets that might belong to an existing story, increasing the curator’s productivity and breadth of perspective.
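The suggestion component the abstract describes can be caricatured with simple word-overlap similarity; the paper's actual system is a learned model, so treat this only as a sketch of the task:

```python
# Toy curation assistant: given the tweets already in a story, suggest the
# candidate tweet most similar to them, using Jaccard word overlap.

def jaccard(a, b):
    """Word-overlap similarity between two short texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def suggest(story_tweets, candidates):
    """Return the candidate with the highest average similarity to the story."""
    def avg_sim(c):
        return sum(jaccard(c, t) for t in story_tweets) / len(story_tweets)
    return max(candidates, key=avg_sim)

story = ["earthquake hits the city center", "rescue teams reach the city"]
candidates = [
    "aftershocks reported near the city center",
    "new phone released today",
]
print(suggest(story, candidates))  # the aftershocks tweet fits the story
```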

And an article on Crowdsourced Content Curation – by Askalidis / Stoddard: To deal with the huge amount of potentially interesting content on the web today, users seek the help of curators to recommend which content to consume. The two most common forms of curation are expert-based (the editor of a newspaper decides which articles to place on the front page) and algorithmic-based (a search algorithm determines the ranking of websites for a given query). In recent years, content aggregators which use explicit vote-based feedback to curate content for future users have grown exponentially in popularity. The goal of this paper is to provide a descriptive analysis of these crowdsourced curation mechanisms.

Another example from the technology trends document:

SMART VIRTUAL PERSONAL ASSISTANTS (SVPAS)

Second year on the list


Key Insight

SVPAs made our list last year because they were just beginning to enter the market as stand-alone mobile apps. (Others call this technology “predictive applications” or “predictive intelligence.”) They used semantic and natural language processing, mined data from our calendars, email and contact lists and used the last few minutes of our behavior to anticipate the next 10 seconds of our thinking in order to help consumers manage daily tasks, finances, diet and more. In 2015, we will see SVPA technology become a key part of emerging platforms and devices.

news summary (18)

Categories: Uncategorized