
Does Facebook M prove humans are the missing link in AI?

Tayven James, Independent Author

Facebook is looking to completely reinvent the paradigm of artificial intelligence with a new service called Facebook M. If it succeeds, the machines just might save us all.

The latest in a long list of undertakings by major tech companies looking to incorporate artificial intelligence (AI) into their services, Facebook M aims to pioneer the road ahead by taking what many would view as a step back. But Facebook isn't interested in teaching its machines to recognize classic paintings or outperform human competitors on Jeopardy. Instead, the social network wants to create an AI that can successfully make a dinner reservation, suggest a gift for a friend's baby shower, or prep you for an upcoming date.

So what makes Facebook M so unique? It comes with a human copilot.

Sure, it may sound counterintuitive to saddle a machine whose destiny expressly calls for functional autonomy with a living babysitter. But this type of one-on-one tutelage may just be the missing link in AI.

The classical approach: Deep learning

The concept of artificial intelligence—the basic idea that machines could think for themselves—was made popular by British mathematician Alan Turing in the years following World War II. Turing, in many ways the father of modern computing, also created the measuring stick, dubbed the Turing Test, still used in the field of AI to determine whether a machine can be considered "intelligent." But it's been in the past 30 years or so that AI and machine learning have made great strides, graduating from the realm of science fiction and blossoming into widely used technologies with tangible real-world benefits. And at the heart of most of them is a technique called "deep learning," a catchall term that encompasses several complex and nuanced models of machine learning.

This deep learning technology manifests itself in services that many of us interface with every single day. Google, for example, applies complex algorithms and self-teaching computers to optimize listings and paid ads on its search engine results pages. Chinese search giant Baidu does the same. The goal for each is to deliver only the ads that users are most likely to click on, based on behavioral data from both individual users and groups of people with similar interests and behaviors.
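
For readers who want a concrete picture of what "most likely to click" means in practice, here is a minimal sketch of that kind of ranking. The features, weights, and logistic model below are invented for illustration; they are not Google's or Baidu's actual systems.

```python
import math

# Illustrative sketch: rank ads by a predicted click probability computed
# from behavioral features. Feature names and weights are made up.
weights = {"matches_past_clicks": 1.2, "similar_users_clicked": 0.8, "bias": -1.5}

def click_probability(ad_features):
    score = weights["bias"] + sum(weights[f] * v for f, v in ad_features.items())
    return 1 / (1 + math.exp(-score))   # logistic function -> probability in (0, 1)

ads = {
    "running_shoes": {"matches_past_clicks": 1.0, "similar_users_clicked": 0.9},
    "lawn_mower":    {"matches_past_clicks": 0.1, "similar_users_clicked": 0.3},
}

# Show the user the ads most likely to be clicked first
ranked = sorted(ads, key=lambda name: click_probability(ads[name]), reverse=True)
print(ranked)
```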

And search engines aren't the only ones using AI to target our likes and preferences. Amazon and Netflix have long relied on self-improving algorithms to tailor suggested purchases, movies, and television programs to their customers.

Services like these perform a fairly simple task: they start with a basic understanding of the factors that may lead a user to perform a certain action—such as purchasing a Bluetooth headset or clicking "play" on season three of "Parks and Recreation"—and then test which paths most efficiently lead users to complete that goal. This method is known as reinforcement learning, and it's something that machines do incredibly well.

This same technique was recently used to teach an artificial network of machine neurons to master complex Atari video games. The computer was given the simple goal of achieving the highest possible score, and over a series of several hundred attempts identified which actions resulted in an increased score and which did not. The result, according to the team's research, was a computer that had quickly learned to "achieve a level comparable to that of a professional human games tester across a set of 49 games."
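
The loop behind results like this can be sketched in a few lines. The toy environment, rewards, and parameters below are illustrative assumptions, not the actual Atari setup, but the pattern is the same: try actions, observe the score, and reinforce whatever raised it.

```python
import random

# Minimal tabular Q-learning sketch: an agent repeatedly tries actions,
# observes a reward (think of it as a change in game score), and reinforces
# the actions that increased it. Toy task: walk from state 0 to state 3.
N_STATES, ACTIONS = 4, ("left", "right")
q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

def step(state, action):
    """Move left/right along a line; reward 1 only when the goal is reached."""
    next_state = min(state + 1, N_STATES - 1) if action == "right" else max(state - 1, 0)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

for episode in range(500):               # "several hundred attempts"
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise pick the best-known action
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])
        next_state, reward = step(state, action)
        # Reinforce: nudge the estimate toward reward + discounted future value
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += alpha * (reward + gamma * best_next - q_table[(state, action)])
        state = next_state

# After training, the learned policy heads toward the goal from every state
print({s: max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES - 1)})
```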

The next step: Personal assistants and party tricks

What's more important to the conversation around modern AI isn't what machines can do, but rather what they can do for humans. Robots are using trial-and-error reinforcement learning to grasp increasingly practical skills, such as screwing a cap onto a bottle, connecting sets of building blocks, and using a hammer to remove a nail from a wooden board. Such physical tasks may seem rudimentary, but their application within the realm of assisting humans is undeniable.

But deep learning is going further still. It's being used to power complex networks that are learning to recognize and classify subjects in images, a technology being leveraged by both Facebook and Google. It's also at the heart of personal assistants like Siri, Google Now, and Cortana. According to Geoffrey Hinton, emeritus professor at the University of Toronto and distinguished researcher for Google, these technologies are actually built on similar models.

In terms of classifying images, "What you want to do is find little pieces of structure in the pixels, like for example an edge in the image," Hinton told WIRED in 2013. "You might have a layer of feature-detectors that detect things like little edges. And then once you've done that you have another layer of feature detectors that detect little combinations of edges like maybe corners. And once you've done that, you have another layer and so on."
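
Hinton's layered picture can be roughly illustrated with a couple of hand-written feature detectors. In a real network these filters are learned from data rather than written by hand, so treat the following as a sketch of the idea only.

```python
import numpy as np

# Illustrative sketch of layered feature detectors: layer 1 responds to
# simple edges, layer 2 to combinations of those edges (rough "corners").
image = np.random.rand(8, 8)  # stand-in for a grayscale image patch

# Layer 1: hand-written edge filters (a trained network discovers its own)
vertical_edge   = np.array([[-1, 0, 1]] * 3)
horizontal_edge = vertical_edge.T

def convolve(img, kernel):
    """Slide a 3x3 kernel over the image and record how strongly it responds."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * kernel)
    return np.maximum(out, 0)  # keep only positive responses (ReLU-style)

edges_v = convolve(image, vertical_edge)
edges_h = convolve(image, horizontal_edge)

# Layer 2: a crude "corner" detector fires where vertical and horizontal edges co-occur
corners = np.minimum(edges_v, edges_h)
print("strongest corner response:", corners.max())
```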

Voice recognition is handled in much the same way; a neural network will begin by identifying the individual sounds of speech, such as vowels, consonants, and syllables. A second network will try to classify these into words and parts of speech, identifying a sentence structure. Further networks will analyze the relationships between the words and eventually determine what action the user wants the machine to take. These functions are performed in the span of a few seconds or less, often pulling computing power from hardware in completely separate data centers.
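
As a rough illustration of that staged hand-off, here is a hypothetical pipeline with invented stage names and hard-coded outputs; it does not represent any real assistant's models, only the shape of the processing chain.

```python
# Hypothetical sketch of a staged speech pipeline; stage names and outputs
# are invented for illustration.
def detect_sounds(audio_frames):
    # Stage 1: a network maps raw audio frames to phoneme-like units
    return ["h", "e", "l", "o", " ", "w", "er", "l", "d"]

def assemble_words(sounds):
    # Stage 2: a second network groups the sounds into words
    return "hello world".split()

def parse_intent(words):
    # Stage 3: further networks relate the words and infer the requested action
    return {"action": "greet", "target": words[-1]}

audio = [0.1, 0.3, -0.2]             # stand-in for microphone samples
intent = parse_intent(assemble_words(detect_sounds(audio)))
print(intent)                        # {'action': 'greet', 'target': 'world'}
```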

For the moment, these machines are using their powers in rather benign ways; they suggest Facebook photos we might wish to tag or help us create reminders and ask for directions. But with neural networks beginning to master the art of analysis, teams are pushing the boundaries to discover just how adept these networks can become. The results range from pseudo-Shakespearean sonnets, to thought-provoking amateur poetry, to downright disturbing artwork.

Results like these are inching ever closer to passing the Turing test. They also prove that machines are getting closer and closer to understanding what makes human art and human speech, well, human. And they're capable of learning to reproduce these human characteristics all on their own.

The missing link: Humans helping AI

The AI community seems to sense just how close we're getting to designing machines that are capable not only of performing and iterating on their own QA tests, but also of redefining the goals and parameters of those tests. Earlier this year, British-American computer scientist and machine learning pioneer Stuart Russell drafted an open letter calling for his colleagues in the field of AI to "research how to reap its benefits while avoiding potential pitfalls."

The pitfalls of runaway AI are much more numerous and nuanced than the classic "robot apocalypse" scenario we've all seen in films. In a recent interview with Quanta Magazine, Russell highlighted one such pitfall. "If you want to have a domestic robot in your house," he said, "it has to share a pretty good cross-section of human values; otherwise it's going to do pretty stupid things, like put the cat in the oven for dinner because there's no food in the fridge and the kids are hungry."

This is where the value of Facebook M comes into play. It uses a model Russell refers to as "inverse reinforcement learning" to learn how to create positive outcomes. When a difficult task comes along (like suggesting the perfect gift for a baby shower, something that is impossible for the current batch of app-based personal assistants), Facebook M will cede control and go into learning mode while its human tutor fulfills the request.
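
Conceptually, the hand-off might look something like the sketch below. The confidence threshold, the trainer hook, and the logging are assumptions made for illustration, not a description of Facebook M's actual architecture.

```python
# Minimal human-in-the-loop sketch: answer automatically when confident,
# otherwise escalate to a human trainer and record the result for learning.
training_examples = []

def model_answer(request):
    """Stand-in for the automated assistant: returns (answer, confidence)."""
    if "weather" in request:
        return "Sunny, 72F", 0.95
    return None, 0.10          # hard request: the model isn't confident

def ask_human_trainer(request):
    """Stand-in for the human copilot who fulfills the request manually."""
    return "A personalized baby book would make a great shower gift."

def handle(request, threshold=0.8):
    answer, confidence = model_answer(request)
    if confidence >= threshold:
        return answer
    # Cede control: the human completes the task while the system observes,
    # and the (request, response) pair becomes a training example.
    human_answer = ask_human_trainer(request)
    training_examples.append((request, human_answer))
    return human_answer

print(handle("What's the weather today?"))
print(handle("Suggest a gift for a friend's baby shower"))
print(len(training_examples), "new training example(s) recorded")
```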

This allows Facebook's new service to learn on the fly instead of resigning itself to the traditional crash-and-burn routine that Siri users in particular have become accustomed to seeing. It could also turn Facebook M into the world's most powerful knowledge graph, one built with an understanding of human values, not merely human speech. And when Facebook opens an API to that knowledge graph, the possibilities for developers may just be endless.

What does this mean for the future of AI? For the future of developers? We don't yet have the answer to these questions, but Russell believes that "the eradication of disease and poverty are not unfathomable." Who am I to argue with that?
