“Artificial intelligence” and “machine learning” used to conjure up images of sci-fi robots: Star Trek’s Data, Star Wars’ C-3PO, or the Sentinels from the Matrix.

Now machine learning is a part of our lives every time we conduct a Google search. It doesn’t look scary; it doesn’t look cute; in fact, we don’t even notice it.

In October 2015, Google announced that machine learning was such an important part of Google Search that it had its own name (RankBrain), and that of all the different signals Google computes when returning search results, RankBrain’s “opinion” is the third most important. Google higher-ups did say in March 2017, though, that their search algorithm will never be wholly machine learning, because when you get bad results, it’s too hard to figure out what went wrong.

How do Google Search and machine learning work together? If you do a Google search, you’ll find a host of articles, but most of them, even the ones that try to be simple, are hard to follow if you don’t have an advanced degree in mathematics. As Danny Sullivan pointed out in his overview of Google Machine Learning 101: “When a speaker tells you the math with machine learning is ‘easy’ and mentions calculus in the same sentence, they have a far different definition of easy than the layperson, I’d say!”

So here is our version of Google Search Machine Learning 001 – no math required!

How do young children learn?

My 2 year old learns about cats by my showing him pictures of cats in books, cats in friends’ homes, stray cats on the street. He notices a pattern: all of these objects have tails, pointy ears, fur and whiskers. One day he sees a German Shepherd. “Cat!” he informs me.
“No – that’s a dog,” I tell him.
He’ll ponder this for a minute. What is his mind doing? It’s trying to figure out how this object with a tail, pointy ears, fur and whiskers is different from all the other “cat” objects with tails, pointy ears, fur and whiskers. He may decide that a “cat” is when the object is up to his knees, while a “dog” is when the object is taller than he is. Or he may decide that a “dog” has a long nose and a “cat” has a short one.
He’ll keep trying to define objects with tails, pointy ears, fur and whiskers according to the above modifications, and see if his assumption is confirmed or corrected.

Another example:

[Image: a child learning the colors of apples]

My 2 year old learns about what “yellow” is by my pointing out daffodils, dandelions and rubber duckies. He notices a pattern: all of these objects give him a similar visual experience. That experience, he assumes, must be called “yellow.” One day he sees a school bus. “Yellow!” he tells me.
“You’re right!” I exult. “The school bus is yellow!”
My 2 year old is excited. He has made a new connection based on prior experience, and that connection was affirmed. He apparently has the right definition of the word “yellow.”
The next day we’re in the produce section of the supermarket and my 2 year old sees a light green Granny Smith apple. “Yellow!” he tells me proudly.
I shake my head. “No, the apple isn’t yellow. The apple is green.”
He blinks, stares at the apple, and you can almost hear Stand by. Computing. He looks up at me. “Green?”
I nod. “Green.”
He looks back at the apple. “Green.” He then spies a ripe banana. “Yellow?”
“Yes! That’s yellow!” I point out the unripe banana sitting right next to it. “How about this?”
He scrutinizes it.
“G – geen?”
“Green! You got it!”

 

Children learn by taking information that we give them about connections between concepts. The concept of “yellow” is attached to the concept of “rubber duckie.” How is it attached? By a definition: “the experience I have looking at the surface of the object.” “Yellow” is attached to “school bus” in the same way.
“Bath” is also a term that my 2 year old will learn to associate with “rubber duckie,” as I use it over and over again. “Do you want to take rubber duckie to the bath?” “Rubber duckie goes splash, splash in the bath!” “You’re ready for your bath? Okay – I’m getting rubber duckie!” He will not learn to associate “bath” with “school bus” – unless he has a bath towel with a school bus on it.


Our initial input is the first step toward our children’s learning process. Then comes the process of trial and error, as they try to apply their knowledge to new situations, and see if their application is confirmed or corrected.

With each confirmation, they learn to make that connection in the future. With each correction, they learn NOT to make that connection in the future, and what connection to make instead.

Confirmations and corrections can come from direct human input. “Yes – that’s a cat!” “It’s not yellow – it’s green.”
Or take a particularly salient experience from my own early childhood: my mother telling me that it was impolite to call our guest “fat.” My bewildered response: “But you call me a skinny-minny!”
I had as a connection in my mind: “Making comments about other people’s shapes is acceptable and even an act of endearment.” However, through my mother’s direct input, I learned the difference between acceptable comments and unacceptable comments about other people’s shapes.


Confirmations and corrections can also come from experience: seeing if the world responds the way you thought it would. You’re playing jumprope with friends and you need to “jump in” while the rope is turning, without getting hit by it. By watching lots of kids jump and their subsequent success or failure, you develop a theory about when in the arc of the rope you should try to jump in order to not get hit by the rope.
It’s your turn. You jump in at the point that you assume will work. If it works – your theory is confirmed and you learn to keep jumping in when the rope is at that height. If you get hit by the rope, you’re going to try jumping in at a different point next time.

Machine learning works much the same way.

The machine often starts out with a set of information provided by a human. I provide my darling 2 year old machine with a set of pictures of cats. “This is a cat, and this is a cat, and this is a cat, and…these are all cats.”
My machine starts looking for what all these “cats” have in common. (Bearing in mind that the machine can’t actually “see” – it can look for patterns in the color codes that make up the pictures.) It may see the patterns that correspond to shape of ears, length of nose, tails and whiskers and make an attempted definition of “cat.”
The machine is then given another set of pictures labeled “cats,” “dogs,” “cockroaches,” etc., and checks whether what it would have called each picture (“cat”) matches the way I labeled the picture.
If yes – confirmation! The machine sticks with its definition of “cat.”
If no (“sorry, it’s a dog”) – the machine is going to have to self-correct, and come up with a new definition of “cat” that differentiates it from “dog.”
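That confirm-or-correct loop can be sketched in a few lines of code. Everything here is invented for illustration – the feature numbers, the labels, and the nearest-prototype rule – and real systems learn from raw pixels rather than hand-picked measurements, but the shape of the process is the same:

```python
# Toy sketch of supervised learning: build a "definition" (prototype) of each
# label from human-provided examples, then guess labels for new examples.

def centroid(examples):
    """Average each feature across a list of feature vectors."""
    n = len(examples)
    return [sum(v[i] for v in examples) / n for i in range(len(examples[0]))]

def nearest_label(item, prototypes):
    """Guess the label whose prototype is closest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: dist(item, prototypes[label]))

# Step 1: human-provided examples. Invented features: [height (cm), nose length (cm)]
training = {
    "cat": [[25, 2], [30, 3], [28, 2]],
    "dog": [[60, 10], [55, 9], [70, 12]],
}
prototypes = {label: centroid(examples) for label, examples in training.items()}

# Step 2: a new example tests the machine's current definitions.
german_shepherd = [65, 11]
print(nearest_label(german_shepherd, prototypes))  # → dog
```

If the guess comes back wrong, the correction step would fold the new example into the right label’s training set and recompute the prototypes – the code-level version of coming up with a new definition of “cat.”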

That’s machine learning. How does Google use it in search?

Google’s machines learn by being exposed to historical searches.
“Cats,” Google engineers told their sweet little computers. “Look – these are search results for ‘cats.’”
“Play,” said the engineers. “Look – these are search results for ‘play.’”
“Now you try,” said the engineers. “Show me results for ‘cats playing.’”
Google’s machines produce a search engine results page for “cats playing.” “Good try!” say the engineers. “Here – these are search results for ‘cats playing.’”
Google’s machines compare the search results they would have come up with against the historical results the engineers provide. Wherever the results match, Google receives confirmation, and that connection is strengthened. Wherever the results don’t match, Google realizes that it needs to correct its definition of “cats,” of “playing,” or of “cats playing” when they come together.
Eventually, through this process of confirmation and correction, Google learns that the query “images cats play” has a different intent than “images cats the play.”
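As a toy picture of that comparison step (the URLs and the scoring rule below are made up; real search evaluation is far more elaborate), you can think of the confirmation signal as simple overlap between the machine’s results and the historical ones:

```python
# Toy sketch: score a predicted results page against historical results.

def overlap_score(predicted, historical):
    """Fraction of predicted results that also appear in the historical list."""
    historical_set = set(historical)
    hits = sum(1 for url in predicted if url in historical_set)
    return hits / len(predicted)

historical = ["kitten-video.example", "cats-at-play.example", "cat-toys.example"]
predicted = ["kitten-video.example", "cats-musical.example", "cat-toys.example"]

score = overlap_score(predicted, historical)
# 2 of the 3 predictions match, so the score is 2/3: partial confirmation.
# A high score strengthens the current connections; a low one triggers correction.
```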

[Image: Google search results correctly showing images of cats playing]

[Image: search results for “cats the play” – different from “cats play”]

Google’s RankBrain is able to apply this knowledge of connections to decipher long-winded queries that it’s never seen before. Google Search receives some 3 billion queries per day, and 450 million of those are queries that no one has ever asked Google before. How is it going to give you accurate results?

The first step is making a connection between that query and a simpler, related query. One example Google gave where machine learning is helping out is “what’s the title of the consumer at the highest level of the food chain.”

Here, a “consumer” isn’t a synonym for a shopper at a grocery store. It’s a scientific term for an organism that eats other things lower than it on the food chain (although that does also happen to describe a shopper at a grocery store).

So this is a long-winded and complicated way to ask about the “top level of the food chain.”

Through exposure to many historical searches connecting “consumer” and “food chain,” “highest” and “top,” “top of the food chain” and “predator,” Google’s machines can make a good guess at what this query means, and which results best connect to it.
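A crude way to picture that guessing is a lookup table of learned associations that rewrites the long query into a simpler, related one. The table below is invented; real systems learn such links statistically from billions of searches rather than storing literal phrase pairs:

```python
# Hypothetical associations "learned" from historical searches.
associations = {
    "highest": "top",
    "consumer at the top level of the food chain": "apex predator",
}

def simplify(query):
    """Apply learned rewrites until the query stops changing."""
    changed = True
    while changed:
        changed = False
        for phrase, simpler in associations.items():
            if phrase in query:
                query = query.replace(phrase, simpler)
                changed = True
    return query

q = "what's the title of the consumer at the highest level of the food chain"
print(simplify(q))  # → what's the title of the apex predator
```

Two invented rewrites chained together turn the long-winded question into something Google already knows how to answer.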

But that is not all. Oh, no, that is not all.

Google Search isn’t the only way Google is using machine learning to deliver you more helpful results faster. Machine learning factors into any Google product that depends on its responsiveness to user needs and desires.
Take, for example, Allo – Google’s mobile messaging app.
Three weeks ago Google announced that Allo would give you suggested responses to your friend’s photos, like in the examples below. All you need to do is click the response you want.

[Image: Allo suggesting responses to a friend’s photo]

[Image: Allo picking the right reactions based on machine learning]

How does Allo know?
Google’s engineers have created a graph of concepts and the human-language responses that connect to them. Slowly, by being exposed to other, approved connections, the machines behind Allo expand the graph, making and modifying connections between concepts and other concepts, and between concepts and responses.
Allo should also be able to learn by analyzing its own users’ responses. If no one EVER clicks on “I love Italian food!” for the linguine picture, Allo will need to correct its connection between the phrase “I love Italian food” and images that follow the pattern of that one.
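A minimal sketch of that feedback loop might look like the following. The concept label, the candidate responses, and the update rule are all assumptions made up for illustration, not how Allo actually works:

```python
# Invented concept-to-response graph: each edge carries a learned weight.
responses = {
    "pasta photo": {"Yum!": 1.0, "I love Italian food!": 1.0, "Looks delicious": 1.0},
}

def suggest(concept, k=2):
    """Offer the k responses with the strongest learned connections."""
    weights = responses[concept]
    return sorted(weights, key=weights.get, reverse=True)[:k]

def record_click(concept, clicked):
    """Strengthen the clicked response and gently weaken the ones passed over."""
    for resp in responses[concept]:
        if resp == clicked:
            responses[concept][resp] += 0.5
        else:
            responses[concept][resp] *= 0.9

# If users keep clicking "Yum!" and never "I love Italian food!",
# the connection weights shift and the suggestions reorder themselves.
for _ in range(10):
    record_click("pasta photo", "Yum!")
print(suggest("pasta photo")[0])  # → Yum!
```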

That’s all!

You’re done with Google Search and Machine Learning 001. What response would Allo offer you for that?

  • Phew!
  • I made it!
  • I’m ready for Machine Learning 101!

Go pick one and teach that machine.

For the curious: where else does Google use machine learning?

Google’s investment in and applications of machine learning just continue to grow. In fact, Sergey Brin, the president of Google parent company Alphabet, explained that Google Brain, their machine learning project, “probably touches every single one of our main projects, ranging from search to photos to ads to everything we do.”

Here is a sampling of Google offerings based on or using machine learning:

Video Intelligence API: to identify objects in videos and make them searchable

Perspective: an abuse-detecting service for online comments to prevent trolling

Smartwatches: smart reply function that provides basic responses to conversations

TensorFlow: Google’s open source machine learning framework

Nice of them to make it open source, no? Now everyone can try to reap the benefits of machine learning. Ready to give it a whirl?

 

 

Article updated March 28, 2017