How Google Uses Artificial Intelligence and Machine Learning

Srasthy Chaudhary
5 min read · Oct 20, 2020

Google services such as its image search and translation tools use sophisticated machine learning, which allows computers to see, listen and speak in much the same way as humans do.

Machine learning is the term for the current cutting-edge applications of artificial intelligence. The basic idea is that, by teaching machines to “learn” from huge amounts of data, they become increasingly better at carrying out tasks that traditionally only human brains could complete.

“Machine learning is a core, transformative way by which we’re re-thinking how we’re doing everything,” Google CEO Sundar Pichai said on the company’s earnings call in October 2015. “We are thoughtfully applying it across all our products, be it search, ads, YouTube, or Play. And we’re in early days, but you will see us — in a systematic way — apply machine learning in all these areas.”

Google’s AI Strategy

Google Uses Artificial Intelligence And Satellite Data To Prevent Illegal Fishing:
An initiative taken by Google is already helping to protect vulnerable marine life in some of the world’s most delicate ecosystems. Using the publicly broadcast Automatic Identification System (AIS) for shipping, machine learning algorithms have been shown to accurately identify illegal fishing activity in protected areas.

By plotting a ship’s course and comparing it to patterns of movement where the ship’s purpose is known, computers are able to “recognize” what a ship is doing.

Brandt said: “All 200,000 or so vessels which are on the sea at any one time are pinging out this public notice saying ‘this is where I am, and this is what I am doing.’”

This results in the broadcasting of around 22 million data points every day, and Google engineers found that by applying machine learning to this data they were able to identify the reason any vessel is at sea — whether it is a transport ferry, container ship, leisure vessel or fishing boat.
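To make the idea concrete, here is a minimal sketch of that kind of classifier, using scikit-learn. Google’s actual pipeline is far more sophisticated and is not public; the movement features and values below are made up purely for illustration.

```python
# A minimal sketch, not Google's actual pipeline: classify what a vessel
# is doing from simple movement features derived from AIS pings.
# All feature values below are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

# Each row: [mean speed (knots), speed variance, mean turn rate (deg/min)]
# Fishing vessels tend to move slowly and turn often while trawling;
# cargo ships hold a steady course at higher speed.
X_train = [
    [3.1, 4.2, 18.0],   # trawling-like pattern
    [2.7, 3.9, 22.5],   # trawling-like pattern
    [14.8, 0.6, 0.8],   # steady transit
    [15.2, 0.4, 1.1],   # steady transit
]
y_train = ["fishing", "fishing", "cargo", "cargo"]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Classify an unseen track summary
print(clf.predict([[3.4, 5.0, 20.1]]))  # -> ['fishing']
```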

“With that dataset, and working with a couple of wonderful NGOs (Oceana and SkyTruth), Google was able to create Global Fishing Watch — a real-time heat map that shows where fishing is happening,” says Brandt.

The initiative has already led to positive outcomes in the fight against illegal fishing in protected marine environments.

How is ML used in Project Sunroof?

Project Sunroof, launched in 2015, involves training Google’s systems to examine satellite data and identify how many homes in a given area have solar panels mounted on their roofs. It can also identify areas where the opportunity to collect solar energy is being missed because no panels are installed.

This resulted in a machine learning system that takes Google Earth satellite images and combines them with meteorological data to give an instant assessment of whether a particular location would be a good candidate for solar panels, and how much energy — as well as money — a householder might save.

Google’s image recognition algorithms were trained to spot solar arrays in satellite images. The system was quickly put to use by the city of San Jose in California as part of an initiative to identify locations where 1 gigawatt of solar energy could be generated from new panels.
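As a rough illustration of the kind of estimate involved, here is a back-of-the-envelope sketch. Every constant here is an illustrative assumption, not a Project Sunroof figure.

```python
# A back-of-the-envelope sketch of the kind of estimate Project Sunroof
# produces; all constants are illustrative assumptions, not Google's.

def solar_estimate(usable_roof_m2, annual_irradiance_kwh_per_m2,
                   panel_efficiency=0.20, system_losses=0.14,
                   price_per_kwh_usd=0.13):
    """Estimate yearly solar generation and bill savings for one roof."""
    generated_kwh = (usable_roof_m2 * annual_irradiance_kwh_per_m2
                     * panel_efficiency * (1 - system_losses))
    return generated_kwh, generated_kwh * price_per_kwh_usd

kwh, usd = solar_estimate(usable_roof_m2=40, annual_irradiance_kwh_per_m2=1700)
print(f"~{kwh:,.0f} kWh/year, ~${usd:,.0f}/year saved")
```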

How does Google use Big Data in practice?

Google signalled how important deep learning was to its business with its 2014 acquisition of DeepMind, a UK-based deep learning specialist. The startup’s pioneering work involved connecting cutting-edge neuroscience research to machine learning techniques, resulting in systems that act more like “real” intelligence (i.e. the human brain).

Google’s first practical use of deep learning was in image recognition, where it was used to sort through millions of internet images and accurately classify them according to what was in them, in order to return better search results. Now, Google’s use of deep learning in image analytics has extended to image enhancement. The systems can fill in or restore missing details in images, simply by learning from what’s already there in the image, as well as what they have learned from other, similar images.
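Google’s production models are proprietary, but the underlying classification technique can be illustrated with an off-the-shelf pretrained ImageNet model in Keras. The file path below is a placeholder, and utility names may differ slightly between TensorFlow versions.

```python
# Not Google's production pipeline: a sketch of image classification with a
# pretrained ImageNet model from Keras. "photo.jpg" is a placeholder path.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")

img = tf.keras.utils.load_img("photo.jpg", target_size=(224, 224))
x = tf.keras.utils.img_to_array(img)[np.newaxis, ...]
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)

preds = model.predict(x)
# Print the three most likely ImageNet labels with their scores
for _, label, score in tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2f}")
```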

In video analytics, Google Cloud Video Intelligence has opened up video analytic technology to a much wider audience, allowing video stored on Google’s servers to be analysed for context and content. This enables the generation of automated summaries, or even security alerts when the AI detects something fishy going on in a video.
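Cloud Video Intelligence is a public API, so a label-detection call can be sketched directly. The bucket URI below is a placeholder, and exact client details may vary between versions of the google-cloud-videointelligence library.

```python
# A hedged sketch of calling the Google Cloud Video Intelligence API for
# label detection; the input URI and credentials are placeholders.
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()
operation = client.annotate_video(
    request={
        "input_uri": "gs://your-bucket/your-video.mp4",  # placeholder URI
        "features": [videointelligence.Feature.LABEL_DETECTION],
    }
)
result = operation.result(timeout=300)  # long-running operation

# Print the labels the service detected across the whole video
for label in result.annotation_results[0].segment_label_annotations:
    print(label.entity.description)
```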

Google Assistant speech recognition AI uses deep learning to understand spoken commands and questions, thanks to techniques developed by the Google Brain project.

Google provides better video recommendations on YouTube by studying viewers’ habits and preferences as they stream content, and working out what will keep them tuned in. Google already knew from the data that suggesting videos viewers might want to watch next would keep them hooked, and keep those advertising dollars rolling in.
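YouTube’s actual ranking models are not public. As a toy stand-in, here is a simple item-item collaborative filtering sketch over made-up watch histories, which captures the basic idea of recommending from viewing habits.

```python
# Not YouTube's real system: a toy illustration of learning from watch
# histories. Rows are users, columns are videos (1 = watched); we recommend
# via item-item cosine similarity. All data is invented.
import numpy as np

watches = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 1, 1, 0, 1],
])

# Item-item cosine similarity between video columns
norms = np.linalg.norm(watches, axis=0, keepdims=True)
sim = (watches.T @ watches) / (norms.T @ norms + 1e-9)

user = watches[0]            # user 0's history
scores = sim @ user          # score every video for this user
scores[user > 0] = -np.inf   # don't re-recommend already-watched videos
print("recommend video index:", int(np.argmax(scores)))
```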

Google’s self-driving car division, Waymo, uses deep learning algorithms in its autonomous systems, enabling self-driving cars to get better at analysing what’s going on around them and reacting accordingly.

An algorithmic improvement to “Did you mean?” — Google’s spell-checking feature for Search — will enable more accurate and precise spelling suggestions. Google says the new underlying language model contains 680 million parameters (the variables that determine each prediction) and runs in less than three milliseconds. “This single change makes a greater improvement to spelling than all of our improvements over the last five years,” Google head of search Prabhakar Raghavan said in a blog post.
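Google’s new model is a large neural network whose details are not public. For context, here is the classic edit-distance baseline idea that such models improve upon, with a tiny made-up vocabulary of word frequencies.

```python
# Not Google's neural speller: the classic edit-distance baseline idea.
# The vocabulary and frequency counts below are made up for illustration.
WORD_COUNTS = {"search": 900, "speech": 400, "spell": 300, "sea": 250}

def edits1(word):
    """All strings one edit (delete/replace/insert/transpose) away."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    """Pick the most frequent known word within one edit of the input."""
    candidates = [w for w in edits1(word) | {word} if w in WORD_COUNTS]
    return max(candidates, key=WORD_COUNTS.get, default=word)

print(correct("serch"))  # -> 'search'
```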

Google says melodies hummed in Search are transformed by machine learning algorithms into number-based sequences. The models are trained to identify songs based on a variety of sources, including humans singing, whistling, or humming, as well as studio recordings. The algorithms also abstract away all the other details, like accompanying instruments and the voice’s timbre and tone. What remains is a fingerprint Google compares with thousands of songs from around the world to identify potential matches in real time, much like the Pixel’s Now Playing feature.
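As a toy illustration of turning a melody into a number-based sequence, here is a sketch that reduces tunes to their pitch contour (the semitone steps between notes, which discards key, timbre and instrumentation) and matches a hum against a tiny made-up catalog. Google’s real fingerprints and matching are, of course, far more elaborate.

```python
# A toy illustration of the "number-based sequence" idea, not Google's
# actual fingerprinting: reduce a melody to its pitch contour and compare.
import numpy as np

def contour(midi_pitches):
    """Melody -> transposition-invariant sequence of semitone intervals."""
    return np.diff(np.asarray(midi_pitches, dtype=float))

CATALOG = {
    "Twinkle Twinkle": contour([60, 60, 67, 67, 69, 69, 67]),
    "Ode to Joy":      contour([64, 64, 65, 67, 67, 65, 64]),
}

def identify(hummed_pitches):
    """Return the catalog song whose contour is closest to the hum's.
    For simplicity, this toy assumes equal-length contours."""
    hum = contour(hummed_pitches)
    return min(CATALOG, key=lambda name: np.abs(CATALOG[name] - hum).sum())

# The same tune hummed a fifth higher: different key, same contour
print(identify([67, 67, 74, 74, 76, 76, 74]))  # -> 'Twinkle Twinkle'
```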

On the eCommerce and shopping front, Google says it has built cloud streaming technology that enables users to see products in augmented reality (AR). With cars from Volvo, Porsche, and other auto brands, for example, smartphone users can zoom in to view the vehicle’s steering wheel and other details — to scale.
