Five Ways We Use AI in Everyday Life - TurboFuture - Technology

Five Ways We Use AI in Everyday Life

Jillian has been interested in computer science since her second year of high school. She finds AI fascinating, even if the math stumps her.

#1 - Cameras

Since the dawn of the neural network, training computers to recognize human speech has been a challenge for software engineers in the field of artificial intelligence. Human language is, after all, immensely complex: English alone contains 44 unique sounds, or phonemes, which can be combined in nearly endless ways. Add natural variation in patterns of individual speech to the mix, and you end up with a massive set of potential sound bites to parse. While doing so isn’t impossible, it requires a lot of computational heft, which most organizations simply lack the means to acquire.
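To get a feel for the scale involved, here’s a quick back-of-the-envelope calculation in Python. The phoneme count comes from the paragraph above; the utterance length is an arbitrary example:

```python
# Rough illustration: with 44 phonemes, the number of possible
# sequences grows exponentially with utterance length.
def phoneme_sequences(n_phonemes: int, length: int) -> int:
    """Count the distinct phoneme sequences of a given length."""
    return n_phonemes ** length

# Even a short, five-phoneme utterance has a huge search space.
print(phoneme_sequences(44, 5))  # 164,916,224 possible sequences
```

Real speech recognizers never enumerate this space directly, of course, which is exactly why the clever tricks described below matter.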

Although computer-based language recognition is difficult to realize for the reasons outlined above (and for several others I won’t go into here), software developers have used a variety of clever tricks to produce tools that can categorize human speech with passing accuracy. As these tools have become more streamlined, tech companies have started looking for ways to integrate them into their products.

That brings us to the GoPro. Since its development in 2004 as a personal camera for extreme sports hobbyists, the device has undergone a long line of improvements. One of the more recent changes — and the one that involves AI — is added support for voice commands such as “Start video” and “Take a photo”. In order to parse these commands, the GoPro implements a machine learning algorithm that matches sounds to words, much in the way smartphones use voice recognition software to power personal assistants like Cortana and Siri.
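Voice-command parsing like this typically splits into two stages: speech-to-text, then matching the transcript to an action. The sketch below illustrates only the second stage; the command strings come from the article, but the action names and matching logic are hypothetical, not GoPro’s actual firmware:

```python
# Hypothetical command table: the action names on the right are
# invented for illustration, not GoPro's real interface.
COMMANDS = {
    "start video": "begin_recording",
    "stop video": "end_recording",
    "take a photo": "capture_still",
}

def dispatch(transcript: str) -> str:
    """Normalize a speech-to-text transcript and look up the matching action."""
    key = transcript.lower().strip().rstrip(".!")
    return COMMANDS.get(key, "no_op")

print(dispatch("Start video"))  # begin_recording
```

The hard part, of course, is the speech-to-text step that produces the transcript in the first place; that is where the machine learning lives.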


GoPro, however, has hardly cornered the market for AI-powered cameras. It faces strong competition from Google, whose Clips camera features an array of built-in functions driven by machine learning algorithms.

If you’d like to learn more about how these algorithms were built and trained, check out Aseem Agarwala’s article on the subject. It really is a fascinating read for anyone interested in the intersection of photography and artificial intelligence.

#2 - Email

Email has existed in some form since the 1970s, but it’s only recently that providers have begun to realize the potential machine learning has to improve their services.

Just last year, for instance, Google released Smart Compose, a Gmail feature that evaluates your sentences as you type and suggests ways to complete them in real time.

To give an example of how the service works, imagine booting up your Gmail account and starting an email to your boss with the following phrase:

“I’ve been meaning to ask you about...”

Deriving the ultimate meaning of the sentence this phrase sets up is tough even for a human: there’s a nearly infinite range of things you could want to ask your boss about, and trying to guess the right one at random is almost always a wasted effort.

Here’s where AI comes in. Much like humans use context clues to predict what other people will say, machine learning algorithms have been developed that gather such clues from an email’s subject line and use them to intuit what the writer most likely wants to say next. In the example above, you might have written “Yesterday’s Incident” as the subject, in which case Smart Compose might complete the phrase like so:

“I’ve been meaning to ask you about the incident yesterday.”
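To make the idea concrete, here’s a toy sketch of context-conditioned completion. It bears no resemblance to Gmail’s real neural model; it simply picks whichever candidate completion shares the most words with the subject line:

```python
# Toy "context clue" completion: score each candidate ending by how
# many words it shares with the email's subject line.
def suggest_completion(subject: str, candidates: list[str]) -> str:
    subject_words = set(subject.lower().split())

    def overlap(candidate: str) -> int:
        return len(subject_words & set(candidate.lower().split()))

    return max(candidates, key=overlap)

candidates = [
    "the incident yesterday.",
    "the budget for next quarter.",
]
print(suggest_completion("Yesterday's Incident", candidates))
```

A real model conditions on far more than word overlap (the full message so far, the recipient, and so on), but the core idea of ranking continuations by context is the same.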

There’s more to it, of course, but I’ll direct you to the experts at Google’s AI blog for the time being. Their explanation goes into much more detail than I could hope to put in a single article.


The series of steps Smart Compose takes to autocomplete your sentences.

#3 - Phones

When the iPhone 4S debuted in 2011, one of its shiniest new features was Siri, a built-in personal assistant you could interface with in real time.

Siri began life as a standalone app, developed by Siri Inc., a spin-off of the SRI International Artificial Intelligence Center, a research organization based in California, and released in early 2010. Within two months of its release, the company was acquired by Apple, after which Siri became a dedicated component of iOS.

Siri relies on a variety of machine learning techniques to interpret users’ speech in real time, allowing it to perform tasks ranging from ones as simple as checking the weather to ones as complicated as sending money to another person through Apple Pay.

Although initial reviews of the software were critical, and some claim to this day that Siri lacks “innovation” compared to other virtual assistants, its usefulness cannot be denied. Even if its current state seems rigid, the fact that AI is capable of mapping specific phrases to actions shows just how far our understanding of computerized language recognition has advanced.
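That phrase-to-action mapping can be sketched in miniature. The intents and keywords below are invented for illustration and are nothing like Siri’s actual pipeline, which uses learned models rather than keyword lists:

```python
# Hypothetical intent table: keyword lists invented for illustration.
INTENTS = {
    "check_weather": ["weather", "forecast", "rain"],
    "send_money": ["send", "pay", "transfer"],
}

def classify(utterance: str) -> str:
    """Return the first intent whose keywords appear in the utterance."""
    words = utterance.lower().split()
    for intent, keywords in INTENTS.items():
        if any(k in words for k in keywords):
            return intent
    return "unknown"

print(classify("What's the weather like today?"))  # check_weather
```

The jump from this sketch to a real assistant is exactly what the machine learning buys you: handling the countless phrasings a keyword list could never enumerate.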

In addition to making virtual helpers tick, machine learning helps optimize the photos you capture with your phone’s built-in camera. Because AI software is so good at detecting nuances in light, hue, saturation, and contrast (just to name a few), it can automatically detect whether or not an image in frame is picture perfect. If something’s lacking, the software can correct it by nudging settings in the opposite direction—bumping up the brightness in a dark room, for example. The result, overall, is a better picture—no photography credentials required.
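Here’s a back-of-the-envelope version of that brightness nudge, assuming grayscale pixel values from 0 to 255 and an arbitrary target mean; real camera software is far more sophisticated, adjusting many settings at once:

```python
# Toy auto-exposure: if the average pixel brightness falls below a
# target, scale every pixel up toward that target.
def auto_brighten(pixels: list[int], target: float = 128.0) -> list[int]:
    """Scale 0-255 grayscale pixels toward a target mean brightness."""
    mean = sum(pixels) / len(pixels)
    if mean >= target:
        return pixels  # already bright enough; leave it alone
    factor = target / mean
    return [min(255, round(p * factor)) for p in pixels]

dark_room = [40, 60, 80, 20]
print(auto_brighten(dark_room))  # brightened copy with mean ~128
```

The `min(255, ...)` clamp matters: naive scaling would blow out pixels that are already bright, which is one reason real algorithms use curves rather than a flat multiplier.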


#4 - Social Media

Social media is everywhere, and as more people join sites like Instagram and Twitter with the goal of finding quality content published by members of the community, the value of using machine learning to optimize content delivery is becoming readily apparent.

If you use social media regularly, you’re likely familiar with the concept of a newsfeed, an ordered display of “posts” ranging from videos of rocket launches to photos of your dad’s latest business trip. As a consumer, you have the choice either to engage with a post or to scroll past it onto the next, repeating the process until you leave the site.

“The core idea with newsfeeds,” says Abhinav Sharma, a product developer for the social media site Quora, “is to use ML on past behavior to predict action probabilities in order to determine the most engaging stories and put them on top.”

In short, the posts you see immediately after logging into your Facebook, Twitter, or Instagram account are there not by chance but as the result of your own actions. Every click, comment, and share you make today influences the content you’ll see tomorrow, a self-perpetuating cycle that, in the ideal case, yields a constant supply of rich, engaging media for your consumption.
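Sharma’s description boils down to: estimate an engagement probability for each post from past behavior, then sort. Here’s a toy version, with invented click counts standing in for a real ML model:

```python
# Invented per-topic click history standing in for a learned model.
past_clicks = {"rockets": 12, "business": 2, "cats": 30}

posts = [
    {"title": "SpaceX launch recap", "topic": "rockets"},
    {"title": "Dad's trip to Omaha", "topic": "business"},
    {"title": "Kitten compilation", "topic": "cats"},
]

def engagement_score(post: dict) -> float:
    """Crude proxy for P(engage): share of past clicks on this topic."""
    total = sum(past_clicks.values())
    return past_clicks.get(post["topic"], 0) / total

# The feed puts the highest predicted-engagement posts on top.
feed = sorted(posts, key=engagement_score, reverse=True)
print([p["title"] for p in feed])  # cats first, business last
```

Note the feedback loop baked into even this toy: whatever you click next updates `past_clicks`, which reshapes tomorrow’s feed.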


My customized Quora feed, courtesy of ML.

#5 - Video Games

Think back on the last video game you played. Was it an RPG? An MMO? A platformer? Regardless of its genre, the game likely featured non-player characters (NPCs) with which you could interact to influence your progression through the story—a Goomba in Super Mario Bros., for example.

It may not have seemed obvious while you were playing against them, but those NPCs were driven by artificial intelligence routines (sometimes machine learning, more often simpler rule-based systems), weighing decisions against a set of potential outcomes to determine the best action to take against you.
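One classic way game developers implement this weighing of outcomes is utility-based AI: score every available action, then pick the highest scorer. The scoring formulas below are invented for illustration:

```python
# Minimal utility-based AI: each action gets a score from the current
# game state, and the NPC takes whichever action scores highest.
def choose_action(npc_health: int, player_distance: int) -> str:
    """Score each possible action and return the highest-scoring one."""
    scores = {
        "attack": (100 - player_distance) + npc_health,
        "take_cover": (100 - npc_health) + player_distance,
        "flee": 2 * (100 - npc_health),
    }
    return max(scores, key=scores.get)

print(choose_action(npc_health=90, player_distance=10))  # attack
print(choose_action(npc_health=15, player_distance=40))  # flee
```

Tuning those formulas is what makes an enemy press the attack when you’re exposed and dive for cover when the fight turns against it.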

Artificial intelligence has been at the core of video game development since as far back as 1940, when the Nimatron, a machine built to play the game Nim, won roughly ninety percent of its matches against human players at the New York World’s Fair. In the decades that followed, Arthur Lee Samuel and Alan Turing wrote similar programs for checkers and chess, respectively, both implementing educated decision-making. (Samuel, in fact, coined the term “machine learning” to describe this process.) Though both programs have been improved upon since their original conception, and would be considered rather crude implementations of artificial intelligence today, they laid the groundwork for an enduring line of study in the field of computer science.

Today, you’ll find some form of artificial intelligence in just about any video game on the market, from MMORPGs to real-time strategy games like StarCraft II. It’s what prompts enemies to take cover when you start shooting at them—and to close in for an attack when your defenses are lowered. It makes playing against a computer exciting, and to an extent, unpredictable, which is just what you want when the goal is to emulate the experience of playing against a human.

Within the last couple of years, advances in machine learning technology have led researchers to develop bots with skills bordering on the unimaginable. One, designed by a group at OpenAI, a nonprofit research organization founded by Elon Musk, among others, beat a team of professional Dota 2 players two games to one during an exhibition match held two weeks before the game’s annual world tournament. The victory prompted a storm of media coverage and stirred sentiments ranging from admiration to uncertainty, but when the dust settled, one thing was clear: AI had proven itself capable of analyzing and responding to in-game situations that mirrored the complexity of the real world.

“The hope,” say the bot’s developers, “is that systems which solve complex video games will be highly general, with applications outside of games.”

Only time can tell what those applications will look like, but with what we’ve seen so far, they’re bound to be impressive.


The bot performed consistently well against top-rated human players, estimating its odds of victory at times above 85%.