Chris is an engineer, thinker, and philosopher who enjoys exploring futuristic ideas and technology.
What Is Deep Learning?
As technology improves at an ever-increasing rate, we come closer to a time when the power of artificially intelligent machines will surpass the capabilities of the human mind. One area of A.I. development that is seeing huge advancements is deep learning.
Deep learning is a subfield of machine learning that uses algorithms and methods constructed to mimic the structure of the human brain. Because these programs are essentially digital representations of a brain, they are often referred to as “neural networks.” Deep learning neural networks can learn from and interpret data without being explicitly programmed to do so.
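To make the "mimic the brain" idea concrete, here is a minimal sketch of the forward pass of a two-layer network. The weights and inputs below are invented numbers purely for illustration; in real deep learning the weights are learned from data, not hand-coded.

```python
# A minimal sketch of a two-layer neural network's forward pass.
# Weights and inputs are made-up illustrative values.

def relu(x):
    """A common nonlinearity: pass positives through, zero out negatives."""
    return max(0.0, x)

def layer(inputs, weights, biases):
    """One fully connected layer: each unit computes relu(w . x + b)."""
    return [relu(sum(w_i * x_i for w_i, x_i in zip(w, inputs)) + b)
            for w, b in zip(weights, biases)]

x = [0.5, -1.2, 3.0]                      # input features
hidden = layer(x, [[0.2, -0.5, 0.1],      # 3 inputs -> 2 hidden units
                   [0.7, 0.3, -0.2]], [0.1, -0.1])
output = layer(hidden, [[1.0, -1.0]], [0.0])  # 2 hidden -> 1 output
print(output)
```

Stacking many such layers, and adjusting the weights automatically from examples, is what puts the "deep" in deep learning.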
How It's Changing the World
As more investment is made into deep learning, the algorithms that are developed are becoming increasingly complex. This has resulted in computer programs that can perform tasks once reserved for humans, including things that were previously extremely difficult or even impossible for computers. The power of a microprocessor has never been greater than it is today. Here are some ways that deep learning is revolutionizing our world.
- Image Recognition/Classification
- Geoprocessing of Large Datasets
- Deepfake Audio/Video Generation
- Learning to Play Video Games
1. Image Recognition/Classification
Deep learning A.I. algorithms have been developed that can look at images and classify them based on what they actually depict. If you use Facebook, then you are probably already somewhat familiar with this technology. Google’s YouTube is also a big user of it: user-created videos can automatically be tagged and categorized based on what the algorithm sees, and videos can be flagged for further review if the algorithm detects content that is not allowed on the site.
Image classification goes beyond just trying to differentiate between dogs, cats, and foxes. In fact, newer algorithms classify images on a pixel-by-pixel basis; when an algorithm classifies each pixel individually, the process is called semantic segmentation. Another approach, called instance segmentation, additionally distinguishes the individual objects in a scene, so an image of a person can more readily be broken into discrete components such as the human figure, the background, and the foreground.
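The per-pixel idea behind semantic segmentation can be sketched in a few lines. In a real system a deep convolutional network produces the per-class scores; the tiny score grid below is invented purely to show the final step, where each pixel is assigned the class with the highest score.

```python
# Toy sketch of semantic segmentation: every pixel gets a class label.
# The per-pixel class scores here are invented illustrative numbers.

CLASSES = ["background", "dog", "cat"]

# scores[row][col] is a list of class scores for that pixel
scores = [
    [[0.9, 0.05, 0.05], [0.2, 0.7, 0.1]],
    [[0.1, 0.8, 0.1],   [0.6, 0.2, 0.2]],
]

# For each pixel, pick the index of the highest-scoring class.
label_map = [[max(range(len(CLASSES)), key=lambda c: pixel[c])
              for pixel in row]
             for row in scores]

print(label_map)
```

The resulting label map has the same shape as the image, with one class index per pixel, which is exactly what lets an algorithm trace the outline of a figure against its background.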
2. Geoprocessing of Large Datasets
In the world of Geographic Information Systems (GIS), deep learning is being used to improve the speed and accuracy of dataset creation and analysis on a large scale. New algorithms can be used to generate a wide variety of land cover data simply by analyzing an aerial photograph.
For example, one deep learning GIS tool can generate data that describes vegetation type, vegetation density, and habitat viability just based on a high-resolution aerial photograph. In addition, using semantic or instance segmentation, other features such as building footprints and the limits of paved surfaces, can be extracted from the imagery and turned into new datasets.
Consider the images below for a simplified example of what can be done with some basic deep learning tools:
Another tall task that deep learning is trying to tackle is making predictions by analyzing relationships between multiple spatially varied datasets. For example, you could develop a model to predict risks for vehicular accidents based on a point cloud representing past accidents. If you integrate this with existing traffic models and data on regional growth, you could help to predict and plan for future vehicular accidents.
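The simplest version of this spatial-prediction idea can be sketched without any deep learning at all: bin past accident locations into a grid and treat each cell's count as a crude risk score. The coordinates below are invented; a real model would combine many datasets (traffic volume, regional growth, road geometry) rather than raw counts.

```python
# A crude sketch of spatial risk scoring from past accident points.
# Accident coordinates are made-up illustrative values.

accidents = [(1.2, 0.4), (1.5, 0.6), (3.1, 2.8), (1.1, 0.9)]  # (x, y)
CELL = 1.0  # grid cell size

# Count how many past accidents fall in each grid cell.
risk = {}
for x, y in accidents:
    cell = (int(x // CELL), int(y // CELL))
    risk[cell] = risk.get(cell, 0) + 1

# The cell with the most past accidents is the predicted hotspot.
hotspot = max(risk, key=risk.get)
print(hotspot, risk[hotspot])
```

A deep learning model replaces the simple count with a learned function of many overlapping spatial layers, but the output is the same kind of thing: a map of predicted risk.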
3. Deepfake Audio/Video Generation
Deep learning can do much more than analyze static imagery. This powerful technology has been used to analyze audio and video data and then generate fake videos that appear to be real. This is quite an incredible advancement in A.I. technology that should have everyone ready to start doubting everything they see on the Internet (if they haven't done so already). Deepfake technology has already made headlines numerous times in the past few years.
In 2019, A.I. deep learning technology was used to deepfake the voice of a CEO, which thieves then used to steal more than $243,000 from the company. The scammers used an A.I. program to mimic the CEO's voice and direct his subordinates to wire money to a third-party account. You can read more about this incident here: Scammer Successfully Deepfakes CEO's Voice to Steal $243,000.
Even as criminals and hackers begin to use this technology for nefarious purposes, there are still many other uses for it. For instance, the movie industry uses it for mapping the movements of people's faces onto animated or otherworldly characters. It could also be used to easily replace actors with new ones in an old movie. Below is a video showing some clips of the movie The Matrix as if the main character, Neo, was played by Will Smith instead of Keanu Reeves. It's actually very good!
There are certainly some entertaining uses for deepfake algorithms; however, I implore you to think carefully about the potential implications that this technology can have. For example, the likeness of a prominent political figure could be used to easily spread misinformation. This can have far-reaching implications for the political process and governance in general.
For example, a few years ago, a deepfake video of Barack Obama surfaced. Many people were fooled by the video; fortunately, however, no major harm came of it. Take a look at clips of that video below:
Can we trust anything we see online anymore?
4. Learning to Play Video Games
For the most part, playing video games is no big deal. That is, if you are a human. Actually getting a computer to learn how to play a video game (and progress through it) is a phenomenal feat. Instead of programming an algorithm to play a specific game or complete a specific task, deep learning methods can be used instead. With the new neural networks, the computer program teaches itself how to play the game by using trial and error. Over time, the deep learning algorithms can master a game by progressively figuring out which strategies work and which don't.
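The trial-and-error idea can be sketched with tabular Q-learning on a tiny made-up "game": an agent on a short 1-D track must discover that walking right reaches the goal. Real game-playing systems like the ones described below combine this kind of learning with deep neural networks; everything here (the track, the rewards, the parameters) is a toy illustration.

```python
import random

# Tabular Q-learning on a toy game: positions 0..4, goal at 4,
# actions are step left (-1) or right (+1). All values are illustrative.

GOAL, ACTIONS = 4, [-1, +1]
q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

random.seed(0)
for episode in range(300):
    s = 0
    while s != GOAL:
        # Explore sometimes; otherwise exploit the best action found so far.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), GOAL)          # move, clamped to the track
        reward = 1.0 if s2 == GOAL else 0.0    # reward only at the goal
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

# After training, the greedy policy at every position is "step right".
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)}
print(policy)
```

No one ever tells the agent that "right" is correct; it stumbles onto the goal by exploration, and the reward signal gradually propagates backward until the winning strategy dominates, which is the same principle behind learning Atari games from raw play.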
In 2015, Google's DeepMind was able to learn how to play several simple Atari video games. DeepMind eventually became so good at these games that it could match or even beat human competitors. The video below shows DeepMind learning how to play a game called Atari Breakout:
Since 2015, DeepMind has learned to play dozens of other games, including the real-time strategy game StarCraft II. After thousands of trial matches of StarCraft II, DeepMind now ranks as one of the top players in the world. Even the most seasoned professional players find DeepMind to be a very tough competitor.
It's one thing to learn to play one-on-one video games; however, think about how challenging it would be to learn to play a multiplayer first-person game. After racking up what would amount to more than 4 years of gaming experience playing Quake III, DeepMind was finally able to collaborate with team members in order to beat its human competitors. This A.I. technology got so good at the game that it was able to categorically outperform everyone in a tournament featuring 40 very good players.
You can read more about this breakthrough technology on the DeepMind Blog.
© 2019 Christopher Wanamaker