Artificial intelligence is a revered area of research that has found traction in modern business. How many of us really understand what's going on under the hood, though? Why are modern artificial intelligence systems not like those we see in the movies, and what makes them different from simple automation, anyway?
The kind of AI we usually see in sci-fi films is general AI. This mimics human behaviour and can do anything a human can without anyone explicitly teaching it how to. It simply learns whatever it needs to know as it goes along, as a baby does when it's first developing cognitive skills.
General AI, also called strong, full, or hard AI, was a focal point of early research, when scientists believed they could produce machines with capabilities that would rival human beings. That all fell apart when scientists recognized one of its central problems. General AI is intended to replicate human intelligence. Sadly for all of us waiting for sci-fi movies to become sci-reality movies, we don't really understand how human intelligence works, which makes replicating it very difficult. This is why we don't yet have robots that can develop their own, believable personalities and happily hold philosophical conversations with us.
Narrowing the artificial intelligence focus
AI research in the 1990s took a different approach. Researchers began focusing on narrow or applied AI. This branch of artificial intelligence didn’t try to learn anything and everything from the ground up. It focused on specific problems and developed machines that could handle just those things.
One of the most common branches of AI is known as machine learning. This uses various statistical models to analyze historical data and find patterns in it. It’s a computing model that supports many applications of narrow AI, ranging from the simple to the complex.
Scientists typically train machine learning algorithms in a process known as supervised learning. Suppose we want a computer to recognize images of houses. We show it two sets of pictures: one set has only pictures of houses, and the other contains only pictures of other things. The pictures are tagged so that the computer knows which is a house picture and which isn't.
The machine learning algorithm produces a statistical model of all the pictures that includes a baseline suggesting what combinations of data an image of a house typically contains. It can then use that model to analyze images that it hasn't seen yet. It compares each new image against the statistical model and produces a score showing its level of confidence that the picture includes a house.
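The process above can be sketched in a few lines of code. This is a deliberately simplified stand-in for a real machine learning algorithm, assuming each picture has already been reduced to a couple of numeric features; the feature values, labels, and the nearest-centroid approach are all invented for illustration, but they show the shape of the idea: summarize the tagged sets into a model, then score new pictures against it.

```python
def centroid(vectors):
    """Average the feature vectors of one tagged set into a baseline."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train(house_pics, other_pics):
    """'Training' here just boils each tagged set down to its centroid."""
    return centroid(house_pics), centroid(other_pics)

def confidence_house(model, picture):
    """Score an unseen picture: closer to the house baseline = higher confidence."""
    house_c, other_c = model
    d_house = distance(picture, house_c)
    d_other = distance(picture, other_c)
    return d_other / (d_house + d_other)  # between 0 and 1; above 0.5 reads as "house"

# Invented training data: imagine each pair of numbers encodes features
# like roofline angles or window counts extracted from a picture.
houses = [[0.9, 0.8], [0.8, 0.9], [1.0, 0.7]]
others = [[0.1, 0.2], [0.2, 0.1], [0.0, 0.3]]

model = train(houses, others)
print(confidence_house(model, [0.85, 0.8]) > 0.5)  # resembles the house set
print(confidence_house(model, [0.15, 0.2]) > 0.5)  # resembles the other set
```

Note that, just as the article says, nobody wrote a rule describing what a house looks like; the "knowledge" lives entirely in the statistics of the tagged examples.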
The interesting thing about machine learning is that programmers don’t code explicit rules for it. They don’t try to write programs describing what a house looks like. When the machine learning algorithm produces its result, programmers can’t follow a decision tree showing the inner workings of the program. It just “knows” based on its experience. That sounds an awful lot like how we as humans make decisions.
Machine learning makes way for deep learning
Scientists are applying machine learning in many areas, but one of the most successful has been interfaces. In addition to recognizing faces, narrow AI has become adept at recognizing speech and turning it into text. It has also learned how to interpret that text and respond to it intelligently, forming the basis for modern voice interfaces. When you ask a digital assistant to call your mother or find you the nearest gas station, narrow AI is busy at work in the background fulfilling your request.
One of the most promising areas for narrow AI in the future is in information retrieval. We see it all around us: Modern workers are drowning in data. We’re producing more of it than ever before, thanks to our smartphones, sensors, and enterprise apps. It’s becoming impossible to make sense of this data, which is where big data analytics comes in.
Machine learning is rapidly giving way to its cousin, deep learning. This applies the same principles as machine learning but uses more computing power to produce its model. Deep learning often uses a neural network, which is a statistical analysis method using multiple layers to analyze various aspects of the data. It’s producing more accurate, insightful learning algorithms than its simpler predecessor.
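The "multiple layers" idea can be sketched directly. This toy forward pass is an assumption-laden illustration, not a real deep learning system: the weights are invented rather than learned, and the network is far too small to do anything useful, but it shows how each layer transforms the output of the one before it so that deeper layers can analyze progressively more abstract aspects of the data.

```python
import math

def sigmoid(x):
    """Squash a weighted sum into the 0-1 range."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: each unit takes a weighted sum of the inputs, then squashes it."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(x, layers):
    """Feed the input through every layer in turn; later layers see only
    what earlier layers computed, not the raw data."""
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# Invented weights for a 2-input network with two hidden layers
# and a single confidence-style output.
network = [
    ([[0.5, -0.2], [0.3, 0.8]], [0.0, 0.1]),  # hidden layer 1: 2 -> 2
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),  # hidden layer 2: 2 -> 2
    ([[0.7, 0.7]], [-0.5]),                   # output layer: 2 -> 1
]

score = forward([0.9, 0.1], network)[0]
print(0.0 < score < 1.0)  # a single confidence-style score
```

In a real deep learning system the same structure is scaled up to millions of weights, and the extra computing power the article mentions goes into learning those weights from data rather than hand-picking them.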
Researchers are applying these and other narrow AI algorithms to the ocean of data available to modern businesses. By detecting patterns in data that humans can’t easily see, deep learning helped Chinese web giant Baidu to target ads on its online service.
What makes AI-powered analytics different from simple automation, though? Is it just another way to crunch a set of numbers? There’s some truth to that, and it’s easy for overeager marketers to attach an AI label to something that uses available numbers to do a simple job. AI-powered analytics will go a step further than simple spreadsheet crunches, often accessing unstructured data like social media posts and articles on websites. It will use narrow AI algorithms like deep learning to understand them and extract meaningful information from them (like sentiment).
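To make "extract meaningful information like sentiment" concrete, here is a toy word-list scorer. Real AI-powered analytics would use learned models such as deep learning rather than a hand-written lexicon, and the word lists here are invented, but the output, a sentiment score pulled out of free-form text, is the kind of signal the article is describing.

```python
# Invented lexicons; a real system would learn these associations from data.
POSITIVE = {"great", "love", "clean", "friendly", "comfortable"}
NEGATIVE = {"dirty", "noisy", "rude", "terrible", "broken"}

def sentiment(post):
    """Score unstructured text from -1.0 (negative) to +1.0 (positive)."""
    words = [w.strip(".,!?") for w in post.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment("Great location, friendly staff!"))  # positive review
print(sentiment("Noisy rooms and rude service."))    # negative review
```

Nothing here touches a spreadsheet; the input is raw prose of the kind found in social media posts and reviews, which is precisely what separates this style of analytics from simple number-crunching.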
One example is WayBlazer, which uses AI to power its contextual analytics platform, connecting travellers with hotels based on a constellation of different data. It reads online reviews and descriptions for hotels, combining them with nuanced data about individual customers and using the whole thing to produce personalized recommendations in human-like ways.
It’s important not to get too jazzed up about AI. Marketers would happily have you think that narrow AI algorithms breathe some kind of mystical life into modern software when it’s really a simple application of sophisticated statistical processes to produce human-like results. We’re unlikely to see the HAL-9000 delivering business intelligence results just yet, but that doesn’t mean that narrow AI can’t be useful.