
AI and Machine Learning 101 - Part 1: Machine vs. Human Learning


Artificial intelligence (AI) is everywhere‒at least, that’s how it seems. At Interset, the rise of AI is both exciting and challenging, because AI is truly at the heart of what we do. But as we’ve engaged with our peers, customers, and partners, we have come to realize that the concept of AI is not always easily understood. To kick off our AI and Machine Learning 101 blog series, we will unpack the AI puzzle by answering the main question many folks are asking: “What is artificial intelligence, really?”

The easiest way to understand artificial intelligence is to map it to something we already understand‒our own intelligence. How does non-artificial, human intelligence work? At the most basic level, our intelligence follows a simple progression: we take in information, we process it, and ultimately the information helps us act.

Let’s break this down into a system diagram. In the figure below, the three general steps of human intelligence run from left to right: input, processing, and output. In the human brain, input takes place in the form of sensing and perceiving things. Your eyes, nose, ears, etc., take in raw input on the left, such as photons of light or the smell of pine trees, and then process it. On the system’s right side is output. This includes speech and actions, both of which depend on how we process the raw input that our brain is receiving. The processing happens in the middle, where knowledge and memories are formed and retrieved, decisions and inferences are made, and learning occurs.

AI and Machine Learning 101 - Part 1.png

Picture stopping at a roadway intersection. Your eyes see that the traffic light in front of you has just turned green. Based on what you have learned from experience (and driver’s education), you know that a green light indicates that you should drive forward. So, you hit the gas pedal. The green light is the raw input, your acceleration is the output; everything in between is processing.

To intelligently navigate the world around us‒answering the phone, baking chocolate chip cookies, or obeying traffic lights‒we need to process the input that we receive. This is the core of human intelligence processing, and it is ultimately broken down into three distinct aspects:

  1. Knowledge and memory. We build up knowledge as we ingest facts (e.g. the Battle of Hastings took place in 1066) and social norms (e.g. saying “Please” and “Thank you” is considered polite). Additionally, memory enables us to recall and apply information from the past to present situations. For example, Edward remembers that Jane did not thank him for her birthday present, so he does not expect her to thank him when he gives her a Christmas present.
  2. Decision and inference. Decisions and inferences are made based on raw input combined with knowledge and/or memory. For example, Edward ate a jalapeno pepper last year and did not like it. When Johnny offers a pepper to Edward, Edward decides not to eat it.
  3. Learning. Humans can learn by example, observation, or algorithm. In learning by example, we are told that one animal is a dog, the other is a cat. In learning by observation, we figure out on our own that dogs bark and that cats meow. The third learning method‒algorithm‒enables us to complete a task by following a series of steps or a specific algorithm (e.g. performing long division).
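To make the third method concrete, learning by algorithm means following an exact recipe of steps rather than generalizing from data. As an illustrative sketch (the `long_division` function below is a hypothetical example, not from the original post), here is long division expressed as a step-by-step procedure:

```python
def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    """Divide digit by digit, exactly as the schoolbook procedure prescribes."""
    quotient = 0
    remainder = 0
    for digit in str(dividend):
        # "Bring down" the next digit, then record how many times the divisor fits.
        remainder = remainder * 10 + int(digit)
        quotient = quotient * 10 + remainder // divisor
        remainder = remainder % divisor
    return quotient, remainder

print(long_division(1066, 7))  # (152, 2)
```

No examples or observations are needed here: the entire behavior is specified in advance by the programmer, which is exactly what distinguishes this mode of learning from the other two.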

These aspects of human intelligence parallel artificial intelligence. Just as we take in information, process it, and share output, so can machines. Let’s take a look at the figure below to see how this maps out.

AI and Machine Learning 101 2.png

In machines, the input part of artificial intelligence is exemplified by natural language processing, speech recognition, visual recognition, and more. You see such technologies and algorithms everywhere, from self-driving cars that need to sense the roadways and obstacles, to Alexa or Siri recognizing your speech. The output that follows comprises the ways in which machines interact with the world around us. This might take the form of robotics, navigation systems (to guide those self-driving cars), speech generation (e.g. Siri), and so on. In between, various forms of processing take place.

Similar to our accrual of knowledge and memories, machines can create knowledge representations or graph databases that help them store information about the world. Just as humans make decisions or draw inferences, machines can make a prediction, optimize for a target or outcome, and determine the best next steps or decisions to meet a specific goal.
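As a minimal sketch of what such a knowledge representation can look like (the triple store and `query` helper below are hypothetical illustrations, not Interset’s implementation), facts can be stored as subject–relation–object triples, the core idea behind graph databases:

```python
# A toy knowledge representation: facts stored as (subject, relation, object)
# triples, the basic building block of a knowledge graph.
facts = {
    ("jalapeno", "is_a", "pepper"),
    ("edward", "dislikes", "jalapeno"),
    ("johnny", "offers", "pepper"),
}

def query(subject: str, relation: str) -> set:
    """Retrieve every object linked to a subject by a given relation."""
    return {o for s, r, o in facts if s == subject and r == relation}

# A simple inference combining stored knowledge: Edward dislikes something
# that is a pepper, so he declines anything offered that is a pepper.
disliked = query("edward", "dislikes")
declines_peppers = any("pepper" in query(item, "is_a") for item in disliked)
print(declines_peppers)  # True
```

The point is not the tiny dataset but the pattern: stored facts plus a retrieval step let the machine draw a conclusion that was never stated explicitly, mirroring the Edward-and-the-pepper example above.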

Finally, just as we learn by example, observation, or algorithm, machines can be taught using analogous methods. Supervised machine learning is much like learning by example: the computer is given a dataset containing “labels” that act as answers, and eventually learns to tell the labels apart (e.g. the dataset contains photos labeled as either “dog” or “cat”, and with enough examples, the computer will notice that dogs generally have longer tails and less pointy ears than cats).
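A stripped-down version of this idea can be shown with a nearest-centroid classifier (the feature names, numbers, and functions below are invented for illustration; real systems use far richer features and models):

```python
# Toy supervised learning: each example is (tail_length_cm, ear_pointiness 0-1)
# plus a human-supplied label -- the "answers" in the training set.
labeled_data = [
    ((40.0, 0.20), "dog"), ((35.0, 0.30), "dog"), ((45.0, 0.10), "dog"),
    ((25.0, 0.90), "cat"), ((22.0, 0.80), "cat"), ((28.0, 0.95), "cat"),
]

def train(examples):
    """Learn one centroid (average feature vector) per label."""
    sums, counts = {}, {}
    for (tail, ears), label in examples:
        s = sums.setdefault(label, [0.0, 0.0])
        s[0] += tail
        s[1] += ears
        counts[label] = counts.get(label, 0) + 1
    return {lbl: (s[0] / counts[lbl], s[1] / counts[lbl]) for lbl, s in sums.items()}

def predict(centroids, features):
    """Classify a new animal by whichever learned centroid is closest."""
    return min(
        centroids,
        key=lambda lbl: sum((a - b) ** 2 for a, b in zip(centroids[lbl], features)),
    )

model = train(labeled_data)
print(predict(model, (42.0, 0.15)))  # dog-like features -> "dog"
```

The labels do the teaching: the machine never decides what a “dog” is, it only learns which measurements tend to accompany each answer it was given.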

Unsupervised machine learning, on the other hand, is like learning by observation. The computer observes patterns (dogs bark and cats meow) and, through this, learns to distinguish groups and patterns on its own (e.g. there are two groups of animals that can be separated by the sound they make; one group barks‒dogs‒and the other group meows‒cats). Unsupervised learning doesn’t require labels and can be preferable when data sets are limited and do not have labels. Finally, learning by algorithm is what happens when a programmer instructs a computer exactly what to do, line-by-line, in a software program.
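The bark-versus-meow grouping can be sketched with a tiny clustering routine (the pitch values and the `two_means` function below are invented for illustration; it is a minimal 1-D version of the standard k-means algorithm with k=2):

```python
# Toy unsupervised learning: unlabeled vocalization pitches in Hz.
# No answers are provided; the algorithm must discover the groups itself.
pitches = [110.0, 120.0, 115.0, 600.0, 620.0, 590.0]  # low = bark-like, high = meow-like

def two_means(values, iters=10):
    """Minimal 1-D k-means (k=2): alternate assigning points and re-averaging."""
    lo, hi = min(values), max(values)  # crude initial cluster centers
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            # Send each value to the nearer center (False -> 0, True -> 1).
            groups[abs(v - lo) > abs(v - hi)].append(v)
        lo = sum(groups[0]) / len(groups[0])
        hi = sum(groups[1]) / len(groups[1])
    return (lo, hi), groups

centers, clusters = two_means(pitches)
print(centers)  # two cluster centers found with no labels at all
```

Note that the algorithm only discovers *that* there are two groups and which measurements belong together; a human still has to name them “dogs” and “cats” afterward.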

In practice, the most accurate and efficient artificial intelligence results come from a combination of learning methods. Both supervised and unsupervised machine learning are useful‒it’s all about applying the right approach or approaches to the right use case.

In our next blog, we’ll put machine learning under the microscope to understand how this part of AI mirrors the neurons in our brain to turn input into optimal output.

Stephan Jou is Chief Technology Officer at Interset.