The AI Landscape

The “AI landscape” is vast, complicated, and obscured by hype. Unfortunately, in such a space it’s frighteningly easy to get hopelessly lost. With this blog, we’ll provide you with a map to help you navigate the landscape. By combining lessons from our research at Stanford’s AI lab with insights from our time here at Eloquent, we hope to deliver the most useful, understandable, and honest AI blog possible. Depending on who you are, you can expect different benefits from our blog:

For business leaders, we’ll provide practical knowledge that’ll help you apply AI tech to your needs. Upcoming posts will address topics such as: evaluating AI solutions, discerning which problems are well-suited for AI, and why neural nets have been so influential in modern AI. Be sure to subscribe for the latest updates!

For engineers, we’ll provide insights about the enterprise market for AI and what we’ve learned while building Eloquent Labs, from how to build a sane REST API to tips for running stateful AI at scale in production.

For Machine Learning and NLP researchers, we’ll occasionally post deeply technical articles based on our research.

In this first post, I will clearly lay out and explain some common AI terms you may have heard before. We’ll go over important AI Techniques and conclude with a brief discussion of the Subfields of AI that apply those techniques.

AI Techniques

At a high level, the goal of AI is to perform actions that appear “intelligent”: tasks that emulate what a person would do. Various techniques have been developed to accomplish this. The figure below summarizes how those techniques are divided:

We can subdivide AI techniques into four big categories: Rule-Based Systems, Search, Logic, and Machine Learning.

Rule-Based Systems

An early technique, rule-based systems define exactly what the computer should do in particular scenarios. For example, Eliza is a simple chatbot from the 1960s meant to emulate a therapist. It uses strict rules, not logic, to generate responses. One fun rule: if someone types “I am XYZ,” Eliza always asks some variant of “how do you feel about being XYZ,” no matter what “XYZ” actually is. Interact with Eliza here.
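The “I am XYZ” rule can be sketched in a few lines. This is a minimal illustration, not Eliza’s actual implementation; the regular expression and the fallback response are my own assumptions:

```python
import re

def eliza_reply(user_input):
    """One Eliza-style rule: reflect "I am XYZ" back as a question."""
    match = re.match(r"i am (.+)", user_input.strip(), re.IGNORECASE)
    if match:
        return f"How do you feel about being {match.group(1)}?"
    return "Please, go on."  # fallback when no rule matches

print(eliza_reply("I am Gabor"))  # How do you feel about being Gabor?
```

The system has no understanding of what “XYZ” means; it just pattern-matches and fills a template, which is exactly why rule-based chatbots feel brittle outside their scripted scenarios.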

For the record, I do enjoy being Gabor.

Search

Search techniques find the best path from one state to another, depending on your goal. If you wanted to find the shortest path in a maze, you’d use search to transition from the state of “at start” to the state of “at finish” in the shortest possible way.
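As a concrete sketch, breadth-first search, one of the simplest search techniques, finds such a shortest path by exploring states level by level. The grid encoding (0 = open cell, 1 = wall) is my own illustration:

```python
from collections import deque

def shortest_path(maze, start, goal):
    """Breadth-first search: because states are explored level by level,
    the first time we reach the goal we have found a shortest path."""
    queue = deque([(start, [start])])
    visited = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(maze) and 0 <= nc < len(maze[0])
                    and maze[nr][nc] == 0 and (nr, nc) not in visited):
                visited.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no path exists from start to goal

maze = [[0, 0, 1],
        [1, 0, 1],
        [1, 0, 0]]
print(shortest_path(maze, (0, 0), (2, 2)))
```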

A surprising number of AI techniques boil down to search. For example:

  • Constraint satisfaction. Tasks like coloring a map so that no two bordering countries share a color. It turns out this is a search over possible colorings of the map until we reach the desired state: a coloring that fits the criteria.
  • Genetic algorithms. With genetic algorithms, we’re “searching” for the optimal solution to a problem (e.g., the shortest path in a maze) by randomly trying a bunch of solutions, seeing what works well, and then combining two good solutions in a way that’s loosely inspired by genetics. Genetic algorithms are used when finding an exact solution is difficult but sampling and evaluating possible solutions is easy, for example in fluid dynamics simulations for aerodynamics. At their core, genetic algorithms are a type of search: they search over the “family tree” of solutions until we reach the best solution we can find.
  • Reinforcement learning and game playing. We often see reinforcement learning used to teach computers how to play games, and in other settings where certain choices are rewarded and others punished. Much of the excitement around computers solving games, like AlphaGo, centers on the machine learning component. However, machine learning’s contribution is to drastically limit how much we must search. The backbone of these systems is still search: a search for the ideal way to transition between states in order to earn our reward at the end (e.g., win the game).
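To make the map-coloring example concrete, here is a minimal backtracking search over partial colorings. The three-country map and the color names are invented for illustration:

```python
def color_map(borders, countries, colors):
    """Backtracking search: assign a color to one country at a time,
    undoing the choice if it leads to a dead end."""
    assignment = {}

    def consistent(country, color):
        # A color is allowed if no bordering country already uses it.
        return all(assignment.get(other) != color
                   for other in borders.get(country, ()))

    def search(i):
        if i == len(countries):
            return True  # every country colored: goal state reached
        country = countries[i]
        for color in colors:
            if consistent(country, color):
                assignment[country] = color
                if search(i + 1):
                    return True
                del assignment[country]  # backtrack and try another color
        return False

    return assignment if search(0) else None

# Tiny example: three mutually bordering countries need three colors.
borders = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
print(color_map(borders, ["A", "B", "C"], ["red", "green", "blue"]))
```

Each recursive call is a transition to a new state (a partial coloring), and the goal state is any complete coloring that satisfies the constraints: search, exactly as described above.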

Logic

While modern AI systems are rarely built entirely on the logic-heavy techniques of the 1980s, logic is still an essential underpinning for many AI applications and complements other, more recent techniques. At a high level, logic in AI has been used for proving theorems, representing meaning in AI applications, and performing logical inference. For instance, a logic AI system would be able to infer that, if we know you were born in Arizona, then you must have been born in the United States.
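The Arizona example can be captured with a toy forward-chaining sketch: keep applying if-then rules until no new facts can be derived. The fact and rule names below are hypothetical placeholders, not part of any real system:

```python
def forward_chain(facts, rules):
    """Forward chaining: repeatedly apply if-then rules
    until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [("born_in_Arizona", "born_in_USA"),
         ("born_in_USA", "born_in_North_America")]
print(forward_chain({"born_in_Arizona"}, rules))
```

Note that the chaining composes: from “born in Arizona” the system derives “born in the USA,” and from that, “born in North America,” without either conclusion being stated directly.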

Machine Learning

Machine Learning is such a popular technique for modern AI that “AI” and “Machine Learning” are often used interchangeably. It turns out that learning patterns from large amounts of data has been the most successful way to mimic intelligence so far. This is the technique of Machine Learning: given exemplars of how to perform a task, learn how to emulate that task.

The hard part of machine learning is generalizing lessons from exemplars and applying them to unseen data. Simply put, if I feed a machine learning system images of cats and tell it that the images are cats (exemplars), we want to be able to feed the system a never-before-seen image of a cat and have it label that new image as a cat.

At a high level, the two most popular techniques for this are Statistical Learning and Deep Learning / Neural Nets. Statistical learning collects statistics from the examples that it sees to try to probabilistically generalize to unseen inputs. Deep learning and neural nets are a more powerful, non-statistical way to learn from data. Much more on this in later blog posts.
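As a toy illustration of statistical learning, the sketch below collects one simple statistic per class (the average of its feature vectors) and labels an unseen input by the nearest average. The two-dimensional “cat” and “dog” features are entirely made up for illustration; real systems work from far richer representations:

```python
import math
from collections import defaultdict

def train(examples):
    """Collect statistics from exemplars: the average (centroid)
    of each class's feature vectors."""
    totals = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for (x, y), label in examples:
        totals[label][0] += x
        totals[label][1] += y
        counts[label] += 1
    return {label: (t[0] / counts[label], t[1] / counts[label])
            for label, t in totals.items()}

def predict(centroids, point):
    """Generalize to unseen data: pick the class whose centroid is nearest."""
    return min(centroids,
               key=lambda label: math.dist(centroids[label], point))

# Hypothetical 2-D features, e.g. (ear pointiness, whisker length).
examples = [((0.9, 0.8), "cat"), ((0.8, 0.9), "cat"),
            ((0.1, 0.2), "dog"), ((0.2, 0.1), "dog")]
centroids = train(examples)
print(predict(centroids, (0.85, 0.75)))  # an unseen input near the cats
```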

Subfields of AI

With a basic understanding of the various AI techniques, we can now turn our focus to the subfields of AI that use those techniques. Although each field has different goals, they all use many of the same techniques described above.

While many smaller subfields in AI exist, the big three are Computer Vision, Robotics, and Natural Language Processing.

Diving into the specific goals of each subfield:

Computer Vision is the field of parsing and understanding images and videos. For example, detecting that an image of a cat is a cat, or matching faces to names on Facebook.

Robotics has two objectives: build physical robots and imbue them with useful intelligence. Roboticists employ AI to accomplish the second objective, using AI techniques to teach their robots to understand physical input (e.g., is the object in front of me a cup?) and plan what the robot should do as a result (e.g., this is how I should move my joints to pick up the cup).

Natural Language Processing is the field of parsing and understanding language. For example, extracting the sentiment of a sentence, or building a chatbot. Eloquent’s expertise lies here.

About Eloquent Labs

As you might be able to tell from this blog post, I’m not always perfect with my language. It might seem somewhat ironic, then, that I received my PhD in natural language processing. But I went into NLP because I have a lofty and perhaps very nerdy dream: to create seamless conversations between computers and humans. My co-founder, Keenon Werling, and I founded Eloquent Labs to leverage our research at Stanford to accomplish this dream and bring our results to where some of the most important conversations occur — enterprises.

Since our founding, our AI has been deployed at major insurance and logistics companies across the globe. We’ve reduced call and chat volume to live agents, improved the efficiency of service representatives, and increased employee and customer satisfaction. At our core, however, we’re a team of dedicated, passionate engineers and researchers. All of us at Eloquent look forward to providing a valuable resource to the community through this blog.

Please subscribe if you want to be updated when new posts go up, and if you have any questions or topics you’d like to see posts on, shoot me an email at [email protected]. I look forward to speaking with you!

Author: Gabor Angeli

Gabor is Eloquent’s CTO. He co-founded Eloquent in 2016 after graduating with a Ph.D. from the Stanford Natural Language Processing Group. While earning his Ph.D., Gabor led Stanford’s winning team at NIST’s TAC bakeoff, worked as an NLP architect at Baarzo (acquired by Google), and published 12 papers at top NLP conferences, winning best paper and best paper honorable mention awards. He is a core contributor to, and co-author of the server for, Stanford’s popular CoreNLP toolkit, authoring the Stanford OpenIE system, the relation extraction annotator, and the new Simple API. He has given invited talks at CMU, USC’s ISI, AI2, and numerous conferences.
