Artificial intelligence has the advertising industry bewitched, with agencies and clients alike clamoring to understand what AI can do for their strategies and marketing stunts.
For all the buzz, the industry doesn’t seem to have a standard definition of AI or what it entails. “I haven’t come across anything that says, ‘This is the standard on what AI is,’” said Annmarie Turpin, chief technology officer for Ocean Media’s analytics team. “People read lots of definitions of AI and then infer, ‘Well, this is tangentially related.’”
To bring some clarity, Digiday compiled a list of some of the key AI terms frequently used in the ad industry. This is by no means an exhaustive list, but it offers a glimpse into some of the words that are either new or increasingly prominent in the marketing lexicon.
Bid Optimization
What it is: AI-powered bid optimization is, simply put, advertisers throwing their hats into the ring to win an ad placement. AI-driven systems adjust bidding strategies in real-time auctions to win placements and maximize campaign goals while adhering to advertisers’ delivery preferences and constraints. Some agency execs say it’s where the rubber meets the road in determining ad campaign efficiency and return on investment.
Who’s using it: Google has increasingly been upping the ante when it comes to AI-backed tools, including bid optimization. In fact, Google’s DV360 (Display and Video 360) recently announced an expansion of its partnership with AI company Scibids to customize AI bidding, giving users more control. (Read more about Google and its foray into AI here.)
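To make the mechanics concrete, here’s a minimal, hypothetical sketch in Python of the kind of constrained bid adjustment described above. The target-CPA heuristic, function name and numbers are illustrative assumptions, not Google’s or Scibids’ actual logic.

```python
# A minimal, hypothetical sketch of real-time bid adjustment.
# The target-CPA heuristic here is illustrative, not any vendor's algorithm.

def adjust_bid(base_bid: float, target_cpa: float, observed_cpa: float,
               budget_remaining: float, max_bid: float = 10.0) -> float:
    """Scale the bid up when acquisitions are cheaper than target, down when pricier."""
    if budget_remaining <= 0:
        return 0.0  # delivery constraint: stop bidding once the budget is spent
    # Efficiency > 1 means we're beating the target CPA and can bid more aggressively.
    efficiency = target_cpa / observed_cpa if observed_cpa > 0 else 1.0
    new_bid = base_bid * efficiency
    # Respect advertiser constraints: never exceed the bid cap or the remaining budget.
    return min(new_bid, max_bid, budget_remaining)

# Example: the campaign targets a $5 CPA but is currently acquiring at $4,
# so the optimizer raises the $1.00 base bid to $1.25.
print(adjust_bid(base_bid=1.00, target_cpa=5.0, observed_cpa=4.0, budget_remaining=50.0))
```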
Computer Vision
What it refers to: Computer vision is pretty much what it sounds like: a computer’s ability to make sense of digital images, videos and other visual inputs, and then make recommendations based on that intel. It’s similar to how humans process visual information. Like other areas of AI, there’s a feed-the-beast nature to computer vision: the computer needs enough data to learn from before it can recommend, react or do anything else.
How to use it: This is the technology that allows self-driving cars to detect objects around them and engage with their surroundings without causing an accident, for the most part. In this case, the vehicles use embedded cameras to capture video, feed it into the software and detect everything from traffic signals to pedestrians in real time. Computer vision is also used regularly in facial recognition programs.
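For a sense of what this looks like in code, here’s a minimal sketch using OpenCV’s bundled pre-trained face detector. The image path is a placeholder, and a production system, whether a car or an ad platform, would use far more sophisticated models.

```python
# A minimal sketch of computer vision: face detection with OpenCV's bundled
# Haar cascade. "street_scene.jpg" is a placeholder path, not a real asset.
import cv2

image = cv2.imread("street_scene.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Load a pre-trained detector; as the entry above notes, the model had to be
# fed plenty of labeled examples before it could recognize anything.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Each detection is a bounding box the downstream system can react to,
# the same basic loop a self-driving car runs on traffic signals and pedestrians.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
print(f"Detected {len(faces)} face(s)")
```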
Digital Twinning
What it refers to: Digital twinning is a virtual simulation of a real-life system of physical objects that uses machine learning to help with decision-making. In layman’s terms, digital twinning is the replication of a real-world object in digital space. (Read our WTF is Digital Twinning here.)
How to use it: It works by incorporating real-time data feeds into the system, enabling continuous monitoring, analysis and simulation of alternative scenarios. There are two main use cases: using AI in digital twinning to virtually replicate an ad campaign before committing to it in real life, or, in augmented reality, virtually duplicating a physical activation to reach audiences both online and offline. For a real-life example of an AR digital twin, Mars’ candy brand M&M’s hosted a physical pop-up at music festivals last year. Simultaneously, the chocolate brand hosted a virtual pop-up activation offering online users an experience similar to that of festivalgoers.
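As a rough illustration of those mechanics, here’s a minimal, hypothetical sketch of a campaign “twin” that ingests live data and simulates an alternative scenario before any real budget moves. The class, figures and noise model are all invented for illustration.

```python
# A minimal, hypothetical sketch of a digital twin: a virtual copy of a
# campaign that ingests live data and simulates "what if" scenarios.
import random

class CampaignTwin:
    def __init__(self, daily_spend: float, conversion_rate: float):
        self.daily_spend = daily_spend
        self.conversion_rate = conversion_rate

    def ingest(self, observed_conversion_rate: float) -> None:
        """Continuous monitoring: sync the twin with the latest real-world data."""
        self.conversion_rate = observed_conversion_rate

    def simulate(self, spend_multiplier: float, days: int = 7) -> float:
        """Simulate an alternative scenario without touching the live campaign."""
        conversions = 0.0
        for _ in range(days):
            # Add noise so repeated runs show a range of outcomes, as a simulation would.
            noisy_rate = self.conversion_rate * random.uniform(0.9, 1.1)
            conversions += self.daily_spend * spend_multiplier * noisy_rate
        return conversions

twin = CampaignTwin(daily_spend=1_000.0, conversion_rate=0.02)
twin.ingest(observed_conversion_rate=0.025)  # live feed updates the twin
print(twin.simulate(spend_multiplier=1.2))   # test a 20% budget bump virtually
```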
Generative AI
What it refers to: Generative AI is a bit of an all-encompassing term for the subset of AI that generates data or content, whether that’s text, images, music, video or anything else, based on patterns learned from its training data and shaped by user prompts. Examples include OpenAI’s ChatGPT and Google Bard, models that have learned from vast amounts of internet text and can make references or recommendations on their own.
How to use it: Like consulting a fortune teller, users can ask generative AI just about anything, and it’ll produce different types of content. At least 71% of agencies are already embracing the technology, especially on the heels of ChatGPT’s launch last November. Most, however, have been using generative AI to streamline workflows, whether leveraging it to write social media copy or produce visual assets. Gen AI processes a user’s prompt against the patterns it learned during training, which allows it to create new output rather than simply retrieve existing content. Sometimes gen AI spits out a hallucination (see next entry) that needs to be fact-checked.
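For those curious what that workflow looks like in practice, here’s a minimal sketch of drafting social copy with OpenAI’s Python library (the pre-1.0 interface). The prompt, model choice and placeholder API key are assumptions for illustration.

```python
# A minimal sketch of the workflow agencies describe: prompting a generative
# AI model to draft social copy. Prompt and model choice are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a copywriter for a snack brand."},
        {"role": "user", "content": "Write three short social posts about a summer candy pop-up."},
    ],
)
draft = response["choices"][0]["message"]["content"]

# Per the hallucination entry below: treat the output as a draft to
# fact-check, not finished copy.
print(draft)
```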
Hallucination
What it is: It’s a metaphor for when AI models confidently generate inaccurate answers. The term has become more common this year, but it’s been used within the AI community for years. (Google researchers used the word in a 2018 paper to describe how AI models were “susceptible to producing highly pathological translations that are completely untethered from the source material.”)
What it refers to: The hallucination effect is an industry-wide issue and a major source of AI-generated misinformation — one of the many risks that have made marketers wary of using LLMs for content creation or for unchecked insights. Newer LLMs like GPT-4 have shown some signs of improvement, but misinformation experts say hallucinations are still prevalent in ChatGPT and Bard.
Key context: OpenAI’s own researchers have said hallucinations could be more dangerous as users become more trusting of the information AI models generate — something even OpenAI co-founder and CEO Sam Altman mentioned in May during his debut hearing in front of Congress.
LLM
What it stands for: Large Language Model
What it refers to: The backbone of generative AI, LLMs are a type of AI model trained on massive amounts of text, including news articles, social media content, computer code, technical manuals and entire books. By using their training data to predict which words are likely to come next, LLMs can read, understand and generate text in a way that resembles how humans process language. Although OpenAI’s GPT models are the most well known thanks to the rise of ChatGPT, they’re just one of many LLMs that tech giants have developed. Others include Google’s PaLM 2, Meta’s newly released open-source Llama 2 and Stability AI’s StableLM.
How it’s used: Since the beginning of this year (and even before that), numerous companies have built new chatbots and other tools using various LLMs. Although Bard uses Google’s own LLM, others are powered by OpenAI, including Snap’s My AI, Nextdoor’s Assistant and Profitero’s Ask Profitero. Other companies have used LLMs to build platforms for generating marketing content or for use in industries including financial intelligence, commercial real estate and customer service.
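To make the “predict the next word” idea concrete, here’s a toy Python sketch. Real LLMs use transformer networks trained on billions of documents; this simple bigram counter is only meant to illustrate the core mechanic.

```python
# A toy illustration of the core LLM trick: predicting the next word from
# patterns in training text. The training sentence is made up.
from collections import Counter, defaultdict

training_text = "the ad ran on tv the ad ran online the campaign ran on social"
words = training_text.split()

# Count which word follows which in the training data.
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most likely next word seen during training."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("ad"))   # -> "ran"
print(predict_next("ran"))  # -> "on"
```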
MMM
What it stands for: Marketing mix modeling
What it refers to: In technical terms, marketing mix modeling refers to the way marketers use statistical analysis to look back at sales over a period of time and pinpoint what drove them. In other words, it makes it easier for marketers to determine what’s working and what’s not before shelling out more marketing dollars. (Watch our WTF explainer here.) It’s not a new tool, but the introduction of AI has sped up what was once a data-intensive and costly process. And it’s being used by independent agencies to keep up with their bigger competitors.
How to use it: Data, data and more data. That’s how MMM happens. Marketers feed the model years’ worth of data from the marketing tactics they’ve used across digital, television, out-of-home, radio, podcasts, social media and all other forms of media, along with data on seasonality and inventory. The MMM tool then crunches that data to attribute sales to specific drivers and produce predictions that help marketers make decisions for the future.
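For the technically inclined, here’s a minimal sketch of the idea using an ordinary regression in scikit-learn. The spend and sales figures are fabricated, and real MMMs also account for adstock, seasonality, inventory and much more.

```python
# A minimal sketch of marketing mix modeling: regress historical sales on
# channel spend to estimate what drove them. All numbers are made up.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: weekly spend on TV, digital, out-of-home (hypothetical data).
spend = np.array([
    [100, 50, 20],
    [120, 40, 25],
    [ 90, 70, 15],
    [150, 60, 30],
    [110, 80, 10],
])
sales = np.array([520, 540, 500, 640, 560])  # weekly sales (hypothetical)

model = LinearRegression().fit(spend, sales)

# Coefficients approximate each channel's contribution per dollar spent...
print(dict(zip(["tv", "digital", "ooh"], model.coef_.round(2))))
# ...and the fitted model can forecast a proposed media plan.
print(model.predict([[130, 65, 20]]).round(0))
```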
Machine learning
What it refers to: Not to be confused with generative AI, machine learning is a computational approach in which algorithms learn to make predictions or decisions based on the information they are fed. The two are often conflated, with people saying generative AI when they mean machine learning. The biggest difference is this: generative AI can create new, original content, whereas machine learning emphasizes learning from input data to make predictions.
How it’s used: Machine learning is more dependent on human control and optimization. The algorithms use what they have learned to make data-based predictions or decisions to help marketers do everything from predicting customer behavior and identifying patterns to personalizing marketing campaigns. It can also be used for media planning, like marketing mix modeling, or simply to automate repetitive tasks.
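Here’s a minimal sketch of that prediction loop: a scikit-learn classifier learns from labeled customer data, then scores a new customer. The features, labels and figures are all invented for illustration.

```python
# A minimal sketch of machine learning for marketers: learn from labeled
# customer data, then predict behavior for new customers. Data is fabricated.
from sklearn.ensemble import RandomForestClassifier

# Features per customer: [visits last month, emails opened, past purchases]
X_train = [[1, 0, 0], [5, 3, 1], [8, 6, 4], [2, 1, 0], [9, 8, 5], [0, 0, 0]]
y_train = [0, 1, 1, 0, 1, 0]  # 1 = purchased again, 0 = churned

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)  # the "learning from data input" step

# Score a new customer: the model was never explicitly programmed with a rule;
# it inferred the pattern from the training examples.
new_customer = [[6, 4, 2]]
print(model.predict(new_customer))        # predicted label
print(model.predict_proba(new_customer))  # confidence, useful for targeting
```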
NLP
What it stands for: Natural Language Processing
What it refers to: A subset of AI, natural language processing helps bridge the gap between human and computer languages. Using various algorithms and computational models, NLP makes connections between terms to find context and meaning within human language. For example, NLP can analyze massive quantities of text to gauge sentiment and identify hidden trends. (It’s a helpful tool if a brand wants to know what social media users are saying about it, or to surface key topics, questions and concerns across its consumer base.)
How it’s used: For years, NLP has been used to power voice assistants like Siri and Alexa, enable social listening tools, provide sentiment analysis and help with predictive tools for search engines, chatbots and various advertising tools.
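As a small example of the social-listening use case, here’s a sketch that scores invented brand mentions with NLTK’s off-the-shelf VADER sentiment analyzer. The sample posts are made up.

```python
# A minimal sketch of NLP-driven social listening: scoring brand mentions
# with NLTK's VADER sentiment analyzer. The sample posts are fabricated.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

mentions = [
    "Loving the new flavor, best snack of the summer!",
    "The pop-up was crowded and the line took forever.",
    "It's fine, nothing special.",
]

# The compound score runs from -1 (negative) to +1 (positive), letting a
# brand gauge sentiment across thousands of posts at once.
for post in mentions:
    score = analyzer.polarity_scores(post)["compound"]
    print(f"{score:+.2f}  {post}")
```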
Training data and test data
What it is: “Training data” is the massive amount of raw text ingested by an AI model as part of its supervised learning process. On the other hand, “test data” is used to see how well the AI model can predict answers based on data that wasn’t included in the original training set.
How it’s used: Training data and test data are both used in the process of developing an AI model. The quality of training data is critical — as is how it’s sourced. As the AI model undergoes supervised learning on its training data, it learns to recognize patterns in language and connect the dots. (Think of training data as the ingredients and test data as the taste test of the finished recipe.)
Key challenges: There are a number of challenges related to the types of data AI models should be trained with. Some say companies should only train AI models with content they have permission to use — a key topic in recently filed lawsuits against OpenAI and Google and during Congressional hearings. Other challenges include data privacy concerns, properly labeling data and making sure data is sufficient to avoid bias and misinformation.
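Here’s a minimal sketch of the split in practice, using scikit-learn and fabricated data: the model learns from the training rows, then is graded on test rows it never saw.

```python
# A minimal sketch of training vs. test data. The toy feature and label
# (ad exposures and conversions) are fabricated for illustration.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X = [[0], [1], [2], [3], [4], [5], [6], [7]]  # toy feature: ad exposures
y = [0, 0, 0, 0, 1, 1, 1, 1]                  # toy label: converted or not

# Hold out 25% of the data so evaluation uses examples outside the training set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)  # supervised learning step
print(model.score(X_test, y_test))  # accuracy on the unseen test data
```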
Honorable mentions:
Plenty of other terms could be included in this glossary, but here are a few more, in the words of various ad execs, along with a cameo by another non-human expert.
Anomaly detection:
“This is a machine learning and data analysis process that identifies patterns and detects data points that deviate significantly from the norm within that dataset. These could indicate errors, fraudulent activity or any outliers.”
– Brian Yamada, chief innovation officer at VMLY&R.
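A minimal sketch of the idea Yamada describes, using scikit-learn’s IsolationForest on invented daily click counts to flag the data point that deviates from the norm:

```python
# A minimal anomaly detection sketch: IsolationForest flags data points that
# deviate significantly from the rest. The click counts are invented.
from sklearn.ensemble import IsolationForest

daily_clicks = [[102], [98], [105], [99], [101], [970], [100], [103]]

detector = IsolationForest(contamination=0.15, random_state=0)
labels = detector.fit_predict(daily_clicks)  # -1 marks an outlier, 1 marks normal

# The 970-click day stands out; it could be fraud, an error or a real spike.
for value, label in zip(daily_clicks, labels):
    if label == -1:
        print(f"Anomaly flagged: {value[0]} clicks")
```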
Deep learning:
“A subset of machine learning that uses unstructured data to learn from to better imitate how the human brain learns—most often confused for being AI.”
– B Lalanne, vp of research and insights at Crispin Porter + Bogusky
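As a toy illustration, here’s a tiny neural network in scikit-learn learning the XOR pattern, something a single linear model can’t capture. It’s a sketch of the layers-of-neurons idea only; production deep learning typically runs on unstructured data with frameworks like PyTorch or TensorFlow.

```python
# A toy deep learning sketch: a small multi-layer network learns XOR,
# a pattern no single linear model can represent.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR: output is 1 only when the inputs differ

# Two small hidden layers of artificial neurons; training adjusts their weights.
net = MLPClassifier(hidden_layer_sizes=(8, 8), solver="lbfgs",
                    max_iter=2000, random_state=1)
net.fit(X, y)
print(net.predict(X))  # should print [0 1 1 0] once the pattern is learned
```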
AGI
When people talk about the existential dangers of AI, they are often describing artificial general intelligence (AGI), which theoretically has human-level or beyond-human-level cognitive capabilities most often associated with AI seen in science fiction. Most experts say AGI isn’t possible yet, but others have said it’s not beyond the realm of reason. It’s therefore perhaps ironic that when Digiday asked Google Bard for its favorite AI term, the bot was quick to choose AGI.
“My favorite AI term is ‘artificial general intelligence’ (AGI),” Bard wrote. “AGI is a hypothetical type of AI that would have the ability to perform any intellectual task that a human being can. AGI is still a long way off, but it is a fascinating concept to think about.”
How soon should we be worried? That’s hard to predict.