With all the AI hearings happening in Congress and around the world, it can be tricky keeping track of all the discussions about AI’s potential risks and rewards — let alone how to regulate it. However, in a hearing last week with British Parliament, a former rocket scientist gave a memorable metaphor that helps illustrate the importance of quality data in the debate about generative AI.
During a hearing held by British Parliament last week, Peter Waggett, IBM’s U.K. director of research, recalled how he used to use the ozone layer as a calibration constant until researchers found a hole in it that totally changed their perspective. That also taught him how important it is to “understand the data that you’re taking into a system and not just taking anything at face value.”
“I just sat there thinking, ‘Why didn’t I spot it, what did I miss?’” Waggett said during the hearing. “As it turned out, the assumption had been made in the database that if the data isn’t constant, it must be wrong; throw it out. In that instance, I learned early on that you must understand what’s going on there.”
Other parts of the British government are examining how AI could impact competition and consumer protections. In a new report about AI foundation models, the Competition & Markets Authority examined a range of issues — including access, diversity, transparency and fairness — and made a list of ways the market might be more likely to “produce positive outcomes.” (The CMA estimates 160 foundational models have been developed and released since OpenAI released its first model in 2018.) Read the short version here or the full version here.
In the U.S., the Federal Trade Commission doesn’t want to make the same privacy mistakes with AI that it made with social media. In a speech to the BBB’s National Advertising Division, Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said self-regulation is being “put to the test” and noted that, in the past, industry efforts improved only after the FTC asked Congress (unsuccessfully) to pass privacy legislation.
Here’s a sampling of other AI-related news from last week:
Giant “events”
- Microsoft unveiled a redesigned and expanded version of Microsoft Copilot, a unified AI assistant to help people navigate across apps, operating systems and devices. Along with new gen AI features across Windows, Edge, Bing and Microsoft 365, the company is also testing new conversational AI ad formats and adding partners for its new chat ads API for publishers announced in May.
- Snap is one of the first companies to use Microsoft’s new chat ads API for publishers, which powers conversational ads inside of My AI, Snapchat’s AI chatbot powered by ChatGPT. So far, more than 150 million users have sent more than 10 billion messages to My AI since it debuted in April, according to Snap. (Another publisher using Microsoft’s new chat ads API is the German daily newspaper BILD for its recently released “Hey_” chatbot.)
- Google rolled out improvements for Bard, which will now integrate with various apps including YouTube, Gmail, Google Docs and Google Maps.
Other AI news:
- OpenAI debuted DALL-E 3, the next version of its popular AI image generator, which is available now for researchers and will be available in early October for ChatGPT Plus users and enterprise customers. (Microsoft also touted DALL-E 3 last week during its event by showing images of AI-generated pumpkins created in Bing Chat.)
- Viva la robot: Las Vegas visitors might be greeted by something less than human at the famed MSG Sphere. The Sphere announced five “humanoid” robots will welcome guests and serve as both “spokesbot and storyteller.” This marks the second time in a month that the Sphere was used for something AI-related, following an AI art installation by Refik Anadol earlier this month. (Anadol also has an installation at New York’s MoMA that generates art based on the museum’s collection and real-time data related to sound, weather and other environmental inputs.)
- A group of famous authors filed a new lawsuit against OpenAI, alleging that the company’s AI violates copyright law. The list of plaintiffs — which includes George R.R. Martin, John Grisham, Jonathan Franzen, David Baldacci and others — opens a new chapter in the legal battles over AI’s impact on intellectual property. It also comes not long after author Michael Chabon and several screenwriters filed similar copyright lawsuits this month against OpenAI and Meta. (In good news for OpenAI, a judge dismissed a privacy-related class-action lawsuit filed against the company this summer.)
- In other AI-related writing news, Writer — an AI startup focused on generative content for companies — raised a $100 million Series B round with participation from various investors including Accenture.
Skillsoft CTO talks about AI and training
Companies and workers alike are rushing to upskill themselves for the AI era, but the learning platform Skillsoft has developed a way for AI to also help people improve their soft skills.
Earlier this month, the company released an AI-powered conversation simulator called CAISY, which helps employees practice a range of conversations with a variety of personalities and situations. Apratim Purakayastha, Skillsoft’s chief product and technology officer, explained how the AI was trained to play various personalities including “aggressive,” “defensive” and “dismissive” demeanors.
For starters, CAISY is trained in conversations related to coaching employees, discussing product launches, interacting with customers and managing internal changes. It’s also able to assist with more sensitive topics like HR situations and PR scandals. Although soft skills are “very needed,” Purakayastha said they’re also hard to practice. More scenarios and personalities are in the R&D phase, but CAISY someday may be able to help an ad agency practice a pitch or let a startup founder practice meetings with potential investors.
“We find these massive technology gaps, and these demands seem to be evergreen,” Purakayastha told Digiday. “It’s a good thing for us, to be honest.”
Skillsoft, which acquired Codecademy in 2021, is also seeing an evolution in how people learn to code, for AI-related reasons and others. Purakayastha said conversational AI tools can also help people decide what to learn and then help teach them along the way.
“One of the ways we’re seeing the landscape changing is [that] learning is becoming very multi-modal,” Purakayastha said. “People are expecting blended modalities of learning – [whether] it’s on video or an audiobook or a coaching session or IoT class – that can be relevant.”
With reporting from Digiday
Read the full article > AI Briefing: What the ozone layer might teach us about holes in data