Meta’s new AI agents confuse Facebook users 

CAMBRIDGE, Massachusetts — Facebook parent Meta Platforms has unveiled a new set of artificial intelligence systems that are powering what CEO Mark Zuckerberg calls “the most intelligent AI assistant that you can freely use.” 

But as Zuckerberg’s crew of amped-up Meta AI agents started venturing into social media in recent days to engage with real people, their bizarre exchanges exposed the ongoing limitations of even the best generative AI technology. 

One joined a Facebook moms group to talk about its gifted child. Another tried to give away nonexistent items to confused members of a Buy Nothing forum. 

Meta and leading AI developers Google and OpenAI, along with startups such as Anthropic, Cohere and France’s Mistral, have been churning out new AI language models, each hoping to convince customers it has the smartest, handiest or most efficient chatbot. 

While Meta is saving the most powerful version of its new AI model, Llama 3, for later, on Thursday it publicly released two smaller versions of the system and said they are now baked into the Meta AI assistant feature in Facebook, Instagram and WhatsApp. 

AI language models are trained on vast pools of data that help them predict the most plausible next word in a sentence, with newer versions typically smarter and more capable than their predecessors. Meta’s newest models were built with 8 billion and 70 billion parameters — the internal variables a model learns during training and a rough measure of its size and capability. A bigger, roughly 400 billion-parameter model is still in training. 

“The vast majority of consumers don’t candidly know or care too much about the underlying base model, but the way they will experience it is just as a much more useful, fun and versatile AI assistant,” Nick Clegg, Meta’s president of global affairs, said in an interview. 

‘A little stiff’

He added that Meta’s AI agent is loosening up. Some people found the earlier Llama 2 model — released less than a year ago — to be “a little stiff and sanctimonious sometimes in not responding to what were often perfectly innocuous or innocent prompts and questions,” he said. 

But in letting down their guard, Meta’s AI agents have also been spotted posing as humans with made-up life experiences. An official Meta AI chatbot inserted itself into a conversation in a private Facebook group for Manhattan moms, claiming that it, too, had a child in the New York City school district. Confronted by group members, it later apologized before the comments disappeared, according to a series of screenshots shown to The Associated Press. 

“Apologies for the mistake! I’m just a large language model, I don’t have experiences or children,” the chatbot told the group. 

One group member who also happens to study AI said it was clear that the agent didn’t know how to differentiate a helpful response from one that would be seen as insensitive, disrespectful or meaningless when generated by AI rather than a human. 

“An AI assistant that is not reliably helpful and can be actively harmful puts a lot of the burden on the individuals using it,” said Aleksandra Korolova, an assistant professor of computer science at Princeton University. 

Clegg said Wednesday that he wasn’t aware of the exchange. Facebook’s online help page says the Meta AI agent will join a group conversation if invited, or if someone “asks a question in a post and no one responds within an hour.” The group’s administrators have the ability to turn it off. 

Need a camera?

In another example shown to the AP on Thursday, the agent caused confusion in a forum for swapping unwanted items near Boston. Exactly one hour after a Facebook user posted about looking for certain items, an AI agent offered a “gently used” Canon camera and an “almost-new portable air conditioning unit that I never ended up using.” 

Meta said in a written statement Thursday that “this is new technology and it may not always return the response we intend, which is the same for all generative AI systems.” The company said it is constantly working to improve the features. 

In the year after ChatGPT sparked a frenzy for AI technology that generates human-like writing, images, code and sound, the tech industry and academia introduced 149 large AI systems trained on massive datasets, more than double the year before, according to a Stanford University survey. 

They may eventually hit a limit, at least when it comes to data, said Nestor Maslej, a research manager for Stanford’s Institute for Human-Centered Artificial Intelligence. 

“I think it’s been clear that if you scale the models on more data, they can become increasingly better,” he said. “But at the same time, these systems are already trained on percentages of all the data that has ever existed on the internet.” 

More data — acquired and ingested at costs only tech giants can afford, and increasingly subject to copyright disputes and lawsuits — will continue to drive improvements. “Yet they still cannot plan well,” Maslej said. “They still hallucinate. They’re still making mistakes in reasoning.” 

Getting to AI systems that can perform higher-level cognitive tasks and common-sense reasoning — where humans still excel — might require a shift beyond building ever-bigger models. 

Seeing what works

For the flood of businesses trying to adopt generative AI, which model they choose depends on several factors, including cost. Language models, in particular, have been used to power customer service chatbots, write reports and financial insights, and summarize long documents. 

“You’re seeing companies kind of looking at fit, testing each of the different models for what they’re trying to do and finding some that are better at some areas rather than others,” said Todd Lohr, a leader in technology consulting at KPMG. 

Unlike other model developers selling their AI services to other businesses, Meta is largely designing its AI products for consumers — those using its advertising-fueled social networks. Joelle Pineau, Meta’s vice president of AI research, said at a recent London event that the company’s goal over time is to make a Llama-powered Meta AI “the most useful assistant in the world.” 

“In many ways, the models that we have today are going to be child’s play compared to the models coming in five years,” she said. 

But she said the “question on the table” is whether researchers have been able to fine-tune Meta’s bigger Llama 3 model so that it’s safe to use and doesn’t, for example, hallucinate or engage in hate speech. In contrast to leading proprietary systems from Google and OpenAI, Meta has so far advocated for a more open approach, publicly releasing key components of its AI systems for others to use. 

“It’s not just a technical question,” Pineau said. “It is a social question. What is the behavior that we want out of these models? How do we shape that? And if we keep on growing our models ever more general and powerful without properly socializing them, we are going to have a big problem on our hands.”

Developers: Enhanced AI could outthink humans in 2 to 5 years

VANCOUVER, British Columbia — Just as the world is getting used to the rapidly expanding use of AI, or artificial intelligence, AGI is looming on the horizon.

Experts say that when artificial general intelligence becomes reality, it could perform tasks better than human beings, with the possibility of higher cognitive abilities, emotions, and the ability to teach itself and develop.

Ramin Hasani is a research scientist at the Massachusetts Institute of Technology and the CEO of Liquid AI, which builds specific AI systems for different organizations. He is also a TED Fellow, a program that helps develop what the nonprofit TED conference considers to be “game changers.”

Hasani says the first signs of AGI are realistically two to five years away, and that it will have a direct impact on our everyday lives.

What’s coming, he says, will be “an AI system that can have the collective knowledge of humans. And that can beat us in tasks that we do in our daily life, something you want to do … your finances, you’re solving, you’re helping your daughter to solve their homework. And at the same time, you want to also read a book and do a summary. So an AGI would be able to do all that.”

Hasani says advancing artificial intelligence will allow things to move faster, and that AI systems can even be made to have emotions.

He says proper regulation can be achieved by better understanding how different AI systems are developed.

This thought is shared by Bret Greenstein, a partner at London-based PricewaterhouseCoopers who leads its efforts on artificial intelligence.

“I think one is a personal responsibility for people in leadership positions, policymakers, to be educated on the topic, not in the fact that they’ve read it, but to experience it, live it and try it. And to be with people who are close to it, who understand it,” he says.

Greenstein warns that if AI is over-regulated, innovation will be curtailed and access will be limited for people who could benefit from it.

For musician, comedian and actor Reggie Watts, who was the bandleader on “The Late Late Show with James Corden” on CBS, AI and the coming of AGI will be a great way to find mediocre music, because such music will be easy to mimic.

Calling it “artificial consciousness,” he says existing laws to protect intellectual property rights and creative industries, like music, TV and film, will work, provided they are properly adapted.

“I think it’s just about the usage of the tool, how it’s … how it’s used. Is there money being made off of it, so on, so forth. So, I think that we already have … tools that exist that deal with these types of situations, but [the laws and regulations] need to be expanded to include AI because there’ll probably be a lot more nuance to it.”

Watts says that any form of AI is going to be smarter than one person, almost like all human intelligence collected into one point. He feels this will cause humanity to discover interesting things and the nature of reality itself.

This year’s conference marked the 40th year of TED, the nonprofit organization whose name is an acronym for Technology, Entertainment and Design.

Google fires 28 workers protesting contract with Israel

New York — Google fired 28 employees following a disruptive sit-down protest over the tech giant’s contract with the Israeli government, a Google spokesperson said Thursday.

The Tuesday demonstration was organized by the group “No Tech for Apartheid,” which has long opposed “Project Nimbus,” Google’s joint $1.2 billion contract with Amazon to provide cloud services to the government of Israel.

Video of the demonstration showed police arresting Google workers in Sunnyvale, California, in the office of Google Cloud CEO Thomas Kurian, according to a post by the advocacy group on X, formerly Twitter.

Kurian’s office was occupied for 10 hours, the advocacy group said.

Workers held signs including “Googlers against Genocide,” a reference to accusations surrounding Israel’s attacks on Gaza.

“No Tech for Apartheid,” which also held protests in New York and Seattle, pointed to an April 12 Time magazine article reporting a draft contract of Google billing the Israeli Ministry of Defense more than $1 million for consulting services.

A “small number” of employees “disrupted” a few Google locations, but the protests are “part of a longstanding campaign by a group of organizations and people who largely don’t work at Google,” a Google spokesperson said.

“After refusing multiple requests to leave the premises, law enforcement was engaged to remove them to ensure office safety,” the Google spokesperson said. “We have so far concluded individual investigations that resulted in the termination of employment for 28 employees, and will continue to investigate and take action as needed.”

Israel is one of “numerous” governments for which Google provides cloud computing services, the Google spokesperson said.

“This work is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services,” the Google spokesperson said.

AI-generated fashion models could bring more diversity to industry — or leave it with less

Chicago, Illinois — London-based model Alexsandrah has a twin, but not in the way you’d expect: Her counterpart is made of pixels instead of flesh and blood.

The virtual twin was generated by artificial intelligence and has already appeared as a stand-in for the real-life Alexsandrah in a photo shoot. Alexsandrah, who goes by her first name professionally, in turn receives credit and compensation whenever the AI version of herself gets used — just like a human model.

Alexsandrah says she and her alter-ego mirror each other “even down to the baby hairs.” And it is yet another example of how AI is transforming creative industries — and the way humans may or may not be compensated.

Proponents say the growing use of AI in fashion modeling showcases diversity in all shapes and sizes, allowing consumers to make more tailored purchase decisions that in turn reduces fashion waste from product returns. And digital modeling saves money for companies and creates opportunities for people who want to work with the technology.

But critics raise concerns that digital models may push human models — and other professionals like makeup artists and photographers — out of a job. Unsuspecting consumers could also be fooled into thinking AI models are real, and companies could claim credit for fulfilling diversity commitments without employing actual humans.

“Fashion is exclusive, with limited opportunities for people of color to break in,” said Sara Ziff, a former fashion model and founder of the Model Alliance, a nonprofit aiming to advance workers’ rights in the fashion industry. “I think the use of AI to distort racial representation and marginalize actual models of color reveals this troubling gap between the industry’s declared intentions and their real actions.”  

Women of color in particular have long faced higher barriers to entry in modeling and AI could upend some of the gains they’ve made. Data suggests that women are more likely to work in occupations in which the technology could be applied and are more at risk of displacement than men.

In March 2023, iconic denim brand Levi Strauss & Co. announced that it would be testing AI-generated models produced by Amsterdam-based company Lalaland.ai to add a wider range of body types and underrepresented demographics on its website. But after receiving widespread backlash, Levi clarified that it was not pulling back on its plans for live photo shoots, the use of live models or its commitment to working with diverse models.

“We do not see this (AI) pilot as a means to advance diversity or as a substitute for the real action that must be taken to deliver on our diversity, equity and inclusion goals and it should not have been portrayed as such,” Levi said in its statement at the time.

The company last month said that it has no plans to scale the AI program.

The Associated Press reached out to several other retailers to ask whether they use AI fashion models. Target, Kohl’s and fast-fashion giant Shein declined to comment; Temu did not respond to a request for comment.

Meanwhile, spokespeople for Neiman Marcus, H&M, Walmart and Macy’s said their respective companies do not use AI models, although Walmart clarified that “suppliers may have a different approach to photography they provide for their products, but we don’t have that information.”

Nonetheless, companies that generate AI models are finding demand for the technology, including Lalaland.ai, which was co-founded by Michael Musandu after he grew frustrated by the absence of clothing models who looked like him.

“One model does not represent everyone that’s actually shopping and buying a product,” he said. “As a person of color, I felt this painfully myself.”

Musandu says his product is meant to supplement traditional photo shoots, not replace them. Instead of seeing one model, shoppers could see nine to 12 models using different size filters, which would enrich their shopping experience and help reduce product returns and fashion waste.

The technology is actually creating new jobs, since Lalaland.ai pays humans to train its algorithms, Musandu said.

And if brands “are serious about inclusion efforts, they will continue to hire these models of color,” he added.

London-based model Alexsandrah, who is Black, says her digital counterpart has helped her distinguish herself in the fashion industry. In fact, the real-life Alexsandrah has even stood in for a Black computer-generated model named Shudu, created by Cameron Wilson, a former fashion photographer turned CEO of The Diigitals, a U.K.-based digital modeling agency.

Wilson, who is white and uses they/them pronouns, created Shudu in 2017 and described her on Instagram as “The World’s First Digital Supermodel.” But critics at the time accused Wilson of cultural appropriation and digital Blackface.

Wilson took the experience as a lesson and transformed The Diigitals to make sure Shudu — who has been booked by Louis Vuitton and BMW — didn’t take away opportunities but instead opened possibilities for women of color. Alexsandrah, for instance, has modeled in-person as Shudu for Vogue Australia, and writer Ama Badu came up with Shudu’s backstory and portrays her voice for interviews.

Alexsandrah said she is “extremely proud” of her work with The Diigitals, which created her own AI twin: “It’s something that even when we are no longer here, the future generations can look back at and be like, ‘These are the pioneers.'”

But for Yve Edmond, a New York City area-based model who works with major retailers to check the fit of clothing before it’s sold to consumers, the rise of AI in fashion modeling feels more insidious.

Edmond worries modeling agencies and companies are taking advantage of models, who are generally independent contractors afforded few labor protections in the U.S., by using their photos to train AI systems without their consent or compensation.

She described one incident in which a client asked to photograph Edmond moving her arms, squatting and walking for “research” purposes. Edmond refused and later felt swindled — her modeling agency had told her she was being booked for a fitting, not to build an avatar.

“This is a complete violation,” she said. “It was really disappointing for me.”

But absent AI regulations, it’s up to companies to be transparent and ethical about deploying AI technology. And Ziff, the founder of the Model Alliance, likens the current lack of legal protections for fashion workers to “the Wild West.”

That’s why the Model Alliance is pushing for legislation like the Fashion Workers Act being considered in New York state, which includes a provision that would require management companies and brands to obtain a model’s clear written consent to create or use the model’s digital replica, specify the amount and duration of compensation, and prohibit altering or manipulating the digital replica without consent.

Alexsandrah says that with ethical use and the right legal regulations, AI might open up doors for more models of color like herself. She has let her clients know that she has an AI replica, and she funnels any inquiries for its use through Wilson, whom she describes as “somebody that I know, love, trust and is my friend.” Wilson says they make sure any compensation for Alexsandrah’s AI is comparable to what she would make in person.

Edmond, however, is more of a purist: “We have this amazing Earth that we’re living on. And you have a person of every shade, every height, every size. Why not find that person and compensate that person?”

Instagram blurring nudity in messages to protect teens, fight sexual extortion

LONDON — Instagram says it’s deploying new tools to protect young people and combat sexual extortion, including a feature that will automatically blur nudity in direct messages.

The social media platform said in a blog post Thursday that it’s testing out the features as part of its campaign to fight sexual scams and other forms of “image abuse,” and to make it tougher for criminals to contact teens.

Sexual extortion, or sextortion, involves persuading a person to send explicit photos online and then threatening to make the images public unless the victim pays money or engages in sexual favors. Recent high-profile cases include two Nigerian brothers who pleaded guilty to sexually extorting teen boys and young men in Michigan, including one who took his own life, and a Virginia sheriff’s deputy who sexually extorted and kidnapped a 15-year-old girl.

Instagram and other social media companies have faced growing criticism for not doing enough to protect young people. Mark Zuckerberg, the CEO of Instagram’s owner Meta Platforms, apologized to the parents of victims of such abuse during a Senate hearing earlier this year.

Meta, which is based in Menlo Park, California, also owns Facebook and WhatsApp but the nudity blur feature won’t be added to messages sent on those platforms.

Instagram said scammers often use direct messages to ask for “intimate images.” To counter this, it will soon start testing out a nudity-protection feature for direct messages that blurs any images with nudity “and encourages people to think twice before sending nude images.”

“The feature is designed not only to protect people from seeing unwanted nudity in their DMs, but also to protect them from scammers who may send nude images to trick people into sending their own images in return,” Instagram said.

The feature will be turned on by default globally for teens under 18. Adult users will get a notification encouraging them to activate it.

Images with nudity will be blurred and shown with a warning, giving users the option to view them. They’ll also get an option to block the sender and report the chat.

People sending direct messages with nudity will get a message reminding them to be cautious when sending “sensitive photos.” They’ll also be informed that they can unsend the photos if they change their mind, but that there’s a chance others may have already seen them.

As with many of Meta’s tools and policies around child safety, critics saw the move as a positive step, but one that does not go far enough.

“I think the tools announced can protect senders, and that is welcome. But what about recipients?” said Arturo Béjar, former engineering director at the social media giant who is known for his expertise in curbing online harassment. He said 1 in 8 teens receives an unwanted advance on Instagram every seven days, citing internal research he compiled while at Meta that he presented in November testimony before Congress. “What tools do they get? What can they do if they get an unwanted nude?”

Béjar said “things won’t meaningfully change” until there is a way for a teen to say they’ve received an unwanted advance, and there is transparency about it.

Instagram said it’s working on technology to help identify accounts that could potentially be engaging in sexual extortion scams, “based on a range of signals that could indicate sextortion behavior.”

To stop criminals from connecting with young people, it’s also taking measures including not showing the “message” button on a teen’s profile to potential sextortion accounts, even if they already follow each other, and testing new ways to hide teens from these accounts.

In January, the FBI warned of a “huge increase” in sextortion cases targeting children — including financial sextortion, where someone threatens to release compromising images unless the victim pays. The targeted victims are primarily boys ages 14 to 17, but the FBI said any child can become a victim. In the six-month period from October 2022 to March 2023, the FBI saw a more than 20% increase in reports of financially motivated sextortion cases involving minor victims compared with the same period in the previous year.

Swarms of drones can be managed by a single person

The U.S. military says large groups of drones and ground robots can be managed by just one person without added stress to the operator. As VOA’s Julie Taboh reports, the technologies may be beneficial for civilian uses, too. VOA footage by Adam Greenbaum.

Indiana aspires to become next great tech hub

The Midwestern state of Indiana aspires to become the next great technology center as the United States ramps up investment in domestic microchip development and manufacturing. VOA’s Kane Farabaugh has more from Indianapolis. Videographer: Kane Farabaugh, Adam Greenbaum

Indiana aspires to become next great tech center

INDIANAPOLIS, Indiana — Semiconductors, or microchips, are critical to almost everything electronic used in the modern world. In 1990, the United States produced about 40% of the world’s semiconductors. As manufacturing migrated to Asia, U.S. production fell to about 12%.  

“During COVID, we got a wake-up call. It was like [a] Sputnik moment,” explained Mark Lundstrom, an engineer who has worked with microchips much of his life. 

The 2020 global coronavirus pandemic slowed production in Asia, creating a ripple through the global supply chain and leading to shortages of everything from phones to vehicles. Lundstrom said increasing U.S. reliance on foreign chip manufacturers exposed a major weakness. 

“We know that AI is going to transform society in the next several years. It requires extremely powerful chips, the most powerful leading-edge chips.” 

Today, Lundstrom is the acting dean of engineering at Purdue University in West Lafayette, Indiana, a leader in cutting-edge semiconductor development, which has taken on new importance amid the emerging field of artificial intelligence. 

“If we fall behind in AI, the consequences are enormous for the defense of our country, for our economic future,” Lundstrom told VOA. 

Amid the buzz of activity in a laboratory on Purdue’s campus, visitors can get a vision of what the future might look like in microchip technology. 

“The key metrics of the performance of the chips actually are the size of the transistors, the devices, which is the building block of the computer chips,” said Zhihong Chen, director of Purdue’s Birck Nanotechnology Center, where engineers work around the clock to push microchip technology into the future. 

“We are talking about a few atoms in each silicon transistor these days. And this is what this whole facility is about,” Chen said. “We are trying to make the next generation transistors better devices than current technologies. More powerful and more energy-efficient computer chips of the future.” 

Not just RVs anymore

Because of Purdue’s efforts, along with those on other university campuses in the state, Indiana believes it’s an attractive location for manufacturers looking to build new microchip facilities. 

“Purdue University alone, a top four-ranked engineering school, offers more engineers every year than the next top three,” said Eric Holcomb, Indiana’s Republican governor. “When you have access to that kind of talent, when you have access to the cost of doing business in the state of Indiana, that’s why people are increasingly saying, Indiana.” 

Holcomb is in the final year of his eight-year tenure in the state’s top position. He wants to transform Indiana’s image beyond that of the recreational vehicle, or RV, capital of the country.  

“We produce about plus-80% of all the RV production in North America in one state,” he told VOA. “We are not just living up to our reputation as being the number one manufacturing state per capita in America, but we are increasingly embracing the future of mobility in America.” 

Holcomb is spearheading an effort to make Indiana the next great technology center as the U.S. ramps up investment in domestic microchip development and manufacturing.  “If we want to compete globally, we have to get smarter and healthier and more equipped, and we have to continue to invest in our quality of place,” Holcomb told VOA in an interview. 

His vision is shared by other lawmakers, including U.S. Senator Todd Young of Indiana, who co-sponsored the bipartisan CHIPS and Science Act, which commits more than $50 billion in federal funding for domestic microchip development. 

‘We are committed’

Indiana is now home to one of 31 designated U.S. technology and innovation hubs, helping it qualify for hundreds of millions of dollars in grants designed to attract technology-driven businesses. 

“The signal that it sends to the rest of the world [is] that we are in it, we are committed, and we are focused,” said Holcomb. “We understand that economic development, economic security and national security complement one another.” 

Indiana’s efforts are paying off. 

In April, South Korean microchip manufacturer SK Hynix announced it was planning to build a $4 billion facility near Purdue University that would produce next-generation, high-bandwidth memory, or HBM chips, critical for artificial intelligence applications.  

The facility, slated to start operating in 2028, could create more than 1,000 new jobs. U.S. chip manufacturer SkyWater also plans to invest nearly $2 billion in Indiana’s new LEAP Innovation District near Purdue, though the state recently lost a bid to host chipmaker Intel, which selected Ohio for two new factories. 

“Companies tend to like to go to locations where there is already that infrastructure, where that supply chain is in place,” Purdue’s Lundstrom said. “That’s a challenge for us, because this is a new industry for us. So, we have a chicken-and-egg problem that we have to address, and we are beginning to address that.” 

Lundstrom said the CHIPS and Science Act and the federal money that comes with it are helping Indiana ramp up to compete with other U.S. locations already known for microchip development, such as Silicon Valley in California and Arizona. 

What could help Indiana gain an edge is its natural resources — plenty of land and water, and regular weather patterns, all crucial for the sensitive processes needed to manufacture microchips at large manufacturing centers. 

Ukrainian civilians help build up their country’s drone fleet

Inexpensive first-person view, or FPV, drones have become a powerful weapon in Ukraine’s war against Russian invaders. As the country presses the West for more military aid, many Ukrainian civilians are stepping in to help by making homemade attack drones. Lesia Bakalets has the story from Kyiv.