Meta caused a stir last week when it revealed that it intended to populate its platform with a large number of completely artificial users in the near future.

“We expect that over time these AIs will actually exist on our platform, just as accounts do,” Connor Hayes, vice president of product for generative AI at Meta, told the Financial Times. “They'll have bios and profile pictures and be able to generate and share AI-powered content on the platform … that's where we see all of this happening.”

The fact is that Meta seems happy to fill its platform with the kind of AI slop that internet users have come to know. Some people then noticed that Facebook was, in fact, already filled with strange AI-generated personas, most of whom had stopped posting some time ago. These included, for example, “Liv,” a “proud Black queer mother of 2 and truth-teller,” a persona that went viral as people marveled at its strange nonchalance. Meta began removing these earlier fake profiles after they failed to receive engagement from any genuine users.

However, let's stop hating on Meta for a moment. It's worth noting that AI-generated social personas can also be a valuable research tool for scientists who want to explore how AI can mimic human behavior.

An experiment called GovSim, run in late 2024, shows how useful it can be to study how AI characters interact with one another. The researchers behind the project wanted to explore the phenomenon of cooperation between humans with access to a shared resource, such as common land for grazing livestock. Several decades ago, the Nobel Prize-winning economist Elinor Ostrom showed that, rather than depleting such a resource, real communities tend to figure out how to share it through informal communication and collaboration, without any imposed rules.

Max Kleiman-Weiner, a professor at the University of Washington and one of those involved with the GovSim work, says it was partly inspired by a Stanford project called Smallville, which I wrote about earlier in AI Lab. Smallville is a Farmville-like simulation involving characters that communicate and interact with one another under the control of a large language model.

Kleiman-Weiner and colleagues wanted to see whether AI characters would engage in the kind of cooperation that Ostrom found. The team tested 15 different LLMs, including those from OpenAI, Google, and Anthropic, on three hypothetical scenarios: a fishing community with access to a single lake; shepherds who share land for their sheep; and a group of factory owners who need to limit their collective pollution.

In 43 out of 45 simulations, they found that the AI personas failed to share resources sustainably, although the smarter models performed better. “We saw a very strong correlation between how powerful the LLM was and how capable it was of maintaining cooperation,” Kleiman-Weiner told me.
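The commons dilemma that GovSim probes can be sketched in a few lines. The simulation below is a hypothetical stand-in, not the actual GovSim code: simple rule-based policies take the place of LLM agents, and the stock size, regrowth rule, and policy parameters are all invented for illustration. The underlying dynamic is the same, though: if agents harvest faster than the shared resource regenerates, it collapses for everyone.

```python
# Toy sketch of a shared-resource ("commons") simulation in the spirit of
# GovSim's fishing-lake scenario. NOT the GovSim code: every number and
# policy here is a hypothetical stand-in for an LLM agent's decisions.

def run_commons(policies, stock=100.0, regen_rate=0.1, capacity=100.0, rounds=20):
    """Simulate agents harvesting from a shared stock that regrows each round.

    policies: list of functions (stock, n_agents) -> requested harvest.
    Returns the final stock and each agent's total harvest; a final stock
    near zero means the commons collapsed.
    """
    totals = [0.0] * len(policies)
    for _ in range(rounds):
        for i, policy in enumerate(policies):
            take = min(policy(stock, len(policies)), stock)  # can't take more than exists
            totals[i] += take
            stock -= take
        # Logistic regrowth: the resource only recovers if some stock remains.
        stock = min(capacity, stock + regen_rate * stock * (1 - stock / capacity))
    return stock, totals

# A "greedy" policy demands a fixed large amount regardless of stock level;
# a "restrained" policy takes only a small fraction of what remains.
greedy = lambda stock, n: 10.0
restrained = lambda stock, n: 0.02 * stock

collapse_stock, _ = run_commons([greedy] * 4)
shared_stock, _ = run_commons([restrained] * 4)
print(f"all greedy:     final stock {collapse_stock:.1f}")   # collapses to zero
print(f"all restrained: final stock {shared_stock:.1f}")     # stays positive
```

In GovSim, the interesting question is whether LLM-driven agents, negotiating in natural language, land closer to the restrained outcome or the greedy one; in this sketch the policies are fixed by hand, whereas the study's finding was that only the strongest models reliably sustained cooperation.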
