Artificial intelligence (AI) tools could be used to manipulate online audiences into decisions ranging from what to buy to whom to vote for, according to a University of Cambridge study of an emerging market in digital intent signals, known as the "intention economy".
According to researchers at the Leverhulme Centre for the Future of Intelligence (LCFI) at the University of Cambridge, the "intention economy" is the successor to the "attention economy", in which social networks keep users on their platforms and show them ads. In the attention economy, advertisers can buy access to users' attention in the present through real-time bidding on ad exchanges, or purchase it for future campaigns, for example by renting advertising space a month in advance.
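The real-time bidding mentioned above typically works as a sealed-bid second-price auction: each advertiser submits a bid for an impression, the highest bidder wins, and pays the runner-up's price. A minimal sketch, with illustrative bidder names and bid values (not taken from the study):

```python
# Toy second-price auction, the mechanism commonly used in
# real-time bidding on ad exchanges. All names/values are illustrative.

def run_auction(bids):
    """bids: dict mapping bidder -> bid amount.
    Returns (winner, price): the top bidder pays the second-highest bid."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the runner-up's bid, not their own
    return winner, price

# Example: three advertisers bidding for a single user impression
bids = {"travel_site": 0.42, "hotel_chain": 0.55, "airline": 0.30}
winner, price = run_auction(bids)
print(winner, price)  # hotel_chain 0.42
```

The second-price rule is what makes truthful bidding the dominant strategy for participants; the study's concern is what happens when the item auctioned is not momentary attention but an inferred intention.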
"For decades, attention has been the currency of the Internet," says Dr. Jonnie Penn of LCFI. "Trading attention with social media platforms such as Facebook and Instagram drove the online economy."
The study argues that large language models (LLMs), which power AI tools such as the chatbot ChatGPT, will be used to "anticipate and steer" users based on "intentional, behavioural and psychological data".
In the intention economy, AI companies will sell information about users' motivations, ranging from plans for a hotel stay to opinions on a political candidate, to the highest bidder.
"In the intention economy, an LLM could cheaply exploit a user's communication cadence, political views, vocabulary, age, gender, preferences and even susceptibility to flattery. Combined with brokered bids, this data would maximise the likelihood of achieving a given aim (for example, selling a cinema ticket)," the study says. It adds that in such a world, AI models would steer conversations in the service of advertisers, businesses and other third parties.
The study claims that advertisers will be able to use generative AI tools to create personalised online advertisements. As an example, it cites Meta's AI model Cicero, which achieved "human-level" play in the board game **Diplomacy**, where success depends on predicting an opponent's intentions.
AI models will be able to adjust their outputs in response to "streams of incoming user-generated data", the study states. They could extract personal information from everyday conversation and even "steer" a dialogue so as to elicit more of it. The study quotes the Cicero research team as saying that "an agent (AI) can learn to nudge its conversational partner towards a particular goal".
The researchers also envisage a scenario in which Meta auctions off a user's intention, for example to book a restaurant, a flight or a hotel. While an industry dedicated to predicting and bidding on human behaviour already exists, AI models would distil that practice into a "highly quantified, dynamic and personalised format", the LCFI researchers stress.