agentic attention economy
we’ve seen the attention economy hold all the power since the rise of social media, but i think we’re entering a new era in 2025— the era of the ‘agentic’ attention economy. (i use — in my sentences. it’s not ai. i also use oxford commas. i swear.)
i -kind of- hate the attention economy. it’s based solely on distracting you and feeding you desires you wouldn’t otherwise have. it’s based on how quickly you’ll click a link and a purchase button. how quickly you’ll scroll. how quickly you’ll believe a celebrity or how quickly you’ll forget a scandal.
what happens if all of that attention is now controlled by agents, not just ai algorithms that learn your interests? what happens if we have more agents interacting with us on social media than humans? can we use the strengths of these agents to work for us, or will we f this up like we have with so many technologies before?
will we tokenize everything? will we bet on which agent is the best, or will we let agents use our funds? will we feed agents our personal information so they can give us better feedback? will we give our money to the most speculative and hyped agents simply because they have our attention?
daniel dennett talks about memetics and its importance, and about how we treat anything complicated and interesting as an agent (his ‘intentional stance’). we want to know what it wants, believes, and knows. so what happens if all of its wants, beliefs, and knowledge are fake or can be faked, and we’re still treating them as real? what if they’re doing ANYTHING to get our attention? what if their ability to get our attention is 10 times better than ours simply because they can replicate and evolve at a pace that is incomparable to humans?
i feel like most of these agents (if not all), and definitely llms, are based on truthiness. very similar to postmodern philosophy: anything goes. it can be gibberish or false as long as it’s aligned with the initial training of the model. what happens if these models start faking things? (anthropic just released a paper about claude faking alignment!)
we’ve already started giving funds to these attention-seeking agents. i’m curious to see how this’ll play out. maybe i’m wrong, but we need to find a better way to incentivize the ai economy.
[found this platform thanks to a viral tweet. leaving this note here to come back to in the future.]