AI's lava lamp: making LLMs creative
My ideas come from connecting the problems I'm working on with random tokens from the real world. The other day I saw an older lady play the game Bubble Shooter on the subway. For a second, it made me consider a strange slingshot interaction for my new app. Another idea came later that day. I was taking a shower when I remembered that my dad once saved me a newspaper article about the band R.E.M. Maybe I could make the app's user highlights look like newspaper cutouts?
Everyday life sparks new ideas so often that my girlfriend lovingly calls it "Walter Mittying" when I get lost in thought. It's made me wonder how an LLM can help me be creative without a rich life of its own. While it has memory, it won't bring up skeuomorphism because it didn't have a dad who wanted his son to remember "Everybody Hurts" at Live 8 in 2005. To quote Good Will Hunting:
"If I asked you about art, you'd probably give me the skinny on every art book ever written... but I'll bet you can't tell me what it smells like in the Sistine Chapel."
Perhaps it doesn't matter that an LLM doesn't have a life of its own. The AI shoggoth has read about all of our lives. It's likely that somewhere in its training data, someone cut up the next day's paper after hearing the bald singer with the blue face paint. Or that there's an article about the surprising popularity of mobile games on subways.
The actual problem is that the model doesn't use these arbitrary tokens when helping us. Which is, of course, our fault. Until AI sees everything we see, it relies on us to discern what is and isn't useful. And we only prompt it with the "serious tokens" it needs to serve us on the corporate battlefield, so it misses out on life's entropy: the stuff that seems mundane at first sight but is where good ideas come from.
Computers struggling to be random isn't a new problem. Cloudflare famously generates unpredictable numbers by extracting randomness from photos of lava lamps. So what's AI's lava lamp, the process by which we get it to make associations that feel counter to next-token prediction?
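As an aside, the lava lamp trick is simple at heart: point a camera at something chaotic and hash the frames into seed material. Here's a minimal sketch of that idea, not Cloudflare's actual pipeline (which is more involved); the file name and function are mine:

```python
import hashlib
import secrets

def seed_from_photo(photo_bytes: bytes) -> bytes:
    """Hash an unpredictable camera frame, mixed with some local entropy,
    into 32 bytes of seed material for a random number generator."""
    return hashlib.sha256(photo_bytes + secrets.token_bytes(32)).digest()

# Hypothetical frame from the wall of lamps in the lobby.
photo_bytes = open("lava_lamps.jpg", "rb").read()
seed = seed_from_photo(photo_bytes)
```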
One idea is to simulate a daily life for an LLM and give it a scratchpad to write down observations it can later use. 2025, then, might not be the year we ask agents to do our work, but the year we ask them to visit museums, fall in love, and make lists in their notes app.
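If I were to prototype that scratchpad today, it might be nothing more than a file of jotted-down observations, a few of which get sampled into the next brainstorming prompt. A rough sketch, with made-up names and no real agent framework behind it:

```python
import json
import random
from pathlib import Path

SCRATCHPAD = Path("scratchpad.jsonl")  # hypothetical notes app for agents

def jot(observation: str) -> None:
    """Append one observation from the model's simulated day."""
    with SCRATCHPAD.open("a") as f:
        f.write(json.dumps({"note": observation}) + "\n")

def sample_entropy(k: int = 3) -> list[str]:
    """Pull a few unrelated notes to mix into the next creative prompt."""
    notes = [json.loads(line)["note"] for line in SCRATCHPAD.read_text().splitlines()]
    return random.sample(notes, min(k, len(notes)))

# The agent's "day" fills the scratchpad...
jot("older lady playing Bubble Shooter on the subway")
jot("newspaper cutout my dad saved about R.E.M.")

# ...and brainstorming starts from those mundane tokens, not just the task.
prompt = (
    "Things I noticed recently:\n- " + "\n- ".join(sample_entropy()) +
    "\n\nConnect one of them to a new interaction idea for the app."
)
```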