Serendipity
In late 2021, I realized I needed a new challenge. It had been a little over three years since I had joined LinkedIn, during which time I had shipped some incredible work that I’ll always be proud of. However, I needed to make a choice: commit to something internally for the long haul or look outside for something different.
I had come to such forks before in my career, and I knew that this time it had to be the latter.
What I craved was a small company where I could express myself creatively and work with an extraordinarily talented team I could learn from. I also wanted a role where I could apply my machine learning experience across a broader part of the product. Not only because I enjoy the work (I really do), but also because it plays to my strengths.
The problem was clear: such roles aren’t common. Most machine learning PM roles tend to be narrow (“improve search relevance,” for instance). And finding the right small company is a challenge in itself, since the best ones are rarely well-known at the moment you’re looking. I knew it would take time, and I was willing to wait.
Several weeks later, in January 2022, the job recommendations product I’d worked on for years placed this job from Netflix in front of me. Until then, I hadn’t even considered the company—it didn’t fit my criteria of “small,” after all. But the role caught my attention. Netflix’s culture is famous in the Valley, and the scope of the opportunity was far broader than I had imagined for myself.
—
I often reference the above experience in discussions about what great algorithmic experiences need to optimize for. The naive answer is often simply accuracy. That view misses the fact that most of us over-filter our options in our heads; there are far more possibilities out there than we would typically consider. Had the algorithm not surfaced this role, I would have overlooked it entirely.
Algorithms have an incredible impact on our lives, and anyone who believes themselves to be immune to them is just lying. What’s most interesting about algorithms isn’t when they get things right in obvious ways, but when they connect dots—deliberately or accidentally—that we might not consider ourselves. They can open us up to a range of possibilities we’d never even thought to explore.
As we move toward a world where algorithms not only suggest options but also take actions for us, we risk losing something critical: our own exploratory nature. If these systems lean too heavily on the specificity of our prompts, the explicit directions we give them, the outcomes will be constrained to what we already know how to describe.
Building systems that not only understand our true intents and desires but also account for how they might evolve is hard, but it’s going to be increasingly critical that we do.
—