• AI overleverage - a prediction

    It’s really terrible to see executives right now laying people off "because AI". Yes, an employee can do a lot more than they once could, but I’m guessing that a lot of these companies aren’t really thinking about the systemic and long-term changes they’d need to make this work out well.

    It seems to me that AI is creating a new kind of leverage - a leveraging of human capital. And, like precursors to other financial catastrophes in the past, a lot of companies are thoughtlessly over-leveraging their people with AI right now.

    So, I’d like to make a prediction: within the next 2 years, a major company (public or otherwise) is going to have a significant, multi-day outage. They may not say it publicly, but it will be due to a dramatic increase in the size and complexity of their systems (juiced by AI), combined with not having the people to actually figure out and fix what’s broken when it does break.

  • 📚 BOOK REVIEW

    The Serious Business of Small Talk by Carol Fleming (Review)

    ★★★★

    This book acts as both a guide for improving small talk as a skill, and also an argument for why small talk is important in life. I thought it was helpful on both fronts.

    Fleming is very knowledgeable on the subject of speech and communication, but the book is written in a very accessible way that isn’t academic at all. She makes the case that small talk is the most important communication skill and not just a necessary evil to tolerate. So many of our current societal ills are directly related to the lack of connection people feel with those around them. Small talk is one answer to changing that.

    A really useful book—one of those that I wish I had read 20 years ago.

  • 📚 BOOK REVIEW

    The Serviceberry by Robin Wall Kimmerer (Review)

    ★★★★

    A nice little book. It’s primarily an argument in favor of gift economies instead of the traditional monetary-based economies that we rely on. I think the case is fairly convincing on the small scale. It made me want to share more with my neighbors, with the people I’m connected with professionally, and in online communities in general. I was less convinced by her case for gift economies as part of the broader economy, which felt tacked on at the end and not thoroughly developed.

    I enjoy learning about different ways to view concepts that we tend to take for granted, like how Born to Run opened my eyes to a different way of approaching running. These always tend to come back to indigenous knowledge.

    A quick read, and I thought it was worth it as an essay that makes you think.

  • To build on the previous post’s concept of charting a nuanced path through the AI waters, I appreciated Stuart Winter-Tear’s recent post against autonomous agents and in favor of collaborative agents:

    Only build semi-autonomous agents with structured human oversight.

    Collaborative Agents are already delivering - in finance, healthcare, software - because they ask for help, adapt to feedback, and stay within the guardrails.

    "The most promising future for AI is not in systems that take over human roles, but in those that enhance human capabilities through meaningful partnership."

    Amen.

    There’s a lot to be gained in the current moment by working to solve a real problem and not getting carried away by the hype.

  • Zed on Agentic Engineering

    Today, the team who makes Zed announced a new speaker series about agentic engineering. In reading their motivation for it, I really appreciate the thoughtful way in which they’re approaching working with AI:

    Software development is changing and we find ourselves at a convergence. Between the extremes of technological zealotry ("all code will be AI-generated") and dismissive skepticism ("AI-generated code is garbage") lies a more practical and nuanced approach—one that is ours to discover together.

    As I’m trying to figure out how to navigate the changing world of work, so much of the content out there tends towards the maximalist end of the spectrum. People are playing on this weird cocktail of excitement, worry, and uncertainty to promote themselves. In the background is the combination of widespread tech layoffs and unemployment with the gold rush mentality of VCs, entrepreneurs, and companies trying to ride the AI wave.

    What I appreciate about how the Zed team talks about AI (see this great Changelog interview with Nathan Sobo) is that they have the courage to walk a middle path between the hype and skepticism. Work is changing, for sure. But you don’t have to buy into the predominant narratives on social media and surrender to FOMO in charting a path through it all.

  • As I sit here twiddling my thumbs waiting for Claude Code to finish working on a feature, I’m thinking about the stark difference in experience between the kind of coding where you’re working with an AI vs. the kind of coding where you’re actually writing it yourself.

    Coding with AI isn’t the same as normal coding, but accelerated. It’s a completely different developer experience. One where you’re passively waiting for something to happen vs. actively struggling with a problem.

    It’s reminiscent of the comparisons between television and video games as "lean back" vs. "lean forward" experiences. Vibe coding is, for the most part, a "lean back" experience.

    It’s way faster, and being able to see features working quickly feels good. There’s less direct struggle in the creation process, which was painful before, especially for someone who isn’t a super efficient developer. But it also makes it feel more bland, like I’m sitting here at a slot machine pulling the lever, hoping something works eventually.

    What are the implications of this over the long term?

  • As I thought more about the previous post about how AI can do your job, it reminded me of how Horace Dediu used to talk about cars in the context of mobility. When you buy a car, you’re really buying a "bundle" of trips (or "jobs" in the parlance of Jobs-to-be-Done). And when there are many other, often cheaper options out there, such as a bicycle or an electric scooter rental, those options can compete for many of the "jobs" that the car does.

    For most trips that people take in their daily lives, a car is overkill. More passenger space than you need, more storage space than you need, and more costly. It’s great to have when you do that yearly long roadtrip, but what about all of the drives in between?

    Is this also how humans will be viewed in the era of competition with AI? Is this the unbundling of humans? What "jobs" are bundled together today when you hire someone? And when you subtract out the jobs that AI can do competitively, what is left?

  • AI can do your job already

    In some of the recent projects I’ve been working on, I’ve been using the fantastic OpenRouter, which makes it trivially easy to test and compare the output of different LLMs. It’s an interesting experience, because it’s easy to see the differences in quality, speed, and cost between each model. It puts me in the mindset of "what’s the cheapest model that gets the job done?"

    As I was cycling through the different models, a dark future of employment dawned upon me. What if we humans are just considered another "model" to route a request to, whose output will be compared against all of these other models? People will send a request to cheaper models like Claude Haiku, to more expensive models like Sonnet and Opus, and then to the most expensive models of all, humans, which might cost 100x, 1000x, or 10000x for a given request.

    If you do all of your work through a computer today, this is already something that is 100% technically feasible.
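    As a toy sketch of what that kind of routing could look like (every worker name, price, and quality score below is made up purely for illustration; a real router would be far more sophisticated):

```python
# Toy cost-based router: every "worker" (cheap model, frontier model,
# or human) is just an entry with a price and a quality score.
# All names and numbers here are illustrative, not real rates.

WORKERS = {
    "cheap-model":    {"cost": 0.001, "quality": 0.70},
    "mid-model":      {"cost": 0.01,  "quality": 0.85},
    "frontier-model": {"cost": 0.05,  "quality": 0.92},
    "human":          {"cost": 10.0,  "quality": 0.97},
}

def route(min_quality: float) -> str:
    """Pick the cheapest worker that clears the quality bar."""
    candidates = [
        (w["cost"], name)
        for name, w in WORKERS.items()
        if w["quality"] >= min_quality
    ]
    if not candidates:
        raise ValueError("no worker meets the quality bar")
    # min() on (cost, name) tuples returns the cheapest candidate
    return min(candidates)[1]
```

    In this sketch, an easy request never leaves the cheap models; only the work demanding the very highest quality falls through to the most expensive "model" of all: a human.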

    The biggest shift here is going from binary thinking to gradient thinking. A lot of people tend to solipsistically think of the AI replacement question as a binary yes/no. Can an AI do my job or not? With that lens, the answer is usually "no" (at least from their own perspective).

    But this is not how those paying for jobs (micro-jobs like a single LLM, but also macro-jobs like a normal salaried position) will think. They’ll think of it like a gradient. Can an AI do this job? The answer is yes already. It can take the inputs and give the same kind of outputs that a human would.

    The only question is, how good of a job can it do? And at what price?

    When you start to think that way, things get a lot scarier.

  • I’ve continued using Claude Code after its recent addition to the Pro Plan. One thing that’s jumped out to me is the difference in experience between the terminal-only UI vs. the side-by-side in Cursor. With Claude Code, you can’t see all of the code, only snippets. And those views are all mediated by the AI. Setting its effectiveness as a coding agent aside, the fact that you can’t see the code forces you to trust and accept what it’s giving you.

    A lot has been said about how good or bad these agents are, but I think the mental shift here is the interface. You’re no longer working side-by-side with the AI, you’re fully depending on it.

    "AI as middleman" has many implications I don’t think we fully understand and appreciate yet.

  • The importance of pen and paper in the AI era

    One thing that’s jumped out to me as I use more and more AI tools, especially for coding: using pen and paper is even more important now than it was before.

    It’s always been important to work out things on paper. The Feynman anecdote on written notes comes to mind:

    Weiner: Well, the work was done in your head but the record of it is still here.
    Feynman: No, it’s not a record, not really, it’s working. You have to work on paper and this is the paper. OK? (source)

    Initially, using an LLM as a sounding board to work things out is tempting, and it feels like an easy way to start. But I find that it’s essential to work out the basics of what I want for myself first before going to the LLM. Without some clarity in my mind, it’s easy to be led into a local maximum, even when it’s just me and the LLM in the room.

    So now I find myself drawn to pen and paper even more than before. Work it out for yourself, then go to the AI.

  • I saw Zapier’s "AI fluency" assessment by role recently, and it made me really uneasy. I’m enthusiastic and excited to take advantage of AI tools myself, but this approach of cramming AI down the throats of employees feels gross. It’s the wrong kind of motivation to use.

    Looking at the rubric, it seems to be very prescriptive as to how each role should use AI, not just that they need to use it. To me, that seems very limiting. There are a lot more creative uses of AI I’m sure their employees would find if they weren’t afraid of getting a bad performance review by not using it in the ways demanded of them.

  • Adding Linear MCP to Claude Code

    After Anthropic recently added modest Claude Code capability to their Pro plan, I’ve been testing it out. I’m still undecided on whether I like it better than Cursor or not.

    I did run into one annoying thing, however, in the setup. It wasn’t very straightforward to set up the MCP integration with Linear. This is important to me because being able to just ask the LLM about tickets has become a surprisingly important part of my workflow. For some reason, the documentation on Linear’s side, on Anthropic’s side, and within Claude itself wasn’t very helpful in figuring it out.

    Trying to add Linear MCP using the normal syntax and the SSE syntax didn’t work. The approach of adding it to Claude desktop and importing it into Claude Code didn’t work either.

    What did work, however, was using the JSON syntax. So, without further ado, here is how you add the Linear MCP server to Claude Code in a way that will connect correctly (as of June 2025):

    claude mcp add-json linear '{"command": "npx", "args": ["-y","mcp-remote","https://mcp.linear.app/sse"]}'

  • How to use Notepads in Cursor

    Notepads is a relatively new feature that got introduced in the recent Cursor 0.41 (September 2024). When I first read about the feature, I got pretty excited. Specifying context is arguably the most important part of using Cursor effectively.

    I’ve always felt it cumbersome to have to manually @mention the whole list of files needed to work in a particular area of my project. It’s slow and it’s prone to error—sometimes I just forget to mention a file or two and it throws everything off.

    The promise of Notepads is that you can pre-specify a context (like multiple files along with extra instructions) and then just @mention the notepad going forward instead. For example, if I’m working in my Rails app, I can create a Notepad for all of the relevant controller, model, views, and javascript files as a shortcut for working in an area.
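    For instance, a Notepad for a hypothetical Rails billing area (all file names here are made up for illustration) might contain something like:

```
Billing area context
@app/controllers/billing_controller.rb
@app/models/invoice.rb
@app/views/billing/index.html.erb
@app/javascript/controllers/billing_controller.js

Conventions: money values are integer cents; run the billing tests after changes.
```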

    I’m still playing with the feature, but I’m hopeful that it will be a big unlock in efficiency for my workflow.

    The main problem I found is that, for some reason, it’s a huge pain in the ass to figure out how to use Notepads. As of this writing, I couldn’t find anything in the Cursor docs, the Cursor subreddit, or a cursory Google search on how to do it.

    So here’s how to use Notepads, at least as of Cursor 0.42:

    1. Open Composer (Cmd+I)
    2. Click the "Open control panel" icon in the top left of the Composer window (looks like a dial)
    3. In the left sidebar of the Composer control panel, find the "Notepad" section
    4. Press (+) to create a new Notepad
    5. Type whatever context you want into the Notepad, including @mention for files
    6. Exit out of the control panel
    7. Now, you should be able to @mention the notepad you just created in places like the chat sidebar

    Enjoy!

  • Building with AI - Dreaming and Incrementality

    The more I build with AI tools, the less and less I ask them to do. When I started using Cursor, I had gotten caught up in the demo hype. I asked it to build big, complex features in my product. Features I didn’t define very clearly and that would take serious architectural thinking across many files to make work.

    I quickly found myself spending more time chasing hallucinations and weird AI decisions than actually building new features. I learned that it was a lot cleaner and faster to ask the AI to take smaller, more focused steps. This change of approach allowed me to be productive again.

    But what’s the cost of conceding to this kind of pragmatism?

    In this morning’s Stratechery, Ben Thompson took a contrary view of Tesla’s recent "We, Robot" robotaxi event, which has been widely mocked by a lot of people in tech. While Thompson acknowledges a lot of the weaknesses in the demo, he pointed out that what Elon Musk does well is communicate and pursue a big dream. By doing that, you’re forced to break through the current constraints with the big step changes required to make the dream feasible.

    This approach is what allows you to make big leaps and tends to be proven right in the end. On the other hand, more incremental, pragmatic approaches tend to win in the short term but then dramatically underperform the big dreaming approaches in the long run.

    It’s not quite the same thing as the AI coding problem, but it was a good reminder for me. It’s okay in the short term to think incrementally and work pragmatically. After all, you need to actually release something now. And, unlike Elon Musk, most of us don’t have billions of dollars to bridge the gap between the present and the dream.

    But it’s important not to lose sight of the initial dream and the more hopeful applications of AI that you had when you first learned about these tools.

    Eventually they’re going to get good enough to handle those dreams. If you get stuck in the pragmatic mode of thinking, you’re going to limit what the AI can do.

    Pragmatism is good, but it’s important to dream a little every now and then.

  • Building with AI - The Distance Between Demos and Products

    Over the past several weeks, I’ve gotten excited about all these new AI tools that have been coming out. As a product manager with some technical knowledge, but ultimately fairly limited technical skills, they’ve unlocked a whole range of things, not just in the realm of productivity, that I couldn’t have done otherwise.

    What’s really been interesting, however, hasn’t been the new things that I’ve discovered, but instead the old truths that I’ve always known, which have only been further highlighted by these new AI tools.

    The first lesson that’s been reinforced for me is the huge distance that exists between demos and full products. You see a lot of cool demos of these new AI tools. People create new products from scratch in minutes, it seems. I found the reality to be a lot more complex than that. Yes, it’s easy to get something running in a day, but there’s a huge gulf between that and getting it to scale - something that’s actually releasable to a customer.

    It reminded me of hackathons I’ve participated in at companies: someone makes something really cool in a day or two, and then the executives go, "Oh, it just took you a day or two to make that? How soon can you have this in production?" The answer: Months. Years. And no one wants to hear that.

    With AI, I think we’ve figured out the zero-to-demo workflow. But the demo-to-product workflow still seems to be a challenge, something that’s in flux.

  • The Obsidian journey, embracing customization

    I’ve tried to use Obsidian for a couple of years now. My main aim has been to use it to set up a nice Zettelkasten system, which I’ve always hoped to do after reading "How to Take Smart Notes" years ago. Frankly, I’ve always struggled to get into a good rhythm with it, even though it’s clearly something that would be very useful for the way I work (I’m a heavy notetaker).

    Earlier this year, I read the fantastic book "Duly Noted" by Jorge Arango, which re-energized my interest in really figuring out my note-taking system with Obsidian once and for all. It’s made me think more deeply about why it hasn’t worked out for me in the past and become more sensitive to the friction points in the workflow that could be preventing me from using it more often.

    I’ve identified a few friction points on both the levels of the tool and my own mental model for note-taking. One breakthrough I’ve had recently is in the area of customization.

    Initially, I was reluctant to customize Obsidian and add a bunch of plugins like many do, even though the plugin ecosystem is arguably the best thing about Obsidian.

    But I was reluctant about customization because I’m generally a fan of using vanilla tools out of the box. The more customization you do, the harder it is to migrate to new systems down the road, and the harder it is to use a tool across multiple systems at once. Previously, my primary note-taking system was Notion, and it’s a fairly opinionated tool that doesn’t invite or even necessitate a whole lot of customization.

    I’ve now realized that applying this mentality to Obsidian is wrong. Because it’s based on the file system and doesn’t really have a strong opinion about how you create notes, the tool itself is too simple from the beginning to allow for an easy workflow. For example, with Notion, I never had to think about where images are saved when I add them to a document—it just becomes a part of the document. In Obsidian, however, I did have to make decisions about this, otherwise the default option caused my vault to become cluttered with attachments immediately.

    I’ve found that this willingness to customize and break Obsidian out of its defaults is a really important step for actually making it a useful tool. With an open-ended tool like this, it’s more important to prioritize utility (through customization) first, rather than preserving the optionality that would come with using the out-of-the-box configuration.

  • Gardener's Intent

    After reading Carse’s Finite and Infinite Games, I’ve begun to recognize that the garden could be the most important metaphor for product development (at least the kind I like to do, anyway).

    The key idea is that, as the gardener, you don’t "make" anything happen. A garden isn’t a machine that you create and control directly to get the outcome that you want. The machine approach is the default way that most people and companies think about product. Instead, a gardener creates the right environment for things to grow, and allows them to happen. This mindset of indirect action is an important mental shift.

    But beyond the core concept, I realized this morning as I was watering my own (real) garden that I don’t have a strong sense of where the gardener should apply intent in this model.

    In gardening, I like the idea of serendipity. If I see something growing on its own when I didn’t plant it myself, I like to let it go and see what happens. If it’s growing on its own without any extra effort from me, there’s a sense that it is the most robust thing that’s perfectly suited for the time and place that it’s growing. This approach can work well: last year, the most successful producer we had was a tomato plant that sprouted up this way.

    This year, however, I have a dilemma. I have a zucchini plant that sprouted up spontaneously and is growing very well. Unfortunately, it’s so big that it’s blocking the sun from nearby cucumber plants that I had intentionally planted earlier. On one hand, the pure gardener’s mindset of "seeing what happens" would indulge the zucchini - if it’s showing itself to be the most successful, shouldn’t I go with it? On the other hand, if I let it shade out the cucumber plants, I’ll lose out on plants I really wanted to have happen. How does the gardener balance serendipity and intent?

    This reveals that the gardener asserts intent (or doesn’t) at several steps in the process. First is the seeding - you can intentionally choose what seeds you plant and where. The second is environmental control - what conditions do you foster for things to grow? A third is management - if different things sprout up, do you allow them to continue?

    What’s the best way to do all of these things? Perhaps there is no easy answer; it depends on the situation and the gardener and the garden. If you don’t allow anything you didn’t plant to grow, you forgo many opportunities that might be perfectly suited to your garden. If you allow anything to grow, you’re at the mercy of pure serendipity and it may not provide you things you really wanted. At the end of the day, mindfulness and intent is still the most important thing.

  • Funding Structure and Product Management

    One thing I’ve been thinking about a lot these days is the impact of the individual in the company relative to the impact of the company itself. As an individual contributor, how much of your performance and impact is determined by you, and how much is determined by the context in which you operate?

    It’s a question that has serious career implications. The answer potentially implies a big change in the kinds of narratives we tell ourselves about our careers and the careers of others.

    As I saw the news about Squarespace being acquired by the PE firm Permira the other day and was reflecting on my own career, I was shocked to realize how much of the work I did as a product manager was really determined by the funding structure.

    I’ve worked underneath several funding structures in my career: subsidiaries that had been acquired by large companies, a VC-backed firm, an Angel-funded firm seeking VC funding, a SPAC company, and a PE-owned company.

    When I think about it, the funding structure guided almost everything, because it determined the company’s motivations and incentives. At the subsidiaries, pure revenue and company performance weren’t the guiding principle; it was whatever the parent company’s priorities were. At the VC-backed companies, all of the work and cadence were dictated by what we could show at the next board meeting. The SPAC company was all about spending money and the ability to raise more money. And on and on.

    A lot of discussions of product management treat the discipline as if it exists in a bubble—there’s a "good" way to operate and decide what you work on regardless of where you’re at. But, in practice, it seems to me that the organizational incentives from things like funding structure dictate the majority of what you actually work on. Maneuver within the lines as best you can, but it’s never an open field of opportunity.

    The funding structure determines the end goal of what the company views as valuable. And that perspective on value dramatically narrows the range of "valid" initiatives you can actually pursue.

    When interviewing, this comes into play. Interviewers often judge you on what you worked on or perhaps how you decided to pursue an opportunity. But how much of that was really up to you? And are most interviewers sensitive to the layers of incentives that exist in an organization and bear down on all of that? I don’t think most are, even if they’re product managers themselves.

    If you’re evaluating a place to work, don’t discount the funding structure - it may be more impactful on you than the company’s product, industry, leadership team, or any other factors we usually consider.

  • The Ronin Era

    Vincent: "Seven fat years and seven lean years."

    Sam: "That’s what it says in the Bible."

    When times get tough in my career or the industry overall, I find myself returning over and over to John Frankenheimer’s classic film Ronin starring Robert De Niro, Jean Reno, Natascha McElhone, and other great character actors.

    The movie is about a group of ex-operatives who have been brought together as mercenaries after the end of the Cold War. I won’t go into spoilers about the story, but what really draws me back is how it feels.

    Throughout the entire movie, there is this pervading feeling of paranoia, unease, anxiety, and desperation. In the old order, there were clear sides, right and wrong, and resources to match. But now, these agents, so skilled and capable, have been cast out on their own. There may have been a feeling of loyalty before; but now, it’s clear that was a mirage.

    You can see where I’m going with this. Even though we’ve had downturns in tech in the past 10-20 years, this feels different. Tens of thousands of people have been laid off over the past two years. Hiring still hasn’t come back, despite (or because of?) excitement over AI.

    We’re in a new era of tech rōnin. Where do we go from here? What happens to all of us? Do we all become mercenaries, hiring ourselves out to whoever can pay? Does power become even more consolidated among a select few, more unequal than before? Or is there some kind of rebalancing in the cards?

  • Survival Mode

    Wow, it’s been almost seven months since my first post. No excuses, but I hope to make the third one come a lot sooner.

    It’s been a weird time in the tech world the past 12 months or so. After the tech layoffs started in late 2022, things got really dark and scary. The environment doesn’t feel a whole lot different now, other than perhaps we’ve become slightly desensitized to it all with the passage of time.

    The job market still feels really tough; there’s still an extreme imbalance of the number of applicants to the number of job postings. If you’re lucky enough to have a job, you’ve got to be incredibly careful not to get cast overboard. It feels like a death sentence.

    Yes, there are a few hopeful signs out there. The AI boom is still in its early phases, but that feels like a wave that is only being enjoyed by the privileged few who are in a small set of companies. Signs are pointing to a soft landing for the economy. There have been some good earnings numbers recently, and we can only hope those result in more hiring down the line.