January 2026
ME 2.0, essentially working?, manufacturing cool, IRL FR, and power density
In May, I found myself wanting a place to stick my thoughts. I wasn’t particularly interested in the public forums of Twitter/X or LinkedIn, and while I do a lot of long-form writing, not every idea is ready for that format.
I decided to create “chewing on,” a running blog you can read here, which I update periodically with reflections, observations, and opinions. (It is much, much less polished than the below.)
Throughout this year, I pulled ideas from "chewing on," along with a few others, and shaped them into the themes below.
These are the areas where I’ll be spending time in 2026:
ME 2.0
Where once our memories lived in scattered notes, photo rolls, and half-recalled ChatGPT threads, personal intelligence systems now promise something more ambitious: continuity. Not just storage, but understanding: context layered over time, what I've read, chosen, forgotten, or avoided, forming a living map of preference and experience.
As models move from reactive assistants to ambient companions, the burden of articulation begins to fall away. No more precise prompting. No more explaining what I need, again and again. Instead, systems already know, anticipating intent through accumulated history, emotional signals, and behavioral patterns. Memory becomes less archival and more interpretive.
But this raises a deeper question about the authorship of the self. If an external system remembers more faithfully than I do, tracking motivations, inconsistencies, and growth, where does "my" memory end and delegated cognition begin? Does ME 2.0 sharpen identity by reflecting it back to us, or subtly rewrite it by deciding what is worth remembering at all?

essentially working?
As AI absorbs more task-level knowledge work, productivity increasingly favors the curious over the credentialed. In this new frame, essential work looks less like execution and more like orchestration. The most valuable operators will sit above swarms of agentic workflows, designing, delegating, and supervising chains of autonomous systems spanning research, operations, and decision-making. Work becomes the management of intent, not the completion of tasks.
As agentic tools continue to proliferate across verticals, a second-order layer inevitably emerges: systems to coordinate the systems. Platforms that aggregate, govern, and scale agents, deciding which models act, when they act, and how their outputs compound. Control shifts from individual tools to operating layers.
But what becomes of the balance of labor & leverage? When “doing the work” means directing intelligence rather than supplying it, who remains essential, and by what measure? Does productivity accrue to those who command the agents, or to those who design the rules by which they operate?
manufacturing cool
Can cool be manufactured, or only discovered? For decades, products earned cultural relevance through proximity: who used them, where they appeared, and how slowly they spread. Today, distribution itself has become a creative act. Narrative, placement, and algorithmic amplification now shape what enters the zeitgeist just as much as the product does.
As audiences fragment and attention becomes programmable, “cool” is becoming manufacturable. Startups no longer rely on organic adoption alone; they can refine distribution with the same rigor once reserved for product design: aesthetic coherence, influencer adjacency, and cultural timing are levers.
This reframes GTM as a form of cultural production, shaping consumers' cultural diets. Rather than building for users and hoping for resonance, companies can script relevance, testing and iterating on taste at scale. Cool becomes less about authenticity in the abstract and more about believability within a specific cultural moment.

IRL FR
There's a renewed seriousness to the connections, communities, and conversations happening in real life. After years of over-indexing on digital interactions, the marginal utility of online communities is flattening. What's emerging is demand for IRL FR (in real life, for real).
The long tail of Covid-era isolation still lingers. We know how to connect digitally, but we're less practiced at reentering community. People want to show up, but there's a lack of infrastructure to make it easy. This creates space for platforms that don't compete with IRL interaction but enable it: tools that aggregate intent, reduce social friction, and make IRL legible and repeatable. The next layer of platforms will scaffold IRL connections. They may sit at the coordination layer, owning discovery, scheduling, and identity across offline experiences, or at the brand layer, where IRL presence compounds loyalty and LTV.
power density
As generative AI scales, intelligence is no longer abstract; it is physical. Models, like humans, demand electricity, water, cooling, and land. What once felt like an infinite expansion of software is increasingly constrained by grids, substations, and energy contracts.
This has shifted the advantage away from algorithms alone and toward infrastructure fluency. The race to build larger models quietly becomes a race to secure power, driving new data center clusters, stressing local grids, and reshaping how and where intelligence is produced. In response, the stack has begun to adapt. Purpose-built accelerators promise more intelligence per watt. Workloads move across time zones to follow renewable supply. Other projects, like behind-the-meter capacity, private microgrids, and eventually small-scale nuclear, could reshape what "cloud" even means.
But how do these tradeoffs resolve? Do we optimize for efficiency, locality, or control? Does access to energy become the true limiter of who gets to build, deploy, and scale the next generation of AI?

___
If you are a) building in any of the areas or b) just want to talk about any of these themes, drop me a line at maria[at]redbud[dot]vc