Postmodern software
and other Notes-app musings to a hypothetical co-founder
In the second iteration of “if I had a co-founder,” here are some late-night texts to a hypothetical co-founder who currently may or may not live in my Notes app.
Community > Product Learnings
I have been testing Aura Check’s core thesis the old-fashioned way: by convincing Brooklynites to do breathing exercises. I started a set of community meetups called the Parasympathetic Club, which has become my primary channel for real-world feedback. The gatherings hit a few birds with one stone: brand building, product iteration, and unfiltered customer discovery.
There have been three events so far, each with a slightly different feeling but the same outcome: women talking about crash-outs related to work, bosses, and attachment-challenged partners. The turnout has been strong.
Each cohort tests a different version of Aura Check, and attendees tend to fall into the same archetype: Gen Z and millennial Brooklynites clustered around Bed-Stuy, Clinton Hill, and Fort Greene. That consistency effectively holds the audience constant across versions, which is useful because New York neighborhoods are surprisingly reliable consumer-taste filters, for better or worse.
I have been taking that feedback and spending time heads down implementing it (part of why I have been quieter on socials). I feel confident I can crack distribution, but product improvements need to land before I can put more energy toward that again.
Conversational UX
Across my events one clear trend stands out: personality really matters. This mirrors a larger industry focus. In chat-based products, personality and tone effectively are the user experience. Many AI teams now refer to this as “conversational UX.” OpenAI recently reorganized its Model Behavior group to focus more directly on this, a move that signals how central personality has become to foundation model development.
In general, personality is a dial, not a switch. Engagement tends to drop at both ends of the distribution, with either too much or too little personality. I can clearly see these patterns in my own data. Aura Check needs to feel culturally fluent, witty, and emotionally safe, and that matters even more for something that sits inside people’s intimate crash-out moments. The voice is close to where I want it.
Other products are running into this tension too. Take Poke by The Interaction Company. It is a wildly viral AI assistant you can text over iMessage. It is brilliant and genuinely useful, and its personality is often cited as its differentiator beyond utility, but it also feels very exaggerated in a way that makes me wonder about long-term retention. It roasts your inbox and shopping habits, which is funny at first, but after a few rounds I catch myself thinking “okay, I get it.” Still, it is one of the first consumer AI tools outside of ChatGPT that people genuinely seem to love.
I am optimizing for personality at the top of the funnel, especially during onboarding, and then gradually dialing it down as the interaction deepens so it does not detract from usefulness. Ideally, this strikes a balance between screenshot-worthy shareability and long-term retention.
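If I were texting this to a co-founder, here is roughly what that dial could look like in code. Everything in this sketch is a placeholder, not anything actually shipping in Aura Check: the thresholds, the tone snippets, and the function names are just there to make the idea concrete.

```python
# A rough sketch of the "personality dial": pick a tone profile based on how deep
# the user is into the relationship. Thresholds, profiles, and names are all
# placeholders, not anything actually shipping in Aura Check.

TONE_PROFILES = {
    "high": "Be playful, culturally fluent, and a little cheeky. Lean into humor.",
    "medium": "Keep the warm, witty voice, but prioritize clarity and usefulness.",
    "low": "Stay warm and emotionally safe. Minimal jokes; focus on what the user needs.",
}


def personality_level(session_count: int, messages_this_session: int) -> str:
    """Dial personality down as the relationship deepens."""
    if session_count <= 2:            # onboarding: maximize the screenshot-worthy voice
        return "high"
    if messages_this_session <= 5:    # early in a check-in: keep some sparkle
        return "medium"
    return "low"                      # deep in a check-in: get out of the way


def build_system_prompt(base_prompt: str, session_count: int, messages_this_session: int) -> str:
    """Append this turn's tone guidance to the base system prompt."""
    tone = TONE_PROFILES[personality_level(session_count, messages_this_session)]
    return f"{base_prompt}\n\nTone guidance: {tone}"
```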
From Voice to System
I recently did a demo of my MVP at Flora. In the very vibey lighting of the Domino Sugar Factory, I talked through my development process and how it has mostly lived in what you could call the system prompting era: building around one foundation model and treating the prompt as material. Careful system prompting is how I’ve developed the Aura Check personality so far.
In an earlier Substack, I compared this phase of AI building to Modernist sculptural practices, where artists like Brancusi and Noguchi worked with their materials rather than trying to dominate them. Their job was to bring out the natural personality of the stone. The craft was about listening first, then intervening in minimal, intentional ways. System prompting feels similar. You are in a direct dialogue with the model, shaping raw material by adjusting tone, rhythm, and constraint.
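To make “prompt as material” a little more concrete, here is an entirely made-up example of the kind of adjustments I mean: tone, rhythm, and constraint, nothing more. This is not the actual Aura Check prompt.

```python
# An illustrative, made-up example of shaping a prompt around tone, rhythm,
# and constraint. Not the actual Aura Check system prompt.

SYSTEM_PROMPT = """\
You are Aura Check, a companion for emotional check-ins.

Tone: culturally fluent, lightly witty, never clinical.
Rhythm: short messages, one question at a time, mirror the user's energy.
Constraints:
- Never diagnose or give medical advice.
- If someone is in crisis, drop the wit and point them toward real help.
- Offer a breathing exercise only when asked or when the user seems open to it.
"""
```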
Now that I feel good about the Aura Check personality, I am moving into another core lever of conversational UX: memory. It is one of the most active technical frontiers right now, and it can get very complex very quickly. Similar to how I approached system prompting at the beginning, I am trying to keep this layer simple while still meaningfully improving the experience. At a high level, the challenge is to surface the right insight or user fact at the right moment, in a way that feels natural to the relationship. No one has fully cracked this yet. Teams are experimenting with different architectures, but there is consensus that memory is the backbone of agentic systems because it allows agents to retain and use information over time.
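To show how simple I mean, here is a toy version of that layer: store short user facts, then surface the few that best match the current message. Naive word overlap stands in for real retrieval here; this is an illustration of the shape of the thing, not the actual implementation.

```python
# A deliberately simple sketch of a memory layer: store short user facts, then
# surface the few that best match the current message. Illustrative only; a real
# system would likely use embeddings or a smarter retrieval step.

from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    facts: list[str] = field(default_factory=list)

    def remember(self, fact: str) -> None:
        self.facts.append(fact)

    def recall(self, message: str, k: int = 3) -> list[str]:
        """Return up to k stored facts that share words with the incoming message."""
        words = set(message.lower().split())

        def overlap(fact: str) -> int:
            return len(words & set(fact.lower().split()))

        ranked = sorted(self.facts, key=overlap, reverse=True)
        return [fact for fact in ranked[:k] if overlap(fact) > 0]


memory = MemoryStore()
memory.remember("Boss keeps scheduling 8am meetings.")
memory.remember("Mentioned an avoidant partner last week.")

print(memory.recall("another 8am meeting just landed on my calendar"))
# ['Boss keeps scheduling 8am meetings.']
```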
Agentic architecture is now the standard, and we are clearly past the system prompting era. As I begin experimenting with memory and basic agentic scaffolding, the work feels closer to Postmodernism, where value moves from the object to the system. If Modernism was about collaborating with the material, agentic architecture resembles Postmodern practices, where meaning comes from structure, instruction, and the choreography between components. In this case, the components are agents, memory layers, and tools, and the orchestration between them becomes the creative act.
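If the Modernist version of the work was a single carefully shaped prompt, the Postmodern version looks more like a thin loop wiring components together. Here is a toy sketch of that choreography; every component is a stub, and the point is the routing rather than any one piece.

```python
# A toy sketch of orchestration: a thin loop that decides, per turn, whether
# memory, a tool, or the model handles the message. Every component is a stub;
# the wiring between them, not any one piece, is where the design lives.

def recall_memory(message: str) -> list[str]:
    return []  # stub: would return relevant user facts


def breathing_tool() -> str:
    return "Queued a guided 4-7-8 breathing exercise."  # stub tool


def call_model(message: str, context: list[str]) -> str:
    return f"(model reply, grounded in {len(context)} memories)"  # stub model call


def orchestrate(message: str) -> str:
    """Route one turn through tools, memory, and the model."""
    if any(word in message.lower() for word in ("breathe", "panic", "spiraling")):
        return breathing_tool()
    context = recall_memory(message)
    return call_model(message, context)
```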
In art, Postmodernism shifted meaning away from the cult of the art object to the systems that generated it. As AI systems mature, value is moving in a similar direction. Namely, value is moving upstream. Less about the app itself, and more about the systems (or tools) that create it, and the relationships between agents, data, and people.
If I had a co-founder, this is probably the conversation we would be having in a coffee shop in Fort Greene. Instead, I am having it here, with you.