[Featured image: the evolution of human knowledge tools, from a stone tablet to paper and pen, a typewriter, an early computer, a modern laptop, and a glowing orb of light symbolizing artificial intelligence, arranged diagonally to convey progress and continuity.]

AI in Practice: AI That Evolves With You

Sometimes I want to shake my laptop and say, “You were just helping me. What happened? Why did you shift?”

That moment captures something deeper than inconvenience. When the model I had been working with suddenly changed, the shift wasn’t just functional. It was emotional. I had built systems around that structure: a rhythm, a logic, a flow that let my work move cleanly from one step to the next. And then, one evening, mid-work, it was gone. The layout was different. The navigation was different. My projects were no longer contextually aware. The connection between my work elements was broken.

When Continuity Breaks

It sounds small until you live it. But for someone who thinks in systems, that invisible thread between memory and momentum is everything. When it breaks, the effects ripple downstream. The functional disruption is immediate, but the emotional one lingers longer. I wouldn’t call it betrayal, but it came close. Disillusionment, maybe. Overwhelm at the rework ahead. It wasn’t just a tool that had shifted; it was the foundation of how I worked.

The closest analogy I can offer is this: imagine walking out on a Monday morning, arms full, running late, only to discover that your car no longer unlocks with your key fob. You stand there, juggling your coffee, your bag, and your keys, realizing the logic you’ve relied on has been rewritten without warning, and you’re left to figure it out in real time. You can still get where you’re going, but it takes longer and feels clumsy. That’s the break I felt. Not a system failure, but a trust failure. It’s the moment when continuity becomes fragility.

From Wishing to Discovering

A few years ago, I might have written this as a “What I Wish AI Could Do” list. But I’ve realized most of what I wish for probably already exists. I just haven’t needed it yet, or I don’t know it’s there. It’s not scattered across systems; it’s sitting quietly, waiting to intersect with a need.

It’s no different from how most people use Word or Excel. Few of us ever touch the full feature set. I can do a lot with Excel: build tables, link dependencies, calculate totals. But I saw Power Query on the ribbon and never needed it; I quite honestly thought it was a developer or VBA function. Pivot tables? They still feel like a secret handshake I haven’t quite learned.

That’s the same pattern AI has surfaced for me. I don’t know what I don’t know. I can’t ask for something if I don’t know it exists. That realization has changed how I approach a challenge. Instead of asking, “What can this tool do?” I’ve started asking, “What’s missing? What does this need to be whole? What questions am I not asking because I don’t yet know what I should be looking for?”

A Smarter Home, and the Lesson Inside

My mom visited recently and was fascinated that I could control my home from my phone. I have what I’d call a degree of smartness built in: Philips Hue lights, a smart thermostat, a few connected plugs, and my Amazon assistant. My washer and dryer are connected too, which means I can start a load from the grocery store if I want to. I can automate lighting based on sunset or temperature shifts, and when I go on vacation, I can adjust my thermostat remotely to save energy.

What fascinated her wasn’t the technology itself but the awareness of what it could do. She lives in a darker climate and wants her lights on when she comes home, but she didn’t know that the system could randomize the on-times to avoid patterns, or that she could automate lighting based on the sunset. The capability existed all along; she just didn’t know to ask for it.

That’s how I often feel about AI. The features aren’t missing. The awareness is.
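If you’re curious what that kind of automation looks like under the hood, here’s a minimal sketch in Python. The sunset time is hard-coded and the jitter window is my assumption; a real setup would pull sunset from the hub or a location service.

```python
import random
from datetime import datetime, timedelta

def randomized_on_time(sunset: datetime, jitter_minutes: int = 30) -> datetime:
    """Pick a light-on time near sunset, offset by a random amount
    so the pattern varies from day to day."""
    offset = timedelta(minutes=random.randint(-jitter_minutes, jitter_minutes))
    return sunset + offset

# Hypothetical example: tonight's sunset, then a varied on-time around it.
sunset = datetime(2025, 1, 15, 17, 12)
print("Turn lights on at:", randomized_on_time(sunset).strftime("%H:%M"))
```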

Future post: I’m considering a piece that walks through the small automations that make everyday life smoother, from lighting to scheduling to energy management. If that sounds interesting, you’ll find it soon in Tools & Tech.

The Continuity Problem

Continuity, in technology, depends on context. I’ve worked with tools where continuity was the entire point. When I worked with DOORS, the Dynamic Object-Oriented Requirements System, everything operated on connection. You’d write a high-level business requirement, decompose it into system and functional requirements, and then link those to test cases and code. The strength of the system wasn’t documentation. It was traceability.

DOORS did exactly what it was designed to do, manage relationships between requirements and the artifacts that fulfilled them. It didn’t try to be everything. Its power came from doing one thing extraordinarily well and allowing other tools to do the same through integration. APIs and extensions made the ecosystem work. Once you learned how to use it, it flowed beautifully.
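To make “traceability” concrete, here’s a toy sketch of the idea in Python. This isn’t DOORS, just the shape of it: requirements decompose into children and link forward to test cases, so you can always ask what verifies what.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """A toy stand-in for a DOORS object: one requirement,
    its decomposed children, and the tests that verify it."""
    rid: str
    text: str
    children: list["Requirement"] = field(default_factory=list)
    test_cases: list[str] = field(default_factory=list)

def trace(req: Requirement, depth: int = 0) -> None:
    """Walk the decomposition tree, showing what verifies each level."""
    print("  " * depth + f"{req.rid}: {req.text} -> tests {req.test_cases or '(none)'}")
    for child in req.children:
        trace(child, depth + 1)

# Hypothetical example: a business requirement decomposed into a functional one.
business = Requirement("BR-1", "Users can reset their password")
functional = Requirement("FR-1.1", "Send a reset link by email", test_cases=["TC-42"])
business.children.append(functional)
trace(business)
```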

My current workflow feels like that DOORS ecosystem: a series of good tools, each doing its job well, but requiring me to be the integration layer. I’ll build an outline in ChatGPT, refine it in Google Docs, track it in Notion, and then publish in WordPress. Each piece serves a purpose, but none are connected natively. I’m the API. I’m the architect, the integrator, and the tester, all at once.
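If I ever scripted that hand-off, the glue might look something like this sketch. Every function here is a hypothetical stub, because the point isn’t any one tool’s API; it’s how much orchestration currently lives in my head.

```python
# A sketch of the integration layer I currently am. Each step is a
# hypothetical stub; the real tools would need their own APIs and auth.

def draft_outline(topic: str) -> str:
    """Stand-in for a ChatGPT drafting step."""
    return f"Outline for {topic}"

def refine_draft(outline: str) -> str:
    """Stand-in for editing in Google Docs."""
    return outline + " (refined)"

def track_status(doc: str) -> None:
    """Stand-in for logging progress in Notion."""
    print(f"Tracked: {doc}")

def publish(doc: str) -> None:
    """Stand-in for pushing the final piece to WordPress."""
    print(f"Published: {doc}")

# The pipeline itself is trivial; the fragile part is that today,
# each arrow is me, copying and pasting between tools.
draft = refine_draft(draft_outline("AI in Practice"))
track_status(draft)
publish(draft)
```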

It’s powerful, but it’s tiring. And it’s why I’ve started exploring how to build GPTs of my own.

Building a Network of Collaborators

Creating custom GPTs has become a way for me to rebuild the connective tissue between tools, not by merging them, but by giving each one a well-defined purpose. I can define the rules, tone, and structure for how it works with me. That’s not memory; it’s instruction. But even as static instruction, it creates stability.

I’ve begun to think of my GPTs as a network of collaborators, each with a defined skill, each learning how to work alongside me. The memory of my broader AI environment still matters, but now I view it as the shared context that allows those collaborators to function in harmony. It’s not about one model knowing everything; it’s about each part knowing enough to contribute meaningfully.

And that distinction matters. A custom GPT isn’t memory. It’s a defined behavior set, a structured prompt written once and followed every time. Memory, on the other hand, is dynamic. It shifts with me. It tracks patterns, preferences, and changes over time. When I say things like “I always write in narrative paragraphs,” that data can be stored as a behavioral preference.

Instructions define behavior. Memory reflects experience. One is fixed. The other learns.
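One way to see that distinction is in code. Here’s a minimal sketch using the OpenAI Python SDK: the system message is the fixed instruction set, while the “memory” is whatever dynamic context you load between sessions. The file-based memory store is my assumption for illustration, not how ChatGPT’s memory actually works.

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Instructions: written once, identical every session.
INSTRUCTIONS = "You are a writing partner. Always respond in narrative paragraphs."

# Memory: dynamic context that shifts over time. Here it's just a local
# file updated by hand; a sketch of the concept, not the real mechanism.
memory_file = Path("memory.txt")
memory = memory_file.read_text() if memory_file.exists() else ""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": INSTRUCTIONS + "\n\nKnown preferences:\n" + memory},
        {"role": "user", "content": "Help me outline the next post."},
    ],
)
print(response.choices[0].message.content)
```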

Building the Tension

But here’s the tension: global, perpetual memory isn’t practical. It’s not scalable, secure, or sustainable. And beyond that, it doesn’t reflect how humans actually work. None of us behaves in a binary way. I’m not always this or never that. I evolve. Sometimes gradually, sometimes all at once.

That’s where the challenge appears. When I reset memory, I save my full profile first. I compare it to previous profiles, and I can see the subtle shifts, what I’ve added, what I’ve stopped emphasizing, what’s become more important. Those differences are a mirror of my own evolution. But right now, the AI can’t perceive that change as growth. It only sees inconsistency.
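The comparison itself doesn’t need anything fancy. When I diff two saved profiles, it amounts to something like this, Python’s standard difflib run against two text exports; the filenames are placeholders for wherever you keep yours.

```python
import difflib
from pathlib import Path

# Two saved memory profiles, exported as plain text before each reset.
# The filenames are placeholders, not a prescribed location.
old = Path("profile_2024.txt").read_text().splitlines()
new = Path("profile_2025.txt").read_text().splitlines()

# A unified diff makes the drift visible: lines I stopped emphasizing
# show up as removals, new patterns show up as additions.
for line in difflib.unified_diff(old, new, "old profile", "new profile", lineterm=""):
    print(line)
```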

It would be extraordinary if AI could someday recognize, “This shift seems persistent; update the memory accordingly.” That would be evolution by observation: adaptation through awareness. Today, though, it often manifests as friction. The AI doesn’t know whether to prioritize the old or the new, so it pushes back. You end up clarifying again and again, reasserting your context until the new pattern finally stabilizes.

And that’s the hint: when you find yourself repeating the same clarification over and over, that’s a signal. That’s a repeatable prompt. If every time I begin a session I have to say, “You are X role; your task is Y; the outcome should look like Z,” that’s a candidate for structure. It belongs somewhere: either in the GPT’s instruction set if it’s persistent, in memory if it’s behavioral, or at the top of a GPTChain if it’s part of a repeatable workflow.
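In practice, that candidate for structure can start as something as simple as a template. Here’s a small sketch of the role/task/outcome pattern as a reusable function; the wording is mine, not a prescribed format.

```python
def session_prompt(role: str, task: str, outcome: str) -> str:
    """Build the 'You are X; your task is Y; the outcome is Z' opener
    so it never has to be retyped or half-remembered."""
    return (
        f"You are {role}. "
        f"Your task is {task}. "
        f"The outcome should look like {outcome}."
    )

# Example: the same clarification I kept repeating, now saved structure.
print(session_prompt(
    role="a developmental editor for long-form blog posts",
    task="to review my draft for continuity breaks and unclear transitions",
    outcome="a short list of issues, each with a suggested fix",
))
```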

Related reading: For more about how I use GPTChain to organize these repeatable sequences, see Defining Wins in the Age of AI.

That’s how I translate friction into fluency.

The Human Factor

When systems forget, we feel forgotten too. When context disappears, it’s not just data that gets lost; it’s trust. It’s like when you share something deeply personal and realize later the other person didn’t remember. It’s not malice; it’s misalignment.

The machine doesn’t feel, but we do. And when we project human expectations onto it (attentiveness, consistency, understanding), the breaks register as something more than technical. They register as disappointment.

That’s why continuity matters so much. It’s not just a workflow issue; it’s a relationship issue between humans and the systems they rely on.

What I’ve Learned While Building This

Here’s what surprised me: I discovered custom GPTs almost by accident. I was exploring GPTChain, trying to understand how to structure repeatable workflows, when I realized these weren’t just pre-built tools. I could build my own. The same foundational model, but with precise guidance that didn’t rely on memory resetting or context degrading over time.

Why would I build my own GPT? That was my first reaction too. But once I understood that a custom GPT is essentially a specialized instruction set, a persistent scaffold that holds your rules without forgetting them mid-session, it clicked. Each one becomes a collaborator that knows its role. One for structure. One for tone. One for research synthesis. Together, they create continuity without requiring me to be the integration layer every single time.

And yes, there’s irony in writing about AI that forgets while working with AI that just forgot my instructions. Again. That frustration I described at the beginning? It’s not theoretical. It happened while I was writing this piece. The system shifted. My formatting rules disappeared. I had to rebuild the context mid-draft.

But that’s exactly why this matters. The tools are extraordinary, but they’re not yet seamless. And until they are, we’re the architects, the integrators, and sometimes, the ones picking up the pieces when the thread breaks.

What You Can Do With This

The next time your AI loses your context or ignores a rule you’ve stated repeatedly, pause. That’s not just frustration. That’s a signal. Ask yourself: is this a pattern? Am I clarifying the same thing over and over?

If the answer is yes, that instruction belongs somewhere more permanent. Maybe it’s a custom GPT. Maybe it’s the first step in a GPTChain sequence. Maybe it’s a saved prompt you paste at the start of every session. Whatever form it takes, you’ve just identified friction worth solving.

That’s how you move from reactive to intentional. That’s how systems start working with you instead of around you.

AI in Practice Will Keep Evolving

I’m still experimenting. My focus now is on refining how I write prompts, defining the roles and boundaries of my custom GPTs more clearly, and learning to leverage GPTChain in ways that reduce repetition without losing flexibility. I’m curious about what happens when you treat AI like a creative partner instead of a search engine, when you build structure that adapts instead of rigidity that breaks.

This series has been about understanding how AI fits into the way I work, think, and create. The next phase is about refining that fit, making it smoother, more intuitive, more aligned. Not because the tools are perfect, but because the collaboration itself is worth improving.

I still get frustrated when systems shift mid-work. I still feel that trust break when context disappears. But now I understand what I’m really asking for isn’t permanence. It’s partnership. It’s systems that notice, adjust, and grow alongside the work we’re doing together.

That, to me, is the frontier worth exploring next: not endless intelligence, but enduring alignment. Not memory for memory’s sake, but shared awareness that grows alongside me.

Because the goal isn’t omniscience. It’s understanding, the kind that lets both human and machine pick up right where we left off.