AI Isn't in the Workflow, But AI Is How I Built It
How to use AI to create hyper-specific automation you couldn't buy
I’ve been thinking about how people are using AI today. Some try ChatGPT, get mediocre results, and write it off. Others use it daily for drafting emails and brainstorming, and raise their productivity. And then there’s using AI as a step in a workflow, like my Analysis Dossier, which automates account research by pulling 10-Ks and earnings calls and mapping insights.
But I’ve been using AI in a different way. What I’ve been building over the past few months looks nothing like a conversation with a chatbot. I’m building automated workflows so specific to our business, so tailored to our exact processes and data, that no tool you could buy would solve them. And what might surprise you: in many of these workflows, AI isn’t even a step in the process.
AI is how I built the process.
The Gap Most People Don’t See
There’s a massive distance between “ask ChatGPT a question” and “AI that actually drives business impact.” Most people stop somewhere along that spectrum, and that’s fine; there’s value at every level. But what I haven’t seen as much is AI as an enabler of capabilities you didn’t have before.
I’m not talking about AI making you 20% faster at writing emails. I’m talking about AI teaching you how to automate, helping you build workflows that save hours every week, and unlocking solutions so specific to your business that buying them off the shelf isn’t even an option.
Sure, there’s a lot of buzz around ‘vibe coding’ apps. But for me, the goal isn’t a product launch. It’s driving operational excellence and finding where AI can help.
What’s Actually Missing
When AI feels underwhelming, it’s usually because one or more of these elements is missing:
Context. Garbage in, garbage out. If you ask an LLM to “analyze this data” without giving it the full picture of what the data means, what it actually contains, and what you’re trying to accomplish, you’ll get generic, even hallucinated output. And then there’s the operational context that AI doesn’t have. For example, “closed won” or “stage 4” means something specific to our business and our process that the LLM doesn’t know unless you tell it.
Workflow design. AI works best as a step in a process, not a standalone magic box. The question isn’t “what can AI do?” but “where does AI fit in the workflow, and what comes before and after it?”
Integration with your actual systems. A beautiful AI-generated analysis isn’t scalable, repeatable, or nearly as valuable if it sits in a chat window instead of living in your systems, populating your reports, or triggering the next action in your process.
Iteration and refinement. The first version of any AI workflow is rarely right. The value comes from running it, seeing what breaks, understanding why, and improving. This is just like building any operational process. It takes cycles.
And sometimes the answer to “where does AI fit?” is: it doesn’t. I recently tried using AI-powered search to match a list of accounts to our database. The results looked impressive, with high confidence scores and clean output, but the “high confidence” matches were full of errors.
“Metro Manufacturing” matched to “Metro Insurance” because the AI was matching on conceptual similarity, not the actual entities. I rebuilt it (with AI to guide me) using deterministic logic instead: field normalization, fuzzy string matching, and explicit business rules. The false positive rate dropped from 40% to under 5%. Knowing when AI fits and when it doesn’t is part of the design work.
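To make that concrete, here’s a rough sketch of what the deterministic approach can look like in Python. The suffix list, the normalization rules, and the 0.9 threshold are illustrative placeholders rather than the actual production logic, and I’m using the standard library’s difflib for the fuzzy comparison.

```python
from difflib import SequenceMatcher

LEGAL_SUFFIXES = (" inc", " llc", " ltd", " corp")  # illustrative suffixes to strip

def normalize(name: str) -> str:
    """Lowercase, drop punctuation, collapse whitespace, strip common legal suffixes."""
    name = " ".join(name.lower().replace(".", "").replace(",", "").split())
    for suffix in LEGAL_SUFFIXES:
        if name.endswith(suffix):
            name = name[: -len(suffix)].strip()
    return name

def match_account(external_name, internal_names, threshold=0.9):
    """Return the best internal match at or above the threshold, else None (manual review)."""
    ext = normalize(external_name)
    best_name, best_score = None, 0.0
    for candidate in internal_names:
        score = SequenceMatcher(None, ext, normalize(candidate)).ratio()
        if score > best_score:
            best_name, best_score = candidate, score
    return best_name if best_score >= threshold else None

# "Metro Manufacturing" no longer drifts to "Metro Insurance": the string similarity
# is far below the threshold, so anything that weak falls out for manual review.
print(match_account("Metro Manufacturing Inc.", ["Metro Insurance", "Metro Manufacturing"]))
```

The point is that every decision is explicit and inspectable: if a match is wrong, you can see exactly which rule let it through and tighten it.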
The Workflow Where AI Isn’t a Step
Let me share a recent project that illustrates what I mean by AI as an enabler rather than a tool.
We receive regular reports from an external database that we need to analyze and reconcile against our own system. If you’ve worked in Ops, you know this pain: external data in one format, internal data in another, and a manual process of matching, comparing, and updating that takes hours every week.
Someone had to manually download and review the reports, pinpoint the differences, flag the changes, and then manually update our systems. Even though we could use Excel workflows to automate some of this, it was still tedious, and the system changes still had to be made by hand.
The workflow I needed to build had to handle these specific steps:
Generate a report from our internal system that saves to a shared drive
Pull the external report from a separate shared drive
Standardize the fields and match on unique identifiers (ID, with Name as a fallback, for example)
Compare week-over-week to identify what’s been added or changed
Generate an Excel report with charts and analysis
Reconcile against what’s in our system
Create a separate sheet of items that need to be added or modified
Push updates to our systems automatically
There is no tool you can buy that does exactly this. You could buy a platform that might do some of it with massive customization, significant cost, and months of implementation. But this workflow is so specific to our data, our fields, our matching logic, and our processes that it only makes sense to build it ourselves.
And AI is not a step in this workflow. There’s no LLM analyzing the data or generating insights. It’s pure automation: Python scripts running in Azure, triggered by Logic Apps, outputting to Excel, and eventually feeding into our systems through a low-code automation tool.
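To give a sense of what that pure automation looks like, here’s a simplified skeleton of the reconciliation step in pandas. The file paths, column names, and the status comparison are placeholders for illustration; the real scripts are more involved and run on a schedule in Azure.

```python
import pandas as pd

# Hypothetical paths and column names, for illustration only.
internal = pd.read_excel("shared_drive/internal_report.xlsx")
external = pd.read_excel("shared_drive/external_report.xlsx")

# Standardize fields so the two sources can be compared like for like.
for df in (internal, external):
    df.columns = df.columns.str.strip().str.lower()
    df["name"] = df["name"].str.strip().str.lower()

# Match on the unique ID; the name-based fallback matching is omitted here for brevity.
merged = external.merge(internal, on="id", how="left",
                        suffixes=("_ext", "_int"), indicator=True)

# Split the results: new records to add, and existing records whose fields changed.
to_add = merged[merged["_merge"] == "left_only"]
to_modify = merged[(merged["_merge"] == "both")
                   & (merged["status_ext"] != merged["status_int"])]

# One sheet per action, so the reconciliation review lives in a single workbook.
with pd.ExcelWriter("shared_drive/reconciliation.xlsx") as writer:
    to_add.to_excel(writer, sheet_name="to_add", index=False)
    to_modify.to_excel(writer, sheet_name="to_modify", index=False)
```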
So where does AI come in? AI is how I built it.
AI as My Development Partner
Six months ago, I couldn’t write Python. Today, I have workflows running automated analysis that would have required a developer before. The difference is that I used AI as my coding tutor, my debugger, and my development partner.
It wasn’t magic. I still had to learn the logic and stay close to the details. When I needed to standardize fields, I didn’t just ask for code; I had to learn how to use the Pandas library to handle dataframes. When the script failed because of a ‘KeyError,’ I pasted the traceback into the chat, and the AI explained that my column headers had invisible trailing spaces, something I never would have caught on my own. When it failed on other edge cases, I pasted the error message and the AI explained what went wrong and how to fix it. When I needed to create charts in Excel programmatically, it walked me through the libraries and syntax.
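Here’s a minimal reproduction of that KeyError, with an illustrative column name, and the one-line fix the traceback led us to:

```python
import pandas as pd

# Illustrative data: the real header had an invisible trailing space.
df = pd.DataFrame({"Account ID ": ["A-100"], "Status": ["Active"]})

# df["Account ID"] raises KeyError: 'Account ID', because the stored
# header is actually "Account ID " with a trailing space.

# The fix: strip whitespace from every header before anything else touches the data.
df.columns = df.columns.str.strip()
print(df["Account ID"])  # now resolves correctly
```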
This is what I mean by AI unlocking capabilities. The workflow itself is deterministic automation, with no AI involved in the execution. But AI made it possible for someone with my background and little coding experience to build it, and to build it in days, not weeks or months.
What Made This Work
Getting this reconciliation workflow running required more than just coding help. The hardest part was actually the discovery and alignment work:
Process alignment across teams. What exactly are we trying to see in this analysis? What counts as a “change” that matters? Who owns the reconciliation decisions? These conversations took longer than the technical build.
Understanding the edge cases. What happens when the external data has missing fields? What if an account appears under a slightly different name? The process couldn’t be fully automated until we mapped these scenarios, so we included a “manual review” process.
Defining the rules. When do we add a new record versus modify an existing one? What’s the threshold for flagging something for human review? These business rules had to be explicit before I could encode them.
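Once those rules were agreed on, encoding them was the easy part. A simplified sketch of what that encoding can look like is below; the threshold, the field list, and the decision logic are illustrative, not our actual rules.

```python
# Illustrative business rules, written as plain constants so they are easy to change.
REVIEW_THRESHOLD = 0.90              # name matches below this never auto-update anything
MODIFY_FIELDS = ("status", "owner")  # the only fields the automation may overwrite

def decide_action(has_id_match: bool, name_score: float) -> str:
    """Return 'modify', 'review', or 'add' for one external record."""
    if has_id_match:
        return "modify"   # known record: update only the allowed fields
    if name_score >= REVIEW_THRESHOLD:
        return "review"   # likely the same account under a different name: a human decides
    return "add"          # no plausible match anywhere: treat it as a new record
```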
This is why these solutions are so specific. It’s not just that the data is unique, it’s that the decisions are unique to how your organization and processes operate.
Even before full automation was complete, the process of building this workflow created immediate value. Just by mapping out the data flow and logic, we discovered gaps: information that should have been in our system but wasn’t.
The visibility alone was worth it, saving hours of manual work while catching things we might have missed before.
From Operator to Builder
If you’ve tried AI and felt underwhelmed, that’s valid. But it’s usually a setup problem, not an AI problem.
The chatbot experience of asking questions and getting responses is the most visible use case but often the least impactful. The leverage comes from:
Building AI into workflows with proper context
Using AI to develop custom solutions for your specific challenges
Combining AI capabilities with your operational expertise
The gap between “tried ChatGPT” and “built an automated workflow that saves hours every week” is significant. But it’s surmountable, especially for people who already think in systems and processes.
If you’re in Ops and feeling skeptical about AI, I’d encourage you to reframe the question. Instead of “what can AI do for me?” try “what could I build if AI helped me learn?”
Start with a process that’s painful and manual. Map the steps, identify where the logic is clear and repeatable, then ask AI to help you automate it piece by piece. That is where massive value is hiding: not in the chatbot, but in what you can build with AI’s help.

