How to Actually Get AI Working at Work
Lessons from building 9 tools in 11 months.
I’ve been writing a lot about agents lately. About building and nurturing Atlas through its collapse (the cost of learning to build through vibe coding) and needing a team of other agents to recover. Those posts are fun to write, and what I learn from my experiments does translate to work, but today I want to write about how I got here. Not the exact origin story, but the actual progression: from figuring out what AI could do beyond chatbots, to building agent coworkers that run alongside my team.
When people ask me how to get started with AI at work, I don’t tell them to go build an autonomous agent. I tell them to go find a problem. That’s what I explored in three phases, each one teaching me something I needed for the next.
Last May I barely knew what Python was. I’m a marketing operations leader with 16+ years in the field. My background is building marketing infrastructure, managing tools, data flows, and automation, but the kind you build in Marketo and Zapier, not the writing code kind. I had opened a terminal a handful of times (probably by accident). And I definitely had no plans to build anything that could be described as “an AI agent.”
What I had was a series of pain points. And each time I solved one, I walked away with a new principle that shaped how I approached the next.
Phase 1: Build the foundation. Learn where AI actually earns its place.
Leadership asked us to find ways to leverage AI to make things more efficient. Fair enough, every company was saying some version of that last year. But instead of starting with the technology, we needed to start with solving an existing pain point: the time it took sellers to prepare for account conversations. Researching a single company (its financials, strategic priorities, competitive landscape, tech stack, org chart, and our own engagement history) could take days, sometimes weeks. And every seller did it differently, with different depth, pulling from different sources.
Most of this information is public: SEC filings, earnings call transcripts, annual reports, news. And we had internal data that could connect to the external research. We could use AI to read across all of it and surface the analysis we were assembling by hand.
That became the Analysis Dossier. The workflow ingests a company’s 10-K filing, earnings call transcripts, annual reports, and CRM data. AI reads across all of it and pulls out strategic priorities, challenges, and more. Then it maps those against our positioning and generates talking points, discovery questions, and even a one-page executive slide. All delivered to the seller’s email.
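To make the shape of the workflow concrete, here's a hypothetical sketch of it in Python. The real tool runs in Zapier; every function and company name below is a stand-in, and the point is only the structure: deterministic plumbing on both sides of a single synthesis step.

```python
# Hypothetical sketch of the dossier pipeline's shape. The real tool runs in
# Zapier; these functions are stand-ins for the actual source connectors.

def gather_sources(company):
    """Pull raw text from filings, transcripts, and CRM (stubbed here)."""
    return {
        "10-K": f"{company} 10-K text",
        "earnings_call": f"{company} transcript",
        "crm": f"{company} engagement history",
    }

def build_dossier(company, synthesize):
    """Deterministic orchestration; only `synthesize` needs an AI model."""
    sources = gather_sources(company)
    analysis = synthesize(sources)  # in the real tool, an LLM reads across sources
    return {"company": company, "sources": list(sources), **analysis}

# Stand-in synthesizer so the sketch runs without a model.
dossier = build_dossier("Acme", lambda s: {"priorities": ["modernize IT"]})
```

Everything except the `synthesize` step is ordinary automation, which is exactly why it was buildable in Zapier plus a little Python.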
I built this in Zapier and used Python along the way to extract data, format reports, and connect systems Zapier couldn’t reach. Through building the Analysis Dossier, I learned that AI is incredible at synthesis: reading across sources, extracting meaning, identifying patterns, and generating contextual recommendations. But it’s not good at everything.
Early on I tried using AI for data matching between systems and the results were worse than a simple fuzzy match in Excel. I also built a pipeline reconciliation tool in pure Python, no AI, just code that automated a manual data process that had been taking hours every week. I used AI to help me write the Python, but the solution itself didn’t need intelligence. It needed logic.
That split became a principle I follow: deterministic work belongs in deterministic workflows; save the AI for work that actually requires intelligence, because synthesis is where it adds the most value.
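As a small illustration of the deterministic side, here's a minimal fuzzy match using Python's standard-library difflib. The account names are hypothetical, but in my experience this kind of plain code beat the AI at record matching:

```python
from difflib import SequenceMatcher

def best_match(name, candidates, threshold=0.8):
    """Return the closest candidate name, or None if nothing clears the bar."""
    score, match = max(
        (SequenceMatcher(None, name.lower(), c.lower()).ratio(), c)
        for c in candidates
    )
    return match if score >= threshold else None

# Hypothetical account names for illustration.
crm_accounts = ["Acme Corporation", "Globex Inc", "Initech LLC"]
best_match("Globex, Inc.", crm_accounts)  # matches "Globex Inc"
```

No model, no prompt, no hallucinated matches: just a similarity score and a threshold you can tune and audit.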
The Analysis Dossier has generated over 2,000 reports. But more importantly, it taught me how to build AI infrastructure that people actually use. Reports land in the seller’s email and on SharePoint. Nobody logs into a new platform. Nobody needs heavy training. The intelligence arrives in systems people already work in, in formats they already understand, meeting teams where they already are.
Phase 2: Get people to use it. Enable teams to adopt what you build.
Once the dossier was working, the intelligence it extracted became the foundation for more tools. The dossier fed into value selling playbooks with persona-specific scripts, discovery questions, and objection handling for our inside sales team. Those fed into account intelligence briefs that became a repeatable engine for campaigns. Each tool built on data the previous one had already extracted and verified, and each solution revealed the next problem worth solving.
Meanwhile, a completely different challenge came up. Our demand gen team needed display ads and the process was hard to scale. Every campaign required banners in different sizes, aligned to brand standards, with personalized copy for different verticals and accounts. Designers were backed up. Campaigns were waiting.
So I built another tool: the banner generator. A form where you input requirements around sizes, campaign context, and enter your copy (or let AI suggest it), and hit submit. The workflow generates all the banner variations programmatically, saves them to SharePoint, and emails you the link. Two modes: freeform where you control the exact copy, and AI mode where it suggests messaging based on campaign context.
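The core of the generator is simple combinatorics: one brief fans out into every size-by-vertical variation. A hypothetical sketch, with made-up sizes, verticals, and campaign names standing in for the real form inputs:

```python
from itertools import product

# Hypothetical standard display sizes and verticals; the real tool
# reads these from the submission form.
SIZES = [(300, 250), (728, 90), (160, 600)]
VERTICALS = ["Healthcare", "Retail", "Manufacturing"]

def banner_jobs(campaign, headline_template):
    """Expand one campaign brief into every size x vertical banner to render."""
    return [
        {
            "campaign": campaign,
            "size": f"{w}x{h}",
            "headline": headline_template.format(vertical=v),
        }
        for (w, h), v in product(SIZES, VERTICALS)
    ]

jobs = banner_jobs("Q3-ABM", "See how {vertical} teams cut costs")
```

Each job dict then drives the actual rendering and the SharePoint save; in AI mode, the headline template comes from a model instead of the marketer.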
The banner generator now helps our teams create display ads for vertical and ABM campaigns. The ads perform better than what we had before: standardized to brand, with more focused messaging, and produced at a pace that gets us to market much faster.
There was friction in the team’s process, and the solution didn’t require anyone to learn anything new. All a marketer does is fill out a form and get ads in their inbox. Simple as that.
The tight feedback loop (updating features in days, sometimes hours) made the tool more usable and sticky across the team. Marketers wanted more control over text and line breaks, so I added it quickly. That cycle of hearing what’s needed and being close enough to fix it is what turns a tool people try into a tool people depend on.
This is the phase where I really understood what makes AI tools get adopted versus what makes them sit on a shelf. It’s not the sophistication of the model or how impressive the demo is. It’s whether you started with a real problem, whether the output lands where people already work, and whether you can iterate fast enough to keep up with what they actually need. Once people saw the dossier working, they wanted new playbooks. Once they saw the banner generator, they wanted even more features. That compounding effect only kicks in if the first tool earns enough trust to create demand for the next one.
Phase 3: Push the frontier. Apply what you’ve learned to AI coworkers.
Everything I built up to this phase runs when you trigger it. You submit a form or check a box, a workflow executes, you get output, it stops. That’s the definition of a tool: you activate it and it does its job.
Then around December, I started experimenting with agents. Inspired by Tim Kellogg’s agent, Strix, I wanted to build something that didn’t just run when I triggered it, something that ran continuously, remembered context, and evolved over time.
Outside of work, I’d been building a persistent agent called Atlas. It runs continuously, maintains its own memory, and makes decisions without my input. What I learned building Atlas translated directly to work: the conditions you build around an agent (persona, skills, memory, validation) matter more than the model powering it.
So when I started using agents at work, I had two advantages: months of learning how to make agents reliable (how to get them to align to exactly what I need, how to build in guardrails) and the playbook from Phases 1 and 2. Start small. Pick a real use case. Embed where the team already is.
I started with an SEO strategy agent. It produces competitive analysis, identifies keyword gaps, writes page-level optimization briefs, and drafts content. Once I added more capabilities, it produced a 51-page competitor teardown with screenshots of homepage layouts, gap analysis, and specific recommendations. Research that would have taken a team days, compressed into minutes. I’ve since added a technical SEO agent and a social media strategy agent using the same approach.
Another difference between the tools and the agents: the agents work in our project management system. They pick up tasks, post updates, and deliver work where the team already collaborates. No new platform. No new interface. Just a new team member that happens to run on infrastructure instead of coffee.
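The "pick up tasks, post updates" loop can be sketched as ordinary polling code. Everything here is hypothetical: `InMemoryBoard` stands in for the real project-management API client, and the worker stands in for the agent's actual reasoning.

```python
# Hypothetical sketch of an agent's task loop against a project-management
# tool. InMemoryBoard stands in for the real API client.

class InMemoryBoard:
    def __init__(self, tasks):
        self.tasks = list(tasks)   # open tasks as dicts
        self.updates = []          # (task_id, message) pairs posted back

    def open_tasks(self, assignee):
        return [t for t in self.tasks if t["assignee"] == assignee]

    def post_update(self, task_id, message):
        self.updates.append((task_id, message))

    def close(self, task_id):
        self.tasks = [t for t in self.tasks if t["id"] != task_id]

def run_once(board, worker, assignee="seo-agent"):
    """Pick up each open task assigned to the agent, do the work,
    post the result where the team collaborates, and close the task."""
    done = 0
    for task in list(board.open_tasks(assignee)):
        board.post_update(task["id"], worker(task))  # worker = the AI step
        board.close(task["id"])
        done += 1
    return done
```

Run on a schedule, that loop is the whole difference between a tool you trigger and a coworker that shows up: the work arrives as task updates, not as output from a new interface.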
What eleven months of building taught me.
Looking back, the most important things I learned weren’t exactly technical. They were about how people work and adopt new tools.
I learned how to identify where and when to use automation, AI, and agents: which problems need intelligence, which just need good plumbing, and which are ready for autonomous coworkers.
I also learned how to implement solutions that teams actually use. Not chatbots, but targeted solutions that start with a specific pain point, land where people already work, and earn adoption one use case at a time.
There’s a bigger picture (organizational transformation, change management, data infrastructure) that we’re still figuring out. But what we have figured out is how to make AI work inside real workflows, with real adoption. And I think that’s where most teams should start anyway.
If you’re looking to get your first AI pilot off the ground, or you’ve tried and it didn’t stick, here’s the framework I use. I recently ran a workshop walking another team through this exact approach, and it maps to everything I’ve described above.
1. Find the challenge. What manual process eats your team’s time? What’s inconsistent? What can’t you scale? Start with friction you can feel, not a technology you want to try.
2. Determine the AI approach. Does this need research and synthesis? Content generation? Data analysis? Or is this actually a workflow and automation problem that doesn’t need AI at all? Where does the data live? The approach depends on the answer.
3. Scope the pilot. What’s the smallest version you can test in two weeks? Who are your 3-5 pilot users? What data do you already have access to? Don’t boil the ocean. Build the smallest thing that proves the concept.
4. Define your success metric. Time saved? Conversion lift? Output quality? Adoption rate? Pick something that ties to a number leadership cares about. Something like “Account research went from 3 hours to 10 minutes,” or even better, “Account research drove a 25% lift in meeting rates and a 10% lift in pipeline conversion.”
Every tool I built followed this pattern, even when I didn’t realize it at the time. The dossier started because sellers needed faster account research. The banner generator started because campaigns were bottlenecked on design resources. The agents started because SEO had repeatable research tasks that didn’t need to wait for a human to trigger them.
Too many teams work backwards and buy the tool first, then look for the problem to justify it. What I’ve outlined isn’t a new lesson. People, process, technology, in that order, has been the playbook for every successful implementation I’ve seen in my ops career. AI doesn’t change that. If anything, it makes it even more valid. The technology is more powerful than ever. Getting the people and process part wrong means your team misses out on the most impactful tools ever made available to them.
So start small, pick a problem, and build something that works. Curious to hear what people are leveraging, building, and learning. Drop me a comment or message below!

