At Bodine & Co. (and, I'm guessing, at your organization), there's real excitement about AI and how it can help us improve customer experiences. At the same time, we've got a lot of reasonable questions about AI trust and risk, along with concerns around confidentiality and responsible use. Both instincts are valid. In fact, I believe that good leadership in the age of AI requires holding both in tension.
For the past several years, we haven't had an official policy on AI usage. But as our team has grown, we've realized that creating a policy grounded in responsible AI is, well, the responsible thing to do.
Want to know what we didn’t do? Lock everything down and call a halt to all AI usage while we assembled a committee to debate the pros and cons, watching the AI world speed by us with update after update that rendered our carefully crafted rules and regulations immediately obsolete. [Acting from a place of fear isn’t my jam. I’ll dive more into that topic in a future post.]
Instead, we've taken a safety-first approach. Our new responsible AI policy gives our team enough clarity to move forward thoughtfully, with the confidence that they're doing the right thing.
That starts with being clear about what AI is for inside our company.
We use AI to support our thinking, not replace it. To help with first drafts, pattern recognition, analysis, synthesis, and exploration. To create more whitespace for ideas. To move from “blank page” to “something to react to” more quickly. AI is our collaborator, not an authority — and never the final voice.
From there, we get very practical about what does and doesn’t belong in our AI tools.
As a general rule, we put information into free AI tools only if we'd be comfortable discussing it in a room full of colleagues. We're not entering anything that would compromise client trust, confidentiality, or intellectual property. That means no client names, no sensitive business details, no proprietary information. When we want to think through a real situation, we anonymize it. We change names, remove identifying details, and focus on the underlying pattern or problem instead.
This approach lets us get the benefit of AI’s perspective (and experiment with more free tools) without handing over things that aren’t ours to give.
We also spend time making sure our team understands that not all AI tools are the same. Different tools handle data differently, as do free and paid versions. Some retain information longer than others. Some use inputs to improve their models, while others don’t. I don’t expect everyone to become an expert in model architectures or data pipelines — but I do expect awareness. Knowing that “AI” isn’t a monolith has changed how our team uses it AND led to better judgment.
Our AI policy sets clear expectations, explains the why behind them, and trusts our smart Bodine & Co. humans to make good decisions within thoughtful AI guardrails. And if you're wondering, this way of working hasn't slowed us down. Quite the opposite. Once we spelled out what's OK and what's not, our team stopped hesitating. They stopped second-guessing. They started using AI with intention instead of anxiety.
This isn’t a finished policy. Like everything else about AI, it will evolve. But it’s a clear snapshot of how we’re leading into this moment: safety first, humans always, and progress very much welcome.
More to come.