Our Tribal Monthly wasn’t about AI trends, flashy demos, or future predictions. It circled around something more human: how artificial intelligence interacts with emotion—and where it still struggles. The conversation didn’t offer solutions, but it made space for the questions that come up when we let machines speak on our behalf.
It started with a story that felt all too familiar. A team lead used AI to help write a performance feedback email. The AI-generated message ticked all the boxes: neutral tone, clear structure, no spelling mistakes.
But it didn’t land well.
The recipient came away feeling blindsided, like they were being spoken to by a system, not a person. They weren't upset about the message itself, but about how it came across. There was no softness. No context. No care.
The team lead later said, “I still believe in using AI for speed, but now I write the tough emails myself—because tone matters more than grammar in these moments.”
That line stuck.
Another moment that got everyone nodding: customer support.
One of our attendees shared how their chatbot responded to a furious customer complaint with upbeat language and smiley phrases. “We’re so excited to help you!” was not the reply the customer wanted after a shipping delay and a missed deadline.
The team had to jump in to repair the damage. It wasn’t the chatbot’s fault—it was doing what it was trained to do. But the mismatch between tone and situation made the customer feel dismissed.
That conversation led to a bigger one: are we teaching our AI tools to listen, or just to reply?
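For what it's worth, "listening" doesn't have to mean anything exotic. Even a rough check on the customer's mood before picking a reply tone would have avoided the smiley-face disaster. Here's a minimal sketch of that idea; the keyword-based detect_frustration function is a hypothetical stand-in for whatever real signal a team actually has, whether that's a sentiment model or ticket metadata:

```python
# Minimal sketch: gate the reply's tone on the customer's mood before
# generating anything. detect_frustration is a crude keyword stub -- a
# stand-in for a real sentiment model or support-ticket metadata.

FRUSTRATION_CUES = {"delay", "missed", "unacceptable", "still waiting", "refund"}

def detect_frustration(message: str) -> bool:
    """Return True if the message contains obvious frustration signals."""
    text = message.lower()
    return any(cue in text for cue in FRUSTRATION_CUES)

def choose_opening(message: str) -> str:
    """Pick an opening line that matches the situation, not the brand voice."""
    if detect_frustration(message):
        # Acknowledge first. No exclamation marks, no cheer.
        return "I'm sorry about the delay, that's on us. Here's what I can do right now:"
    return "Happy to help! Here's what I found:"

print(choose_opening("My order is a week late and I missed a deadline."))
```

It's a toy, obviously. But the point stands: the branching on mood is the part most chatbots skip, and it's the part the customer actually feels.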
Someone else brought up a wellness tool they’d installed—a productivity app that sent reminders to breathe, reflect, and take mindful breaks. Sounds good on paper. Except the timing was awful.
These pop-ups would appear mid-presentation. Or during tense meetings. Or in the middle of writing reports. “It kept telling me to relax when I was on the verge of losing it,” one person said, half-laughing.
The app wasn’t wrong in what it said. But it had no sense of rhythm. No awareness of timing. Eventually, it got turned off—not because mindfulness was unhelpful, but because it felt robotic and random.
This led to a larger point: machines don’t get social cues. They don’t feel stress. So they can’t gauge when something is helpful—or when it’s just noise.
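You can see how little it would take to do better. A sketch of a timing-aware nudge, under the assumption that the app can see some kind of busy signal (the is_busy check below is hypothetical; in practice it might read calendar status, screen-share state, or a do-not-disturb flag):

```python
# Minimal sketch: hold a mindfulness nudge until the user isn't mid-meeting.
# is_busy() is a hypothetical placeholder -- a real version would query a
# calendar or presence API instead of guessing by the clock.

from datetime import datetime, time

def is_busy(now: datetime) -> bool:
    # Placeholder heuristic: treat core meeting hours as busy.
    return time(9, 0) <= now.time() <= time(11, 0)

def maybe_nudge(now: datetime) -> str | None:
    """Return a nudge, or nothing at all if the moment is wrong."""
    if is_busy(now):
        return None  # stay silent; a deferred nudge beats a mistimed one
    return "Take a breath. You've been heads-down for a while."

print(maybe_nudge(datetime.now()))
```

The interesting branch is the one that returns nothing. That's the behavior the app in the story was missing: knowing when the right output is no output.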
There was a side discussion about speed—how we’ve started to expect instant replies, instant content, instant everything. AI makes this possible, but it also brings pressure. Some folks admitted they now feel guilty taking time to write thoughtful messages because everyone else seems to be replying in minutes.
But when everything is instant, communication starts to lose depth.
One team shared how they now take a beat before responding to important messages—whether it’s a client email or internal feedback. They might use AI to draft it, but they always read it out loud before sending. If it doesn’t sound like something they would say, it doesn’t go out.
Someone mentioned how easy it is to lean too hard on AI, especially in moments where emotional care is needed: apologies, acknowledgements, expressions of disappointment, or bad news. You can ask ChatGPT to write a "thoughtful apology," but unless you bring your own awareness to it, it's just a nice-sounding paragraph.
AI is efficient. But empathy still has to come from the human side.
And it’s not just what you say—it’s when and how you say it. No tool can judge that better than a person. Not yet, anyway.
As the session wrapped up, someone said something that hung in the air: “AI can respond faster, but it doesn’t know when to stay silent.”
It was a reminder that awareness—emotional, social, human—can’t be automated. Not yet. Maybe not ever. We can use tools to assist, but we can’t pass off responsibility for the parts of work that involve care, attention, and timing.
No one was against AI. If anything, most of us were using it every day. But there was a quiet agreement: just because it can talk for us doesn't mean it always should.
Speed isn’t always the goal. Some messages need time, care, and your actual voice. Even if AI writes it for you, check if it sounds like you before hitting send.
Tone matters more than grammar. Especially in feedback, client support, or emotionally charged situations. AI may get the structure right but miss the feeling.
Timing is everything. Whether it’s a chatbot or a digital reminder, awareness of when something is said can be more important than what’s said.
Don’t outsource empathy. Use AI to help you draft, not to distance yourself from difficult conversations. People can sense when something’s too polished.
You’re still the one responsible. The tool may write it, but you’re the one sending it. Don’t let convenience overtake care.
At the end of the day, these conversations reminded us that tools are only as useful as the thought we put behind them. AI might help us move faster, but it’s still our responsibility to decide when to slow down, when to speak up, and when to just be present. As we keep experimenting with tech in our workflows, maybe the real skill isn’t learning how to prompt better—but knowing when not to use a prompt at all.