In this edition, I decided to focus on a fundamental subject: trust.
Trust is hard earned and needs to be maintained constantly.

With the widespread global adoption of AI tools, and incessant headlines about the impact this will have on the job market and on people’s livelihoods, AI risks eroding this critical and fundamental principle, one that either unites us or divides us all.

 


A problem we’re not yet talking about

Trust in remote teams breaks fastest when people can't predict what happens to their work, words, or data. And right now, AI is making that prediction impossible.

Microsoft and LinkedIn found that 75% of knowledge workers already use AI at work (most of them having started only recently). That's adoption at a pace most organizations haven't even begun to process, let alone integrate properly.

In remote settings, this speed without clarity can create confusion as well as fear…

Will my recording and stats be used against me?

Will my work be judged by an algorithm I don't understand?

Will AI replace me if I'm honest about what's not working?

People aren’t being irrational, and they don’t necessarily fear technology; they’re doing exactly what humans do when the rules suddenly become unclear: protecting themselves.

 

This hits remote teams harder

Psychological safety is the belief that you can take interpersonal risks without facing negative consequences. Harvard Business Review's research has consistently shown it's what separates high-performing teams from the rest.

But remote work already makes psychological safety fragile:

- Silence is ambiguous. Is it disagreement? Confusion? Disengagement? You can't tell.

- Text prevails over conversation. Not everything lands as intended, and messages are often interpreted differently.

- Hybrid meetings create two classes of participants: a) those in the room who catch all the side cues, and b) those on screens who miss them entirely.

MIT Sloan's research on collaboration found that mixed hybrid meetings can actually be worse than being fully remote or fully co-located (we didn’t need research to tell us that!), because informal power and context concentrate in the room.

Now add AI to this mix:

- Meeting summaries capture a version of events that might be wrong

- Call scoring feels punitive when nobody explains how it works

- Copilots let people bypass their teammates entirely (how many times have you heard: "I'll just ask the tool"…)

 

Trust is a baseline for prediction

Trust is fundamentally about prediction. People decide whether to speak up based on 3 calculations:

1) Is it safe? Will I face consequences for this?

2) Is it worth it? Will speaking up actually change anything?

3) Is it fair? Do the same rules apply to everyone?

AI changes all 3 at once. It changes who sees what (visibility), how performance gets evaluated (judgment), and who actually makes decisions (agency).

 

Trust follows a predictable pattern:

1) Signals (what people observe). For example:

- How leaders handle feedback and make decisions

- How the team behaves in meetings: who interrupts, who stays silent

- What the systems do (e.g. AI monitoring, copilots, automated scoring, call reviews)

2) Interpretation (what people conclude)

- Speaking up is risky here

- AI is basically surveillance

- Only certain people benefit from these tools

 

3) Behaviour (what people do next)

- More compliance and less candour

- More workarounds like shadow AI, side channels, private message groups

- Fewer early warnings, fewer difficult conversations

 

4) Results (what leaders see)

- Execution slows down, problems appear out of nowhere

- Fewer ideas surface, more rework

- Performance drops that seem to ‘prove’ the team can't be trusted

Which then becomes a new signal. And the loop goes on and on.

Your job as a leader is to avoid or break this loop by making the environment more predictable.

 

What trust needs

Trust conversations can very easily stay vague, but they can be made operational instead.

Both in society and in business, we can break trust down into 2 areas:

a) Human trust

  • Care: Am I treated like a person or a resource?

  • Competence: Do you deliver and help others deliver?

  • Consistency: Do the rules change depending on who's speaking?

Harvard Business Review's team research consistently places trust at the centre of high performance, built through observable behaviours rather than nice-sounding statements.

 

b) System trust

  • Transparency: What does this tool actually do with my data and work?

  • Control: Can I opt out, correct it, or appeal it?

  • Fairness: Are the rules applied consistently?

If you ignore system trust, all your psychological safety work can vanish, because people experience AI as an unspoken evaluator sitting in on every call. And how many people truly challenge AI outputs, right?

Microsoft's research on managers explicitly links psychological safety to successful AI integration.

 

Leaders need to be intentional and take specific steps to avoid these pitfalls

No surprises: communication is key, and ground rules need to be set. Be explicit about:

  • What AI is allowed for (and what it's not)

  • What data gets used (and what doesn't)

  • What's being measured and what isn't

  • How humans make the final call on performance, promotions, and forecasting

Predictability matters: explaining the expected outcome of the implementation lowers anxiety and leaves less room for rumours to develop.

 

If the damage has already been done

HBR's guidance on trust repair consistently points to acknowledgment and clear communication when confidence has been damaged, which probably feels intuitive to most of us. Following that guidance in a simple, human way will address most of the issue:

  1. Calling the issue out

  2. Being honest about the non-intended impact

  3. Stating the new rule and making AI “challengeable”

  4. Getting people involved and inviting feedback

 

Pick one process that makes people afraid to speak up (e.g. forecasts, pipeline reviews, deal post-mortems, incident reviews, coaching sessions…) and replace blame triggers with learning triggers (this is coaching 101 and is proven to be effective):

  • "What did we assume that turned out wrong?"

  • "What signal did we miss earlier?"

  • "What would we do differently next time?"

The best leaders know that psychological safety rises when work is framed as learning, people are invited to participate, and bad news is met calmly and productively.

 

Leadership takeaways

For psychological safety to exist in an AI-enabled team, people need to observe 3 things consistently:

1) When someone raises a risk, they get thanked and included

2) Learning and evaluation are different things (coaching vs performance review)

3) Fairness is felt

 

“Trust doesn't rebuild through inspirational speeches. It rebuilds when people can predict what happens next, and when those predictions keep coming true.”

 
