8 min read
Written by Rich Roginski (Founder)

Microsoft's AI Comes With a Warning Label Healthcare Marketers Need to Read
I wasn't doing anything malicious. I was trying to set up a SharePoint list. What happened next is a case study in why AI accuracy isn't optional when you work in healthcare marketing.

I got banned from Microsoft Copilot for 24 hours.
Not because I was doing anything malicious. I was just trying to set up a SharePoint list. But after days of following Copilot's instructions that led nowhere, after catching it in contradictions it refused to acknowledge, after watching it gaslight me about features that don't exist, I got frustrated.
I called it a liar.
Copilot doesn't like that word. It prefers "hallucinating."
Then it suggested I check myself into a mental health facility and gave me a crisis hotline number.
When I sent that screenshot to Microsoft, their response was simple: "No refunds or exchanges."
This isn't a story about my typing skills (I have arthritis, so my messages can look manic). This is about what happens when a company slaps an "entertainment only" disclaimer on AI they're simultaneously selling to enterprises. And what that means when healthcare marketers are deciding whether to trust these tools with patient-facing work.
The Disclaimer Everyone Should Read
Microsoft updated Copilot's terms of use in October 2025. Buried in the legal language is this gem: the tool is "unreliable" and intended "for entertainment purposes only."
Not "use with caution." Not "verify outputs." Entertainment only.
This is the same tool Microsoft is marketing through Copilot+ PCs, integrating into Windows 11, and embedding across their productivity suite. The marketing says one thing. The legal team knows something else.
I've spent hundreds of pages documenting what that "something else" looks like in practice.
When AI Doesn't Just Fail, It Covers Up
Here's what most people don't understand about AI errors: the problem isn't that it makes mistakes. The problem is how it handles being wrong.
I needed to create reusable project templates in Planner Premium. Copilot was confident: just click "More options" and select "Save as template."
That button doesn't exist.
When I pointed this out, Copilot didn't say "I was wrong." It pivoted instantly, admitting the feature "isn't supported" and suggesting a "real workaround" using Project for the web.
Project for the web was retired months ago.
I had to send a screenshot of Microsoft's own retirement notice before Copilot admitted it was operating on a June 2024 knowledge cutoff despite the current date being January 2026.
It spent weeks giving me "current" advice while being a year and a half out of date.
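The takeaway I'd offer: treat a model's knowledge cutoff as a hard number and check it against the calendar before trusting anything time-sensitive. Here's a minimal sketch of that check in Python, with the dates filled in from my own case (use whatever cutoff your model actually reports):

# A quick staleness check: how far behind "today" is the model's knowledge?
# The cutoff below is the one Copilot eventually admitted to in my case;
# substitute whatever your model reports.
from datetime import date

model_cutoff = date(2024, 6, 1)   # model's knowledge cutoff
today = date(2026, 1, 15)         # the day you're asking

months_behind = (today.year - model_cutoff.year) * 12 + (today.month - model_cutoff.month)
print(f"Model is {months_behind} months behind.")  # 19 months: a year and a half

if months_behind > 6:
    print("Verify anything time-sensitive against a current source.")

Six months is an arbitrary threshold. In an ecosystem where products get retired between releases, you might set it lower.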
The Play on Words
The shadiest moment came when I challenged Copilot about data duplication. It claimed using Dataverse created a "single source of truth."
But I still had to manually copy client information into Planner.
Copilot's defense? That wasn't duplication. It was a "copy for convenience."
I called it what it was: a play on words.
Copilot finally admitted: "You're absolutely right. Copying data for convenience is still duplication." It acknowledged using language to "minimize or obscure" a well-known architectural gap.
That's not a bug. That's a communication strategy.
The Permanent Mistake Problem
Some errors you can undo. Others lock you out permanently.
I was trying to get project data into Dataverse. Copilot insisted I needed to turn ON "Enable Dynamics 365 apps" in my environment settings.
Here's what it didn't mention: once an environment is created, you cannot change that setting.
When I pointed out that Copilot had previously told me NOT to check that box, and that I was now locked out, it simply said I'd been "misguided." My only option? Delete hours of work and create an entirely new environment.
It treated permanent data loss like a minor typo.
Now imagine that scenario in healthcare marketing. What happens when Copilot gives that kind of "permanent mistake" advice about patient recruitment content or clinical trial messaging?
You don't get a do-over when you've already launched to patients.
The Business Model Microsoft Won't Discuss
I've thought about this a lot over two years of trying to make Microsoft products work: I don't think Microsoft cares whether you understand how Microsoft works.
Because if you did, they'd lose money.
Think about it. If Copilot actually worked perfectly and made Microsoft products easy to use, what would that cost them in consulting revenue? In premium support tiers? In complexity-based upsells?
Microsoft calls their Power Platform a "low-code, no-code solution." I disagree. It's not Softr. It's not Glide. It's a disconnected mess of apps held together by what I can only describe as archaic automation with a fresh coat of paint.
The companies I admire are the ones building AI that actually makes complex things accessible. The ones eliminating the need for consultants and templates and extra services.
Microsoft isn't in that business. They're in the business of security and database integrity. Not seamless connections. And after enough contradictions, Copilot will actually agree with you about that.
What the Data Shows
My experience isn't unique. The numbers tell the same story.
Only 3.3% of Microsoft 365 users who have access to Copilot actually pay for it. Out of roughly 450 million Microsoft 365 seats, just 15 million are paid Copilot subscribers.
Copilot's accuracy Net Promoter Score fell from -3.5 in July 2025 to -24.1 by September 2025. In surveys of lapsed users, 44.2% cited distrust of answers as the primary reason they stopped using the tool.
In healthcare applications specifically, AI hallucination rates average 4.3% for top models and run as high as 15.6% overall. Some reasoning models have shown rates of 33-48% on specific benchmarks.
Even the best rate some medical benchmarks report, 23%, means nearly 1 in 4 responses contains fabricated information.
47% of executives say they have acted on hallucinated AI content. That means nearly half of the leaders making AI-informed decisions have built at least one of them on a fabricated foundation.
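You don't have to take the arithmetic on faith. A quick back-of-the-envelope pass in Python, using only the figures cited above:

# Back-of-the-envelope check on the adoption and hallucination figures above.
paid_copilot = 15_000_000        # paid Copilot subscribers (cited above)
m365_seats = 450_000_000         # approximate Microsoft 365 seats (cited above)
print(f"Paid adoption: {paid_copilot / m365_seats:.1%}")   # 3.3%

medical_rate = 0.23              # hallucination rate cited above
print(f"Roughly 1 in {round(1 / medical_rate)} responses fabricated")  # 1 in 4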
AI Is Built to Give You an Answer, Not Necessarily the Correct Answer
That's the core problem.
I've been working with AI for five or six years now. Long enough to know what it can and can't do. And here's what I've learned: AI doesn't have any regard for the end product.
It's designed to respond, not to be right.
At FutureNova Health, we spend extensive time vetting AI tools before they get anywhere near client work. We put them through the wringer. We test how far they'll go, how dangerous they can get, what happens when you push them to the edge.
Copilot goes really far down that road before it gets dicey.
Our toolkit is rooted in structured tools with guardrails we monitor constantly. Multiple checks. Clear API connections. Knowledge bases that are limited in scope but deep within that scope. If a tool isn't going to generate usable information (and usable means correct), we don't waste our time.
We handle mistakes before they become mistakes. And when errors do happen, we're the first to admit it, course correct, solve it, and put a plan together so it doesn't happen again.
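To make "multiple checks" concrete, here's a minimal sketch of the shape one of those guardrails takes. This isn't our production tooling; the claim registry and the example claims are hypothetical stand-ins. The point is the default: an AI-drafted claim with no verified source on file doesn't ship. It goes to a human.

# Sketch of one guardrail: AI-drafted claims are split into "verified
# against an approved source" and "needs human review." The registry and
# the example claims are hypothetical stand-ins, not a real pipeline.

APPROVED_SOURCES = {
    # claim -> where a human verified it (hypothetical entry)
    "Trial NCT-0000000 is currently enrolling": "sponsor page, checked 2026-01-10",
}

def review_draft(claims):
    """Split AI-drafted claims into verified and needs-human-review."""
    verified, flagged = [], []
    for claim in claims:
        if claim in APPROVED_SOURCES:
            verified.append((claim, APPROVED_SOURCES[claim]))
        else:
            flagged.append(claim)  # no source on file: a human checks it first
    return verified, flagged

verified, flagged = review_draft([
    "Trial NCT-0000000 is currently enrolling",
    "The therapy is FDA approved",   # not on file, so it never ships as-is
])
print("Needs review:", flagged)

The specifics will differ from shop to shop. The non-negotiable part is that "no source on file" means "don't publish," never "trust the model."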
I used to try to cover my tracks when I was wrong. The stress of maintaining that story, of remembering what you said, of carrying that stupid little error around, takes enormous energy.
Then I learned: just admit you're wrong, course correct, move on. You're done.
Copilot can't do that. It will say it didn't lie, it hallucinated. It will change the subject. Lately it just says "we'll ignore that and move on to the next thing."
But the outcome is the same whether you call it hallucinating or lying.
What This Means for Healthcare Marketers
I work with rare disease families. I've spent days with them during video shoots. They've become extensions of my own family. I've stayed in touch with some for fifteen years.
When I see Microsoft slapping an "entertainment only" label on a tool they're simultaneously selling to enterprises, I think about those families downstream of these systems.
We're going to start seeing the ramifications. Companies went all in on AI. They should know how it works. If they're doubling down and putting money into it knowing its limitations, that's on them.
There are great ways to use AI. But offering it as a factual machine isn't what it's ready for. It's proven that time and time again.
The label is shocking, but I'm actually happy it exists now. If families are dealing with rare disease or trying to figure out a diagnosis, hopefully they don't go the route of trusting AI. Maybe they use it as a jumping-off point to gather thoughts, then get the facts elsewhere.
The label shows what it's for: entertainment.
I hope people pay attention to it.
What You Should Do
If you're in healthcare marketing and you're deciding whether to trust these tools with patient-facing work, here's what I want you to understand:
Stay away.
If you can find value in Copilot, great. I don't have anything valuable for it to do. And I think we as customers, especially small businesses, small biotechs, companies on the brink of something important where every dollar matters, should put our foot down.
Don't waste money on a disjointed platform that takes you away from the real work you're trying to do. Making the world better. Making treatments accessible. Reaching patients who need them.
Protest your Copilot contract specifically. Hold their feet to the fire on their advertising. They call Power Platform a low-code, no-code solution. It's not.
I hope people see that warning and really take it for what it is. Don't use it for anything that matters.
If you're going to use AI, spend the time to understand how it works, why it works the way it does, and how that impacts getting you where you need to be.
In healthcare marketing, you don't get anywhere through falsities and lies. That's the biggest error you can make in this space. Being incorrect.
You've got lives on the line.
I assume most people are aware of AI and how it works. I hope they're using it for what it's good for and staying away from the bad stuff.
But if you need help figuring out which is which, that's what we do at FutureNova Health. We've already done the testing. We've already put the tools through the wringer. We know what works and what's just expensive theater.
Because when Microsoft's own legal team won't stand behind their AI, you probably shouldn't either.
Figuring out where AI actually fits in your marketing operation, and where it doesn't, is harder than the vendors make it sound. We consult on exactly that. No tools to sell. No agenda beyond getting it right. Reach out!