
Lessons from Costly AI Failures and Tips to Avoid Them

  • Writer: Rich Roginski
  • Sep 9
  • 4 min read

Lessons and tips for AI Success

Let me get right to it: AI is incredible... until it isn’t. We just published a pretty sobering report on that exact reality, and before I go full glass-half-empty, let me assure you: this post isn’t about dunking on tech. It’s about learning fast, fixing things, and being honest when the hype train needs brakes.


Here’s the reality: 42% of companies abandoned their AI initiatives in 2025, up from 17% just a year prior. It’s not just a phase, it’s a pattern. And as much as we’d all love AI to be the solve-everything engine it’s sold as, turns out, it's still very capable of messing up. Spectacularly.


Why We Wrote This Report (and AI Success Tips I Wish I Had)

At FutureNova Health, we believe in building AI that actually works, and by that, we mean it delivers results without compromising quality, safety, or trust. That’s why we published this report in full transparency: to show where others failed, and to remind ourselves where we could, too.

Inside, you'll find stories like:

  • IBM Watson’s $4B healthcare misfire, which never touched a single real patient due to training on fake data.

  • Zillow’s algorithmic real estate bet, which lost $421M and cost 2,000 jobs because their AI thought it knew the market better than it did.

  • McDonald’s drive-thru AI, which went viral for all the wrong reasons (think: 100+ nuggets accidentally ordered and wrong cars getting charged).

None of these are niche edge cases. They’re red flags waving at anyone building AI without the right guardrails.


Our AI Philosophy (AKA: What Keeps Us Grounded)

If you’ve ever seen our internal docs, pitch decks, or product screenshots, you’ve probably run across some version of this line:

"We don’t use AI for hype. We use it where it creates clarity, speed, and measurable impact."

That means:

  • ELEVATE: No compromises on quality or outcomes.

  • RELIABLE: Systems must be trusted, repeatable, and explainable.

  • DRIVE PROGRESS: Real results, not “potential value.”

  • TIME SAVER: If it doesn’t save us time, it doesn’t belong.

We run every potential AI tool through these four checks before we build it.

AI Tips to Avoid Failure and Find Success

Scroll below to see how AI can transform your recruitment or commercialization efforts, along with tips and lessons to help you avoid common AI pitfalls.



The Best Advice I Can Share (to Avoid Failure in AI Implementation): Start at the Beginning

Caution before you commit: ask first, spend later. AI can be game-changing, or a very expensive distraction. Before you sink time and budget, pressure-test the basics.

  • Check Yourself (Before You Wreck Yourself):

    • What problem are you actually solving? Be honest. If your goal sounds like a press release, it’s not ready yet.

    • Is AI the best tool for this? Sometimes the smartest move is not AI. Don’t overcomplicate what a spreadsheet could handle.

    • Is your data even usable? Be real: if your data’s a mess, your AI will be a mess with a dashboard.

  • Build the Safety Net (So You’re Not Scrambling Later):

    • Who’s responsible when things go sideways? Set governance early. “We’ll figure it out later” is not a strategy.

    • Are humans still in the loop? Autonomous systems sound cool until they go rogue. Keep people close to the decision points.

    • How will you know it’s going off-track? AI doesn’t stay still. Build in checkpoints to catch drift before it becomes a disaster (see the sketch right after this list).
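
On that last point, here’s a minimal sketch of what a drift checkpoint could look like. This is not our NovaNext tooling; the PSI metric, the 0.2 threshold, and the weekly cadence are all assumptions on my part. The point is simply that “catch drift early” can be a small, boring script that compares recent model scores against a baseline and flags a human when they diverge.

import numpy as np

def population_stability_index(baseline, recent, bins=10):
    # PSI between two score distributions; values above ~0.2 are commonly
    # treated as meaningful drift (a rule of thumb, not a universal standard).
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    recent_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    base_pct = np.clip(base_pct, 1e-6, None)      # avoid log(0) on empty bins
    recent_pct = np.clip(recent_pct, 1e-6, None)
    return float(np.sum((recent_pct - base_pct) * np.log(recent_pct / base_pct)))

def drift_checkpoint(baseline_scores, recent_scores, threshold=0.2):
    # Returns (drifted, psi) so a person stays in the loop on the decision.
    psi = population_stability_index(np.asarray(baseline_scores),
                                     np.asarray(recent_scores))
    return psi > threshold, psi

# Hypothetical weekly check: validation-era scores vs. last week's production scores.
rng = np.random.default_rng(0)
baseline = rng.normal(0.40, 0.10, 5000)
recent = rng.normal(0.55, 0.12, 1000)
drifted, psi = drift_checkpoint(baseline, recent)
print(f"PSI = {psi:.3f}; drift flagged: {drifted}")

The details will differ for every model, but the habit is the same: decide up front what “off-track” means, measure it on a schedule, and route the alert to a human before it routes itself to your customers.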

Looking to get your idea off the ground? Set up an AI Ideation Session with FutureNova Health.

What This Has to Do With NovaNext

(Okay, slight spoiler for Part 2 of this series...) A few months ago, I got a little overeager about what AI could do inside our NovaNext™ Platform, specifically around hypertargeting, persona modeling, and site feedback automation.


I spent three weeks charging ahead with some ambitious logic, only to find that AI was confidently... wrong. I’ll share more about that rollercoaster soon, but let’s just say: it was humbling. And necessary.

That hiccup is exactly why we keep going back to our AI Principles, our pre-investment checklists, and our risk governance playbooks. We build for the long game, not the demo.


The Takeaway

If you're using or building AI in any industry, here’s what I’ve learned the hard way:

  • Don’t mistake confidence for capability.

  • Don’t let a shiny feature distract you from a broken foundation.

  • And please, for the love of your team and customers, plan for failure.


We're not just technologists, we’re in healthcare. That means our work has consequences, and AI doesn’t get a free pass just because it’s cool.


We’re going to keep sharing what we’re learning, even the hard stuff, because it’s the only way to build something that lasts. Appreciate you being along for the ride. And if you’ve got your own “AI came in hot and fell on its face” story, I’d genuinely love to hear it.


More soon in Part 2: Me vs. NovaNext (and the Three-Week Detour That Taught Me Everything).

