Title: When AI Overpromises and You Have to Take a Step Back (Three Weeks Later)
- Rich Roginski

- Sep 16
- 3 min read

So, here’s a humble confession: I spent three weeks head-down on NovaNext development, totally convinced that AI and I were unstoppable. Spoiler: we weren’t. We were making grand plans, dreaming in code, and believing every generation would nail the future. In three short weeks, I learned we still need to pump the brakes.
The Setup: AI, Me, You, and Too Much Confidence
You know how it goes. One day, you’re brainstorming NovaNext features, thinking and maybe saying out loud, “This AI will handle everything.” And for a moment, you believe it. Three weeks later, you realize you’ve run smack into AI’s not-so-glamorous side: blind spots, misfires, and the occasional “Wait, why is it doing that?”
A Little Reality Check (From Someone Smarter)
We recently took a closer look at some of the AI “hiccups” making headlines, not to scare anyone, but because it’s important. If you’re building in this space, you deserve to avoid the same traps.
A little recap: a Stanford-referenced study finds that even state-of-the-art AI misinterprets basic physical scenarios 46% of the time, basically doing worse than a toddler might (MindSynai). And while the hype thrives, Forbes puts it succinctly: “Machines can answer questions, but only people can ask the right ones” (Forbes).
Building on that, a growing body of research outlines common hurdles: everything from ethical blind spots and data bias to scalability and explainability. For example, Simplilearn lists top-of-mind challenges like algorithm bias, lack of transparency, up-front infrastructure costs, and questions around privacy, pretty much everything that screams, “Hey, maybe this isn’t a straight path” (Simplilearn.com).
My Three-Week Jam: What Really Happened
Week 1: I was in my AI-powered utopia, with visions of NovaNext automating everything from intake assessments to follow-ups. I called it my “we can do anything” phase.
Week 2: The reality gauntlet. Misread inputs, weird edge-case failures, opaque decision-making, and not enough transparency or trust. Suddenly, I found myself debugging paths I’d never stepped into before.
Week 3: The pullback. I started revisiting our AI Principles: things like fairness, explainability, human-in-the-loop design, and knowing when to hand decisions back to humans. Our AI should augment, not replace, and staying aligned with those values helped me say, “Okay, hold up. Let’s get this right.”
What We Learned: AI Is Powerful and Fallible
Here’s the essence of what I’m taking away:
- AI is only as smart as the questions we ask and the guardrails we build.
- Every breakthrough tech comes with bumps, missteps, and “Wait, what did it just do?” moments.
- Fairness, transparency, and human oversight aren’t just buzzwords; they’re non-negotiable.
If your team is sprinting toward a shiny AI-powered rollout, trust me: take a breath, check your principles, and remember that making mistakes early, while they’re still safe and internal, is way better than shipping confusion later.
Wrapping Up with a Dash of Self-Deprecation
So yeah, NovaNext and I had a minor reality check this month. AI and I learned that believing we could do all the fancy things without stopping to check our work was a... let’s call it “enthusiastically optimistic misstep.”
But that’s good. Because none of you clicked into this post to read some “AI never fails” sermon. You’re here for honesty, learnings, and maybe a chuckle at my expense. And hey, if I can overestimate AI’s potential in three weeks, I’m betting better things (and better code) are on the horizon.
As always, I’d love to hear your own “AI surprise” stories, or thoughts on how we keep AI grounded and trust intact.
Have an AI idea?
Let’s turn it into something real.
We’re here to help you navigate this new world, one smart step at a time.