Remember 2023? When every pitch deck suddenly had "AI-powered" slapped on it, and every product roadmap mysteriously sprouted machine learning features overnight?
Yeah. We're finally past that.
The hangover from the AI gold rush is setting in, and something interesting is happening: the founders winning in 2025 aren't the ones with the most AI features. They're the ones who know exactly when to use AI, and when to walk away from it.
Teams are moving from frantic integration to strategic evaluation. We're seeing AI features in design platforms like Figma and research tools like Dovetail become genuinely useful. Not because they're more advanced, but because they're finally scoped to do specific things well.
The product world is slowing down to consider how AI addresses real user and organisational needs. Not just integrating AI because… well, because everyone else is.
The question changed from "Can we add AI?" to "Should we add AI?"
When AI personalisation is executed effectively, engagement can jump by 30%.
Notice the catch in that sentence: when executed effectively.
What does that mean?
➡️ Strong data collection systems that prioritise user privacy from day one. Not as an afterthought when your privacy policy gets flagged.
➡️ Real-time processing for instant interface updates. If your AI takes 5 seconds to "personalise" something, users will bounce before they see the magic.
➡️ Flexible UI frameworks that can actually evolve with user needs. Too many teams build rigid UIs and then try to shoehorn AI into them. It doesn't work.
➡️ Clear feedback mechanisms so the system gets smarter. Without a feedback loop, you're not building AI, you're building a very expensive random number generator.
The companies getting this right are thinking through the entire system design, not just throwing algorithms at problems. And we love that 👏
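To make the feedback-loop point concrete, here's a minimal sketch (all names hypothetical, not any specific product's code) of what "the system gets smarter" actually requires: the product records whether users engaged with what it surfaced, and future ranking reflects that signal.

```python
from collections import defaultdict

class FeedbackLoop:
    """Toy personalisation ranker that learns from explicit feedback.

    Hypothetical sketch: each item's score is a running average of
    observed feedback (1.0 = engaged, 0.0 = ignored). Without a loop
    like this, the "AI" output never improves -- it just guesses.
    """

    def __init__(self):
        self.score = defaultdict(float)  # running mean feedback per item
        self.count = defaultdict(int)    # number of signals per item

    def record(self, item: str, engaged: bool) -> None:
        """Log one user interaction with an item."""
        self.count[item] += 1
        n = self.count[item]
        # Incremental mean update: no need to store raw history.
        self.score[item] += (float(engaged) - self.score[item]) / n

    def rank(self, items: list) -> list:
        """Items with better observed feedback come first."""
        return sorted(items, key=lambda i: self.score[i], reverse=True)
```

A real system would add exploration, recency decay, and per-user context, but the shape is the same: measure, store, feed back into the next decision.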
You know the pitch: "We use AI to [completely mundane task] with [buzzword] technology."
The graveyard of failed AI features built on that pitch is getting crowded:
❌ The sparkle icon everywhere. If your AI is working, users shouldn't need an icon to know it's there. The experience should just be better.
❌ Solutions looking for problems. AI code completion that makes developers 18% faster sounds impressive, until you realise engineering managers don't feel compelled to buy it.
❌ Complexity without clarity. Personalisation that's so complex it's out of human control. You end up with echo chambers, warped perspectives, and consequences you can't predict.
Most users don't care if you're using AI. They care if your product solves their problem better than the alternative.
If you can't explain why your AI feature makes the user's life meaningfully better in one sentence, you probably don't need it.
2025 is the year ethical AI becomes a requirement.
What this means for you:
→ Transparency: Users need to understand how AI makes decisions, especially when those decisions affect them directly. No black boxes.
→ Bias mitigation: Train AI with diverse data. Run automated bias checks. Have diverse teams review. Audit regularly.
→ Data minimisation: Just because you can collect data doesn't mean you should. Anonymise personal data. Protect everything with strong cybersecurity.
→ Human oversight: Critical business or customer-facing decisions? Keep humans in the loop. AI should augment decisions, not make them alone.
→ Clear accountability: Define who's responsible for AI integrity, ethics, and compliance. Chief Legal Officer? CTO? Create an AI ethics committee if you need to.
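On the bias-mitigation point, here's what one automated check can look like. This is a hedged, illustrative sketch of a single fairness metric (a demographic-parity gap); real audits combine several metrics, and the function name and data are made up for the example.

```python
def parity_gap(outcomes, groups):
    """Demographic-parity check: the spread between the highest and
    lowest positive-outcome rate across groups.

    A gap near 0 suggests the model treats groups similarly on this
    one axis. This is one illustrative metric, not a full audit --
    production reviews layer multiple metrics plus human judgement.
    """
    rates = {}
    for g in set(groups):
        picked = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(picked) / len(picked)
    return max(rates.values()) - min(rates.values())
```

Wire a check like this into CI or a scheduled audit job, with a threshold that fails the build, and "audit regularly" stops being a good intention and becomes a gate.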
Here's the framework we use with founders ⬇️
Not "what could AI do here?" but "what problem are users experiencing?"
If you can't articulate the problem without mentioning AI, you don't have a problem. You have a solution looking for one.
Sometimes the answer is "better search filters." Sometimes it's "clearer information architecture." Sometimes it's "a human writing better copy."
AI isn't the only tool in the box. Often, it's not even the best one.
How will you know if the AI feature succeeded? "It uses AI" is not a success metric.
✅ Engagement? Time saved? Error reduction? Revenue? Define it before you build.
Every AI feature asks users to trust a system they don't fully understand. What are you asking them to trust you with?
If the cost of getting it wrong is high (medical advice, financial decisions, personal data), your bar for "good enough" just went up 10x.
AI models drift. Data changes. Edge cases emerge. Who's responsible for monitoring and updating this after launch?
If the answer is "we'll figure it out later," you're setting yourself up for technical debt and user frustration.
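"Who's monitoring this?" can be answered with something as small as this. A crude drift check, sketched here with a hypothetical z-score rule: flag when live inputs wander too far from the training baseline. Production teams typically use PSI, KS tests, or similar, but the principle is identical: someone, or something, has to watch.

```python
import statistics

def drift_alert(baseline, live, threshold=3.0):
    """Flag drift when the live mean sits more than `threshold`
    baseline standard deviations from the baseline mean.

    A deliberately crude z-score check for illustration -- real
    monitoring compares full distributions, per feature, on a schedule.
    """
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    z = abs(statistics.mean(live) - mu) / sigma
    return z > threshold
```

The hard part isn't the maths; it's deciding before launch who gets paged when this returns `True`.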
This should honestly be Step 1, but we put it last because sometimes you need to walk through the complexity to realise simple is better.
Outcome-oriented design: AI agents that actually take action, not just return information. Think Siri booking your hotel, not just showing you options. This is coming fast – design for it now.
AI as quality control: Using AI to catch accessibility issues, component misuse, and design system violations before engineers touch them. Less glamorous than chatbots, infinitely more useful.
Adaptive interfaces: UIs that dynamically adjust based on user behaviour and context in real-time. Not just "dark mode on/off" but intelligent contextual adaptation.
Predictive interfaces: Minimising steps by anticipating user needs. Done well, it feels like magic. Done poorly, it feels creepy. The difference is in the execution and transparency.
Voice and gesture interfaces: When truly natural and well-scoped, these break down real barriers. The key is not trying to make everything voice-first.
The AI hype cycle taught us something valuable: capability doesn't equal value.
Your users don't care that you can build an AI feature. They care if it makes their lives better.
The founders succeeding at adding AI features in 2025 are the ones who can articulate exactly what problem their AI solves, how it solves it better than alternatives, and why users should trust it.
If you can't do that, you don't have an AI strategy. You have a gamble.
Build what matters. Use AI when it makes sense. Walk away when it doesn't. 🙌