
Why Human Problem-Solving Still Matters
A few days ago, I built a quiz.
It wasn’t just any quiz; it was a small experiment in turning intangible ideas into something interactive. A way to help leaders reflect on their challenges and explore potential support. It worked beautifully, until I tried to improve it.
One tweak. One small change. And suddenly, nothing worked.
I turned to AI for help. Over the next day, I cycled through suggestions, watched as different tools proposed similar fixes, and hit the same walls again and again.
As someone trained in both technical and creative problem-solving, I found myself fascinated, not by the quiz itself, but by why AI couldn’t solve what seemed like a straightforward issue. The difference between the solution that worked and the one that didn’t was minimal.
The experience revealed something bigger: a cautionary tale about how we approach problem-solving in an age of AI.
The Allure and Limits of AI Problem-Solving
AI is incredible at many things, I’m told: generating ideas, automating repetitive tasks, even writing code. So, I’ve been trying to learn how best to use it. I’ve been learning to write better prompts, and doing that already helps clarify what problem I want to solve and what outcome I’d like. In terms of Creative Problem Solving, I find writing prompts forces us to focus consciously on the clarification phase.
I’m aware of the limits of my AI skills and knowledge, and I probably didn’t make the tool’s job any easier as it tried to resolve my quiz issues. Still, I couldn’t help noticing some recurring behaviours and limitations during my problem-solving challenge. I tried multiple models and hit the same struggles:
- It Can’t See the Whole System
AI operates on the information it’s given. It doesn’t notice missing context, whether that’s a hidden WordPress plugin conflict or an unspoken organisational dynamic. In plain terms, I was telling it what I’d done, but it couldn’t see my directory, my files, or the responses. I could upload my files to it, but it couldn’t see them working within the system.
- It Defaults to Patterns, Not Nuance
When stuck, AI recycles familiar solutions rather than questioning whether the problem has been framed correctly. (How often do we do the same?) It kept applying generic solutions to my specific situation. Sometimes the drift was so bad that it started offering solutions for an interface I hadn’t even installed. Whilst pattern recognition is incredibly useful, AI lacks the context to use it wisely.
- It Can’t Experiment in Real Time
Humans test, observe, and adapt. AI suggests but can’t feel when an approach isn’t working. The feedback loop is limited. In my case, the issue wasn’t the code itself, which worked perfectly in a terminal window; it was porting that code and making it work on my website, in an environment the AI couldn’t see. It had very little feedback other than what I gave it (furiously copying error messages, modifying code, recompiling; rinse and repeat).
- It Lacks Judgment
AI doesn’t know when to abandon a flawed strategy, or when the problem itself needs redefining. In this case, I was the one noticing that we’d already tried something similar, and that this was the fifth time it was offering a “final, bulletproof” solution. It was stuck in a rut, and the more context I added to the thread, the deeper we dug. Once fixated on a flawed approach, it kept iterating within that dead end instead of questioning the premise. I had to call a break and start over in another conversation.
The Human Edge: Problem-Solving as an Art
This isn’t just about debugging code. It’s about how we solve problems in business, leadership, and change-making.
The best problem-solving blends:
✔ Methodology (structured thinking, such as the Cynefin framework, which identifies levels of complexity and the most appropriate tools for each context)
✔ Adaptation (knowing when to pivot)
✔ Judgment (recognising when a tool, or AI, is leading us astray)
In simple contexts (clear cause-and-effect), AI excels.
In complicated domains (multiple right answers), it can assist.
But in complex or chaotic situations? That’s where human intuition, experience, and flexibility become irreplaceable.
I did an assignment with Hoffmann La Roche on this topic, building a framework for high-impact decision-making in complex environments. The initial scope was about the use of AI, but when it came to complexity we had to fall back on the human brain to manage the ambiguity.
Why This Matters for Leaders and Consultants
As AI becomes the go-to “solution” for everything, I feel that we risk forgetting:
🔹 Not all problems are technical. Many are adaptive, requiring shifts in mindset, behaviour, or strategy.
🔹 Tools are only as good as the hands (and minds) wielding them. AI can’t tell you when its own approach is flawed. In the same way, it couldn’t tell me (because it’s friendly) that I lacked the skill to give it correct instructions.
🔹 True problem-solving is iterative. It requires testing, observing, and sometimes stepping back entirely.
This is where skilled facilitation and consulting add value: not in having all the answers, but in navigating uncertainty, spotting flawed assumptions, and adapting methods to fit the problem, not the other way around.
A Call for Better Problem-Solving
Did I eventually fix my quiz? We made one last-ditch attempt with an entirely new approach, stepping back and simplifying. It was time for me to let go; I had spent enough free time on the topic. But it’s really hard for me to leave something unsolved. So I told the AI I was quitting. It has even more trouble stopping than I do, and it sort of ran after me, suggesting a completely different approach: running in my browser, using my code. It worked first time!
The lesson? As we lean into AI, we must also sharpen our human problem-solving skills:
- Diagnose before solving. Is this a simple, complicated, or complex problem? (Cynefin helps here.)
- Stay adaptable. If a solution isn’t working, question the approach, not just the execution.
- Know when tools mislead. AI, frameworks, and models are helpers, not replacements for thinking.
Because in the end, the best solutions come from humans who understand both the tools and the craft.
What’s your experience? Have you hit walls where AI fell short? How do you blend methodology with adaptability in problem-solving?
(And if you’re wrestling with a complex challenge, whether technical, strategic, or organisational, let’s talk. Sometimes, the best solution starts with stepping back, not digging deeper.)