Tailbone evolution
Are we trading long term wisdom for Fool’s Gold?
Martijn Veldkamp
“Strategic Technology Leader | Customer’s Virtual CTO | Salesforce Expert | Helping Businesses Drive Digital Transformation”
June 6, 2025
I want to stop everybody celebrating that AI writes your code. I think you might be cheering for the obsolescence of true problem-solving skills, leaving behind a legacy that no one truly understands. For now I, and thus this article, have more questions than answers.
The AI code rush
It spits out code in seconds, a great short-term win. But what’s the long-term cost when no one on your team, not even your architect, can truly articulate why that code is the right solution, or how to maintain it when the AI’s ‘understanding’ hits a wall?
Our tailbones are harmless evolutionary leftovers. Well, if you’ve ever fallen on your tailbone, you may beg to differ on the harmless part. Of course, the primary point of a vestigial organ is its lack of original function, not whether it can still cause pain. But what if our continued reliance on AI for quick coding is actively atrophying something far more critical: the deep, nuanced understanding of the very problems we’re paid to solve? Are we supposed to evolve into mere AI prompters, losing the ‘muscle’ of genuine architectural foresight and domain mastery?
The seduction of speed vs the necessity of depth
The siren song of AI-powered development is deafening. Well, at least the marketing is in my internet bubble. Instant code, accelerated timelines, the allure of hyper-productivity. To me these are potent but short-term gains. I think that in our rush to ship faster, we are sacrificing the long-term pillars of robust, sustainable software: genuine problem understanding, maintainability, and refactorability.
The great Jos Burgers said: “It’s not about the technical capabilities of the drill, it’s about the hole, preferably without drilling.” I think we are focusing on the wrong thing. Lines of code, or solving the problem? Code is a means, not the end. Software development, at its heart, isn’t about producing code. It’s about solving a problem for a user or a business. That solution doesn’t end when the code compiles. It lives, it breathes, it needs to be adapted, debugged, refactored and scaled. As a team you iterate towards a better understanding of the problem and how the solution fits. This is where my questions arise.
AI Ownership?
Do you truly understand the problem domain, or even the solution you’ve implemented, if an AI generated significant portions of it based on a prompt? What do you do when the inevitable bug appears? How do you refactor to meet ever-evolving business needs? This isn’t just an individual developer concern. It’s a profound future challenge. The roles of Lead Developer and Software Architect have always been more than just delegating coding tasks. They are the keepers of the faith, the custodians of the system’s integrity, the champions of its long-term vision. Yeah, I want to link some bands to these. 😉
How do we review and take ownership of generated code if the team’s (or even my own) understanding of its intricacies is superficial? Are we shifting from judging code quality in a human context to validating the output of a black (or at least opaque) box? What will we do in the future, knowing our team might not possess the granular understanding to maintain or evolve these systems without significant AI assistance? This could lead to overly simplistic designs that match a perceived lower skill floor, or we might end up with dangerously complex AI-stitched systems that are brittle and opaque.
The atrophy of problem-solving and domain expertise: the true tailbone
The evolutionary dead-end we risk isn’t the loss of raw coding ability. It’s the atrophy of deep problem-solving skills and intimate domain knowledge. If we make AI the lead in translating problems to code, the human developer’s muscle for analytical thought, for breaking complex requirements down into logical software structures, and for anticipating edge cases beyond the AI’s training data may weaken. I still haven’t seen any AI start with “well, it depends…”
Just ask yourself: where did you write down why one pattern is strategically superior to another in a specific context, considering long-term trade-offs? How do we identify the unmet needs of the customer? When did we discover a flawed assumption? Where are the talks at the coffee machine documented?
So, how will an LLM learn any of this? This nuanced understanding of a complex business is learned and built through the struggle of translating its messy realities into clean code. I am of the opinion, for now, that if that “struggle” is outsourced, the depth of understanding is also outsourced. And we have all worked with teams that were either nearshored or outsourced. How did that shared understanding go?
Solving today’s problems with yesterday’s solutions
This brings me back to the “echo chamber”. If AI is trained on existing codebases and solutions, it excels at producing variations of what’s been done before.
And AI is trained on exactly those codebases.
True innovation often comes from a deep understanding of a problem that allows one to see an entirely novel approach, not just a new implementation. If our primary tool for “solving” is an AI that reflects past shitty solutions, are we limiting ourselves to incremental improvements within existing paradigms?
Where does this leave us?
I am not advocating rejecting AI tools. But I do think we need to be very clear about our relationship with them. The focus must remain laser-sharp on solving problems effectively for the long term. My argument is about the slippery slope and the potential future of these tools.
I am of the opinion that we can use AI to augment our work. It can handle boilerplate, it can suggest alternatives, but the human must remain the strategic thinker, the domain expert, the one who takes ultimate responsibility for the solution’s integrity. We all need to encourage each other to question and validate AI outputs. Assume nothing. Our most valuable skill will be the ability to deeply analyze a problem, understand its domain, and architect a robust and maintainable solution.
Perhaps the real ‘lead developer’ and ‘software architect’ of the future will be defined not by how quickly they can prompt an AI to produce code, but by understanding the problem so profoundly that they can guide anyone, human or AI, to a solution that stands the test of time. Anything less is just building shinier tailbones.
