By Adam Hassan • Feb 25, 2026
Programming will never be the same again, whether we like it or not.
In 2023 we complained that LLMs produced logic and syntax errors so basic you couldn’t even run the project. The complaint was completely valid: we couldn’t rely on AI for correctness.
In 2024 the skepticism shifted.
AI was seen as a viable tool, but many complained about being forced into Copilot-style features with no way to opt out. People were asking how to get rid of AI.
These arguments were valid too. We faced unwanted suggestions and auto-generated code that introduced issues and often made the work take longer than writing it by hand.
There was a mismatch between management expectations and reality. As developers, we could see that we couldn’t rely on it, but stakeholders were drowning in hype and news, and that often created even more unwanted pressure on developers.
And even when the tools started to work better, the everyday reality still didn’t match the hype. Having full functions written via autocomplete seemed quite magical, but AI still wasn’t good enough to be trusted to do truly useful work.
A perception that emerged was that generation doesn’t equal delivery. Reviewing and correcting the generated code took significant time, and often it didn’t feel like AI was saving time at all; it was just adding another layer of work.
Then, in 2025, things changed dramatically. Never in our lifetime has AI been this capable and easily accessible.
If I told you in 2023 that we would have a tool that could work across several projects in parallel, create a full plan for a complex feature, ask you questions, and then execute it with almost perfect implementation, nobody would have believed me. But that is the reality we live in today.
Yes, there were studies suggesting AI could contribute to skill degradation (arxiv.org/abs/2512.19644). And yes, reports claimed that 95% of GenAI initiatives were failing to produce meaningful outcomes. But I think the problem isn’t AI — it’s how we’re using it.
Getting a complex feature out across several domains (API, frontend, etc.) has previously required lots of analysis work, meetings, grooming, and potentially someone with frontend expertise to build a good UI and a senior backend developer to build the necessary business logic. It then needed review from UX, other team members, and relevant stakeholders. By the time you pushed that change, it came at a significant cost, both in the human time it consumed and in the way it normalized overengineering and insignificant tasks. That, in turn, creates even slower teams.
Recently, for testing purposes, I took one of these initiatives and ran it through Claude Code, ignoring most of the existing processes. Without going into much detail, it executed the task perfectly.
But what was really interesting wasn’t the code itself. It was the research. I could run subagents that read research papers, credible sources, UX principles, and best practices, all while having full access to the existing codebase and its established patterns. It produced a “Design Decisions & Research” document that I could review, edit, and add my own input to. That document then became the reference point for building the feature.
When we build features ourselves, it’s not realistic to spend this much research effort on a single task. You don’t have time to read three papers, study how other products solved the same problem, and cross-reference UX best practices before writing your first line of code. But with AI, you can, and that changes the equation entirely. You’re not just shipping cheaper software that took less time to produce, you’re shipping better software.
There was some pushback. Some people felt they were no longer needed, but from my perspective the process simply changed into a much better one. Instead of spending most of the time overanalyzing before pushing a feature, I can now push the feature and start getting feedback from different stakeholders, which is both faster and better.
Now, some things will still require deep expert involvement, but most things won’t.
Large companies have yet to benefit from this, and the ones that succeed in transforming into AI-first organizations will see tremendous value. Here, the human factor, not AI, is the main obstacle.
“I can write that code faster myself.”
“If AI can do my daily job, what do I do?”
And management expects people to use it, but provides no mandate or training from above. Many also don’t push for the best tool: because the company already has a Microsoft Office license, they naturally just hand you Copilot, which often isn’t the best fit for everyone.
So what do we do? To be honest, I don’t know. I don’t think anyone does. But the clear conclusion is that programming will never be the same again, and the parts of the craft you enjoyed before might not translate well into this new era.
Some people will enjoy the new way of prompting their way to a feature, and be fine with contributing to the task differently than before.
Others will find it less meaningful; it doesn’t give the same dopamine kick as debugging a problem for several hours and finally seeing it compile.