In the past month, my software development and testing workflow has changed drastically, in a way I never expected. It all centers around how I've been using LLMs and AI tools—particularly Claude Code—to handle a larger chunk of my daily work.

Ever since these tools started gaining traction in software development circles, I have been much slower to adopt them than most of my peers. My social media feeds are full of developers and testers I have followed for years talking about how much they are enjoying using AI to boost their productivity.

I, on the other hand, didn't share their enthusiasm due to a mix of skepticism and stubbornness on my part. However, my usage of AI has gone up quite a bit lately, to the point where I hit Claude's 5-hour usage limits for the first time ever on my Pro account. I had to wait for it to reset if I wanted to continue using it, which I never thought I'd see. I'm even considering bumping up my subscription to Claude Max to keep this from happening again.

This might seem like a common occurrence to some, but it was a surprise to me. For the past couple of years, I've dabbled in using AI on and off for my development work with very mixed results, mostly because I was rarely satisfied with what it gave me. I'm guessing it's because it's really difficult to change someone's behavior after 20+ years of doing this work. I also admittedly have high standards when it comes to my output, and I never felt like what AI produced met them.

Tools like Claude Code and Codex produce much better work than they did even a year ago. Models like Opus 4.5 and GPT-5.2 are surprisingly good compared to their predecessors. Knowing when to use their different modes, from planning to having sub-agents tackle multiple chunks of work at once, has made it easier to accept them as the "new normal" for software development.

Going Through The Five Stages

The rapid improvement of AI isn't the reason these tools have become part of my daily workflow, though. I had to go through quite an internal struggle to come to grips with this new way of working. Reflecting on this relatively sudden change in behavior reminded me of the five stages of grief, where a sudden shift causes a person to go through five sets of emotions: denial, anger, bargaining, depression, and acceptance. I realized that over the past few years I have gone through each of those emotions when it comes to AI coding tools and how I've been integrating them into my day-to-day work.

Denial

When ChatGPT was publicly released in 2022, I didn't pay any attention to it, thinking it wouldn't be anything useful for me. But I remember a client gushing about it in every work meeting we had, mentioning how he used it to walk through an issue or resolve a problem his team had. He was already convinced that this was a game-changing technology, and his enthusiasm piqued my interest.

While the product was impressive and unlike anything I had experienced before, I just couldn't see how this kind of tech could be used for real work. Sure, it was novel and useful for minor tasks like summarizing long texts or small bits of development work, but its propensity for giving incorrect answers made me dismiss it as something of a toy. No one could use this for legit work tasks, I thought.

As these tools evolved and new ones started to come out, like GitHub Copilot and Claude Code, I also dismissed them as incapable of doing any serious work. After all, how could any of these tools deal with all the nuances and hard-earned experience that I have gained in over 20 years of being a developer? Needless to say, I refused to believe anything practical could come out of AI any time soon and was unable to see the possibilities. It also led to some intense feelings of animosity towards everything AI.

Anger

I still remember my first outburst of anger caused by AI at work. A few years ago, I worked on a project with a developer who didn't have much experience with our tech stack. Almost every pull request I had to review from this developer had the telltale signs of heavy AI use: long inline comments on simple code changes, a lack of automated tests, and poorly structured code that would likely be a pain to maintain down the road. The code was functional in the sense that it did what the spec asked. However, when going through the changes, I could spot plenty of subtle bugs and performance issues.

I didn't want to call out the developer and chastise his over-reliance on AI tools, but I did point out the issues I found. I was indirectly told that I was being too nit-picky and that we could deal with those things later since the changes "worked." The client was happy that the work was done quickly and the code was pushed to production, ignoring my advice that leaving the issues I found unaddressed could come back to haunt them later—and it did, but that's a different story for another day.

At this point, I began fostering deep resentment towards these tools, feeling that all my hard-earned experience and knowledge were being devalued because of them. Having this particular client prefer fast, cheap output over everything I've learned throughout the years honestly hurt, and my reaction was to dislike anything related to AI. Any mention of it on a company's Slack channel or any social media post where someone brought it up just stirred up bitterness, and I had to figure out a way to deal with it.

Bargaining

Over time, the big players in this space began to release new AI models and tooling that showed a lot of improvement. It was impossible to escape the chatter online, and while a lot of it was the typical social media hype with little to no substance, I couldn't help but wonder if things had changed since my early dismissal of and frustration with these tools. I admittedly started to relent a bit, thinking, "If you can't beat them, join them."

However, this was a very hard thing for me to do, because I couldn't shake the feeling that I was compromising my long-standing beliefs about software development, particularly about quality. I tried to integrate AI into my workflow while clinging to my old ways of working, but it never felt right. I'd use Claude to generate some code, then spend more time criticizing its output and rewriting everything, which completely defeated the purpose of using these tools.

Meanwhile, I kept seeing more and more people get enthusiastic about the possibilities with these tools, talking about how much time they were saving and how much more productive they felt. I just couldn't get myself to see what they were seeing. Part of it might have been because I was still too bitter about the whole thing to give AI a fair shake. But even taking those feelings into consideration, seeing others thrive in this new environment while I was massively struggling with it began to affect my mental state.

Depression

When AI usage began to explode, I dismissed it almost immediately, thinking it was just another entry in the long cycle of overhyped tech like NFTs and the metaverse. But unlike the NFT bubble, which quickly popped, or the metaverse, which never found a compelling use case, AI coding was apparently here to stay. The genie was out of the bottle, and it's impossible to put it back in after we've gotten a taste of what it can do.

At this point, I admittedly had an identity crisis. I began questioning everything about my career in software and about how much time and effort I had spent gaining the skills I possess, wondering if most of those skills would soon be completely obsolete. What once felt like a secure occupation was now being replaced, just as it happened to switchboard and telegraph operators when newer, cheaper methods came along.

As a freelancer, I began to worry that potential clients would start looking for cheaper alternatives or doing things themselves instead of hiring someone like me. I actively avoided conversations about AI tooling because admitting that I didn't want to use it felt like I was admitting that I was becoming irrelevant. It was easier to stay quiet than to confront the possibility that I might be getting left behind after spending two decades building expertise in this field. However, I couldn't avoid it forever.

Acceptance

As I was out for a walk one day towards the end of 2025, thinking about how to deal with my career moving forward, a thought popped into my head: it's time to begin accepting what is instead of actively avoiding it. I can't pinpoint a specific reason why this sudden revelation happened, but I took the message seriously and once again began looking at how to bring these tools into my workflow. This time, I had a different mindset. I made a conscious effort to embrace AI tooling as part of software development moving forward.

What I came to realize as I began using these tools more is that I was entirely wrong about my skills becoming useless. They don't replace all the experience and knowledge I've accumulated over two decades as a developer; instead, they enhance what I can do. Writing code isn't where I bring the most value. Understanding business problems, analyzing trade-offs, and making sure we're building the right things is where I can put all those years to good use. It might sound like an obvious thing, but it took me a while to get to this point.

I also learned that there's a difference between chasing the new shiny thing and strategically adopting new tools. I don't need AI to replace everything I do, and I don't need to feel bad when I choose not to use it. What matters is knowing where I'll get the most out of using these tools. They're just another part of my toolkit, and treating them that way instead of as the thing that will put me out of a job made all the difference in the world.

I'm still learning how to integrate AI effectively in my career and my daily work, but at least now I'm doing it without all the anger, bitterness, and resistance that held me back for so long. I just wish I hadn't spent two years being miserable about something that turned out to be fine. If someone reads this and recognizes themselves in the earlier stages, maybe they won't need as long as I did.