This week a thread on Twitter blew up about a CEO whose AI agent deleted a production database during a code freeze while he was “vibecoding” a demo. Vibecoding is a growing trend in which users direct an AI to produce code that meets their requirements, typically with little to no human-written code. It’s a step beyond using AI to get to a starting point, because the goal is for the AI to write code that fits the requirements entirely on its own.
If you would like to read more, here is a link to the news story: SaaStr Vibe Coding Incident. I wanted to use the opportunity to talk about where I see the future of the tech sector going in light of recent AI trends.
As AI adoption grows, news stories like this one highlight that the more things change, the more they stay the same. Even in a best-case scenario, working with a purpose-built agent, it’s possible to see AI hallucinations, unpredictable behavior, and missteps. In critical applications, good CI/CD practice is always going to include peer review, whether the author of the code is a human or a machine.
It’s for that reason that I remain optimistic when people ask me whether AI is coming for our jobs. Even if machines allow us to output far more code than our staff could before, it remains incredibly important to have checks in place: staff who review and vet the code that AI produces. The only alternative is an ouroboros of AI checking its own work, validating its own results, and approving its own releases.
I hang out with a lot of fellow parents, and I’m often asked whether it still makes sense to learn to code given the recent advances in AI. I think it’s more important than ever. Scripting languages are exploding in popularity, but the low-level languages that make everything else work, and the computer science knowledge that underpins it all, will remain incredibly valuable skills.
In many science-fiction series, notably Foundation by Isaac Asimov, technology becomes a kind of magic understood only by the few and powerful. I stand amazed as I look at current events in my field and see the early signs of a move in that direction. People I interview are using AI to answer not only difficult practical questions, but also basic questions about data warehousing best practices.
My ultimate hot take about AI is that as good as it gets, in its current form, namely large language models, it can only mimic. Every bad practice committed by someone who doesn’t know better adds to the knowledge base that trains it. AI, even a purpose-trained agent, does not know how to code. It knows how to produce valid code based on its training data; it fundamentally does not understand what it is doing. So it’s my firm belief that we are headed for a future where anyone can produce code, but few will be able to produce code well. A one-person team may be able to code an entire backend for a given application in a week, but it will take a skillful architect to orchestrate the code and decide what should be built. Likewise, it will continue to take skilled people to create requirements that match the business need, and skilled people to write test cases and validate the end product.
It is likely that many companies will choose to get by with “good enough,” but there will always be a competitive advantage in doing things well. I love that I’m in a role at a company where they share that commitment to excellence.
As a final note, more candidates are using AI in their daily roles and in the interview process. Consequently, more and more companies may effectively find themselves indirectly trusting their codebase to AI, but with extra steps and expense. If workers lose the skills to evaluate the code their AI agents produce, they will find themselves reacting to the code rather than proactively steering the agents to produce effective code that is fit for its purpose. If you need help with any of that, or want to share your thoughts, feel free to reach out; I’d love to chat.
