April 29, 2026 · Mason Stallmo

IBM Had It Right In 1979

“A computer cannot be held accountable; therefore a computer must never make a management decision.” — IBM, 1979

If you’ve spent enough time around programming circles on the internet, you’ve probably seen this quote before, attributed to someone at IBM back in 1979. Most recently, Simon Willison posted it on his blog, saying it “…could not be more appropriate for our new age of AI”. I couldn’t agree more.

Every other day, sometimes more often, I see someone on my Bluesky feed attempting to lay responsibility at the feet of an AI agent, talking about how “AI does x” or “AI does y”, when it is still the programmer who is in charge, not the AI. Given how advanced these tools are, it’s easy to see how the two get conflated; in many ways they do feel like working with something that has agency. Even so, this is a profound misunderstanding not only of the tool but of the ultimate responsibility of the programmer. IBM had it right in the 1970s: the buck stops with the programmer, and AI changes nothing about that.

As AI has become more useful for coding tasks, there has been a lot of noise from projects about not accepting AI-generated code. The reasons given frequently center on the quality of the code produced, but that places accountability in the wrong place. If you think the code is bad, hold the programmer submitting it accountable; don’t dismiss an entire class of tools. Banning AI tools specifically shifts accountability onto the computer instead of keeping it where it belongs: with the programmer.

Fundamentally, AI is a tool: a uniquely powerful tool, but a tool nonetheless. Tools are used by humans, and nothing about AI changes that fact. That is not to say AI isn’t different from the tools we’ve had before; its ability to operate for long periods without direct input and to perform tasks that used to be reserved for humans is unique. Even so, the principle of accountability applies just the same. AI has no initiative or sentience; there is always a human somewhere who has had input on the behavior of any AI agent. It’s the next-level expression of the automation we’ve already had via simple tools like cron and shell scripts.
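The cron comparison can be made concrete. A crontab entry runs unattended, sometimes for years, yet nobody blames cron when it deletes the wrong files; a minimal sketch (the schedule and script path below are illustrative, not from any real system):

```shell
# A human wrote this line; the machine only executes it on schedule.
# If the cleanup script removes the wrong files, the author is
# accountable, not cron -- the same relationship we have with AI agents.

# m  h  dom mon dow  command
  0  3  *   *   *    /usr/local/bin/cleanup-old-logs.sh
```

An AI agent operating for hours without input is the same arrangement at a larger scale: a human set it in motion, so a human owns the outcome.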

The quality of a programmer’s work has always been the programmer’s responsibility. If a tool they used caused an issue, the responsibility still fell to the programmer, not the tool. AI has changed nothing about this dynamic; it has only made it muddier. Our tools can now do work that used to be reserved for humans alone, which makes accountability less obvious than it used to be, but it doesn’t fundamentally change where responsibility lies.

There is a fundamental change under way in how the software industry works, and it’s not clear how things are going to turn out. We are in one of those moments in history where the decisions we make now will have lasting consequences for those who come after us. We have an opportunity to establish the culture around how these tools, and the tools yet to be built, are used and received. We can affirm that it is always the humans doing the programming who are accountable and responsible for the code they produce, not the tools they use. Allowing responsibility to land anywhere other than on the people using the tools would be a great failure of our industry. In the 70s, IBM knew the importance of placing accountability with humans rather than machines. They couldn’t have known then what software development would look like 47 years later, or how well that sentiment would age. We ignore their insight to our own detriment.
