Machines Know But Don't Care
By Steve Williamson, VP Digital Marketing and Content, eRep, Inc.
Posted Monday, November 10, 2025
Editorial by Steve Williamson
"No computer should ever be given the power of making a management decision because it has no accountability."
— IBM internal training manual, 1979
That statement could have come from any cautious manager in the early days of mainframes, when computers were still the size of small houses and "artificial intelligence" was the stuff of pulp novels. But the warning has held up remarkably well. In fact, it's more relevant now than it was when the memo was written.
The Rise of the Machine
IBM wasn't worried about bad code; they were worried about bad judgment. The authors of that internal training manual understood something that most of the modern tech world is still struggling to accept — that intelligence and accountability aren't the same thing.
A computer can calculate, optimize, and recommend. It can even learn. What it cannot do is care.
Today, algorithms screen job candidates, approve loans, generate policy memos, and steer vehicles down crowded streets. They do these things efficiently, sometimes even more accurately than humans. Yet in every case, the actual moral weight of the outcome — who gets hired, who gets denied, who gets hurt — still belongs to the human who built or deployed the system. The tool doesn't own the result; the person does.
The danger comes when people start confusing computational power with moral authority. Once you start treating a machine's output as a decision rather than a suggestion, you've crossed a line. Efficiency becomes its own justification. A system built to reduce uncertainty begins to erase responsibility.
Decisions Without Accountability
A computer can't feel guilt or empathy. It doesn't wake up at 3 a.m. wondering if it made the right call. It can model behavior, but it can't have a conscience. That distinction matters, because decisions without accountability are just math wearing a necktie.
IBM's engineers didn't write their memo out of fear of machines; they wrote it out of respect for humanity. They understood how easy it is for us to surrender difficult choices to something that feels neutral and objective. It's tempting to let the algorithm decide who deserves the loan, the parole, or the promotion — because then it's not our fault. But neutrality is a myth. The system only reflects the values, priorities, and biases of the people who designed it.
Tools of Staggering Potential
And yet, for all that, machines have given humanity extraordinary leverage. Artificial intelligence, in particular, is a tool of staggering potential. Properly guided, it can accelerate discovery, sharpen judgment, and free people from repetitive work so they can focus on what only people can do — imagine, empathize, create. AI can summarize a thousand medical studies in an afternoon or model the spread of a disease faster than any team of humans could manage. It can find efficiencies that save energy, materials, and even lives. These are not small things.
Used wisely, AI becomes an amplifier of human capability rather than a replacement for it.
Artificial intelligence can widen our field of view and give us better data on which to base our decisions. But it doesn't absolve us of the need to make those decisions ourselves. In the right hands, it's a partner; in the wrong hands, it's a proxy for avoiding moral effort. Like any other tool, its value lies in the intent of the person holding it.
Tools like AI don't make us better people; they only make us more powerful versions of who we already are.
A wise society uses AI to extend compassion, creativity, and justice. A careless one uses it to hide behind spreadsheets and call it progress.
The wisdom in IBM's quote is simple, almost ancient: accountability can't be automated.
Intelligence can be built, but judgment has to be lived. The moral horizon of a tool is always the same as that of its maker. Whether we're talking about a hammer, a weapon, or an AI, the responsibility rests with the hand that wields it.
The future of technology won't be defined by how smart our machines become. It will be defined by whether we stay willing to own the consequences of what they do in our name.
Go to eRep.com/core-values-index/ to learn more about the CVI or to take the Core Values Index assessment.
Steve Williamson
Innovator/Banker - VP Digital Marketing and Content, eRep, Inc.
Steve has a career in project management, software development and technical team leadership spanning three decades. He is the author of a series of fantasy novels called The Taesian Chronicles (ruckerworks.com), and when he isn't writing, he enjoys cycling and old-school table-top role-playing games.