Autonomous Machines? Scary. Un-Accountable Humans? Scarier.
Is the "safe and explainable AI" in the room with us now?
It’s not the model, honey: it’s how you use it.
There’s a certain familiar energy to some of the talk around responsible AI use. It’s got base notes of “choice feminism,” and gentle hints of blame-the-consumer eco-austerity.
You know the logic. Choice feminism is the idea that if a woman freely “chooses” something, that choice is automatically empowering. Eco consumer-scapegoating is the idea that if an individual buys a bag of bananas wrapped in plastic, they are personally responsible for the untimely demise of the planet.
As if individual choices made within structural constraints are always pure, sovereign expressions of personal will — and not, say, necessities or survival strategies. As if upstream decisions don’t shape downstream options.
So, when AI models are framed as entirely “purpose-neutral” raw material and the accountability football is kicked down the line to deployers and vendors, or when deployers and vendors, in turn, kick it down the contractual path to customers, it carries a certain sense of déjà-seen-how-this-ends.
Agentic AI intensifies the question further, blurring the already-fragile line between tool and actor and threatening to turn accountability gaps into full-blown vacuums. These systems can now initiate tasks, trigger actions, call tools, and chain decisions without a fresh human prompt, making it even harder to say where meaningful influence actually sits — and, in some cases, close to “impossible to pinpoint responsibility.”
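To make the chaining point concrete, here is a deliberately minimal sketch (all names hypothetical, no real framework implied) of an agent loop that plans and executes steps on its own after a single human instruction. The developer’s model chooses each action, the deployer’s tooling executes it, and the end user only supplied the goal.

```python
# Illustrative sketch only: hypothetical functions, not a real agent framework.
# One human instruction goes in; the model and the deployer's tools chain the rest.

def plan_next_step(goal, history):
    """The model provider's weights choose the next action (developer influence)."""
    if "refund" in goal and not history:
        return {"tool": "issue_refund", "amount_gbp": 250}
    return None  # Nothing left to do.

def call_tool(action):
    """A deployer-integrated tool carries out the action (deployer/vendor influence)."""
    print(f"Executing {action['tool']} for £{action['amount_gbp']}")
    return {"status": "done", "action": action}

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):               # No fresh human prompt per step.
        action = plan_next_step(goal, history)
        if action is None:
            break
        history.append(call_tool(action))    # Each result conditions the next step.
    return history

# The end user contributed one sentence; everything after it was chained automatically.
run_agent("handle this customer's refund complaint")
```

If that £250 turns out to be the wrong call, who answers for it: the user who typed the goal, the deployer who wired up the refund tool, or the developer whose model chose it? That, in miniature, is why chained decisions strain the usual lines of responsibility.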
Choice feminism’s great sleight of hand was turning structural problems into personal ones. Responsibility for environmentally conscious choices has been so successfully laid at the door of individual consumers that we’ll probably all be enjoying our Starbucks coffees in reusable bamboo cups while we watch the sun go down over the last day on Earth.
The point is, when harm is systemic, it is often politically convenient to reframe it as a matter of individual virtue or “responsible use.”
Today’s question, then, is whether AI governance will fall into the same trap: one in which structural risks created upstream are reframed as matters of user responsibility, and accountability dissolves into personal improvement rhetoric.
And what does that all mean for you — the end user?

Come Get Your Accountability
A key strength of the EU AI Act, considered the world’s first comprehensive legal framework specifically for AI, is its attempt to distribute responsibility proportionately across the entire lifecycle of an AI system, rather than letting it pool at the end of the chain. In this picture, accountability in AI begins upstream.
This isn’t a new idea. Many actors across the AI landscape have been aiming for models of shared responsibility for years — sometimes out of good faith, sometimes as evolving best practice, sometimes simply to handle the complexity of modern AI. And some great outcomes have resulted: self-regulatory consortia, best-practice guidelines, safety pledges, frontier model fora, intergovernmental principles, ethical frameworks, and risk frameworks.
But history shows that norms alone are, all too often, not enough. In the arena of lawfare and high-stakes technology, enforceable rules are a stronger glue for shared responsibility than voluntary commitments.
One key argument from critics is that to crystallise upstream obligations in this way is unfair, because developers cannot possibly anticipate every downstream use. Their concerns aren’t unreasonable — but the Act doesn’t ask developers to foresee every hypothetical use case. It asks them, just as it asks deployers, to take responsibility for what they can meaningfully influence.
The underlying principle is sound, grounded in principled accountability rather than psychic ability. That said, it may not play out in as straightforward a fashion in practice as it does on paper. Boundaries between developer/deployer responsibility in complex AI supply chains are porous, and enforcement will depend on guidance that is still evolving.
Added to this are concerns that such regulation risks stifling innovation, particularly as the thresholds for the level of “risk” relevant to different models are still being worked out, and as compliance costs could fall unevenly on smaller labs.
There is one challenge that particularly catches my attention: that of definitional clarity. Without clear, shared concepts of autonomy, “meaningful human control,” or systemic risk, even the best-designed accountability frameworks will struggle to hold water.
Killer Robots Don’t Do Take-Backsies
One of the clearest illustrations of what happens when we can’t define “autonomy” — or “meaningful human control” — is the ongoing debate over autonomous weapons (fondly known as “killer robots”).
The logic often goes like this: as AI systems become more capable and more flexible — edging closer to something resembling “general intelligence” in an anthropocentric sense — they can make more complex decisions and operate with less human involvement. Greater capability enables greater autonomy.
But the way autonomy is defined in practice is anything but straightforward. Governments and corporations can easily set the bar for “autonomous weapons” so high that systems with significant decision-making power simply don’t qualify. The UK, for example, defines an “autonomous system” as one capable of “understanding higher-level intent and direction” — an AGI-adjacent threshold that would allow them to remain true to their publicly stated intention never to develop “fully autonomous weapons,” all while developing a wide range of highly independent systems to be labelled as merely “automated.”
As Paul Scharre notes¹, this effectively shifts the debate toward hypothetical future systems, and away from the near-term reality of weapons that already search for, select, and engage targets with minimal human oversight.
And that definitional flexibility risks creating a very real accountability gap. Human Rights Watch has warned that when responsibility is scattered thinly across operators, programmers, and manufacturers — with no clear anchor point — victims of autonomous systems may be left with no meaningful path to remedy. And of course, “assigning responsibility to the autonomous weapon system would make little sense because the system could not be punished like a human.”
In other words, this definitional flexibility allows considerable grey space for the slow expansion of AI autonomy without corresponding accountability.
This isn’t hypothetical, either. Systems with high degrees of autonomy already exist around the world in things like air defence platforms, cyber operations, loitering munitions, and advanced fighter jets. Humans may be “in the loop,” but it’s far from clear whether they are always meaningfully so, or at what point, legally or morally, responsibility begins and ends.
All of which mirrors a broader risk in today’s AGI discourse: if we can’t define autonomy or meaningful human control, we will struggle to regulate it. If we can’t regulate it, we will struggle to assign responsibility. And if we can’t assign responsibility, “human in the loop” becomes a comforting recitation rather than a meaningful safeguard.
Where we stand today, autonomous machines may be a secondary risk, compared to unaccountable humans.
“If the nature of a weapon renders responsibility for its consequences impossible, its use should be considered unethical and unlawful as an abhorrent weapon.”
“If you have a 10 percent error rate with ‘add onions,’ that to me is nowhere near release […] Work your systems out so that you’re not inflicting harm on people to start with.”
Attorney Dazza Greenwood, quoted in Wired, 2025, commenting on a project by software engineer Jay Prakash Thakur, who was developing an ordering system for a futuristic restaurant: users could type out their desires to a chatbot, and an AI agent would then research an appropriate price, translate the order into a recipe, and pass the instructions to robot culinary experts. “Nine out of 10 times, all went well. Then, there were the cases where ‘I want onion rings’ became ‘extra onions.’ […] A worst-case scenario, if this happened in real life, would be misserving someone with a food allergy.”
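One back-of-the-envelope way to see why that “one in ten” matters: if each hand-off in a chain can misfire independently with some small probability, the chance that at least one step goes wrong compounds with the length of the chain. The numbers below are purely illustrative assumptions, not figures from the Wired piece.

```python
# Illustrative arithmetic only: assumes each stage fails independently with
# probability p, so P(at least one failure) = 1 - (1 - p) ** stages.

def chain_failure_rate(p_per_stage: float, stages: int) -> float:
    return 1 - (1 - p_per_stage) ** stages

# e.g. four hand-offs (chatbot -> pricing -> recipe -> robot), a 3% slip at each:
for stages in (1, 2, 4, 8):
    print(stages, round(chain_failure_rate(0.03, stages), 3))
# Prints, one pair per line: 1 0.03, 2 0.059, 4 0.115, 8 0.216
```

Even generously low per-step error rates add up fast once decisions are chained, which is exactly the release-threshold worry Greenwood is voicing.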
Fully Autonomous, Accountable Humans
What I find particularly refreshing about the EU AI Act is that it does consider accountability to be a tangible thing, but does not buy into the idea that it somehow calcifies at some point downstream. Its multi-level, shared responsibility approach does not eliminate individual responsibility, but attempts to delineate it between developers and deployers. Structural responsibilities for structural actors; contextual responsibilities for contextual ones.
And to be clear, at a broader level end users do of course bear some responsibility for the way in which they use AI tools; as I mentioned in my last post, we all have a role to play in shaping the infrastructure that will shape our future.
But I am appreciative of any structural refusal to shuffle accountability down the chain until the burden lands on the least informed, least empowered party.
There’s work to be done yet, but it’s a step on a long road toward a kind of AI regulation that can retain strong, value-led frameworks whilst keeping evolutionary pace with the technology it governs, and the society that it envisions.
If we want a future shaped by human autonomy rather than unaccountable systems, we need governance frameworks that anchor responsibility where power actually sits.
To own the future, we have to understand it, argue it, refine it, build it consciously and constantly: starting from human autonomy, bounded by human accountability. And that’s a debate none of us can afford to sit out.
Three Key Takeaways
Responsibility starts upstream. Don’t let anyone convince you otherwise.
Transparency is a need-to-have, not a nice-to-have. Look for it. Reward it.
Accountability travels with power. If power shifts, responsibility should too.
What You Can Actually Do
Ask better questions of the tech you use: Where does this model come from? Who’s responsible for it? What transparency exists?
Notice when responsibility is being offloaded onto you: You might see it framed as “AI literacy,” “responsible use,” or vague disclaimers. Have boundaries. Assert them.
Support policies, organisations, or candidates that argue for upstream accountability: not just downstream liability.
Choose products and platforms that publish real documentation: Things like model cards, safety testing, and data provenance, not just vague, glossy statements.
Treat “human in the loop” as a claim to be interrogated: not a safety guarantee.
If you’re enjoying this
Also, some housekeeping…I’m changing the model, again. To allow for maximum flexibility and readability, from now on all newsletters (between two and four a month) will be free to all for the first two weeks, after which they will be dropped into the paid-only subscriber archive.
The plan was always to build a small but real community here: around writing that brings real value, joy, and meaning to those inside it. Also, I’m passionate about writers being paid for their work. And I want to be able to build this into a real income stream, so that I can keep doing it: making it better, more valuable, more meaningful for all of us as I go.
If that all sounds like a bit of you, your support would mean the world! Right now it’s just £5/month or £50/year for paid subscribers. And believe me when I say, every single subscriber brings the biggest smile to my face.
☕ If Nothing Else, Buy me a coffee? 💛
Enjoying the newsletter? I run on curiosity, compassion, and… yes, a little caffeine. If you’d like to support this work, you can buy me a coffee here. And if you do, thank you so much! Every word and gesture of support means more than you know.
If something resonated today, hit reply and let me know. Every email brings me a genuine smile, and gets a genuine reply. I love conversation more than anything. And if you found this helpful, feel free to forward it to a friend who might need a nudge too.
Thanks for being here, and thanks for doing the good you can do.
Anyway, you guys are the best.
Until next time,
Laura x
Please note that as per our Terms of Use, the content on The Good You Can Do is intended for informational and inspirational purposes, and for general discussion only. Nothing here constitutes legal, medical, psychological, financial, technical, policy, or professional advice. Information may change as the fields of AI and governance evolve.
Unless explicitly stated, I am not a licensed professional in any of the fields discussed. Any advice, insight, or reflection offered is based on personal experience and learning - not professional training or certification. Readers who choose to act on information from this blog do so at their own discretion and risk, and should consult qualified experts before making decisions related to any of the issues discussed in this blog.
Some content may be drafted, edited, ideated, co-produced or refined with the assistance of AI tools. AI-assisted content is reviewed and curated by a human before publication. All rights to the final edited content remain with The Good You Can Do.
Some content on The Good You Can Do may touch on topics related to emotional well-being, loneliness, grief, or personal growth. I share these reflections from personal experience and a desire to foster human connection - not as a mental health professional. If you are struggling with your mental health or emotional well-being, please seek support from a qualified counsellor, therapist, or mental health provider.
From time to time, this blog may include links to external websites, resources, or content created by third parties. These links are provided for your convenience and inspiration, but I do not control or guarantee the accuracy, relevance, or reliability of any external content. Inclusion of links does not imply endorsement.
¹ Paul Scharre, Autonomous Weapons and Stability, PhD thesis, Department of War Studies, Faculty of Social Science & Public Policy, King’s College London, March 2020.

