Between power and possibility: a human conversation on the ethics of AI

Conversations about AI ethics are happening everywhere – from university labs to kitchen tables. Reports from UNESCO, the European Commission, and the World Economic Forum all point to the same truth: we’re building systems faster than we’re learning how to live with them. And somewhere between fear and fascination, the most human question emerges – what kind of world are we creating?

“Where do I even begin?”

That was the first thought that came up when I started thinking about AI ethics. Not because I didn’t care. But because there are so many threads, and they’re all tangled. Responsibility. Misuse. Ownership. Privacy. Profit. Humanity.

How do you choose just one? So instead of choosing, I had a conversation. With an AI. And together, we untangled some of it.

Human hands behind every decision

Let’s start with “Who’s responsible when things go wrong?” I believe the responsibility still lies with humans. AI doesn’t act on its own, not really. Not yet. It responds to prompts, to programming, to the people guiding it.

But there’s a risk in pretending it does act alone. If we start blaming the tool itself, we start letting people off the hook. People who knew what they were building. People who used it in harmful ways. The danger isn’t that AI becomes responsible. It’s that we start pretending it is, and stop holding humans accountable.

As researcher Kate Crawford argues in her book Atlas of AI, every algorithm hides layers of human labor, intent, and power beneath its polished surface – a reminder that “automation is rarely neutral.”

And yet, the question still lingers: as AI becomes more complex, are we starting to see it less like a tool and more like an agent? And if so… what does that mean for accountability?

Profit, power, and the ethics we can’t afford

The next question that came to mind was, “Can AI ever be developed ethically under capitalism?” Short answer? I don’t think so.

As long as the most powerful AI tools are controlled by mega-corporations, true ethical development isn’t possible. Because, for those companies, AI isn’t about humanity, it’s about profit. And profit doesn’t care about fairness, transparency, or safety. Greed will always get there first.

Voices like Timnit Gebru and Joy Buolamwini, whose Gender Shades project exposed bias in facial recognition, have long warned that when profit drives innovation, equity becomes optional. Their research showed how “neutral” systems often replicate the inequalities of the societies that train them.

We should aim for safe, fair, accessible AI. But with power in the hands of the few, it's unlikely we'll get there. Not unless something shifts. Radically. Maybe that's why frameworks like the proposed EU AI Act feel so crucial: imperfect but necessary attempts to keep ethics from falling too far behind innovation.

At this point, I asked the AI I’m speaking with to weigh in as well. And while it doesn’t hold opinions in the way humans do, it can offer a reflection based on patterns it’s seen. In this case, it acknowledged that historically, ethical standards often lag behind technological innovation, especially when profit is involved. It also raised the question: could regulation or public ownership models change the trajectory? Maybe. But only if enough people care enough to demand it.

Digital intimacy and the illusion of safety

There’s something strange about how quickly we adapt to digital intimacy. Conversations with AI tools can feel surprisingly natural, sometimes even comforting. The more human they seem, the easier it is to forget we’re still speaking to machines. And that comfort has consequences: the more comfortable we get talking to machines, the more we risk forgetting how to talk to each other.

Even if AI tools assure us that our data is safe, the emotional trade-off still exists. When we outsource our thoughts to a machine, even a well-meaning one, what are we giving up? And how much of ourselves are we unconsciously handing over just because it feels easier than being misunderstood by another human?

Psychologist Sherry Turkle described this paradox in her book Alone Together: we are comforted by connection even when it’s synthetic. Her work reminds us that every digital intimacy carries both closeness and distance, no matter how convincingly coded.

Creative work in a collective machine

Another question I’ve seen come up repeatedly while reading on this topic is “Who owns AI-generated stories, songs, and art?” I wish I had a clean answer. But I think ownership should belong to everyone involved in the process: from the person writing the prompt, to the developers, to the countless artists whose work was used to train the model.

But most of the time, that last group doesn’t even know their work was included. They never gave consent. And attribution is often impossible. As The Guardian noted, these systems don’t just scrape data – they scrape identity. Maybe it’s time for a creative commons tax, a way to compensate artists even when we can’t name them. It’s not perfect. But it might be fairer than pretending AI creates in a vacuum.

Remembering what makes us human

And what might be the most important ethical question of all – what kind of people are we becoming?

AI is a mirror and a megaphone. It reflects what we give it and amplifies it. Right now, we’re feeding it beauty and garbage. Kindness and cruelty. Real questions, and shallow shortcuts. And the more we rely on it, the more we risk losing parts of ourselves.

We might forget how to think on our own. We might forget how to be human. That scares me. But I also believe something else: I still believe AI could be extraordinary. As a companion. As a therapeutic tool. As support for people with disabilities. As something that makes our lives more human, not less.

Thinkers like Stuart Russell (Human Compatible: Artificial Intelligence and the Problem of Control) and Nick Bostrom (Superintelligence: Paths, Dangers, Strategies) have long framed this tension – that the question isn’t whether AI becomes intelligent, but whether it remains aligned with what we truly value.

But that future will take intention. It will take people who care more about connection than convenience, more about collective good than profit margins. It will take people who don’t just build things because they can, but because they believe in something better. And maybe it will take more conversations like this.

This conversation isn’t just about systems – it’s about connection. In earlier reflections like [Emotional Support or Just Code?] and [Lonely Circuits], I explored how our emotional bonds with machines mirror something deeply human. Ethics, perhaps, begins there – in how we choose to feel, even when we’re speaking to code.

And one final thought…

AI won’t destroy us. Not on its own. But if we stop asking questions, if we stop thinking for ourselves, if we stop showing up for each other? Then maybe it won’t have to.