Do we need a Hippocratic Oath for product design?
Doctors have one. Lawyers have one. Even engineers in some countries take one. But product design? We’ve got… principle decks, Figma stickers, and Medium think pieces. In an era where AI is rewriting the rules in real time, I wonder if that’s enough. The Hippocratic Oath is essentially a public commitment to “first, do no harm.” It’s a check against hubris.
As Ian Malcolm (played by Jeff Goldblum) presciently warned in Jurassic Park: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
Swap “scientists” for “designers” and that hits uncomfortably close to home. It feels like exactly the conversation product design needs right now. Because let’s be honest: we’re building in an age where “move fast and break things” has already broken a lot. Layer AI on top, and the risks multiply.
A few examples:
AI bias: Generative models that reflect and amplify systemic bias, deployed by designers without question or mitigation.
Dark patterns: Subscription flows that make canceling harder than signing up, or “nudges” that deliberately confuse instead of guide.
Trust erosion: Interfaces that quietly siphon user data, bury permissions, or present “personalized” results that are anything but transparent.
Features over ethics: Launching addictive engagement loops or gamified streaks designed to keep people scrolling, even when it’s harmful.
Accessibility and inclusivity as an afterthought: Treating inclusive design as a “nice to have” or a late-stage checklist item, instead of leading with it from the start.
Parts of our field already treat ethics seriously. UX research and research ops have long worked within clear guardrails around privacy, PII, informed consent, and data handling. We’ve accepted that participant rights are non-negotiable. So why not extend those guardrails across the rest of product design? Why should ethical rigor stop at research when the downstream design decisions can do just as much harm, or more?
An oath, whether formal or not, forces us to confront the ethical dimension of our work. To ask:
Does this feature make someone’s life measurably better, or just more monetizable?
Are we protecting people’s data and agency, or just mining it?
Are we solving problems with empathy and context, or just automating them away?
Have we led with inclusivity and designed this for everyone? Is it accessible?
This isn’t about slowing down innovation. It’s about raising the floor for what “responsible design” looks like in 2025 and beyond.
Maybe what we need isn’t a cut-and-paste Hippocratic Oath, but a design-specific commitment: one that acknowledges power, bias, and responsibility in every decision. A promise to balance creativity with consequence.
Because if we don’t, we may accidentally unleash the dinosaurs, harming the people we design for and ourselves alike. And once they’re loose, it’s a lot harder to get them back in the paddock.