Ethical Tech

Unintended Consequences: Tech Ethics & Human-Centered Design

How can we get designers and technologists to better consider unintended consequences and ethics while developing products & services?

All Tech Is Human hosted a discussion featuring Raina Kumra and our VP of Strategy, Sheryl Cababa. The conversation, recorded live on video, was moderated by David Ryan Polgar, founder of All Tech Is Human. Sheryl strongly encourages designers and technologists to consider unintended consequences and tech ethics when creating products and services.

David, All Tech Is Human (ATIH): You’re talking about getting designers and technologists to take on more social responsibility. Who is responsible for ethics in technology? Is it technologists and designers, or the CEO? Are we in a shifting landscape where we’re trying to decide? Is it politicians?

Sheryl: There’s a perception that we need to be more accountable for our work in terms of societal impact. It’s complicated. It’s nuanced. The responsibility in terms of policy runs from top to bottom. Some technologists have a mindset of “That’s above my pay grade.” I see a lot of technologists absolving themselves of responsibility; they might feel the decision-making power lies above them.

“My hope is that we find ways to be more accountable in our decision making, and connect those decisions to our day-to-day work and how we impact society.”

How do you educate clients about ethical technology when it might not be a part of their vernacular?

People know they need to be thinking about ethical technology, but it’s not prioritized within processes, KPIs, or organizations. It’s hard to integrate good ethics practices when shareholders, executives, and employees are not rewarded for it. [An ethical technology way of working] is challenging when teams fundamentally are not ready for this type of thinking.

A client I formerly worked with was developing an emerging technology, and my team and I brought up an example from Black Mirror as a cautionary consideration. One member of the client team instead wanted to capture the example as a business opportunity. Black Mirror is meant to be a cautionary tale, not inspiration for unethical behavior and business ventures.

“Ultimately, people are in very different places when it comes to integrating ethics into their business practices, and some haven’t even started.”

Do you think the word “ethics” can be a hurdle?

Sheryl: I am not anti-Corporate Social Responsibility, but I think it’s kept separate from core products and services. It feels like an ethical side hustle rather than a core part of a business. I try to encourage core teams in organizations to incorporate ethics rather than have it be separate. I’ve worked with Microsoft on some toolkits; they have a program called “Responsible AI” rather than “ethical AI.” I like that.

Could you speak about the blurry ethical boundary in design between persuasion and the engineering of people’s beliefs and behaviors? Thinking here of captology, for example.

It depends on how persuasive design is being used. Richard Thaler and Cass Sunstein wrote a book several years ago called Nudge. The concept of behavior design has been terribly misused in tech. Infinite scrolling, for example, is a behavioral manipulation that exploits our attraction to randomized rewards to keep our eyeballs engaged. This is one of the reasons I frame the methods I use as “Outcomes-Centered Thinking.” Some prompts I use are:

  • What is the outcome you’re trying to achieve and how does it affect those who use or don’t use your products?
  • Is the product meant to benefit your company or shareholders over society?
  • Dark patterns are regularly used to get people to behave in ways that might not be in their best interest. How are we (technologists) creating harm?

I’d like to see [persuasive design] used in a more responsible way. I think it’s been misused by companies to capitalize on people’s attention.

If an app, website, product, or any technology would suffer financially from introducing or improving ethical guidelines, is it inherently unethical in the first place?

I try to avoid getting into discussions about whether something is inherently ethical or not. Usually, we can gauge whether the outcomes of the products we build are beneficial for society. I don’t think you could look at Instagram or Facebook as inherently unethical, though I do think there are ethical challenges in terms of the outcomes we see in the world as a result of using social platforms. The decisions those companies have made that surface and prioritize disinformation are an unethical outcome. A better way of looking at it is to try to anticipate what could happen as a result of your decision making and determine whether those outcomes are good for humanity.

Looking at decisions through the lens of society is a more appropriate framework for decision making.

Regarding accountability: What do you think about the imbalance in accountability, i.e. if AI damages human lives, accountability is diffused, but if humans damage AI hardware, they are already “delinquents”?

On the topic of self-driving, autonomous vehicles: does accountability get diffused if there is an accident? Is there a danger around determining who is responsible when someone is hit by a vehicle? Do they blame a company? An algorithm?

I think we have to consider what it means to integrate these systems as replacements for human interactions or human accountability. An example that falls into this category is the legal system, where AI might stand in for a human judge or lawyer. I have gotten the question, “Could AI be more fair than a racist human judge?”

We can’t assume that would be the case. With a human judge, you can report them, or word would get around. What is problematic with AI is that the decision making is not transparent. The system could be racist, and we wouldn’t know who is responsible, aside from whoever implemented the technology. Our systems of accountability are not well equipped for this; we don’t have the frameworks in place to answer that question right now. I do push for companies to be held accountable for the software they build.


Sheryl Cababa is a multidisciplinary VP with more than two decades of experience. She grew her craft as Executive Creative Director at Artefact and, prior to that, as a designer at Frog and Adaptive Path. At Substantial, she works with all sorts of interesting people and companies, conducting design strategy and research. She believes the practice of design needs to be more outcomes-focused, and that designers and technologists need to take greater responsibility for their work by considering unintended consequences. She pushes for design to be more transparent, meaningful, and ethical.

Sheryl is an international speaker and workshop facilitator with plenty of experience running design strategy workshops and leading projects. In the end, she’s all about not just orienting around a human-centered approach to design, but also building an understanding of systems and outcomes to improve people’s lives.

When she’s not in the office, she can be found at the University of Washington helping educate the next generation of Human-Centered Design and Engineering students or attending a board meeting for Design in Public.

Interested in learning more about how Sheryl + Substantial approach ethically building products? Get in touch.

Let’s build a better future, together.