Sam's Blog

Taking the "User" out of Design

Posted on 20 Feb 2019
Note: the opinions expressed here are solely mine. (Duh)

As a designer, I've never been totally comfortable with referring to people as "users". I find the term unethical because it minimizes people's individuality and sense of agency, and I believe it is obsolete: it is rooted in a past when the connection between a person using a computer and the computer itself was clear, which is no longer the case today.


Labelling people as "users" is inherently dehumanizing and reductive. It denies that people have complexity and instead reduces them to a group of quasi-automatons whose only purpose is to "use" the product in front of them, as if using the product were the ultimate goal. It makes us lazy as designers: we fall into the habit of seeing people only as consumers of a product, as endpoints of interaction, when we must train ourselves to see the context and circumstances of people's lives as well.


We have already seen the social costs of this widespread depersonalization and deindividualization, most notably in the trust violations and manipulation of people by companies like Facebook, and in the widespread practice of Internet-based tracking, fuelled by the vast troves of personal data that Google and others collect to serve advertising. But it applies at the small scale as well.

Stripping away people's humanity can enable unethical behaviour, both in you as the creator and in those who use the product itself. Referring to people in vague terms blurs the line between what is good or permissible and what is bad or off-limits in actions that affect them, and it can lead to outright objectification.

How people are framed changes how we treat them, and to achieve the return to humanness in technology that I feel we need, we have to ask ourselves: if the consequences of what we make don't elicit any sort of compassion or moral response, what good does it do?


If people are seen as just data values or endpoints in an interaction, it's doubtful you'll ask "what's the harm?" when manipulating them. It's the difference between "data mining users and their input" and "surveilling people and their behaviour". We must frame the people who comprise a user base in human terms to see more clearly what something really is.


There has been a titanic shift in how we use computers. It is no longer a simple back-and-forth: as we use software, the software also uses us. The relationship that used to be just you and your computer has ballooned to include countless other providers of services and software, many of which have become integral to our lives, at a scale well beyond anything previously imagined.

With that huge scale came a minimizing of the human aspect and of the value of individual people: "What are 100 users out of millions, or 50 thousand out of 2 billion?" But it's here that the cost of "users" is most apparent. The responsibility one has to people is now astronomical, which means one must not fall into the habit of deindividualization and must stay aware that these are still people.

Though many of the major software and service providers present a facade of openness, any real knowledge of how their products are designed or how they affect us is deeply obfuscated, buried in lengthy "user agreements" (there's some more dehumanizing language) or kept secret. The reality is that many of the things we use daily are engineered to maximize their own financial survival, not necessarily to serve the social good or people's privacy, rights, or well-being. That means many products are designed to keep you engaged, regardless of any personal detriment or ethical breach.

To change how the things we design impact people, or to avoid the potential for misuse, we have to shift our perspectives away from the product level and toward the human and societal level and be completely open with people.


Often in design we claim to "put the user first", so much so that we define whole fields around "user experience" (fields that are deeply personal yet defined in impersonal terms), but there I still see the crucial flaw: a lack of humanness. So I think we have to shift our language to make ourselves better designers.

Now, I can't wave a magic wand and move the industry towards a human-focused design approach; I can only advocate for more humanistic, more ethical design thinking. We have to build technology that respects the rights, dignity, and experiences of human beings, and that begins with calling them people.

