How Nir Eyal’s habit books are dangerous

Making a case for vulnerable people

Hired as a speaker throughout Silicon Valley and the international tech world, Nir Eyal’s appeal and influence cannot be ignored. He wrote the book that outlines a technique for helping companies create products and services that tap into the psychology of habits. The book, Hooked – How to Create Habit-Forming Products, argues that getting people hooked will supercharge growth, increase customer lifetime value and provide price flexibility. The latter means that you can more easily raise prices, as people will be less sensitive to price changes.

Using many of the same concepts found in game mechanics and gambling, Nir Eyal’s teachings assist in the creation of technology that uses variable reward systems to make it more difficult for people to stop using it. Of course, when your book is named Hooked – How to Create Habit-Forming Products, you are certain to attract readers with questionable intent.

Making a product that is as habit-forming as a drug must be a dream come true for many product owners. And while Nir Eyal himself emphasises that it is wrong to addict people, it’s not obvious how being hooked on a product or service is better for the individual than not being hooked, and perhaps just appreciating or liking something because it provides true value for them. For my part, I struggle with the word hooked.

Here’s what Merriam-Webster’s thesaurus has to say about it:

hooked. adjective.
1 having a compulsive need for a harmful substance or activity
The group’s mission is to help those who become hooked on opioids.

Synonyms and near-synonyms:
addicted, dependent, strung out (the list continues)

In most thesauri, hooked and addicted are synonyms, and it’s no riddle why “addiction” keeps springing to mind when discussing the habit-forming steps outlined in the book. Later in the book Eyal, speaking to the reader, acknowledges “how we secretly wish” all users “would become fiendishly hooked to whatever we’re making”. If having fiendishly hooked users is what Eyal believes most companies wish for, I would perhaps have hoped for a stronger deterrent to pursuing that path than the book manages to convey. How do we identify at what point a user becomes addicted rather than hooked, and what responsibility does the company have in those instances?

When thinking about video games and casinos, however, remember that playing the game is often the goal itself. Choosing to participate is part of the agreement, and for added protection there is often a good deal of regulation around that participation. This is an important point, as there is nothing inherently bad or wrong with spending time on entertainment. It’s when time spent clashes with intent, or interpersonal relationships and obligations are disrupted, that we see negative impact.

But conversely, in the many “free” online communities and services we devote our time to in the 21st century, few people are aware that they are entering a veritable casino — places designed specifically to keep them there longer and paying attention to targeted messages. That’s not what they signed up for.

Now, some people will enjoy themselves thoroughly, accepting the game; some people will not be as hooked as others; and others still will feel like it’s affecting their health, work or relationships. People with such a variety of experiences may not understand each other’s reactions, and may argue over whose experience is more valid. The question that matters is not what choices people make, but whether those choices are conscious and based on informed consent, or whether they are uninformed and lead down a path that opposes their own expectations and well-being.

And remember that simply because those who are harmed may be few in number does not invalidate the harm. In fact, it would create a dangerous precedent to ignore those most vulnerable on the basis that a service still benefits a majority. Sadly, I am convinced that precedent is already set.

And today, the potential for harm in numbers is greater than ever as many companies can gain 24/7 access to people through their electronic devices.

As Eyal writes:

Habit-forming technology is already here, and it is being used to mold our lives. The fact that we have greater access to the web through our various connected devices — smartphones and tablets, televisions, game consoles, and wearable technology — gives companies far greater ability to affect our behavior.

Hooked then goes on to explain how to make use of this opportunity, albeit – as I will address further below – with a chapter about morality.

Now if you happen to be hooked, here is the painkiller

Nir Eyal is also the author of a recently released second book: Indistractable. In this book Nir Eyal wishes to help people distance themselves from the distractions they are hooked on, and become better at taking control over their own lives.

I expect the irony to be clear, but I will spell it out: book 2 is written to help people get un-hooked from powerful habits that tech companies have imposed on them. Meanwhile, the key selling point of book 1 has been to help companies create habit-forming products by getting people hooked. There is a market for book 2 precisely because many companies are successful in creating those habit-forming products, whether Eyal’s book has been their guide or not.

The reason for writing Indistractable is not that Eyal has any regrets about writing Hooked, but rather his firm belief that people need to start taking control over their own lives. In fact, much of the book opens with how Eyal himself struggles to avoid getting stuck on his devices for long periods of time, or at the wrong time – for example, when his values tell him that family should come first.

I do appreciate this sentiment and the conviction Indistractable displays, but for me there is a huge disconnect when bridging the two books.

Indistractable acknowledges that there is a constant struggle in our everyday lives to prioritize those things that we value in life, as tech is competing for our attention. To fight it, Indistractable argues, people need to put routines in place to always be resisting and obstructing the external triggers and actions coming their way.

While this applies a well-meaning self-help response to dealing with everyday tech that is designed to make us hooked, it does not address the underlying problem: why there is a hefty struggle going on in the first place. Why does tech exacerbate manipulative business practices, and how is growing knowledge about habit-forming techniques being misused? What are the long-term consequences of people increasingly adopting habits imposed by tech, that they later must struggle to rid themselves of? If the individual has a responsibility to respond by adapting to this new world, what should we expect or demand from the companies themselves? Perhaps most important, I would have appreciated reasoning around accountability.

But surely it can only be good to manufacture good habits?

Time and time again, I see the argument that getting people hooked is good if the outcome is positive. Like exercising, like eating healthier, like taking steps to improve the environment. Rarely do I see these arguments address the fact that there are many industries focused minute by minute – through marketing, nudging and all forms of manipulation – on pushing people to perform what benefits the company rather than the individual. Many of these efforts were outlined in Vance Packard’s 1957 book The Hidden Persuaders (always worth a re-read) and more recently in Buyology by Martin Lindstrom.

And really, in the midst of a complex world of bustling corporate influence and individual traits, there is no sure-fire formula, making every attempt at pushing someone in a specific direction an act with an uncertain outcome. Different people will be susceptible to habit-forming products and services to varying degrees, depending on biological, psychological as well as environmental circumstances.

Many companies will always search for ways to persuade, influence and make people hooked, but a pressing issue for me as a designer and tech enthusiast is whether we work to assist the practices of those companies, or if we work to deter them altogether.

The idea that we create change by influencing people to create so-called positive habits, instead of educating them about the many actors already influencing their habits, is a maddening concept for me. If “winning” relies on getting people subconsciously hooked on the “right stuff”, then we’ve already lost. The company with the most to invest, and the greatest endurance, wins.

I’ll give you this: there are, for example, exercise apps that help people make changes in their lives. Ones that employ nudging. But once again, these apps are often chosen and asked to do just this. I purposely downloaded the app and asked it to nudge me in a certain direction. It’s as close to consent as most apps come at the moment. My point here isn’t that all exercise apps are good; my point is that it’s much easier to succeed with informed consent in this type of app.

Please note how this differs immensely from downloading an app that is covertly pushing me in a thousand different directions I did not ask for, did not expect, have not consented to and am likely only vaguely aware of.

And please forgive me, because to make the above point I had to leave out how most health apps are based on bogus science. You’ll notice how I didn’t say the habits they create are necessarily positive.

Nir Eyal and ethics

I of course do not believe Nir Eyal has any evil intent. He says himself he has the exact opposite intent. He wants people to build good, healthy products and use his teachings to attract and keep a user base and change people’s behavior for the better. And to reason around good intent, Eyal has coined the phrase “the morality of manipulation”.

I believe Eyal is really good at popularising and explaining ideas and concepts about human behaviour that others can readily make use of – for whatever gains they may choose. But a focus on simplifying human behavior can lead to some really dangerous assumptions and outcomes. And the effort to apply ethical guidelines, for me, misses the target.

In February 2018 Nir Eyal was interviewed for Journalism + Design. One question was: “How can companies build products that are persuasive but not coercive?”

This is Eyal’s response:

There are two questions I tell people to ask themselves:

1. Do you believe the product you’re working on materially improves people’s lives?

2. You have to see yourself as the user. In drug dealing, the first rule is never get high on your own supply. I want people to break that rule and get high on their own supply, because if there are any deleterious effects, you will know about them.

There’s a simple market incentive to not build products to screw people. We’re not automatons, we’re not manipulatable puppets on strings. If a product hurts people, they’ll stop using it.

This 2-step process is part of Chapter 6 of Hooked, the chapter on morality where Nir Eyal happens to be brutally honest about the types of readers his book attracts:

Let’s admit it, we are all in the persuasion business. Technologists build products meant to persuade people to do what we want them to do. We call these people “users” and even if we don’t say it aloud, we secretly wish everyone of them would become fiendishly hooked to whatever we’re making. I’m guessing that’s likely why you started reading this book.

In what is dubbed the Manipulation Matrix, the reader is asked to consider those two questions: “Would I use the product myself?” and “Does it improve people’s lives?” The answers place you in one of four quadrants: dealer, entertainer, peddler or facilitator.

If I as a designer argued that these were valid questions for judging ethics, I would likely be ousted from the industry.

I do not judge if something improves people’s lives by asking myself if it does. The expected benefits of a product need, of course, to be based on extensive research. And even then I cannot be sure about impact until the product actually launches and I am prepared to measure over time. Mechanisms need to be put in place for regular feedback from both users and non-users. Because here’s the thing: values, priorities, and experiences change over time. And both overlooked minority users and non-users can be among the ones being harmed.

Likewise, I do not judge risk of harm by considering if I myself would want to use something. Again, it’s often not the case that privileged makers of systems are themselves fit to consider the needs and concerns of people who are regular victims of prejudice and mistreatment. The way you assess potential negative impact is by including and involving people from many different walks of life in the product development process, through research but if possible also through co-creation. People who are regularly at risk, who regularly look out for harm, are infinitely better at judging and predicting risk.

Honestly, what saddens me the most in conversations around ethics is a lack of recognition that the people who are being harmed the most, are the ones constantly sidestepped by society, with no voice and no platform to object or stand up for themselves. Not always marginalised but almost always edge cases. They are rarely allowed into the conversation.

And herein lies the danger, and why I believe Eyal’s books raise ethical conundrums. When providing tools to start experimenting with habit-forming techniques, there has to be obvious reflective reasoning around the potential for negative impact. Something beyond “your intent and the powers of the market will keep everyone safe”.

But the second book – will it not release people from the powers of habit and make the dangers of the first book obsolete? Well, the premise of the second book seems to be that the companies will keep coming and they will keep doing what they are doing. Your best option is to fight your own lack of discipline and apply yourself to avoiding the obvious techniques used by those investing large sums of money to influence your choices.

Yes, there is certainly truth here, but it also places a considerable amount of responsibility on the individual. I believe it’s fair to point out the intrinsic dangers in the assertion that everyone needs to change their own behavioral strategies in order to manage all those trying to influence them. I strongly believe that many people can become better at controlling their own emotional reactions to the world around them, but it is precisely because this is a very hard thing to do that we as humans are susceptible to external influence. This is the very reason why much of the advice in book 1 carries the potential to be so effective.

To be clear, it is not always the teachings themselves I object to but the lack of guidance around them. I often fail to see how they can be applied broadly and am mostly concerned for people who also struggle with understanding how to control the tech, wrestle with peer-pressure, lack time to manage manipulative apps, are managing health issues and are not yet aware of the subconscious coercions.

In the end, I am not as concerned about privileged, well-off, tech-savvy people like myself. I can find lots of nuggets in Indistractable that apply to the life I lead, but also many things that would stump others. And this assertion and conviction of mine – that so many people struggle to understand how to even change the settings on their phones – comes from performing countless interviews and usability studies over the past twenty years of digital services within health, finance, travel and more. People want the tech to work for them; they don’t want to be fighting with its defaults.

When I say guidance this would be things like:

  • Design to change habits is a super-interesting concept, but make sure to design with people, with their consent, and not at them. People need to be involved in deciding what is best for them.
  • There is no sure-fire way to change habits (individual traits, external factors and context matter), which is why we have to be really careful about listening to outcomes so we’re not pushing people in a detrimental direction. What I’m saying here is that even with positive intent there are obviously dangers.
  • People can indeed be empowered to better manage their relationships with digital services, but the ability to do so is also influenced by education, power of voice, social support, time, health and resilience. Regulation can help protect those devoid of power to influence the market. Consumer rights are not governed by waiting for market forces to rectify themselves.
  • That some people have a healthy and joyous relationship with a digital service does not mean that the same service is not also causing a negative impact for a significant number of other people. These two things can be true at the same time, especially with companies dependent on maintaining a massive user-base for their survival.
  • Impact also applies to people who are excluded. There is something to be said for bridging gaps rather than widening them, as tends to happen when we work solely on improving the lives of people who are already rather well-off.

Surely, people will see through any wrongdoing?

I also find that there are inconsistencies in the messaging around habit-shaping. Arguments like this pop up:

So in this day and age, if you screw people over, if you make a product people regret using, guess what? Not only are they going to stop doing business with you, they’ll tell all their friends to stop doing business with you.
— Nir Eyal

Nir Eyal does indeed seem to be saying that the customers, consumers and citizens will prevail in this struggle, because they will see companies for what they really are, and spread the word. But he is simultaneously arguing for companies to create habit-forming products, ones that people use without thinking. And he is also arguing that we need to put loads of effort into opposing the habit-forming techniques altogether.

My stance is that the argument that people will just stop using bad products is a dishonest one. Remember, this is a widespread argument today and in no way something that Eyal is alone in expressing. I hear it all the time, especially from CEOs under fire. But today we know so much more about how emotions and fear control human behavior. The concept of people as rational creatures is a dwindling one – emphasised by Daniel Kahneman’s and Richard Thaler’s respective Nobel Memorial Prizes in economics. Consider, for example, how much more we understand today about why people stay in abusive relationships. You don’t have to be addicted to be impelled down a dangerous path by habits.

I’m not convinced Nir Eyal himself truly believes that people will simply leave bad relationships with products. He is otherwise quoted as saying, more to his point: “The best products do not necessarily win.” And in fact, Hooked argues for creating “a monopoly of the mind” and making products “that people turn to with little conscious thought”.

Transparency and awareness of any wrongdoing is difficult to expose when the opposing team has the outspoken goal of minimising your conscious thought.

And again, I want to make it abundantly clear that it’s quite possible for a lot of people to benefit from the same solution that many others are harmed by. Let’s take electric scooters as an example. There is a trend for these scooters to be placed in major cities around the world for on-the-street rental via apps. As these vehicles are scattered by the thousands across sidewalks, it’s obvious that some people will hate them and some will love them.

Two of the groups impacted by the scooters are people with a visual impairment and people in wheelchairs. They are not even users, and hence the likelihood that they would be part of any research is low. But along the same sidewalks that they have been able to navigate safely for many years, random obstacles are now popping up every day, often severely impacting safe passage or even placing them in harm’s way. These groups represent people who are often overlooked by society and whose voices matter much less than others when companies forge ahead. So here’s the thing: as long as the majority of people are benefitting from a product and are not aware of any harm – perhaps really, really enjoying the habit of riding a scooter to their next meeting – there will be no incentive for power-users to protest and not enough people will stop “doing business” for any significant change to occur.

In design, this often happens when people with less common needs are defined as edge cases. Edge cases are human beings whose needs are downplayed. Do the founders of e-scooter companies believe the scooters improve people’s lives? Unquestionably. Do the founders themselves use the scooters? Undeniably. Mitigating negative impact is so much more complex.

Rounding off, I really want to emphasise that I still believe Nir Eyal has positive intent and there are plenty of good pickings in his books. But sometimes even positive intent backfires and we all – including myself – need to proactively take responsibility, and recognize accountability, for the ideas and knowledge we share with the world. Minimising negative impact is also about seeing and responding to the many ways people can take your work and misuse it. My intent is to highlight dangers to boost awareness. In the end, I much prefer conscious decision-making over subconscious when impacting both others’ and my own well-being.

In the end my concerns keep circling back to how much freedom a person should be given to engage in reflective reasoning around the options that claim to promote their well-being – and to the apparent inability of habit-forming activities to support the critical reflection required to make well-considered choices in the planning of one’s life.

Awareness of these techniques is important, which is why I encourage you to read Eyal’s books and form your own opinion. And if you have a strategy, do let me know how you work to minimise negative impact and ensure safe and inclusive development of products.


Also, do listen to episode 225 of UX Podcast, where we talk to Nir Eyal about his books and his intent.

Update December 2, 2019: In a previous version of this post, I was unfairly judgmental and indignant towards Eyal in a way that deviated from my own standards. After some feedback from Nir Eyal himself I decided to rewrite parts of the article to better articulate my concerns and remove some unnecessarily harsh language.



Per Axbom

@axbom

Per Axbom is a Swedish communication theorist born in Liberia. For two decades he has educated digital professionals and helped organizations with digital usability and accessibility. Per makes tech safe and compassionate through reflective reasoning, human-considerate design, coaching and teaching. You can hear his voice on UX Podcast.

Per’s recent handbook on managing ethics in tech, Digital Compassion, is available to order from Amazon in Kindle format. Send an e-mail to Per for more options.

