Business design is a practice that is only now finding its place among its more established peers. Traditional business consultancies, standing on the shoulders of scientific management and hypothesis-based thinking, have been around since the 1920s. Similarly, the traces of design thinking can be followed back to as early as the 1960s and cooperative design in Scandinavia.
What, then, is business design? We follow David Schmidt’s thoughts and see that business design at its core is the science and art of creating and validating business models. However, we’re not content with that. This definition works in a design context, where a business designer is complemented by other designers and experts, but we look at business design work within a broader scope.
For us, business design is a hybrid approach from the aforementioned two disciplines: top-level management consulting blended together with design tools and thinking. We work on similar issues as any business developer or strategy consultant and do not limit ourselves to strict project types.
To make this more concrete, we have put together building blocks for successful business design. These are theses that we as business designers believe in and stand for:
Ways of thinking
1. Strategy as actions
First and foremost, to have a major impact, business designers need to live and breathe strategy. Simultaneously – as designers – we are doers and see that any strategy has to be realised through concrete actions. The design tradition arms us with tools such as co-creation and experiments to complement conceptual thinking.
2. Understanding change, focus on defense or offense
We look to understand what is changing, and only then decide whether it is time to strengthen the core business or to frantically build new. The increasing speed of change is just another buzzword, if we don’t seek to understand how it really impacts business.
3. Sustainability as core business
We strive to have a real impact in all we do, and marketing stunts – especially when it comes to being sustainable – are not our kind of business. We believe the time is ripe for business models that have a positive net impact.
4. Holistic, systemic approach
In line with great traditions of design thinking, we scratch deep below the surface; we want to understand the underlying implications and connections. Modern business problems are too large and complex for a single organisation to tackle. Ecosystem views and network models are our day-to-day methods of analysis.
Ways of working
5. Not solution-driven, but problem-driven
We don’t believe in best practices or ready-made solutions, but aim to uncover and understand the problem or issue at hand; only then do we look outwards: can we get inspiration from some other case or business? We explore what’s possible together with our colleagues from a wide range of disciplines.
6. Scalability of everything
There is a growing fear that transformational endeavours don’t scale, create new business or even business value. As business designers we need not only to strive to build scalable services and business models, but also to make business development and implementation scalable.
7. Sustainable business models instead of one-off wins
Buzzwords are not good business. We desire to build business models and services that are lasting and large enough to matter. We investigate and prioritise actions by evaluating business impact in order to make value tangible and support decision making.
8. Fighting biases and intuitions through being truly customer and data-driven
Leadership and their organisations – even us designers – typically have ‘proven’ truths or conventions that guide their thinking and actions. We want to challenge these with data-driven insights and decision making, using experimentation and analytics to feed and validate our ideas.
Sometimes we win the lottery. Usually, though, profitable businesses don’t emerge by accident: designing them increases the odds significantly.
Our mission is to help you make things happen.
During the coming months we’ll continue sharing our thoughts through a mini blog-series. Feel free to comment, challenge our thinking and have an open discussion with us.
The history of humankind is not only a history of humans but also of their tools. Tools have been co-agents of our history ever since the emergence of our species, starting with the development of early stone tools at the Olduvai Gorge in Eastern Africa some two and a half million years ago. More recently, our co-agency has been manifested in the revolutions generated by the printing press, the steam engine, electricity, computers, and the Internet.
Our current zeitgeist is that of the Fourth Industrial Revolution, in which we see the rise of cyber-physical systems and new kinds of non-biological intelligence. Here the agentic capacity of technology is becoming something to be taken quite literally as machines start making decisions once made only by biological agents.
This stresses the need for ethical, societal and political consideration of systems possessing qualities pointing towards non-biological intelligence and agency.
But are these considerations topical now because of the intelligence of these systems or rather the lack of it?
Human intelligence is creatively adaptive and driven by embodied meaning
The discourse on artificial intelligence can get pretty wild and jazzed up. Our fantasies, hopes and fears are undercut by the reminder of just how rudimentary our most intelligent machines still are in comparison with everyday human smarts. Despite their superhuman computational power, machines fail at simple human tasks where recognizing contextual meaning is often key to understanding what the smart thing to do is in any given situation. No machine would survive the complexity of a night out with a bunch of Englishmen with their taste for irony and banter.
Very quickly our efforts to build non-biological general intelligence hit this “barrier of meaning”, as Melanie Mitchell, a Professor of Computer Science at Portland State University, recently wrote in her New York Times opinion piece.
What is meaning then? There’s a whole field of science devoted to answering this question, namely semiotics, and almost everything done in the fields of the human and social sciences touches on the topic in one way or another. Meaning drives human action in a very complex but also very concrete sense. The objects, subjects, situations and phenomena we face in our daily conduct are loaded with meaning for us, and a change of context often introduces a change in meaning. Just think about the different meanings – and behavioral consequences – of a mundane urinal in the context of a toilet or on display at a museum of modern art. With the latter we of course refer to Marcel Duchamp’s ready-made classic Fountain from 1917.
One way to define meaning is this: meaning is the function of the things we encounter as signs representing or bringing about something more than just themselves – generating thoughts, feelings and actions. These meanings, or interpretative effects of the things we encounter, are amazingly rich in their subtleties, mutable and context-bound. We not only instantly recognize the contextual meaning of Duchamp’s Fountain as a piece of art, but also interpret and reinterpret again and again its meaning vis-a-vis the tradition of art. We also form different individual interpretations of it, which then interact and influence each other through debates, critiques, books on art history and various more casual human exchanges around the piece. This results in a flux of contesting and changing meanings affecting how we see and treat this basically very mundane object.
We usually interpret meanings and their alteration instantly and effortlessly. Our interpretations come in forms that can be consciously cognitive (e.g. being conscious of the meaning of the word ‘cat’) but are not necessarily or even usually so. Most commonly meaning functions in embodied form, as implicit dispositions guiding action, with no need for conscious awareness or representation. This results in what we sometimes call ‘common sense.’ If a person holding a knife approaches you in a dark alley/in a kitchen/at the hospital’s surgical bed/on stage in a theater play, you ‘just know’ how to and how not to react – and often you don’t even ‘know’ in any cognitive sense but skillfully skip straight to the right kind of action.
Our interactions with the world and other living creatures are guided by this kind of embodied and intuitive understanding of the meaning of the things and situations we face. And importantly, when the world around us (the environment of our action) changes, as it constantly does to some extent, we have the quite wondrous ability to learn and adjust our action in creative ways. This human creativity utilizes analogies and conceptual metaphors in exporting our understanding from one domain and applying it to something new and unforeseen.
Currently, and probably for the foreseeable future, machines are nowhere near human capabilities for this kind of learning, creativity and generalization.
Artificial intelligence and outdated philosophy
This embodied, intuitive common sense and creative being-in-the-world has been a favorite subject of philosophers of the phenomenological and pragmatist traditions. According to them, it is here, rather than in our explicit cognitive-logical skills, where we find the foundation of our intelligent behavior.
A vocal proponent of this kind of thinking – and a sharp skeptic of the general AI project – was the philosopher Hubert Dreyfus, who passed away in 2017. Working in the phenomenological tradition of Martin Heidegger and Maurice Merleau-Ponty, Dreyfus liked to sarcastically point out how people fussing about the intelligence of machines had actually gotten the wonders of human intelligence all wrong, treating it as a purely cognitive, abstract computational capacity when it should be understood as an embodied, emotional and dynamic relationship with the world around us. The technological types had adopted an outdated philosophy of the mind. This philosophy had its climax in the 17th-century rationalism of René Descartes, with its dualistic mind-body split (the ethereal conscious mind being the sole domain of our intelligence, the profane body being its enemy and distractor), and this Cartesian conceptual baggage is something that the current multidisciplinary study of the mind still tries to get rid of. This aspiration got its famous manifesto in the neuroscientist Antonio Damasio’s 1994 book Descartes’ Error – Emotion, Reason, and the Human Brain, where Damasio shows how there’s no intelligence without emotion.
In a recent interview Damasio stated his opinion that “human intelligence can’t be transferred to machines” due to our inability to build emotions and feelings into these machines. Be this as it may, it is safe to say that the intelligence of machines is currently something very different from the intelligence of humans, and compared against human intelligence, in many respects machines are just daft.
And still, we are nowhere near understanding the wonders of the human mind and our intelligent behavior. So how could we even think about transferring anything like it into our technologies? And while we may not even try to replicate intelligence of the human kind, and rather focus on the strengths of machine intelligence, we should be very conscious of the differences between the systems driving the behavior of humans and machines respectively. Understanding these foundational differences should guide our decisions on what kind of autonomy and agency we can assign to machines – where and when we should let them decide and guide their own operations.
Stupid machines, hidden agents
In her aforementioned article, Melanie Mitchell quotes the AI researcher Pedro Domingos’ incisive conclusion about where we stand now: “People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”
Forget super intelligent machines taking over, it is stupid machines with too much power that we should be cautious about.
Today, most machine learning systems we interface with are quite simple in terms of the utility they serve: they recommend you products via email, they suggest movies you might like to watch via your favourite streaming service and so on. However, in some of these systems a machine may optimize your experience in unforeseen ways. It may efficiently optimize someone else’s utility in ways that directly or indirectly impact your life in undesirable ways, changing our society as a side product.
Machine learning systems are built using data. Usually more data means better results, but what does better mean in this context? In machine learning, an algorithm optimizes its results for a given task. For example, a human resources application with artificial intelligence could optimize for finding the best candidates for a given open position. For that purpose, the machine learning algorithm has been fit to a mass of CV and job data from past recruitments, both successful and not so. The algorithm effectively becomes a representation of the past through the lens of the data it has been fit with, i.e. the only thing the algorithm “knows” of this world is the data it has been trained with. This trained representation can then be used for future HR scoring between candidates and open jobs.
Many countries’ legislation makes it illegal to discriminate by gender, religion, ethnicity and so on. Such an AI tool would not be able to take legislation into account when scoring applicants for open positions – remember, it knows nothing beyond its training data. If in the past men were hired for managerial positions at a higher rate than women – and this were represented in the algorithm’s training data – the AI application would continue the trend via its scoring and predictions. Not because it “wants” to be evil, or to break the law, but simply because of the mathematics at play.
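To make the mechanics concrete, here is a minimal, purely illustrative Python sketch (all data and names are hypothetical, not from any real system): a naive scoring model fit to biased historical hiring data does nothing more than reproduce the historical hiring rates.

```python
# Illustrative sketch with hypothetical data: a naive "model" fit to biased
# historical hiring records simply reproduces the bias in its training data.
from collections import Counter

# Past managerial hires as (gender, hired) pairs -- men were hired more often.
history = (
    [("M", True)] * 70 + [("M", False)] * 30
    + [("F", True)] * 30 + [("F", False)] * 70
)

def fit_hire_rate(data):
    """Estimate P(hired | gender) from the training data -- all the
    'model' knows of the world is what the data shows it."""
    hires, totals = Counter(), Counter()
    for gender, hired in data:
        totals[gender] += 1
        hires[gender] += hired  # bool counts as 0/1
    return {g: hires[g] / totals[g] for g in totals}

model = fit_hire_rate(history)
# The model scores men higher -- not out of malice, just mathematics:
print(model)  # {'M': 0.7, 'F': 0.3}
```

Real recruitment models are of course far more complex, but the principle is the same: the scoring function is a compressed description of past decisions, including their biases.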
Naturally organizations building such applications make efforts to stay within regulatory bounds and thus actively fight these kinds of examples of discrimination. However, it may be difficult, or even impossible, as the example of Amazon retiring its internal HR AI tool last year shows. Even with Amazon’s vast engineering resources and scientific know-how, fighting discrimination in their own AI system proved to be an insurmountable task.
These kinds of examples raise the obvious political question “to whom is the optimization advantageous”? To the highly talented female manager looking for a job, but who just does not quite make the algorithmic cut due to inbuilt bias in the system? Hardly.
Amazon, as a leading player in the field of artificial intelligence, has stringent ethical checks and balances at play, and this one example was nipped in the bud before it could cause harm. But what about all those models with bias unknown even to end users or developers, hidden from plain sight, built and deployed by organizations with less talent and fewer resources? Is this the beginning of an era of (socially and morally) stupid hidden agents?
If the human capacity for socially responsible agency is based on (a) our ability to holistically grasp the multiplicity of meanings of things, situations and actions we face and do, and (b) on being socially contestable and accountable for our actions to our partners, families, social groups and societies, then we need to ask: what kind of autonomous agency should we and should we not assign to machines to which neither of these apply?
Human reality modified by machine learning
Whether machine learning is producing intelligence of the human variety or not, the use of machine learning models is becoming more and more pervasive in our society and everyday life. Each round of industrial revolution has profoundly changed humanity, taking us an extra step away from our natural roots. Current developments are no exception, but what perhaps distinguishes this particular revolution from previous ones is that the economic and societal landscape has changed:
there is a new structure of global connectivity between people and electronic services
connectivity is instant
connectivity is affordable
This landscape provides fertile ground for online products and services that are no longer paid for with money but with personal data. If data is the new oil and AI is the new electricity, then at this point in time the latter is being fed with the former, and end user experience is thus optimized by stupid machines in ways that maximise value for someone – not necessarily the end user or society.
The longer we live in this cycle of optimization, hooked to addictive online products and services, the further these services shape our understanding of reality. We take segment-of-one optimized views of the world for granted. We are more than happy to accept personalized web search results. We do not miss content we never knew existed. Dan McQuillan suggests that when machine learning makes decisions without giving reasons, it modifies our very idea of reason, changing what is knowable and what is understood as real. Living and growing in this environment will have an unforeseen impact on who we are as humans.
Our work on AI ethics
There is a discrepancy between ambition and understanding when it comes to deploying AI in business. Over the past few years companies have been unable to avoid the magical promises made by AI marketeers, which has led many buyers to feel they need a tick in the “we are using AI at our organisation” box. For this reason we tend to kick-start data-science-oriented customer engagements with AI training, enabling a more meaningful discussion on the use of AI. The training lets the discussion turn to creating added value using math, not magic: what the modelling should optimize, using what data, what the client’s ethics are, and how well they understand the intended and possible unintended consequences of deploying AI solutions.
This dialogue enables fruitful analysis of what data is ethically sound to use for modelling. Does data require obfuscation for reasons of privacy? Who can process and see the data? It also allows both parties to understand that even when omitting the most obvious data that might cause discriminatory bias in a resulting AI system, data leakage is a real and tricky problem. For example, removing gender from training data does not automatically ensure a gender-bias-free AI system. Modelling on consumer purchase data is tricky, as different genders have very different and distinctly identifiable shopping patterns. It is easy to imagine many such examples where it becomes altogether difficult to remove discriminatory data from modelling. What should we do then – stop modelling?
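The leakage point can be sketched with a small, hypothetical Python example: the gender column is dropped before “training”, yet a feature that happens to correlate with gender (here an invented shopping-category flag, made perfectly correlated for clarity) carries the same disparity straight through.

```python
# Hypothetical illustration of data leakage: dropping the gender column does
# not remove gender bias when another feature acts as a proxy for it.
from collections import defaultdict

# Invented records; 'cat_x' (a shopping-category flag) tracks gender exactly.
rows = (
    [{"gender": "M", "cat_x": 0, "hired": True}] * 70
    + [{"gender": "M", "cat_x": 0, "hired": False}] * 30
    + [{"gender": "F", "cat_x": 1, "hired": True}] * 30
    + [{"gender": "F", "cat_x": 1, "hired": False}] * 70
)

def hire_rate_by(feature, data):
    """P(hired | feature value) -- the naive model's score."""
    hires, totals = defaultdict(int), defaultdict(int)
    for r in data:
        totals[r[feature]] += 1
        hires[r[feature]] += r["hired"]
    return {v: hires[v] / totals[v] for v in totals}

# A "debiased" model trained without the gender column...
by_proxy = hire_rate_by("cat_x", rows)
# ...still reproduces exactly the same disparity:
print(by_proxy)  # {0: 0.7, 1: 0.3}
```

In real data the correlation is rarely perfect, but any correlated feature leaks some of the protected attribute into the model, which is why simply deleting a column is not a debiasing strategy.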
For these purposes we at Solita and Palmu have set up an AI Ethics panel with people from various backgrounds, ranging from technical expertise to social sciences and design, which helps us identify and debate important and contentious AI-related issues and projects. We cannot, and do not want to, stop technological development, but we agree that the time to talk about ethical issues around the use of AI is now. Tomorrow it may be too late.
AI in a pluralistic society
As part of our AI ethics endeavors, this year at Slush we brought a soundproof, closed-off cube into the middle of the hectic venue for people to step inside to think about and discuss these issues in small groups. We showed glimpses of possible futures where artificial intelligence has been tasked with making decisions on behalf of humans – decisions that from a human point of view have instantly recognizable ethical aspects to them. Would you let an AI delay a self-driving car if it made the overall flow of traffic faster for everyone else? Would you let an AI prioritize a millionaire investor for cancer treatment over an unemployed person? Would you let an AI make recruiting decisions based on estimates of candidates’ short- and long-term profitability? After laying down the scenarios, people had to choose their side: either you agreed with the AI’s decisions and were willing to give machines more power, or you didn’t agree, wanting to keep these decisions in human hands.
During the two days our experts on machine learning, data science, design and human insights facilitated discussions with people from all over the world. A general concern of the people we met was that currently not enough is done to raise awareness of the complexities of AI-related ethical and societal issues. If treated only in a technological framework, human and cultural biases easily slip into our AI applications. While fallible, human reasoning is also amazingly holistic by nature. Guided by intuition, emotions and values, our reasoning can recognize ethical relevance in a blink, while machines with all their processing power continuously fail to do this.
However, simplicity is to be avoided here too. We need to dodge moral relativism while recognizing the diversity and the political nature of our moral considerations. If the multiplicity of different cultural and human values is not recognized, our discourse starts resembling that of dictators and totalitarian regimes. As one participant at our Slush cube noted: “If you pose the question as if there was one ethical solution – then that’s called dictatorship.”
The work towards ethical AI solutions and regulation should be seen not as proceeding towards a fixed solution, but as a social process, resembling the process of pluralistic democracies with contesting ideologies taking part in the quest to find shared value, common ground and compromise.
The authors are Antti Rannisto and Jani Turunen. Rannisto is a sociologist and ethnographer at Palmu’s Insight team, Turunen is an old school hacker and IT know-it-all working as Solita’s AI Lead.
The authors want to thank Mikki Mustonen, Anni Ojajärvi and the whole AI Ethics group at Solita and Palmu for insightful comments and discussions around the topic!
Want to immerse yourself more in the topic? Check the interviews of our recent guests, the techno-sociologist Zeynep Tufekci on the societal impacts of ML, here, and the philosopher of technology Alix Rubsaam on interrelated historical representations of humans and machines respectively, here.
Also, the upcoming second session of Palmu Insights in March will deal with the topic of Human and Artificial Intelligence. To read more about Palmu Insights and its previous session, go here and here (in Finnish).
To take part in the conversation comment on Twitter: @antti_rannisto, @randommman, @Palmu_Finland, @SolitaOy
Human and cultural insights are foundational to service design. The design community’s favorite concept referring to this work is ‘empathy.’ While highly important, this focus on empathy can also lead to a narrow conception of the insight practice as a whole. This point is argued in more detail here: Beyond empathy.
In understanding people and their behavior, ideally, we need frameworks and heuristics to move fluently from (1) the micro level of human action and experience to (2) the meso level of social habits, practices and dispositions, and then furthermore to (3) the macro level of social and cultural structures.
Alongside this, we need frameworks to move smoothly from understanding (a) conscious to (b) unconscious drivers of human action. These different level frameworks, and different level insights they lead us to, then have to be turned into actionable drivers of concrete design work.
The key here is to work with a stratified approach. Understanding people is not only about people as individuals with their solitary feelings, values, opinions, choices etc. As important, if not more so, is to understand how this stuff found on the micro level relates to and reflects cognitive, social, cultural and other dispositions and structures. We need to aim beyond the conscious individual.
For this we need tools.
A cross-disciplinary team to match multilayered human reality
Luckily, we have a vast resource to draw from. Namely the rich tradition of explaining human life found in the social sciences, behavioral sciences and related fields.
At Palmu, this is reflected in our Insight team, with people coming from e.g. cognitive science, psychology, social psychology, social policy and several from sociology; from human computer interaction, media studies, arts and two from information network studies; from behavioral economics, economics, consumer research, marketing and international design business management.
The important point here is that each brings a complementary lens to understanding multilayered human reality and its effects on behavior.
The Palmu Insights sessions reflect our interaction with different fields of human expertise
We start the series by zooming in on the micro level of human behavior and looking at some of the latest insights coming from the behavioral sciences. Then, during late 2018 and early 2019, we will expand our scope to the cultural macro level and examine cultural insights as integral to strategic design.
We will also dedicate a session to explore the intersection of human and artificial intelligence, building interaction between philosophy of the mind, cognitive science and the current state-of-the-art in building AI.
First stop: Human insights as behavioral insights, design as behavioral design
The first Palmu Insights session is held at Korjaamo on October 25th (16:00-19:00). This session takes on ‘nudging’, the idea and practice made famous especially by the Nobel laureate 2017 Richard Thaler and leading legal scholar Cass Sunstein.
It is difficult to overstate the impact of Thaler and Sunstein’s 2008 book Nudge – Improving Decisions about Health, Wealth, and Happiness, which inspired a whole new paradigm and a global movement of turning the latest behavioral scientific insights into practical applications influencing people precisely and generating true behavior change.
For this session, we are honored to host two highly distinguished guests: one of the leading European names in behavior change and nudging, iNudgeyou’s Pelle Guldborg Hansen, and former minister and current GD of the Finnish National Agency for Education, Olli-Pekka Heinonen.
Pelle is currently writing a short and precise book on nudging and will give a presentation on the practice of behavior change and nudging hands on. This will be followed by Olli-Pekka’s short commentary on nudging and behavior change from the point of view of the Finnish public sector.
In addition to talks the event will include hands-on work, guidance and round table discussion on
how to apply nudging related to your own day-to-day work
PROGRAM
16:00 Doors open, food and drinks
16:15 Words of welcome: Human insights as behavioral insights
Antti Rannisto, Design Ethnographer, Palmu
Anni Ojajärvi, Design Ethnographer, Palmu
16:30 (Title TBA)
Pelle Guldborg Hansen, Chief Executive, iNudgeYou
17:45 Nudging from the point of view of the Finnish public sector
Olli-Pekka Heinonen, General Director, The Finnish National Agency for Education
18:00 Hands on – turning problems into behavioral problems, influencing on the level of behavior (in Finnish)
Mikko Väätäinen, Business Designer, Palmu
19:00 Program ends, bar opens :)
Hang around to socialize!
This is the first part in a series of texts exploring questions around the future of service design. The series starts here with a critical treatment of one of the central concepts of service design and design research: empathy. Later, the series will expand towards issues of behavioral design, data, machine learning and the adaptive segment of one.
When thinking about the future of service design, it is tempting to rush into all things technological. Big data, algorithms, robotics and AI populate conferences, publications and blogs dealing with the issue. This makes sense, as these technological features will affect pretty much everything in one way or another. However, for our future design work and innovations to be successful in the world of humans, we need to hold back and remember to focus also on the oldest of questions: What are people about? What is it to be human?
Here and elsewhere, concepts guide and structure our thinking; fundamental concepts do so in a fundamental way. It is especially these kinds of concepts that we need to critically scrutinize. When services become more adaptive and alive, and the machines we design start to imitate humans, we should examine whether the concepts that guide our design work are up to date and up to the task they are used for.
One of the fundamental concepts of service design is ‘empathy’. As a core concept empathy directs our take on people and affects how we go about doing design research, from choosing fieldwork methodology to doing analysis and building insights. It is our compass for understanding people.
How do we define empathy? In numerous different ways, likely, but according to the psychologist Paul Bloom (2016, 16), “[e]mpathy is the act of coming to experience the world as you think someone else does”. The neuroendocrinologist Robert Sapolsky (2016, 522) points that “[e]mpathy contains the cognitive component of understanding the cause of someone’s [experience], taking his perspective, walking in his shoes”.
The trouble with empathy, however, is that it is too narrow a concept to define our human insights work. Firstly, it is too focused on the conscious mind and not enough on behavior. Secondly, it is too individual-based and not systemic enough. Used uncritically, the concept of empathy can lead to a narrow conception of human action and of insights about it. Critical thinking is needed. The behavioral and social sciences can be of help here.
Human action and answering the ‘why?’
Human insights work is about understanding people and their behavior in a profound way: what is meaningful and valuable for people, how – and perhaps most importantly why. Answering the ‘why’ takes us to the issues of true relevance for people, around which the design work then revolves.
How should you go about answering the ‘why’ questions? Why is something meaningful for people and why do people do what they do?
The naively straightforward approach would guide you to simply meet people and ask them, then take their answers at face value as explanations of true behaviors and motives. Furthermore, this approach would guide you to do sufficiently repeated probes of ‘why?’ to really dig down to the root cause. Then you would collect and summarize the given answers in relation to the research question at hand. This is likely close to the layperson’s view of how the trick is done; design researchers should know better.
This straightforward approach is misguided because people are highly inaccurate reporters of their actions and motivations, as much of the behavioral science of recent decades points out. We are simply not that aware of what drives our behavior. These drivers usually operate on the automatic, intuitive and often nonconscious level of the mind, or the System 1 level (e.g. Kahneman 2011), to use the current behavioral parlance.
This kind of skepticism towards people’s own accounts of their actions and reasons has a long history. Circa mid-20th century, David Ogilvy, the legendary adman, put it in an unforgettable way: “People don’t think how they feel, they don’t say what they think and they don’t do what they say.”
The philosopher Paul Ricoeur made a famous distinction between a hermeneutics of faith and a hermeneutics of suspicion in the human and social sciences. The hermeneutics of faith approaches the world through people’s experiences of it – in a way reminiscent of the empathy-driven design research approach – whereas the hermeneutics of suspicion critically notes that
”actors do not have direct access to the meaning of their discourse and practices, […] our everyday understanding of things is superficial and distorted. It is, in fact, a motivated covering-up of the way things really are.” (Dreyfus & Rabinow 1983, 123.)
It is the hermeneutics of suspicion way of thinking that gets much support from recent advances in the behavioral sciences. These advances are not based on philosophical speculation but on empirical evidence. As the cognitive and social scientists Hugo Mercier and Dan Sperber write, summing up recent findings in social psychology according to which
”we have little or no introspective access to our own mental processes and […] our verbal reports of these processes are often confabulations. Actually […] the way we explain our own behavior isn’t that different from the way we would explain that of others. To explain the behavior of others, we take into account what we know of them and of the situation, and we look for plausible causes. […] Where we are systematically mistaken is in assuming that we have direct introspective knowledge of our mental states and of the processes through which they are produced. […] [E]ven in the case of seemingly conscious choices, our true motives may be unconscious and not even open to introspection; the reasons we give in good faith may, in many cases, be little more than rationalizations after the fact.” (Mercier & Sperber 2017, 114-115, 117.)
Now, what does this mean for the empathy-led approach to understanding people? First, it is important to note that answering the ‘why’ is always an interpretative operation; it is not something people can report directly, though they can have informed interpretations of their own. The answers people give to the ‘why’ question, or other accounts of their behavior’s drivers, are often post-rationalizations that keep up a coherent self-image and social image and give logical-appearing, agency-based explanations of one’s own conduct. Or, as Mercier and Sperber (ibid., 117) put it, “[r]easons, we want to argue, play a central role in after-the-fact explanation and justification of our intuitions, not in the process of intuitive inference itself”.
And this is why we shouldn’t put too much weight on people’s own subjective experience and reports about their actions and reasons. This can be forgotten if our interpretation is driven by the concept of empathy. Empathy should be treated not as the end product of human insights work but as part of our qualitative data set – highly important data as such, but not something that should lock our interpretative work inside the empathized subjective experience. For answering the ‘why’, we also need critical distance, looking especially at actual behaviors and their contexts.
The ‘why?’ is something the interpreter should keep asking of the data. Answering the ‘why’ should be about (1) using appropriate data and (2) doing informed interpretation of that data.
Concerning the first point above: with empathy as our guiding concept, how do we collect data, and what kind of data – are we contextual enough, or merely focused inside a solipsistic experience of the individual? We should remember that experience is not enough; we need data on actual behavior (what we really do, not only what we think we do) and its true contexts (be they material, social, or cultural). This is where contextual, ethnographic work, behavioral data and running behavioral experiments in real-life contexts become highly important. What unifies these approaches is that they are more behavior-based and contextual than the empathy-led approach. The focus turns towards understanding people as part of their real-life contexts and testing our innovations with actual behavior as the final judge.
Concerning the second point above: how to approach interpreting the data? Here, the implication is that we should treat first-hand single-person experience not as an end in itself, but as data that always needs to be interpreted and explained by way of a more holistic point of view. Empathy is the beginning, not the end, of analysis. To do informed interpretation, we need to triangulate using different contextual and behavioral data sources. Also, behavioral and social scientific theories about human action and its dependence on various contexts (material, social, cultural etc.) can be of great help here in guiding us beyond the level of single-person subjective experience.
Towards understanding and designing behaviors
We now realize that the subjective experience can only partly capture what we do in our daily conduct, and explain why we do what we do. In the end, what we should strive to understand is the actual behaviors of people and how our innovations perform in the setting of concrete behavior and its contexts. This is the locus of true value, for people (things being meaningful and relevant in action) and also for companies (building economic value).
Here is the promise of moving from designing services to designing behaviors.
Antti is a sociologist and design ethnographer. Over the last 10 years he has worked in academia (on social theory), at consumer research agencies (qualitative research) and in design research (qualitative, cultural and behavioral insights). Recently he has worked especially on spatial projects and is always interested in how various contexts affect our actions.
Bloom, Paul (2016) Against Empathy. The Case for Rational Compassion. New York: Ecco.
Dreyfus, Hubert L. & Rabinow, Paul (1983) Michel Foucault: Beyond Structuralism and Hermeneutics. Chicago: University of Chicago Press.
Kahneman, Daniel (2011) Thinking, Fast and Slow. London: Penguin Books.
Mercier, Hugo & Sperber, Dan (2017) The Enigma of Reason. Cambridge: Harvard University Press.
Sapolsky, Robert (2016) Behave. The Biology of Humans at Our Best and Worst. New York: Penguin Press.
The term ‘design thinking’ will die a slow death by the year 2025. By then, design thinking will have become the new normal: a part of most organizations’ everyday talk, tasks and activities. The air we breathe. But why so?
Let’s look at the big picture. Particularly in the Western world, most of us already own everything we need – a house, a car and the latest iPhone. We don’t really need any more material belongings. Instead of owning more stuff, most of us are seeking the way to a more meaningful life: better relationships, more time for loved ones, health and wellbeing, less stress, a fulfilling job or heightened self-awareness.
The markets that companies compete on are ultimately made up of the needs and wants of people. When what people seek is becoming increasingly intangible, the role of new services is crucial to help people get where they want to be. This is why the world is turning into services. Companies are adapting, but slowly.
Beyond this, another major change in the last decade is that in order to succeed on a global scale, the services that companies produce must be top-notch. I mean really, really good. In the digital era, the best or cheapest alternative is often just one click away. This means that small local companies often face direct global competition, and must battle with the likes of Netflix or Facebook.
Over the last 20 years, the principles of design thinking have helped companies adapt to these changes and create concrete value through the design of new services that people like to use.
If I summarize how design thinking changes the way we work, it can be narrowed down to two things:
1. gaining a deep understanding of people’s needs through empathy, and
2. developing solutions through experimentation and prototyping.
Yet these two elements are completely new ways of working for most organizations. Many talk about empathy and an experimental culture, but there is always a huge gap between talking and doing, and a learning curve to climb before real impact is achieved.
It’s important to understand that design thinking isn’t just a buzzword, but represents a much bigger change that is ongoing. Put simply, it’s about an extensive global shift in the way we work. In order to create necessary services, work life must adapt, and this change is already well underway. If you don’t believe me, flick through the increasing number of recruitment listings for service designers or familiarize yourself with the thoughts of companies such as Osuuspankki (In Finnish) and Kone.
While the buzz around design thinking can seem excessive at times, I think the current hype might be a good thing after all. Because change is never easy, it may even be necessary that design thinking is force-fed to all of us – after all, it spreads the word, creates conversation and accelerates the change.
In 2025, design thinking as an “ism” will be relatively useless, since its principles will have been integrated into organizations’ development and innovation processes. Some companies will do more in-house development, while some will rely on external help. Just like now. Yet the hype will be gone, and energy can be directed towards actually changing people’s lives with services.
However, in 2025 we will need more people ready for this new way of development: people skilled in empathy who can uncover customer needs, people who can see the big picture yet extract the relevant details, people who can generate creative hypotheses for new solutions and design and measure experiments that impact customer and business value. And, naturally, do all this while leading change in organizations. These are the skills you need to embrace if you want to prepare for the future of work.
What will the buzzword of 2025 be? Since we won’t have to talk about how we work anymore, my guess is that we will be talking about value-based design – how to bring morals and ethics to the core of development. Let’s wait and see.
Johannes Hirvonsalo is an experienced service, business and organizational design professional who is especially interested in creating services and service organizations that produce behaviour change.
I recently spent a week in Singapore meeting different government organisations and companies while on a speaking engagement at Design-week Singapore.
The knowledge level of most people I met was impressive. And those who weren’t so familiar with the philosophy of design thinking caught on very fast. They had encountered related articles or reports.
Indeed, it seemed that design thinking was – and is – on everyone’s agenda.
Despite this I was slightly puzzled.
There was clearly deep understanding and know-how. But wherever I looked, I couldn’t find any practical examples, showcases or success stories. Everyone was excited and eager but…
Knowledge into action
Suddenly, I recognized this phenomenon. With one of my clients, we had jokingly diagnosed the company as a huge-headed genius with tiny hands. This Mega-mind understands, analyses and conceptualises, but nothing happens!
In such companies you frequently find talented people who are frustrated. They know, or at least have an educated notion of, what should be done, but they can’t do it. Organisational structures, conflicting targets, ill-defined work roles and slowly changing attitudes put a stop to initiatives.
All of these challenges are big, but the elephant in the room is the attitude shift that isn’t happening.
It is safer to cut costs, drive efficiency or streamline. These are proven tools, but they are not expansive. And they are a dangerous path.
Finland vs. Singapore
In my native Finland, companies’ depreciations have exceeded their investments for the last six years. They are not investing; they are streamlining. I’m afraid it already shows in our economy.
Singapore is trying to activate its companies through government grants and other incentives to get firms to try design thinking in practice. Again referring to my home country, I argue that this is not the way.
Companies concentrate more on qualifying for the free money than on actually gaining the benefits of the work being done.
Design thinking 2.0
If design thinking isn’t catching on, then the way it is packaged and offered must change. Maybe, for example, service design as defined in the Western market is not suited to Singapore. Maybe one should, instead of the current efforts, concentrate on redefining the use of design thinking so that there is natural grass-roots demand for it in the market.
Then you will have the showcases. You will get the productivity gains, and you will win in the market.
I have visited Singapore a few times in the past years. I love the place. Its energy, forward-looking spirit and its ability to mobilise large scale social reform fast.
Indeed I think that Finland would have many lessons to learn from it.
I am afraid, however, that when it comes to benefiting from design thinking, Singapore might share the fate of the frog in a slowly heating pot.
The frog sits content in the water, not noticing the temperature rising until it is too late.
I find that my line of work as a visual designer can be described as a balancing act between enabling positive user experiences and correctly applying a company’s brand and tone of voice. And although many other facets of service design could be described in a similar way, I’m approaching these themes from a user interface design standpoint.
A brand, in broad terms, is any set of qualities that set a company apart from its competitors. It stands to reason that a company that’s widely associated with positive brand qualities would want to protect its brand integrity, all the way down to the finest details.
Many companies have a set of brand guidelines that define how the brand should look, feel and sound in different contexts. To support consistent messaging about the company or service, brand manuals often document acceptable applications and usage of color, fonts, document layouts, images and key messaging.
Maintaining and monitoring acceptable brand applications is important, because brand identity sets the tone of a customer’s initial impressions and gives a brand its recognizability across different channels. A brand’s personality or identity helps build customer relationships. Visual signals establish differentiating factors and brand positioning. A digital service certainly needs to communicate the same values and ideas as the brand as a whole.
But once we start talking about digital services and user interfaces, building a positive brand image stops being just about staying visually on brand. Visual design for online services is not just about making a nice-looking page. In fact, overly harmonious and pretty might mean hard to figure out and lacking in meaningful visual cues. User expectations of how things should look and work need to be accounted for, and users have been trained by multi-million dollar companies like Facebook and Netflix to expect user interfaces that are easy to navigate and minimize friction between the user and their goal, especially on a mobile device. According to a Google research study, only 9 % of mobile site or app visitors will stay if they don’t quickly find what they’re looking for.
Reducing Friction to Align with User and Business Objectives
A large part of our perception of a brand stems from our past experiences with that brand. Google’s study shows that customers who don’t find immediate utility in an app or a website will not only move on, but 28 % will also be less likely to ever buy products from that company in the future. In 2014, Harvard Business Review explored ways to quantify the impact of customer experience on sales and wrote that repeat customers who had the best past experiences spend 140 % more compared to those with the worst past experiences.
28 % of users who don’t find immediate utility on a website will be less likely to ever buy from that company in the future.
On one hand this presents a challenge: anticipating users’ needs and presenting them with the right calls to action at the right time is difficult. On the other hand this is also a brand-building opportunity. According to Google’s data, 29 % of mobile users will immediately switch to another service if they can’t find the answers they’re looking for. Being that other, more nimble service can be a very cost-effective way to build up a following for a new service. Traditional branding activities are expensive and time-consuming. For a new service or a startup, there is no better brand experience than beating the competition at giving the user what they’re looking for, at exactly the right time, in an easy-to-understand interface.
Now, there is an entire field of expertise dedicated to reducing friction between the user and their goal on a website or within an app, and to increasing the share of users who reach a specific goal: Conversion Rate Optimization (CRO). Much of the methodology behind CRO can be thought of simply as a means of improving user experience by giving the user what they need as efficiently as possible — or before they even know they need it.
For example, to be able to quickly respond to user needs, the primary calls-to-action that a site or app provides should be prominent. It’s important to make room for these primary calls to action and to steer away from cluttering pages or screens with secondary content that might result in user confusion or drive traffic away from sales-generating activities.
However, identifying the key user needs and figuring out what form those key calls-to-action should take — how they look, how they’re phrased — is difficult to accomplish with just a designer’s intuition. In other words, it’s critical to have an understanding of how people use your service, because otherwise you’re just taking shots in the dark. Brand-building interactions shouldn’t be decided by a designer’s gut feeling or the highest-paid person’s opinion. That’s where CRO tools like page analytics, user research and A/B testing come into play.
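As a minimal sketch of how such an A/B test is often wired up in practice (the function and variant names here are hypothetical, not from any specific CRO tool), each visitor can be assigned to a variant deterministically by hashing their user ID together with the experiment name. This gives a stable, roughly even split without storing any per-user state:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically assign a user to an A/B test variant.

    Hashing the user ID with the experiment name means the same
    visitor always sees the same version, and different experiments
    get independent splits.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]
```

Because the assignment is a pure function of the inputs, it can be computed on any server or client without coordination; only the conversion events need to be logged centrally.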
In Service of the Brand — or the Brand Aesthetic?
Methodically testing different variations of an interaction flow or of individual page elements — headlines, value propositions, buttons, pricing charts, testimonials — is integral to generating more sales or leads without investing directly into more traffic. Good brand guidelines allow for enough leeway that as long as brand values aren’t being contradicted or users misled, you can try different color combinations, different copy treatments, different page layouts or a number of other factors that might affect the way users understand and experience the interface.
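To judge whether an observed difference between two variants is real or just noise, a standard tool is the two-proportion z-test. The sketch below (with hypothetical names and made-up example numbers) computes the relative lift and a two-sided p-value from raw conversion counts:

```python
import math

def ab_test_significance(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test for an A/B test.

    conv_a / n_a: conversions and visitors in the control variant,
    conv_b / n_b: conversions and visitors in the treatment variant.
    Returns (lift, z, p_value), where p_value is two-sided.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF (erf-based)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    lift = (p_b - p_a) / p_a
    return lift, z, p_value

# Example with invented numbers: 10 % vs 13 % conversion on 1,000 visitors each
lift, z, p = ab_test_significance(100, 1000, 130, 1000)
```

In this invented example the treatment shows a 30 % relative lift at p < 0.05, which a CRO practitioner would typically treat as significant; commercial testing tools automate exactly this kind of calculation (often with sequential or Bayesian refinements).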
Outdated brand guidelines — or ones primarily designed for print applications, even if recent — might be at odds with conversion optimization best practices, or might even lead to usability or legibility issues in a digital context. An overinsistence on consistency across channels might likewise lead to problems when trying to optimize the user experience in each channel. For example, I’ve worked with visual identities that have been built around several primary colors, of which only one is meant to be used at a time per page. This makes sense in a PowerPoint or print context, but if a website has to be built in a way that avoids using multiple colors per screen, those colors can’t be used to convey meaning or to highlight important elements. This has the effect of making it harder for a user to identify key links and buttons. It also severely cripples a designer’s ability to test different approaches to conversion rate improvement.
An overinsistence on consistency across channels might lead to problems when trying to optimize the user experience in each channel.
Conversion rate optimization is a continuous effort, and means methodical testing and research to align a service better with its business objectives (or key performance indicators — KPIs). Done right, this translates to more efficient service for users. If this process is hamstrung by outdated or draconian brand visual guidelines, then a question arises: are we trying to serve the brand aesthetic or the business goals? If the brand guidelines prevent testing different solutions for reducing friction between the user and whatever our business model wants them to do, the brand guidelines have failed the brand.
Compromise is inevitable when designing services. Delivering the best possible outcome means knowing which compromises to make. An insistence on very strict branding consistency and guardianship across all channels can prevent testing highlight colors or easily scannable copy. An online service like a website or an app is not meant to define what the company brand should be. It should deliver on the best expectations the company’s customers have of the brand.