
Inviting Feedback

We are designing increasingly complex ecologies, involving unpredictable human interactions, emotional experiences and intangible services, as well as the physical objects that we pick off the shelf.  We don’t just design forms, we now design platforms.

Published on Feb 04, 2018

In order for us to understand and affect the systems in the world around us, we need information. We need feedback. This was one of Norbert Wiener's primary principles when he coined cybernetics in 1948.  One fundamental element of a feedback system is the connection of elements across many systems in many scales and dimensions; a process of cause-and-effect.  The deliciously enticing idea that drives many scientists, engineers and Singularitarians is that this causality in the world is, as Joi Ito puts it, “‘knowable’ and computationally simulatable”.  There is an answer and it can be found.
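As a purely illustrative aside (mine, not Wiener’s or the essay’s), the sketch below shows the principle in a few lines of code: a system that measures the gap between where it is and where it wants to be, acts on that gap, and folds the effect of that action back into its next measurement. The goal, starting state and gain values are arbitrary assumptions chosen only to show the loop closing.

```python
# A minimal sketch of a Wiener-style negative feedback loop.
# The numbers are illustrative assumptions, not anything from the essay.

def regulate(current, goal, gain=0.5, steps=10):
    """Repeatedly measure the error against the goal and correct a fraction of it."""
    history = [current]
    for _ in range(steps):
        error = goal - current             # feedback: compare outcome with intention
        current = current + gain * error   # act on the measured error, not a fixed plan
        history.append(current)
    return history

if __name__ == "__main__":
    # e.g. a room at 15 degrees steered towards a 20-degree goal
    print([round(t, 2) for t in regulate(current=15.0, goal=20.0)])
```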

Artists and designers, on the other hand, deal in argument.  The paradigm shift in physics that inspired Wiener was a consideration of “not one world, but all the worlds which are possible answers to a limited set of questions concerning our environment” [1].  This is the ambiguous world in which designers live.  There is rarely ‘one right answer’ in design; a design is a selection of decisions made during its development, and it is the designer’s job to create a suitable narrative explaining these deliberations.  Not a design solution, a design rationale.  Despite our best efforts to gather as much information as possible about the context for which we are designing, a design ultimately depends on our judgment—our limited, biased, idiosyncratic judgment—as designers to select, synthesize and integrate that information into a new design.  We don’t create design solutions; we present arguments for one design out of the messy plethora of alternatives.

This is why designers do indeed “embrace the unknowability—the irreducibility—of the real world that artists, biologists and those who work in the messy world of liberal arts and humanities are familiar with”.  And why, as cyberneticist and design theorist Ranulph Glanville argued, “cybernetics and design are two sides of the same coin” [2].

Rejecting answers

Wiener and his contemporaries sowed the seeds that grew into a multidimensional network of cybernetics researchers, systems theorists and designers that influenced and built upon each other’s work. Whatever the future of design and computing may be, it has roots in cybernetics. Let us glance back into the tangled history so we can, as design theorists and cyberneticians Hugh Dubberly and Paul Pangaro put it, “understand better where we are, how we got here, and where we might go” [3].

Hugh Dubberly and Paul Pangaro's cybernetics and design family tree. See an interactive version here: http://cybergraph.dubberly.com/

Cybernetics fundamentally changed how we thought about ‘answers’.  In the years around the publication of Wiener’s The Human Use of Human Beings, previously siloed fields of thought began to intermingle, challenging the conventional separation of the ‘hard’ sciences and engineering from the ‘soft’ psychology and sociology in a new, broadening and evolving style of research that historian of science Andrew Pickering calls the “mangle of practice” [4].  This was the original Resistance of Reduction!

This messiness and openness to new possibilities was naturally appealing to designers.  One in particular served as an invaluable translator of these principles of cybernetics and systems into design practice: Horst Rittel.  A professor of the Science of Design at Berkeley, Rittel considered how cybernetics could be used to understand the design process.  Despite being a mathematician, he did not try to ‘solve’ the design process.  In fact, he understood that the design process often feels like aiming at a shifting target, as designers move their focus from sharp details to the blurry big picture and back again, understanding what the problem actually is before responding to it [5].  This was especially true when designing for what philosopher and systems scientist C. West Churchman described as the “class of social system problems which are ill-formulated, where the information is confusing, where there are many clients and decision makers with conflicting values, and where the ramifications in the whole system are thoroughly confusing” that were emerging in the mid-20th century [6].  These are what Rittel coined ‘wicked problems’, the term we are all familiar with today.

Design, in Rittel’s view, then became a process of reasoning, of argumentation.  He led the field with a new (and now common) approach: creating a ‘design rationale’ [7].  Together with other designers and theorists (including Christopher Alexander and John Chris Jones) at a London conference in 1962, Rittel considered the changing role of the designer in the increasingly complex, post-industrial world and advocated for cross-disciplinary systems thinking as part of a more scientific design process. The Design Methods movement was born [8] and continues to influence us today. If you have ever tried to solve any problem using a framework such as ‘define-prototype-evaluate’, you too have benefitted from Horst Rittel’s cybernetic view of design.

Design as conversation

These theories were not without their valid criticism, however, and—as with the best creative movements—it was the founders who led the discussion on how the principles of the movement had been misinterpreted.  The systems conceptualised in cybernetics were drawn into the enticing light of rationalism and control.  Models for design applied to other fields were popularised by cognitive scientist Herbert Simon’s book The Sciences of the Artificial [9], which expanded ‘design thinking’ into a rigorous framework for decision making.  Mechanistic in its execution, the design process became a flow chart of narrow questions and prescribed answers; a language that could fit neatly into the discrete digital language that machines understand.  Just as Taylor’s analysis and ‘programming’ of humans working in factories meant that their roles could easily be transferred to machines [10], computer scientists integrated these computational models of the design process into software programs.  The messy creative process carried out by unpredictable, idiosyncratic designers could be managed, it could be controlled, it could be automated.

However, cybernetics was never intended to be a reductive clockwork mechanism for calculating and controlling complex systems. What connects Wiener and Rittel—and what Ito’s manifesto highlights—is an acceptance that the complex systems we are designing—systems containing illogical, biased, idiosyncratic human beings—are messy and chaotic.  The reductive first-order theory of cybernetics would have us identify a goal and design a solution for it.  But this abstraction of the creative process into a ‘science of design’ is, design historian Victor Margolin [11] writes, “too remote from actual design situations”.  As Kees Boeke showed us in his imaginative 1957 graphic essay Cosmic View: The Universe in 40 Jumps (made famous through Charles and Ray Eames’s short film Powers of Ten), the world is a more multifaceted place than that, made up of many overlapping systems and with many stakeholders, including ourselves, with different goals.  “We're inside of what we make, and it's inside of us,” Donna Haraway astutely observes in her Cyborg Manifesto [12].  Designers needed a way, as Boeke put it, to “develop such a wide, all-embracing view”.  And it was anthropologist Margaret Mead who, in her 1968 essay Cybernetics of Cybernetics, recognised that cybernetics could be the framework to do it: applying the notion of feedback to our own worlds as a way to question all of those subjective viewpoints and account for ourselves within those systems [13].

Thus second-order cybernetics was born and embraced by the design community as an epistemology for understanding how we are part of complex systems and how to account for our role in them.  We accepted that Rittel’s wicked problems could not be solved or optimised, but they could be ‘tamed’; we could create an argument for the most appropriate design solution out of the myriad of alternatives.  The design process could also have no single definition or universal method.  As Richard Buchanan, professor of design, management and information systems, writes in his article Wicked Problems in Design Thinking, “design eludes reduction and remains a surprisingly flexible activity” and thus should be recognised more as a “liberal art of technological culture” [14].

As this new second-order cybernetics approach to design evolved, conversation became the practice of the day.  Conversations between the many stakeholders in the design process to deliberate and frame the many subjective perspectives to, as design theorist Klaus Krippendorff writes in The semantic turn: A new foundation for design, “make sense of things” [15]. Conversations with those for whom the design was intended, to guide interactive, behavioural, contextual, empathic, speculative, interpretive, analogous, extreme, responsive, and generative research explorations. Conversations such as these helped to uncover underlying motivations and inspire unexpected responses, which became the cornerstone of the Human-Centered Design and participatory design methodologies championed by, among others, anthropologist and designer Jane Fulton Suri at design consultancy IDEO [16].  Conversations with ourselves, or a “reflection-in-action”, as philosopher and urban planner Donald Schön calls it, to help us question our own models and interpretations as we experiment in our design processes [17].  Conversations with machines, such as the pages of transcripts of anthropologist Lucy Suchman’s conversations with a Xerox PARC photocopier, that provided the intellectual bedrock for the human-computer interactions that we are so familiar with today [18].

This complex social history of cybernetics and design is the backdrop for where we are today in design.  Just as it influenced modern computation pioneers Doug Engelbart and Alan Kay as they created the first real-time interactive systems, the principles of cybernetics continue to impact our practice of design.  We are designing increasingly complex ecologies, involving unpredictable human interactions, emotional experiences and intangible services, as well as the physical objects that we pick off the shelf.  We don’t just design forms, we now design platforms.  Wiener’s cybernetic notion of feedback has never been more important; nor has Rittel’s understanding of design as argumentation, as a conversation.

If conversations, then questions

Dubberly and Pangaro beautifully summarise this conversational role that cybernetics can have in design today [2]:

If design, then systems: Due in part to the rise of computing technology and its role in human communications, the domain of design has expanded from giving form to creating systems that support human interactions; thus, systems literacy becomes a necessary foundation for design.

If systems, then cybernetics: Interaction involves goals, feedback, and learning, the science of which is cybernetics.

If cybernetics, then second-order cybernetics: Framing wicked problems requires explicit values and viewpoints, accompanied by the responsibility to justify them with explicit arguments, thus incorporating subjectivity and the epistemology of second-order cybernetics.

If second-order cybernetics, then conversation: Design grounded in argumentation requires conversation so that participants may understand, agree, and collaborate on effective action.

If the above is correct and design is a conversation—a reasoned argument through the field of knowledge associated with a situation—then, taking a Socratic view, a large part of this process is about asking questions.  As Fulton Suri [16] writes, these questions are “appropriately-tailored tools to apply at different points throughout the innovation process”; generative questions that reveal new insights and inspire new ideas; evaluative questions that help us learn about and refine our goals; and predictive questions that help us imagine the future of a particular design.  These are all questions that we are increasingly using intelligent machines to help us answer because, as Nicky Case writes in How to be a Centaur, “AIs are best at choosing answers. Humans are best at choosing questions.”

But, Case also comments, “What kind of questions should a human ask?”, since not all questions are equal.  Some guide us to ‘neat and tidy’ answers.  Some help us more deeply understand our existing knowledge.  Some even provoke us to question our own beliefs.  So far, our intelligently augmenting Human+AI centaurs have mainly dealt in the realm of ‘neat and tidy’ questions; questions such as “what possible solutions fit these goals & constraints?”.  The genetic algorithms and machine learning programs that can be used to diagnose a medical condition, design a chair, or create a ‘new’ work of art by an old Master painter use these directed questions to converge on a few specific, quantitatively better answers; a process not dissimilar to the solution optimisation of first-order cybernetics that designers rebelled against.  Collaboration systems designer Özgür Eris calls these types of guiding queries “deep reasoning” questions [19].

While machine learning algorithms march to find the one ‘true’ optimum, designers meander around actively looking for multiple peaks to climb, often building entirely new ones to explore as well.  We want to diverge away from the facts that are known and completely regenerate the landscape of possibilities around us.  Sometimes this is a necessity because, as CEO of IDEO Tim Brown writes in his recent essay on designing for augmented intelligence: “The right kind of data is not always available or useful or a smart shortcut when you’re trying to address multifaceted human issues.”  Eris describes the questions that allow us to do this as “generative design” questions.  Like the paranoiac-critical method that Surrealist master Salvador Dalí used to create his ambiguously interpretable images, these are the types of questions that prompt us to interrogate underlying assumptions, consider irrational analogies and associations, and reflect on our own subjective biases and psychological motivations; as Frances Hsu writes, a “systematic encouragement of the mind’s power to look at one thing and see another, and the ability to give meaning to those perceptions” [20].  These are the second-order questions that can help us tame wicked problems.
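To make the contrast concrete, here is a small, purely illustrative sketch (my own, not Eris’s, Brown’s, or anything from the essay) of the difference between converging on the nearest peak and meandering across a landscape of possibilities. The landscape function, step sizes and sample counts are arbitrary assumptions chosen only to show the two behaviours.

```python
# Toy contrast between 'deep reasoning' (refine one answer) and
# 'generative' exploration (keep several distinct peaks in play).
import math
import random

def landscape(x):
    """A bumpy one-dimensional 'design space' with several peaks."""
    return math.sin(3 * x) + 0.5 * math.sin(7 * x)

def hill_climb(x, step=0.01, iters=2000):
    """Converge: climb towards whichever peak is nearest the starting point."""
    for _ in range(iters):
        x = max((x - step, x, x + step), key=landscape)
    return x

def meander(n=20, low=0.0, high=4.0):
    """Diverge: scatter starting points and collect every peak they reach."""
    peaks = {round(hill_climb(random.uniform(low, high)), 2) for _ in range(n)}
    return sorted(peaks)

if __name__ == "__main__":
    print("single climb:", round(hill_climb(1.0), 2))
    print("many peaks:  ", meander())
```

The single climb always settles on one answer; the scattered starts surface several, leaving the choice between them as an argument for the designer to make.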

A new Forum to provoke conversation

“The fact that the Forum was round encouraged one kind of debate”, professor of Communication Fred Turner suggests.  Or, put another way by philosopher and pioneering media theorist Marshall McLuhan in his book Understanding Media: The Extensions of Man, “the medium is the message” [21].  Extending this idea, we can see that the interface through which we communicate shapes the types of questions we can ask. Whether, as Wiener feared, we too readily “accept the superior dexterity of the machine-made decisions without too much inquiry as to the motives and principles behind these” or whether, as Palfrey writes in his essay Line-Drawing Exercises: Autonomy and Automation, it is a “lack of trust [that] holds us back from embracing a world of machine-driven decision-making”, Pablo Picasso’s critique of computers holds: “Computers are useless. They can only give you answers” [22].  I do not know if machines will ever be able to answer the types of generative design questions that designers thrive on.  But can a second-order cybernetic view of design help us create new technologies and interfaces that encourage questioning, and hence help us explore the multifaceted arguments that will be key in tackling today's complex problems?

Interfaces that use this approach should allow us to ask not just ‘neat and tidy’ questions, but the abstract, seemingly unrelated ones that occur to us as well—especially at the nascent stage of the creative process.  We should also be able to interrogate the ‘answers’ these interfaces give to us; as professors of Computational Creativity Simon Colton and Geraint Wiggins [23] write, “the software should be available for questioning about its motivations, processes and products” but also be “an invitation to engage in a dialogue with the artefact and/or the creator and/or the culture and/or yourself”. These interfaces should encourage us to ask questions about our own designs, questions that act like sociologist Susan Leigh Star’s boundary objects supporting dialogue between disciplines [24], questions we didn’t think we had, questions that help us reflect on ourselves.  As Case writes: “We hoped for a bicycle for the mind; we got a Lazy Boy recliner for the mind.”  How can we create interfaces that become tools to help us exercise our questioning natures?

Artificial Intelligence Augmentation (AIA) may be one answer that moves us forward, but we should be careful not to fall into the same trap of a slow, controlled march towards reduction that we have experienced with first-order cybernetics.  We don’t want to end up lamenting, as Richard Lachman does in his article STEAM not STEM: Why scientists need arts training, that “I saw the best minds of my generation spend their lives optimizing microseconds out of their high-frequency trading algorithms, or devising routing-algorithms for drone-delivered burritos.”  What we have learned from our second-order cybernetic approach to design is that we are part of the system, one that now includes our machine collaborators.  Just as we apply the practice of conversation to understand the wide range of information related to wicked problems, including examining our own biases, these AIA tools should be about more than just outsourcing our cognition. How can they create new elements of cognition and, as creative coder Michael Nielsen writes, “change the thoughts we can think”?

Overly ‘user-friendly’ interfaces can contribute to this cognitive inertia.  “The purpose of the best interfaces,” Carter and Nielsen suggest, “isn’t to be user-friendly in some shallow sense. It’s to be user-friendly in a much stronger sense, reifying deep principles about the world, making them the working conditions in which users live and create.”  While part of the creative process does indeed need the competence and accuracy that these intelligent tools can provide, radical breakthroughs come only from challenging the existing principles in our fields.  The whip that lashes us today is the perceived need for interfaces to provide us with seamless and instantaneous ‘solutions’; mechanically asking Alexa to “do this for me” or Google to “find me the answer to this” and being immediately presented with responses from an index of possible answers.  But when those interfaces are what we use to inspire us, the design process “becomes synonymous with correction and selection, later with celebration; rarely with creativity”, contends computer art pioneer Brian Reffin Smith.

We don’t always need alignment from these new ‘smart’ tools we are creating; sometimes we need slow, convoluted, surprising friction.  We need to embrace interfaces that don’t have all of the answers.

Palfrey writes that “the inefficiency of the current system here seems to be a feature, not a bug”.  He was talking about human idiosyncrasy versus machine accuracy being a benefit when unexpected situations arise.  But we can also apply this attitude to how we design our interactions with tools.  Information theorists like Norbert Wiener and Claude Shannon conditioned us to think that tampering with a message will damage the information transmitted.  Jasper Bernes, author of The Work of Art in the Age of Deindustrialization, disagrees: while noise makes a message tend towards indecipherable chaotic entropy, it can also “create a margin of error in which creative interpretation and misinterpretation might thrive.”

Creative misinterpretation is often key to the design process.  Just as with Rorschach tests, we perceive in our surroundings the affordances that our intuition and emotions are attuned to at that particular moment [25].  A colleague mistakenly responding to an upside-down version of one of your sketches can be key in breaking out of your ‘local optimum’ and serendipitously considering a whole new approach to your design problem.  From Sunspring’s machine-learning-generated sci-fi film script that was both nonsensical and profound, to the confusing yet thought-provoking list of unrelated interpretations that Google’s Quick, Draw! app applies to your sketch before guessing the ‘correct’ answer, the immaturity of our intelligent technologies can actually provide great creative inspiration.  The ‘answers’ we get from these interfaces may seem vague and hard to imagine, Nielsen acknowledges, but this is because these interactions allow us to invent “ways of thinking which haven't yet been invented”.

Renowned designer Kenya Hara writes in his book Designing Design [26]: “Creativity is to discover a question that has never been asked”. Rather than dictating queries to machines, how can we harness these principles of second-order cybernetics, of design as argumentation and conversation, to enable machines and their interfaces to provoke questions for us to answer?  Questions that may seem unrelated and strange but actually serve to reveal new connections, new insights, and new ‘answers’.  Questions that can help us understand and take responsibility for our roles in the design of our future, and therefore help us to ask better questions of the machines in our present.

Comments
Jeff Sussna:

It seems to me that this article doesn’t actually take 2nd-order cybernetics far enough. It still looks for “answers”, and “effective action” through “agreement”. I think the point of complexity is that we’re never done. We can agree, but as soon as we act the landscape has changed, and with it the discourse in which we are embedded. To me 2nd-order cybernetics means we never stop asking questions. I believe that’s the way we “tame” complexity, by continuously steering through it.

Joichi Ito:

I’ve heard Neri Oxman use the word “compositional” as something that can be assembled and disassembled. This feels related.

I also wonder if there is something to be said about things that work both forwards and backwards vs only in one direction.

Joichi Ito:

Interesting to look at the forms of machine learning, like probabilistic programming, that do more meandering while optimizing. Also, interesting that Martin Nowak, when he talks about evolutionary dynamics, always describes evolution as a search process and not something that is trying to solve something or find an equilibrium.