Space, the final frontier, where human and machine need to merge. Photo: "Mark Lee tetherless and free", NASA Commons, https://www.flickr.com/photos/nasacommons/7611655852/

The age-old humanist divide in modern tech

A lot of my work revolves around finding ways to apply technology so that it helps people be productive and effective in their lives. It’s a fun challenge that gives you the opportunity to learn new things. However, underneath it lies an age-old struggle between two traditions: human vs machine.

My journey into interaction design and the application of technology began before I knew there was a word for it. My interest intensified when I began studying for a degree in cognitive science. For those of you not familiar with “cog sci”, it’s a fascinating multi-disciplinary field that draws on computer science, psychology, linguistics and anthropology. For me, it was a perfect fit. It was narrow in one sense, yet so full of ideas and concepts to explore. Many of these ideas belong to traditions dating back hundreds if not thousands of years.

As with any field, there are always subfields and political movements within it. In cog sci, you can generally distinguish between classical and modern cog sci. Classical cog sci was born out of information theory and early computer science. It aimed to model human reasoning and thinking using formal rules and languages (program code). It was somewhat reductionist in the sense that it considered the brain a closed system that we could understand if we just controlled the stimuli. The “brain in a vat” is a metaphor for this branch of cognitive modeling.

In the mid ‘80s, another branch gained popularity. By this point, more than just computer scientists were paying attention to the possibilities that the ideas within cog sci opened up. Until then, the field had focused mostly on the problem of translating thinking into formal rules (A.I. research), generally taking a “the brain is a computer” approach. This new branch borrowed ideas from sociology and anthropology and wanted to redefine cognition from something that happens inside your cranium to something that happens in a context. Researchers following this strain focused on how we use the environment around us and adapt it to simplify tasks, ideas that have since been applied in human-computer interaction. Others looked at how groups of individuals solve problems together and at the role of culture in thinking.

Having entered the field two decades after this schism, I had the opportunity to consider both approaches. Having been raised by parents who majored in social work and sociology, I found the contextual and cultural understanding of cognition more “true” and natural. But this new contextual approach brought its own set of challenges. It opened the field up to things that are hard to quantify or measure with objective precision. It required researchers to adopt new methods to collect and analyze data. It’s a “fuzzier” way to understand reality.

On the other hand, with the influx of thinkers from other schools of research and different traditions, cog sci got a major brain boost. The field had traditionally been dominated by computer scientists, logical theorists and linguists, and this new, more qualitative form of cog sci required the sociologists and hard-core reductionists to find a common understanding. For me, this blend of ideas from both traditions is what makes the field so interesting and dynamic.

It attracts people who have a complex view of reality and who know that the world is far more complex than a mathematical formula can make it out to be (this is why the TV show Numbers has always seemed totally ridiculous to me), but who nonetheless want to apply formal methods to make sense of it. They don’t lazily give up and label it “unexplainable”; they use all the tools they have, computational models and statistics as well as qualitative research methods, to try and make sense of it.

Another result of this dual thinking is that the engineering sciences and the humanities have a shared arena in which to battle it out. In my view, one of the major dividers between engineers and humanists is their view of the ideal.

In the world of the engineering sciences, the predictable and modellable is the ideal. There’s the idea that a logical and internally consistent design is superior because of its inherent qualities. As I think of this, I realize this tradition likely runs back to ancient Greek philosophy and Plato’s “ideas”. It has put down roots in engineering, and it’s a way of thinking that everyone who studies engineering, or some form of it, gets influenced by.

On the opposite side of the fence are the humanists. They believe that humans, despite all our inconsistencies and irrationalities, are the ideal. This idea probably comes from religion and the common theme that humankind was made in (insert any deity)’s image, but it has since evolved to be a part of the humanist tradition. It’s a view that most people who work within the human sciences, the arts and likely also medicine share.

This old battle between the idea of the perfect machine and the divine creature has repercussions to this day. One example that comes to mind is usability and user experience, where these two traditions need to work together to create something that is technically and humanly performant. It explains why it’s difficult for some engineers to understand what makes a good user interface and why systems designed by engineers tend to be counter-intuitive. I believe it’s a direct result of the idea of the ideal, or the sense of the perfect, and that it operates largely at a level of cognition we’re not actively aware of. Engineers design user interfaces based on their ideas of what should be logical and what constitutes a beautiful solution. Unfortunately, circuit board designs and button toolbars are very different kinds of designs.

Put this in contrast to the humanist approach. The humanist starts by understanding who the end user is, what drives them and what pain points in their life the system or solution helps them address and solve. They assume as little as possible and try not to apply any preconceived notion of the ideal. They believe that context is key and the system cannot be designed in isolation.

Since technology is born out of the engineering tradition, it comes as no surprise that the first home computers had massive manuals. This wasn't seen as an issue; after all, how can you expect to use such a powerful machine without first understanding it? Most people would indeed consider it their responsibility to learn how to use something. After all, the machine is a beautiful piece of cutting-edge technology. As close to perfect as possible. We are only human. Flawed.

It's this machine perspective that leads engineers to design systems that require tome-like manuals for users to read and learn in order to perform the simplest of operations. Technology has for a long time been dominated by that perspective and tradition. It was the issues this philosophy caused that led to the rise of the fields of interaction design and usability. If it weren't for the cross-over between the humanities and the engineering sciences, chances are user experience would still be abysmal. In fact, it would likely take the form of users adapting to a system, rather than systems being designed for ease of use.

Another manifestation of this struggle between ideas is the trend towards data-driven design. I'm not referring to research-driven design, but to the practice of essentially using log files to infer conclusions. Data-driven design is fantastic when done right. Unfortunately, more and more companies invest vast amounts of money in ways to collect data points and hire statisticians and mathematicians to make sense of all the numbers. In my view, this is just the latest example of how our belief in technology diminishes the role of human judgment. But as with most trends, I am sure we will see a bounce back.

The problem with this type of data-driven design is that this kind of data doesn’t offer context. Data by itself is meaningless, and interpreting data without context leads to even more meaningless statistical models. You can find out exactly how many people clicked on a specific link, but unless you talk to them, you will never find out why. Similarly, you can A/B test and re-arrange a website or app to maximize a certain numeric outcome, but the numbers will never reveal anything about what it means to people.
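To make that limitation concrete, here is a minimal sketch of the arithmetic behind a typical two-variant A/B test. The click counts are invented for illustration; the point is that the math can tell you which variant “won”, but nothing in it explains why people clicked.

```python
from math import sqrt, erf

# Hypothetical click data for two variants of a page (numbers are made up).
visitors_a, clicks_a = 5000, 400   # variant A: 8.0% click-through
visitors_b, clicks_b = 5000, 465   # variant B: 9.3% click-through

p_a = clicks_a / visitors_a
p_b = clicks_b / visitors_b

# Pooled two-proportion z-test: is the difference likely due to chance?
p_pool = (clicks_a + clicks_b) / (visitors_a + visitors_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-tailed

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p_value:.3f}")
# The output says *that* B outperformed A with statistical significance.
# It says nothing about *why* people clicked, or what the page means to them.
```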

Value is subjective and about meaning. I don’t think we can put numbers on the meaning of the human experience or what delights us. At least not yet.

The solution: a dual approach. A combination of qualitative humanist research to understand the people you’re creating meaning for, and the use of quantitative data as a factor to consider when evaluating options.

I sometimes say that I picked the perfect field to study considering the massive interest we see today in natural language processing, machine learning, computer vision, artificial intelligence, semantic search and human-computer interaction. All of these are subfields within cog sci. For me, it was about following my heart. And I discovered a microcosm that somehow encapsulates science at large.

It would seem that modern cog sci, with its duality of traditions, a crucible for ideas from two rivers flowing to the same ocean, offers many of the tools and ideas needed to solve a great deal of today’s problems.

Photo by: http://nos.twnsnd.co/image/114412649575

This article was updated on 2020-03-01

Jakob Persson

Your host. A founder and entrepreneur for 20 years. Working as a freelancer, consultant and co-founder of a successful digital agency has shaped his skills in, and insights into, business. Jakob blogs at blog.bondsai.io, speaks at events and consults with agencies and freelancers on growing and developing their companies. He holds degrees in media technology and cognitive science.
