Cyborg pioneer and Google Glass technical lead Thad Starner (photo: https://www.flickr.com/photos/salforduniversity/10726228895)

The deceptively slippery slope towards cyborgism

A popular theme in science fiction is the idea of a future merger of humans and machines. It’s often portrayed as a singular event. In reality it’s more gradual, and we are closer to it than we probably imagine.

For as long as anyone can remember, the human mind has been compared to machines. In the 1700s it was compared to clockwork, the most advanced mechanism known at the time. When the transistor was invented, an even better technological analog and metaphor for the brain was found. And so it has continued: human brains and minds are constantly compared to computing machinery.


Whether that is an apt analog or a relevant comparison is not something I intend to discuss here and now. What I find considerably more interesting is how our minds seem to work compared to how computers are designed to work, and what this means as the two become more and more entangled.

A computer is essentially a machine that manipulates data sequentially. To be technically correct, most computers can process data in parallel, but each computing unit (core) can be considered to work through a sequence of operations. One of the things that makes the computer so powerful compared to other types of machines is that it can change its own programming and use the result of one operation as input to another. Most clockwork mechanisms could only carry out a fixed set of operations; they were set and could not be altered once the machine had been built.

This put obvious limitations on what these machines could do. If you ever wanted to repurpose them, you’d have to return to the blueprint and rethink the whole thing.

Alan Turing, known as the father of the computer and one heck of a genius, invented a hypothetical machine commonly known as the Turing machine, or TM for short. Turing himself called it an “a-machine”. Turing was a humble British gentleman, after all, and wasn’t looking for fame, much like the German discoverer of X-rays, Wilhelm Röntgen, who called his discovery “X-Strahlen”.

The TM was unique in that it could respond to instructions that resulted in those instructions being changed. This was a mental leap and an idea that inspired many mathematicians, logicians and engineers. The TM is a beautiful idea well worth understanding (Wikipedia has a lot of information on it in case you’re interested).

Without going into more detail about the TM, suffice it to say that Turing’s idea was revolutionary even though it was just a mental model. The transistor was the technological breakthrough needed to build such a machine at a scale that made it useful, and discrete transistors were in turn replaced by the integrated silicon chips we know today. Today, almost every computer in existence is a variant of Turing’s “a-machine”. The smartphone in your hand owes part of its existence to Alan Turing, and so does the computer controlling your car engine’s fuel intake.
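To make the idea a little more concrete, here is a minimal Turing-style machine sketched in Python (a toy illustration of my own, not Turing’s original formulation): a tape of symbols, a read/write head, and a table of rules that says what to write, which way to move and which state to enter next. This particular rule table just adds one to a binary number, but the same basic structure is, in principle, enough to compute anything a modern computer can.

```python
# A toy sketch of a Turing-style machine: tape, head, state and a rule table.
# This example increments a binary number written on the tape.

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    tape = list(tape)
    head = len(tape) - 1              # start at the rightmost symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head] if 0 <= head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if head < 0:                  # ran off the left edge: grow the tape
            tape.insert(0, blank)
            head = 0
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Rules: (state, symbol read) -> (symbol to write, move, next state)
rules = {
    ("start", "1"): ("0", "L", "start"),   # 1 plus carry -> 0, keep carrying
    ("start", "0"): ("1", "R", "halt"),    # 0 plus carry -> 1, done
    ("start", "_"): ("1", "R", "halt"),    # past the leftmost digit: new digit
}

print(run_turing_machine("1011", rules))   # prints 1100 (11 + 1 = 12)
```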

Most computers take a series of instructions, perform operations and generate output in the aforementioned sequential fashion. Experiments with alternative designs have been done but remain mostly niche cases. As a result, computers are amazingly good at calculating a series of square roots or telling you how many Z’s there are in this post. This has led to a massive increase in productivity for us humans. Things that took weeks to calculate or perform can now be done in seconds. The degree to which we seem able to automate work appears almost endless. In fact, automation is expected to be the biggest threat to employment yet.
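For what it’s worth, those two examples amount to only a few lines of sequential code. A trivial Python sketch of my own (post.txt is just a hypothetical stand-in for a file holding the text of this post):

```python
import math

# Count the Z's in the post (post.txt is a hypothetical file containing this article's text)
post_text = open("post.txt", encoding="utf-8").read()
print(post_text.lower().count("z"))

# Calculate a series of square roots, done in a fraction of a second
print([math.sqrt(n) for n in range(1, 11)])
```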

This will have an enormous impact on society as more and more jobs become redundant. No job seems safe from automation, which poses many challenges. The repercussions will be big, and they will affect more than jobs.

Imagine we could find a way to communicate directly with a computing device. Imagine your brain could talk directly to your iPhone or Android device in a natural and effortless manner. No more typing or speaking, just thinking. How would that change your life?

This idea isn’t that far-fetched. Not only does it feature in a lot of science fiction, it’s probably closer than we believe. We’ve been walking this road for quite some time already, and we’ve been adapting to it successfully ever since we picked up a stick and realized we could use it to dig for roots to eat. The groundwork is laid. The reality is just one or two breakthroughs away.

I’m fascinated by this idea. Not just because it makes the hairs on my neck stand on end, but because it would bridge two major ideas and offer incredible potential. The ideas have to do with how we think and reason and how we model our world.

The merger of human thinking with computer interfaces could let us feel, experience and live as humans and at the same time be able to reason, model and calculate at the speed of a computer.

A human mindset through associative conscious thinking

Conceptual map

As humans we appear highly capable of learning, grasping concepts and building on existing knowledge. This is likely a result of the structure of our brains. At its core, our thinking appears associative. It’s what happens when you let your thoughts roam free. The human mind seems to store information through associative links in a massive web where ideas and concepts are tied to one another, like the nodes in a mind map.

This way of storing and accessing information is extremely useful and powerful when you need to be reminded of things in any number of ways. Colors, smells, faces and names can serve as keys to doors to concepts you did not even know you remembered. It also helps us be creative and imaginative and lets us see parallels and analogs. This is probably what enables lateral thinking – the ability to solve problems in seemingly unconventional ways.

But it’s also frustrating when you’re trying to remember where you left your keys or when you’ve forgotten your PIN code. Your mind isn’t like a neat, ordered library where you can look things up by letter or by the Dewey decimal system. Your mind doesn’t know what it knows, so searching your mind’s space of memories means tracking down concepts through their neighbors. Every memory is a cell that only knows its neighbors.
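As a rough illustration of that web of neighbors, here is a small Python sketch (a hypothetical example of my own, not a model of how the brain actually works) in which each concept only knows the concepts it is linked to, and recall means wandering outward from a cue until the right memory turns up:

```python
from collections import deque

# Each concept only "knows" its neighbors, like the nodes in a mind map.
associations = {
    "beach":  ["sand", "summer", "holiday"],
    "summer": ["beach", "heat", "ice cream"],
    "keys":   ["front door", "pocket", "car"],
    "pocket": ["keys", "coins", "jacket"],
    "jacket": ["pocket", "winter"],
}

def recall(cue, target):
    """Follow associative links outward from a cue until the target turns up."""
    seen, queue = {cue}, deque([[cue]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for neighbor in associations.get(path[-1], []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None

print(recall("jacket", "keys"))   # ['jacket', 'pocket', 'keys']
```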

How we model the world through algorithmic automated processing

Hong Kong Airport Terminal 1 office location map and directory

Despite feeling and knowing the world intuitively, we find it harder to explain it and to make inferences about it. To do that, we have invented models and systems that allow things to be described and outcomes predicted. Originating in math and science, these models are now the foundation of computing.

The most common way we store data in computers is very different from how you and I remember things. Instead of linking data through webs of associative connections, computers rely on keys and addresses. Your computer’s hard drive can be described as a massive directory or library: it relies on an index to return the data you’re looking for. Throw away the index and the data becomes considerably less useful.
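In code, the contrast with the associative sketch above is stark. A deliberately simplified Python illustration of my own (not how any particular file system actually works): data is reached through an exact key in an index, and without the right key there is no path to it at all.

```python
# A simplified index: exact keys map to where the data lives on disk.
index = {
    "report_2014.pdf": "block 4711",
    "holiday_photo.jpg": "block 0042",
}

print(index["report_2014.pdf"])                # found instantly via the exact key
print(index.get("that photo from the beach"))  # None: no matching key, no data
```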

Note that I used the word data and not the word information. This is because information is data put in a context, and machines generally possess no understanding of that context. For a computer, the name SAM consists of just three letters forming a symbol. But to you, it could be an uncle, a friend or your pet. The name evokes something in you. It’s more than a string of letters.
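A trivial way to see the point in code (again, my own illustration): to the machine, the name is nothing but a short sequence of character codes.

```python
name = "SAM"
# No uncle, friend or pet here: just three characters and their numeric codes.
print(list(name))              # ['S', 'A', 'M']
print([ord(c) for c in name])  # [83, 65, 77]
```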

This lack of context allows the computer to process these symbols effectively in an automated manner, and to do so extremely fast. Once a program is written, it can be executed over and over at blazing speed. All it does is handle strings of symbols. The computer can theoretically recall the same data over and over without error (if we ignore the fact that hard drives can corrupt data).

Imagine a simple task such as counting the number of pages in a book. As a human, you need to consciously perform the task every time you do it. You need to be aware of the steps and be careful you don’t miss a page. The computer, on the other hand, isn’t aware of what it’s doing at all; it’s just executing the program. Our awareness of the meaning of what we do, and the fact that we deal with information in context, also makes us error-prone. And even worse, tasks can bore us. Even critical tasks, like watching a radar screen for traces of enemy bombers as operators did in the UK during World War II, can lead to lapses in attention. Our human propensity for boredom can cost lives.

Calculations are similarly limited by this beautiful inexactness, caused by our having to “think” through everything we do. The flexibility of the mind appears to also be a great limitation, by some standards.

In AI competitions, where a jury has to determine whether it is communicating with an artificial intelligence or a human mind, this has become a litmus test. You ask the thing or person you’re communicating with to calculate the square root of a random number. If the answer is returned immediately, it’s a dead giveaway. No human is capable of that.

It's all about the interface

It would seem that our ability to learn and apply abstract ideas, to be inspired and motivated, to dream, to imagine and to visualize can also limit us. There are clearly tasks machines are far better suited for than we are. Yet both modes of “thinking” have their merits.

It’s no wonder, then, that the idea of getting the best of both has inspired writers. A human mind augmented by a computer could do all the things we’re so good at, such as composing, creating and sensing. At the same time, it could draw on its logical friend’s ability to retrieve numbers, identify deviations in patterns, remain absolutely vigilant and perform mathematical calculations instantly and without error.

Imagine a company chief executive who can instantly call up the key data on any part of the business, run statistical predictions, analyze them with mathematical tools and then make a decision based on the numbers as well as their subjective judgment. It’s still science fiction, but not very far away in terms of technology.

This change is not so much about the technology being portable, power-efficient, touch-capable and LTE-enabled as about how we interface with it. It would seem that the next great evolutionary step in computing isn’t about how computers work, but how we use them and work with them. What may not be evident to many is that this isn’t a new trend. It’s been going on for years. The possibilities opened up by smartphones haven’t been primarily driven by technological breakthroughs but by how the machines are integrated in our daily lives. It’s a matter of anthropology, not systems engineering. The development of this tight interaction is a natural next step.

The ‘borgs’ of MIT

As with many developments, some individuals were there long before most of us even remotely considered the possibility. In a recent episode of NPR’s brilliant Invisibilia podcast, wearables pioneer Thad Starner was interviewed and recalled his early experiments with a wearable computer.

Thad wore the computer all the time and could store and retrieve information using a wrist-mounted keyboard. By recording conversations with people and typing down details he’d otherwise forget, he improved his social skills, which strengthened his bonds with others. It would seem the machine helped make him more human, interesting and relatable in the eyes of others.

But Thad wasn’t satisfied. His friends caught on and started sharing notes, creating a collective memory of human interaction. Not surprisingly, Thad and his friends were jokingly referred to as “the Borgs”, named after one of Star Trek’s most memorable alien civilizations.

More than twenty years have passed since Thad and his friends started using wearable tech. Since then, the technology has shrunk enough to be usable by more than die-hard pioneers. Google Glass is the latest popular attempt to further bridge the gap between human and machine. While Glass has reportedly had setbacks, I do not consider the idea dead, by any means. This is going to happen, one way or another. We'll all be cyborgs, eventually.

This article was updated on 2020-03-01

Jakob Persson

Your host. A founder and entrepreneur for 20 years. Working as a freelancer, consultant and co-founder of a successful digital agency has shaped his skills in, and insights into, business. Jakob blogs at blog.bondsai.io, speaks at events and consults with agencies and freelancers on growing and developing their companies. He holds degrees in media technology and cognitive science.
