Surprising ways AI technology is reshaping the traditional Engineer role

There are a handful of technologies with the potential to upend not only existing business dynamics, but also the way we conduct our entire lives. Innovations such as blockchain, VR/AR, and quantum computing will certainly become an important part of our day-to-day lives in the coming decades. However, no technology carries as much revolutionary potential as modern Artificial Intelligence and, more specifically, Artificial General Intelligence (AGI).

When I attended Columbia University for my Computer Science degree in the year 2000, AI was a joke. There was no intelligence to it, not even artificial. At the time, what was meant by “AI” was a certain set of algorithms that could approximate solutions to problems that were not easily solvable by humans. On its best day, AI could emulate human actions in a very narrow context, like playing a game of chess. The trouble was, the methods by which these AI-wannabe programs arrived at an action had nothing in common with the way a human would go about it. Whereas a human combines reasoning, tactics, strategy, pattern recognition, and a slew of psychological and circumstantial factors to do something even as well-defined as moving a chess piece, a computer bluntly calculates as many game scenarios as possible and picks the one it likes best. There are no tactics, there is no strategy, there is no creativity — only calculation. This wasn’t intelligence, this was number-crunching.
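
To make the number-crunching concrete, here is a minimal sketch in Python of the kind of exhaustive game-tree search described above: a generic minimax routine. The callbacks `legal_moves`, `apply_move`, and `evaluate` are hypothetical placeholders you would supply for chess or any other two-player game; this is an illustration of the idea, not any particular engine’s code.

```python
# A sketch of the "bluntly calculates as many game scenarios as possible"
# approach: plain minimax search. There is no strategy or intuition here;
# the program enumerates positions and keeps the move with the best score.
def minimax(position, depth, maximizing, legal_moves, apply_move, evaluate):
    """Generic minimax over a two-player game supplied via callbacks."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position), None  # score from the maximizer's point of view

    best_move = None
    best_score = float("-inf") if maximizing else float("inf")
    for move in moves:
        score, _ = minimax(apply_move(position, move), depth - 1,
                           not maximizing, legal_moves, apply_move, evaluate)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move
```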

Fast forward a decade, and we begin to see the first inkling of intelligence from computer systems. This inkling arrived in the form of neural network algorithms, made effective and popular by Prof. Yann LeCun of NYU fame, among others. These algorithms are fundamentally different from the typical AI approaches of the time, in that they generate solutions not through brute-force search, but through a clever assembly of calculating nodes. The nodes may be arranged in several layers, they may have binary or analog properties, and there may be thousands of them or just a handful. The key innovation that made neural nets “go” was the training algorithm, backpropagation, that tunes the nets to generate useful results. This is the algorithm that, given a certain dataset, will wiggle the calculating nodes to and fro until they settle into a magical arrangement that can generate useful answers when queried with any analog of the dataset on which it was trained. In this way, a neural network trained on images of cats can recognize a cat in another digital image. This architecture has a faint echo of the way our own brain’s neurons activate in response to stimuli and generate the wonder[bread] we experience as consciousness.
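
To make the “wiggling” a little more tangible, here is a minimal sketch in Python with numpy: a toy two-layer network trained by gradient descent on made-up data standing in for cat/not-cat features. The dataset, layer sizes, and learning rate are all illustrative assumptions, not anyone’s production recipe.

```python
# A toy illustration: a tiny network whose "nodes" start out random and are
# nudged ("wiggled") by gradient descent until they settle into an
# arrangement that answers the training question.
import numpy as np

rng = np.random.default_rng(0)

# Made-up stand-in data: 4-feature vectors, labeled 1 ("cat") or 0 ("not cat").
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

# One hidden layer of 8 nodes, one output node.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(2000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1)          # hidden-layer activations
    p = sigmoid(h @ W2)          # predicted probability of "cat"

    # Backward pass (backpropagation): measure the error and nudge every
    # weight a little in the direction that reduces it.
    grad_out = (p - y) / len(X)
    grad_hidden = (grad_out @ W2.T) * h * (1 - h)
    W2 -= learning_rate * h.T @ grad_out
    W1 -= learning_rate * X.T @ grad_hidden

# Once the nodes settle, the network answers well on data like its training set.
p = sigmoid(sigmoid(X @ W1) @ W2)
accuracy = float(np.mean((p > 0.5) == y))
print(f"training accuracy after the wiggling settles: {accuracy:.2f}")
```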

We are getting closer.

One of the most fascinating aspects of the advent of neural nets and analogous approaches, more recently filed under the umbrella of “deep learning”, is that the resulting arrangement of nodes is opaque to the users of the algorithm. This is novel in the field of computer science and, I dare say, truly bothers the most diehard tech geeks. At the innermost core of geekdom is the ability to figure out and systematically understand how something works. Hackers, coders, and engineers all have this in common: they like to understand every element of a model, its components, and their relationships, and thereby be able to predict how it works and, crucially, what to tweak when it stops working. If even the tiniest part of a system is “magic”, it is no longer truly serviceable by a card-carrying techie. Deep learning models are 90% magic, and this is causing geekxiety. Although there is a burgeoning field of research with the explicit goal of understanding neural network models, it is nascent, and, at this point, it is unclear how much sense can be made of these innards, even in theory.

This is a problem.

The fact that we cannot fully understand the resulting arrangement of neural net nodes, nor how and why this arrangement does a good job of spotting a kitty in a picture, can be a problem when guarantees are important. It’s all fine and good if your goal is to auto-sort your photos into folders With and Without cats. If a stray cat finds his way into a Without folder, it’s not a big deal… the planet does not blow up. If, however, you are using the same algorithm in your self-driving vehicle to avoid a cat on the road, the consequences are more serious. And if we substitute a bicycle or a human being for the cat, suddenly a stray human in the “just keep driving” folder is a disaster. Understandably, we would like some guarantees from the model that it will never ever ever mistake a human for something else when the car is in motion. Unfortunately, it is exactly these kinds of guarantees that are, for all intents and purposes, impossible to obtain by analyzing the network of nodes directly. Certainly, the model can be tested extensively across a variety of conditions, which is how Uber, Tesla, and other self-driving pioneers are proving out their systems today. However, although we can arrive at a level of comfort with the performance, we have no guarantee that in some strange circumstance the system will not fail to “see” a person.
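
As a hedged illustration of why extensive testing yields comfort rather than proof, here is a toy sketch in Python. The scene generator and detector below are purely hypothetical stand-ins with made-up miss probabilities; the point is only that what testing produces is an observed miss rate, not a guarantee of zero misses.

```python
# A sketch of empirical validation: estimate how often a (hypothetical)
# perception system misses a person across varied conditions. A low measured
# miss rate bounds our confidence; it does not bound the system's behavior
# in the one strange circumstance the test suite never produced.
import random

random.seed(0)

HARD = {"night", "fog", "glare", "occlusion"}

def generate_scene(condition):
    """Hypothetical simulator stand-in: returns a scene and whether a person is in it."""
    person_present = random.random() < 0.3
    return {"condition": condition, "person": person_present}, person_present

def detects_person(scene):
    """Hypothetical detector stand-in: occasionally misses, more often in hard conditions."""
    miss_prob = 0.02 if scene["condition"] in HARD else 0.002
    return scene["person"] and random.random() > miss_prob

misses = trials = 0
for condition in ["day", "night", "rain", "fog", "glare", "occlusion"]:
    for _ in range(10_000):
        scene, person_present = generate_scene(condition)
        if person_present:
            trials += 1
            if not detects_person(scene):
                misses += 1

print(f"observed miss rate across {trials} person-scenes: {misses / trials:.4f}")
```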

It’s not all bad news.

If we consider an actual human driver, it puts the shortcomings of AI models in perspective. Certainly, no matter the driving experience and vision aids, people will, much like AI, make mistakes. Each year, over one million car accidents occur in the US alone. If anything, the fact that AI is a bit unpredictable may be a meta-clue that we are getting closer to the way people, rather than traditional computer programs, think and behave.

We are entering a new computing paradigm.

This one is tough for traditional techies to make peace with, so strap on your proverbial seat belt for the next part. Human behavior is not fundamentally “functional”; rather, it is “chaotic”. In math, a chaotic system is one in which tiny changes to the input, or to the starting conditions, produce wildly different results. Take a moment to google “Chaotic Pendulum” for a very visual example. That is to say, some tiny aspect of our experience, whether environmental, bodily, or psychological, can dramatically change our actions in the next moment. As demonstrated in The Matrix by Neo’s many alternative responses in his conversation with the Architect, people are unpredictable in important ways. The same should be true of any AI whose goal is to approximate human cognition rather than clever problem solving.
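
If you don’t have a pendulum handy, a few lines of Python make the same point. This sketch uses the classic logistic map, a standard textbook example of chaos swapped in here purely for brevity, to show two inputs differing by one part in a billion ending up in completely different places.

```python
# A quick numerical illustration of chaos: the logistic map x -> r*x*(1 - x)
# with r = 4.0. Two starting points that differ by one part in a billion
# diverge completely within a few dozen iterations.
def logistic_map(x, r=4.0, steps=50):
    trajectory = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)
        trajectory.append(x)
    return trajectory

a = logistic_map(0.200000000)
b = logistic_map(0.200000001)   # a one-part-in-a-billion nudge

for step in (0, 10, 25, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")
# By the last steps the two trajectories bear no resemblance to each other,
# even though they started out practically identical.
```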

Human behavior is emergent. It manifests through a great number of competing systems in our bodies and brains vying for control of our actions. From something as banal as “I really want to use the bathroom, but I think this lecture is almost over so I’ll hang in there…” to “I smelled bacon on my way to work and now I want a chicken club sandwich for lunch…”, our behavior is the product of base physiology, instincts, cultural notions, and some rational thinking. In people, this process is called cognition. Through cognition, we make sense of the world and of ourselves, and then we do (or intentionally don’t do) something about it.

Similarly, we should expect more sophisticated AI systems, what today are sometimes termed “Artificial General Intelligence” or AGI, to have emergent properties. Such a system will not have a function that codes responses for common greetings, for example. The AGI system, like a human, will learn acceptable forms of greeting in some context and reason its way to its own response to a “top of the mornin’ to ya” based on its collective internal state, and on whether it’s even paying attention to what you’re saying or is preoccupied with figuring out where to grab a chicken club sandwich.

This means that true AGI may not be fully programmable or configurable. Unlike Westworld, where we see admins adjusting personalities from “passive” to “aggressive” or from “dumb” to “smart”, future AGI may not have such levers, because, like the opaque neural network models of today, these aspects of its behavior may emerge from more fundamental structures that don’t translate directly into aggression or smartness. For example, we may be able to make an AGI more curious or more eager to respond quickly, but how that manifests in the AGI’s “personality” remains to be seen.

This is a very different way of thinking about constructing systems than anything we have seen through all of human history to this point. It’s a new type of engineering: part computer science and part psychology. Engineering and improving AGI will be an exercise in honing chaotic behavior toward desired goals, such as driving our cars, or, perhaps, not becoming our overlords. This new approach will require a new toolset, both theoretical and operational. The Turing underpinnings of modern computer science will no longer fully explain why an AGI decided to design a blue dress rather than a pink one, much as there is no mathematical formula that can predict why a human designer would make a similar choice.

To that end, the kinds of specialists tasked with realizing AGI will need to be a new type of technologist. This new breed will need to be comfortable not only with source code that either compiles or doesn’t, and, when it does, always calculates the needed results in exactly the same way, but also with conceptual models and strategies that, although not mathematically guaranteed, have been shown to deliver the desired results. Consider, for example, today’s psychologists. While we understand what the various parts of the brain do, we don’t have a guaranteed prescription for psychological treatment; rather, there are different treatment modalities. Some patients may benefit from talk therapy, others from yoga, and yet others may require medication. Similarly, we should expect that the next generation of AGI will have different modalities of development. Part of it will certainly require cold, hard computer code, but other parts will require architecting and directing the emergent behavior of the system as a whole.

The sooner we let loose the “psych-engineers” on AGI, the sooner we may see true digital cognition that not only plays chess, but also whistles a tune as it makes us some coffee because it happens to be in a good mood.
