Jan Krüger's blog

Creative Engineering and randomness

Problems in knowledge engineering

· Read in about 10 min · (2020 words)
Knowledge Engineering

Given the “right” philosophical attitude about how the world works, the ultimate goals of knowledge engineering, namely obtaining, processing, using and making accessible all kinds of knowledge, can definitely be achieved. This set of bold goals, however, presents researchers with very difficult problems. All attempts that exist today are restricted to small classes of knowledge.

What needs to be done

One of the major applications of knowledge engineering is generating knowledge (that is, meaningful information) from input. This input may be arbitrary data, sometimes incomplete or noisy, and often confusing. A classic example is a black-and-white photograph of a zebra standing in front of a similarly striped wall of some sort or, in this case, a highly artistic drawing of the same (sadly, I have neither the zebra nor the wall to take a photograph of):

[Drawing of a zebra standing in front of a similarly striped wall]

Brain vs. computer, round one: autonomy

I’m not saying it will be impossible to build a system that recognizes zebras, but it will probably need to be tailored to this exact purpose: a program that can do little more than decide whether a given image shows a zebra in front of a wall. The human brain does the recognition job quite easily, along with other jobs that are not exactly easy to do with machines.

At some point, many knowledge engineers who are presented with the task of building pattern recognition software will probably get rather jealous of evolution, which managed to create a system that is astonishingly flexible and manages all kinds of learning without obvious struggle. If only we had the source code…

This desire becomes most obvious in the wish for so-called strong AI, or more specifically artificial general intelligence (AGI), i.e. a machine that can do everything the human brain is capable of. We are nowhere near that, though, so research typically investigates weak AI, which is about solving domain-specific problems (although some people do research AGI directly).

Supervised learning

The usual way of going about that involves setting up some kind of exciting algorithm and then feeding it with training data (examples of inputs along with the desired outcome of the recognition task). This is called supervised learning.
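To make that a little more concrete, here is a minimal sketch of supervised learning using scikit-learn. The data and labels are entirely made up for illustration; the point is only that the algorithm sees example inputs together with the desired answers.

```python
# Minimal supervised learning sketch: the algorithm is given example inputs
# together with the desired answers ("labels") and fits a model to them.
from sklearn.neighbors import KNeighborsClassifier

# Toy training data: two numeric features per sample, plus the desired label.
training_inputs = [[0.1, 0.2], [0.0, 0.3], [0.9, 0.8], [1.0, 0.7]]
training_labels = ["not zebra", "not zebra", "zebra", "zebra"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(training_inputs, training_labels)

# The trained model can then (with luck) generalize to inputs it hasn't seen.
print(model.predict([[0.95, 0.75]]))  # -> ['zebra']
```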

This approach actually works quite well for many things. Even with very limited experience in machine learning, I managed, with the assistance of a few other people, to write software that performs speaker recognition. I can’t give you hard numbers about its accuracy because the numbers we had were restricted to a miniature set of test files, so they didn’t really say much, but within that test set it was very good. Still, we had to research domain-specific methods to extract the most meaningful features of the human voice from speech files, and even more effort went into fine-tuning parameters, so most of the work was ours and not the machine’s.
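As an illustration of what such domain-specific feature extraction can look like: one common choice for voice features (not necessarily what we used back then) is the MFCC representation. The sketch below uses the librosa library and a hypothetical file name.

```python
# Sketch: extracting MFCC features, a common choice for describing the
# characteristics of a voice. "speech.wav" is a hypothetical file name.
import librosa
import numpy as np

signal, sample_rate = librosa.load("speech.wav", sr=None)
mfccs = librosa.feature.mfcc(y=signal, sr=sample_rate, n_mfcc=13)

# Averaging over time yields one fixed-length feature vector per recording,
# which can then be fed into a classifier like the one sketched above.
feature_vector = np.mean(mfccs, axis=1)
print(feature_vector.shape)  # (13,)
```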

Unsupervised learning

Right, but what if you do not know the right answers yourself? This is where _unsupervised learning_ comes in, which means finding groups of similar samples in the input data. If the unsupervised learning system is given inputs which can be separated into three different groups, chances are it will learn to sort inputs into three categories (this is called _clustering_). The disadvantage, of course, is that the clusters are not labelled, so someone else gets to make sense of them. Also, since nobody knows just how many “real” clusters there are, you can’t be sure the system won’t settle on the “wrong” number of clusters.
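Again a minimal sketch, this time of clustering with scikit-learn on made-up data. Note that we have to pick the number of clusters ourselves, and what comes back is just a set of unlabelled group indices.

```python
# Clustering sketch: the algorithm groups similar samples, but we must tell
# it how many clusters to look for, and the clusters come back unlabelled.
import numpy as np
from sklearn.cluster import KMeans

samples = np.array([[0.1, 0.2], [0.2, 0.1],
                    [5.0, 5.1], [5.2, 4.9],
                    [9.8, 0.1], [10.0, 0.3]])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
assignments = kmeans.fit_predict(samples)

# The output is just cluster indices, e.g. [0 0 1 1 2 2]; what the clusters
# actually *mean* is still up to a human to figure out.
print(assignments)
```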

Reinforcement learning

Now suppose you want a system to learn on its own, because it needs to be autonomous. This will happen in a trial-and-error process: the system is faced with a situation, decides on an action and observes the outcome (and, specifically, the quality of this outcome). But how will the system decide how good the outcome of its actions was? It can’t, because it does not actually understand the meaning of the task it’s performing. The solution is to supply it with a goal function. The goal function maps each possible outcome to a score value, and the task of reinforcement learning is to optimize the system’s actions with respect to that goal function. And who do you think will be responsible for designing this goal function? That’s right.
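Here is a toy sketch of what that looks like in code, with an entirely made-up environment: the system tries actions, the hand-written goal function scores the outcomes, and the action values gradually converge. The notion of “good” comes from the designer, not from the system.

```python
# Sketch of trial-and-error learning against a hand-written goal function.
# The "environment" and the scores are invented for illustration; the point
# is that a human, not the system, decides what counts as a good outcome.
import random

ACTIONS = ["left", "right", "wait"]

def goal_function(outcome):
    # Designed by a human: maps each possible outcome to a score.
    return {"crash": -10.0, "nothing": 0.0, "goal_reached": 10.0}[outcome]

def environment(action):
    # Toy world: "right" usually reaches the goal, "left" usually crashes.
    if action == "right":
        return "goal_reached" if random.random() < 0.8 else "nothing"
    if action == "left":
        return "crash" if random.random() < 0.8 else "nothing"
    return "nothing"

# Learn a value estimate for each action by trial and error (epsilon-greedy).
values = {action: 0.0 for action in ACTIONS}
counts = {action: 0 for action in ACTIONS}
for step in range(1000):
    if random.random() < 0.1:              # occasionally explore at random
        action = random.choice(ACTIONS)
    else:                                  # otherwise pick the best-looking action
        action = max(values, key=values.get)
    score = goal_function(environment(action))
    counts[action] += 1
    values[action] += (score - values[action]) / counts[action]  # running average

print(values)  # "right" should end up with the highest estimated value
```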

The conclusion

These observations allow only one conclusion right now: even weak AI does not work without human intervention in most realistic cases. Winner of this round: brain.

Explanation?

It is difficult to make accurate general statements about AI. Is it actually possible to design an AGI? Probably, because if the brain can do it, logic suggests that it can be imitated. Is it likely that this will happen? That’s next to impossible to say. I believe that the answer is no.

Strong intelligence in its existing implementations (humans) is extremely complex. In fact, nobody really understands how the brain works. Now, is it likely that anybody will ever fully understand the brain? No, because that would require a single brain to be capable of understanding every aspect of itself. In other words, the brain would need the capacity to describe itself, which is logically impossible (just like no solid box can fit inside itself). The only way the brain could ever be fully explained is through the collaboration of a team of researchers. Even then, though, I am very skeptical about the odds of success; have you ever seen a team of substantial size that can communicate well enough to be seen as a properly distributed “hive mind”?

It is far more likely, then, that a working AGI will be discovered by coincidence, just like many other scientific breakthroughs were coincidental. Humans are pathetically bad at estimating the odds in complex problems, so I will not presume to be able to make an accurate prediction. Anyway, it is my personal belief that this will not happen for a very long time, at the very least not during my lifetime.

And now?

For this reason, like many other people, I will focus on implementing weak AI to solve problems. A number of successful approaches in weak AI are not well understood either, but at least they work (some surprisingly well) and can be made sense of to some degree.

In the context of my general topic of interest, I even believe that AGI would come with its own set of difficult problems, ones that may be even harder to control than those occurring in weak AI. I will go into more detail in a follow-up that presents the human side of the scale.

Comments

The following is a selection of user-submitted comments from the previous iteration of this website.

Nice article! It should be mentioned, though, that the brain comes with some amount of preprogramming as well. Take deaf children who have never heard a spoken word in their lives: they will still develop some kind of (sign) language that has grammatical rules much like spoken languages.

Also, if you compare the machine to the brain, please compare it to an infant’s brain. An adult brain has had the advantage of many years of access to all sorts of “training data” and has also received feedback from other humans; that’s a typical part of learning. While the human brain is preprogrammed to some extent (and learning probably begins long before birth), many of the things we are capable of have been taught. I don’t see why we need to make it harder for computers by not teaching them as well.

The ultimate goal is probably some software brain that, running on a computer inside a robot with physics and sensors similar to those of human beings, is given the freedom and the time to “grow up”, just like children do, so that it learns many of the things that are completely normal for human beings. Still, we would have to allow for some differences in world view. Just as people with disabilities perceive the world differently, so will a robot. If we could simply replace a brain with a computer, making it run on whatever the body provides and connecting it to the complete set of sensory input, that is, the nerves, things might look a bit different.

In the end, I think, AGI will fail mainly because it doesn’t have nearly as much input, or as great a variety of input, as humans do.

Dennis


In response, let me whip up my personal idea of a model of how motivation works on a low level. I think that motivation is a critical factor in development. If you are not motivated to achieve anything, you will never learn how to achieve anything. After all, why should you? There’s nothing to win from it. All that’s left for you to do is perform random actions or echo the things you’re observing.

So, how do you get motivated? You need a goal function, i.e. something that gives you a general direction. I agree that this is pre-defined in humans, and it is based on physical sensations. There are some sensations you would rather have and some you would rather avoid. On a slightly higher level you’ve got emotions, which some people argue are translated internally into the same signals (which is why you often get strong physical sensations when you’re having strong emotions of some sort).

On a neurophysiological level, you’ve got a huge network of nerve cells in which you want some parts to be active and others to be inactive. This is a fairly vague goal, and I think it is difficult to model using the typical artificial neural networks everyone uses. After all, these are simply functions in which it is clearly defined which neurons are inputs and which are outputs, and what you do is “teach” them what kind of output should be generated for a given input. Quite different, isn’t it?

So in my opinion, artificial neural networks are not suited for developing AGI. There are other things the brain does that artificial neural networks currently cannot, such as re-wiring and development of new cells, possibly according to some sort of strategy. These things are not well understood at all. Additionally, artificial neurons are dumbed-down copies of biological neurons. Real neurons translate inputs into outputs according to an arbitrary function, whereas an artificial neuron typically combines its inputs into a weighted sum that is passed through a linear or sigmoid activation function to decide whether the neuron fires.
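For comparison, here is roughly what such a standard artificial neuron looks like in code. The numbers are made up, and real neurons are of course far more complicated than this.

```python
# Sketch of the standard artificial neuron: a weighted sum of the inputs,
# passed through a sigmoid activation function.
import math

def artificial_neuron(inputs, weights, bias):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))   # sigmoid activation

# Made-up numbers: the output is a value between 0 and 1 that downstream
# neurons would treat as this neuron's level of activity.
print(artificial_neuron([0.5, 0.2, 0.9], [0.4, -0.6, 1.2], bias=-0.3))
```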

I think we need a different model of the brain before we can ever hope to succeed in the quest for AGI. Artificial neural networks do have competition, mainly from the area of statistics. I believe that statistical approaches are great tools for weak AI but will not manage to achieve the degree of adaptability that the brain possesses. Another thing that seems to be the latest hype is the so-called Hierarchical Temporal Memory, but that again seems to be tailored to simple pattern recognition (and I haven’t heard anything concrete about it so far, except for buzzwordy marketing).

So one hurdle that I see is finding the right kind of approach to the problem in the first place. This includes finding a model that can grow itself in a meaningful way (even if we do not understand the meaning). The next would be to find a suitable goal function, possibly based on pre-defined pain/pleasure signals. Another would be what you said: supplying a world that is complex enough to generate enough useful knowledge for the machines to process (and that in itself should be a huge project).

To me it seems like we cannot “invent” AGI unless we copy ourselves (or find some other way to achieve AGI by accident). If we actually manage to do so, I imagine that this AGI will suffer from the same insufficiencies as humans do (as stated at the end of the post, I will elaborate on that in a follow-up), and I can’t help but wonder if the quest to copy ourselves is worth it in the end.

Jan