Intuition is an interesting concept, and I believe that it’s a bit hard to really make sense of for people who don’t consider themselves intuitive. At least it didn’t make a lot of sense for me a year or two ago.
I suppose many think that intuition is something you are born with… some people just “know” certain things without being able to reason them out, and other people have to conduct an elaborate analysis of the facts in their minds to end up with the same conclusion. If that’s the way you think about it, you might believe that intuition is something of an unfair advantage.
Another widespread position seems to be that intuition is very risky… after all, intuition doesn’t give you the certainty that logical reasoning can give you, right? So perhaps if you go by that idea, you might say that it’s better to not use intuition at all.
I think that the answer is somewhere in between, as it often happens to be… and I’m going to tell you how intuition became a natural thing for me, even though I wasn’t exactly born with it, nor did I think it made sense to trust in it. But now I do have it, and I do trust it, because I use it in a way that I’m confident in. And don’t worry, I’m not going to cite the usual hogwash about left brain versus right brain… I’ll just explain a useful way of looking at intuition, and I’ll also waste a few words on how important I think it is for knowledge engineering.
While I grew up, intuition pretty much didn’t exist for me. I was socially awkward (feeling out of place and being bullied didn’t help) and I spent pretty much all of my time with computers. Computers aren’t known for their intuitive powers (except, of course, when they know just the right moment to crash so that you lose as much of your precious work as possible)… they are just all about cold, hard logic. I could work with that.
I knew about intuition but I actually believed in an even blend of the two positions outlined above: I thought that intuition was something you either have or don’t, that I didn’t have it, and that reason was actually much better anyway.
One thing that changed my thinking was that I started to wonder where mastery comes from, for example in programming. There’s a very, very small number of people who are really seriously extremely good at programming. The overwhelming majority of people who try it never get any farther than cobbling together things other people have done without really understanding how they work.
Many people say that programming needs certain cognitive skills that many people simply don’t have, and that it’s not possible to teach those skills. I used to think the same. At some point in my life, however, I started wondering whether being lauded as extremely intelligent by random other people (I’m not too shabby at programming, as it turns out) really means all that much. I mean, sure, it’s easy to just assume that the Extremely Superior Intellect™ you were gifted with is what makes you so great at everything and just leave it at that. But eventually I started doubting all allegedly foregone conclusions of that kind… and I started considering other ways of looking at what I could and couldn’t do.
I’m deliberately overstating the intuition-free approach here. My purely rational argument for that is that caricature makes it a hundred times easier to see the important part… and it vastly increases the chances of seeing the important part in more realistic situations later on.
A big part of programming is understanding what’s going on, and knowing which ingredient to fit where. Programming is a lot like cooking: you can just throw together a couple of ingredients and the result invariably does something, but it’s not necessarily useful or tasty. Or perhaps it’s useful but it’s so hard to chew that your teeth hurt after you’ve finished eating the program. Oh wait, I’ve gotten my metaphors all mixed up there.
The way to fail at cooking or at programming is to try and remember elaborate sets of rules, obtained from an authority in the field or by field tests. Let’s see… if the steak has this kind of sizzle, go to step 47a where you turn it over; if it’s the other kind of sizzle, increase the temperature by five percent and continue in section 87; if it turns this shade of red, call the fire department, and so on. You could easily fill several volumes with instructions like these, and even if you were extremely quick at looking them up while cooking your masterpiece, you’d be limited in two ways: firstly, if something happened that wasn’t quite covered by one of your rules, you’d be stuck; secondly, improving more than a single rule at a time is very tricky when you’ve got them all laid out in a huge tangled mess… and anyone who has ever had the delightful experience of working on multidimensional non-linear optimisation knows that optimising components one at a time is not among the better optimisation strategies, and it almost never leads to the best overall solution.
That leaves us with two tangible limitations of an approach based purely on logic and reasoning:
- The set of rules you have to maintain in your mind gets larger and larger and harder to manage as your experience increases.
- An elaborate set of rules leaves you completely unable to deal with any case that isn’t exactly covered by one of the rules.
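To caricature those two limitations in code: here is a minimal, entirely hypothetical rule-based “cook” (the rules and situations are made up for illustration). It handles exactly the cases it has rules for, and nothing else… every new situation means yet another entry in the table.

```python
# A toy rule-based decision-maker: exact-match lookup over a rule table.
# The rules below are invented for illustration, not real cooking advice.
RULES = {
    "steak sizzles softly": "turn it over",
    "steak sizzles loudly": "lower the heat",
    "steak turns dark red": "call the fire department",
}

def decide(situation):
    # Any situation not literally covered by a rule leaves us stuck.
    action = RULES.get(situation)
    if action is None:
        raise LookupError(f"no rule covers: {situation!r}")
    return action

print(decide("steak sizzles softly"))        # a covered case works fine

try:
    decide("steak smells slightly burnt")    # an uncovered case fails hard
except LookupError as err:
    print(err)
```

The only way to “improve” this cook is to keep bolting on more entries, which is exactly the tangled mess described above.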
I suppose it’s obvious that nobody really does anything that way, at least not if they do it very well. So what do they do instead? Well, let me tell you a little thing about the human brain.
The brain is completely unparalleled in learning things in a “fuzzy” way. What does that mean? Given a large base of experience, it somehow makes up an internal representation of what to do when, and it doesn’t provide you with a formal list of rules for that. You know that this happens whenever you do something without thinking about it. For example, try explaining to someone else how to tie shoelaces, step by step. I don’t know very many people at all who can do that without stumbling. Why? Because they know how to do it so well that they never need to think about it. They just go through the motions.
In fact, how about an even more pervasive example. Go ahead and lift your arm, now. Done? Did you have to think about how to coordinate the various muscle contractions involved in such a complex operation? Probably not. Your brain did it for you.
Neither of those are intuition, of course. They’re just learning. However, I’ll argue that intuition is more of the same.
Let’s look at really good programmers again. What are their defining attributes? I think it’s fair to say that one very important one is that they have a lot of experience. And quality experience, too, right? If you have a lot of experience with how to do something poorly, that doesn’t magically make you a wizard.
What don’t programmers do? See last section: they don’t have a huge Book of Programming Rules in their heads that they consult for everything they do. That wouldn’t work very well. So somehow they learn to “just know” what to do in any given situation. How? The same kind of learning I just described: the kind where you can make a decision without thinking about it, and perhaps you’ll even be hard-pressed to explain just why you decided that way and not a different one that might have been okay, too.
So at some point I discovered I actually had plenty of intuition but I had never noticed it. I don’t think anyone will seriously claim that I was born with intuitive knowledge about how to write computer programs… not least because, to my knowledge, none of my ancestors ever programmed anything in their lives. So I had to have developed it somehow, right?
At this point, the way I understand intuition is actually so extremely simple it seems almost silly to break it down… but here you are, with programming as the example:
- Initial state: you don’t know anything about programming. You decide that you want to learn how to do it.
- You connect up the programming disk to your neural connector interface. Just kidding.
- You look at how other people do it… starting with extremely simple examples. I think it’s very important that these examples are inherently practical. I don’t really think it’d be a good idea to learn to cook with example recipes that don’t result in anything edible. This is, I think, the single most prevalent reason that people fail to learn how to program… either the first examples aren’t practical enough or they aren’t simple enough. It’s not a big secret that a weak foundation isn’t something that you can build on very well, so there you are.
- You try something simple yourself and get good feedback on it. (The quality of the feedback is probably the second most important factor here… often people learning how to program get no feedback at all, or feedback like “this piece here is wrong but I ain’t telling you in what way, or in a very cryptic way at best” — computers are very good at producing this very frustrating kind of feedback.) Good feedback goes like “this piece here is wrong. You might think it does X but it actually tries to do Y, and that doesn’t work because of Z. Try this here instead. Can you see how it might work better?”
- Based on the feedback, you start trying more and more different things, and gradually step up the complexity and the range of things you experiment with.
- Repeat the cycle a few thousand times.
- You’re now a good programmer.
Did you notice that I cleverly hid a few pieces of rationality in there? And they’re very important pieces of rationality. One is: keep the amount of new information manageable at all times. Another: don’t fly blindly; get feedback and incorporate it into your experiments.
That’s how useful intuition develops: by controlling the amount of new information you subject yourself to at any given time, and by testing your results.
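As a toy sketch, that loop can be compressed into a few lines of Python. The “skill” here is deliberately trivial (guessing a hidden slope) and all the numbers are invented for illustration; the point is the shape of the cycle: try something, take the feedback, incorporate it, step up the difficulty, and repeat many times.

```python
import random

random.seed(0)
TRUE_SLOPE = 3.0   # the "skill" to be learned (a made-up toy task)

w = 0.0                                   # initial state: you know nothing
for difficulty in (1, 2, 4, 8):           # gradually step up the complexity
    for _ in range(500):                  # repeat the cycle many times
        x = random.uniform(-difficulty, difficulty)  # try something
        guess = w * x
        feedback = TRUE_SLOPE * x - guess            # concrete feedback
        w += 0.01 * feedback * x                     # incorporate it

print(round(w, 2))   # → 3.0: the "intuition" is now a learned parameter
```

Notice there is no rule table anywhere: the learner ends up with an internal representation (here, just one number) rather than a list of instructions it could recite.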
It’s simple and I believe it’s the only way to get really good at something. I also believe that all of us use intuition… perhaps it just takes a little nudge to notice it. Consider yourself nudged.
Okay, I said it’s simple, but I didn’t say it’s easy. There are a few problems you can run into.
If you ignore my suggestion to avoid building huge sets of rules in your mind, intuition isn’t going to happen. Trying to be in full control of all the data is counter-productive; do so at your own peril. Of course you’ll discover individual rules all the time while you learn to do something, and it’s just as counter-productive to prevent that from happening… but don’t try to stick those rules to the front of your brain. You’ll always be able to recite them, but they won’t magically integrate into an intuitive understanding of whatever it is you’re learning. Just look at something until you really understand it, then move on to something else… that way it’s pretty difficult to mess it up.
The much more complicated problem is with bad feedback. If you’re new to something, you can probably distinguish useful from useless feedback, but it will be much harder to distinguish correct from incorrect. The most efficient solution is to get someone who is really good at doing whatever it is you want to learn, and who is also really good at explaining things (yeah, yeah, I realise I’m not telling you anything revolutionary right now). Alternatively you can just check multiple sources and hope for the best.
The best inoculation against bad feedback, however, is to keep looking at contradictory ideas. Whenever you’re good at something, it’s tempting to dismiss other people who go about it in a different way. Here’s a good rule of thumb: when two people are good at something but they disagree about the way it should be done, it’s very likely that both positions have something going for them, and there’s some kind of trade-off involved, and probably also some ego. Look at both sides, consider both their advantages and their disadvantages. There are few things that have no advantages, and there are few things that have no disadvantages.
Keep your options open, and you’ll keep getting better even if you sometimes learn wrong things out of bad feedback.
Software agents making use of artificial intelligence or some other newfangled approach to decision-making and prediction aren’t generally connected with the word “intuition”. After all, artificial intelligence is often defined as being about agents that act rationally, e.g. in “Artificial Intelligence: A Modern Approach” by Russell and Norvig.
Intuitive reasoning versus intuition-free reasoning is a bit like artificial intelligence versus rule-based systems (I bet you didn’t see that one coming). I don’t need to talk about the limitations of rule-based systems here… after all, I already did when I talked about cooking.
Artificial intelligence, especially if you use stuff like artificial neural networks, amounts to giving your agent a lot of experience with (hopefully) good feedback and it ends up constructing a decision algorithm that is not easily expressed in terms of rules. That’s totally the same thing as intuition, isn’t it?
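As a minimal, hypothetical sketch of that idea: a single perceptron can learn the AND decision purely from experience and corrective feedback, without ever being handed the rule explicitly. The training data and learning rate below are made up for illustration.

```python
# Labelled "experience": inputs paired with the desired decision (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # the internal representation starts out empty
b = 0.0
for _ in range(20):                      # a few passes over the experience
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out               # the feedback signal
        w[0] += err * x1                 # adjust weights toward the feedback
        w[1] += err * x2
        b += err

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in data])   # → [0, 0, 0, 1]
```

What the agent ends up with is a handful of weights, not an “if both inputs are 1, answer 1” rule… the decision is encoded, not enumerated, which is the whole analogy to intuition.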
The punchline, I guess, is that you can only act rationally through intuition, except in the simplest situations. Now go out and throw this sentence at random people and see a good percentage of them laugh at you and call you insane.
By the way, intuition is just as important for knowledge engineers, obviously. Without some intuitive understanding of KE algorithms and concepts and the structure of synthetic universes as perceived by agents, I doubt you can do even a half-decent job. My personal approach to knowledge engineering has always been, and will continue to be, about developing intuitions. I’m not all that great at it, but I am confident in the madness to my method. Or was that the other way round?
The following is a selection of user-submitted comments from the previous iteration of this website.
Nice one! There is one good thing about incoherent feedback about something being wrong: it rarely appears when indeed everything is right. More and more, I merely notice that there is an error message but don’t even bother to read it; instead I just go back to the code I just wrote and look for errors. Often enough, intuition leads me to the right place. More fun, though: finding out that the error you just fixed wasn’t the one reported. Just knowing something is wrong already helps immensely.
Yeah, that kind of binary feedback is often good enough… but not when you’re first learning to do something. As you get better at something, at some point you can get by with less accurate feedback… just like at some point it gets easier to deal with more added complexity.
Oh, and you can actually get bad feedback even if something works. Computers don’t tend to give you that kind of feedback, but humans do.