I’ve been building bots for a while now. A lot of bots. Bots to remind me of my favorite parts of cult classic movies. Bots to represent some aspect of myself out on the Internet. Even bots that act more like admins than bots. And the biggest lesson I’ve learned is that training bots is hard.
Understanding intent and context from a single sentence is much more complicated than I had previously imagined, and it has given me a much more forgiving perception of things like Siri. “I want that” may seem like a commonplace phrase, uttered a million times a day and almost universally understood, but to a computer it’s a borderline useless statement. Sure, it understands you desire something, but what is it? And you want it how? To have it: “I want that for my collection.” To be rid of it: “I want that gone.” It’s tough for a computer to grasp what you’re saying, because you’re not giving it enough context.
Of course, the reverse can cause massive headaches as well. “I want that over here next to the chair where it can get the most light.” What do you want? There is so much context that the intent is almost completely forgotten by the end of the sentence.
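To make the problem concrete, here’s a toy sketch (not any real NLU library, and not how Siri works) of an intent parser that needs both an action and a target before it can act. All the names and the tiny word lists are illustrative assumptions:

```python
# Toy intent parser: it can only act when an utterance supplies
# both a resolvable target ("that") and a word that pins down the action.

ACTIONS = {"gone": "remove", "collection": "keep"}  # illustrative mapping

def parse_intent(utterance):
    """Return (intent, target), or None when the sentence lacks context."""
    words = utterance.lower().strip(".!?").split()
    target = "that" if "that" in words else None
    intent = next((ACTIONS[w] for w in words if w in ACTIONS), None)
    if target is None or intent is None:
        return None  # too little context: the bot knows you want something, not what to do
    return (intent, target)

print(parse_intent("I want that"))                    # None
print(parse_intent("I want that gone"))               # ('remove', 'that')
print(parse_intent("I want that for my collection"))  # ('keep', 'that')
```

“I want that” expresses desire but gives the parser nothing to act on, which is exactly why it’s a borderline useless statement to a machine.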
All of this brings me to David, my son (yes, we were super original in naming him). David is a toddler, and as much as I cherish every moment of him discovering the world around him and all that it means, he’s also discovering his boundaries: safety-wise, socially, and within the specific rules of our own family dynamics (aka, because I said so).
When reading about toddlers, as I’m sure at least a couple of you have done, one of the main pieces of advice for explaining things to a toddler is that a little goes a long way. Don’t overwhelm them with details. So if my son is super interested in the knobs on the stove, don’t tell him, “Don’t touch those, they control the flow of gas, and although turning it on by itself poses little danger, a small spark could set off the accumulation of gas and burn down the house.” Instead tell him, “Don’t touch those, dangerous.” He doesn’t need the extra stuff. If I tell him all that extra stuff, he might only understand “don’t burn down the house,” which is important, but many steps come before the house actually burns down. The first of those, touching the knob, is the dangerous one, and the one he can control.
The reverse is also true about the stove. If I were to simply say, “Don’t,” he might not understand what I’m telling him not to do. Don’t touch the knob? Don’t be in the kitchen? Don’t be in the kitchen, touching the knob, while you’re wearing pajamas? Kids don’t know, and how would they? They are barely able to understand how not to fall over, so how could they understand the context of my statement? And training an AI is the same.
When you first get going, be prepared to be discouraged. Your AI will infer things that aren’t even remotely correct. It won’t do this because it’s dumb; it will do it because it’s actually smart, and it’s still learning. And if you constantly start over because you’re surprised by the results, you will never get anywhere.
The toddler books will say don’t be surprised if you have to tell your child the same thing 20 or more times before they get it. They hear you the very first time, and more often than not they will remember and understand the basic principles of what you’re telling them, but they will need multiple repetitions, each with slightly different context, to fully grasp the entire statement. Your AI will do the same. The key is to stay consistent with your training, and to look for new ways to say the same thing so you can give it more context.
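That idea of consistent repetition with varied phrasing can be sketched in code. This is a minimal, illustrative bag-of-words matcher, not a real bot framework; the class name and training data are my own assumptions:

```python
from collections import Counter

class TinyIntentMatcher:
    """Toy matcher that gets better as you add paraphrases of an intent."""

    def __init__(self):
        self.vocab = {}  # intent -> Counter of words seen in its examples

    def train(self, intent, utterance):
        # Same intent, different words: every paraphrase widens the vocabulary.
        self.vocab.setdefault(intent, Counter()).update(utterance.lower().split())

    def classify(self, utterance):
        words = set(utterance.lower().split())
        scores = {intent: sum(counts[w] for w in words)
                  for intent, counts in self.vocab.items()}
        best = max(scores, key=scores.get, default=None)
        return best if best is not None and scores[best] > 0 else None

matcher = TinyIntentMatcher()
# Consistent training, slightly different context each time.
matcher.train("stove_warning", "don't touch the knob")
matcher.train("stove_warning", "the stove is dangerous")
matcher.train("stove_warning", "stay away from the burner")

print(matcher.classify("is the burner dangerous"))  # stove_warning
print(matcher.classify("hello world"))              # None
```

A single training example would miss the word “burner” entirely; it’s the repeated, reworded examples that let the matcher recognize a phrasing it has never seen verbatim.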
Be patient. Like all things, you have to crawl before you can walk.