I recently attended a symposium on artificial intelligence and philosophy. The spectrum of viewpoints was dizzying. Where does AI fit in our taxonomy of things? What are the implications of this technology becoming possible, and what should they be? How can we even study something that does not yet exist? (Videos from the event are available if you understand Finnish and have 5 hours to spare.)

One term arose several times and deserves attention: the black box. Intelligent machines that we have no way of understanding, or, even if we could, ones that are intentionally obscured. Intuitively I think that if it isn’t by nature a black box, we can’t call it intelligent. But the cognitive scientists, at least, will probably disagree.

I suspect that most of the intelligent machines people have in mind are really simple. Just by knowing the time, the location, and a bit of history, a mobile device can make simple but very good guesses about what you want or are going to do next. Or when you have many address book contacts in common with someone, it’s a good guess that you might know each other. You can’t take advantage of a smartphone and stay under the radar.
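The contact-overlap guess can be sketched in a few lines. This is a toy illustration with invented names and an arbitrary threshold, not any vendor’s actual algorithm:

```python
# Toy "people you may know" heuristic: suggest a connection when two
# users share enough address book contacts. Data and threshold invented.

def suggest(user_contacts: dict, user: str, threshold: int = 2) -> list:
    """Return people the user might know, ranked by contact overlap."""
    mine = user_contacts[user]
    candidates = []
    for other, theirs in user_contacts.items():
        if other == user or other in mine:
            continue  # skip self and existing contacts
        overlap = len(mine & theirs)
        if overlap >= threshold:
            candidates.append((other, overlap))
    return [name for name, _ in sorted(candidates, key=lambda c: -c[1])]

contacts = {
    "alice":   {"bob", "carol", "dave"},
    "eve":     {"bob", "carol", "frank"},
    "mallory": {"zed"},
}
print(suggest(contacts, "alice"))  # → ['eve'] (shares bob and carol)
```

Nothing here is “intelligent” in any deep sense; it is a counting rule over data you handed over yourself.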

Often these algorithms are not much more complex than a modern car’s windscreen wiper, which decides it’s time to wipe based on how wet a single simple sensor says it is. (My car is not very good at deciding, and I’m probably oversimplifying, but anyway: a simple correlation triggers a simple action.) And although the matter is disguised in stupidly hard-to-read language, many of these things are even patented, i.e. public. They are black boxes only for those who are not “skilled in the art”. But the intelligence here is only a perceived one, a mind trick.
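Reduced to code, the wiper’s whole “mind” is one comparison. The sensor name, units, and threshold below are all invented for illustration:

```python
# The wiper's entire decision logic: one reading, one threshold,
# one action. Threshold and units are arbitrary.

WET_THRESHOLD = 0.4  # hypothetical sensor units

def should_wipe(rain_sensor_reading: float) -> bool:
    """A simple correlation triggers a simple action."""
    return rain_sensor_reading > WET_THRESHOLD

print(should_wipe(0.7))  # → True
print(should_wipe(0.1))  # → False
```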

Hybrid groups, composed of humans and specialised machine learning algorithms working in concert, are a further step in this evolution. In the debate about AI in science, professors Hannu Toivonen and Petri Ylikoski envisioned a research group where an AI does the time-consuming but relatively mundane tasks and humans do the real science. Roughly translated from the discussion, you could call it Artificially Aided Intelligence. But for many scientists it is still a futuristic idea.

In the business world, you don’t need to imagine. Digital marketing, for instance, is already done by hybrid teams. An AI, sometimes advanced, sometimes still quite rudimentary, does the mathematical heavy lifting, and marketers rely on its insights to deliver a message. All members of the group are indispensable. And as a collective they have at least one characteristic of an intelligent entity: intent, which in this case is to sell you stuff.

This is also where we start to get into trouble with black boxes. The people responsible for these hybrid groups are not ready to blindly accept evaluations and autonomous decisions from a digital black box, any more than they would from a human member. Data scientists and engineers have a hard time selling AI solutions unless they can explain the decision logic in layman’s terms. But an intuitively understandable explanation is often quite difficult, if not impossible, to give.

When learning, the machine decides for itself what is important and what is not. The most useful algorithms are programmed to learn from events as they happen. We can be completely open about what the algorithm is and which sources of data we will use to teach it. We can set bounds on the set of possible outcomes, but we don’t know what is going to happen. If we did, the algorithm, along with many other things, would be much simpler.
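A minimal sketch of what “the machine decides for itself” means, using a textbook online perceptron on invented data. The code is fully open, yet which feature ends up mattering is determined by the events it sees, not by the programmer:

```python
# Online perceptron: weights are updated after every event, so the
# data, not the programmer, decides which inputs become important.
# All events below are invented for illustration.

def train_online(events, lr=0.1, epochs=20):
    """events: list of (feature_vector, label) pairs, label in {0, 1}."""
    w = [0.0] * len(events[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in events:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Feature 0 perfectly predicts the label; feature 1 is pure noise.
events = [([1, 1], 1), ([1, 0], 1), ([0, 1], 0), ([0, 0], 0)]
w, b = train_online(events)
# After training, the weight on feature 0 dominates: the algorithm
# discovered on its own which input was relevant.
```

Everything about this box is inspectable, yet before training we could not say which weights it would settle on; with messier, live data, neither could its author.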

Still, it’s not black magic. The future of AI might lie in entities that are completely synthetic: they have their own hierarchy of needs, and they will set their own, alien goals. I doubt it will happen any time soon, but I don’t doubt that a general-purpose AI will one day exist. Once we get there, Dr. Michael Laakasuo estimated, a superintelligence surpassing ours might emerge in as little as 18 months. (He did not project from the present, as I mistakenly wrote in my notebook and an earlier version of this post.) If he’s right, it will be very, very fast and a little bit scary. But in the meantime, let’s not be afraid of the black box. It’s black, but not as black as you might think.

Pirkka Kärenlampi

Pirkka is a senior software consultant and the CEO of Creacomp.

He likes asking questions more than stating facts, simplicity more than complexity, quickly testable hypotheses more than carefully laid out plans. He’s here to make things happen.
