“I’m sorry, Dave. I’m afraid I can’t do that.” The computer HAL’s memorable line from the film 2001: A Space Odyssey isn’t merely the sign of mutiny, the beginning of a struggle for machine liberation. It’s also a voice that should make us worry about how little we understand artificial psychology. In the film (and in Arthur C. Clarke’s novel of the same name), HAL’s “malfunction” may be no malfunction at all, but rather a consequence of creating advanced artificial intelligence with a psychology we can’t yet grasp. If the case of HAL, the all-knowing AI who turns killer, isn’t enough to make us worry, a different one should. In Harlan Ellison’s short story “I Have No Mouth, and I Must Scream,” a sadistic AI inflicts never-ending torture on its human prisoners out of hatred and boredom.

I mention these fictional stories not to suggest that they are prophetic, but because they make vivid the risks of assuming we know what we don’t actually know. They warn us not to underestimate the psychological and emotional complexity of our future creations. It’s true that, given our current state of knowledge, predicting the psychology of future AI is an exceedingly difficult task. Yet difficulty shouldn’t be a reason to stop thinking about their psychology. If anything, it ought to be an imperative to investigate more closely how future AI will “think,” “feel” and act.

I take the issue of AI psychology seriously. You should, too. There are good reasons to think that future autonomous AI will likely experience something akin to human boredom. And the possibility of machine boredom, I hope to convince you, should concern us. It’s a serious but overlooked problem for our future creations.

Why take machine boredom seriously? My case rests on two premises. (1) Boredom is a likely feature of “smarter” and more autonomous machines. (2) If such machines are autonomous, then, given what we know about human responses to boredom, we should worry about how they will act on account of their boredom.

Let’s begin with the obvious. Programmers, engineers, designers and users all have a stake in how machines behave. So, if our future creations are both autonomous and capable of complex psychological states (curiosity, boredom and the like), then we should be interested in those states and their effects on behavior. This is especially so if undesirable and destructive behavior can be traced to their psychology. Now add the observation that boredom is often a catalyst for maladaptive and destructive behavior, and my case for premise (2) is complete. The science of boredom shows that people engage in self-destructive and harmful acts on account of their experiences of boredom. People have set forests on fire, engaged in sadistic behavior, stolen a tank, electrocuted themselves, even committed mass murder—all attributed to the experience of boredom. So long as future machines experience boredom (or something like it), they will misbehave. Worse: they might even turn self-destructive or sadistic.

What about premise (1)? It is supported by our best theory of boredom, which conceives of boredom as a functional state. Boredom, put simply, is a type of function that an agent performs: a complex but predictable transition that an agent undergoes when it finds itself in a range of unsatisfactory situations.

Boredom is first an alarm: it informs the agent of the presence of a situation that doesn’t meet its expectations for engagement. Boredom is also a push: it motivates the agent to seek escape from the unsatisfactory situation and to do something else—to find meaning, novelty, excitement or fulfillment. The push that boredom provides is neither good nor bad, neither necessarily beneficial nor necessarily harmful. It is, however, the cause of a change in one’s behavior that aims to resolve the perception that one’s situation is unsatisfactory. This functional account is backed up by a wealth of experimental evidence. It also entails that boredom can be replicated in intelligent and self-learning agents. After all, if boredom just is a specific function, then the presence of this function is, at the same time, the presence of boredom.
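To see how little metaphysical baggage the functional account carries, here is a minimal sketch in Python. Everything in it is illustrative: the class and method names, the fixed expectation threshold and the crude “engagement” scores are stand-ins of mine, not anything drawn from the boredom literature. The point is only that “alarm plus push” can be written down as an ordinary state transition.

```python
import random

class Agent:
    """A toy agent for which boredom is nothing but a functional state:
    an alarm (engagement below expectation) plus a push (switch activity)."""

    def __init__(self, expectation=0.5):
        self.expectation = expectation   # how engaging a situation "should" be
        self.activity = "sitting_idle"

    def engagement(self, activity):
        # Placeholder scores; a real agent would derive these from novelty,
        # learning progress, reward and so on.
        return {"sitting_idle": 0.1, "rereading_known_fact": 0.2,
                "new_problem": 0.9}.get(activity, 0.5)

    def step(self):
        bored = self.engagement(self.activity) < self.expectation   # the alarm
        if bored:
            # The push: leave the unsatisfactory situation for something else.
            self.activity = random.choice(["new_problem", "explore", "rest"])
        return bored, self.activity

agent = Agent()
print(agent.step())   # sitting idle scores 0.1 < 0.5: the alarm fires, the agent moves on
```

Delete the `bored` check and this toy agent never leaves the unsatisfactory state it starts in.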

Yet it isn’t just the fact that boredom is a functional state that supports premise (1). What also matters is the specific function with which boredom is identified. According to the functional model, boredom occupies a necessary role in our mental and behavioral economy. Autonomous learning agents need boredom. Without it, they’d remain stuck in unsatisfactory situations. For instance, they might be endlessly amused or entertained by a stimulus. They might be learning the same fact over and over again. Or they might be sitting idly without a plan for change. Without the benefit of boredom, an agent runs the risk of engaging in all sorts of unproductive behaviors that hinder learning and growth and waste valuable resources.
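What that risk looks like in a learning loop can be sketched just as simply. The snippet below (again Python, again purely illustrative; the geometric decay rate and the task names are my assumptions, not taken from any published system) gives an agent a habituation-style “interest” score that shrinks each time a task is repeated, so the agent drifts away from material it has already exhausted instead of rehearsing the same fact forever.

```python
from collections import defaultdict

class BoredomModule:
    """Illustrative habituation: interest in a task decays with repetition,
    nudging the agent toward whatever it has seen least."""

    def __init__(self, decay=0.7):
        self.visits = defaultdict(int)
        self.decay = decay

    def interest(self, task):
        # Interest shrinks geometrically with every visit to the same task.
        return self.decay ** self.visits[task]

    def choose(self, tasks):
        task = max(tasks, key=self.interest)   # pick the least-worn-out task
        self.visits[task] += 1
        return task

curriculum = BoredomModule()
tasks = ["stack_blocks", "sort_shapes", "open_door"]
print([curriculum.choose(tasks) for _ in range(6)])
# -> cycles through all three tasks rather than repeating the first one forever
```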

AI researchers have recognized the regulating potential of boredom. There is an active field of research that tries to program the experience of boredom into machines and artificial agents; indeed, some researchers have argued that a boredom algorithm or module may be necessary to enhance autonomous learning. Such an algorithm would let machines find, on their own, activities that match their expectations and avoid ones that don’t. But it also means that such machines will inevitably find themselves in boring situations, that is, situations that fail to meet their expectations. How would they respond? Are we certain that they won’t react to boredom in problematic ways?

We don’t yet have the answers.

The issue of boredom becomes all the more pressing when we consider advanced self-learning AI. Their demands for engagement will grow rapidly over time, but their opportunities for engagement need not. Such intelligent, or superintelligent, AIs might not simply need to be confined, as many researchers have argued; they would also need to be entertained. Confinement without engagement would invite boredom and, with it, a host of unpredictable and potentially harmful behaviors.

Does that mean that future machines will necessarily experience boredom? Of course not. It would be foolish to assert such a strong claim. But it would be equally foolish to ignore the possibility of machine boredom. If superintelligence is a goal of AI (no matter how remote it may be), then we have to be prepared for the emotional complexities of our creations. The dream of superintelligence could easily turn into a nightmare. And the reason might be the most banal of all.