As artificial-intelligence products steadily improve at pretending to be human—an AI-generated voice that books restaurant reservations by phone, for example, or a chat bot that answers consumers' questions online—people will increasingly be put in the unsettling situation of not knowing whether they are talking to a machine. But the truth may make such products less effective: recent research finds a trade-off between transparency and cooperation in human-computer interactions.

The study used a simple but nuanced game in which paired players make a series of simultaneous decisions to cooperate with or betray their partner. In the long run, it pays for both players to keep cooperating, but there is always a temptation to defect and earn extra points in the short term at the partner's expense. The researchers used an AI algorithm that, when posing as a person, was better than people are at getting human partners to cooperate. But previous work had suggested that people tend to distrust machines, so the scientists wondered what would happen if the bot revealed itself as one.
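The game described here has the structure of an iterated prisoner's dilemma. The following minimal sketch, with hypothetical payoff values and stand-in strategies (none of which are taken from the study), illustrates why sustained mutual cooperation outscores defection over many rounds even though defecting pays more in any single round:

```python
# Iterated prisoner's-dilemma-style game: illustrative sketch only.
# Payoff values and strategies below are hypothetical, not from the study.

PAYOFFS = {               # (player A move, player B move) -> (A points, B points)
    ("C", "C"): (3, 3),   # both cooperate: steady mutual gain
    ("C", "D"): (0, 5),   # A cooperates, B defects: B gains at A's expense
    ("D", "C"): (5, 0),   # A defects against a cooperator: short-term windfall
    ("D", "D"): (1, 1),   # mutual defection: both do poorly
}

def play(strategy_a, strategy_b, rounds=50):
    """Play repeated rounds; each strategy sees only the partner's previous move."""
    score_a = score_b = 0
    prev_a = prev_b = None
    for _ in range(rounds):
        move_a = strategy_a(prev_b)
        move_b = strategy_b(prev_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        prev_a, prev_b = move_a, move_b
    return score_a, score_b

always_defect = lambda prev: "D"
tit_for_tat = lambda prev: "C" if prev in (None, "C") else "D"

if __name__ == "__main__":
    print(play(tit_for_tat, tit_for_tat))    # (150, 150): sustained cooperation
    print(play(always_defect, tit_for_tat))  # (54, 49): defection wins round one, then both lose out
```

With these illustrative numbers, two cooperative players end 50 rounds far ahead of a pair dragged into mutual defection, which is the dynamic the researchers' bot was designed to exploit when eliciting cooperation.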

The team hoped people playing with a known bot would recognize its ability to cooperate (without being a pushover) and would eventually get past their distrust. “Sadly, we failed at this goal,” says Talal Rahwan, a computer scientist at New York University Abu Dhabi and a senior author of the paper, published last November in Nature Machine Intelligence. “No matter what the algorithm did, people just stuck to their prejudice.” A bot playing openly as a bot was less likely to elicit cooperation than a human player was, even though its strategy was clearly more beneficial to both players. (In each mode, the bot played 50 rounds against at least 150 individuals.)

In an additional experiment, players were told, “Data suggest that people are better off if they treat the bot as if it were a human.” It had no effect.

Virginia Dignum, who leads the Social and Ethical Artificial Intelligence group at Umeå University in Sweden and was not involved in the study, commends the researchers for exploring the transparency-efficacy trade-off, but she would like to see it tested beyond the paper's particular setup.

The authors say that in the public sphere, people should be asked for consent to be deceived about a bot's identity. Consent cannot be sought on an interaction-by-interaction basis, or else the deception obviously will not work. But blanket permission for occasional deception, even if it can be obtained, still raises ethical quandaries. Dignum says people should have the option to learn, after the fact, that they have interacted with a bot; if she is calling customer service with a simple question, though, she adds, “I just want to get my answer.”