The article investigates whether companies using AI chatbots in customer service should be transparent about the human employees working behind the scenes. To this end, we conducted a field experiment and a lab experiment. Our findings suggest that disclosing human involvement before or during an interaction leads customers to adopt a more human-oriented communication style. Rather than using simple keyword-style queries, customers tend to use longer, more complex, and more natural sentences when human involvement is disclosed (vs. not disclosed). This effect is driven by customers’ impression management concerns. That is, customers are more concerned about making a good impression when they know that human employees may step in if the chatbot is unable to respond. Ultimately, the more human-oriented communication style increases employee workload: fewer customer requests can be handled automatically by the chatbot, so more must be delegated to a human. These findings are important because they help us understand how customers respond to human-AI hybrids and reveal the unintended consequences that greater transparency in AI interactions can bring.
Link to announcement: https://www.linkedin.com/posts/informs-isr-journal_isr-announces-2025-best-paper-and-editorial-activity-7364627496253779968%2D%2DPzK
Read the article here: https://pubsonline.informs.org/doi/10.1287/isre.2022.0152