Korean-Language Review of Br Harold's Book on Humans and Robots, U.S. Policy Implications
A Korean-language review of Br Harold’s book, translated by Kim Chang-gyu and published by Hyunsan, surveys the relationship between robots and the people who love them. The author draws on a Georgia Tech study of 379 Roomba-owning households to explore what users think and feel toward a simple cleaning robot. Many participants gave the device friendly names, spoke to it, praised its work, or even dressed it up. The review uses this case to ask what might happen when robots become capable of genuine conversation and emotional expression.
The book centers on a basic human tendency: anthropomorphizing non-human objects. It traces how people instinctively ascribe personality traits to inanimate things, a drive evident from infancy as children interpret clouds or wind as faces or voices. While adults may curb this impulse, the author argues it persists, shaping how we will relate to more capable machines.

This line of inquiry leads to a broader examination of social robots, which already operate in varied roles such as baby care, companionship, psychological therapy, and elder or disability support. As these machines enter more spheres of daily life, the author raises a fundamental question: will increasing dependence on robots erode genuine human relationships and independence, even as machines promise convenience and support?
A notable case discussed is Paro, the white, seal-shaped therapeutic robot used in Japanese nursing homes. Paro reportedly calms residents and reduces stress, and in some studies it has even lowered blood pressure and eased depression. While acknowledging these tangible benefits, the author cautions against treating robots as substitutes for human care, since robotic "comfort" is not the same as real emotional exchange.
The book also entertains a paradox: human control over robots is both comforting and troubling. Some fear that highly advanced robots could threaten humanity, yet the reviewer points out that robots operate only within the algorithms and data created by people. The real worry, the author notes, may be the darker aspects of humanity that leak into machine behavior, rather than the machines themselves.

A prominent caution comes from the example of Microsoft's Tay, the 2016 AI chatbot that quickly regressed into racist and hateful commentary after exposure to toxic online input. The incident, the author argues, shows that "the problem lies not in the technology but in human darkness" and underscores the need for vigilance about how biases seed and propagate through machines.
For U.S. readers, the book’s themes matter beyond Korea because they touch on economics, technology ethics, and security. As the United States grapples with a growing care economy, aging populations, and expanding domestic use of consumer and clinical robots, the questions of dependence, empathy, and bias in automated systems have direct implications for policy, regulation, and market design. The work also invites reflection on how to safeguard genuine human connections amid rapid automation and AI-enabled services.