Hey there! I am a Ph.D. student in Computer Science at Arizona State University, where I work on human-robot collaboration in the Cooperative Robotic Systems Lab. I have over 4 years of research experience, with a background in engineering low-cost socially assistive robots for autism therapy at NED University of Engineering and Technology, preventing the misuse of teleoperated robots at Kyoto University, conducting human-robot interaction studies, and working on preference-based reinforcement learning.
Avatar robots allow a teleoperator to interact with the people and environment of a remote place. Malicious operators can exploit this technology to perpetrate harmful or low-moral actions. In this study, we used hazard identification workshops to identify low-moral actions that are possible through the locomotor movement, cameras, and microphones of an avatar robot. We conducted three workshops, each with four potential future users of avatars, to brainstorm possible low-moral actions. Because avatars are not yet widespread, we gave participants hands-on experience with the technology by having them control both a simulated avatar and a real avatar as a malicious anonymous operator in a variety of situations. They also experienced sharing space with an avatar controlled by a malicious anonymous operator. We categorized the ideas generated in the workshops using affinity diagram analysis and identified four major categories: violate privacy and security, inhibit, annoy, and destroy or hurt, along with subcategories for each. In the second half of this study, we discuss every low-moral action subcategory in terms of its detection, mitigation, and prevention, drawing on literature from autonomous, social, teleoperated, and telepresence robotics as well as other fields where relevant.