Autonomy of Robots

There is some overlap between definitions of autonomous mobile devices and definitions of robots: some devices may meet both definitions, though non-mobile robots would not be covered by the previous definitions. Many of the notions of autonomy for autonomous mobile devices were proposed in terms of mobility tasks or challenges. Autonomy in the context of robots has a number of different interpretations. [Richards 2016] defined robots as nonbiological autonomous agents (which defines them as being inherently autonomous) and, more specifically, as a constructed system that displays both physical and mental agency but is not alive in the biological sense. Considering autonomy in the context of human-robot interactions, [Beer 2014] proposed a definition of autonomy as the extent to which a robot can sense its environment, plan based on that environment, and act upon that environment with the intent of reaching some task-specific goal (either given to or created by the robot) without external control. [IEC 2017] similarly defines autonomy as the capacity to monitor, generate, select, and execute in order to perform a clinical function with no or limited operator intervention, and proposes guidelines to determine the degree of autonomy. [Huang 2019] likewise asserts that autonomy represents the ability of a system to react to changes and uncertainties on the fly. [Luckcuck 2019] defines an autonomous system as an artificially intelligent entity that makes decisions in response to input, independent of human interaction. Robotic systems are physical entities that interact with the physical world; thus, an autonomous robotic system could be defined as a machine that uses artificial intelligence, has a physical presence in the real world, and interacts with it. [Antsaklis 2019] proposed as a definition: if a system has the capacity to achieve a set of goals under a set of uncertainties in the system and its environment, by itself, without external intervention, then it will be called autonomous with respect to that set of goals under that set of uncertainties. [Norris 2019] uses the decision-making role to distinguish between automated, semi-autonomous, and autonomous systems: human operators control automated systems; machines (computers) control autonomous systems; and both are engaged in the control of semi-autonomous systems.

As we have seen, when people refer to autonomous systems they often mean different things. The scope of automated devices considered as robots is also quite broad, ranging across service devices like vacuum cleaners, social robots, and precision surgical robots [Moustris 2011]; common behaviors across this range of robots seem unlikely. Intelligence is already difficult to define and measure in humans, let alone artificial intelligence. The concept of a robot seems to be intertwined with the notion of autonomy, but defining autonomy as behavior, or as artificial intelligence, does not add much clarity to the nature of autonomy itself. The decision-making notion of autonomy seems more consistent with dictionary definitions of autonomy based around the concept of "free will".
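As a concrete, deliberately minimal reading of [Beer 2014]'s sense-plan-act formulation above, the following Python sketch shows the loop structure that such a behavioral definition implies. The class and method names (AutonomousRobot, sense, plan, act) are illustrative assumptions of this text, not an interface from any of the cited works.

    # Minimal sketch of the sense-plan-act cycle implied by [Beer 2014]'s
    # behavioral definition of autonomy. All names are illustrative
    # assumptions, not an API from any cited work.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Observation:
        """Whatever the robot's sensors report about its environment."""
        data: dict

    @dataclass
    class Action:
        """A command the robot can execute upon its environment."""
        name: str

    class AutonomousRobot:
        def __init__(self, goal_reached: Callable[[Observation], bool]):
            # The task-specific goal may be given to, or created by, the robot.
            self.goal_reached = goal_reached

        def sense(self) -> Observation:
            raise NotImplementedError  # read the robot's sensors

        def plan(self, obs: Observation) -> Action:
            raise NotImplementedError  # choose an action toward the goal

        def act(self, action: Action) -> None:
            raise NotImplementedError  # actuate in the physical world

        def run(self) -> None:
            # The loop runs without external control: no operator input is
            # consulted between sensing and acting.
            obs = self.sense()
            while not self.goal_reached(obs):
                self.act(self.plan(obs))
                obs = self.sense()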

There are a wide variety of types of robots intended for different applications, including some (not necessarily anthropomorphic) designs intended for more social applications; developing a consistent scale of autonomy across these applications seems difficult. Human social interactions with robots also raise the distinction between objective measurements of autonomy and human perceptions of autonomy (see, e.g., [Barlas 2019]). Autonomous robots are being considered in cooperative work arrangements with humans. In such a context, the activity coordination between the robot and the human could become much more complex, with task leadership passing between the two (see, e.g., [Jiang 2019]). Future controversies over the connotation of social robots are likely to concern their sociality and autonomy rather than their functionality [Sarrica 2019]. [Yang 2017] introduced a six-level classification of autonomy in the context of medical robotics (sketched as an enumeration after the list below):

  • Level 0 – No Autonomy
  • Level 1 – Robot Assistance
  • Level 2 – Task Autonomy
  • Level 3 – Conditional Autonomy
  • Level 4 – High Autonomy 
  • Level 5 – Full Autonomy
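
As noted above, a minimal enumeration sketch of this scale is given below. The level names follow [Yang 2017]'s list, but the Python encoding itself is an assumption of this text, shown only to illustrate that the scale is ordinal.

    # Illustrative encoding of [Yang 2017]'s six-level autonomy scale as an
    # ordered enumeration. The level names come from the list above; the
    # Python representation is an assumption of this text.
    from enum import IntEnum

    class YangAutonomyLevel(IntEnum):
        NO_AUTONOMY = 0
        ROBOT_ASSISTANCE = 1
        TASK_AUTONOMY = 2
        CONDITIONAL_AUTONOMY = 3
        HIGH_AUTONOMY = 4
        FULL_AUTONOMY = 5

    # Because the scale is ordinal, levels can be compared directly:
    assert YangAutonomyLevel.TASK_AUTONOMY < YangAutonomyLevel.HIGH_AUTONOMY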

This six-level autonomy scale seems reminiscent of the scales proposed for autonomous mobility. While it may seem attractive as an autonomy scale at first glance, the proposed categories are rather ambiguous: what one observer regards as full autonomy may be merely a single task for another. In contrast, [Ficuciello 2019] separated out a notion of Meaningful Human Control (MHC) in the context of surgical robotics and proposed a four-level classification (summarized in the sketch after the list):

  • Level 0 MHC – Surgeons govern the entire surgical procedure in master-slave control mode, from data analysis and planning to decision-making and actual execution.
  • Level 1 MHC – Humans must have the option to override robotic corrections to their actions, by enacting a second-level human control overriding first-level robotic corrections.
  • Level 2 MHC – Humans select a task that surgical robots perform autonomously.
  • Level 3 MHC – Humans select from among different strategies or approve an autonomously selected strategy.
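
The sketch referenced above records, for each MHC level, an informal split of roles between the human and the robot. The two-role simplification and the field names are assumptions of this text, not a formalization given by [Ficuciello 2019]; they merely make explicit that the scale describes human control rather than robot capability.

    # Illustrative table of [Ficuciello 2019]'s Meaningful Human Control (MHC)
    # levels, recording the human's and robot's roles at each level. The
    # two-role simplification is an assumption of this text.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class MHCLevel:
        level: int
        human_role: str
        robot_role: str

    MHC_LEVELS = [
        MHCLevel(0, "governs the entire procedure in master-slave mode",
                 "executes commanded motions only"),
        MHCLevel(1, "may override robotic corrections to their actions",
                 "applies first-level corrections"),
        MHCLevel(2, "selects a task",
                 "performs the selected task autonomously"),
        MHCLevel(3, "selects among, or approves, proposed strategies",
                 "proposes and executes strategies"),
    ]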

[Beer 2014]’s definition describes behaviors that autonomous systems may engage in but does not provide a scale or measurement approach for the degree of autonomy of a particular system. The taxonomies of [Yang 2017], [IEC 2017], and [Ficuciello 2019] are defined externally to the autonomous robot system (e.g., in terms of the level of operator oversight). [Ficuciello 2019]’s insight could equally be applied to autonomous mobile devices, where a number of the proposed taxonomies could be interpreted as scales of human control rather than device autonomy. [Beer 2014]’s behavior-based definition has the advantage that behavior is observable, but observation of behavior does not always imply the existence of a rational autonomous decision causing that behavior. [Luckcuck 2019], [Antsaklis 2019], and [Norris 2019] define autonomy in terms of artificial intelligence, goal seeking, and decision making. While goals and decisions can be explained if they exist, many recent technology trends have emphasized artificial intelligence techniques, such as machine learning, that are not easily amenable to providing explanations. Articulating goals and decisions across a broad range of robot application domains seems rather difficult.

It is important to be more precise and agree upon a common definition of autonomy. Could [Myhre 2019]’s definition of autonomy be applicable to the broader category of robots? Recall that this definition of autonomy requires acceptance of liability, and ideally a quantification of that liability in monetary terms. Mobile robots could incur many of the same types of liabilities as other autonomous mobile devices. Non-mobile robots cannot cause a collision with other people or their property, since this category of autonomous robot devices does not move, but immobility does not prevent other causes of liability. Consider an immobile robot intended for social interactions with humans that speaks information other people could overhear; this might result in liability for privacy violations, slander, etc. Quantifying these liabilities for interactions between humans is already difficult, but not impossible; hence it is reasonable to expect that autonomous robots could be held to similar liability quantification standards. Across a broad range of application domains, robots could cause injuries of various sorts to humans and their property, resulting in potential liability. If a robot is interfacing with the real world, it is difficult to envision a scenario where all potential liabilities are impossible; even a passively sensing robot could potentially result in some liability for privacy violation. Hence the approach of defining and scaling autonomy in terms of the range of acceptable accountability or liability seems applicable to a broad range of robots.
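As a rough illustration of how a liability-centered scale might be quantified, the sketch below expresses a robot's declared autonomy as the set of hazards, each with a monetary bound, for which the responsible party accepts liability without requiring human intervention. The field names and the threshold check are assumptions of this text, loosely inspired by [Myhre 2019]'s responsibility-centered approach rather than taken from a published scheme.

    # Rough sketch of a liability-centered autonomy declaration, loosely
    # inspired by [Myhre 2019]. Field names and the threshold check are
    # assumptions of this text, not a published scheme.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class LiabilityEnvelope:
        """A hazard for which bounded liability is accepted for unattended operation."""
        hazard: str               # e.g. "collision", "privacy violation", "slander"
        max_accepted_cost: float  # monetary bound on accepted liability

    @dataclass(frozen=True)
    class AutonomyDeclaration:
        envelopes: tuple[LiabilityEnvelope, ...]

        def permits_unattended(self, hazard: str, estimated_cost: float) -> bool:
            """A task may run without human intervention only if its estimated
            worst-case liability falls inside an accepted envelope."""
            return any(e.hazard == hazard and estimated_cost <= e.max_accepted_cost
                       for e in self.envelopes)

    # Example: a stationary social robot whose operator accepts bounded
    # liability for privacy violations but none for physical injury.
    declaration = AutonomyDeclaration((
        LiabilityEnvelope("privacy violation", 10_000.0),
    ))
    assert declaration.permits_unattended("privacy violation", 2_500.0)
    assert not declaration.permits_unattended("physical injury", 1.0)

Under this reading, a "more autonomous" robot is simply one for which a wider set of hazards, or larger monetary bounds, is accepted without a human in the loop.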

References

[Antsaklis 2019] P. Antsaklis, “Defining Autonomy and Measuring its Levels: Goals, Uncertainties, Performance and Robustness.” ISIS (2019): 001.

[Barlas 2019] Z. Barlas, “When robots tell you what to do: Sense of agency in human- and robot-guided actions.” (2019).

[Beer 2014] J. Beer, et al., “Toward a framework for levels of robot autonomy in human-robot interaction.” Journal of Human-Robot Interaction 3.2 (2014): 74-99.

[Ficuciello 2019] F. Ficuciello, et al. “Autonomy in surgical robots and its meaningful human control.” Paladyn, Journal of Behavioral Robotics 10.1 (2019): 30-43.

[Huang 2019] S. Huang, et al., “Dynamic Compensation Framework to Improve the Autonomy of Industrial Robots.” Industrial Robotics - New Paradigms. IntechOpen, 2019.

[IEC 2017] IEC 60601-4-1:2017, Medical electrical equipment — Part 4-1: Guidance and interpretation — Medical electrical equipment and medical electrical systems employing a degree of autonomy (2017).

[Jiang 2019] S. Jiang, “A Study of Initiative Decision-Making in Distributed Human-Robot Teams.” 2019 Third IEEE International Conference on Robotic Computing (IRC). IEEE, 2019.

[Luckcuck 2019] M. Luckcuck, et al., “Formal Specification and Verification of Autonomous Robotic Systems: A Survey.” ACM Computing Surveys, Vol. 52, Issue 5 (Sep. 2019): 1-41.

[Moustris 2011] G. Moustris, et al. “Evolution of autonomous and semi‐autonomous robotic surgical systems: a review of the literature.” The international journal of medical robotics and computer assisted surgery 7.4 (2011): 375-392.

[Myhre 2019] B. Myhre, et al., “A responsibility-centered approach to defining levels of automation.” Journal of Physics: Conference Series, Vol. 1357, No. 1. IOP Publishing, 2019.

[Norris 2019] W. Norris & A. Patterson, “Automation, Autonomy, and Semi-Autonomy: A Brief Definition Relative to Robotics and Machine Systems.” (2019).

[Richards 2016] N. Richards & W. Smart, “How should the law think about robots?” Robot Law. Edward Elgar Publishing, 2016.

[Sarrica 2019] M. Sarrica, et al., “How many facets does a ‘social robot’ have? A review of scientific and popular definitions online.” Information Technology & People (2019).

[Yang 2017] G. Yang, et al., “Medical robotics—Regulatory, ethical, and legal considerations for increasing levels of autonomy.” Science Robotics 2.4 (2017): 8638.