Autonomy of Software Agents

There is some overlap between the concepts of autonomy in software agents and in robots, and a further overlap between the autonomy of software agents and that of DAOs. The concept of autonomous software agents has been around for more than 20 years. In 1996, [Franklin 1996] proposed a taxonomy of autonomous agents (biological, robotic or computational) and defined software agents as a type of computational agent (distinguished from artificial life), further categorized into task-specific agents, entertainment agents and viruses. Note that this taxonomy treats agents as inherently autonomous but provides no notion of a level of autonomy. In 1999, [Heckman 1999] reviewed potential liability issues arising from autonomous agent designs. Around the same time, [Barber 1999] quantitatively defined the degree of autonomy as an agent’s relative voting weight in decision-making. Reviewing autonomous agents in 2000, [Dowling 2000] considered the delegation of any task to a software agent as raising questions about its autonomy of action and decision, the degree of trust that can be vested in the outcomes it achieves, and the location of responsibility, both moral and legal, for those outcomes, but did not couple responsibility for decisions with levels of autonomy.
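
Barber’s measure lends itself to a direct computation: an agent’s degree of autonomy with respect to a decision is its voting weight divided by the total voting weight of the decision-making group. The following is a minimal sketch of that idea; the function and variable names are illustrative assumptions, not [Barber 1999]’s notation.

```python
def degree_of_autonomy(agent_weight: float, group_weights: list[float]) -> float:
    """Degree of autonomy as relative voting weight, after [Barber 1999].

    Returns agent_weight divided by the total voting weight of the
    decision-making group (which includes the agent itself). 1.0 means
    the agent decides alone (full autonomy); 0.0 means it has no say.
    """
    total = sum(group_weights)
    if total <= 0:
        raise ValueError("group must carry positive total voting weight")
    return agent_weight / total

# Example: an agent holding 2 of 5 total votes has degree of autonomy 0.4.
print(degree_of_autonomy(2.0, [2.0, 1.0, 1.0, 1.0]))  # 0.4
```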

In 2003, [Braynov 2003] considered autonomy a relative concept depending on what a user (or another agent) expects from an agent, defining autonomy as a relation with four constituents: (1) the subject of autonomy: the entity (a single agent or a group of agents) which acts or makes decisions; (2) the object of autonomy: a goal or task that the subject wants to perform or achieve, or a decision that the subject wants to make; (3) the affector of autonomy: the entity that has an impact on the subject’s decisions and actions, thereby affecting the final outcome of the subject’s behavior; the affector could be the physical environment, another agent (including the user), or a group of agents, and could either increase or decrease the autonomy of the subject; (4) the performance measure: a measure of how successful the subject is with respect to the object of autonomy. Around that time, [Brazier 2003] was considering whether agents could close contracts and, if so, how liabilities might be allocated. While autonomy was seen as a complex relationship involving decision making, and the allocation of liabilities was seen as important, liability was not considered a dimension of autonomy. While [Braynov 2003]’s object of autonomy included decision making, it also encompassed broader notions such as goals and tasks.
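
The four-constituent relation can be read as a simple data structure. Below is a minimal sketch under that reading; the class and field names are illustrative assumptions rather than [Braynov 2003]’s formalism.

```python
from dataclasses import dataclass
from typing import Callable

Entity = str  # an agent, a group of agents, the user, or the environment

@dataclass
class AutonomyRelation:
    """Autonomy as a four-constituent relation, after [Braynov 2003]."""
    subject: Entity                   # who acts or makes decisions
    object: str                       # the goal, task, or decision at stake
    affector: Entity                  # what influences the subject's behavior
    performance: Callable[[], float]  # success w.r.t. the object of autonomy

# Hypothetical example: a shopping agent pursuing a purchase goal,
# with the user as the affector who may widen or narrow its discretion.
relation = AutonomyRelation(
    subject="shopping-agent",
    object="buy a flight under $300",
    affector="user",
    performance=lambda: 0.8,  # e.g. fraction of budget goals met
)
print(relation.performance())  # 0.8
```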

In 2017, [Pagallo 2017] provided a historical perspective on the liability issues arising from automation through to autonomous systems, recognizing that historical approaches may be insufficient for the current challenges of technology. Considering levels of autonomy in agents from an ethical perspective, [Dyrkolbotn 2017] identified five “levels” of autonomy to pinpoint where a morally salient decision belongs on the following scale: (i) dependence, or level 1 autonomy: the behavior of the system was predicted by someone with a capacity to intervene; (ii) proxy, or level 2 autonomy: the behavior of the system should have been predicted by someone with a capacity to intervene; (iii) representation, or level 3 autonomy: the behavior of the system could have been predicted by someone with a capacity to intervene; (iv) legal personality, or level 4 autonomy: the behavior of the system cannot be explained only in terms of the system’s design and environment; these are systems whose behavior could not have been predicted by anyone with a capacity to intervene; (v) legal immunity, or level -1: the behavior of the system counts as evidence of a defect, i.e. the behavior could not have been predicted by the system itself, or the machine did not have a capacity to intervene. Also around this time, [Millard 2017] was concerned with the need for clarity concerning liabilities in the IoT space. In this period the need for change in existing liability schemes was starting to be recognized, and [Dyrkolbotn 2017]’s decision-based autonomy scale built notions of the legal consequences of decision making directly into its level definitions.
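
Because the scale is discrete and ordered by predictability, it maps naturally onto an enumeration. The sketch below encodes the five levels and the predictability tests that separate them; the names and the classify helper are illustrative assumptions, not [Dyrkolbotn 2017]’s formalization.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """The five-point scale of [Dyrkolbotn 2017], including level -1."""
    LEGAL_IMMUNITY = -1    # behavior counts as evidence of a defect
    DEPENDENCE = 1         # behavior *was* predicted by someone able to intervene
    PROXY = 2              # behavior *should have been* predicted
    REPRESENTATION = 3     # behavior *could have been* predicted
    LEGAL_PERSONALITY = 4  # behavior could not have been predicted by anyone

def classify(was_predicted: bool, should_have_been: bool,
             could_have_been: bool, defective: bool) -> AutonomyLevel:
    """Map the predictability tests onto a level (illustrative only)."""
    if defective:
        return AutonomyLevel.LEGAL_IMMUNITY
    if was_predicted:
        return AutonomyLevel.DEPENDENCE
    if should_have_been:
        return AutonomyLevel.PROXY
    if could_have_been:
        return AutonomyLevel.REPRESENTATION
    return AutonomyLevel.LEGAL_PERSONALITY

print(classify(False, False, True, False))  # AutonomyLevel.REPRESENTATION
```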

In 2019, [Janiesch 2019] surveyed the literature on autonomy in the context of IoT agents, identifying 20 category definitions for levels of autonomy, based on whether the human or the machine performs the decision making, and 9 different dimensions or types of agent autonomy: interpretation, know-how, plan, goal, reasoning, monitoring, skill, resource and condition. They also identified 12 design requirements for autonomous agents and proposed an autonomy model and language. [Lebeuf 2019] proposed a definition of software (ro)bots as an interface paradigm including command line, graphical, touch, written/spoken language, or some combination of interfaces, and also proposed a faceted taxonomy for software (ro)bots based on facets of (ro)bot environment, intrinsic characteristics and interaction dimensions. The boundaries between these different types of software entities have thus become blurrier again, with less consensus on the meaning of level of autonomy in the context of software entities.
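
One consequence of [Janiesch 2019]’s view is that autonomy is not a single number but a profile: for each of the nine dimensions, decision making may sit with the human, with the machine, or be shared. The sketch below records such a profile; the enum values and dictionary structure are illustrative assumptions, not the published autonomy model and notation.

```python
from enum import Enum

class DecisionMaker(Enum):
    HUMAN = "human"
    SHARED = "shared"
    MACHINE = "machine"

# The nine dimensions of agent autonomy listed by [Janiesch 2019].
DIMENSIONS = ("interpretation", "know-how", "plan", "goal", "reasoning",
              "monitoring", "skill", "resource", "condition")

# An autonomy profile assigns a decision maker to each dimension.
profile = {dim: DecisionMaker.HUMAN for dim in DIMENSIONS}
profile["monitoring"] = DecisionMaker.MACHINE  # the agent self-monitors
profile["plan"] = DecisionMaker.SHARED         # planning is negotiated

machine_led = [d for d, who in profile.items() if who is DecisionMaker.MACHINE]
print(machine_led)  # ['monitoring']
```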

Back in 2003, [Braynov 2003] had noted that, between the complete lack of autonomy and complete autonomy, there is clearly a wide range of intermediate states describing an agent’s ability to act and decide independently. A common thread around decision making as a basis for autonomy levels emerges from [Barber 1999], [Dowling 2000], [Braynov 2003] and [Dyrkolbotn 2017], but no consensus on a particular metric. The recent recognition of the potential for changes in legal liability regimes to better reflect software agents brings to mind [Myhre 2019]’s assertion of accountability as a measure of autonomy. Whether through action or inaction, software agents may cause injuries to people or their property; for purely software agents with no cyberphysical aspects, those injuries would have to be informational in nature (e.g. privacy violations, slander, etc.). While liability or accountability for autonomous decision making may not be the only useful dimension of autonomy in software agents, it has practical commercial value: quantifying risk enables more commercial activities based on software agents to proceed.

References

[Barber 1999] S. Barber & C. Martin, “Agent Autonomy: Specification, Measurement, and Dynamic Adjustment.” In Proceedings of the Autonomy Control Software Workshop, Agents ’99, pp. 8-15. May 1-5, 1999, Seattle, WA.

[Braynov 2003] S. Braynov & H. Hexmoor, “Quantifying relative autonomy in multiagent interaction.” Agent Autonomy. Springer, Boston, MA, 2003. 55-73.

[Brazier 2003] F. Brazier, et al., “Can agents close contracts?” (2003).

[Dowling 2000] C. Dowling, “Intelligent agents: some ethical issues and dilemmas.” Selected Papers from the Second Australian Institute Conference on Computer Ethics. Australian Computer Society, Inc., 2000.

[Dyrkolbotn 2017] S. Dyrkolbotn, et al., “Classifying the Autonomy and Morality of Artificial Agents.” CARe-MAS@PRIMA. 2017.

[Franklin 1996] S. Franklin & A. Graesser, “Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents.” International Workshop on Agent Theories, Architectures, and Languages. Springer, Berlin, Heidelberg, 1996.

[Heckman 1999] C. Heckman & J. Wobbrock, “Liability for autonomous agent design.” Autonomous Agents and Multi-Agent Systems 2.1 (1999): 87-103.

[Janiesch 2019] C. Janiesch, et al., “Specifying autonomy in the Internet of Things: the autonomy model and notation.” Information Systems and e-Business Management 17.1 (2019): 159-194.

[Lebeuf 2019] C. Lebeuf, et al., “Defining and classifying software bots: a faceted taxonomy.” Proceedings of the 1st International Workshop on Bots in Software Engineering. IEEE Press, 2019.

[Millard 2017] C. Millard, et al., “Internet of Things Ecosystems: Unpacking Legal Relationships and Liabilities.” 2017 IEEE International Conference on Cloud Engineering (IC2E). IEEE, 2017.

[Myhre 2019] B. Myhre, et al., “A responsibility-centered approach to defining levels of automation.” Journal of Physics: Conference Series. Vol. 1357. No. 1. IOP Publishing, 2019.

[Pagallo 2017] U. Pagallo, “From Automation to Autonomous Systems: A Legal Phenomenology with Problems of Accountability.” IJCAI. 2017.