Autonomy of Software Agents

There is some overlap between the concepts of autonomy in software agents and robots, and another overlap between the autonomy of software agents and DAOs. The concept of autonomous software agents has been around for more than 20 years. In 1996, [Franklin 1996] proposed a taxonomy of autonomous agents (biological, robotic or computational) and defined software agents as a type of computational agent (distinguished from artificial life), further categorized into task-specific agents, entertainment agents or viruses. Note that this defines agents as being autonomous, but does not provide a notion of the level of autonomy. In 1999, [Heckman 1999] reviewed potential liability issues arising from autonomous agent designs. Around the same time, [Barber 1999] quantitatively defined the degree of autonomy as an agent’s relative voting weight in decision-making. Reviewing autonomous agents in 2000, [Dowling 2000] considered the delegation of any task to a software agent as raising questions about its autonomy of action and decision, the degree of trust that can be vested in the outcomes it achieves, and the location of responsibility, both moral and legal, for those outcomes; but did not couple responsibility for decisions with levels of autonomy.

In 2003, [Braynov 2003] considered autonomy a relative concept depending on what a user (or another agent) expects from an agent, defining autonomy as a relation with four constituents:

  • The subject of autonomy: the entity (a single agent or a group of agents) that acts or makes decisions.
  • The object of autonomy: a goal or task that the subject wants to perform or achieve, or a decision that the subject wants to make.
  • The affector of autonomy: the entity that has an impact on the subject’s decisions and actions, thereby affecting the final outcome of the subject’s behavior. The affector could be the physical environment, another agent (including the user), or a group of agents, and could either increase or decrease the autonomy of the subject.
  • Performance measure: a measure of how successful the subject is with respect to the object of autonomy.

Around that time, [Brazier 2003] was considering whether agents could close contracts, and if so how liabilities might be allocated. While autonomy was seen as a complex relationship involving decision making, and the allocation of liabilities was seen as important, liability was not considered a dimension of autonomy. While [Braynov 2003]’s object of autonomy included decision making, it also encompassed broader notions such as goals and tasks.
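
This four-part relation can be read as a simple data structure. The following Python sketch is ours, not [Braynov 2003]’s; the field names and the example values are purely illustrative:

    from dataclasses import dataclass
    from typing import Callable

    # Illustrative encoding (field names are ours, not [Braynov 2003]'s)
    # of the four-constituent autonomy relation.
    @dataclass
    class AutonomyRelation:
        subject: str                      # entity that acts or decides
        object: str                       # goal, task, or decision at stake
        affector: str                     # environment, user, or other agent(s)
        performance: Callable[[], float]  # success measure w.r.t. the object

    # Hypothetical example: a shopping agent whose autonomy over price
    # negotiation is constrained (affected) by a user-imposed spending cap.
    relation = AutonomyRelation(
        subject="shopping-agent",
        object="negotiate purchase price",
        affector="user-imposed spending cap",
        performance=lambda: 0.8,  # e.g., fraction of target discount achieved
    )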

In 2017, [Pagallo 2017] provided a historical perspective on the liability issues arising from automation through to autonomous systems, recognizing that historical approaches may be insufficient for the current challenges of technology. Considering levels of autonomy in agents from an ethical perspective, [Dyrkolbotn 2017] identified five “levels” of autonomy to pinpoint where a morally salient decision belongs on the following scale:

  • Dependence or level 1 autonomy: the behavior of the system was predicted by someone with a capacity to intervene.
  • Proxy or level 2 autonomy: the behavior of the system should have been predicted by someone with a capacity to intervene.
  • Representation or level 3 autonomy: the behavior of the system could have been predicted by someone with a capacity to intervene.
  • Legal personality or level 4 autonomy: the behavior of the system cannot be explained only in terms of the system’s design and environment; these are systems whose behavior could not have been predicted by anyone with a capacity to intervene.
  • Legal immunity or level -1: the behavior of the system counts as evidence of a defect; namely, the behavior of the system could not have been predicted by the system itself, or the machine did not have a capacity to intervene.

Also around this time, [Millard 2017] was concerned with the need for clarity concerning liabilities in the IoT space. In this period the need for change in existing liability schemes was starting to be recognized, and the level definitions of [Dyrkolbotn 2017]’s decision-based autonomy scale included notions of the legal consequences of decision making.
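
Read as a decision procedure, the scale keys each level to a predictability test. The Python sketch below is our reading of [Dyrkolbotn 2017]’s criteria; the predicate names and the ordering of the checks are assumptions for illustration, not an algorithm the paper specifies:

    from enum import IntEnum

    class MoralAutonomyLevel(IntEnum):
        LEGAL_IMMUNITY = -1    # behavior counts as evidence of a defect
        DEPENDENCE = 1         # behavior was predicted
        PROXY = 2              # behavior should have been predicted
        REPRESENTATION = 3     # behavior could have been predicted
        LEGAL_PERSONALITY = 4  # unpredictable by anyone able to intervene

    def classify(was_predicted: bool,
                 should_have_been_predicted: bool,
                 could_have_been_predicted: bool,
                 system_could_predict_itself: bool,
                 system_could_intervene: bool) -> MoralAutonomyLevel:
        # Apply the strongest predictability test that succeeds.
        if was_predicted:
            return MoralAutonomyLevel.DEPENDENCE
        if should_have_been_predicted:
            return MoralAutonomyLevel.PROXY
        if could_have_been_predicted:
            return MoralAutonomyLevel.REPRESENTATION
        # Nobody with a capacity to intervene could predict the behavior.
        if not (system_could_predict_itself and system_could_intervene):
            return MoralAutonomyLevel.LEGAL_IMMUNITY  # defect, not autonomy
        return MoralAutonomyLevel.LEGAL_PERSONALITY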

In 2019, [Janiesch 2019] surveyed the literature on autonomy in the context of IoT agents, identifying 20 category definitions for levels of autonomy, based on whether the human or the machine performs the decision making, and nine different dimensions or types of agent autonomy: interpretation, know-how, plan, goal, reasoning, monitoring, skill, resource and condition. They also identified 12 design requirements for autonomous agents and proposed an autonomy model and language. [Lebeuf 2019] proposed a definition of software (ro)bots as an interface paradigm including command line, graphical, touch, written/spoken language or some combination of interfaces, and also proposed a faceted taxonomy for software (ro)bots based on facets of (ro)bot environment, intrinsic characteristics and interaction dimensions. The boundaries between these different types of software entities have become blurrier again, with less consensus on the meaning of level of autonomy in the context of software entities.

Back in 2003, [Braynov 2003] had noted that between the lack of autonomy and complete autonomy there is obviously a wide range of intermediate states describing an agent’s ability to act and decide independently. A common thread around decision making as a basis for autonomy levels emerges from [Barber 1999], [Dowling 2000], [Braynov 2003] and [Dyrkolbotn 2017], but not a consensus on a particular metric. The recent recognition of the potential for changes in legal liability regimes to better reflect software agents brings to mind [Myhre 2019]’s assertion of accountability as a measure of autonomy. Whether through action or inaction, software agents may potentially cause injuries to people or their property. For purely software agents with no cyberphysical aspects, those injuries would have to be informational in nature (e.g., privacy violations, slander). While liability or accountability for autonomous decision making may not be the only useful dimension for autonomy in software agents, it does have practical commercial value in quantifying risks, thus enabling more commercial activities based on software agents to proceed.

References

[Barber 1999] S. Barber, & C. Martin, “Agent Autonomy: Specification, Measurement, and Dynamic Adjustment.” In Proceedings of the Autonomy Control Software Workshop, Agents ’99, pp. 8-15. May 1-5, 1999, Seattle, WA.

[Braynov 2003] S. Braynov, & H. Hexmoor, “Quantifying relative autonomy in multiagent interaction.” Agent Autonomy. Springer, Boston, MA, 2003. 55-73.

[Brazier 2003] F. Brazier, et al. “Can agents close contracts?.” (2003).

[Dowling 2000] C. Dowling, “Intelligent agents: some ethical issues and dilemmas.” Selected papers from the second Australian Institute conference on Computer ethics. Australian Computer Society, Inc., 2000.

[Dyrkolbotn 2017] S. Dyrkolbotn, et al., “Classifying the Autonomy and Morality of Artificial Agents.” CARe-MAS@PRIMA. 2017.

[Franklin 1996] S. Franklin, & A. Graesser. “Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents.” International Workshop on Agent Theories, Architectures, and Languages. Springer, Berlin, Heidelberg, 1996.

[Heckman 1999] C. Heckman, & J. Wobbrock. “Liability for autonomous agent design.” Autonomous agents and multi-agent systems 2.1 (1999): 87-103.

[Janiesch 2019] C. Janiesch, et al. “Specifying autonomy in the Internet of Things: the autonomy model and notation.” Information Systems and e-Business Management 17.1 (2019): 159-194.

[Lebeuf 2019] C. Lebeuf, et al. “Defining and classifying software bots: a faceted taxonomy.” Proceedings of the 1st International Workshop on Bots in Software Engineering. IEEE Press, 2019.

[Millard 2017] C. Millard, et al., “Internet of Things Ecosystems: Unpacking Legal Relationships and Liabilities.” 2017 IEEE International Conference on Cloud Engineering (IC2E). IEEE, 2017.

[Myhre 2019] B. Myhre, et al., “A responsibility-centered approach to defining levels of automation.” Journal of Physics: Conference Series. Vol. 1357. No. 1. IOP Publishing, 2019.

[Pagallo 2017] U. Pagallo, “From Automation to Autonomous Systems: A Legal Phenomenology with Problems of Accountability.” IJCAI, 2017.

Autonomy of Robots

There is some overlap between definitions of autonomous mobile devices and definitions of robots: some devices may meet both definitions, though non-mobile robots would not be covered by the previous definitions. Many of the notions of autonomy for autonomous mobile devices were proposed in terms of mobility tasks or challenges. Autonomy in the context of robots has a number of different interpretations. [Richards 2016] defined robots as nonbiological autonomous agents (which defines them as being inherently autonomous) and more specifically as a constructed system that displays both physical and mental agency but is not alive in the biological sense. Considering autonomy in the context of human-robot interactions, [Beer 2014] proposed a definition of autonomy as: the extent to which a robot can sense its environment, plan based on that environment, and act upon that environment with the intent of reaching some task-specific goal (either given to or created by the robot) without external control. [IEC 2017] similarly defines autonomy as the capacity to monitor, generate, select and execute in order to perform a clinical function with no or limited operator intervention, and proposes guidelines to determine the degree of autonomy. [Huang 2019] similarly asserts that autonomy represents the ability of a system to react to changes and uncertainties on the fly. [Luckcuck 2019] defines an autonomous system as an artificially intelligent entity that makes decisions in response to input, independent of human interaction; since robotic systems are physical entities that interact with the physical world, an autonomous robotic system could then be defined as a machine that uses artificial intelligence and has a physical presence in, and interacts with, the real world. [Antsaklis 2019] proposed as a definition: if a system has the capacity to achieve a set of goals under a set of uncertainties in the system and its environment, by itself, without external intervention, then it will be called autonomous with respect to that set of goals under that set of uncertainties. [Norris 2019] uses the decision-making role to distinguish between automated, semi-autonomous and autonomous systems, where human operators control automated systems, machines (computers) control autonomous systems, and both are engaged in the control of semi-autonomous systems.

As we have seen, when people refer to autonomous systems they often mean different things. The scope of automated devices considered as robots is also quite broad, ranging across service devices like vacuum cleaners, social robots and precision surgical robots [Moustris 2011]; common behaviors across this range of robots seem unlikely. Intelligence is already difficult to define and measure in humans, let alone artificial intelligence. The concept of a robot seems to be intertwined with the notion of autonomy; but defining autonomy as behavior, or as artificial intelligence, does not seem to add much clarity to the nature of autonomy itself. The decision-making role notion of autonomy seems more consistent with dictionary definitions of autonomy based around concepts of “free will”.
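
[Beer 2014]’s formulation maps naturally onto the classic sense-plan-act control loop. A minimal skeleton follows; the class and method names are illustrative and not drawn from any cited system:

    import abc

    class SensePlanActRobot(abc.ABC):
        """Skeleton of [Beer 2014]'s sense-plan-act formulation: sense the
        environment, plan toward a task-specific goal, act without external
        control. Class and method names are illustrative."""

        @abc.abstractmethod
        def sense(self) -> dict:
            ...  # read sensors into a world model

        @abc.abstractmethod
        def plan(self, world: dict) -> list:
            ...  # choose actions intended to reach the goal

        @abc.abstractmethod
        def act(self, actions: list) -> None:
            ...  # execute the actions, with no external control

        def run(self, goal_reached) -> None:
            # The closed loop: repeat sense-plan-act until the goal test passes.
            world = self.sense()
            while not goal_reached(world):
                self.act(self.plan(world))
                world = self.sense()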

There are a wide variety of types of robots intended for different applications, including some (not necessarily anthropomorphic) designs intended for more social applications; developing a consistent scale of autonomy across these applications seems difficult. Human social interactions with robots also bring the distinction between objective measurements of autonomy and human perceptions of autonomy (see e.g., [Barlas 2019]). Autonomous robots are being considered in cooperative work arrangements with humans; in such a context the activity coordination between the robot and the human could become much more complex, with task leadership passing between the two (see e.g., [Jiang 2019]). Future controversies over the connotation of social robots are likely to concern their sociality and autonomy rather than their functionality [Sarrica 2019]. [Yang 2017] introduced a six-level classification of autonomy in the context of medical robotics:

  • Level 0 – No Autonomy
  • Level 1 – Robot Assistance
  • Level 2 – Task Autonomy
  • Level 3 – Conditional Autonomy
  • Level 4 – High Autonomy
  • Level 5 – Full Autonomy

This six-level autonomy scale seems reminiscent of the scales proposed for autonomous mobility. While at first glance it may seem attractive as an autonomy scale, the categories proposed seem rather ambiguous – e.g., full autonomy to one observer may be merely a single task for another. In contrast, [Ficuciello 2019] separated out a notion of Meaningful Human Control (MHC) in the context of surgical robotics, and proposed a four-level classification:

  • Level 0 MHC – surgeons govern, in master-slave control mode, the entire surgical procedure, from data analysis and planning to decision-making and actual execution.
  • Level 1 MHC – humans must have the option to override robotic corrections to their actions, by enacting a second-level human control overriding first-level robotic corrections.
  • Level 2 MHC – humans select a task that surgical robots perform autonomously.
  • Level 3 MHC – humans select from among different strategies, or approve an autonomously selected strategy.

[Beer 2014]’s definition described behaviors that autonomous systems may engage in but does not provide a scale or measurement approach for the degree of autonomy in a particular system. The taxonomies of [Yang 2017], [IEC 2017] and [Ficuciello 2019] are defined externally to the autonomous robot system (e.g., in terms of the level of operator oversight). [Ficuciello 2019]’s insight could equally be applied to autonomous mobile devices, where a number of the proposed taxonomies could be interpreted as scales of human control rather than device autonomy. [Beer 2014]’s definition based on behavior has the advantage that it is observable; but observation of behavior does not always imply the existence of a rational autonomous decision causing that behavior. [Luckcuck 2019], [Antsaklis 2019] and [Norris 2019] define autonomy in terms of artificial intelligence, goal seeking and decision making. While goals and decisions can be explained if they exist, many recent technology trends have emphasized artificial intelligence techniques, such as machine learning, that are not easily amenable to providing explanations. Articulating goals and decisions across a broad range of robot application domains seems rather difficult.

It is important to be more precise and agree upon a common definition of autonomy. Could [Myhre 2019]’s definition of autonomy be applicable to the broader category of robots? Recall that this definition of autonomy requires acceptance of liability, and ideally a quantification of that liability in monetary terms. Mobile robots could incur many of the same types of liabilities as other autonomous mobile devices. Non-mobile robots cannot cause a collision with other people or their property, since this category of autonomous robot devices does not move; but immobility does not prevent other causes of liability. Consider an immobile robot intended for social interactions with humans that speaks information other people could hear; this might result in liability for privacy violations, slander, etc. Quantifying these liabilities for interactions between humans is already difficult, but not impossible; hence it is reasonable to expect that autonomous robots could be held to similar liability quantification standards. Across a broad range of application domains, robots could be a cause of injuries of various sorts to humans and their property, resulting in potential liability. If a robot is interfacing with the real world, it is difficult to envision a scenario where all potential liabilities are impossible. Even a passively sensing robot could potentially result in some liability for privacy violation. Hence the approach of defining and scaling autonomy in terms of the range of acceptable accountability or liability seems applicable to a broad range of robots.

References

[Antsaklis 2019] P. Antsaklis, “Defining Autonomy and Measuring its Levels: Goals, Uncertainties, Performance and Robustness.” ISIS (2019): 001.

[Barlas 2019] Z. Barlas, “When robots tell you what to do: Sense of agency in human-and robot-guided actions.” (2019).

[Beer 2014] J. Beer, et al., “Toward a framework for levels of robot autonomy in human-robot interaction.” Journal of Human-Robot Interaction 3.2 (2014): 74-99.

[Ficuciello 2019] F. Ficuciello, et al. “Autonomy in surgical robots and its meaningful human control.” Paladyn, Journal of Behavioral Robotics 10.1 (2019): 30-43.

[Huang 2019] S. Huang, et al., “Dynamic Compensation Framework to Improve the Autonomy of Industrial Robots.” Industrial Robotics-New Paradigms. IntechOpen, 2019.

[IEC 2017] IEC 60601-4-1:2017, Medical electrical equipment — Part 4-1: Guidance and interpretation — Medical electrical equipment and medical electrical systems employing a degree of autonomy (2017).

[Jiang 2019] S. Jiang, “A Study of Initiative Decision-Making in Distributed Human-Robot Teams.” 2019 Third IEEE International Conference on Robotic Computing (IRC). IEEE, 2019.

[Luckcuck 2019] M. Luckcuck, et al., “Formal Specification and Verification of Autonomous Robotic Systems: A Survey.” ACM Computing Surveys, Vol. 52, Issue 5 (Sept. 2019): 1-41.

[Moustris 2011] G. Moustris, et al. “Evolution of autonomous and semi‐autonomous robotic surgical systems: a review of the literature.” The international journal of medical robotics and computer assisted surgery 7.4 (2011): 375-392.

[Myhre 2019] B. Myhre, et al., “A responsibility-centered approach to defining levels of automation.” Journal of Physics: Conference Series. Vol. 1357. No. 1. IOP Publishing, 2019.

[Norris 2019] W. Norris, & A. Patterson. “Automation, Autonomy, and Semi-Autonomy: A Brief Definition Relative to Robotics and Machine Systems.” (2019).

[Richards 2016] N. Richards, & W. Smart, “How should the law think about robots?.” Robot Law. Edward Elgar Publishing, 2016.

[Sarrica 2019] M. Sarrica, et al., “How many facets does a “social robot” have? A review of scientific and popular definitions online.” Information Technology & People (2019).

[Yang 2017] G. Yang, et al. “Medical robotics—Regulatory, ethical, and legal considerations for increasing levels of autonomy.” Sci. Robot. 2.4 (2017): eaam8638.

Autonomy Levels in Mobile Devices

Autonomy is relevant in many different activities, and has recently received a lot of attention in the context of autonomous mobility of cyber-physical devices. A broad range of mobile physical devices are being developed for a variety of different geographic environments (land, sea, air). These devices have various capabilities of internal control, often described with a ‘Levels of Autonomy’ taxonomy. Autonomous mobility is largely concerned with the problems of navigation and avoidance of various hazards. In many cases, these autonomous mobile devices are proposed as potential replacements for human-operated vehicles, which creates some uncertainty regarding potential liability arrangements when devices have no human operator. The levels of autonomy proposed for autonomous mobility might provide some insight for levels of autonomy in other contexts, e.g., DAOs.

Levels of Land Mobile Device Autonomy 

Automobile-specific regulations and more general tort liabilities have established a body of law where the driver is in control of the vehicle, but this is challenged by automation culminating in autonomous vehicles [Gasser 2013]. In their more recent review of autonomous vehicles, [Taeihagh 2019] categorized technological risks and reviewed potential regulatory or legislative responses in the US and Europe: the US has been introducing legislation to address autonomous vehicle issues related to privacy and cybersecurity; the UK and Germany, in particular, have enacted laws to address liability issues; other countries mostly acknowledge these issues, but have yet to implement specific strategies. Autonomous vehicles are expected to displace manually controlled vehicles in various applications, becoming an increasingly significant portion of the vehicular traffic on public roads over time. [SAE 2018] defines six levels of driving automation (a minimal machine-readable encoding follows the list):

  • Level 0 – No Driving Automation – may provide momentary controls (e.g., emergency intervention)
  • Level 1 – Driver Assistance – sustained, specific performance of a subtask
  • Level 2 – Partial Driving Automation – sustained, specific execution with the expectation that the driver is actively supervising the operation
  • Level 3 – Conditional Driving Automation – sustained, specific performance with the expectation that the user is ready to respond to requests to intervene as well as to performance failures
  • Level 4 – High Driving Automation – sustained, specific performance without the expectation that the user will respond to a request to intervene
  • Level 5 – Full Driving Automation – sustained and unconditional performance without any expectation of a user responding to a request for intervention
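
The encoding promised above might look as follows; the enum and the helper function are our illustration, not part of the standard itself:

    from enum import IntEnum

    class SAEDrivingAutomation(IntEnum):
        NO_AUTOMATION = 0
        DRIVER_ASSISTANCE = 1
        PARTIAL = 2
        CONDITIONAL = 3
        HIGH = 4
        FULL = 5

    def user_expected_to_intervene(level: SAEDrivingAutomation) -> bool:
        # Levels 0-3 expect a human to supervise or respond to requests to
        # intervene; levels 4 and 5 drop that expectation.
        return level <= SAEDrivingAutomation.CONDITIONAL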

Off-road applications add another layer of complexity for autonomous vehicles: route planning, steeper slopes, hazard recognition, etc. [Elhannouny 2019]. The levels of autonomy for land mobile autonomous systems in this taxonomy are sequenced as decreasing levels of human supervision.

Levels of Sea Mobile Device Autonomy 

Maritime autonomous systems pose many additional challenges to their designers, as the environment is more hostile and the device may be more remote than for land mobile autonomous systems. Without well-defined safety standards and efficient risk allocation, it is hard to assign liability and define insurance or compensation schemes, leaving unmanned surface ships potentially subject to a variety of different liability regimes [Ferreira 2018]. Unmanned maritime systems have been in operation for more than 20 years [Lantz 2016], and more frequent interactions between manned and unmanned systems are expected in future. The International Maritime Organization [IMO 2019] recently initiated a regulatory scoping review of the conventions applicable to Maritime Autonomous Surface Ships. The Norwegian Forum for Autonomous Ships [NFAS 2017] proposed a four-level classification of autonomy:

  • Decision Support – crew continuously in command of the vessel
  • Automatic – pre-programmed sequence, requesting human intervention when unexpected events occur or when the sequence completes
  • Constrained Autonomous – fully automatic in most situations, with human operators continuously available to intervene when requested by the system
  • Fully Autonomous – no human crew or remote operators

As with land-based mobility autonomy, the levels of this taxonomy for sea mobile autonomous systems are defined and sequenced as decreasing levels of action by human operators.

Levels of Air Mobile Device Autonomy 

The air environment adds an additional degree of freedom (vertical) for navigation, and introduces different navigation hazards (e.g., birds, weather) compared to land or sea environments. Human-operated air mobile devices have in the past been relatively large, expensive and heavily regulated. Smaller, lower-cost drones have enabled more consumer applications. In the US, the FAA has jurisdiction over drones for non-recreational use [FAA 2016], though an increasing number of states [NACDL 2019] are also promulgating regulations in this area. Potential liabilities include not just damage to the device, but also liabilities from collisions with fixed infrastructure, personal injuries, trespass, etc.

[Clough 2002] proposed a classification of autonomous control levels ranging from 0 to 10:

  • Level 0 – Remotely piloted vehicle
  • Level 1 – Execute preplanned mission
  • Level 2 – Pre-loaded alternative plans
  • Level 3 – Limited response to real-time faults/events
  • Level 4 – Robust response to anticipated events
  • Level 5 – Fault/event adaptive vehicle
  • Level 6 – Real-time multi-vehicle coordination
  • Level 7 – Real-time multi-vehicle cooperation
  • Level 8 – Multi-vehicle mission performance optimization
  • Level 9 – Multi-vehicle tactical performance optimization
  • Level 10 – Human-like

More recently, [Huang 2005] proposed autonomy levels for unmanned systems along three dimensions: mission complexity, human independence and environmental difficulty. The levels of this autonomous air mobility taxonomy are defined and sequenced as increasing complexity of the scenario and its implied computational task.
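
A sketch of such a three-axis assessment appears below; the 0-10 scoring and the naive averaging are our assumptions for illustration, not the published ALFUS metrics:

    from dataclasses import dataclass

    @dataclass
    class ALFUSAssessment:
        # Each axis scored here on a notional 0-10 scale (an assumption
        # made for illustration, not the framework's published metrics).
        mission_complexity: float
        environmental_difficulty: float
        human_independence: float

        def summary_level(self) -> float:
            # Naive aggregate; ALFUS itself defines detailed per-axis metrics.
            return (self.mission_complexity
                    + self.environmental_difficulty
                    + self.human_independence) / 3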

Consolidated Views of Mobility Autonomy Levels

[Vagia 2019] reviewed levels of automation across different transportation modes and proposed a common faceted taxonomy based on facets of human-automation interface, environmental complexity, system complexity and societal acceptance. [Mostafa 2019] points out that the level of autonomy may be dynamically adjustable, given some situational awareness. In their literature review of levels of automation, [Vagia 2016] noted that some authors referred to “autonomy” and “automation” interchangeably, and proposed an 8-level taxonomy for levels of automation:

  • Level 1 – Manual control
  • Level 2 – Decision proposal
  • Level 3 – Human selection of decision
  • Level 4 – Computer selects decision with human approval
  • Level 5 – Computer executes selected decision and informs the human
  • Level 6 – Computer executes selected decision and informs the human only if asked
  • Level 7 – Computer executes selected decision and informs the human only if it decides to
  • Level 8 – Autonomous control, informing operators only if out-of-specification error conditions occur

This consolidated view of an autonomous mobility taxonomy is defined and structured in terms of the decision role taken by the computer system.

Taxonomies provide a means to classify instances, but the intended uses of the categorization limit the value it provides. For example, the [SAE 2018] classification may be useful in setting consumer expectations, and the developers of the other proposed taxonomies no doubt had other purposes in mind during their creation. One consideration is whether the proposed categorizations are useful in terms of the intended benefits from implementing some form of autonomy. [Araguz 2019] identified the needs for (or benefits from) autonomy in the literature as improving performance in systems with high uncertainties; ameliorating responsiveness, flexibility and adaptability in dynamic contexts; and ultimately handling complexity (especially in large-scale systems); in the aerospace field, autonomy is also a means to improve reliability and tolerance to failures. Defining a level-of-autonomy taxonomy based on the amount of operator control required, or on the complexity of the scenario/computational tasks, arguably speaks to those dimensions of intended benefits.

Presenting levels of autonomy as a numbered sequence implies not just a classification taxonomy, but some quantification of an autonomy continuum, rather than a set of discrete and non-contiguous classification buckets. Most of these taxonomy proposals also leave full autonomy as a speculative category awaiting future technology advancements (e.g., in artificial intelligence) and do not provide existence proofs that meet the category definition. While to some, autonomy may imply some degree of implementation complexity, the converse is not always true: increasing levels of complexity do not necessarily lead to autonomy. Constructing an autonomy taxonomy based on the decision roles taken by the computing systems is more interesting because it focuses on the actions of the system itself rather than on external factors (e.g., scenario complexity, other actors). The decision roles themselves, however, are discrete categories rather than a linear scale, and are independent of the importance of the decisions being made.

[Myhre 2019] reviewed autonomy classifications for land and maritime environments and proposed that a system is considered autonomous if it can legally accept accountability for an operation, thereby assuming the accountability that was previously held by either a human operator or another autonomous system; this classifies systems as autonomous or not. With non-autonomous systems, liability generally lies with the operator; with autonomous systems, liability generally shifts back to the designer. The scope of accountability or liability in the event of some loss is often described by a monetary value. Insurance contracts are often used to protect against potential liabilities up to some specified monetary level. Monetary values have the advantage of providing a simple linear scale. A system that can accept a $1M liability is thus arguably more autonomous than one that could only accept a $1,000 liability.
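
On this view, comparing the autonomy of two systems reduces to comparing the liability caps they (or their insurers) can accept. A toy sketch, with names and figures of our own invention:

    from dataclasses import dataclass

    @dataclass
    class AutonomousSystem:
        name: str
        liability_cap_usd: float  # liability its designer/insurer will accept

        def more_autonomous_than(self, other: "AutonomousSystem") -> bool:
            # Monetary caps give a simple linear ordering on autonomy.
            return self.liability_cap_usd > other.liability_cap_usd

    # Hypothetical systems echoing the $1M vs. $1,000 comparison above.
    delivery_drone = AutonomousSystem("delivery-drone", 1_000_000)
    vacuum_robot = AutonomousSystem("vacuum-robot", 1_000)
    assert delivery_drone.more_autonomous_than(vacuum_robot)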

References

[Araguz 2019] C. Araguz, et al. “A Design-Oriented Characterization Framework for Decentralized, Distributed, Autonomous Systems: The Nano-Satellite Swarm Case.”  International Symposium on Circuits and Systems (ISCAS). IEEE, 2019.

[Clough 2002] B. Clough, “Metrics, schmetrics! How the heck do you determine a UAV’s autonomy anyway.” Air Force Research Lab, Wright-Patterson AFB, OH, 2002.

[Elhannouny 2019] E. Elhannouny, & D. Longman. Off-Road Vehicles Research Workshop: Summary Report. No. ANL-18/45. Argonne National Lab. (ANL), Argonne, IL (United States), 2019.

[FAA 2016] FAA “Operation and Certification of Small Unmanned Aircraft Systems”, 81 Fed. Reg. 42064, 42065-42066 (rule in effect 28 June 2016); 14 C.F.R. Part 107.

[Ferreira 2018] F. Ferreira, et al. “Liability issues of unmanned surface vehicles.” OCEANS 2018 MTS/IEEE Charleston. IEEE, 2018.

[Gasser 2013] T. Gasser, et al. “Legal consequences of an increase in vehicle automation.” Bundesanstalt für Straßenwesen (2013).

[Huang 2005] H. Huang, et al. “A framework for autonomy levels for unmanned systems (ALFUS).” Proceedings of AUVSI’s Unmanned Systems North America (2005): 849-863.

[IMO 2019] IMO Legal Committee, 106th Session, 27-29 March 2019.

[Lantz 2016] J. Lantz, Letter to IMO in response to circular 3574, USCG, 16707/IMO/C 116, Jan 2016.

[Mostafa 2019] S. Mostafa, et al., “Adjustable autonomy: a systematic literature review.” Artificial Intelligence Review 51.2 (2019): 149-186.

[Myhre 2019] B. Myhre, et al., “A responsibility-centered approach to defining levels of automation.” Journal of Physics: Conference Series. Vol. 1357. No. 1. IOP Publishing, 2019.

[NACDL 2019] National Association of Criminal Defense Lawyers, “Domestic Drones Information Center”, retrieved 12/31/2019.

[NFAS 2017] Norwegian Forum for Autonomous Ships, “Definitions for Autonomous Merchant Ships”, 2017.

[SAE 2018] SAE, “Surface Vehicle Recommended Practice: Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles” (Revised June 2018).

[Taeihagh 2019] A. Taeihagh, & H. Lim. “Governing autonomous vehicles: emerging responses for safety, liability, privacy, cybersecurity, and industry risks.” Transport reviews 39.1 (2019): 103-128.

[Vagia 2016] M. Vagia, et al., “A literature review on the levels of automation during the years. What are the different taxonomies that have been proposed?.” Applied Ergonomics 53 (2016): 190-202.

[Vagia 2019] M. Vagia, & O. Rødseth. “A taxonomy for autonomous vehicles for different transportation modes.” Journal of Physics: Conference Series. Vol. 1357. No. 1. IOP Publishing, 2019.

Decentralized Autonomous Organizations

A Decentralized Autonomous Organization (DAO), sometimes also referred to as a Decentralized Autonomous Corporation (e.g., [Kypriotaki 2015]), is a type of smart contract executing as an autonomous organizational entity in the context of a blockchain. Note that the “smart” in smart contract does not require or imply the use of AI (though AI is also not prohibited); it refers to automation in the design and execution of legally enforceable contracts. DAOs were originally envisioned as a pseudo-legal organization run by an assemblage of human and robot participants [Buterin 2013a], [Buterin 2013b], [Buterin 2013c]. More recently, [Wang 2019] proposed a definition of a DAO as a blockchain-powered organization that can run on its own without any central authority or management hierarchy, together with a five-layer architecture reference model. The legal status of DAOs is continuing to evolve. In the absence of specific enabling legislation, there have been proposals that DAOs be considered a trust administrator [Jentzsch 2015], or a form of (implied) partnership [Zetsche 2018], or be construed as falling within existing LLC enabling statutes [Bayern 2014], or as requiring new enabling legislation for cryptocorporations [Nielsen 2018]. Meanwhile, some states (e.g., Vermont [Vermont 2018]) have created enabling legislation recognizing blockchain-based LLCs.

DAOs have been proposed in a number of different applications. [Zichichi 2019] proposed a DAO for crowdfunding applications. [Mylrea 2019] proposed a blockchain entity similar to a DAO in the context of energy markets. [Miscione 2019] reviewed blockchain as an organizational technology in the context of both markets and public registries. [Calcaterra 2019] proposed a DAO for underwriting insurance. [Diallo 2018] proposed DAO applications in e-government. DAOs are built on top of blockchain smart contracts. The scope of smart contracts can include off-chain computations, and a wide variety of token types and cyberphysical devices (see e.g., [Wright 2019]). Smart contracts can be used in the execution of a broad array of contracts.

An organization needs to be able to adapt to changes in its environment. Such adaptation in a software entity requires configuration control, mechanisms to clean up old versions, etc. This is complicated in the case of blockchains, where the software code may be stored as part of the “immutable” block. A key decision point in such adaptations lies in the selection of which adaptations to support. In the context of decentralized systems, this is complicated by multiple parties interacting in a decentralized fashion. This decision process in the evolution of a DAO is generally described in the literature as the governance mechanism for the DAO. Some blockchain architectures (e.g., Tezos) support governance mechanisms inherently; others would require the governance to be built into a higher-layer dapp – in this case a DAO. The need for such governance became widely recognized in the aftermath of an early DAO implementation. [Jentzsch 2015] implemented[1] a DAO as smart contracts on Ethereum. After a reentrancy exploit of a coding flaw resulted in considerable financial loss in 2016, the flaw was resolved through a hard fork [Mehar 2019]. [Avital 2019] identified three classes of governance mechanisms: control, coordination, and realignment, and compared governance practices in centralized and decentralized organizations, arguing that interpretations of governance mechanisms in near-autonomous or stigmergic (indirectly coordinated) forms of decentralized organization require a theoretical taxonomy emphasizing process over structure. [Norta 2015] proposed a governance-as-a-service model for collaborating DAOs. Governance mechanisms are an ongoing challenge for blockchains, not just DAOs; but DAOs as a legal entity require additional consideration of who is legally authorized to make such upgrades.
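
The 2016 exploit mentioned above hinged on a reentrancy pattern: the contract made an external payment call before updating its internal ledger, so a malicious payment callback could withdraw repeatedly against the same balance. The sketch below illustrates the pattern in Python rather than Solidity; the class and account names are ours, and the bounded attacker stands in for a recursive fallback function:

    class Attacker:
        """Simulates a malicious contract whose payment callback re-enters
        withdraw() before the ledger is updated (bounded here so the
        example terminates)."""
        def __init__(self, dao, address, max_reentries=2):
            self.dao, self.address = dao, address
            self.reentries, self.max_reentries = 0, max_reentries
            self.stolen = 0

        def receive(self, amount):
            self.stolen += amount
            if self.reentries < self.max_reentries:
                self.reentries += 1
                self.dao.withdraw(self)  # re-enter while balance is unspent

    class VulnerableDAO:
        def __init__(self, balances):
            self.balances = dict(balances)

        def withdraw(self, caller):
            amount = self.balances[caller.address]
            if amount > 0:
                caller.receive(amount)             # external call first (the bug)
                self.balances[caller.address] = 0  # state updated too late

    class FixedDAO(VulnerableDAO):
        def withdraw(self, caller):
            amount = self.balances[caller.address]
            if amount > 0:
                self.balances[caller.address] = 0  # effects before interactions
                caller.receive(amount)

    dao = VulnerableDAO({"0xmallory": 100})
    mallory = Attacker(dao, "0xmallory")
    dao.withdraw(mallory)
    assert mallory.stolen == 300  # three payouts against a balance of 100

The fix shown in FixedDAO is the widely cited “checks-effects-interactions” ordering: update internal state before making any external call, so a re-entering caller sees a zeroed balance.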

The notion of a DAO as a new paradigm for organizational design has sparked the imagination of some commentators (e.g., [Hsieh 2018], [Woolley 2016], [Koletsi 2019]), but the underlying performance and capacity of many current blockchain protocols remain technical impediments to that vision. There have been some performance improvements; for example, [Barinov 2019] introduced a Proof of Stake DAO improving the performance of consensus mechanisms over prior Proof of Work blockchains. There are a variety of platforms [Clincy 2019] for implementing blockchains, smart contracts, and thus DAOs. Some claim [Hsieh 2018] that Bitcoin itself is a form of DAO. DAOs could be implemented on permissionless or permissioned blockchains [Wang 2019]. The most (in)famous example was implemented on Ethereum [DuPont 2017]. [Alimoglu 2017] proposed an organization design for DAOs featuring a cryptocurrency funding mechanism and a decision mechanism based on voting, and released[2] an Ethereum/Solidity implementation under a GNU GPL v3.0 license. DigixDAO[3] aims to be a self-organizing community on the Ethereum blockchain that actively involves its token holders in decision making and shaping the direction of the asset tokenization business. Members of the Ethereum community have pooled funds together to distribute grants effectively through a DAO structure (MolochDAO[4] [Soleimani 2019]). OpenLaw[5] provides a support mechanism for the formation of various legal entity types for DAOs. DAOstack[6] is building an open source software stack for implementing DAOs. DatabrokerDAO[7] appears to be launching a DAO-based data service. MakerDAO supports transactions in collateralized debt positions [MakerDAO 2020].

While some (e.g., [Lopucki 2018]) have questioned the wisdom of aligning software entities with legal entities, DAOs seem to be in various stages of implementation and operation, with some color of law enabling legal recognition. Some (e.g., [Field 2020]) are predicting significant commercial DAO progress in 2020. While example DAOs have been in operation for more than 5 years now, consensus on the required functionality for DAOs is still emerging. While the decentralized nature of DAOs is perhaps unfamiliar to some, the notion of a virtual entity like a corporation is widely understood. The notion of autonomy in this context could use some further elaboration, and perhaps even categorization or quantification. DAOs by definition include some notion of autonomy, but what is required for a smart contract to rise to the level of a DAO is not exactly clear. Autonomy implies some aspect of free will and self-determination, but it is not clear that truly independent decision making is expected in the DAO applications so far. The objectives for the DAO applications above seem to be in the nature of an entity that can be trusted to perform in some expected manner. The smart contracts underlying a DAO employ logical programming to execute their contractual objectives; that logical basis provides an explanation of behavior even when the behavior is unexpected (e.g., due to some reentrancy flaw), whereas behavior explanations are often unavailable in artificial intelligence applications (e.g., those based on machine learning). Legal recognition as an entity implies not only a capacity for independent action, but also some duties (e.g., duties to respond to legal processes, and obligations toward others such as shareholders or employees). Other virtual entities (e.g., corporations) rely on designated humans to perform these functions. The argument for implementing a DAO is to minimize the human role in favor of automation. Corporations also provide a feature of limited liability. Many DAO applications manipulate digital assets of considerable value, so liability becomes a consideration if those assets were to be lost, damaged or degraded. Where the DAO extends smart contracts to cyber-physical assets or other tokenized assets, the potential liabilities may become important considerations. So how much autonomy is then required for a DAO to be a DAO?

References

[Alimoglu 2017] A. Alimoğlu, & C. Özturan. “Design of a Smart Contract Based Autonomous Organization for Sustainable Software.” 13th Int’l Conf. on e-Science. IEEE, 2017.

[Avital 2019] M. Avital, et al. “Governance of Decentralized Organizations: Lessons from Ethereum.” 11th Int’l Process Symposium: Organizing in the Digital Age. PROS 2019, Chania, Crete, Greece, 2019.

[Barinov 2019] I. Barinov, et al. “POSDAO: Proof of Stake Decentralized Autonomous Organization.” Available at SSRN 3368483 (2019).

[Bayern 2014] S. Bayern, “Of Bitcoins, Independently Wealthy Software, and the Zero-Member LLC”, Northwestern U. Law Rev. vol 108, pp 257-270, 2014.

[Buterin 2013a] V. Buterin, “Bootstrapping A Decentralized Autonomous Corporation: Part I” Bitcoin Magazine Sept 20, 2013

[Buterin 2013b] V. Buterin, “Bootstrapping An Autonomous Decentralized Corporation, Part 2: Interacting With the World”, Bitcoin Magazine, Sept. 22, 2013

[Buterin 2013c] V. Buterin, “Bootstrapping a Decentralized Autonomous Corporation, Part 3: Identity Corp”, Bitcoin Magazine, Sept 25, 2013

[Buterin 2014] V. Buterin, “DAOs, DACs, DAs and More: An Incomplete Terminology Guide”, Ethereum Blog, May 6, 2014

[Calcaterra 2019] C. Calcaterra, et al., “Decentralized Underwriting.” Available at SSRN 3396542 (2019).

[Clincy 2019] V. Clincy, & H. Shahriar. “Blockchain Development Platform Comparison.” 43rd Ann. Computer Software and Applications Conference (COMPSAC). Vol. 1. IEEE, 2019.

[Diallo 2018] N. Diallo, et al. “eGov-DAO: A better government using blockchain based decentralized autonomous organization.” 2018 International Conference on eDemocracy & eGovernment (ICEDEG). IEEE, 2018.

[DuPont 2017] Q. DuPont, “Experiments in algorithmic governance: A history and ethnography of “The DAO,” a failed decentralized autonomous organization.” Bitcoin and Beyond (Open Access). Routledge, 2017. 157-177.

[Field 2020] M. Field, “The DAO Spring is coming”, Medium, Jan 1, 2020.

[Hsieh 2018] Y. Hsieh, et al., “Bitcoin and the rise of decentralized autonomous organizations.” J. of Organization Design 7.1 (2018): 14.

[Jentzsch 2015] C. Jentzsch, “Decentralized autonomous organization to manage a trust.” White Paper (2015)

[Kypriotaki 2015] K. Kypriotaki, et al., “From bitcoin to decentralized autonomous corporations.” International Conference on Enterprise Information Systems, 2015.

[Koletsi 2019] M. Koletsi, “Radical technologies: Blockchain as an organizational movement.” Homo Virtualis 2.1 (2019): 25-33.

[Lopucki 2018] L. Lopucki, “Algorithmic Entities”, Washington U. Law Rev., vol 95, no 4, pp 887-953, 2018

[MakerDAO 2020] MakerDAO Whitepaper https://makerdao.com/en/whitepaper/ retrieved 1/2/2020

[Mehar 2019] M. Mehar, et al. “Understanding a revolutionary and flawed grand experiment in blockchain: the DAO attack.” J. of Cases on Inf. Tech. (JCIT) 21.1 (2019): 19-32.

[Miscione 2019] G. Miscione, “Blockchain as organizational technology.” University of Zurich, Department of Informatics IFI Colloquium, Zurich, Switzerland, 28th February 2019. University College Dublin, 2019.

[Mylrea 2019] M. Mylrea, “Distributed Autonomous Energy Organizations: Next-Generation Blockchain Applications for Energy Infrastructure.” Artificial Intelligence for the Internet of Everything. Academic Press, 2019. 217-239.

[Nielsen 2018] T. Nielsen, “Cryptocorporations: A Proposal for Legitimizing Decentralized Autonomous Organizations.” Utah Law Review, Forthcoming (2018).

[Norta 2015] A. Norta, et al., “Conflict-resolution lifecycles for governed decentralized autonomous organization collaboration.” Proc. 2nd International Conference on Electronic Governance and Open Society: Challenges in Eurasia. ACM, 2015.

[Soleimani 2019] A. Soleimani, et al., “The Moloch DAO: Beating the Tragedy of the Commons using Decentralized Autonomous Organizations”, Whitepaper, 2019.

[Vermont 2018] Vermont S.269 (Act 205) 2018 §4171-74

[Wang 2019] S. Wang, et al. “Decentralized Autonomous Organizations: Concept, Model, and Applications.” IEEE Transactions on Computational Social Systems 6.5 (2019): 870-878.

[Woolley 2016] S. Woolley, & P. Howard. “Automation, algorithms, and politics | Political communication, computational propaganda, and autonomous agents: Introduction.” Int’l J. of Communication 10 (2016): 9.

[Wright 2019] S. Wright, “Privacy in IoT Blockchains: with Big Data comes Big Responsibility”, Int’l Workshop on IoT Big Data and Blockchain (IoTBB’2019) in conjunction with IEEE Big Data (2019)

[Zetsche 2018] D. Zetsche, et al., “The Distributed Liability of Distributed Ledgers.” U. Ill. L. Rev. 2018.

[Zichichi 2019] M. Zichichi, et al. “LikeStarter: a Smart-contract based Social DAO for Crowdfunding.” arXiv preprint arXiv:1905.05560 (2019).


[1] https://github.com/slockit/DAO/

[2] https://github.com/ebloc/AutonomousSoftwareOrg

[3] https://digix.global/dgd/

[4] https://molochdao.com

[5] https://dao.openlaw.io/formation

[6] https://daostack.io

[7] https://databrokerdao.com