Distributed Artificial Intelligence: A primer on Multi-Agent Systems, Agent-Based Modeling, and Swarm Intelligence

Almost two years ago, I paused to think about the future of AI and wrote down some “predictions” about where I thought the field was going.

One of those forecasts concerned reaching a general intelligence within several years, not through a super powerful 100-layer deep learning model, but rather through something called collective intelligence.

However, except for very obvious applications (e.g., drones), I have not read or seen any big developments in the field, and I thus thought to dig into it a bit to check what is currently going on.

As part of the AI Knowledge Map then, I will have a look here not only at Swarm Intelligence (SI) but more generally at Distributed AI, which also includes Agent-Based Modeling (ABM) and Multi-Agent Systems (MAS).

   Let’s start from the broader classification.

Distributed Artificial Intelligence (DAI) is a class of technologies and methods that span from swarm intelligence to multi-agent technologies and that basically concerns the development of distributed solutions for a specific problem.

It can mainly be used for learning, reasoning, and planning, and it is one of the subsets of AI where simulation is far more important than point prediction.

In this class of systems, autonomous learning agents (distributed at large scale and independent) reach conclusions or a semi-equilibrium through interaction and communication (even asynchronously).

One of the big benefits of these systems with respect to neural networks is that they do not require the same amount of data to work, though that is far from saying these are simple systems.
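To make this concrete, here is a minimal sketch, entirely my own toy example rather than anything from the cited papers, of independent agents reaching a semi-equilibrium purely through local, pairwise (and effectively asynchronous) communication, with no central coordinator:

```python
import random

# Minimal sketch: each agent holds a private estimate and repeatedly
# averages it with a randomly chosen peer (asynchronous gossip).
# The population converges to a shared value with no central coordinator.

def gossip_consensus(estimates, rounds=2000, seed=0):
    rng = random.Random(seed)
    values = list(estimates)
    for _ in range(rounds):
        i, j = rng.sample(range(len(values)), 2)   # two agents meet at random
        mean = (values[i] + values[j]) / 2.0       # they exchange and average
        values[i] = values[j] = mean
    return values

agents = [random.Random(i).uniform(0, 100) for i in range(20)]
print(gossip_consensus(agents)[:3])  # all entries end up near the global mean
```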

According to Ponomarev and Voronkov (2017), a DAI system can be defined by three main characteristics, which represent the minimum requirements for a system to be considered a distributed AI. We can keep diving in and observe the next level of detail of those systems (Decker, 1987; Dumke et al., 2009). Hence, in order to structure a DAI system, it is not only necessary to intervene on the system functionalities, but also to design the agent architecture (e.g., the degree of heterogeneity, reactive or deliberative, etc.) as well as the system architecture (communication, protocols, human involvement, etc.), as first evidenced in Parunak (1996).

For the system to run correctly, all these dichotomies require the designers to make several calls, in particular to guarantee that the agents are always coherent, that the communication/interaction between them follows a fixed language/protocol, and finally that the results are synthesizable and actionable.
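For instance, the fixed language/protocol requirement often boils down to a shared message schema that every agent can parse. A hypothetical minimal schema (the field names and the FIPA-ACL-style performatives here are illustrative assumptions, not a reference to any specific framework) might look like this:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical minimal agent message schema: a fixed, shared format is what
# keeps heterogeneous agents mutually intelligible and makes their outputs
# synthesizable downstream.

@dataclass
class AgentMessage:
    sender: str
    receiver: str
    performative: str   # e.g., "request", "inform", "propose" (FIPA-ACL style)
    content: dict

def encode(msg: AgentMessage) -> str:
    return json.dumps(asdict(msg))

def decode(raw: str) -> AgentMessage:
    return AgentMessage(**json.loads(raw))

wire = encode(AgentMessage("agent-1", "agent-2", "inform", {"temp": 21.5}))
print(decode(wire))  # round-trips: both agents agree on the same language
```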

What we have described are the building blocks for any DAI system, but it might sometimes sound confusing since some of the features can look conflicting at times. It could then be useful to highlight another bifurcation (Stone and Veloso, 2000): in a DAI context, you might either want to analyze a system where several branches work together to achieve a common goal, or design multiple independent agents and look for a solution emerging from their interactions.

In the first case, you are facing a Distributed Problem Solving (DPS) type of issue, while in the second you are dealing with Multi-Agent Systems at their best.

This is an important distinction because it draws a line between simply distributed and decentralized systems.

It is indeed very different to have a system with a centralized process of task distribution and re-composition versus a spontaneous allocation of tasks in a decentralized fashion.
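To illustrate the difference with a toy sketch of my own (the agents, tasks, and costs are all hypothetical), compare a centralized dispatcher that sees every agent's cost with a contract-net-style allocation where each agent bids locally:

```python
# Toy contrast between centralized and decentralized task allocation.
# Costs are hypothetical; in a real system each agent's cost would be private.

tasks = ["t1", "t2", "t3"]
cost = {                      # cost[agent][task]
    "a1": {"t1": 3, "t2": 8, "t3": 5},
    "a2": {"t1": 4, "t2": 2, "t3": 9},
    "a3": {"t1": 7, "t2": 6, "t3": 1},
}

# Distributed-but-centralized: one coordinator sees all costs and assigns.
def central_dispatch(tasks, cost):
    return {t: min(cost, key=lambda a: cost[a][t]) for t in tasks}

# Decentralized (contract-net style): a task is announced, each agent
# independently bids its own cost, and the lowest bidder wins.
def contract_net(task, agents):
    bids = {a: agents[a][task] for a in agents}   # each agent bids locally
    return min(bids, key=bids.get)

print(central_dispatch(tasks, cost))
print({t: contract_net(t, cost) for t in tasks})  # same outcome, no central planner
```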

  In DPS, multiple agents work together to solve a specific problem (Durfee, 2001).

The key in these systems is that cooperation is required since no individual agent has sufficient information, knowledge, and capabilities to solve the whole problem.

Being sure that information and capabilities are correctly allocated in such a way that agents complement rather than conflict with each other is the real challenge for any researcher.

Typical application areas are distributed planning and control, interpretation, cooperating expert systems, cognitive models of cooperation, and human cooperation backed by digital tools, among many others.

Any solution in these systems is reached through coherence (designing incentives for the agents to work together) and competence (agents need to know how to work well together), and is usually targeted at either distributed constraint-satisfaction problems (DCSPs) or distributed constraint-optimization problems (DCOPs).

For each class of problems, multiple algorithms have been designed (Yeoh and Yokoo, 2012), but in general every approach consists of reducing a larger problem into interdependent sub-tasks, be they spatial, temporal, or functional.

The partial solutions are then integrated and fit into an overall solution (Durfee et al., 1989).
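A schematic rendering of that decompose-solve-integrate loop (purely illustrative; real DCSP/DCOP solvers are far more sophisticated, see Yeoh and Yokoo, 2012):

```python
# Schematic DPS pipeline: split a problem spatially into sub-tasks,
# let each agent solve its piece, then merge the partial solutions.

def decompose(readings, n_agents):
    """Spatial decomposition: slice the data into one chunk per agent."""
    size = len(readings) // n_agents
    return [readings[i * size:(i + 1) * size] for i in range(n_agents)]

def solve_locally(chunk):
    """Each agent's sub-task: find the local maximum in its region."""
    return max(chunk)

def integrate(partials):
    """Re-compose partial solutions into the overall answer."""
    return max(partials)

readings = [3, 17, 4, 9, 25, 6, 11, 8, 2, 30, 14, 5]
partials = [solve_locally(c) for c in decompose(readings, 3)]
print(integrate(partials))  # 30: the global solution built from local ones
```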

It is easy to understand that requiring cooperation increases the complexity of the system exponentially, so why not look for a unified, bigger problem solver? In other words, do DPS approaches have a legitimate rationale to be implemented? Well, they do (Durfee et al., 1989).

First of all, it is relatively cheap to connect multiple devices from a hardware standpoint, even cheaper than having a centralized processor.

Second, many AI applications are distributed by nature and design, whether we like it or not.

The ability to modularize the problem into subproblems is also a great advantage since modules are easier to check, debug, and maintain.

Finally, having DPSs facilitates the integration of AI into human society because cooperation is the evolutionary mechanism we, as a species, have followed to prosper.

   Multi-Agent Systems (and Technologies) are a fairly old class of algorithms, where individual agents interact with each other based on pre-determined rules/constraints and, as a consequence, a collective behavior that is “good enough” emerges from those interactions.

The interactions that occur in these systems are both between agents and between agents and the environment itself (and become computationally intractable as the number of agents grows). Most of the time, an individual agent does not know ex-ante what constitutes a reward function or how to maximize it, and has no clue about the system dynamics; in other words, it does not know the full problem space and, even if it did, it would only be able to partially solve the problem.

Therefore, it must discover the solution by learning.
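A minimal sketch of that discovery-by-learning idea, using a simple epsilon-greedy bandit learner (my own toy example, not tied to any particular MAS framework): the agent starts with no model of the reward function and estimates it purely by acting and observing.

```python
import random

# One agent, unknown reward function: it can only learn by trying actions
# and observing the (noisy) payoffs, exactly as described above.

TRUE_PAYOFF = {"left": 0.2, "right": 0.8}   # hidden from the agent

def epsilon_greedy(episodes=5000, eps=0.1, seed=1):
    rng = random.Random(seed)
    estimates = {a: 0.0 for a in TRUE_PAYOFF}
    counts = {a: 0 for a in TRUE_PAYOFF}
    for _ in range(episodes):
        if rng.random() < eps:                              # explore
            action = rng.choice(list(TRUE_PAYOFF))
        else:                                               # exploit
            action = max(estimates, key=estimates.get)
        reward = 1.0 if rng.random() < TRUE_PAYOFF[action] else 0.0
        counts[action] += 1                                 # incremental mean update
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

print(epsilon_greedy())  # estimates converge toward the hidden payoffs
```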

More recent approaches indeed expect to benefit from the joint use of MAT and machine learning (more specifically reinforcement learning, deep learning, and deep convolutional networks), since ML can use ABM as an environment and a reward generator, while ABM can use ML to refine the internal models of the agents (Rand, 2007).

Neural networks are therefore used as a computational approximation of the non-linear, multivariate time series generated by the ABM, or more generally as computational emulators of entire ABMs (Van der Hoog, 2017).
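As a sketch of the emulator idea (assuming scikit-learn is available, with its MLPRegressor standing in for a deep network, and with a toy contagion ABM of my own invention standing in for a real model):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Emulator sketch: run a (toy) ABM many times, then train a neural network
# to approximate the mapping from model parameters to model output, so the
# expensive simulation can be replaced by a cheap surrogate.

def toy_abm(infection_rate, steps=50, n_agents=200, seed=0):
    """Hypothetical contagion ABM: returns the final infected fraction."""
    rng = np.random.default_rng(seed)
    infected = np.zeros(n_agents, dtype=bool)
    infected[0] = True
    for _ in range(steps):
        meets = rng.integers(0, n_agents, size=n_agents)        # random mixing
        catch = infected[meets] & (rng.random(n_agents) < infection_rate)
        infected |= catch
    return infected.mean()

params = np.linspace(0.01, 0.5, 100).reshape(-1, 1)
outputs = np.array([toy_abm(p[0]) for p in params])

emulator = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
emulator.fit(params, outputs)                 # learn the simulator's behavior
print(emulator.predict([[0.3]]))              # query the surrogate, not the ABM
```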

The further step is then integrating those approaches with Graphical Models and Mean-Field Games (Mguni et al., 2018; Yang et al., 2018).

Mean-Field Games, in fact, not only facilitate the interaction between individual agents but also track the decision-making process in huge groups of agents and study how a single agent acts in response to a group (and vice-versa).
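A stylized fragment of that mechanism (heavily simplified with respect to Yang et al., 2018; the toy coordination dynamics are my own): each agent responds to the mean action of the population rather than to every individual peer.

```python
import numpy as np

# Mean-field simplification: instead of modeling every pairwise interaction
# (intractable for large N), each agent responds to the *average* action of
# the population. Toy coordination game: payoff is higher when an agent's
# action matches the population mean.

def mean_field_step(actions, lr=0.2):
    mean_action = actions.mean()                 # the "mass" each agent sees
    # best response: move each agent's action toward the population mean
    return actions + lr * (mean_action - actions)

rng = np.random.default_rng(0)
actions = rng.uniform(-1, 1, size=1000)
for _ in range(30):
    actions = mean_field_step(actions)
print(actions.std())  # dispersion shrinks: agents coordinate via the mean field
```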

Agent-Based Models (ABMs) are instead bottom-up computational techniques that can be thought of as part of MAT, although the two do not always completely overlap.

The big difference lies, in fact, in both the design as well as the goal of the system created.

ABMs are usually designed to study the collective behaviors of agents that follow basic rules, not to solve a specific problem.

In this sense, the natures of the two systems are completely different: one is exploratory and descriptive (ABM), the other engineered and prescriptive (MAT).

ABMs are used in several applications, from urban planning to epidemiology and from economics to transportation (Waldrop, 2018), but regardless of the application they all share similar characteristics: many agents (whether “intelligent” or not) at various scales; an environment where they operate; a set of learning rules and decision-making heuristics to regulate the exchanges with other agents; and a map of what interactions are possible and how.

In other words, these systems require a full ontology to work (Livet et al., 2008).
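To make those ingredients concrete, here is a minimal, purely illustrative ABM of my own: agents in a shared one-dimensional environment, one basic local rule, and clustering that emerges without any global objective.

```python
import random

# Minimal ABM: agents on a 1-D environment follow one basic rule
# ("move toward the nearest neighbor"); clustering emerges from the rule,
# not from any global objective. Purely exploratory/descriptive, as above.

class Agent:
    def __init__(self, position, rng):
        self.position = position
        self.rng = rng

    def step(self, others):
        nearest = min(others, key=lambda o: abs(o.position - self.position))
        direction = 1 if nearest.position > self.position else -1
        self.position += direction * self.rng.random()   # noisy local rule

rng = random.Random(42)
agents = [Agent(rng.uniform(0, 100), rng) for _ in range(30)]
for _ in range(200):
    for a in agents:
        a.step([o for o in agents if o is not a])

print(sorted(round(a.position) for a in agents))  # positions clump into a few clusters
```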

Not many companies work with MAT and ABM; these classes of models are still more common in academic environments than in industrial applications. However, companies like Magenta Technology, Prowler.io, Accelerated Dynamics, Charles River Analytics, Quorum AI, AnyLogic, and SmartUQ are pioneering this space.

   Swarm/collective/symbiotic Intelligence deals with how natural (and artificial) systems made of multiple agents coordinate using decentralized control and self-organization.

The term was proposed by Bloom (1995) while studying complex adaptive systems, and it stems from a combination of different concepts (apoptosis, parallel distributed processing, group selection, and the superorganism).

At this point, you might observe that the boundaries between the different technologies listed so far may blur, and in fact it is very hard, if not sometimes useless, to draw a clear line.

A typical swarm system has some properties you should be familiar with by now: it has many agents, which are fairly homogeneous (either identical or belonging to a few typologies), and which interact with each other according to very basic rules that only exploit local information exchanged directly with another agent or via the environment (this indirect coordination mechanism is called stigmergy).

The group tends eventually to self-organize and results emerge from the overall behavior of the system.

Those individual behaviors can often be described in probabilistic terms, i.e., each agent acts stochastically based on its local perception of the neighborhood.

This is a key point because, jointly with the above properties, it assures that the system can be scaled, parallelized, and made fault-tolerant.

It also invites a further consideration on any SI algorithm.

In fact, it is not only distributed (run separately by each agent in the system) but also embeds some degree of randomness in the decision process of each node.

This may look trivial or useless, but it is the reason why the system does not get stuck in “locally compressed states” (Cannon et al., 2016).

In other words, this gives a swarm that has clustered into several isolated subgroups the chance to have an individual who is keen to leave its group and keep the entire interactive process alive.

The beauty of those systems does not end here.

They are, in fact, very flexible and at the same time robust (they keep working even when parts malfunction), as well as completely decentralized and unsupervised, and this holds regardless of whether they are used to describe natural or artificial agents.

As already mentioned, Nature is inspiring here, and there are in fact many SI algorithms that have been conceptualized by looking at natural swarms of agents: Particle Swarm Optimization (Kennedy and Eberhart, 1995), Ant Colony Optimization (Colorni et al., 1991), Artificial Bee Colony (Karaboga, 2005), Bacterial Foraging Optimization (Passino, 2002), Firefly Algorithm (Yang, 2008; 2009), Artificial Fish Swarm Optimization (Li et al., 2002), and Stochastic Diffusion Search (Bishop, 1989), among many others.
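As a concrete example, here is a compact sketch of the canonical global-best Particle Swarm Optimization (the hyperparameters are standard but arbitrary choices), minimizing a simple test function:

```python
import random

# Compact global-best PSO (Kennedy and Eberhart, 1995), minimizing f(x) = sum(x^2).
# Each particle only uses its own memory and the swarm's best-known point.

def pso(f, dim=5, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                        # each particle's best position
    gbest = min(pbest, key=f)[:]                       # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # social pull
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):                # update personal/global bests
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest, f(gbest)

sphere = lambda x: sum(v * v for v in x)
print(pso(sphere))   # converges near the origin, the global minimum
```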

However, I personally believe that human collective intelligence is the first step toward drafting an artificial swarm intelligence that can fulfill its main capability, which is optimization (mainly through simulation).

This is why I am usually very excited by crowd-intelligence applications.

One of the interesting companies pushing this forward is Unanimous AI.

They would rather think of an SI system as a “brain of brains” (Rosenberg, 2015; 2016).

In many applications implemented so far, individuals have simply been asked to provide inputs, which were then aggregated after the fact into a sort of “average sentiment” intelligence.

According to Rosenberg, the existing methods to form a human collective intelligence do not even allow users to influence each other, and when they do, they only allow the influence to happen asynchronously, which causes herding biases.

An AI, on the other hand, will be able to fill the connectivity gaps and create a unified collective intelligence, very similar to the ones other species have.

Good inspirational examples from the natural world are bees, whose decision-making process closely resembles the human neurological one.

Both of them use large populations of simple excitable units working in parallel to integrate noisy evidence, weigh alternatives, and finally reach a specific decision.

According to Rosenberg, this decision is achieved through a real-time closed-loop competition among sub-populations of distributed excitable units.

Every sub-population supports a different choice, and the consensus is reached not by majority or unanimity as in the average sentiment case, but rather as a “sufficient quorum of excitation” (Rosenberg, 2015).

An inhibition mechanism of the alternatives proposed by other sub-populations prevents the system from reaching a sub-optimal decision.
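A stylized simulation of that quorum mechanism, my own toy rendering of the description above rather than Unanimous AI's actual system, could look like this: sub-populations integrate noisy evidence, inhibit their rivals, and the decision fires when one of them crosses the quorum threshold.

```python
import random

# Toy quorum decision: each option has a sub-population whose excitation
# integrates noisy evidence; rivals inhibit each other; the first
# sub-population to reach a "sufficient quorum of excitation" wins.

def quorum_decision(evidence, quorum=50.0, inhibition=0.02, noise=1.0, seed=0):
    rng = random.Random(seed)
    excitation = {option: 0.0 for option in evidence}
    while True:
        total = sum(excitation.values())
        for option, strength in evidence.items():
            drive = strength + rng.gauss(0, noise)               # noisy evidence
            rivals = total - excitation[option]                  # rival excitation
            excitation[option] += drive - inhibition * rivals    # mutual inhibition
            excitation[option] = max(excitation[option], 0.0)
        for option, level in excitation.items():
            if level >= quorum:
                return option

# Hypothetical support levels for three alternatives:
print(quorum_decision({"A": 1.0, "B": 0.6, "C": 0.4}))  # "A" usually wins
```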

Unanimous AI is clearly not the only company out there working on collective intelligence, even though this is not a crowded market at all.

Companies like Augur, Estimize, Almanis, Ace Consensus, Premise, Streetbees, CrowdMed, ConvergentAI, Gnosis, Cindicator, and Stox use crowds in different ways, but all are based on the idea that many minds reach a higher prediction accuracy than an individual one (the wisdom of the crowd).
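The statistical core of that idea fits in a few lines (a toy demonstration, not any of these companies' actual methods): many independent, noisy estimates, once aggregated, land closer to the truth than the typical individual guess.

```python
import random
import statistics

# Wisdom of the crowd in miniature: many independent, noisy guesses of a
# hidden quantity; the aggregate beats the typical individual guess.

random.seed(7)
truth = 420.0
guesses = [truth + random.gauss(0, 80) for _ in range(1000)]  # noisy individuals

crowd_estimate = statistics.mean(guesses)
individual_errors = [abs(g - truth) for g in guesses]

print(abs(crowd_estimate - truth))           # small: crowd error
print(statistics.mean(individual_errors))    # large: average individual error
```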

Numerai and Quantopian are other examples of crowdsourced intelligence where ‘tournaments’ are created, and the data scientists who implement the best AI models and reach the best predictions win a sum of money from the company (there are differences between the two, especially regarding licenses, algorithm submission, etc., although the main idea is more or less the same).

Finally, there are a few companies that deal with SI applied to physical robotic agents, e.g., DoBots (swarm robots), Hydromea (swarm marine robots), Sentien Robotics (aerial robots), Third Space Auto (autonomous vehicles and UAVs), and Swarm Technology.

   I am highly convinced that whatever form of general intelligence emerges at some point, if any, will likely be a distributed one.

Not simply decentralized, but distributed.

Are MAT or Swarm Intelligence, therefore, the solution? Well, partly.

The integration with other methods (e.g., neural nets, reinforcement learning, etc.), as well as with other technologies such as blockchain (Calvaresi et al., 2018), is essential but still not pursued as much as it should be.

The class of algorithms and methodologies explained above shares some characteristics that I see as essential in any future AI: parallel processing, distributed decisions, several agents acting in a specific environment, a knowledge-based approach to interactions and modeling, and self-organization with emergent intelligence.

But above all, this is a class of models that never gives us one specific answer to a problem but several, depending on how the problem itself (and its constraints) is framed.

It is very much like life, and the need for calibrating at each step throughout the process makes this similarity even stronger.

And this, at least to me, is more than enough to see multi-agent systems as a paramount step toward a better AI.

   Bishop, J. M. (1989). “Stochastic Searching Networks”. Proceedings 1st IEE Conference on Artificial Neural Networks: 329–331.

Bloom, H. (1995). “The Lucifer Principle: A Scientific Expedition Into the Forces of History”. New York: Atlantic Monthly Press.

Calvaresi, D., Dubovitskaya, A., Calbimonte, J. P., Taveter, K., Schumacher, M. (2018). “Multi-Agent Systems and Blockchain: Results from a Systematic Literature Review”. In: Demazeau Y., An B., Bajo J., Fernández-Caballero A. (eds) Advances in Practical Applications of Agents, Multi-Agent Systems, and Complexity: The PAAMS Collection. PAAMS 2018. Lecture Notes in Computer Science, vol 10978. Springer, Cham.

Cannon, S., Daymude, J. J., Randall, D., Richa, A. W. (2016). “A Markov Chain Algorithm for Compression in Self-Organizing Particle Systems”. PODC ’16 Proceedings of the 2016 ACM Symposium on Principles of Distributed Computing: 279–288.

Colorni, A., Dorigo, M., Maniezzo, V. (1991). “Distributed optimization by ant colonies”. In: F. Varela, P. Bourgine (Eds.), Proceedings of the European Conference on Artificial Life, Elsevier Publishing: 134–142.

Decker, K. S. (1987). “Distributed problem solving: A survey”. IEEE Transactions on Systems, Man, and Cybernetics, 17 (5): 729–740.

Dumke, R., Mencke, S., Wille, C. (2009). “Quality Assurance of Agent-Based and Self-Managed Systems”. CRC Press.

Durfee, E. H., Lesser, V. R., Corkill, D. D. (1989). “Trends in Cooperative Distributed Problem Solving”. IEEE Transactions on Knowledge and Data Engineering, 1 (1): 63–83.

Durfee, E. H. (2001). “Distributed Problem Solving and Planning”. In: Luck M., Mařík V., Štěpánková O., Trappl R. (eds) Multi-Agent Systems and Applications. ACAI 2001. Lecture Notes in Computer Science, vol 2086. Springer, Berlin, Heidelberg.

Karaboga, D. (2005). “An idea based on honey bee swarm for numerical optimization”. Technical Report 06, Erciyes University, Turkey.

Kennedy, J., Eberhart, R. (1995). “Particle swarm optimization”. In: Proceedings of the IEEE International Conference on Neural Networks, 4: 1942–1948.

Li, L., Shao, Z., Qian, J. (2002). “An optimizing method based on autonomous animals: fish-swarm algorithm”. System Engineering Theory and Practice, 22 (11): 32–38.

Livet, P., Phan, D., Sanders, L. (2008). “Why do we need Ontology for Agent-Based Models?”. Complexity and Artificial Markets: 133–145.

Mguni, D., Jennings, J., Munoz de Cote, E. (2018). “Decentralised Learning in Systems with Many, Many Strategic Agents”. arXiv:1803.05028.

Parunak, H. V. D. (1996). “Applications of distributed artificial intelligence in industry”. In: G. M. P. O’Hare and N. R. Jennings (eds), Foundations of Distributed Artificial Intelligence: 139–164. Wiley Interscience.

Passino, K. (2002). “Biomimicry of bacterial foraging for distributed optimization and control”. IEEE Control Systems Magazine: 52–67.

Ponomarev, S., Voronkov, A. E. (2017). “Multi-agent systems and decentralized artificial superintelligence”. arXiv:1702.08529.

Rand, W. (2007). “Machine learning meets agent-based modeling: when not to go to a bar”. Unpublished paper.

Rosenberg, L. B. (2015). “Human Swarms, a real-time method for collective intelligence”. In: Proceedings of the European Conference on Artificial Life: 658–659.

Rosenberg, L. B. (2016). “Artificial Swarm Intelligence, a Human-in-the-loop approach to A.I.”. In: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16): 4381–4382.

Stone, P., Veloso, M. (2000). “Multiagent Systems: A Survey from a Machine Learning Perspective”. Autonomous Robots, 8 (3): 345–383.

Van der Hoog, S. (2017). “Deep Learning in (and of) Agent-Based Models: A Prospectus”. arXiv:1706.06302.

Waldrop, M. M. (2018). “Free Agents: Monumentally complex models are gaming out disaster scenarios with millions of simulated people”. Science, 360 (6385): 144–147.

Yang, X. S. (2008). “Firefly algorithm”. Nature-Inspired Metaheuristic Algorithms, 20: 79–90.

Yang, X. S. (2009). “Firefly algorithms for multimodal optimization”. In: O. Watanabe, T. Zeugmann (Eds.), Stochastic Algorithms: Foundations and Applications, Vol. 5792 of Lecture Notes in Computer Science, Springer Berlin Heidelberg: 169–178.

Yang, Y., Luo, R., Li, M., Zhou, M., Zhang, W., Wang, J. (2018). “Mean Field Multi-Agent Reinforcement Learning”. arXiv:1802.05438.

Yeoh, W., Yokoo, M. (2012). “Distributed Problem Solving”. AI Magazine, 33 (3): 53–65.

This piece originally appeared on Forbes. Reposted with permission.
