Markov Chain Aggregation for Agent-Based Models

    • USD 64.99

Publisher Description

This self-contained text develops a Markov chain approach that makes possible the rigorous analysis of a class of microscopic models which specify the dynamics of complex systems at the individual level. It presents a general framework of aggregation in agent-based and related computational models, one that uses lumpability and information theory to link the micro and macro levels of observation. The starting point is a microscopic Markov chain description of the dynamical process in complete correspondence with the dynamical behavior of the agent-based model (ABM), obtained by considering the set of all possible agent configurations as the state space of a huge Markov chain. An explicit formal representation of the resulting “micro-chain”, including microscopic transition rates, is derived for a class of models by using the random mapping representation of a Markov process. The type of probability distribution used to implement the stochastic part of the model, which defines the updating rule and governs the dynamics at a Markovian level, plays a crucial part in the analysis of “voter-like” models used in population genetics, evolutionary game theory and social dynamics. The book demonstrates that the problem of aggregation in ABMs, and the lumpability conditions in particular, can be embedded into a more general framework that employs information theory to identify different levels and relevant scales in complex dynamical systems.
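As a rough illustration of the micro-chain construction described above (not taken from the book), the Python sketch below builds the full configuration-space transition matrix of a small binary voter model on a complete graph and checks the strong lumpability condition for the aggregation that counts agents holding opinion 1. The model choice, helper names (micro_transition_matrix, is_strongly_lumpable) and the parameter N = 3 are illustrative assumptions, not the book's notation or API.

```python
# Minimal sketch: micro-chain of a binary voter model with N agents on a
# complete graph, aggregated by the number of agents holding opinion 1.
import itertools
import numpy as np

N = 3  # number of agents (illustrative choice)

# Micro state space: all binary configurations of N agents.
micro_states = list(itertools.product([0, 1], repeat=N))
index = {s: k for k, s in enumerate(micro_states)}

def micro_transition_matrix():
    """Micro-chain transitions: pick an agent i uniformly, pick another
    agent j uniformly, and let i copy j's opinion."""
    P = np.zeros((len(micro_states), len(micro_states)))
    for s in micro_states:
        for i in range(N):
            for j in range(N):
                if i == j:
                    continue
                t = list(s)
                t[i] = s[j]
                P[index[s], index[tuple(t)]] += 1.0 / (N * (N - 1))
    return P

def macro(s):
    """Aggregation map: configuration -> number of agents with opinion 1."""
    return sum(s)

def is_strongly_lumpable(P):
    """Strong lumpability: every micro state in a macro class must have the
    same total probability of jumping into each macro class."""
    classes = {}
    for s in micro_states:
        classes.setdefault(macro(s), []).append(index[s])
    for members in classes.values():
        for target in classes.values():
            probs = [P[m, target].sum() for m in members]
            if not np.allclose(probs, probs[0]):
                return False
    return True

P = micro_transition_matrix()
print("micro-chain size:", P.shape)                      # 8 x 8 for N = 3
print("lumpable w.r.t. opinion count:", is_strongly_lumpable(P))
```

For this homogeneous voter model the check succeeds, so the 2^N micro states can be lumped into N + 1 macro states without losing the Markov property; with heterogeneous interaction structures the same check would generally fail, which is the kind of situation the book's information-theoretic framework addresses.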

GENRE
Science & Nature
RELEASED
December 21, 2015
LANGUAGE
English
LENGTH
209 Pages
PUBLISHER
Springer International Publishing
SELLER
Springer Nature B.V.
SIZE
4.4 MB