Time: From 15 September 2020, biweekly on Wednesdays, 8.30-10.00 a.m.
Venue: Zoom platform / Matzner lecture hall, Kőszeg, Chernel u. 14., Európa Ház, 1st floor
Convenor: Gábor Hofer-Szabó
Downloadable readings: https://owncloud.btk.mta.hu/index.php/s/PUHTU40ETYPMml8
The mission statement of the Seminar:
The aim of the reading group in “Sociophysics” is to bring together researchers and students with different backgrounds to create a common intellectual platform, based on various disciplines, for addressing the challenges of our times in a complex way. The suggested readings cover a wide range of topics, from physics, political science, economics, and social psychology to climate change and beyond. They are selected with the aim that each of us can learn from those who are experts in the field. This means that the majority of the readings will not be in your field, so reading them will require some effort on your part. You are not expected to fully understand each paper, but you are very much expected to comprehend the broader picture and to come up with your own questions and comments.
Some questions on which the seminar is focusing:
- What is the relation between the natural and social sciences? Is there a fundamental difference between their approach to their subject matter? Is there a methodological gap between „hard” and „soft” sciences?
- What is the status of explanation, reduction, emergence, and contextuality in natural and social sciences?
- Does modern science, especially modern physics, help to tackle some of the problems of social sciences? How can social scientists inspire physicists? Are there mutual channels, practices, languages to provide heuristic moments, concepts, analogies to each other? How about the mutual liabilities from a historical perspective?
- Which scientific traditions, schools, emblematic thinkers are ready to be rediscovered from these perspectives?
The dates and short summaries of the seminars are available in English.
Sociophysics schedule
June 7, 2022:
This was our last session in the spring semester. We made plans for the next academic year concerning readings, workshops, and attracting new people from iASK and outside.
May 25, 2022:
In this session, we discussed with György Csepeli some of the social psychological foundations of sociophysics.
What do you think?
Do there exist basic cognitive and psychological features of agents (like the mass and charge of the particles) which are robust enough such that one can build statistical models to explain emergent social phenomena?
May 11, 2022:
Today we had no reading. Instead, we talked through the methods and techniques which are necessary to build a physical model for a given social phenomenon. The session was a kind of preparation for the next semester.
April 27, 2022:
The ultimate goal of sociophysics can be formulated as the quest to understand “the collective behaviour of people in a society, in terms of their opinions, attitudes or decisions”. This definition comes from Federico Vasquez, whose paper “Modeling and Analysis of Social Phenomena: Challenges and Possible Research Directions” we discussed at today’s seminar. The paper explains the main problems with current experiments in sociophysics and outlines possible research directions, which include collecting more empirical data, involving data analysis, and creating agent-based models with more realistic features that require calibration and validation against real data.
We discussed the main reasons why agent-based modeling fails to make better predictions of human behaviour; one of our conclusions was that its success depends on determining the main psychological parameters. The models in sociophysics are not general but tailored to specific phenomena: their methodology is capable of addressing specific problems. Furthermore, we discussed the importance of teamwork and the involvement of physicists, social scientists, psychologists, and data scientists in agent-based simulation modelling.
What do you think?
Is determining the fundamental properties of agents the main problem of sociophysics? How to improve the prediction accuracy of agent-based simulation modelling?
Reading: Modelling and Analysis of Social Phenomena: Challenges and Possible Research Directions
March 30, 2022:
Michael Esfeld’s paper “From the open society to the closed society: reconsidering Popper on natural and social science” is a libertarian attack on the coercive policies introduced to handle the corona crisis. Esfeld’s main concern is that lockdowns and other restrictive measures, which treat certain values (health or climate protection) as absolute and micromanage society down to family and individual life with respect to these values, are a serious threat to the Popperian open society based on individual freedom. During the session we evaluated Esfeld’s arguments both at the specific level of policy measures responding to Covid and at the general level of social engineering.
What do you think?
Do you find coercive measures such as lockdowns a threat to individual freedom or a legitimate tool in fighting the virus?
March 16, 2022:
We agreed that neither Harari’s oversimplified historical approach nor his conclusions about the role and reverse trajectory of wars in history are adequate to frame the Russian military aggression against Ukraine. From a historical point of view, the real problem is that political integration lags behind economic and cultural integration. The political elites of the past are not changing their mindset, because integration into bigger socio-economic units would require restructuring, new types of division of labor, a new division of power, and new decision-making platforms. Politics has become the most obsolete subsystem at a time when our civilization is trying to find its best adaptation repertoire. In the language of social macroevolution: this war accompanies the transition from a group-formation phase to a group-transformation phase, in which the old interest structures are trying to safeguard their survival potential.
What do you think?
Our question is about the quest for tools, strategies, institutions and tactics to change the mindset of our political elites, to renew their approach (or their telos and logos): what is the mission of policy making, whose interests are to be represented, which kind of values and moral machineries needs which kind of reconsiderations and reformulations?
Reading: Yuval Noah Harari argues that what’s at stake in Ukraine is the direction of human history
March 02, 2022:
Social media algorithms measure how users interact with each other and how they utilise the platform to collect and repurpose behavioural data. These data are interpreted and used by algorithms for targeted marketing and the personalisation of the experience. Since algorithms determine visibility of information based on personal preferences, they enclose users in filter bubbles where they are likely to see only information they tend to agree with.
At today’s seminar, we discussed the paper “Majority-vote model with limited visibility: An investigation into filter bubbles” by André L. M. Vilela, Luiz Felipe C. Pereira, Laercio Dias, H. Eugene Stanley, and Luciano R. da Silva. The paper raises the question of how social media algorithms influence opinion formation. The authors added a visibility parameter to the majority-vote model and used Monte Carlo simulations to calculate the critical noise parameter as a function of visibility and to obtain the phase diagram of the model. “Starting from zero noise and increasing its value, the system undergoes a phase transition from an ordered phase, where one opinion prevails, to a disordered phase, with no dominant opinion which corresponds to a polarized society”. We talked about the possibilities of understanding the effects of algorithmic filter bubbles on opinion formation through such simulations. Our conclusion was that unlike in physics, where we know the basic building blocks or microelements, this is not the case in the social sciences. While the paper is an important contribution within physics, it does not yet give us critical insights into the impact of social media algorithms on opinion formation.
What do you think?
Is the Ising model applicable to social interactions? Can this simulation tell us something about opinion formation even though the agents in the model don’t interact?
Reading: Majority-vote model with limited visibility: An investigation into filter bubbles
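For readers unfamiliar with the model class, the mechanism can be sketched in a few lines. This is not the authors’ code; it is a minimal illustration of a majority-vote dynamics with a visibility parameter, where all names and parameter values are our own choices.

```python
import random

def majority_vote_sweep(opinions, neighbors, noise, visibility, rng):
    """One Monte Carlo sweep of a majority-vote model with limited visibility.

    Each agent sees each neighbor only with probability `visibility`;
    with probability `noise` it adopts the opinion opposite to the
    visible local majority, otherwise it follows that majority.
    """
    n = len(opinions)
    for _ in range(n):
        i = rng.randrange(n)
        visible = [opinions[j] for j in neighbors[i] if rng.random() < visibility]
        s = sum(visible)
        if s == 0:  # no visible majority (tie or nothing seen): flip a coin
            opinions[i] = rng.choice((-1, 1))
            continue
        majority = 1 if s > 0 else -1
        opinions[i] = -majority if rng.random() < noise else majority
    return opinions

# toy run on a ring of 100 agents with +/-1 opinions
rng = random.Random(0)
n = 100
neighbors = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
opinions = [rng.choice((-1, 1)) for _ in range(n)]
for _ in range(200):
    majority_vote_sweep(opinions, neighbors, noise=0.05, visibility=0.8, rng=rng)
magnetization = abs(sum(opinions)) / n  # order parameter: values near 1 mean consensus
```

Scanning `noise` for each `visibility` and locating where the magnetization drops is, in essence, how a phase diagram like the one in the paper is obtained.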
February 16, 2022: The evolution of information behavior, the ontogeny of information, is a superb issue to illustrate why and how social scientists by necessity need the contribution of the natural sciences to unfold some of their special problems. Cognitive evolution and its predecessor, the development of the nervous system as an evolutionary prediction force, is beautifully described by Lajos Kardos’s adiaphore determination scheme (ADS) theory. Jumping off from this point, the participants discussed interesting cross-cutting issues, such as the past and the future, the moral consequences of accepting the ADS model, and, as a historical example, the Sleepwalkers effect described by Clark: the cybernetic gap in the First World War between the pace of generating and receiving new information and the obsolete patterns and slow decision-making processes which led to the escalation of the war, even though no one had wanted it.
What do you think?
Is there any chance of harmonizing concepts between the Popperian second and third worlds?
February 2, 2022: The group considered the Mindscape podcast of Sean Carroll (https://www.preposterousuniverse.com/podcast/2020/01/06/78-daniel-dennett-on-minds-patterns-and-the-scientific-image/) in which he discussed „Minds, Patterns, and the Scientific Image” with Daniel C. Dennett, the famous American philosopher. Various intriguing topics came up such as the scientific image, patterns, intentional stance, the space of reasons, emergence, consciousness, moral agents, the recursive self-awareness, and the role of transparency in creating an expanding space of possibilities that lead to the emergence of consciousness. The group decided that they will take a closer look at the problem of consciousness and the role of the appearance of language in its emergence.
What do you think?
How can we bridge the concepts of the scientific image (electrons, quarks, atoms, photons, distance, time, etc.) with the concepts of the manifest image (colors, sounds, dollars, homeruns, love, etc.)?
Daniel Dennett on Minds, Patterns, and the Scientific Image Mindscape
December 15, 2021: In this session, we discussed a new non-anthropomorphic approach to social sciences advocated by Brian Epstein, a philosopher of social sciences and the 2016 winner of the Lakatos Award.
What do you think?
Do you agree with Epstein that the social sciences need to be pursued at a sub-human level?
Watch: https://www.youtube.com/watch?v=FLbEKpL-5Z0
November 24, 2021: We examined an agent-based simulation study on the resilience of social norms of cooperation under resource scarcity and inequality. The model considers agents who exhibit either cooperative or non-cooperative (defector) behavior in sharing water over two harvesting seasons. The two optimal behaviors are local minima: cooperative behavior is advanced by punishment on the part of the community, while non-cooperative behavior is advanced by the increased profit of the selfish strategy. The model reveals that the whole system ends up in one of these behavioral patterns depending on the values of model parameters such as punishment strength, the initial proportion of cooperators, water inflow, water variability, the frequency of interactions among agents, and the relative price of dry and wet seasons. The topic of the article drove the conversation towards the cooperative behavior of people in the Covid crisis regarding vaccination.
What do you think?
How can such models be made more realistic and what are the limitations of this modeling approach?
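The two-attractor structure described above can be illustrated with a much simpler toy model than the one in the study (which has harvesting seasons and explicit water dynamics). The sketch below, with payoff and punishment parameters invented for illustration only, shows the generic mechanism: cooperators pay a cost, defectors risk a fine, and agents imitate better-scoring peers.

```python
import random

def update_strategies(strategies, benefit, cost, fine, punish_prob, rng):
    """One round of a toy cooperation-with-punishment model.

    Cooperators ("C") pay `cost` to contribute; everyone receives
    `benefit` scaled by the fraction of cooperators; defectors ("D")
    are fined with probability `punish_prob`. Each agent then imitates
    a randomly chosen agent whose payoff was higher (a simple
    evolutionary update).
    """
    n = len(strategies)
    coop_frac = strategies.count("C") / n
    payoffs = []
    for s in strategies:
        p = benefit * coop_frac
        if s == "C":
            p -= cost
        elif rng.random() < punish_prob:
            p -= fine
        payoffs.append(p)
    new = list(strategies)
    for i in range(n):
        j = rng.randrange(n)
        if payoffs[j] > payoffs[i]:
            new[i] = strategies[j]
    return new

rng = random.Random(1)
strategies = ["C" if rng.random() < 0.6 else "D" for _ in range(200)]
for _ in range(100):
    strategies = update_strategies(strategies, benefit=3.0, cost=1.0,
                                   fine=2.0, punish_prob=0.8, rng=rng)
coop_frac = strategies.count("C") / len(strategies)
```

Varying `punish_prob` and the initial proportion of cooperators lets one watch the system settle into one behavioral pattern or the other, mirroring the parameter dependence the study reports.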
November 10, 2021: In this session we talked about the difference between idealization and approximation based on John Norton’s paper “Approximation and Idealization: Why the Difference Matters”. According to Norton, “an approximation is an inexact description of a target system. An idealization is a real or fictitious system, distinct from the target system, some of whose properties provide an inexact description of some aspects of the target system.”
What do you think?
Pick your favorite sociological model. Is it an idealization or an approximation?
Reading: Approximation and Idealization: Why the Difference Matters
October 27, 2021: Based on the Varieties of Capitalism paradigm developed in 2001 by Hall and Soskice, the paper titled “Varieties of entrepreneurship: exploring the institutional foundations of different entrepreneurship types through ‘Varieties-of-Capitalism’ arguments”, written by Selin Dilli, Niklas Elert and Andrea M. Herrmann (2018), explores the ways in which differences in institutions across 20 Western market economies influence the success of entrepreneurship. The authors find that there is no perfect type of market economy for entrepreneurship; instead, each institutional setting provides benefits for certain types of entrepreneurship. The paper was discussed mostly as a model for the analysis of socioeconomic phenomena. For most of the participants it was an entirely new approach, and the discussion evolved around the questions of the time of the emergence of the model/typology and its applicability to research on non-Western and non-market economies. The participants also discussed entrepreneurship as a socioeconomic phenomenon.
What do you think?
To what extent, if any, can we apply the socioeconomic models developed based on the evidence gathered from the Western economies to emerging markets and non-market economies?
October 13, 2021: Due to the increasing number of natural disasters caused by climate change worldwide, understanding the dynamics of communication and forming individual opinions in risky situations is very important, especially considering the danger of spreading fake news and conspiracy theories. In the paper “Opinion Dynamics and Collective Risk Perception: An Agent-Based Model of Institutional and Media Communication About Disasters”, the authors Francesca Giardini and Daniele Vilone attempted to apply a numerical approach to the study of collective risk evaluation with the aim to investigate how social influence and vertical communication from governmental institutions and the media affect individual opinions. They created an agent-based model with the ambition to examine how different sources of information interact with selected individual features including trust in institutions, sensitivity to risks and propensity to interact with others.
At today’s seminar, we discussed this agent-based model and the results of the experiment. The research shows two crucial results: unlike in many other agent-based models, topology did not matter in this one, and, even though all the agents received the same news, absolute consensus was never reached in the model. The model was criticised for being overly simplistic, as it started from the premise that only three parameters crucially affect the formation of opinion in risky situations. However, it was also said that the model indicates that identifying the right parameters can indeed help predict how people will form their opinions. Even though humans are much more complex than atomic particles, it is surprising how few individual traits can forecast the outcomes. It was also stressed that this simple model is just a starting point for creating more complex ones.
What do you think?
Can we explain human behavior in specific situations if we choose the right parameters that crucially determine our opinions and choices? Can agent-based models explain how people form their opinions in risk situations? Do they help us understand increasing political polarisation?
Reading: Opinion Dynamics and Collective Risk Perception: An Agent-Based Model of Institutional and Media Communication About Disasters
Link: www.jasss.org/24/1/4/4.pdf
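The interplay of vertical (institutional) and horizontal (peer) communication can be captured in a few lines. The sketch below is our own minimal illustration, not the published model: agent traits, update rule, and parameter values are all assumptions made for the example.

```python
import random

def step(opinions, signal, trust, peer_weight, rng):
    """One update of a toy risk-perception model.

    Each agent's risk estimate (in [0, 1]) is mixed with (a) an
    institutional signal, weighted by that agent's trust in
    institutions, and (b) the opinion of a randomly chosen peer,
    weighted by `peer_weight`.
    """
    n = len(opinions)
    new = []
    for i in range(n):
        peer = opinions[rng.randrange(n)]
        o = opinions[i]
        o += trust[i] * (signal - o)      # vertical channel
        o += peer_weight * (peer - o)     # horizontal channel
        new.append(min(1.0, max(0.0, o)))
    return new

rng = random.Random(2)
n = 100
opinions = [rng.random() for _ in range(n)]
trust = [rng.uniform(0.0, 0.3) for _ in range(n)]  # heterogeneous trust in institutions
for _ in range(50):
    opinions = step(opinions, signal=0.9, trust=trust, peer_weight=0.2, rng=rng)
spread = max(opinions) - min(opinions)  # residual disagreement after many updates
```

Even in this stripped-down version, heterogeneous trust means agents respond differently to the same broadcast signal, which is the qualitative point behind the finding that identical news need not produce consensus.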
September 29, 2021: As a starting point for our semester, we discussed the provocative paper by Joshua M. Epstein titled “Why Model?”. The main purpose of the paper is to address “some enduring misconceptions about modeling”: that modeling always means mathematical models, that prediction is always the goal of modeling, and that only mathematical models need to be validated. The author offers 16 reasons other than prediction to build models, among them explanation, guiding data collection, suggesting dynamical analogies, and discovering new questions. He distinguishes between implicit and explicit models: in explicit models, the assumptions are laid out clearly and in detail, which makes reproducible results possible. From the group’s discussion it became increasingly clear that if we want to bring natural and social scientists closer, their attitudes towards modeling are a key issue: a common ground for those attitudes should be found.
What do you think?
Can we find problems in the practice of iASK that can be reduced to a skeleton so that they can be handled with mathematical models?
June 08, 2021: After discussing last week how modern physics can change everyday concepts such as time, we talked about the realm of reality where the methods and viewpoints of physics are no longer valid, at least according to Stuart Kauffman, author of the book “A World Beyond Physics”. His concept is that life is a historical process, namely, a one-time process that visits only a tiny fraction of the available states, a feature called non-ergodicity in physics. Life includes autocatalytic cycles that make it possible for living systems to recreate themselves and to elevate the level of their complexity via the process of evolution. Complex subsystems, called “Kantian wholes” by Kauffman, play functional roles influencing the survival and further evolution of the organisms of which they are parts.
What do you think?
Does the fact that there are loopbacks in living systems that are absent in non-living systems really mean that the bottom-up methods of physics are impossible to use, or is this just a technical issue?
May 25, 2021: For this session, we read Carlo Rovelli’s book The Order of Time, in which Rovelli analyzes the concept of time as described by modern physics. He explains how modern physics challenges our common-sense view of time, especially the notion of the present, the flow of time, and the direction of time. During the session, we discussed in what sense these findings of modern physics are relevant to the social and life sciences.
What do you think?
Relativity teaches us that there is no such thing as an objective, observer-independent present. Does it mean that our everyday concept of the present is an illusion?
Reading: Carlo Rovelli: The Order of Time / Az idő rendje
May 12, 2021: We discussed an article titled “The Great Powers and Regional Conflicts: Eastern Europe and the Balkans from the Post-Napoleonic Era to Post-Cold War Era” written by Benjamin Miller and Korina Kagan, and published in 1997. The article discusses the causal relation between a) balances of great power capabilities and interests, b) four types of powers’ involvement in a conflict – cooperation, competition, disengagement, and dominance -, c) small states’ autonomy, and d) patterns of regional conflicts. The article addresses the influence of these causal relations on the consequences for international security. The major argument of the article is that variations in the degree of intensity of conflicts and the likelihood of successful conflict resolution in different regions are affected by the character of great power involvement in these regions, while the great powers do not cause or terminate regional conflicts, but rather mitigate them.
The argument as well as the model proposed by the article generated criticism within the group for disregarding the influence of great-power conflicts in the form of proxy wars, irrational factors such as identity, and hidden factors. Thus, the group questioned the argument’s applicability to other regions and other periods in the history of Eastern Europe and the Balkans. We also discussed the latest political developments in the Balkans (an unofficial document, the “Non-Paper”, which suggests redrawing the Balkan states’ borders along ‘ethnic’ lines).
What do you think?
To what extent can ‘great powers’ and ‘small nation-states’ act, or be regarded, as agents of change of the world system, given their own transformations? How useful is the idea of ‘great powers’ currently?
April 28, 2021: We read the paper “Statistical physics of human cooperation” by Perc et al. and discussed the significance and problems of applying statistical physics to social phenomena. Gábor Hofer-Szabó gave a short introduction to the main ideas and methods of statistical mechanics and the reduction of thermodynamics to statistical physics. We discussed the structure of the reduction of phenomenological descriptions to fundamental sciences.
What do you think?
Do social sciences have a fundamental ontology which phenomenological descriptions can be reduced to?
Reading: Statistical physics of human cooperation
April 14, 2021: We discussed a paper titled “Resilience and vulnerability to climate change in the Greek Dark Ages”, in which the authors apply an agent-based model and simulation to study the dynamics of the Bronze Age collapse of a civilization between 1300 and 900 BC. The model, although simplistic, revealed the importance of factors such as soil degradation, rainfall variability, and dietary reliance on agriculture in the process. Positive feedback loops contributing to the decline were also revealed by the model. The simplicity of the model drew some criticism inside the group, and doubts about the usefulness of such models were put forward.
What do you think?
Is it useful to construct simplistic models for studying historical processes by ignoring details that are considered important by historians?
March 31, 2021: Understanding the dynamics of work processes in home-office settings throughout worldwide lockdowns could help improve labour efficiency during and after the Covid-19 pandemic. The paper by Peter Hardy, Leonardo Soriano Marcolino and Jose F. Fontanari entitled “The Paradox of Productivity During Quarantine: an Agent-Based Simulation” proposes that social distancing can be beneficial in various work environments provided that there are certain limited social interactions. We discussed the possibility of using such simulation models to quantify productivity at work and debated possible empirical experiments that could test them. We talked about the difficulties of simplifying types of personal traits and social interactions to make such models possible. However, the prevailing opinion was that carefully designed experiments could show whether specific interactions among co-workers improve efficiency.
What do you think?
Can certain psychological traits that have the ability to affect social interactions play an important role in work productivity, and can this help us understand the socio-economic implications of the Covid-19 pandemic?
Reading: The Paradox of Productivity During Quarantine: an Agent-Based Simulation
March 03, 2021: Our fellows had their most heated debate ever in the history of the Sociophysics reading group after discussing Seth D. Baum’s forthcoming paper (Philosophy & Technology, DOI 10.1007/s13347-020-00416-5) on Artificial Interdisciplinarity. In Baum’s vocabulary, this is artificial intelligence that performs interdisciplinary research, or supports other agents in performing interdisciplinary research, to solve complex societal problems.
We agreed that the term Artificial Interdisciplinarity itself is an unlucky choice, and we do not find its interdisciplinarity-specific features strong enough (transcending epistemic divides between disciplines? peer review of work that integrates an eclectic mix of topics? transfer of interdisciplinary research insights from one problem to another?).
The discussion turned into a debate on the nature of scientific data production and the increasingly stressful information environment of the scientist. The malaise of those who feel overwhelmed by the unprocessable quantity of relevant data and literature collided with the claim that information overload is a myth and that we have enough power and tools to follow the growth of new scientific knowledge.
What do you think: have we exceeded the limits of tractability of ceaselessly produced scientific knowledge, or can AI really help us cope with the size and complexity of the noosphere?
March 03, 2021: Based on the Nature Letter by Whitehouse et al. titled “Complex societies precede moralizing gods throughout world history”, we discussed the possibility of using data science to answer broad historical questions. Specifically, the paper uses the Seshat: Global History Databank to decide what came first: complex societies or moralizing gods who care about the interpersonal relationships between people. The answer is unequivocal and included in the title. We discussed the difficulties of building such databases, but the general opinion was that we need to go down to this “granular” level in order to build a complete picture and a narrative. We also discussed the paper by László Z. Karvalics, who proposed a theory based on “control structures” and suggested a framework for explaining how various social structures, including religious ones, come about.
What do you think?
Is building databases such as Seshat a reasonable approach in history and the social sciences, or is the information lost via the standardization of data too important?
Reading: Complex societies precede moralizing gods throughout world history
February 10, 2021: Political polarization in societies is an emergent phenomenon that has been studied by different methods because it threatens the stability of democratic societies by undermining cooperation. We discussed a paper by Simon Schweighofer et al. titled “A Weighted Balance Model of Opinion Hyperpolarization” that presents the results of agent-based simulations of opinion hyperpolarization. The model couples the opinions of agents with interpersonal attitudes and relates them via a function that includes a bias the authors call evaluative extremeness. This is practically a positive feedback mechanism that assigns a stronger attitude to the agent pair than would follow from the degree of agreement or disagreement. The model reproduces the phenomenon on the basis of interactions between agents without any additional assumption about social structure.
What do you think?
Is opinion hyperpolarization inevitable in societies or can we prevent it? Should we?
Reading 1: A Weighted Balance Model of Opinion Hyperpolarization by Simon Schweighofer, Frank Schweitzer, David Garcia
Reading 2: About Opinion Hyperpolarization by Schweighofer et al., JASSS 2020 (Dezső Boda)
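The positive-feedback mechanism summarized above can be sketched in code. This is a loose illustration inspired by the description, not the authors’ model: the signed-root form of the attitude function, the update rule, and all parameter values are our own assumptions.

```python
import math
import random

def interact(op_i, op_j, extremeness, rate):
    """One interaction in a toy weighted-balance sketch.

    Agent i's attitude toward j is the mean overlap of their opinion
    vectors passed through a signed root: with `extremeness` > 1, even
    mild (dis)agreement yields a strong positive (negative) attitude,
    mimicking the evaluative-extremeness bias. A positive attitude
    pulls i's opinions toward j's; a negative one pushes them toward
    the opposite of j's. Opinions stay in [-1, 1].
    """
    d = len(op_i)
    overlap = sum(a * b for a, b in zip(op_i, op_j)) / d
    attitude = math.copysign(abs(overlap) ** (1.0 / extremeness), overlap) if overlap else 0.0
    target = op_j if attitude >= 0 else [-b for b in op_j]
    w = rate * abs(attitude)
    return [max(-1.0, min(1.0, (1 - w) * a + w * t)) for a, t in zip(op_i, target)]

rng = random.Random(3)
n, d = 50, 3  # 50 agents, 3 opinion dimensions
ops = [[rng.uniform(-1, 1) for _ in range(d)] for _ in range(n)]
for _ in range(2000):
    i, j = rng.randrange(n), rng.randrange(n)
    if i != j:
        ops[i] = interact(ops[i], ops[j], extremeness=4.0, rate=0.3)
polarization = sum(abs(x) for row in ops for x in row) / (n * d)  # 1 = fully extreme
```

Note that no network topology is assumed: agents meet uniformly at random, so any drift toward extremes comes from the biased attitude function alone, which is the point of the mechanism.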
January 27, 2021: During the discussion, we focused on a development of roughly the last two decades: nearly all disciplines in the social sciences and humanities have made impressive attempts at applying network theory in their respective fields. In historical scholarship, one of the most successful works based on network theory was written by the widely known and extremely prolific American scholar Niall Ferguson. He is not just a traditional university professor but a most active, highly visible, and frequently criticized public intellectual. The two key concepts of his book (The Square and the Tower, Penguin, 2017) are hierarchies and networks, and the research question Ferguson poses is how the smaller and larger groups of European and American people organized along these two types of human relationships fought for local, regional, and global power from the Middle Ages to the present. The book brings together theoretical insights from a number of disciplines, ranging from economics to sociology, from neuroscience to organizational behavior. Its central thesis is that social, economic, and cultural networks have always been much more important in history than most historians, fixated as they have been on hierarchical organizations such as states, have generally allowed, especially during two periods. The first ‘networked era’ was initiated by the spread of Luther’s Reformation and the introduction of the printing press to Europe at the turn of the fifteenth and sixteenth centuries, and lasted until the end of the eighteenth century. The second, our own time, dates from the 1970s, though the book argues that the technological revolution we associate with Silicon Valley was more a consequence than a cause of a crisis of hierarchical institutions. The intervening period, from the late 1790s until the late 1960s, saw the opposite trend: hierarchical institutions re-established their control and successfully shut down or co-opted networks.
The zenith of hierarchically organized power was in fact the mid-twentieth century, the era of totalitarian regimes and total war. At first sight, this seems a very creative and thought-provoking approach to the periodization of modern history. We, however, focused on the weaknesses: a quite arbitrary selection of sources, generalizations without references to the respective empirical research, frequently superficial understanding of the results of some disciplines, frequent failure to refer to alternative interpretations of the sources used, and a sensational style. The discussion then moved on to the problem of how original and academically valuable ideas in the social sciences and humanities can reach larger audiences and have an impact. Among others, works by S. Pinker, J. Diamond, and J. Burke were discussed.
What do you think?
How can you combine a high academic level and broad social impact in social sciences and humanities?
Reading 1: The Square and the Tower: Networks and Power, from the Freemasons to Facebook by Niall Ferguson
December 9, 2020: After scanning the conceptual framework developed by Anokhin and his followers to describe and understand functional systems as special types of complex systems, we highlighted its core elements: the importance of the anticipated result, the influence of genetic and individual experience in “programming” this anticipation, and the regulatory mechanisms.
What do you think?
Where is the heuristic surplus in Anokhin’s theory compared to other 20th-century system theories? Is it only the importance of “feedback”, or are there other essential aspects to deal with?
How can we apply the “granular” approach of Anokhin’s functional system theory in social science model-making and in historical reconstructions, to understand micro-and macro-level events and structures (as “solidified” functional systems) at the same time?
November 18, 2020: On this occasion, we continued our discussion about Complex Adaptive Systems (CAS): what their characteristics are, what their inner mechanisms are, and so on. The article was written by Murray Gell-Mann, a Nobel-laureate theoretical physicist and one of the co-founders of the Santa Fe Institute, the main temple of complex systems science. According to Gell-Mann, “CAS is a pattern-recognition device that seeks to find regularities in experience and compress them into schemata”. The central concept of this definition, the schema (plural: schemata), is the basis of the bottom-up definition of CAS, which explains what a CAS is on the basis of the behaviors of its components (called agents by John Holland), the regularities (patterns) of the whole CAS being an emergent property resulting from zillions of micro-events. The concept of the schema is the same as the “internal model” of John Holland and the “rule” used by one of us (Boda). This concept makes it possible to work on the basis of a hierarchical picture of CAS. Gell-Mann also discusses in his paper different levels of adaptation, different forms of selective pressure, different reasons for maladaptive features that appear during adaptive processes, and the importance of making the rules simple: a “simple rule” practically means a “compressed schema”. During the seminar we discussed a wide variety of questions, from the number of components necessary for a system to be called a CAS to the importance of complex, cooperating, “united” societies in history. The introductory slides prepared by D. Boda are attached.
What do you think? Is the bottom-up approach to Complex Adaptive Systems based on schemata/rules useful in understanding complex social and biological phenomena?
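The bottom-up picture above (simple agent rules producing an emergent macro-pattern) can be illustrated with a minimal sketch. This is our own toy example, not anything from Gell-Mann's paper: a Schelling-style model in which each agent's entire "compressed schema" is the rule "move if fewer than 30% of my neighbours share my type", and a macro-level pattern (clustering) emerges from the micro-events.

```python
import random

# Minimal Schelling-style sketch (hypothetical illustration): one simple
# rule per agent, an emergent pattern at the level of the whole grid.
random.seed(0)
SIZE, EMPTY, THRESHOLD = 20, 0.1, 0.3

# 0 = empty cell, 1 / 2 = the two agent types
grid = [[random.choice([1, 2]) if random.random() > EMPTY else 0
         for _ in range(SIZE)] for _ in range(SIZE)]

def like_fraction(grid, r, c):
    """Fraction of occupied neighbours sharing the agent's type (torus)."""
    me, same, occupied = grid[r][c], 0, 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            nr, nc = (r + dr) % SIZE, (c + dc) % SIZE
            if grid[nr][nc]:
                occupied += 1
                same += grid[nr][nc] == me
    return same / occupied if occupied else 1.0

def mean_like(grid):
    """Macro-level observable: average like-neighbour fraction."""
    vals = [like_fraction(grid, r, c)
            for r in range(SIZE) for c in range(SIZE) if grid[r][c]]
    return sum(vals) / len(vals)

def step(grid):
    """Each unhappy agent applies its schema: move to a random empty cell."""
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE)
               if grid[r][c] == 0]
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] and like_fraction(grid, r, c) < THRESHOLD:
                nr, nc = empties.pop(random.randrange(len(empties)))
                grid[nr][nc], grid[r][c] = grid[r][c], 0
                empties.append((r, c))

before = mean_like(grid)
for _ in range(30):
    step(grid)
after = mean_like(grid)
print(f"mean like-neighbour fraction: {before:.2f} -> {after:.2f}")
```

Note that no agent "wants" segregation at the 30% threshold; the strong clustering of the final grid is a property visible only at the macro-level, which is exactly the sense in which the pattern is emergent.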
November 4, 2020. We continued our series with the paper by James Ladyman, James Lambert and Karoline Wiesner: What is a complex system? The authors review the various attempts to characterize a complex system and argue that some of the proposed features are neither necessary nor sufficient for complexity, while others are too vague and confused. They also argue that the concept best capturing complexity in an information-theoretic way is statistical complexity. We discussed the meaning of the concept "system" and the use of higher-level, structural explanations of concrete phenomena.
What do you think? Can an emergent natural or social phenomenon (like the flight of a flock of birds, or traffic jams) be understood at the higher, emergent level, or does an explanation require going down to the underlying lower level of components and interactions (birds, cars)?
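The measure the paper singles out can be made concrete with a toy estimator (our own sketch, not the authors' construction): group the length-k histories of a binary sequence by their conditional distribution over the next symbol (approximate "causal states"), and take the Shannon entropy of the resulting state distribution. A periodic process needs two causal states (1 bit); an i.i.d. fair coin needs only one (0 bits), even though the coin has the higher entropy rate.

```python
import math
import random
from collections import defaultdict

def statistical_complexity(seq, k=2, tol=0.1):
    """Toy estimate of statistical complexity for a binary sequence."""
    # Conditional next-symbol counts for each length-k history
    counts = defaultdict(lambda: [0, 0])
    for i in range(len(seq) - k):
        counts[tuple(seq[i:i + k])][seq[i + k]] += 1

    # Merge histories whose P(next = 1 | history) agree within tol:
    # each group approximates one causal state
    states = []  # each entry: [list of histories, representative P(1)]
    for history, (n0, n1) in counts.items():
        p1 = n1 / (n0 + n1)
        for state in states:
            if abs(state[1] - p1) < tol:
                state[0].append(history)
                break
        else:
            states.append([[history], p1])

    # Statistical complexity = entropy of the causal-state distribution
    total = sum(n0 + n1 for n0, n1 in counts.values())
    entropy = 0.0
    for histories, _ in states:
        p = sum(sum(counts[h]) for h in histories) / total
        entropy -= p * math.log2(p)
    return entropy

random.seed(1)
periodic = [0, 1] * 5000                              # ...010101...
coin = [random.randint(0, 1) for _ in range(10000)]   # i.i.d. fair coin
print(statistical_complexity(periodic))  # two causal states -> 1.0 bit
print(statistical_complexity(coin))      # one causal state -> ~0 bits
```

This is why the measure is attractive: it assigns low complexity both to perfect order and to pure randomness, peaking only for structured processes in between.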
October 21, 2020. Following the discussion of fragility, we turned to the topic of scientific explanation and read Wesley Salmon's Scientific Explanation: Causation and Unification. In this paper Salmon tries to reconcile the two major approaches to scientific explanation: the local, bottom-up, causal-mechanistic approach and the global, top-down, unificatory approach. In the seminar we discussed the relation between the two approaches with respect to our own research fields, as well as the problems of explanation in terms of its social, psychological and technological preconditions and motivations.
What do you think? When are you satisfied with an explanation in your own research field? Can you characterize the criteria for a successful explanation?
September 30, 2020. After discussing adaptability and resilience, we turned to James P. Crutchfield's paper The Hidden Fragility of Complex Systems: Consequences of Change, Changing Consequences. Fragility is, in a sense, the opposite of resilience. We know that our socio-technical systems are inherently fragile but, at the same time, inherently resilient: they can correct errors and respond to challenges either on the macro-level (resilience) or on the micro-level (adaptability). The author claims that as our already complex socio-technical systems become even more complex (spontaneous reorganization), they undergo structural changes, new hierarchies emerge, and new kinds of unpredictable fragilities arise. "Fragility is hidden from us because it is emergent," as Crutchfield stresses. Socio-technical systems are especially exposed to this potentially dangerous effect because they involve intelligence. The good old question of whether complex systems can be controlled arises, without any comforting answer. In connection with discomforting answers, the group discussed the definitional problems of collapsing societies.
What do you think? Is the idea of emergent fragility at all useful in dealing with seemingly complex problems such as social disorder, climate change, or the inability of society to deal with a pandemic?
September 16, 2020. After the summer break, we restarted our Sociophysics Reading Group with Walker et al., Resilience, Adaptability and Transformability in Social–Ecological Systems. This paper portrays three attributes of social–ecological systems (resilience, adaptability and transformability) geometrically, using the metaphor of a stability landscape. We discussed the many forms the various strategies for sustainability must take.
What do you think? When is a system adaptive and resilient? Can you come up with an example from your own field?
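The stability-landscape metaphor can be sketched numerically. This is our own illustration with an assumed potential, not anything from Walker et al.: the system state is a ball in the double-well potential V(x) = (x² − 1)², whose two basins represent alternative regimes. Resilience then corresponds to the width of the current basin: small shocks are absorbed and the system returns to its equilibrium, while a large enough shock tips it over the ridge into the other regime.

```python
def settle(x, lr=0.01, steps=5000):
    """Let the state relax downhill in V(x) = (x^2 - 1)^2.

    Plain gradient descent; V'(x) = 4x(x^2 - 1), with stable
    equilibria (basin bottoms) at x = -1 and x = +1.
    """
    for _ in range(steps):
        x -= lr * 4 * x * (x * x - 1)
    return x

equilibrium = -1.0  # current regime: bottom of the left basin

# A small shock stays inside the basin and is absorbed (resilience);
# a large shock crosses the ridge at x = 0 and the system settles
# into the alternative regime (a regime shift).
print(round(settle(equilibrium + 0.5), 3))   # -> -1.0 (absorbed)
print(round(settle(equilibrium + 1.5), 3))   # -> 1.0 (regime shift)
```

In Walker et al.'s vocabulary, widening the basin (increasing the "latitude") makes the system more resilient, while adaptability is the actors' capacity to reshape the landscape itself.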