key: cord-0174184-f9i80dok authors: Dessertaine, Th'eo; Moran, Jos'e; Benzaquen, Michael; Bouchaud, Jean-Philippe title: Out-of-Equilibrium Dynamics and Excess Volatility in Firm Networks date: 2020-12-09 journal: nan DOI: nan sha: 1f6392fd55ae11c945056c9d99052da1f7dc7fb9 doc_id: 174184 cord_uid: f9i80dok We study the conditions under which input-output networks can dynamically attain a competitive equilibrium, where markets clear and profits are zero. We endow a classical firm network model with minimal dynamical rules that reduce supply/demand imbalances and excess profits. We show that the time needed to reach equilibrium diverges to infinity as the system approaches an instability point beyond which the Hawkins-Simons condition is violated and competitive equilibrium is no longer admissible. We argue that such slow dynamics is a source of excess volatility, through accumulation and amplification of exogenous shocks. Factoring in essential physical constraints absent in our minimal model, such as causality or inventory management, we then propose a dynamically consistent model that displays a rich variety of phenomena. Competitive equilibrium can only be reached after some time and within some restricted region of parameter space, outside of which one observes spontaneous periodic and chaotic dynamics, reminiscent of real business cycles. This suggests an alternative explanation of excess volatility in terms of purely endogenous fluctuations. Diminishing return to scale and increased perishability of goods are found to ease convergence towards equilibrium. (Dated: November 4, 2021) We study the conditions under which input-output networks can dynamically attain a competitive equilibrium, where markets clear and profits are zero. We endow a classical firm network model with minimal dynamical rules that reduce supply/demand imbalances and excess profits. We show that the time needed to reach equilibrium diverges to infinity as the system approaches an instability point beyond which the Hawkins-Simons condition is violated and competitive equilibrium is no longer admissible. We argue that such slow dynamics is a source of excess volatility, through accumulation and amplification of exogenous shocks. Factoring in essential physical constraints absent in our minimal model, such as causality or inventory management, we then propose a dynamically consistent model that displays a rich variety of phenomena. Competitive equilibrium can only be reached after some time and within some restricted region of parameter space, outside of which one observes spontaneous periodic and chaotic dynamics, reminiscent of real business cycles. This suggests an alternative explanation of excess volatility in terms of purely endogenous fluctuations. Diminishing return to scale and increased perishability of goods are found to ease convergence towards equilibrium. What is the origin of macroeconomic fluctuations? Textbook macroeconomic models picture the world as a succession of equilibria where markets clear perfectly and firms maximise their profits. Each equilibrium is characterised by a different level of productivity or household preferences, themselves driven by exogenous "shocks", which are the primary cause of fluctuations. Drawing an analogy from physics, one may call such an approach "adiabatic", in the sense that the time needed for the system to reach equilibrium is much shorter than the time over which the environment changes, so out-of-equilibrium effects can be neglected. 
The time evolution of the economy is then slaved to the time evolution of the exogenous parameters. This assumption is at the core of DSGE models (see Galí (2015) ), but also central to the analysis of Acemoglu et al. (2012) in their now classic paper on the network origins of aggregate fluctuations. The central proposition of the present work is that the standard economic equilibrium may actually be dynamically unattainable. Correspondingly, the "small shock, large business cycle" paradox (i.e. aggregate fluctuations much too large to be explained by exogenous shocks alone, see e.g. Cochrane (1994) and Bernanke et al. (1996) ) would be chiefly explained by out-of-equilibrium effects. Indeed, in such out-of-equilibrium situations, the dynamics is mostly of endogenous origin and cannot be accounted for by traditional equilibrium arguments, like those of e.g. Long and Plosser (1983) and Acemoglu et al. (2012) , Baqaee and Farhi (2019) , Carvalho and Tahbaz-Salehi (2019) . From a conceptual point of view, our point is the following: economic equilibrium requires so much cooperation between rational, forward looking agents, that the only way such equilibrium can plausibly be achieved is through some kind of adjustment process, that inevitably takes some time to complete. 1 We will argue that even in cases where equilibrium is eventually reached, this time can be much longer than the evolution time of technology or of any other type of shocks (political, social, geopolitical, sanitary, etc.) that do affect the economy, in which case the adiabatic hypothesis is doomed to fail. Such a situation requires a richer modelling framework where out-of-equilibrium dynamics is an integral part of the description: We do not only need to describe the final equilibrium state, but also the path to equilibrium, which may in fact never converge. 2 Past literature in macroeconomics (see e.g. Bénassy (2005) , Chiarella et al. (2005) , Fisher (1983) , Gintis (2007) , Grandmont (1985 Grandmont ( , 2006 and Beaudry et al. (2020) ) has been mainly concerned with "disequilibrium" effects, which in that context means studying the impact of price or wage frictions and rigidities that prevent the economy from reaching full equilibrium. In a sense, these models postulate economies that are not able to reach an idealised state of equilibrium because of certain imperfections, but they still mostly deal with the static properties of such economies with no particular focus on their dynamics or on how said state is reached. Effectively, the dynamics that are studied through these models are those in which the agents in the economy are all capable of optimising their behaviour, exchanging goods and coordinating between themselves instantly. The main source of fluctuations is therefore given by external shocks to the economy. This is, for example, the case of Long and Plosser (1983) and Kydland and Prescott (1982) , despite their common acknowledgment of the need to take into account the "time to build" in the economy, which we choose to interpret too as the time required for all the agents to coordinate and exchange enough information to reach equilibrium. Another strand of the literature considers "reduced-form" differential equations that describe the coupled evolution of a set of aggregate variables (for example employment, wage and output in the original model by Goodwin (1982) and revived in Flaschel (2008)). 
These low-dimensional dynamical equations can generate various types of dynamics, such as business cycles in the Goodwin model which is, mutatis mutandis, equivalent to the classic Lotka-Volterra (or predator-prey) model of Lotka (1920) , Volterra (1926) . Note also that Liu and Tsyvinski (2020) establish a system of coupled dynamical equations whose dynamics are determined by certain matrices that describe the production network. This is, in a way, a similar approach to ours, but our model considers a fully non-linear model with a linearised evolution governed by a production-network determined matrix that is only valid close to equilibrium. Yet another direction is explored by Agent Based Models (ABMs), where individual agents/firms make decisions based on plausible heuristic rules. ABMs are explicitly dynamical models, in the sense described by Ballot et al. (2015) : decision rules lead to actions (buy/sell, produce, update prices and wages, etc.) that move the economy one step forward in time (see the work of Dawid et al. (2013 ), Delli Gatti et al. (2005 , 2008 , Fagiolo and Roventini (2012) , Gualdi et al. (2015b) , Raberto et al. (2012) and Poledna et al. (2019) for recent examples). Note that some of these heuristics actually correspond to the agents acting rationally but with limited information or by only being able to forecast the other agents' behaviour, an approach taken by Bonart et al. (2014) which we also follow in this work. The approach of ABMs, although very prolific, has been heavily criticised by those who argue in favour of "micro-founded" models where agents are forward-looking and optimise inter-temporal utility functions. In the present paper, we revisit these ideas within the framework of network economies, where firms interact through a supply/demand (or input/output) network. As mentioned above, such models have recently become popular as a way to generate excess aggregate volatility, as shocks may possibly propagate through the input-output network. However, the seminal papers of Long and Plosser (1983) , and of Acemoglu et al. (2012) are studied within the "adiabatic" framework in which the system instantaneously adapts to productivity shocks (for a recent enlightening review of these models, see Carvalho and Tahbaz-Salehi (2019) ). Furthermore, most of these papers assume a Cobb-Douglas production function, which ensures that an equilibrium always exists, whatever the input-output network and independently of the productivities of the firms. More recently, Baqaee (2018) and Baqaee and Farhi (2019) extended such work using Constant Elasticity of Substitution (CES) production functions, showing in particular that they induce non-linear effects that can cause the amplification of small shocks, all while remaining at economic equilibrium. Another similar line was recently exploited by , Pichler et al. (2020 , where both CES-like production functions and non-equilibrium dynamics were exploited to forecast the economic shock due to the Covid-19 pandemic lockdowns. See in particular the Introduction of Pichler et al. (2020) for a comprehensive discussion about substitution effects in production functions and their macroeconomic consequences. 
But as argued long ago by Hawkins (1948) and Hawkins and Simon (1949) for Leontief economies, and more recently by two of us, Moran and Bouchaud (2019) , for more general CES production functions, equilibrium may cease to exist when the average connectivity of the network is too large, firm productivities are too low, or markups are too large. In these cases, the description of a time evolving economy as a succession of static equilibria just does not make sense. In fact, Moran and Bouchaud (2019) argue, in the spirit of a conjecture by Bak et al. (1993) , that real economies could generically be sitting close to a point where general equilibrium disappears. In the present paper, we endow such a network model with a class of plausible dynamical rules, which aims at describing the fate of the economy outside of the adiabatic regime, and identify cases where equilibrium does exist mathematically but can never be reached dynamically. A step in this direction was proposed by Mandel et al. (2015) and, independently, by Bonart et al. (2014) , where a dynamical Cobb-Douglas economy was considered, with plausible update rules for production and prices. Interestingly, the model considered by Bonart et al. (2014) leads to a phase transition between a region where equilibrium is reached (when firms slowly adapt to shocks) and a region where coordination breaks down (when firms adapt too aggressively) and where equilibrium is no longer dynamically accessible. 3 In the latter phase, endogenous volatility becomes dominant. But this model only goes half-way towards a full-fledged dynamical description, since market-clearing was imposed by fiat in Bonart et al. (2014) , with no excess production or excess demand -leading to conceptual inconsistencies and, in fact, spurious instabilities. In the present work, we propose a consistent framework to describe dynamical out-of-equilibrium effects in network economies. Our approach is a hybrid between standard economics thinking (where firms attempt to optimise profits in a competitive environment, and households optimise their utility function to balance consumption and labour) and Agent Based Models, where simplified behavioural assumptions allow one to specify the decision-making process of firms. We propose a minimal parametrisation of the heuristic rules used by firms to update production, prices and wages, which already leads to a surprisingly rich phenomenology of the resulting economy, which in some cases smoothly reaches equilibrium, and in other cases display much more complicated dynamical patterns, including cycles, chaos and crises, but also what we call "deflationary" equilibria. We argue that such generic scenarios could naturally explain the "small shocks, large business cycle" conundrum in terms of out-of-equilibrium effects. In a nutshell, when firms tend to over-react and adjust prices/productions too quickly in the face of imbalances, the economy enters an oscillatory or chaotic regime, where volatility is purely of endogenous origin. In a sense, our model can be seen as a multidimensional, discrete time version of the reduced form differential equationsà la Goodwin (1982) and followers, that also lead to oscillatory dynamics. The main difference is that we describe the dynamics of the economy at a highly disaggregated level (that of firms), which is an important aspect in view of the amount of micro-data now available to calibrate such models. 
Given the diversity of phenomena that can take place within our framework, we are quite confident that the model is flexible enough to account for many empirical facts. However, in the current era of "big data", some extensions of the model may be worth investigating -with each extension bringing one or several new parameters that need to be calibrated. In particular, some of our behavioural assumptions may appear too primitive and could be enriched later, as we discuss in section IV C. The most important generalisation, in our opinion, will be to explicitly include competition, debt, interest rates and bankruptcies in the model. In particular, the way the network "rewires" after the birth of a new competitive firm or after the removal of a bankrupt firm, with the possibility of cascading defaults, is clearly one of the most interesting aspects of firm network models when it comes to understanding business cycles and economic crises. These cascading bankruptcies are, in fact, the very motivation for studying network models, but they will not be directly addressed in the present article. The article is organised as follows. In section II, we set up the stage for a firm network model and propose a definition of competitive equilibrium suited for our purposes. Section III presents a simple heuristic for an out-ofequilibrium dynamical model of interacting firms. We show that reaching equilibrium might take an infinite amount of time (therefore jeopardising the adiabatic hypothesis) and that the dynamics displays excess volatility when the economy sits close to an instability. In section IV, we present a fully consistent extension of the model of section III which incorporates natural constraints which were overlooked such as causality or shortages. We propose a numerical study of this extension in section V where we highlight and discuss the existence of other interesting dynamical regimes besides competitive equilibrium. We also provide several technical appendices for completeness. In Appendix A, we detail the derivation of competitive equilibrium equations in the most general setting of production function. Appendix B shows the computation of the relaxation time of the naive model which relies on Appendix C that compiles necessary intermediate results on the stability matrix. In Appendix D we show that a marginally stable linear stochastic system creates excess volatility and we apply this result to a generic case of the naive model of III. In Appendix E we provide time-series of the dynamics of our model on realistic networks. Finally in Appendix F, we provide a pseudo-code for simulation of the fully consistent approach of section IV. The code itself is made available at: https://yakari.polytechnique.fr/dash. Following the descriptions of Acemoglu et al. (2012), Bonart et al. (2014), Long and Plosser (1983) and Carvalho and Tahbaz-Salehi (2019) , we model the economy as consisting of N firms that interact with one another and with a single representative household which provides labour and consumes goods. The economy is described by a "technology network", namely a directed graph where each node i = 1, . . . , N represents a firm and where the link j → i exists if i uses the good produced by j for its own production. The node labelled i = 0 conventionally represents households. Each edge in the graph j → i carries a "weight" that is a measure of the number of j goods needed to make an unit of i. 
The production function gives the quantity of goods y i produced by i as a function of input goods and labour (no capital at this stage) and the intrinsic, possibly time dependent, productivity of the firm z i (i.e. its efficiency in converting a given amount of inputs into outputs). We generalise the standard CES production function Arrow et al. (1961) as: 4 where x ij is the amount of good j (or labour x i0 := i if j = 0) available to i, J ij ≥ 0 and a ij ≥ 0 link variables that measure the importance of good j in the production of i 5 , and where we define γ i as the level of production of firm i. Note that although for all values of q ∈]0, ∞[, the J ij s can be absorbed into the a ij , our specification allows for consistent limits when q = 0 (Leontief) and q = ∞ (Cobb-Douglas), see below. The parameter b sets the return to scale: if all inputs and work hours are multiplied by a factor λ, then total output is multiplied by λ b . The parameter q measures the substitutability of inputs. For example, when q → 0 + we get the Leontief production function, corresponding to the case where production falls to zero if a single input is missing: If q → +∞, we get the Cobb-Douglas production function for which some amount of substitutability is present. Indeed, halving the quantity x ik of input k can be compensated by multiplying the input of by 2 a ik /a i , where the a ij describes the amount of substitutability between the goods in the production of i. Although our dynamical model applies to any production function, and is not restricted to the CES family specified above, we will for the sake of simplicity illustrate our general arguments using the special case of a Leontief production function with constant return to scale (b = 1), as equilibrium conditions can be solved explicitly. However, the general phenomenology of the model does not depend on this specific choice and applies to a broad family of production functions. Given the prices p i of the goods and wage p 0 , the profit π i of firm i can be written as where G i denotes the total proceeds of the future sales ("gains"), x i0 := i the working hours provided by the household and x 0i := C i is the consumption of good i by the households. Now, the textbook protocol at this stage is to impose that firms maximise their profit assuming that markets will clear, so that all that is produced will be sold, hence Using the production function (II.1), profit maximisation by firm i then leads to the optimal quantities of input goods x ij and optimal production z i γ i . Note that when b = 1, the corresponding solution leads to profits that are zero in equilibrium, but they are strictly positive when b < 1, corresponding to imperfect competition in that case. Following the logic of our paper, we take another stance and depart from the standard definition of equilibrium in two ways: 1. Since we do not assume that markets clear at each time step, firms can only compute the optimal input quantities x ij required to reach a certain production target y i := z i γ i . Since firms do not know in advance how much of their production they will be able to sell (and consequently how much they will earn), the only lever on which they can act is the cost term that they attempt to minimise. The optimal value of γ i will then be obtained through a dynamical adjustment process, see below. 2. We assume that our firm network is competitive in the sense that enough firms sell similar goods to drive profits down to zero at equilibrium. 
(An explicit description of such a competitive process would entail introducing a time dependent network where firms rewire towards cheaper suppliers. This is however beyond the scope of the present work, which assumes that this process has already taken place). More explicitly, the cost-minimizing input quantities are such that Within the CES framework, this leads to: with ζ = (1 + q) −1 . In the Leontief case with b = 1, this boils down to which amounts to buying no more than the minimum amount needed to reach the target. We then obtain prices and productions by assuming perfect competition, i.e. π i = 0 for all firms (step 2), and perfect market clearing. This last condition can be written as where C i is the households' demand for good i. Except when return-to-scales are constant (b = 1), the above two-step procedure is not equivalent to the standard profit maximization where market clearing is assumed from the start, which allows firms to know their gains in advance and include them in the optimisation program. For Leontief production functions with b = 1, the resulting equations are linear and identical to those of the classical equilibrium for which profits are naturally zero when markets clear. They read: where M is a matrix defined as M ij = z i δ ij − J ij (where δ ij is the Kronecker symbol δ ij = 1 for i = j, 0 otherwise), V i := p 0 J i0 is the workforce need of firm i and κ a positive vector describing final demand. 6,7 For more general production functions, the equations can be written down as well -see Appendix A -but we will not consider them further in the present paper. The important features are: • For Eqs. (II.8a, II.8b) to have non-negative solutions for prices and productions, M must be a so-called M -matrix, as shown by Fiedler et al. (1962) , Hawkins and Simon (1949) and, in the context of production networks, Moran and Bouchaud (2019) . Owing to its particular shape, with non-negative terms on the diagonal and negative terms on the off-diagonal, this is equivalent to the spectrum of M having a non-negative real part. For a given set of input-output coefficients J ij , this imposes that firms productivities must be large enough, otherwise no admissible equilibrium exists where all N firms are alive. 8 • For all finite values of q < +∞ in the CES production function, some analogous conditions must be fulfilled for an admissible equilibrium to exist, see Moran and Bouchaud (2019) . • When q = +∞ (i.e. in the Cobb-Douglas case), positive solutions to the equilibrium equations always exist, independently of productivities or network coefficients (see Appendix A), as is the case in the article by Acemoglu et al. (2012) . The possible non-existence of static solutions for generic production functions and network topologies urges us to go beyond equilibrium and formulate dynamical equations that would still make sense in such cases. But even in situations where an admissible equilibrium exists, it is by no means automatic that the economy is able to reach it on its own device. And even if it does, the description of non-adiabatic situations, i.e. those for which technologies and productivities evolve on a time shorter than the time needed to reach equilibrium, also require consistent dynamical equations. Interestingly, when the economy is close to an instability, e.g. when the smallest eigenvalue of M tends to zero in the Leontief case, the time needed to reach equilibrium will turn out to be infinitely large. 
This not only makes the adiabatic assumption moot but, as we shall see, compels the modeller to handle dynamical effects with special care. In this section, we introduce the simplest version of a dynamical model aimed as describing out-of-equilibrium effects (transient or permanent) in a network economy. The equations we will postulate are based on reasonable "rules of thumb" that firm decision makers are likely to use in real life conditions, see Gigerenzer et al. (2002) , Kahneman and Tversky (1973) , Tversky and Kahneman (1974) . In looking for such reduced form dynamical equations, we draw inspiration from what physicists call "phenomenological approaches", based on symmetry, plausibility and dimensional arguments. Such arguments avoid getting lost in the "wilderness" of possible models -to paraphrase Sims -once the straight-jacket of rationality is jettisoned. Whereas in the economic equilibrium as defined in the previous section profits are zero and markets clear, outof-equilibrium situations tautologically imply non zero profits and/or excess supply or demand. So we naturally introduce, for each firm, two indicators that measure the distance from equilibrium: E i (t) is the excess production at time t (interpreted as unsatisfied demand if E i (t) < 0), and π i (t) the instantaneous profit or losses of the firm at time t. Prices and productions must then adapt through some kind of adjustment process to reduce these imbalances: • Faced with excess production, firms will lower prices to prop up demand, and/or reduce production to limit losses. • Faced with excess demand, on the other hand, firms can consider increasing prices and/or increase production. • Similarly, when profits are positive, firms may be tempted to increase production but at the same time competition, attracted by the prospect of a profit, should put pressure on prices. • If profits are negative, firms will try to adapt by lowering production and increase prices, with the hope of better compensating production costs. All these rules are common sense and it is hard to argue that they do not play a crucial role in the real economy with boundedly rational agents. What is more debatable, however, is how to model them quantitatively. In this work, we further assume that restoring forces are all linear in E i (t), π i (t), at least when these imbalances are small enough. If only for dimensional reasons, all quantities determining price and production relative changes must appear as relative, non-dimensional quantities, i.e. ratios of E i (t) to total production y i (t) and π i (t) to total sales y i (t)p i (t). Hence we posit the following adjustment rules for prices and production: where δt is an elementary time step, and the parameters α, α , β, β characterise the speed of adjustment in the face of imbalances. From our general arguments above, we expect that all these parameters are non-negative, i.e. that firm policies and market forces tend to dampen imbalances. Whether these will be sufficient to stabilise the whole economy around the classical equilibrium described in the previous section is the whole point of the present research. These parameters could depend on the firm i, with some firms choosing to be more aggressive than others in their adjustment policy. Throughout the present work we will stick to time-independent and firm-independent values for α, α , β, β . 9 The simple rules of Eqs. (III.1a, III.1b) are very similar in spirit to those used in several well studied Agent Based Models -see Delli Gatti et al. 
(2008) , Gualdi et al. (2015b) . Note that α > 0 reflects our hypothesis that competition is at play in the economy, pushing prices down when profits are positive. Although null profits and market clearing obviously imply from Eqs. (III.1a, III.1b) that prices and productions are time invariant, the converse is more subtle. Assume indeed that there exists quantities p i , y i , E i and π i towards which prices, productions, and imbalances converge under the dynamics (III.1a, III.1b). These values should satisfy If the matrix of parameters is non-singular, then the only solution is trivial and E i = π i = 0 which coincides with the equilibrium defined in the previous section. The only way through which this matrix can be singular is when αβ + α β = 0. Since α, α , β, β are chosen to be positive, this happens only when at least one of the pairs (α, α ), (β, β ), (α, β ) or (α , β) is equal to (0, 0). In the first two cases, prices or productions are frozen in time, making the dynamical rules moot. In the last two cases, prices and productions are driven either only by profits or only be production surplus. The dynamics will converge towards a partial competitive equilibrium with only one of the two conditions of null profits or market clearing fulfilled. This is again not a satisfying choice of parameters, because it implies no reaction from the firms to either supply/demand imbalances or to profits/losses. Therefore for generic cases our dynamical rules have fixed points that correspond precisely to competitive equilibria. 10 Eqs. (III.1a, III.1b) may now be closed by expressing imbalances in terms of prices p i and productions y i , as: where C i (t) is the consumption of households, i (t) the quantity of labour, and where we have again restricted our analysis to constant returns to scale Leontief production functions. With this, one must too model the consumption of households. For simplicity, we assume that households work full time, and denote by L 0 the total amount of available labour (this assumption will be relaxed below, as we will allow for unemployment, see section IV D). Consumption is obtained by saturating the current budget p 0 (t)L 0 to maximise a log-consumption utility, i.e. max Ci(t) i where θ i is the preference for good i. The optimal consumption is then C i (t) = L 0 θ i /µ(t)p i (t) with µ(t) = i θ i /p 0 (t). Putting all these ingredients together and taking the continuous time limit δt → 0 in (III.1) yields the following system of coupled non-linear ordinary differential equations (with V i = p 0 J i0 ): (III.5b) Interestingly, these equations bear a strong resemblance to generalised Lotka-Volterra models used in theoretical ecology by Biroli et al. (2018) , where an ecosystem self-organises into a configuration that is highly susceptible to amplify external perturbations. Newer extensions to such models, along the lines of Roy et al. (2020) , show that they can also explain anomalous, persistent fluctuations in the populations of the different species that make up an ecosystem. The different analogies linking the study of firm networks and ecosystem have also been fruitful in linking the notion of trophic levels, namely the position of a species along the food web, to the "upstreamness" of a firm along the supply chain, as done by Antràs et al. (2012) , and in the work of MacKay et al. (2020) where these concepts are used to study the properties of production networks. 
Interesting parallels between these two domains could also arise when studying the impact of technological innovation or biological evolution within these models. The economic intuition behind the analogy with Lotka-Volterra equations is the following: when dealing with a complex assembly of interacting entities, be it an ecosystem with species having attained a certain evolutionary level or an economy with firms capable of using certain technologies, one is considering a complex system with a large amount of feedback loops. The different entities depend on one another in a way that creates feedback loops that can lead to very volatile oscillatory behaviour or even chaos, and in general to crises where certain firms or certain species must become "extinct" to re-stabilise the system, a point that was made by Moran and Bouchaud (2019) and Bak et al. (1993) arguing in favour of "self-organised criticality". Although the analogy with ecology is chosen here because it is easy to understand, we stress that we believe this is a generic characteristic of a large class of systems with a large number of inter-dependencies, and so that this is particularly the case for firm networks. Equations (III.5) are our "naive" candidate equations for the out-of-equilibrium dynamics of the firm network model, which limitations will be discussed below. One immediately checks that the equilibrium solutions p eq,i and γ eq,i (given by Eqs. (II.8a, II.8b)) are fixed points of these equations, as it should be. One can also study the linear stability of this equilibrium. Writing p i (t) = p eq,i +δp i (t) and γ i (t) = γ eq,i +δγ i (t) and keeping only terms of order 1 in δ(.), one finds a linear evolution equation for a 2N dimensional vector U = (δp, δγ), of the form: The equilibrium stability is determined by the sign of the eigenvalues of the corresponding 2N × 2N dynamical matrix D. Such an analysis is detailed in Appendix B. When all eigenvalues are negative, equilibrium is locally stable. Any small perturbation away from equilibrium decays towards zero, at a rate asymptotically given by the eigenvalue closest to zero. The corresponding relaxation time τ relax can be computed explicitly when the Hawkins-Simon conditions are on the verge of being violated, i.e. when the smallest eigenvalue of the network matrix M is at a distance ε → 0 away from 0. We find (see Appendix B): (III.7) This expression allows us to draw two important conclusions: (III.5) for N = 100 firms. The initial relative distance in the simulation is taken to be δ = 10 −3 . The high productivity regime corresponds to a high value of ε = 1000 and leads to a very short relaxation time τ relax . On the other hand, in the low productivity regime where ε → 0, the system takes longer and longer to reach equilibrium again, and the relaxation time τ relax diverges. • When ε → 0, the relaxation time of the system diverges, i.e. it takes an infinitely long time to reach equilibrium. As we mentioned in the introduction, this makes the adiabatic approximation unsuitable as changes in the technologies and in the network structure will happen before equilibrium can be reached. This long time scale also leads to an amplification of exogenous volatility in the system, see below. • As long as α, α or β are strictly positive, the relaxation time is finite. (The equilibrium is still stable if some coefficients are negative provided others are positive and sufficiently large.) 
A numerical illustration of the type of weakly out-of-equilibrium dynamics predicted by the model is shown in Fig. 1 . One sees a complex interplay of spontaneous oscillations (coming from the imaginary part of the eigenvalues of the dynamical matrix D) with a slowly decaying envelope, ∝ exp(−t/τ relax ). Note however that one must be particularly careful about spurious numerical effects when simulating (III.5). Indeed, such differential equations fall into the category of so called stiff ordinary differential equations (ODEs). They are characterised by an evolution governed by two (or more) very different timescales. For a dynamical system of the form III.6, we denote by σ ν the eigenvalues of the matrix D (as in Appendix (B)). We callσ and σ the two eigenvalues such that |Reσ| ≥ |Reσ ν | ≥ |Reσ| , ∀ν, i.e. respectively the fast and slow timescales of the system. The stiffness ratio is defined as r = |Reσ| |Reσ| , and the system is said to be stiff if this ratio is large. In our case, as ε → 0, σ will remain finite whereas σ is of order ε making the stiffness ratio r behaves as ε −1 (see Appendix B). Stiff ODEs require special care for their simulation. More precisely, one cannot use simple explicit integration routines with fixed step-size but rather implicit schemes such as Radau integration (see Hairer et al. (1993) ). Now, suppose that the parameters describing the economic equilibrium (such as productivities or household preferences, etc.) are changing over time, the dynamical equation governing economic fluctuations, Eq. (III.6), becomes: where ξ(t) represents the (weak) exogenous shocks to the economy. It is then not hard to show (see Appendix D) that in the limit ε → 0, the volatility of prices and output is proportional to ε −1/2 , and can thus be much larger than the variance of the exogenous shocks when the system approaches the limit of stability. The intuitive reason is that past shocks linger a very long time (comparable to τ relax ) in the system and aggregate with more recent shocks, leading to a much larger overall perturbation. Hence, the proximity to the point of instability is a natural candidate to explain the "small shocks, large business cycle" paradox (see Bonart et al. (2014) for a related discussion). An illustration of this phenomenon for our model is given in Fig. 2 . However, in this scenario, fluctuations are predicted to persist over long times ∼ ε −1 . We will discuss in section V C 4 below another scenario for "large business cycles" based on non-linear, endogenous fluctuations rather than on long-lived exogenous fluctuations. The above results suggest that, although "naive", our equations already provide an interesting generic scenario for anomalous fluctuations of output, namely the proximity of an instability. Note that the dynamics we have described is directly linked to a large body of work concerned with the stability of large complex systems (see the historical precursors Gardner and Ashby (1970) , May (1972) , and Fyodorov and Khoruzhenko (2016) and Bunin (2017) for recent general approaches), using random matrix techniques to represent generic interactions. These papers highlight the importance of studying of the eigenvectors and eigenvalues of large random matrices for understanding of complex systems, with other noteworthy contributions by Neri and Metz (2012) , Tarnowski et al. (2020) and Mambuca et al. (2020) . 
However, the naive approach above sweeps under the rug important constraints that, while irrelevant at equilibrium, turn out to be essential out-of-equilibrium: • Causality: firms must plan production before they know how much they will manage to sell. • Supply/demand imbalances (which are zero if markets clear): when supply exceeds demand, inventories accumulate, whereas when demand exceed supply (including inventories) involuntary savings increase. These extra variables should play a role in the out-of-equilibrium evolution of the economy, but are totally absent from Eqs.(III.5). Furthermore, if some input goods is missing, Eqs. (III.3b) incorrectly account for imbalances. In the next section, we will propose a minimal, fully consistent model that allows one to account for both causality and imbalances. Interestingly, we will see that hard constraints -such as the impossibility to consume more than what is available -lead to intrinsically non-linear dynamics, even for small perturbations close to equilibrium. As a consequence, limit cycles or chaotic behaviour will spontaneously emerge, when Eqs. (III.5) can only lead to damped oscillations converging to equilibrium. Such generalised equations in fact allow one to obtain legitimate dynamics even in the region where the equilibrium is no longer defined, i.e. when ε < 0, whereas Eqs. (III.5) cease to make sense in this case (prices and productions are are always dragged below zero). As we just mentioned, the naive approach of the previous section, although interesting, is at best approximate and incomplete since it overlooks some incontrovertible constraints, such as physical bounds (consumption cannot be larger than production plus inventories) and causality (or "time to build"). In particular, when shortages are present, input goods must be allocated among customers in a specific way, which we will choose to be proportional to the posted demand. But in turn, such shortages will lead to production undershooting targets. Still, some of the results of the previous section will turn out to be useful to understand the extended model presented below. Evolution of the projection u + N (t) of U(t) onto the eigenvector of D associated to the marginal eigenvalue, responsible for the volatility increase (see Appendix D) after productivity shocks with volatility σ = 10 −8 and ε = 10 −4 , y-scale 10 −6 (left), ε = 10 3 , y-scale 10 −11 (right). For ε = 10 −4 , the volatility of output and prices is of the order of 10 −6 , i.e. 100 times larger than σ, as expected from theory. Accounting for the first constraint implies the following. If demand exceeds supply, all of a firm's production will be sold and exchanged, whereas if supply exceeds demand, only the quantity that was demanded will be traded, leaving a surplus that will add to the firm's inventories. Hence, the flow of goods going from i to j must be computed with care; instead of the single quantity x ji (t) considered in the previous section, we need to introduce the amount of goods i demanded by firm j, x d ji , that can only be smaller or equal to the quantity actually exchanged, x ji . This can be understood as a contract that may only be fully honoured if firm i produces enough to meet all demands. In a similar fashion, we distinguish the amounts C d demanded by households from what they will effectively be able to buy, C. Similarly, the work hours posted by firms d i may not be equal to the total amount of work L s households are willing to provide. 
To handle the situation where supply exceeds demand, we keep track of firm i's inventory of good j, denoted by I ij (t) and to which we successively add the goods that the firms did not manage to sell or to use and subtract those that perished. Implementing causality in the dynamics also means dissecting the firms' decision processes. Clearly, goods can only be sold at time t after they have been produced at time t − 1, and prices may change (if only slightly) between these two times. More importantly, firms only have partial information about the amount of goods they will be able to buy and sell when they plan for the next production cycle. Likewise, the number of employees they will be able to hire is not known precisely, because it depends on the amount of work deemed acceptable by the households. It is at this stage that we will introduce a heuristic rule that allows firms to plan for the next production round by making more or less informed guesses about these unknown quantities. In the present work, we assume that firms base their estimate on what happened in the previous time step, although more complicated and more general rules can already be imagined. In order to keep all causal constraints satisfied, one must carefully set up a consistent chronology for the actions of firms and households. The resulting time-line of the model is schematised in Fig. 3 . Each time step δt (δt = 1 hereafter) is conveniently sliced in three successive "epochs", represented as boxes in Fig. 3 . At the end of time step t − 1, goods have been produced and are available for consumption at t in quantities y i (t) and prices p i (t). (1) Forecasts (2) Production targets At any given time, firms must plan how much to produce for the following period. To capture this, we keep exactly the same adjustment rule as in the naive version of our model, Eq. (III.1b), but using now the expected profits E t [π i ] and excess productions E t [E i ] at the end of the period, which we specify below. Thus, the target production for time t + 1, y i (t + 1), is set using where G i (t) denotes the proceeds of the sales ("gains"), L i (t) the production costs ("losses"), D i (t) the overall demand for good i and S i (t) the supply of good i, which is already known to the firm at time t, Once the target productions for t + 1 are decided, the corresponding quantities x ij are computed according to Eq (II.5). Firm i then posts its demands for inputs j for delivery at time t, taking into account their current stock of I ij of said inputs, with the rule Thus, if stocks are plentiful, the firm will prefer drawing from them instead of buying new inputs. In the meantime, households calculate their own consumption target for good i as detailed below and they also decide, given offered wages, how much labour they are willing to supply, a quantity we call L s (t) that now may not correspond to full employment. At this point, firms start hiring workers from the job market, albeit without exceeding the total supply of work L s , i.e. where i is the real amount of work contracted by firm i. Workers are paid the same wage p 0 (t) independently of their employer. 11 Conventionally, we prescribe that wages are paid immediately upon hiring -regardless of any technical unemployment in the future caused by shortages of inputs -which allows the household to compute its available budget for the present period: with S(t) the household's savings. The household's demands for goods C d i (t) are computed in section IV D. 
Trading can now start, whereby firms sell their production and buy the goods they need, in a way to satisfy the constraint that the total amount of goods sold cannot exceed production plus inventory, viz. (IV.5) If demand exceeds supply, buyers are satisfied proportionally to their posted demand, and so quantities x that are effectively exchanged are given by where D i (t) is the total demand for good i at time t. The equation for C i (t) is slightly more convoluted because we do not give households access to debt, see Eq. (IV.30) below. At this point, firms have an exact knowledge of their earnings and expenses. Their profit at round t may now be computed: Firms also know how much excess supply or demand they actually registered: Realised profits and supply/demand imbalances then generate price updates. We describe them exactly as in Eq. (III.1a), which now reads: where all quantities are now known. 12 Prices are updated due to tension between supply and demand, which is in our framework a natural channel for inflation or deflation. By the same token, tensions on the job market are bound to lead to wage updates, which we postulate to be of the same form as for price updates, namely meaning that excess demand of labour increases wages, and vice-versa. This rule implements a Phillips curve at each time step (see Phillips (1958) and Blanchard (2016) ). One could also use an asymmetric update rule, accounting for the fact that lowering nominal wages is more difficult than raising them. Finally, one could also consider adding a direct coupling between the inflation of the price of goods and wages, as an extra term in the right hand side of Eq. (IV.10). The last epoch corresponds to the start of production. Firm i uses the workforce i , along with available quantities x a ij that depend on exchanges x, optimal inputs x and inventories I, as Indeed, if the inventory I allows to provide for optimal input x, then no demand is posted (see Eq. (IV.2)): x = 0 and x a = x. Otherwise, the firm acquired a quantity x that now adds to available stocks, and so x a = x + I ≤ x. Note that labour cannot be stored, and therefore I i0 = 0 at all times. Now that all of the available inputs x a ij and labour i are known, the outputs are determined by the firms' production functions, which in the Leontief case with b = 1 entails: The firms' inventories of their own production is also updated, as where the decay factor σ i measures the perishability of good i. For durable goods, σ i 1 and e −σi ≈ 1, whereas σ i 1 and e −σi 1 for perishable goods. Furthermore, in the Leontief framework total production is limited by the scarcest input, which is therefore depleted during production, leaving a fraction of the other inputs unused. We denote so that we can write the fraction of inputs k = j (i) effectively used as (IV.14) The unused inputs add to firm inventories, and their update may be written using Eq. (IV.11), as Finally, for numerical purposes, it is convenient to rescale new prices p i (t + 1) by the new wage p 0 (t + 1) to avoid exponential growth (or decay) of prices induced by inflation (or deflation), effectively measuring prices in units of wages. We therefore set: 13 This concludes the third and last epoch of the time step. The process is then repeated at time t+1, with productions y i (t + 1) and prices p i (t + 1). To close the model, we now need to specify how firms estimate their future profits/losses and excess/deficit production. 
The behaviour of households must also be spelled out, to allow for the determination of the demand of goods and the supply of labour. We may write the expected profit of firm i as showing that in the planning phase firms must estimate future goods and labour demand, which we will denote generically as E t [x] . Similarly, the expected excess production is also a function of E t [x]: The simplest assumption we can adopt is that firms are "sticky", and estimate all future demands to be equal to their last observation (which follows the rationale that firms produce in order to meet total demand), i.e. However, some immediate generalisations come to mind. For example, firms may also factor in realized quantities x(t − 1) in their estimate, and set as a learning rule where λ ∈ [0, 1] is a parameter. Our "sticky" assumption that will be used henceforth thus corresponds to λ = 1. Another possible generalisation is that firms use a more sophisticated learning rule that allows them to estimate E t [x] using time-series analysis, the simplest of which is "constant gain learning" (equivalent to computing the exponential moving average) of past realised demands. This is similar to the AR1 estimation of economic growth used by the agents of Poledna et al. (2019) in their decision-making process. Trend-following, extrapolative rules may also be considered. All these extensions are beyond the scope of the present paper; at this stage, our ambition is to set up a minimal consistent framework, free of spurious numerical instabilities, and that can converge to competitive equilibrium in some region of parameter space. As in standard macroeconomic models, we assume that households are represented by a single representative agent with a certain disutility for work, who seeks to maximise the following utility function 14 is the total amount of work provided by the representative household. The so called Frisch elasticity index ϕ, after the eponymous Frisch (1959), gives a measure of the convexity of the disutility of work, L 0 is the scale of the amount of work that the household is able to provide and Γ is a parameter that can be set to unity without loss of generality. In the limit ϕ → ∞, households are indifferent to the amount of work provided L(t) < L 0 , but refuse to work more than L 0 . With an utility function of this form, the household may then compute its optimal demand for good i, C d i (t) which it will set as a consumption target for period t, and the optimal amount of labour L s (t) it is willing to provide to firms. To compute the aforementioned quantities, the household needs to know its current savings S(t) and anticipate its income for the next period. The expected utility is estimated with optimistic forecasts (i.e. consumption demand will be met and offered labour will be fully utilised). Wage p 0 (t) and prices p i (t), on the other hand, are all known before the "Exchange and Update" stage, see IV B 2. Hence, with an expected budget constraint that reads 15 is the expected (or in fact hoped for!) budget. For convenience, we denote as W 0 (t) = p 0 (t)L 0 the wage associated to L 0 work-hours. The household optimises its expected utility while enforcing the budget constraint using a Lagrange multiplier In order to find µ(t), one must enforce (IV.23). We find the following equation on µ(t): . (IV.27) When ϕ = 1 (a common value found in the literature and corresponding to a quadratic work-disutility), we have Note, interestingly, that high savings lead to reduced labour supply. 
Also, because of possible involuntary unemployment, the household may want to consume more than it is able to spend when L d (t) < L s (t). A final word on the scaling behaviour of these quantities with N is in order. For large N we expect that the size of the household sector will also be of order N . Noting thatθ is also of order N , one finds the following befitting scaling laws if we choose L 0 ∼ √ N : meaning that total work-hours and total consumption are proportional to the size of the population, as it should be. 15 In a follow-up paper, we shall introduce precautionary savings and interest rates, which lead to the appearance of inflationary equilibria. Note also that we assume that all goods are immediately consumed by the household, which does not make much sense for durable goods. This also could be reconsidered. 16 Although not necessary for the purpose of the present paper, it is important to allow for confidence effects, which can lead to endogenous crises (see e.g. Morelli et al. (2020) ). One possibility is to couple the consumption propensity to the unemployment level, taken as a proxy of consumer confidence, i.e.: log where θ 0 i are the baseline values for consumption preferences. In the following, we will fix ω = ω. Because we do not allow households to borrow in the present version of the model, real consumption must be adjusted in the case of partial unemployment. In this case, the available budget is necessarily smaller than what was hoped, leading to a realised consumption: with B(t) their available budget computed in (IV.4) . The difference between C i (t) and C r i (t), if positive, is added to the inventory I ii (t) of firm i. The households' savings are then updated as: The above steps look rather tedious and considerably more complex than the simple logic behind our first "naive" model. Nonetheless, they are quite natural when one decomposes all the stages of a real production process. But more importantly, we have found that short-circuiting any of these steps leads to inconsistent dynamics with spurious instabilities, reflecting that natural constraints are in fact violated. Furthermore, the approach of behaviour modelling as a series of actions or sequence of events is a typical feature of ABMs, where the ordering of these events is done in a coherent way as to ensure causality. An important difference with the naive version of section III is the large number of update rules that necessarily involve cusps, such as those involving taking the maximum or minimum of two expressions, see section V B below. Furthermore, the number of thumb rules used by firms and households to aid their decision has increased, and so has the number of parameters that are needed to describe a given instance of our toy economy. Therefore, and in spite of the fact that the naive model allows a fair understanding of certain regions of the parameter-space of the full model, we cannot reasonably attempt an exhaustive description using analytical tools only. We therefore resort to a numerical exploration of its properties, using computer simulations that are described in detail in the pseudo-code provided in Appendix F. We also provide access to an open access simulation tool that allows the reader to explore different configurations here: https://yakari.polytechnique.fr/dash. The following section is a numerical investigation of the very rich phenomenology of the above model, supplemented with some analytical results when possible. 
Because of the relatively large number of parameters, we only investigate here some specific "cuts" in parameter space, but believe that these cuts are representative of all the possible dynamical classes that the model can generate. To facilitate reading this section, we will first recall the different parameters that can be adjusted. We will then explore the different types of dynamical trajectories that can be observed in our toy economy, and classify them into different "phases". This idea comes from physics, where the macroscopic properties of a system can be split into different parameter regions where its aggregate behaviour is qualitatively the same. These regions only depend on the values taken by a handful of parameters that describe the system; an eloquent example is that of water, which depending on the pressure or temperature can be in either the liquid, solid or gas phase. We will therefore present the following "phase diagrams" that summarise the influence of the parameters on the broad dynamical behaviour of our model, an idea that was already advocated for economic Agent-Based Modelling in Gualdi et al. (2015b) . The different parameters introduced in the previous sections may be split into two categories: static parameters, describing the production network and the production function, and dynamic parameters, describing the evolution of prices, labour and outputs. We provide an overview of them and of the typical values we assign to them in our simulations below. a. Static Parameters 1. Number of firms N -here N = 100. 2. Type of network -here a random regular directed network, see Kim and Vu (2006), McKay (1981) , where each firm has the same number of clients and suppliers d = 15. 3. CES production function -here a Leontief production function (q = 0 + ) with a return to scale parameter b = 0.95. 17 4. The smallest eigenvalue ε of the production matrix M, which for large values corresponds to a stable economy. 5. Firm inter-linkages J ij , which we take to be 1 when firms i and j are linked and zero otherwise. 6. Firm productivities z i , first set to 1 and then adapted to adjust ε to take the required value. 18 7. Baseline household consumption preferences θ 0 i , modelled by iid uniform random variables rescaled to have i θ 0 i = 1. 8. Work disutility Frisch index, set to ϕ = 1 (quadratic disutility of labour) and scale of workforce set to L 0 = 1. 9. The behavioural extrapolation parameter λ, defined in Eq. (IV.20), is set to 1. Note that we shall also simulate our model on more realistic models of firm networks, including actual input-output networks constructed from the FactSet database (FactSet (2021)), see Appendix E. b. Dynamic Parameters 1. Parameters describing restoring forces: α, α , β, β , (see Eqs. (IV.1)-(IV.9)). We restrict ourselves to the case β = α = β = α and scan for varying values of α. 2. Phillips curve parameter ω, relating wages to tensions in the job market (see Eq. (IV.10)). 3. Confidence parameter, relating consumption propensities to unemployment: ω (see Eq. (IV.24)). For this study, we take ω = ω. 4. Perishability parameters σ i describing the speed of decay of good i, all taken as σ i = σ except when otherwise indicated. These choices therefore reduce the number of parameters to explore to four: ε (network stability), α (strength of restoring forces), ω (Phillips curve parameter) and σ (perishability). We will now show how varying them may lead to a very rich phenomenology. 
As stressed above, the naive model of section III C can be linearised, leading to a complete analytical estimation of the time needed to reach equilibrium. The cusps of the full model, however, imply that perturbative analysis produces at best piecewise-linear equations. 19 To be precise, let us attempt to linearise the different update rules by writing δx(t) = x(t) − x eq for the perturbed value of any quantity x and expanding the different equations to lowest order in δ·. When applied to the flows x ji one gets: Depending on the sign of δS i (t) − δD i (t), the flow of exchanged goods is characterised by two different linear equations, rendering the system piecewise-linear. The same feature also holds for exchanged work (replacing δS i (t) − δD i (t) by δL s (t) − δL d (t)) and realised consumption (where the switch depends on δS i (t) − δD i (t), as well as on the budget constraint, which is more cumbersome to write, see (IV.30)). But this means, perhaps surprisingly, that there does not exist a limiting case where the full model would boil down to the "naive" model of section III. Linearising around equilibrium yields piecewise-linear dynamics that can be described by the evolution of a (N 2 + 4N + 1)-dimensional state vector that we denote by U(t), following the notation of section III C. This vector encodes perturbations on stocks δI ij (t) (stacking the columns of the N × N stock matrix in an N 2 -dimensional vector), current production targets δγ̃ i (t + 1), past targets δγ̃ i (t), production levels δγ i (t), prices δp i (t), and finally the household's savings δS(t). To study possible switches in the conditions defining our piecewise-linear dynamics, let us define two vectors c i and c w that separate state space into two regions: • the no-shortage region for i, H + i (resp. H + w ), where c i U(t) > 0 (resp. c w U(t) > 0), in which i's supply is enough to cope with demand (resp. labour supply is enough to cope with labour demand); • the shortage region for i, H − i (resp. H − w ), where c i U(t) < 0 (resp. c w U(t) < 0), in which i's supply is not enough to cope with demand (resp. labour supply is not enough to cope with labour demand). The intersection of these half-spaces defines regions of state space, called cones, in which the linearised dynamics are fully characterised by a well-defined stability matrix. Calling S ⊆ [[1, N]] the set of firms with a shortage of goods, we define a stability matrix D S in each cone as follows. Although the dynamics inside each cone are linear, knowledge of the eigenvalues of the stability matrices is in general not sufficient to conclude on the stability of the entire system. Indeed, knowing whether a cone is preserved or not by its stability matrix is essential to understand the dynamics. If the linear dynamics corresponding to the stability matrix inside a cone preserves it, meaning that any trajectory starting in the cone will always be contained within it, the dynamics becomes trivial. On the other hand, when this is not the case, a trajectory may switch back and forth between different cones, and it will therefore be described by a product of stability matrices. It is this product that one must study in order to conclude on the overall stability of the system. This can lead to quite complicated trajectories, where for example two different cones have stability matrices such that a trajectory starting in one inevitably ends up in the other and vice versa, leading to a pseudo-oscillation that can be stable in the long run.
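The switching mechanism described above can be illustrated with a deliberately small Python sketch: two cones separated by a single hyperplane, each with its own placeholder stability matrix. The matrices and the switching vector below are arbitrary toy choices; the full model has one switching condition per firm plus one for labour, hence up to 2^{N+1} cones.

```python
import numpy as np

def conewise_trajectory(D_plus, D_minus, c, u0, T=500):
    """Iterate a two-cone piecewise-linear map: apply D_plus while c.u > 0
    (the 'no-shortage' side of the hyperplane) and D_minus otherwise.
    Toy illustration of the cone-switching dynamics discussed in the text."""
    u = np.asarray(u0, dtype=float)
    traj = [u.copy()]
    for _ in range(T):
        D = D_plus if c @ u > 0 else D_minus
        u = D @ u
        traj.append(u.copy())
    return np.array(traj)

# Two placeholder matrices, each stable in isolation (spectral radius < 1);
# the stability of the switched system must be judged from products of the
# two matrices along the realised switching sequence, not from each alone.
Dp = np.array([[0.9, -0.4], [0.2, 0.9]])
Dm = np.array([[0.9,  0.4], [-0.2, 0.9]])
traj = conewise_trajectory(Dp, Dm, c=np.array([1.0, 0.0]), u0=[1e-3, 0.0])
print(np.abs(traj).max(axis=1)[-5:])   # inspect whether the perturbation decays
```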
To our knowledge, the mathematical tools needed to account for these interesting cone-wise linear dynamics are not available in the general case. For each set of values of the parameters (α, ω, σ, ε), we start from a random perturbation about equilibrium of relative magnitude δ = 10 −3 , taking e.g. p i (t) = p eq,i (1 + δu) with u uniform in [−1, 1]. 20 We then run the dynamics for T = 20000 time-steps and consider only the last 2500 to classify the trajectory into one of several classes that are detailed below. The trajectories we will use to classify the behaviour of our model are those of relative price differences δp i (t) := p i (t)/p eq,i − 1. 21 In order to provide more vivid illustrations of some of these dynamical types, we have made firms slightly heterogeneous in their values of the parameters α and σ. In the figure captions below, the notation α, σ ∈ [A, B] means that these quantities are chosen uniformly in [A, B], independently for each firm. Finally, for all price trajectories reported below, we highlight one firm at random to make the time-series more readable. In general, we observe five classes of behaviour (or "phases"): convergence towards the competitive equilibrium, convergence towards deflationary equilibria, crises, business-cycle-like or chaotic oscillations, and economic collapse, where the economy crashes after a finite number of time steps. Different phase diagrams corresponding to this classification can be seen in Fig. 5 for ε ∈ {100, 1, 0.01, −5}, and the study and description of these phases is detailed in the sections below. Note that the boundaries of the phase diagrams depend on the network of interactions, especially as ε → 0. If ε ≫ 1, then productivity factors are very large, and network effects can safely be neglected. However, as ε → 0, these network effects become more and more important and the specific type of network will play a role. [FIG. 5: Phase diagrams in the (α, σ) plane, all for the same network economy, ω = 0.1 and different values of ε. The colour code is explained in the legends. The region where the competitive equilibrium state is stable shrinks when ε decreases, and disappears when ε < 0 as deflationary equilibria and cycles/chaos take over. One also observes regions with cycles and chaos, and crises. Finally, when restoring forces are too weak (small α), the economy crashes.] The regular network chosen in this section is thus only meant to illustrate the different classes of dynamical trajectories that can be generated by the model. But such classes are in fact generic, appear for a broad family of networks, and are a consequence of the non-linear update rules followed by the firms. Before delving into the description of each of these classes, we note in particular that (see Fig. 5 ): • All the different classes appear for values of the parameters α, ω, σ, ε of order unity; to wit, interesting dynamical behaviour does not require uncanny values of the parameters. • The region where the competitive equilibrium is reached shrinks as the economy approaches the instability ε → 0 from above. When ε < 0, there is no admissible equilibrium and only deflationary equilibria or cycles/chaos can be attained. • For a fixed perishability σ one observes the following succession of phases as the restoring parameter α is increased: collapse when α is too small, followed by deflationary equilibria, then competitive equilibrium and finally cycles and chaos for large α, corresponding to firms that overreact to imbalances.
• At the boundary between competitive equilibrium and cycles and chaos, one observes intermittent crises, similar to the ones described in Gualdi et al. (2015b, 2016) -- see below. Starting from α = 0 and increasing its value, the model finds itself first in a collapse phase, where prices diverge exponentially and productions plummet to zero. As α grows we reach a critical value α c corresponding to a transition from the collapse phase to one where the economy is able to stabilise (either in a deflationary or a competitive equilibrium). This transition, which can be observed in Fig. 5 , appears to be independent of both ε and σ. The fact that α c is independent of ε means that the economy collapses when prices are too slow to adjust, regardless of the nature of the firm network. The exact value of α c in Fig. 5 can be computed to be: where ω is the Phillips curve parameter relating wages to unemployment (see Eq. (IV.10)). This value is obtained by diagonalising the stability matrix of the system D 0 in the no-shortage cone δS i (t) − δD i (t) > 0 for all firms i (see section V B), which happens to be stable under the quasi-linear dynamics. 22 This collapse transition may also be observed along the diagonal ω = α of the right plots in Fig. 12 below. (Note however that an additional wedge where the economy diverges appears for small ω, which will be discussed in section V F 2.) The most natural behaviour one could expect is for the economy to converge to a competitive equilibrium, where all profits are zero and markets clear, as classically assumed in economic models. This is indeed what happens, but, interestingly, it requires α to be neither too small nor too large: restoring forces must be strong enough to stabilise the system, but not so strong that they cause overshoots and make it impossible for the economy to coordinate. Perishability σ should also be large enough, see Fig. 5 . Finally, as returns to scale diminish (i.e. as the parameter b decreases), the region where competitive equilibrium can be attained becomes more extended (see Fig. 8 below). In order to give some economic meaning to the phase where competitive equilibrium is reached, let us focus on the case ε = 1, i.e. a firm network of moderate average productivity, relatively far from the Hawkins-Simon instability. Choose the unit time scale of the model to be a quarter (3 months), a reasonable period for firms to adjust prices and production. Competitive equilibrium cannot be reached when α < α d (ε = 1) ≈ 0.395. Such a value of α means that a firm facing a production imbalance of, say, 10% will attempt to reduce it to 10% × e −0.395 ≈ 6.7% over the next quarter. Attempting to reduce it much faster leads to oscillations and chaos. For example, when σ = log 2 ≈ 0.69, corresponding to a half-life of goods of one quarter, α should remain smaller than ≈ 0.55 to avoid falling into the yellow region of Fig. 5 . Note that when σ drops below ≈ 0.3, competitive equilibrium is unattainable. Note further that within the competitive equilibrium phase, convergence can either be purely exponential, or correspond to damped oscillations or even damped chaos, see Fig. 6 . The precise nature of the relaxation depends on the relative values of α, α′, β and β′. Interestingly, marginal stability and diverging relaxation times may occur even for non-zero values of ε.
Indeed, as we get closer to the transition line α d (ε) between competitive equilibrium and deflationary equilibrium, the time needed to converge gets larger in the same way as in the naive model. As an illustration, if we take ε → ∞, one finds α d = α c as given by Eq. (V.4). If we now set α = α c + δ with δ ≪ α c , it is not hard to show that the largest eigenvalue of D 0 is given by 1 − δ/(3(1 + ω)) + O(δ 2 ). The relaxation time is then of order δ −1 and indeed diverges close to the destabilisation transition, as in the naive model. The same scaling δ −1 for the relaxation time holds for generic values of ε, when α = α d (ε) + δ. Finally, note that at fixed σ, the interval over which competitive equilibrium can be reached, to wit ]α d (ε), α d (ε) + f (ε)[, shrinks to zero, with f (ε) → 0 as ε → 0. More precisely, numerical simulations show that there exists a value ε c (depending on σ, b and ω) such that where r = f (ε c + ). By the same argument as above, the relaxation time will scale as f (ε) −1 ∼ 1/(ε − ε c ) and we retrieve the same behaviour as in the naive model. An interesting feature of our model is the appearance of a different kind of equilibrium, corresponding to stationary points where profits and excess demand are non-zero, but equal to a constant value. We call these "deflationary" equilibria because prices synchronise with the (negative) inflation rate determined by the downward evolution of wages, induced by chronic unemployment (i.e. L s > L d ). We denote by π̄ ∞ i and Ē ∞ i the rescaled values of profits and excess supply in the stationary state. These must then verify (see Eqs. (IV.1, IV.9)): In the present case, where λ = 1 in (IV.20), one also has E ∞ [Ē i ] = Ē ∞ i , so one can simplify these equations. In contrast with the competitive equilibrium, which is independent of the dynamical parameters α, α′, β, β′, deflationary equilibria are characterised by prices and production levels that depend on the parametrisation of the dynamics. Explicit expressions for the stationary prices/productions are, however, difficult to compute analytically. Fig. 7 shows an example of the convergence of inflation-adjusted prices towards their stationary values. Note that in real terms, the stationary price level is above the equilibrium value. Throughout our simulations we have found these equilibria to be rather stable. For a fixed value of ω, the transition between deflationary equilibria and competitive equilibrium occurs at a value α d (ε) which is difficult to compute analytically. We can nevertheless locate this transition numerically by simulating the stability matrix in the no-shortage cone mentioned in the previous subsection. Results are reported in Fig. 8 . [FIG. 8: As b decreases, one can see that the area below α d (ε) tends to decrease, i.e. the region where competitive equilibrium is reached becomes larger. As mentioned in footnote 17, decreasing returns to scale tend to stabilise the dynamics. Note that the values of α d (ε = 100), α d (ε = 10) and α d (ε = 1) are consistent with the values observed on the phase diagrams of Fig. 5 , for which b = 0.95. Finally, as ε → 0, one can see in Fig. 5 that the smallest σ above which this transition exists gets larger. As a consequence, for small ε, the region labelled "competitive equilibrium" only exists for large enough σ.]
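The δ −1 scaling quoted above can be checked with a few lines of Python, using the perturbative eigenvalue 1 − δ/(3(1 + ω)) valid for ε → ∞; the discrete-time relaxation time is estimated as τ = −1/log λ_max. This is only a numerical illustration of the formula given in the text, not an independent computation of D 0.

```python
import numpy as np

# Near the transition, alpha = alpha_c + delta (with epsilon -> infinity),
# the largest eigenvalue of D_0 is 1 - delta/(3(1+omega)) + O(delta^2), so
# tau = -1/log(lambda_max) ~ 3(1+omega)/delta diverges as delta -> 0.
omega = 0.1
for delta in (1e-1, 1e-2, 1e-3):
    lam_max = 1.0 - delta / (3.0 * (1.0 + omega))
    tau = -1.0 / np.log(lam_max)
    print(f"delta = {delta:.0e}   tau ~ {tau:8.1f}   3(1+omega)/delta = {3*(1+omega)/delta:8.1f}")
```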
However, these deflationary equilibria make little economic sense in the long run, because (a) the stationary level of production tends to be extremely small compared to equilibrium values and (b) forecasts of consumption (for households) and profits (for firms) systematically overshoot their realized counterparts. One expects that in such situations, like in the case of economic collapse, the influence of monetary and fiscal policies cannot be neglected. Furthermore, we expect that when biases are strong and systematic, agents would soon adapt and change their forecasting rules accordingly. Such an extension is however beyond the scope of the present paper, but a natural conjecture is that firms would react more strongly to imbalances (i.e. increase the coefficients α, α , β, β ), which would drive the system back in the competitive equilibrium phase or in the oscillatory phase. We finally point out that we have not found, within the present specification of the model, inflationary equilibria where the demand for labour exceeds the supply. However, we have found that introducing precautionary savings used to buy interest rate paying bonds leads to new phenomena, including a whole region where inflationary equilibria are now found. Owing to the strongly non-linear dynamics defining the model, it is natural to expect that some choices of the parameters lead -as in generic dynamical systems -to oscillations or to chaotic dynamics, which is indeed what we observe in a whole region of parameter space -in short, when firms tend to over-react and adjust prices/productions too quickly in the face of imbalances. The first interesting oscillatory behaviour is that of spontaneously emerging business cycles, as shown in Fig. 9 . They can be either synchronised ( Fig. 9 -a) or completely unsynchronised ( Fig. 9-b) , depending on the values of ω and ε, and the relative values of α and β . Chaotic oscillations also emerge (see Fig. 9 -c). We stress that such persistent oscillations, observed in the rather large portions of the phase diagram, are not due to external perturbations, absent in these simulations (compare with section III D where small external shocks are amplified by the proximity of an instability). Rather, this is a region of the phase diagram where the volatility of the economy is purely endogenous (see Bonart et al. (2014) for similar observations). This provides yet another scenario to explain the "small shock, large business cycle" puzzle described by Bernanke et al. (1996) , different from the proximity of an unstable point, as in section III D. Volatility may be high because of the existence of self-sustained oscillations/chaos, as reported here and in many previous work in which a dynamical systems approach to economics was advocated, see e.g. Flaschel (2008), Goodwin (1982) , Grandmont (1985) , Keen (1997) , Rosser (1999) and also Chiarella et al. (2005) , Delli Gatti et al. (2005) , Gualdi et al. (2015b) , Pangallo (2020) in the context of ABMs. This additional dynamical class is represented in Fig. 10 . Here, a fast relaxation to equilibrium is followed by spontaneous destabilisation. The system enters a cycle of price inflation and plummeting production. This is most likely due to a switch between different cones, characterised by different stability matrices, as discussed in section V B. The first matrix is stable, whereas the second has at least one eigenvalue out of the unit circle, and therefore an unstable direction. 
Non-linear saturation effects then take over and quell the dynamics, and the system flows back towards equilibrium before the next crisis appears. These acute endogenous crises are one of the most interesting aspects of our model; they also appear in the Agent-Based Models of Gualdi et al. (2015b) and Sharma et al. (2020), where they result from a generic synchronisation mechanism, as made explicit by Gualdi et al. (2015a). A weakness of the naive model of section III is that it can only produce divergent trajectories whenever ε < 0, i.e. in the low productivity phase. As illustrated in Fig. 11 , our full model produces instead a wide range of interesting behaviour in this case, from deflationary equilibria to oscillations. Of course, since there is no well-defined equilibrium, the convergent phase is now proscribed. However, an important message is that a viable economy can exist even if the Hawkins-Simon condition is violated, but at the expense of either substantial stationary imbalances or oscillatory/chaotic behaviour. Finally, we illustrate here the crucial role of inventories in determining the type of dynamics we observe. As shown in the phase diagrams of Fig. 12 in the (α, ω) plane at fixed σ, goods that perish immediately (σ = ∞) lead to simple relaxation towards equilibrium (deflationary/competitive) or to a collapse. In a sense, this limit is as close as possible to the "naive" model of section III, where all inventory effects were overlooked. On the other hand, non-perishable goods lead to oscillating, highly volatile economies. Intuitively, if firm i has a stock I ik of good k, it will decrease its demand to firm k, leading to a decrease of k's production. This lasts until all stocks are exhausted. A phase of booming demand and increasing production follows, firms' stocks begin to pile up again, and the economy enters another cycle. This is similar to the well-known "bullwhip effect" (see e.g. Dai et al. (2017)), where inventories are known to lead to instability effects. These instabilities indeed disappear completely when σ = ∞ (right column of Fig. 12 ). The phase diagrams of Figs. 5 and 12 have been established by classifying the behaviour of the system after an initial perturbation around equilibrium of magnitude 10 −3 . But our system is non-linear, so larger perturbations may lead to different outcomes for the same set of parameters. In this section, we study the impact of initial conditions on the dynamics. [FIG. 13: Large perturbations lead to the system reaching a deflationary equilibrium. Note also that downward perturbations can lead to a deflationary equilibrium: in this case, the system overreacts and blows up to reach it. Dashed black lines separate the regions of positive and negative perturbations on prices or productions. The red star corresponds to no perturbation.] As for any non-linear dynamical system, the basin of attraction of a given fixed point is defined as the set of initial conditions that allow the system to reach it. The analytical determination of basins of attraction is a notoriously difficult question, especially for high-dimensional systems. As pointed out in section V B, it is possible to linearise the dynamics around equilibrium. The subsequent dynamics is piecewise-linear and described by the evolution of an (N 2 + 4N + 1)-dimensional vector.
As it would be unrealistic to explore separately the effects of a perturbation on each and every component of this vector, we will restrict our study to uniform perturbations on current productions, prices and production targets. We thus parametrise the perturbations as where r p and r γ are the perturbation radii ranging from −1 (initial values at 0) and +∞. For a given r p , we scan all values of r γ and find the largest upward and downward possible perturbation allowing the system to revert back to equilibrium. Beyond this domain, the dynamics may drive the system to another phase. Fig. 13 shows the approximate regions for which the dynamics reach equilibrium after a perturbation of size (r p , r γ ). For large ε, the system is able to sustain very large perturbations when r p , r γ > 0. However, whenever either r p or r γ is negative, the system can end up in the collapse region. We will discuss this point in the next section. Finally, as one expects, the basin of attraction drastically shrinks as ε is reduced (Fig. 13-right) . One can see that the system is still able to cope with large perturbations on production provided that prices are not too far from equilibrium. The shrinking of the basin of attraction of the competitive equilibrium state as ε → 0 again reveals how network effects are crucial to understand the fragility of the economy, since the value of ε is, we recall, determined by productivity on the one hand, and the structure of the input-output network on the other. FIG. 14: Phase diagrams (α, ω) for ε = 10 4 , β = 0.1, α = 0.25, σ = ∞ and α = β. Left: The system is initialised in the no-shortage region by applying a small upward perturbation on equilibrium prices and productions of magnitude 10 −4 . Right: The system is initialised in a mixture of no-shortages and shortages (50%/50%) by applying a perturbation around equilibrium prices and productions. The red line corresponds to the prediction α c = √ β ω (here α = β) for the stability of the matrix D 0 . Note here that we fixed values of α and β to illustrate another possible shape of the transition around α c . If we had chosen, as previously, α = β = α = β , the transition line would have been the line α = ω, as in Fig. 12 . On top of the importance of the magnitude of perturbation, the direction of the perturbation matters as well. This is a consequence of the separation of state space in different cones. For a small perturbation around equilibrium, if the system is initialised in the no-shortage cone, the dynamics will behave differently than if it were in the full shortage cone. As an illustration, Fig. 14 shows the phase diagrams in the plane (α, ω) for the same parameters but for different initial perturbations. On the left, a small upward perturbation is applied on equilibrium prices and productions but initial targets are set to γ eq . This prepares the system in the no-shortage cone since production is higher than in equilibrium and household's demand lower. We see that the collapse region is very well described by the stability of the matrix D 0 defined in section V B, which in this case keeps the trajectory inside the no-shortage cone. On the other hand, the right-hand plot shows that initialising the system in a mixture of no-shortages/shortages adds an additional wedge of collapsing dynamics (note that the same wedge is present on the diagrams of Fig. 12) . Above this line, the matrix D S drives the dynamics outside the partial-shortage cones. 
The system finally reaches the no-shortage cone, which is preserved by D 0 and for which equilibrium is stable. Below this line, the dynamics is thrown into the full-shortage cone, which is preserved by D N but where, on the other hand, equilibrium is unstable. To further illustrate this effect, we ran the dynamics of our model with mixed initial conditions p i (0) = p eq,i (1 ± r p ), γ i (0) = γ eq,i (1 ± r γ ), γ i (1) = γ eq,i (1 ± r γ ), (V.9) where we choose + for 50% of the firms and − for the others. This prepares the system in a state where 50% of the firms cannot fulfill demands. In Fig. 15 , we show the basin of attraction of equilibrium for perturbations r p , r γ ranging from 0 to 1. As we see, large enough shortages can destabilise the dynamics even at large ε. Let us first summarise the main messages of this paper. This work started from the observation made in Moran and Bouchaud (2019) that generic input-output network models cannot reach a competitive equilibrium state when productivity is too low, connectivity too high, or substitutability too low. This begs the question: what happens to the economy in such cases? We argued that the answer to such a question is necessarily of dynamical nature, and demands an extension of the standard equilibrium framework to out-of-equilibrium equations of motion, that aim to describe how imbalances regress in time and how fast equilibrium is reached -if it is reached at all. We first proposed what we called a "naive" model, based on the idea that forces driving the economy back to equilibrium are linear in the imbalances (profits and supply/demand imbalances). This leads to interesting non-linear (Lotka-Volterra type) differential equations for prices and productions which predict, among other things, that the equilibration time diverges as the network economy approaches the instability point at which competitive equilibrium is no longer admissible. We argued that this long time scale also leads to excess volatility, as the impact of past exogenous shocks cannot quickly dissipate. We then pointed out that the naive model does not correctly factor in physical constraints: excess demand cannot be satisfied, excess supply must be stored, consumption can only start after goods are produced, wages can only be spent after being paid, etc. Accounting for all these constraints within a consistent model considerably complexifies the resulting equations, but leads to a model which displays a much larger variety of possible dynamical behaviour, some very far from the competitive equilibrium. In fact, the dynamics of the model can remain well-behaved even in the region of parameters where equilibrium is inadmissible because some prices and/or productions would be negative, unless some firms are removed from the network. A numerical investigation of the full model leads to rich phase diagrams, from which we extract the following salient features, with clear economic implications: • The competitive equilibrium attracts the dynamics only in a restricted range of parameters: the speed at which firms adapt to imbalances must neither be too slow nor too fast, and the rate at which goods spoil must be high enough. Diminishing returns to scale also help convergence towards equilibrium. • When the adaptation speed is too large, or the perishability of goods too low, coordination breaks down and the economy enters a phase with periodic or chaotic business cycles of purely endogenous origin, as was also reported in Bonart et al. (2014) . 
• Close to the boundaries between the competitive equilibrium phase and the oscillating phase, one observes a regime of intermittent crises, with long periods of quasi-equilibrium interrupted by bursts of inflation. • Another class of equilibria exists, with a negative inflation but with stationary real prices and production different from those pertaining to the competitive equilibrium. In particular, markets -including the job market -do not clear in such situations: labour supply is always larger than labour demand. These equilibria are however characterised by persistent discrepancies between forecasts and realized quantities, which presumably make them unstable against simple learning rules. • For inflationary equilibria to exist, where labour demand is larger than labour supply, one needs to introduce precautionary savings and interest rates in the model. • Finally, we have checked that the overall shape of the phase diagram is robust to changes of the structure of the network (although see Appendix E for additional information about the dynamics on real input-output networks) and of the specific form of the CES production function that one uses. This means that our results are generic and should hold in realistic situations as well. Our model therefore suggests two distinct out-of-equilibrium routes to excess volatility (or "large business cycles"): (a) purely endogenous cycles, resulting from over-reactions and non-linearities, or (b) persistence and amplification of exogenous shocks, governed by the proximity of a boundary in parameter space where the competitive equilibrium becomes unstable. While scenario (a) may appear at first sight to be more generic, the self-organized criticality scenario proposed long ago by Bak et al. (1993) could make (b) plausible as well. Specific empirical work is needed to distinguish between these two scenarios. It should however be borne in mind that many relevant features of the real economy are left out of the present version of the model. In particular, whereas firms are allowed to make losses, we have not accounted to the cost of credit that this would entail, and the impact of monetary policy, increasing or decreasing the interest rate in the face of inflation/deflation. Nor have we introduced a bankruptcy mechanism when firms go too deep into debt, removing non-competitive firms along the lines of, e.g. Sharma et al. (2021) . But this would require moving from a static network of firms, as considered throughout this work, to a dynamically evolving network that rewires as some firms go bankrupt and others are created. In fact, another motivation for moving from such a static framework to a rewiring model is to be able to describe possible cascades of bankruptcies mediated by the input-output network, much as cascades of defaults can occur in banking networks. We leave this for further investigations. The household sector also needs to be better described, moving away from the representative household assumption and introducing wage inequalities, confidence effects (as, for example, in Morelli et al. (2020) ) and debt. In fact, our dynamical model can be seen as a hybrid between traditional economic models (describing equilibrium) and Agent Based Models, where extra reasonable but ad hoc rules are implemented to account for out-of-equilibrium, dynamical aspects. As we have shown, in some swath of parameters, the classical competitive equilibrium is reached. 
If reached fast enough, the "adiabatic" assumption used in most standard descriptions will hold, whereas when the equilibration time is long (or even infinite) new phenomena appear. We hope that the possibility of recovering standard results in some limiting cases will make the ABM approach more palatable to economists, and at the same time elicit the inherent limits of general equilibrium ideas. Conversely, including firm network effects in ABMs like Mark-0 (Gualdi et al. 2015b (Gualdi et al. , 2016 along the lines of the present model is certainly worthwhile. Finally, an appealing feature of our approach is the possibility to use highly dis-aggregated data on individual firms and prices (for example through the "Billion Price Project" Cavallo and Rigobon (2016) ) to calibrate the model and, hopefully, use it as a powerful descriptive and predictive tool. We look forward to working in that direction in the near future. also thank Antoine Mandel and Ben Moll for very useful comments on the first version of this manuscript. JPB benefited from numerous conversations with the members of the "Rebuilding Macroeconomics" project, including Angus Amstrong and Roger Farmer. JM would like to acknowledge countless discussions with Dhruv Sharma about Agent-Based Modelling. This research was conducted within the Econophysics & Complex Systems Research Chair, under the aegis of the Fondation du Risque, the Fondation de l'Ecole polytechnique, the Ecole polytechnique and Capital Fund Management. In this section, we summarise all the key notations that are used throughout the paper. q is the elasticity of substitution between inputs. The case q = 0 corresponds to a Leontief production function where inputs are not substitutable to one another whereas q = ∞ corresponds to a Cobb-Douglas production function where inputs are fully substitutable. Furthermore, we call ζ = 1/(q + 1). b is the return-to-scale parameter. J ∈ M N,N +1 (R) is the input-output matrix. Its entries J ij denote the amount of inputs made by j needed by i to produce one unit of its good, and therefore defines a weighed adjacency matrix and an interaction network. Conventionally, the input j = 0 corresponds to labour and we use the notation J i0 = V i . a ∈ M N,N +1 (R) is the substitution matrix. Its entries a ij and a ik indicate the ease with which firm i can replace an input k with another input j. For example, a large value of a il with respect to the other a ik s means that input l can easily substitute any other input. Λ ∈ M N,N +1 (R) = a qζ • J ζ is the aggregate matrix for the Constant-Elasticity of Substitution production function. M = ∆ z ζ i −Λ is the network matrix with the productivity factors of the firms on the diagonal. We implicitly cross out the first column of Λ. When q = 0 (i.e Leontief production function) M = ∆ (z i ) − J. ε is the smallest eigenvalue of the network matrix. N is the number of firms. z i is the productivity factor of firm i. α is the log-elasticity of price growth rates against production surplus. α is the log-elasticity of price growth rates against profits. β is the log-elasticity of production' growth rates against profits. β is the log-elasticity of production growth rates against production surplus. ω is the log-elasticity of wage growth rate against labour market tensions. σ i is the depreciation parameter of good i. p i (t) ∈ R N is the price of good i time t. p 0 (t) is the common wage used to pay the household at time t. 
y i (t) := z i γ i (t) is the production of firm i at time t along with the corresponding production levels γ i (t) at time t. y i (t) := z i γ i (t) is the targeted productions by firm i at time t along with the corresponding targeted production level γ i (t) at time t. I ij (t) ∈ M N (R) is the inventory of good j possessed by firm i. In particular, the diagonal terms I ii (t) corresponds to the stock of its own production. G i (t), L i (t), S i (t) and D i (t) correspond respectively to the proceeds of sales ("gains"), the production costs ("losses"), the supply and the demand for each firm at time t. is firm i's production surplus at time t. x ij (t) is the quantity of good j that minimises the costs for firm i given a certain production target and for a given production function. x i0 (t) := i (t) corresponds to the optimal amount of work required. x d ij (t) is the quantity of input j that is demanded by firm i to firm j. x d i0 (t) := d i (t) corresponds to the demanded amount of work. x ij (t) is the quantity of input j that is effectively exchanged. x i0 (t) := i (t) corresponds to the amount of work the household is hired to do for firm i. x a ij (t) is the quantity of input j that is available for production. x a i0 (t) := a i (t) corresponds to the available workforce for production. x u ij (t) is the quantity of input j that is effectively used for production. x u i0 (t) := u i (t) corresponds to the available workforce for production. λ is a behavioural parameter determining how firms forecast their future exchanges. Household θ i is the consumption preference of the household for good i. L 0 is the nominal number of hours that the household is willing to work. Γ is the aversion to work parameter. ϕ is the convexity-to-work parameter. ω is a consumption confidence parameter. We first enforce the market clearing condition x eq,ji + C eq,i , (A.1) and inject it into the zero-profit condition using (II.5). We can deduce a nicer expression for the quantity p net eq,i = N j=0 Λ ij p ζ eq,j at equilibrium: and therefore a nicer expression for the exchanged quantities x eq ij = Λ ij p −qζ eq,j z qζ i p qζ eq,i γ Using the null budget condition we can retrieve the equilibrium consumption so that we have every ingredients to get closed form equations on prices and production levels. We express (A.2) and (A.3) back into the the zero profit condition to retrieve the first equilibrium equation: , and then in the market clearing condition to retrieve the second equilibrium equation: In the case where q → 0 + and b = 1, one can check that (II.8) is retrieved. b. Case q = +∞: Cobb-Douglas To retrieve the equations in the case q = +∞, we need to take this limit in (II.5). It yields We can then express the quantity z i γ b−1 b eq,i p eq,i through the zero profit condition as Using the market clearing condition, we can get the first equilibrium equation in the Cobb-Douglas case: To get the second equation, we inject the previous into (A.4) and take the logarithm. It reads In the Cobb-Douglas case, a positive equilibrium for prices and productions always exists. Indeed, looking at the second equation, one sees that a solution generically exists (except in the very specific case where b −1 is an eigenvalue of a) for log p eq . Exponentiating this solutions shows that p eq will always be positive. For the first equation, the matrix I N − a is always invertible since the eigenvalues λ of a are such that |λ| ≤ N j=1 a ij = 1 − a i0 < 1 thanks to Gershgorin's theorem. 
This also proves that I N − a is in fact an M-matrix, which makes the solution of this equation positive, implying in turn that γ eq is also positive. The question of positive solutions to non-linear algebraic systems is a complicated one. For now, no general recipe exists to prove that positive solutions exist except in some very specific cases. The previous set of equations for q < +∞ and generic b is no exception. However, first order approximations are possible. Setting q = 0 (Leontief production function), the equations read We know the solutions p eq , γ eq for b = 1 and we set b = 1 − δ. Assuming that equilibrium prices and productions are of the form p eq = p (0) eq + p (1) eq (δ), γ eq = γ (0) eq + γ (1) eq (δ) such that at least p Developing the solution at leading order in ε (we use results and notations from Appendix C) we get where any function of a vector is understood as component-wise. We can write p eq at leading order in ε and get a first approximation of ε c (δ) under which equilibrium prices are negative In the case of an undirected d-regular network, we have (A.10) Fig. 16 shows the region where an admissible equilibrium exists. FIG. 16: Region of existence of a positive equilibrium for a d-regular network on n = 100 firms. We see that we retrieve the standard Hawkin-Simons transition ε c (0) = 0 for δ = 0 i.e b = 1. The first order solution A.10 is plotted in blue and fits very well for this range of values of b and ε. However, upon zooming on the frontier on the right side of the plot, one begins to see a divergence between the blue line and the numerical frontier. In this section, we assume that the productivity factors are large enough to ignore interactions between firms. In this regime, firms are efficient enough so that the actual amount of inputs does not matter in the final production. In this limit, we can give approximate expressions for the equilibrium prices and productions Similarly, we approximate each block of the stability matrix: (B.4) and deduce the spectrum of the D by computing its characteristic polynomial and setting it to 0: Solving this equation yields two eigenvalues σ ± , both with degeneracy N , that read This in turn lets us deduce the relaxation time: Studying the behaviour of D as ε → 0 + requires understanding the behaviour of M, p eq , γ eq in that limit. We now introduce the matrix J = ∆ (z max − z i ) + J and denote by ρ ν (resp. |r ν , ν |) 23 its eigenvalues (resp. right/left eigenvectors) ordered by their real parts. The Perron-Froebenius theorem implies that the top eigenvalue ρ N is real, simple and associated to a full and positive eigenvector. We next use the following spectral representation of the matrix M: (B.8) which lets us express the equilibrium prices and outputs as well as D. We also use the notation M 0 to refer to the network matrix when ε = 0. This matrix is singular and verifies Expanding in ε and neglecting factors of order ε 4 and higher gives the following for the blocks of the stability matrix: Interestingly enough, although the upper-right block of D diverges as ε → 0, its spectrum converges to a finite limit. To see this, we use the block determinant formula for same-size matrices, where the commutator [C, D] = CD − DC = 0. In our case, we need [D 3 , D 4 ] = 0 which is true only in the limit ε = 0. We can then write: 24 Each factor in this product yields two eigenvalues: The trailing factor shows that 0 is an eigenvalue of D (for ν = N ), twice degenerated as ε → 0. 
We deduce that the system exhibits marginal stability in this limit. Figure 17 shows the empirical distribution of eigenvalues of D and the corresponding theoretical predictions. We have thus far shown that our system exhibits marginal stability at ε = 0. We now prove that the relaxation time of the system behaves as τ relax ∼ ε −1 . To this end, we use analytical perturbation theory as described in Avrachenkov 24 We do not need to consider terms of order one in the commutator because D with σ ν ± given in the previous section. We now try to find a perturbation of the σ 2 term to retrieve the perturbation on σ N ± = 0. Using analytical perturbation theory, we see that (ε, σ) = (0, 0) is a splitting point under the perturbation D(ε) (ε = 0 is a multiple point -since D has at least one multiple root for ε = 0 -and σ N ± = 0 is a multiple root.) In this setting, σ N ± = 0 splits under the perturbation D(ε) to give 2 perturbed eigenvalues. Henceforth, for small enough ε, the prime factor σ 2 of χ(σ, 0) is expressed as a second order polynomial whose coefficients depend on ε. We may write 2 ε 2 + · · · ) + σ(a This expansion makes sure that p 0 (σ, ε) −→ ε→0 p 0 (σ). Moreover, at least one of the a (i) 0 is non-zero. Otherwise we would be able to factor out σ in p 0 (σ, ε), meaning that for small enough (but non zero) ε, 0 ∈ Sp(D(ε)) which we know to be false because the system is stable for ε > 0. Furthermore, we know that the splitting behaviour of σ N ± = 0 is imposed, ensuring that the discriminant of p 0 (σ, ε) cannot vanish (leading to a multiple root), which yields another condition on the coefficients. Finally, since we are looking at complex roots in general, p 0 (σ, ε) will always factor into two irreducible and normalized polynomials of degree 1. This ensures that ∀i ≥ 1, a (i) 2 = 0 and that a (1) 0 = 0. This last point is not so straightforward and warrants an explanation. From Avrachenkov et al. (2013) , the Puiseux series for the perturbed eigenvalues σ N α (ε) can be written as where g N α is the degree of the polynomial from which the root σ N α is extracted. In our setting g N α = 1 meaning that the first perturbation to σ N α is of order ε. Now, we also know that σ N α is obtained by solving the second order equation p 0 (σ, ε) = 0. This means that both roots read We may now write ∆ as (1) 0 = 0, the dominant term of σ N α will be of order o( √ ε) which contradicts the previous analysis. Finally, we can attempting looking for a perturbation resembling To determine the different terms in this expansion, we re-use the determinant computation that we carried in the previous section, but keeping now terms up to order ε 2 . This yields: The constant term det Σ (0) (σ) is the characteristic polynomial of D for ε = 0 so that det Σ (0) (σ) = χ(σ, 0). Similarly, it is easy to prove that, for a diagonalizable matrix A with eigenvalues λ and associated eigenvector |λ , the matrix Com (A) can be diagonalized in the same basis and reads (B.15) Using this lemma, we can write We now develop each trace term onto the eigenbasis of Com Σ (0) (σ) . From now on, we drop the σ dependencies of the Σ matrices but bear in mind that these matrices are polynomials of order one in σ. The first trace reads Only the first term is of interest for us and we can use the explicit forms of the blocks of D to find The same computation can be carried out for the second trace term, 1 |e N which we do not need to compute. The square trace terms are very complicated, and we only sketch out their computation. 
The terms that could have entered in the perturbation of p 0 (σ) cancel out (these are sums of square terms). The terms that are rational fractions of polynomials (and could be pathological since we look for a polynomial perturbation) cancel out as well. The other terms do not enter the perturbation of p 0 (σ) and are non-pathological. Finally the perturbation of p 0 (σ) resembles We now write the discriminant of this polynomial at second order to get We retrieve the same separation as in the large ε regime. Denoting by β c = (α+α +β ) 2 −4α β 4α , we have at order one in ε: (B.17) In the limit ε → 0, ρ N = z max and we retrieve the equations given in the text. Figure 18 shows the adequacy between the theoretical estimate and the actual largest eigenvalue (obtained through numerical simulations of the matrix D) as ε → 0. In this section, we give the values of the perturbation terms for the blocks of the stability matrix. We introduce several notations for quantities that simplify in the case of an undirected network with homogeneous productivity factors. Finally, we use the bra (resp. ket) notation to refer to a row (resp. column) vector |v and we denote by v i its i th component. These results allow to write the equilibrium productions with ψ eq,i = µθi peq,i γ eq,j = µ ε r N | ψ eq l N,j + µ (1.b) The next step is to perturb the stability matrix itself. This yields no particular difficulty but calculations are a bit long so that we only give the results for the different blocks. We denote by τ g k the coefficients of the expansion of zγ where τ i = ρ N e N,i / e N | V for an undirected network. We have Appendix D: Critical volatility of prices and outputs with fluctuations In this section, we consider a general evolution of a vector U(t) given by the linear stochastic equation where D is a real N × N matrix and ξ(t) is a Gaussian correlated noise such that We assume the dynamical matrix D to be diagonalizable with real eigenvalues 26 such that Negative eigenvalues means that the system is stable i.e U(t) → 0 as t → ∞ for any initial condition. We assume that ε → 0 and show that the volatility of U(t) increases as ε −1/2 . We introduce the eigenvectors e ν associated to λ ν and we express U into the diagonal basis Using the quick decay of the exponential term in the τ integral, we can expend the integration domain... Denoting be τ ξ the typical correlation time of G, we see that • if ετ ξ 1 (meaning that G correlates on short time-scales) then G(τ ) ∼ δ(0) such that ∞ 0 dτ e −ετ G(τ ) ≈ 1, • if ετ ξ 1 (meaning that G correlates on long time-scales) then G(τ ) ∼ G(0) on the decay time of the exponential such that ∞ 0 dτ e −ετ G(τ ) ≈ G(0) ε . Finally, the volatility of U(t) behaves as Note also that this result generalizes to discrete time processes (which is of interest in the case of the general ABM that we present) U t+1 = DU t + ξ t . (D.8) The marginal stability condition can be written as λ N = 1 − ε 27 with ε → 0. We can carry out the same kind of computation and derive the same result depending on the behavior of the quantity τ ≥0 (1 − ε) τ G(τ ). If we consider shocks on productivity factors z i (t) = z i + ξ i (t) with ξ(t) given as before, we can linearize the dynamics of the naive model in both small deviations from equilibrium and small shocks. The stochastic equation that we retrieve reads dU(t) dt = DU(t) + Ξ(t), (D.9) with a noise Ξ of the form , (D.10) with notations from C. 
The correlations of this noise are slightly more complicated than before (D.11) The dynamical matrix of the naive model with an undirected network two eigenvalues yields σ ± N = k ± ε → 0 with associated eigenvectors Σ ± N = (e N , ν ± ε) for undirected networks. We assume β < β c so that the marginal eigenvalues are real as well as their eigenvectors. It follows that, at leading order in ε, the volatility of the marginal components of U(t) behaves as ε −3/2 . Indeed where H represents the inverse participation ratio. To retrieve the volatility as ε −1/2 we may rescale δp i (t) (resp. δγ i (t)) by p eq,i (resp. γ eq,i ). Denoting by w i the i th canonical vector of R 2N , we have Var δp i (t) p eq,i = p −2 eq,i Var As mentioned at the beginning in V, random regular networks are a crude idealization of real interaction networks. Real networks have been studied extensively (see for instance Atalay et al. (2011) , Bernard and Moxnes (2018) , Carvalho et al. (2020) ) and display well identified topological features such as power law distributed in and out vertex degrees. Fig.19 illustrates the topological discrepancies between regular and real world networks, and highlights similarities with scale-free networks. To build the network of 19c, we use the FactSet Supply Chain Relationships database to build a supply chain network. The FactSet dataset (FactSet (2021)) contains a list of relational data between firms, stating if firms A and B have a client/supplier relation, if they are in competition or if they have a joint venture. It is built by collecting information from primary public sources such as SEC 10-K annual filings, investor presentations and press releases, and covers about 23, 000 publicly traded companies with over 325, 000 relationships. Since the relationships are inferred from data released to the public, we cannot be sure that it is an exhaustive database of all the relationships between firms, but the subset of relationships deemed important by the firm themselves. Such links between firms have a finite duration in time and have thus a beginning and end date. For our study, we have chosen the set of client/supplier relationships between the years 2012 and 2015. This allows us to build a graph G where a link i → j exists whenever i is reported to be a supplier of j or when j is reported to be a client of i. Furthermore, since this graph is not fully connected, we extracted its largest strongly connected component. Even though phase diagrams are not changed qualitatively if one changes the network, the features of the dynamics within one phase depends on the structure of interactions. As an illustration, we ran our model on the network represented on Fig. 19c for ε = 10 and ε = −3. Results are reported on Fig. 20 . While equilibria (whether competitive or deflationary) are reached in a somewhat similar manner as on a regular network, oscillatory patterns are much more disordered due to the inhomogeneity of in and out degrees. Finally, another effect closely related to network topology is worth mentioning. In the case of random regular networks, firms are always supplied by at least one firm. However it is possible for a firm to use labour as sole input. Upon simulating the dynamics on scale free networks with labour-supplied firms, we found that whenever deflationary equilibria occured, only a fraction of firms survived while the others saw their prices blow up exponentially and production plummet. 
Surviving firms are the ones for which going up the supplier network leads only to laborsupplied firms. household classes. Finally, the dynamics class handles the evolution of the model by storing the time series of prices, productions and so on, along with the different methods that allow the model to move forward in time; the dynamics class contains an instance of the economy class that is used for the simulation. In our framework, classes need one another to function properly. For instance, firms need to know the input-output network to compute their optimal quantities as in II.5. As a consequence, some methods take instances of classes in their arguments, as is the case in e.g. the compute optimal quantities method of the firm firms class. We detail this last method as an example in Procedure 1. Procedure 1 The firms class class Firms attributes z: 1d-array Productivity factors α: 1d-array Log elasticity factors of prices to surplus α : 1d-array Log elasticity factors of prices to profits β: 1d-array Log elasticity factors of productions to profits β : 1d-array Log elasticity factors of productions to surplus ω: float Log elasticity factors of wages to labor-market tensions methods 1d-array update prices(p(t), S (t), D(t), G (t), L (t)) Update prices according to (IV.9) float update wages(L s (t), L d (t)) Update wages according to (IV.10) 1d-array compute targets(p(t), Et[x(t)], S (t), γ(t)) Compute targets according to (IV.1) 4d-array compute forecasts(p(t), Et[x(t)], S (t)) Compute forecasts according to (IV.18, IV.17) 1d-array compute optimal quantities(γ(t), p(t), economy) Compute optimal quantities according to (II.5) 4d-array compute profits balance(p(t), x e (t), S (t), D(t)) Compute profits and balance according to (IV.7, IV.8) end class 2. Pseudo-code to execute one step of the time-line In Procedure 2, we present a pseudo-code to execute one time-step of the model. In order to get the full dynamics, one loops over this procedure during a time T , after a careful initialization. To initialize, one needs to give the dynamics an initial value for prices p i (t = 1), wages p 0 (t = 1), production levels γ i (t = 1), targetsγ i (t = 2), stocks I ij (t = 1) and savings S(t = 1). One also needs to carry out the initial planning by the household to have a value for E 1 [C d (t = 1)] and L s (t = 1). This entire process of initialization and loop over Procedure 2 in encapsulated into a class dynamics. This class stores the entire history of the most fundamental quantities (prices, demand matrix...) into array of the appropriate size, and uses reconstruction methods for all the inferable quantities (productions, targets, profits...). This way, the algorithm is quicker and less memory-demanding. Finally, Procedure 2 is quite detailed compared to the actual implementation. Bearing in mind complexity issues, most of the loops of Procedure 2 are implemented in a single line through matrix multiplication. Using the result (∆M ) ij = ∆ ii M ij and (M ∆) ij = ∆ jj M ij with ∆ a diagonal matrix, one can implement the procedure to go from demanded quantities to exchanged quantities as , 1, . . . , 1 x d (t) ∆ min 1, L s (t) L d (t) , min 1, S 1 (t) D 1 (t) , . . . , min 1, where we use the convention x d 00 = x e 00 = 0. Finally, we denote by ∂ in i (resp. ∂ out i ) the set of suppliers (resp. buyers) of firm i. 
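For readers who prefer executable code to pseudo-code, the following Python skeleton mirrors the structure of the firms class of Procedure 1. Attribute and method names follow the pseudo-code; the method bodies are deliberately left as stubs, except for a schematic Phillips-type wage rule, because the exact update rules are those of Eqs. (IV.1)-(IV.18), which we do not reproduce here.

```python
import numpy as np

class Firms:
    """Schematic counterpart of the 'firms' class of Procedure 1."""

    def __init__(self, z, alpha, alpha_p, beta, beta_p, omega):
        self.z = np.asarray(z)          # productivity factors
        self.alpha = alpha              # log-elasticity of prices to surplus
        self.alpha_p = alpha_p          # log-elasticity of prices to profits (alpha')
        self.beta = beta                # log-elasticity of productions to profits
        self.beta_p = beta_p            # log-elasticity of productions to surplus (beta')
        self.omega = omega              # log-elasticity of wages to labour-market tension

    def update_prices(self, p, S, D, G, L):
        raise NotImplementedError("multiplicative price update of Eq. (IV.9)")

    def update_wages(self, wage, L_s, L_d):
        # Schematic Phillips-type rule (an illustrative stand-in for Eq. (IV.10)):
        # wages drift down when labour supply exceeds labour demand.
        return wage * np.exp(-self.omega * (L_s - L_d) / max(L_s, 1e-12))

    def compute_targets(self, p, forecasts, S, gamma):
        raise NotImplementedError("production targets of Eq. (IV.1)")

    def compute_forecasts(self, p, past_exchanges, S):
        raise NotImplementedError("forecasts of Eqs. (IV.17)-(IV.18)")

    def compute_optimal_quantities(self, targets, p, economy):
        raise NotImplementedError("cost-minimising input mix of Eq. (II.5)")

    def compute_profits_balance(self, p, exchanges, S, D):
        raise NotImplementedError("profits and balances of Eqs. (IV.7)-(IV.8)")
```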
Aggregate fluctuations from independent sectoral shocks: self-organized criticality in a model of production and inventory dynamics Agent-based modeling and economic theory: where do we stand Cascading failures in production networks The macroeconomic impact of microeconomic shocks: Beyond hulten's theorem. Econometrica Putting the cycle back into business cycle analysis The financial accelerator and the flight to quality Networks and trade Marginally stable equilibria in critical ecosystems The phillips curve: Back to the '60s? Instabilities in large economies: aggregate volatility without idiosyncratic shocks How markets slowly digest changes in supply and demand Ecological communities with lotka-volterra dynamics The Macroeconomics of Imperfect Competition and Nonclearing Markets: A Dynamic General Equilibrium Approach Supply Chain Disruptions: Evidence from the Great East Japan Earthquake Production networks: A primer The billion prices project: Using online prices for measurement and research Foundations for a Disequilibrium Theory of the Business Cycle Shocks. Carnegie-Rochester Conference Series on Public Policy Analysis on causes and countermeasures of bullwhip effect The eurace@unibi model: An agent-based macroeconomic model for economic policy analysis Supply and demand shocks in the COVID-19 pandemic: an industry and occupation perspective A new approach to business fluctuations: heterogeneous interacting agents, scaling laws and financial fragility Emergent macroeconomics: An agent-based approach to business fluctuations Macroeconomic Policy in DSGE and Agent-Based Models On matrices with non-positive off-diagonal elements and positive principal minors A complete scheme for computing all direct and cross demand elasticities in a model with many sectors Nonlinear analogue of the may-wigner instability transition Monetary policy, inflation, and the business cycle: an introduction to the new Keynesian framework and its applications Mercator: uncovering faithful hyperbolic embeddings of complex networks Connectance of large dynamic (cybernetic) systems: Critical values for stability Simple heuristics that make us smart The Dynamics of General Equilibrium On endogenous competitive business cycles Temporary Equilibrium. Working Papers Endogenous crisis waves: Stochastic model with synchronized collective behavior Tipping points in macroeconomic agent-based models Monetary policy and dark corners in a stylized agent-based model Solving Ordinary Differential Equations II: Stiff and Differential-algebraic Problems Some conditions of macroeconomic stability Note: Some conditions of macroeconomic stability The equilibrium image of the market The invisible hand : economic equilibrium in the history of science On the psychology of prediction From stochastics to complexity in models of economic instability Generating random regular graphs Time to build and aggregate fluctuations Dynamical Structure and Spectral Properties of Input-Output Networks. NBER Working Papers 28178 Analytical note on certain rhythmic relations in organic systems How directed is a directed network Dynamical systems on large networks with predator-prey interactions are stable and exhibit oscillations Price dynamics, financial fragility and aggregate volatility Will a large complex system be stable? 
We thank Giulio Biroli, Doyne Farmer, Xavier Gabaix, Stanislao Gualdi, Alan Kirman, Johannes Lumma and Francesco Zamponi for their feedback on our research program, Vu Do Chi Toai for helping to set up the web app, and Camille Boissel for her help in understanding this model and her numerous insights about economics.

In this appendix, we show the computations that lead to the equilibrium equations on prices and production levels in the case of a general CES production function and a non-constant return to scale $b$.

The non-linear dynamics of the naive model are given by Eqs. (III.5). In this appendix, we derive the relaxation time of the system in various limits. To linearise the system, we write $p_i(t) = p_{\mathrm{eq},i} + \delta p_i(t)$ and $\gamma_i(t) = \gamma_{\mathrm{eq},i} + \delta\gamma_i(t)$ and inject these expressions into (III.5). After a few computations, we establish, to first order, a linear equation in the variable $U(t) = (\delta p(t), \delta\gamma(t))$; the different blocks of the corresponding matrix are detailed in Appendix C.

Appendix C: Blocks of the stability matrix

Equilibrium prices are easily obtained by applying $M^{-1}$ to the vector $|V\rangle$. Equilibrium productions can be a little trickier to obtain. We first derive three useful identities, valid for $s = 1, \ldots, n$, that simplify the calculations.

Appendix F: Code for the simulation

The simulation uses an object-oriented approach. Each object has attributes (parameters of the model, or other quantities that we can infer from the parameters) along with methods that carry out more complicated tasks. There are four objects in our simulation. The first two are the firms and household classes, and correspond to the smallest entities in our model (the agents).
The economy class carries all of the static information of the model — essentially the different parameters describing the interactions between the agents — along with instances of the firms and household classes.

Procedure 2  Fundamental time-step

Phase 1 – Planning
Input: L^s(t), γ(t), p(t), I(t), x(t)
for all firms i do
    Computation of targets according to forecasts
x(t) ← compute_optimal_quantities(γ(t+1), p(t), economy)

Phase 2
for all firms i do
    Workers are hired
Household adjusts its consumption demands (ν = 1 in this paper)
Exchanges of goods are carried out
Household consumes according to its budget
The household saves unspent money
for all firms i do
    x^a_i0(t) ← x_i0(t)                                  ▷ Labor available for production is the hired workforce
    for all firms j ∈ ∂^in_i do
        x^a_ij(t) ← x_ij(t) + min(x_ij(t), I_ij(t))      ▷ Available goods depend on exchanges and current stocks
p_0(t+1) ← update_wages(L^s(t), L^d(t), ω)               ▷ Wage is updated
for all firms i do
    p_i(t+1) ← update_prices(S_i(t), D_i(t), G_i(t), L_i(t), α, α′, β, β′)   ▷ Prices are updated
Output: x^e(t), x^p(t), S(t+1), B(t), p_0(t+1), G_i(t), G̃_i(t)

Phase 3 – Production
Input: S(t+1), B(t), p_0(t+1), G_i(t), G̃_i(t), x^e(t), x^p(t), x(t), I(t)
for all firms i do
    γ_i(t+1) ← production_function([x^a_ij]_{j∈∂^in_i})  ▷ Production begins
    Firms update inventories for their own good
    if q = 0 then                                        ▷ If the economy is Leontief, firms need to stock other goods in addition to their own
        j ← arg min x^p_ij(t)
        I_ij(t+1) ← e^{−σ} (x^a_ij(t) − x^u_ij(t))
for all firms i do
    p_i(t+1) ← p_i(t+1)/p_0(t+1)                         ▷ Prices are rescaled by the wage
B(t), S(t+1), p_0(t+1) ← B(t)/p_0(t+1), S(t+1)/p_0(t+1), 1   ▷ Rescaling of monetary quantities
C^d_i(t+1), L^s(t+1) ← compute_demands_labor(S(t), L^s(t), L^d(t), p(t+1), ω′, ϕ)   ▷ The household starts planning
Output: B(t), S(t+1), p_0(t+1) = 1, p_i(t+1), γ_i(t+1), C^d_i(t+1), L^s(t+1), I(t+1)
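To show how Procedure 2 slots into a full run, here is a minimal driver-loop sketch in Python. It only illustrates the initialize-then-iterate structure described above; the Dynamics object and its step() method are the illustrative stubs sketched earlier, not the authors' implementation, and the wage-numéraire rescaling at the end of Phase 3 is the only concrete operation shown.

def run(dyn, T):
    """Illustrative driver: after initialization (initial prices, wage, production
    levels, targets, stocks, savings and the household's first plan), one simply
    loops a Procedure-2-like step for T periods."""
    for _ in range(T):
        dyn.step()  # one pass of Procedure 2 (Phases 1-3)

def end_of_phase3_rescaling(p_next, p0_next, B, S_next):
    """Numeraire step at the end of Phase 3 (illustration only): prices and
    monetary quantities are expressed in units of the updated wage, so that the
    wage entering period t+1 equals 1."""
    p_next = p_next / p0_next                  # p_i(t+1) <- p_i(t+1) / p_0(t+1)
    B, S_next = B / p0_next, S_next / p0_next  # rescaling of monetary quantities
    p0_next = 1.0
    return p_next, p0_next, B, S_next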