Resilient Distributed Collection Through Information Speed Thresholds
Giorgio Audrito, Sergio Bergamini, Ferruccio Damiani, Mirko Viroli
Coordination Models and Languages, 2020-05-13. DOI: 10.1007/978-3-030-50029-0_14

Abstract. One of the key coordination problems in physically-deployed distributed systems, such as mobile robots, wireless sensor networks, and IoT systems in general, is to provide notions of "distributed sensing" achieved by the strict, continuous cooperation and interaction among individual devices. An archetypal operation of distributed sensing is data summarisation over a region of space, by which several higher-level problems can be addressed: counting items, measuring space, averaging environmental values, and so on. A typical coordination strategy to perform data summarisation in a peer-to-peer scenario, where devices can communicate only with a neighbourhood, is to progressively accumulate information towards one or more collector devices; this, however, typically exhibits problems of reactivity and fragility, especially in scenarios featuring high mobility. In this paper, we propose coordination strategies for data summarisation involving both idempotent and arithmetic aggregation operators, based on the idea of controlling the minimum information propagation speed, so as to improve reactivity to input changes. Given suitable assumptions on the network model, and under the restriction of no data loss, these algorithms achieve optimal reactivity. By empirical evaluation via simulation, accounting for various sources of volatility, and comparing to other existing implementations of data summarisation algorithms, we show that our algorithms are able to retain adequate accuracy even in high-variability scenarios where all other algorithms significantly diverge from correct estimations.

1 Introduction

Nowadays, physical environments are increasingly filled with heterogeneous connected devices (intelligent and mobile, such as smartphones, drones, and robots). These contexts call for new mechanisms of collective adaptation, ultimately supporting a view of environments as a true pervasive computing fabric, where sensing, actuation and computation are naturally seen as inherently resilient and distributed across physical space [16].

In this paper we are concerned with the design of a self-adaptive coordination strategy able to realise distributed sensing of physical properties of the environment, or of virtual/digital characteristics of the computational one. Through the strict cooperation and interaction of dynamic sets of mobile entities situated in physical proximity, distributed sensing can support forms of complex situation recognition [18], better monitoring of physical environments [16], and observation (and then control) of teams of agents [33]. In the context of coordination models and languages, field-based coordination [23, 31, 32] has recently been proposed as a framework to program increasingly complex self-organising coordination strategies for such scenarios. A paradigmatic coordination operation of distributed sensing is data summarisation performed on devices filling a region of space: it is a key component on top of which one can realise other operations such as counting, integration, averaging, maximisation, and the like.
In fact, data summarisation corresponds to the reduce phase of the MapReduce paradigm [19], ported into a "spatial" context of agents spread in a physical environment and communicating by proximity, and it has close analogues designed for wireless sensor networks [29]. Data summarisation can be solved by an algorithm of distributed collection, where information propagates towards one or more collector devices and is combined en route until reaching a unique value, i.e., the result of collection. This component of self-organising behaviour (sometimes named the "C" building block, in short [30]) is one of the most basic and widely used components of collective adaptive systems (CASs). Seen in terms of field-based coordination, collection is essentially a distributed coordination algorithm computing a specific case of "computational field" [3, 11], namely, a data structure distributed across space such that each device holds only the local value, which, in the case of collection, represents a partial result of the collection over a whole sub-region. This "brick" can be applied to a variety of different contexts, as it can be instantiated for values of any data type with an associative and commutative aggregation operator. However, implementing C can be very tricky, especially in mobile and faulty environments (i.e., with changes in the network of computational devices), which are the norm in several emerging application contexts, including airborne sensing by drones [15], crowd management by people's smartphones [14], and vehicular networks [25]: existing implementations based on heuristic reasoning (single-path and multi-path [5, 30]) tend to be very fragile in practice.

In this paper we present two new algorithms for effectively and efficiently carrying out the computation of the C building block, based on a theoretical approach backed up by simulation results, able to achieve adequate accuracy in highly volatile scenarios. In the algorithm for idempotent aggregation (e.g., set union, maximum), as in existing multi-path collection algorithms, data chunks flow through many possible links of the underlying proximity network. The links to use are selected by imposing differentiated thresholds on the minimum information propagation speed, thresholds which in turn are set to the highest value ensuring that data is not discarded by all neighbours (under suitable assumptions on the network configuration). Instead, in the algorithm for arithmetic aggregation (e.g., sum, product), data chunks flow through a single outgoing link, selected to ensure the maximum information propagation speed in the worst-case scenario. In both the arithmetic and the idempotent case, the chosen algorithms are designed to maximise the worst-case information propagation speed under the given assumptions. Notice that which of the two algorithms applies depends only on the problem at hand and not on the runtime setup of a network: a system designer can decide which of the two algorithms is to be exploited depending on the properties of the aggregation operator only, and there is no overlap, since arithmetic operators are never idempotent. We validate the performance of the algorithms in archetypal situations, taking into account agent mobility and discontinuities in network configuration, as well as network size and density.
Ultimately, by accounting for various sources of volatility, using different state-of-the-art distance estimations, and comparing to other existing implementations of aggregation algorithms, we show that these algorithms are able to retain acceptable precision even in high-variability scenarios where all other algorithms significantly diverge from correct estimations. The work of this paper is arguably a significant step in the context of engineering CASs. In general, the proposed coordination algorithms can be used as solid components for engineering collection services in highly distributed and mobile systems. Moreover, in the specific context of field-based coordination and the aggregate computing framework [14], these algorithms provide an implementation for the fundamental "C block" as advocated in [30], complementing that of the "G block" of [6]; together they form a set of combinators effectively supporting the construction of higher-level, self-stabilising coordination strategies in mobile distributed systems, such as, e.g., the SCR pattern proposed in [17].

The remainder of this paper is organised as follows. Section 2 presents the state of the art in data summarisation techniques and the necessary background. Section 3 presents the algorithms, together with the assumptions that ensure achieving optimal reactivity. Section 4 compares these algorithms with the state of the art in archetypal scenarios particularly hard for summarising algorithms. Finally, Sect. 5 concludes with directions of future research.

2 Background

In aggregate programming [14], a distributed network consists of mobile devices, capable of performing asynchronous computations and interacting by exchanging messages. Every device periodically performs the same sequence of operations, with a usually steady rate T: collection of received messages, computation, and transmission of the computed result to neighbours. Each such computation round constitutes an event ε, taking place on a device δ(ε) at a time t(ε). An event ε′ is a neighbour of an event ε, denoted ε′ ⇝ ε, if a message sent by ε′ was the last from δ(ε′) able to reach δ(ε) before ε occurred (and has not been discarded as obsolete since). Note that, in an actual asynchronous distributed system, a device could fire more frequently than another; hence multiple messages from a "fast" device could reach a "slow" target before it can fire a new round: the above definition allows us to focus only on the last one received. Similarly, no message from a "slow" device might reach a "fast" target during some rounds, and the above definition allows messages from such a slow device to be retained across rounds, increasing computation stability. Details on when messages are persisted or discarded are not given in the definition, leaving them as a choice during system design.

The neighbouring relation on events forms a directed acyclic graph (DAG), since it is time-driven and anti-symmetric (unlike spatial-only neighbouring, which is usually symmetric). The transitive closure of this relation defines the causality partial order ≤, so that ε′ ≤ ε iff there exists a sequence of events ε′ ⇝ · · · ⇝ ε connecting ε′ to ε. The causality relation defines which events constitute the past, the future, or are concurrent with respect to any given event. A set of events with a neighbouring and causality relation is also called an event structure (represented in Fig. 1), and provides a basis to formally define the behaviour of a distributed system. (Event structures for Petri Nets are used to model a spectrum of possible evolutions of a system, hence they also include an incompatibility relation, discriminating between alternate future histories and modelling non-deterministic choice. However, following [21], we use event structures to model a "timeless" unitary history of events, thus avoiding the need for an incompatibility relation.)
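To make the event model concrete, the following minimal Python sketch (an illustration under the assumptions above, not the paper's implementation; all names are ours) encodes events, the neighbour relation, and the causality check as reachability in the event DAG.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    device: int    # delta(eps): the device on which the event takes place
    time: float    # t(eps): the time at which the event takes place

# neighbours[eps] = set of events eps' with eps' ~> eps, i.e. the last
# message received from each neighbouring device before eps fired
neighbours: dict[Event, set[Event]] = {}

def causally_precedes(e1: Event, e2: Event) -> bool:
    """e1 <= e2 iff a chain of neighbour links connects e1 to e2."""
    frontier, seen = [e2], set()
    while frontier:
        e = frontier.pop()
        if e == e1:
            return True
        if e in seen:
            continue
        seen.add(e)
        frontier.extend(neighbours.get(e, ()))
    return False
```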
In the remainder of this paper, we shall use the following quantities and primitives:

- the radius R within which communication succeeds;
- the device δ(ε) and time t(ε) at which event ε takes place;
- the time difference (lag) between neighbour events, lag(ε′, ε) = t(ε) − t(ε′);
- the measured distance dist(ε′, ε) between neighbour events, possibly affected by errors.

The latter can be obtained in three main ways, depending on the times to which the two positions p′ (of δ(ε′)) and p (of δ(ε)) refer: (i) in GPS-based systems, p′ is the position measured at t(ε′) and p is the position measured at t(ε); (ii) if distance is sensed at message reception, both positions refer to t(ε′); (iii) if distance can be sensed at any moment, then both positions may refer to t(ε). Throughout the description of the algorithms we will use the notation X(ε) to represent a distributed value X depending on events, while X_ε′(ε) will denote a value depending on the neighbouring relationship ε′ ⇝ ε, that is, a quantity computed in ε with respect to a neighbour event ε′.

Recent works promoted an approach to engineer complex field-based coordination algorithms by combination of basic building blocks [30], capturing key mechanisms of self-organisation such as spreading (block "G"), collection (block "C"), time evolution (block "T"), leader election and partitioning (block "S"), measuring centrality [7], and so on. For instance, self-organising coordination regions can be developed by an S-G-C-G composition [17]. The most basic and versatile building block is the gradient (G block), which provides distance estimation, creates a spanning tree, and performs broadcast operations. In particular, the potential field P(ε) of distances from a source is a crucial input of every data aggregation routine (C block), providing the means to guide the direction of aggregation.
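As a concrete illustration of the G block, the following Python sketch shows one round of a plain gradient computation. It is a minimal baseline, not the optimised FLEX, BIS or ULT algorithms discussed next, and all names are ours.

```python
def gradient_round(is_source: bool,
                   neigh_potential: dict[int, float],
                   neigh_distance: dict[int, float]) -> float:
    """One round of the basic gradient: a source has potential 0; any
    other device takes the minimum over neighbours of their potential
    plus the estimated mutual distance."""
    if is_source:
        return 0.0
    return min((neigh_potential[d] + neigh_distance[d]
                for d in neigh_potential), default=float("inf"))
```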
Accurately computing distances in a distributed and volatile scenario is a demanding task, which can be tackled in different ways depending on the context. In spite of variations, the general framework is that of gradient-based field computations [23, 24], where local estimates of distance from the source are repeatedly shared with neighbours and combined with proximity estimates of mutual distance. If no proximity sensors are available, the coarse hop-count measure can be improved through statistical tools [22], obtaining continuous and adaptive distance estimates. Furthermore, even when a proximity sensor is available, reactivity to input changes and network variability may be impaired by the rising value problem: reaction to changes that increase distances is very slow [9]. Several solutions have been proposed to tackle this problem. Following recent reviews of distance estimation algorithms [6, 9], three solutions are shown to always outperform basic algorithms: FLEX [12], BIS [8], and ULT [6]. FLEX aims at maximising the stability of values while containing the error within predictable bounds, and also addresses the rising value problem by introducing a metric distortion. BIS, instead, exploits time information to solve the rising value problem, obtaining optimal single-path reactivity to input changes, without concern for value stability. ULT develops on BIS by adding a stale-values detector running at (faster) multi-path speed, while addressing value stability with the addition of filters and dampers. Being obtained by the integration of different methods, ULT is tuned by a large number of parameters, and can range from being almost identical to BIS (when filters and dampers are disabled) to being closer to FLEX (when dampers are active).

Data collection (also called aggregation) is a key component of distributed algorithms. It has been tackled in different ways depending on the application context (e.g., wireless sensor networks [26, 29], high-performance computing [19], and spatial computing [13]). Notably, all of these different approaches rely on the same basic mechanisms. In data collection, distributed values are combined together through an aggregation operator ⊕ that enjoys the following properties:

1. associativity: (u ⊕ v) ⊕ w = u ⊕ (v ⊕ w);
2. commutativity: u ⊕ v = v ⊕ u.

Provided that the above properties hold, the aggregation ⊕C of the elements of a multiset C is well-defined: the order in which the individual elements are aggregated is immaterial. Some common aggregation operators are the idempotent operators maximum and minimum, and the arithmetic operators addition and multiplication. Scenarios with intrinsic communication errors and input volatility (e.g., wireless sensor networks and spatial computing) require a further property:

3. continuity: the effect on the aggregation of a certain percentage p of errors tends to zero as p tends to zero.

This property holds for the idempotent and arithmetic aggregation operators cited above; however, it does not hold for other operations such as modular sum: the modular addition of a single spurious element can fully disrupt the outcome of the aggregation of an arbitrarily big collection of elements.

In the context of an environment with proximity-based interactions, given a commutative and associative operator, a data aggregation algorithm asynchronously combines input values x(ε) from different devices into a single value in a selected device called source (or collector). The algorithm manages the flow of data towards the source so as to avoid multiple aggregation of the same values. This twofold prerequisite, of acyclic flows directed towards the source, is met by relying on a given potential field P(ε), approximating a certain measure of distance from the selected source. As long as information flows by descending the potential field, cyclic dependencies are prevented and eventual reaching of the source is guaranteed. For each event ε, potential descent is enforced by splitting the set of neighbour events E_ε = {ε′ | ε′ ⇝ ε} according to their potential value into the two disjoint sets:

    E⁺_ε = {ε′ ∈ E_ε | P(ε′) > P(ε)},    E⁻_ε = {ε′ ∈ E_ε | P(ε′) ≤ P(ε)}.

Thus, values can be received only from E⁺_ε and must be sent only towards E⁻_ε. Three main algorithms implementing the collection block have been proposed so far: single-path, multi-path, and weighted multi-path; all scale to arbitrarily large systems, as they require constant computational resources per node.
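Before reviewing the three algorithms, the following Python helper (a sketch with illustrative names, shared by the examples below) encodes the potential-descent split of neighbours just described.

```python
def split_neighbours(P_self: float,
                     neigh_P: dict[int, float]) -> tuple[set[int], set[int]]:
    """Partition neighbour devices by potential: partial aggregates are
    received from E+ (strictly higher potential) and sent towards E-."""
    E_plus = {d for d, p in neigh_P.items() if p > P_self}
    E_minus = set(neigh_P) - E_plus
    return E_plus, E_minus
```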
Single-Path Aggregation. The single-path algorithm C^sp ensures that information flows through a forest in the network, by sending the whole partial aggregate C^sp(ε) computed during event ε to the single neighbour event m(ε) = argmin_{ε′ ∈ E_ε} P(ε′) with minimum potential among all neighbour events in E_ε. This is accomplished by repeatedly applying the following rule:

    C^sp(ε) = x(ε) ⊕ ⊕{ C^sp(ε′) | ε′ ∈ E⁺_ε, δ(ε) = δ(m(ε′)) }    (1)

Equation (1) computes the partial aggregate in ε by combining the local input value x(ε) with the partial aggregates of direct predecessors with higher potential for which δ(ε) is the selected device δ(m(ε′)). A screenshot of this algorithm after convergence is reached is shown in Fig. 2. Since data flows down the potential as fast as possible, single-path aggregation attains optimal reactivity to input changes in static environments. However, in mutable environments, the message from ε to m(ε) may be lost, disrupting communication and pruning the entire branch of the forest rooted in ε. This phenomenon translates into poor performance whenever values far from the source contribute significantly to the aggregation (e.g., non-zero values for summation, high values for minimisation, and so on).

Multi-path Aggregation. The multi-path algorithm C^mp allows information to flow through every path compatible with the given potential field. In order to avoid double counting, it is thus necessary to divide the partial aggregate of an event equally among every event with lower potential, by iteratively applying the following rule:

    C^mp(ε) = x(ε) ⊕ ⊕{ C^mp(ε′) ⊘ N(ε′) | ε′ ∈ E⁺_ε }    (2)

where N(ε) = |E⁻_ε| and ⊘ is a binary operator such that v ⊘ n means "dividing v by n", i.e., it yields an element that, aggregated with itself n times, produces the original value v. Since information needs to be "divisible" for ⊘ to exist, two categories of aggregation operators are supported:

1. arithmetic operations, e.g., point-wise sum and multiplication of vectors v ∈ Rⁿ of real numbers (for which ⊘ is respectively division and root extraction);
2. idempotent operations, e.g., computation of maximum and minimum among values v in a partially ordered set (for which ⊘ is the identity function).

Thus, theoretically, multi-path has a narrower scope than single-path. However, the vast majority of practically occurring (continuous) aggregation operators can typically be recast to be either arithmetic or idempotent. In particular, idempotent operations have been used to emulate several different aggregations through statistical tools: distinct count, sum, uniform sampling, selection of most frequent values [26], and order statistics [34]. Since data flows through every possible path, it is unlikely for devices to be excluded from the aggregation, thus preventing data loss. On the other hand, the reactivity to input changes of multi-path aggregation is particularly poor: even in static environments, values flow through every possible path, including the longest ones, forcing reaction to changes to be delayed until all paths have been exploited (in particular for idempotent operations), and resulting in a reaction speed inversely proportional to the device density. In mutable environments, the problem is further exacerbated by the creation of information loops, which occur when two or more moving devices of similar potential invert their relative potential order in consecutive rounds, causing information from a device δ to come back to the same device. This slows down even further the reaction speed of the algorithm, and induces exponential overestimations in the arithmetic case.
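The two rules above (Eqs. (1) and (2)) can be sketched in Python as follows, for an arithmetic aggregation (⊕ = +, so ⊘ is division); the data structures are illustrative, standing for the values shared by neighbours in E⁺ during their last event.

```python
def single_path_round(x_self: float, self_id: int,
                      higher: dict[int, tuple[float, int]]) -> float:
    """Eq. (1): `higher` maps each device in E+ to (its partial
    aggregate, the device m it selected as recipient); only the
    aggregates addressed to us are combined with our own input."""
    return x_self + sum(v for v, m in higher.values() if m == self_id)

def multi_path_round(x_self: float,
                     higher: dict[int, tuple[float, int]]) -> float:
    """Eq. (2): `higher` maps each device in E+ to (its partial
    aggregate, its count N = |E-|); every share is combined after
    division by the sender's number of lower-potential neighbours."""
    return x_self + sum(v / n for v, n in higher.values())
```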
if the "receiving" device is too close to the edge of proximity of the "sending" device, so that it might step outside of it in the immediate future breaking the connection; 2. if the potential of the "receiving" device is too close to the potential of the "sending" device, so that their relative role of sender/receiver might be switched in the immediate future, possibly creating an "information loop" between the two devices. where R is the communication radius and dist( , ) the distance measured between the events. Since these weights do not sum up to any particular value, they need to be normalised by the factor N ( ) = ∈E − w ( ), obtaining normalised weights w ( )/N ( ). The partial aggregates accumulated by devices can then be calculated as in C mp (see 2) with the addition of weights, by iteratively applying the following rule: where ⊗ is a binary operator such that v⊗k "extracts" a certain percentage k of a local value v. 5 In particular, if ⊕ is arithmetic (addition) then ⊗ is multiplication, whereas if ⊕ is idempotent then ⊗ is a threshold function regulating which links should be exploited for transmission and which should be ignored. This algorithm has been shown to significantly outperform both the singlepath and multi-path strategies, however, it is based on heuristics hence cannot provide correctness guarantees: in fact, it produces exponentially growing peaks of error for arithmetic aggregations in scenarios with high mobility [5] . In this section, we present the Lossless Information Speed Thresholds collection algorithm (C list ). It maximises information speed under the general assumptions presented in Sect. 2.1 and the additional assumptions on the network model given in Sect. 3.1, with respect to the algorithms satisfying the constraints given in Sect. 3.2. As for the other summarisation algorithms, we assume a potential field P ( ) to be available as input in each event. Given an event , we denote as next the following event on the same device, so that next and δ( ) = δ( next ). In order for C list to be computed, we need a minimal degree of forecasting values in next events next , as stated by the following assumptions. -Sure connection. For each event and neighbour , there is a Boolean value surelyConnected ( ) which is true iff is sure that its messages will be received by the next event next on δ( ), and is true for at least one neighbour event . Such value can be computed using an upper bound on distance dist( , ) together with a lower bound on connection radius R and possibly an upper bound V on device movement speed, as in the following: where k is 0 if dist refers to t( ), 1 if it refers to both t( ) and t( ) (GPSbased), 2 if it refers to t( ) (see Sect. 2.1). -Scheduled time. For each event , we assume that an upper bound t u ( ) to t( next ) is known. Notice that this is easily satisfied with high accuracy, as activations need to be scheduled and do not happen randomly. -Potential evolution. For each event , we assume that an upper bound P u ( ) to P ( next ) is known. For instance, given the upper bound V on device movement speed, we may set P u ( ) = P ( )+V ·(t u ( )−t( )). This bound may need to be corrected for the error on potential computations, and could be significantly improved if the movement direction is known. Under the previous assumptions, we focus on collection algorithms satisfying the following constraints. -Lossless. 
Under the previous assumptions, we focus on collection algorithms satisfying the following constraints.

- Lossless. A collection algorithm is lossless if it ensures that the input value x(ε) in any event ε participates in the outcome C(ε₀) of the algorithm for at least one event ε₀ on the collection source (that is, such that P(ε₀) = 0).
- Scalable. We say that a distributed algorithm is scalable if it uses O(1) message size and O(N) computation time and space in every event ε, where N = |E_ε| is the number of neighbours.

In the idempotent case, data duplication is not an issue, and thus data loss can easily be avoided by resorting to a multi-path algorithm. However, as we will see in Sect. 4.1, plain multi-path is slow in recovering, to the point of being effectively equivalent to a gossip algorithm [20]. We thus propose an algorithm that adopts an intermediate strategy (as in previous heuristic-based attempts [4, 5]), transmitting data on a selected set of links while maximising the speed of information flow v (measured as units of potential descended per unit of time) under the assumptions on the network model illustrated in Sect. 3.1. In fact, by discarding for every starting event the longer paths towards the source and preserving the shortest ones, we ensure that old information is quickly discarded, thus allowing the algorithm to promptly adjust to input changes.

Notice that it is not possible for a scalable algorithm to select paths by their overall information speed v, since partial results would not be locally computable in intermediate events. Given the candidate values i reaching a same event with a potential descended of ΔP_i and a time elapsed of Δt_i, we would need to select a constant-sized subset of them without knowing the additional time Δt needed to reach the source, and thus without knowing the overall speed that each candidate may achieve. Thus, we indirectly select paths by imposing speed constraints on each one of their edges. Given a potential field P(ε) of distances from the source, we compute a threshold speed θ(ε) for each event ε, so that a message from ε′ to ε is discarded iff

    v(ε′, ε) = (P(ε′) − P(ε)) / lag(ε′, ε) < θ(ε′)

that is, iff the information from ε′ to ε is descending the potential at a speed lower than the threshold θ(ε′) computed in ε′. We allow these thresholds to depend on the event, as a fixed global threshold can easily induce loss of data for large parts of the network. Furthermore, we compute these thresholds as the maximal ones (in order to prune as many paths as possible) granting that at least one neighbour will not discard the message (lossless algorithm).

In order to compute these thresholds efficiently and effectively, we rely on the network model assumptions in Sect. 3.1. For each event ε, we need to prevent at least one of the neighbour events ε′ for which surelyConnected_ε′(ε) is true from discarding the message. We then use P_u(·) and t_u(·) to predict a lower bound on the speed of the information flowing from ε to ε′_next:

    v^wst_ε′(ε) = (P(ε) − P_u(ε′)) / (t_u(ε′) − t(ε))    (7)

Thus, the maximum threshold ensuring no data loss is the following:

    θ(ε) = max{ v^wst_ε′(ε) | ε′ ⇝ ε, surelyConnected_ε′(ε) }    (8)

The partial aggregates accumulated by devices can then be calculated by iteratively applying the following rule:

    C^list(ε) = x(ε) ⊕ ⊕{ C^list(ε′) | ε′ ∈ E⁺_ε, v(ε′, ε) ≥ θ(ε′) }    (9)

The algorithm C^list, globally defined by Eqs. (7) to (9), computes the partial aggregate associated with event ε by combining the local value x(ε) with the partial aggregates of those direct predecessors ε′ for which the true information speed v(ε′, ε) was above the threshold θ(ε′) computed in the previous events.
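A round of C^list for an idempotent operator (⊕ = min) can be sketched in Python as follows; the message contents and data structures are illustrative, but the speed test and the threshold computation mirror Eqs. (7) to (9).

```python
def c_list_idempotent_round(x_self: float, P_self: float, t_self: float,
                            incoming: list[tuple[float, float, float, float]],
                            sure_next: list[tuple[float, float]]
                            ) -> tuple[float, float]:
    """incoming: (value, P, t, theta) of each neighbour event in E+;
    sure_next: (P_u, t_u) bounds for each surely-connected neighbour.
    Returns the new partial aggregate and the new threshold theta."""
    # Eq. (9): keep values whose realised speed passes the sender's threshold.
    kept = [v for v, P_s, t_s, th_s in incoming
            if (P_s - P_self) / (t_self - t_s) >= th_s]
    value = min([x_self] + kept)  # idempotent aggregation, here min
    # Eqs. (7)-(8): the highest threshold such that at least one
    # surely-connected neighbour is still guaranteed to keep our value.
    v_wst = [(P_self - P_u) / (t_u - t_self) for P_u, t_u in sure_next]
    theta = max(v_wst) if v_wst else 0.0
    return value, theta
```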
Although every event computes the threshold by maximising the expected future information speed, and thus by choosing a neighbour that theoretically guarantees the best speed, C^list is not a single-path algorithm: messages to next events can flow at a speed greater than the estimated v^wst_ε′(ε) (defined in Eq. (7)), and thus pass the threshold even though the threshold was not designed for them. According to the above explanation, the following property holds.

Property 1 (C^list local optimality among lossless collection algorithms). Let θ′(ε) be such that, using information available in an event ε, it is possible to guarantee a lowest speed of information exiting ε of at least θ′(ε) without data loss. Then the lowest speed of information exiting ε for C^list is at least θ′(ε).

In the arithmetic case, the situation is more challenging due to the necessity of avoiding data duplication, which can in this case lead to exponentially increasing overestimates. In order to avoid it, we modify C^list to become a purely single-path algorithm (we also need to guarantee that a message from an event is not able to reach more than one event on a same device, that is, messages are not retained across rounds), although the main structure remains the same. Based on Eqs. (7) and (8), each event ε selects as recipient the surely connected neighbour event m(ε) attaining the maximal worst-case information speed v^wst_ε′(ε). Partial aggregates can then be accumulated as in C^sp (see Eq. (1)):

    C^list(ε) = x(ε) ⊕ ⊕{ C^list(ε′) | ε′ ∈ E⁺_ε, δ(ε) = δ(m(ε′)) }

Thus, the C^list algorithm for arithmetic aggregation computes partial aggregates by combining the local value x(ε) with the partial aggregates of those direct predecessors ε′ for which δ(ε) was the selected device δ(m(ε′)).
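The recipient selection of the arithmetic variant can be sketched in Python as follows (illustrative names; the bounds P_u and t_u are those of Sect. 3.1).

```python
def select_recipient(P_self: float, t_self: float,
                     sure_next: dict[int, tuple[float, float]]) -> int:
    """Return the device m maximising the worst-case information speed
    of Eq. (7) among the surely-connected neighbours: the whole partial
    aggregate travels along this single link, avoiding duplication."""
    def v_wst(d: int) -> float:
        P_u, t_u = sure_next[d]
        return (P_self - P_u) / (t_u - t_self)
    return max(sure_next, key=v_wst)
```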
4 Evaluation

We compared the new algorithms against reference single-path, multi-path and weighted multi-path implementations (sp [30], mp [30], wmp [5]). The algorithms were implemented in Protelis [28], an implementation of the field calculus [11], a universal language for field-based computations [3]. In particular, the implementation uses the recently proposed share operator [2]. The potential estimates guiding aggregation were computed using the state-of-the-art algorithm BIS introduced in [8] (see Sect. 2.2), ensuring theoretically optimal recovery speed. We also tested the usage of an exponential back-off filter to stabilise the collection results; however, in the following graphs we report its usage only for list on arithmetic aggregation, since that was the only case where it had a positive effect.

For both the idempotent and the arithmetic case, the same archetypal scenarios were selected, according to the guidelines developed in [9]. The scenarios consisted of a variable number of devices with almost identical computation rate (1% systematic and accidental error) and unit-disc communication model, randomly distributed in a circular area, with a source device on the right end of the circle at simulation start, then discontinuously moved to the left end. Devices were moving at constant speed through randomly selected waypoints within the area. The scenarios were tested varying the three fundamental characteristics of such a network (all normalised in order to abstract from a specific communication radius or computation rate):

- Hop diameter: the diameter of the circular area where devices are randomly displaced, measured as the number of communication radiuses (hops) contained. Values from 2 to 16 were considered (with a step of 1), using 10 when evaluating the other characteristics.
- Neighbourhood size: the average number of devices in a communication radius area. Values from 5 to 40 were considered (with a step of 2.5), using 25 when evaluating the other characteristics.
- Device speed: the movement speed of devices, measured as a percentage of the communication radius covered during one computation round. Values from 0 to 50% were considered (with a step of 2.5%), using 25% when evaluating the other characteristics.

For each of the resulting 49 different scenarios, 10 runs with different random seeds were performed, averaging the results. The default values (10 hops, 25 neighbours, 25% speed) were chosen after a broader search in the parameter space, as they were good representatives of the behaviour for most considered parameter values. The simulations were performed with the Alchemist simulator [27] on the OCCAM supercomputer [1].

4.1 Idempotent Aggregation

We tested collection for idempotent operators by setting ⊕ = min, with values to be aggregated chosen to make the aggregation as difficult as possible, showcasing every possible source of error. In fact, a difficult idempotent aggregation problem requires both obsolete and distant values to be able to significantly contribute to the aggregation: if obsolete values have a negligible impact, multi-path collection is optimal, as it does not need to react to environmental changes; if distant values have a negligible impact, single-path collection is optimal, since even a small coverage of the network may be sufficient. In order to maximise the impact of distant values, we selected a set X of devices at the opposite border of the circular area with respect to the active source. Devices in X transmit a changing value which will be the result of the aggregation, while devices outside X hold a fixed high value (set to 400) which is never the minimum. In order to showcase the impact of obsolete data, the values transmitted in X changed over time according to a capped sinusoid (see Fig. 3 for a graphical depiction):

    x(ε) = max(−M, min(M, A · sin(2π · (t(ε) + φ) / T)))

where t(ε) is the time elapsed from the start of the simulation, A = 300 is the amplitude, T = 250 is the period, φ = −25 is the phase, and values are capped to stay within ±M = ±220. Furthermore, at the time t = 300 of the source switch, x(ε) becomes a constant equal to 220. This allows us to observe the behaviour in all possible conditions: after a disruption, under steady inputs, and when the input rises or drops.
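For concreteness, the input signal can be sketched in Python as follows; the phase convention (adding φ inside the sine argument) is our assumption, as only the parameter values survive in the text.

```python
import math

A, T, PHI, M = 300.0, 250.0, -25.0, 220.0  # amplitude, period, phase, cap
T_SWITCH = 300.0                           # time of the source switch

def x_input(t: float) -> float:
    """Capped sinusoid transmitted by devices in X, constant at M = 220
    after the source switch (phase convention assumed)."""
    if t >= T_SWITCH:
        return M
    return max(-M, min(M, A * math.sin(2.0 * math.pi * (t + PHI) / T)))
```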
Figure 3 summarises the evaluation results. Single-path proves unable to properly collect values from X in most situations except for some short time intervals, thus showing extreme variability in results, except when the number of hops is small, neighbourhood sizes are high, and device speeds are low. Multi-path produces very good results until t = 200, but is unable to recover when the input rises (not even after a source change), in fact behaving as a gossip algorithm, except in small networks with low density and speeds. Weighted multi-path performs quite well in all configurations, but is outperformed by list in all cases except for very high speeds (>40%). At such high speeds, avoiding information losses forces list to choose a pessimistically low threshold, which could be significantly higher while keeping a low (but non-zero) probability of loss. Finally, notice that the source switch has a minimal impact on all algorithms for idempotent aggregations.

4.2 Arithmetic Aggregation

We tested collection for arithmetic operators by setting ⊕ = + and x(ε) = 1 for each device. This choice amounts to counting the total number of devices, a commonly used routine and a paradigmatic example of arithmetic aggregation. We ran 10 instances of each scenario and computed median results, as the relative standard errors between runs were significantly high. Figure 4 summarises the evaluation results.

The single-path (sp) and multi-path (mp) algorithms score the worst results. Single-path underestimates the ideal value by a factor of 10 at all speeds above 5%, an error that worsens as the total number of devices increases (whether through hops or neighbourhood size), showing the existence of an upper bound on the number of devices that are able to reach the source. Conversely, multi-path significantly overestimates the ideal value, with errors that grow approximately linearly with the number of hops or neighbours, and exponentially with speed. Weighted multi-path shows a behaviour similar to multi-path but with a lower error: in particular, unlike mp, its error decreases as the number of neighbours increases, showing better performance in high-density scenarios. Finally, list scores the best performance in every scenario, only slightly underestimating the ideal value, with an error that tends to zero as the number of neighbours increases, and that remains reasonably small (below 10%) even for speeds around 30%. Unlike for the other algorithms, adding an exponential back-off filter further improves its performance. Notice that the source switch at t = 300 has the effect of disrupting the aggregation process for a short period of time, during which the algorithms show positive peaks (for the multi-path based algorithms mp, wmp) or negative peaks (for the single-path based algorithms sp, list). The recovery time after the switch is similar across algorithms, although the positive peaks are larger in size (overestimating the value by about 3 orders of magnitude). As shown in Fig. 4 (top right), mp and wmp are always highly unstable, with peak overestimations of 5×, while sp and list have a more contained (though still significant) degree of instability.

5 Conclusions

In this paper, we presented two new algorithms tackling the established problem of data summarisation, for both idempotent and arithmetic operations. These algorithms are designed to maximise the speed of information flow (which translates into reactiveness to input changes) under the constraint of no data loss. We evaluated these algorithms in archetypal scenarios of maximal hardness, varying all fundamental (dimensionless) characteristics of a distributed network: diameter in hops, average number of neighbours, and node speed (relative to the ratio between communication radius and computation period). Overall, these algorithms significantly outperform the state-of-the-art, obtaining sound results even in scenarios with high mobility.

However, there is still margin for future improvement. In very high mobility settings, the no-data-loss constraint forces our algorithms into an overly pessimistic behaviour, thus losing performance with respect to heuristic (lossy) techniques. In this case, future algorithms enforcing a relaxed constraint of a maximum expected percentage of data loss may allow for a more effective choice of the thresholds. Furthermore, our algorithms rely on a rough prediction of quantities (time and potential) across rounds: future work may directly address the prediction step, as more accurate predictions will directly translate into higher information speed thresholds, and thus reactiveness.
References

[1] The open computing cluster for advanced data manipulation (OCCAM)
[2] The share operator for field-based coordination
[3] Space-time universality of field calculus
[4] Resilient blocks for summarising distributed data
[5] Effective collective summarisation of distributed data in mobile multi-agent systems
[6] Compositional blocks for optimal self-healing gradients
[7] Aggregate graph statistics
[8] Optimally-self-healing distributed gradient structures through bounded information speed
[9] Optimal single-path information propagation in gradient-based algorithms
[10] Distributed real-time shortest-paths computations with the field calculus
[11] A higher-order calculus of computational fields
[12] Flexible self-healing gradients
[13] Spatial computing: distributed systems that take advantage of our geometric world
[14] Aggregate programming for the Internet of Things
[15] Adaptive opportunistic airborne sensor sharing
[16] Self-organizing virtual macro sensors
[17] Self-organising coordination regions: a pattern for edge computing
[18] Context is key
[19] MapReduce: simplified data processing on large clusters
[20] Gossip-based aggregation in large dynamic networks
[21] Time, clocks, and the ordering of events in a distributed system
[22] Gradient-based distance estimation for spatial computers
[23] Asynchronous distributed execution of fixpoint-based computational fields
[24] Co-fields: a physically inspired approach to motion coordination
[25] Vehicular Networks: Techniques, Standards, and Applications
[26] Synopsis diffusion for robust aggregation in sensor networks
[27] Chemical-oriented simulation of computational systems with ALCHEMIST
[28] Protelis: practical aggregate programming
[29] A survey on data routing and aggregation techniques for wireless sensor networks
[30] Engineering resilient collective adaptive systems by self-stabilisation
[31] From distributed coordination to field calculus and aggregate computing
[32] A calculus of self-stabilising computational fields
[33] Aggregate plans for multiagent systems
[34] Duplicate-insensitive order statistics computation over data streams
[35] Impact of radio irregularity on wireless sensor networks