title: Counting People by Estimating People Flows
authors: Liu, Weizhe; Salzmann, Mathieu; Fua, Pascal
date: 2020-12-01

Modern methods for counting people in crowded scenes rely on deep networks to estimate people densities in individual images. As such, only very few take advantage of temporal consistency in video sequences, and those that do only impose weak smoothness constraints across consecutive frames. In this paper, we advocate estimating people flows across image locations between consecutive images and inferring the people densities from these flows instead of directly regressing them. This enables us to impose much stronger constraints encoding the conservation of the number of people. As a result, it significantly boosts performance without requiring a more complex architecture. Furthermore, it allows us to exploit the correlation between people flow and optical flow to further improve the results. We also show that leveraging people conservation constraints in both a spatial and temporal manner makes it possible to train a deep crowd counting model in an active learning setting with much fewer annotations. This significantly reduces the annotation cost while still leading to similar performance to the full supervision case.

Crowd counting is important for applications such as video surveillance and traffic control. For example, during the current COVID-19 pandemic, it has a role to play in monitoring social distancing and slowing down the spread of the disease. Most state-of-the-art approaches rely on regressors to estimate the local crowd density in individual images, which they then proceed to integrate over portions of the images to produce people counts. The regressors typically use Random Forests [1], Gaussian Processes [2], or more recently Deep Nets [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17]. When video sequences are available, some algorithms use temporal consistency to impose weak constraints on successive density estimates. One way is to use an LSTM to model the evolution of people densities from one frame to the next [5]. However, this does not explicitly enforce the fact that people numbers must be strictly conserved as they move about, except at very specific locations where they can move in or out of the field of view. Modeling this was attempted in [18] but, because expressing this constraint in terms of people densities is difficult, the constraints actually enforced were much weaker. In this paper, we propose to regress people flows, that is, the number of people moving from one location to another in the image plane, instead of densities. To this end, we partition the image into a number of grid locations and, for each one, we define ten potential flows, one towards each neighboring location, one towards the location itself, and the last towards regions outside the image plane. The flow towards the location itself enables us to account for people who stay in the same location from one instant to the next, and the final flow accounts for people who enter or exit the field of view. In our experiments, we only use it at the boundaries of the image plane because there are no occluded regions in our datasets. However, if there were occluded regions within the scene, we could simply also use that last channel for motions in and out of those.
In this scenario, the places where the tenth channel is to be used would have to be scene-specific, and our approach offers the required flexibility. Fig. 1 depicts some of the ten flows we compute. All the flows incident on a grid location are summed to yield an estimate of the people density in that location. The network can therefore be trained given ground-truth estimates only of the local people densities, as opposed to people flows. In other words, even though we compute flows, our network only requires ground-truth density data for training purposes, like most others. Our formulation allows us to impose people conservation constraints, meaning that people do not teleport from one region of the image to another, much more effectively than earlier approaches. This increases performance using network architectures that are neither deeper nor more complex than state-of-the-art ones. Furthermore, regressing people flows instead of densities provides a scene description that includes the motion direction and magnitude, both of which are useful for crowd analytics. This also enables us to exploit the fact that people flow and optical flow should be highly correlated, as illustrated by Fig. 1, which provides an additional regularization constraint on the predicted flows and further enhances performance. We will demonstrate on five benchmark datasets that our approach to enforcing temporal consistency brings a substantial performance boost compared to state-of-the-art approaches. We will also show that when the cameras can be calibrated, we can apply our approach in the ground plane instead of the image plane, which further improves performance. Another key strength of our flow-based approach is that we can use it to recast our fully-supervised approach, as described above, in an Active Learning (AL) context that drastically reduces the supervision requirements without giving up accuracy. More specifically, our network learns to enforce people conservation as best it can, but the constraints can still be violated. Our AL approach therefore involves first annotating a fraction of the training images, using them to train the network, running it on the others, selecting the areas where the constraints are most violated for further human annotation, and iterating. In effect, we use people conservation constraints to provide self-supervision and to make active learning possible. We will show that, by the time we have annotated about 6.25% of the images, we achieve almost the same accuracy as when annotating all of them and outperform several state-of-the-art approaches trained using full supervision. Our contribution is therefore a novel flow-based approach to estimating people densities from video sequences that enforces strong temporal consistency constraints without requiring complex network architectures. Not only does it boost performance, it also makes it possible to implement an active-learning approach that leverages the expected consistency to reduce sixteen-fold the required amount of annotated data while preserving accuracy. The fully-supervised version of our framework was first introduced in a conference article [19]. We extend it here by introducing an active learning mechanism and showing that reasoning in the ground plane, instead of the image plane, further improves performance by eliminating perspective distortion artifacts.
Given a single image of a crowded scene, the currently dominant approach to counting people is to train a deep network to regress a people density estimate at every image location. This density is then integrated to deliver an actual count [18], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30], [31], [32], [33]. In this section, we first review these approaches and then existing attempts at reducing the amount of supervision they require. The majority of existing people counting approaches work on single images. We discuss them first and then move on to those that exploit temporal consistency and model people flows.

Single Image Crowd Counting. The people density we want to measure is the number of people per unit area on the ground. However, the deep nets operate in the image plane and, as a result, the density estimate can be severely affected by the local scale of a pixel, that is, the ratio between image area and corresponding ground area. There are many ways to address this issue. For example, the algorithms of [34], [35] use geometric information to adapt the network to different scene geometries. Because this information is not always readily available, other works have focused on handling the scale implicitly within the model. In [6], this was done by learning to predict pre-defined density levels. By contrast, the algorithms of [7], [36] use image patches extracted at multiple scales as input to a multi-stream network. They then either fuse the features for final density prediction [36] or introduce an ad hoc term in the training loss function [7] to enforce prediction consistency across scales. In [3], [37], multi-scale features are learned by using different receptive fields and combined to predict the density. Other works propose to handle scale variation in an adaptive way. In [38], this is done by weighting different density maps generated from input images at various scales. The algorithms of [4], [10] train an extra classifier to assign the best receptive field for each image patch. More recently, [22] proposed to extract features at multiple scales and learn how to adaptively combine them. Instead of explicitly handling scale variations in the image plane, other single image crowd counting algorithms improve performance by using synthetic images [33], [39], leveraging auxiliary tasks [40], [41], encoding attention mechanisms [23], [42], [43], [44], employing a Bayesian loss function [45], exploiting multiple views [46] or depth information [47]. We refer the reader to the recent survey [48] for more detail.

Enforcing Temporal Consistency. While most methods work on individual images, a few have been extended to exploit temporal consistency. Perhaps the most popular way to do so is to use an LSTM [49]. For example, in [5], the ConvLSTM architecture [50] is used for crowd counting purposes. It is trained to enforce consistency in both the forward and backward directions. In [51], an LSTM is used in conjunction with an FCN [52] to count vehicles in video sequences. A Locality-constrained Spatial Transformer (LST) is introduced in [53]. It takes the current density map as input and outputs density maps in the next frames. The influence of these estimates on crowd density depends on the similarity between pixel values in pairs of neighboring frames. While effective, these approaches have two main limitations.
First, at training time, they can only be used to impose consistency across annotated frames and cannot take advantage of unannotated ones to provide self-supervision. Second, they do not explicitly enforce the fact that people numbers must be conserved over time, except at the edges of the field of view. The recent method of [18] addresses both these issues. However, as will be discussed in more detail in Section 3.1, because the people conservation constraints are expressed in terms of numbers of people in neighboring image areas, they are much weaker than they should be.

Introducing Flow Variables. Imposing strong conservation constraints when tracking people was a concern long before the advent of deep learning [54], [55], [56], [57], [58], [59], [60], [61], [62], [63], [64], [65]. For example, in [65], people tracking is formulated as multi-target tracking on a grid and gives rise to a linear program that can be solved efficiently using the K-Shortest Path algorithm [66]. The key to this formulation is the use as optimization variables of people flows from one grid location to another, instead of the actual number of people in each grid location. In [67], a people conservation constraint is enforced and the global solution is found by a greedy algorithm that sequentially instantiates tracks using shortest path computations on a flow network [68]. Such people conservation constraints have since been combined with additional ones to further boost performance. They include appearance constraints [69], [70], [71] to prevent identity switches, spatio-temporal constraints to force the trajectories of different objects to be disjoint [72], and higher-order constraints [56], [58]. However, none of these methods rely on deep learning. Such flow constraints have therefore never been used in a deep crowd counting context, and they were designed for scenarios in which people can still be tracked individually. The recent approach of [73] is a good example of this. It leverages density maps and network flow constraints to improve multiple object tracking but still relies on connecting individual people detections. In this paper, we demonstrate that this approach can also be brought to bear in a deep pipeline to handle dense crowds in which people cannot be tracked as individuals anymore.

There are relatively few people-counting approaches that rely on self- or weak supervision. We discuss them below and argue that they lack some of the key features of ours.

Semi-Supervised Crowd Counting. In [74], an autoencoder is used to learn most of the model parameters without supervision. Only those of the last two layers are learned with full supervision, which helps when there is very little annotated data but not when there is some more. In [75], only 10% of the annotated training images are used to pre-train a model, and the algorithm relies on transfer learning to align the feature distributions across unlabeled images with similar people counts in the remaining 90%. Unfortunately, this method depends crucially on the quality of the pre-training. If it is not good enough, the auto-annotation of the unlabeled images is likely to cause a performance drop. Furthermore, this approach still requires image pairs from different domains that feature the same number of people, which is hard to obtain in many real-world cases. Finally, it only outputs the final crowd count without a density map that denotes people's locations.
Several very recent works [76], [77] extend this auto-annotation technique by directly auto-annotating the crowd density map [76] or an auxiliary segmentation mask [19] based on a model pre-trained with a small amount of labeled data. As no physical-world constraint is enforced in these models, the pseudo ground truth can be very different from the true one if the labeled and unlabeled images follow different distributions.

Weakly Supervised Crowd Counting. Another way to reduce the annotation cost is to use weak supervision, as in [78]. Instead of object-wise annotation, it relies on region-wise annotation. The image is split into arbitrarily-shaped regions that each contain two or three people. A Gaussian Process is used to map image pixels to a density map. As no localization supervision is provided, the network is prone to producing uninterpretable density maps because edges, image acquisition artifacts, and tiny fluctuations in appearance can yield larger feature changes than expected. Furthermore, manually splitting the image into regions that all contain the required number of people is non-trivial and time consuming.

Self-Supervised Crowd Counting. The approach of [8], [14] is probably the one most related to ours. Two extra unlabeled datasets are collected from Google by keyword searches and query-by-example image retrieval. Then, a multi-task network is trained to rank image patches according to their crowd density, based on the observation that any sub-image of a crowded scene is guaranteed to contain the same number of or fewer people than the image it is taken from. Such inequality constraints can be viewed as a weaker version of our people conservation constraints, which are equalities. However, the resulting accuracy depends on finding and properly curating the unlabeled dataset. This is a labor-intensive process because one must ensure that the unlabeled images from the internet exhibit a similar crowd density and viewpoint angle.

We regress people flows from images. We take these flows to be counts between two consecutive time instants of people either moving from their current location to a neighboring one, staying at the same location, or moving in or out of the field of view. They are depicted by Fig. 2 and summarized in Table 1. People flows incident on a specific location are then summed to derive the number of people, or people count, at that location. The crowd density is then simply the people count divided by the location area. Our key insight is that this formulation enables us to impose much tighter people conservation constraints than earlier approaches. By this, we mean that we can accurately model the fact that all people present in a location at a given instant either were already there at the previous one or came from a neighboring location. This assumes that the frame rate is high enough that people cannot move beyond the neighboring locations in the time that separates consecutive frames. This is a common assumption that has proved both valid and effective in many earlier works.
Let us consider a video sequence I = {I_1, ..., I_T} and three consecutive images I_{t-1}, I_t, and I_{t+1} from it. Let us assume that each image has been partitioned into K rectangular grid locations. In our implementation, a location is one spatial position in the final convolutional feature map, corresponding to an 8×8 neighborhood in the image. However, other choices are possible. The main constraint we want to enforce is that the number of people present at location j at time t is the number of people who were already there at time t − 1 and stayed there plus the number of those who walked in from neighboring locations between t − 1 and t. The number of people present at location j at time t also equals the sum of the number of people who stayed there until time t + 1 and of people who went to a neighboring location between t and t + 1. Let m_j^t be the number of people present at location j at time t, or people count at that location. Let f_{i,j}^{t-1,t} be the number of people who move from location i to location j between times t − 1 and t, and N(j) the neighborhood of location j that can be reached within a single time step. These notations are illustrated by Fig. 2 (a) and summarized in Table 1. In practice, we take N(j) to be the 8 neighbors of grid location j plus the grid location itself to account for people who remain at the same place, as depicted by Fig. 2 (b). Our people conservation constraint can now be written as

m_j^t = Σ_{i∈N(j)} f_{i,j}^{t-1,t} = Σ_{k∈N(j)} f_{j,k}^{t,t+1} ,   (1)

for all locations j that are not on the edge of the grid, that is, locations from which people cannot appear or disappear without being seen elsewhere in the image.
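To make the summation in Eq. 1 concrete, the snippet below accumulates a 10-channel flow map into a people-count map. It is a minimal numpy sketch under assumed conventions: a row-major 3×3 channel ordering in which channel 4 is the "stay" flow, with channel 9 (flows from outside the image) ignored away from the borders; the channel layout used by the actual model may differ.

```python
import numpy as np

# Offsets of the 9 neighbour channels in row-major 3x3 order
# (channel 4 is the "stay" flow; channel 9, flows from outside, is omitted here).
OFFSETS = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

def density_from_flows(flow):
    """Sum the flows incident on each grid cell to obtain a people-count map.

    flow: array of shape (10, H, W); flow[c, y, x] is the number of people
    moving from cell (y, x) towards its c-th neighbour between t-1 and t.
    Returns an (H, W) map of people counts m_j^t (Eq. 1, forward direction).
    """
    _, H, W = flow.shape
    density = np.zeros((H, W), dtype=flow.dtype)
    for c, (dy, dx) in enumerate(OFFSETS):
        # People leaving (y, x) towards offset (dy, dx) arrive at (y+dy, x+dx),
        # so shift the channel by (dy, dx) before accumulating.
        shifted = np.zeros((H, W), dtype=flow.dtype)
        ys = slice(max(dy, 0), H + min(dy, 0))
        xs = slice(max(dx, 0), W + min(dx, 0))
        yd = slice(max(-dy, 0), H + min(-dy, 0))
        xd = slice(max(-dx, 0), W + min(-dx, 0))
        shifted[ys, xs] = flow[c][yd, xd]
        density += shifted
    return density
```

The same routine applied to the backward flows between t and t + 1 yields the second sum of Eq. 1.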
Most earlier approaches [3], [9], [22], [36], [37], [42], [79] regress the values of m_j^t, which makes it hard to impose the constraints of Eq. 1 because many different values of the flows can produce the same m_j^t values. For example, in [18], the equivalent constraint is

m_j^t ≤ Σ_{i∈N(j)} m_i^{t-1}  and  m_j^t ≤ Σ_{k∈N(j)} m_k^{t+1} .   (2)

It only states that the number of people at location j at time t is less than or equal to the total number of people at neighboring locations at time t − 1 and that the same holds between times t and t + 1. These are much looser constraints than the ones of Eq. 1. They guarantee that people cannot suddenly appear but do not account for the fact that people cannot suddenly disappear either. Our formulation lets us remedy this shortcoming. By regressing the f_{i,j}^{t-1,t} from pairs of consecutive images and computing the values of the m_j^t from these, we can impose the tighter constraints of Eq. 1.

We now turn to the task of training a regressor that predicts flows that correspond to what is observed while obeying the above constraints and properly handling the boundary grid locations. Let us denote the regressor that predicts the flows from I_{t-1} and I_t as F, with parameters Θ to be learned during training. In other words, f^{t-1,t} = F(I_{t-1}, I_t; Θ) is the vector of predicted flows between all pairs of neighboring locations between times t − 1 and t. In practice, F is implemented by a deep network. The predicted local people counts m_j^t, that is, the number of people per grid location j and at time t, are taken to be the sum of the incoming flows according to Eq. 1, and the predicted count for the whole image is the sum of all the m_j^t. As the flows are not directly observable, the training data comes in the form of people counts m̄_j^t per grid location j and at time t. During training, our goal is therefore to find values of Θ such that

Σ_{i∈N(j)} f_{i,j}^{t-1,t} = m̄_j^t = Σ_{k∈N(j)} f_{j,k}^{t,t+1}  and  f_{i,j}^{t-1,t} = f_{j,i}^{t,t-1} ,   (3)

for all i, j, and t, except for locations at the edges of the image plane, where people can appear from and disappear to unseen parts of the scene. The first constraint is the people conservation constraint introduced in Section 3.1. The second accounts for the fact that, were we to play the video sequence in reverse, the flows should have the same magnitude but the opposite direction. As will be discussed below, we enforce these constraints by incorporating them into the loss function we minimize to learn Θ. Finally, we impose that all the flows be non-negative by using ReLU activations in the network that implements F. Note that we only require the people flows to be non-negative; the fact that a location may contain less than 1 person simply means that the flow value will be less than 1.

As stated above, f^{t-1,t} = F(I_{t-1}, I_t; Θ) is a vector of predicted flows from neighboring locations between times t − 1 and t. In practice, F is implemented by the encoding/decoding architecture shown in Fig. 3, and f^{t-1,t} has the same dimension as the image grid, with 10 channels per location. The first nine are the flows to the 9 possible neighbors depicted by Fig. 2 (b), and the tenth represents potential flows from outside the image, which is therefore only meaningful at the edges. The fifth channel denotes the flow towards the location itself, which enables us to account for people who stay in the same location from one instant to the next. To compute f^{t-1,t}, consecutive frames I_{t-1} and I_t are fed to the CAN encoder network of [22]. This yields deep features E_e(I_{t-1}; Θ_e) and E_e(I_t; Θ_e), where E_e denotes the encoder with weights Θ_e. These features are then concatenated and fed to a decoder network to output f^{t-1,t} = D(E_e(I_{t-1}; Θ_e), E_e(I_t; Θ_e); Θ_d), where D is the decoder with weights Θ_d. D comprises the backend decoder of CAN [22] with an additional final ReLU layer to guarantee that the output is always non-negative. The encoder and decoder specifications are given in the supplementary material.

Fig. 3: Model architecture. Two consecutive RGB image frames are fed to the same encoder network that relies on the CAN scale-aware feature extractor of [22]. These multi-scale features are further concatenated and fed to a decoder network to produce the final people flow maps.
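As an illustration of this wiring, here is a minimal PyTorch-style sketch of a siamese encoder followed by a decoder with a 10-channel, ReLU-terminated output. The encoder below is a generic stand-in for the CAN scale-aware feature extractor of [22], and all layer sizes are illustrative assumptions rather than the paper's actual specification, which is given in its supplementary material.

```python
import torch
import torch.nn as nn

class FlowRegressor(nn.Module):
    """Siamese encoder + decoder mapping two consecutive frames to a
    10-channel, non-negative people-flow map (9 neighbour channels,
    one of which is the 'stay' flow, plus one 'outside' channel)."""

    def __init__(self, feat_dim=128):
        super().__init__()
        # Placeholder encoder; the paper uses the CAN scale-aware extractor [22].
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder takes the concatenated features of both frames.
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * feat_dim, feat_dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_dim, 10, 1),
            nn.ReLU(inplace=True),  # final ReLU keeps all flows non-negative
        )

    def forward(self, img_prev, img_curr):
        feats = torch.cat([self.encoder(img_prev), self.encoder(img_curr)], dim=1)
        return self.decoder(feats)  # shape (B, 10, H/8, W/8)
```

The three stride-2 convolutions reproduce the 8-fold down-sampling discussed next, so that each output location corresponds to an 8 × 8 pixel block.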
Grid Size. In all our experiments, we treated each spatial location in the output people flow map as a separate location. Since our CAN [22] backbone outputs a down-sampled density map, each output grid location represents an 8 × 8 pixel block in the input image. This down-sampling rate is common in crowd counting models [9], [18], [22] because it represents a good compromise between the resolution of the density map and the efficiency of the model. In the supplementary material, we will confirm this by showing that changing the down-sampling rate degrades performance.

Loss Function and Training. To obtain the ground-truth maps m̄^t of Eq. 3, we use the same approach as in most previous work [3], [9], [22], [36], [37], [42], [79]. In each image I_t, we annotate a set of s_t 2D points P^t = {P_i^t}_{1≤i≤s_t} that denote the positions of the human heads in the scene. The corresponding ground-truth density map m̄^t is obtained by convolving an image containing ones at these locations and zeroes elsewhere with a Gaussian kernel N(·|µ, σ²) with mean µ and standard deviation σ. We write

m̄_j^t = Σ_{i=1}^{s_t} N(p_j | P_i^t, σ²) ,   (4)

where p_j denotes the center of location j. Note that this formulation preserves the constraints of Eq. 3 because we perform the same convolution across the whole image. In other words, if a person moves in a given direction by n pixels, the corresponding contribution to the density map will shift in the same direction and also by n pixels. The final ReLU layer of the regressor guarantees that the estimated flows are non-negative. To enforce the constraints of Eq. 3, we take our combined loss function L_combi to be the weighted sum of two loss terms. We write

L_combi = L_flow + α L_cycle ,   (5)

where L_flow penalizes the discrepancy between the ground-truth crowd density values m̄_j^t of Eq. 4, that is, the people counts at time t and location j, and the counts obtained by summing the predicted flows, L_cycle penalizes discrepancies between corresponding forward and backward flows, and α is a scalar weight we set to 1 in all our experiments. At training time, we systematically use three consecutive frames to evaluate L_combi, and our flow formulation requires a density map for each frame of the triplet. A limitation of this formulation is that it requires all frames to be annotated. In practice, this is not necessarily the case. In some of the examples we present in the results section, only one in 60 or 255 frames is annotated. Hence, let A be the set of frames that are annotated and U the set of their previous and next frames that are not and for which m̄^t is therefore unavailable. For these frames, it still holds that

Σ_{i∈N(j)} f_{i,j}^{t-1,t} = Σ_{k∈N(j)} f_{j,k}^{t,t+1}

for all j, even if the value of the sum is unknown. We therefore rewrite our loss function as L_combi of Eq. 6, in which L_flow and L_cycle are defined as in Eq. 5 but the terms that require ground-truth counts are only evaluated on the annotated frames. Algorithm 1 describes our training scheme in more detail; at each step, it minimizes L_combi of Eq. 6 w.r.t. Θ using Adam. In the results section, we show that our algorithm can handle having only one in 255 frames annotated.
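The following sketch shows one way the combined loss of Eq. 5 could be computed from flow-derived count maps; it is written with PyTorch tensors. The squared-error form of both terms is an assumption (the paper's exact per-term definitions are not reproduced in this text), and the cycle term below only compares the "stay" channel of the forward and backward flows rather than every neighbor pair.

```python
import torch

def combi_loss(count_fwd, count_bwd, flow_fwd, flow_bwd, gt_density, alpha=1.0):
    """Sketch of L_combi (Eq. 5).

    count_fwd / count_bwd: people-count maps obtained by summing the incoming
    flows from t-1 to t and the outgoing flows from t to t+1 (e.g. with the
    summation sketch shown earlier), shape (B, H, W).
    flow_fwd / flow_bwd: 10-channel flow maps for (I_{t-1}, I_t) and (I_t, I_{t-1}).
    gt_density: ground-truth count map of Eq. 4 for frame t.
    """
    # Flow term: both flow-derived counts should match the annotated counts.
    l_flow = ((count_fwd - gt_density) ** 2).mean() + \
             ((count_bwd - gt_density) ** 2).mean()
    # Cycle term: playing the video backwards should reverse each flow.
    # For brevity only the "stay" channel (its own reverse) is compared here.
    l_cycle = ((flow_fwd[:, 4] - flow_bwd[:, 4]) ** 2).mean()
    return l_flow + alpha * l_cycle
```

On unannotated frames, the first term would simply be replaced by the consistency between the two flow-derived counts, as stated above.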
When the camera is static, both the people flow discussed above and the optical flow that can be computed directly from the images stem from the motion of the people. They should therefore be highly correlated. In fact, this remains true even if the camera moves because its motion creates an apparent flow of people from one image location to another. However, there is no simple linear relationship between people flow and optical flow. To account for their correlation, we therefore introduce an additional loss function, which we define as

L_optical = Σ_j δ(m_j > 0) ‖O_j − ō_j^{t-1,t}‖ ,   (7)

where m^{t-1} and m^t are density maps inferred from our predicted flows using Eq. 1, O_j denotes the corresponding optical flow predicted at grid location j by a pre-trained regressor F_o, ō^{t-1,t} is the optical flow from frames t − 1 to t computed by a state-of-the-art optical flow network [80], and the indicator function δ(m_j > 0) ensures that the correlation is only enforced where there are people. This is especially useful when the camera moves to discount the optical flows generated by the changing background. We also use CAN [22] as the optical flow regressor F_o, with 2 input channels, one for m^{t-1} and the other for m^t. This network is pre-trained separately on the training data and then used to train the people flow regressor. Pre-training the regressor F_o requires annotations for consecutive frames, that is, V = 1 in the definition of Algorithm 1. When such annotations are available, we use this algorithm again but replace L_combi by

L_all = L_combi + β L_optical .   (8)

In all our experiments, we set β to 0.0001 to account for the fact that the optical flow values are around 4,000 times larger than the people flow values. F_o is also pre-trained with Adam and a learning rate of 1e-4. During pre-training, F_o maps the ground-truth density map pairs m̄^{t-1}, m̄^t to the optical flow map ō^{t-1,t} from frames t − 1 to t, that is, it is trained so that F_o(m̄^{t-1}, m̄^t) ≈ ō^{t-1,t} (Eq. 9). This pre-trained network F_o is then used as a regularization term when training our people flow model, using Eq. 7 and Eq. 8, where m^{t-1} and m^t are density maps obtained by summing our predicted flows.
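A possible rendering of this regularizer is sketched below; the absolute difference and the zero threshold on the density are assumptions, `flow_regressor` stands for the pre-trained F_o, and `pwc_flow` for the flow produced by a network such as PWC-Net [80].

```python
import torch

def optical_consistency_loss(density_prev, density_curr, flow_regressor, pwc_flow):
    """Sketch of the optical-flow regularizer (Eq. 7).

    density_prev / density_curr: (B, 1, H, W) densities summed from predicted flows.
    flow_regressor: pre-trained F_o mapping a density pair to an optical-flow map.
    pwc_flow: (B, 2, H, W) optical flow from a pre-trained estimator.
    """
    pred_flow = flow_regressor(torch.cat([density_prev, density_curr], dim=1))
    # Only enforce the correlation where people are present, which also
    # discounts background motion when the camera moves.
    mask = (density_curr > 0).float()
    return (mask * (pred_flow - pwc_flow).abs()).sum() / mask.sum().clamp(min=1.0)
```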
Recall from Section 3.2 that we annotate only a set of keyframes. In this section, we show that we do not even need to annotate them fully. It is enough to only annotate small portions of them to pre-train the network and then exploit the flow constraints to iteratively select additional patches to be annotated. We will see in the results section that this active learning strategy allows us to achieve an accuracy that is close to what we get with full supervision at a much reduced annotation cost.

Let us split each keyframe image I_t into a set of n × n patches P_k^t, where k is the patch index, as shown in Fig. 5. Instead of annotating whole images, we can annotate a single one of these patches in a subset of the keyframes and use the three-frame Algorithm 1 to pre-train the network. Because we use relatively little training data, it is unlikely that the values of L_flow and L_cycle of Eq. 5 will be zero if we evaluate the network on patches that we have not used for training purposes, at least not without further training. In other words, the people conservation constraints of Eq. 3 will be violated. To take advantage of this, we define a measure E of how much the people conservation constraint is violated within patch P_k^t. We then implement the simple patch selection strategy depicted by Fig. 4 and detailed by Algorithm 2. In practice, we initially annotate one patch in 25% of the keyframes, and use 60% of them for training and the remaining 40% for validation. We train our network by minimizing the loss function L_combi of Eq. 5, whose supervised component L_flow is only evaluated on the annotated patches. We then forward pass the remaining keyframes through our network and, within each one, annotate the patch with the largest E. We repeat this process 5 times, selecting 15% of all the initially-unannotated keyframes at each such iteration and retraining the model with the newly-annotated image patches.

Fig. 4: Our active learning pipeline. We first annotate a fraction of the training image patches, use them to train the network while minimizing the consistency and adversarial loss terms, and then run inference on the others. We then select patches where the people conservation constraints are most violated for further human annotation and iterate the process.

Algorithm 2:
  for # of selection iterations do
    for # of unlabeled keyframes do
      Pick 3 consecutive frames (I_{t-1}, I_t, I_{t+1}), where t is a multiple of V (i.e., I_t is a keyframe)
      for # of patches do
        Pick the l-th patch and compute the measure E of Eq. 10
      end for
      Take the maximum value of E over all the patches in the keyframe as its error
    end for
    Select the 0.15U unlabeled keyframes with the largest error
    For each of them, annotate the patch with the highest value of E
    Update the set of unlabeled keyframes
    Re-train F with all the labeled keyframes
  end for

In the fully supervised case, there was no need to enforce spatial consistency across patches in the same image because the ground-truth data did it implicitly. However, in the scenario where we have ground-truth data for only a small subset of the patches, this has to be done explicitly. Furthermore, we must avoid overfitting to the labeled patches. To achieve these two goals, we introduce two additional loss terms, L_spatial and L_advers, described in the remainder of this section, and thus minimize the overall loss

L_overall = L_combi + γ L_spatial + δ L_advers ,   (11)

where γ and δ are weighing factors. The training strategy is detailed by Algorithm 3.

To handle the scenario where we have ground-truth data for only a subset of the patches, we replace the missing ground-truth data by spatial consistency constraints as follows. Let us consider keyframe I_t that has been split into patches {P_k^t} and assume that we have annotated P_j^t only. We define S_k^t as a super-patch composed of P_j^t and unannotated patches P_k^t for k ∈ P_j, where P_j is a set of at most 15 indices, randomly chosen each time we compute the spatial loss. In other words, this means that a super-patch can range between the entire image and the combination of P_j^t with a single immediate neighbor.

Fig. 5: Any adjacent a × a patches constitute a super-patch. The spatial people conservation constraint holds between any super-patch and all the patches inside it. For example, if we only annotate the 15th patch, one of the people conservation constraints is that the number of people in a super-patch consisting of the 11th, 12th, 15th and 16th patches equals the sum of the numbers of people in the 11th, 12th, 15th and 16th patches.

We then pass each patch through the network individually to obtain people counts m_k^t for k ∈ P_j, and further forward pass the super-patch through the network to compute the people count M_k^t. Because the number of people in the super-patch must be the sum of the number of people in each individual patch, we should have M_k^t equal to the sum of the m_k^t over all patches inside the super-patch (Eq. 12). We therefore write L_spatial as the discrepancy between the two sides of Eq. 12, summed over the sampled super-patches (Eq. 13). When we take the super-patch as input, the receptive field for the corresponding sub-patch is larger than the sub-patch itself. By contrast, if we only take the sub-patch as input, the receptive field is limited to it. Therefore, our loss term encourages the estimated densities for unlabeled sub-patches to be consistent independently of the contextual information.
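The spatial term can be sketched as follows. This is a hypothetical rendering that assumes an absolute-difference penalty; `count_fn` is an assumed helper that crops both frames to a box, runs the flow regressor F on the crop, and sums the resulting flows into a scalar people count, as in the earlier sketches.

```python
def spatial_consistency_loss(count_fn, img_prev, img_curr, patch_boxes, super_box):
    """Sketch of L_spatial: the count predicted on a super-patch should equal
    the sum of the counts predicted on the patches it contains (Eq. 12).

    patch_boxes: list of (y0, y1, x0, x1) crops tiling super_box.
    count_fn(img_prev, img_curr, box) -> scalar tensor people count.
    """
    patch_total = sum(count_fn(img_prev, img_curr, box) for box in patch_boxes)
    super_total = count_fn(img_prev, img_curr, super_box)
    # Penalize the mismatch between the super-patch count and the summed counts.
    return (super_total - patch_total).abs()
```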
To prevent overfitting, we further introduce an adversarial loss term inspired by GANs [81]. We take the generator to be the function G that runs our flow-predicting network F(·, ·; Θ) on a pair of images (I_{t-1}, I_t) and infers from it a people density in I_t by summing the flows according to Eq. 1. We then define a discriminator D(·; Θ_d) as a multilayer perceptron that takes as input the people-density map {m_i^t, i ∈ P_k^t} and returns the probability that it comes from a patch that has been annotated. Let A be the set of patches that have been annotated. We write the adversarial loss L_advers of Eq. 14 in terms of the discriminator scores on annotated and unannotated patches. In each iteration of Algorithm 3, we update Θ_d using L_advers of Eq. 14 with RMSProp, as suggested by [82]; randomly select a super-patch S_k^t composed of patches from I_j^t; reconstruct the density maps of S_k^t and of the other unlabeled patches inside this super-patch by passing these patches through the regressor network F; and update Θ using L_overall of Eq. 11 with Adam.

In this section, we first introduce the evaluation metrics and benchmark datasets used in our experiments. We then show that our fully supervised approach outperforms state-of-the-art methods when operating in the image plane and does even better when image registration is available by working in the ground plane instead of the image plane. We then quantify the ability of our active learning algorithm to reduce the annotation cost. In both cases, we run an ablation study to justify our choices. In the supplementary material, we provide additional ablation studies about hyper-parameters, reasoning in the ground plane, settings, and variations of the proposed approach to exploit unlabeled data.

Previous works in crowd density estimation use the mean absolute error (MAE) and the root mean squared error (RMSE) as evaluation metrics [3], [4], [5], [6], [34], [36]. They are defined as

MAE = (1/N) Σ_{i=1}^{N} |z_i − ẑ_i|  and  RMSE = sqrt( (1/N) Σ_{i=1}^{N} (z_i − ẑ_i)² ),

where N is the number of test images, z_i denotes the true number of people inside the ROI of the i-th image and ẑ_i the estimated number of people. In the benchmark datasets discussed below, the ROI is the whole image except when explicitly stated otherwise. In practice, ẑ_i is taken to be Σ_{p∈I_i} m_p, that is, the sum over all locations of the people counts obtained by summing the predicted people flows.
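For reference, these two standard metrics amount to the following few lines; the example call at the bottom uses made-up counts purely for illustration.

```python
import numpy as np

def mae_rmse(true_counts, pred_counts):
    """Standard crowd-counting metrics computed over N test images."""
    true_counts = np.asarray(true_counts, dtype=float)
    pred_counts = np.asarray(pred_counts, dtype=float)
    err = pred_counts - true_counts
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    return mae, rmse

# Example with illustrative numbers only:
# mae, rmse = mae_rmse([32, 41, 18], [30.5, 44.2, 17.1])
```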
For evaluation purposes, we use five different datasets, for which the videos have been released along with recently published papers. The first one is a synthetic dataset with ground-truth optical flows. The other four are real-world videos, with annotated people locations but without ground-truth optical flow. To use the optional optical flow constraints introduced in Section 3.3, we therefore use the pre-trained PWC-Net [80] to compute the loss function L_optical of Eq. 7. Fig. 6 depicts one such flow.

CrowdFlow [83]. This dataset consists of five synthetic sequences ranging from 300 to 450 frames each. Each one is rendered twice, once using a static camera and once using a moving one. The ground-truth optical flow is provided, as shown in Fig. 7. As this dataset has not been used for crowd counting before and the training and testing sets are not clearly described in [83], we use the first three sequences of both the static and moving camera scenarios for training and validation, and the last two for testing; this also lets us verify the performance difference caused by using ground-truth optical flow instead of an estimated one.

FDST [53]. It comprises 100 videos captured from 13 different scenes, with a total of 15,000 frames and 394,081 annotated heads. The training set consists of 60 videos (9,000 frames) and the testing set contains the remaining 40 videos (6,000 frames). We use the same setting as in [53].

UCSD [84]. This dataset contains 2000 frames captured by surveillance cameras on the UCSD campus. The resolution of the frames is 238 × 158 pixels and the framerate is 10 fps. For each frame, the number of people varies from 11 to 46. We use the same setting as in [84], with frames 601 to 1400 used as training data and the remaining 1200 frames as testing data.

Venice [22]. It contains 4 different sequences with a total of 167 annotated frames at a fixed 1,280 × 720 resolution. As in [22], 80 images from a single long sequence are used as training data. The remaining 3 sequences are used for testing purposes.

WorldExpo'10 [34]. It comprises 1,132 annotated video sequences collected from 103 different scenes. There are 3,980 annotated frames, 3,380 of which are used for training purposes. Each scene contains a Region Of Interest (ROI) in which the people are counted. As in previous work [3], [4], [6], [7], [9], [10], [11], [17], [34], [37], [79] on this dataset, we report the MAE of each scene, as well as the average over all scenes.

For CrowdFlow, FDST and UCSD, all frames in the training set are annotated. For Venice and WorldExpo'10, annotations are only available for every 60th and 255th frame, respectively. We denote our model trained using the combined loss function L_combi of Section 3.2 as OURS-COMBI and the one using the full loss function L_all of Section 3.3 with ground-truth optical flow as OURS-ALL-GT. In other words, OURS-ALL-GT exploits the optical flow while OURS-COMBI does not. If the ground-truth optical flow is not available, we use the optical flow estimated by PWC-Net [80] and denote this model as OURS-ALL-EST.

Synthetic Data. Fig. 8 depicts a qualitative result, and we report our quantitative results on the CrowdFlow dataset in Table 2 (a). OURS-COMBI outperforms the competing methods by a significant margin, while OURS-ALL-EST delivers a further improvement. Using the ground-truth optical flow values in our L_all loss term yields yet another performance improvement, which points to the fact that using a better optical flow estimator than PWC-Net [80] might help.

Real Data. Fig. 9 depicts a qualitative result, and we report our quantitative results on the four real-world datasets in Tables 2 (b), (c), (d) and (e). For FDST and UCSD, annotations in consecutive frames are available, which enabled us to pre-train the F_o regressor of Eq. 7. By contrast, for Venice and WorldExpo'10, only a sparse subset of frames is annotated, and we therefore warp the crowd annotations using the optical flow estimated by PWC-Net [80]. We report results for both OURS-COMBI and OURS-ALL-EST. For FDST, UCSD, and Venice, our approach again clearly outperforms the competing methods, with the optical flow constraint further boosting performance when applicable. For WorldExpo'10, the ranking of the methods depends on the scene being used, but ours still performs best on average and on Scene3. In short, when the crowd is dense, our approach dominates the others. By contrast, when the crowd becomes very sparse, as in Scene1 and Scene5, models that comprise a pool of different regressors, such as [11], gain an advantage.
This points to a potential way to further improve our own method, that is, to also use a pool of regressors to estimate the people flows. Recall that for FDST and UCSD all training frames are annotated, whereas only a fraction are for Venice and WorldExpo'10, which demonstrates that our approach can handle a large number of unannotated frames. In the supplementary material, we re-run our training using only a fraction of the annotated frames in FDST and UCSD and demonstrate graceful performance degradation.

Until now, we have performed all the computations in image space, in large part so that we can compare our results to those of other recent algorithms that also work in image space. However, this neglects perspective effects, as people densities per unit of image area are affected by where in the image the pixels are. To account for them, we can work in the ground plane instead of the image plane, which we do in this section. Let H_i be the homography from image I_i to the corresponding ground plane. We define the ground-truth density as a sum of Gaussian kernels centered on people's heads on the ground plane. Because we now work in the physical world, we can use the same kernel size across the entire scene and across all scenes. A head annotation P_i, that is, a 2D image point expressed in projective coordinates, is mapped to H_i P_i on the ground plane. Given a set A_i = {P_1^i, ..., P_{c_i}^i} of c_i such annotations, we take the ground plane density G_i at point P, expressed in ground plane coordinates, to be

G_i(P) = Σ_{j=1}^{c_i} N(P | H_i P_j^i, σ) ,

where N(·|µ, σ) is a 2D Gaussian kernel with mean µ and variance σ. Note the difference compared with the image plane crowd density, which is defined in Eq. 4. If we take our grid cells to be 30 cm square and use a 30 FPS video, no one going slower than 9 m/s, i.e., 32.4 km/h, can exit the neighborhood of its current location between two frames, which is more than enough for most humans. For faster animals, we would have to work with larger grid cells, more extended neighborhoods, or a higher frame rate. Since Venice is the only publicly available video-based single-view crowd counting dataset containing accurate camera pose information, it is the one we used to evaluate this approach. In the supplementary material, we also evaluate results in the ground plane on several multi-view crowd counting datasets. The ground plane regressor architecture is the same as before, with an additional Spatial Transformer Network [87] to map the output to the ground plane. The results are denoted by OURS-COMBI-GROUND in Table 2 (c) and show a marked improvement over OURS-COMBI, which operates strictly in the image plane. Fig. 10 depicts the corresponding density estimates in the image and ground planes.
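To illustrate the geometry, the sketch below maps head annotations to the ground plane with a homography and rasterizes them into 30 cm cells before Gaussian smoothing; the grid shape, the smoothing width `sigma_cells` and the helper names are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ground_plane_density(head_points, H, grid_shape, cell_size=0.3, sigma_cells=2.0):
    """Map image-plane head annotations through the image-to-ground homography H,
    bin them into cells of `cell_size` metres, and smooth with a fixed Gaussian.

    head_points: (n, 2) array of (x, y) pixel coordinates.
    The 30 cm cell size matches the paper; sigma_cells is an assumed value.
    """
    pts = np.hstack([head_points, np.ones((len(head_points), 1))])  # projective coords
    ground = (H @ pts.T).T
    ground = ground[:, :2] / ground[:, 2:3]                          # dehomogenize
    density = np.zeros(grid_shape)
    for gx, gy in ground:
        ix, iy = int(gx / cell_size), int(gy / cell_size)
        if 0 <= iy < grid_shape[0] and 0 <= ix < grid_shape[1]:
            density[iy, ix] += 1.0
    # The same kernel size can be used everywhere since we are now in metric space.
    return gaussian_filter(density, sigma=sigma_cells)
```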
We now examine the individual components of our fully-supervised approach and show that each one contributes to these results.

People Flows vs People Densities. To confirm that the good performance we report really is attributable to our regressing flows instead of densities, we performed the following set of experiments. Recall from Section 3.2 that we use the CAN [22] architecture to regress the flows. Instead, we can use it to directly regress the densities, as in the original CAN paper. We will refer to this approach as BASELINE. In [18], it was suggested that people conservation constraints could be added by incorporating a loss term that enforces the constraints of Eq. 2, which are weaker than those of Eq. 1, that is, those we use in this paper. We will refer to this approach, which relies on weaker constraints while still using the CAN backbone, as WEAK. As OURS-COMBI, it takes two consecutive images as input. For the sake of completeness, we also implemented a simplified approach, IMAGE-PAIR, that takes the same two images as input and directly regresses the densities. To show that regressing flows is more effective than simply smoothing the densities, we implement AVERAGE, which takes three images as input, uses CAN to independently compute three density maps, and then averages them. Finally, to highlight the importance of the forward-backward constraints of Eq. 3, we also tested a simplified version of our approach in which we drop them and that we refer to as OURS-FLOW.

Fig. 8: People are running counterclockwise. The estimated people density map is close to the ground-truth one. It was obtained by summing the flows towards the 9 neighbors of Fig. 2 (b). They are denoted by the arrows and the circle. The latter corresponds to people not moving and is, correctly, empty. Note that the flow of people moving down is highest on the left of the building, moving right below the building, and moving up on the right of the building, which is also correct. Inevitably, there is also some noise in the estimated flow, some of which is attributable to body shaking while running.

We compare the performance of these five approaches on CrowdFlow, FDST, and UCSD in Table 3. IMAGE-PAIR and AVERAGE do worse than BASELINE, which confirms that temporal averaging of the densities is not the right thing to do. As reported in [18], WEAK delivers a small improvement. As expected, OURS-FLOW improves on IMAGE-PAIR in all three datasets, with further performance increases for OURS-COMBI and OURS-ALL-EST. This confirms that using people flows instead of densities is a win and that the additional constraints we impose all make positive contributions.

Training the Optical Flow Regressor. As explained in Section 3.3, we use optical flow to regularize the people flow estimates. To this end, we need to train the regressor F_o of Eq. 7 that associates to consecutive density images an optical flow estimate that can be compared to that produced by a state-of-the-art optical flow estimator. In our implementation, F_o takes as input the density images but not the original images, our intuition being that if it did, it could predict the correct optical flows even if the density estimates were wrong, which would defeat its purpose. To confirm this, we implemented a version called OURS-IMG-FLOW in which F_o takes both the original images and crowd density maps as input. As can be seen in Table 4, its results are less good.

Using the Spatial Loss Term. Our active learning approach of Section 4 relies on the spatial loss term L_spatial of Eq. 13, which we do not normally use in the fully-supervised case, essentially because minimizing it imposes constraints that are weaker than those that the flow-consistency constraints of Eq. 3 impose. To check the validity of this choice, we implemented a variant of our approach that includes this additional loss term and that we refer to as OURS-COMBI-SPA. As can be seen in Table 5, it performs very comparably to OURS-COMBI, as could be expected.

According to Eq. 1, the people count at a location can be obtained either from the flows incoming between times t − 1 and t or from the flows outgoing between t and t + 1, and we can sum either to obtain the people densities. Let OURS-COMBI-FOR and OURS-COMBI-BACK be versions of our approach that do either, whereas OURS-COMBI averages the two values, which provides a slight boost, as can be seen in Table 5.

Distance between Annotated Frames.
We refer to the number of frames between annotated frames in the training set as V. For CrowdFlow, FDST and UCSD, V = 1 because all frames are annotated. For Venice and WorldExpo'10, annotations are available for every 60th and 255th frame. Hence, V = 60 and V = 255, respectively. We re-run our training on CrowdFlow, FDST and UCSD for V = 2 and V = 5 and report the results in Table 6. Even though V = 1 enforces stronger constraints, V = 2 delivers almost the same performance. For V = 5, there is a performance decrease, but it is relatively small considering that we are now using only one fifth of the annotations.

Recall from Section 5.3.1 that OURS-COMBI denotes our full approach when taking a single image as input, that is, without exploiting temporal consistency. Here, we combine it with the active learning strategies of Section 4 and compare our patch selection strategy against other AL approaches in the same setting.

• AL-AC [31]: A recent approach to active crowd counting that actively chooses the unlabeled images whose crowd density distribution is most dissimilar from that of the labeled ones. In addition, a discriminator classifier is added to distinguish whether a sample is labeled or not.
• MC-Dropout [88]: It measures uncertainty by sampling from the average output of multiple forward passes with random dropout masks. Samples with high uncertainty are selected for training in the next iteration.
• ENS [89]: An ensemble-based approach that measures uncertainty by sampling from the average output of multiple forward passes of different models trained with different initializations. As with MC-Dropout, samples with high uncertainty are selected for training in the next iteration.
• VAAL [90]: It learns a latent space using a variational autoencoder (VAE) and an adversarial network trained to discriminate between unlabeled and labeled data. The samples predicted to be unlabeled with high probability are chosen for annotation in the next iteration.

We extend the above approaches to the same setting as ours with the same crowd density regressors. All models are trained using the same loss function L_overall of Eq. 11. The only difference is how we select the patches to annotate. We evaluate the various approaches on FDST, Venice and WorldExpo'10. As can be seen in Fig. 12, our approach consistently outperforms the others.

We now turn to the individual components of our active-learning scheme and implement the following variants to gauge their impact:

• PATCH-BASE. The model is trained using a single patch per image by only minimizing the supervised loss function L_combi of Eq. 5 and randomly selecting the patch to annotate.
• PATCH-BASE-AL. The model is trained using the same loss as PATCH-BASE, but we actively select the patch to annotate using the measure of consistency violation of Eq. 10.
• PATCH-SPATIAL. The model is trained using the combined loss function including L_combi and L_spatial of Eq. 5 and Eq. 13; the patch is selected randomly.
• PATCH-SPATIAL-AL. The model is trained using the same loss as PATCH-SPATIAL, but we actively select the patch to annotate using the measure of consistency violation of Eq. 10.
• PATCH-ALL. The model is trained with the complete loss function L_overall of Eq. 11; the patch to annotate is selected randomly.
• PATCH-ALL-AL. The model is trained using the same loss as PATCH-ALL, but we actively select the patch to annotate using the measure of consistency violation of Eq. 10.

For all models, we start by randomly selecting 25% of the training images, each of which is split into 4 × 4 patches, only one of which is annotated. Therefore, the starting annotation rate is 25%/16 = 1.5625%. During each active learning iteration, another 15% of the training images are selected, and we also annotate one patch of each image. After 5 iterations, only 6.25% of the training patches have been selected, and we measured the ratio of annotated people to be around 5.7%.

Fig. 12: Comparing against other AL approaches. We plot the MAE obtained using different active learning algorithms as a function of the annotation ratio. All models were initially trained with 25% randomly selected images of which only 1/16 of the area was annotated. At each active learning iteration, another 15% of the images were selected either randomly or actively and another 1/16th annotated. All the models are trained using the same loss function, the only difference being how the patches are selected. Our AL approach consistently outperforms the others on all the datasets.

Fig. 13: Ablation study of our AL approach. We plot the MAE obtained using different versions of our AL strategy as a function of the annotation ratio. As expected, our complete approach does best.

Fig. 13 depicts the MAE on FDST, Venice and WorldExpo'10. Note that both our loss terms and the AL algorithm consistently improve the performance, with the largest boost coming from the active patch selection strategy. Furthermore, as can be seen by comparing these results with those in Tables 2 (b), (c) and (e), even though PATCH-ALL-AL only uses 6.25% of the annotations, it outperforms several SOTA models trained with full supervision. Fig. 11 depicts an example density map inferred by PATCH-ALL-AL. Please refer to the supplementary material for an analysis of the influence of hyper-parameter choices.

We have shown that implementing a crowd counting algorithm in terms of estimating the people flows and then summing them to obtain people densities is more effective than attempting to directly estimate the densities. This is because it allows us to impose conservation constraints that make the estimates more robust. When optical flow data can be obtained, it also enables us to exploit the correlation between optical flow and people flow to further improve the results. Furthermore, we have demonstrated that spatial and temporal people conservation can be exploited to train a deep crowd counting model in an active learning fashion, achieving competitive performance with much fewer annotations. In this paper, while we have mostly performed the computations in image space, in large part so that we could compare our results to those of other recent algorithms that also work in image space, we have also shown that modeling the people flows in the ground plane yields even better performance. A promising application is to use drones for people counting because their internal sensors can be directly used to provide the camera registration parameters necessary to compute the homographies between the camera and the ground plane. In this scenario, the drone sensors also provide a motion estimate, which can be used to correct the optical flow measurements and therefore exploit the information they provide as effectively as if the camera were static.

This work was supported in part by the Swiss National Science Foundation.
References

[1] Learning to Count Objects in Images
[2] Bayesian Poisson Regression for Crowd Counting
[3] Single-Image Crowd Counting via Multi-Column Convolutional Neural Network
[4] Switching Convolutional Neural Network for Crowd Counting
[5] Spatiotemporal Modeling for Crowd Counting in Videos
[6] Generating High-Quality Crowd Density Maps Using Contextual Pyramid CNNs
[7] Crowd Counting via Adversarial Cross-Scale Consistency Pursuit
[8] Leveraging Unlabeled Data for Crowd Counting by Learning to Rank
[9] CSRNet: Dilated Convolutional Neural Networks for Understanding the Highly Congested Scenes
[10] Divide and Grow: Capturing Huge Diversity in Crowd Images with Incrementally Growing CNN
[11] Crowd Counting with Deep Negative Correlation Learning
[12] Nonlinear Regression via Deep Negative Correlation Learning
[13] Crowd Counting Using Deep Recurrent Spatial-Aware Network
[14] Exploiting Unlabeled Data in CNNs by Self-Supervised Learning to Rank
[15] Locate, Size and Count: Accurately Resolving People in Dense Crowds via Detection
[16] Composition Loss for Counting, Density Map Estimation and Localization in Dense Crowds
[17] Iterative Crowd Counting
[18] Geometric and Physical Constraints for Drone-Based Head Plane Crowd Density Estimation
[19] Estimating People Flows to Better Count Them in Crowded Scenes
[20] Point In, Box Out: Beyond Counting Persons in Crowds
[21] Revisiting Perspective Information for Efficient Crowd Counting
[22] Context-Aware Crowd Counting
[23] Recurrent Attentive Zooming for Joint Crowd Counting and Precise Localization
[24] Crowd Counting with Deep Structured Scale Integration Network
[25] Multi-Level Bottom-Top and Top-Bottom Feature Fusion for Crowd Counting
[26] Learning Spatial Awareness to Improve Crowd Counting
[27] Adaptive Dilated Network with Self-Correction Supervision for Counting
[28] Reverse Perspective Network for Perspective-Aware Object Counting
[29] Adaptive Mixture Regression Network with Local Counting Map for Crowd Counting
[30] Weighing Counts: Sequential Crowd Counting by Reinforcement Learning
[31] Active Crowd Counting with Limited Supervision
[32] NWPU-Crowd: A Large-Scale Benchmark for Crowd Counting and Localization
[33] Pixel-wise Crowd Understanding via Synthetic Data
[34] Cross-Scene Crowd Counting via Deep Convolutional Neural Networks
[35] Incorporating Side Information by Adaptive Convolution
[36] Towards Perspective-Free Object Counting with Deep Learning
[37] Scale Aggregation Network for Accurate and Efficient Crowd Counting
[38] Crowd Counting by Adaptively Fusing Predictions from an Image Pyramid
[39] Learning from Synthetic Data for Crowd Counting in the Wild
[40] Leveraging Heterogeneous Auxiliary Tasks to Assist Crowd Counting
[41] Counting with Focus for Free
[42] AdCrowdNet: An Attention-Injective Deformable Convolutional Network for Crowd Understanding
[43] Relational Attention Network for Crowd Counting
[44] Attentional Neural Fields for Crowd Counting
[45] Bayesian Loss for Crowd Count Estimation with Point Supervision
[46] Wide-Area Crowd Counting via Ground-Plane Density Maps and Multi-View Fusion CNNs
[47] Density Map Regression Guided Detection Network for RGB-D Crowd Counting and Localization
[48] CNN-Based Density Estimation and Crowd Counting: A Survey
[49] Long Short-Term Memory
[50] Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting
[51] FCN-rLSTM: Deep Spatio-Temporal Neural Networks for Vehicle Counting in City Cameras
[52] Fully Convolutional Networks for Semantic Segmentation
[53] Locality-Constrained Spatial Transformer Network for Video Crowd Counting
[54] You'll Never Walk Alone: Modeling Social Behavior for Multi-Target Tracking
[55] 3D Scene Flow Estimation with a Rigid Motion Prior
[56] Multi-Target Tracking by Lagrangian Relaxation to Min-Cost Network Flow
[57] Tracking Sports Players with Context-Conditioned Motion Models
[58] Multitarget Data Association with Higher-Order Motion Models
[59] Local Velocity-Adapted Motion Events for Spatio-Temporal Recognition
[60] Movement Error Rate for Evaluation of Machine Learning Methods for sEMG-Based Hand Movement Classification
[61] Multiple Target Tracking Using Frame Triplets
[62] Transferring Activities: Updating Human Behavior Analysis
[63] Continuous Energy Minimization for Multitarget Tracking
[64] Globally Optimal Multi-Target Tracking on a Hexagonal Lattice
[65] Multiple Object Tracking Using K-Shortest Paths Optimization
[66] Disjoint Paths in a Network
[67] Globally-Optimal Greedy Algorithms for Tracking a Variable Number of Objects
[68] Global Data Association for Multi-Object Tracking Using Network Flows
[69] Tracking Multiple People Under Global Appearance Constraints
[70] The Way They Move: Tracking Multiple Targets with Similar Appearance
[71] Multi-Commodity Network Flow for Tracking Multiple People
[72] Connected Component Model for Multi-Object Tracking
[73] Tracking-by-Counting: Using Network Flows on Crowd Density Maps for Tracking Multiple Targets
[74] Almost Unsupervised Learning for Dense Crowd Counting
[75] From Semi-Supervised to Transfer Counting of Crowds
[76] Learning to Count in the Crowd from Limited Labeled Data
[77] Semi-Supervised Crowd Counting via Self-Training on Surrogate Tasks
[78] Gaussian Process Density Counting from Weak Supervision
[79] DecideNet: Counting Varying Density Crowds through Attention Guided Detection and Density Estimation
[80] PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume
[81] Generative Adversarial Nets
[82] Wasserstein GAN
[83] Optical Flow Dataset and Benchmark for Visual Crowd Analysis
[84] Privacy Preserving Crowd Monitoring: Counting People Without People Models or Tracking
[85] Learning to Count with CNN Boosting
[86] Perspective-Guided Convolution Networks for Crowd Counting
[87] Spatial Transformer Networks
[88] Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
[89] The Power of Ensembles for Active Learning in Image Classification
[90] Variational Adversarial Active Learning

Pascal Fua is a Professor of Computer Science at EPFL, Switzerland. His research interests include shape and motion reconstruction from images, analysis of microscopy images, and Augmented Reality. He is an IEEE Fellow and has been an Associate Editor of the IEEE journal Transactions for Pattern Analysis and Machine Intelligence.