title: The DeepFake Detection Challenge Dataset
authors: Dolhansky, Brian; Bitton, Joanna; Pflaum, Ben; Lu, Jikuo; Howes, Russ; Wang, Menglin; Ferrer, Cristian Canton
date: 2020-06-12

Deepfakes are a recent off-the-shelf manipulation technique that allows anyone to swap two identities in a single video. In addition to Deepfakes, a variety of GAN-based face swapping methods have also been published with accompanying code. To counter this emerging threat, we have constructed an extremely large face swap video dataset to enable the training of detection models, and organized the accompanying DeepFake Detection Challenge (DFDC) Kaggle competition. Importantly, all recorded subjects agreed to participate in, and have their likenesses modified during, the construction of the face-swapped dataset. The DFDC dataset is by far the largest currently available public face swap video dataset, with over 100,000 total clips sourced from 3,426 paid actors, produced with several Deepfake, GAN-based, and non-learned methods. In addition to describing the methods used to construct the dataset, we provide a detailed analysis of the top submissions from the Kaggle contest. We show that, although Deepfake detection is extremely difficult and still an unsolved problem, a Deepfake detection model trained only on the DFDC can generalize to real "in-the-wild" Deepfake videos, and such a model can be a valuable tool when analyzing potentially Deepfaked videos. Training, validation, and testing corpora can be downloaded from https://ai.facebook.com/datasets/dfdc.

Swapping faces in photographs has a long history, spanning over one hundred and fifty years [7], as film and digital imagery have a powerful effect on both individuals and societal discourse [15]. Previously, creating convincing fake images or tampered video required specialized knowledge or expensive computing resources [27]. More recently, a new technology called Deepfakes has emerged [29], one that can produce extremely convincing face-swapped videos. Producing a Deepfake does not require specialized hardware beyond a consumer-grade GPU, and several off-the-shelf software packages for creating Deepfakes have been released. The combination of these factors has led to an explosion in their popularity, both for producing parody videos for entertainment and for use in targeted attacks against individuals or institutions [9].

With the understanding that it is now possible for a member of the general public to automatically create convincing fake face-swapped videos with simple hardware, the need for automated detection methods becomes clear [2]. While digital forensics experts can analyze single, high-impact videos for evidence of manipulation, this cannot scale to reviewing each of the hundreds of thousands of videos uploaded to the Internet or to social media platforms every day. Detecting Deepfakes at scale necessitates scalable methods, and computer vision or multimodal models are particularly suited to this challenge. However, these models require training data, and while it is easy to create a few convincing Deepfakes, producing the hundreds of thousands of Deepfake videos necessary to train such models is usually cost prohibitive.
In order to accelerate advancements in the state of the art of Deepfake detection, we have constructed and publicly released the largest Deepfake detection dataset to date. Our first major contribution is the DeepFake Detection Challenge (DFDC) Dataset. Motivated primarily by the fact that many previously released datasets contained few videos, few subjects, and a limited number of methods, we wanted to release a dataset with a large number of clips of varying quality and a good representation of current state-of-the-art face swap methods. Furthermore, as we observed that many publicly released datasets [14, 17, 18, 23, 30] did not guarantee that their subjects were willing participants or had agreed to have their faces modified, we solicited video data from 3,426 paid actors and actresses speaking in a variety of settings for roughly 15 minutes each. All participants agreed to appear in a dataset in which their faces may be manipulated by a computer vision algorithm. The DFDC Dataset is both the largest currently available Deepfake dataset and one of only a handful of datasets containing footage recorded specifically for use in machine learning tasks (the others being the much smaller Google Deepfake Detection Dataset [6] and the preview version of this dataset [5]).

Beyond building and releasing a dataset, our second major contribution is a now-completed benchmark competition using this data, and the resulting analysis. The benefits of a competition of this size are many. First, the monetary prizes provided a large incentive for experts in computer vision or Deepfake detection to dedicate time and computational resources to training models for benchmarking. Second, hosting a public competition obviates the need for the authors of a paper to train and test a model on a dataset they themselves produced; releasing a dataset and a benchmark simultaneously can introduce bias, as the creators of a dataset have intimate knowledge of the methods used to construct it. Third, gathering thousands of submissions and running them against real Deepfake videos that participants never see paints an extremely accurate picture of the true state of the art in Deepfake detection.

Figure 1: Comparison of current Deepfake datasets. Both axes are shown in log scale; the DFDC is over an order of magnitude larger than any other available dataset, both in terms of the number of frames and the number of videos. Rough boundaries for each dataset "generation" (as given in [18]) are also shown. Overlapping circles do not indicate inclusion; circle sizes are merely a visualization of the number of fake identities present in each dataset.

Due to the nature of the pairwise auto-encoder style model used to produce the majority of Deepfake videos, and due to the limited availability of source footage, previous datasets contain few videos and fewer subjects. Specifically, every pairwise swap between two identities requires retraining a single model, which takes about one day on a single modern GPU. However, as noted in [23], the scale of a dataset in terms of raw training videos or frames is critical to a detector's performance. Li et al. [18] break down previous datasets into two broad categories: first-generation datasets such as DF-TIMIT [17], UADFV [30], and FaceForensics++ DF (FF++ DF) [23], and second-generation datasets such as the Google DeepFake Detection Dataset [6], Celeb-DF [18], and the DFDC Preview Dataset [5].
In general, each generation improves over the previous one by increasing the number of frames or videos by an order of magnitude. However, the datasets in the first two generations all suffer from a small number of swapped identities, which can contribute to overfitting on those particular identities. In all cases, apart from (possibly) FF++ DF, there are fewer than 100 unique identities. The FF++ DF dataset indexes by video sequence rather than by ID, so it is unclear how many unique IDs appear in that dataset. Finally, we propose a third generation of datasets that not only contain over an order of magnitude more frames and videos than the second generation, at better quality, but that also have agreement from the individuals appearing in them. This generation includes the DFDC as well as the recent DeeperForensics-1.0 (DF-1.0) dataset [14]. We believe that future face-swapped datasets should seek agreement from individual participants in order to be useful to, and ethical for, the research community.

First generation: Datasets in this generation usually contain fewer than 1,000 videos and fewer than 1 million frames. In addition, these datasets generally do not represent that they have the rights to the underlying content or agreement from the individuals appearing in them. UADFV, DF-TIMIT, and FaceForensics++ all contain videos sourced from YouTube and perform face swaps between public individuals. Additionally, due to their small scale, models trained on datasets such as FaceForensics++ usually do not generalize to real Deepfake videos [19].

Second generation: These datasets generally contain between 1,000 and 10,000 videos and between 1 million and 10 million frames, with videos of better perceptual quality than those in the first generation. The Celeb-DF dataset contains over an order of magnitude more data than previous datasets, but its data comes with use restrictions. During this generation, ethical concerns about subjects appearing in a dataset without their consent were publicly raised [26]. In response, the preview version of this dataset, with consenting actors, was released. Shortly thereafter, the similar Google-DFD Dataset was released, which also contained 28 paid actors. However, the datasets in this generation still do not contain enough identities to allow for sufficient detection generalization.

Third generation: The most recent Deepfake datasets, DeeperForensics-1.0 and the DFDC Dataset, contain tens of thousands of videos and tens of millions of frames. The encouraging trend of using paid actors has continued for both of these datasets. However, there are several major differences between DF-1.0 and the DFDC Dataset, and as DF-1.0 is the dataset most similar to the DFDC, these differences are covered in detail. First, although it is claimed that all videos in DF-1.0 contain consenting individuals, the target videos in the released dataset are all sourced from the internet, and whether these videos can be used freely is unclear. In addition, we do not count a fake video and the same fake video with a perturbation as two separate fake videos, as is done in DF-1.0, because these perturbations are trivial to add and do not require large-scale computing resources. Every one of the 100,000 fake videos in the DFDC Dataset is a unique target/source swap; ignoring perturbations, DF-1.0 contains only 1,000 unique fake videos.
Furthermore, there are several notable differences beyond raw numbers that have implications for the generalization performance of models trained on these datasets. The first is the nature of the source data: DeeperForensics contains videos recorded in a controlled studio setting, while the DFDC Dataset contains videos of individuals in indoor and outdoor settings, under a variety of real-world lighting conditions. The methods used to generate our dataset are flexible enough to handle this variety, and do not require frontally-captured videos taken in a studio. In addition, the DF-1.0 training dataset only contains videos produced by a single model proposed by its authors (and thus one that has not been used to create any public Deepfake videos), limiting the applicability of training on this set. Finally, in order to avoid the bias introduced by knowing how our manipulated data was produced, we do not propose any detection model trained specifically on our data, and instead solicited the community to contribute models that run on a hidden test set. Therefore, we constructed both a public test set (containing 4,000 videos) and a private test set (containing 10,000 videos), and included real "in-the-wild" Deepfakes. DF-1.0 uses a hidden test set of 400 videos, but it is not clear how many are real or fake, or even whether they are Deepfaked videos at all. Finally, the perturbations used in DF-1.0 to expand the original set of 1,000 fake videos only contain basic pixel-level distortions such as color changes and Gaussian noise, and none of the semantic distractors that are present in real videos.

Many Deepfake or face swap datasets consist of footage taken in non-natural settings, such as news or briefing rooms. More worryingly, the subjects in these videos may not have agreed to have their faces manipulated. With this understanding, we did not construct our dataset from publicly available videos. Instead, we commissioned a set of videos of individuals who agreed to be filmed, to appear in a machine learning dataset, and to have their face images manipulated by machine learning models. In order to reflect the potential harm of Deepfaked videos designed to target a single, possibly non-public person, videos were shot in a variety of natural settings without professional lighting or makeup (but with high-resolution cameras, as resolution can easily be downgraded). The source data consisted of:

The source videos were pre-processed with an internal face tracking and alignment algorithm, and all face frames were cropped, aligned, and resized to 256x256 pixels. For Deepfake methods, a subsample of 5,000 face frames collected from all videos was used to train models.

Throughout this section, the terms target and source are used. In general, target refers to the base video in which a face will be swapped; source refers to the content used to extract the identity that will be swapped onto the target video. For example, for face swapping, we wish to put the source face onto the target face, resulting in a video identical to the target video, but with the source identity. All of the face-swapped videos in the dataset were created with one of several methods. The set of methods was selected to cover some of the most popular face swapping approaches at the time the dataset was created. In addition, some methods with less-realistic results were included in order to represent low-effort Deepfakes.
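As a rough illustration of the preprocessing described above, the sketch below crops, pads, and resizes detected faces to 256x256 and subsamples frames for training a swap model. OpenCV is assumed; detect_landmarks is a hypothetical stand-in for the internal face tracking and alignment algorithm, which is not public.

```python
import random
import cv2  # OpenCV, assumed here for video decoding and resizing


def crop_faces_from_video(video_path, detect_landmarks, out_size=256, margin=0.3):
    """Crop, pad, and resize each detected face to out_size x out_size pixels."""
    crops = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # detect_landmarks returns one (N, 2) array of landmark points per face.
        for points in detect_landmarks(frame):
            x, y, w, h = cv2.boundingRect(points.astype("int32"))
            pad = int(margin * max(w, h))  # expand the box to keep some context
            x0, y0 = max(x - pad, 0), max(y - pad, 0)
            x1, y1 = min(x + w + pad, frame.shape[1]), min(y + h + pad, frame.shape[0])
            crops.append(cv2.resize(frame[y0:y1, x0:x1], (out_size, out_size)))
    cap.release()
    return crops


def subsample_training_frames(face_crops, n=5000, seed=0):
    """Subsample roughly 5,000 face frames, as used to train each swap model."""
    random.seed(seed)
    return random.sample(face_crops, min(n, len(face_crops)))
```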
The number of videos per method is not equal; the majority of face-swapped videos were created with the Deepfake Autoencoder (DFAE). This choice was made to reflect the distribution of public Deepfaked videos, which are usually created with off-the-shelf software like DeepFaceLab or other public repositories used for creating Deepfakes. For a full description of each model's architecture, please refer to the Appendix.

DFAE: The DFAE model does not have a consistent name or design across public versions, but it is generally structured like a typical convolutional autoencoder, with several small but important differences. First, the model uses one shared encoder but two separately-trained decoders, one for each identity in the swap. Additionally, the shared portion of the encoder extends one layer beyond the bottleneck, and the upscaling functions used are typically PixelShuffle operators, a non-standard, non-learned function that maps channels in one layer to spatial dimensions in the next. This architecture encourages the encoder to learn features common to both identities (such as lighting and pose), while each decoder learns identity-specific features. At inference time, a given input identity in a frame is run through the opposite decoder, thus producing a realistic swap (a minimal architectural sketch is given after the NTH description below). The model is flexible; in the DFDC dataset, we included models that used input/output resolutions of 128x128 and 256x256. All of the images in the banner figure are fake faces, produced by a DFAE at 128x128 input/output resolution.

MM/NN face swap: The next method performed swaps with a custom frame-based morphable-mask model. Facial landmarks in the target image and source image are computed, and the pixels from the source image are morphed to match the landmarks in the target image using the method described in [13]. The eyes and the mouth are copied from the original videos using blending techniques, and spherical harmonics are used to transfer the illumination. This method works best when the target and source face expressions are similar, so we used a nearest-neighbors approach on the frame landmarks in order to find the best source/target face pair. In a video, this approach is immediately evident, but on a frame-by-frame basis the results look more realistic and could fool detectors that only operate on individual frames.

We included three additional models based on methods that incorporate GANs: the Neural Talking Heads (NTH) model, FSGAN, and a method utilizing StyleGAN.

NTH: The NTH [31] model is able to generate realistic talking heads of people in few- and one-shot learning settings. It consists of two distinct training stages: a meta-learning stage and a fine-tuning stage. In the meta-learning stage, the model learns meta-parameters by transforming landmark positions into realistic-looking talking heads given a handful of training images of a person. In the fine-tuning stage, both the generator and the discriminator are initialized with the meta-parameters and quickly converge to a state that generates realistic and personalized images after seeing only a few images of a new person. A pre-trained model was fine-tuned with pairs of videos from the raw DFDC set: the landmark positions are extracted from the driving video and fed into the generator to produce images with the appearance of the person in the other video.
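The sketch below is a minimal PyTorch illustration of the shared-encoder/dual-decoder DFAE structure described above (a shared encoder extending one upscale layer past the bottleneck, per-identity decoders, and PixelShuffle upscaling). Layer widths, depths, and the 128x128 resolution are illustrative assumptions, not the exact DFDC configuration.

```python
import torch.nn as nn


def conv_down(c_in, c_out):
    # Stride-2 convolution: halves the spatial resolution.
    return nn.Sequential(nn.Conv2d(c_in, c_out, 5, stride=2, padding=2), nn.LeakyReLU(0.1))


def upscale(c_in, c_out):
    # PixelShuffle is the non-learned op that maps channels to spatial dimensions:
    # (c_out * 4, H, W) -> (c_out, 2H, 2W).
    return nn.Sequential(nn.Conv2d(c_in, c_out * 4, 3, padding=1),
                         nn.LeakyReLU(0.1),
                         nn.PixelShuffle(2))


class SharedEncoder(nn.Module):
    """Shared across both identities; extends one upscale layer beyond the bottleneck."""

    def __init__(self, latent_dim=512):
        super().__init__()
        self.down = nn.Sequential(conv_down(3, 128), conv_down(128, 256),
                                  conv_down(256, 512), conv_down(512, 1024))
        self.bottleneck = nn.Sequential(nn.Flatten(),
                                        nn.Linear(1024 * 8 * 8, latent_dim),
                                        nn.Linear(latent_dim, 1024 * 4 * 4))
        self.shared_up = upscale(1024, 512)  # the extra shared layer past the bottleneck

    def forward(self, x):                          # x: (B, 3, 128, 128)
        h = self.bottleneck(self.down(x)).view(-1, 1024, 4, 4)
        return self.shared_up(h)                   # (B, 512, 8, 8)


class IdentityDecoder(nn.Module):
    """One decoder per identity, trained only on that identity's aligned faces."""

    def __init__(self):
        super().__init__()
        self.up = nn.Sequential(upscale(512, 256), upscale(256, 128),
                                upscale(128, 64), upscale(64, 32))
        self.to_rgb = nn.Sequential(nn.Conv2d(32, 3, 5, padding=2), nn.Sigmoid())

    def forward(self, z):
        return self.to_rgb(self.up(z))             # (B, 3, 128, 128)


encoder, decoder_a, decoder_b = SharedEncoder(), IdentityDecoder(), IdentityDecoder()
# Training: reconstruct identity A through decoder_a and identity B through decoder_b.
# Swapping: at inference time, faces of A are passed through decoder_b (the "opposite"
# decoder), producing B's identity with A's pose and lighting.
```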
FSGAN: The FSGAN method (fully described in [20]) uses GANs to perform face swapping (and reenactment) of a source identity onto a target video, accounting for pose and expression variations. FSGAN applies an adversarial loss to generators for reenactment and inpainting, and trains additional generators for face segmentation and Poisson blending. For the DFDC, we generated FSGAN swap videos using generator checkpoints trained on the data described in [20], after briefly fine-tuning the reenactment generator for each source identity.

StyleGAN: The StyleGAN [16] method was modified to produce a face swap of a given fixed identity descriptor onto a video by projecting this descriptor onto the latent face space. This process is executed for every frame.

Refinement: Finally, a random selection of videos went through post-processing. Applying a simple sharpening filter to the blended faces greatly increased the perceptual quality of the final video at nearly no additional cost, as shown in Figure 2.

In addition to these face-swapping methods, we also performed audio swapping on some video clips using the TTS Skins voice conversion method detailed in [22]. Video clips were selected for audio swapping independently of whether they were Deepfakes or of which face-swapping method was used. The voice identity used in a swap did not depend on the face identity used in a swap. Although these audio manipulations were not considered "Deepfakes" for the competition, we included them in the set to provide more breadth of manipulation type, and they may be of use in further research.

All subjects were split into one of four sets: training, validation, public test, or private test. The training and validation sets were released publicly, while the public and private test sets were not released, as they were used to rank the final scores of all submissions to the Kaggle contest. Not all possible pairs were trained. For example, an initial set of 857 subjects was selected for the training set, and there are over 360,000 potential pairings within this group. Training all pairs would require almost 1,000 GPU-years, assuming that it takes one day to train a DFAE model on one GPU. Instead, within a set, subjects were paired with others of similar appearance, as this tended to give better results for models like the DFAE. Over 800 GPUs were used to train 6,683 pairwise models (which required 18 GPU-years), as well as the more flexible models such as NTH or FSGAN that only required a small amount of fine-tuning per subject. Finally, a subset of ten-second clips was selected from the output of all models, and the overall distribution of gender and appearance was balanced across all sets and videos.

After inference, all methods produced a cropped image containing the face at 256x256 resolution. However, some methods do not infer details around the face, such as hair or background information. Therefore, we re-blended the face onto the original full-resolution raw frame using several steps, and combined the frames and original audio with ffmpeg. First, we created a face mask using detected landmarks. The mask produced by these landmarks included the forehead region; many off-the-shelf algorithms only use a mask that extends to the eyebrows, which can lead to blending artifacts where "double eyebrows" appear in the final video. Next, we blended the face onto the original frame using this mask and Poisson blending.
However, we did not use Poisson blending over the entire mask, as this would often blend the two identities and create an "average" face rather than a face that looks like the source subject. Instead, we only blended a small region along the edges of the mask. In practice, this was done by using morphological operations to extract the mask border, applying a Gaussian filter to the border pixels, and finally Poisson blending the original and swapped face frames using this transformed mask.

Finally, it is important to note that proper face alignment enhanced the quality of all methods. Having each face aligned at a similar scale allowed models to focus on the details of a face, rather than having to rotate or translate misaligned faces. In addition, consistent alignment reduced face jitter in the final videos, which is usually a telltale sign that a video was Deepfaked. Faces were aligned by using a triangular set of positions formed by the two eyes and the nose, and computing an affine transform that best aligned a given face with these positions (both the border blending and the alignment are sketched in code below).

Training set: The provided training set comprised 119,154 ten-second video clips containing 486 unique subjects. Of these, 100,000 clips contain Deepfakes, which translates to approximately 83.9% of the set being synthetic videos. The Deepfakes were created with the DFAE, MM/NN face swap, NTH, and FSGAN methods. No augmentations were applied to these videos.

Validation: The validation set is the public test set used to compute the public leaderboard positions in the Kaggle competition. This set consisted of 4,000 ten-second video clips, of which 50% (2,000 clips) contained Deepfakes. It used 214 unique subjects, none of whom appeared in the training set. Additionally, the set included one generation method for Deepfakes not seen in training: StyleGAN. Augmentations were applied to roughly 79% of all videos.

Test: The private test set comprised 10,000 ten-second clips. Similar to the public test set, 50% of the clips contained Deepfakes and the other 50% were non-Deepfaked clips. However, unlike the public test set, 50% of this set consists of organic content found on the internet and the other 50% is unseen content from our source video dataset, collected from various sources. We are releasing the DFDC-like portion of the private test set as the final test set. The half of the final evaluation test set consisting of DFDC videos was assembled from 260 unique subjects from the source video dataset that had not been seen before. The data was constructed identically to our public test set, including all listed model types except for StyleGAN. Augmentations were applied to approximately 79% of all videos in the final evaluation test set. New, never-before-seen augmentations were also applied, including a dog mask and a flower crown filter. Training, validation, and testing corpora can be downloaded from https://ai.facebook.com/datasets/dfdc.

Various augmentations such as geometric transforms or distractors were added to the videos in both the public Kaggle test set and the final evaluation test set. We defined two overarching types of augmentations:

1. Distractor: overlays various kinds of objects (including images, shapes, and text) onto a video.
2. Augmenter: applies geometric and color transforms, frame rate changes, etc. to a video.

Augmenters were randomly applied to approximately 70% of the videos and are fairly straightforward transforms.
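The two post-processing steps described in this subsection, border-only Poisson blending and triangle-based alignment, can be sketched as follows. OpenCV is assumed; the landmark template, kernel sizes, and border width are illustrative values rather than the production settings.

```python
import cv2
import numpy as np


def blend_border(swapped, original, face_mask, border_px=15, blur_ksize=11):
    """Composite the swapped face, then Poisson-blend only a band around the mask edge."""
    out = np.where(face_mask[..., None] > 0, swapped, original)        # hard composite
    # Morphological gradient (dilation minus erosion) extracts the mask border.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (border_px, border_px))
    border = cv2.morphologyEx(face_mask, cv2.MORPH_GRADIENT, kernel)
    border = cv2.GaussianBlur(border, (blur_ksize, blur_ksize), 0)     # soften the band
    x, y, w, h = cv2.boundingRect(border)
    center = (x + w // 2, y + h // 2)
    # Poisson blending restricted to the (softened) border band only.
    return cv2.seamlessClone(out, original, border, center, cv2.NORMAL_CLONE)


# Canonical eye/nose positions in a 256x256 aligned crop (illustrative template).
TEMPLATE = np.float32([[88, 100], [168, 100], [128, 160]])  # left eye, right eye, nose tip


def align_face(frame, left_eye, right_eye, nose, out_size=256):
    """Warp a frame so the eyes/nose triangle lands on the canonical template positions."""
    src = np.float32([left_eye, right_eye, nose])
    M = cv2.getAffineTransform(src, TEMPLATE)   # exact affine map between two triangles
    return cv2.warpAffine(frame, M, (out_size, out_size), flags=cv2.INTER_LINEAR)
```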
The following types of augmenters were applied, all at randomly chosen levels: Gaussian blurring, brightening/darkening, adding contrast, altering the frame rate, converting to grayscale, horizontal flipping, audio removal, adding noise, altering the encoding quality, altering the resolution, and rotating. All augmenters were present in both the public and final test sets, except for the grayscale augmenter, which was only present in the final evaluation test set.

About 30% of all videos contained distractors, some more adversarial than others. The simplest distractors overlay random text, shapes, and dots onto each frame of a video; these overlays move from frame to frame, either with consistent movement (i.e., moving horizontally or vertically across the frame) or with random movement. The next subset of distractors overlays images onto each frame of a video. As with the previous subset, the movement can be either consistent or random. Additionally, the same image can move around for the entire video, or a random image can be chosen for every frame. Some of the added images included faces from the DFDC dataset. The last subset of distractors consists of facial filters commonly used on social media platforms; the filters implemented were the dog and flower crown filters. All distractors were present in the final evaluation test set; however, only the text, shapes, and faces distractors were present in the public Kaggle test set. A logo distractor based on Deepfake YouTube channel logos was applied only to benign videos, in order to detect whether any models were overfitting to YouTube data, which was not allowed by the competition's rules. See Figure 3 for visual examples of the augmentations.

In most analyses of a machine learning model's performance, classification metrics such as log loss are reported. In certain settings, this type of metric may be appropriate. For instance, a model trained to classify whether or not a video is a Deepfake may have its input pre-filtered by experts who want to use the model's score as an additional piece of evidence when performing a forensic analysis on a video. However, in many other settings, especially those in which an entity wants to find faked videos within a large set of input videos, detection metrics are more appropriate. In this case, it is important to use metrics that reflect the true prevalence (the percentage of true positives) of a given type of fake video. In realistic distributions, the ratio of Deepfaked videos to real videos may be less than one in a million. With such an extreme class imbalance, accuracy is not as relevant as the precision or false positive rate of a model: the false positives of even an extremely accurate model will outnumber the true positives, thus decreasing the utility of the detection model. If we assume that the ratio of Deepfake to unaltered videos is 1:x in organic traffic and 1:y in a Deepfakes dataset, it is likely that x ≫ y. Given the large number of true negatives, it is important for an automatic detection model to be precise: even a model with high accuracy will produce many more false positives than true positives, simply because of the large class imbalance, which diminishes the utility of an automatic detection model. Metrics like the F_β score (which does not weight false positives) or even the false positive rate (which only measures the tradeoff between true negatives and false positives) do not capture this issue.
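Returning briefly to the frame-level augmenters listed at the start of this subsection, the sketch below applies one of them at a time with illustrative parameter ranges (OpenCV assumed). Frame-rate, encoding-quality, and audio changes operate at the container level (e.g., via ffmpeg) and are omitted here.

```python
import random
import cv2
import numpy as np


def random_augmenter(frame):
    """Apply one randomly chosen frame-level augmenter to a BGR uint8 frame."""
    choice = random.choice(["blur", "brightness", "grayscale", "hflip", "noise", "resolution"])
    if choice == "blur":
        k = random.choice([3, 5, 7, 9])
        return cv2.GaussianBlur(frame, (k, k), 0)
    if choice == "brightness":
        beta = random.uniform(-50, 50)                  # negative darkens, positive brightens
        return cv2.convertScaleAbs(frame, alpha=1.0, beta=beta)
    if choice == "grayscale":
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)   # keep three channels
    if choice == "hflip":
        return cv2.flip(frame, 1)
    if choice == "noise":
        noise = np.random.normal(0, random.uniform(5, 20), frame.shape)
        return np.clip(frame.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    # "resolution": downscale and upscale back to the original size.
    h, w = frame.shape[:2]
    scale = random.uniform(0.25, 0.75)
    small = cv2.resize(frame, (max(1, int(w * scale)), max(1, int(h * scale))))
    return cv2.resize(small, (w, h))
```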
A more indicative metric of how a detection model will perform over a real distribution of videos is precision (or its complement, the false discovery rate). However, it is not practical to construct a dataset that mimics the statistics of organic traffic, due to the sheer number of videos required. We can instead define a weighted precision for a Deepfakes dataset as a very rough approximation of the precision that would be computed by evaluating on a dataset whose size matches the magnitude of organic traffic. Assuming the ratios of unaltered to tampered videos differ between a test dataset and organic traffic by a factor of α = x/y, we define weighted precision wP and (standard) recall R as

wP = TP / (TP + αFP),    R = TP / (TP + FN),

where TP, FP, and FN signify true positives, false positives, and false negatives. This metric differs from the F_β score, as it assigns a weight to false positives instead of false negatives (and ignores true negatives). For an example applied to submissions in the DFDC, see Figure 9.

Figure 4: The average log loss for each augmenter and distractor. As expected, videos containing the noise and faces augmentations were among the most difficult to predict accurately. Surprisingly, however, the flower filter, which only covers a portion of the forehead, was the most difficult distractor, while the dog filter, which covers the nose and mouth, was one of the easier ones to predict. Horizontal flips, blurring, and rotation were among the easiest augmenters, likely because they are common data augmentations. Note that videos containing both augmenters and distractors were excluded from this analysis.

For the purposes of ranking submissions in the DFDC Competition, models were ranked by log loss, as the weighted PR metric can be extremely small and somewhat noisy. In addition, we only needed a relative measure of performance to rank competitors, rather than an absolute measure. However, in Section 6, we report weighted PR metrics in addition to log loss to assess each model's performance as a detector on a realistic distribution of videos.

As we do not introduce any novel architectures, in this section we describe how well different models and methods perform in practice, and show some of the best and worst examples of each method. Unlike other work in this area, we explicitly do not show the worst examples from datasets other than the DFDC Dataset as a comparison, because (a) it is simple to cherry-pick the worst examples from a distribution of data produced by automatic methods, and (b) the perceptual quality of a moving video cannot be demonstrated with individual still frames. Instead, we believe that producing a very large dataset that covers a wide range of output qualities and contains many unique videos is more useful than a hand-tuned, high-quality dataset of limited size.

In general, face swaps produced with DFAE methods were of higher quality over a wider range of videos than swaps produced with GAN-like methods, and required much less fine-tuning. Our hypothesis is that GAN-like methods work well in limited settings with even lighting, such as news rooms, interviews, or controlled-capture videos as in [14], but do not (yet) work well automatically. Beyond ease of use, this may explain why most public Deepfake videos are produced with DFAE methods. Consequently, the majority of videos in the DFDC Dataset were produced with several different DFAE variants. Some qualitative results from each method are shown in Figure 5, with further discussion below.
MM/NN: While this method was able to produce convincing single-frame images, overall the nearest-neighbors approach tended to produce discontinuities in the face. In addition, the mask fitting sometimes failed, as seen in the left image of the top row of Figure 5.

DFAE: The DFAE methods were generally the most flexible and produced the best results of the methods included in this paper. They were able to handle a variety of lighting conditions and individuals with good temporal coherence, even though inference happened on a frame-by-frame basis. Particular areas of weakness were glasses and extreme poses.

FSGAN: FSGAN was able to produce convincing results in scenes with good lighting, but struggled to maintain an even skin tone in darker settings. In addition, it tended to produce flat-looking results. One particular strength of this method is its handling of extreme head poses, as shown in the rightmost image of row 3 of Figure 5.

NTH: Of the GAN-like methods, this method produced the most consistent quality. However, it tended to insert similar-looking eyes across subjects, regardless of the source ID. Like the other GAN methods, NTH did not produce good results in darker settings.

StyleGAN: StyleGAN produced the worst overall results, both at the frame level and at the video level. By far the most common issue in videos was an unconstrained eye gaze: without conditioning on the input gaze, the gaze of the swapped face tended to wander, with the eyes looking in different directions at once. In addition, StyleGAN had trouble matching the illumination in a scene.

The second component of this work involved a large public competition, in which participants submitted Deepfake detection models trained on the full DFDC Dataset. Initially, the public test set was used to rank the public leaderboard while the competition was ongoing; this set only contained DFDC videos with subjects that did not appear in the training set. The "private" test set included real videos, some of which were Deepfakes, in addition to more DFDC videos containing yet more subjects that had not appeared in any previous set. Participants were free to use additional external data, as long as it complied with the policies of the competition. The following analysis presents a comprehensive snapshot of the current performance of Deepfake detectors, and in particular, the performance on the private test set gives an idea of how the best models would perform on a real video distribution.

During the course of the competition, 2,114 teams participated. Teams were allowed to submit two different submissions for final evaluation. Of all of the scores on the private test set, 60% of submissions had a log loss lower than or equal to 0.69, which is roughly the score obtained by predicting a probability of 0.5 for every video. As seen in Figure 8, many submissions were simply random. Good performance on the public test set correlated with good performance on the private test set, as shown in the first image of Figure 6. All final evaluations were performed on the private test set using a single V100 GPU. Submissions had to run over all 10,000 videos in the private test set within 90 hours, but most submissions finished evaluating all videos within 10 total hours, giving a rough average inference time of around 3.6 s per video. After the competition ended, all scores for all submissions were computed over all videos in the private test set.
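As a quick check on the 0.69 figure quoted above: the ranking metric is the per-video binary log loss, which evaluates to ln 2 ≈ 0.693 for a constant prediction of 0.5, regardless of the labels. A minimal sketch:

```python
import numpy as np


def log_loss(y_true, y_pred, eps=1e-15):
    """Binary cross-entropy averaged over videos (the Kaggle ranking metric)."""
    p = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)
    y = np.asarray(y_true, dtype=float)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))


labels = np.random.randint(0, 2, size=10000)       # stand-in labels for a 10,000-clip test set
print(log_loss(labels, np.full(10000, 0.5)))       # 0.6931..., i.e. ln(2)
```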
Shown in Figure 7 are detection metrics computed over the entire private set, over only the DFDC videos, and over only the real videos. As expected, the best models achieved very good detection performance on DFDC videos, as all of the videos in the training set came from this distribution. On real videos there was an expected performance drop, but the best models still achieved an average precision of 0.753 and a ROC-AUC score of 0.734 on real videos alone, which demonstrates that training on the DFDC Dataset allows a model to generalize to real videos. The second and third plots in Figure 6 show the correlation between detection metrics computed on DFDC videos only and scores on real videos only, providing additional evidence that good performance on the DFDC dataset translates to good performance on real videos, and consequently that the DFDC dataset is a valuable resource for training real Deepfake detection models.

Finally, we provide a brief description of the top-5 winning solutions here; a more detailed analysis of each approach can be found in the accompanying reference for each model. The first submission, Selim Seferbekov [24], used MTCNN [33] for face detection and an EfficientNet B-7 [28] for feature encoding. Structured parts of faces were dropped during training as a form of augmentation. The second solution, WM [34], used the Xception [3] architecture for frame-by-frame feature extraction, and a WS-DAN [12] model for augmentation. The third submission, NTechLab [4], used an ensemble of EfficientNets in addition to the mixup [32] augmentation during training. The fourth solution, Eighteen Years Old [25], used an ensemble of frame and video models, including EfficientNet, Xception, ResNet [10], and a SlowFast [8] video-based network; in addition, they tailored a score fusion strategy specifically for the DFDC dataset. Finally, the fifth winning solution, The Medics [11], also used MTCNN for face detection, as well as an ensemble of 7 models, 3 of which were 3D CNNs (which performed better than temporal models), including the I3D model [1].

There are three main areas of future work regarding the DFDC Dataset. First, we would like to perform a large-scale perceptual study of the quality of the videos in the dataset. Due to time constraints and extenuating circumstances surrounding COVID-19, this portion of the project is delayed, but it is ongoing. Second, we would like to expand the overall size of the dataset. Only 960 of the roughly 3,500 original identities were included in the dataset, again due to time and computational constraints. Finally, we are exploring the possibility of releasing the original raw dataset to the research community. One of the main differences from previous Deepfake datasets is that those datasets do not purport to have agreement from the individuals included in them. Releasing all of the roughly 50,000 one-minute videos with some additional annotations will help alleviate this problem, and will hopefully lead to even higher-quality and larger Deepfake datasets in the future.

For the weighted precision analysis, we set α = 100, which weights false positives 100x more than usual when calculating precision and is designed to represent a more realistic distribution of Deepfake videos. Note that performance drops precipitously as more false positives are encountered.
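To make the weighted precision metric concrete, the sketch below computes wP = TP / (TP + αFP) and recall from raw counts with α = 100; the example counts are illustrative, not DFDC results.

```python
def weighted_precision(tp, fp, alpha=100.0):
    """wP = TP / (TP + alpha * FP), with alpha = x / y the ratio between the
    real-traffic and test-set class imbalances (alpha = 100 in this analysis)."""
    return tp / (tp + alpha * fp)


def recall(tp, fn):
    return tp / (tp + fn)


# A detector that looks strong under ordinary precision can still be impractical
# at realistic prevalence (illustrative counts).
tp, fp, fn = 900, 100, 100
print(tp / (tp + fp))              # standard precision: 0.90
print(weighted_precision(tp, fp))  # weighted precision: ~0.083
print(recall(tp, fn))              # recall: 0.90
```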
References

Quo vadis, action recognition? A new model and the kinetics dataset
Deepfakes: A looming challenge for privacy, democracy, and national security
Xception: Deep learning with depthwise separable convolutions
The Deepfake Detection Challenge (DFDC) Preview Dataset
Contributing data to deepfake detection research
Photo tampering throughout history
Slowfast networks for video recognition
Artificial intelligence, deepfakes and a future of ectypes
Deep residual learning for image recognition
See better before looking closer: Weakly supervised data augmentation network for fine-grained visual classification
Facial action transfer with personalized bilinear regression
DeeperForensics-1.0: A Large-Scale Dataset for Real-World Face Forgery Detection
Fake photographs: making truths in photography
A style-based generator architecture for generative adversarial networks
DeepFakes: a New Threat to Face Recognition? Assessment and Detection
Celeb-DF: A Large-scale Challenging Dataset for DeepFake Forensics
Towards deepfake detection that actually works
FSGAN: Subject agnostic face swapping and reenactment
Deepfakes and cheapfakes
TTS skins: Speaker conversion via ASR
FaceForensics++: Learning to detect manipulated facial images. Proc. of IEEE International Conference on Computer Vision (ICCV)
Facial recognition's 'dirty little secret': Millions of online photos scraped without consent
A brief history of motion capture for computer character animation
Rethinking model scaling for convolutional neural networks. CoRR, abs/1905.11946
Media forensics and deepfakes: an overview
Exposing Deep Fakes Using Inconsistent Head Poses
Few-shot adversarial learning of realistic neural talking head models
Mixup: Beyond empirical risk minimization
Joint face detection and alignment using multitask cascaded convolutional networks