Deepfakes are becoming increasingly ubiquitous, particularly in facial manipulation. Numerous researchers and companies have released multiple datasets of face deepfakes labeled to indicate different methods of forgery. However, the naming of these labels is often arbitrary and inconsistent, leading most researchers to use only one of the datasets in their work. In practical applications and in traceability research, however, researchers must use multiple datasets. In this study, we employ models to extract forgery features from various deepfake datasets and utilize the K-means clustering method to identify datasets with similar feature values. We analyze the feature values using the Calinski Harabasz Index method. Our findings reveal that datasets with the same or similar labels in different deepfake datasets exhibit different forgery features. We propose the KCE system to solve this problem, which combines multiple deepfake datasets according to feature similarity. We analyzed four groups of test datasets and found that, when faced with unknown data types, the model trained on KCE-combined data achieved a Calinski Harabasz score 42.3% higher than the model trained on data combined by forgery name. Furthermore, it is 2.5% higher than the model using all data, although the latter has more training data. This shows that our method improves the generalization ability of the model. This paper introduces a fresh perspective for effectively evaluating and utilizing diverse deepfake datasets and conducting deepfake traceability research.
Measuring the relevance of each deepfake dataset is crucial. To address this problem, we establish the KCE-System. It uses the Xception model
[18] as a forgery feature extractor that maps various deepfake images into the feature space. Then, we use PCA for dimensionality reduction and the K-means method for clustering. We use these clustered datasets to retrain the Xception model and use the Calinski Harabasz Index
[19] to judge the models' performance. To improve the credibility of the experimental results, we repeat part of the experiments on the Frequency in Face Forgery Network (F3-Net)
[9] and the Residual Neural Network (ResNet) [20]. We also combine these deepfake datasets based on forgery method labels as a control group.
Our experiments prove that some forgery category labels of the same name differ significantly across different datasets. When the forgery method of the deepfake dataset is unknown, the KCE-System can achieve better generalization performance by training on merged datasets based on closer feature distances.
Methods Based on Spectral Features
Chen et al. [44] proposed a forgery detection algorithm that combines spatial and frequency domain features using an attention mechanism. The method uses a convolutional neural network and an attention mechanism to extract spatial domain features; after the Fourier transform, the frequency domain features are extracted, and finally these features are fused for classification. Qian et al. proposed a network structure called F3-Net (Frequency in Face Forgery Network) and designed a two-stream collaborative learning framework to learn the frequency-aware adaptive image decomposition branch and the image detail frequency statistics branch. The method has a significant lead over other methods on low-quality video. Liu et al. proposed a method based on Spatial Phase Shallow Learning (SPSL), which combines spatial images and phase spectra to capture the upsampling artifacts of facial forgery. For forgery detection tasks, local texture information is more critical than high-level semantic information, and by making the network shallower, the network focuses more on local regions. Li et al. [46] proposed a learning framework based on frequency-aware discriminative features and designed a single-center loss function (SCL), which compresses only the intra-class variation of real faces while enhancing the inter-class variation in the embedding space. In this way, the network can learn more discriminative features with less optimization difficulty. Spectral methods are currently a primary deepfake traceability approach: cumulative upsampling causes apparent changes in the frequency domain, and minor forgery defects and compression errors can be well described in this domain, so this information can be used to identify fake videos. Spectrum-based methods also have certain advantages in generalization because most existing image and video compression methods are likewise related to the frequency domain.
Methods Based on Generative Adversarial Network Inherent Traces
These methods [36][37][38] are another primary family of deepfake traceability methods. The fake faces generated by generative adversarial networks have distinct traces and texture information compared to real-world photographs. Approaches include using an Expectation Maximization algorithm to extract local features that model the convolutional generation process, as well as methods based on global image textures and globally consistent fingerprints.
Methods based on the frequency domain and model fingerprints provide traceability for different forgery methods. Although researchers claim high accuracy rates in identifying and tracing forgery methods, they typically use only a single dataset for research. This approach reduces the comprehensiveness of traceability and the model's generalization ability. Therefore, researchers need to evaluate the similarity and correlation between different forgery methods across various datasets.
However, this presents a significant challenge. Unlike typical computer vision datasets, deepfake datasets' labels are based on technical methods rather than human concepts, making it impossible for humans to identify and evaluate them directly. The more severe problem is that the forgery-method labels used in various deepfake datasets are entirely arbitrary: some labels are based on implementation technology, while others are based on forgery modes. For example, many datasets have the label "DeepFakes." The irregularity and ambiguity of these labeling methods make it difficult to fully utilize the forged data of the various deepfake datasets, and some deepfake datasets do not indicate specific forgery methods at all.
We trained an Xception model as a feature extractor, using various deepfake datasets and real datasets as training sets. When examining different deepfake datasets in feature space, we observe that specific forgery methods are clustered together, while some forgery methods with similar names are separated, as shown in Figure 2. For example, one of the FOMM forgery methods is very close to the FaceSwap methods but far from the other FOMM forgery methods. This shows that forgery methods with the same name can have a significant feature gap across datasets, while related forgery methods have relatively similar features. The same trend can be seen in the Cosine Similarity results in Figure 3. Therefore, researchers need to evaluate the similarity between samples in each dataset to make full use of these datasets.
We assume that incorporating datasets that use the same forgery methods will beneficially enhance the model's performance. Conversely, merging different datasets or dividing a similar dataset into separate subsets may adversely affect the model's performance. Based on the above assumptions, we developed the K-means and Calinski Harabasz Evaluation System. For the sake of simplicity, we refer to it as the KCE-System.
Figure 3. Similarity matrices for forgery methods in each deepfake dataset.
The KCE-System incorporates unsupervised learning. The system divides the deepfake datasets into training sets and evaluation sets. It then trains a deepfake recognition model on the training sets and extracts high-dimensional vectors from the middle layer of the model. After dimensionality reduction, the system uses the K-means clustering method to merge the various deepfake datasets. Using these merged datasets, the system then trains new Xception, F3-net, and ResNet models. The trained models are used to extract 2048-dimensional or 512-dimensional vectors from the evaluation set as feature values. Finally, the system applies the Calinski Harabasz Index method to the feature values after dimensionality reduction to evaluate the models' performance, as shown in Figure 4. Next, we introduce the main parts of the system in detail.
Figure 4. Overview of the KCE-System. The proposed architecture consists of two parts: the cluster section and the evaluation section.
Theoretically, when a model reaches a high classification accuracy for various categories of deepfake data, the model can extract the corresponding deepfake features. We use the trained deepfake recognition model as a feature extractor, as the accuracy of these models in deepfake multi-classification tasks can reach more than 90%. For a comprehensive evaluation, we provide several representative models of different sizes.
The Xception [18] is a traditional CNN model based on separable convolutions with residual connections. The model has shown high accuracy when detecting deepfake videos. For the training of the feature extractor, the forgery method indicated in each dataset is used as a pseudo-label for multi-class training of the Xception; the training accuracy reaches 94%, and the model converges after three training rounds. We use the trained model as the main feature extractor on the data of the 27 categories of deepfake datasets, taking the 2048-dimensional output of Xception's global pooling layer as each sample's feature. Considering the trade-off between performance and efficiency, we select Xception as the baseline model.
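As an illustration of this extraction step, the pooled features can be read out as follows. This is a minimal sketch using the timm package's pretrained Xception as a stand-in for our fine-tuned weights; `extract_features` is an illustrative helper, not released code.

```python
import timm
import torch

# Xception backbone; in the real system this is the model fine-tuned for
# 27-way forgery classification (timm's pretrained weights are a stand-in).
model = timm.create_model("xception", pretrained=True)
model.reset_classifier(0)  # drop the classifier head: forward() now returns
model.eval()               # the 2048-D global-pooling features

@torch.no_grad()
def extract_features(batch: torch.Tensor) -> torch.Tensor:
    """Return 2048-D global-pooling features for a batch of face crops."""
    return model(batch)

dummy = torch.randn(4, 3, 299, 299)   # Xception's native input size
print(extract_features(dummy).shape)  # torch.Size([4, 2048])
```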
The ResNet [20] is an improvement over the traditional deep neural network architecture that solves the problem of vanishing gradients and allows the training of much deeper networks. One of the main advantages of ResNet is its ability to handle deeper architectures, which leads to better accuracy in image classification tasks. Another notable model in facial forgery detection is the F3-Net, as proposed in [9]. This model leverages frequency domain analysis and comprises two branches: one learns subtle forgery patterns via Frequency-aware Image Decomposition (FAD), and the other extracts high-level semantics from Local Frequency Statistics (LFS). Extensive experiments have demonstrated the effectiveness of the F3-Net in identifying low-quality forgery videos. Given the widespread applicability of the ResNet model in various computer vision fields and the unique position of the F3-Net in the domain of deepfake detection, we also select these two models as feature extractors and test them on half of the test groups, to minimize any interference of the model architecture itself with the experimental results.
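The ResNet branch is analogous; for ResNet-18/34 the pooled feature is 512-dimensional (matching the 512-dimensional vectors mentioned above). A torchvision sketch, again with pretrained weights standing in for our fine-tuned model:

```python
import torch
from torchvision import models

# ResNet-18 yields 512-D pooled features (ResNet-50 would yield 2048-D);
# in the real system the network is first fine-tuned on the training sets.
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()  # expose the global-average-pooled features
resnet.eval()

with torch.no_grad():
    feats = resnet(torch.randn(4, 3, 224, 224))
print(feats.shape)  # torch.Size([4, 512])
```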
In this field, clustering algorithms such as K-means [50], Gaussian Mixture, and DBSCAN [51] are commonly used. However, the DBSCAN algorithm is ineffective in controlling the number of clusters formed, and in our system we need to control the number of clusters for easy comparison with the data merged by name. The Gaussian Mixture algorithm is mainly designed for non-spherical clusters, while we focus more on the distance between categories in feature space, which favors spherical clustering. Therefore, we chose the K-means clustering algorithm for our system.
The K-means algorithm uses Euclidean distance for clustering, but it can fail in high dimensions, so a dimensionality reduction method must be used. PCA [52] and t-SNE [53] are the two methods we compared. PCA is stable but retains less information when reduced to two or three dimensions. When reducing to 64 dimensions using PCA, the explained variance ratio preserved is 95.2%. From Figure 5, we can see that it effectively preserves most of the information needed for clustering. The t-SNE supports low-dimensional reduction for visual analysis but has poor stability.
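The retained-variance figure can be checked directly with scikit-learn; in this sketch, `features` is a placeholder for the N×2048 matrix of extracted Xception features:

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder for the (N, 2048) matrix of Xception features; on random data
# the retained variance will differ from the 95.2% we observe on real features.
features = np.random.randn(5000, 2048).astype(np.float32)

pca = PCA(n_components=64)
reduced = pca.fit_transform(features)  # (N, 64)

# Cumulative explained-variance ratio = fraction of information retained.
print(f"retained variance: {pca.explained_variance_ratio_.sum():.1%}")
```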
We utilize five different dimensionality reduction settings to determine the most appropriate clustering dimension. We apply the t-SNE algorithm to reduce the high-dimensional feature data to 2 dimensions and use the PCA algorithm to reduce the dimensionality to 32, 64, and 128 dimensions. We also keep the 2048-dimensional original features without applying any dimensionality reduction. We then perform K-means clustering on each of these representations individually, as sketched below.
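A sketch of these five variants, assuming `features` again stands for the extracted feature matrix and `n_clusters` matches the target group size (19, or 17 for Group 2):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

features = np.random.randn(5000, 2048).astype(np.float32)  # placeholder
n_clusters = 19  # 17 for Group 2

variants = {
    "raw 2048D": features,
    "t-SNE 2D":  TSNE(n_components=2).fit_transform(features),
    "PCA 32D":   PCA(n_components=32).fit_transform(features),
    "PCA 64D":   PCA(n_components=64).fit_transform(features),
    "PCA 128D":  PCA(n_components=128).fit_transform(features),
}

# One K-means run per dimensionality; the cluster assignments are later
# used to merge the training categories.
labels = {name: KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(x)
          for name, x in variants.items()}
```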
Figure 5. Illustration of dimensionality reduction using PCA. After using PCA to reduce the dimensionality, the t-SNE method is used to reduce the result to two dimensions for display (different colors indicate different forgery methods).
We select four categories of deepfake datasets not involved in the training and clustering process as evaluation sets. We extract the Xception, ResNet, and F3-net models' global pooling layer output and use the PCA algorithm to reduce the data to 128 dimensions. An example of the results is shown in Figure 6, demonstrating a clear distinction between the four unknown deepfake categories. This figure indicates that our model has indeed learned the relevant characteristics for identifying deepfakes.
Figure 6. The model output on the evaluation sets, reduced to three dimensions using the t-SNE method for display.
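A sketch of this visualization step, where `eval_features` is a placeholder for the models' pooled outputs:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

eval_features = np.random.randn(4000, 2048)             # placeholder features
reduced = PCA(n_components=128).fit_transform(eval_features)
embedded = TSNE(n_components=3).fit_transform(reduced)  # 3-D for display
print(embedded.shape)  # (4000, 3)
```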
Evaluating the performance of models trained with unreliably labeled or unlabeled data is difficult. We cannot use precision and recall because we have no way to determine whether each sample is classified correctly. To address this issue, we utilize the Calinski Harabasz Index [19], introduced by Calinski and Harabasz in 1974, as an effective evaluation method. This index is defined in Equation (1) as the ratio of the between-cluster dispersion to the within-cluster dispersion over all clusters. Therefore, the Calinski Harabasz Index can be used to evaluate the models, with higher scores indicating that a model performs better on the test datasets.
For a set of data $E$ of size $n_E$ which has been clustered into $k$ clusters, the Calinski Harabasz score $s$ is defined as the ratio of the between-cluster dispersion mean and the within-cluster dispersion, as shown in Equation (1):

$$s = \frac{\mathrm{tr}(B_k)}{\mathrm{tr}(W_k)} \times \frac{n_E - k}{k - 1} \qquad (1)$$

where $\mathrm{tr}(B_k)$ is the trace of the between-group dispersion matrix and $\mathrm{tr}(W_k)$ is the trace of the within-cluster dispersion matrix, defined by:

$$W_k = \sum_{q=1}^{k} \sum_{x \in C_q} (x - c_q)(x - c_q)^T \qquad (2)$$

$$B_k = \sum_{q=1}^{k} n_q (c_q - c_E)(c_q - c_E)^T \qquad (3)$$

Here, $C_q$ represents the set of points in cluster $q$, $c_q$ represents the center of cluster $q$, $c_E$ represents the center of $E$, and $n_q$ represents the number of points in cluster $q$.
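Equations (1)–(3) translate directly into a few lines of NumPy; scikit-learn's built-in `calinski_harabasz_score` computes the same quantity, as the sketch below verifies:

```python
import numpy as np
from sklearn.metrics import calinski_harabasz_score

def calinski_harabasz(X: np.ndarray, labels: np.ndarray) -> float:
    """Calinski Harabasz score s per Equations (1)-(3)."""
    n, k = len(X), len(np.unique(labels))
    c_E = X.mean(axis=0)                              # center of the data set E
    tr_W = tr_B = 0.0
    for q in np.unique(labels):
        C_q = X[labels == q]                          # points in cluster q
        c_q = C_q.mean(axis=0)                        # center of cluster q
        tr_W += ((C_q - c_q) ** 2).sum()              # trace of W_k, Eq. (2)
        tr_B += len(C_q) * ((c_q - c_E) ** 2).sum()   # trace of B_k, Eq. (3)
    return (tr_B / tr_W) * (n - k) / (k - 1)          # Eq. (1)

X = np.random.randn(300, 64)
labels = np.random.randint(0, 4, size=300)
assert np.isclose(calinski_harabasz(X, labels),
                  calinski_harabasz_score(X, labels))
```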
When using the Calinski Harabasz Index to evaluate clustering quality, it can be observed that the elbow points of the Calinski Harabasz Index tend to occur at a cluster number of around 3 or 4, as depicted in Figure 7. The results obtained from the Calinski Harabasz Index are consistent with the number of forged method categories in the actual evaluation set. This suggests that the Calinski Harabasz Index is a valuable method to assess the model's ability to identify new categories of deepfakes. When other training parameters remain the same, if a model's performance is outstanding, it indicates that the quality of the training set is excellent, with fewer incorrect labels. In other words, we effectively improve the reliability of these classification labels in the training set. Therefore, the Calinski Harabasz Index can effectively evaluate the correlation of these unreliable classification labels in our system.
Figure 7. Using the Calinski Harabasz Index to evaluate clustering quality; the elbow point lies at about 3 to 4 clusters.
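The elbow analysis itself is a simple sweep over candidate cluster counts; a sketch with `eval_features` standing in for the PCA-reduced evaluation-set features:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

eval_features = np.random.randn(2000, 64)  # placeholder for real features

# The k at which the CH curve bends (the elbow) estimates how many distinct
# forgery types the model separates; on our evaluation sets this is 3-4.
for k in range(2, 9):
    pred = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(eval_features)
    print(k, round(calinski_harabasz_score(eval_features, pred), 2))
```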
In this section, we first introduce the overall experimental setup. Our equipment includes four NVIDIA GeForce RTX 2080 Ti GPUs. We use PyTorch to train and evaluate models, OpenCV for image data preprocessing, and the Scikit-learn algorithm library for data analysis. We extract 620,000 fake face images from 10 deepfake datasets and train 40 models, including 32 Xception, 4 F3-net, and 4 ResNet models. The entire data preparation and experimental process spanned approximately three months.
We select 31 dataset categories labeled with forgery method names from CelebDF, DeeperForensics1.0, DeepFakeMnist+, FaceForensics++, ForgeryNet, and FakeAVCeleb; see Table 1 for details. We use a random method to divide the 31 deepfake categories into two sets, where the training set contains 27 categories and the evaluation set contains four categories. We repeat this division four times to obtain four groups of training and evaluation sets; see Table 2 for details.
We extract the frame data of each category according to the instructions of the relevant dataset and use the face detection model Retinaface [54] to intercept the face area. Then, we increase the side length of the detected area by a factor of 1.25. Finally, we randomly select 20,000 fake faces of each category and save these images as test data in PNG format.
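The cropping step can be sketched as follows; `detect_face` is a placeholder for the RetinaFace detector and is assumed to return one (x1, y1, x2, y2) box per frame:

```python
import cv2
import numpy as np

def detect_face(img: np.ndarray) -> tuple[int, int, int, int]:
    """Placeholder for RetinaFace detection: returns (x1, y1, x2, y2)."""
    raise NotImplementedError

def crop_face(img: np.ndarray, scale: float = 1.25) -> np.ndarray:
    """Crop the detected face with its side length enlarged by `scale`."""
    x1, y1, x2, y2 = detect_face(img)
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2   # box center
    side = max(x2 - x1, y2 - y1) * scale    # enlarged square side
    x1, y1 = int(cx - side / 2), int(cy - side / 2)
    x2, y2 = int(cx + side / 2), int(cy + side / 2)
    h, w = img.shape[:2]                    # clamp to image bounds
    return img[max(0, y1):min(h, y2), max(0, x1):min(w, x2)]

frame = cv2.imread("frame_000123.jpg")            # illustrative file name
cv2.imwrite("face_000123.png", crop_face(frame))  # crops saved as PNG
```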
Table 1. The 31 deepfake dataset categories used in the experiments, with their synthesis methods; "evaluate" marks the categories held out for evaluation in each group.
Datasets | Synthesis Method | Count | Group1 | Group2 | Group3 | Group4 |
CelebDFv1 | FaceSwapPRO | 20,000 | | | | |
CelebDFv2 | FaceSwapPRO | 20,000 | | | | evaluate |
DeeperForensics | DF-VAE | 20,000 | | evaluate | | |
DeepFakeMnist+ | FOMM | 20,000 | | | | |
DeepfakeTIMIT | FaceSwap-GAN | 20,000 | | | evaluate | |
FaceForensics++ DeepFakeDetection | FaceSwap | 20,000 | | | | |
Faceforensics++ | DeepFakes | 20,000 | | | | |
Faceforensics++ | Face2Face | 20,000 | | evaluate | | |
Faceforensics++ | FaceShifter | 20,000 | evaluate | | | |
Faceforensics++ | FaceSwap | 20,000 | | | | |
Faceforensics++ | NeuralTextures | 20,000 | | | | evaluate |
FakeAVCeleb | FaceSwap | 20,000 | evaluate | | | |
FakeAVCeleb | FSGAN | 20,000 | | | | |
FakeAVCeleb | Wav2Lip | 20,000 | | evaluate | | |
ForgeryNet | ATVG-Net | 20,000 | evaluate | | | |
ForgeryNet | BlendFace | 20,000 | | | evaluate | |
ForgeryNet | DeepFakes | 20,000 | | | | |
ForgeryNet | DeepFakes-StarGAN-Stack | 20,000 | | | | |
ForgeryNet | DiscoFaceGAN | 20,000 | | evaluate | | |
ForgeryNet | FaceShifter | 20,000 | | | | |
ForgeryNet | FOMM | 20,000 | evaluate | | | |
ForgeryNet | FS-GAN | 20,000 | | | | evaluate |
ForgeryNet | MaskGAN | 20,000 | | | | |
ForgeryNet | MMReplacement | 20,000 | | | | |
ForgeryNet | SC-FEGAN | 20,000 | | | | |
ForgeryNet | StarGAN-BlendFace-Stack | 20,000 | | | | |
ForgeryNet | StarGAN2 | 20,000 | | | evaluate | |
ForgeryNet | StyleGAN2 | 20,000 | | | | |
ForgeryNet | Talking_Head_Video | 20,000 | | | | evaluate |
Patch-wise_Face_Image_Forensics | PROGAN | 20,000 | | | evaluate | |
Patch-wise_Face_Image_Forensics | StyleGAN2 | 20,000 | | | | |
Table 2. The table displays four sets of experimental data, each containing four evaluation datasets, with the remaining 27 datasets designated for training purposes.
Group | Evaluation Datasets |
1 | Faceforensics++_FaceShifter, FakeAVCeleb_FaceSwap, ForgeryNet_ATVG-Net, ForgeryNet_FOMM |
2 | DeeperForensics_DF-VAE, Faceforensics++_Face2Face, FakeAVCeleb_Wav2Lip, ForgeryNet_DiscoFaceGAN |
3 | DeepfakeTIMIT_FaceSwap-GAN, ForgeryNet_BlendFace, ForgeryNet_StarGAN2, Patch-wise-Face-Image-Forensics_PROGAN |
4 | CelebDFv2_FaceSwapPRO, Faceforensics++_NeuralTextures, ForgeryNet_FS-GAN, ForgeryNet_Talking_Head_Video |
To verify our conjecture that there is large randomness in the naming of forgery methods in deepfake datasets, we merged the training set data according to the principle of identical or similar forgery method names and used the result as a control group. We use the merging rules shown in Table 3.
We randomly sample corresponding proportions of data from each merged dataset and reassemble them into 20,000 images per category. After merging, Groups 1, 3, and 4 each have a total of 19 training categories, and Group 2 has a total of 17.
Table 3. The merging rules used to combine training categories with the same or similar forgery method names.
Rule Number | Merge Categories |
1 | CelebDFv1_FaceSwapPRO, CelebDFv2_FaceSwapPRO |
2 | DeepFakeMnist+_FOMM, ForgeryNet_FOMM |
3 | DeepfakeTIMIT_FaceSwap-GAN, DeepFakeDetection_FaceSwap, FaceForensics++_FaceSwap, FakeAVCeleb_FaceSwap |
4 | Faceforensics++_DeepFakes, ForgeryNet_DeepFakes |
5 | FakeAVCeleb_FSGAN, ForgeryNet_FS-GAN |
6 | ForgeryNet_DeepFakes-StarGAN-Stack, ForgeryNet_StarGAN-BlendFace-Stack, ForgeryNet_StarGAN2 |
7 | ForgeryNet_StyleGAN2, Patch-wise_Face_Image_Forensics_StyleGAN2 |
One of the purposes of our experiment is to determine the appropriate dimensionality for K-means clustering in this type of problem. We need to ensure that we do not lose too many classification features through excessive dimensionality reduction, nor cause the K-means algorithm to fail due to excessive dimensionality. Since we chose the Xception model as the baseline, we use the PCA algorithm to reduce the Xception model's 2048-dimensional output to 128, 64, and 32 dimensions; we also reduce it to two dimensions using the t-SNE algorithm. For the F3-net and ResNet models, we only use the PCA algorithm to reduce the output feature values to 64 dimensions, since we only need to verify that our method applies to these models.
In the previous section, we created training data for the control group based on name mergers. To facilitate comparison, we ensure that the number of categories of the experimental data for each group is identical. Therefore, we use the K-means clustering algorithm to cluster these training sets based on the specified number of clusters. Groups 1, 3, and 4 have 19 clusters, while Group 2 has 17 clusters. Finally, we use the results of the K-means clustering algorithm to combine the training set.
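The merging itself is bookkeeping: categories assigned to the same cluster are pooled into one training category. A sketch, where each of the 27 training categories is represented by the mean of its reduced feature vectors (an illustrative simplification):

```python
import numpy as np
from collections import defaultdict
from sklearn.cluster import KMeans

names = [f"category_{i:02d}" for i in range(27)]  # 27 training categories
category_features = np.random.randn(27, 64)       # placeholder mean features

labels = KMeans(n_clusters=19, n_init=10, random_state=0).fit_predict(category_features)

merged = defaultdict(list)                        # cluster id -> member categories
for name, cluster in zip(names, labels):
    merged[cluster].append(name)

# Each merged category is rebuilt by sampling equal proportions from its
# members so that every merged category again totals 20,000 images.
for cluster, members in sorted(merged.items()):
    print(cluster, members)
```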
We train Xception, F3-net, and ResNet models using training data merged by the K-means clustering results and by category names, respectively. For comparison, we also train the same models using the original training set without merging.
To obtain feature vectors for the evaluation set, we use these models as feature extractors and apply PCA to reduce the features to 64 dimensions. We then calculate the Calinski Harabasz Index; please refer to Table 4 for the results.
Table 4. The Calinski Harabasz Index results. Italicized and underlined marks indicate the best result for that group of tests.
Model | Train Data Merge by | Group 1 CH | Group 2 CH | Group 3 CH | Group 4 CH | Avg CH |
Xception | Without merging | 128.02825 | 117.448499 | 68.6994684 | 93.5723306 | 101.937137 |
Xception | Name | 84.0837009 | 73.8172086 | 74.579957 | 61.2651927 | 73.4365148 |
Xception | K-means on 2048D | 124.241305 | 105.070655 | 76.2218761 | 84.212058 | 97.4364735 |
Xception | K-means on t-SNE 2D | 103.627829 | 87.1461055 | 66.6143003 | 76.5264273 | 83.4786656 |
Xception | K-means on PCA 64D | 137.241584 | 101.192327 | 85.2535376 | 94.2137508 | 104.4753 |
Xception | K-means on PCA 128D | 101.197038 | 101.502163 | 74.8441997 | 86.6358341 | 91.0448087 |
Xception | K-means on PCA 32D | 114.247635 | 89.1934801 | 62.3932779 | 75.9596147 | 85.4485019 |
F3-net | Name | 62.6592813 | 65.6510862 | 64.1551837 | ||
F3-net | K-means on PCA 64D | 85.361067 | 72.018708 | 78.6898875 | ||
ResNet | Name | 42.895651 | 47.9716533 | 45.4336522 | ||
ResNet | K-means on PCA 64D | 49.7529116 | 54.0786263 | 51.915769 |
The Calinski Harabasz Index of the model trained on the data merged by K-means is 42.27% higher than that of the model trained on data pooled by name. Furthermore, these scores are slightly higher than those obtained by directly using the original training set, even though the original set contains more data. The Calinski Harabasz Index is likewise higher, by 22.66% and 14.27%, for the F3-net and ResNet models, respectively. This proves that an appropriate combination of deepfake datasets with similar features improves the model's generalization to unknown forgery categories.
Compared with the other three groups, the results of Group 2 are different: its Calinski Harabasz Index is lower than that obtained by training on the original data. This is because Group 2 has only 17 categories after merging and thus fewer training samples than the other groups; the greater information loss degrades the model's performance.
This article starts from the traceability requirements for deepfake methods. When using multiple deepfake datasets, we found many different deepfake datasets using the same or similar label names, and confusion arises in how to use these datasets comprehensively. We leverage the Xception model to extract fake features from the deepfake datasets. Subsequently, PCA and t-SNE methods are employed to reduce dimensionality and perform K-means clustering. We then combine the datasets based on the clustering results and use the combined data to train Xception, F3-net, and ResNet models, respectively. Finally, we use these models to extract features from the evaluation set and evaluate the generalization ability of each model using the Calinski Harabasz Index as the evaluation metric. Our contributions are mainly three-fold:
1. We prove that the labels of various deepfake datasets contain much randomness. If researchers use more than two deepfake datasets, combining them based only on forgery labels will hurt the model's performance.
2. We propose the K-means and Calinski Harabasz evaluation system to evaluate the similarity of various deepfake datasets, laying the foundation for future researchers to use them comprehensively.
3. The generalization ability of a deepfake recognition model in the face of new samples can be improved by merging deep forgery datasets with high feature similarity.
Our research is only a helpful exploration toward fully using the various deep forgery datasets from the source of deep forgery methods. We mainly revealed the arbitrariness of label naming in deepfake datasets and the resulting troubles in the traceability of forgery methods; there is still a long way to go to solve this problem completely. In addition, different image compression algorithms and image resolutions significantly impact the fake features of deepfake datasets, which seriously interferes with the model's extraction of fake features and poses a significant challenge to the identifiability and traceability of deepfakes. We are committed to conducting further research to address these challenges effectively.
Furthermore, to ensure the healthy development of the field, we appeal to research institutions, companies, and universities to standardize the label nomenclature of deepfake datasets. Additionally, legislation should require digital watermarking and blockchain technology so that deepfake content can be accurately traced to its source. We hope our research will inspire future work in this field.