Cite as: Chen, Y.; Han, G.; Lin, T.; Liu, X. CAFS for Nasopharyngeal Carcinoma Segmentation. Encyclopedia, 22 July 2022. https://encyclopedia.pub/entry/25429
CAFS for Nasopharyngeal Carcinoma Segmentation

Accurate segmentation of nasopharyngeal carcinoma is essential to the effectiveness of its treatment. CAFS addresses the challenges of this task through three mechanisms: the teacher–student cooperative segmentation mechanism, the attention mechanism, and the feedback mechanism. CAFS can segment the cancer region accurately using only a small amount of labeled nasopharyngeal carcinoma data, achieving an average DSC value of 0.8723 on the nasopharyngeal carcinoma segmentation task.
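The DSC (Dice similarity coefficient) quoted above measures the overlap between a predicted mask and the ground-truth mask. A minimal sketch in plain Python (function and variable names are illustrative, not taken from CAFS):

```python
def dice_coefficient(pred, truth):
    """DSC = 2|P ∩ T| / (|P| + |T|), over sets of foreground pixel coordinates."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Example: predicted mask shares 3 pixels with a 4-pixel ground truth
pred = [(0, 0), (0, 1), (1, 0), (1, 1)]
truth = [(0, 1), (1, 0), (1, 1), (2, 1)]
print(dice_coefficient(pred, truth))  # → 0.75
```

A DSC of 1.0 means perfect overlap and 0.0 means no overlap, so the 0.8723 reported above indicates strong agreement with the ground truth.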

Keywords: nasopharyngeal carcinoma; deep learning; semi-supervision

1. Introduction

Nasopharyngeal carcinoma [1][2] is one of the most common cancers, occurring widely around the world. According to global cancer statistics, there were 133,354 new nasopharyngeal carcinoma cases and 80,008 deaths in 2020 [3]. Nasopharyngeal carcinoma is an epithelial carcinoma arising from the nasopharyngeal mucosal lining [4], generally observed at the pharyngeal recess of the nasopharynx [5]. Clinically, nasopharyngeal carcinoma has three types: ascending, descending, and mixed [6]. The ascending type invades the skull base and destroys nerves, the descending type metastasizes to distant tissues through the cervical lymph nodes, and the mixed type exhibits both behaviors. Thus, because of its particular location, nasopharyngeal carcinoma is extremely dangerous once it metastasizes.
Currently, radiotherapy is one of the most effective methods for treating nasopharyngeal carcinoma [7], and the segmentation of nasopharyngeal carcinoma images significantly affects its outcome [8]. Accurate segmentation would improve the effectiveness of radiotherapy and thus increase patient survival [9]. Traditionally, segmentation is performed manually by a physician. However, owing to the irregularity of nasopharyngeal carcinoma tissue, manually delineating the boundaries is a time-consuming burden for doctors [10]. Moreover, manual segmentation is so subjective that doctors with different levels of expertise may produce different segmentation results [11].
To reduce the burden on physicians, more and more deep learning algorithms are being used to segment medical images [12][13][14]. However, many deep learning models struggle to segment nasopharyngeal carcinoma boundaries accurately. First, most deep learning algorithms use a fully-supervised approach, in which all training data are labeled and the model is trained on these labeled data [15]. This means the model requires a large amount of labeled data to reach the expected training results [16]. However, the difficulty of annotating the targets of interest hinders fully-supervised learning in medical imaging, whereas unlabeled data are readily available [17]. Second, the imaging characteristics of nasopharyngeal carcinoma usually resemble those of the surrounding tissue [18][19], making it challenging to identify; this leads many algorithms to mistake the surrounding tissue for nasopharyngeal carcinoma. Third, because of the irregular shape of the nasal cavity, the shape of nasopharyngeal carcinoma is usually very complex as well [20][21], so many algorithms fail to segment its boundaries accurately.
To address the challenges faced by the conventional fully-supervised segmentation methods described above, and thereby to improve treatment efficacy and patient survival, this entry proposes an attention-based co-segmentation semi-supervised method named CAFS for the automatic segmentation of nasopharyngeal carcinoma. In the semi-supervised approach, only a portion of the training data is labeled, and the labeled and unlabeled data are used collaboratively to train the model [22]. As shown in Figure 1, CAFS contains three primary strategies: the teacher–student cooperative segmentation mechanism, the attention mechanism, and the feedback mechanism. The teacher–student model is typically used in knowledge distillation [23]: the teacher model uses its acquired knowledge to guide the training of the student model, so that the student model achieves performance comparable to the teacher's. In CAFS, the teacher–student cooperative segmentation mechanism aims to reduce the number of nasopharyngeal carcinoma labels required. The teacher model learns from a small amount of labeled nasopharyngeal carcinoma data and then generates pseudo-masks for the unlabeled data. The student model uses the unlabeled data and the teacher-generated pseudo-masks to train itself and segment the unlabeled data, which reduces the amount of labeled data needed. The attention mechanism serves to pinpoint the location of the cancer: it zooms in on the target and thus captures more information to localize the nasopharyngeal carcinoma. The feedback mechanism aims to make the segmentation boundaries of nasopharyngeal carcinoma more accurate. The student model is trained on unlabeled data and pseudo-masks and then predicts the labeled data.
The prediction results are compared with the ground truth to generate feedback to update the model’s parameters. 
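The teacher–student loop and feedback signal described above can be sketched in plain Python. The "models" below are stand-in threshold functions, and the names (`make_threshold_model`, `teacher_student_step`) are illustrative assumptions; CAFS itself uses deep segmentation networks, which are not reproduced here.

```python
def make_threshold_model(threshold):
    """Toy 'model': marks a pixel as tumor (1) if its intensity exceeds the threshold."""
    return lambda image: [1 if px > threshold else 0 for px in image]

def teacher_student_step(teacher, student, labeled, unlabeled):
    # 1. The teacher generates pseudo-masks for the unlabeled images.
    pseudo_masks = [teacher(img) for img in unlabeled]
    # 2. The student is trained against the pseudo-masks; here we only count
    #    agreement, which a real implementation would turn into a loss term.
    student_preds = [student(img) for img in unlabeled]
    pseudo_agreement = sum(p == q for p, q in zip(student_preds, pseudo_masks))
    # 3. Feedback: the student predicts the labeled data, and the mismatch
    #    with ground truth becomes the signal for the parameter update.
    feedback_error = sum(sum(a != b for a, b in zip(student(img), mask))
                         for img, mask in labeled)
    return pseudo_agreement, feedback_error

teacher = make_threshold_model(0.5)
student = make_threshold_model(0.4)
labeled = [([0.2, 0.6, 0.9], [0, 1, 1])]      # (image, ground-truth mask)
unlabeled = [[0.1, 0.7], [0.45, 0.8]]
agree, err = teacher_student_step(teacher, student, labeled, unlabeled)
```

In a full training loop, `feedback_error` would drive gradient updates of the student, and the teacher would continue to refresh the pseudo-masks as both models improve.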
Figure 1. The task of CAFS is to automatically segment the nasopharyngeal carcinoma boundaries using only a small amount of labeled data. However, segmenting nasopharyngeal carcinoma poses several challenges. First, reliable labeled data are difficult to obtain. Second, nasopharyngeal carcinoma resembles the surrounding tissue. Third, the boundaries of nasopharyngeal carcinoma are irregular. CAFS uses the cooperative segmentation mechanism, the attention mechanism, and the feedback mechanism to address these difficulties, respectively.
In general, the main contributions of CAFS are as follows:
  • The teacher–student cooperative segmentation mechanism allows CAFS to segment nasopharyngeal carcinoma using only a small amount of labeled data;
  • The attention mechanism prevents CAFS from confusing nasopharyngeal carcinoma with surrounding tissues;
  • The feedback mechanism allows CAFS to segment nasopharyngeal carcinoma more accurately.
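The attention mechanism in the second contribution re-weights spatial locations so that likely tumor regions are emphasized over surrounding tissue. A generic soft-attention sketch in plain Python (sigmoid gating of a feature vector; this is a common pattern, not CAFS's exact module, and all names are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def spatial_attention(features, scores):
    """Generic soft spatial attention: each feature value is re-weighted by a
    sigmoid of its attention score, suppressing background-like positions and
    emphasizing likely tumor locations."""
    return [f * sigmoid(s) for f, s in zip(features, scores)]

# Three positions with equal features but very different attention scores:
weighted = spatial_attention([1.0, 1.0, 1.0], [-4.0, 0.0, 4.0])
# the low-score position is suppressed (~0.018), the high-score one kept (~0.982)
```

In a segmentation network, the attention scores would themselves be predicted from the feature maps, so the gating is learned end to end.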

2. Fully-Supervised Methods

The most common approach to the automatic segmentation of nasopharyngeal carcinoma is fully-supervised methods [24][25][26][27][28]. Over the last few decades, deep learning methods have been increasingly used in medical image segmentation [29][30][31], and many fully-supervised algorithms have been proposed for nasopharyngeal carcinoma segmentation. Convolutional neural networks (CNNs) [32] are an effective image segmentation method that captures contextual semantics by computing high-level feature maps [33][34]. Since the pioneering CNN algorithm of LeCun et al. [32], more and more improved CNN algorithms for image segmentation have been proposed. Pan et al. [35] improved the typical CNN by designing dilated convolutions at each layer of a feature pyramid network (FPN) to capture contextual associations, and applied it to nasopharyngeal organ target segmentation. Other scholars segment nasopharyngeal carcinoma by extending CNN-based methods with three-dimensional filters [36][37][38]. Ronneberger et al. [39] proposed a convolutional network called U-Net for biomedical image segmentation in 2015, and many subsequent medical image segmentation algorithms have been adapted from U-Net. Some scholars combined mechanisms such as attention and residual connections with U-Net to improve segmentation performance on nasopharyngeal carcinoma [40][41][42]. To accommodate volumetric segmentation of medical images, many U-Net-based 3D models have been developed as well [43][44]. While these fully-supervised methods can achieve excellent segmentation results, they depend on a large amount of labeled data, and reliable labeled data are often difficult to obtain because both specialized medical knowledge and time are required.
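The dilated ("atrous") convolutions used by Pan et al. widen the receptive field without adding parameters, by spacing the kernel taps apart. A self-contained 1D sketch in plain Python (real implementations use framework primitives such as a 2D convolution with a dilation parameter; the function name here is illustrative):

```python
def dilated_conv1d(signal, kernel, dilation):
    """1D dilated convolution with valid padding: kernel taps are spaced
    `dilation` samples apart, so a 3-tap kernel with dilation=2 covers a
    5-sample receptive field using only 3 weights."""
    span = (len(kernel) - 1) * dilation + 1  # effective receptive field
    return [sum(kernel[k] * signal[i + k * dilation]
                for k in range(len(kernel)))
            for i in range(len(signal) - span + 1)]

x = [1, 2, 3, 4, 5, 6]
print(dilated_conv1d(x, [1, 1, 1], dilation=1))  # → [6, 9, 12, 15]
print(dilated_conv1d(x, [1, 1, 1], dilation=2))  # → [9, 12]
```

The dilation=2 output aggregates samples two steps apart, which is how an FPN layer can gather wider contextual associations at the same parameter cost.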

3. Semi-Supervised Methods

More and more semi-supervised segmentation methods have been proposed in recent years to confront the challenge of scarce annotated data [45]. Self-training is one of the most commonly used semi-supervised methods [46]: a model is first trained on a small amount of labeled data, then makes predictions on unlabeled data, and finally the most confident predictions are mixed with the labeled data for further training [47][48]. Another common semi-supervised method is co-training, which uses the interaction between two networks to improve segmentation performance [49][50]. Hu et al. [51] proposed an uncertainty- and attention-guided consistency semi-supervised method to segment nasopharyngeal carcinoma. Luo et al. [52] proposed a semi-supervised method that extends the backbone segmentation network to produce pyramidal predictions at different scales. Zhang et al. [53] use the teacher's uncertainty estimates to guide the student and perform consistency learning to uncover more information from the unlabeled data.
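The self-training round described above (train on labeled data, pseudo-label unlabeled data, keep only confident predictions) can be sketched with a toy one-dimensional nearest-prototype scorer standing in for the segmentation networks cited above. All names and the confidence proxy are illustrative assumptions:

```python
def self_training_round(labeled, unlabeled, threshold=0.9):
    """One self-training round on (intensity, label) pairs: 'train' by taking
    class mean intensities, pseudo-label unlabeled samples by the nearer mean,
    and mix back only the predictions whose crude confidence proxy clears the
    threshold. (Toy stand-in; real methods use network softmax confidence.)"""
    fg_mean = (sum(x for x, y in labeled if y == 1)
               / max(1, sum(1 for _, y in labeled if y == 1)))
    bg_mean = (sum(x for x, y in labeled if y == 0)
               / max(1, sum(1 for _, y in labeled if y == 0)))
    new_labeled = list(labeled)
    for x in unlabeled:
        d_fg, d_bg = abs(x - fg_mean), abs(x - bg_mean)
        confidence = max(d_fg, d_bg) / (d_fg + d_bg or 1)  # 0.5 = ambiguous
        if confidence >= threshold:  # keep only confident pseudo-labels
            new_labeled.append((x, 1 if d_fg < d_bg else 0))
    return new_labeled

# 0.88 is confidently foreground and is mixed in; 0.5 is ambiguous and skipped
grown = self_training_round([(0.9, 1), (0.1, 0)], [0.88, 0.5])
```

Iterating such rounds is what lets self-training grow its effective training set from a small labeled seed.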
Sun et al. [54] also apply the teacher–student paradigm to medical image segmentation. It is worth noting that the mixed supervision in [54] combines partial dense-labeled supervision from labeled datasets with supplementary loose bounding-box supervision for both labeled and unlabeled data, whereas CAFS uses only partial dense-labeled supervision. In addition, Ref. [54] applies bounding-box supervision to provide localization information, while in CAFS the attention mechanism localizes the nasopharyngeal carcinoma target. Moreover, Sun et al. [54] fully train the teacher model before it provides pseudo-label guidance to the student, whereas CAFS optimizes the teacher and student models simultaneously.
In addition to self-training and co-training, semi-supervised methods also include paradigms such as generative models, transductive support vector machines, and graph-based methods.

References

  1. Mohammed, M.A.; Abd Ghani, M.K.; Hamed, R.I.; Ibrahim, D.A. Review on Nasopharyngeal Carcinoma: Concepts, methods of analysis, segmentation, classification, prediction and impact: A review of the research literature. J. Comput. Sci. 2017, 21, 283–298.
  2. Chua, M.L.; Wee, J.T.; Hui, E.P.; Chan, A.T. Nasopharyngeal carcinoma. Lancet 2016, 387, 1012–1024.
  3. Sung, H.; Ferlay, J.; Siegel, R.L.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2021, 71, 209–249.
  4. Chen, Y.P.; Chan, A.T.; Le, Q.T.; Blanchard, P.; Sun, Y.; Ma, J. Nasopharyngeal carcinoma. Lancet 2019, 394, 64–80.
  5. Wei, W.I.; Sham, J.S. Nasopharyngeal carcinoma. Lancet 2005, 365, 2041–2054.
  6. Yao, J.J.; Qi, Z.Y.; Liu, Z.G.; Jiang, G.M.; Xu, X.W.; Chen, S.Y.; Zhu, F.T.; Zhang, W.J.; Lawrence, W.R.; Ma, J.; et al. Clinical features and survival outcomes between ascending and descending types of nasopharyngeal carcinoma in the intensity-modulated radiotherapy era: A big-data intelligence platform-based analysis. Radiother. Oncol. 2019, 137, 137–144.
  7. Lee, A.; Ma, B.; Ng, W.T.; Chan, A. Management of nasopharyngeal carcinoma: Current practice and future perspective. J. Clin. Oncol. 2015, 33, 3356–3364.
  8. Pham, D.L.; Xu, C.; Prince, J.L. Current methods in medical image segmentation. Annu. Rev. Biomed. Eng. 2000, 2, 315–337.
  9. Jaffe, N.; Bruland, O.S.; Bielack, S. Pediatric and Adolescent Osteosarcoma; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2010; Volume 152.
  10. Pirner, S.; Tingelhoff, K.; Wagner, I.; Westphal, R.; Rilk, M.; Wahl, F.; Bootz, F.; Eichhorn, K.W. CT-based manual segmentation and evaluation of paranasal sinuses. Eur. Arch. Oto-Rhino-Laryngol. 2009, 266, 507–518.
  11. Chiu, S.J.; Li, X.T.; Nicholas, P.; Toth, C.A.; Izatt, J.A.; Farsiu, S. Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation. Opt. Express 2010, 18, 19413–19428.
  12. Ching, T.; Himmelstein, D.S.; Beaulieu-Jones, B.K.; Kalinin, A.A.; Do, B.T.; Way, G.P.; Ferrero, E.; Agapow, P.M.; Zietz, M.; Hoffman, M.M.; et al. Opportunities and obstacles for deep learning in biology and medicine. J. R. Soc. Interface 2018, 15, 20170387.
  13. Gao, Z.; Chung, J.; Abdelrazek, M.; Leung, S.; Hau, W.K.; Xian, Z.; Zhang, H.; Li, S. Privileged modality distillation for vessel border detection in intracoronary imaging. IEEE Trans. Med. Imaging 2019, 39, 1524–1534.
  14. Wu, C.; Zhang, H.; Chen, J.; Gao, Z.; Zhang, P.; Muhammad, K.; Del Ser, J. Vessel-GAN: Angiographic reconstructions from myocardial CT perfusion with explainable generative adversarial networks. Future Gener. Comput. Syst. 2022, 130, 128–139.
  15. Gao, Z.; Wu, S.; Liu, Z.; Luo, J.; Zhang, H.; Gong, M.; Li, S. Learning the implicit strain reconstruction in ultrasound elastography using privileged information. Med. Image Anal. 2019, 58, 101534.
  16. Zhang, M.; Zhou, Y.; Zhao, J.; Man, Y.; Liu, B.; Yao, R. A survey of semi-and weakly supervised semantic segmentation of images. Artif. Intell. Rev. 2020, 53, 4259–4288.
  17. Guo, S.; Xu, L.; Feng, C.; Xiong, H.; Gao, Z.; Zhang, H. Multi-level semantic adaptation for few-shot segmentation on cardiac image sequences. Med. Image Anal. 2021, 73, 102170.
  18. Chong, V.; Fan, Y.F. Detection of recurrent nasopharyngeal carcinoma: MR imaging versus CT. Radiology 1997, 202, 463–470.
  19. Dumrongpisutikul, N.; Luangcharuthorn, K. Imaging characteristics of nasopharyngeal carcinoma for predicting distant metastasis. Clin. Radiol. 2019, 74, 818.e9–818.e15.
  20. Huang, K.W.; Zhao, Z.Y.; Gong, Q.; Zha, J.; Chen, L.; Yang, R. Nasopharyngeal carcinoma segmentation via HMRF-EM with maximum entropy. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 2968–2972.
  21. Tsao, S.W.; Yip, Y.L.; Tsang, C.M.; Pang, P.S.; Lau, V.M.Y.; Zhang, G.; Lo, K.W. Etiological factors of nasopharyngeal carcinoma. Oral Oncol. 2014, 50, 330–338.
  22. Zhu, X.J. Semi-Supervised Learning Literature Survey; University of Wisconsin-Madison: Madison, WI, USA, 2005.
  23. Hinton, G.; Vinyals, O.; Dean, J. Distilling the knowledge in a neural network. arXiv 2015, arXiv:1503.02531.
  24. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016.
  25. Tao, G.; Li, H.; Huang, J.; Han, C.; Chen, J.; Ruan, G.; Huang, W.; Hu, Y.; Dan, T.; Zhang, B.; et al. SeqSeg: A sequential method to achieve nasopharyngeal carcinoma segmentation free from background dominance. Med. Image Anal. 2022, 78, 102381.
  26. Li, X.; Tang, M.; Guo, F.; Li, Y.; Cao, K.; Song, Q.; Wu, X.; Sun, S.; Zhou, J. DDNet: 3D densely connected convolutional networks with feature pyramids for nasopharyngeal carcinoma segmentation. IET Image Process. 2022, 16, 39–48.
  27. Li, Y.; Dan, T.; Li, H.; Chen, J.; Peng, H.; Liu, L.; Cai, H. NPCNet: Jointly Segment Primary Nasopharyngeal Carcinoma Tumors and Metastatic Lymph Nodes in MR Images. IEEE Trans. Med. Imaging 2022, 41, 1639–1650.
  28. Meng, M.; Gu, B.; Bi, L.; Song, S.; Feng, D.D.; Kim, J. DeepMTS: Deep multi-task learning for survival prediction in patients with advanced nasopharyngeal carcinoma using pretreatment PET/CT. IEEE J. Biomed. Health Inform. 2022, 1–10.
  29. Shen, D.; Wu, G.; Suk, H.I. Deep learning in medical image analysis. Annu. Rev. Biomed. Eng. 2017, 19, 221–248.
  30. Razzak, M.I.; Naz, S.; Zaib, A. Deep learning for medical image processing: Overview, challenges and the future. Classif. BioApps 2018, 26, 323–350.
  31. Gao, Z.; Wang, X.; Sun, S.; Wu, D.; Bai, J.; Yin, Y.; Liu, X.; Zhang, H.; de Albuquerque, V.H.C. Learning physical properties in complex visual scenes: An intelligent machine for perceiving blood flow dynamics from static CT angiography imaging. Neural Networks 2020, 123, 82–93.
  32. LeCun, Y.; Boser, B.; Denker, J.; Henderson, D.; Howard, R.; Hubbard, W.; Jackel, L. Handwritten digit recognition with a back-propagation network. Adv. Neural Inf. Process. Syst. 1989, 2, 396–404.
  33. Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J.; et al. Recent advances in convolutional neural networks. Pattern Recognit. 2018, 77, 354–377.
  34. Li, Y.; Han, G.; Liu, X. DCNet: Densely Connected Deep Convolutional Encoder–Decoder Network for Nasopharyngeal Carcinoma Segmentation. Sensors 2021, 21, 7877.
  35. Pan, X.; Dai, D.; Wang, H.; Liu, X.; Bai, W. Nasopharyngeal Organ Segmentation Algorithm Based on Dilated Convolution Feature Pyramid. In Proceedings of the International Conference on Image, Vision and Intelligent Systems (ICIVIS 2021), Changsha, China, 15–17 June 2021; Springer: Berlin/Heidelberg, Germany, 2022; pp. 45–58.
  36. Yang, G.; Dai, Z.; Zhang, Y.; Zhu, L.; Tan, J.; Chen, Z.; Zhang, B.; Cai, C.; He, Q.; Li, F.; et al. Multiscale Local Enhancement Deep Convolutional Networks for the Automated 3D Segmentation of Gross Tumor Volumes in Nasopharyngeal Carcinoma: A Multi-Institutional Dataset Study. Front. Oncol. 2022, 12, 827991.
  37. Guo, F.; Shi, C.; Li, X.; Wu, X.; Zhou, J.; Lv, J. Image segmentation of nasopharyngeal carcinoma using 3D CNN with long-range skip connection and multi-scale feature pyramid. Soft Comput. 2020, 24, 12671–12680.
  38. Cheng, W.; Wan, S.; Zhaodong, F.; Zhou, Q.; Lin, Q. Automatic Gross Tumor Volume Delineation of Nasopharyngeal Carcinoma in 3D CT Images. Int. J. Radiat. Oncol. Biol. Phys. 2021, 111, e381–e382.
  39. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
  40. Zhang, J.; Gu, L.; Han, G.; Liu, X. AttR2U-Net: A Fully Automated Model for MRI Nasopharyngeal Carcinoma Segmentation Based on Spatial Attention and Residual Recurrent Convolution. Front. Oncol. 2021, 11, 816672.
  41. Li, F.H.; Zhao, X.M. MD-Unet: A deformable network for nasal cavity and paranasal sinus tumor segmentation. Signal Image Video Process. 2022, 16, 1225–1233.
  42. Tang, P.; Yang, P.; Nie, D.; Wu, X.; Zhou, J.; Wang, Y. Unified medical image segmentation by learning from uncertainty in an end-to-end manner. Knowl.-Based Syst. 2022, 241, 108215.
  43. Chen, H.; Qi, Y.; Yin, Y.; Li, T.; Liu, X.; Li, X.; Gong, G.; Wang, L. MMFNet: A multi-modality MRI fusion network for segmentation of nasopharyngeal carcinoma. Neurocomputing 2020, 394, 27–40.
  44. Lin, M.; Cai, Q.; Zhou, J. 3D Md-Unet: A novel model of multi-dataset collaboration for medical image segmentation. Neurocomputing 2022, 492, 530–544.
  45. Liao, W.; He, J.; Luo, X.; Wu, M.; Shen, Y.; Li, C.; Xiao, J.; Wang, G.; Chen, N. Automatic delineation of gross tumor volume based on magnetic resonance imaging by performing a novel semi-supervised learning framework in nasopharyngeal carcinoma. Int. J. Radiat. Oncol. Biol. Phys. 2022, 113, 893–902.
  46. Senkyire, I.B.; Liu, Z. Supervised and semi-supervised methods for abdominal organ segmentation: A review. Int. J. Autom. Comput. 2021, 18, 887–914.
  47. Cheplygina, V.; de Bruijne, M.; Pluim, J.P. Not-so-supervised: A survey of semi-supervised, multi-instance, and transfer learning in medical image analysis. Med. Image Anal. 2019, 54, 280–296.
  48. Li, Y.; Chen, J.; Xie, X.; Ma, K.; Zheng, Y. Self-loop uncertainty: A novel pseudo-label for semi-supervised medical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Lima, Peru, 4–8 October 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 614–623.
  49. Qiao, S.; Shen, W.; Zhang, Z.; Wang, B.; Yuille, A. Deep co-training for semi-supervised image recognition. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 135–152.
  50. Ning, X.; Wang, X.; Xu, S.; Cai, W.; Zhang, L.; Yu, L.; Li, W. A review of research on co-training. Concurr. Comput. Pract. Exp. 2021, e6276.
  51. Hu, L.; Li, J.; Peng, X.; Xiao, J.; Zhan, B.; Zu, C.; Wu, X.; Zhou, J.; Wang, Y. Semi-supervised NPC segmentation with uncertainty and attention guided consistency. Knowl.-Based Syst. 2022, 239, 108021.
  52. Luo, X.; Liao, W.; Chen, J.; Song, T.; Chen, Y.; Zhang, S.; Chen, N.; Wang, G.; Zhang, S. Efficient semi-supervised gross target volume of nasopharyngeal carcinoma segmentation via uncertainty rectified pyramid consistency. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Strasbourg, France, 27 September–1 October 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 318–329.
  53. Zhang, Y.; Liao, Q.; Jiao, R.; Zhang, J. Uncertainty-Guided Mutual Consistency Learning for Semi-Supervised Medical Image Segmentation. arXiv 2021, arXiv:2112.02508.
  54. Sun, L.; Wu, J.; Ding, X.; Huang, Y.; Wang, G.; Yu, Y. A teacher–student framework for semi-supervised medical image segmentation from mixed supervision. arXiv 2020, arXiv:2010.12219.