Metaverse-Related Technologies and Applications

The Metaverse is defined as a virtual space in which users can interact with one another, and with their environment, via 3D digital objects and virtual avatars, in a complex manner that mimics the real world, with much of its content developed using artificial intelligence techniques; therefore, creating digital humans is essential to the development of the Metaverse and of other Virtual Reality (VR), Augmented Reality (AR) and Extended Reality (XR) applications.

big data; metaverse; digital human; big data technologies; virtual worlds; VR

1. Digital Human Reconstruction

How to create digital humans has been a much-studied subject recently, due to the rising demand for virtual reality applications, including the Metaverse. One of the core drivers of mathematical progress is the discovery of objects, patterns and ultimately their formulaic representations; in the course of such progress, scientists often need to leverage a variety of tools and data to help them cultivate ideas, propose a conjecture, and eventually prove/disprove with experiments and evidence, where possible. There is no doubt that the evolution of computational methodology has not only changed the way scientists conduct their studies, but has also accelerated the life cycle of scientific research, leading to profound impacts on people’s daily lives—including, for example, the early hand-calculated prime number tables used by Gauss (which led to the prime number theorem) [1], the RSA public key algorithm [2] inspired by prime number theory, and our modern blockchain infrastructure.
The introduction of computational methodology has given scientists an understanding of problems previously incomprehensible; however, while previous computational methodologies have proven effective in certain scientific problems or domains, they are not easily generalized to other domains. Big data technologies, especially the field of deep learning that has emerged in recent years, offer a range of techniques capable of effectively detecting patterns in data, and are increasingly proving their utility in scientific disciplines. A specific case of virtual human reconstruction in the Metaverse will serve as an example, to illustrate how deep learning can be used to solve mathematical problems in practical settings.
Virtual human reconstruction is one of the essential tasks in various Metaverse applications: it aims to utilize sensory data to recover the three-dimensional geometry and appearance of humans, achieving accurate photorealistic reconstructions, and ultimately producing compact 3D representations that can be ported to a variety of devices. This problem involves many practical facets that require sophisticated engineering; however, its core challenges lie in deep learning modeling and mathematical optimization, as shown in Figure 1.
Figure 1. A hybrid approach of regression-based and optimization-based paradigms (courtesy of Kolotouros et al. [3]): an iterative optimization routine is embedded into a neural network training loop, leading to a self-improving loop. Better fits help the network train better, while better initial estimates from the network help the optimization routine converge to better fits.
Various techniques have been applied to recreate human models in the Metaverse. Many studies start from simple image-based 2D feature detection, such as key points [4], silhouettes [5] and limb segments [6]. Simple movements can be represented relatively clearly by two-dimensional content; however, it is becoming clear that complex human behaviors, which often occur in practical settings, do not fit the simple assumptions imposed by two-dimensional models, and that more descriptive models with finer granularity are desirable; consequently, more studies [7][8][9] have turned to exploring more complex human pose modeling in three dimensions. Recently, researchers have noticed that body shapes, contacts, gestures and expressions which directly interact with the world are much easier to measure and evaluate; consequently, the focus of researchers has shifted towards three-dimensional mesh recovery of the human body [10][11]. Human body modeling has been further extended with face and hand support [12][13][14][15]. Meanwhile, similar techniques have also facilitated downstream tasks, such as clothed human reconstruction [16][17][18], volume rendering [19], virtual try-on [20], computer-assisted systems [21] and many more Metaverse applications. There are two common paradigms for dealing with virtual human reconstruction: the optimization-based paradigm and the regression-based paradigm.
Although these two paradigms may have different advantages/disadvantages, and address different aspects, both paradigms can share similar human body modeling techniques. Figure 1 shows an interesting possible way of integrating both paradigms into one coherent framework. The next section will review the existing approaches, in terms of human body modeling.
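To make the integration concrete, the sketch below mimics the self-improving loop of Figure 1 in PyTorch. It is a minimal illustration, not the authors' implementation: the feature dimensions, the `Regressor` module and the `project_keypoints` stand-in for the body-model-plus-camera forward pass are all assumptions made for the sake of a runnable example.

```python
# A minimal sketch of the self-improving loop in Figure 1 (after Kolotouros et al. [3]).
# Module names, dimensions and the projection stand-in are illustrative assumptions.
import torch
import torch.nn as nn

POSE_DIM, SHAPE_DIM, N_KPTS = 72, 10, 24

class Regressor(nn.Module):
    """Toy stand-in for an image-to-body-parameter network."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.head = nn.Linear(feat_dim, POSE_DIM + SHAPE_DIM)

    def forward(self, feats):
        return self.head(feats)

def project_keypoints(params):
    """Placeholder for the body model forward pass plus camera projection."""
    return params[:, :N_KPTS * 2].reshape(-1, N_KPTS, 2)

def optimize_fit(init_params, kpts_2d, steps=10, lr=0.01):
    """Inner loop: refine the network's estimate against the 2D evidence."""
    params = init_params.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((project_keypoints(params) - kpts_2d) ** 2).mean()
        loss.backward()
        opt.step()
    return params.detach()

regressor = Regressor()
outer_opt = torch.optim.Adam(regressor.parameters(), lr=1e-4)
feats = torch.randn(8, 512)           # stand-in image features
kpts_2d = torch.randn(8, N_KPTS, 2)   # stand-in detected 2D key points

for _ in range(3):                         # outer training loop
    pred = regressor(feats)                # the network provides the initial estimate,
    fitted = optimize_fit(pred, kpts_2d)   # the optimizer refines it into a better fit,
    loss = ((pred - fitted) ** 2).mean()   # and the detached fit supervises the network
    outer_opt.zero_grad()
    loss.backward()
    outer_opt.step()
```

The key design point is that the refined fit is detached from the computation graph before it supervises the regressor, so the network learns from better targets while the optimizer benefits from better starting points.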
Figure 2. A virtual reality shop developed by Unity3D for future integration into the Metaverse.

2. Review of Human Body Modeling

Early human body modeling started with the study of articulated geometric primitives, including line segments [22], cylinders [23], planar rectangles [24] and ellipsoids [25]. As three-dimensional full-body scanners became accessible, more detailed measurements of body surfaces could be accurately recorded, such as the CAESAR (Civilian American and European Surface Anthropometry Resource) [26] dataset. The availability of large amounts of body scan data has given rise to a powerful representation: the statistical body model, which factors body deformations into identity-dependent and pose-dependent components. Among the statistical body models, SCAPE [27], SMPL [28], SMPL-X [13], SMPL+H [29], 3DMM [30] and STAR [31] are popular ones, which are not only capable of effectively modeling both shape and pose deformations, but are also highly compatible with existing graphics rendering engines, benefiting from the explicit mesh model. This family of explicit approaches first learns shape deformations through principal component analysis of body scans, and then combines them with skeletal pose-driven deformations (so-called linear blend skinning in traditional skeletal animation), to construct a shape-and-pose parametric human body model. Despite the popularity of explicit approaches, they still have their limitations: firstly, global blend shapes may capture spurious long-range correlations [31], resulting in non-local deformation artifacts; secondly, correlations between body shape and pose-dependent shape deformation may be ignored; furthermore, due to the linear nature of principal component analysis, it can be difficult to reproduce the highly nonlinear deformations of body soft tissue.
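As a rough illustration of this explicit family, the following NumPy sketch builds a toy shape-and-pose parametric body: PCA-style shape blend shapes are added to a template mesh, and the result is deformed by linear blend skinning. All sizes and arrays are random stand-ins introduced for the example; they are not taken from any released model.

```python
# Toy shape-and-pose parametric body in the spirit of SCAPE/SMPL; all arrays are
# random stand-ins (a real SMPL model has 6890 vertices, learned PCA blend shapes
# and pose-dependent corrective offsets, which are omitted here).
import numpy as np

N_VERTS, N_SHAPE, N_JOINTS = 100, 10, 4
rng = np.random.default_rng(0)

template = rng.normal(size=(N_VERTS, 3))                    # mean body mesh
shape_dirs = rng.normal(size=(N_VERTS, 3, N_SHAPE))         # PCA-style shape blend shapes
weights = rng.dirichlet(np.ones(N_JOINTS), size=N_VERTS)    # skinning weights (rows sum to 1)

def skin(verts, joint_transforms):
    """Linear blend skinning: each vertex blends the rigid transforms of the joints."""
    homo = np.concatenate([verts, np.ones((len(verts), 1))], axis=1)  # (V, 4)
    per_joint = np.einsum('jab,vb->vja', joint_transforms, homo)      # (V, J, 4)
    return np.einsum('vj,vja->va', weights, per_joint)[:, :3]

def body_model(betas, joint_transforms):
    """Identity-dependent shape offsets on the template, then pose-driven skinning."""
    shaped = template + shape_dirs @ betas
    return skin(shaped, joint_transforms)

betas = 0.1 * rng.normal(size=N_SHAPE)              # shape coefficients
transforms = np.tile(np.eye(4), (N_JOINTS, 1, 1))   # rest pose: identity transforms
print(body_model(betas, transforms).shape)          # (100, 3)
```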
In order to overcome the limitations of explicit approaches, instead of explicitly defining the human body as mesh vertices and edges or other elements, implicit approaches define surfaces as level sets of continuous functions. Thanks to these continuous properties, the implicit representation can be elegantly optimized and integrated with deep learning frameworks: it is continuous across the spatial domain, and thus theoretically has infinite resolution, and it can easily handle highly nonlinear deformations, and even topological changes, which are not possible with explicit approaches. Studies [32][33] estimated implicit surface functions by aligning image pixels with the global three-dimensional shape or texture of the photographed object, and then used a dedicated multi-level network to refine the resulting geometry. The flexibility of implicit approaches enables them to handle intricate surfaces and topological changes with ease, but there is one drawback: topologically distinct human representations can exist across time; in other words, implicit human representations may not be topologically consistent over time.
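A minimal sketch of the implicit idea, in the spirit of pixel-aligned implicit functions [32][33], is shown below: a small MLP maps a 3D query point plus image-conditioned features to an occupancy value, and the body surface is recovered as a level set. The network shape and the feature dimension are illustrative assumptions.

```python
# A toy pixel-aligned occupancy network: the body surface is the 0.5 level set of a
# continuous function, so it can be queried at arbitrary resolution. The layer sizes
# and the feature dimension are illustrative assumptions.
import torch
import torch.nn as nn

class OccupancyMLP(nn.Module):
    def __init__(self, cond_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, points, cond):
        # points: (N, 3) query locations; cond: (N, cond_dim) pixel-aligned features
        return self.net(torch.cat([points, cond], dim=-1)).squeeze(-1)

f = OccupancyMLP()
pts = torch.rand(1024, 3)       # stand-in 3D query points
feats = torch.randn(1024, 64)   # stand-in per-point image features
occ = f(pts, feats)             # continuous occupancy field in [0, 1]
inside = occ > 0.5              # in practice the surface is extracted, e.g., by marching cubes
```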

3. Optimization-Based Paradigm

In this paradigm, the human body model is explicitly optimized, by minimizing an objective function that fits the model to the observations in an iterative manner. The objective function typically consists of two parts: (1) the data term is a measure of the alignment between the extracted observation features and the transformed human body features; (2) the regularization term is added, to constrain the convergence that preserves a physically plausible body model. In earlier work, the silhouette feature played a crucial role in fitting the body model to the image, as it was used to penalize pixels in non-overlapping regions [34][35].
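A hedged sketch of this two-term objective is given below: `data_term` penalizes misalignment between observed and projected key points, `regularization` pulls the solution towards a simple Gaussian body prior, and a gradient-based optimizer minimizes their sum. The linear `project` stand-in and all dimensions are assumptions; real systems use the full body model and far richer priors.

```python
# A toy two-term fitting objective: a data term on 2D key point alignment plus a
# Gaussian prior as the regularizer. The linear `project` stand-in and all
# dimensions are assumptions, not a real body model.
import torch

def data_term(params, kpts_obs, project):
    """Alignment between observed 2D key points and the projected model key points."""
    return ((project(params) - kpts_obs) ** 2).sum()

def regularization(params, mean_pose, w=1e-2):
    """Keeps the solution near a physically plausible body configuration."""
    return w * ((params - mean_pose) ** 2).sum()

def fit(kpts_obs, project, mean_pose, steps=200, lr=0.05):
    params = mean_pose.clone().requires_grad_(True)
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):   # iterative minimization, as in the paradigm above
        opt.zero_grad()
        loss = data_term(params, kpts_obs, project) + regularization(params, mean_pose)
        loss.backward()
        opt.step()
    return params.detach()

proj = torch.randn(48, 82)                       # stand-in "body model + camera"
project = lambda p: (proj @ p).reshape(24, 2)
fitted = fit(torch.randn(24, 2), project, torch.zeros(82))
```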
With the emergence of deep learning, many studies have used it to initialize the optimization. SMPLify [10] adopts off-the-shelf neural networks [36] to detect two-dimensional key points, and then iteratively fits a SMPL model to the key points detected in an unconstrained image. While SMPLify produces relatively well-aligned results, sparse key points do not offer sufficient constraints for body shape optimization. To improve geometric details, [37][38][39] combined key points, silhouettes and part segments, to further constrain the optimization process. Moreover, [40][41] have shown that deep learning techniques can learn local landscapes and descent directions of the optimization from training data, and then use them to guide the gradient-based optimization process: in this way, traditional problem-independent optimization schemes can be endowed with the ability to adaptively learn problem-specific convergence schemes. Image-based key point regression was performed by [42][43] to obtain three-dimensional body key points; the inverse kinematics was then solved from the key points and the skeletal structure, to calculate accurate joint rotations, ultimately estimating the parameters of a SMPL model.
Although the optimization-based paradigm can faithfully reconstruct the human body when high quality data is available, it performs poorly in situations where data is scarce and useful information is latent; furthermore, as the optimization-based paradigm intrinsically tries to solve complex non-convex optimization problems in high-dimensional spaces, its outcomes are susceptible to initialization and prone to falling into spurious local minima.

4. Regression-Based Paradigm

Alternatively, the regression-based paradigm exploits the powerful learning and approximation capabilities of neural networks, to recover model parameters directly from sensory data. To achieve better performance, researchers have explored a wide variety of network architectures and regression objectives; for example, [12] was one of the pioneering efforts to incorporate the SMPL model into an end-to-end network architecture that minimized the reprojection errors between manually annotated and estimated key points. An end-to-end adversarial learning framework was proposed by [11], which used a discriminator to supervise the training process, so as to exclude anthropometrically implausible or self-intersecting body structures. A top-down framework was proposed by [44], to simultaneously regress the SMPL parameters of multiple people in a coherent manner, where depth ordering was consistent, and no interpenetration occurred among the reconstructed people. Instead of regressing the SMPL parameters, [45] opted to directly regress the mesh vertices using a graph convolutional network, thus allowing the template mesh structure to be explicitly encoded within the network, easily exploiting the mesh spatial locality. Inspired by [11], VIBE [46] went a step further, to estimate dynamic motion sequences from videos. By replacing the regression network with a temporal generative network, and changing the three-dimensional supervision dataset to a motion capture dataset, AMASS [47], VIBE empowered an adversarial learning framework with temporal information, enabling motion sequence estimation as a whole.
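The sketch below illustrates the regression-plus-discriminator recipe of [11] in toy form: a regressor predicts body parameters from image features, a reprojection loss aligns them with annotated 2D key points, and an LSGAN-style discriminator trained on stand-in motion-capture parameters penalizes implausible bodies. All modules, loss weights and dimensions are illustrative assumptions, not the published architecture.

```python
# Toy regression-plus-discriminator setup in the spirit of [11]. All modules and
# dimensions are illustrative assumptions made for a runnable example.
import torch
import torch.nn as nn

PARAM_DIM, N_KPTS = 82, 24
regressor = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, PARAM_DIM))
discriminator = nn.Sequential(nn.Linear(PARAM_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

def reproject(params):
    """Placeholder for the body model forward pass plus weak-perspective projection."""
    return params[:, :N_KPTS * 2].reshape(-1, N_KPTS, 2)

feats = torch.randn(8, 512)               # image features from a CNN backbone
kpts_gt = torch.randn(8, N_KPTS, 2)       # annotated 2D key points
real_params = torch.randn(8, PARAM_DIM)   # stand-in samples from a mocap dataset

pred = regressor(feats)
reproj_loss = ((reproject(pred) - kpts_gt) ** 2).mean()   # key point reprojection error
adv_loss = ((discriminator(pred) - 1) ** 2).mean()        # "look like a real body"
gen_loss = reproj_loss + 0.1 * adv_loss                   # regressor objective

d_loss = ((discriminator(real_params) - 1) ** 2).mean() + \
         (discriminator(pred.detach()) ** 2).mean()       # discriminator objective
```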
To leverage expressive human models and paired data, [14][48][49] adopted a divide-and-conquer strategy, by breaking down the human reconstruction problem into part-specific estimation subproblems, where body, hand and face estimates were performed using the respective part-specific models. The final expressive model was obtained by assembling the individual results of the subproblems into the corresponding body template layers. ExPose [14] directly regressed hands, face and body parameters in the SMPL-X format, and utilized body-driven attention to localize the face and hands regions for refinement, using part-specific knowledge learned from existing face- and hand-only datasets. A real-time method was introduced by [50], to capture body, hands and face with competitive accuracy, by exploiting correlations between body and hands. Pose2Pose [51] extracted joint-specific local and global features, to train a graph convolutional neural network, and regress body/hand joint rotations from it. PIXIE [48] first fused the features from body, face and hand experts, according to their part-specific confidences, and then fed these features into the part-specific networks, for robust regression.

5. Technologies in AR/VR/XR Platforms and the Metaverse: Future Trends

In the researchers' opinion, AR/VR/XR applications will undoubtedly become the ultimate customer service platforms in the near future. In other words, AR/VR/XR applications will at least become the dominant platforms, if they do not completely displace the current mobile and computer platforms. Consequently, a big data surge will very soon occur in the virtual world. The Metaverse, given its rapid growth in recent years, is likely to be the first platform to face this data surge challenge. Figure 2, above, shows a recently developed VR-based shopping platform.
The researchers observed that two extreme situations would occur in the Metaverse when conducting user recommendation and data analysis: (1) the cold start problem, which often occurs when too little data is available for analysis, because the VR platform is new to its users and little information has been generated and accumulated; this is common in the big data environment when new platforms are released; (2) the virtual data explosion problem, which occurs when the Metaverse or VR platforms generate too much data, including user interaction data, wearable sensor data, eye tracking data, location trajectory data, brain EEG data, and business transaction data. Figure 3 shows the data sources of the Metaverse and its architecture [52], which indicates that the Metaverse consists of various data sources from the physical, social and digital worlds.
Figure 3. Metaverse architecture of integrated social, physical and digital worlds, modified based on [52]. The social world mainly consists of human communities.
Several methods have been suggested for solving the abovementioned problems. In [53], a position-based VR online shopping recommendation system was developed, to solve the cold start problem in VR platforms. In such a system, the cold start problem is tackled by analyzing new users' interactions and behaviors within the virtual world. For instance, the position-based VR online shopping system acquires new users' trajectories in the virtual world, and conducts analysis based on their movements, to generate user recommendations, as shown in Figure 4.
Figure 4. Position-based analysis for VR shopping recommendation (green line is user trajectory).
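As a loose illustration of how trajectory data might drive such recommendations, the sketch below scores hypothetical shop zones by how many of a user's sampled positions dwell near them, and recommends the top-scoring zones. The zone layout, radius and scoring rule are invented for this example; the actual system in [53] uses an optimized collaborative filtering algorithm.

```python
# A hypothetical dwell-time heuristic over a VR shop floor; the zones, radius and
# scoring are assumptions for illustration, not the method of [53].
import numpy as np

zones = {"electronics": (2.0, 1.0), "apparel": (8.0, 1.5), "books": (5.0, 7.0)}

def dwell_profile(trajectory, zones, radius=1.5):
    """Score each zone by how many trajectory samples fall within `radius` of it."""
    traj = np.asarray(trajectory)
    scores = {}
    for name, centre in zones.items():
        d = np.linalg.norm(traj - np.asarray(centre), axis=1)
        scores[name] = int((d < radius).sum())
    return scores

def recommend(trajectory, zones, top_k=2):
    scores = dwell_profile(trajectory, zones)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# A new user's sampled positions (e.g., one sample per second of movement).
trajectory = [(2.1, 1.2), (2.3, 0.9), (2.0, 1.1), (5.2, 6.8), (5.0, 7.1)]
print(recommend(trajectory, zones))  # e.g. ['electronics', 'books']
```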
Future trends in solving the cold start problem in the Metaverse will further utilize users’ behavior and sentiment data, including user eye tracking data, user movement trajectory, wearable user device data, and user sentiment data. In particular, human brain data analysis will likely become an essential technology for user analysis in VR platforms, such as the Metaverse.

The cold start problem is not a persistent problem in VR platforms, as it resolves automatically once data accumulation reaches a certain quantity, whereas the virtual data explosion problem is a persistent challenge to VR platforms such as the Metaverse. The wide range of data sources in the Metaverse will grow exponentially, due to its inherently digital nature. Some studies have suggested adopting the Data as a Service (DaaS) framework [54] as a solution to the data explosion problem in the digital world, including the Metaverse. Several other solutions, including tensor networks and sentiment analysis, have been proposed to solve this problem. The future trends of technical development in the Metaverse and other VR platforms can be summarized as follows:

  • Digital human reconstruction is becoming a crucial area for the Metaverse and other VR platforms: this is a core technology that can accelerate the development of the Metaverse, so as to truly realize human–machine interaction in virtual worlds, as mentioned in the previous sections;
  • Digital Twin-related methods are the foundation for creating digital worlds that can mimic the physical world. The digital twin is defined as the effortless integration of data between a physical and virtual environment, in either direction [167]. VR development tools, such as Unreal Engine, Unity, 3DS Max & Maya, SketchUp, etc., will be the major developer toolkits for digital twin models in the coming decades. Future trends in digital twin will focus on the following: enabling a conformance relationship between the digital twin and the real world; digital world autonomy, runtime self-adaptation and self-management; and integration and cooperation, to achieve common goals or provide services [168]. A number of digital twin applications have been developed based on Microsoft Kinect sensors and the Oculus VR headset.
  • Brain–Computer Interface (BCI) technology will become a very important area for the Metaverse and for VR platforms. Previous research indicates that non-invasive BCI technology has been applied extensively in various areas in recent years, because of its minimal potential risks and time precision [55]. Figure 5 shows the high-performance EEG BCI method (left), and EEG BCI experiments (right) [55][56].

Figure 5. Segmented EEG time window (left), source: [55]; EEG experiment (right), source: [56].

NDA (normal distribution analysis) and PDA (Poisson distribution analysis) methods are adopted to enhance EEG data analytical efficiency, in order to accommodate real-time interaction in the Metaverse and VR platforms [74]. The NDA method is defined as follows: if S[a, b] ⊆ A[1, k], and every x ∈ [a, b] satisfies:

$$f\big(A(x), \mu, \sigma\big) = \frac{1}{\sigma}\,\Phi\!\left(\frac{A(x)-\mu}{\sigma}\right), \quad a \le x \le b \;\Rightarrow\; S[a,b] \in \mathrm{ND}$$

$$\Phi(S) \ge (1 - m_r) \times \frac{1}{\sigma\sqrt{2\pi}} \int_{a}^{b} \exp\!\left(-\frac{c^{2}}{2}\right) dx$$

where $m_r$ is the adjusting parameter, and S[a, b] is an NDA set. The ND-based method derives the data values using the ksdensity function, to generate a probability distribution [56]. The PDA method is defined as follows: the PDA model takes one of the calculated σ and λ values as λ × t, as indicated in the two equations below. Assuming the original data set has σ, then Mean(λ) is the event rate. If Mean(λ) − λ = ∆, then λ × t lies between Mean(λ) and λ. With |y − λ × t| = a, $a^{1/2} + a = \Delta$ is satisfied.

$$P(k \text{ events in a fixed time}) = \frac{e^{-\lambda}\,\lambda^{k}}{k!}$$

$$P\big(N(t) = n\big) = \frac{(\lambda t)^{n}\, e^{-\lambda t}}{n!}$$

where N(t) is the sample data in the t time window. The Gamma function, which extends the factorial and is utilized in the PDA method for processing complex numbers, is expressed below [57]:

$$\Gamma(z) = \int_{0}^{\infty} x^{z-1} e^{-x}\, dx$$

The ∆ parameter is used to regulate the size of the sample data sets, to get the nearest λ and σ values. The ∆ parameter in the PDA plays the same role that it plays in the NDA method. The PDA model employs a PDA benchmark point selection method [55][56][57].
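The snippet below is a loose numerical illustration of the quantities above using SciPy, not the authors' exact procedure: it fits a normal density to one segmented EEG window (the NDA side), models threshold-crossing counts per sub-window with a Poisson distribution (the PDA side), and evaluates the Gamma function. The window data, the 2σ event threshold and the value of the adjusting parameter m_r are all assumptions.

```python
# A loose numerical illustration of the NDA/PDA quantities with SciPy, not the
# authors' exact procedure; data, thresholds and m_r are assumptions.
import numpy as np
from scipy.stats import norm, poisson
from scipy.special import gamma

rng = np.random.default_rng(0)
window = rng.normal(size=256)            # one segmented EEG time window (stand-in)
mu, sigma = window.mean(), window.std()

# NDA side: does the fitted normal density cover the window well enough?
m_r = 0.05                                                   # adjusting parameter
coverage = norm.cdf(window.max(), mu, sigma) - norm.cdf(window.min(), mu, sigma)
is_nd_set = coverage >= (1 - m_r)                            # accept S[a, b] as an ND set

# PDA side: threshold-crossing counts per sub-window vs. a Poisson(lambda * t) model.
events = (np.abs(window) > 2 * sigma).reshape(8, 32).sum(axis=1)
lam = events.mean()                                          # event rate, Mean(lambda)
p_counts = poisson.pmf(events, mu=lam)                       # P(N(t) = n) per sub-window

print(is_nd_set, p_counts.round(3), gamma(4.5))              # Gamma extends the factorial
```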

  • Blockchain technology is an efficient and secure solution for digital worlds, such as the Metaverse. In the blockchain model, a new transaction is verified and added to the existing records, i.e., blocks, by linking the new transaction to previous ones via a cryptographic hash operation [58] (a minimal sketch of this hash-linking is given after this list). Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data [59]. The main characteristics of blockchain technology are that it is secure, decentralized, digitized, collaborative and immutable: these characteristics make blockchain technology a natural solution for digital virtual worlds, such as the Metaverse. Currently, the most successful security technology for blockchain employs Public Key Infrastructure (PKI)-based blockchain methods [60], and researchers in the field have started to search for more efficient solutions. The future trends in blockchain technology development in the Metaverse focus on more autonomous, intelligent and scalable models, such as intelligent-agent-based blockchain [61], Self-Sovereign Identity (SSI) blockchain [62], non-fungible tokens (NFTs) [63] and bio-identity-based blockchain.
  • Artificial intelligence (AI) is a discipline essential to almost all areas in our modern world, particularly for future virtual worlds such as the Metaverse. AI can accelerate analytical efficiency, enhance security and privacy, improve interoperability, and provide better solutions for human–machine interaction and collaboration. The increase in applications of Natural Language Processing (NLP), sentiment analysis and brain informatics technologies to digital worlds is stimulating the development of AI in these areas. The successful stories of AI implementation in image recognition, voice recognition, human–machine interaction and intuition, reveal the promising future of AI in the Metaverse and other virtual worlds. A recent survey showed that a majority of studies had focused on exploring efficient integration and collaboration between Edge AI architecture and the Metaverse [64].
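As mentioned above, here is a minimal, self-contained sketch of the hash-linking idea: each block stores the SHA-256 hash of its predecessor, so altering any historical record breaks the chain during verification. This illustrates the data structure only; real blockchains add consensus, signatures and PKI on top [58][59][60].

```python
# Minimal hash-linked chain of blocks; a data-structure illustration only.
import hashlib, json, time

def make_block(transactions, prev_hash):
    """Each block stores a timestamp, its data and the hash of the previous block."""
    block = {"timestamp": time.time(),
             "transactions": transactions,
             "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify(chain):
    """Tamper check: every block must point at the hash of its predecessor."""
    return all(chain[i]["prev_hash"] == chain[i - 1]["hash"]
               for i in range(1, len(chain)))

chain = [make_block(["genesis"], prev_hash="0" * 64)]
chain.append(make_block(["alice pays bob 3 tokens"], prev_hash=chain[-1]["hash"]))
chain.append(make_block(["bob buys a virtual item"], prev_hash=chain[-1]["hash"]))
print(verify(chain))             # True: links are intact

chain[1]["transactions"] = ["alice pays bob 300 tokens"]   # tamper with a record
chain[1]["hash"] = "forged"                                # even a forged hash fails:
print(verify(chain))             # False: block 2 no longer points at block 1's hash
```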

Figure 6, below, illustrates how the Metaverse and its related technologies, including big data, have evolved and developed [64].

Figure 6. A chronicle of the Metaverse and its related techniques, modified based on [64].

Data sources in the Metaverse and other virtual platforms are growing exponentially; therefore, big data technologies are crucial if the Metaverse is to manage its digital world efficiently and provide users with real-time analytical services. Big data technologies are fundamental tools for making virtual platforms such as the Metaverse feasible for users. In other words, big data is a fundamental component of the Metaverse, and the Metaverse in turn accelerates the development of big data technologies. Big data is not only crucial in the virtual world, however: it is also an important component of our real physical world, as evidenced in various areas. Figure 7 shows the relationship between big data and the Metaverse.

Figure 7. Big data is a key component in both the physical world and virtual worlds. The Metaverse is a virtual world parallel to the real physical world: the two are sometimes connected by augmented reality and digital twins.

The current definitions of the Metaverse vary across studies; however, many researchers share the view that the Metaverse imitates our physical world. In this work, the researchers believe that future virtual worlds, including the Metaverse, will develop into worlds totally different from our physical world: these virtual worlds will go beyond our current social structure and civil life. Table 1 shows example applications of the Metaverse and big data in several key sectors.

Table 1. A brief review of example applications of big data and the Metaverse in major sectors.

| Sector | Big Data | Metaverse |
| --- | --- | --- |
| Healthcare | Real-time big data analytical models (Health-CPS); Data as a Service e-health systems, etc. [65] | Metaverse hospitals (Thumbay, Davita); interactive diagnosis platforms, etc. [66] |
| Finance and Economy | Big data finance and business analytics (Splashback); online business decision support, etc. [67][68][69][70][71][72][73][74][75] | Metaverse banks (Onyx, ZELF); NFTs, Bitcoins, VR funds, etc. [76][77] |
| Education | Learning performance analysis and customization [78]; education data warehouse, BD curriculum, etc. [79] | Metaversity (Novartis, King's InterHigh); immersive, realistic learning scenes [80] |
| Entertainment and Social | User behavior and opinion analysis, social trends [81]; game data monitoring, sentiment analysis [82] | Metaverse games (Roblox, Sandbox) [83]; virtual social platforms (Meta, AltspaceVR) [84] |

6. Conclusion and Discussion

The Metaverse and other virtual platforms have grown rapidly in recent years. PwC predicts that VR and AR platforms will boost global GDP by USD 1.5 trillion by 2030 [85]. To date, applications of the Metaverse have included online shopping, virtual social media, video games, virtual tours, and online museums and arts [86][87][88]. Many large technology companies have announced plans to launch Metaverse products, such as Facebook Horizon, Nvidia Omniverse, and Amazon Metaverse. The future trends in technical development in the Metaverse and other VR platforms can be grouped into five main areas: digital humans, digital twins, brain–computer interfaces (BCI), blockchain, and artificial intelligence. Notably, brain–computer interface technologies have become increasingly important to Metaverse development in recent years, as the immersive interactions provided by BCI can enhance user experience [89][90][91][92][93].

References

  1. de Gérase, N. Nicomachi Geraseni Pythagorei Introductionis Arithmeticae Libri II; Bibliotheca Scriptorum Graecorum et Romanorum Teubneriana; Aedibvs B.G. Teubneri: Stuttgart, Germany, 1866; pp. 1–198. Available online: https://openlibrary.org/works/OL3947510W/Nicomachi_Geraseni_Pythagorei_introductionis_arithmeticae_libri_II (accessed on 30 November 2022).
  2. Rivest, R.L.; Shamir, A.; Adleman, L. A method for obtaining digital signatures and public-key cryptosystems. Commun. ACM 1978, 21, 120–126.
  3. Kolotouros, N.; Pavlakos, G.; Black, M.; Daniilidis, K. Learning to Reconstruct 3D Human Pose and Shape via Model-Fitting in the Loop. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019.
  4. Cao, Z.; Hidalgo, G.; Simon, T.; Wei, S.E.; Sheikh, Y. OpenPose: Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 172–186.
  5. Lin, S.; Yang, L.; Saleemi, I.; Sengupta, S. Robust high-resolution video matting with temporal guidance. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 4–8 January 2022; pp. 238–247.
  6. Bhatia, S.; Sigal, L.; Isard, M.; Black, M. 3D Human Limb Detection using Space Carving and Multi-View Eigen Models. In Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop, Washington, DC, USA, 27 June–2 July 2004; p. 17.
  7. Agarwal, A.; Triggs, B. Recovering 3D Human Pose from Monocular Images. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 44–58.
  8. Martinez, J.; Hossain, R.; Romero, J.; Little, J.J. A Simple Yet Effective Baseline for 3D Human Pose Estimation. In Proceedings of the IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, 22–29 October 2017; pp. 2659–2668.
  9. Mehta, D.; Sotnychenko, O.; Mueller, F.; Xu, W.; Elgharib, M.; Fua, P.; Seidel, H.P.; Rhodin, H.; Pons-Moll, G.; Theobalt, C. XNect: Real-time multi-person 3D motion capture with a single RGB camera. ACM Trans. Graph. 2020, 39, 82:1–82:17.
  10. Bogo, F.; Kanazawa, A.; Lassner, C.; Gehler, P.; Romero, J.; Black, M.J. Keep it SMPL: Automatic estimation of 3D human pose and shape from a single image. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2016; pp. 561–578.
  11. Kanazawa, A.; Black, M.J.; Jacobs, D.W.; Malik, J. End-to-end Recovery of Human Shape and Pose. In Proceedings of the Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 1–10.
  12. Pavlakos, G.; Zhu, L.; Zhou, X.; Daniilidis, K. Learning to Estimate 3D Human Pose and Shape from a Single Color Image. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, 18–22 June 2018; pp. 459–468.
  13. Pavlakos, G.; Choutas, V.; Ghorbani, N.; Bolkart, T.; Osman, A.A.; Tzionas, D.; Black, M.J. Expressive body capture: 3d hands, face, and body from a single image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2019; pp. 10975–10985.
  14. Choutas, V.; Pavlakos, G.; Bolkart, T.; Tzionas, D.; Black, M.J. Monocular Expressive Body Regression Through Body-Driven Attention. In Proceedings of the European Conference on Computer Vision; Springer: Cham, Switzerland, 2020; pp. 20–40.
  15. Zhang, Y.; Li, Z.; An, L.; Li, M.; Yu, T.; Liu, Y. Lightweight multi-person total motion capture using sparse multi-view cameras. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 5560–5569.
  16. Zheng, Z.; Yu, T.; Liu, Y.; Dai, Q. Pamir: Parametric model-conditioned implicit representation for image-based human reconstruction. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3170–3184.
  17. Huang, Z.; Xu, Y.; Lassner, C.; Li, H.; Tung, T. Arch: Animatable reconstruction of clothed humans. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 3093–3102.
  18. Ma, Q.; Yang, J.; Tang, S.; Black, M.J. The power of points for modeling humans in clothing. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 10974–10984.
  19. Peng, S.; Zhang, Y.; Xu, Y.; Wang, Q.; Shuai, Q.; Bao, H.; Zhou, X. Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 9050–9059.
  20. Mir, A.; Alldieck, T.; Pons-Moll, G. Learning to transfer texture from clothing images to 3d humans. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 7023–7034.
  21. Yang, F.; Li, R.; Georgakis, G.; Karanam, S.; Chen, T.; Ling, H.; Wu, Z. Robust multi-modal 3d patient body modeling. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: New York, NY, USA, 2020; pp. 86–95.
  22. Lee, H.-J.; Chen, Z. Determination of 3D human body postures from a single view. Comput. Vis. Graph. Image Process. 1985, 30, 148–168.
  23. Nevatia, R.; Binford, T.O. Description and recognition of curved objects. Artif. Intell. 1977, 8, 77–98.
  24. Ju, S.X.; Black, M.J.; Yacoob, Y. Cardboard people: A parameterized model of articulated image motion. In Proceedings of the Second International Conference on Automatic Face and Gesture Recognition, Killington, VT, USA, 14–16 October 1996; pp. 38–44.
  25. Wang, M.; Qiu, F.; Liu, W.; Qian, C.; Zhou, X.; Ma, L. Monocular Human Pose and Shape Reconstruction using Part Differentiable Rendering. Comput. Graph. Forum 2020, 39, 351–362.
  26. Robinette, K.M.; Blackwell, S.; Daanen, H.; Boehmer, M.; Fleming, S. Civilian American and European Surface Anthropometry Resource (CAESAR), Final Report. Volume 1. Summary. 2002. Available online: https://www.humanics-es.com/CAESARvol1.pdf (accessed on 30 November 2022).
  27. Anguelov, D.; Srinivasan, P.; Koller, D.; Thrun, S.; Rodgers, J.; Davis, J. SCAPE: Shape completion and animation of people. ACM Trans. Graph. 2005, 24, 408–416.
  28. Loper, M.; Mahmood, N.; Romero, J.; Pons-Moll, G.; Black, M.J. SMPL: A Skinned Multi-Person Linear Model. ACM Trans. Graphics 2015, 34, 248.
  29. Romero, J.; Tzionas, D.; Black, M.J. Embodied hands: Modeling and capturing hands and bodies together. arXiv 2022, arXiv:2201.02610.
  30. Blanz, V.; Vetter, T. A morphable model for the synthesis of 3D faces. In Siggraph 1999, Computer Graphics Proceedings; Rockwood, A., Ed.; Addison Wesley Longman: Los Angeles, CA, USA, 1999; pp. 187–194.
  31. Osman, A.A.A.; Bolkart, T.; Black, M.J. STAR: A Sparse Trained Articulated Human Body Regressor. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; pp. 598–613.
  32. Saito, S.; Huang, Z.; Natsume, R.; Morishima, S.; Li, H.; Kanazawa, A. PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 2304–2314.
  33. Saito, S.; Simon, T.; Saragih, J.; Joo, H. PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization. In Proceedings of the 2020 IEEE/CVF Conference on CVPR, Seattle, WA, USA, 13–19 June 2020; pp. 81–90.
  34. Balan, A.O.; Sigal, L.; Black, M.J.; Davis, J.E.; Haussecker, H.W. Detailed Human Shape and Pose from Images. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8.
  35. Guan, P.; Weiss, A.; Balan, A.O.; Black, M.J. Estimating human shape and pose from a single image. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 1381–1388.
  36. Pishchulin, L.; Insafutdinov, E.; Tang, S.; Andres, B.; Andriluka, M.; Gehler, P.V.; Schiele, B. DeepCut: Joint Subset Partition and Labeling for Multi Person Pose Estimation. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, 27–30 June 2016; pp. 4929–4937.
  37. Zanfir, A.; Marinoiu, E.; Sminchisescu, C. Monocular 3D Pose and Shape Estimation of Multiple People in Natural Scenes: The Importance of Multiple Scene Constraints. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, 18–22 June 2018; pp. 2148–2157.
  38. Christoph, L.; Romero, J.; Kiefel, M.; Bogo, F.; Black, M.J.; Gehler, P.V. Unite the people: Closing the loop between 3d and 2d human representations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6050–6059.
  39. Xiang, D.; Joo, H.; Sheikh, Y. Monocular total capture: Posing face, body, and hands in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019.
  40. Joo, H.; Neverova, N.; Vedaldi, A. Exemplar Fine-Tuning for 3D Human Model Fitting Towards In-the-Wild 3D Human Pose Estimation. In Proceedings of the 2021 International Conference on 3D Vision (3DV), Virtual, 1–3 December 2021; pp. 42–52.
  41. Song, J.; Chen, X.; Hilliges, O. Human body model fitting by learned gradient descent. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Cham, Switzerland, 2020; pp. 744–760.
  42. Iqbal, U.; Xie, K.; Guo, Y.; Kautz, J.; Molchanov, P. KAMA: 3D Keypoint Aware Body Mesh Articulation. In Proceedings of the 2021 International Conference on 3D Vision (3DV), Virtual, 1–3 December 2021; pp. 689–699.
  43. Li, J.; Xu, C.; Chen, Z.; Bian, S.; Yang, L.; Lu, C. HybrIK: A Hybrid Analytical-Neural Inverse Kinematics Solution for 3D Human Pose and Shape Estimation. In Proceedings of the IEEE/CVF CVPR, Virtual, 20–25 June 2021; pp. 3382–3392.
  44. Jiang, W.; Kolotouros, N.; Pavlakos, G.; Zhou, X.; Daniilidis, K. Coherent reconstruction of multiple humans from a single image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 5579–5588.
  45. Kolotouros, N.; Pavlakos, G.; Daniilidis, K. Convolutional mesh regression for single-image human shape reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4501–4510.
  46. Kocabas, M.; Athanasiou, N.; Black, M.J. VIBE: Video Inference for Human Body Pose and Shape Estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 5252–5262.
  47. Mahmood, N.; Ghorbani, N.; Troje, N.F.; Pons-Moll, G.; Black, M.J. AMASS: Archive of Motion Capture as Surface Shapes. In Proceedings of the International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 5442–5451.
  48. Feng, Y.; Choutas, V.; Bolkart, T.; Tzionas, D.; Black, M.J. Collaborative regression of expressive bodies using moderation. In Proceedings of the 2021 International Conference on 3D Vision (3DV), London, UK, 1–3 December 2021; pp. 792–804.
  49. Zanfir, A.; Bazavan, E.G.; Zanfir, M.; Freeman, W.T.; Sukthankar, R.; Sminchisescu, C. Neural descent for visual 3d human pose and shape. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 14484–14493.
  50. Zhou, Y.; Habermann, M.; Habibie, I.; Tewari, A.; Theobalt, C.; Xu, F. Monocular Real-Time Full Body Capture with Inter-Part Correlations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 4811–4822.
  51. Moon, G.; Lee, K.M. Pose2pose: 3d positional pose-guided 3d rotational pose prediction for expressive 3d human pose and mesh estimation. arXiv 2020, arXiv:2011.11534.
  52. Wang, Y.; Su, Z.; Zhang, N.; Xing, R.; Liu, D.; Luan, T.H.; Shen, X. A Survey on Metaverse: Fundamentals, Security, and Privacy. In IEEE Communications Surveys & Tutorials 2022; IEEE Press: New York, NY, USA, 2022; pp. 1–32.
  53. Huang, J.; Zhang, H.L.; Lu, H.; Xin, Y.; Li, S. A Novel Position-based VR Online Shopping Recommendation System based on Optimized Collaborative Filtering Algorithm. In Proceedings of the Web Intelligence Workshops (WI), Melbourne, VIC, Australia, 14–17 December 2022; pp. 1–7.
  54. Zheng, Z.; Zhu, J.; Lyu, M.R. Service-Generated Big Data and Big Data-as-a-Service: An Overview. In Proceedings of the 2013 IEEE International Congress on Big Data, Silicon Valley, CA, USA, 6–9 October 2013; pp. 403–410.
  55. Zhang, H.L.; Liu, J.; Dowens, M.G. Complex brain activity analysis and recognition based on multiagent methods. Concurr. Comput. Pract. Exp. 2020, 34, e5855.
  56. Zhang, H.; Zhao, Q.; Lee, S.; Dowens, M.G. EEG-Based Driver Drowsiness Detection Using the Dynamic Time Dependency Method. In Proceedings of the Brain Informatics, Haikou, China, 13–15 December 2019; pp. 39–47.
  57. Zhang, H.L.; Lee, S.; Li, X.; He, J. EEG Self-Adjusting Data Analysis Based on Optimized Sampling for Robot Control. Electronics 2020, 9, 925.
  58. Xu, H.; Li, Z.; Li, Z.; Zhang, X.; Sun, Y.; Zhang, L. Metaverse Native Communication: A Blockchain and Spectrum Prospective. In Proceedings of the IEEE International Conference on Communications Workshops, Seoul, Korea, 16–20 May 2022; pp. 7–12.
  59. Narayanan, A.; Bonneau, J.; Felten, E.; Miller, A.; Goldfeder, S. Bitcoin and Cryptocurrency Technologies: A Comprehensive Introduction; Princeton University Press: Princeton, NJ, USA, 2016.
  60. Talamo, M.; Arcieri, F.; Dimitri, A.; Schunck, H.C. A blockchain based PKI validation system based on rare events management. Future Internet 2020, 12, 40.
  61. Badruddoja, S.; Dantu, R.; He, Y.; Thompson, M.; Salau, A.; Upadhyay, K. Trusted AI with Blockchain to Empower Metaverse. In Proceedings of the International Conference on Blockchain Computing and Applications (BCCA); IEEE Press: New York, NY, USA, 2022; pp. 237–245.
  62. Mühle, A.; Grüner, A.; Gayvoronskaya, T.; Meinel, C. A survey on essential components of a self-sovereign identity. Comput. Sci. Rev. 2018, 30, 80–86.
  63. Heimes, A.; Zenkert, J.; Fathi, M. Current State and Latest Trends in Blockchain Technology and its Usage and the Effects on Business Use Cases. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), Melbourne, Australia, 17–20 October 2021; pp. 3311–3316.
  64. Chang, L.; Zhang, Z.; Li, P.; Xi, S.; Guo, W.; Shen, Y.; Xiong, Z.; Kang, J.; Niyato, D.; Qiao, X.; et al. 6G-Enabled Edge AI for Metaverse: Challenges, Methods, and Future Research Directions. J. Commun. Inf. Netw. 2022, 7, 107–121.
  65. Prayitno; Shyu, C.-R.; Putra, K.T.; Chen, H.-C.; Tsai, Y.-Y.; Hossain, K.S.M.T.; Jiang, W.; Shae, Z.-Y. A Systematic Review of Federated Learning in the Healthcare Area: From the Perspective of Data Properties and Applications. Appl. Sci. 2021, 11, 11191.
  66. Bhugaonkar, K.; Bhugaonkar, R.; Masne, N. The Trend of Metaverse and Augmented & Virtual Reality Extending to the Healthcare System. Cureus 2022, 14, e29071.
  67. Han, J.; Pei, J.; Tong, H. Data Mining Concepts and Techniques, 4th ed.; Elsevier: Amsterdam, The Netherlands, 2022.
  68. Willis, G.; Tranos, E. Using ‘Big Data’ to understand the impacts of Uber on taxis in New York City. Travel Behav. Soc. 2021, 22, 94–107.
  69. Zhang, H.L.; Zhao, Y.; Pang, C.; He, J. Splitting Large Medical Data Sets Based on Normal Distribution in Cloud Environment. IEEE Trans. Cloud Comput. 2015, 8, 518–531.
  70. Lyko, K.; Nitzschke, M.; Ngomo, A.N. Big Data Acquisition. In New Horizons for a Data-Driven Economy; Cavanillas, J.M., Ed.; Springer Press: New York, NY, USA, 2016; pp. 39–61.
  71. Coda, F.A.; Filho, D.J.S.; Junqueira, F.; Miyagi, P.E. Big Data Acquisition Architecture: An Industry 4.0 Approach. In Technological Innovation for Life Improvement; Camarinha-Matos, L., Farhadi, N., Lopes, F., Pereira, H., Eds.; Springer: New York, NY, USA, 2020; pp. 222–229.
  72. Chang, W.; Boyd, D.; Levin, O. NIST Big Data Interoperability Framework: Volume 6, Reference Architecture; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2019. Available online: https://www.nist.gov/publications/nist-big-data-interoperability-framework-volume-6-reference-architecture?pub_id=918936 (accessed on 19 October 2022).
  73. Jain, P.; Gyanchandani, M.; Khare, N. Big data privacy: A technological perspective and review. J. Big Data 2016, 3, 25.
  74. Rafiq, F.; Awan, M.J.; Yasin, A.; Nobanee, H.; Zain, A.M.; Bahaj, S.A. Privacy Prevention of Big Data Applications: A Systematic Literature Review. SAGE Open 2022, 12.
  75. Sun, H.; Rabbani, M.R.; Sial, M.S.; Yu, S.; Filipe, J.A.; Cherian, J. Identifying Big Data’s Opportunities, Challenges, and Implications in Finance. Mathematics 2020, 8, 1738.
  76. Musamih, A.; Dirir, A.; Yaqoob, I.; Salah, K.; Jayaraman, R.; Puthal, D. NFTs in Smart Cities: Vision, Applications, and Challenges. IEEE Consum. Electron. Mag. 2022, 1–14.
  77. Edwards, C. Are NFTs Key to Accessing the Metaverse? Eng. Technol. 2022, 17, 1–8.
  78. Shabihi, N.; Kim, M.S. Big Data Analytics in Education: A Data-Driven Literature Review. In Proceedings of the International Conference on Advanced Learning Technologies (ICALT), Tartu, Estonia, 12–15 July 2021; pp. 154–156.
  79. Qureshi, H.; Sagar, A.K.; Astya, R.; Shrivastava, G. Big Data Analytics for Smart Education. In Proceedings of the IEEE 6th International Conference on Computing, Communication and Automation (ICCCA), Arad, Romania, 17–19 December 2021; pp. 650–658.
  80. Gim, G.; Bae, H.; Kang, S. Metaverse Learning: The Relationship among Quality of VR-Based Education, Self-Determination, and Learner Satisfaction. In Proceedings of the IEEE/ACIS 7th International Conference on Big Data, Cloud Computing, and Data Science, Danang, Vietnam, 4–6 October 2022; pp. 279–284.
  81. Agrawal, D.; Budak, C.; El Abbadi, A.; Georgiou, T.; Yan, X. Big Data in Online Social Networks: User Interaction Analysis to Model User Behavior in Social Networks. In Proceedings of the Databases in Networked Information Systems, Aizu-Wakamatsu, Japan, 24–26 March 2014; Madaan, A., Kikuchi, S., Bhalla, S., Eds.; Lecture Notes in Computer Science. pp. 1–16.
  82. Britto, L.F.S.; Pacifico, L.D.S. Evaluating Video Game Acceptance in Game Reviews using Sentiment Analysis Techniques. In Proceedings of the SBGames, Virtual, 7–10 November 2020; pp. 399–402.
  83. Mirza-Babaei, P.; Robinson, R.; Mandryk, R.; Pirker, J.; Kang, C.; Fletcher, A. Games and the Metaverse. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play, Bremen, Germany, 2–5 November 2022; pp. 318–319.
  84. Cheng, R.; Wu, N.; Varvello, M.; Chen, S.; Han, B. Are we ready for Metaverse: A measurement study of social virtual reality platforms. In Proceedings of the 22nd ACM Internet Measurement Conference, Nice, France, 25–27 October 2022; pp. 504–518.
  85. Hobson, D. How Banks Can Make Money in the Metaverse. Future of Finance, 17 June 2022; 1–4.
  86. Suzuki, S.-N.; Kanematsu, H.; Barry, D.M.; Ogawa, N.; Yajima, K.; Nakahira, K.T.; Shirai, T.; Kawaguchi, M.; Kobayashi, T.; Yoshitake, M. Virtual Experiments in Metaverse and their Applications to Collaborative Projects: The framework and its significance. Procedia Comput. Sci. 2020, 176, 2125–2132.
  87. Gogolin, G.; Gogolin, E.; Kam, H.-J. Virtual worlds and social media: Security and privacy concerns, implications, and practices. Int. J. Artif. Life Res. 2014, 4, 30–42.
  88. Falchuk, B.; Loeb, S.; Neff, R. The Social Metaverse: Battle for Privacy. IEEE Technol. Soc. Mag. 2018, 37, 52–61.
  89. Hu, X.; Liu, Y.; Zhang, H.L.; Wang, W.; Li, Y.; Meng, C.; Fu, Z. Noninvasive Human-Computer Interface Methods and Applications for Robotic Control: Past, Current, and Future. Comput. Intell. Neurosci. 2022, 2022, 1635672.
  90. Abiri, R.; Borhani, S.; Sellers, E.W.; Jiang, Y.; Zhao, X. A comprehensive review of EEG-based brain-computer interface paradigms. J. Neural Eng. 2019, 16, 1–21.
  91. Oudeyer, P.Y.; Gottlieb, J.; Lopes, M. Intrinsic motivation, curiosity, and learning: Theory and applications in educational technologies. Prog. Brain Res. 2016, 229, 257–284.
  92. Fahimi, F.; Dosen, S.; Ang, K.K.; Mrachacz-Kersting, N.; Guan, C. Generative Adversarial Networks-Based Data Augmentation for Brain–Computer Interface. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 4039–4051.
  93. Chen, X.; Huang, X.; Wang, Y.; Gao, X. Combination of Augmented Reality Based Brain- Computer Interface and Computer Vision for High-Level Control of a Robotic Arm. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 3140–3147.