Anorexia Nervosa (AN) is a condition that mainly affects adolescents and can be associated with functional impairment. AN is more frequent in females and contributes to psychological and biological dysfunctions. The lifetime prevalence of AN in adults is about 0.6% (0.9% in females and 0.3% in males). Neuropsychological investigations have found that AN patients are impaired in several cognitive domains, such as visuospatial abilities, empathic abilities, executive functioning, and central coherence.
Within executive functioning, a specific weakness in set-shifting, or cognitive flexibility, has been consistently reported in AN patients. Impaired behavioural response shifting has been related to abnormalities in the fronto-striato-thalamic circuitry. Whereas several studies have provided evidence that cognitive alterations are present during starvation, empirical support documenting these deficits outside the acute phase of malnutrition in AN is more elusive. Reduced set-shifting and weak central coherence are thought to be part of the eating disorder endophenotype.
It has been proposed that these neuropsychological dysfunctions may have specific links with the core clinical characteristics of AN. In particular, impaired set-shifting may be linked with the cognitive and behavioural pattern of inflexibility; weak central coherence may be linked with the excessive preoccupation with details of body parts and weight; and visuospatial deficits may be related to the distortion of body image (BI).
It has been shown that the restrictive (AN-R) and purging subtypes of AN may have different cognitive profiles, the former characterized by a reflective cognitive style and the latter by impulsivity. Previous studies have documented, in particular, an inaccuracy in the estimation of one’s own body parts in patients with AN, suggesting that disordered body perception may be a central aspect of AN. Disordered body perception is one of the principal risk factors for AN, one of its main symptoms (linked to depression and anxiety), a significant maintenance and prognostic factor, and a predictor of relapse. In sum, many studies have confirmed the strong relationship between AN and BI; when dealing with patients with AN, trying to improve their BI is a commonly used intervention approach.
Ziser and colleagues (2018) carried out a systematic review, conducted according to the PRISMA statement, of the evidence on BI-directed interventions in AN. Targeting BI disturbances may be efficacious, and exposure therapy is a potential method for the treatment of AN and eating disorders (EDs): VR exposure could improve the accessibility and feasibility of exposures in the clinical setting.
Clus and colleagues (2018) conducted a broad analysis of studies on the use of VR in patients with various EDs (publication dates of the included articles range from 1998 to early 2016), showing that VR is an acceptable and promising therapeutic tool for patients with EDs. Commercial VR technology is increasingly accessible to the general public and to medical research, especially in mental health, with diagnostic, therapeutic, and preventive aims. The use of VR in the evaluation or treatment of patients with EDs is being led by European teams, while most of the technologies used to enable immersion in the virtual environment have been developed in the United States. The heterogeneity of the populations studied, of the studies’ objectives and of the content of the VR protocols makes interpretation of the results difficult. Moreover, many studies concluded without differentiating the subtypes of EDs, even though knowledge of both transdiagnostic and specific mechanisms in ED subtypes is of relevance for clinical practice.
Autonomic imbalance and its connection with interoception play an important role in AN symptomatology; this altered sense of the physical body in AN contributes to generating very specific emotions and behaviours. A previous study suggested that two main neural networks, the limbic and the frontal, could be particularly relevant in AN.
A recent review by Riva and colleagues (2021) outlined that VR could be used to modify the allocentric memory of the body, to improve the processes of multisensory integration through multisensory body illusions, and to reduce attentional biases towards body-related stimuli. The hypothesis developed to explain the perturbations of BI in patients with EDs is the allocentric lock hypothesis. The use of VR could make it possible to unblock this transmission. The theory of objectification as a specific cognitive process is cited to understand these perturbations: a person internalizes an objectified self-image when using an allocentric reference frame (observer mode) to recall events in which they evaluate themselves on the basis of body appearance. The validity of using a virtual environment in a population can be judged by the individual’s emotional reactions. A single exposure to virtual silhouettes of the patient or of a mannequin increases anxiety and changes mood, reproducing the physiological reactions to a real situation. The repetition of VR sessions with modules of exposure to silhouettes of the patient or of a mannequin reduces negative emotions through progressive attenuation of the anxiogenic response. As explained by Dakanalis and colleagues, the “allocentric lock” model of EDs, and of AN in particular, provides a rich conceptual framework for understanding the source of BI disturbance. People are generally unable to accurately determine their own body measurements and to translate this knowledge into identifying a model/avatar that best represents their own body. This inability has been related mainly to health problems, first and foremost AN.
The “rubber hand illusion” (RHI) was the forerunner of current VR research. Since Botvinick and Cohen’s original publication, revealing that observing a rubber hand being stroked or touched synchronously with one’s own hidden hand generates the illusion that the rubber hand is part of one’s body, there has been increasing research interest in the study and modulation of the brain’s body representation. More recently, a growing body of pioneering research has endeavoured to adapt the RHI to the entire body (body-swap illusion) using the same principles (such as visuo-tactile synchrony between the real body and the seen surrogate body). Current research has revealed that embodiment in a virtual body substituting one’s own body in VR, with visuo-tactile stimulation, alters the body percept. A recent study has shown that the body-swap illusion was able to induce an update of the negative stored representation of the body. Although studies on the body-swap illusion can be classified in terms of the main cross-modal stimuli provided, in the EDs field all the studies conducted are based on visuo-tactile triggers for the body swap. In fact, visuo-tactile and proprioceptive integration are critical in perceiving the human body, highlighting the issue of the multisensory and affective impairment of body perception and representation.
2. Avatar and Multisensory Integration: Stimulus Generation, Technical and Emotional Setup
Irvine et al. (2020) tested the efficacy of a training program delivered in VR to modify BI in female volunteers with high BI concerns. In a 4-day training programme in VR, participants categorized a series of 3D models (thin or fat). One group was presented with the stimuli briefly, while the other intervention group had no time limits and was given inflationary feedback to shift their categorizations of the stimulus models towards higher body mass indexes (BMIs). The “reference frame-shifting approach”, which is focused on the reorganization of body-related memories, involves the VR adaptation of the imagery rescripting method, aimed at changing the meaning linked with negative memories of the body. Riva et al. (2018) developed a specific BI rescripting protocol: a sensory training to ‘unlock’ the body memory by increasing the contribution of new somatosensory information related to the negative memory. Riva and his team pioneered the use of avatars to measure and treat body representation disturbances: patients were asked to select, among nine avatars ranging from underweight to overweight, the one indicating how they perceive themselves. After discussing with clinicians the emotions that emerged in the first phases (exposure to digitized photographs of their real bodies in different formats), patients modelled their perceived BI using an avatar and compared it with their actual and ideal BI.
In the Neiret et al. study, a virtual representation of participants’ internal image of their body shape was offered. A virtual body corresponding to the representation they had of their ideal body was also created, and another virtual body based on their real body measures was built. Furthermore, in this case, participants saw the three different virtual bodies from an embodied first-person perspective and from a third-person perspective. Participants’ bodies were scanned and generated as avatars. Two alternatives were generated for each body, increasing or decreasing its size. Afterwards, participants had to choose which of the three proposed bodies was theirs. In the study by Provenzano et al., virtual reality and a multisensory bodily illusion were combined to characterize and reduce the perceptual (body overestimation) and the cognitive-emotional (body dissatisfaction) components of BI distortion in AN. Interpersonal multisensory stimulation (IMS) was applied to the avatar reproducing the participant’s perceived body. Two further avatars reproduced weight increases and losses of 15%, all presented from a first-person perspective (1PP). Participants had to choose the avatar corresponding to their actual body size. After that, they had to complete a set of tasks and experience a set of both synchronous and asynchronous stimuli with three different body-size avatars.
New technological tools such as virtual reality (VR) applications have improved the feeling of being the avatar through immersive conditions. Head-tracking technology allowed for the implicit measurement of patients’ explicit choices. The retrospective study by Fisher et al. examines the hypothesis that VR with standardized 3D avatars would improve BI perception, and hence BI evaluation, by adolescents with AN, compared to paper-based figure rating scales (FRS). The creation of personalized avatars to simulate realistic changes in body size is useful when studying self-perception of body size. Hudson and colleagues explored this topic in young adult women, using a generalized line-drawing scale and several types of personalized avatars, including 3D textured images presented in immersive virtual reality (VR). Each participant viewed both 2D and 3D avatars of their body, along with three other avatars with different body sizes. The order in which the avatars were shown was random. Body perception ratings using generalized line drawings were often higher than responses using individualized visualization methods.
Mölbert et al., in a case-control study aimed at disentangling the components of BID in AN, investigated 24 women with AN and 24 controls. Using different psychophysical tasks, participants considered their actual and their desired body shape, testing for general perceptual biases. Based on a three-dimensional (3D) body scan, the researchers offered virtual 3D bodies in a virtual-reality mirror scenario. The experiment comprised three parts: (a) body scanning; (b) moving inside a virtual scene and observing the avatar in a mirror; (c) observing the 2D version of the avatar on a desktop. In Rubo et al., the avatar is calibrated to the body size of the participant and is then slightly incremented. After the calibration phase, participants observe themselves through a mirror and have to fulfil simple tasks such as touching their hips and stomach and walking around a table.
Fonseca Baeza et al. presented the study protocol of a novel virtual reality (VR) multisensory paradigm to assess and treat BID. A standard female virtual body was developed for all participants, to be seen from a first-person point of view or from a third-person point of view in a mirror. The participant cannot see the face or the hair of the avatar. It is possible to choose the body size along a continuum from 133 cm of waist and 151 cm of hips (extremely overweight) to 65 cm of waist and 88 cm of hips (thin), covering a BMI range from 42.5 to 12.5. The avatar is first presented with a body mass index (BMI) similar to the participant’s. Afterwards, it is increased and decreased by 2 BMI points, and the participant is asked to modify the avatar until they consider that it coincides with their actual abdomen. The control-group task consisted of making slow movements while observing the virtual abdomen. This protocol allows the development of a more realistic corporal representation. In Corno et al., a sample of 27 community women recreated in VR their perceived body from both an allocentric and an egocentric perspective. Attitudinal indexes of BID were assessed through validated questionnaires. The third-person view (TPV) is obtained through a mirror located in the virtual scene. Starting from a body with a BMI of 20.5, participants have to indicate how to modify the avatar to recreate a body size corresponding to the one they perceive, both in first-person view (FPV) and in TPV. Virtual bodies (presented along a continuum from extreme underweight to morbid obesity) were viewed without their heads and dressed in blue shorts and a black crop top.
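The body-size continuum described above lends itself to a small numerical sketch. Assuming a purely linear mapping between the reported BMI endpoints and the waist/hip measurements (an assumption on our part; the published protocol may derive the measurements differently), the avatar’s dimensions at any BMI on the continuum follow by interpolation:

```python
def avatar_measurements(bmi: float) -> tuple[float, float]:
    """Interpolate waist and hip circumference (cm) for a BMI on the continuum.

    Endpoints reported in the protocol: BMI 12.5 -> 65 cm waist / 88 cm hips,
    BMI 42.5 -> 133 cm waist / 151 cm hips. Linearity is our assumption.
    """
    t = (bmi - 12.5) / (42.5 - 12.5)  # 0.0 at the thin end, 1.0 at the heavy end
    waist = 65 + t * (133 - 65)
    hips = 88 + t * (151 - 88)
    return waist, hips


def step_avatar(bmi: float, steps: int) -> tuple[float, float]:
    """Apply the protocol's 2-BMI-point adjustment steps around a starting BMI."""
    return avatar_measurements(bmi + 2 * steps)
```

For example, starting from a participant whose avatar matches BMI 20.5, one upward adjustment step yields the measurements for BMI 22.5.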
To modify body-size perception through an illusion of ownership over a virtual body, Buche et al. proposed coupling a tactile stimulation with viewing an avatar from a third-person perspective (a condition known to produce this kind of illusion). The application offers the possibility of choosing between avatars of different builds and of performing morphing to reduce the avatar’s body. Moreover, the application allows people’s perception of their body size to be measured implicitly through an affordance estimation task, in which they have to judge whether they can pass through doors of different sizes without twisting their shoulders. Buche et al. built a 3D virtual environment for inducing the body ownership illusion, implementing an experiment with 16 female participants. Participants performed the affordance estimation task five times: the first time before being exposed to their chosen avatar, to obtain a baseline measure, and the other four times after exposure to their avatar in different situations. These situations are defined by the crossing of two experimental factors: morphing (presence or absence) and simultaneous visuo-tactile stimulation (presence or absence). In Ferrer Garcia et al., college students (5 males) were exposed to an immersive VR environment in which the illusion of ownership over a virtual body was induced using visuomotor synchronization, to assess the ability of VR-based software to produce body anxiety responses in a non-clinical sample. BMI, drive for thinness and body dissatisfaction were assessed before exposure, while body anxiety, fear of gaining weight and the ownership illusion were assessed after exposure to each avatar. In Gutiérrez-Maldonado and colleagues, a 22-year-old female anorectic patient underwent VR-enhanced Experiential Cognitive Therapy (ECT) to address both body experience disturbances and motivation for change. Exposure to an embodied avatar has been used by Porras-Garcia et al.: the procedure consisted of five sessions in which a patient suffering from AN embodied an avatar of progressively increasing BMI. Porras-Garcia et al. also used a VR-based embodiment procedure in which participants owned an avatar with their own body measurements, in comparison to a larger-size one.
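The affordance estimation task can be sketched as follows. This is an illustrative reconstruction: affordance research commonly models an aperture as judged passable when its width exceeds the walker’s (perceived) shoulder width by a critical ratio, but the ratio value and all names below are our assumptions, not details from the application described above.

```python
CRITICAL_RATIO = 1.15  # hypothetical aperture-to-shoulder ratio, for illustration only


def judged_passable(door_width_cm: float, perceived_shoulder_cm: float) -> bool:
    """Would the participant say they can pass without twisting their shoulders?"""
    return door_width_cm >= CRITICAL_RATIO * perceived_shoulder_cm


def implied_shoulder_width(narrowest_passable_door_cm: float) -> float:
    """Invert the rule: the narrowest door judged passable implicitly
    measures the participant's perceived shoulder width."""
    return narrowest_passable_door_cm / CRITICAL_RATIO
```

Comparing the implied width with the participant’s real shoulder width, before and after avatar exposure, is what makes this an implicit measure of body-size perception.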
3. Intervention Evaluation Studies
In most of the analyzed studies, intervention groups experienced reductions in BI concern and, in the groups with longer stimulus presentation times, these reductions were consistent with a clinically meaningful effect. The third-person perspective allowed participants to perceive their real body shape without applying the negative prior beliefs associated with the self; this resulted in a more positive evaluation of their body shape. Only in Fisher et al. and in Hudson et al. are the results of BID evaluation by VR standardized 3D avatars comparable to those obtained by paper-based FRS, and presentation in immersive VR may not be essential.
Full-body ownership illusions in VR can be robustly induced by providing congruent visual stimulation, with congruent tactile experiences providing a dispensable extension to an already established phenomenon. In Rubo et al., visuo-tactile congruency indeed does not add to already high explicit measures of body ownership, but it does modulate movement behaviour when walking in the laboratory. Participants who took ownership over a more corpulent virtual body with intact visuo-tactile congruency increased safety distances towards the laboratory’s walls, compared to participants who experienced the same illusion with deteriorated visuo-tactile congruency. This effect is in line with the body schema adapting more readily to a more corpulent body after receiving congruent tactile information. The researchers concluded that the action-oriented, unconscious body schema relies more heavily on tactile information than do the more explicit aspects of body ownership. In Corno et al., in line with the allocentric lock hypothesis, the results confirmed the existence of two different mechanisms underlying BID: the egocentric and the allocentric frame. In Buche et al., the REVAM application links two main aspects of body-size perception, the first focusing on its modification and the second on its assessment, and indicated that exposing people to a virtual body reduced in size could be a way to modify body-size perception, at least temporarily.
In Ferrer Garcia et al., students reported higher levels of body anxiety and fear of gaining weight after owning a 40% larger virtual body, in particular those students with higher scores on the body dissatisfaction and drive-for-thinness scales of the validated tests used. In the research of Conxa Perpiñá et al., improvement was maintained at post-treatment and at one-year follow-up. The results reveal the advantage of including a treatment component addressing BI disturbances in the protocol for the general treatment of EDs.
Gutiérrez-Maldonado and colleagues detailed the characteristics of ECT, an integrated approach ranging from cognitive-behavioural therapy to virtual reality (VR) sessions. Multisensory bodily illusions, since the pivotal work of Botvinick and Cohen, have been used to investigate the plasticity of bodily experience and representation. Keizer and colleagues showed in a case-control study that patients with AN experienced a stronger RHI than healthy controls; moreover, the RHI was able to induce a decrease in the overestimation of hand width only in AN subjects. Keizer and colleagues also showed that, after the embodiment procedure, participants with AN exhibited a decrease in the overestimation of their bodies and body parts lasting for two hours. Porras-Garcia et al. carried out several studies using a VR-based embodiment method. The results showed a reduction in body-related anxiety, fear of gaining weight, body-related attentional bias and BI disturbances.
4. The VR Technologies
In the examined period (about 20 years), VR technologies have certainly evolved a great deal. Very complex and bulky systems, which required very powerful computers, have given way to cheaper and much more agile ones. The advent of low-cost VR devices such as the Oculus Rift or HTC Vive has undoubtedly given a significant boost to research using virtual reality technologies.
Among the works analyzed, 10 use the Oculus Rift (DK1 and DK2), five the HTC Vive, and five other devices (generally older and more expensive), while the remaining five did not specify the system used.
The display of an avatar (especially in TPV form, present in 12 of the selected articles) controlled by the subject’s posture requires a device capable of detecting and tracking it, such as the OptiTrack motion capture system reported in Article 2. The Kinect device (created by Microsoft to interact with video games) has greatly, and economically, facilitated this type of task. Among the works analyzed, three use the Kinect for movement tracking. The Kinect can also be used to perform a body scan similar (albeit at a lower resolution) to that obtainable from dedicated devices, which is useful where the experiment provides for a pseudo-realistic representation of one’s body.
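As an illustration of how a skeleton stream such as the Kinect’s can yield simple body measurements, the sketch below derives shoulder width and a rough standing height from tracked joint positions. The joint names and dictionary layout are hypothetical; each SDK exposes its own joint enumeration and data types.

```python
import math

# joints: name -> (x, y, z) position in metres, as produced by a skeleton tracker;
# the joint names used below are illustrative, not any SDK's actual identifiers
Joints = dict[str, tuple[float, float, float]]


def shoulder_width(joints: Joints) -> float:
    """Euclidean distance between the two shoulder joints."""
    return math.dist(joints["shoulder_left"], joints["shoulder_right"])


def standing_height(joints: Joints) -> float:
    """Rough height: vertical span from the head joint to the lower foot."""
    head_y = joints["head"][1]
    foot_y = min(joints["foot_left"][1], joints["foot_right"][1])
    return head_y - foot_y
```

Measurements like these are what make a low-cost tracker sufficient when an experiment only needs a pseudo-realistic, roughly sized representation of the body rather than a full scan.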
Some tasks of the various experiments require additional hardware. Tasks that involve gaze analysis require an eye-tracking device, such as the FOVE VR HMD. As for systems that implement approaches related to the third-body ownership illusion, some works use input devices such as the Razer Hydra to track the movement pertaining to tactile stimulation in the 3D setting. One work, on the other hand, envisages a more complex system for tactile stimulation, using a self-built system based on vibration actuators connected to Arduino boards.
Among the various software (SW) tools used, it should be noted that MakeHuman is often used for the preparation of avatars. This is an open-source tool for creating 3D characters complete with rigging, which can easily be imported into various game engines. As for the game engines used for the preparation of the 3D environments (among the few articles that report this detail), the most common is undoubtedly Unity 3D. Even with the free version, this engine can build complete and functional systems on many platforms. Furthermore, compared to other game engines (e.g., Unreal), it has a considerably gentler learning curve.
5. Technical Details on the Hardware/Software Used
5.1. VR Headset
VR headsets allow users to experience the virtual environment immersively. Through them, users have a direct stereoscopic view of the environment, which provides a real sense of space and distance; thanks to the inertial sensors fitted to the headsets, head movements are translated into camera movements. Most of the works analyzed above, which exploit these features to provide a set of stimulations to the participants, use commercial devices: 10 of them are based on the Oculus Rift and five on the HTC Vive. Less common (or prototype) devices, such as the Thunder 400/C, have been used in some of the works. Finally, some works do not report any details on the VR hardware used.
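The head-to-camera coupling just described can be reduced to a minimal sketch. Real headset SDKs deliver the head pose as a quaternion; here it is simplified to yaw and pitch angles, with a +z “forward” convention of our choosing, and each rendered frame points the camera along the resulting direction:

```python
import math


def camera_forward(yaw: float, pitch: float) -> tuple[float, float, float]:
    """Unit view direction for the camera given head yaw/pitch in radians.

    With yaw = pitch = 0 the camera looks down +z; positive yaw turns it
    towards +x and positive pitch tilts it upwards (+y). This is a deliberate
    simplification of the quaternion-based poses real SDKs provide.
    """
    return (
        math.cos(pitch) * math.sin(yaw),
        math.sin(pitch),
        math.cos(pitch) * math.cos(yaw),
    )
```

A renderer would evaluate this (or its quaternion equivalent) once per frame for each eye, which is how the sensed head movement becomes camera movement.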
The Oculus Rift is a line of virtual reality headsets developed and manufactured by Oculus VR, a division of Facebook, Inc., first released to consumers in 2016. Although the Oculus Quest 2 is the only model currently in production, Oculus VR has developed and distributed several different models: the DK1 (development kit), DK2, Oculus Rift, Oculus Go, Oculus Rift S, Oculus Quest and Oculus Quest 2. The DK1 and DK2 are the only pre-production models; they were shipped to backers in 2013 and 2014, respectively, and were intended to provide developers with a platform on which to develop applications before the final release. The Rift DK1 used a 7-inch screen with a 1280 × 800 resolution (640 × 800 effective per eye). It included interchangeable lenses intended to allow simple dioptric correction. The DK2 (2014) featured several key improvements over the first development kit, such as a higher-resolution (960 × 1080 per eye) low-persistence OLED display, a higher refresh rate, positional tracking, a detachable cable, and the omission of the external control box. The first consumer version, the CV1, was released in 2016 and featured per-eye displays with a 1080 × 1200 resolution running at 90 Hz, 360-degree positional tracking, integrated audio, an increased positional tracking volume, and a better focus on ergonomics. The last Rift version was the Rift S: it has a 1280 × 1440 per-eye display running at 80 Hz and a slightly larger field of view than the CV1. The Rift S tracks the position of itself and of its controllers in 3D space using a system known as Oculus Insight, which combines five cameras on the HMD that track points in the environment, infrared LEDs on the controllers, accelerometer data from both the HMD and the controllers, and computer vision to predict the path that the HMD and controllers are most likely to take.
The Oculus Go, Oculus Quest and Oculus Quest 2 are the stand-alone versions of the Oculus Rift line; none of them is used in the Rift-based works analyzed above, so their specs will not be described.
The HTC Vive is a virtual reality headset produced by HTC Corporation and presented during the Mobile World Congress keynote in 2015. The HTC Vive was developed in collaboration with Valve Corporation to implement the SteamVR hardware and software ecosystem, and it implements “room-scale” virtual reality. In general, VR applications can allow users to walk freely around a predetermined area or, to avoid motion sickness, constrain them to a set of stationary positions. The controllers and headset use a positional tracking system known as “Lighthouse”, based on LED lights and infrared lasers in the system’s base stations. The headset is connected to a Windows PC through an adapter called the “link box”, which provides USB 3.0, HDMI and power connectors. In 2018, HTC presented an upgraded model known as the HTC Vive Pro, with higher-resolution displays at 1440 × 1600 per eye. The Vive Pro Eye, released in 2019, added built-in eye tracking. In 2021, HTC released the Vive Pro 2, which upgrades the screens to 2448 × 2448 per eye (marketed as 5K resolution), with a 120-degree field of view and a 120 Hz refresh rate. Other, more recent models are the stand-alone Vive Focus (now at version 3), which offers a per-eye resolution of 2448 × 2448 at 90 Hz with a 120-degree field of view, and the Vive Cosmos, which has a 2880 × 1700 display and, like the Oculus Quest, uses inside-out tracking.
5.2. Other Hardware/Software (HW/SW) Devices
The Razer Hydra is a motion and orientation controller developed by Sixense Entertainment in partnership with Razer USA. It uses a weak magnetic field to detect the absolute position and orientation of the controllers, with a precision of 1 mm and 1°, and offers six degrees of freedom. It was used in some of the works, together with the Oculus Rift DK2 (which has no controllers), to implement the third-body illusion.
The Kinect is a motion-detection device produced by Microsoft in three versions starting from 2010 (V1, V2, Azure). The devices contain infrared cameras and projectors that map depth through structured light (V1) or time-of-flight calculation (V2 and Azure). The Kinect can perform real-time gesture recognition, body skeleton detection, and face detection (V2 and Azure). It is connected to the computer via a standard USB 3 connection. The most recent version (Azure) requires a recent-generation NVIDIA GPU for body tracking.
MakeHuman is a free and open-source SW tool designed for the prototyping of photorealistic humanoids. MakeHuman allows a great variety of male and female characters to be designed, all starting from a single base mesh. Each character is obtained through linear interpolation: by defining four morphing targets (baby, teen, young, old), for example, MakeHuman can generate all the intermediate ageing stages automatically. Using this SW, it is possible to reproduce a considerable number of different characters.
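The linear interpolation that this character design relies on is the standard morph-target (blend-shape) technique. The sketch below illustrates it in general terms and is not MakeHuman’s actual implementation: each target stores per-vertex offsets from the base mesh, and a weighted sum of offsets produces the intermediate character.

```python
Vec3 = tuple[float, float, float]


def apply_morph_targets(
    base: list[Vec3],
    targets: dict[str, list[Vec3]],  # target name -> per-vertex offsets
    weights: dict[str, float],       # target name -> blend weight in [0, 1]
) -> list[Vec3]:
    """Linearly blend morph targets onto a base mesh, one vertex at a time."""
    out = []
    for i, (x, y, z) in enumerate(base):
        for name, w in weights.items():
            dx, dy, dz = targets[name][i]
            x, y, z = x + w * dx, y + w * dy, z + w * dz
        out.append((x, y, z))
    return out
```

With four ageing targets such as baby/teen/young/old, the intermediate ageing stages correspond to intermediate weight values, which is why a single base mesh suffices for a whole family of characters.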