Clinical Protocols

- Applied Technology for Neuro-Psychology Lab






An experimental Virtual Reality (VR)-based procedure to investigate impairments in spatial reference frame processing in eating disorders


This experimental Virtual Reality (VR)-based procedure is designed to investigate impairments in spatial reference frame processing. On the basis of the reference point used to encode and store information, it is possible to distinguish between the “egocentric reference frame” and the “allocentric reference frame” (Klatzky 1998; Paillard 1991). The egocentric reference frame codes and continuously updates information in relation to the individual (i.e., the body as the reference point for first-person experience), while the allocentric reference frame is responsible for the long-term storage of information (i.e., environmental features as reference points, with the body as an object similar to others in the physical world). As described in the well-known Boundary Vector Cells Model (Burgess et al. 2001; Byrne et al. 2007), spatial retrieval requires a continuous interaction between allocentric long-term memory and egocentric spatial updating driven by perceptual inputs; this interaction occurs via a coordinate transformation in the posterior parietal and retrosplenial cortices (Vann et al. 2009).

Driven by the Allocentric Lock Theory (Serino et al. 2014; Riva 2012; Riva 2014; Riva and Gaudio 2012; Riva et al. 2014), this experimental VR-based procedure may be used to investigate impairments in spatial reference frame processing by comparing the performance of ED patients and healthy controls on five standard spatial abilities, on allocentric retrieval (Task 1), and on its updating following perceptual inputs (Task 2).


Before starting the VR-based procedure, a comprehensive neuropsychological battery evaluating different spatial abilities is administered. At the start of the experimental session, after initial training with the VR technology (i.e., a simple navigation task), the experimental procedure is initiated, consisting of an encoding phase followed by a retrieval phase comprising two counterbalanced tasks, i.e., Task 1 and Task 2. This VR-based procedure for assessing impairments in spatial reference frame processing was developed using the software NeuroVirtual 3D, a recent extension of the software NeuroVR (Cipresso et al. 2014; Riva et al. 2011).

Spatial Evaluation

The Corsi Block Test (Corsi 1972) is used to measure short-term spatial memory (Corsi Span) and long-term spatial memory (Corsi Supraspan). In the Corsi Span, while seated in front of an array of wooden blocks scattered on a wooden base, participants are required to tap a sequence of blocks in the same order as the researcher, with the span length increasing on each trial. In the Corsi Supraspan, the researcher proposes the same sequence of nine blocks, which is repeated over several trials.
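The span-scoring logic of the Corsi Span can be sketched in Python as follows. This is a simplified sketch: real administrations typically present more than one sequence per length and stop after repeated failures, and the block indices and trial data below are hypothetical.

```python
def corsi_span(trials):
    """Return the Corsi span: the longest sequence length that the
    participant reproduced in the same order as the researcher.

    `trials` maps sequence length -> (target_sequence, response_sequence),
    each a list of tapped block indices.
    Simplifying assumption: span = longest length reproduced correctly.
    """
    span = 0
    for length in sorted(trials):
        target, response = trials[length]
        if response == target:
            span = length
    return span

# Hypothetical trial data: block indices tapped by researcher vs. participant
trials = {
    2: ([1, 4], [1, 4]),
    3: ([2, 5, 7], [2, 5, 7]),
    4: ([3, 1, 6, 8], [3, 1, 8, 6]),  # order error at length 4
}
span = corsi_span(trials)  # -> 3
```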

The Money Road Map (Money et al. 1965) is used to assess navigation abilities. Participants are given a city map on which a route with 32 left/right intersections is marked. Participants are invited to imagine themselves travelling along this route and must decide whether a right or left turn is needed at each intersection. There is no time limit, and the maximum score is 32 points.

To evaluate mental rotation abilities, the Manikin Test (Ratcliff 1979) is administered. Participants are given 32 sheets, each of which displays a "little man" holding a ball, seen from different perspectives. Participants are invited to identify in which hand the little man is holding the ball (right or left). No time limit is imposed, and the maximum score is 32 points.

The Judgment of Line Orientation (Benton et al. 1978) is used to assess visuo-spatial skills. Participants are given 30 sheets, each showing a pair of target lines positioned above a reference figure of 11 lines arranged in a semicircle and numbered from 1 to 11. Participants are asked to identify the angular positions of the target lines in relation to this reference figure. There is no time limit, and the maximum score is 30 points.

Experimental VR-based procedure

A virtual city was developed as the test environment. The city was built around a central square with a tower in the middle, which marks the starting point of the navigation. Participants are invited to find an object (i.e., a plant) hidden in the eastern side of the city. Landmarks (e.g., buildings, shops, and trees) were spread throughout the city. At the start of the experimental session, participants enter the virtual city and are invited to find the object (encoding phase). There is no time limit. For the retrieval phase, two different tasks were developed. In Task 1, participants are invited to retrieve the position of the plant they had discovered in the virtual city and to indicate it on a map – a full aerial view of the virtual city (see Figure 1).



Figure 1: In Task 1, participants are asked to indicate the position of the plant on a map – a full aerial view of the virtual city.

In Task 2, having entered the virtual city from a different starting point, participants are asked to indicate the position of the object, which is now absent (see Figure 2).


Figure 2: In Task 2, participants, having entered the virtual city from a different starting point, are asked to retrieve the position of the object.

While Task 1 (“allocentric retrieval”) requires and measures the ability to retrieve an allocentric viewpoint-independent representation (i.e., retrieval with spatial allocentric information independent of the point of view), Task 2 (“allocentric updating”) assesses the ability to update this stored long-term representation following perceptual inputs (i.e., retrieval without any visible spatial allocentric information independent of the point of view). Indeed, Task 2 (in which participants' viewpoint differs from the one they had in the encoding phase) forces participants to refer to their allocentric viewpoint-independent representation and to synchronize it with the egocentric representation driven by perceptual inputs in order to indicate the position of the object (Bosco et al. 2008).

In both tasks, the spatial accuracy of the answer is the dependent variable. The accuracy of spatial location is defined as the difference between the correct and the estimated positions of the object. First, the spatial coordinates (x and y) of the object location in the two tasks (i.e., Task 1 and Task 2) are corrected by dividing each measured distance by the corresponding total length of the map: in this way, all coordinates take values between (0, 0) and (1, 1), where (0, 0) is the bottom-left corner of the city, (1, 1) is the top-right corner, and (0.5, 0.5) is the center of the city. This normalization is needed to compare the two different tasks. Once corrected, two spatial coordinates can be compared by calculating their Euclidean distance using the formula Sqrt[(x2 - x1)^2 + (y2 - y1)^2], where (x1, y1) and (x2, y2) are the corrected coordinates. Because all distances calculated within this procedure are based on corrected coordinates, they are directly comparable with each other.
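The normalization and distance computation described above can be sketched in a few lines of Python. The map dimensions and coordinate values below are hypothetical, since the protocol does not specify them; only the normalization rule and the Euclidean distance formula come from the text.

```python
import math

def normalize(point, width, height):
    """Scale raw (x, y) map coordinates into the unit square, so that
    (0, 0) is the bottom-left and (1, 1) the top-right of the city."""
    x, y = point
    return (x / width, y / height)

def placement_error(correct, estimated):
    """Euclidean distance between two normalized positions:
    sqrt((x2 - x1)^2 + (y2 - y1)^2)."""
    (x1, y1), (x2, y2) = correct, estimated
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

# Hypothetical example: a 200 x 200 m virtual city map
correct = normalize((150.0, 100.0), 200.0, 200.0)    # -> (0.75, 0.5)
estimated = normalize((130.0, 120.0), 200.0, 200.0)  # -> (0.65, 0.6)
error = placement_error(correct, estimated)          # sqrt(0.02) ~ 0.141
```

Because both tasks are normalized to the same unit square before the distance is taken, errors from Task 1 and Task 2 land on the same scale and can be compared directly.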


Benton AL, Varney NR, Hamsher KS (1978) Visuospatial judgment Archives of Neurology 35:364-367

Bosco A, Picucci L, Caffo AO, Lancioni GE, Gyselinck V (2008) Assessing human reorientation ability inside virtual reality environments: the effects of retention interval and landmark characteristics Cognitive processing 9:299-309

Burgess N, Becker S, King JA, O'Keefe J (2001) Memory for events and their spatial context: models and experiments Philosophical Transactions of the Royal Society of London Series B: Biological Sciences 356:1493-1503

Byrne P, Becker S, Burgess N (2007) Remembering the past and imagining the future: a neural model of spatial memory and imagery Psychological review 114:340

Cipresso P, Serino S, Pallavicini F, Gaggioli A, Riva G (2014) NeuroVirtual 3D: A Multiplatform 3D Simulation System for Application in Psychology and Neuro-Rehabilitation. In: Virtual, Augmented Reality and Serious Games for Healthcare 1. Springer, pp 275-286

Corsi PM (1972) Human memory and the medial temporal region of the brain (Unpublished Thesis). McGill University, Montreal

Money I, Alexander D, Walker HT (1965) Manual: A standardized road-map test of direction sense. Johns Hopkins Press, Baltimore

Ratcliff G (1979) Spatial thought, mental rotation and the right cerebral hemisphere Neuropsychologia 17:49-54

Riva G et al. (2011) NeuroVR 2 - A free virtual reality platform for the assessment and treatment in behavioral health care. In: MMVR, 2011. pp 493-495

Serino S, Morganti F, Di Stefano F, Riva G (2014) Detecting early egocentric and allocentric impairments in Alzheimer's disease: an experimental study with virtual reality Frontiers in Aging Neuroscience

Klatzky RL (1998) Allocentric and egocentric spatial representations: definitions, distinctions, and interconnections In: Freksa C, Habel C (eds) Spatial Cognition. An Interdisciplinary Approach to Representing and Processing Spatial Knowledge. Springer, pp 1-17

Paillard J (1991) Brain and space. Oxford Science Publications, Oxford, UK

Riva G (2012) Neuroscience and eating disorders: The allocentric lock hypothesis Medical hypotheses 78:254-257 doi:10.1016/j.mehy.2011.10.039

Riva G (2014) Out of my real body: cognitive neuroscience meets eating disorders Frontiers in human neuroscience 8 doi:10.3389/fnhum.2014.00236

Riva G, Gaudio S (2012) Allocentric lock in anorexia nervosa: New evidences from neuroimaging studies Medical hypotheses 79:113-117 doi:10.1016/j.mehy.2012.03.036.

Riva G, Gaudio S, Dakanalis A (2014) I'm in a virtual body: a locked allocentric memory may impair the experience of the body in both obesity and anorexia nervosa Eating and weight disorders: EWD 19:133-134 doi:10.1007/s40519-013-0066-3

Vann SD, Aggleton JP, Maguire EA (2009) What does the retrosplenial cortex do? Nature Reviews Neuroscience 10:792-802 doi:10.1038/nrn2733