Spaceline: A Montage Concept for Cinematic VR

Introduction

Coming into contact with Virtual Reality (VR) for the first time, most people are fascinated by the new experience. The feeling of being in a different place, right in the middle of the action, far from reality, astonishes and offers immersion in another world. This world can be a computer-generated 3D environment, or an omnidirectional movie recorded by cameras. Watching omnidirectional movies via head-mounted displays puts the viewer inside the scene, allowing an immersive movie experience. However, because the field of view is freely chosen, it is possible to miss details which are important for the story; at the same time, the additional spatial component gives filmmakers new opportunities to construct stories. To support both creators and viewers, this paper discusses the concept of the spaceline1 (named in analogy to the traditional timeline), which connects movie sequences via interactive regions. It explains the terms of the concept and introduces various methods that make it easier for an audience to follow a story, at their own pace and with their own focus. In the spaceline concept, a cut is no longer based only on elapsed time, but also on the direction in which the viewer is looking. While on a timeline films are assembled in chronological sequence, the spaceline describes spatial relationships. This requires specifying regions and scene transitions, as well as transition techniques.

Cinematic VR is omnidirectional video content that is watched on VR devices such as head-mounted displays. One often hears the term “360° movie,” but this is not quite accurate – not only the 360° of looking around are relevant, but also looking up and down. The term “omnidirectional movie” is more suitable, and is often used in academic literature; yet with Cinematic VR it is essential that the viewer is in a VR environment, not watching an omnidirectional movie on a desktop with a mouse for changing viewing direction. As in other fields of VR, it is also important that methods do not destroy presence and do not cause simulator sickness. “Presence” describes the feeling of being in the virtual world, and can be measured by recognized questionnaires.2 When the experience in the virtual world does not exactly match what is perceived at the same time in the real world, simulator sickness can occur.3 This should be avoided in Cinematic VR experiences, and it is important to know whether cuts or other techniques cause this sickness.

Another important aspect which needs to be taken into consideration is montage, which is a stylistic tool and has a major impact on a movie’s construction. Using cuts makes it possible to switch between places and characters, to compress temporal sequences, and to change camera perspectives and settings. Since in Cinematic VR the viewer looks around the scene, the use of cuts requires new approaches. On the one hand, an important part of the montage – the selection of the visible image section – is shifted from the filmmaker to the viewer; on the other hand, the additional spatial component opens up new possibilities: the spectator’s viewing direction can be used to implement new film constructions by making a cut dependent on the viewer's line of sight.

Cinematic language developed over decades cannot simply be transferred to this new medium. The viewer can freely choose the direction of view and thus the visible section of the picture, so it is not always possible to show the viewer what is important for the story. Traditional methods for directing attention – such as close-ups or zooms – are not easy to use; others – such as movement and color – need to be determined and adjusted.

As far back as the 1920s, Russian film director Sergei Eisenstein longed for non-linear movies and books, in which a story could unfold in all directions: “I want to create a spatial form that makes it possible to step from each contribution directly to another and to materialize their relationship; such a synchronic manner of circulation and mutual penetration of essays could be carried out only in the form – of a sphere.”4 Presenting omnidirectional movies via head-mounted displays, we take a step closer to actualizing this dream of spherical dramaturgy, as the additional spatial component facilitates interactivity in a natural way.

A short history of film cuts

Comparing the historical development of traditional movies with today’s developments in omnidirectional movies, many parallels can be found. At the beginning of movie history, the attraction of moving images was enough to fascinate an audience. Films such as Louis Le Prince’s 1888 Roundhay Garden Scene were just a few seconds long and did not follow a story. Film thus began with living photographs, famous examples of which are Louis Lumière’s Workers Leaving the Lumière Factory in Lyon and the Lumière Brothers’ Arrival of a Train at La Ciotat (both 1895). A short time later, the first stories were being narrated via the cinematic medium, for example in Georges Méliès’ 1902 A Trip to the Moon. In those years, however, the camera always recorded from the same angle and with the same type of shot – just as theater spectators were accustomed to.

It was quickly recognized that separate film segments could be connected to one another, with new effects thereby achieved,5 and the first systematic film experiments with montage were soon carried out. Film theorist Lev Kuleshov investigated the use of editing to emotionally influence an audience, and combined shots in various ways. In a famous Kuleshov experiment, different emotion-triggering shots were mounted between shots of a motionless actor, in order to suggest different situations to the audience.6 Other experiments used montage to connect places that were far apart in reality, but in the film looked like a common location.7 This effect is still commonly used today, and is called “creative geography” in film theory.8 For such effects to work, they must be embedded in a story9 – context is what enables the viewer to interpret the scenes in the desired way.

Step by step, the language of film was developed, and changes in space and time were implemented. Filmmakers and viewers learned how to process these, and the use of cuts became one of the most important stylistic devices. Cuts determine the rhythm of a film, guide the viewer, and are an essential part of the narration. Cuts can bridge discontinuities in space, time, and action,10 and are needed to connect various shot types, such as long shots, medium shots, and close-ups.

As cinematic language continued to develop, filmmakers and viewers learned to use new methods and adapt their viewing habits. The advancement of new technologies also made new elements of film language possible. Thereby, a narrative art developed, which is also the subject of scientific investigation. Today, filmmakers and researchers experiment with interaction for movies as well as several other types of display, and a new kind of movie emerges.

Today, Cinematic VR once again attracts audiences through the novelty of the medium itself. Yet even though omnidirectional movies are likewise recorded with cameras, the narrative methods of traditional film production cannot simply be transferred. The transfer of some activities from the filmmaker to the viewer, along with the new possibilities of interaction, both requires and enables new approaches.

In Cinematic VR, viewers watch a spherical film via head-mounted displays, and feel transported into the scenery. Nevertheless, cuts can still be used,11 and usually serve to transport the viewer to another location. The use of conventional film editing methods and tools creates two problems. Firstly, it is possible that important aspects of the story are missed by viewers because they are not in their field of view; secondly, the scene changes after a certain amount of time, even if the viewer wants to look around further. The interests and needs of viewers vary, and these variations should, ideally, be taken into account by the viewing experience. Furthermore, the additional spatial component gives filmmakers new opportunities to construct non-linear interactive stories. Cuts do not necessarily have to depend on elapsed time, but can also be based on the viewer's gaze.

Comparison of Cinematic VR and traditional movies

Table 1: Differences between traditional movies and Cinematic VR.

Even if Cinematic VR applications are similar in some respects to traditional movies, there are several differences that require new approaches and a separate cinematic language. Comparing Cinematic VR with traditional movies, it is notable that some activities are transferred from the filmmaker to the viewer. In Cinematic VR, the filmmaker no longer chooses the framing; the viewer determines where and what exactly is seen. A pan is initiated by the viewer when turning their head. Table 1 shows some differences between traditional movies and Cinematic VR.

The viewer’s new freedoms mean, at the same time, that the filmmaker cedes control. By choosing the framing, panning, and zooming in traditional filmmaking, the filmmaker can show details that are important to the story. In Cinematic VR, the viewer is less guided, which can cause a fear of missing something and in this way diminish the viewing experience. Many methods for guiding viewers’ attention within a movie’s narrative – such as moving objects, people, lighting, and colors – are also relevant for Cinematic VR. The additional spatial component in Cinematic VR offers further possibilities for guiding.

For montage, special knowledge about the effects of shots taken at the same location (e.g. in the same room) is required. Such shots are called “co-located,” and the connection between them a “co-located cut.” In contrast, shots in different locations are “dis-located,” and the connection between them a “dis-located cut.” Currently, most cuts in Cinematic VR connect dis-located shots. Because viewers change the visible image section themselves, often no additional cuts are required within a scene.12 Cuts and transitions in several professional Cinematic VR movies have been analyzed. In the movies of the data set by Knorr et al.,13 all cuts are dis-located, and following a cut a new scene is always presented to the audience, so that the background changes completely.

However, there are several reasons for segmentation by co-located cuts: guiding attention to a region of interest, different shot types, and the use of different camera perspectives. In addition, cuts are relevant to the style of a movie, and they influence the viewing experience. It is easier to link two dis-located shots, because the viewer sees a new environment and there are no orientational conflicts. However, if two co-located shots are consecutive, the viewer may be confused when the direction or position changes. It is important to know which camera positions and directions can be used for co-located links. This research is the first step toward examining the problem.

Montage on a timeline

Cuts have different functions in a movie. They shorten the time of a recorded sequence of actions, make it possible to change location, and create suspense. The length of a shot affects the pace and rhythm of a movie.14 Walter Murch described six criteria for a good cut, and indicated by percentage how important each of them is: (1) emotion – 51%, (2) story – 23%, (3) rhythm – 10%, (4) eye-trace – 7%, (5) two-dimensional area of the canvas – 5%, (6) three-dimensional action space – 4%.15 The percentages are not intended to be exact values, but indicate that emotion (1) in a story is more important than all other criteria together. If an editor cannot meet all six criteria, the last should be abandoned first. Criteria (5) and (6) cannot be separated in Cinematic VR, because the entire three-dimensional space is recorded and not just a section of the image. In Cinematic VR, criterion (5) corresponds to the image section selected by the viewer and cannot be determined by the filmmaker alone. The action space described in criterion (6) is the space surrounding the camera in Cinematic VR, and is usually completely recorded. Since the viewer is virtually present in this space, criteria (5) and (6) are possibly more important than in traditional movies. The importance of the eye-trace (4) depends on the size of the screen and the speed of the movie.16 Rhythm (3) in Cinematic VR cannot be determined by the filmmaker alone – it is also influenced by the viewer, through fast or slow head movements.

In traditional filmmaking, the timeline is used to arrange footage in the right order and duration: after a certain time, the next shot begins. In Cinematic VR there are several related challenges. The omnidirectional image provides a lot of content – the viewer cannot see all of it at once and may therefore fear missing something. Story-relevant details can lie outside the current field of view – and in the time the viewer takes to inspect one area, the movie continues in others. It can therefore happen that the next shot starts before the spectator has seen everything important to the story. In some cases, the viewer should be able to decide when the next shot is seen.

Many approaches have been tried in recent years to meet these challenges. The most obvious is to direct the viewer by cinematic elements such as movement and lighting. Such methods are called “diegetic” – in contrast to non-diegetic methods, which are not part of the movie, e.g. additional signs such as arrows.17 Another approach is the “match on attention” cut.18 Here, an out-point is defined as the region where spectators will most likely be looking at the end of a shot, and an in-point as where their gaze is likely to rest at the beginning of the next. Movement, lighting, etc. are taken into account to predict where those looks will most likely fall, and the pictures are aligned so that the out- and in-points lie in the same direction. The advantage of this method is that such films can be cut using traditional film editing programs; the disadvantage is that one cannot be sure whether the viewer is really looking into the region the filmmaker intends.

One step further is viewpoint-oriented cutting,19 where the next scene is aligned so that the spectator looks in the direction that is important, regardless of where they were looking before. This method cannot be realized with traditional editing programs; it requires a dedicated implementation, e.g. in Unity3D. But, once again, the next shot starts after a pre-defined time, no matter what the viewer has seen.

The concept of the spaceline

Figure 1: Example of a linear spaceline. If the viewer has seen a pre-defined region, the movie continues.

All the approaches described above are based on linear film structures. However, there is another component in Cinematic VR besides time: space – the space in which the viewer looks around. It is natural, therefore, to let cuts depend not only on elapsed time, but also on the regions the spectator has seen. For this reason, I have worked with Heinrich Hußmann to introduce the concept of the spaceline, in which film montage operates in this space. The example in Figure 1 shows what a linear spaceline looks like: a movie from the perspective of a wheelchair user. The next shot starts only when the viewer has discovered alternative methods for wheelchair users, for example a door opener or a ramp.

Figure 2: Examples of non-linear spacelines. Depending on the region the viewer is looking at, the movie continues with another shot.

Instead of time-dependent cuts, the filmmaker defines regions. If the viewer looks into such a region, the movie continues. The spaceline concept also allows for non-linear structures. One can define multiple regions, and the region the viewer is looking at will determine which shot will be next. Figure 2 shows another example: the movie starts in a Celtic village and, depending on what the viewer is looking at, the story continues in one of the huts.
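
The region test at the heart of this mechanism can be sketched in a few lines. The following Python fragment is an illustrative sketch only (not part of any published spaceline implementation): it treats a region as a circular patch on the viewing sphere and checks whether the current gaze direction falls inside it.

```python
import math

def gaze_hits_region(gaze_yaw, gaze_pitch, region_yaw, region_pitch, radius_deg):
    """True if the gaze direction lies inside a circular region on the
    viewing sphere. All angles are in degrees."""
    def to_vec(yaw, pitch):
        y, p = math.radians(yaw), math.radians(pitch)
        return (math.cos(p) * math.cos(y), math.cos(p) * math.sin(y), math.sin(p))
    a = to_vec(gaze_yaw, gaze_pitch)
    b = to_vec(region_yaw, region_pitch)
    # angle between the two unit vectors, clamped for numeric safety
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.degrees(math.acos(dot)) <= radius_deg
```

A player would evaluate this test every frame; as soon as it returns True for one of the defined regions, the shot linked to that region is started.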

Figure 3: Examples of story structures: above – “string of pearls” (linear); right – “hub and spokes”; below – a combination of several structures.

These story structures have long been used for non-linear storytelling, for example in game production. Figure 3 shows a few of the very many possible examples.20 The spaceline concept is therefore well suited to realizing interactive and non-linear stories with an intuitive interface: the continuation of the story depends on the viewer’s viewing direction. Additionally, it can be used for guiding attention: by selecting the viewing direction after the cut, the viewer can be made aware of something in the new shot.

Figure 4: Camera distance depends on viewing direction. Camera A is closer to object 2 than camera B, whereas for object 1 they have a similar distance.

Up until now, a cut in Cinematic VR has usually been related to a change of location; however, co-located shots, where the camera changes position within one location, should also be considered. Co-located cuts are also relevant for realizing specific elements of film language – the camera can move closer to or further from objects or people, and in this way different shot types can be used, e.g. close-ups or full shots. However, setting various camera positions in one location is much more complicated than in traditional filmmaking. Such methods are only possible if the viewing direction is taken into account – for example, in Figure 4, camera A is closer to object 2 than camera B, whereas for object 1 they are at a similar distance.
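
The geometry of Figure 4 can be made concrete with a small numeric sketch (the coordinates below are invented for illustration): whether a cut to the other camera moves the viewer closer to or further from what they see depends entirely on which object they are looking at.

```python
import math

def dist(p, q):
    """Euclidean distance between two points on the floor plan."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Illustrative floor-plan positions (meters), after Figure 4.
cam_a, cam_b = (0.0, 0.0), (4.0, 0.0)
obj_1, obj_2 = (2.0, 5.0), (0.5, 1.0)

# For object 1 both cameras are equally far away; for object 2,
# camera A is much closer. A cut from B to A therefore reads as a
# move toward a close-up only if the viewer is looking at object 2.
print(dist(cam_a, obj_1), dist(cam_b, obj_1))  # similar distances
print(dist(cam_a, obj_2), dist(cam_b, obj_2))  # camera A much closer
```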

Terminology of the spaceline concept

Two fundamental terms of film montage are the shot and the scene. While a shot is a segment of film between two cuts, a scene represents a unit of a movie at the same location and continuous in time, which in traditional film often consists of several shots. The number of cuts is reduced in Cinematic VR, since viewers themselves select different parts of the scenery for viewing; often a scene has no further cuts. In a traditional film, the image of the camera and that of the viewer are the same.21

Figure 5: Terms of the spaceline concept.

In Cinematic VR there are two perspectives: the all-around view of the camera and the smaller, self-selected field of view of the spectator. The term “shot” is therefore not directly transferable. Two terms are instead required for the film segment between two cuts – it is necessary to distinguish between a space and a shot. A space is an omnidirectional movie segment that has been recorded without interruption. A shot is a movie segment chosen by the viewer between the cuts, within this space. It is not omnidirectional, but rather corresponds to the viewer’s field of view in a space. From these spaces, the filmmaker can design a spaceline construct following the well-known story structures mentioned above. Through this story construct, the viewer can choose their own path – the spaceline – a line through the construct, consisting of several shots. In contrast to the timeline-based movie, which is determined by the filmmaker alone, the spaceline is determined by the filmmaker and the viewer together.

When one defines a scene change in a timeline, it is necessary to specify the end-time of the last shot, the start-time of the next shot, and the transition type, e.g. fade to black, blur, or hard cut. For a spaceline, regions are required instead of timecodes: the out-region in the last shot, the in-region in the next shot, and the transition type.

The out-region is the area whose activation ends a shot.22 Activating it switches to the next shot, in which the viewer first sees the in-region, from where the new scenery can then be explored. The spaceline structure links out-regions with in-regions. In this way, shot changes become interactive, triggered by the viewer. For non-linear stories, more than one out-region can be defined in a space.
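
The linking of out-regions to in-regions amounts to a small graph data structure. The following Python sketch is purely illustrative (names, file paths, and angles are invented); it models the “hub and spokes” fragment of Figure 2, where each hut door in the village is an out-region leading to its own space.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class Region:
    yaw: float     # center of the region on the viewing sphere, degrees
    pitch: float
    radius: float  # angular size, degrees

@dataclass
class Space:
    """An omnidirectional movie segment recorded without interruption."""
    video: str
    # out-region name -> (out-region, name of the next space, its in-region)
    links: Dict[str, Tuple[Region, str, Region]] = field(default_factory=dict)

def follow(space: Space, activated: str) -> Tuple[str, Region]:
    """Follow one spaceline edge: the activated out-region determines
    both the next space and the in-region shown first after the cut."""
    _, next_name, in_region = space.links[activated]
    return next_name, in_region

# Hub and spokes: two doors in the village lead to two huts.
village = Space("village.mp4")
village.links["door_a"] = (Region(40.0, 0.0, 12.0), "hut_a", Region(0.0, 0.0, 20.0))
village.links["door_b"] = (Region(-75.0, 0.0, 12.0), "hut_b", Region(0.0, 0.0, 20.0))
```

The viewer’s path through such a graph – the sequence of edges actually followed – is one concrete spaceline.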

Figure 6: Regions of the spaceline.

The filmmaker has several ways of influencing how the viewer perceives a space change. The out-region can be invisible, so that the viewer does not notice specific regions, or it can be marked by a frame or something similar. Additionally, the method of transition has to be determined, e.g. the next scene starts after the viewer has looked towards the out-region for one or two seconds.
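
A dwell-time trigger of this kind can be sketched as a tiny state machine updated once per rendered frame. This is an illustrative sketch under assumed names and a 1.5-second default threshold, not a published implementation:

```python
class DwellTrigger:
    """Activates a region only after the gaze has rested on it for a
    minimum dwell time, so that a passing glance does not cause a cut."""

    def __init__(self, dwell_seconds: float = 1.5):
        self.dwell = dwell_seconds
        self.elapsed = 0.0

    def update(self, gaze_inside: bool, dt: float) -> bool:
        """Call every frame with the frame time dt (seconds); returns
        True once the accumulated dwell time has been reached."""
        self.elapsed = self.elapsed + dt if gaze_inside else 0.0
        return self.elapsed >= self.dwell
```

Looking away resets the timer, which is what keeps unmarked, invisible out-regions from firing accidentally.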

In addition, act-regions are introduced, offering supplementary interaction options, such as enlarging details or retrieving additional visual or aural content. One important characteristic of a region is its size: a large region is discovered faster than a small one. After the cut, the filmmaker can determine the viewing direction of the viewer. One possibility is to show a region of interest (RoI) and to define an in-region; however, there are further options. Figure 7 illustrates an example of positions before (1) and after (2) the cut. The directions are displayed by arrows.

The to-RoI method is illustrated by arrow A. After the cut, the viewer sees the region of interest (the fish on the table), independent of the previous viewing direction. Using the keep-Dir method, the viewing direction does not change between the cuts – only the position changes (arrow B). For the keep-focus method (arrow C), the camera changes position to a place determined by the filmmaker, but the viewer’s focus remains unaffected – the object that was in the field of view before the cut can also be seen after it.
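
In terms of implementation, the three methods differ only in how the next shot is rotated at the moment of the cut. A yaw-only Python sketch (pitch would be handled analogously; the function and parameter names are illustrative, not from any published system):

```python
def shot_rotation(method: str, viewer_yaw: float,
                  roi_yaw: float = 0.0, focus_yaw: float = 0.0) -> float:
    """Yaw offset in degrees to apply to the next omnidirectional shot.

    viewer_yaw is the viewer's head direction at the cut; roi_yaw and
    focus_yaw are directions given in the next shot's own coordinates.
    """
    if method == "to-RoI":       # the RoI lands in front of the viewer
        return (viewer_yaw - roi_yaw) % 360.0
    if method == "keep-Dir":     # viewing direction carries over unchanged
        return 0.0
    if method == "keep-focus":   # the previously focused object stays in view
        return (viewer_yaw - focus_yaw) % 360.0
    raise ValueError(f"unknown method: {method}")
```

For to-RoI, adding the returned offset to the RoI’s own yaw yields exactly the viewer’s current head direction, which is what puts the fish on the table in front of them regardless of where they were looking.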

Figure 7: Different viewing directions after the cut.

In the course of experiments I have participated in, I have learned that the viewer is not confused by a different viewing direction after the cut if there is a story-relevant detail in the field of view which attracts attention. Keep-focus cuts should not be combined with time-based cuts, since the cut could then fall in the middle of an eye movement; the viewer has to be focusing on something before this type of cut.

In some cases, it can be necessary to guide the viewer to a special area – a region of interest (RoI). In research literature about VR and AR, one finds several methods which can be investigated in relation to Cinematic VR. Based on this, I worked with Hußmann to introduce indicators – Figure 8 shows some examples. Depending on the direction of view, an RoI can be inside the field of view or outside of it. For drawing attention to an object in the current field of view, methods can be applied that are already used in traditional movies, such as moving objects, colors, or lighting.23 These are called “on-screen indicators” (Figure 8, left). To make the regions recognizable they can, e.g., be highlighted or framed.

Figure 8: Examples of on-screen and off-screen indicators: green frame (above left); blinking sign (above right); visible cursor (below left); green ray (below right).

Visual methods, such as saliency modulation, only work when the region is in the field of view. In Cinematic VR, however, it can happen that the viewer must first change the viewing direction in order to have the RoI in the field of view. In this case, off-screen methods are required (Figure 8, right). Which of the two methods is used does not depend on the creator but on the direction of view. The creator has to decide whether indicators are needed. Indicators can inform about the target direction, the distance, the relevance, and the type of regions, e.g., by using different colors or sizes. Furthermore, unmarked regions are feasible, e.g. where the out-region is not noticeable to the viewer, but when looked at for a certain time, the next shot starts. It is important that their visualizations do not disturb the viewing experience.
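
Choosing between on-screen and off-screen indicators at runtime is a simple field-of-view test. The following yaw-only Python sketch is illustrative (a real system would also check pitch, and the 90° field of view is an assumed value):

```python
def signed_angle(from_yaw: float, to_yaw: float) -> float:
    """Shortest signed yaw difference in degrees, in (-180, 180]."""
    d = (to_yaw - from_yaw) % 360.0
    return d - 360.0 if d > 180.0 else d

def pick_indicator(gaze_yaw: float, roi_yaw: float, fov: float = 90.0) -> str:
    """On-screen methods (highlighting, framing) apply when the RoI lies
    inside the field of view; otherwise an off-screen indicator points
    the shorter way around the sphere."""
    d = signed_angle(gaze_yaw, roi_yaw)
    if abs(d) <= fov / 2:
        return "on-screen"
    return "off-screen-right" if d > 0 else "off-screen-left"
```

The signed angle also tells an off-screen indicator how far away the RoI is, which could drive its size or color.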

Crossing the line / 180° rule

Figure 9: Investigating the Crossing the line problem.

Another topic which should be addressed is the 180° rule. In traditional filmmaking, a violation of this rule can lead to viewer disorientation: what was left before the cut is right after it, and vice versa. To avoid this, the rule prescribes that the camera does not pass over the action line. Even if the rule can be broken, every filmmaker needs to understand it. The question arises of whether the same or a similar rule exists for Cinematic VR – perhaps other types of consecutive shots can confuse the viewer? In an experiment, several conditions were compared, and it was found that if a talking person was on the viewer’s right side before the cut and on the left side after it, the situation caused disorientation – similar to the 180° problem in traditional filmmaking. To avoid such disorientation in Cinematic VR, the camera must not cross the line (Figure 9, left). Whether disorientation happens depends on the viewing direction of the viewer. In several experiments, we showed that co-located cuts are possible if the camera is aligned to a region of interest after the cut (Figure 9, right). In such cases, there is continuity in the story, and the discontinuity in space does not attract the viewer’s attention; the viewer does not have to look around to find the RoI (e.g. a speaking person). This result confirms Murch’s rule24 that the story is more important than the space. The question subsequently arises of which other camera alignments viewers accept and which confuse them.

Conclusion

In this article, as well as in my previous collaboration with Heinrich Hußmann,25 I have described the concept of the spaceline for cinematic virtual reality, both in analogy and in addition to the traditional timeline, and applied film terms such as shot and sequence to this new context. I have introduced terms such as spaces, spaceline, in-, out-, and act-regions, and presented on-screen and off-screen indicators, as well as relating the spaceline concept to other areas, such as traditional filmmaking, interaction concepts, and story structures. My aim has been to encourage the use of dynamic storylines, where cuts depend on interactive regions selected by audience members.

The spaceline is a feasible and valuable concept for filmmakers and viewers of interactive Cinematic VR when supported by helpful indicators. This paper presents the first indicator designs, describing their potential for guiding the viewer. I have also addressed the Cinematic VR-specific challenge of balancing discovery and distraction with off-screen indicators. The present findings seem to be relevant not only for Cinematic VR, but also for VR and AR applications, motivating further research.

1 The concept was previously discussed in: Sylvia Rothe and Heinrich Hußmann, “Spaceline: A Concept for Interaction in Cinematic Virtual Reality” (2019), in: Interactive Storytelling: 12th International Conference on Interactive Digital Storytelling, ICIDS 2019, Little Cottonwood Canyon, UT, USA, November 19–22, 2019, Proceedings, 115–119.

2 Thomas Schubert, Frank Friedmann, and Holger Regenbrecht, “Igroup Presence Questionnaire (IPQ)” (2002), http://www.igroup.org/pq/ipq/index.php (accessed June 2, 2022); Mel Slater, “A Note on Presence Terminology” (2003), http://www0.cs.ucl.ac.uk/research/vr/Projects/Presencia/ConsortiumPublications/ucl_cs_papers/presence-terminology.htm (accessed June 2, 2022).

3 Simon Davis, Keith Nesbitt, and Eugene Nalivaiko, “A Systematic Review of Cybersickness” (2014), in: Proceedings of the 2014 Conference on Interactive Entertainment – IE2014 (New York, NY: ACM, 2014) 1–9.

4 Sergei Eisenstein, “Tagebuch 5. August 1929,” https://www.fondation-langlois.org/html/e/page.php?NumPage=749 (accessed June 2, 2022).

5 David Bordwell and Kristin Thompson, Film Art: An Introduction (New York: McGraw-Hill, 2013).

6 Hans Beller, Handbuch Der Filmmontage. Praxis Und Prinzipien Des Filmschnitts, ed. Hans Beller (Munich: TR-Verlagsunion, 2002).

7 Ibid.

8 Caroline Amann, “Kreative Geographie – Lexikon Der Filmbegriffe” (2012), https://filmlexikon.uni-kiel.de/index.php?action=lexikon&tag=det&id=5685 (accessed June 2, 2022).

9 James E. Cutting, “Perceiving Scenes in Film and in the World,” in: Moving Image Theory: Ecological Considerations, eds. Joseph D. Anderson and Barbara Fisher Anderson (Carbondale: Southern Illinois University Press, 2007), 9–27.

10 T. J. Smith and J. Y. Martin-Portugues Santacreu, “Match-Action: The Role of Motion and Audio in Creating Global Change Blindness in Film,” Media Psychology vol. 20, no. 2 (2017), 317–348; T. J. Smith and J. M. Henderson, “Edit Blindness: The Relationship between Attention and Global Change Blindness in Dynamic Scenes,” Journal of Eye Movement Research vol. 2, no. 2 (2008), 1–17.

11 Tina Kjær, Christoffer B. Lillelund, Mie Moth-Poulsen, Niels C. Nilsson, Rolf Nordahl, and Stefania Serafin, “Can You Cut It? An Exploration of the Effects of Editing in Cinematic Virtual Reality” (2017), in: Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology VRST ’17 (New York, NY: ACM, 2017), 1–4; Kasra Rahimi Moghadam and Eric D. Ragan, “Towards Understanding Scene Transition Techniques in Immersive 360 Movies and Cinematic Experiences” (2017), in: 2017 IEEE Virtual Reality Conference IEEEVR ’17 (IEEE), 375–376.

12 Colm O. Fearghail, Cagri Ozcinar, Sebastian Knorr, and Aljosa Smolic, “Director’s Cut – Analysis of VR Film Cuts for Interactive Storytelling” (2018), in: 2018 International Conference on 3D Immersion – IC3D (IEEE), 1–8.

13 Sebastian Knorr, Cagri Ozcinar, Colm O Fearghail, and Aljosa Smolic, “Director’s Cut – A Combined Dataset for Visual Attention Analysis in Cinematic VR Content” (2018), in: Proceedings of the 15th ACM SIGGRAPH European Conference on Visual Media Production CVMP ’18 (New York, NY: ACM, 2018), 1–10.

14 Bordwell and Thompson, Film Art.

15 Walter Murch, In the Blink of an Eye: A Perspective on Film Editing. 2nd ed. (West Hollywood: Silman-James Press. 1992).

16 Piotr Toczyński, “Editing with an Eye-Trace in Mind: Is the Rule of Six Incorrect?” (2018), https://nofilmschool.com/2018/08/editing-eye-trace-mind-rule-six-incorrect (accessed June 2, 2022).

17 Sylvia Rothe and Heinrich Hußmann, “Guiding the Viewer in Cinematic Virtual Reality by Diegetic Cues” (2018), in: International Conference on Augmented Reality, Virtual Reality and Computer Graphics (Edinburgh: Springer, Cham, 2018), 101–117; Sylvia Rothe, Heinrich Hußmann, and Mathias Allary, “Diegetic Cues for Guiding the Viewer in Cinematic Virtual Reality” (2017), in: Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology – VRST ’17 (New York, NY: ACM, 2017), 1–2.

18 Jessica Brillhart, “In the Blink of a Mind – Attention,” (2016), https://medium.com/the-language-of-vr/in-the-blink-of-a-mind-attention-1fdff60fa045 (accessed June 2, 2022).

19 Amy Pavel, Björn Hartmann, and Maneesh Agrawala, “Shot Orientation Controls for Interactive Cinematography with 360 Video” (2017), in: Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology – UIST ’17 (New York, NY: ACM, 2017), 289–297.

20 Tynan Sylvester, Designing Games: A Guide to Engineering Experiences (Sebastopol: O’Reilly Media, Inc., 2013), https://www.oreilly.com/library/view/designing-games/9781449338015/ch04.html (accessed June 2, 2022).

21 Rothe and Hußmann, “Spaceline.”

22 Ibid.

23 Daniel Arijon, Grammar of the Film Language (West Hollywood: Silman-James Press, 1991); Bordwell and Thompson, Film Art.

24 Murch, In the Blink of an Eye.

25 Rothe and Hußmann, “Spaceline.”

Amann, Caroline. “Kreative Geographie – Lexikon Der Filmbegriffe” (2012). https://filmlexikon.uni-kiel.de/index.php?action=lexikon&tag=det&id=5685.

Arijon, Daniel. Grammar of the Film Language. West Hollywood: Silman-James Press, 1991.

Beller, Hans. Handbuch Der Filmmontage. Praxis Und Prinzipien Des Filmschnitts. Edited by Hans Beller. Munich: TR-Verlagsunion, 2002.

Bordwell, David and Kristin Thompson. Film Art: An Introduction. New York: McGraw-Hill, 2013.

Brillhart, Jessica. “In the Blink of a Mind – Attention” (2016). https://medium.com/the-language-of-vr/in-the-blink-of-a-mind-attention-1fdff60fa045.

Cutting, James E. “Perceiving Scenes in Film and in the World.” In Moving Image Theory: Ecological Considerations. Edited by Joseph D. Anderson and Barbara Fisher Anderson. Carbondale: Southern Illinois University Press, 2007. https://muse.jhu.edu/book/24943.

Davis, Simon, Keith Nesbitt, and Eugene Nalivaiko. “A Systematic Review of Cybersickness” (2014). In Proceedings of the 2014 Conference on Interactive Entertainment – IE2014. New York, NY: ACM, 2014. https://doi.org/10.1145/2677758.2677780.

Eisenstein, Sergei. “Tagebuch 5. August 1929.” https://www.fondation-langlois.org/html/e/page.php?NumPage=749.

Fearghail, Colm O., Cagri Ozcinar, Sebastian Knorr, and Aljosa Smolic. “Director’s Cut – Analysis of VR Film Cuts for Interactive Storytelling” (2018). In 2018 International Conference on 3D Immersion – IC3D. IEEE. https://doi.org/10.1109/IC3D.2018.8657901.

Kjær, Tina, Christoffer B. Lillelund, Mie Moth-Poulsen, Niels C. Nilsson, Rolf Nordahl, and Stefania Serafin. “Can You Cut It? An Exploration of the Effects of Editing in Cinematic Virtual Reality” (2017). In Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology – VRST ’17. New York, NY: ACM. https://doi.org/10.1145/3139131.3139166.

Knorr, Sebastian, Cagri Ozcinar, Colm O. Fearghail, and Aljosa Smolic. “Director’s Cut – A Combined Dataset for Visual Attention Analysis in Cinematic VR Content” (2018). In Proceedings of the 15th ACM SIGGRAPH European Conference on Visual Media Production – CVMP ’18. New York, NY: ACM, 2018. https://doi.org/10.1145/3278471.3278472.

Moghadam, Kasra Rahimi and Eric D. Ragan. “Towards Understanding Scene Transition Techniques in Immersive 360 Movies and Cinematic Experiences” (2017). In 2017 IEEE Virtual Reality Conference – IEEEVR ’17. IEEE. https://doi.org/10.1109/VR.2017.7892333.

Murch, Walter. In the Blink of an Eye: A Perspective on Film Editing. 2nd ed. West Hollywood: Silman-James Press, 1992.

Pavel, Amy, Björn Hartmann, and Maneesh Agrawala. “Shot Orientation Controls for Interactive Cinematography with 360 Video” (2017). In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology – UIST ’17. New York, NY: ACM, 2017. https://doi.org/10.1145/3126594.3126636.

Rothe, Sylvia and Heinrich Hußmann. “Spaceline: A Concept for Interaction in Cinematic Virtual Reality” (2019). In Interactive Storytelling: 12th International Conference on Interactive Digital Storytelling, ICIDS 2019, Little Cottonwood Canyon, UT, USA, November 19–22, 2019, Proceedings. https://doi.org/10.1007/978-3-030-33894-7_12.

Rothe, Sylvia and Heinrich Hußmann. “Guiding the Viewer in Cinematic Virtual Reality by Diegetic Cues” (2018). In International Conference on Augmented Reality, Virtual Reality and Computer Graphics. Edinburgh: Springer, Cham, 2018. https://doi.org/10.1007/978-3-319-95270-3_7.

Rothe, Sylvia, Heinrich Hußmann, and Mathias Allary. “Diegetic Cues for Guiding the Viewer in Cinematic Virtual Reality” (2017). In Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology – VRST ’17. New York, NY: ACM. https://doi.org/10.1145/3139131.3143421.

Schubert, Thomas, Frank Friedmann, and Holger Regenbrecht. “Igroup Presence Questionnaire (IPQ)” (2002). http://www.igroup.org/pq/ipq/index.php.

Slater, Mel. “A Note on Presence Terminology” (2003). http://www0.cs.ucl.ac.uk/research/vr/Projects/Presencia/ConsortiumPublications/ucl_cs_papers/presence-terminology.htm.

Smith, T. J. and J. M. Henderson. “Edit Blindness: The Relationship between Attention and Global Change Blindness in Dynamic Scenes.” Journal of Eye Movement Research vol. 2, no. 2 (2008). https://doi.org/10.16910/jemr.2.2.6.

Smith, T. J. and J. Y. Martin-Portugues Santacreu. “Match-Action: The Role of Motion and Audio in Creating Global Change Blindness in Film.” Media Psychology vol. 20, no. 2 (2017). https://doi.org/10.1080/15213269.2016.1160789.

Sylvester, Tynan. Designing Games: A Guide to Engineering Experiences. Sebastopol: O’Reilly Media, Inc., 2013. https://www.oreilly.com/library/view/designing-games/9781449338015/ch04.html.

Thompson, Kristin and David Bordwell. Film History: An Introduction. New York: McGraw-Hill Higher Education, 2010.

Tikka, Pia. “(Interactive) Cinema as a Model of Mind.” Digital Creativity vol. 15, no. 1 (2004). https://doi.org/10.1076/digc.15.1.14.28151.

Toczyński, Piotr. “Editing with an Eye-Trace in Mind: Is the Rule of Six Incorrect?” (2018). https://nofilmschool.com/2018/08/editing-eye-trace-mind-rule-six-incorrect.