Driving from a Distance: Challenges and Guidelines for Autonomous Vehicle Teleoperation Interfaces

Felix Tener, Information Systems/Human-Computer Interaction, University of Haifa, Israel, felix.tener@gmail.com
Joel Lanir, Information Systems, University of Haifa, Israel, ylanir@is.haifa.ac.il

Autonomous vehicle (AV) technologies are rapidly evolving with the vision of having self-driving cars move safely with no human input. However, it is clear that, at least in the near and foreseeable future, AVs will not be able to resolve all road incidents and that in some situations remote human assistance will be required. Remote driving, though, is not trivial and introduces many challenges stemming mostly from the physical disconnect of the remote operator. To highlight these challenges and understand how to better design AV teleoperation interfaces, we conducted several observations of AV teleoperation sessions as well as in-depth interviews with 14 experts. Based on these interviews, we provide an investigation and analysis of the major AV teleoperation challenges. We follow this with design suggestions for the development of future teleoperation interfaces for the assistance and driving of AVs.

CCS Concepts: Human-centered computing, Human computer interaction, Interaction design

KEYWORDS: Teleoperation, Autonomous vehicles, Remote driving, Tele-assistance, Tele-driving, User interface design, Teleoperation challenges

ACM Reference Format:
Felix Tener and Joel Lanir. 2022. Driving from a Distance: Challenges and Guidelines for Autonomous Vehicle Teleoperation Interfaces. In CHI Conference on Human Factors in Computing Systems (CHI '22), April 29-May 05, 2022, New Orleans, LA, USA. ACM, New York, NY, USA, 19 Pages. https://doi.org/10.1145/3491102.3501827

1 INTRODUCTION

Autonomous vehicle (AV) technology has the potential to disrupt the transportation system by increasing traffic safety, reducing congestion, and changing travel behaviors [18]. Recent technological advancements, in particular in the field of artificial intelligence, have enabled AV technology to evolve to the point that operational AVs are already being tested in the streets today. However, even full automation of driving does not mean there would be no need at all for any human involvement [12]. It is widely believed today, both in academia and industry, that AVs will not be able to resolve all ambiguous traffic situations on their own [12,35]. Situations such as a malfunctioning traffic light, bad visibility conditions, or a big puddle in the middle of the road might prevent an AV from moving autonomously because it has not encountered such a situation before, because the situation or how to address it is unclear to the algorithm, or because there are rules and regulations which the AV must obey. A promising approach to resolve these ambiguities and provide an actionable solution for edge situations is teleoperation. Teleoperation involves a remote human operator (RO) who can monitor and control the vehicle from afar. When a vehicle encounters a problem in a particular situation, a RO can be called to assess the situation and guide the vehicle until the problem is resolved.

Teleoperation systems for AVs are already in use and are being developed by various automotive companies [35]. However, remote driving is not a trivial task and there are various challenges that a RO must face. For example, since the RO is physically disconnected from the operated AV, she cannot feel the forces that are applied on the teleoperated vehicle or hear its surrounding sounds. Other challenges relate to the fact that a lot of information must be transmitted from the AV to the RO over the network, and thus, latency might be an issue [22,45]. Finally, the RO has to gain situation awareness (SA) quickly under a heavy cognitive load and impaired visibility conditions.

The purpose of this work is to unveil the major vehicle teleoperation challenges and create design recommendations for future teleoperation interfaces of AVs. Human performance issues have been widely investigated in the field of robotic teleoperation [11]. Many of these issues, such as the importance of latency, also apply to teleoperated AVs. However, AVs operate in a much more complex environment and face greater and different challenges. For example, while for most robots a short communication problem will simply have the robot stop and wait until communication returns, a communication problem with a remotely operated vehicle driving 100 km/h on a highway is a totally different story. To unveil the major challenges and issues in AV teleoperation, we conducted semi-structured in-depth interviews with 14 experts from industry and academia who have extensive experience with AVs, teleoperation, command and control, or the auto industry in general. To complement the interviews, we also observed eight teleoperation driving sessions of AVs. Based on these interviews and observations, we created a framework of overall tele-driving challenges grouped into six clusters. In addition, we provide initial suggestions for the design of AV teleoperation interfaces. The results of this work may help future engineers, designers, and researchers design and build remote driving interfaces that take into account the discovered limitations and challenges.

2 RELATED WORK

2.1 Robotic Teleoperation Interfaces

The design and implementation of robotic teleoperation interfaces have been explored since the early 1970s. Such robotic vehicles have been used in situations in which it may be inconvenient, dangerous, or impossible for a human operator to be present. Applications of robotic telepresence are most common in military and space contexts, but also span civilian off-road contexts such as agriculture, manufacturing, and mining. The operator's situation awareness, i.e., her ability to perceive and comprehend the remote environment, was determined to be crucial for the success of the remote operation task [1]. Initial studies examined how to improve SA using various tools to better perceive the remote environment, attempting to maximize information transfer while minimizing cognitive and sensorimotor workload [20]. Other works looked into various human performance issues and the user interface design of teleoperation interfaces, including how to make decisions and issue commands, how to increase the operator's spatial orientation and object identification, and the effect of reliability, field of view, orientation, viewpoint, and depth perception of the video images on human performance [10,11].

2.2 Teleoperation of Autonomous Vehicles

While robotic teleoperation interfaces have been examined since the early 1970s, vehicle-teleoperated interfaces are a relatively new field of research. The Society of Automotive Engineers (SAE) defined an industry standard of six levels of driving automation, from 0 to 5 [41]. In level three (conditional automation), the vehicle can drive autonomously; however, a human must be present in the car, seated in the driver's seat, and ready to take over if there is an immediate need. This passing of control between the autonomous vehicle and the driver is called a take-over request. Several works investigated how to best utilize take-over requests and what may affect the driver's response time [39,46]. Levels four (high automation) and five (full automation) do not require the humans in the car to monitor or control the vehicle and, in fact, do not even require the vehicle to have pedals and a steering wheel. While AV technology is continuously evolving, the highest automation level currently available is level 3. However, several automotive companies are continuously conducting test runs for high automation, and many resources are being invested into making self-driving cars a reality.

Although there might be long periods of self-driving in highly automated vehicles, it is widely acknowledged today that AVs will not be able to handle all road situations [26,28,35]. The underlying assumption is that there are simply too many exceptional situations (i.e., “edge cases") that vehicles can encounter when driving. These situations may occur, for example, because of perception problems (e.g., heavy snow that prevents the AV from perceiving its environment via radar, LIDAR, or cameras), because the vehicle encounters an unknown situation (e.g., an unknown animal blocks the road), because the vehicle cannot make a decision due to rules, regulations, or other issues (e.g., inability to cross a continuous separation line when the road is blocked), or because the AV cannot unambiguously determine the situation (e.g., whether an object is a plastic bag or a big rock). Thus, despite the advancements in AV sensors and AI algorithms, and although humans may also struggle with some of the edge cases, humans still have higher-level interpretation skills for complex or novel situations. A complex edge case for an AV might be easily interpreted and handled by a human. Furthermore, at automation level 4, the vehicle is still limited in some aspects and there might be regulations that require humans to perform certain actions (e.g., deciding to cross a separation line). Thus, for a vehicle to operate at automation level 4 or 5, in which there is no human behind the steering wheel able to intervene, a remote human operator must be available to interpret edge-case scenarios and remotely intervene when a problem occurs.

Teleoperation of a vehicle on an urban road is a very different challenge than robotic teleoperation. Driving a remote car on an urban road requires the operator to utilize substantial cognitive resources in order to keep the vehicle in lane, control the speed and acceleration of the car, and react to immediate hazards such as pedestrians, other vehicles, and various unexpected events. The teleoperation system and interface should be designed to maximize the operator's situation awareness so he or she could best perceive the environment and act accordingly. Several works examined whether head-mounted displays can enhance the situation awareness and spatial awareness of remote drivers of AVs [7,23]. These studies mostly indicate that while state-of-the-art head-mounted displays allow a higher feeling of immersion, they do not necessarily improve driving performance.

A few recent studies have started to look at requirements for teleoperation-based interfaces [24], at defining the design space for advanced visual interfaces of AVs [25], and at investigating the needs of future control centers [30]. Our study extends these works. However, while Graf and Hussmann [24,25] focused on providing a list of user requirements for AV teleoperation, including the comprehensive information on the vehicle and surrounding environment (e.g., vehicle position, vehicle status, weather conditions) needed for teleoperation, our work takes a different perspective, focusing on the general challenges of tele-driving and the design of tele-driving interfaces.

3 METHODOLOGY

To identify the main teleoperation challenges in remote driving of autonomous vehicles, we first conducted in-depth semi-structured interviews with professionals working in the automotive industry as well as a few academic researchers of teleoperation. Following the interviews, we conducted informal observations of 8 people who drove an autonomous vehicle remotely at a test site. In the following sections we describe the process in more detail.

3.1 Formal interviews

Participants. We recruited 14 participants (1 female) ranging from 29 to 55 years old (M = 43.5, SD = 6.95). Seven participants are members of a consortium composed of academic institutions and leading automotive companies, formed to promote the legal and technological foundations for the widespread deployment of AV fleets through teleoperation. Other participants were recruited from innovation centers of well-known automotive corporations or from leading start-ups in the AV teleoperation field that were linked to the consortium. The participants were recruited via email and did not receive any compensation for their assistance. On average, participants had 20.3 years of experience in industry or academia and 16.1 years of experience with AVs, teleoperation, command and control, or the auto industry in general. All participants had obtained an undergraduate education in engineering, science, or business; most had graduate degrees; and all had a valid driving license. Table 1 summarizes the background of each participant.

Table 1: Participants current role and experience in the AV industry.
Participant Current Role Experience Description
P1 Co-Founder & CEO of a startup, dealing with computer vision systems for mobile robotics. Has 9 years of experience in the field of computer vision. In the last 3 years focused on remote navigation of robotic systems in domestic, industrial, and agricultural settings.
P2 VP Mobility Solutions in a company developing “robot as a service” platform to connect multiple autonomous systems to real world application. Manages a division focusing on experiments and regulations in the area of smart transportation and autonomy, as well as a national project that deals with traffic congestion, sensors, passengers load, vehicle safety, etc.
P3 Senior test manager and an autonomous driver in a company deploying autonomous shuttles. Plans and executes various tests for automotive and smart transportation technologies.
P4 Senior innovation leader in an innovation lab of a large vehicle corporation. Leads the sensors field within the innovation center and is focused on scouting and evaluating relevant start-ups.
P5 Teleoperation researcher (post doctorate) in an academic institution. Research is focused on human factor issues in remote operation of AVs.
P6 Technical scout in an innovation lab of a large German vehicle corporation (different from the corporation where P4 is employed). Looking for innovative solutions in areas related to AVs and teleoperation. Previously worked for 15 years as a journalist reviewing auto-related issues.
P7 Co-founder and COO of a teleoperation company. Worked over 18 years in the AVs area in military-oriented companies and later in a start-up company.
P8 VP of business development and autonomous systems consultant in a teleoperation company. Founded the AV and robotics branch in the military, where he spent 10 years researching, developing, and deploying AVs for real-world applications. Later co-founded a company for teleoperation of AVs.
P9 Lab group manager in a large American vehicle corporation (different from the corporations where P4 and P6 are employed). Has worked in the field of human factors and engineering for Advanced Driver Assistance Systems (ADAS) and AVs for more than 25 years.
P10 Director of innovation in a start-up, which focuses on tele-communication with AVs. Has 20 years of experience in system engineering and communication with autonomous satellites. Recently joined a company dealing with teleoperation of AVs.
P11 Research scientist (faculty) in an academic laboratory for robotics and AVs. Engaged in the research of transportation safety systems and autonomous systems for more than 16 years.
P12 Director of partnerships and business development in a small company developing simulations for Advanced Driver Assistance System (ADAS) and AV developers. Has around 7 years of experience in companies related to vehicles and vehicle simulations.
P13 VP of business development and co-founder of a start-up for teleoperation of AVs. Has more than 20 years of experience in the field of tele-communications and in the last 5 years works in the field of AVs.
P14 VP of Innovation in a company that develops command and control systems. Many years of experience as a group manager of command-and-control systems in the navy, recently focusing on command and control for AVs.

Interview Procedure. We conducted semi-structured interviews with the participants. The main questions were the same for all participants; however, we requested each participant to elaborate and asked follow-up questions based on their answers. The main questions asked during the interviews were:

What are the main teleoperation challenges that a remote operator encounters when driving, assisting, or monitoring an AV?

What is, in your opinion, important for the remote operator to know about the AV in order to assist it in its movement? Which elements should be available to the remote operator in the interface?

Based on your experience, what types of faults can happen to an autonomous vehicle? Please give concrete examples of situations in which an AV might need remote intervention.

Each interview lasted around 60 minutes, but in some cases, depending on the willingness of our participants, we scheduled an additional session to collect more information. The interviews were conducted remotely via the Zoom video conferencing tool. All sessions were recorded. At the beginning of each interview session, a consent form was obtained, and participants were asked to fill in a demographic questionnaire. In the current work, we focus on the teleoperation challenges (question 1) and the teleoperator's user interface (question 2).

Data analysis. First, we fully transcribed the interviews from the recordings. We used an inductive approach to the qualitative analysis of the texts, following the guidelines of thematic analysis [8]. Specifically, we grouped participants’ answers using a separate spreadsheet for each question. Each data record in the “Teleoperation Challenges” section had the following characteristics attached to it: (1) The challenge – general description, (2) Literal description of the challenge by each participant (i.e., a citation of what the participant said), (3) Origin – which participant mentioned this challenge, (4) Number of appearances of the challenge within the data, and (5) Category. The “Category” characteristic was added after cleaning redundancies and grouping challenges together based on thematic similarities (a bottom-up approach). After categorizing our findings, we counted the number of times each category or sub-category was mentioned by different participants (Figure 2).
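To illustrate the structure of the coding scheme described above, the following minimal sketch (our illustration only; the actual analysis was done in spreadsheets, and all field names are ours) represents one coded challenge record and the per-category counting behind Figure 2:

```python
from dataclasses import dataclass

@dataclass
class ChallengeRecord:
    """One coded challenge extracted from the interview transcripts."""
    description: str    # (1) general description of the challenge
    citations: list     # (2) literal participant quotes
    origins: list       # (3) which participants mentioned it, e.g., ["P3", "P8"]
    category: str = ""  # (5) assigned after bottom-up grouping

    @property
    def appearances(self) -> int:
        # (4) number of appearances of the challenge within the data
        return len(self.citations)

# Counting how many times each category was mentioned (cf. Figure 2)
records = [
    ChallengeRecord("latency", ["... more than that, it's impossible ..."],
                    ["P8"], "Video and communication quality"),
]
counts = {}
for record in records:
    counts[record.category] = counts.get(record.category, 0) + record.appearances
print(counts)
```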

3.2 Observations of tele-driving sessions

Participants. Eight participants (2 female) ranging from 28 to 64 years old (M = 41.75, SD = 12.39) participated in the driving sessions. All participants had an undergraduate degree, six of them in engineering or the sciences. Only one participant had previously driven an AV remotely, and three had extensive experience with video games. Most of the participants (6/8) were employees of the teleoperation company that conducted the experiment for its internal needs. All participants agreed to participate in our observations and did not receive any compensation. In the rest of the paper, we refer to these participants as OP1-OP8.

Procedure. After going through an appropriate safety briefing, each participant was requested to perform several remote driving tasks using a dedicated tele-driving station (Figure 1 - left). The remote car was located 60 km away from the station in a dedicated experimental polygon for smart transportation testing (Figure 1 - right).

Figure 1
Figure 1: Left - Remote tele-driving station. Right - Experimental polygon.

During the tele-driving sessions, participants wore a headset (Figure 1 - left) via which they communicated with a safety driver who was located within the AV at the remote polygon. Each participant drove the AV for around 15 minutes and was guided to perform the following tasks:

  1. Driving straight within the boundaries of a lane.
  2. Performing a U-turn.
  3. Speeding up and stopping abruptly.
  4. Maneuvering between several cones, organized in a straight line several meters apart.
  5. Bypassing an obstacle without touching nearby standing cones.

The participants were observed while performing the tele-driving tasks and were interviewed immediately afterwards. Both the observations and the follow-up interviews were video recorded. At the beginning of each interview, a consent form was obtained and participants were asked to fill in a demographic questionnaire. The main questions presented to the participants were:

  1. Can you please share your recent remote driving experience? What did you feel?
  2. Which challenges did you face while driving the vehicle remotely?
  3. What disturbed you in the existing interface? What could be improved? How would you improve the UI of the teleoperation station?

Data analysis. During the driving sessions we recorded notes on any difficulty or problem we observed. Analysis of the observation interviews was similar to that of the expert interviews. First, we fully transcribed the interviews from the recordings. Then, we grouped participants’ answers. Each data record had the following characteristics attached to it: (1) The challenge – general description, (2) Literal description of the challenge by each participant (i.e., a citation of what the participant said), (3) Origin – which participant mentioned this challenge, and (4) Number of appearances of the challenge within the data. We also separated generic teleoperation challenges from station-specific challenges. After categorizing our findings, we counted how many times each category or sub-category was mentioned by different participants (OPm) and added it to the sum we had already calculated based on the answers of the interviewed experts (Pn) (Figure 2).

Figure 2
Figure 2: Categories of teleoperation challenges. The numbers near each category name indicate how many times themes in this category appeared in the data (interviews and observations). Within the subcategories we show how many times each theme was mentioned by interviewees (left) and remote drivers (right).

4 RESULTS

From the interview analysis, the following main categories of teleoperation challenges emerged (Figure 2): 1. Lack of physical sensing (mentioned 31 times), 2. Human cognition and perception (28), 3. Video and communication quality (25), 4. Remote interaction with humans (19), 5. Impaired visibility (15), and 6. Lack of sounds (8). All the categories and sub-categories (except one) were directly derived from the statements (citations) of the interviewed experts (Pn) or the remote drivers (OPm). Additionally, at the end of this section we share some of the mistakes and phenomena that were noticed during the remote driving experiment. Next, we explain each category and provide additional details and examples.

4.1 Lack of physical sensing

When a human driver is physically present within a vehicle, she receives various kinds of physical feedback on her actions. For example, when a driver presses the gas pedal, the vehicle accelerates and the driver's chair applies force on the driver's body. However, when a remote driver performs the same action, the feedback is provided only in visual form, while the haptic and physical feedback is missing [11,27]. Participants commented that when driving a vehicle remotely, a RO has difficulty feeling and estimating 1. Linear and angular accelerations (or forces that influence the driver) (mentioned by 8 participants), 2. Speeds (of the car, as well as of other vehicles) (5 participants), and 3. Moderate road inclinations (1). It is also worth mentioning that 5 ROs in the experiment reported a general disconnect between their bodies and the AV without mentioning any of the above sub-categories.

P13 commented that “… The information arrives to the driver mostly visually and there are almost no other sensations: hard to hear, feel vibrations, feel accelerations, feel speeds …”. The same participant also said that “… The issue of turns is very difficult. I didn't see a [remote] driver, who didn't ask his passengers ‘how was the turn?’ [after completing one] …”. He also mentioned that “… It is very difficult to get the sense of speed. When you are in a car and you look away, you have no idea if you are traveling 60km/h or 80km/h. This happens a lot of times when you drive remotely …”. This was corroborated by our observations of the tele-driving sessions; two participants noted that “… the speed [as observed in the video feed] isn't connected to reality [and] looks much faster than it is …”. OP6 also mentioned that it was “…very hard to take a turn and maintain the same speed …”.

In addition, since the RO has no physical connection with the vehicle's controls, she lacks the feedback from the steering wheel and the pedals (mentioned by 6 experts and 6 ROs). P3, who works as a teleoperator of an autonomous shuttle, shared that it is “… very difficult to feel when you actually remove your leg from the brakes …”. Moreover, feedback from the road surface is missing. P6 commented that the RO doesn't feel the “… slip of the road, bumps, etc. …”. The connection to road surfaces and its influence on applied forces and accelerations was also mentioned during our observations by OP3 and OP6. Finally, our observations suggest that the physical disconnect from a remote vehicle not only makes it challenging to control the AV remotely, but may also have a negative physiological influence on some remote drivers. OP2 and OP5 reported that they felt nauseous and dizzy after a 15-minute driving experience. This finding correlates well with the literature [19].

4.2 Human cognition and perception

This category deals with various cognition and perception factors that influence the RO's ability to successfully drive the remote vehicle. These factors include: 1. Situation awareness (SA) (mentioned by 5 participants), 2. Cognitive load (6), 3. Depth perception (7), 4. Spatial awareness (7), and 5. Development of mental models (3).

Situation awareness is a general term that describes how a person perceives the environment. More specifically, according to Endsley [15], SA is the perception of environmental elements and events with respect to time or space, the comprehension of their meaning, and the projection of their future status. SA can be considered as a high-level challenge for remote operation [11] and indeed, many of the challenges mentioned in this paper are important for improving the overall SA of the RO. In remote driving, SA is critical before, during, and after an intervention request. One of the challenges, specifically in remote driving, is that the RO must gain SA quickly because the situation on the road changes rapidly and human lives are involved.

Another remote driving challenge is the cognitive load of the RO. According to P3, who works as a remote operator on a daily basis, “... The concentration level that is needed while teleoperating a vehicle is different ... You are in a high alert state and it sucks [your] cognitive abilities...". In our observations, OP2 said the tele-driving experience was “… hard and dubious …”. He also explained that “… In a regular driving, you absorb the information automatically, [while] in remote driving it's overwhelming ... [It's] a huge effort ...”. OP5 defined the experience to be “… stressful …”. Other participants emphasized the need to design a teleoperation station in a way that “…will give the relevant information when it is needed, but not more than that…” (P11) in order to avoid overwhelming the RO with too much information.

Another challenge is the lack of spatial awareness in teleoperation. When a driver is physically present in a vehicle, she feels how much space the vehicle occupies and which maneuvers the vehicle is able to perform. When one drives a vehicle remotely, this feeling is lost, which may cause remote drivers to avoid risks that they would be ready to take when inside a vehicle. P6 gave an example of this with road flooding: “... When you drive a car, you feel the depth of the water. You can feel if the water is standing or moving because the car will move you a bit. That is exactly the difference between a small puddle and drowning a passenger ... Orientation in space is easier when you move with the vehicle ...". Understanding the vehicle's placement in space was a major issue raised during our tele-driving observations. Five participants stated that it was hard for them to estimate the vehicle's width. One of our observation participants also mentioned that she couldn't tell when she had completed a U-turn.

Depth perception was also raised as a challenge by several participants. It is more difficult to estimate distances in teleoperation: “… you seem to see where you are, but all the depth hints are no longer there …” (P5) and thus, “… it is very hard to know what is close and what is far away …” (P3). Additionally, it is difficult to estimate the approaching or receding speed of vehicles that move at high speed. During our observations, depth perception was mentioned as a challenge by 3 participants; however, two of them said that “… it takes time to get used to it …”. Additionally, based on our observations, depth perception was mostly an issue when participants tried to maneuver around the cones. One possible reason for this could be that the side cameras distorted the distance, similarly to the phenomenon with side mirrors in regular vehicles.

Finally, it may be challenging for the RO to develop correct mental models of the vehicle and its environment [38], especially when the RO needs to alternate between and operate more than one vehicle. In the context of remote driving, mental models were raised by participants in regard to the understanding of the remote vehicle's controls, size, and behavior. P14 mentioned that “… in tele-driving there is a challenge of adapting to different vehicles even when they are of the same size (bus vs. truck) …”. P11 asked: “… how does pressing the accelerator pedal affect a Ford Focus versus the same pressure in a Fiat Punto? …”. In other words, it is a challenge for the RO to build a correct understanding of the remote vehicle and its behaviors in various situations. Since ROs in teleoperation centers may not have the luxury of habituating to each and every AV they operate, switching between various AVs will become a real challenge for ROs, who will have to get used to various vehicle sizes, different pedal sensations, different camera settings, etc.

4.3 Video and communication quality

When a driver is physically present and driving the vehicle, she receives the information from three major sensory channels: visual, haptic, and audio. However, as mentioned above, in remote driving the haptic and audio modalities are usually missing. Therefore, the visual information has greater importance in remote driving. We recognize several challenges that deal with the way that the RO can view the remote environment. This mostly has to do with the way the cameras in the car capture and transmit the video feeds and the way this information is presented to the RO.

Nine of our interviewees mentioned latency as one of the major challenges in teleoperation, which also correlates with the literature [2,13,14,36]. According to P8, “… if the latency is below 200-250ms, one can do teleoperation, but more than that, it's impossible …”. An additional important characteristic of latency is its variability. In other words, “… latency isn't something permanent, but something that changes depending on various factors. If it changes significantly all the time, it is very tiring for a RO [to the point of dizziness] …” (P8). Latency was also a major issue in our observations; five out of eight participants mentioned that latency was a challenge for them. Latency not only caused over-steering and under-steering, but also caused remote drivers to press more (or less) on the gas and brake pedals. These phenomena undermined some operators' confidence in the system.
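To illustrate why both the absolute latency bound and its variability matter, the sketch below (a hypothetical monitor of our own; the 250 ms bound follows P8's comment, while the jitter threshold is purely an assumption) classifies a window of round-trip latency samples:

```python
import statistics

LATENCY_LIMIT_MS = 250  # upper bound for feasible teleoperation, per P8
JITTER_LIMIT_MS = 50    # illustrative threshold for "changes significantly"

def assess_link(latencies_ms: list) -> str:
    """Classify a window of round-trip latency samples."""
    mean = statistics.mean(latencies_ms)
    jitter = statistics.pstdev(latencies_ms)  # variability around the mean
    if mean > LATENCY_LIMIT_MS:
        return "infeasible: mean latency too high"
    if jitter > JITTER_LIMIT_MS:
        return "degraded: latency variability may fatigue the operator"
    return "ok"

print(assess_link([180, 190, 210, 205]))  # -> "ok"
print(assess_link([120, 300, 90, 280]))   # -> "degraded: ..."
```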

Another aspect of video transmission related to latency is the video's frame rate, measured in Frames Per Second (FPS) [13]. Three of the interviewed experts mentioned this challenge, and P8 shared with us that " ... When there is a (network) load, the first thing you do is reduce FPS. The human eye does not differentiate over 24 FPS, but there is a huge difference between 30 and 24 FPS. When the pace is 30 FPS the person gets much less tired even though seemingly there should be no difference. If you need to reduce the rate to 15-20 FPS, it's bad ...".

The third characteristic of video that was mentioned in the interviews by three experts is the video resolution. P6 shared that “…if the remote driver cannot view the video in full HD, her ability to function is impaired ...”. In addition, P13 stated that "... The sensors have a much lower resolution than the human eye... A person can see something and focus on it, but in remote driving it's very difficult to do so… The video doesn't let you zoom in where you want. At 10 degrees, you see very sharp, at 30-40 degrees you see fine, and beyond that you can barely see …”. In other words, the video's sharpness is not consistent throughout the field of view (Figure 3).

Figure 3
Figure 3: A view from a front camera installed on an AV. The image resolution within the red circle is much lower than the image resolution within the blue circle because of the angle from the center of the camera.

An additional aspect of video quality is whether video stitching (combining multiple images with overlapping fields of view to produce a wide-angle image) was done. In a remote driving station, several camera feeds from different cameras placed on the car are often stitched together to provide one wide-angle view. According to P2, “… the RO's chance of understanding what's happening, when he has a concave screen in which the images are stitched, is higher than when he sees several screens with bezels between them …”. However, video stitching is an expensive image processing procedure that companies are reluctant to invest in. Lack of stitching, or low-quality stitching (Figure 4), may increase the RO's cognitive load. This finding was supported by one interviewee and experienced by 3 remote drivers.

Figure 4
Figure 4: A view from 3 front cameras in a teleoperation station, taken from our observations. The image captures a cone, which appears twice on the screen because of an overlap between two front cameras.

Finally, another factor that can reduce the RO's ability to assist a remote vehicle is uncalibrated cameras. P2 mentioned that “… The brightness, resolution, and contrast of all cameras should be the same. That is, the cameras must be calibrated. The green should be the same green and the blue the same blue…”. An additional important aspect of camera calibration is how the distance to objects is reflected in the video feeds. In other words, the distance to an object from the front cameras should appear the same as the distance to the same object from the side cameras (Figure 5). Lack of such calibration creates a confusing effect, which makes it hard for the RO to estimate distances to objects.

Figure 5
Figure 5: The left image shows the video feed from the front 3 cameras. The right image shows the video feed from the right-side camera. The distance to the pavement seems different in the two views. The color palette is also not calibrated.

4.4 Remote interaction with humans

Ultimately, AVs are intended to provide service to people. Therefore, in addition to the many technical and human-factors challenges, ROs of AVs will also have to address challenges related to remote communication with other humans involved in various traffic scenarios [3].

Remote communication with passengers within an AV is one such scenario (see also [31,33]). P2 described a hypothetical scenario in which a “… passenger [and] a child arrived [in an AV] at their destination. [The passenger] got out of the vehicle and went to take her [small] child out of the AV. However, the vehicle locked itself and drove to its next destination because it recognized one exit and one entrance [into itself] …”. This is a very complex scenario, and it is clear that a RO must be quickly brought into the loop. Communication with passengers was mentioned by 6 experts.

Communication with drivers of other, non-autonomous vehicles is another possible scenario, mentioned by one expert during the interviews. Human drivers often communicate with each other through gestures and eye contact in order to bridge gaps that are not outlined by the law. For example, drivers arriving simultaneously at a 4-way stop sometimes rely on non-verbal communication between themselves to decide who will proceed first. In a hybrid environment, in which one vehicle is controlled by a human and the other is autonomous, such situations can be ambiguous. Therefore, a RO might be called upon to resolve the problem.

Finally, AVs might also need to communicate with pedestrians and people outside the vehicle (mentioned by 6 experts). Several works have examined AV-pedestrian communication, looking at how pedestrians could better understand AVs' intents [17,34,40]. However, in certain situations, there would also be a need to open a channel of communication between the RO and a pedestrian. P9 stated that "... Ultimately there should be communication between a police officer and a remote operator …”. Indeed, there may be scenarios in which a law enforcement representative directs the traffic in contradiction to the posted traffic rules. Thus, for an AV, alternative mechanisms of communication will have to be set up, and methods of communication between the RO and the human agent should be determined. Similarly, the RO will need to be able to communicate with other humans in the environment. Participants mentioned use cases that included communication between a RO and a guard at an entrance to a compound, a pizza delivery (by an AV) that didn't arrive at the correct destination, an older adult blocking a road lane because his groceries fell from his basket, and more.

4.5 Impaired visibility

As mentioned above, visual feedback is often the only feedback available to a RO, and thus video quality is crucial. However, even with perfect video representation, the vision of the RO may be impaired during remote driving for various reasons. The interviews highlighted several such situations.

Five experts and one RO mentioned that a limited (or distorted) field of view can impair the RO's perception of the actual scene (see also [16,44]). Since the world is seen by the RO via computer screens and depends on the cameras installed on the vehicle, if the angular coverage of the front cameras does not reach 180 degrees, the operator's view may be impaired. Additionally, geometric distortion might occur when compressing a 180-degree field of view from the cameras onto a computer screen that is not wide enough (Figure 3). According to P10, such distortion “… can cause slower response times …” because, according to P11, “… the information is delivered to the RO not necessarily in the way he is used to seeing it …”.

Three interviewed experts claimed that lack of peripheral vision is another RO challenge, since peripheral vision is important for lane keeping and lateral control [16,32]. As P5 stated: “… Everything that peripheral vision gives me in a real vehicle, is lacking in remote operation. In driving, the effect of peripheral vision is very important …”. In other words, peripheral vision may not be used when the vehicle's environment is presented on screens in front of the RO. In fact, during our observations, we noticed that ROs were so focused on the central front part of the screen that they completely neglected or preferred not to look at other UI elements that appear on the sides of the screen: OP3 didn't look at the steering wheel UI element, OP5 reported that she had “… deleted [in her mind] the side cameras …”, and OP7 completely neglected the speedometer, which was right in front of her.

The inability to improve one's viewpoint is another challenge (mentioned by four interviewees). When a driver is physically present inside a vehicle, she can change and improve her viewpoint based on the situation and is not necessarily dependent on sensors. For example, during parking, she can stretch or turn her neck to see a bit further, and when she performs a right turn, she has the option to turn her head and increase her field of view. Such freedom of movement does not exist in remote driving. According to P6, “…This [point of view improvement] is missing in teleoperation because all the sensors are fixed to one place. If there is something that interferes with your field of vision, you don't know what is going on behind it …”. P5 further explained that “… it's difficult for a RO that he can't turn his head and see if he can perform a road turn [or not] …”.

Finally, changing lighting conditions can also affect the RO's perception (one expert and one RO). Our eyes adapt to lighting changes in the environment by changing the pupil's diameter. A camera's aperture is built to imitate the eye's pupil to control its exposure to light. However, rapid lighting changes, whether seen through eyes or cameras, can cause a visual disturbance to the RO. According to P8, “… when an AV approaches a light pole at night, the camera's shutter closes and vice versa. If the traveling speed is fast, a flickering effect is created and it can drive the ROs crazy …”. During our observations, OP2 explicitly mentioned that when different camera views have different lighting conditions (because of the sun's direction), it disturbs remote driving performance (Figure 5).

4.6 Lack of sounds

According to eight participants (7 experts and 1 RO), an important environmental aspect that can be missing in remote driving is sound [9]. When inside a vehicle, drivers can hear a multitude of sounds: 1. Sounds produced by other entities on the road (such as sirens, honks, human voices, or dog barking), 2. Sounds that result from the interaction of the vehicle with its surroundings (such as the road surface, or the leaves on the side of the road), 3. Sounds from within the vehicle itself (such as the sounds of the engine or other mechanical parts), and 4. Weather-related sounds (such as strong winds, a rockslide, etc.). All these sounds may impact the SA of the RO if they are unavailable or provided poorly. P6 gave us a vivid example: “… Suppose that a rack component is broken; it is a physical component that doesn't have a sensor. Therefore, it will not activate any warning in the vehicle's dashboard. If you were in the car, you would hear a rumbling sound, but if you are not there, you don't feel the vibrations of the car, and you don't hear the car, you will not know it. Thus, it will be more difficult for you to understand the problem and why the vehicle drags to one side…”.

4.7 Remote Driving Performance Issues

While measuring remote driving performance was not the main purpose of this research, we would like to share some of the mistakes and phenomena that were noticed during the remote driving experiment. While some ROs could easily complete all the remote driving tasks without any difficulty and even described the experience as “…cool…”, three participants (OP1, OP2, OP3) reported nausea and dizziness. In one case, a RO (OP1) even requested to stop the experiment midway because of motion sickness. It is also interesting to note that two of the operators (OP4, OP8), who were very successful in the teleoperation tasks, had previous extensive experience with video games. One of these drivers also had experience driving a (real) light military truck, which enabled him to estimate the vehicle's width without seeing its whole front side. Additionally, we observed that operators adapted relatively quickly to UI deficiencies and to the experienced latency. Among all the tasks that ROs had to perform, the slalom among the cones and bypassing an obstacle were the most challenging. In some cases, ROs ran over cones while driving and were almost never sure whether they had hit a cone. Over-steering and under-steering effects were evident in the above tasks, but also occurred when performing a U-turn.

5 USER INTERFACE DESIGN SUGGESTIONS

In this section we outline several design guidelines for the AV RO's user interface that may address or mitigate some of the challenges listed above. We focus on general guidelines that came out of the interviews or observations, relate to the RO's user interface, and address one or more challenges.

5.1 Add UI cues to bridge physical disconnect

As mentioned in Section 4.1, one of the major challenges of remote driving is the physical disconnect of the RO from the operated AV. Because of this disconnect, ROs have difficulty feeling and estimating speeds, accelerations (linear when speeding up and slowing down, angular when performing turns), road inclinations, and the fine-grained effect of the gas and brake pedals. To alleviate this, we recommend adding special visual elements and cues to the teleoperation UI that indicate felt and applied physical forces. For example, in [29], a visualization of pedaling force was created for the training of cyclists. In our case, to provide feedback about the force applied (by the RO) on the gas and brake pedals, it is possible to add visualizations that correlate with this force. Figure 6 provides an example of how this can be visualized in the RO's user interface. We believe that such a visualization can also help ROs create proper mental models, as mentioned in Section 4.2. In addition, instead of visualizing accelerations, which depend not only on the RO's contact with the gas and brake pedals but also on road grip and road inclinations, we suggest visualizing the forces applied on the humans inside the AV. Figure 7 shows a design example that supports force visualization.

Figure 6
Figure 6: Visualizing gas pedal contact via a continuous green element across the main tele-driving screen. The more the operator presses on the gas pedal, the wider the element becomes and its opacity decreases (and vice versa). A similar red element can depict pressing on the brake pedal.
Figure 7
Figure 7: The blue human figures on the left of the compass and on the right of the speed limit should move left and right according to the force that the RO (or the passenger) feels when in the AV.
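To make the mapping of Figure 6 concrete, the following minimal sketch (ours; the width and opacity ranges are illustrative assumptions, not values from any UI we observed) computes the overlay geometry from a normalized pedal pressure:

```python
def pedal_overlay_style(pressure: float, screen_width_px: int = 1920) -> dict:
    """Map normalized gas-pedal pressure (0..1) to the overlay of Figure 6.

    Width grows with pressure while opacity decreases, so a hard press
    reads as a wide but subtle band rather than an opaque distraction.
    """
    pressure = max(0.0, min(1.0, pressure))
    return {
        "width_px": int(screen_width_px * (0.2 + 0.8 * pressure)),  # 20%..100%
        "opacity": 0.9 - 0.6 * pressure,                            # 0.9..0.3
        "color": "green",  # gas; an analogous red band depicts the brake pedal
    }

print(pedal_overlay_style(0.5))  # half pressure -> mid-width, mid-opacity band
```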

5.2 Emphasize the intervention reason

Each remote intervention consists of three main phases: In the first phase, the RO accepts an intervention request and understands the situation and what assistance is needed. In the second phase, the operator provides the needed assistance and makes sure that her guidance was followed by the AV. In the third and final phase, the RO leaves the session. One of the challenges specific to remote driving is that the RO must gain SA quickly because the situation on the road changes rapidly and human lives might be involved. In order to shorten the first phase and increase the RO's SA during it, it is important not only to notify the RO that she has to take control of the vehicle [2,5], but also to clearly and quickly show the RO the reason for the requested intervention. P6 gave an example: “... Did the vehicle stop because the camera stopped working or because someone sprayed some mud on the lens? ...". This was echoed by other participants (P6, P9, P10, P12, P14) as well. The intervention reason can be conveyed by a simple message, by adding appropriate virtual layers on top of the video feed, or in other ways.
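As a concrete illustration, an intervention request that carries its reason to the RO's UI could be structured as follows (a minimal sketch; the reason categories and field names are our illustrative assumptions):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class InterventionReason(Enum):
    SENSOR_FAILURE = "camera stopped working"
    SENSOR_OBSTRUCTION = "mud sprayed on the lens"
    UNKNOWN_OBSTACLE = "unidentified object blocking the lane"
    REGULATORY_LIMIT = "maneuver requires crossing a separation line"

@dataclass
class InterventionRequest:
    vehicle_id: str
    reason: InterventionReason          # surfaced to the RO immediately, not just an alarm
    overlay_hint: Optional[str] = None  # e.g., highlight the affected camera feed

req = InterventionRequest("AV-042", InterventionReason.SENSOR_OBSTRUCTION,
                          "highlight front-left camera feed")
print(f"[{req.vehicle_id}] intervention needed: {req.reason.value}")
```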

5.3 Add contextual road information

One important difference between continuous driving inside a vehicle and a remote driving intervention is that a regular driver has contextual information about the vehicle, the situation, and the road conditions that stems from the continuous driving process. Conversely, a RO may be missing critical information because of the short time frame in which she is requested to intervene. According to P12, “…when a real driver drives, he holds this information in his head, while the teleoperator doesn't, because he is entering the situation in the middle …”. Thus, it is important to provide contextual road information, such as speed limits or detour prohibitions, to the RO. Forster et al. [21] suggested fusing vehicle-localized environment perception (i.e., what can be perceived by the vehicle's sensors) with information provided by other road users or infrastructure (so-called cooperative perception) to improve in-vehicle assistive systems for human drivers. For example, it may be possible to provide drivers with advance information about upcoming system limits to improve driving performance during in-vehicle take-over situations. We suggest extending this approach to remotely operated vehicles in order to increase the RO's SA during interventions.
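A minimal sketch of such cooperative-perception fusion follows (our illustration, not the method of Forster et al. [21]; the field names and the rule of preferring the more restrictive value are assumptions):

```python
from dataclasses import dataclass

@dataclass
class RoadContext:
    """Context a continuously-present driver would already hold in her head."""
    speed_limit_kph: int
    detour_prohibited: bool
    notes: list  # e.g., "school zone ahead", "lane closed in 300 m"

def fuse_context(local: RoadContext, cooperative: RoadContext) -> RoadContext:
    """Merge the vehicle's own perception with infrastructure-provided data,
    preferring the more restrictive value when the two sources disagree."""
    return RoadContext(
        speed_limit_kph=min(local.speed_limit_kph, cooperative.speed_limit_kph),
        detour_prohibited=local.detour_prohibited or cooperative.detour_prohibited,
        notes=sorted(set(local.notes + cooperative.notes)),
    )

merged = fuse_context(
    RoadContext(50, False, ["wet road surface"]),   # from vehicle sensors
    RoadContext(30, True, ["roadworks in 200 m"]),  # from infrastructure
)
print(merged)
```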

5.4 Integrate AI suggestions into the UI

Automated vehicles are operated based on advanced AI algorithms. Human assistance is needed when these algorithms fail or are unable to determine what to do. However, the human RO can highly benefit from this AI knowledge [43], even if it cannot precisely determine the action to take. AI-based insights can be integrated into the teleoperation UI to help ROs in their decisions and to reduce cognitive load. One possible way to do so is to provide the RO with possible alternatives for solving a specific use case. For example, if a dog is lying in front of an AV in the middle of a two-way street with a continuous separation line, there might be several options: (1) Cross the separation line, (2) Find an alternative route, (3) Stop and wait, (4) Slowly move towards the dog without harming it. Having the AI suggest these solutions to the RO could significantly shorten the intervention time and might reduce the RO's cognitive load (see Section 4.2). In addition, the AI might suggest its preferred solution for the RO to simply approve, as sketched below.
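A minimal sketch of handing ranked AI suggestions to the RO for one-click approval (the data structure and confidence scores are our illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class ActionSuggestion:
    action: str
    rationale: str
    ai_confidence: float  # the planner's own preference score

def rank_suggestions(suggestions: list) -> list:
    """Order alternatives so the AI's preferred solution is offered first,
    with the remaining options available as fallbacks."""
    return sorted(suggestions, key=lambda s: s.ai_confidence, reverse=True)

options = rank_suggestions([
    ActionSuggestion("cross the separation line", "oncoming lane is clear", 0.72),
    ActionSuggestion("find an alternative route", "adds ~4 min to the trip", 0.55),
    ActionSuggestion("stop and wait", "the dog may move on its own", 0.40),
    ActionSuggestion("creep forward slowly", "may encourage the dog to move", 0.31),
])
print("Suggested:", options[0].action)  # the RO can approve or pick another option
```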

5.5 Visualize AVs direction based on the current position of the steering wheel

Another important point, stemming mostly from our observations but also raised in the interviews, is the importance of providing continuous feedback on the AV's trajectory as a function of the steering wheel rotation angle. In several industrial UIs that we examined, a rotating UI element representing the position of the steering wheel was used to convey the vehicle's actual direction. With such a representation, the RO must project the AV's trajectory in his mind based on the angle of the steering wheel in the UI. This cognitive effort has an even greater downside when taking the overall RO workload into account (see Section 4.2). Instead, we suggest projecting the future trajectory of the AV onto the video image itself (see Figure 8). A similar approach was taken by researchers from Japan [42] in order to create a more comfortable environment for the passenger by increasing the passenger's SA. Another important benefit of the suggested trajectory overlay is that, when appropriately designed, it can visualize the AV's width, which may increase the spatial awareness of the RO (see Section 4.2).

Figure 8
Figure 8: Projecting the future AV trajectory on the video image.
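One way such a trajectory overlay could be computed is sketched below. This is our illustration using a standard kinematic bicycle model, not the method of [42] or of the industrial UIs we examined; projecting the resulting ground-plane points onto the video image would additionally require the camera's calibration parameters:

```python
import math

def predict_trajectory(steer_deg: float, speed_mps: float, wheelbase_m: float = 2.7,
                       horizon_s: float = 3.0, dt: float = 0.1) -> list:
    """Predict the AV's ground-plane path from the current steering angle
    using a kinematic bicycle model. Points are in vehicle coordinates
    (x forward, y left)."""
    x, y, heading = 0.0, 0.0, 0.0
    steer = math.radians(steer_deg)
    path = []
    for _ in range(int(horizon_s / dt)):
        x += speed_mps * math.cos(heading) * dt
        y += speed_mps * math.sin(heading) * dt
        heading += speed_mps / wheelbase_m * math.tan(steer) * dt
        path.append((x, y))
    return path

# Offsetting each point by half the vehicle width yields the two edges of the
# overlay, which also conveys the AV's width to the RO.
centerline = predict_trajectory(steer_deg=10, speed_mps=5)
print(centerline[-1])  # predicted position 3 seconds ahead
```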

5.6 Add depth perception cues

Another challenge, raised in Section 4.2, is the lack of depth perception (e.g., it might be difficult to estimate whether a vehicle is approaching or moving away from the AV). This can be mitigated by adding depth cues to the interface, possibly by utilizing color, similarly to what is often done in medical visualizations [37]. For example, when a vehicle is moving away (in the same direction), its color can become less saturated, and vice versa. An additional method, suggested by P6, is to apply different colors to vehicles that move in the same direction as the teleoperated AV and those that move in the opposite direction. However, not every visual representation will be useful in such cases. For instance, simply adding a number that represents the distance to an object would probably not be useful and would add too much cognitive load. A possible approach, suggested by P3, is adding virtual layers on top of the video feed, divided into colored sections (red, yellow, green) that represent depth, similarly to what is used today in the rear-view cameras of vehicles when driving backwards.
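A minimal sketch of the red/yellow/green depth-band mapping suggested by P3 (the distance thresholds are our illustrative assumptions and would need tuning per vehicle speed and camera geometry):

```python
def depth_zone_color(distance_m: float) -> str:
    """Map a distance estimate to a traffic-light color band, similarly to
    the overlays used in vehicle rear-view parking cameras."""
    if distance_m < 5.0:
        return "red"     # imminent: object very close to the AV
    if distance_m < 15.0:
        return "yellow"  # caution zone
    return "green"       # safe distance

# Overlaying these bands on the video feed encodes depth without forcing the
# RO to read numeric distances, keeping cognitive load low.
for d in (3.0, 10.0, 40.0):
    print(d, depth_zone_color(d))
```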

5.7 Calibrate all video cameras and stitch images from overlapping video streams

One of the challenges mentioned in Sections 4.3 and 4.5 is that uncalibrated cameras and changing lighting conditions can increase cognitive load, making it challenging to drive a vehicle remotely. Additionally, during our observations we noticed that different zoom levels of the front and side cameras can create confusion and make it more difficult to estimate distances to objects. Thus, it is important to calibrate all the front cameras and apply image processing techniques to reduce the RO's cognitive load and to avoid losing details when the video image is dark (Figure 8). Additionally, the zoom levels of all cameras should be calibrated such that the distance to an object from the front cameras appears similar to the same distance viewed from a side camera. Another challenge, mentioned in Section 4.3, was the lack of image stitching. Image stitching is an expensive task because it requires image processing techniques that consume processing resources and time. However, it is important in order to provide a clear image of the environment, reducing the RO's cognitive load.
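As a concrete illustration of the stitching step, the sketch below uses OpenCV's high-level stitching API; this is one possible implementation choice on our part, and stitching live camera feeds frame by frame would be considerably more demanding than this example suggests:

```python
import cv2  # OpenCV; one common choice for image stitching

def stitch_front_views(frames: list):
    """Stitch overlapping front-camera frames into one wide-angle view, so an
    object in the overlap region (like the cone in Figure 4) appears once."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(frames)
    if status != cv2.Stitcher_OK:
        return None  # fall back to showing separate, calibrated feeds
    return panorama

# Per-camera calibration (matching brightness, contrast, and color across
# feeds) should be applied before stitching; histogram matching between
# adjacent feeds is one simple option.
```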

5.8 Visualize network and video quality

In Section 4.3, we mentioned that video and communication quality are among the major teleoperation challenges. Multiple companies and teams work toward the goal of reducing latency and improving the video quality of remote vehicles. However, this challenge can also be mitigated by improving the user experience. Giving the RO continuous feedback about the network quality and the frame rate at any particular moment can help increase the RO's situation awareness and thus resolve misunderstandings the RO might have. For example, visualizing network speed, similarly to what is done today in online video games, can be an intuitive way to show network latency (see also the two rightmost icons in Figure 7).
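A minimal sketch of such an indicator follows; the thresholds are loosely based on the values our interviewees mentioned (e.g., P8's 250 ms latency bound and FPS comments) and would need empirical tuning:

```python
def network_indicator(latency_ms: float, fps: float) -> str:
    """Condense link health into a simple colored indicator for the RO's UI,
    similarly to ping indicators in online video games."""
    if latency_ms > 250 or fps < 20:
        return "red"     # teleoperation likely unsafe; consider stopping the AV
    if latency_ms > 150 or fps < 24:
        return "yellow"  # degraded; the RO should increase safety margins
    return "green"

print(network_indicator(latency_ms=120, fps=30))  # -> "green"
```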

6 LIMITATIONS

Our research has several limitations, mostly related to the remote driving experiment. First, the experimental scenario was somewhat limited in scope. For example, ROs were not required to drive backwards or to drive within traffic (an essential safety precaution). Thus, it was not possible to evaluate the usage of the rear, right, and left cameras. Additionally, the experiment was performed in hot and dry weather. Thus, it was not possible to examine weather-related phenomena, such as sliding or bad visibility. Finally, as one can see in Figure 1 (right), the experiment site is relatively flat. This did not allow us to investigate ROs' sensations on roads with inclinations, which might possibly worsen the physical disconnect between the AV and the RO (since the ROs would probably not feel the forces applied on the vehicle).

Second, several other limitations were related to the teleoperation station itself. The tele-driving tasks were performed using an initial version of a single user interface and were not compared to other teleoperation stations or user interfaces. Thus, some difficulties could be related to the specific interface design. For example, during the experiment, the engineers discovered a delay between the steering wheel and its representation on the screen, in addition to the inevitable network latency. Such a deficiency could be another reason for the over-steering and under-steering phenomena. In addition, since the vehicle's three front cameras were arranged to minimize the overlap among them, the full width of the vehicle's front part was cut off from view (see Figures 6 and 8). Such an arrangement could reduce the spatial orientation of the ROs during the experiment. Additionally, almost all the ROs complained about the lack of pedal sensitivity. While we believe that this problem stems from the disconnect of the RO from the actual vehicle, a set of pedals with different sensitivity might be able to alleviate this problem.

Another limitation is that most of the remote driving participants were employees of a single company that is involved in the design and development of tele-driving technology. While we observed their behavior when remotely driving (all but one had no prior remote driving experience), their answers might be biased, as they may have wished the experiment to be successful.

Finally, regarding the expert interviews, while the interviewees were experts in their respective fields (AVs, teleoperation, command and control, and the automotive industry in general), not all of them had had the chance to remotely drive a vehicle. In addition, our findings would have been more robust had we been able to interview more experts.

7 CONCLUSION

Autonomous vehicles have been rapidly evolving in recent years, with the promise of replacing human-driven vehicles and changing not only the vehicle industry, but also the world around us. However, at least in the near future, it seems that AVs will not be able to handle every situation they encounter. Thus, a remote operator will be needed to help in cases in which the vehicle does not know, or cannot decide, how to proceed. As such, AV teleoperation is emerging as a new and promising field of research. The purpose of this work was to unveil the major tele-driving challenges in order to inform the design of future teleoperation interfaces. Taking a first look at AV teleoperation from the perspective of the remote operator's task and interface, we present a framework of teleoperation challenges grouped into six clusters and provide initial suggestions for the design of AV teleoperation interfaces.

This work lays an initial foundation for future research and raises important challenges which should be addressed by designers and engineers in the future. How can we bridge the physical disconnect between an RO and an AV without investing in sophisticated motion simulators or specialized force-sensitive tele-driving suits? How can we convey remote sounds to ROs without overwhelming them? Can user interface methods alleviate engineering problems such as latency? How can we compensate for the RO's inability to change her point of view? How can we help ROs create correct mental models of the remote environment in very short periods of time? All these questions and many more are left for future research. Addressing the above challenges might require not only a creative and multi-disciplinary approach, but also the adoption of novel design paradigms. One such paradigm is tele-assistance [35], in which ROs are not required to take full remote control of the AV in order to provide effective assistance. Under this paradigm, AVs remain responsible for their own safety and movement in space, while the RO only provides short, discrete commands that resolve emerging ambiguities (see the sketch below). Additionally, researchers, designers, and engineers will have to think about teleoperation more holistically; in particular, to examine a solution in which a collection of co-located teleoperation stations within a single control center provides assistance to tens of thousands of AVs in complex urban environments on a daily basis – teleoperation as a service. Finally, given the various challenges highlighted in our research, it is clear that ROs will not only need to go through a preliminary selection process, but will also have to be trained accordingly in order to be capable of such a complex and responsible job.
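To illustrate what tele-assistance could look like at the interface boundary, the sketch below models the short, discrete guidance commands an RO might send while the AV retains responsibility for safe execution. It is entirely hypothetical: the command vocabulary and fields are our own assumptions, not a specification from [35].

    from dataclasses import dataclass
    from enum import Enum, auto

    class GuidanceType(Enum):
        # A hypothetical vocabulary of discrete RO commands.
        CONFIRM_PATH = auto()     # approve a path the AV proposed
        CROSS_OBSTACLE = auto()   # e.g., pass a puddle judged safe
        WAIT = auto()             # hold position until further notice
        PROCEED_SLOWLY = auto()   # advance cautiously past an ambiguity

    @dataclass
    class GuidanceCommand:
        vehicle_id: str
        operator_id: str
        guidance: GuidanceType
        # The AV remains responsible for safety and may reject a command
        # that conflicts with its own perception and planning constraints.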

ACKNOWLEDGEMENTS

The research was supported by the Israeli Innovation Authority through the Andromeda consortium. We would like to thank the people from DriveU and especially Eli Shapira for allowing us to participate in their remote driving experiment and for their support in this study. We would also like to thank all our interviewees for sharing their knowledge and insights.

REFERENCES

  • Julie A Adams. 2007. Unmanned vehicle situation awareness: A path forward. In Human systems integration symposium, 31–89.
  • P. Bazilinskyy, S. M. Petermeijer, V. Petrovych, D. Dodou, and J. C.F. de Winter. 2018. Take-over requests in highly automated driving: A crowdsourcing survey on auditory, vibrotactile, and visual displays. Transp. Res. Part F Traffic Psychol. Behav. 56, (2018), 82–98. DOI:https://doi.org/10.1016/j.trf.2018.04.001
  • Berthold Färber. 2016. Communication and Communication Problems Between Autonomous Vehicles and Human Drivers.
  • Shadan Sadeghian Borojeni, Susanne C.J. Boll, Wilko Heuten, Heinrich H. Bülthoff, and Lewis Chuang. 2018. Feel the movement: Real motion influences responses to Take-over requests in highly automated vehicles. Conf. Hum. Factors Comput. Syst. - Proc. 2018-April, (2018), 1–13. DOI:https://doi.org/10.1145/3173574.3173820
  • Shadan Sadeghian Borojeni, Lewis Chuang, Wilko Heuten, and Susanne Boll. 2016. Assisting drivers with ambient take-over requests in highly automated driving. AutomotiveUI 2016 - 8th Int. Conf. Automot. User Interfaces Interact. Veh. Appl. Proc. (2016), 237–244. DOI:https://doi.org/10.1145/3003715.3005409
  • Shadan Sadeghian Borojeni, Lars Weber, Wilko Heuten, and Susanne Boll. 2018. From reading to driving: priming mobile users for take-over situations in highly automated driving. In Proceedings of the 20th international conference on human-computer interaction with mobile devices and services, 1–12.
  • Martijn Bout, Anna Pernestal Brenden, Maria Klingeagrd, Azra Habibovic, and Marc Philipp Böckle. 2017. A head-mounted display to support teleoperations of shared automated vehicles. AutomotiveUI 2017 - 9th Int. ACM Conf. Automot. User Interfaces Interact. Veh. Appl. Adjun. Proc. (2017), 62–66. DOI:https://doi.org/10.1145/3131726.3131758
  • Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qual. Res. Psychol. (2006). DOI:https://doi.org/10.1191/1478088706qp063oa
  • Nidia Calvo and Elias Seixas Lorosa. 2005. A Survey on Teleoperation. 100, August (2005), 607–612.
  • Jessie Y.C. Chen and Michael J. Barnes. 2014. Human - Agent teaming for multirobot control: A review of human factors issues. IEEE Trans. Human-Machine Syst. 44, 1 (2014), 13–29. DOI:https://doi.org/10.1109/THMS.2013.2293535
  • Jessie Y.C. Chen, Ellen C. Haas, and Michael J. Barnes. 2007. Human performance issues and user interface design for teleoperated robots. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 37, 6 (2007), 1231–1245. DOI:https://doi.org/10.1109/TSMCC.2007.905819
  • Nancy J Cooke. 2006. Human factors of remotely operated vehicles. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 166–169.
  • S. R. Ellis, B. D. Adelstein, S. Baumeler, G. J. Jense, and R. H. Jacoby. 1999. Sensor spatial distortion, visual latency, and update rate effects on 3D tracking in virtual environments. Proc. - Virtual Real. Annu. Int. Symp. (1999), 218–221. DOI:https://doi.org/10.1109/vr.1999.756954
  • Stephen R. Ellis, Katerina Mania, Bernard D. Adelstein, and Michael I. Hill. 2004. Generalizeability of Latency Detection in a Variety of Virtual Environments. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 48, 23 (2004), 2632–2636. DOI:https://doi.org/10.1177/154193120404802306
  • M. R. Endsley. 1995. Measurement of situation awareness in dynamic systems. Hum. Factors (1995). DOI:https://doi.org/10.1518/001872095779049499
  • Jan B.F. Van Erp and Pieter Padmos. 2003. Image parameters for driving with indirect viewing systems. Ergonomics 46, 15 (2003), 1471–1499. DOI:https://doi.org/10.1080/0014013032000121624
  • Stefanie M. Faas, Andrea C. Kao, and Martin Baumann. 2020. A Longitudinal Video Study on Communicating Status and Intent for Self-Driving Vehicle–Pedestrian Interaction. Conf. Hum. Factors Comput. Syst. - Proc. (2020), 1–14. DOI:https://doi.org/10.1145/3313831.3376484
  • Daniel J. Fagnant and Kara Kockelman. 2015. Preparing a nation for autonomous vehicles: Opportunities, barriers and policy recommendations. Transp. Res. Part A Policy Pract. 77, (2015), 167–181. DOI:https://doi.org/10.1016/j.tra.2015.04.003
  • Terrence Fong. 2001. Collaborative Control: A Robot-Centric Model for Vehicle Teleoperation. Aaai Mishkin (2001), 198.
  • Terrence Fong and Charles Thorpe. 2001. Vehicle teleoperation interfaces. Auton. Robots 11, 1 (2001), 9–18. DOI:https://doi.org/10.1023/A:1011295826834
  • Yannick Forster, Frederik Naujoks, and Alexandra Neukum. 2016. Your Turn or My Turn? (2016), 253–260. DOI:https://doi.org/10.1145/3003715.3005463
  • Jean Michael Georg and Frank Diermeyer. 2019. An adaptable and immersive real time interface for resolving system limitations of automated vehicles with teleoperation. Conf. Proc. - IEEE Int. Conf. Syst. Man Cybern. 2019-October, (2019), 2659–2664. DOI:https://doi.org/10.1109/SMC.2019.8914306
  • Jean Michael Georg, Johannes Feiler, Frank Diermeyer, and Markus Lienkamp. 2018. Teleoperated Driving, a Key Technology for Automated Driving? Comparison of Actual Test Drives with a Head Mounted Display and Conventional Monitors. IEEE Conf. Intell. Transp. Syst. Proceedings, ITSC 2018-November, (2018), 3403–3408. DOI:https://doi.org/10.1109/ITSC.2018.8569408
  • Gaetano Graf and Heinrich Hussmann. 2020. User requirements for remote teleoperation-based interfaces. Adjun. Proc. - 12th Int. ACM Conf. Automot. User Interfaces Interact. Veh. Appl. AutomotiveUI 2020 (2020), 85–88. DOI:https://doi.org/10.1145/3409251.3411730
  • Gaetano Graf, Henri Palleis, and Heinrich Hussmann. 2020. A Design Space for Advanced Visual Interfaces for Teleoperated Autonomous Vehicles. ACM Int. Conf. Proceeding Ser. (2020). DOI:https://doi.org/10.1145/3399715.3399942
  • Robert C Hampshire, Shan Bao, Walter S Lasecki, Andrew Daw, and Jamol Pender. 2020. Beyond safety drivers: Applying air traffic control principles to support the deployment of driverless vehicles. PLoS One 15, 5 (2020), e0232837.
  • Enrique Hortal. 2019. Contribution of neuroscience to the teleoperation of rehabilitation robot. (2019), 49–67. DOI:https://doi.org/10.1007/978-3-319-95705-0_4
  • Nidhi Kalra and Susan M Paddock. 2016. Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability? Transp. Res. Part A Policy Pract. 94, (2016), 182–193.
  • Oral Kaplan, Goshiro Yamamoto, Yasuhide Yoshitake, Takafumi Taketomi, Christian Sandor, and Hirokazu Kato. 2017. In-situ visualization of pedaling forces on cycling training videos. 2016 IEEE Int. Conf. Syst. Man, Cybern. SMC 2016 - Conf. Proc. (2017), 994–999. DOI:https://doi.org/10.1109/SMC.2016.7844371
  • Carmen Kettwich and Annika Dreßler. 2020. Requirements of Future Control Centers in Public Transport. Adjun. Proc. - 12th Int. ACM Conf. Automot. User Interfaces Interact. Veh. Appl. AutomotiveUI 2020 (2020), 69–73. DOI:https://doi.org/10.1145/3409251.3411726
  • Sangwon Kim, Jennifer Jah Eun Chang, Hyun Ho Park, Seon Uk Song, Chang Bae Cha, Ji Won Kim, and Namwoo Kang. 2020. Autonomous Taxi Service Design and User Experience. Int. J. Hum. Comput. Interact. 36, 5 (2020), 429–448. DOI:https://doi.org/10.1080/10447318.2019.1653556
  • M. F. Land and D. N. Lee. 1994. Where we look when we steer. Nature (1994). DOI:https://doi.org/10.1038/369742a0
  • David R. Large, Kyle Harrington, Gary Burnett, Jacob Luton, Peter Thomas, and Pete Bennett. 2019. To please in a pod: Employing an anthropomorphic agent-interlocutor to enhance trust and user experience in an autonomous, self-driving vehicle. Proc. - 11th Int. ACM Conf. Automot. User Interfaces Interact. Veh. Appl. AutomotiveUI 2019 (2019), 49–59. DOI:https://doi.org/10.1145/3342197.3344545
  • Karthik Mahadevan, Sowmya Somanath, and Ehud Sharlin. 2018. Communicating awareness and intent in autonomous vehicle-pedestrian interaction. Conf. Hum. Factors Comput. Syst. - Proc. 2018-April, (2018), 1–12. DOI:https://doi.org/10.1145/3173574.3174003
  • Clare Mutzenich, Szonya Durant, Shaun Helman, and Polly Dalton. 2021. Updating our understanding of situation awareness in relation to remote operators of autonomous vehicles. Cogn. Res. Princ. Implic. 6, 1 (2021), 1–17.
  • Stefan Neumeier, Philipp Wintersberger, Anna Katharina Frison, Armin Becher, Christian Facchi, and Andreas Riener. 2019. Teleoperation: The Holy Grail to solve problems of automated driving? Sure, but latency matters. Proc. - 11th Int. ACM Conf. Automot. User Interfaces Interact. Veh. Appl. AutomotiveUI 2019 (2019), 186–197. DOI:https://doi.org/10.1145/3342197.3344534
  • Lucio Tommaso De Paolis and Valerio De Luca. 2019. Augmented visualization with depth perception cues to improve the surgeon's performance in minimally invasive surgery. Med. Biol. Eng. Comput. 57, 5 (2019), 995–1013. DOI:https://doi.org/10.1007/s11517-018-1929-6
  • So Yeon Park, Dylan James Moore, and David Sirkin. 2020. What a Driver Wants: User Preferences in Semi-Autonomous Vehicle Decision-Making. Conf. Hum. Factors Comput. Syst. - Proc. (2020), 1–13. DOI:https://doi.org/10.1145/3313831.3376644
  • Ioannis Politis, Stephen Brewster, and Frank Pollick. 2015. Language-based multimodal displays for the handover of control in autonomous cars. (2015), 3–10. DOI:https://doi.org/10.1145/2799250.2799262
  • Dirk Rothenbucher, Jamy Li, David Sirkin, Brian Mok, and Wendy Ju. 2016. Ghost driver: A field study investigating the interaction between pedestrians and driverless vehicles. 25th IEEE Int. Symp. Robot Hum. Interact. Commun. RO-MAN 2016 (2016), 795–802. DOI:https://doi.org/10.1109/ROMAN.2016.7745210
  • SAE. 2018. Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. (2018). Retrieved from https://www.sae.org/standards/content/j3016_201806/
  • Shota Sasai, Itaru Kitahara, Yoshinari Kameda, Yuichi Ohta, Masayuki Kanbara, Yoichi Morales, Norimichi Ukita, Norihiro Hagita, Tetsushi Ikeda, and Kazuhiko Shinozawa. 2015. MR visualization of wheel trajectories of driving vehicle by seeing-through dashboard. Proc. 2015 IEEE Int. Symp. Mix. Augment. Real. Work. ISMARW 2015 (2015), 40–46. DOI:https://doi.org/10.1109/ISMARW.2015.17
  • Kristin E. Schaefer, Edward R. Straub, Jessie Y.C. Chen, Joe Putney, and A. W. Evans. 2017. Communicating intent to develop shared situation awareness and engender trust in human-agent teams. Cogn. Syst. Res. 46, (2017), 26–39. DOI:https://doi.org/10.1016/j.cogsys.2017.02.002
  • David R Scribner and James W Gombash. 1998. The effect of stereoscopic and wide field of view conditions on teleoperator performance. March (1998).
  • Tao Zhang. 2020. Toward Automated Vehicle Teleoperation: Vision, Opportunities, and Challenges. IEEE Internet Things J. 7, 12 (2020), 11347–11354. DOI:https://doi.org/10.1109/JIOT.2020.3028766

FOOTNOTES

1. The sub-category Lack of Sounds from Other Entities was added after a holistic evaluation of all the research findings and based on common sense.

2. Understeer and oversteer are vehicle dynamics terms that describe the sensitivity of a vehicle to steering. Oversteer occurs when a car turns (steers) more than the amount commanded by the driver; conversely, understeer occurs when a car turns less than the amount commanded.

3. The importance of hearing sounds was mentioned by 7 experts and 1 RO. However, not all the participants mentioned sounds in a particular sub-category. In addition, the sub-category Lack of Sounds from Other Entities was not explicitly mentioned; however, after evaluating our findings holistically and applying common sense, we decided to add this category as well.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

CHI '22, April 29–May 05, 2022, New Orleans, LA, USA

© 2022 Association for Computing Machinery.
ACM ISBN 978-1-4503-9157-3/22/05…$15.00.
DOI: https://doi.org/10.1145/3491102.3501827