Eyes4ICU Workshop at ETRA 2023


Eyes4ICU. New Frontiers of Eye Tracking: Eyes for Interaction, Communication, and Understanding.


*** Eyes4ICU Workshop will take place as part of ETRA 2023 in Tübingen ***


General information

Eye tracking has been used in Human-Computer Interaction for decades, be it for evaluating interactions and their usability or as an interaction modality. Recent advances such as the miniaturization of hardware and progress in machine learning enable eye tracking technology to become a core part of pervasive human-computer interaction, just as it is inherent in human-human interaction. Working on these new frontiers, expanding the boundaries, and exploring state-of-the-art eye tracking methods, eye movement models, and gaze-based applications is the aim of the Eyes4ICU workshop.

Workshop contributions showcase novel work, which may include completed systems and/or studies, preliminary results of pilot studies, or early design iterations. We welcome submissions from any field using eye tracking for interaction. During the workshop, participants are asked to contribute presentations, position statements, or discussions.

Eye tracking has been widely used in Human-Computer Interaction (HCI) for decades – be it for evaluating interactions and their usability or as an interaction modality [for a review see Duchowski 2020; Majaranta and Bulling 2014; Majaranta and Räihä 2002]. Gaze holds great potential for universal inclusive technologies, as it may be used both for communication between humans and between human and machine. Recent advances in eye tracking methods, technology, and applications, e.g., the miniaturization of hardware, new computational approaches for data analysis, and rapid progress in artificial intelligence (AI), are trailblazers for eye tracking to become central to everyday human-computer interaction. Working on these new frontiers and exploring the boundaries of gaze-based human-computer interaction is the aim of the workshop. By bringing together researchers from three different perspectives – an empirical methods perspective, a computational modeling perspective, and an application perspective – the workshop will foster unified and aligned progress in interdisciplinary eye tracking research and its application to human-computer interaction.

Empirical Methods

Human-Computer Interaction, especially using eye gaze, has often been researched in single-user scenarios. Pervasive technologies and mobile eye tracking enable multi-user as well as multi-device interaction. Although eye tracking in the wild often results in noisy and inaccurate data [Bednarik et al. 2013], the shift towards mobile eye tracking opens up new possibilities for research and practical applications. Using carefully designed psychological studies, both in the laboratory and in natural settings, as a basis for valid computational models advances both the models and our understanding of users, and thus also helps increase trust in AI technologies and in science in general.

Computational Modeling

The advances in artificial intelligence have been a catalyst for improved models of eye movement analysis, such as event detection [Zemblys et al. 2019] and user modeling and prediction, e.g., of cognitive abilities [Barral et al. 2020] or confusion [Sims and Conati 2020]. Adaptations, alterations, and new developments of customized computational methods aim to further enhance the speed and predictive power of eye movement analyses. The challenges of eye movement modeling are also those of AI and machine learning in general: how to train models that generalize to out-of-distribution data, how to learn from limited labels and make better use of unsupervised and semi-supervised methods, and how to ensure privacy and personal data protection for sensitive eye gaze data.

Gaze-based Applications

Within the last decade, there have been several developments that, when combined, have pushed gaze-based assistance forward. Technological advances have led to a huge increase in the spatial and temporal resolution of gaze tracking, as well as the miniaturization of eye tracking hardware. This also enables innovative solutions for gaze interaction [Strauch et al. 2017]. The aforementioned advances in artificial intelligence, producing powerful models, enable the inclusion of promising gaze parameters (e.g., binocular coordination, microsaccades, pupil movements) in real time. Highly effective and efficient machine learning algorithms enhance predictive power and thus enable extremely fast feedback. Future interactive technologies can thus present user-adapted information in a timely and accurate manner, which might be crucial for several fields of application, from driving and machine operation to medical diagnosis and interactive learning.

The focus of the workshop is to stimulate the development of real-time gaze-based intent prediction in HCI. The current state of eye tracking hardware and software, such as mobile eye tracking glasses and machine learning algorithms for real-time interaction, enables new insights into gaze-based interaction. To investigate new directions of research and applications and to cross current technological boundaries, we will stimulate lively discussion within an interdisciplinary group of experts in eye tracking, among them experts in machine learning, psychology, neuroscience, and various application fields. In addition to invited talks, the workshop will include a panel discussion. The panelists, from academia and industry, will advance the discussion on potential applications for real-time gaze-based communication, Human-Computer and Human-Machine Interaction, and media and assistive technologies.

The half-day workshop structure is planned with both the presentation of novel approaches in gaze-based human-computer interaction (presentation session and panel discussion) and community building (break-out group work) in mind. The workshop schedule is planned as follows:

(1) Introduction (15 minutes). Introduction to the workshop topic and goals by the main organizers (Krzysztof Krejtz, Anke Huckauf); introduction of all participants.

(2) Scientific Contributions (90 minutes). Invited keynote lecture given by an author of a recent review article; presentations of extended abstracts.

(3) Break-out Sessions (45 minutes). Separate break-out groups focusing on Methods (lead: Andreas Bulling, Roman Bednarik), on Models (lead: Dan Witzner Hansen, Maria Bielikova), and on Applications (lead: Izabela Krejtz, Peter Kiefer). Break-out sessions will start with a lightning talk given by experts in the field. Attendees' position statements provide short presentations of their own ideas and arguments for particular directions of development.

(4) Wrap-up & Panel Discussion (30 minutes). A panel consisting of workshop organizers, invited experts, and workshop participants will discuss the lessons learned and the future of eye tracking technology in gaze-based human-computer interaction. The workshop will have a hybrid form, with interactive work via Zoom’s whiteboard and break-out rooms. Additionally, we will equip the room with a 360-degree camera, microphone, and speaker, giving remote participants an immersive experience of the onsite sessions.

To contribute, please submit an extended abstract of at most three pages, excluding references. Papers must be original and not previously accepted for publication or under review elsewhere.

We use the ACM article template for all submissions. To prepare your paper for submission, please use the single-column format with the provided Word or LaTeX templates. For LaTeX, use the “manuscript,review,anonymous” options available in the template. Please use the "author year" citation and reference format.
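For LaTeX submissions, the start of the document would look roughly like this (a minimal sketch using the ACM acmart class with the options named above; the title is a placeholder):

```latex
% Single-column, anonymized review format of the ACM acmart class
\documentclass[manuscript,review,anonymous]{acmart}

% "author year" citation and reference format
\citestyle{acmauthoryear}

\begin{document}

\title{Your Eyes4ICU Submission Title}
% Author and affiliation commands go here; the "anonymous"
% option conceals them automatically in the review PDF.

\maketitle

\end{document}
```

The "anonymous" option hides author information in the compiled PDF, so authors can keep their real names in the source while still satisfying the double-blind requirement below.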

Links to paper templates:

  • LaTeX (Use sample-manuscript.tex for submissions)
  • Microsoft Word
  • Overleaf (or search for ACM Conference Proceedings "Master" Template)

Submitted papers must be appropriately anonymized for double-blind peer review, concealing the authors' identities and institutions.

Please submit only the PDF version of your manuscript. Submissions may include supplementary material, such as videos, code, or datasets that will be archived together with the paper in the ACM DL. Video submissions are not required, but encouraged to help demonstrate interactive systems that are otherwise difficult to showcase using images or text. Videos should use the MP4 format with H.264 codec and file size should not exceed 100 MB. We recommend the standard 1920x1080 resolution (1280x720 as an alternative). Any supplementary materials, including the video, have to be anonymized for review.

Eyes4ICU workshop uses the Precision Conference System (PCS) to handle the submission and reviewing process: https://new.precisionconference.com/user/login?society=etra
Please ensure that you are submitting to Society=ETRA, Conference=ETRA 2023, Track=Eyes4ICU.

All submissions will go through a single-phase review process. They will be carefully reviewed by three reviewers, who will evaluate the submissions based on their fit with the workshop theme, originality, and quality. After the review process, the authors will receive the final acceptance or rejection notification.

Accepted papers will be published in the adjunct proceedings of the ETRA’23 conference in the ACM Digital Library.

Note that at least one author of each accepted submission must attend the workshop, give a short presentation, and register for both the workshop and at least one day of the main ETRA conference. Details about the presentation format will follow after the acceptance notification.

Important dates

February 26, 2023 (23:59) Paper submissions

March 27, 2023 (23:59) Notifications to authors 

April 4, 2023 (23:59) Camera-ready paper submission

Note that all deadlines are in the Anywhere on Earth (AoE) time zone.