Publications
2020
Augmenting Public Bookcases to Support Book Sharing
Maximilian Schrapel,
Thilo Schulz,
Michael Rohs
Proceedings of the 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services
Design and Evaluation of On-the-Head Spatial Tactile Patterns
Oliver Beren Kaul,
Michael Rohs,
Marc Mogalle
19th International Conference on Mobile and Ubiquitous Multimedia - MUM '20
We propose around-the-head spatial vibrotactile patterns for representing different kinds of notifications. The patterns are defined in terms of stimulus location, intensity profile, rhythm, and roughness modulation. A first study evaluates recall and distinguishability of 30 patterns, as well as agreement on meaning without a predetermined context: Agreement is low, yet the recognition rate is surprisingly high. We identify which kinds of patterns users recognize well and which ones they prefer. Static stimulus location patterns have a higher recognition rate than dynamic patterns, which move across the head as they play. Participants preferred dynamic patterns for comfort. A second study shows that participants are able to distinguish substantially more around-the-head spatial patterns than smartphone-based patterns. Spatial location has the highest positive impact on accuracy among the examined features, so this parameter allows for a large number of levels.
Vibrotactile Funneling Illusion and Localization Performance on the Head
Oliver Beren Kaul,
Michael Rohs,
Benjamin Simon,
Kerem Can Demir,
Kamillo Ferry
Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
The vibrotactile funneling illusion is the sensation of a single (non-existing) stimulus somewhere in-between the actual stimulus locations. Its occurrence depends upon body location, distance between the actuators, signal synchronization, and intensity. Related work has shown that the funneling illusion may occur on the forehead. We were able to reproduce these findings and explored five further regions to get a more complete picture of the occurrence of the funneling illusion on the head. The results of our study (24 participants) show that the actuator distance, for which the funneling illusion occurs, strongly depends upon the head region. Moreover, we evaluated the centralizing bias (smaller perceived than actual actuator distances) for different head regions, which also showed widely varying characteristics. We computed a detailed heat map of vibrotactile localization accuracies on the head. The results inform the design of future tactile head-mounted displays that aim to support the funneling illusion.
TactileWear: A Comparison of Electrotactile and Vibrotactile Feedback on the Wrist and Ring Finger
Dennis Stanke,
Tim Duente,
Michael Rohs
Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society
(NordiCHI ’20)
Wearables are getting more and more powerful. Tasks like notifications can be delegated to smartwatches. But the output capabilities of wearables seem to be stuck at displays and vibration. Electrotactile feedback may serve as an energy-efficient alternative to standard vibration feedback. We developed prototypes of wristbands and rings and conducted two studies to compare electrotactile and vibrotactile feedback. The prototypes have either four electrodes for electrotactile feedback or four actuators for vibration feedback. In a first study we analyzed the localization characteristics of the created stimuli. The results suggest more strongly localized sensations for electrotactile feedback, compared to vibrotactile feedback, which was more diffuse. In a second study we created notification patterns for both modalities and evaluated recognition rates, verbal associations, and satisfaction. Although the recognition rates were higher with electrotactile feedback, vibrotactile feedback was judged as more comfortable and less stressful. Overall, the results show that electrotactile feedback can be a viable alternative to vibrotactile feedback for wearables, especially for notification rings.
Watch my Painting: The Back of the Hand as a Drawing Space for Smartwatches
Maximilian Schrapel,
Florian Herzog,
Steffen Ryll,
Michael Rohs
Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems
Skiables: Towards a Wearable System Mounted on a Ski Boot for Measuring Slope Conditions
Maximilian Schrapel,
Jonathan Liebers,
Michael Rohs,
Stefan Schneegass
19th International Conference on Mobile and Ubiquitous Multimedia
2019
Enhancement of a Lightweight Attribute-Based Encryption Scheme for the Internet of Things
Syh-Yuan Tan,
Kin-Woon Yeow,
Seong Oun Hwang
IEEE Internet of Things Journal
Refining Vision Videos
Kurt Schneider,
Melanie Busch,
Oliver Karras,
Maximilian Schrapel,
Michael Rohs
CoRR
Complex software-based systems involve several stakeholders, their activities, and their interactions with the system. Vision videos are used during the early phases of a project to complement textual representations.
They visualize previously abstract visions of the product and its use.
By creating, elaborating, and discussing vision videos, stakeholders and developers gain an improved shared understanding of how those abstract visions could translate into concrete scenarios and requirements to which individuals can relate.
[Question/problem] In this paper, we investigate two aspects of refining vision videos: (1) Refining the vision by providing alternative answers to previously open issues about the system to be built. (2) A refined understanding of the camera perspective in vision videos. The impact of using a subjective (or “ego”) perspective is compared to the usual third-person perspective.
[Methodology] We use shopping in rural areas as a real-world application domain for refining vision videos. Both aspects of refining vision videos were investigated in an experiment with 20 participants.
[Contribution] Subjects made a significant number of additional contributions when they had received not only video or text but also both – even with very short text and short video clips. Subjective video elements were rated as positive.
However, there was no significant preference for either subjective or non-subjective videos in general.
Talk to Me Intelligibly: Investigating An Answer Space to Match the User's Language in Visual Analysis
Jan-Frederik Kassel,
Michael Rohs
Proceedings of the 2019 on Designing Interactive Systems Conference
Conversational interfaces (CIs) have the potential to empower a broader spectrum of users to independently conduct visual analysis. Yet, recent approaches do not fully consider the user's characteristics. In particular, the objective of matching the user's language has been understudied in visual analysis. In order to close this gap, we introduce an answer space motivated by Grice's cooperative principle for framing personalized communication in complex data situations. We conducted both an online survey (N=76) to analyze communication preferences and a qualitative experiment (N=10) to investigate personalized conversations with an existing CI. In order to match the user's language properly, our results suggest considering additional user characteristics along with the user's knowledge level. While mismatched communication preferences trigger negative reactions, preference-aligned communication evokes positive reactions. As our analysis confirms the importance of matching the user's language in visual analysis, we provide design implications for future CIs.
Online Learning of Visualization Preferences through Dueling Bandits for Enhancing Visualization Recommendations
Jan-Frederik Kassel,
Michael Rohs
EuroVis 2019 - Short Papers
A visualization recommender supports the user through automatic visualization generation. While previous contributions primarily concentrated on integrating visualization design knowledge either explicitly or implicitly, they mostly do not consider the user's individual preferences. In order to close this gap we explore online learning of visualization preferences through dueling bandits. Additionally, we consider this challenge from a usability perspective. Through a user study (N = 15), we empirically evaluate not only the bandit's performance in terms of both effectively learning preferences and properly predicting visualizations (satisfaction regarding the last prediction: μ = 85%), but also the participants' effort with respect to the learning procedure (e.g., NASA-TLX = 24.26). While our findings affirm the applicability of dueling bandits, they further provide insights on both the needed training time in order to achieve a usability-aligned procedure and the generalizability of the learned preferences. Finally, we point out a potential integration into a recommender system.
3DTactileDraw: A Tactile Pattern Design Interface for Complex Arrangements of Actuators
Oliver Beren Kaul,
Leonard Hansing,
Michael Rohs
Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems
Creating tactile patterns for a grid or a 3D arrangement of a large number of actuators presents a challenge as the design space is huge. This paper explores two different possibilities of implementing an easy-to-use interface for tactile pattern design on a large number of actuators around the head. Two user studies were conducted in order to iteratively improve the prototype to fit user needs.
Concept for Navigating the Visually Impaired using a Tactile Interface around the Head
Oliver Beren Kaul,
Michael Rohs
Hacking Blind Navigation Workshop at CHI '19
Approximate Distributed Discrete Event Simulation using Semi-Conservative Look-Ahead Estimation
Desheng Fu,
Marcus O'Connor,
Matthias Becker,
Helena Szczerbicka
2019 IEEE/ACM 23rd International Symposium on Distributed Simulation and Real Time Applications (DS-RT)
Evaluation of Algorithms for Forecasting of Insect Populations
Matthias Becker
33rd European Simulation and Modelling Conference
A Review on the Planning Problem for the Installation of Offshore Wind Farms
Daniel Rippel,
Nicolas Jathe,
Matthias Becker,
Michael Lütjen,
Helena Szczerbicka,
Michael Freitag
IFAC-PapersOnLine
Offshore wind farms provide a promising technology to produce renewable and sustainable energy. Nevertheless, the installation and operation of offshore wind farms pose a particular challenge to the planning and execution of operations. This article aims to identify requirements towards a decision support tool for the installation planning. Therefore, it provides a review of existing research in planning approaches and summarizes the overall planning problem. Afterwards this problem is decomposed into single tasks, according to their planning horizons. This decomposition shows a high level of interconnection between tasks across all levels. Higher levels provide constraints for the tasks on lower levels, while the results of these tasks are incorporated at higher levels. Finally, the article discusses the advantages and disadvantages of different approaches to solve these tasks.
2018
Fußverkehr als Beitrag zur Gesunden Stadt
Anne Finger,
Lena Greinke,
Maximilian Schrapel
PLANERIN 5/2018
According to the WHO, lack of physical activity has become one of the leading risk factors for health problems (WHO 2007: 8), resulting from our changed living and working environments with their long periods of physical inactivity. Besides these phases, spent for example sitting at an office desk, our mobility behavior also plays a central role. Almost half of all trips made by car are five kilometers long or shorter (infas & DLR 2010, 41). These distances could also be covered by walking and cycling as components of active mobility.
This is where the research project "Aktive Navigation" of the research initiative "Mobiler Mensch: Intelligente Mobilität in der Balance von Autonomie, Vernetzung und Security" at Leibniz Universität Hannover comes in. Building on the use of wearables and smartphones, an app is being developed that selects a route to the destination based on a prediction of the user's daily activity. The route selection includes other modes of transport, but is intended overall to increase the daily step count and thus the user's physical activity.
Pentelligence: Combining Pen Tip Motion and Writing Sounds for Handwritten Digit Recognition
Maximilian Schrapel,
Max-Ludwig Stadler,
Michael Rohs
Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
Digital pens emit ink on paper and digitize handwriting. The range of the pen is typically limited to a special writing surface on which the pen's tip is tracked.
We present Pentelligence, a pen for handwritten digit recognition that operates on regular paper and does not require a separate tracking device. It senses the pen tip's motions and sound emissions when stroking.
Pen motions and writing sounds exhibit complementary properties. Combining both types of sensor data substantially improves the recognition rate.
Hilbert envelopes of the writing sounds and mean-filtered motion data are fed to neural networks for majority voting.
The results on a dataset of 9408 handwritten digits taken from 26 individuals show that motion+sound outperforms single-sensor approaches at an accuracy of 78.4% for 10 test users.
Retraining the networks for a single writer on a dataset of 2120 samples increased the precision to 100% for single handwritten digits at an overall accuracy of 98.3%.
JIS: Pest Population Prognosis with Escalator Boxcar Train
Kin-Woon Yeow,
Matthias Becker
2018 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM)
MuscleIO: Muscle-Based Input and Output for Casual Notifications
Tim Duente,
Justin Schulte,
Max Pfeiffer,
Michael Rohs
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.
Receiving and reacting to notifications on mobile devices can be cumbersome. We propose MuscleIO, the use of electrical muscle stimulation (EMS) for notification output and electromyography (EMG) for reacting to notifications. Our approach provides a one-handed, eyes-free, and low-effort way of dealing with notifications. We built a prototype that interleaves muscle input and muscle output signals using the same electrodes. EMS and EMG alternate such that the EMG input signal is measured in the gaps of the EMS output signal, so voluntary muscle contraction is measured during muscle stimulation. Notifications are represented as EMS signals and are accepted or refused either by a directional or a time-based EMG response. A lab user study with 12 participants shows that the directional EMG response is superior to the time-based response in terms of reaction time, error rate, and user preference. Furthermore, the directional approach is the fastest and the most intuitive for users compared to a button-based smartwatch interface as a baseline.
International Workshop on Integrating Physical Activity and Health Aspects in Everyday Mobility
Maximilian Schrapel,
Anne Finger,
Jochen Meyer,
Michael Rohs,
Johannes Schöning,
Alexandra Voit
Accepted Workshops at Ubicomp 2018
Everyday mobility encompasses different forms of public and private transportation and different forms of physical activity. However, in general everyday mobility does not involve substantial levels of physical activity.
There are sometimes structural reasons or a lack of motivation and time to realize an active lifestyle in the context of mobility.
The goal of this workshop is to investigate ways to integrate physical activity into everyday mobility in accordance with widely accepted health recommendations. We aim to explore wearable and ambient systems that sense and support active navigation as well as conceptual aspects from a variety of perspectives, such as persuasive technologies, and thus invite researchers from different disciplines to contribute their point of view by means of position papers, posters, and demonstrations. One planned outcome of this workshop is a set of design guidelines for navigation systems that explicitly consider health aspects. For the full-day workshop we aim to explore requirements and design challenges in a creative setting.
Valletto: A Multimodal Interface for Ubiquitous Visual Analytics
Jan-Frederik Kassel,
Michael Rohs
Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems
Modern technologies enable data analysis in scenarios where keyboard and mouse are not available. Research on multimodality in visual analytics is facing this challenge. But existing approaches consider exclusively static environments with large screens. Therefore, we envision Valletto, a prototypical tablet app which allows the user to generate and specify visualizations through a speech-based conversational interface, through multitouch gestures, and through a conventional GUI interface. We conducted an initial expert evaluation to gain information on the modality function mapping and for the integration of different modalities. Our aim is to discuss design and interaction considerations in a mobile context which fits the user's daily life.
Integrating Recommended Physical Activity in Everyday Mobility
Maximilian Schrapel,
Anne Finger,
Michael Rohs
Accepted Workshop Papers at the Workshop on Augmented Humanity: Using Wearable and Mobile Devices for Health and Wellbeing at MobileHCI '18
Nowadays, wearables can easily monitor and display physical activities throughout the day. Health recommendations are often used to set daily goals, but these barely take individual requirements into account. In addition, due to limited individual adaptability, there are various life situations in which these goals are not achieved due to missing motivation or time. In this position paper we discuss in particular how health recommendations can be integrated into everyday life and what challenges arise. We also address spatial requirements that are necessary for an active lifestyle.
Requirements of Navigation Support Systems for People with Visual Impairments
Oliver Beren Kaul,
Michael Rohs
Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers
Improving an Evolutionary Approach to Sudoku Puzzles by Intermediate Optimization of the Population
Matthias Becker,
Sinan Balci
International Conference on Information Science and Applications
2017
On the security of two sealed-bid auction schemes
Kin-Woon Yeow,
Swee-Huay Heng,
Syh-Yuan Tan
2017 19th International Conference on Advanced Communication Technology (ICACT)
From Sealed-Bid Electronic Auction to Electronic Cheque
Kin-Woon Yeow,
Swee-Huay Heng,
Syh-Yuan Tan
International Conference on Information Science and Applications
Known Bid Attack on an Electronic Sealed-Bid Auction Scheme
Kin-Woon Yeow,
Swee-Huay Heng,
Syh-Yuan Tan
International Conference on Information Science and Applications
Zap++: A 20-channel Electrical Muscle Stimulation System for Fine-grained Wearable Force Feedback
Tim Duente,
Max Pfeiffer,
Michael Rohs
Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services
Electrical muscle stimulation (EMS) has been used successfully in HCI to generate force feedback and simple movements both in stationary and mobile settings. However, many natural limb movements require the coordinated actuation of multiple muscles. Off-the-shelf EMS devices are typically limited in their ability to generate fine-grained movements, because they only have a low number of channels and do not provide full control over the EMS parameters. More capable medical devices are not designed for mobile use or still have a lower number of channels and less control than is desirable for HCI research. In this paper we present the concept and a prototype of a 20-channel mobile EMS system that offers full control over the EMS parameters. We discuss the requirements of wearable multi-electrode EMS systems and present the design and technical evaluation of our prototype. We further outline several application scenarios and discuss safety and certification issues.
Emotion Actuator: Embodied Emotional Feedback through Electroencephalography and Electrical Muscle Stimulation
Mariam Hassib,
Max Pfeiffer,
Stefan Schneegass,
Michael Rohs,
Florian Alt
Proc. of CHI 2017
The human body reveals emotional and bodily states through measurable signals, such as body language and electroencephalography. However, such manifestations are difficult to communicate to others remotely. We propose EmotionActuator, a proof-of-concept system to investigate the transmission of emotional states in which the recipient performs emotional gestures to understand and interpret the state of the sender. We call this kind of communication embodied emotional feedback, and present a prototype implementation. To realize our concept we chose four emotional states: amused, sad, angry, and neutral. We designed EmotionActuator through a series of studies to assess emotional classification via EEG, and created an EMS gesture set by comparing composed gestures from the literature to sign-language gestures. Interviews conducted in a final study with the end-to-end prototype revealed that participants like implicit sharing of emotions and find the embodied output to be immersive, but want to have control over which emotions are shared and with whom. This work contributes a proof-of-concept system and a set of design recommendations for designing embodied emotional feedback systems.
HapticHead: A Spherical Vibrotactile Grid around the Head for 3D Guidance in Virtual and Augmented Reality
Oliver Beren Kaul,
Michael Rohs
Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems - CHI '17
Current virtual and augmented reality head-mounted displays usually include no or only a single vibration motor for haptic feedback and do not use it for guidance. We present HapticHead, a system utilizing multiple vibrotactile actuators distributed in three concentric ellipses around the head for intuitive haptic guidance through moving tactile cues. We conducted three experiments, which indicate that HapticHead vibrotactile feedback is both faster (2.6 s vs. 6.9 s) and more precise (96.4% vs. 54.2% success rate) than spatial audio (generic head-related transfer function) for finding visible virtual objects in 3D space around the user. The baseline of visual feedback is, as expected, more precise (99.7% success rate) and faster (1.3 s) in comparison, but there are many applications in which visual feedback is not desirable or available due to lighting conditions, visual overload, or visual impairments. Mean final precision with HapticHead feedback on invisible targets is 2.3° compared to 0.8° with visual feedback. We successfully navigated blindfolded users to real household items at different heights using HapticHead vibrotactile feedback independently of a head-mounted display.
Squeezeback: Pneumatic Compression for Notifications
Henning Pohl,
Peter Brandes,
Hung Ngo Quang,
Michael Rohs
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '17
Current mobile devices commonly use vibration feedback to signal incoming notifications. However, vibration feedback exhibits strong attention capture, limiting its use to short periods and prominent notifications. Instead, we investigate the use of compression feedback for notifications, which scales from subtle stimuli to strong ones and can provide sustained stimuli over longer periods. Compression feedback utilizes inflatable straps around a user's limbs, a form factor allowing for easy integration into many common wearables. We explore technical aspects of compression feedback and investigate its psychophysical properties with several lab and in situ studies. Furthermore, we show how compression feedback enables reactive feedback. Here, deflation patterns are used to reveal further information on a user's query. We also compare compression and vibrotactile feedback and find that they have similar performance.
Increasing Presence in Virtual Reality with a Vibrotactile Grid Around the Head
Oliver Beren Kaul,
Kevin Meier,
Michael Rohs
Human-Computer Interaction -- INTERACT 2017: 16th IFIP TC 13 International Conference, Mumbai, India, September 25-29, 2017, Proceedings, Part IV
A high level of presence is an important aspect of immersive virtual reality applications. However, presence is difficult to achieve as it depends on the individual user, immersion capabilities of the system (visual, auditory, and tactile) and the concrete application. We use a vibrotactile grid around the head in order to further increase the level of presence users feel in virtual reality scenes. In a between-groups comparison study the vibrotactile group scored significantly higher in a standardized presence questionnaire compared to the baseline of no tactile feedback. This suggests the proposed prototype as an additional tool to increase the level of presence users feel in virtual reality scenes.
EMS in HCI: Challenges and Opportunities in Actuating Human Bodies
Tim Duente,
Stefan Schneegass,
Max Pfeiffer
Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services
Electrical Muscle Stimulation (EMS) recently received considerable attention in the HCI community. By applying small signals to the user's body, different types of movement can be generated. These movements allow designers to create more meaningful and embodied haptic feedback compared to vibrotactile feedback. This advantage also comes with further technical and practical challenges which need to be tackled. These challenges include a fine-grained calibration procedure and close contact to the user's body at specific on-body locations. This tutorial gives an overview of current research projects, challenges, and opportunities in using EMS to provide rich embodied feedback, followed by a hands-on experience. The main goal of this tutorial is that participants get a basic understanding of how EMS works and how systems that use EMS can be developed and evaluated.
Inhibiting Freedom of Movement with Compression Feedback
Henning Pohl,
Franziska Hoheisel,
Michael Rohs
Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems - CHI EA '17
Compression feedback uses inflatable straps to create uniform pressure sensations around limbs. Lower-pressure stimuli are well suited as a feedback channel for, e.g., notifications. However, operating compression feedback systems at higher pressure levels makes it possible to physically inhibit movement. Here, we describe this modality and present a pervasive jogging game that employs physical inhibition to push runners to reach checkpoints in time.
Immersive Navigation in Visualization Spaces through Swipe Gestures and Optimal Attribute Selection
Jan-Frederik Kassel,
Michael Rohs
Proceedings of the 2nd Workshop on Immersive Analytics: Exploring Future Interaction and Visualization Technologies for Data Analytics
Exploratory data analysis is an essential step in discovering patterns and relationships in data. However, the exploration may start without a clear conception about what attributes to pick or what visualizations to choose in order to develop an understanding of the data. In this work we aim to support the exploration process by automatically choosing attributes according to an information-theoretic measure and by providing a simple means of navigation through the space of visualizations. The system suggests data attributes to be visualized and the visualization's type and appearance. The user intuitively modifies these suggestions by performing swiping gestures on a tablet device. Attribute suggestions are based on the mutual information between multiple random variables (MMI). The results of a preliminary user study (N = 12 participants) show the applicability of MMI for guided exploratory data analysis and confirm the system's general usability (SUS score: 74).
Indoor Positioning Solely Based on User's Sight
Matthias Becker
Information Science and Applications (ICISA) 2017, Lecture Notes in Electrical Engineering (LNEE) Series
Implementing Real-Life Indoor Positioning Systems Using Machine Learning Approaches
Matthias Becker,
Bharat Ahuja
IEEE 8th International Conference on Information, Intelligence, Systems, Applications IISA
Estimating performance of large scale distributed simulation built on homogeneous hardware
Desheng Fu,
Matthias Becker,
Marcus O'Connor,
Helena Szczerbicka
2017 IEEE/ACM 21st International Symposium on Distributed Simulation and Real Time Applications (DS-RT)
Beyond Just Text: Semantic Emoji Similarity Modeling to Support Expressive Communication 👫 📲 😃
Henning Pohl,
Christian Domin,
Michael Rohs
ACM Transactions on Computer-Human Interaction
Emoji, a set of pictographic Unicode characters, have seen strong uptake over the last couple of years. All common mobile platforms and many desktop systems now support emoji entry, and users have embraced their use. Yet, we currently know very little about what makes for good emoji entry. While soft keyboards for text entry are well optimized, based on language and touch models, no such information exists to guide the design of emoji keyboards. In this article, we investigate the problem of emoji entry, starting with a study of the current state of the emoji keyboard implementation in Android. To enable moving forward to novel emoji keyboard designs, we then explore a model for emoji similarity that is able to inform such designs. This semantic model is based on data from 21 million collected tweets containing emoji. We compare this model against a solely description-based model of emoji in a crowdsourced study. Our model shows good performance in capturing detailed relationships between emoji.
2016
Let Your Body Move: A Prototyping Toolkit for Wearable Force Feedback with Electrical Muscle Stimulation
Max Pfeiffer,
Tim Duente,
Michael Rohs
Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services
Electrical muscle stimulation (EMS) is a promising wearable haptic output technology as it can be miniaturized considerably and delivers a wide range of haptic output. However, prototyping EMS applications is challenging. It requires detailed knowledge and skills about hardware, software, and physiological characteristics. To simplify prototyping with EMS in mobile and wearable situations we present the Let Your Body Move toolkit. It consists of (1) a hardware control module with Bluetooth communication that uses off-the-shelf EMS devices as signal generators, (2) a simple communications protocol to connect mobile devices, and (3) a set of control applications as starting points for EMS prototyping. We describe EMS-specific parameters, electrode placements on the skin, and user calibration. The toolkit was evaluated in a workshop with 10 researchers in haptics. The results show that the toolkit makes it possible to quickly generate non-trivial prototypes. The hardware schematics and software components are available as open source software.
EmojiZoom: Emoji Entry via Large Overview Maps 😄 🔍
Henning Pohl,
Dennis Stanke,
Michael Rohs
Proceedings of the 18th international conference on Human-computer interaction with mobile devices and services - MobileHCI '16
Current soft keyboards for emoji entry all present emoji in the same way: in long lists, spread over several categories. While categories limit the number of emoji in each individual list, the overall number is still so large that emoji entry is a challenging task. The task takes particularly long if users pick the wrong category when searching for an emoji. Instead, we propose a new zooming keyboard for emoji entry. Here, users can see all emoji at once, aiding in building spatial memory of where related emoji are to be found. We compare our zooming emoji keyboard against the Google keyboard and find that our keyboard allows for 18% faster emoji entry, reducing the required time for one emoji from 15.6 s to 12.7 s. A preliminary longitudinal evaluation with three participants showed that emoji entry time improved by up to 60% over the duration of the study, to a final average of 7.5 s.
ScatterWatch: Subtle Notifications via Indirect Illumination Scattered in the Skin
Henning Pohl,
Justyna Medrek,
Michael Rohs
Proceedings of the 18th international conference on Human-computer interaction with mobile devices and services - MobileHCI '16
With the increasing popularity of smartwatches over recent years, there has been substantial interest in novel input methods for such small devices. However, feedback modalities for smartwatches have not seen the same level of interest. This is surprising, as one of the primary functions of smartwatches is their use for notifications. It is the interrupting nature of current smartwatch notifications that has drawn some of the more critical responses to them. Here, we present a subtle notification mechanism for smartwatches that uses light scattering in a wearer's skin as a feedback modality. This does not disrupt the wearer in the same way as vibration feedback and also connects more naturally with the user's body.
Casual Interaction: Moving Between Peripheral and High Engagement Interactions
Henning Pohl
Peripheral Interaction: Challenges and Opportunities for HCI in the Periphery of Attention
In what we call the focused-casual continuum, users pick how much control they want to have when interacting. By offering several different ways to interact, such interfaces can be more appropriate for, e.g., use in some social situations or use when exhausted. As a very basic example, an alarm clock could offer one interaction mode in which an alarm can only be turned off, while in another, users can choose between different snooze responses. The first mode is more restrictive but could be controlled with one coarse gesture. Only when the user wishes to pick between several responses is more controlled, finer interaction needed. Low-control, more casual interactions can take place in the background or in the user's periphery, while focused interactions move into the foreground. Along the focused-casual continuum, a plethora of interaction techniques have their place. Currently, focused interaction techniques are often the default. In this chapter, we thus focus more closely on techniques for casual interaction, which offer ways to interact with lower levels of control. The presented use cases cover scenarios such as text entry, user recognition, tangibles, and steering tasks. Furthermore, in addition to the potential benefits of applying casual interaction techniques during input, there is also a need for feedback that does not immediately grab our attention but can scale from the periphery to the focus of our attention. Thus, we also cover several such feedback methods and show how the focused-casual continuum can encompass the whole interaction.
Hands-on introduction to interactive electric muscle stimulation
Pedro Lopes,
Max Pfeiffer,
Michael Rohs,
Patrick Baudisch
CHI '16 Extended Abstracts on Human Factors in Computing Systems on - CHI EA '16
In this course, participants create their own prototypes using electrical muscle stimulation. We provide a ready-to-use device and toolkit consisting of electrodes, a microcontroller, and an off-the-shelf muscle stimulator that allows for programmatically actuating the user's muscles directly from mobile devices.
HapticHead: 3D Guidance and Target Acquisition through a Vibrotactile Grid
Oliver Beren Kaul,
Michael Rohs
Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems
Current generation virtual reality (VR) and augmented reality (AR) head-mounted displays (HMDs) usually include no or only a single vibration motor for haptic feedback and do not use it for guidance. We present HapticHead, a system utilizing 20 vibration motors distributed in three concentric ellipses around the head to give intuitive haptic guidance hints and to increase immersion for VR and AR applications. Our user study indicates that HapticHead is both faster (mean=3.7s, SD=2.3s vs. mean=7.8s, SD=5.0s) and more precise (92.7% vs. 44.9% hit rate) than auditory feedback for the purpose of finding virtual objects in 3D space around the user. The baseline of visual feedback is, as expected, more precise (99.9% hit rate) and faster (mean=1.5s, SD=0.6s) in comparison, but there are many applications in which visual feedback is not desirable or available due to lighting conditions, visual overload, or visual impairments.
Follow the Force: Steering the Index Finger towards Targets using EMS
Oliver Beren Kaul,
Max Pfeiffer,
Michael Rohs
Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems
In mobile contexts, guidance towards objects is usually provided through the visual channel. Sometimes this channel is overloaded or not appropriate. Providing a practicable form of haptic feedback is challenging. Electrical muscle stimulation (EMS) can generate mobile force feedback but has a number of drawbacks. For complex movements several muscles need to be actuated in concert, and a feedback loop is necessary to control the movements. We present an approach that only requires the actuation of six muscles with four pairs of electrodes to guide the index finger to a 2D point and let the user perform mid-air disambiguation gestures. In our user study, participants found invisible, static target positions on top of a physical box with a mean 2D deviation of 1.44 cm from the intended target.
Multi-Level Interaction with an LED-Matrix Edge Display
Henning Pohl,
Bastian Krefeld,
Michael Rohs
Proceedings of the 18th international conference on Human-computer interaction with mobile devices and services adjunct - MobileHCI '16 Adjunct
Interaction with mobile devices currently requires close engagement with them. For example, users need to pick them up and unlock them just to check whether the last notification was for an urgent message. But such close engagement is not always desirable, e.g., when working on a project with the phone just lying around on the table. Instead, we explore around-device interactions to bring up and control notifications. As users get closer to the device, more information is revealed and additional input options become available. This allows users to control how much they want to engage with the device. For feedback, we use a custom LED-matrix display prototype on the edge of the device. This allows for coarse, but bright, notifications in the periphery of attention, but scales up to allow for slightly higher-resolution feedback as well.
Improving Plagiarism Detection in Coding Assignments by Dynamic Removal of Common Ground
Christian Domin,
Henning Pohl,
Markus Krause
CHI '16 Extended Abstracts on Human Factors in Computing Systems on - CHI EA '16
Plagiarism in online learning environments has a detrimental effect on the trust in online courses and their viability. Automatic plagiarism detection systems do exist, yet the specific situation in online courses restricts their use. To allow for easy automated grading, online assignments are usually less open and instead require students to fill in small gaps. Solutions therefore tend to be very similar, yet are not necessarily plagiarized. In this paper we propose a new approach to detecting code re-use that increases prediction accuracy by dynamically removing parts that appear in almost every assignment: the so-called common ground. Our approach shows significantly better F-measure and Cohen's kappa results than other state-of-the-art algorithms such as Moss or JPlag. The proposed method is also language agnostic, to the point that training and test data sets can be taken from different programming languages.
On-skin Technologies for Muscle Sensing and Actuation
Tim Duente,
Max Pfeiffer,
Michael Rohs
Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct
Electromyography (EMG) and electrical muscle stimulation (EMS) are promising technologies for muscle sensing and actuation in wearable interfaces. The required electrodes can be manufactured to form a thin layer on the skin. We discuss requirements and approaches for EMG and EMS as on-skin technologies. In particular, we focus on fine-grained muscle sensing and actuation with an electrode grid on the lower arm. We discuss a prototype, scenarios, and open issues.
Wearable Head-mounted 3D Tactile Display Application Scenarios
Oliver Beren Kaul,
Michael Rohs
Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct
A Wearable Force Feedback Toolkit with Electrical Muscle Stimulation
Max Pfeiffer,
Tim Duente,
Michael Rohs
CHI '16 Extended Abstracts on Human Factors in Computing Systems on - CHI EA '16
Electrical muscle stimulation (EMS) is a promising wearable haptic output technology as it can be miniaturized and delivers a wide range of tactile and force output. However, prototyping EMS applications is currently challenging and requires detailed knowledge about EMS. We present a toolkit that simplifies prototyping with EMS and serves as a starting point for experimentation and user studies. It consists of (1) a hardware control module that uses off-the-shelf EMS devices as safe signal generators, (2) a simple communication protocol, and (3) a set of control applications for prototyping. The interactivity allows hands-on experimentation with our sample control applications.
Planning in Dynamic, Distributed and Non-automatized Production Systems
Matthias Becker,
Michael Lütjen,
Helena Szczerbicka
Information Science and Applications (ICISA) 2016
Improving the performance of distributed discrete event simulation by exchange of conditional look-ahead
Desheng Fu,
Matthias Becker,
Helena Szczerbicka
Concurrency and Computation: Practice and Experience, Wiley 2016
Optimization of Tire Noise by Solving an Integer Linear Program (ILP)
Matthias Becker,
Nicolas Ginoux,
Sebastien Martin,
Zsuzsanna Roka
IEEE International Conference on Systems Man and Cybernetics (SMC 2016)
Basic Algorithms for Bee Hive Monitoring and Laser-based Mite Control
Larissa Chazette,
Matthias Becker,
Helena Szczerbicka
IEEE Symposium Series on Computational Intelligence (IEEE SSCI 2016)
Analysing the Cost-Efficiency of the Multi-agent Flood Algorithm in Search and Rescue Scenarios
Florian Blatt,
Matthias Becker,
Helena Szczerbicka
German Conference on Multiagent System Technologies
2015
A Playful Game Changer: Fostering Student Retention in Online Education with Social Gamification
Markus Krause,
Marc Mogalle,
Henning Pohl,
Joseph Jay Williams
Proceedings of the second ACM conference on Learning @ scale - L@S '15
Many MOOCs report high drop-off rates for their students. Among the factors reportedly contributing to this picture are a lack of motivation, feelings of isolation, and a lack of interactivity in MOOCs. This paper investigates the potential of gamification with social game elements for increasing retention and learning success. Students in our experiment showed a significant increase of 25% in retention period (videos watched) and 23% higher average scores when the course interface was gamified. Social game elements amplify this effect significantly: students in this condition showed an increase of 50% in retention period and 40% higher average test scores.
Applications of undeniable signature schemes
Kin-Woon Yeow,
Syh-Yuan Tan,
Swee-Huay Heng,
Rouzbeh Behnia
2015 IEEE International Conference on Signal and Image Processing Applications (ICSIPA)
Cruise Control for Pedestrians: Controlling Walking Direction using Electrical Muscle Stimulation
Max Pfeiffer,
Tim Duente,
Stefan Schneegass,
Florian Alt,
Michael Rohs
Proc. of CHI 2015
Pedestrian navigation systems require users to perceive, interpret, and react to navigation information. This can tax cognition, as navigation information competes with information from the real world. We propose actuated navigation, a new kind of pedestrian navigation in which the user does not need to attend to the navigation task at all. An actuation signal is sent directly to the human motor system to influence walking direction. To achieve this goal we stimulate the sartorius muscle using electrical muscle stimulation. The rotation occurs during the swing phase of the leg and can easily be counteracted. The user therefore stays in control. We discuss the properties of actuated navigation and present a lab study on identifying the basic parameters of the technique as well as an outdoor study in a park. The results show that our approach changes a user's walking direction by about 16 degrees per meter on average and that the system can successfully steer users in a park with crowded areas, distractions, obstacles, and uneven ground.
3D Virtual Hand Pointing with EMS and Vibration Feedback
Max Pfeiffer,
Wolfgang Stuerzlinger
3DUI'15
One-Button Recognizer: Exploiting Button Pressing Behavior for User Differentiation
Henning Pohl,
Markus Krause,
Michael Rohs
Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing - UbiComp '15
We present a novel way to recognize users by the way they press a button. Our approach allows low-effort and fast interaction without the need for augmenting the user or controlling the environment. It eschews privacy concerns of methods such as fingerprint scanning. Button pressing behavior is sufficiently discriminative to allow distinguishing users within small groups. This approach combines recognition and action in a single step, e.g., getting and tallying a coffee can be done with one button press. We deployed our system for 5 users over a period of 4 weeks and achieved recognition rates of 95% in the last week. We also ran a larger scale but short-term evaluation to investigate effects of group size and found that our method degrades gracefully for larger groups.
Visualizing Scheduling: A Hierarchical Event-Based Approach on a Tablet
André Sydow,
Jan-Frederik Kassel,
Michael Rohs
Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct
The amount of logistical data in the automotive industry is increasing drastically due to digitalization and to data automatically generated by Auto-ID technologies. However, new methods need to be devised to make sense of this data, in particular when users are mobile and when they need to collaborate to solve complex logistical tasks, such as resource scheduling. We propose a visualization method for hierarchical event data that is designed for tablets. The main design goals were to foster collaboration and enable mobility. Our think-aloud user study shows that both the event recognition and the understanding of the participants improved with the proposed solution.
3D Virtual Hand Pointing with EMS and Vibration Feedback
Max Pfeiffer,
Wolfgang Stuerzlinger
CHI'15
CapCouch: Home Control With a Posture-Sensing Couch
Henning Pohl,
Markus Hettig,
Oliver Karras,
Hatice Ötztürk,
Michael Rohs
Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication - UbiComp '15 Adjunct
In relaxed living room settings, using a phone to control the room can be inappropriate or cumbersome. Instead of such explicit interactions, we enable implicit control via a posture-sensing couch. Users can then, e.g., automatically turn on the reading lights when sitting down.
Let your body move: electrical muscle stimuli as haptics
Pedro Lopes,
Max Pfeiffer,
Michael Rohs,
Patrick Baudisch
Let your body move - a tutorial on electrical muscle stimuli as haptics 2015
Wrist Compression Feedback by Pneumatic Actuation
Henning Pohl,
Dennis Becke,
Eugen Wagner,
Maximilian Schrapel,
Michael Rohs
CHI '15 Extended Abstracts on Human Factors in Computing Systems on - CHI EA '15
Most common forms of haptic feedback use vibration, which immediately captures the user's attention, yet is limited in the range of strengths it can achieve. Vibration feedback over extended periods also tends to be annoying. We present compression feedback, a form of haptic feedback that scales from very subtle to very strong and is able to provide sustained stimuli and pressure patterns. The demonstration may serve as an inspiration for further work in this area, applying compression feedback to generate subtle, intimate, as well as intense feedback.
Evaluating heuristic optimization, bio-inspired and graph-theoretic algorithms for the generation of fault-tolerant graphs with minimal costs
Matthias Becker,
Markus Krömker,
Helena Szczerbicka
Information Science and Applications
Advantages of Heterogeneous Agent Populations for Exploration and Pathfinding in Unknown Terrain
Florian Blatt,
Matthias Becker,
Helena Szczerbicka
Open-Access journal Frontiers in Sensors (FS)
Unterstützung für eine effiziente Montagesteuerung – Simulation identifiziert und bewertet Handlungsoptionen bei Störungen in einer Baustellmontage
Eric Hund,
Sebastian Bohlmann,
Helena Szczerbicka,
Matthias Becker
Springer-VDI wt Werkstattstechnik online
Universal Simulation Engine (USE) A Model-Independent Library for Discrete Event Simulation
Desheng Fu,
Matthias Becker,
Helena Szczerbicka
SpringSim 2015
Optimizing the exploration efficiency of autonomous search and rescue agents using a concept of layered robust communication
Florian Blatt,
Matthias Becker,
Helena Szczerbicka
2015 IEEE 20th Conference on Emerging Technologies & Factory Automation (ETFA)
On the influence of state update interval length on the prediction success of decision support system in multi-site production environment
Matthias Becker,
Helena Szczerbicka
2015 IEEE 20th Conference on Emerging Technologies & Factory Automation (ETFA)
On the Efficiency of Nature-Inspired Algorithms for Generation of Fault-Tolerant Graphs
Matthias Becker
IEEE Conference on Systems, Man and Cybernetics SMC 2015
A Framework for Decision Support in Systems with a low Level of Automation
Matthias Becker,
Sinan Balci,
Helena Szczerbicka
International Conference on Computers and Industrial Engineering, CIE45
Casual Interaction: Scaling Interaction for Multiple Levels of Engagement
Henning Pohl
CHI '15 Extended Abstracts on Human Factors in Computing Systems on - CHI EA '15
In the focused-casual continuum, users are given a choice of how much they wish to engage with an interface. In situations where they are, e.g., physically encumbered, they may wish to trade some control for the convenience of interacting at all. Currently, most devices only offer focused interaction capabilities or restrict users to binary foreground/background interaction choices. In casual interactions, users consciously pick a way to interact that is suitable for their desired engagement level. Users will be expecting devices to offer several ways for control along the engagement scale.
2014
Let Me Grab This: A Comparison of EMS and Vibration for Haptic Feedback in Free-Hand Interaction
Max Pfeiffer,
Stefan Schneegass,
Florian Alt,
Michael Rohs
Augmented Human
Free-hand interaction with large displays is getting more common, for example in public settings and exertion games. Adding haptic feedback offers the potential for more realistic and immersive experiences. While vibrotactile feedback is well known, electrical muscle stimulation (EMS) has not yet been explored in free-hand interaction with large displays. EMS offers a wide range of different strengths and qualities of haptic feedback. In this paper we first systematically investigate the design space for haptic feedback. Second, we experimentally explore differences between strengths of EMS and vibrotactile feedback. Third, based on the results, we evaluate EMS and vibrotactile feedback with regard to different virtual objects (soft, hard) and interaction with different gestures (touch, grasp, punch) in front of a large display. The results provide a basis for the design of haptic feedback that is appropriate for the given type of interaction and material.
Around-Device Devices: My Coffee Mug is a Volume Dial
Henning Pohl,
Michael Rohs
Proceedings of the 16th international conference on Human-computer interaction with mobile devices and services - MobileHCI '14
For many people their phones have become their main everyday tool. While phones can fulfill many different roles, they also require users to (1) make do with affordances not specialized for the specific task, and (2) closely engage with the device itself. We propose utilizing the space and objects around the phone to offer better task affordances and to create an opportunity for casual interactions. Such around-device devices are a class of interactors that do not require users to bring special tangibles, but repurpose items already found in the user's surroundings. In a survey study, we determine which places and objects are available to around-device devices. Furthermore, in an elicitation study, we observe which objects users would use for ten interactions.
Uncertain Text Entry on Mobile Devices
Daryl Weir,
Henning Pohl,
Simon Rogers,
Keith Vertanen,
Per Ola Kristensson
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '14
Modern mobile devices typically rely on touchscreen keyboards for input. Unfortunately, users often struggle to enter text accurately on virtual keyboards. To address this, we present a novel decoder for touchscreen text entry that combines probabilistic touch models with a long-span language model. We investigate two touch models: one based on Gaussian processes that implicitly models the inherent uncertainty of the touching process, and a second that allows users to explicitly control the uncertainty via touch pressure. Using the first model we show that character error rate can be reduced by up to 7% over a baseline, and by up to 1.3% over a leading commercial keyboard. With the second model, we demonstrate that providing users with control over input certainty results in improved text entry rates for phrases containing out-of-vocabulary words.
Ergonomic Characteristics of Gestures for Front- and Back-of-tablets Interaction with Grasping Hands
Katrin Wolf,
Robert Schleicher,
Michael Rohs
Proceedings of the 16th International Conference on Human-computer Interaction with Mobile Devices - MobileHCI '14
The thumb and the fingers have different flexibility, and thus gestures performed on the back of a held tablet are expected to differ from those performed on the touchscreen with the thumb of the grasping hand. APIs for back-of-device gesture detection should take this difference into account. In a user study, we recorded vectors for the four most common touch gestures. We found that drag, swipe, and press gestures are executed significantly differently on the back versus the front side of a held tablet. We provide corresponding values that may be used to define gesture detection thresholds for back-of-tablet interaction.
A Design Space for Electrical Muscle Stimulation Feedback for Free-Hand Interaction
Max Pfeiffer,
Stefan Schneegass,
Florian Alt,
Michael Rohs
Workshop on Assistive Augmentation at CHI 2014
Free-hand interaction is becoming a common technique for interacting with large displays. At the same time, providing haptic feedback for free-hand interaction is still a challenge, particularly feedback with different characteristics (i.e., strengths, patterns) to convey particular information. We see electrical muscle stimulation (EMS) as a well-suited technology for providing haptic feedback in this domain. The characteristics of EMS can be used to assist users in learning, manipulating, and perceiving virtual objects. One of the core challenges is to understand these characteristics and how they can be applied. As a step in this direction, this paper presents a design space that identifies different aspects of using EMS for haptic feedback. The design space is meant as a basis for future research investigating how particular characteristics can be exploited to provide specific haptic feedback.
Casual Interaction: Scaling Fidelity for Low-Engagement Interactions
Henning Pohl,
Michael Rohs,
Roderick Murray-Smith
Workshop on Peripheral Interaction: Shaping the Research and Design Space at CHI 2014
When interacting casually, users relinquish some control over their interaction to gain the freedom to devote their engagement elsewhere. This allows them to still interact even when they are encumbered, distracted, or engaging with others. With their focus on something else, casual interaction will often take place in the periphery: either spatially, e.g., by interacting laterally, or with respect to attention, by interacting in the background.
Imaginary Reality Basketball: A Ball Game Without a Ball
Patrick Baudisch,
Henning Pohl,
Stefanie Reinicke,
Emilia Wittmers,
Patrick Lühne,
Marius Knaust,
Sven Köhler,
Patrick Schmidt,
Christian Holz
CHI '14 Extended Abstracts on Human Factors in Computing Systems on - CHI EA '14
We present imaginary reality basketball, a ball game that mimics its real-world counterpart, basketball, except that there is no visible ball. The ball is virtual and players learn about its position only from watching each other act and from a small amount of occasional auditory feedback, e.g., when a person is receiving the ball.
Imaginary reality games maintain many of the properties of physical sports, such as unencumbered play, physical exertion, and immediate social interaction between players. At the same time, they allow introducing game elements from video games, such as power-ups, non-realistic physics, and player balancing. Most importantly, they create a new game dynamic around the notion of the invisible ball.
A concept of layered robust communication between robots in multi-agent search & rescue scenarios
Matthias Becker,
Florian Blatt,
Helena Szczerbicka
2014 IEEE/ACM 18th International Symposium on Distributed Simulation and Real Time Applications
Accelerating distributed discrete event simulation through exchange of conditional look-ahead
Desheng Fu,
Matthias Becker,
Helena Szczerbicka
Proceedings of the 2014 IEEE/ACM 18th International Symposium on Distributed Simulation and Real Time Applications
Predictive simulation based decision support system for resource failure management in multi-site production environments
Matthias Becker,
Sinan Balci,
Helena Szczerbicka
2014 International Conference on Control, Decision and Information Technologies (CoDIT)
Brave New Interactions: Performance-Enhancing Drugs for Human-Computer Interaction
Henning Pohl
CHI '14 Extended Abstracts on Human Factors in Computing Systems on - CHI EA '14
In the area of sports, athletes often resort to performance enhancing drugs to gain an advantage. Similarly, people use pharmaceutical drugs to aid learning, dexterity, or concentration. We investigate how pharmaceutical drugs could be used to enhance interactions. We envision that in the future, people might take pills along with their vitamins in the morning to improve how they can interact over the day. In addition to performance improvements this, e.g., could also include improvements in enjoyment or fatigue.
2013
Tickle: A surface-independent interaction technique for grasp interfaces
Katrin Wolf,
Robert Schleicher,
Sven Kratz,
Michael Rohs
Proceedings of the 7th International Conference on Tangible, Embedded and Embodied Interaction
We present a wearable interface that consists of motion sensors. As the interface can be worn on the user's finger (as a ring) or fixed to it (with nail polish), the device controlled by finger gestures can be any generic object, provided it has an interface for receiving the sensor's signal. We implemented four gestures: tap, release, swipe, and pitch, all of which can be executed with a finger of the hand holding the device. In a user study we tested gesture appropriateness for the index finger at the back of a handheld tablet that offered three different form factors on its rear: flat, convex, and concave (undercut). For all three shapes the gesture performance was equally good, although pitch performed better than swipe on all surfaces. The proposed interface is a step towards the idea of ubiquitous computing and the vision of seamless interaction with grasped objects. As an initial application scenario, we implemented a camera control that allows the brightness of a common SLR device to be configured using the tested gestures.
Combining acceleration and gyroscope data for motion gesture recognition using classifiers with dimensionality constraints
Sven Kratz,
Michael Rohs,
Georg Essl
Proceedings of the 2013 international conference on Intelligent user interfaces
Motivated by the addition of gyroscopes to a large number of new smartphones, we study the effects of combining accelerometer and gyroscope data on the recognition rate of motion gesture recognizers with dimensionality constraints. Using a large data set of motion gestures, we analyze results for the following algorithms: Protractor3D, Dynamic Time Warping (DTW), and regularized logistic regression (LR). We chose to study these algorithms because they are relatively easy to implement and thus well suited for rapid prototyping or early deployment during prototyping stages. For use in our analysis, we contribute a method to extend Protractor3D to work with the 6D data obtained by combining accelerometer and gyroscope data. Our results show that combining accelerometer and gyroscope data is also beneficial for algorithms with dimensionality constraints, improving the gesture recognition rate on our data set by up to 4%.
Imaginary Reality Gaming: Ball Games Without a Ball
Patrick Baudisch,
Henning Pohl,
Stefanie Reinicke,
Emilia Wittmers,
Patrick Lühne,
Marius Knaust,
Sven Köhler,
Patrick Schmidt,
Christian Holz
Proceedings of the 26th annual ACM Symposium on User Interface Software and Technology - UIST '13
We present imaginary reality games, i.e., games that mimic the respective real world sport, such as basketball or soccer, except that there is no visible ball. The ball is virtual and players learn about its position only from watching each other act and a small amount of occasional auditory feedback, e.g., when a person is receiving the ball. Imaginary reality games maintain many of the properties of physical sports, such as unencumbered play, physical exertion, and immediate social interaction between players. At the same time, they allow introducing game elements from video games, such as power-ups, non-realistic physics, and player balancing. Most importantly, they create a new game dynamic around the notion of the invisible ball. To allow players to successfully interact with the invisible ball, we have created a physics engine that evaluates all plausible ball trajectories in parallel, allowing the game engine to select the trajectory that leads to the most enjoyable game play while still favoring skillful play.
Focused and Casual Interactions: Allowing Users to Vary Their Level of Engagement
Henning Pohl,
Roderick Murray-Smith
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '13
We describe the focused–casual continuum, a framework for describing interaction techniques according to the degree to which they allow users to adapt how much attention and effort they choose to invest in an interaction, conditioned on their current situation. Casual interactions are particularly appropriate in scenarios where full engagement with devices is frowned upon socially, is unsafe, physically challenging, or too mentally taxing. Novel sensing approaches which go beyond direct touch enable wider use of casual interactions, which will often be 'around-device' interactions. We consider the degree to which previous commercial products and research prototypes can be considered as fitting the focused–casual framework, and describe their properties using control-theoretic concepts. In an experimental study we observe that users naturally apply more precise and more highly engaged interaction techniques when faced with a more challenging task, and use more relaxed gestures in easier tasks.
Designing Systems with Homo Ludens in the Loop
Markus Krause
Handbook of Human Computation
A Digital Game to Support Voice Treatment for Parkinson's Disease
Markus Krause,
Jan Smeddinck,
Ronald Meyer
CHI '13 Extended Abstracts on Human Factors in Computing Systems
It is about Time: Time-Aware Quality Management for Interactive Systems with Humans in the Loop
Markus Krause,
Robert Porzel
CHI '13 Extended Abstracts on Human Factors in Computing Systems
Mobile Game User Research: The World as Your Lab?
Jan Smeddinck,
Markus Krause
GUR'13 Proceedings of the CHI Game User Experience Research Workshop
Supporting interaction in public space with electrical muscle stimulation
Max Pfeiffer,
Stefan Schneegass,
Florian Alt
Proceedings of the 2013 ACM conference on Pervasive and ubiquitous computing adjunct publication
A Multi-agent Flooding Algorithm for Search and Rescue Operations in Unknown Terrain
Matthias Becker,
Florian Blatt,
Helena Szczerbicka
Multiagent System Technologies
Online simulation based decision support system for resource failure management in multi-site production environments
Sebastian Bohlmann,
Matthias Becker,
Sinan Balci,
Helena Szczerbicka,
Eric Hund
2013 IEEE 18th Conference on Emerging Technologies \& Factory Automation (ETFA)
On the potential of semi-conservative look-ahead estimation in approximative distributed discrete event simulation
Desheng Fu,
Matthias Becker,
Helena Szczerbicka
Proceedings of the 2013 Summer Computer Simulation Conference
2012
Design and Evaluation of Parametrizable Multi-Genre Game Mechanics
Daniel Apken,
Hendrik Landwehr,
Marc Herrlich,
Markus Krause,
Dennis Paul,
Rainer Malaka
ICEC'12 Proceedings of the 11th International Conference on Entertainment Computing
PalmSpace: Continuous Around-device Gestures vs. Multitouch for 3D Rotation Tasks on Mobile Devices
Sven Kratz,
Michael Rohs,
Dennis Guse,
Jörg Müller,
Gilles Bailly,
Michael Nischt
Proceedings of the International Working Conference on Advanced Visual Interfaces
Rotating 3D objects is a difficult task on mobile devices, because the task requires 3 degrees of freedom and (multi-)touch input only allows for an indirect mapping. We propose a novel style of mobile interaction based on mid-air gestures in proximity of the device to increase the number of DOFs and alleviate the limitations of touch interaction with mobile devices. While one hand holds the device, the other hand performs mid-air gestures in proximity of the device to control 3D objects on the mobile device's screen. A flat hand pose defines a virtual surface which we refer to as the PalmSpace for precise and intuitive 3D rotations. We constructed several hardware prototypes to test our interface and to simulate possible future mobile devices equipped with depth cameras. Pilot tests show that PalmSpace hand gestures are feasible. We conducted a user study to compare 3D rotation tasks using the most promising two designs for the hand location during interaction - behind and beside the device - with the virtual trackball, which is the current state-of-the-art technique for orientation manipulation on touchscreens. Our results show that both variants of PalmSpace have significantly lower task completion times in comparison to the virtual trackball.
ShoeSense: A New Perspective on Gestural Interaction and Wearable Applications
Gilles Bailly,
Jörg Müller,
Michael Rohs,
Daniel Wigdor,
Sven Kratz
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
When the user is engaged with a real-world task it can be inappropriate or difficult to use a smartphone. To address this concern, we developed ShoeSense, a wearable system consisting in part of a shoe-mounted depth sensor pointing upward at the wearer. ShoeSense recognizes relaxed and discreet as well as large and demonstrative hand gestures. In particular, we designed three gesture sets (Triangle, Radial, and Finger-Count) for this setup, which can be performed without visual attention. The advantages of ShoeSense are illustrated in five scenarios: (1) quickly performing frequent operations without reaching for the phone, (2) discreetly performing operations without disturbing others, (3) enhancing operations on mobile devices, (4) supporting accessibility, and (5) artistic performances. We present a proof-of-concept, wearable implementation based on a depth camera and report on a lab study comparing social acceptability, physical and mental demand, and user preference. A second study demonstrates a 94-99% recognition rate of our recognizers.
Human Computation – A new Aspect of Serious Games
Markus Krause,
Jan Smeddinck
Handbook of Research on Serious Games as Educational, Business and Research Tools: Development and Design
Sketch-a-TUI: Low Cost Prototyping of Tangible Interactions Using Cardboard and Conductive Ink
Alexander Wiethoff,
Hanna Schneider,
Michael Rohs,
Andreas Butz,
Saul Greenberg
Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction
Graspable tangibles are now being explored on the current generation of capacitive touch surfaces, such as the iPad and the Android tablet. Because the size and form factor are relatively new, early and low fidelity prototyping of these TUIs is crucial in getting the right design. The problem is that it is difficult for the average interaction designer to develop such physical prototypes. They require a substantial amount of time and effort to physically model the tangibles, and expertise in electronics to instrument them. Thus prototyping is sometimes handed off to specialists, or is limited to only a few design iterations and alternative designs. Our solution contributes a low fidelity prototyping approach that is time and cost effective, and that requires no electronics knowledge. First, we supply non-specialists with cardboard forms to create tangibles. Second, we have them draw lines on them via conductive ink, which makes their objects recognizable by the capacitive touch screen. They can then apply routine programming to recognize these tangibles and thus iterate over various designs.
Exploring User Input Metaphors for Jump and Run Games on Mobile Devices
Kolja Lubitz,
Markus Krause
ICEC'12 Proceedings of the 11th International Conference on Entertainment Computing
Predicting Crowd-based Translation Quality with Language-independent Feature Vectors
Niklas Kilian,
Markus Krause,
Nina Runge,
Jan Smeddinck
HComp'12 Proceedings of the AAAI Workshop on Human Computation
Playful Surveys: Easing Challenges of Human Subject Research with Online Crowds
Markus Krause,
Jan Smeddinck,
Aneta Takhtamysheva,
Velislav Markov,
Nina Runge
HComp'12 Proceedings of the AAAI Workshop on Human Computation
Did They Really Like the Game? -- Challenges in Evaluating Exergames with Older Adults
Jan Smeddinck,
Marc Herrlich,
Markus Krause,
Kathrin M Gerling,
Rainer Malaka
GUR'12 Proceedings of the CHI Game User Experience Research Workshop
Attjector: an Attention-Following Wearable Projector
Sven Kratz,
Michael Rohs,
Felix Reitberger,
Jörg Moldenhauer
Kinect Workshop at Pervasive 2012
Mobile handheld projectors in small form factors, e.g., integrated into mobile phones, are getting more common. However, managing the projection puts a burden on the user as it requires holding the hand steady over an extended period of time and draws attention away from the actual task to solve. To address this problem, we propose a body worn projector that follows the user's locus of attention. The idea is to take the user's hand and dominant fingers as an indication of the current locus of attention and focus the projection on that area. Technically, a wearable and steerable camera-projector system positioned above the shoulder tracks the fingers and follows their movement. In this paper, we justify our approach and explore further ideas on how to apply steerable projection for wearable interfaces. Additionally, we describe a Kinect-based prototype of the wearable and steerable projector system we developed.
Quantum Games: Ball Games Without a Ball
Henning Pohl,
Christian Holz,
Stefanie Reinicke,
Emilia Wittmers,
Marvin Killing,
Konstantin Kaefer,
Max Plauth,
Tobias Mohr,
Stephanie Platz,
Philipp Tessenow,
Patrick Baudisch
Workshop on Kinect in Pervasive Computing at Pervasive 2012
We present Quantum games, physical games that resemble corresponding real-world sports, except that the ball exists only in the players' imagination. We demonstrate Quantum versions of team handball and air hockey. A computer system keeps score by tracking players using a Microsoft Kinect (air hockey) or a webcam (handball), simulates the physics of the ball, and reports ball interactions and scores back using auditory feedback. The key element that makes Quantum games playable is a novel type of physics engine that evaluates not just one trajectory, but samples the set of all plausible ball trajectories in parallel. Before choosing a trajectory to realize, the engine massively increases the probability of outcomes that lead to enjoyable gameplay, such as goal shots, but also successful passes and intercepts that lead to fluid gameflow. The same mechanism allows giving a boost to inexperienced players and implementing power-ups.
GCI 2012: Harnessing Collective Intelligence with Games. 1st International Workshop on Systems with Homo Ludens in the Loop
Markus Krause,
Roberta Cuel,
Maja Vukovic
ICEC'12 Proceedings of the 11th International Conference on Entertainment Computing
Comparison of Bio-Inspired and Graph-Theoretic Algorithms for Design of Fault-Tolerant Networks
Matthias Becker,
Waraphan Sarasureeporn,
Helena Szczerbicka
ICAS 2012, The Eighth International Conference on Autonomic and Autonomous Systems
Agent-based Approaches for Exploration and Pathfinding in Unknown Environments
Matthias Becker,
Florian Blatt,
Helena Szczerbicka
17th IEEE International Conference on Emerging Technologies and Factory Automation
2011
Advancing Large Interactive Surfaces for Use in the Real World
Jens Teichert,
Marc Herrlich,
Benjamin Walther-Franks,
Lasse Schwarten,
Sebastian Feige,
Markus Krause,
Rainer Malaka
Formamente
WorldCupinion: Experiences with an Android App for Real-Time Opinion Sharing During Soccer World Cup Games
Robert Schleicher,
Alireza Sahami Shirazi,
Michael Rohs,
Sven Kratz,
Albrecht Schmidt
Int. J. Mob. Hum. Comput. Interact.
Mobile devices are increasingly used in social networking applications and research. So far, there is little work on real-time emotion or opinion sharing in large loosely coupled user communities. One potential area of application is the assessment of widely broadcast TV shows. The idea of connecting non-collocated TV viewers via telecommunication technologies is referred to as Social TV. Such systems typically include set-top boxes for supporting the collaboration. In this work the authors investigated whether mobile phones can be used as an additional channel for sharing opinions, emotional responses, and TV-related experiences in real-time. To gain insight into this area, an Android app was developed for giving real-time feedback during soccer games and to create ad hoc fan groups. This paper presents results on rating activity during games and discusses experiences with deploying this app over four weeks during the soccer World Cup. In doing so, challenges and opportunities faced are highlighted and an outlook on future work in this area is given.
WuppDi! – Supporting Physiotherapy of Parkinson's Disease Patients via Motion-based Gaming
Oliver Assad,
Robert Hermann,
Damian Lilla,
Björn Mellies,
Ronald Meyer,
Liron Shevach,
Sandra Siegel,
Melanie Springer,
Saranat Tiemkeo,
Jens Voges,
Jan Wieferich,
Marc Herrlich,
Markus Krause,
Rainer Malaka
Mensch & Computer
Serious Questionnaires in Playful Social Network Applications
Aneta Takhtamysheva,
Markus Krause,
Jan Smeddinck
ICEC'11 Proceedings of the 10th International Conference on Entertainment Computing
Motion-Based Games for Parkinson's Disease Patients
Oliver Assad,
Robert Hermann,
Damian Lilla,
Björn Mellies,
Ronald Meyer,
Liron Shevach,
Sandra Siegel,
Melanie Springer,
Saranat Tiemkeo,
Jens Voges,
Jan Wieferich,
Marc Herrlich,
Markus Krause,
Rainer Malaka
ICEC'11 Proceedings of the 10th International Conference on Entertainment Computing
A Taxonomy of Microinteractions: Defining Microgestures Based on Ergonomic and Scenario-dependent Requirements
Katrin Wolf,
Anja Naumann,
Michael Rohs,
Jörg Müller
Proceedings of the 13th IFIP TC 13 International Conference on Human-computer Interaction - Volume Part I
This paper explores how microinteractions such as hand gestures allow executing a secondary task, e.g. controlling mobile applications and devices, without interrupting manual primary tasks, for instance driving a car. To iteratively design microgestures, we interviewed sports therapists and physiotherapists and asked these experts to use props during the interviews. The required gestures should be easily performable without interrupting the primary task, without needing high cognitive effort, and without the risk of being mixed up with natural movements. Based on the expert interviews we developed a taxonomy for classifying these gestures according to their use cases and assessed their ergonomic and cognitive attributes, focusing on their primary task compatibility. We defined 21 hand gestures, which allow microinteractions within manual dual task scenarios. In expert interviews we evaluated their level of required motor or cognitive resources under the constraint of stable primary task performance. Our taxonomy provides a basis for designing microinteraction techniques.
Towards real-time monitoring and controlling of enterprise architectures using business software control centers
Tobias Brückmann,
Volker Gruhn,
Max Pfeiffer
Proceedings of the 5th European conference on Software architecture
Gestural interaction on the steering wheel: reducing the visual demand
Tanja Döring,
Dagmar Kern,
Paul Marshall,
Max Pfeiffer,
Johannes Schöning,
Volker Gruhn,
Albrecht Schmidt
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Touch Input on Curved Surfaces
Anne Roudaut,
Henning Pohl,
Patrick Baudisch
Proceedings of the 2011 annual conference on Human factors in computing systems - CHI '11
Advances in sensing technology are currently bringing touch input to non-planar surfaces, ranging from spherical touch screens to prototypes the size and shape of a ping-pong ball. To help interface designers create usable interfaces on such devices, we determine how touch surface curvature affects targeting. We present a user study in which participants acquired targets on surfaces of different curvature and at locations of different slope. We find that surface convexity increases pointing accuracy, and in particular reduces the offset between the input point perceived by users and the input point sensed by the device. Concave surfaces, in contrast, are subject to larger error offsets. This is likely caused by how concave surfaces hug the user's finger, thus resulting in a larger contact area. The effect of slope on targeting, in contrast, is unexpected at first sight. Some targets located downhill from the user's perspective are subject to error offsets in the opposite direction from all others. This appears to be caused by participants acquiring these targets using a different finger posture that lets them monitor the position of their fingers more effectively.
Human Computation Games: a Survey
Markus Krause,
Jan Smeddinck
EUSIPCO'11 Proceedings of the 19th European Signal Processing Conference
Interaction with Magic Lenses: Real-world Validation of a Fitts' Law Model
Michael Rohs,
Antti Oulasvirta,
Tiia Suomalainen
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Rohs and Oulasvirta (2008) proposed a two-component Fitts' law model for target acquisition with magic lenses in mobile augmented reality (AR) with 1) a physical pointing phase, in which the target can be directly observed on the background surface, and 2) a virtual pointing phase, in which the target can only be observed through the device display. The model provides a good fit (R2=0.88) with laboratory data, but it is not known if it generalizes to real-world AR tasks. In the present outdoor study, subjects (N=12) did building-selection tasks in an urban area. The differences in task characteristics to the laboratory study are drastic: targets are three-dimensional and they vary in shape, size, z-distance, and visual context. Nevertheless, the model yielded an R2 of 0.80, and when using effective target width an R2 of 0.88 was achieved.
Real-time Nonverbal Opinion Sharing Through Mobile Phones During Sports Events
Alireza Sahami Shirazi,
Michael Rohs,
Robert Schleicher,
Sven Kratz,
Alexander Müller,
Albrecht Schmidt
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Even with the rise of the World Wide Web, TV has remained the most pervasive entertainment medium and is nowadays often used together with other media, which allow for active participation. The idea of connecting non-collocated TV viewers via telecommunication technologies, referred to as Social TV, has recently received considerable attention. Such systems typically include set-top boxes for supporting collaboration. In this research we investigate if real-time opinion sharing about TV shows through a nonverbal (non-textual) iconic UI on mobile phones is reasonable. For this purpose we developed a mobile app, made it available to a large number of users through the Android Market, and conducted an uncontrolled user study in the wild during the soccer world cup 2010. The results of the study indicate that TV viewers who used the app had more fun and felt more connected to other viewers. We also show that by monitoring this channel it is possible to collect sentiments relevant to the broadcasted content in real-time. The collected data exemplify that the aggregated sentiments correspond to important moments, and hence can be used to generate a summary of the event.
Protractor3D: A Closed-form Solution to Rotation-invariant 3D Gestures
Sven Kratz,
Michael Rohs
Proceedings of the 16th International Conference on Intelligent User Interfaces
Protractor 3D is a gesture recognizer that extends the 2D touch screen gesture recognizer Protractor to 3D gestures. It inherits many of Protractor's desirable properties, such as high recognition rate, low computational and low memory requirements, ease of implementation, ease of customization, and low number of required training samples. Protractor 3D is based on a closed-form solution to finding the optimal rotation angle between two gesture traces involving quaternions. It uses a nearest neighbor approach to classify input gestures. It is thus well-suited for application in resource-constrained mobile devices. We present the design of the algorithm and a study that evaluated its performance.
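The closed-form alignment step described above can be illustrated with the closely related Kabsch/SVD solution; this is a stand-in for the quaternion formulation the paper actually uses, and the point sets, function names, and the simple Euclidean score below are illustrative assumptions, not the published algorithm:

```python
import numpy as np

def optimal_rotation(P, Q):
    """Closed-form rotation minimizing ||R P_i - Q_i|| over centered point sets
    (Kabsch/SVD method; Protractor 3D derives an equivalent optimum via quaternions)."""
    Pc = P - P.mean(axis=0)
    Qc = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    # correct for possible reflection so the result is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def gesture_distance(P, Q):
    """Distance between two gesture traces after optimal rotational alignment,
    as used for nearest-neighbor classification."""
    R = optimal_rotation(P, Q)
    Pc = P - P.mean(axis=0)
    Qc = Q - Q.mean(axis=0)
    return np.linalg.norm(Pc @ R.T - Qc)
```

With templates stored per gesture class, classification picks the template with the smallest `gesture_distance` to the input trace.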
Deploying an Experimental Study of the Emergence of Human Communication Systems as an Online Game.
Jan Smeddinck,
Markus Krause
IK'2011 Proceedings of the Interdisciplinary College
Motion-based Serious Games for Parkinson Patients
Oliver Assad,
Robert Hermann,
Damian Lilla,
Björn Mellies,
Ronald Meyer,
Liron Shevach,
Sandra Siegel,
Melanie Springer,
Saranat Tiemkeo,
Jens Voges,
Jan Wieferich,
Marc Herrlich,
Markus Krause,
Rainer Malaka
IK'2011 Proceedings of the Interdisciplinary College
WuppDi! – Motion-Based Serious Games for Parkinson's Patients
Oliver Assad,
Robert Hermann,
Damian Lilla,
Björn Mellies,
Ronald Meyer,
Liron Shevach,
Sandra Siegel,
Melanie Springer,
Saranat Tiemkeo,
Jens Voges,
Jan Wieferich,
Marc Herrlich,
Markus Krause,
Rainer Malaka
IK'2011 Proceedings of the Interdisciplinary College
Dynamic ambient lighting for mobile devices
Qian Qin,
Michael Rohs,
Sven Kratz
Proceedings of the 24th annual ACM symposium adjunct on User interface software and technology
The information a small mobile device can show via its display has always been limited by its size. In large information spaces, relevant information, such as important locations on a map, can get clipped when a user starts zooming and panning. Dynamic ambient lighting allows mobile devices to visualize off-screen objects by illuminating the background without compromising valuable display space. The lighted spots can be used to show the direction and distance of such objects by varying the spot's position and intensity. Dynamic ambient lighting also provides a new way of displaying the state of a mobile device. Illumination is provided by a prototype rear-of-device shell that contains LEDs and requires the device to be placed on a surface, such as a table or desk.
CapWidgets: Tangible Widgets Versus Multi-touch Controls on Mobile Devices
Sven Kratz,
Tilo Westermann,
Michael Rohs,
Georg Essl
CHI '11 Extended Abstracts on Human Factors in Computing Systems
Teaching Serious Games
Marc Herrlich,
Markus Krause,
Rainer Malaka,
Jan Smeddinck
Mensch & Computer Workshop on „Game Development in der Hochschulinformatik“
A Simulation Model of Dictyostelium Discoideum for the Study of Evolutionary Selection Mechanisms
Matthias Becker,
Helena Szczerbicka
Cybernetics and Systems: An International Journal
Design of fault tolerant networks with agent-based simulation of physarum polycephalum
Matthias Becker
2011 IEEE Congress of Evolutionary Computation (CEC)
2010
Advancing Large Interactive Surfaces for Use in the Real World
Jens Teichert,
Marc Herrlich,
Benjamin Walther-Franks,
Lasse Schwarten,
Sebastian Feige,
Markus Krause,
Rainer Malaka
Advances in Human-Computer Interaction
User-defined gestures for connecting mobile phones, public displays, and tabletops
Christian Kray,
Daniel Nesbitt,
John Dawson,
Michael Rohs
Proceedings of the 12th international conference on Human computer interaction with mobile devices and services
Gestures can offer an intuitive way to interact with a computer. In this paper, we investigate the question whether gesturing with a mobile phone can help to perform complex tasks involving two devices. We present results from a user study, where we asked participants to spontaneously produce gestures with their phone to trigger a set of different activities. We investigated three conditions (device configurations): phone-to-phone, phone-to-tabletop, and phone to public display. We report on the kinds of gestures we observed as well as on feedback from the participants, and provide an initial assessment of which sensors might facilitate gesture recognition in a phone. The results suggest that phone gestures have the potential to be easily understood by end users and that certain device configurations and activities may be well suited for gesture control.
Semi-automatic zooming for mobile map navigation
Sven Kratz,
Ivo Brodien,
Michael Rohs
Proceedings of the 12th international conference on Human computer interaction with mobile devices and services
In this paper we present a novel interface for mobile map navigation based on Semi-Automatic Zooming (SAZ). SAZ gives the user the ability to manually control the zoom level of a Speed-Dependent Automatic Zooming (SDAZ) interface, while retaining the automatic zooming characteristics of that interface at times when the user is not explicitly controlling the zoom level. In a user study conducted using a realistic mobile map with a wide scale space, we compare SAZ with existing map interface techniques, multi-touch and SDAZ. We extend a dynamic state-space model for SDAZ to accept 2D tilt input for scroll rate and zoom level control and implement a dynamically zoomable map view with access to high-resolution map material for use in our study. The study reveals that SAZ performs significantly better than SDAZ and that SAZ is comparable in performance and usability to a standard multi-touch map interface. Furthermore, the study shows that SAZ could serve as an alternative to multi-touch as an input technique for mobile map interfaces.
Characteristics of pressure-based input for mobile devices
Craig Stewart,
Michael Rohs,
Sven Kratz,
Georg Essl
Proceedings of the 28th international conference on Human factors in computing systems
We conducted a series of user studies to understand and clarify the fundamental characteristics of pressure in user interfaces for mobile devices. We seek to provide insight to clarify a longstanding discussion on mapping functions for pressure input. Previous literature is conflicted about the correct transfer function to optimize user performance. Our study results suggest that the discrepancy can be explained by different signal conditioning circuitry and with improved signal conditioning the user-performed precision relationship is linear. We also explore the effects of hand pose when applying pressure to a mobile device from the front, the back, or simultaneously from both sides in a pinching movement. Our results indicate that grasping type input outperforms single-sided input and is competitive with pressure input against solid surfaces. Finally we provide an initial exploration of non-visual multimodal feedback, motivated by the desire for eyes-free use of mobile devices. The findings suggest that non-visual pressure input can be executed without degradation in selection time but suffers from accuracy problems.
Dance Pattern Recognition using Dynamic Time Warping
Henning Pohl,
Aristotelis Hadjakos
Proceedings of the 7th Sound and Music Computing Conference (SMC 2010)
In this paper we describe a method to detect patterns in dance movements. Such patterns can be used in the context of interactive dance systems to allow dancers to influence computational systems with their body movements. For the detection of motion patterns, dynamic time warping is used to compute the distance between two given movements. A custom threshold clustering algorithm is used for subsequent unsupervised classification of movements. For the evaluation of the presented method, a wearable sensor system was built. To quantify the accuracy of the classification, a custom label space mapping was designed to allow comparison of sequences with disparate label sets.
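The core distance computation described above can be sketched as a standard dynamic-time-warping recurrence; this is a simplified, one-dimensional illustration with an assumed absolute-difference cost, whereas the actual system compares multidimensional sensor traces:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D movement sequences.

    D[i, j] holds the minimal accumulated cost of aligning a[:i] with b[:j];
    each cell extends the cheapest of the three predecessor alignments."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```

A threshold clustering scheme, as in the paper, would then group movements whose pairwise DTW distance falls below a chosen threshold.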
Use the Force (or something) - Pressure and Pressure-Like Input for Mobile Music Performance
Georg Essl,
Michael Rohs,
Sven Kratz
Proceedings of the International Conference on New Interfaces for Musical Expression (NIME 2010)
Impact force is an important dimension for percussive musical instruments such as the piano. We explore three possible mechanisms how to get impact forces on mobile multi-touch devices: using built-in accelerometers, the pressure sensing capability of Android phones, and external force sensing resistors. We find that accelerometers are difficult to control for this purpose. Android's pressure sensing shows some promise, especially when combined with augmented playing technique. Force sensing resistors can offer good dynamic resolution but this technology is not currently offered in commodity devices and proper coupling of the sensor with the applied impact is difficult.
Extending the Virtual Trackball Metaphor to Rear Touch Input
Sven Kratz,
Michael Rohs
Proceedings of the 2010 IEEE Symposium on 3D User Interfaces (3DUI 2010)
Interaction with 3D objects and scenes is becoming increasingly important on mobile devices. We explore 3D object rotation as a fundamental interaction task. We propose an extension of the virtual trackball metaphor, which is typically restricted to a half sphere and single-sided interaction, to actually use a full sphere. The extension is enabled by a hardware setup called the "iPhone Sandwich," which allows for simultaneous front-and-back touch input. This setup makes the rear part of the virtual trackball accessible for direct interaction and thus achieves the realization of the virtual trackball metaphor to its full extent. We conducted a user study that shows that a back-of-device virtual trackball is as effective as a front-of-device virtual trackball and that both outperform an implementation of tilt-based input.
A $3 gesture recognizer: simple gesture recognition for devices equipped with 3D acceleration sensors
Sven Kratz,
Michael Rohs
Proceeding of the 14th international conference on Intelligent user interfaces
We present the $3 Gesture Recognizer, a simple but robust gesture recognition system for input devices featuring 3D acceleration sensors. The algorithm is designed to be implemented quickly in prototyping environments, is intended to be device-independent and does not require any special toolkits or frameworks. It relies solely on simple trigonometric and geometric calculations. A user evaluation of our system resulted in a correct gesture recognition rate of 80%, when using a set of 10 unique gestures for classification. Our method requires significantly less training data than other gesture recognizers and is thus suited to be deployed and to deliver results rapidly.
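The general approach of such template-based recognizers can be sketched as follows; this is not the published $3 algorithm (which additionally searches over rotations to align traces), and the resampling length, cost measure, and helper names below are illustrative assumptions:

```python
import numpy as np

def resample(trace, n=32):
    """Resample a (k, 3) acceleration trace to n points equidistant along its path,
    so traces of different lengths and speeds become comparable."""
    trace = np.asarray(trace, dtype=float)
    # cumulative arc length along the trace, starting at 0
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(trace, axis=0), axis=1))]
    t = np.linspace(0.0, d[-1], n)
    return np.column_stack([np.interp(t, d, trace[:, i]) for i in range(3)])

def classify(trace, templates):
    """Nearest-neighbor classification by mean pointwise distance to each
    centered, resampled template; templates maps label -> raw trace."""
    x = resample(trace)
    x -= x.mean(axis=0)  # translate centroid to the origin
    best, best_d = None, np.inf
    for label, tmpl in templates.items():
        y = resample(tmpl)
        y -= y.mean(axis=0)
        dist = np.linalg.norm(x - y, axis=1).mean()
        if dist < best_d:
            best, best_d = label, dist
    return best
```

With a handful of recorded templates per gesture class, `classify` returns the label of the closest template, mirroring the low-training-data design goal described above.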
Human Computation in Action
Markus Krause
IK'2010 Proceedings of the Interdisciplinary College
A multi-touch enabled steering wheel: exploring the design space
Max Pfeiffer,
Dagmar Kern,
Johannes Schöning,
Tanja Döring,
Antonio Kroeger,
Albrecht Schmidt
CHI '10 Extended Abstracts on Human Factors in Computing Systems
Frontiers of a Paradigm - Exploring Human Computation with Digital Games
Markus Krause,
Aneta Takhtamysheva,
Marion Wittstock,
Rainer Malaka
HComp'10 Proceedings of the ACM SIGKDD Workshop on Human Computation
Webpardy : Harvesting QA by HC
Hidir Aras,
Markus Krause,
Andreas Haller,
Rainer Malaka
HComp'10 Proceedings of the ACM SIGKDD Workshop on Human Computation
WorldCupinion: Experiences with an Android App for Real-Time Opinion Sharing during World Cup Soccer Games
Michael Rohs,
Sven Kratz,
Robert Schleicher,
Alireza Sahami,
Albrecht Schmidt
Research in the Large: Using App Stores, Markets and other wide distribution channels in UbiComp research. Workshop at Ubicomp 2010
Mobile devices are increasingly used in social networking applications. So far, there is little work on real-time emotion and opinion sharing in large loosely-coupled user communities. We present an Android app for giving real-time feedback during soccer games and for creating ad hoc fan groups. We discuss our experiences with deploying this app over four weeks during the 2010 soccer World Cup. We highlight challenges and opportunities we faced and give recommendations for future work in this area.
A Tabletop System for supporting Paper Prototyping of Mobile Interfaces
Benjamin Bähr,
Michael Rohs,
Sven Kratz
PaperComp 2010: 1st International Workshop on Paper Computing. Workshop at Ubicomp 2010
We present a tabletop-based system that supports rapid paper-based prototyping for mobile applications. Our system combines the possibility of manually sketching interface screens on paper with the ability to define dynamic interface behavior through actions on the tabletop. This not only allows designers to digitize interface sketches for paper prototypes, but also enables the generation of prototype applications able to run on target devices. By making physical and virtual interface sketches interchangeable, our system greatly enhances and speeds up the development of mobile applications early in the interface design process.
Natural User Interfaces in Mobile Phone Interaction
Sven Kratz,
Fabian Hemmert,
Michael Rohs
Workshop on Natural User Interfaces at CHI 2010
User interfaces for mobile devices move away from mainly button- and menu-based interaction styles and towards more direct techniques, involving rich sensory input and output. The recently proposed concept of Natural User Interfaces (NUIs) provides a way to structure the discussion about these developments. We examine how two-sided and around-device interaction, gestural input, and shape- and weight-based output can be used to create NUIs for mobile devices. We discuss the applicability of NUI properties in the context of mobile interaction.
A Data Management Framework Providing Online-Connectivity in Symbiotic Simulation
Sebastian Bohlmann,
Volkhard Klinger,
Helena Szczerbicka,
Matthias Becker
24th EUROPEAN Conference on Modelling and Simulation (ECMS), Simulation meets Global Challenges, Kuala Lumpur, Malaysia
Simulation Model for the Whole Life Cycle of the Slime Mold Dictyostelium Discoideum
Matthias Becker
Proceedings of the European conference on modeling and simulation (ECMS)
A simulation study of mechanisms of group selection of the slime mold Dictyostelium discoideum
Matthias Becker
2010 IEEE 14th International Conference on Intelligent Engineering Systems
2009
Bridging the gap between the Kodak and the Flickr generations: A novel interaction technique for collocated photo sharing
Christian Kray,
Michael Rohs,
Jonathan Hook,
Sven Kratz
Int. J. Hum.-Comput. Stud.
Passing around stacks of paper photographs while sitting around a table is one of the key social practices defining what is commonly referred to as the ‘Kodak Generation’. Due to the way digital photographs are stored and handled, this practice does not translate well to the ‘Flickr Generation’, where collocated photo sharing often involves the (wireless) transmission of a photo from one mobile device to another. In order to facilitate ‘cross-generation’ sharing without enforcing either practice, it is desirable to bridge this gap in a way that incorporates familiar aspects of both.
In this paper, we discuss a novel interaction technique that addresses some of the constraints introduced by current communication technology, and that enables photo sharing in a way, which resembles the passing of stacks of paper photographs. This technique is based on dynamically generated spatial regions around mobile devices and has been evaluated through two user studies. The results we obtained indicate that our technique is easy to learn and as fast, or faster than, current technology such as transmitting photos between devices using Bluetooth. In addition, we found evidence of different sharing techniques influencing social practice around photo sharing. The use of our technique resulted in a more inclusive and group-oriented behavior in contrast to Bluetooth photo sharing, which resulted in a more fractured setting composed of sub-groups.
Impact of item density on the utility of visual context in magic lens interactions
Michael Rohs,
Robert Schleicher,
Johannes Schöning,
Georg Essl,
Anja Naumann,
Antonio Krüger
Personal Ubiquitous Comput.
This article reports on two user studies investigating the effect of visual context in handheld augmented reality interfaces. A dynamic peephole interface (without visual context beyond the device display) was compared to a magic lens interface (with video see-through augmentation of external visual context). The task was to explore items on a map and look for a specific attribute. We tested different sizes of visual context as well as different numbers of items per area, i.e. different item densities. Hand motion patterns and eye movements were recorded. We found that visual context is most effective for sparsely distributed items and gets less helpful with increasing item density. User performance in the magic lens case is generally better than in the dynamic peephole case, but approaches the performance of the latter the more densely the items are spaced. In all conditions, subjective feedback indicates that participants generally prefer visual context over the lack thereof. The insights gained from this study are relevant for designers of mobile AR and dynamic peephole interfaces, involving spatially tracked personal displays or combined personal and public displays, by suggesting when to use visual context.
Mobile phones offer an attractive platform for interactive music performance. We provide a theoretical analysis of the sensor capabilities via a design space and show concrete examples of how different sensors can facilitate interactive performance on these devices. These sensors include cameras, microphones, accelerometers, magnetometers and multitouch screens. The interactivity through sensors in turn informs aspects of live performance as well as composition though persistence, scoring, and mapping to musical notes or abstract sounds.
PhotoMap: Using Spontaneously Taken Images of Public Maps for Pedestrian Navigation Tasks on Mobile Devices
Johannes Schöning,
Antonio Krüger,
Keith Cheverst,
Michael Rohs,
Markus Löchtefeld,
Faisal Taher
Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services
In many mid- to large-sized cities public maps are ubiquitous. One can also find a great number of maps in parks or near hiking trails. Public maps help to facilitate orientation and provide special information not only to tourists but also to locals who just want to look up an unfamiliar place while on the go. These maps offer many advantages compared to mobile maps from services like Google Maps Mobile or Nokia Maps. They often show local landmarks and sights that are not shown on standard digital maps. Often these 'You are here' (YAH) maps are adapted to a special use case, e.g. a zoo map or a hiking map of a certain area. Being designed for a specific purpose, these maps are often aesthetically well designed and their usage is therefore more pleasant. In this paper we present a novel technique and application called PhotoMap that uses images of 'You are here' maps taken with a GPS-enhanced mobile camera phone as background maps for on-the-fly navigation tasks. We discuss different approaches to the main challenge, namely helping the user to properly georeference the taken image with sufficient accuracy to support pedestrian navigation tasks. We present a study that discusses the suitability of various public maps for this task and we evaluate whether these georeferenced photos can be used for navigation on GPS-enabled devices.
HoverFlow: Expanding the Design Space of Around-Device Interaction
Sven Kratz,
Michael Rohs
Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services
In this paper we explore the design space of around-device interaction (ADI). This approach seeks to expand the interaction possibilities of mobile and wearable devices beyond the confines of the physical device itself to include the space around it. This enables rich 3D input, comprising coarse movement-based gestures, as well as static position-based gestures. ADI can help to solve occlusion problems and scales down to very small devices. We present a novel around-device interaction interface that allows mobile devices to track coarse hand gestures performed above the device's screen. Our prototype uses infrared proximity sensors to track hand and finger positions in the device's proximity. We present an algorithm for detecting hand gestures and provide a rough overview of the design space of ADI-based interfaces.
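The abstract mentions detecting coarse hand gestures from infrared proximity sensors. As a rough illustration of how such sensor time series can be turned into swipe events, here is a minimal, hypothetical threshold-based sketch; it is not the algorithm from the paper, and the function name, threshold, and sensor layout (a single left-to-right row of normalized readings) are assumptions for illustration only.

```python
def detect_swipe(samples, threshold=0.5):
    """Toy sketch of coarse hand-gesture detection over a row of
    proximity sensors (indexed left to right). Each sample is a list
    of normalized sensor readings at one time step; a swipe direction
    is inferred from the order in which sensors first see the hand.
    Purely illustrative -- not the HoverFlow algorithm itself."""
    peak_times = {}
    for t, reading in enumerate(samples):
        for i, value in enumerate(reading):
            if value > threshold and i not in peak_times:
                peak_times[i] = t  # first time sensor i detects the hand
    if len(peak_times) < 2:
        return None  # not enough sensors triggered to infer a direction
    order = [s for s, _ in sorted(peak_times.items(), key=lambda kv: kv[1])]
    if order == sorted(order):
        return "swipe-right"
    if order == sorted(order, reverse=True):
        return "swipe-left"
    return None  # ambiguous activation order
```

A real implementation would additionally need debouncing, per-sensor calibration, and handling of static (position-based) gestures, which the paper also covers.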
Improving the Communication of Spatial Information in Crisis Response by Combining Paper Maps and Mobile Devices
Johannes Schöning,
Michael Rohs,
Antonio Krüger,
Christoph Stasch
Mobile Response
Efficient and effective communication between mobile units and the central emergency operation center is a key factor to respond successfully to the challenges of emergency management. Nowadays, the only ubiquitously available modality is a voice channel through mobile phones or radio transceivers. This makes it often very difficult to convey exact geographic locations and can lead to misconceptions with severe consequences, such as a fire brigade heading to the right street address in the wrong city. In this paper we describe a handheld augmented reality approach to support the communication of spatial information in a crisis response scenario. The approach combines mobile camera devices with paper maps to ensure a quick and reliable exchange of spatial information.
Impact of Item Density on Magic Lens Interactions
Michael Rohs,
Georg Essl,
Johannes Schöning,
Anja Naumann,
Robert Schleicher,
Antonio Krüger
Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services
We conducted a user study to investigate the effect of visual context in handheld augmented reality interfaces. A dynamic peephole interface (without visual context beyond the device display) was compared to a magic lens interface (with video see-through augmentation of external visual context). The task was to explore objects on a map and look for a specific attribute shown on the display. We tested different sizes of visual context as well as different numbers of items per area, i.e. different item densities. We found that visual context is most effective for sparse item distributions and the performance benefit decreases with increasing density. User performance in the magic lens case approaches the performance of the dynamic peephole case the more densely spaced the items are. In all conditions, subjective feedback indicates that participants generally prefer visual context over the lack thereof. The insights gained from this study are relevant for designers of mobile AR and dynamic peephole interfaces by suggesting when external visual context is most beneficial.
Using Hands and Feet to Navigate and Manipulate Spatial Data
Johannes Schöning,
Florian Daiber,
Antonio Krüger,
Michael Rohs
Proceedings of the 27th international conference extended abstracts on Human factors in computing systems
We demonstrate how multi-touch hand gestures in combination with foot gestures can be used to perform navigation tasks in interactive systems. The geospatial domain is an interesting example to show the advantages of combining both modalities, because the complex user interfaces of common Geographic Information Systems (GIS) require a high degree of expertise from their users. Recent developments in interactive surfaces that enable the construction of low-cost multi-touch displays, together with relatively cheap sensor technology to detect foot gestures, allow a deep exploration of these input modalities for GIS users with medium or low expertise. In this paper, we provide a categorization of multi-touch hand and foot gestures for the interaction with spatial data on a large-scale interactive wall. In addition, we show with an initial evaluation how these gestures can improve the overall interaction with spatial information.
Map Torchlight: A Mobile Augmented Reality Camera Projector Unit
Johannes Schöning,
Michael Rohs,
Sven Kratz,
Markus Löchtefeld,
Antonio Krüger
Proceedings of the 27th international conference extended abstracts on Human factors in computing systems
The advantages of paper-based maps have been utilized in the field of mobile Augmented Reality (AR) in the last few years. Traditional paper-based maps provide high-resolution, large-scale information with zero power consumption. There are numerous implementations of magic lens interfaces that combine high-resolution paper maps with dynamic handheld displays. From an HCI perspective, the main challenge of magic lens interfaces is that users have to switch their attention between the magic lens and the information in the background. In this paper, we attempt to overcome this problem by using a lightweight mobile camera projector unit to augment the paper map directly with additional information. The "Map Torchlight" is tracked over a paper map and can precisely highlight points of interest, streets, and areas to give directions or other guidance for interacting with the map.
Playful tagging: folksonomy generation using online games
Markus Krause,
Hidir Aras
WWW '09 Proceedings of the 18th international conference on World wide web
Games for Games
Aneta Takhtamysheva,
Robert Porzel,
Markus Krause
HComp'09 Proceedings of the ACM SIGKDD Workshop on Human Computation
LittleProjectedPlanet: An Augmented Reality Game for Camera Projector Phones
Markus Löchtefeld,
Johannes Schöning,
Michael Rohs,
Antonio Krüger
Workshop on Mobile Interaction with the Real World (MIRW at MobileHCI 2009), Bonn, Germany, September 15, 2009
With the miniaturization of projection technology, the integration of tiny projection units, normally referred to as pico projectors, into mobile devices is no longer fiction. Such integrated projectors could make mobile projection ubiquitous within the next few years. These phones will soon have the ability to project large-scale information onto any surface in the real world. By doing so, the interaction space of the mobile device can be expanded to physical objects in the environment, and this can support interaction concepts that are not even possible on modern desktop computers today. In this paper, we explore the possibilities of camera projector phones with a mobile adaptation of the PlayStation 3 game LittleBigPlanet. The camera projector unit is used to augment the hand drawings of a user with an overlay displaying the physical interaction of virtual objects with the real world. Players can sketch a 2D world on a sheet of paper or use an existing physical configuration of objects and let the physics engine simulate physical processes in this world to achieve game goals.
Unobtrusive Tabletops: Linking Personal Devices with Regular Tables
Sven Kratz,
Michael Rohs
Workshop Multitouch and Surface Computing at CHI'09
In this paper we argue that for wide deployment, interactive surfaces should be embedded in real environments as unobtrusively as possible. Rather than deploying dedicated interactive furniture, in environments such as pubs, cafés, or homes it is often more acceptable to augment existing tables with interactive functionality. One example is the use of robust camera-projector systems in real-world settings in combination with spatially tracked touch-enabled personal devices. This retains the normal usage of tabletop surfaces, solves privacy issues, and allows for storage of media items on the personal devices. Moreover, user input can easily be tracked with high precision and low latency and can be attributed to individual users.
TaxiMedia: An Interactive Context-Aware Entertainment and Advertising System
Florian Alt,
Alireza Sahami Shirazi,
Max Pfeiffer,
Paul Holleis,
Albrecht Schmidt
2nd Pervasive Advertising Workshop at Informatics 2009
Squeezing the Sandwich: A Mobile Pressure-Sensitive Two-Sided Multi-Touch Prototype
Georg Essl,
Michael Rohs,
Sven Kratz
Demonstration at the 22nd Annual ACM Symposium on User Interface Software and Technology (UIST), Victoria, BC, Canada
Two-sided pressure input is common in everyday interactions such as grabbing, sliding, twisting, and turning an object held between thumb and index finger. We describe and demonstrate a research prototype that allows for two-sided multi-touch sensing with continuous pressure input at interactive rates, and we explore early ideas of interaction techniques that become possible with this setup. The advantage of two-sided pressure interaction is that it enables high degree-of-freedom input locally. Hence, rather complex yet natural interactions can be designed using little finger motion and device space.
On Classification Approaches for Misbehavior Detection in Wireless Sensor Networks
Matthias Becker,
Martin Drozda,
Sven Schaust,
Sebastian Bohlmann,
Helena Szczerbicka
Journal of Computers
Tread profile optimization for tires with multiple pitch tracks
Matthias Becker,
Sebastian Jaschke,
Helena Szczerbicka
Proceedings of the IEEE 13th international conference on Intelligent Engineering Systems
Quality control of a light metal die casting process using artificial neural networks
Matthias Becker
2009 IEEE International Conference on Computational Cybernetics (ICCC)
2008
Group Coordination and Negotiation through Spatial Proximity Regions around Mobile Devices on Augmented Tabletops
Christian Kray,
Michael Rohs,
Jonathan Hook,
Sven Kratz
Horizontal Interactive Human Computer Systems, 2008. TABLETOP 2008. 3rd IEEE International Workshop on
Negotiation and coordination of activities involving a number of people can be a difficult and time-consuming process, even when all participants are collocated. We propose the use of spatial proximity regions around mobile devices on a table to significantly reduce the effort of proposing and exploring content within a group of collocated people. In order to determine the location of devices on ordinary tables, we developed a tracking mechanism for a camera-projector system that uses dynamic visual markers displayed on the screen of a device. We evaluated our spatial proximity region based approach using a photo-sharing application for people seated around a table. The tabletop provides a frame of reference in which the spatial arrangement of devices signals the coordination state to the users. The results from the study indicate that the proposed approach facilitates coordination in several ways, for example, by allowing for simultaneous user activity and by reducing the effort required to achieve a common goal. Our approach reduced the task completion time by 43% and was rated as superior in comparison to other established techniques.
Designing Low-Dimensional Interaction for Mobile Navigation in 3D Audio Spaces
Till Schäfers,
Michael Rohs,
Sascha Spors,
Alexander Raake,
Jens Ahrens
34th International Conference of the Audio Engineering Society (AES 2008), Jeju Island, Korea, August 28-30, 2008
In this paper we explore spatial audio as a new design space for applications like teleconferencing and audio stream management on mobile devices. Especially in conjunction with input techniques using motion-tracking, the interaction has to be thoroughly designed in order to allow low-dimensional input devices like gyroscopic sensors to be used for controlling the rather complex spatial setting of the virtual audio space. We propose a new interaction scheme that allows the mapping of low-dimensional input data to navigation of a listener within the spatial setting.
Sensing-Based Interaction for Information Navigation on Handheld Displays
Michael Rohs,
Georg Essl
Advances in Human-Computer Interaction Volume 2008 (2008)
Information navigation on handheld displays is characterized by the small display dimensions and limited input capabilities of today’s mobile devices. Special strategies are required to help users navigate to off-screen content and develop awareness of spatial layouts despite the small display. Yet, handheld devices offer interaction possibilities that desktop computers do not. Handheld devices can easily be moved in space and used as a movable window into a large virtual workspace. We investigate different information navigation methods for small-scale handheld displays using a range of sensor technologies for spatial tracking. We compare user performance in an abstract map navigation task and discuss the tradeoffs of the different sensor and visualization techniques.
Target Acquisition with Camera Phones when used as Magic Lenses
Michael Rohs,
Antti Oulasvirta
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
When camera phones are used as magic lenses in handheld augmented reality applications involving wall maps or posters, pointing can be divided into two phases: (1) an initial coarse physical pointing phase, in which the target can be directly observed on the background surface, and (2) a fine-control virtual pointing phase, in which the target can only be observed through the device display. In two studies, we show that performance cannot be adequately modeled with standard Fitts' law, but can be adequately modeled with a two-component modification. We chart the performance space and analyze users' target acquisition strategies in varying conditions. Moreover, we show that the standard Fitts' law model does hold for dynamic peephole pointing where there is no guiding background surface and hence the physical pointing component of the extended model is not needed. Finally, implications for the design of magic lens interfaces are considered.
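The two-phase pointing process described above can be made concrete with a small numerical sketch. The exact form of the paper's two-component model is not given in the abstract, so the decomposition below (a physical phase indexed by target distance relative to an assumed lens size, plus a virtual phase indexed by lens size relative to target width) and all coefficient values are illustrative assumptions, not fitted results from the studies.

```python
import math

def fitts_mt(D, W, a=0.1, b=0.2):
    """Standard Fitts' law: movement time for a target at
    distance D with width W (a, b are empirical constants)."""
    return a + b * math.log2(D / W + 1)

def two_component_mt(D, S, W, a=0.1, b_phys=0.15, b_virt=0.25):
    """Hypothetical two-component model for magic-lens pointing:
    a coarse physical phase guided by the background surface
    (distance D relative to lens size S) plus a fine virtual
    phase on the device display (lens size S relative to target
    width W). Coefficients are illustrative, not fitted values."""
    physical = b_phys * math.log2(D / S + 1)  # coarse device movement
    virtual = b_virt * math.log2(S / W + 1)   # fine on-screen acquisition
    return a + physical + virtual
```

Under a model of this shape, shrinking the lens size S shifts difficulty from the physical phase to the virtual phase, which is the kind of trade-off the two-phase account predicts and a single-component Fitts model cannot express.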
Improving Interaction with Virtual Globes Through Spatial Thinking: Helping Users Ask "Why?"
Johannes Schöning,
Brent Hecht,
Martin Raubal,
Antonio Krüger,
Meredith Marsh,
Michael Rohs
Proceedings of the 13th International Conference on Intelligent User Interfaces
Virtual globes have progressed from little-known technology to broadly popular software in a mere few years. We investigated this phenomenon through a survey and discovered that, while virtual globes are en vogue, their use is restricted to a small set of tasks so simple that they do not involve any spatial thinking. Spatial thinking requires that users ask "what is where" and "why"; the most common virtual globe tasks only include the "what". Based on the results of this survey, we have developed a multi-touch virtual globe derived from an adapted virtual globe paradigm designed to widen the potential uses of the technology by helping its users to inquire about both the "what is where" and "why" of spatial distribution. We do not seek to provide users with full GIS (geographic information system) functionality, but rather we aim to facilitate the asking and answering of simple "why" questions about general topics that appeal to a wide virtual globe user base.
The Design Space of Mobile Phone Input Techniques for Ubiquitous Computing
Rafael Ballagas,
Michael Rohs,
Jennifer Sheridan,
Jan Borchers
In: Joanna Lumsden (Ed.): Handbook of Research on User Interface Design and Evaluation for Mobile Technologies. IGI Global, Hershey, PA, USA, 2008. ISBN: 978-1-59904-871-0
The mobile phone is the first truly pervasive computer. In addition to its core communications functionality, it is increasingly used for interaction with the physical world. This chapter examines the design space of input techniques using established desktop taxonomies and design spaces to provide an in-depth discussion of existing interaction techniques. A new five-part spatial classification is proposed for ubiquitous mobile phone interaction tasks discussed in our survey. It includes supported subtasks (position, orient, and selection), dimensionality, relative vs. absolute movement, interaction style (direct vs. indirect), and feedback from the environment (continuous vs. discrete). Key design considerations are identified for deploying these interaction techniques in real-world applications. Our analysis aims to inspire and inform the design of future smart phone interaction techniques.
User Detection for a Multi-touch Table via Proximity Sensors
Jens Teichert,
Marc Herrlich,
Benjamin Walther-Franks,
Lasse Schwarten,
Markus Krause
Proceedings of the IEEE Tabletops and Interactive Surfaces
Multitouch Motion Capturing
Markus Krause,
Marc Herrlich,
Lasse Schwarten,
Jens Teichert,
Benjamin Walther-Franks
Proceedings of the IEEE Tabletops and Interactive Surfaces
Multitouch Interface Metaphors for 3D Modeling
Marc Herrlich,
Markus Krause,
Lasse Schwarten,
Jens Teichert,
Benjamin Walther-Franks
Proceedings of the IEEE Tabletops and Interactive Surfaces
Microphone as Sensor in Mobile Phone Performance
Ananya Misra,
Georg Essl,
Michael Rohs
Proceedings of the 8th International Conference on New Interfaces for Musical Expression (NIME 2008), Genova, Italy, June 5-7, 2008
Many mobile devices, specifically mobile phones, come equipped with a microphone. Microphones are high-fidelity sensors that can pick up sounds relating to a range of physical phenomena. Using simple feature extraction methods, parameters can be found that sensibly map to synthesis algorithms to allow expressive and interactive performance. For example, blowing noise can be used as a wind instrument excitation source. Other types of interactions, such as striking, can also be detected via microphones. Hence the microphone, in addition to allowing literal recording, serves as an additional source of input to the developing field of mobile phone performance.
Spatial Authentication on Large Interactive Multi-Touch Surfaces
Johannes Schöning,
Michael Rohs,
Antonio Krüger
Adjunct Proceedings of the 3rd IEEE Workshop on Tabletops and Interactive Surfaces (IEEE Tabletop 2008), Amsterdam, the Netherlands, October 1-3, 2008
The exploitation of finger and hand tracking technology based on infrared light, such as FTIR, Diffused Illumination (DI) or Diffused Surface Illumination (DSI), has enabled the construction of large-scale, low-cost, interactive multi-touch surfaces. In this context, access and security problems arise if larger teams operate these surfaces with different access rights. The team members might have several levels of authority or specific roles, which determine what functions and objects they are allowed to access via the multi-touch surface. In this paper we present first concepts and strategies to authenticate and interact with subregions of a large-scale multi-touch wall.
A GPS Tracking Application with a Tilt- and Motion-Sensing Interface
Michael Mock,
Michael Rohs
Workshop on Mobile and Embedded Interactive Systems (MEIS at Informatik 2008), Munich, Germany, September 11, 2008
Combining GPS tracks with semantic annotations is the basis for large data analysis tasks that give insight into the movement behavior of populations. In this paper, we present a first prototype implementation of a GPS tracking application that aims at subsuming GPS tracking and manual annotation on a standard mobile phone. The main purpose of this prototype is to investigate its usability, which is achieved by a tilt- and motion-sensing interface. We provide a GPS diary function that visualizes GPS trajectories on a map, allows annotating the trajectory, and navigating through the trajectory by moving and tilting the mobile phone. We present the design of our application and report on the very first user experiences.
Navigating Dynamically-Generated High Quality Maps on Tilt-Sensing Mobile Devices
Sven Kratz,
Michael Rohs
Workshop on Mobile and Embedded Interactive Systems (MEIS at Informatik 2008), Munich, Germany, September 11, 2008
On mobile devices, navigating in high-resolution and high-density 2D information spaces, such as geographic maps, is a common and important task. In order to support this task, we expand on work done in the areas of tilt-based browsing on mobile devices and speed-dependent automatic zooming in the traditional desktop environment to create an efficient interface for browsing high-volume map data at a wide range of scales. We also discuss infrastructure aspects, such as streaming 2D content to the device and efficiently rendering it on the display, using standards such as Scalable Vector Graphics (SVG).
Mobile Interaction with the "Real World"
Johannes Schöning,
Michael Rohs,
Antonio Krüger
Workshop on Mobile Interaction with the Real World (MIRW at MobileHCI 2008), Amsterdam, The Netherlands, September 2, 2008
Real-world objects (and the world) are usually not flat. It is unfortunate, then, that mobile augmented reality (AR) applications often concentrate on the interaction with 2D objects. Typically, 2D markers are required to track mobile devices relative to the real-world objects to be augmented, and the interaction with these objects is normally limited to the fixed plane in which these markers are located. Using platonic solids, we show how to easily extend the interaction space to tangible 3D models. In particular, we present a proof-of-concept example in which users interact with a 3D paper globe using a mobile device that augments the globe with additional information. (In other words, mobile interaction with the "real world".) We believe that this particular 3D interaction with a paper globe can be very helpful in educational settings, as it allows pupils to explore our planet in an easy and intuitive way. An important aspect is that using the real shape of the world can help to correct many common geographic misconceptions that result from the projection of the earth's surface onto a 2D plane.
Photomap: Snap, Grab and Walk away with a "YOU ARE HERE" Map
Keith Cheverst,
Johannes Schöning,
Antonio Krüger,
Michael Rohs
Workshop on Mobile Interaction with the Real World (MIRW at MobileHCI 2008), Amsterdam, The Netherlands, September 2, 2008
One compelling scenario for the use of GPS-enabled phones is support for navigation, e.g. enabling a user to glance down at the screen of her mobile phone in order to be reassured that she is indeed located where she thinks she is. While service-based approaches to support such navigation tasks are becoming increasingly available, whereby a user downloads (for a fee) a relevant map of her current area onto her GPS-enabled phone, the approach is often far from ideal. Typically, the user is unsure as to the cost of downloading the map (especially when she is in a foreign country), and such maps are highly generalised and may not match the user's current activity and needs. For example, rather than requiring a standard map of the area on a mobile device, the user may simply require a map of a university campus with all departments, or a map showing footpaths around the area in which she is currently trekking. Indeed, one will often see such specialised maps on public signs situated where they may be required (in a just-in-time sense), and it is interesting to consider how one might enable users to walk up to such situated signs and use their mobile phone to 'take away' the map presented in order to use it to assist their ongoing navigation activity. In this paper, we are interested in a subset of this problem space in which the user 'grabs' a map shown on a public display by taking a photograph of it and using it as a digital map on her mobile phone. We present two different scenarios for our new application called PhotoMap: in the first one we have full control over the map design process (e.g. we are able to place markers); in the second scenario we use the map as it is and appropriate it for further navigation use.
Using Mobile Phones to Spontaneously Authenticate and Interact with Multi-Touch Surfaces
Johannes Schöning,
Michael Rohs,
Antonio Krüger
Proceedings of the Workshop on Designing Multi-Touch Interaction Techniques for Coupled Public and Private Displays (PPD at AVI 2008), Naples, Italy, May 31, 2008
The development of FTIR (Frustrated Total Internal Reflection) technology has enabled the construction of large-scale, low-cost, multi-touch displays. These displays—capable of sensing fingers, hands, and whole arms—have great potential for exploring complex data in a natural manner and easily scale in size and the number of simultaneous users. In this context, access and security problems arise if a larger team operates the surface with different access rights. The team members might have different levels of authority or specific roles, which determines what functions they are allowed to access via the multi-touch surface. In this paper we present first concepts and strategies to use a mobile phone to spontaneously authenticate and interact with sub-regions of a large-scale multi-touch wall.
Facilitating Opportunistic Interaction with Ambient Displays
Christian Kray,
Areti Galani,
Michael Rohs
Workshop on Designing and Evaluating Mobile Phone-Based Interaction with Public Displays at CHI 2008, Florence, Italy, April 5, 2008
Some public display systems provide information that is vital for people in their vicinity (such as departure times at airports and train stations) whereas other screens are more ambient (such as displays providing background information on exhibits in a museum). The question we are discussing in this paper is how to design interaction mechanisms for the latter, in particular how mobile phones can be used to enable opportunistic and leisurely interaction. We present results from an investigation into the use and perception of a public display in a café, and we derive some requirements for phone-based interaction with (ambient) public displays. Based on these requirements, we briefly evaluate three different interaction techniques.
Traffic analysis and classification with bio-inspired and classical algorithms in sensor networks
Matthias Becker,
Sebastian Bohlmann,
Sven Schaust
Proceedings of the 2008 International Symposium on Performance Evaluation of Computer and Telecommunication Systems (SPECTS 2008)
Comparing performance of misbehavior detection based on neural networks and AIS
Matthias Becker,
Martin Drozda,
Sebastian Jaschke,
Sven Schaust
2008 IEEE International Conference on Systems, Man and Cybernetics
Approaches to Analyze and Optimize Inventory-Controlled Service Systems using Taguchi Method and Ant Algorithms
Matthias Becker,
Helena Szczerbicka,
Honam Kim
2008 International Conference on Service Systems and Service Management
Performance of Security Mechanisms in Wireless Ad Hoc Networks
Matthias Becker,
Martin Drozda,
Sven Schaust
Workshop on Security and High Performance Computing Systems
2007
Performance of routing protocols for real wireless sensor networks
Matthias Becker,
Sven Schaust,
Eugen Wittmann
Proceedings of the 10th International Symposium on Performance Evaluation of Computer and Telecommunication Systems (SPECTS'07)
Neural Networks and Optimization Algorithms Applied for Construction of Low Noise Tread Profiles
Matthias Becker,
Helena Szczerbicka,
Michael Thomas
Cybernetics and Systems: An International Journal
Generating Interactive 3-D Models for Discrete-Event Modeling Formalisms
Matthias Becker
International Conference on Cyberworlds (CW'07), 2007
2006
Genetic algorithms for noise reduction in tire design
Matthias Becker
2006 IEEE International Conference on Systems, Man and Cybernetics
2005
Approaching Ad Hoc Wireless Networks with Autonomic Computing: A Misbehavior Perspective
Martin Drozda,
Helena Szczerbicka,
Thomas Bessey,
Matthias Becker,
Rainer Barton
Proc. 2005 International Symposium on Performance Evaluation of Computer and Telecommunication Systems (SPECTS'05)
Optimisation of buffer size in manufacturing systems using ant algorithms
Matthias Becker,
Helena Szczerbicka
Foundations of Control and Management Sciences
2003
Planning the Reconstruction of a Shiplift by Simulation of a Stochastic Petri Net Model
Matthias Becker,
Thomas Bessey
European Simulation Symposium
Modeling and simulation of a complete semiconductor manufacturing facility using Petri nets
Matthias Becker
IEEE Conference on Emerging Technologies and Factory Automation (ETFA'03), 2003
2002
Comparison of the modeling power of fluid stochastic Petri nets (FSPN) and hybrid Petri nets (HPN)
Matthias Becker,
Thomas Bessey
IEEE International Conference on Systems, Man and Cybernetics
On Modification In Petri Nets
Roger Jahns,
Matthias Becker,
Thomas Bessey,
Helena Szczerbicka
Proc. Symposium on Performance Evaluation of Computer and Telecommunication Systems (SPECTS)
Integrating Software Performance Evaluation in Software-Engineering
Matthias Becker,
Lutz Twele,
Helena Szczerbicka
First international conference on grand challenges for modeling and simulations at Western MultiConference
2001
Property-Conserving Transformations in PNiQ (Petri Nets including Queueing Networks)
Matthias Becker
Computer Science and Engineering: Invited Session on Modelling and Analysis based on Petri nets
2000
PNiQ: a concept for performability evaluation
Matthias Becker,
Helena Szczerbicka
System performance evaluation
1998
Genetic algorithms: a tool for modelling, simulation, and optimization of complex systems
Michael Syrjakow,
Helena Szczerbicka,
Matthias Becker
Cybernetics & Systems
Modeling and optimization of Kanban controlled manufacturing systems with GSPN including QN
Matthias Becker,
Helena Szczerbicka
1998 IEEE International Conference on Systems, Man, and Cybernetics
Combined Modeling with Generalized Stochastic Petri Nets including Queuing Nets
Matthias Becker,
Helena Szczerbicka
14th UK Computer and Telecommunications Performance Engineering Workshop, 1998
PNiQ: Generalized Stochastic Petri Nets including Queuing Networks
Matthias Becker,
Helena Szczerbicka
Advances in computer and information sciences' 98: ISCIS'98: proceedings of the 13th International Symposium on Computer and Information Sciences, 26-28 October 1998, Belek-Antalya, Turkey
Modeling a Kanban System with Two Product Types, Priorities, and Setup Times
Matthias Becker,
Alexander K Schömig
Operations Research Proceedings 1997