Lack of knowledge about how users adopt signifier-less designs


do users interact with swhidgets?

It remains unclear how many users actually use swhidgets and how swhidgets are discovered. To the best of my knowledge, there is very little public data on the current usage of swhidgets. The only data set I am aware of comes from a blog post reporting the proportion of users (among 6 tested) who knew how to interact with a subset of system swhidgets in iOS 11 [Ram17]. It suggests great differences in the knowledge of the different interactions: some were spontaneously used by all participants, while others were understood by none. Unfortunately, the author does not accurately describe the methodology and experimental procedure employed, which calls into question the generalizability of the results.
Academic research has mostly investigated the performance of novel edge-based interaction techniques, such as menus or keyboards [JB12; RT09; SLG13], that do not rely on a physical metaphor that could help users uncover these features. Similarly, Schramm et al. [SGC16] investigated the transition to expertise in hidden toolbars, a name describing various types of user interfaces that make functionality available only when users explicitly expose it through a dedicated interaction, including menu bars (revealing commands when the menu bar is clicked), the “charms” in Windows 8 [Hop19], and some swhidgets (typically, system swhidgets). Their work focuses on the performance of such interfaces, proposing four hidden toolbar designs (all different from actual swhidget designs) and comparing their performance in terms of selection time and learning of item locations. As a result, their work is informative regarding the performance of the hidden toolbar designs they propose, but is not adequate to fully understand how users currently interact with swhidgets.
Similarly, studies conducted on other related “hidden” or “expert” interaction techniques can help in understanding interaction with swhidgets. This is the case of the work conducted by Avery and Lank, who surveyed the adoption of three multitasking multi-touch gestures on the iPad [AL16]. Their results show that users have varying levels of knowledge and actual usage of these techniques, ranging from complete ignorance to full mastery of how and when to use them. They also show that despite being “hidden”, these gesture-based techniques can reach high levels of user awareness and willingness to use. Regarding keyboard shortcuts on the desktop, studies conducted on Microsoft Word [Lan+05; TWV13] revealed that users do not systematically use the hotkeys they know. This behavior may be explained in part by the fact that users often underestimate the potential benefits of hotkeys. This is however not the whole explanation, since users also often fail to use hotkeys when they know it would be faster, even under time pressure. By definition, swhidgets are instances of signifier-less designs, since their design provides no signifier, i.e., no perceptible indicator of the presence of the swhidget or of the possibility of making a swipe to reveal it (the notion of signifiers will be introduced in a broader context in Chapters 3 and 5, and the notion of signifier-less design in Chapter 6).
Some signifier-less designs lack signifiers because there seems to be no satisfying way of permanently displaying signifiers where the input method can be used [May+18]. This is however not the case of swhidgets, where a handle could easily be displayed on the movable interface elements to hint at their presence [RT09]. Indeed, a small dash-shaped handle used to be displayed near the bottom edge of the screen when the control panel was introduced in iOS, but more recent releases show it only on exceptional occasions (see Footnote 3), or use it as a replacement for the physical home button (typically on the iPhone X, which has no physical home button). Swhidgets are by default “hidden” under the screen bezel, yet their existence is not signified to users, possibly with the intention of minimizing visual clutter [SGC16] and keeping the UI clean in order to improve user experience and performance [Nor04]. The iOS Human Interface Guidelines echo these expected benefits in the themes of clarity and deference: “Content typically fills the entire screen, [. . . ] minimal use of bezels, gradients, and drop shadows keep the interface light and airy, while ensuring that content is paramount” [App19].
This focus on “clean” interfaces benefits all users, but swhidgets-aware users might additionally benefit from other aspects of the design. For instance, swhidgets sometimes provide a more uniform way to interact with the system as a whole than alternative designs, since they rely on gestures and metaphors similar to those of navigation gestures. However, none of these benefits seems to justify a total removal of signifiers. On the other hand, the discovery of a swhidget could also be a positive experience for users, and such a positive experience may justify the lack of signifiers, although there currently seems to be very little evidence in the HCI field supporting such a hypothesis.

discovery of swhidgets

As pointed out by Mayer et al., the swipe gestures of swhidgets are not visually communicated to users: instead, device manufacturers rely on dedicated tips and animations that are either shown to the user when setting up the device, or showcased on stage when the system is presented to the technology-oriented press [May+18]. A way to discover swhidgets that is better integrated in everyday interaction is to explore the interface by performing series of inputs and expecting the system to respond to them. Schramm et al. [SGC16] argue that, with edge-based interactions becoming more common, users might appropriate the physical metaphor of sliding objects and become more likely to expect swhidgets, thus exploring the operating system to discover them. The design guidelines and design languages (typically, Material Design on Android and the Human Interface Guidelines on iOS) eventually define a consistent interaction environment that promotes such metaphors and swipe-based interaction, which might lead users to discover these swhidgets through interactive exploration.
Moreover, swhidgets are by design exploration-friendly, since 1) they do not trigger commands directly but only reveal widgets, and it is thus safe to try revealing them; 2) testing for the presence of a swhidget has a low interaction cost, since it only requires sliding a finger slightly on a candidate surface or edge; and 3) in case of success, this test provides immediate feedback with a visibly moving element. However, Schramm et al. also acknowledge that one cannot expect novice users to guess, on their own, the availability of swhidgets or the actions used to access them. Therefore, users still need to have discovered a first swhidget before being able to reproduce a similar input somewhere else in the interface. In addition, this still requires users to explicitly explore the interface in the first place, while they may be too engaged in their tasks to do so, even if it would eventually improve their performance as users [CR87].
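The three properties above can be sketched as a toy model (illustrative Python, not actual iOS code; the threshold value and function names are hypothetical): dragging gives immediate, proportional visual feedback, and releasing early simply snaps the panel back without ever firing a command.

```python
# Hypothetical drag distance (in points) needed to fully reveal the panel.
REVEAL_THRESHOLD = 40.0

def track_drag(distance: float) -> dict:
    """Property 3: immediate feedback, the hidden panel follows the finger."""
    return {"panel_offset": max(0.0, min(distance, REVEAL_THRESHOLD)),
            "command_triggered": False}

def release_drag(distance: float) -> dict:
    """Properties 1 and 2: releasing before the threshold snaps the panel
    back at no cost, and even a full reveal never triggers a command by
    itself, it only exposes widgets that still require an explicit tap."""
    if distance >= REVEAL_THRESHOLD:
        return {"panel_offset": REVEAL_THRESHOLD, "revealed": True,
                "command_triggered": False}
    return {"panel_offset": 0.0, "revealed": False, "command_triggered": False}
```

In this sketch, no code path ever sets `command_triggered` to true, which is precisely what makes exploratory swiping safe.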
Another possible way to discover swhidgets is through accidental revelation, that is, when the swhidget is revealed although revealing it was not the user's intention. For instance, one of the iOS swhidgets that is likely to be accidentally revealed is the search view swhidget located at the top of some lists, which may be brought into view when the user overshoots while scrolling back to the top of the list. Other swhidgets may be less likely to be accidentally discovered. For instance, with the item swhidgets in the Mail application, there is no reason to expect users to perform a horizontal swipe on an item of a vertically scrollable list. Moreover, accidental activation can be confusing, and users might be unable to reproduce the input operation that triggered it. Finally, users might discover the existence of swhidgets from their own social network, typically through friends, colleagues or family. Indeed, the impact of witnessing others perform input mechanisms on the adoption of these mechanisms has already been observed in the context of keyboard shortcuts [Per+04].
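The overshoot mechanism behind this accidental revelation can be sketched as follows (a hedged toy model in Python; the sign convention, with the offset becoming negative when the user pulls past the top of the list, and the height value are assumptions for illustration, not actual iOS internals):

```python
# Hypothetical height (in points) of the search view hidden above the list.
SEARCH_VIEW_HEIGHT = 44.0

def visible_search_height(scroll_offset: float) -> float:
    """How much of the hidden search view is exposed for a given scroll
    offset. Normal scrolling (offset >= 0) shows nothing; overshooting
    past the top (offset < 0) progressively pulls the search view into
    view, which is how it can be revealed by accident."""
    overshoot = -scroll_offset if scroll_offset < 0 else 0.0
    return min(overshoot, SEARCH_VIEW_HEIGHT)
```

The asymmetry with item swhidgets is visible in this model: the revealing input (vertical overshoot) lies on the same axis as the routine task (vertical scrolling), whereas nothing in routine list use produces the horizontal swipe that reveals an item swhidget.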

general research questions identified

In this section, I recall and formalize the questions about the design of swhidgets that were briefly raised in the previous sections. These questions point at individual phenomena that need to be understood before a full understanding of the design of swhidgets can be reached. However, the scope of these phenomena extends far beyond the single case of swhidgets, and understanding each of these phenomena is a general topic in the field of HCI. It is therefore not possible, in this thesis, to address all these phenomena, nor to contribute directly to the understanding of all of them. The list of phenomena and research questions that I give below therefore defines a framework within which research on swhidgets has to be conducted, and highlights the benefits of studying swhidgets for improving general knowledge about human-computer interaction as a phenomenon [Bea04; HO17]. The next chapters will discuss basic notions implied by this framework.

Integration of swhidgets in relation with other interface features

We have identified three types of swhidgets (Section 2.1), in which specific interaction techniques (position and semantics of the swipe gesture) match domains of application of commands: system swhidgets provide system-wide commands and are attached to the screen bezels, a feature of the device itself; view swhidgets provide commands about the set of data displayed in a view and are revealed by scroll-like interactions with this view; item swhidgets require interacting with the item targeted by the command. This matching raises two types of questions, depending on whether its role is considered in a specific instance of interaction or in multiple instances of interaction across different applications.

overarching interface logic

This matching between interaction techniques and domains of application of commands is an overarching logic in the structure of the interface, a general principle ruling the design of the whole operating system and of specific applications. It suggests that users can learn this overarching logic (Section 2.5), and later use it to deduce where they should search for a specific command, even if the widget providing this command is hidden. Do users really exploit such an overarching logic? How do they learn it, and how do they think about exploiting it? What cognitive processes are activated to exploit this knowledge? These questions fit in the general HCI research about how users develop a conceptual understanding of the interface (as rules, strategies, mental models, etc.) and exploit it in their goal-oriented, problem-solving interactive behaviors [Nor88].
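As a hedged illustration, the matching just described between swhidget types, interaction techniques, and command domains can be written down as a simple lookup table (a Python sketch; the key and field names are mine, not from the system):

```python
# Illustrative encoding of the matching between the three swhidget types,
# the place where the swipe is performed, and the domain of the commands
# revealed. All strings below are my own paraphrases of the taxonomy.
SWHIDGET_TYPES = {
    "system": {"attached_to": "screen bezel",
               "gesture": "swipe from a screen edge",
               "commands": "system-wide"},
    "view":   {"attached_to": "a view",
               "gesture": "scroll-like interaction with the view",
               "commands": "about the data displayed in the view"},
    "item":   {"attached_to": "a list item",
               "gesture": "swipe on the item itself",
               "commands": "targeting that item"},
}

def where_to_search(commands: str) -> list:
    """The overarching logic read backwards: from the domain of a desired
    command, deduce which kind of place a hidden widget providing it
    should be searched for."""
    return [t for t, d in SWHIDGET_TYPES.items() if d["commands"] == commands]
```

The research questions above ask, in effect, whether users build and exploit something like this table, i.e., whether they run `where_to_search` mentally when looking for a hidden command.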
consistency with other interactions

The overarching interface logic is only one aspect of a consistent use of swhidgets in the design of the system and applications; other aspects of the design can also be consistent. For instance, the mailbox view in the Mail application (Figure 2.2-right) and the conversations view in the Message application both share a similar list layout with similar item swhidgets, although there are slight variations between the two designs. Can users notice this similarity, infer a pattern from it, and use their knowledge of the pattern to discover swhidgets in other applications? Are there aspects of the design, of the activity supported by an application, or of user skills that can help or prevent noticing the pattern or similarities with other interfaces? Beyond the consistency of uses of swhidgets in design, what role does the consistency of swhidgets with other types of interactions play, such as navigation gestures (Section 2.4), or the consistency of patterns that involve both swhidgets and other types of controls? How important is a consistent use of swhidgets in design for their adoption by users? These questions fit in the general topics of transfer of knowledge from other contexts of use and other benefits of consistency in user interfaces [Gru89].

benefits of a clean interface

Since the design of swhidgets seems to purposefully avoid the use of signifiers (Section 2.4), and since a possible reason for this is to reduce interface clutter, improve readability, and focus on content, it seems important to understand the benefits of having a “clean” interface and how these benefits are obtained.


Interaction with specific swhidgets

action-function coupling

Besides the existence of an overarching interface logic, is there something specific about how swhidgets’ designs connect the semantics of the interactions with the semantics of the commands they trigger? Swhidgets use an interface element that connects the domain of commands and the interaction technique by being more or less loosely related to both: a bezel, a view, or an item. But does the strength of this relation affect users’ ability to understand it, or to integrate this understanding in how they think about swhidgets and commands? Behind these questions lies the whole domain of HCI research on the consequences of relating the purpose or effect of an interaction with the way it is performed [WDO04; Dja+04].
physical metaphor

Swhidgets rely on the physical metaphor of sliding interface objects to reveal what is under them. Unlike action-function coupling, such metaphors are concerned with the way the interaction is performed, but not with the effect of the interaction or its purpose (besides “revealing some widgets”). As such, they fit in the general study of metaphors as a tool to help users make sense of the interface and learn how to use it by themselves. However, in the case of swhidgets, there is a little more to it, since the metaphor also contributes to unifying the design of swhidgets with other types of interactions promoted by the system (Section 2.4), and is thus related to consistency questions. The specific metaphor used for swhidgets also bridges the gap between uni-stroke gestures and direct manipulation, with consequences for the learnability of the gestures and for users’ understanding of what they did or are doing (feedback). It therefore has to be analyzed both in relation to the problem of learning how to interact with a system, and to the problem of facilitating interaction itself.

Discovery and adoption of Swhidgets in the long-term

multiple ways to do a task

Swhidgets can coexist with other methods for triggering the commands they provide, sometimes with differences in performance or ease of use (Section 2.2). This raises the general questions of which methods users prefer to use, and of which aspects of the methods, task, and context affect these preferences. But it also raises questions about the evolution of such preferences over time, as users get more familiar with the methods and better understand the advantages of each. In particular, are some methods supposed to replace others as users get more experienced, and if so, what aspects of the design of each interaction method make it better suited for different types of users?

reasons not to adopt

The example of hotkeys (Section 2.3) reveals that measuring users’ performance with an interaction technique and the proportion of users who know how to use it at a given level of performance is not enough to understand users’ adoption of the technique: we also need ways to better explain the non-usage behaviors observed, including reasons for users not to use a technique despite performing well with it. A study of swhidgets should thus also evaluate users’ perception of the technique’s performance, ease of use, reliability, mental and physical costs of activation, etc., to determine whether an inaccurate perception of these qualities justifies the non-usage of swhidgets.

Table of contents:

1 introduction 
1.1 Observations
1.2 A context favoring signifier-less designs
1.2.1 Diversification of interactive device types
1.2.2 Design philosophies
1.2.3 Lack of knowledge about how users adopt signifier-less designs
1.3 Approach and contributions
1.3.1 Approach
1.3.2 Contributions
1.4 Plan of the dissertation
2 swhidgets: description, analysis, and research questions 
2.1 Types of Swhidget
2.2 Roles of Swhidgets
2.3 Do users interact with Swhidgets?
2.4 Voluntarily signifier-less: why?
2.5 Discovery of Swhidgets
2.6 General research questions identified
2.6.1 Integration of swhidgets in relation with other interface features
2.6.2 Interaction with specific swhidgets
2.6.3 Discovery and adoption of Swhidgets in the long-term
2.6.4 Methodological aspects of the study of Swhidgets
2.7 How I Address These Research Questions
2.7.1 What is a signifier-less design and how swhidgets are different from other signifier-less designs?
2.7.2 Do users know swhidgets?
2.7.3 How can users know swhidgets despite their lack of signifiers?
2.7.4 What are the benefits of not providing signifiers?
3 affordances 
3.1 Gibsonian affordances
3.1.1 Origin of the concept in perception theory
3.1.2 Affordances and ecology
3.1.3 Tools and learning: Affordances change for equipped actors
3.1.4 Perception of affordances is learned
3.2 The role of affordance structures in HCI
3.2.1 Means-end relations
3.2.2 The problem of unclear relations between affordances
3.2.3 Instrumental affordances
3.3 Debated Aspects of Affordances in HCI
3.4 Debate 1: Meaningfulness of Affordances
3.4.1 Norman’s view: affordances as action possibilities only
3.4.2 Meaningfulness as potential
3.4.3 Norman’s view: Meaning is conveyed by signifiers
3.4.4 Meaningfulness from complementarity
3.5 Debate 2: Social Aspects of HCI Affordances
3.5.1 Norman’s view: Affordances, conventions and symbols
3.5.2 Gaver’s view: Culture only highlights some affordances
3.5.3 Gibson’s notion of Ecology
3.6 Debate 3: Symbols and Affordances in Human Displays
3.6.1 Symbols involved in affordances
3.6.2 From Gibson’s human displays to Norman’s signifiers: affording knowledge
3.7 Conclusion and recap
4 the seven stages of action 
4.1 The Seven Stages
4.2 Limitations and ambiguities of the model
4.2.1 Status of the model and nature of the stages
4.2.2 Distinction between intentions and sequences of actions
4.2.3 Focus on problem-solving rather than discovery
4.2.4 Focus on information used in short-term interaction
4.3 Conclusion
5 signifiers & design means 
5.1 Perception of design means
5.1.1 Affordances as output of perception
5.1.2 Sensory affordances
5.1.3 Gestalt Laws
5.2 Definition of Signifiers in Peirce’s semiotics
5.2.1 Components of a Sign: Signifier, Object, and Interpretant
5.2.2 Combination of Signs
5.2.3 Nature of the signifier-object relation
5.3 Interpretation of design means
5.3.1 Perceived affordances
5.3.2 Labels
5.3.3 Interface Metaphors
5.3.4 Natural Mappings
5.3.5 Interaction Frogger
5.3.6 Natural Signals
5.3.7 Feedback and feedforward
5.3.8 Constraints
5.4 Evaluation of design means
6 signifier-less designs 
6.1 Defining Signifier-less Designs
6.1.1 Edge cases and limitations of the definition
6.1.2 Directly signified affordances
6.2 Historical Signifier-less designs
6.2.1 Command-line interfaces
6.2.2 Signifier-less designs in Direct Manipulation
6.2.3 WIMP interfaces
6.2.4 Post-WIMP interfaces
6.3 Types of Signifier-less Design
7 discovering and adopting affordances 
7.1 About this chapter
7.2 current knowledge and skills
7.2.1 Domain of the knowledge and skills
7.2.2 Improving knowledge and skills in time
7.2.3 Dependency across levels
7.3 Interaction between current knowledge and design means
7.3.1 Current understanding affects the usefulness of means
7.3.2 Learnability of the interface
7.4 Motivations
7.4.1 Extrinsic vs. intrinsic motivation
7.4.2 Self-determination theory
7.4.3 Hassenzahl’s model of UX
7.4.4 Non-motivational causes
7.5 Interaction between current knowledge and motivation
7.5.1 Knowledge motivated by performance
7.5.2 The Knowledge-motivation loop
7.6 First contribution: Degree of Knowledge
7.7 Second contribution: Sources of Knowledge
7.7.1 Distance of the source of knowledge
7.7.2 Intentionality of the discovery
7.7.3 Sources of Knowledge as an extension of Design Means
7.8 Conclusion
8 two studies on users’ knowledge and reception of ios swhidgets 
8.1 The laboratory and online surveys
8.1.1 Rationale and objectives
8.1.2 Scope limitation
8.1.3 Applications and operations studied
8.2 Laboratory Study
8.2.1 Factorial design of the experimental tasks
8.2.2 Experimental tasks
8.2.3 Participants
8.2.4 Material and Apparatus
8.2.5 Protocol
8.2.6 Data Collection
8.2.7 Data processing
8.3 Results of The Laboratory Study
8.3.1 Participants
8.3.2 Completion of Operations
8.3.3 primary usage of input methods
8.3.4 Degree of knowledge of Swhidgets vs. Navigation
8.3.5 productive knowledge
8.3.6 Awareness and receptive knowledge
8.3.7 Preference for swhidgets
8.3.8 How participants discovered swhidgets
8.3.9 Participants’ reaction about swhidgets
8.4 Discussion of the Results of the Laboratory Study
8.4.1 Reflection on the study
8.4.2 When participants did not use swhidgets
8.4.3 Task-based vs. System-based Interaction Models
8.5 Study 2: Online Survey on Awareness, Usage and Discovery of swhidgets
8.5.1 Design and Procedure
8.5.2 Apparatus and Participants
8.6 Results of Online Study
8.6.1 Participants Background and Experience
8.6.2 General Usage
8.6.3 Awareness and Usage of Swhidgets
8.6.4 Discovery of Swhidgets
8.6.5 General perception of Swhidgets
8.6.6 Satisfaction of basic needs
8.7 Discussion of the results of the two Studies
8.7.1 Awareness of Swhidgets
8.7.2 Usage of (And Reasons Not to Use) Swhidgets
8.7.3 Discovery of Swhidgets
8.7.4 Future work
8.7.5 Dataset
9 perspectives and conclusion 
9.1 From results back to models
9.1.1 Transfer of Knowledge
9.1.2 Hedonic aspects of Swhidgets
9.2 Improving swhidgets
9.2.1 Revealing Swhidgets with Animated Transitions
9.2.2 Considerations for the Design of Animated Transitions
9.2.3 Proposed Animated Transitions
9.2.4 Testing the animations
9.3 Conclusion
bibliography
