CHAPTER 3 LITERATURE REVIEW: TANGIBLE OBJECTS

Introduction

This chapter discusses literature on user-created tangible objects and on how tangible objects support interaction between humans and computers. Section 3.2 covers the concept of tangible objects as these relate to humans and computers. It also considers existing technologies shown to support the use of tangible objects in human-computer interaction activities. Particular attention is given to tangible objects that contain, point to, detect, measure, manipulate, and generate data. Systems are then discussed in which the relative positions of tangible objects are of particular significance. Finally, tangible object systems used in generating digital models are listed. Section 3.3 covers environments in which the user creates personally meaningful tangible objects, and motivates why the user should create her own tangible objects. Having considered tangible objects in relation to data and computer interaction, and having highlighted the need for user-created tangible objects, I then reiterate in Section 3.4 Ishii's (2009) omission of the object's origin from his basic TUI model. I also restate McCloud's (1994) and Jacucci's (2007) opinions, which can serve as system design guidelines when the objective is for the user to create personally meaningful objects. Section 3.5 concludes this chapter.

Nomenclature

In the literature, one physical object may be referred to as a three-dimensional (3D) object while, elsewhere, another is referred to as a spatial object. The question I address here is: “when should a physical object be referred to as a 3D object, and when should it be referred to as a spatial object?” The answer depends on the context in which the reference is made.
When only the physical properties of an object are considered, referring to the object in terms of its 3D parameters is sufficient: the object itself, as well as its position in space, is then adequately described in terms of three orthogonal axes. However, it may be more appropriate to refer to the object as a spatial entity when the object is discussed in relation to living organisms. In that context, it is not sufficient to describe only the position and orientation of the object relative to the subject; other properties of the object, such as its texture and colour, should also be described (Lannoch & Lannoch 1989). Once the spatial nature of an object has been acknowledged, such an object is, in the context of tangible user interfaces (TUIs), called a spatial TUI, which highlights the importance not only of the shape of the object but also of its location in space and its structure (Jacoby, Josman, Jacoby, Koike, Itoh, Kawai, Kitamura, Sharlin & Weiss 2006).
Since the current study considers an object in relation to an individual, the object is mostly referred to as a spatial entity. The answer to the question posed above is therefore that, in this study, it is appropriate to refer to a physical object as a spatial object.

Tangible objects and computer interaction

Subsection 3.2.1 discusses Ishii’s (2009) TUI model, which describes personal interaction with a computer using tangible objects. The varieties of meanings that objects hold are covered in Subsection 3.2.2. Subsection 3.2.3 distinguishes identity- and position-encoding mechanisms as either active or passive, and data exchange mechanisms as either tethered or untethered. Subsection 3.2.4 then gives an overview of gesture modalities that have been applied in support of human-computer interaction. Relationships that exist between data and tangible objects are discussed in Subsection 3.2.5. Subsection 3.2.6 concludes with a discussion on using tangible objects when creating digital models.

Ishii’s tangible user interface model

Dix, Finlay, Abowd and Beale (2004) describe human-computer interaction (HCI) as being concerned with the interaction that a user has with a computer in order to accomplish a task. According to Ghaoui (2005), and Sears and Jacko (2009), HCI is an interdisciplinary and multidisciplinary research domain that emerged from computing and includes contributions from (amongst others) computer science, psychology, cognitive science, ergonomics, sociology, engineering, education, graphic design, and industrial engineering. Ishii (2008a, 2009) reminds us that human-computer interaction design principles have progressed from requiring the user to remember and type commands (the so-called command user interface, or CUI) to pointing at a visible rendition of the command using a computer mouse and selecting it with the press of a mouse button (the so-called graphical user interface, or GUI).
The TUI provides an alternative to both the CUI and the GUI by taking the form of tangible objects that both represent digital data and operate as tools for the direct manipulation of that data. The TUI contrasts sharply with the GUI in that the GUI time-multiplexes a single, generic interface mechanism (the mouse) across all data, whereas the TUI makes provision for a dedicated, space-multiplexed interface for the data being manipulated. Another contrast is that the GUI is limited to intangible data representation, whereas the TUI supports tangible representation.
Figure 3-3 depicts Ishii’s basic TUI model. The model associates physical objects with digital data or digital computations, so that the objects serve as tangible representations of these digital entities. The model also makes provision for changes to the associated digital entities through manipulation of the physical objects (Ishii 2009).
Ishii extended the model to include three feedback loops. The first is passive and exists between the user and the object that the user manipulates. This feedback is in immediate tactile form and exists independently of a computer. A second, active feedback loop is between the user and a computer program: the output of the program changes as the user manipulates the physical objects, and these changes are fed back to the user in, for example, visual form. The third, active feedback loop is between a physical object and data. This data may represent a digital model or may be the result of computation, and can be used to affect physical properties of the object (Ishii 2009).
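To make these loops concrete, the following minimal Python sketch is my own illustration of one interaction cycle; none of the class or method names come from Ishii’s work or from any published system. It models a manipulation being sensed, the digital model responding with user-visible feedback, and the result actuating the object:

    # Hypothetical sketch of Ishii's (2009) three TUI feedback loops.

    class TangibleObject:
        def __init__(self, position=0.0):
            self.position = position          # pose sensed from the real object

        def actuate(self, new_position):
            # Third (active) loop: computation repositions or reshapes the object.
            self.position = new_position

    class DigitalModel:
        def __init__(self):
            self.value = 0.0

        def update(self, sensed_position):
            self.value = sensed_position * 2  # arbitrary stand-in computation

        def target_position(self):
            return min(self.value, 1.0)       # clamp to a physical limit

    def interaction_cycle(obj, model):
        # First (passive) loop: the user feels the object directly while
        # moving it; this tactile feedback involves no computation at all.
        sensed = obj.position

        # Second (active) loop: the manipulation updates the digital model
        # and the change is fed back to the user, here in textual form.
        model.update(sensed)
        print(f"display: model value is {model.value:.2f}")

        # Third (active) loop: the computed result drives actuation.
        obj.actuate(model.target_position())

    interaction_cycle(TangibleObject(position=0.3), DigitalModel())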
What Ishii’s (2009) TUI model does not reflect is how the tangible representation is conceived and by whom. Indeed, the majority of the literature discussed in this study does not indicate the user’s involvement in determining the tangible representation. It is therefore reasonable to assume that, in the majority of TUI-based systems, the user is not involved in the design of the tangible representation and has to adapt to a predetermined representation.
An alternative approach, investigated extensively in this thesis, considers the scenario where the user decides on the tangible representation. To this end, Ishii (2008b) describes an “organic” tangible representation that allows the user to shape material supplied by a system designer. In systems that incorporate organic tangible representation, the TUI system designer prescribes the basic properties of the tangible representation, yet the user remains free to change it. Piper, Ratti, and Ishii’s (2002) Illuminating Clay and Ishii, Ratti, Piper, Wang, Biderman, and Ben-Joseph’s (2004) SandScape are examples of TUI systems that include endlessly malleable materials to facilitate organic representations.
Chapter 6 describes my T-logo tangible programming system that includes certain aspects of Ishii’s organic tangible representation. When using the T-logo programming environment, the user is free to decide what materials to use when constructing the programming objects.
Fitzmaurice, Ishii and Buxton’s (1995) Graspable User Interfaces and Ishii and Ullmer’s (1997) Tangible Bits explored mechanisms that allow a person to directly “touch” data using graspable media. Ishii and Ullmer’s research categorised a user’s attention as being in either a “foreground” or a “background” state. They argued that when the user’s attention is in the foreground, the user focusses on the task; conversely, when it is in the background, the user’s attention is not centred on the task and she remains aware of her immediate surroundings (the periphery). Ishii and Ullmer viewed the two states as mutually exclusive. Their graspable media were designed for use at the centre of a user’s attention and their ambient media for use at the periphery. They also aimed to develop a type of human-computer interface that allows the user to “touch” data stored within the computer, which they dubbed a tangible user interface (Ishii & Ullmer 1997). Not only did they consider solid objects as potential tangible user interfaces, but they also considered fluid-like media such as audio waves, visible light, and flows of air, liquid, and gas, which Ishii (2009) called ambient media. These fluid-like media were recognised as possible TUIs, specifically for use in the background of a user’s attention. Ishii (2009) refers to objects that are both physical and graspable as tangible objects. My research considers a programming environment that requires focussed attention and the manipulation of solid objects.
A mapping between specific GUI elements and TUI elements emerged from research conducted during the 1990s. Table 3-1 gives a selection of these mappings while Figure 3-4 illustrates them.
According to Ishii (2008b), researchers developed TUIs that incorporate tangible materials whose shape is integral to the role of the TUI. An example of a TUI whose shape is significant to its meaning is the tangible equivalent of the GUI widget (see Figure 3-4, bottom right). In addition to the research into fluid-like TUIs for use at the background of a user’s attention, second-generation fluid-like TUIs were developed for applications at the centre of the user’s attention. Examples of second-generation TUIs include sand and clay. These fluid-like TUIs can be reshaped, with their digital representation changing at the same time. While some TUIs can be reshaped by the user, others can be reshaped by software through a process called actuation. Lumen (Poupyrev, Nashida, Maruyama, Rekimoto & Yamaji 2004) and Ohkubo, Ooide, and Nojima’s (2013) “smart hairs” are examples.
Of particular interest to my own study is Ishii’s consideration of objects that can be grasped and are found in the home or office environments. Ishii suggested that such objects can be used at the centre of a user’s attention. He and Ullmer were particularly interested in exploiting the rich affordances that physical objects offer. Ishii also considered the potential of architectural surfaces serving as tangible user interfaces, dubbing these interactive surfaces. Such surfaces were to be used at the centre of a user’s attention. Examples of interactive surfaces include those found as part of a building structure (an office wall is an example) and those that serve as furniture (the office desk, for example). Having now established that physical objects can represent and manipulate data, it is worthwhile to consider what meanings can be attributed to physical objects.

The meaning that objects hold

For a user, an object may hold personal meaning in that it facilitates the recall of a particular memory. Ullmer (1997) calls these “physical objects with embedded memories”. Ullmer and Ishii (2000) offer a seashell as an example of a personally meaningful object that helps its owner recall a holiday experience. They coined the term associative tangible user interfaces to describe objects used independently of others to represent digital information. The advantage of using a personally meaningful object, as opposed to an object selected by another person, is that the user does not have to form a new mental model associating the object with the information (Hoven & Eggen 2004). Streitz et al.’s (1999) InteracTable is an example of a system that represents information using personally meaningful objects.
To describe the multiple meanings an object may hold, Underkoffler and Ishii (1999) propose the design space illustrated in Figure 3-5. They view this object design space as a continuum. Positions along this continuum are identified where an object can be interpreted as a pure object, an attribute, a noun, a verb, or a reconfigurable tool. In addition to these interpretations, I propose that an object may represent a quantity (a numeric value). Such an object represents more than itself and yet does not exist in relation to another object. I therefore locate such an object in this design space between the object as a pure object and the object as an attribute. This position describes an object that represents a quantity that is optionally part of a set of discrete values. The following are examples of physical objects that represent discrete and continuous quantities, nouns, adjectives, and a verb.
A glass marble is an example of an object that represents a quantity in a set of discrete values. Here, a bag filled with glass marbles constitutes the set of discrete values, and one or more marbles represent a quantity. A quantity described by an object may also come from a set of continuous values. An example of an object that represents a quantity from a set of continuous values is Dietz and Eidelson’s (2009) SurfaceWare drinking glass. This drinking glass design varies the light reflected through the bottom surface according to the quantity of water in the glass, and the system can determine the quantity of water when the glass is combined with an appropriate sensing surface. Figure 3-6 illustrates the operating principle: when the content is below a predetermined level, light that enters from the bottom reflects back to a sensor; conversely, no light returns when the water is above this level.
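This principle reduces to a simple threshold test. The Python sketch below is my own hypothetical illustration of that test, not the published SurfaceWare implementation; the function name and threshold value are assumptions:

    # Hypothetical sketch of the SurfaceWare sensing principle
    # (Dietz & Eidelson 2009): light entering the glass bottom reflects
    # back to the sensor only while the content is below a set level.

    REFLECTION_THRESHOLD = 0.5   # assumed normalised sensor reading

    def content_below_level(sensor_reading):
        # A strong reflection means light returned, i.e. the liquid
        # sits below the predetermined level.
        return sensor_reading > REFLECTION_THRESHOLD

    for reading in (0.9, 0.1):
        state = "below" if content_below_level(reading) else "above"
        print(f"reading {reading}: content is {state} the level")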
The three fictitious genies Opo, Junar, and Seala (Mazalek 2001) are examples of objects that represent nouns. The glass bottles in Figure 3-7 represent the genie characters. Each design highlights the colour, texture, and form of the respective genie personality to help the user associate a bottle with a genie. The short and round Opo object is coloured yellow and green with a matte finish; this combination of form, texture, and colour reflects the dull and depressive Opo personality. Junar’s representation is angular and crackled, with pink and bright orange colours that reflect his abrasive personality. Seala is a water genie and her physical representation is blue, smooth, and tall, creating an impression of flowing water.
In Oh et al.’s (2013) Digital Dream Labs programming environment, tangible program elements interlock to express a program action. The elements on the left in Figure 3-8 represent a noun, two adjectives, and a verb, which can be mapped onto the continuum shown in Figure 3-5. A user constructs a program action by interlocking these elements as shown on the right, and the physical constraints limit a program action to this combination of program elements. My T-logo programming environment, discussed in Chapter 6, removes this constraint by accommodating multiple simultaneous instances of an object that represents a quantity. For example, when a quantity of 10 is called for in a program, the user is free to use any combination of objects that add up to this number, as the sketch below illustrates.
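As a minimal sketch of this idea (my own illustration; T-logo’s actual objects and resolution mechanism are described in Chapter 6), a required quantity can be satisfied by any combination of quantity objects that sums to it:

    # Hypothetical sketch: resolving a quantity from several tangible objects.

    def resolve_quantity(placed_values):
        # Each physical object carries a numeric value; the quantity a
        # program step receives is the sum of the objects the user placed.
        return sum(placed_values)

    # A quantity of 10 expressed in three different ways:
    print(resolve_quantity([10]))          # one object valued 10
    print(resolve_quantity([5, 5]))        # two objects valued 5
    print(resolve_quantity([4, 3, 2, 1]))  # four objects summing to 10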
My research considers objects that hold personal meaning for the user and are applied to computer programming. The two preceding subsections considered how objects represent data and the meanings that objects hold for the user. Still missing from the discussion is how the computer determines an object’s identity and position. To address this, Subsection 3.2.3 provides an overview of the technologies that feed this data to the computer.

Supportive technologies

Tangible interaction systems include mechanisms that exchange data between the system components and mechanisms that encode the identity and position of tangible objects. A range of technologies support these mechanisms. Figure 3-9 illustrates my stack and system perspectives on the role of supportive technologies within a three-component tangible interaction system. The data exchange mechanisms and the identity- and position-encoding mechanisms shown in Figure 3-9 are discussed below.
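Whatever the underlying technology, what these mechanisms ultimately deliver to the application is an identity and a position for each sensed object. The following Python sketch is my own hypothetical illustration of that data in its simplest form; the names and fields are not drawn from any particular system:

    # Hypothetical sketch of the data that identity- and position-encoding
    # mechanisms deliver to the application during each sensing cycle.

    from dataclasses import dataclass

    @dataclass
    class SensedObject:
        object_id: str      # from the identity-encoding mechanism (e.g. a tag)
        x: float            # from the position-encoding mechanism
        y: float
        orientation: float  # degrees; meaningful when rotation carries data

    def handle_frame(objects):
        # The data exchange mechanism (tethered or untethered) delivers
        # one such frame of sensed objects per cycle.
        for obj in objects:
            print(f"{obj.object_id} at ({obj.x:.1f}, {obj.y:.1f}), "
                  f"rotated {obj.orientation:.0f} degrees")

    handle_frame([SensedObject("marble-1", 0.2, 0.7, 0.0),
                  SensedObject("widget-A", 0.5, 0.5, 90.0)])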
