What I’ve Learned #2: Sameness

No matter the culture or country, as people, we are all the same.

I learned this after participating in a large EU-funded project with Turkey. I had an amazing time, learned a lot, and made some incredible friends that I will keep forever. But after working with some of the students from Turkey, I realized that neither culture nor country matters when it comes to being able, or unable, to work with a person. It comes down to much more important elements of a person, such as morals, ethics, drive, focus, humor, or experience. For those reasons, I worked with some amazing people even though our backgrounds and cultures were widely different.

“When it gets more difficult, use both hands”

“…Exploring Bimanual Curve Manipulation”

by Russell Owen, Gordon Kurtenbach, George Fitzmaurice, Thomas Baudel, Bill Buxton (2005)

  1. Investigated “…relationship between bimanual (two-handed) manipulation and the cognitive aspects of task integration, divided attention and the epistemic action.”
  2. “We provide evidence that the bimanual technique has better performance than the unimanual technique and, as the task becomes more cognitively demanding, the bimanual technique exhibits even greater performance benefits.”
  3. Advantages of two-handed input:
    1. time-motion: reduced time-motion allows users to perform tasks faster and reduces task switching.
    2. existing skills: everyday two-handed actions as metaphors for computer interactions.
    3. expressiveness: ability to quickly manipulate, perceive, and evaluate the transformed data. Increases iteration speed and range in exploring solution space.
  4. When a user is working on a complicated task, two-handed input may further complicate the task (coordinating actions of both hands).
  5. “In our research we have concluded that in order for two-handed input to be effective it must be designed carefully.”
  6. “Switching Costs”: when there are multiple sources of information, we make choices about what to attend to and when.
  7. “Chunking tasks” so the user perceives the operations of the two hands as an integrated single activity (minimizes switching costs; see the sketch after this list).
  8. The difference between the 1-handed and 2-handed techniques may consist of the 1-handed technique’s time-motion costs and the 2-handed technique’s cognitive benefits.
  9. “We hypothesize that the cognitive benefits of the two handed technique can be attributed to epistemic action.”
    1. Epistemic Action: performed to uncover information that is hidden or hard to compute mentally. (Using fingers when counting, moving a chess piece to assess the move)
    2. Pragmatic Action: to bring the user closer to the goal by physical manipulation. (A particular goal cannot be accomplished in any way without this particular physical action.)
  10. Design Principles for Effective Two-Handed Input:
    1. task should be visually integrated (not allow divided visual attention between activities of 2 hands)
    2. task should be conceptually integrated (conceptualize operations of 2 hands as one action)
    3. task should employ integrated device spaces
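
A minimal sketch in Python of the chunking idea from #7: two pointer positions collapse into one “stretch” operation, so neither hand has a separate job to attend to. The function name and the mapping are my own invention, not from the paper.

    import math

    def bimanual_stretch(left, right):
        """Map two pointer positions to one scale + rotation transform.

        left, right: (x, y) positions of the non-dominant and dominant hand.
        Returns (scale, angle_radians) from the vector between the hands.
        """
        dx, dy = right[0] - left[0], right[1] - left[1]
        scale = math.hypot(dx, dy)   # distance between hands sets the scale
        angle = math.atan2(dy, dx)   # orientation between hands sets rotation
        return scale, angle

    print(bimanual_stretch((0, 0), (3, 4)))   # -> (5.0, 0.927...)

Both parameters emerge from a single gesture, which is the kind of visual and conceptual integration the design principles in #10 call for.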

My thoughts:

  • The extensive testing they did for their hypotheses, and the way they structured each hypothesis, make me think about how I should structure my thesis and subsequent testing. I have not thought much about testing at this point.
  • Intrigued by #9. Which type of action is used in the TRKBRD? Which type of action do I prefer to concentrate on? I’m afraid that just on this small topic alone, I could dig myself into a very deep hole of more research.
    • Actually, the TRKBRD is neither at this point. But if 2-handed gestures were possible with the device…
  • Inspiration from #10.

Day 81

Wow, where does the time go… A short summary so far…

Late 2009 – January 2010

  • Loosely thought about thesis while finishing other classes last semester.
  • Prepared for TRKBRD conference presentation: upgraded hardware, upgraded software, rebuilt Flash interface (with help), wrote presentation, built slides for presentation.

February 2010

  • Presented TRKBRD at IxDA’s Interaction ’10 conference. Gained interest and some feedback on the idea. Overall, I’m not sure how much I gained from the conference regarding my thesis. Of course I gained inspiration from it, but I was hoping to talk with more designers about my thesis idea. I mentioned it during the Q&A after my presentation, and no one told me it was stupid. A few people did mention afterwards that they were interested in what I find from the thesis.
  • Spent a few weeks back home, visiting friends and family.
  • Started to “hit the wall” with my thesis idea, and began to lose hope, focus, energy…

March 2010…

  • 11 days into the month…
  • I have started to have renewed energy and hope, but still working on focus.
  • When I thought I was collapsing from the weight of finding redundant research, Jörn, my advisor, attempted to lift me back up again with the advice of “Don’t worry about that.” Surprisingly, that helped.
    • Stand on the shoulders of the research I find. Learn from them, what they have already accomplished. It just means that someone has already done research that I don’t have to do now! Sweet!
    • What new context can be derived? What is something new that can be delivered?
    • Zoom out. Big picture.
  • Reading, annotating, processing, and blogging all of the research I have collected thus far. (see Research category link on the side, and ALL CAPS tags in tag cloud)
  • There is a wedge of originality in what I have collected so far, I can feel it, though it hasn’t emerged yet. I am collecting research from a variety of angles, which occasionally merge and overlap, creating a spider web of cross-references among the sources they each cite.
  • Will attempt to visualize the research I have collected so far.
    • 1st attempt: I used ALL CAPS tags in my blog tag cloud for tags describing research, and even manually put a yellow background on these words. This will help me quickly find all the research for a given tag.
    • 2nd attempt: I want to see what other relations can be derived from the tags. Will this help me find my “wedge of originality”?
  • From there, I will brainstorm possible physical prototypes that can be built and tested.
  • Also want to spend time making an explanatory video for my thesis, physical prototypes, and testing outcomes.

“Think Before You Talk”

“…An Empirical Study of the Relationship between Speech Pauses and Cognitive Load”

by M. Asif Khawaja, Natalie Ruiz, and Fang Chen (OzCHI 2008)

  1. Speech pauses are useful indicators of high load versus low load speech.
  2. Cognitive load refers to the amount of mental demand imposed on a person by a particular task, and has been associated with the limited capacity of working memory and the ability to process novel information.
  3. Patterns indicative of high load situations:
    1. content of language (throughput, coherency)
    2. delivery manner (pitch, volume, articulation rate)
    3. peak intonation patterns
  4. Traditionally in psychology, pauses during speech have been associated with a person’s thinking and cognitive processes. During each pause, he/she processes currently known information in the limited working memory to produce the next speech response.
  5. Used a “Dual Task” method to test theory: subject required to perform two tasks at the same time.
  6. Measured “silent pauses” (speechless segment) and “filled pauses” (ah…, hmm…, umm…)
  7. “Pausing in speech is a mechanism that allows the subject more time to plan, select and produce appropriate speech. This time is arguably used to regulate the pace of the information flow such that the subject is able to manage their cognitive load.”
  8. Found “silent pauses” to increase significantly under situations of high load. (A rough pause-detection sketch follows this list.)
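
A rough sketch of how “silent pauses” might be counted in a recording. The function and all thresholds are my own assumptions for illustration; the paper does not present an algorithm in this form.

    def silent_pauses(samples, rate, threshold=0.02, min_pause=0.25):
        """Return (start_sec, duration_sec) for each silent stretch.

        samples: amplitude values in [-1, 1]; rate: samples per second.
        A pause is a run of near-silent samples lasting at least min_pause.
        """
        pauses, run_start = [], None
        for i, s in enumerate(samples):
            if abs(s) < threshold:
                if run_start is None:
                    run_start = i
            elif run_start is not None:
                if (i - run_start) / rate >= min_pause:
                    pauses.append((run_start / rate, (i - run_start) / rate))
                run_start = None
        if run_start is not None and (len(samples) - run_start) / rate >= min_pause:
            pauses.append((run_start / rate, (len(samples) - run_start) / rate))
        return pauses

Counting these per minute of speech under low-load and high-load conditions would be one way to operationalize finding #8 in my own tests.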

My thoughts:

  • I found this to be a valid method to use during testing of my thesis.
  • My challenge to use this method, though, is to create a test that validly tests the cognitive load of multiple independent interfaces. They used speech and visuals for the dual-task test. Could I have the subject audibly repeat their actions in one interface while watching the results for another interface in the stack?

“Tangled Interaction”

“…On the Expressiveness of Tangible User Interfaces”

by Johan Redström

  1. “…aesthetic potential of overloading the object’s surface by adding several layers of interaction.”
  2. “…how through a certain design we aim to make a computational thing express what it can do through the way it presents itself to us through use over time.”
  3. Differences between designing for efficiency in use versus the experience of use.
  4. The Argument: Tangible interaction aims to create a strong relation between surface and functionality, between appearance and potential.
    1. Close coupling between the way the object appears and what we can do with it.
    2. However, this is not the same as saying that we should immediately understand how to use the thing.
  5. The Counterargument: With the miniaturization of technology, the relation between internal and surface complexity has been lost (observations by Maeda):
    1. A small object corresponded to a simple function, whereas a larger object was associated with a proportionally more complex function.
    2. Sacred Promise #1: the user would be able to construct a priori impressions of an object before actually using it (sizing it up at first glance).
    3. Sacred Promise #2: industrial designers would have a suitable amount of visual and tactile design space… in which to express that functionality.
  6. (Fitzmaurice, Ishii, and Buxton):
    1. “Space-multiplexed input”: each function to be controlled has a dedicated transducer, each occupying its own space.
    2. “Time-multiplexed input”: one device controls different functions at different points in time. (A toy contrast of the two follows this list.)
  7. “Humans are inherently good at managing physical space, by ordering and sorting artifacts in their environment.” (Holmquist)
  8. Issue: “…we are dealing with a ‘surface’ incapable of expressing everything that is, or could be, going on ‘inside’ the object.”
  9. “…we rather try to expand the expressiveness of the surface by creating several layers of interaction. In other words, we try to overload the surface by adding different layers of meaning in such a way that, for example, performing a certain action might mean several different things.”
    1. Overloading will not be unproblematic.
    2. Introduce layers at conflict with each other.
    3. Layers that introduce something interesting “in-between” the layers: tangled interaction.
    4. Layers continuously present, not a set we can browse or sequence through.
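
A toy contrast of the two input styles from #6, with invented names, just to make the distinction concrete for myself:

    # Space-multiplexed: each function has its own dedicated control.
    space_multiplexed = {"volume": "knob_1", "tuning": "knob_2", "balance": "knob_3"}

    # Time-multiplexed: one control is reassigned over time (like a mouse).
    class TimeMultiplexedKnob:
        def __init__(self):
            self.function = "volume"

        def assign(self, function):
            self.function = function       # same device, new meaning

        def turn(self, delta):
            return (self.function, delta)  # interpreted per current assignment

    knob = TimeMultiplexedKnob()
    knob.assign("tuning")
    print(knob.turn(3))   # -> ('tuning', 3)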

My thoughts:

  • I found a lot of inspiration in this paper. Many of the concepts, ideas, and concerns I can directly relate to the idea of “stacking inputs”.
  • Upon first looking at a TRKBRD, you would never guess that you could glide your finger over the keys to move the cursor. (#8) The design does not have a powerful affordance.
  • I like his idea of “tangled interaction”. I see his paper as helping to define my own concept of “stacking”. He gave multiple examples in the paper, and they all had one thing in common: if you remove a “layer”, the design is no longer valid or possible. The tangled layers are interwoven and necessary to each other. “Stacking”, by contrast, introduces layers that are wholly independent, not interwoven or necessary to each other.

“Pinning objects to wall-sized displays”

“Blurring the line between real and digital: Pinning objects to wall-sized displays”

by Daniel Stødle, Otto J. Anshus (2008)

  1. Attempt to replicate the interaction and experience of a “billboard” through advanced technology. A “billboard” being an area on a wall, or a bulletin board, where you can post small messages, posters for events, or “wanted” or “for sale” posters.
  2. “On a billboard, users are less concerned about drawing or writing, and care more about leaving content of some kind behind for other users to see.”
  3. Three important requirements for their Wallboard:
    1. A user should not have to use any special equipment, wear anything, or be fitted with any “markers”.
    2. Content should appear where a user is holding it, and be able to pin the content anywhere.
    3. Content should appear instantly (relatively).
  4. Extensive technology to create Wallboard: 16 floor-mounted cameras, 8 Mac minis, video camera with pan-tilt-zoom control, 28 projectors forming a 22 megapixel 7168×3072 display, and optional microphones.
  5. “Instead of determining what an object is, it is more fundamental to determine where it is.”
  6. Extensive software was written for this setup, including the ability to define 2 virtual vertical planes in the camera images. These 2 planes were used to determine when content was being held up to the wall, and when a finger was touching the wall. (A small classification sketch follows the procedure below.)
  7. Procedure:
    1. User holds up a paper to post to the wall.
    2. Video cameras see the breaking of the 2 vertical planes by the paper and hand.
    3. Paper is held in place for a second.
    4. System determines position and size of paper on the wall.
    5. Pan-tilt-zoom video camera moves and zooms to zero in on content on the wall. Takes a photo!
    6. System processes image.
    7. Photo of content appears on the wall, in the same location that it was held up to.
    8. Total time: 1-3 seconds.
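
My rough reading of the two-plane idea as code. The plane distances are invented for illustration; the paper only says the cameras define two vertical planes in front of the wall, not where they sit.

    TOUCH_PLANE_M = 0.02   # assumed: within 2 cm of the wall counts as touching
    HOLD_PLANE_M = 0.15    # assumed: within 15 cm counts as holding up content

    def classify(distance_from_wall_m):
        """Classify a triangulated hand/paper distance into an interaction."""
        if distance_from_wall_m <= TOUCH_PLANE_M:
            return "touch"     # finger interaction on the wall
        if distance_from_wall_m <= HOLD_PLANE_M:
            return "holding"   # paper held up: candidate for capture (step 3)
        return "none"          # too far away: ignore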

My thoughts:

  • Very interesting way of using floor-mounted video cameras to triangulate finger position on the wall!
  • Item #5 is very interesting! Especially for a billboard, the content doesn’t matter. For another project, in a different world, I think a project team would not have realized this and would have pushed for capabilities that didn’t add value in the long run. The technology does not need to know meaning; let a human conceive and determine that. The human here determines the importance of posting something on the billboard, determines the content, and determines its position on the wall. Arguably a computer could do the same thing, and even add value such as grouping like posts, making sure nothing is overlapping, or modifying the content for higher value. This team said NO! Credit to them for doing so.
  • I take inspiration from their 2 vertical planes of interaction created by the floor-mounted cameras. In their design, they are only using the planes to determine a single human interaction, and to filter out actions that could confuse the system. What if more than 2 planes were created? Could gestures be created to augment content creation, or create ways of interacting with the content after it has been added?

“PreSense”

“…Interaction Techniques for Finger Sensing Input Devices”

by Jun Rekimoto, Takaaki Ishizawa, Carsten Schwesig, Haruo Oba (UIST 2003)

  1. “PreSense is a keypad that is enhanced by touch (or proximity) sensors based on capacitive sensing.” A normal physical button pad with touch sensing on top of each key.
  2. Extension of previous “SmartPad” system.
  3. Essence of direct manipulation user interfaces can be summarized as the following 3 features: (Shneiderman)
    1. Continuous representation of the object of interest.
    2. Physical actions or labeled button presses instead of complex syntax.
    3. Rapid incremental reversible operations whose impact on the object of interest is immediately visible.
  4. Two types of feedback: 1) reactive, 2) proactive (preview). (A small state sketch follows this list.)
  5. Mentions gesture interface possibilities.
  6. “It can provide kinesthetic feedback because users can feel the button shape when touching it.”
  7. Mentions multi-key input, key-chording, and touch-shift as alternative data input methods with touch keypad.
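
A minimal sketch, with invented names, of the reactive/proactive split as I understand it: touching a key yields the proactive preview, pressing it yields the reactive action.

    class PreviewKey:
        def __init__(self, label, on_preview, on_commit):
            self.label = label
            self.on_preview = on_preview   # proactive feedback (e.g., tooltip)
            self.on_commit = on_commit     # reactive feedback (the action)
            self.touched = False

        def update(self, touched, pressed):
            if touched and not self.touched:
                self.on_preview(self.label)   # finger arrives: show preview
            self.touched = touched
            if pressed:
                self.on_commit(self.label)    # key bottoms out: do the action

    key = PreviewKey("save", lambda k: print("preview:", k),
                     lambda k: print("commit:", k))
    key.update(touched=True, pressed=False)   # -> preview: save
    key.update(touched=True, pressed=True)    # -> commit: save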

My thoughts:

  • “Preview Feedback” is an important element that they introduce to physical button interfaces. You don’t have the same “hover” effect in physical form as you do in the virtual world of a mouse and websites. Allows (from the text):
    • transparent operation (tooltips)
    • rapid inspection of possibilities (to avoid undo)
    • efficient use of screen space
  • They are more interested in creating another layer of interaction that augments an interface, rather than creating additional independent layers of interaction in a stacked interface.

“SmartPad”

“…A Finger-Sensing Keypad for Mobile Interaction”

by Jun Rekimoto, Takaaki Ishizawa, Haruo Oba (CHI 2003)

  1. “…a new input device for mobile computers that is an enhanced physical keypad by a finger position sensor.”
  2. “…combines two input sensors; one is a normal physical keypad, and the other is a finger position sensor based on capacitive sensing.”
  3. Based on “smartskin”: a wire grid that provides touch sensing through capacitive proximity sensing.
  4. Could be used for gesture input on the keypad (sliding across keys, virtual jog wheel like an iPod).
  5. “While these devices use additional sensor as a new (separated) input mode, we are more interested in enhancing existing input device (e.g., keypad) by sensors.”
  6. “Preview is much more important in the physical world than GUI, because undoing is often unavailable.” Referring to the idea of predicting which button will be pushed and then providing preview information before pressing it.

My thoughts:

  • This was going to be my next physical prototype: adding touch sensitivity to the form factor of a keypad on a mobile phone. Their implementation of the idea is different though. They place a grid of sensors between the keys, whereas I wanted to add multiple touch sensitive points to the top of each key on the keypad. (See my TRKMBL sketch)
  • #5 illustrates a difference between their motivation and my motivation within the design framing. I am looking to create additional interfaces of interaction within a stacked device, whereas they are looking to enhance an existing interface.
  • Their “smartskin” grid of sensors is exactly like an idea that came to me late at night while lying in bed. I was trying to think of a smarter way to create a large field of touch sensors while keeping logical control over the number of sensors that would be necessary to create it. (A sketch of that row/column idea follows.)
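
The appeal of a grid, sketched under my own assumptions: with R row wires and C column wires you can address R×C sensing points while wiring only R+C lines. read_capacitance() is a hypothetical stand-in for whatever the hardware actually measures.

    ROWS, COLS = 4, 5   # 9 wires -> 20 sensing points

    def read_capacitance(row, col):
        # Placeholder: real hardware would drive the row wire and measure
        # the signal coupled onto the column wire at their intersection.
        return 0.0

    def scan_grid():
        """Return a ROWS x COLS matrix of readings, one per intersection."""
        return [[read_capacitance(r, c) for c in range(COLS)]
                for r in range(ROWS)]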

“Touch&Type”

“…A Novel Pointing Device for Notebook Computers”

by W. Fallot-Burghardt, M. Fjeld, C. Speirs, S. Ziegenspeck, H. Krueger, T. Läubli (NordiCHI 2006)

  1. “…combines a conventional keyboard with an extended touch pad whereby the touch pad’s sensitive area is formed by the surface of the keys themselves and thus can be made as large as the whole key area.”
  2. “…it takes, on average, 0.36 seconds to move a hand from the keyboard to the mouse and additional time to adjust the grasp for operating the mouse buttons.”
  3. The mouse buttons were physical buttons on the side of the keyboard. When a finger was touching a button, their hardware setup would switch from “typing” mode to “pointing” mode. “Clicking” was done with the physical buttons on the side. (A small dispatch sketch follows this list.)
  4. Video showing their “Touch&Type” in action: http://www.t2i.se/pub/media/2006_NordiCHI_Fallot_et_al.avi
  5. They performed extensive testing on comparing their device with a mouse and a trackpad. It performed very similarly to a trackpad, sometimes better and sometimes worse. In the end, they claim the Touch&Type to outperform the trackpad with a confidence level of 73%.
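
A small dispatch sketch of that mode switch, reconstructed by me rather than taken from their implementation: a finger resting on a side button flips the key surface from producing keystrokes to producing pointer motion.

    def route_event(side_button_touched, key_event=None, motion=None):
        """Dispatch raw input depending on the current mode."""
        if side_button_touched:
            if motion is not None:
                return ("pointer_move", motion)   # keys act as a touchpad
            return ("ignored", key_event)         # suppress stray keystrokes
        if key_event is not None:
            return ("keystroke", key_event)       # normal typing
        return ("none", None)

    print(route_event(False, key_event="a"))   # ('keystroke', 'a')
    print(route_event(True, motion=(4, -2)))   # ('pointer_move', (4, -2))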

My thoughts:

  • This research is almost identical to my TRKBRD project, though with subtle differences.
  • Their reasoning and grounding for the design are almost identical to the framing I used for my design: limited size of the trackpad, compact design, gaps between keys providing beneficial haptic feedback.
  • The strengths and weaknesses of their design were quite similar to the strengths and weaknesses I discovered with the TRKBRD. The strengths provide a solid grounding for the future potential of the device, but the weaknesses in precision and technology prevent it from mainstream use.
  • They mention “two-handed, coordinated input” as a future research topic related to the device. I see this relating to my idea of “gesture zones” within the TRKBRD design, where the left and right hands contribute individual gestures to create a compound gesture together.

“Layered Touch Panel”

“…The Input Device with Two Touch Panel Layers”

by Yujin Tsukada, Takeshi Hoshino (CHI 2002)

  1. “…two touch panel layers, so that it is able to distinguish two touch states such as ‘finger on screen’ and ‘finger above screen’.”
  2. Adds an invisible infrared field about 20mm above the physical touch screen.
  3. Allows for a “rollover effect” for buttons or screen elements. (A tiny state-mapping sketch follows this list.)
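
The two layers map naturally onto three input states. A tiny sketch under my own naming (the two sensor flags are assumptions):

    def input_state(ir_layer_broken, screen_touched):
        """Map the two panel layers onto three discrete input states."""
        if screen_touched:
            return 2   # finger on the visible touchscreen
        if ir_layer_broken:
            return 1   # tracking in the infrared layer ~20 mm above
        return 0       # out of range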

My thoughts:

  • Follows Buxton’s “3-State Model of Graphical Input”:
    • No touch
    • Touch invisible proximity layer
    • Touch visible touchscreen
  • Similar to my TRKBRD prototype, except that the top layer is being used to augment the bottom layer. The top layer is not being used as a separate input interface, but as a way to make the bottom layer more intelligent. Could this be used to help define the differences between “layering” and “stacking” layers of interaction?