I have now completed my Stacked User Inputs thesis, and presented it yesterday to my advisor and opponent. This was my presentation:

My final thesis is available: RobNero-SUI-Thesis-2010.pdf [816 KB]

Affinity Diagrams

I know I should be writing more right now, but I’m excited to show the culmination of a lot of work…

After many hours of watching user-testing videos and transcribing almost all of the words that were said, I compiled the main points and highlights onto colored post-it notes, one color for each of the six companies that tested the cube. These photos show part of the process…

Moving transcribed notes onto post-its.

Each company's notes still kept separate.

Grouped and labeled on wall 1.

Grouped and labeled on wall 2.

Deadline approaching…

Yes, this site is still alive… I am frantically writing and digesting testing videos at the moment, preparing to submit my final thesis paper in three weeks. More updates soon…

Cube Code Versions

I haven’t posted in some time, since I am now doing a lot of writing… testing my cubes, conducting interviews, collecting research, and trying to form all of that into my final thesis. I am including many videos “inside” my thesis report to better explain the cube interactions and my interview results. Sometimes words fail to explain what only a video can demonstrate.

Below are videos demonstrating the multiple versions of code I used in the cubes while testing.

What I’ve Learned #4: Skills

Knowing how to build something, physically or virtually, will get your designs further.

Sometimes when I’m learning a new programming language, Arduino, or electronics, I wonder if my time is best spent learning something that isn’t specifically about interaction design. I know I will never be a pro at object-oriented programming, fully understand what a capacitor does, or be able to hack an Arduino to control my Twitter feed.

But… it does feel great to build something and see it working in front of you! …it does feel great to take a crazy idea that is impossible to explain to someone and build it for them to understand firsthand!

Learning bits and pieces of PHP, Processing, Flash, electronics, and Arduino during my time in the Masters program has been invaluable. These skills aren’t directly about interaction design, but they allow me to directly demonstrate and explain my interaction designs.

Test #3 Dropoff

Friday, April 23rd, 1pm

Dropped off cube for Test #3

User: Unsworn Industries

Physical + Virtual = SUI

If you have agreed to test my SUI Cube prototype, please DO NOT continue to read this blog! …until after you have been interviewed.

I am in the middle of writing, pure thick thesis writing! It is much more difficult than it sounds.

“Why a SUI?” A question I am asked often, and have always felt I could never give a good answer to. I feel I am closer now.

All of the examples that I have documented on this site so far have included two elements: physicality and virtuality. There has been a physical element, typically in the form of a button, that provided access to an interface or functionality. There has also been a virtual element, typically in the form of a touch-sensitive surface, that provided access to another interface or set of functions. Even though many of my examples have been determined NOT to be a SUI, they have all included these two elements which made them a consideration to begin with.

My SUI Cube prototype takes these two elements to create a simple abstraction of the concept…

The physical element is a one-button, two-state, press-down interface. A person can press the button down, and when they release pressure, the button pops back up. There is a physical displacement of finger position when using the button.

The virtual element is the touch-sensitive surface of the button. On my prototype it is possible to see the touch-sensitive area, but in most cases “touch sensitivity” is invisible: you cannot see the touch sensing on an iPhone, for example. Depending on design and technology, this touch-sensitive layer in the cube could be programmed for a wide range of functionality and interaction. My prototype detects only a single touch point. What if multiple touch points could be detected? If I had more time, I would test these ideas. If I sewed multiple independent conductive threads onto the surface, I could detect the movement, direction, and speed of a finger moving across the top. That would also open up the possibility of gestures! The number of interactions and programmable actions with just a simple touch-sensitive button could be limitless!
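The multi-thread idea above can be sketched in code. This is a minimal, hypothetical simulation, not the cube's actual firmware: it assumes each sewn thread reports a timestamp when touched, and that thread positions along the surface are known. A real cube would read these values from capacitive sensors via an Arduino.

```python
# Hypothetical sketch: infer swipe direction and speed from the order and
# timing in which independent conductive threads are touched.
# events: list of (thread_index, timestamp in seconds), in touch order.
# positions: millimeter offset of each thread along the button surface.

def infer_swipe(events, positions):
    if len(events) < 2:
        return None  # a single touch is a tap, not a swipe
    (first, t0) = events[0]
    (last, t1) = events[-1]
    distance = positions[last] - positions[first]  # signed, in mm
    duration = t1 - t0
    if duration <= 0 or distance == 0:
        return None
    direction = "right" if distance > 0 else "left"
    speed = abs(distance) / duration  # mm per second
    return direction, speed

# A finger crossing threads 0 -> 1 -> 2 (at 0, 5, 10 mm) in 0.2 seconds:
print(infer_swipe([(0, 0.0), (1, 0.1), (2, 0.2)], [0.0, 5.0, 10.0]))
```

Even this tiny sketch shows how gestures fall out of the data: once direction and speed exist, thresholds on them become "flick", "slow drag", and so on.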

Apple Trackpad (example)

When I talk with people about my “stacking” concept, the new Apple Trackpad comes up in conversation right away. It seems an obvious real-world example of stacking interfaces into a single device. I understand why people think of it immediately: Apple’s daring design decision has provoked people to ask, “Do I really need a separate button?” The Apple Trackpad is almost a larger version of the SUI Cube I am now testing: a large button that is touch sensitive on top! More investigation is necessary, though…

Apple Trackpad

The lower layer of interaction in the trackpad is the button level. When you press down, the result is the same as clicking a mouse or tapping the trackpad: a mouse-click event is triggered. Since a press of the trackpad and a tap on the trackpad both register as the same event, it could be argued that the button level could be removed with no degradation of interaction. My thought is that the button level was kept in the design for those few occasions when a tap alone makes an interaction more difficult than a button press. Some interactions are more efficient, and more manually manageable, through the addition of physicality than through a string of touch motions that virtually creates the same result.

Two ways of dragging an icon across the desktop (a simplified example):

  • Move finger to place cursor on the icon, press down with left finger, use right finger to drag cursor across screen.
  • Or, move finger to place cursor on the icon, double-tap finger but hold the finger down on the second tap, move finger to drag cursor across the screen, tap finger again to release the double-tap-hold.
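The two drag flows above can be modeled as sequences of low-level inputs feeding a small state machine. This is an illustrative sketch of the idea, not Apple's actual event model; the input names are invented for the example.

```python
# Hypothetical state machine: both drag flows (physical press-drag and
# double-tap-hold drag) should complete the same drag operation.

def run_drag(inputs):
    """Return True if the input sequence completes a drag of the icon."""
    dragging = False
    moved = False
    for inp in inputs:
        if inp in ("press_down", "double_tap_hold"):
            dragging = True       # either entry point starts the drag
        elif inp == "move" and dragging:
            moved = True          # cursor carries the icon
        elif inp in ("release", "tap"):
            if dragging and moved:
                return True       # drag completed
            dragging = False
    return False

# Flow 1 (button layer) and Flow 2 (touch layer) both succeed:
assert run_drag(["press_down", "move", "release"])
assert run_drag(["double_tap_hold", "move", "tap"])
```

That both sequences reach the same end state is exactly the redundancy the post describes: the button layer adds a second, more physical route to an existing result.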

The button level of the trackpad provides access to a single mouse event, click.

Apple Trackpad: button level

The upper level of the Trackpad provides a plethora of interaction that goes beyond the assumed abilities of a typical trackpad. For full disclosure, I must admit that I do not have an Apple Trackpad and am only stating functionality from what I have witnessed or seen online.

All typical interactions are possible: tap, double-tap, drag. Apple expands on these, though, through the use of a multi-touch trackpad:

  • Tap with a second finger while the first finger is touching for a “right-click”.
  • Press with two fingers for a “right-click”.
  • Scroll a webpage with two fingers.
  • Pinch two fingers to zoom.
  • Rotate two fingers to rotate an image.
  • Hold your thumb down while moving a finger to perform a click-drag.
  • Swipe three fingers left or right to move back or forward while surfing the web.
  • Swipe four fingers up to hide all open applications.
  • Swipe four fingers down to show a small icon of each open application.
  • Swipe four fingers left or right to open the Application Switcher.

…and on top of all that, you can customize these gestures through a preferences pane!
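One way to see why this gesture set stays manageable is that each gesture is just a key in a lookup table: finger count plus motion maps to an action. The table below is a hypothetical sketch of that dispatch idea, with invented names; it is not Apple's API.

```python
# Hypothetical gesture dispatch table: (finger_count, motion) -> action.
# Entries mirror the gesture list above; names are illustrative only.

GESTURES = {
    (2, "scroll"): "scroll_page",
    (2, "pinch"): "zoom",
    (2, "rotate"): "rotate_image",
    (3, "swipe_left"): "browser_back",
    (3, "swipe_right"): "browser_forward",
    (4, "swipe_up"): "hide_applications",
    (4, "swipe_down"): "show_open_applications",
}

def dispatch(finger_count, motion):
    """Look up the action for a recognized gesture, or do nothing."""
    return GESTURES.get((finger_count, motion), "no_action")

print(dispatch(3, "swipe_left"))   # a recognized gesture
print(dispatch(1, "wiggle"))       # unrecognized input is ignored
```

Customizing gestures, as the preferences pane allows, then amounts to editing entries in this table rather than changing the touch-sensing layer itself.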

Apple Trackpad: touch level

The Apple Trackpad is not a SUI because it does not contain multiple interfaces. This was not an immediate verdict, though. If the definition of “multiple interfaces” were stretched thin, the answer to the multiple-interfaces question could almost be “Yes.”

  • The top layer provides cursor movement control, cursor event commands, and access to a limited set of gesture commands.
  • The bottom layer provides cursor event commands, too.

Where the definition of “multiple interfaces” could be stretched, and where I think most people are convinced it is a SUI, is in how gestures are incorporated into the overall functionality of the trackpad. Without the gestures, the Apple Trackpad would without a doubt not be a SUI, because there is no second, distinct interface: the button and the touch sensitivity both control the same interface. With the inclusion of gestures in the top layer of interactions, the line that separates the interfaces becomes blurred, and the definition of “interface” becomes uncertain.

From my inexperienced understanding of the Apple Trackpad gestures, they are simply shortcuts, or “quick keys,” to functionality that is attainable by other means. A gesture gives the user a quicker way of performing a function, either by letting the hand stay in position instead of moving the arm to press a button, or by eliminating multiple cursor movements and click commands. Because of this, I do not consider the gestures to be an interface. I do admit, though, that more investigation is necessary…

Apple Trackpad: SUI levels

Test #2 Dropoff

Wednesday, April 21st, 8:30am

Dropped off cube for Test #2

User: Apokalyps Labotek

Test #1 Dropoff

Tuesday, April 20th, 2pm

Dropped off cube for Test #1

User: Do-Fi