"We make things talk so we can talk to people."

I grew up watching the Back to the Future trilogy (especially with those reruns on TV). But every time I saw the scenes where Dr. Brown or Marty operated the time machine’s dashboard, I was always like: “Is that really a good way to enter your time travel information? I mean, this is time traveling we are talking about. Don’t you guys have a quicker, safer, and slicker way to enter the times?” I was a young boy who didn’t even know what Human Computer Interaction meant (or what any of the 3 words meant, :D), but the bad design of the dashboard did catch my attention. Granted, the plot thickened and the story got more interesting when the characters got into trouble with the wrong time entries. But I just thought the Delorean deserved a better gadget, and besides, it’s a good design activity.

Original Design

So the original dashboard looks like this:

(image source: link)

(image source: snapshot from this video)

The driver uses the number keypad to enter the destination time. But when you’re driving fast in a tight space, it’s not uncommon to enter a wrong number. Also, the relationships between destination time, present time, and time of departure are not visually clear. Plus, the whole dashboard just takes up too much space (though it’s definitely good to cram the space with a bunch of electronics to show what a grass-roots scientist Doc is. :D). So, here are the 2 new versions I propose.

My Design

New Design A (Sized version)


My idea is based on dragging a slider on a touch screen. Here are some of my design considerations.

  • 3 slider heads are placed on a timeline.
  • Since we are doing time travel, where you are from/to isn’t always past/future, so these heads need a new visual representation. In the movie, the present time was positioned in the middle of the dashboard, which presumably marks it as the most important information. So in my design, following that importance level, I use the big head for the present time, the small head for the time you are from, and a middle-sized one for the destination time.
  • The destination time head is tappable and draggable. Dragging that head can modify the destination time.
  • The present time and “from time” are not draggable (obviously that’s something you can’t change). But they are tappable to show detailed info.
  • The destination time head glows with an animation to hint to the driver that it’s info s/he can change.
  • The 2 ends of the timeline are tappable. When tapped, the timeline extends (in that direction) to show more of the timeline.
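The interaction rules above can be sketched as a small state model. This is purely illustrative (all names are hypothetical, and real touch handling would come from whatever framework drives the dashboard screen):

```python
# Minimal sketch of the timeline slider's state logic. Times are plain
# numbers (years) for simplicity; a real widget would use full timestamps.

class TimeCircuitSlider:
    def __init__(self, start, end, from_time, present_time):
        self.start, self.end = start, end  # visible range of the timeline
        self.from_time = from_time         # fixed head: tappable, not draggable
        self.present_time = present_time   # fixed head: tappable, not draggable
        self.destination = present_time    # the only draggable head

    def drag_destination(self, new_time):
        # Dragging the destination head is clamped to the visible timeline.
        self.destination = max(self.start, min(self.end, new_time))

    def tap_end(self, direction, amount=10):
        # Tapping an end extends the timeline in that direction.
        if direction == "left":
            self.start -= amount
        else:
            self.end += amount
```

Because dragging is clamped to the visible range, tapping an end to extend the timeline is what unlocks farther destinations, matching the tappable-ends behavior above.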

New Design B (Colored version)


This design is pretty much like New Design A, except the head sizes are the same. They are colored gray (for the “from time”, less important info), green (for the present time, a color chosen from the original design in the movie), and red (for the “to time”, also chosen from the original design, which conveys the idea that “this needs more attention”).


The main idea of my new design is incorporating a touchscreen and a slider timeline to allow visual presentation and gestural input. As mentioned, since it’s time travel, the past isn’t always where you are from and the future isn’t always where you are going, so I use different sizes or colors to mark where you actually are, where you’re from, and where you’re going on the timeline.

Suggestions? Comments? Ready to go 88 miles per hour? 🙂

( the finger image in my design is from: http://openclipart.org/detail/160447/)


Topic: Usability Testing: Analyzing and Reporting Findings

Reading material:

  • UxB: Ch 16 & 17


These 2 chapters don’t make much sense to me. A lot of things mentioned are too trivial or irrelevant (at least to me). Basically, the ideas for evaluation are:

Evaluation Analysis (Ch 16)

  • Formative/qualitative data is used to form or identify issues, e.g. via usability inspection, cognitive walkthrough, think-aloud, contextual inquiry, etc.
  • Summative/quantitative data is the numerical or statistical data from the evaluation.
  • After problems are identified, prioritize the order of fixes based on the relationship between each problem’s cost and importance.
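As a rough sketch of that prioritization step (my own illustration, not the book’s exact procedure, and the importance/cost numbers are made up), problems can be ordered by an importance-to-cost ratio so high-importance, low-cost fixes float to the top:

```python
# Sketch of cost-importance prioritization. Each problem is a
# (name, importance, cost) tuple; a higher importance/cost ratio
# means it should be fixed sooner.

def prioritize(problems):
    """Return the problems sorted from best to worst fix candidate."""
    return sorted(problems, key=lambda p: p[1] / p[2], reverse=True)

issues = [
    ("rare crash path", 5, 10),
    ("confusing label", 3, 1),   # important and cheap, so it comes first
    ("minor color nit", 1, 4),
]
fix_order = [name for name, _, _ in prioritize(issues)]
# fix_order == ["confusing label", "rare crash path", "minor color nit"]
```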

Evaluation Reporting (Ch 17)
Using a formal and conventional way to report evaluation findings is a good way to convey the ideas “This is indeed a problem” and “Here is my solution to the problem”. The Common Industry Format (CIF) is often used for this kind of reporting. Effective reporting should identify problems specifically and provide corresponding solutions/recommendations based on cost-importance, urgency, and severity. I don’t quite agree with the book’s idea of “go easy on the doses of bad news. Clients and customers will not want to hear that here is a whole list of problems with the design of their system.” A UX expert’s job is to identify problems as thoroughly and as specifically as s/he can; being too “polite” won’t do much good for the stakeholders.

Decoding/reporting the data from my UR4 experience
Metrics of usability issues can be analyzed and decoded. The metrics include performance, issues and severity, behavior (non-verbal and verbal), and self-reported data. The statistical data (avg. time on task, completion rate, SUS score) can be visually presented in charts or graphs. The qualitative data (e.g. interview answers) can be generalized into specific issues. I think the meat of a usability report lies in the issue/recommendation/severity section that lists the problems that need to be fixed.
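For example, the SUS score mentioned above comes from a fixed formula over ten 1–5 responses (this is the standard SUS scoring, not anything specific to our UR4 report):

```python
def sus_score(responses):
    """Standard SUS scoring: ten 1-5 Likert responses -> a 0-100 score.
    Odd-numbered items are positively worded (contribute score - 1);
    even-numbered items are negatively worded (contribute 5 - score);
    the sum is scaled by 2.5."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# The best possible answers give 100, all-neutral answers give 50:
# sus_score([5, 1] * 5) == 100.0 and sus_score([3] * 10) == 50.0
```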

Reading material

  • UxB: Ch 14 & 15


(I actually feel good writing this after we finished our UR 4 research project. The knowledge has been internalized, so writing this post is like naturally writing a digest article.)

Conducting a usability evaluation is two-fold: the preparation and the actual testing.


  • Data of interest: Design/decide what kinds of data are going to be collected. For example: define the tasks, time on task, pre/post-SEQ for each task, a SUS survey for the whole product, pre/post-session interview questions, records of the users’ verbal/non-verbal behaviors, voice/video recordings of their interactions with the product, etc.
  • Roles: Facilitator, observer, supporting actors, etc.
  • Participants: Define how many participants are needed and their demographics; recruit them and get hold of their availabilities. Is compensation needed? Is IRB approval needed? Is an NDA needed?
  • Preparing the facility: Find a suitable test place, preferably one consistent location (a lab). Prepare the equipment.
  • Pilot testing: Find some people from the target demographics to do pilot testing. Modify all the things listed above based on the result of the pilot test(s).


Conduct the test based on the plan. Give an opening speech. Record the data along the way. Encourage the participant to think aloud. Keep the participant at ease by reminding him/her that s/he is not being evaluated, the product is, and by providing some help without affecting his/her own way of interacting. Wrap up with a post-session survey. Provide the compensation after each participant finishes and express the research team’s thanks. Continue with the next participant.

Reading material:


Information Architecture

The chapter briefly talks about what information architecture is, followed by these main topics:

  • Navigation models (how people decide and find): Satisficing; Information foraging; Mental maps; Rote memorization; Information cost, etc. These navigation models are mostly based on the idea that people consciously or unconsciously budget their mental and time costs, which are highly related to the websites’ scannability and page traversal time. All these models provide corresponding design implications.
  • Process of developing an architecture: There are top-down and bottom-up ways to design an architecture. (The top-down one makes more architectural sense to me.) Architectures can be represented in outlines, flowcharts, tree diagrams, wireframes, and page schematics. One useful and popular way to develop an architecture is to have the designers or users do card-sorting.
  • Organization schemes: This is about how the content across pages should be organized. Some schemes are Hierarchy (tree), Line topology, Matrix, Full Mesh, Network, and Hybrid. Hybrid is the most common one because it gives a good overall structure of the site while products in the same subcategory can still be cross-linked (e.g. a pair of shoes in different colors). Also, a website’s structure is better off with more breadth than depth because it saves mental & time cost for the users. The meaning of the material, its semantics, can also be used as a way of categorization, e.g. by user type, topic, life event, implementation, etc.
  • Presenting navigation to the user: A browser provides navigational tools/cues (Back, Forward, Homepage buttons; the URL). A webpage also provides similar things such as breadcrumbs, progress bars, expanding outlines, etc.
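The breadth-over-depth point above can be made concrete with a quick back-of-envelope sketch: if every menu level offers b choices, reaching one of N pages takes about ⌈log_b N⌉ clicks, so widening the menus shortens the click path quickly.

```python
import math

def clicks_to_reach(n_pages, branching):
    """Menu levels (clicks) needed to reach one of n_pages when every
    level offers `branching` choices."""
    return math.ceil(math.log(n_pages, branching))

# 1000 pages: binary menus need 10 clicks, 10-wide menus need only 3.
# clicks_to_reach(1000, 2) == 10 and clicks_to_reach(1000, 10) == 3
```

Of course this ignores scan time within a wide menu, which is why the trade-off is about budgeting mental and time cost rather than pure click count.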


  • Horizontal and vertical prototypes: Horizontal ones focus on breadth of features. Vertical ones are more about depth of functionality. There was some confusion/discussion in class about the difference between “feature & functionality”, because they are similar words. I think “feature” is more about “modules” (for a shopping website: a user verification module, search module, product recommendation module), while functionality is about the technical implementation and considerations of those modules. A good prototype would be a “T” prototype: most features are roughly listed, with one or some particular features’ functionality well developed.
  • 3 kinds of prototype fidelity (they can be used in different phases or at different costs): Low (sketching), Medium (wireframing), High (close to the final product).
  • Prototype interactivity: scripted/click-through, fully programmed, “Wizard of Oz”; physical mock-ups (cardboard or paper) for physical interactivity.



  • At first I didn’t know why these 2 topics were grouped together in one week. But after reading the section about card-sorting, I had a question: “card-sorting is how people come up with a system architecture, but that chapter started with navigation models, so how does card-sorting relate to navigation models?” Then comes UxB’s chapter about prototyping with interactivity, which is essentially a way to test how users feel/think about the architecture.
  • But why does UxB spend so many pages talking about paper prototyping, with specifics like sheet dimensions, materials, etc.?

Dogs can drive

A while back I posted an RAA about a CHI paper on video chatting with pets. I also posted my design for the interface of a time machine (which came in the form of a Delorean). And not long after, this news came along. Dogs can drive.

Granted, these dogs drive with some assistance from their human trainers, and the driving lessons aren’t really about training the dogs to operate vehicles perfectly but about raising awareness of how intelligent they are, so they will hopefully be abandoned less often. Still, watching this video raises a lot of interesting questions. One of them is: would the idea of user-centered research apply in this scenario, in which the users are not human? I am not entirely joking, because oftentimes new products need to be designed or modified for users who can’t express their thoughts, or who express their likes/dislikes with feedback we don’t normally understand (e.g. babies). Besides trial and error, is there a better way to tackle this kind of problem?

I came across this post a while ago. It’s a column article from a Taiwanese business magazine’s website, written by a Taiwanese UX designer who has won many international awards. It’s written in Chinese, so only people who know the language (e.g. people from Taiwan or China) understand what it’s about. But let me share some of the thoughts and content of the article here.

Basically, the article brings up this observation: The technology industry in Taiwan is working so hard to integrate good user experience design into its products that the term UxD is becoming lingo, a specification, a slogan. There are success stories and there are failure stories. But if we look around, the industry that’s really doing well with UxD in Taiwan is the convenience store chains. For example, when it rains, the staff will bring a table of umbrellas or raincoats outside the store so customers have easier access to buying them. When checking out microwavable foods (e.g. a small carton of milk or a lunch box), the staff will ask the customer if s/he wants it heated, and how much, so s/he can enjoy that food in the store’s seating area.

As someone from Taiwan who has studied/worked abroad, I have come to realize how “pervasive” the convenience stores are in Taiwan (a news story once said some tourists from another country got lost in Taipei because there were so many convenience stores in one block that they got disoriented). On top of that, I also realized that you can literally do anything in a Taiwanese convenience store. You can use the kiosk to buy shuttle/train/concert tickets or print/fax/scan documents. You can pay all kinds of bills (including some taxes!). The stores are always cool (or warm) and clean. You can buy all kinds of foods and drinks. You can hand them your laundry or picture files, and a couple of days later the laundry comes back cleaned and the photos printed. You can have your online order delivered to the store. You can use the bathroom or the seating area for personal convenience. The staff are trained to always be polite and professional: answering questions, solving problems, offering information, and organizing and cleaning the store.

But most of all, the staff are trained to take the initiative to provide better service. That reminds me of what Cooper said about computers needing to be smart and considerate so they can work more like a human, or in this case, a Taiwanese convenience store staff member. 🙂

Taiwanese tech brands – ASUS, hTC, Acer, etc. – I look forward to more of your work on making products well-integrated with good user experience design.

(Additional link: a US professor’s experiences with Taiwanese convenience stores: The 7-Elevens in Taiwan Are a Necessity – Not a Convenience)

Korhonen, H., & Koivisto, E. (2006). In Proceedings of MobileHCI ’06: the 8th Conference on Human-Computer Interaction with Mobile Devices and Services, pp. 9-16. link

Purpose of research
The research aimed to formulate a set of playability heuristics for games running on mobile platforms through a series of expert evaluation iterations.

Initially, 10 heuristics were developed based on the literature (e.g. Nielsen’s heuristics) with the team’s own modifications. They were divided into 3 categories for evaluating “playability”: Gameplay, Game Usability, and Mobility. After one round of evaluation with 1 mobile game and 4 evaluators, a new version with 29 heuristics was developed to replace the original 10, because the team realized not all playability problems had been defined and identified. The 29 new heuristics were categorized in the same way as the 1st version. A new round of evaluation was then conducted with 5 games, each evaluated by 2 to 4 people.

Main findings
Of the 3 categories, Game Usability issues (viewing the game as commercial utility software or a website) were identified the most, as they are both the most commonly violated and the easiest to identify.

Mobility issues were the ones identified the least. My personal take is that there are significantly fewer heuristics in the Mobility category (only 3 of the 29 total), and that the unique system characteristics/usage/limitations of mobile devices are often already well considered during mobile game development.

Gameplay issues were the most difficult to spot, mainly because it takes experts (people who have real game development experience, who play games a lot, or who are trained for this kind of evaluation) to interpret the heuristics and identify the issues accordingly. For example, for the heuristic “GP3: The players are rewarded and rewards are meaningful”, it takes someone with the right mindset to tell, right off the bat, if/what the rewards of the game are and whether they make sense for the gameplay.

Playability is the idea of usability coupled with fun, immersion, and engagement. It’s a relatively new term. It’s mostly applied to computer games, but I think it can also go well with the design of theme parks, 3D theaters, or virtual environments. I picked this paper because I wanted to get a better understanding of what playability is. Also, although this paper is old (written in 2006, which is like 3 generations ago in the world of mobile software/hardware), it is somewhat the ancestor of the offspring papers that did mobile game evaluations based on the heuristics it developed.

Here are some of my thoughts and questions:

  • I like that the 29 heuristics were not made for any particular genre (e.g. adventure, puzzle, combat). The essence of heuristics is that they are like guidelines. They are “specifically general”: they provide ways to identify certain problems but are not over-limited to particular subsets. However, I do wish they had included multi-player-related heuristics. This is not particular to any genre, but I think it is an important variable for gameplay, usability, and functionality. (Strangely, the multi-player factor was brought up in the redesign of their 1st version of the heuristics, but in the 2nd version, multi-player considerations didn’t show up among the new/modified heuristics.)
  • The Gameplay heuristics (14 of the 29 total) are not only good for evaluation, but also useful for game design. Gameplay is the experience of game mechanics and story. To me it’s mainly about the balance of things. Making a good game is not easy because it’s not just software, and it’s not just a movie; it’s an interactive experience with a lot of things involved. That’s why Gameplay heuristics are important, and that’s also why gameplay issues are not easy to identify.
  • I am not sure that splitting the heuristics into 3 independent categories is the best way. For example, the Mobility heuristics should be somewhat tied to the Gameplay ones, to provide the expert view that designing mobile gameplay involves some different considerations.
  • I personally don’t get why they used this kind of chart, i.e. charts (below is one of them) that present total playability issues across all 5 tested games. What would make more sense to me is a chart of average issues across the 5 tested games, or one chart for each tested game.
  • Over the past year, I have experienced the mobile app boom by playing tons of games on my Android phone and iOS iPad. A lot of the games are really creative and beautiful, but a lot of them also lack consideration for playability or interface design, which stops me from continuing to play them. That’s the interesting thing about this business. Interfaces work best when they are silent, simple, and intuitive, and you know there is something wrong when the interface is making “noises” that obstruct your flow of play. Here is a screenshot of Outwitter, a turn-based strategy iOS game I have played for half a year. I just did a quick evaluation with the 29 heuristics this paper came up with. It turns out the success of Outwitter can be explained by the fact that it violates very few of the 29 heuristics.