ACM CHI 2005 Workshop: The Future of User Interface Design Tools
The Future of User Interface Design Tools (Scott Klemmer, Dan Olsen)
09:00 Introductions
- http://hci.stanford.edu/srk/chi05-ui-tools/
- Michel Beaudouin-Lafon: with Wendy Mackay
- Anind Dey: Ubicomp CMU
- Steven Dow: Georgia Tech PhD student with Blair MacIntyre, "DART" AR toolkit
- Andrew Faulring PhD student with Brad Myers
- Scott Hudson
- Jeff Nichols CMU Myers Personal Universal Controller
- Rob Jacob Tufts U, next-generation UI toolkits
- Yang Li: Berkeley postdoc with James Landay; sketch-based UIs (Monet), context-aware computing (Topiary)
- XIML.org
- Mark Green, U Hong Kong, School of Media; teaches art theory and digital media, VR, 3-D interaction; currently creating a new lab in Toronto
- Bjoern Hartmann, PhD student with Scott Klemmer at Stanford
Angel Puerta: Model-based UI tools
- Future UI tools should be oriented around the software development process, be interoperable (i.e., no closed systems), focus on a single feature instead of being monolithic "UI environments", and enhance the design skills of the UI designer
- A good example of such a tool is his UI Pilot, which creates website wireframes
- He thinks building UIs is fundamentally a software engineering process. Model-based UI tools can fit the above qualities. Unlike monolithic tools, they can generate UIs for various platforms from a single representation.
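The single-representation idea can be illustrated with a toy sketch (my own invented example, not Puerta's actual tools): one abstract model of the interface, rendered to different platforms by interchangeable generators.

```python
# Hypothetical sketch of model-based UI generation: one abstract model,
# multiple platform renderers. Widget kinds and renderers are invented.

MODEL = [  # abstract widgets: (kind, label)
    ("input", "Name"),
    ("choice", "Country"),
    ("action", "Submit"),
]

def render_html(model):
    """Map abstract widgets to HTML form elements."""
    tags = {"input": '<input name="{0}">',
            "choice": '<select name="{0}"></select>',
            "action": '<button>{0}</button>'}
    return "\n".join(tags[kind].format(label) for kind, label in model)

def render_voice(model):
    """Map the same model to voice-dialog prompts."""
    prompts = {"input": "Please say your {0}.",
               "choice": "Which {0}? Say one of the options.",
               "action": "Say '{0}' to confirm."}
    return [prompts[kind].format(label.lower()) for kind, label in model]

print(render_html(MODEL))
print(render_voice(MODEL))
```

The point of the sketch: the model is the only artifact the designer edits; adding a platform means adding a renderer, not redesigning the UI.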
09:40 Jeff Nichols and Andrew Faulring: Model-Based UIs and Automatic Generation
- Automatic generation is needed especially for systems where not all models are available at initial design time, when trying to provide UIs that customize to individual users or systems, or when trying to integrate new physical devices
- But: what can we generate automatically apart from the typical dialog box?
- How can we combine multiple models (eg a database and a question agent, or for old and new appliances, or for multiple connected appliances) to create new UIs? (The last two are his current PhD thesis topics)
- Wants to improve modeling languages, integrate rapid prototyping, and use model-based techniques with new interface styles. E.g., GUIs use the widget abstraction - what is the equivalent abstraction for tangible or gestural UIs? Maybe those new styles are not mature enough yet for a model-based approach?
Discussion
- Scott Klemmer: Word now has a "build document model by example" feature: as you format text, it starts creating styles from that and applies them to the text, so later you can change everything in 18pt Helvetica to 24pt Gill Sans. - Angel: We may still need more experience with GUI tools before we can do this. Electronic text formatting has 30 years of experience behind it (and even more in the pre-computer era).
- Jan: Flexible tools as advertised by Angel sound like UNIX commands. Might this be too complicated for beginning UI designers? - Mark: The real problem is having the production process nailed down. Film production has had this sorted out for decades; they know their workflow, so it's easy for them to chain many small "pipelining" tools (which they do a lot). Modern digital media and post-desktop UI production, however, don't have that understanding yet.
- Jeff Pierce: Shouldn't model based UIs also help carrying over my favorite interactions from one device to the next? Example "hotel alarm clock". - Scott K does quick survey - most people use their personal device rather than hotel alarm clock or wakeup service. Wakeup service explains its UI, and the personal device I already know. Jeff's particular example: Toronto hotel, clock+stereo+CD player, manual in bedside stand, couldn't figure it out.
- Dan Olsen: Is there any chance of success? Most of us are driven by the interest to keep adding new modalities to UIs, so toolkits are always obsolete and limiting?
10:40 Scott Hudson and Mark Green: Adaptive UIs
Scott Hudson
- Adapting to what? To devices/platforms, and to the situation of the user
- Adapting to devices/platforms was a staple topic of the '80s; we need to see whether we can do things radically differently if we revisit it at all
- Adapting to human situation is to him the more interesting example.
- His approach: The system should (1) sense, (2) model (make sense of it), and (3) act appropriately in the UI
- Example from his own work: Built system using simple sensors that can decide the question "now is a bad time to interrupt me in my office" better than human observers, and that can modulate the amount of information displayed and whether communications lines such as phone are allowed to interrupt
- There are useful sensors for this, e.g., Raskar SIGCHI 2004: an optical sensor that detects edges in hardware, faster and better than, e.g., Canny edge detection
- It seems there is no need to do complex semantic modelling!
- There may well be lots of indicators to human activities that humans don't use but machines could!
- His questions: How do UIs need to change to, e.g., know how much something being displayed actually grabs the user's attention - and how can we build tools that support this?
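The sense/model/act loop above can be sketched in a few lines. This is purely illustrative: the sensor features, weights, and threshold are invented, not taken from Hudson's system.

```python
# Illustrative sense -> model -> act loop for interruptibility.
# All feature names and weights are made up for this sketch.

def sense():
    # A real system would read microphone, keyboard, and door sensors here.
    return {"speech_detected": True, "keyboard_active": True, "door_open": False}

def model(features):
    """Crude interruptibility score from weighted sensor evidence."""
    weights = {"speech_detected": -0.5,   # talking => bad time to interrupt
               "keyboard_active": -0.3,   # typing => busy
               "door_open": +0.4}         # open door => available
    return sum(w for key, w in weights.items() if features[key])

def act(score):
    """Modulate the UI instead of interrupting outright."""
    if score < -0.4:
        return "hold notifications, route phone to voicemail"
    return "deliver notifications normally"

print(act(model(sense())))
```

Note that, as in Hudson's result, the model stage needs no deep semantic understanding: a weighted sum over cheap sensor features is already a usable classifier.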
Mark Green
- Back in the days of the original Mac, life was easier. Today we have a much wider variety of output and input devices. Example: Exhibit "Bar Code Hotel" (Banff ca. 1995)
- Hong Kong currently has 10-15% more active cellphones than people! He got his video phone free for buying enough groceries
- At any given time, on the streets of Hong Kong there are several hundred different phone models in use. How do you design services for this??
- His sample scenario: One of his arts students prototyping an interactive exhibit where you walk through a maze of drapes and get city views on a VR display. She's not a programmer - how should she create this? (They actually have a toolkit for this.)
- In the media field, robustness is extremely important (no Error 404 in a movie). Our applications are far too brittle. Interactive installations need to always work no matter what the user does, and be entertaining all the time.
- His Grappl toolkit adapts to the device configuration at runtime.
- But a big yet unsolved problem is to build tools for content creators - content experts, artists, experience designers, and other non-programmers. Hard problem, not seen a solution that scales yet.
Discussion
- Brad Myers: Recently used a Mac Plus and was surprised that things run as swiftly as on modern computers. - Scott: First of all, there are floor effects in human perception - if something happens faster than a certain delay, it's hard to tell whether something else is even faster. Also, when you go into a menu on the Mac Plus, you go into a tight assembler loop. You cannot do that in today's more open systems.
- Dan Olsen: Getting from Scott H's Sense to Model stage is hard and there are zero tools for it (- Scott H: see my upcoming UIST paper)
11:30 Philip Cohen (Oregon), Steven Dow (Georgia Tech): Uncertainty
Philip Cohen: Uncertainty
- DARPA Multimodal Command Post of the Future - in active use in Iraq since 2002
- In his example video, a 90 second, 15 step sketch+GUI task of setting up a patrol for a certain route at a certain time window becomes just a 6-second, one-step task by using sketch+voice input.
- Example video II: drawing a Gantt chart in a collaborative meeting. Uses multiple uncertain sources (eg handwriting and voice, "let's call this task 'demo'") to populate its vocabulary. That way it can deal with out-of-vocabulary words. An MS Project chart is created from the input automatically.
Steven Dow (Georgia Tech PhD student): DART
- integrated DART into Macromedia Director
Discussion
- DART could be used to prototype the command post early
14:50 Robert Jacob
- Tool research is coming back! We are now at the same point in the post-desktop era that we were at the beginning of the GUI era
- A recent project: PMIW - a language for non-WIMP interfaces. It uses continuous and discrete interaction relationships, expressed in a graphical state machine notation (->PatchPanel!)
- Orit Shaer (his student): extends PMIW to tangible UIs. Reality-based UIs don't work with widget approach. Instead model dialogue using an enhanced FSM, and capture preconditions and states in a concurrent task diagram.
Michel Beaudouin-Lafon
- UI tools have moved from events and pixels 20y ago, to widgets and callbacks 10y ago, to HTML and JavaScript 5y ago until today
- Peter Wegner: Interaction is more powerful than algorithms
- He sees interactions (interaction instruments) as first-class objects.
- He and Wendy Mackay are organizing the "Interaction Museum" where interaction ideas, both new and old, are presented as "exhibits" for practitioners
Erwin Cuppen... and Jeff Pierce
Jeff: Multiple Devices
- His goal: helping users manage information across their devices. Many people put their laptop right next to their desktop and work on both. He also wants to allow users of a personal device to leverage public i/o devices such as large displays when they are close to them.
- Public display (Starbucks station) could also just display non-critical data (email text), while the trusted personal display (cellphone) shows the secret data (email addresses etc.)
- Discussion issues: attack distribution of UI over personal only or also public devices? do users know and care about security issues? how can we create tools to quickly create these distributed UIs?
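The public/private split from the Starbucks example boils down to partitioning one data model by field sensitivity. A minimal sketch - the field names and the sensitivity policy are my assumptions, not Jeff's design:

```python
# Sketch of splitting one message between a public display and a trusted
# personal device. Field names and the policy set are invented.

EMAIL = {"subject": "Lunch?", "body": "Meet at noon.",
         "from_addr": "alice@example.com", "to_addr": "bob@example.com"}

SENSITIVE = {"from_addr", "to_addr"}  # policy: addresses never hit public screens

def route(message, sensitive=SENSITIVE):
    """Partition message fields between the public display and the phone."""
    public  = {k: v for k, v in message.items() if k not in sensitive}
    private = {k: v for k, v in message.items() if k in sensitive}
    return public, private

public_view, phone_view = route(EMAIL)
print(public_view)   # large display: subject and body only
print(phone_view)    # cellphone: the addresses
```

The tool-design question from the discussion is exactly what this hides: who authors the policy, and how the two renderings stay synchronized across devices.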
16:30 Dan Olsen's Rant
- Dan wants to pose three issues and push a little harder.
- We are at the end of two fortuitous events: desktop GUI computing became cheap (which we have been exploiting for 15 years), and (as that started to decline) the internet and wireless communication arrived (which we have been exploiting recently)
- However, we cannot tell the average user why he should buy the next computer. That's why we are still in a tech slump.
- We are paid to invent the future.
- First question: Every night at his house, he or his wife get up, get the dog into the crate, get the kids to bed, turn out lights. Only sometimes this is different. Massive uniqueness and privacy make automation of users' life very hard. What have we seen today that will make this scenario a lot better, and what's missing?
- James: The problem is too uncritical for people to spend money on. More significant: when Dan is older, he may spend money on a system to stay connected with his kids or for health monitoring
- Mark: Hong Kong houses are 400 sq ft; people don't want to be there much. Instead, the biggest driver for tech sales is entertainment and games.
- Dan: then why do we care about context aware?
- Scott H: because otherwise all those devices will drive us nuts
- Q2: 30,000 students - every class result of every student in the last 10y, and any associated data, can be stored in RAM on a laptop. Why does the president's assistant not do that? Or why does a radiologist marking CAT scans every day not use his huge library of past scans?
- Scott K: Fred Brooks 1977, "The Computer Scientist as Toolsmith" (a human with a computer will beat a computer at chess) - a radiologist using the library would do better than without it
- What are the assumptions that keep us from innovating? E.g., we have 50 million instructions per input event - we don't have to decide the right thing to do within a millisecond. Determinism: "careful design + spec solves the problem" - but the real world works differently and grows; e.g., with the command post, space + time + organizational unit were the concepts people could do the most with. "Programs don't remember": we are still stuck in the days when 5-step undo was expensive. Finally, we don't sense enough.
- Mark: We view programming as something natural, but most of the world thinks differently. End-user programming is mostly laughable, they don't think like that.
- Dan: go ubicomp or huge