Trajectories into Practice

Maybe it’s an age thing, but I’m increasingly bothered by the question of how my research might make a difference – why might a professional working at the coalface of user experience design be bothered about what I am doing? Of course, there are probably other reasons why this is bothering me too – we (researchers) are increasingly asked to justify the impact of our research, both speculatively when writing proposals, such as the ‘pathways to impact’ section on EPSRC proposals, and when our results are weighed in the balance of various assessment exercises, such as the fast-approaching UK Research Excellence Framework, which for the first time includes impact case studies.

An important aspect of this for me is putting HCI theory into practice, a notoriously difficult challenge, as articulated by Yvonne Rogers in her recent book on HCI Theory and also emphasised by EPSRC’s recent review of the state of HCI here in the UK.

Perhaps the first question to consider is what I mean by HCI theory. In addition to Yvonne’s book, which tackles this question in depth, I’ve also been struck by Kia Hook and Jonas Lowgren’s recent TOCHI paper on ‘Strong Concepts’, design abstractions that generalise across different domains. Kia and Jonas cite my work on trajectories as an example of a strong concept, a form of HCI theory that could be ripe for putting into practice.

The question of how to put trajectories into practice has emerged as a central concern for my ongoing EPSRC Dream Fellowship. So far, I’ve been focusing on two different domains: museums and television.

For museums, Horizon Doctoral Training Centre student Lesley Fosh has designed and studied an example trajectory through a sculpture garden. It aims to move pairs of visitors between moments of experiential engagement with individual sculptures (in which they are isolated, listening to music and performing physical gestures such as touching the sculptures) and other moments where the visitors come together again to reflect on these experiences, as well as on the more ‘official’ guide information that they receive afterwards.


Lesley’s Sculpture Garden Trajectory

These ideas are now being carried forward in the European CHESS project, where we are working with partners including the Acropolis Museum in Athens and the Cité de l’Espace space museum in Toulouse, and running a series of design workshops over this summer to apply trajectories to the design of new visiting experiences.

Turning to television, I recently spent four months as a Visiting Professor at the BBC focusing on the design of multiscreen TV experiences. I began by analyzing some existing companion apps for TV shows, including the Antiques Roadshow play-along game in which viewers estimate the value of antiques during the show, as well as a research prototype called Jigsaw developed by Maxine Glancy and the team at BBC R&D that aims to support intergenerational TV experiences by enabling children to snap images from a TV show and turn them into jigsaw puzzles. Similar to Lesley’s example, I was able to produce some case studies to illustrate the potential of trajectories, albeit by analyzing existing designs.


Sketching Trajectories at the BBC

Again, these were followed by a design workshop with participants from editorial, user experience, and research and development, at which we used trajectories to explore new design concepts for extended TV experiences, and two subsequent presentations to the wider BBC User Experience design team as part of their ongoing One Service design project.

While it was exciting to be able to engage with professional user experience designers who expressed enthusiasm for trajectories, it proved difficult to establish a deep connection between the concepts and specific example designs in short workshops. We therefore hosted the first ‘trajectorize’ course at Nottingham last week. Three different teams – David Ullman and Dan Ramsden from the BBC; Andres Lucero from Nokia and Joel Fischer from the Mixed Reality Lab; and the artists Ben Gwalchmai and James Wheale – brought along three design concepts that we then inspected through the lens of various trajectory concepts over two days. As well as being thoroughly enjoyable, this was the first time that I began to glimpse how trajectories might actually be put into practice, with designers being able to produce complex trajectory sketches as a way of challenging and refining their ideas in areas such as designing social encounters, key transitions and take-home experiences.

You can get more of a sense of what happened from the official course structure and materials, but also from Dan’s blogpost after the event.

The initial success of this course suggests to me that there is indeed the potential to embed strong concepts such as trajectories into the practice of professional user experience design, but also that this takes considerable work from both sides – in this case at least a two-day commitment of time to be able to make significant progress. However, I suspect that there is far more to it than this.

  • We need to understand where these concepts sit in the UX design process (our teams were using trajectories to refine existing concepts rather than for ideation).
  • It feels important to generate some initial case studies based on familiar examples (as we did in both sectors) to generate initial interest in the concepts.
  • As our participants observed, it feels like the concepts need an appropriate level of generality, being structured enough that you can repeatedly attack a design from different perspectives, yet not so prescriptive that they close down creative thinking.
  • Finally, there is the importance of sketching. We have repeatedly shown trajectories as diagrams and encouraged workshop participants to create their own. Creating and labeling trajectory diagrams feels like an important element of the approach, but also brings its own challenges, not least that you need a very large sheet of paper to be able to move between the overview and the fine detail of annotations. As a result we have begun to experiment with zoomable drawing tools, initially developing a series of trajectory sketches using Prezi, but more recently, with my colleagues Chris Greenhalgh and Tony Glover, beginning to develop and now use our own zooming trajectory sketch tool that adds greater structure, sequencing and also metadata to an evolving sketch.

Antiques Roadshow Trajectory Sketch in Prezi

Putting trajectories into practice is very much a work in progress, and one that I hope to continue over the coming years. My current sense is that it should be possible to put strong concepts such as trajectories to work, but only if we can find the right approach and supporting tools.


Bridging human and system perception

My last post opened up the topic of how humans interact with sensing technologies, particularly systems that sense their bodily responses to playful experiences. Today, I’d like to develop this theme some more, but this time focusing on a quite different kind of sensing technology, while also broadening the discussion to bring in a new perspective, that of the ‘designer’ whose task it is to bridge between the technology and its users.

First some broad background. Paul Dourish has made a compelling case that our experience of computers is becoming ever more embodied, that is, more physical and material. Interacting with new kinds of sensing systems would seem to be an integral part of this – from systems like the Kinect that track our movements and gestures, to those that track how we move physical objects across surfaces in tangible interfaces.

A tricky challenge with invisible sensing systems is knowing precisely what they are doing, or even how to interact with them in the first place. PARC’s Victoria Bellotti has framed this problem in terms of five key questions for the designers of invisible sensing systems.

  • How do I address one (or more) of many possible devices?
  • How do I know the system is ready and attending to my actions?
  • How do I effect a meaningful action?
  • How do I know the system is doing (has done) the right thing?
  • How do I avoid mistakes?

Over the past year I have been working on an unusual project to design and deploy a new sensing system called Aestheticodes with colleagues from the Horizon institute. This is somewhat like the increasingly familiar QR codes that festoon the world around us, in that you point a camera (such as the one on your mobile phone) at a visual pattern (that contains a computer-readable code hidden within it) in order to trigger an interaction (such as the display of some digital media). However, it is based on a distinctive and very interesting approach that I first encountered in a CHI 2009 paper by Enrico Costanza.

The basic idea is to embed a digital code into the topology of the image, by which I mean representing it through the hierarchical structure of a drawn image – specifically the number and depth of nested regions within regions. You will find the rules set out in the original paper, but as a quick example, the following image shows a pattern with the simple code 1.2.2.4 embedded within it four times. The code is represented by the numbers of solid blobs that are contained in a set of white regions which must be joined together into a single connected shape (the numbers are then simply written in ascending order). This simple code is clearly repeated four times within the image.

Two differently drawn patterns embedding the same topological code
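
To make the encoding a little more concrete, here is a minimal sketch in Python of how a code might be read from such a nested structure. The Region class and read_code function are hypothetical names of my own, intended purely as an illustration of the idea rather than the actual Aestheticodes or d-touch implementation.

    # A minimal sketch of reading a topological code from a nested-region
    # structure. 'Region' is a hypothetical illustration class, not part of
    # Aestheticodes or d-touch.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Region:
        children: List["Region"] = field(default_factory=list)

    def read_code(root: Region) -> List[int]:
        # The root stands for the single connected white shape; each child is
        # one of the white regions within it; each grandchild is a solid blob.
        # The code is the per-region blob count, written in ascending order.
        return sorted(len(region.children) for region in root.children)

    # The example from the text: four white regions holding 1, 2, 2 and 4 blobs.
    marker = Region(children=[
        Region(children=[Region()]),                    # 1 blob
        Region(children=[Region(), Region()]),          # 2 blobs
        Region(children=[Region(), Region()]),          # 2 blobs
        Region(children=[Region() for _ in range(4)]),  # 4 blobs
    ])
    assert read_code(marker) == [1, 2, 2, 4]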

This idea excited me for two main reasons. First, the rules for making visual codes are expressed through a simple set of drawing rules that a human can quickly learn. This turns the generation of visual codes into a creative act that builds on people’s natural drawing skills. Second is the consequence that two very different looking images can represent the same code, as it is the topological structure of the image that matters, not the actual shapes that are drawn. For example, the second image here also contains numbers of blobs within connected white regions, but is drawn using different shapes and in a different style.

On the other hand, two very similar images can in fact represent quite different codes due to subtle variations that the viewer doesn’t pick up on, for example a few extra small blobs here and there. This makes the overall approach very open to playfulness and creativity, as we recently found out through a collaboration with Tony Quinn and Emily-Clare Thorne, two ceramic designers from Central St Martins college. We reported the full details at CHI this year, but in a nutshell, it took us about a day to train a group of ceramic designers to be able to draw these kinds of patterns, after which we commissioned them to each produce a set of three designs to appear on plates, placemats and menus at the Busaba Eathai restaurant.
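
Going back to the point about similar images hiding different codes: continuing the Region sketch above (and reusing its hypothetical Region and read_code definitions), a couple of extra small blobs are enough to turn the 1.2.2.4 marker into a different code, even though the drawing barely changes to the eye.

    # Two small extra blobs added to the first white region change the code.
    marker.children[0].children.extend([Region(), Region()])
    assert read_code(marker) == [2, 2, 3, 4]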

You can see their final designs online, but I include a couple of photos taken at an exhibition that was staged at Busaba during London Design Week last year.


What fascinated me was how our designers were able to hide specific visual codes within rich and complex patterns, and especially the various strategies they employed for achieving this. One was to add embellishments that a human would see as part of the pattern, but that are in fact not connected to the parts that contain the code. Another was to introduce large solid regions that look significant to a human, but are actually no different from a regular line as far as the topology of the drawing is concerned. A third was to use different colours, which again are ignored by the ‘system’ that thresholds an image to black and white before processing it.


On the other hand, designers also needed to be aware of how the technology processes images in order to produce patterns that would work on ceramics when tested under real-world conditions (especially tricky lighting, as you might notice from the nasty specular reflections in the photos). Not only did they have to appreciate the basic drawing rules, they also had to understand how, for example, lines and small gaps could become problematic when images were digitised and became pixelated.

In short – and here is where we can tie it back to Victoria Bellotti’s questions – in order to make rich decorative patterns that contained reliable codes, our designers had to span two worlds. On the one hand, they had to understand how humans view patterns, carefully separating figure from ground and employing basic principles from Gestalt psychology, such as closure and similarity, to get them to read a coherent pattern from possibly disconnected parts. On the other, they had to be able to reason about how the technology processes the patterns – which is quite different – thresholding them and then searching for nested topological structures. By understanding both worlds, the designers could bridge between them. Importantly, they could also exploit the fact that the two don’t always overlap: there are some things that people see as significant but that the system does not (various embellishments), and others that are significant to the system but not to people.
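
To make that processing pipeline a little more concrete, here is a rough Python sketch of the kind of steps described: threshold the photo to black and white, recover the nesting of regions as a contour tree, and read off candidate codes. It uses OpenCV (4.x) purely as an illustration; this is not the actual Aestheticodes pipeline, and the function name, parameter choices and example filename are my own assumptions.

    # Rough sketch: threshold, build a containment (contour) tree, read codes.
    import cv2

    def candidate_codes(path):
        grey = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        # Adaptive thresholding copes better with uneven lighting and specular
        # reflections than a single global cut-off; inverting makes the dark
        # drawing the foreground.
        binary = cv2.adaptiveThreshold(grey, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                       cv2.THRESH_BINARY_INV, 31, 5)
        # RETR_TREE records, for every contour, its parent in the nesting tree.
        contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE,
                                               cv2.CHAIN_APPROX_SIMPLE)
        if hierarchy is None:
            return []
        hierarchy = hierarchy[0]  # one row per contour: [next, prev, child, parent]
        codes = []
        for root in range(len(contours)):
            regions = [i for i in range(len(contours)) if hierarchy[i][3] == root]
            if not regions:
                continue
            # For each region nested inside this root, count the leaf blobs
            # (contours with no children of their own) that it contains.
            counts = [sum(1 for k in range(len(contours))
                          if hierarchy[k][3] == r and hierarchy[k][2] == -1)
                      for r in regions]
            codes.append(sorted(counts))
        return codes

    # print(candidate_codes("plate_photo.jpg"))  # hypothetical image file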

It is these differences between human and system perception of the patterns – the fact that they only partially overlap – that provide the creative wiggle room for hiding codes within beautiful patterns. I’d emphasise two final points here in what has become quite a long post.

  • It is the designers who, at least in part, are dealing with Victoria’s questions, essentially mediating between the system and the end-users (although there are still challenges for end-users too, as we discuss in our paper).
  • This is perhaps only possible because the operation of the sensing system, in this case an image processing system, is revealed to the designers through a relatively simple set of drawing rules that fit well with their natural skills and training.

So my final question is whether other – or perhaps even all – invisible sensing technologies should be based on transparent rules that can be revealed to ‘designers’ so that they can both make them work reliably and also mess around with them creatively. Conversely, might technologies that operate as if by magic not be amenable to such creative use? Or, adding a further layer of subtlety, must human designers appreciate how the system works in order to craft the magic for everyday users?

It’s an intriguing question – and one that we are currently exploring further as we extend the approach to enable us to embed recognisable codes into fabrics – especially lace.