Bristol Wearable Computing



Wireless display design

The vision

Imagine picking up a palmtop display left on a table. Although it looks dead, with no wires coming out, when you pick it up it springs to life with the logo of your own wearable computer! How did that happen!? You didn't plug in any wires, you didn't press any buttons, you didn't reconfigure your wearable! This is hot-plug-and-play without the plugs: a display automatically commandeered by your wearable computer, just when you pick it up!

Or perhaps you walk towards a wall-mounted display. As you get within reaching distance it becomes your display, driven by your wearable computer.

The point is that your wearable computer works with whatever peripherals you like. You just signal your intent with natural, physical gestures, such as picking up the display, and your wearable transparently handles all the connectivity and configuration issues.

The demo

We want to stage two demos: the first showing a (WinCE) palmtop computing device converted to act as a dumb display for our CyberJacket, the second showing one of the "Live-The-Vision" InfoStations acting as a walk-up display.

Instigating peripheral sessions

You can imagine many different schemes by which the wearable computer infers when a user would like to work with a particular peripheral device. On the whole, proximity is the best clue that a user wishes to use a particular peripheral. Near-field radio is a good technology for detecting the proximity of a transmitter because it offers a relatively well-defined cell size and does not suffer from reflection problems.
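As a sketch of how this might work in software, the loop below treats presence in the near-field cell as intent to use the peripheral. It is a minimal illustration in Python; NearFieldRadio, Session and the polling scheme are all assumptions for illustration, not an existing FootBridge API.

    import time

    # Minimal sketch: presence in the near-field cell is taken as intent.
    # NearFieldRadio and Session are hypothetical stand-ins.

    class NearFieldRadio:
        """Stand-in transceiver: would return peripheral IDs heard in the cell."""
        def devices_in_range(self):
            return []

    class Session:
        def __init__(self, device_id):
            self.device_id = device_id
        def close(self):
            print("closing session with", self.device_id)

    def instigation_loop(radio, poll_interval=0.5):
        active = {}  # device_id -> Session
        while True:
            in_range = set(radio.devices_in_range())  # IDs heard this cycle
            for dev in in_range - set(active):
                active[dev] = Session(dev)            # peripheral entered the cell
            for dev in set(active) - in_range:
                active.pop(dev).close()               # peripheral left the cell
            time.sleep(poll_interval)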

Footbridge

PhilN's near-field radio link, FootBridge (so called because it provides a data bridge for devices no more than a foot apart), could be used to detect and negotiate with peripheral devices a foot away from the user's wrist. Thus the basic model is that the wearable computer negotiates with peripheral devices that fall within a foot of the user's wrist. If the states of the display and the wearable are conducive, the wearable establishes a session with the peripheral device. This session may utilise FootBridge (as in the case of the handheld display), or it may adopt another communication route.
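The handshake below sketches this two-stage model: negotiation happens over FootBridge, but the session itself may run over another transport. The message fields and the exchange() call are invented for illustration; this is not FootBridge's actual framing.

    # Hypothetical negotiation over FootBridge; names and message
    # formats are invented for illustration.

    def negotiate(link, wearable_id="CyberJacket"):
        """Establish a session if the peripheral's state is conducive."""
        reply = link.exchange({"type": "HELLO", "wearable": wearable_id})
        if reply.get("state") != "idle":
            return None                    # peripheral busy or unwilling
        # Use a faster transport if the peripheral offers one, otherwise
        # stay on FootBridge itself (as for the handheld display).
        transport = reply.get("transports", ["footbridge"])[0]
        link.exchange({"type": "START_SESSION", "transport": transport})
        return transport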

Controlling Peripheral Sessions

Initial contact between wearables and peripherals is made by the wearable broadcasting and the peripherals listening. This saves peripheral power at the expense of wearable power. There are three reasons for this choice:

Here are the 'rules of engagement':

Here are the consequences:

If the wearer approaches two acceptable handheld displays simultaneously, the wearable may establish a connection with either. The user may pick up one of the devices only to find that the wearable has established a link with the other. In this case the wearer can either move away from the active display until it goes out of range, so that a new connection is established with the display they are holding, or move the active display away (akin to clearing their working area).
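The arbitration rule described above amounts to "the first link wins, and a new link forms only when the old one drops out of range". A minimal sketch, assuming the wearable tracks one active peripheral at a time:

    def arbitrate(current, candidates):
        """Keep the current session while its peripheral stays in range;
        otherwise bind to an arbitrary in-range candidate."""
        if current in candidates:
            return current                   # existing link wins
        return next(iter(candidates), None)  # re-bind once the old display is gone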

Initially there are no plans for one wearable driving many screens, or many wearables sharing a single screen. Any compelling applications?

Platforms

What protocol should we use to communicate between the wearable and the peripherals? By using a standard protocol we may be able to avoid writing the application-to-wire or wire-to-peripheral software, so what standard protocols are out there?

What kinds of things do we want to demonstrate?

Alternatively we can take the view that existing GDIs are inappropriate for wearables and design something more appropriate from scratch. I believe that doing this job well would be too much of a distraction from the broader wearable programme, and would alienate third-party developers used to conventional GDIs.

We could nonetheless leave in the hooks to allow a wearable to send a wire-protocol interpreter to the peripheral, prior to using a specific wire-protocol. This too is distracting: I accept the principle, but I would like us to focus on building a demonstrator in the least possible time.

What I cannot decide is whether we should develop a quick-and-dirty proprietary GDI and wire-protocol, which would mean that all applications would have to be written with our GDI in mind, or whether we should spend more time, implement a standard protocol, and be able to use existing applications. This hinges on two things: what application would we like to demonstrate, and how much do we want to disseminate and future-proof our wireless display work? For example, if we were to move to a Windows CE platform eventually, would that make it certain that we should work with the Windows GDI now? Mmmm.
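To make the quick-and-dirty option concrete, here is one way a proprietary GDI wire-protocol might be framed. The opcodes, header layout and byte order are entirely invented for illustration; a real design would need at least fonts, bitmaps and input events.

    import struct

    # Invented framing: each message is a 3-byte header (opcode, payload
    # length) followed by the payload, sent big-endian over the link.
    OP_CLEAR, OP_RECT, OP_TEXT = 0, 1, 2
    HEADER = struct.Struct("!BH")

    def encode_rect(x, y, w, h):
        payload = struct.pack("!HHHH", x, y, w, h)
        return HEADER.pack(OP_RECT, len(payload)) + payload

    def encode_text(x, y, s):
        payload = struct.pack("!HH", x, y) + s.encode("latin-1")
        return HEADER.pack(OP_TEXT, len(payload)) + payload

    def decode(msg):
        # A dumb display would loop: read header, read payload, dispatch.
        opcode, length = HEADER.unpack_from(msg)
        return opcode, msg[HEADER.size:HEADER.size + length]

The standard-protocol route would instead mean implementing an existing remote-display protocol on the peripheral: more up-front work, but existing applications could then be used unmodified.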




The material displayed is provided 'as is' and is subject to use restrictions.
For problems or questions regarding this web site, contact Cliff Randell.
Last updated: January 14, 2000.
© Copyright Hewlett-Packard 1997-2000.