Wireless display design
Imagine picking up a palmtop display left on a table. Although
it looks dead with no wires coming out, when you pick it up it
springs to life with the logo of your own wearable computer! How
did that happen!? You didn't plug in any wires, you didn't press any
buttons, you didn't reconfigure your wearable! This is
hot-plug-and-play without the plugs! A display automatically
commandeered by your wearable computer, just when you pick it up!
Or perhaps you walk towards a wall-mounted display. As you get
within reaching distance it becomes your display, driven by your
wearable.
The point is that your wearable computer works with whatever
peripherals you like. You just signal your intent with natural,
physical gestures, such as picking up the display, and your
wearable transparently handles all the connectivity and
configuration.
We want to stage two demos: the first showing a (WinCE)
palmtop computing device converted to act as a dumb display for
our CyberJacket; the second showing one of the
"Live-The-Vision" InfoStations acting as a walk-up display.
Instigating peripheral sessions
You can imagine many different schemes whereby the wearable
computer is able to calculate when a user would like to work with
a particular peripheral device. On the whole, proximity is the
best such clue. Near-field radio is a good technology for detecting the
proximity of a transmitter because it offers a relatively well
defined cell size and does not suffer from reflection problems.
PhilN's near-field radio link, FootBridge (so called because
it provides a data bridge for devices no more than a
foot apart), could be used to detect and negotiate with
peripheral devices within a foot of the user's wrist. This is the
basic model: when a peripheral device falls within a foot of the
wrist, the wearable negotiates with it, and if the states of the
display and the wearable are conducive, the wearable establishes a
session with the peripheral device. This session may utilise
FootBridge (as in the case of the handheld display), or it may
adopt another communication route.
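To make that concrete, here is a minimal sketch in C of the
instigation logic. It assumes a FootBridge detection call and a
session layer that do not exist yet; every name in it
(footbridge_detect, wearable_ready, open_session and so on) is an
invented placeholder, not a real API.

    #include <stdio.h>
    #include <stdbool.h>

    typedef enum { LINK_FOOTBRIDGE, LINK_OTHER } link_t;

    typedef struct {
        int  id;
        bool accepts_sessions;  /* is the peripheral free to be commandeered? */
        bool needs_bandwidth;   /* dumb handheld display, or something richer? */
    } peripheral_t;

    /* Invented stub for the near-field radio: reports a peripheral
       within a foot of the wrist, if there is one. */
    static bool footbridge_detect(peripheral_t *p)
    {
        p->id = 1;
        p->accepts_sessions = true;
        p->needs_bandwidth = false;
        return true;
    }

    static bool wearable_ready(void) { return true; }  /* invented stub */

    static void open_session(const peripheral_t *p, link_t link)
    {
        printf("session with peripheral %d over %s\n", p->id,
               link == LINK_FOOTBRIDGE ? "FootBridge" : "another route");
    }

    int main(void)
    {
        peripheral_t p;

        /* Negotiate only with devices within a foot of the wrist... */
        if (footbridge_detect(&p)
            /* ...and only if both states are conducive. */
            && wearable_ready() && p.accepts_sessions) {
            /* A dumb display can stay on FootBridge; anything hungrier
               negotiates another communication route. */
            open_session(&p, p.needs_bandwidth ? LINK_OTHER : LINK_FOOTBRIDGE);
        }
        return 0;
    }

The one decision the sketch highlights is the transport choice at
the end: a dumb handheld display can stay on FootBridge, while
anything hungrier for bandwidth negotiates another route.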
Controlling peripheral sessions
Initial contact between wearables and peripherals is made by
the wearable broadcasting and the peripherals listening. This
saves peripheral power at the expense of wearable power. There
are three reasons for this choice:
- When the wearable is 'off' (not being used by the
wearer), there is no need to poll for peripherals, and
thus power can be saved.
- Peripherals are left scattered and maybe forgotten,
whereas the wearable is taken everywhere and is personal.
It is easier to imagine wearers taking steps to recharge
their wearable than their displays.
- There are likely to be many peripherals for each
wearable. Recharging a single wearable is simpler than
recharging many displays.
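To illustrate the asymmetry, here is a hedged sketch of the two
radio loops (one per device) in C; all of the function names are
invented stand-ins for whatever the real radio layer provides.

    #include <stdbool.h>

    /* Invented stubs for the radio layer. */
    static bool worn_and_active(void) { return true; }  /* is the wearable 'on'? */
    static void beacon_send(void) { }                   /* transmit: costs power */
    static bool wait_for_beacon(int ms) { (void)ms; return false; } /* cheap */
    static void answer_beacon(void) { }
    static void sleep_ms(int ms) { (void)ms; }

    /* Wearable side: the only transmitter, and only while in use. */
    void wearable_radio_loop(void)
    {
        for (;;) {
            if (worn_and_active())
                beacon_send();  /* poll for peripherals... */
            sleep_ms(500);      /* ...but stay silent when 'off' */
        }
    }

    /* Peripheral side: listen only, so a scattered and maybe
       forgotten display spends next to nothing between uses. */
    void peripheral_radio_loop(void)
    {
        for (;;) {
            if (wait_for_beacon(1000))
                answer_beacon();  /* reply only once a wearable is heard */
        }
    }

The point to notice is that the peripheral never transmits until it
hears a beacon, and the wearable never beacons while 'off', which is
exactly where the power savings above come from.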
Here are the 'rules of engagement':
- The wearable connects to the first peripheral that it is
able to detect.
- Once a session is established, the wearable will not
connect to other peripherals coming into range.
- Any break in the communication is interpreted to mean that
the user no longer wishes to use the peripheral.
Here are the consequences:
If the wearer approaches two acceptable handheld displays
simultaneously, then the wearable may establish a connection with
either. The user may pick up one of the devices only to find that
the wearable has established a link with the other. In this case
the wearer can either move away from the active display until it
goes out of range and a new connection is established with the
display the user is holding, or they may move the active display
away (akin to clearing their working area).
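Those rules and their consequences reduce to a two-state machine on
the wearable. A sketch in C, with invented event names standing in
for whatever the radio layer actually reports:

    #include <stdio.h>

    typedef enum { IDLE, IN_SESSION } state_t;
    typedef enum { EV_PERIPHERAL_DETECTED, EV_LINK_BROKEN } event_t;

    static state_t state = IDLE;
    static int     peer  = -1;   /* id of the peripheral we are driving */

    /* Invented entry point: the radio layer reports detections and breaks. */
    static void on_radio_event(event_t ev, int peripheral_id)
    {
        switch (state) {
        case IDLE:
            if (ev == EV_PERIPHERAL_DETECTED) {
                peer  = peripheral_id;   /* rule 1: first detected wins */
                state = IN_SESSION;
                printf("connected to %d\n", peer);
            }
            break;
        case IN_SESSION:
            if (ev == EV_LINK_BROKEN) {
                /* rule 3: any break means the user is finished */
                printf("dropped %d\n", peer);
                peer  = -1;
                state = IDLE;
            }
            /* rule 2: detections during a session are simply ignored */
            break;
        }
    }

    int main(void)
    {
        /* Two displays in range at once: the first detected wins. */
        on_radio_event(EV_PERIPHERAL_DETECTED, 1);
        on_radio_event(EV_PERIPHERAL_DETECTED, 2);  /* ignored */
        on_radio_event(EV_LINK_BROKEN, 1);          /* walk away: back to IDLE */
        return 0;
    }

The two-display scenario above falls straight out of this: whichever
display is detected first takes the session, and only a break in the
link (moving away, or moving the display away) returns the wearable
to the idle state.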
Initially there are no plans for one wearable driving many
screens, or for many wearables sharing a single screen.
What protocol should we use to communicate between the
wearable and the peripherals? By using a standard protocol it may
be possible for us to avoid having to write the application-wire
or wire-peripheral software, so what standard protocols are out
there?
- X wire protocol. The UNIX/Linux standard. We could deploy
an X client on the wearable, but we would also need to
run an X server on the peripheral. This is a big old
beast (tens of MBs), unlikely to fit on lightweight
peripherals using low-power microprocessors. We could use
a standard X client and then write our own stripped-down
X server. This compromise would save us the
application-wire development and would allow us to run
standard UNIX/Linux apps. Nonetheless, the X protocol is
notoriously greedy on bandwidth.
- ICA 3.0 (Independent Computing Architecture). This is a
major but proprietary protocol developed by Citrix for
client-server computing on Windows devices. Citrix
currently use the protocol within their WinFrame
client-server package. Unfortunately the WinFrame server
(application-wire software) is too heavy-weight to run on
our wearable platform, in particular it only runs over
the Windows NT OS. We could write our own stripped down
ICA application-wire protocol and then use Citrix's
WinFrame WinCE clients, but we need a licence to get
details of the ICA protocol. It is not clear that this is
worth the effort, compared to writing our own system from scratch.
- There are many "remote control" applications
for Windows: pcAnywhere, LapLink, Reach Out, Carbon Copy,
Close Up, Co/Session, Remote Desktop. Unless we are happy
to run DOS apps, they would all require Windows to be
running on the wearable. Unsurprisingly, none support
WinCE at the server side, and all other Windows variants
are too heavy-weight. In any case, it is notoriously hard
to get WinCE running on independently-assembled
platforms such as our own. If we did use a DOS server
then I bet that we would be limited to text-mode graphics
only. I doubt that is adequate for the range of applications
that we would like to support on the wearable.
- HTTP. In this instance the wearable runs a web server and
the peripheral runs a web browser. Often web applications
are CGI scripts based on standard scripting languages
such as Perl. It can be messy to develop applications in
this way, since each interaction with the application
interface corresponds to the execution of a separate
script at the server. Microsoft offers an Internet
Application developer kit based on its Visual Studio
product that combines C/Visual Basic with ActiveX
components. This would make application development
easier, but probably requires Windows NT to be running on
the wearable. We could write applications as Java apps
using the AWT interface to render graphics at the client,
but this is slightly counter to the philosophy of the
peripheral as an extremely dumb client with processing
taking place at the server. There are already stripped
down web browsers available for WinCE devices. The WinCE
1.0 devices are not able to support plug-ins or OLE
objects (e.g. ActiveX objects). This may have changed in
WinCE 2.0. Are there any functional limits that we should
know about for these mini-browsers? What about the
server? How small can we make the server? Will there be
any difficulties persuading the browser and server to
communicate over FootBridge? I doubt it.
- JetSend. HP's new and whizzy peer-to-peer communication
protocol developed at HPLB. This protocol was developed
to allow (non-PC) devices to interact without the need
for a fully-fledged operating system and pre-loaded
device drivers. In fact our demonstrator will have both
the wearable and the peripheral with operating systems
and device drivers, so we are actually operating in the
opposite world! Nonetheless JetSend contains a protocol
for transmitting "Ematerial" between devices
and some software to encode and decode messages. It is
also very lightweight in terms of code size (8K for JetSend
Lite, up to hundreds of K). Currently JetSend has been
developed with emphasis on data transmission. It is
possible to support client-instigated document changes
and in this way an interactive program interface can be
built. Nonetheless, rapid updates caused by dragging
actions could swamp the communication channel. There is
also some support for transmission of control surfaces
that support highly abstract interaction devices, but
these are in an early stage of development (not available
in the current JetSend release).
- Alternatively, brew-your-own. In this case we would have
to write our own application-wire and wire-peripheral
components, but they could be minimal. Even a minimal
scheme still presents some challenges such as compressing
bitmaps. Third party solutions are likely to perform at
least as well and maybe noticeably better.
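For a sense of scale, a brew-your-own wire format could be as small
as a framed rectangle update carrying run-length-encoded pixels. The
sketch below is hypothetical (both the framing and the codec are
invented here), but it shows where the compression challenge bites.

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    /* One "paint this rectangle" message: header, then RLE payload. */
    typedef struct {
        uint16_t x, y, w, h;  /* target rectangle on the peripheral */
        uint16_t rle_len;     /* bytes of RLE payload that follow   */
    } rect_update_t;

    /* Trivial RLE codec: (run, value) byte pairs. Returns encoded size. */
    static size_t rle_encode(const uint8_t *px, size_t n, uint8_t *out)
    {
        size_t o = 0;
        for (size_t i = 0; i < n; ) {
            uint8_t v   = px[i];
            size_t  run = 1;
            while (i + run < n && px[i + run] == v && run < 255)
                run++;
            out[o++] = (uint8_t)run;
            out[o++] = v;
            i += run;
        }
        return o;  /* worst case 2n, on noisy bitmaps */
    }

    int main(void)
    {
        uint8_t row[16] = {0,0,0,0, 1,1,1,1, 1,1,1,1, 0,0,0,0};
        uint8_t payload[32];               /* 2n covers the worst case */
        rect_update_t msg = { 0, 0, 16, 1, 0 };

        msg.rle_len = (uint16_t)rle_encode(row, sizeof row, payload);
        printf("16 pixels -> %u RLE bytes\n", (unsigned)msg.rle_len);
        return 0;
    }

A codec this naive doubles the size of noisy bitmaps in the worst
case, which is precisely why third-party solutions are likely to
win here.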
What kind of things do we want to demonstrate?
- That conventional applications can be run using our
wearable with wireless display. In which case we need to
support an established graphics interface such as
X-windows, Windows GDI or Java UI. We also need to
consider a standard wire protocol for graphics, so that
we might not have to build all the application-wire and
wire-peripheral components ourselves.
- That drivers do not have to be preloaded into the
wearable platform. Surely with a wire-protocol connecting
the wearable to the peripheral, this is assured?
Alternatively we can take the view that existing GDIs are
inappropriate for wearables and design something more appropriate
from scratch. I believe that to do this job well would be too
much of a distraction to the broader wearable programme, and
would alienate third party developers used to conventional GDIs.
We could nonetheless leave in the hooks to allow wearables to
send a wire-protocol interpreter to the peripheral, prior to
using a specific wire-protocol. This is also too distracting:
I accept the principle, but I would like us to focus on building a
demonstrator in the least possible time. What I cannot decide is
whether we should develop a quick-and-dirty proprietary GDI and
wire-protocol, which would mean that all applications would have
to be written with our GDI in mind, or whether we should spend
more time, implement a standard protocol and be able to use
existing applications. This hinges on two things: what
applications we would like to demonstrate, and how much we want to
disseminate and future-proof our wireless display work. For
example, if we were to move to a Windows CE platform eventually,
would that make it certain that we should work with the Windows
GDI now? Mmmm.