In February 2019, I gave a talk on “Mobile design with device-to-device networks” at the Open Source Design track of the FOSDEM conference. This post is adapted from my slides and notes.
As part of my work with Terranet, a Swedish R&D company focused on direct connectivity, I designed and created mobile demos and prototypes to communicate the usefulness of Wi-Fi Aware and similar technologies. This gave me the chance to explore this novel space and to reflect on how we can find out what may be created with a new design material.
The prototypes shown here run on regular Pixel 2 devices with Android.
Direct connectivity is the ability to create networks between two or more devices without needing any other infrastructure nor Internet access.
You might already know technologies like Bluetooth, hotspots, or Wi-Fi Direct. There is a newer one, called Wi-Fi Aware, which is what I am using for the examples here. Future 5G technologies will support device-to-device connections as well.
This field is relevant now because, from my point of view, these technologies are progressively becoming fast enough, convenient enough, and flexible enough to enable new interactions and new solutions.
How can we start to find out what new things can be done with this new technology?
It is like exploring a new (design) space: you don't know what might be out there, so you have to feel your way around.
My main point in this piece is that, in order to carry out this exploration, you need to switch continuously between the perspective of the designer and the perspective of the engineer. You need to observe people and understand them, and you need to investigate the technology and tinker with it. You need to design solutions and prototype them. And most important of all, after each step you need to reflect on what you have learned and how it moves you forward.
(I do realize that this is still a niche field; my goal here is simply that by showing my own explorations, you might be able to extract from them some ideas that could be useful for your own work.)
Wi-Fi Aware is an implementation of a standard called Neighbor Awareness Networking (NAN), which allows devices to discover and connect to each other directly.
How does it work? A very simple explanation is this:

- Devices near each other form clusters and synchronize their wake-up times.
- During short discovery windows, publishers announce services by name, and subscribers listen for matching names.
- Once a subscriber has discovered a publisher, the two devices can set up a direct data path between them.
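In essence, publishers announce a service by name and subscribers listen for matching names. Here is a toy simulation of that discovery step — plain Java with invented names, purely illustrative, not the real `android.net.wifi.aware` API:

```java
import java.util.*;

// Toy model of Neighbor Awareness Networking discovery (illustrative only).
class Device {
    final String name;
    final Set<String> published = new HashSet<>();   // services this device announces
    final Set<String> subscribed = new HashSet<>();  // services this device looks for
    Device(String name) { this.name = name; }
}

class Cluster {
    private final List<Device> devices = new ArrayList<>();

    void join(Device d) { devices.add(d); }

    // Each discovery window, every subscriber hears every nearby publisher
    // that announces a matching service name.
    Map<String, List<String>> discoveryWindow() {
        Map<String, List<String>> matches = new HashMap<>();
        for (Device sub : devices)
            for (String service : sub.subscribed)
                for (Device pub : devices)
                    if (pub != sub && pub.published.contains(service))
                        matches.computeIfAbsent(sub.name, k -> new ArrayList<>())
                               .add(pub.name + ":" + service);
        return matches;
    }
}
```

The real API is asynchronous and callback-based, but the underlying mechanic is this simple: announce, listen, match.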
And that's it. That's our material.
Let's play with it.
I built a small tool that uses Wi-Fi Aware to discover other devices and connect to them, which helped me a lot in trying out and understanding the API.
Each announcement contains a user ID and a name. You can see how, after the devices have detected each other, we can tap on the peer's name to create a connection.
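Wi-Fi Aware lets a publisher attach a small byte array of "service specific info" to its announcements, which is one way to carry the user ID and name. The field layout below is my own invention, not part of the standard:

```java
import java.io.*;

// Hypothetical announcement payload: two length-prefixed UTF-8 strings.
final class Announcement {
    final String userId;
    final String displayName;

    Announcement(String userId, String displayName) {
        this.userId = userId;
        this.displayName = displayName;
    }

    byte[] encode() {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            DataOutputStream data = new DataOutputStream(out);
            data.writeUTF(userId);
            data.writeUTF(displayName);
            byte[] bytes = out.toByteArray();
            if (bytes.length > 255)  // announcements must stay tiny
                throw new IllegalArgumentException("payload too large");
            return bytes;
        } catch (IOException e) {    // cannot happen with in-memory streams
            throw new UncheckedIOException(e);
        }
    }

    static Announcement decode(byte[] bytes) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
            return new Announcement(in.readUTF(), in.readUTF());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Keeping the payload tiny matters: announcements are broadcast repeatedly, and the API only allows a small blob per service.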
Here's an idea: I should be able to use Wi-Fi Aware with applications that were not created for it, right?
Well, that actually almost never works.
The one application that works like that out of the box is… OpenArena.
OpenArena is a game based on the Quake 3 engine, ported from the desktop to mobile. It turns out that Wi-Fi Aware uses scoped IPv6 addresses (the textual address includes the name of the network interface), and many apps and libraries are not able to handle them correctly.
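A small example of what "scoped" means in practice. Link-local IPv6 addresses (the `fe80::/10` range, which is what Wi-Fi Aware hands out) are only meaningful together with a specific network interface, so code that parses just the bare address silently loses that scope. This sketch uses the standard `java.net.Inet6Address` API:

```java
import java.net.Inet6Address;
import java.net.UnknownHostException;

// Link-local addresses need a scope (normally the interface index) to be
// routable; an address without one cannot actually be used for traffic.
class ScopedAddress {
    // Build fe80::1 with an explicit scope id.
    static Inet6Address linkLocal(int scopeId) {
        byte[] raw = new byte[16];
        raw[0] = (byte) 0xfe;  // fe80::/10 link-local prefix
        raw[1] = (byte) 0x80;
        raw[15] = 1;
        try {
            return Inet6Address.getByAddress(null, raw, scopeId);
        } catch (UnknownHostException e) {  // cannot happen: length is valid
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        Inet6Address scoped = linkLocal(7);
        // The textual form carries the scope after a '%':
        System.out.println(scoped.getHostAddress());  // fe80:0:0:0:0:0:0:1%7
        System.out.println(scoped.getScopeId());      // 7
    }
}
```

Libraries that stuff the textual form into a URL or strip everything after the `%` end up with an address they cannot connect to — which seems to be exactly what breaks many existing apps.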
So, what did I learn from this tinkering? First of all, that the technology works, although the implementation is sometimes still a bit unstable.
The API is not too easy to use, so there's some work to do in terms of libraries and utilities. Having done this exploration is a good starting point to know what is useful and needed.
Many apps and some protocols (VLC, WebRTC) don’t seem to work, usually because of the scoped IPv6 addresses.
Tinkering and playing with technology can lead to unexpected discoveries and insights.
Finally, there are potential privacy issues: service announcements are public, and anyone nearby can read them or fake them.
Now we change perspectives and look at this space from the designer's point of view.
A design process usually consists of research, design, prototyping, testing, and evaluation. In this kind of exploration, the last step of critiquing your work and learning from it is the most important one. Those lessons are what you want to take away, so they can become guidelines for your future work.
I first got in touch with this field when I was studying Interaction Design in Malmö. Terranet approached the university to do a project designing an application to carry out presentations using mesh networks.
We started with research questions focused on:
After the research phase, I got several important insights:
This is a video of the prototype that I created.
A lot of the functionality was simulated: each device already had all the images, and they only exchanged small messages to select which one to show. Simple, but it worked well enough that I was able to carry out two presentations in front of an audience at university, which was a good way to test and demonstrate the design.
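Faking the heavy part like this keeps the prototype honest about what it is testing: since every device already holds all the slides, the peers only need a tiny "show slide N" control message. A toy version of that control channel (all names are mine, the real prototype's code surely differs):

```java
import java.util.*;

// One presenter broadcasts a slide index; each peer looks up its own
// pre-shared local copy of that slide. No media travels over the network.
class SlideChannel {
    private final List<SlideViewer> peers = new ArrayList<>();

    void join(SlideViewer v) { peers.add(v); }

    void broadcast(int slideIndex) {
        for (SlideViewer v : peers) v.onShow(slideIndex);
    }
}

class SlideViewer {
    private final List<String> localSlides;  // already present on every device
    String visible;

    SlideViewer(List<String> localSlides) { this.localSlides = localSlides; }

    void onShow(int slideIndex) { visible = localSlides.get(slideIndex); }
}
```

Because the messages are tiny, the demo stays responsive even when the underlying connection is slow or flaky — a useful property when the networking layer itself is still experimental.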
After finishing my master's, I stayed at Terranet to bring this prototype to life. This is a video of its latest state:
The devices use NFC to exchange enough information to create the network. Participants can share their own content. Media files are automatically distributed among all the participants. The camera is integrated into the app, so you can take a photo and have it show up on the other devices right away. There is also live drawing.
This prototype worked very well for our purposes of demonstrating and communicating the usefulness and possibilities of this technology. It was also a good way to test and refine the underlying framework and tools.
Let's take a step back and look critically at this work, so we can learn some lessons for the future.
There is a tension between prototypes being very focused on specific aspects and them being open and flexible. This one started being very focused on the presentation use case, but later on we saw that there was value in flexibility: we could try out different scenarios easily, like collaborative drawing, annotating a PDF book together, or sharing the camera.
This prototype was very good for demos and communication, but only as long as somebody knowledgeable was available to set things up; it is not easy for people to get on board on their own. There is, of course, the practical matter of needing two capable devices to test it. And the mental model is very different from the way people normally use their phones.
Using body gestures can help communicate a mental model for direct connectivity that is easier for people to understand. Tapping the phones together grounds the interaction: it gives a reason why it only works with people nearby, and it makes it almost intimate. You and me; everybody else is outside.
Building on these ideas, I created a small tool that is much more focused: it lets you share large files with a friend just by tapping the phones together.
Share. Tap. Done.
It is fast and quite flexible: while one transfer is going on, the next one is already being prepared.
And you can of course send several files at the same time.
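The "next one is already being prepared" idea can be sketched as a small pipeline. Everything here is my own illustration (the "network" is just a sleep); the point is only the overlap between preparing one file and sending the previous one:

```java
import java.util.*;
import java.util.concurrent.*;

// Overlap preparation (hashing, thumbnailing, ...) with sending, so the
// link never sits idle between files.
class PipelinedSender {
    private final ExecutorService prep = Executors.newSingleThreadExecutor();
    private final List<String> log = Collections.synchronizedList(new ArrayList<>());

    byte[] prepare(String name, byte[] raw) {
        log.add("prepared " + name);   // real code would hash/compress here
        return raw;
    }

    void send(String name, byte[] payload) throws InterruptedException {
        Thread.sleep(20);              // stand-in for the actual transfer
        log.add("sent " + name);
    }

    void shareAll(Map<String, byte[]> files) {
        try {
            Future<byte[]> pending = null;
            String pendingName = null;
            for (Map.Entry<String, byte[]> entry : files.entrySet()) {
                Future<byte[]> ready = pending;
                String readyName = pendingName;
                // Start preparing this file immediately...
                pending = prep.submit(() -> prepare(entry.getKey(), entry.getValue()));
                pendingName = entry.getKey();
                // ...while the previously prepared file is being sent.
                if (ready != null) send(readyName, ready.get());
            }
            if (pending != null) send(pendingName, pending.get());
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            prep.shutdown();
        }
    }

    List<String> log() { return log; }
}
```

With more than two files, preparation cost effectively disappears behind transfer time, which is what makes a "tap, share, done" flow feel instant.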
In closing, I would like to mention some areas where I think that there is interesting work to do, and where Free Software can play a role.
🕵️‍♀️ The first one is privacy. As I mentioned, service announcements are public and can be easily faked, both of which pose grave threats to privacy and security. We need a free and open system that lets you find your friends, but prevents other people from finding you.
📽The second area is video. There are some pretty cool scenarios that are possible when you can share your phone's camera with a friend nearby: take remote photos, record video from multiple points of view, stream HD content without a server, etc.
🚘 And the third area is the automotive sector: if you can use these technologies to detect the people and cars around you, you can build a car that sees around corners and prevents accidents.
The technology for direct connectivity is "getting there" and there are a lot of scenarios and solutions that are becoming possible now.
There is an opportunity in creating tools that are aware of the people around us and support us when we are collaborating with them, in a way that can be much more context-aware and private than an Internet-based solution.
At the same time, we also need to find and define the concrete scenarios where this technology makes sense.
Finally, solutions have to be built on top of a simple mental model that helps users understand how the technology works and what its constraints and possibilities are. A good starting point for that mental model is to explore embodied interactions, like “tap to connect”.
The process of exploring a new design space needs to combine different approaches and points of view.
From the design point of view, one has to find real use cases, craft solutions for them, and learn from that experience. This reflection should try to find insights about the whole design space, create guidelines to support future work, and point at further directions for exploration.
From the engineering point of view, one needs to study the technology and tinker and play with it; understand its potential and limitations; and build prototypes that are focused and functional enough to study the desired scenario, but also flexible enough to mock up unexpected ideas.
Solutions need to be built on top of a mental model that makes the technology easy to understand, and provide clear answers to questions of usefulness (“why should I use this?”) and required knowledge (“what do I need to understand to use this?”).
Don't be afraid to experiment and play and try stuff out, but always remember to reflect and learn from these experiences.