Google I/O 2014 – Wearable computing with Google

TIMOTHY JORDAN: Hello. That’s right. Usually I say hello and
you say, Hi, Timothy. Hello. AUDIENCE: Hi, Timothy. TIMOTHY JORDAN: I love that. Welcome to Wearable
Computing with Google. I’m going to try that
clicker one more time. I’m going to switch
to the other. Oh, something wasn’t plugged in. There we are. Woo, when things work. I’m Timothy Jordan. I’m a developer
advocate at Google. You can reach me online,
google.com/+timothyjordan, or on Twitter @TimothyJordan. I’m going to cover three
topics in this session. First, what is
wearable computing? And I mean more than just
a device that you wear. What is it at its core? I’m going to talk about
how we live in a world where users have
multiple devices. They’re using them
at the same time. And instead of ignoring
that and letting them compete with
each other, how do we connect those devices together
into a single, seamless experience for the user? And I have an announcement
in that section I think you’re going to like. And then third, how
can we use Google to make this whole thing a lot
simpler and yet more powerful? Sound like fun? OK. So first off, what is
wearable computing? I keep having
these conversations with developers about
cool ideas that they have for new form factors and
wearable computing and beyond. And although cool, they
aren’t necessarily great user experiences that solve
real user problems. Let me give you an example, and
I hear this idea all the time. In fact, a few of you probably
have had this as well. Wouldn’t it be cool if I walked
into a conference and everybody I knew had a little
thought bubble above them with their name and what
they do and how I know them? It’s not a great experience. And I’m going to tell you why. The user problem that
you’re trying to solve for is how do we connect people more
deeply to those around them. And that’s a great
problem to solve. But in that solution,
what you end up doing is distracting
them and distancing them from the people around them. Now, there’s another
way to solve this. What if the user
had that information before walking into
a business meeting? They could have looked
it up on their own. What if it was just delivered
to them through their glass or through their
Android Wear device? That’s a completely
different experience. In fact, you can already
get it on MyGlass today. And what it does is
it prepares that user and gives them the opportunity
to connect deeply to the people around them. But the problem
with the first idea is that it was built on
this idea of a device. It was built on this idea of
what computing is supposed to be in our future, when
instead, what we should be focusing on is the user. That’s the true promise
of wearable computing. See, people bring
technology into their lives and wear it out into the
world on their adventures. And our job is to enable them
to have richer experiences and be more connected to the
people and world around them. Let’s take a step
back and examine that idea in a little
bit more detail. Computers used to
look like this. They filled a whole room. Shortly after that,
just on your desk. And very soon after that,
the palm of your hands, where you could get the full
repository of human knowledge and worldwide
communication just right there by reaching into your
pocket– such a cool idea. But something that you’ll notice
about all of these devices is without anything
else in the picture, it’s hard to see the human. One of my favorite
thought experiments is by this VR
pioneer Jaron Lanier. Some of you have
heard this before. Please bear with me if you have. What if aliens came to earth–
cute fuzzy aliens– and people were nowhere to be found? And they were trying
to figure out us by looking at our computers. What would they think? Do we have 102 fingers
and one big eye? No, we don’t. Now, if you look at a hammer,
it almost makes sense. You could kind of
see the arm that would wield that hammer
and hammer in nails. But this computer– I
mean, look at the keyboard. It’s not really built for us. It’s not even really
built to type fast. Back when the keyboard
layout was designed, it was designed to keep
us from typing too fast so that the type bars
wouldn’t get caught amongst each other and
screw everything up. But this is what evolved
into the modern keyboard. This is a very
physical difference between the person
and the machine. Let’s look at something that’s
a little less physical but just as dangerous. What’s wrong with this picture? In many ways, this is our
relationship with technology. It gives us a lot of value. However, we are
adapting our lifestyle to the technology
that assists us. And there’s a heavy cost,
and we pay it frequently. How does that make sense? What if we didn’t have to? Here’s a photo I took a few
months ago of my team playing a word guessing game. That’s Lindsay in the
top center of the photo, and she’s trying to describe
a word given to her on Glass. What I love about this
photo is you can hardly see the technology. Some people think the
future of computing is floating screens and
constant digital overlay, constantly looking things
up on the internet. They want to keep you
immersed and keep you away from your life. But as it turns out
with wearable computing, less is more. It’s where we as
developers and designers get to put the user
before the technology. It’s briefly there when you
need it, and not when you don’t. It gives you what you need,
and you’re back to your life. Now, this simple core
philosophy to wearable computing is what you see over
and over and over again in terms you hear
throughout Google I/O, such as microinteraction
and glanceable information. And it’s not new. It’s the idea that
success isn’t measured by how long the user is
engaged with the interface but how quickly we can
get them what they need and get them on their way. And there’s a website
that’s been around for 15 years with that
exact philosophy in mind. This kind of interaction
is in our DNA. Back at the time when the
search engine philosophy was keep the user on the
page as long as possible, Google went in the
opposite direction. And as it turns out, that’s
exactly what users wanted. It’s exactly what they needed. And that’s why it
made sense for us to recognize this
core philosophy in wearable computing as well. These devices are about
that kind of experience. Now, that’s not to
say that desktops, laptops are going away. They’re not. But those are often
tools for work, whereas these are
tools for life. That's what wearable
computing is. Now, let’s talk about
connecting it all together. Remember what I said? What if the benefits
of technology were available everywhere, no
matter what you were doing? And this idea is bigger
than just one or two devices that the user might have
on them at the time. It’s about all the technology
that they keep in their lives. And this is what’s meant by the
term “ubiquitous computing.” And it’s how we need
to start thinking. It’s the idea that a user
starts to use a service, and it’s available to them
everywhere they want it to be. Now, maybe it’s something
as simple as an alarm clock, but it rings in whatever
context the user is in right then. And maybe it's in their pocket
or on their wrist or eyewear or in the living
room or in their car. And if you watch
closely, this is where we as an
industry of wearables are going– connecting
all of these devices into a single, seamless user
experience across all of them. But how do we deal as
developers with the complexity across this
constellation of devices? Well, we do what any
good engineer would do, and we simplify the problem. We need two major
things– that’s it. Platform components
that make it easy, and a design pattern
that simplifies the whole thing,
or maybe patterns. Let’s start with the platform. Now, for Glass, you’ve
got the Mirror API, which allows you to
insert timeline items and have basic user interaction,
the GDK for full apps on the device with
access to the hardware, and advanced user interaction. On Android Wear, you
can create a single app with an APK that
runs on the phone and on the watch as
well as notifications going from the
phone to the watch, including some enhancements. Now, I told you I
had an announcement. Glass is going to support
Android Wear Notifications. [APPLAUSE] This includes support for
enhanced notifications in Stacks and Pages and Replies. And it’s going to be
available to users on a build in the
next few months. This is great news
for developers, because it lowers the barrier of
entry into the wearable space. You can build rich
notifications on the phone, and they’re delivered to
the user wherever they are, whether that’s the phone,
the watch, or Glass. And of course, it’s
great for these users. Notifications will start
showing up on the wearable. And as you may know,
to get this done, you don’t have to do anything. They just show up. Enhanced notifications will
also show on the wearable. And again, to get this done,
you don’t have to do anything. It just works. Now, Android Wear has
a few more features for these notifications
that require just a few more lines of code. And they work on Glass just as
they would work on the watch, with a couple differences. Here's Stacks on Glass. It's Bundles. Here's Pages. And of course, on Glass, you're going to have the Read More menu items to get to those pages. And of course, Replies. They work the way you'd expect.
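To make that concrete, here is a minimal sketch of what those few extra lines might look like on the phone, using the support library's NotificationCompat.WearableExtender. The ReminderNotifier class, the ReplyService target, and the icons are placeholder names for this example, not code shown in the session:

```java
import android.app.Notification;
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import android.support.v4.app.NotificationCompat;
import android.support.v4.app.NotificationManagerCompat;
import android.support.v4.app.RemoteInput;

public class ReminderNotifier {

    private static final int NOTIFICATION_ID = 1;
    // Notifications sharing a group are stacked on the watch and bundled on Glass.
    private static final String GROUP_REMINDERS = "group_reminders";

    public static void notifyReminder(Context context, String title, String details) {
        // Replies: a voice (or keyboard) reply action surfaced on the wearable.
        RemoteInput remoteInput = new RemoteInput.Builder("extra_reply")
                .setLabel("Reply")
                .build();
        PendingIntent replyIntent = PendingIntent.getService(context, 0,
                new Intent(context, ReplyService.class), // hypothetical service handling the reply
                PendingIntent.FLAG_UPDATE_CURRENT);
        NotificationCompat.Action replyAction = new NotificationCompat.Action.Builder(
                android.R.drawable.ic_menu_send, "Reply", replyIntent)
                .addRemoteInput(remoteInput)
                .build();

        // Pages: extra detail reached by swiping on the watch, or via Read More on Glass.
        Notification secondPage = new NotificationCompat.Builder(context)
                .setStyle(new NotificationCompat.BigTextStyle().bigText(details))
                .build();

        Notification notification = new NotificationCompat.Builder(context)
                .setSmallIcon(android.R.drawable.ic_popup_reminder)
                .setContentTitle(title)
                .setContentText(details)
                .setGroup(GROUP_REMINDERS)
                .extend(new NotificationCompat.WearableExtender()
                        .addPage(secondPage)
                        .addAction(replyAction))
                .build();

        // Post it on the phone; Android Wear delivers it to the watch (and, with the
        // announced support, to Glass) without any further work.
        NotificationManagerCompat.from(context).notify(NOTIFICATION_ID, notification);
    }
}
```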
So that's platform– Android Wear on Glass, bringing components together to make it easy to develop across all of these devices. Now let's talk about a design pattern that simplifies the whole thing. Let's go easy– Model
View Controller. I’m not kidding. This is great. Now, this is a simple
example of MVC. And you can use any other
design pattern here. But I’m going to show you how
to spread it across devices. And it’s a lot easier than
you might think it is. But before I get there, who
here is familiar with Model View Controller? Interesting. About 50%, evenly distributed. You don’t clump together. That’s nice. So this is the MVC
you know and love. Let’s throw the user
in there as well. For those of you that
didn’t raise your hands, this is how it works. The view is what shows
the user what’s going on. That’s what they see. And maybe there’s a button, and
the user touches that button. That triggers the controller. The controller
decides what to do. And often this means
mutating some amount of data in the model. Then as that model updates,
the view gets that message, and it updates in response,
so around in this circle. Now, the view controller
part of this, that’s what your device is. This is your watch,
or this is your Glass, or this is your phone. This is any device
in the user’s life. They’re going to have a view. We need to adapt
that per device. And I’ll talk more about that. But it’s got to be sticky. You’ve got to remember
this part– what you design, the UI you
present to the user, has got to make sense for
that piece of technology that they’re using. And the controller, this
will be on the device. And it also is sometimes
going to be spread across. This might be in the
cloud in something like App Engine,
where once they get a response from each device,
it can hold on to that context and deliver it back to
the user or to the view through the model. The model is the
required common layer across all of these devices. In fact, there’s
two common layers– it’s the model and the user,
and they’re platform agnostic. They can use a number
of different tools to accomplish the same task. That’s our goal here. And we can use this
model, and maybe the controller in
the cloud as well, to keep context across
all of those devices and deliver it to the right
device at the right time. Does that make sense? This is just the MVC
paradigm, except we’re stretching the
model and sometimes the controller across devices. So we’re building a client for
the phone and for the watch and for Glass. But then we have
this other component that’s there to
help all of them. Keeping that big
picture in mind is what we’re going to do
with an example– building all of these things on
top of Google technology. We’re going to make a reminder
app, because it’s simple. This is very simple. I didn’t spend a lot of
time designing these mocks, you'll see. But the idea is
just to think about, how do we put these
pieces together? Because this is the
first time we’re really doing it at this scale. And we’re going to do
it on these devices. We could, of course, add in the
living room or the user’s car, but we’ll keep it
simple for now. And these are our features. The user is going to be able to
set a reminder from anywhere. They’ll be able to receive a
reminder wherever they are. And it’s going to
be location aware, so they can set a place
reminder, not just a time. Now, I’m not going to go
into detail here, but let me just briefly go
over these items. App Engine and Cloud
Endpoints, that’s what we’re going to
use for our cloud. There are two components you
need– some sort of structure that you can keep a
controller in if you need to and some sort of
access to that model. That's Cloud Endpoints. I don't know if you've used them, but it's really easy to deploy Endpoints right away and be able to mutate that model from any device that the user might be on.
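As an illustration, a minimal Cloud Endpoints API for that shared reminder model might look something like the sketch below. The class and field names are hypothetical, and a real backend would persist to the Datastore rather than an in-memory list:

```java
import com.google.api.server.spi.config.Api;
import com.google.api.server.spi.config.ApiMethod;

import java.util.ArrayList;
import java.util.List;

/** Hypothetical Cloud Endpoints API exposing the shared reminder model on App Engine. */
@Api(name = "reminders", version = "v1")
public class ReminderEndpoint {

    // In-memory store to keep the sketch short; a real app would use the Datastore.
    private static final List<Reminder> STORE = new ArrayList<Reminder>();

    /** Called from the phone app, the watch app, or the Glass GDK app to add a reminder. */
    @ApiMethod(name = "insert", httpMethod = "POST")
    public Reminder insert(Reminder reminder) {
        STORE.add(reminder);
        // A real implementation would also send a GCM tickle so the phone re-syncs.
        return reminder;
    }

    /** Lets any device refresh its local copy of the model. */
    @ApiMethod(name = "list", httpMethod = "GET")
    public List<Reminder> list() {
        return STORE;
    }
}

/** Minimal model object shared by every device: a name, a time, and an optional place. */
class Reminder {
    public String name;
    public long triggerAtMillis;
    public Double latitude;   // null for time-only reminders
    public Double longitude;
}
```

Endpoints can generate client libraries for an API like this, so each Android APK calls the same insert and list methods against the same model.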
And then Android. We're going to use a lot of Android. We're going to use Android Wear. We're going to use the GDK. We're going to use Geofencing. And we're going to use GCM to
get information from the cloud onto the device. Now, here’s our diagram. I went for complex,
as you can see. Again, this is a simple example. But you can simplify this kind
of service across devices. We’ve got App Engine
up at the top, and then the user might
have any of those three devices in the middle. And of course, there at
the bottom, using them. This is pretty much
the same diagram that I showed you earlier. But App Engine is the model,
and then each of these devices has a view/controller pair. Let’s go through the feature
set, all three of them one by one, just
to see how they’re built, how these pieces
are put together. To set a reminder from
anywhere, we need first an app on the device to
take the user input. So we’ll start with the phone. And this is just going to
be a basic Android app. It’ll look a little
something like this. I took my inspiration from
the alarm clock, as you see. I can add a new
reminder with a name. I can add a time for
that to happen at. And of course, I save it,
and now it’s on the list. I forgot one of those items. Oh well. Now, there’s a number
of steps involved for that, because this
is a phone interface. I can easily browse. I can edit. I can do lots of things. But there’s a number
of steps involved in each one of those things. And that’s fine. It’s on our phone. But for the app on the
watch and on Glass, it’s got to be simpler. Remember, it’s a device that’s
microinteraction, glanceable. I should be able to set an
alarm on the go and be done. So for the watch, we’re
going to use a voice command. Just a simple one–
OK Google, remind me. And I speak what I
want to be reminded. And for Glass, OK Glass, remind me to bring my speaker notes. I did do that.
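On the Glass side, a GDK activity registered behind a voice trigger with an input prompt receives the transcribed phrase in its starting intent. A rough sketch, where AddReminderActivity and the ReminderApi helper are hypothetical names:

```java
import android.app.Activity;
import android.os.Bundle;
import android.speech.RecognizerIntent;

import java.util.ArrayList;

/** Hypothetical GDK activity behind "OK Glass, remind me ..." */
public class AddReminderActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // When the voice trigger declares an input prompt, Glass hands the spoken
        // phrase to the activity via RecognizerIntent.EXTRA_RESULTS.
        ArrayList<String> spoken =
                getIntent().getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);

        if (spoken != null && !spoken.isEmpty()) {
            // Hypothetical helper that posts the new reminder to the Cloud Endpoint.
            ReminderApi.insert(spoken.get(0));
        }
        finish();
    }
}
```

The voice trigger itself is declared in the manifest with the com.google.android.glass.action.VOICE_TRIGGER intent filter pointing at a voice_trigger XML resource.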
And finally, we need a common place to keep all of this data. And that's the App Engine
with Cloud Endpoints. So any time the GDK app on
Glass or the Android Wear app on the phone or on the
watch gets a new reminder, they’re going to hit
that Cloud Endpoint, and it’s going to
update our model. And then that model is going to
sync down to the phone via GCM so the phone always
has the latest alarms. You're going to see why, in a moment, it's the phone that's keeping the reminders.
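That sync step could look roughly like this on the phone, assuming the backend sends a lightweight "model changed" GCM message and a hypothetical SyncRemindersService re-queries the Cloud Endpoint:

```java
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

import com.google.android.gms.gcm.GoogleCloudMessaging;

/**
 * Hypothetical receiver for the backend's "reminders changed" GCM message,
 * registered in the manifest for com.google.android.c2dm.intent.RECEIVE.
 */
public class ReminderGcmReceiver extends BroadcastReceiver {

    @Override
    public void onReceive(Context context, Intent intent) {
        GoogleCloudMessaging gcm = GoogleCloudMessaging.getInstance(context);

        if (GoogleCloudMessaging.MESSAGE_TYPE_MESSAGE.equals(gcm.getMessageType(intent))) {
            // The payload only says the model changed; the phone re-queries the
            // Cloud Endpoint (e.g. the list() call sketched earlier) and then
            // reschedules its local alarms and geofences.
            context.startService(new Intent(context, SyncRemindersService.class)); // hypothetical
        }
    }
}
```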
To receive from anywhere, we're going to use Android Wear. This is the simplest part. We want to let the user
know of the reminder. We’re going to send a
notification on the phone. That’s why it has all
the reminders on it. And then it’s going to appear
on the watch or Glass, whatever the user has on them. You dismiss on any
one of those devices, and it dismisses everywhere. One last step. Just so you know that
you can add more pieces and build onto this
basic functionality is to make it location aware. So when the user
sets the reminder, we can give them an option
for a particular place. Like, remember to get the bacon
salad, because it was awesome, next time I’m here. When the user sets
that reminder, we’ll give them that option. And then the device is going
to pull GPS from somewhere. The watch will pull it from
the phone, and on Glass, there's a location service that is actually pulling it from the phone. So the phone is going
to keep the GPS. And then it’s also going
to hold the geofence. So when the user enters that geofence again, then again it will send out the reminder, and it will propagate to all the devices.
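Registering that geofence on the phone could look roughly like the following with the Google Play services location APIs. PlaceReminders and GeofenceService are hypothetical names, and the 100 meter radius is just an example value:

```java
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.location.Geofence;
import com.google.android.gms.location.LocationServices;

import java.util.Arrays;

/** Hypothetical helper that registers a place-based reminder as a geofence on the phone. */
public class PlaceReminders {

    public static void addPlaceReminder(Context context, GoogleApiClient apiClient,
                                        String reminderId, double lat, double lng) {
        Geofence fence = new Geofence.Builder()
                .setRequestId(reminderId)                 // ties the fence back to the reminder
                .setCircularRegion(lat, lng, 100f)        // 100 m around the saved place
                .setExpirationDuration(Geofence.NEVER_EXPIRE)
                .setTransitionTypes(Geofence.GEOFENCE_TRANSITION_ENTER)
                .build();

        // GeofenceService (hypothetical) posts the reminder notification on entry;
        // Android Wear then propagates it to the watch and to Glass.
        PendingIntent callback = PendingIntent.getService(context, 0,
                new Intent(context, GeofenceService.class),
                PendingIntent.FLAG_UPDATE_CURRENT);

        LocationServices.GeofencingApi.addGeofences(
                apiClient, Arrays.asList(fence), callback);
    }
}
```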
What I love about this example is that each of these steps seems like they're going to be
complex, but two slides later, it’s simple. It’s easy to do. We can set from anywhere
by just having an APK on each one of
these devices that can accept the
data from the user and send it up to the cloud. We can receive
that from anywhere by just sending a
notification on the phone and relying on Android Wear
to get that to the user. And it can be location aware
by just attaching some GPS when we hit the Cloud
Endpoint with that reminder. A few things to remember–
just a couple quick last notes. First off– this
is important, too. We just talked about
building something. We spent a lot of time thinking
about the design just now. We would then go
into development. We always kind of
hope that our time looks like this– lots
of time for development, little bit of time for
debugging at the end. You know as well
as I do, it usually actually looks like this. We need a lot of
time for debugging. But on wearable devices,
you have a new piece of that pie, which is
testing or UX iteration. Now, you’re probably doing
this a lot on your applications already. But if you’re not or you’re not
doing it much, for wearables, be prepared for this to be a
large portion of your time. And the reason is because
when you design the user interface or a service that
goes across all these devices, you have an idea of
how it will work. But you don’t know for
sure until you try it while you’re walking
across the street or on the weekend at your
niece’s birthday party. You can’t just test it
sitting down at your desk in the middle of the work week. And on the same note, or
a similar one at least, these devices are
fundamentally different in how the user uses them. That’s why you have to test. We should also
think ahead of time, know that you’re
going to have to build a different experience. You can’t just stamp out a
service from the phone on Glass or on a watch and just have it
work the way you want it to. You have to rethink
it a little bit. What is it about the watch or
the Glass that you couldn’t do before that now you
can that will come alive on these new devices? We did not take the launcher
and just put it on the watch, a bunch of icons. That wouldn’t make sense. The user can ask for things,
or they can be given things. This service should flow
from one device to the other, because it’s about the user’s
experience across devices, not just any one device. Because at the end
of the day, we’re building for people, not
for technology– people out living their dreams
or having adventures. And our job is to help them
have those adventures, only richer, more fun. We have some
documentation online. Hope you check it out. And of course, stay in
this room and stayed tuned to this channel
for the next session, designing for wearables. I talked a lot about a
rather simple example, just to describe how
the pieces fit together. They’re going to talk
more about the design of each one of those pieces to
take them to the next level. Thank you very much. [APPLAUSE] So, I do have some
time for questions. In case I answered them
all, which would be OK, too. But I believe there’s
microphones around here. There’s a microphone stand
there with a droopy microphone, but it’s on there. If you could go up there,
that way the people watching the live stream can
hear your question as well. That would be great. AUDIENCE: Hello. TIMOTHY JORDAN: Hello. AUDIENCE: If you have Google
Glass and Android Wear at the same time,
what would the notification behave like? So you'll get pinged on both of them, or on the Wear or on the Glass? How are we
expecting to do that? TIMOTHY JORDAN: Yeah,
it’s a great question. I think we are all just
starting to explore this area. So it’s a rather simple
action right now, is that if there’s a
notification on the phone, it gets sent to the
wearable devices as well. And the nice thing about
the wearable devices is it’s a rather
subtle notification that the user can easily
ignore– just a buzz on the wrist or a ding on Glass. And when a notification
is dismissed on any of those devices, it just
sort of goes away everywhere. AUDIENCE: OK. Thank you. AUDIENCE: Question. What about group notifications? Say you go out hiking
or biking on a group, and you want to notify the
group of a change of plans. And they’re all wearing
a watch, and some of them are ahead of the pack. Is there a group
notification functionality? TIMOTHY JORDAN: I’m a little
unclear what you’re asking. So you’re saying you have
multiple users, multiple phones and wear devices, and you want
to send a notification out to all of them. AUDIENCE: Right. And maybe they're
using the same app. TIMOTHY JORDAN: Yeah. So this is a great
example of using something like that distributed MVC
model, only you’re saving data for the group,
not just the user. So when you sync it back
down to the phone via GCM, you’re going to sync
data from that group, and then all the phones
will be able to push out the notification to
that particular user. AUDIENCE: And
everybody will get it. OK, thank you. AUDIENCE: Hi. I have a question more towards
the philosophy of wearables. We design apps, and we do
design a lot of apps for Android and things like that. But the problem is,
these apps I don’t see being used by people who
are physically disabled– or for example, a person who
walks in the road who is blind. I don’t think we are
teaching to that level still. Although we have come
a long way by building apps, by building the app
store, et cetera, now since we are just getting
started with wearables, how do we ensure that
we do build specifically for this set of people? How we as developers
can contribute, and how, from your side, you can
ensure that this thing happens and it reaches to people? TIMOTHY JORDAN: So I
think what you’re asking– and stay at the mic
just for a moment so I can be sure about this–
is it’s really a distribution question. Once we build a
service– and you’re saying in particular services
for disabled people– how do we reach that
audience, because you’re having a challenge
reaching that audience? And this isn’t
really a new problem. And I’m not an expert
on distribution, so I don’t know how much I’m
going to be able to help you. But I would use your
app stores, and I would evangelize with
those communities. AUDIENCE: Thank you. AUDIENCE: Thank you, Tim,
for the presentation. TIMOTHY JORDAN: You’re welcome. AUDIENCE: Question. When we use Android Wear, we
send notification to our user, right? And they will see
that notification, or they could see
the notification in any device they have. What if we want to
target a specific device? Like, we have an app in
which it doesn’t make sense to send a notification to the
phone, but only to the watch. And you don’t want to send
notification to the Glass as well. TIMOTHY JORDAN: Yeah. That’s a great question. So right now, Android Wear
sends the notification out to whatever wearable
that the user is using. So if they’re using
multiple wearables, it’ll go to all of them. The nice thing about
the notifications is that they’re
relatively easy to ignore, so you can kind of
just pay attention to where your attention is. However, there are ways where
you can have a notification just on a device
if that’s really important to your service and
you want to build it today. Even in this example,
you’ve got a GDK app, and you’ve got an APK
on the Android Wear app that’s on the watch. And both of those can
communicate to the phone and deliver
notifications to the user directly rather than using
the Android Wear Notification Service. In that case, you'd be able to choose which device you want to have a notification and which you don't.
AUDIENCE: Makes sense. Second question, very quickly. TIMOTHY JORDAN: Second question. Yeah, that's fine. AUDIENCE: Sorry? So for Glass now, we have initially the Mirror API, then we have the GDK, and now, happily, we're going to have Android Wear. So are the three platforms, or the three ways of developing apps, going to be available
for Glass in the future? Or should we expect
to just focus on Android Wear and
neglect the others? What’s going to be the preferred
way of developing for Glass? TIMOTHY JORDAN:
So, I will give you a general answer and
a specific answer– and a general answer that
applies to this question. Any other “what’s going to
happen in the future” question, as a policy, I don’t
comment on anything that might happen in the future. There’s some difficulty there. But what I can tell you is
that the engineering and design teams for both
Android Wear and Glass work very closely
together, and our goal is for all of this stuff to
work really well together. AUDIENCE: Thank you. AUDIENCE: What happens with
persistent notifications on Wear? Say, for instance,
you have a battery notifier in your
notification bar or some weather apps have
weather or what have you. TIMOTHY JORDAN: I don’t know
the answer to that offhand. There is a more deeply
technical session on Android Wear where you could
ask some specific questions like that tomorrow, Android Wear
From A Developer’s Perspective. Or if you ask me
online later, I’ll look up the answer for you. AUDIENCE: Thanks. AUDIENCE: Beyond
the basic interface, what kind of
opportunities will there be to fully customize
the screen for wearables? TIMOTHY JORDAN: That’s
a great question. So again, in this
example, it was simple. However, the GDK
app– and again, the APK on the Android
Wear app for the watch– are both code running
on the device. And both allow you
to do full screen UIs that are fully custom. AUDIENCE: The new
devices for Android Wear, they introduce new
possibilities of interaction. So do you have anything
regarding gesture recognition? So let’s say, recognizing
the user is looking at the watch or
something of that sort. Lifting the arm
and looking at it, or doing other types of
gestures with the hand. Is that something that– TIMOTHY JORDAN: So there
is some of that built in. And as you heard in
the keynote earlier, that most of the APIs on Android
are also available for the app that you build that
runs on the wearable. So there’s a lot of that
stuff that you can do already. But stay tuned for some
more examples where we can examine that
in more detail. AUDIENCE: So, sort
of expanding on that point, how do you see
gestures and voice recognition working together or comingling
in a unified experience? TIMOTHY JORDAN: That’s a
great question as well. It’s interesting that–
and you should definitely stay for the designing
section next. They’ll go into a little
bit more detail about this. Different situations work well
for each of those instances. Sometimes just being able to
say something really quick, like remind me to, makes sense. Trying to input that through
a gesture wouldn’t make sense. But then on the
other hand, you’ll notice when you
get a phone call, just swiping across the screen
is really the right thing to do for a watch. On Glass you can do it through
gesture or through voice. So in many ways, you’re
giving the user the option. And in other cases, there
is sort of a clear winner. There are gotchas in each case. So for example, if you’re
using head gestures and the user happens to
be walking somewhere, there’s certain head gestures
that work better than others. So for example, looking
up and then looking down works pretty good. Trying to scroll a list
or scroll horizontally would be more difficult. So those are the kinds of
things you need to think about. But all of those will
become apparent as soon as you test it in
different situations. AUDIENCE: And do you see
any implication or expansion to context-aware gestures? So you have
essentially a gesture set to your specific location
or in your environment so that those similar
movements mean different things in different
scenarios or spaces? TIMOTHY JORDAN: I think
that sounds very cool and that you should definitely
explore that with the platform. AUDIENCE: Come talk to me after. AUDIENCE: Hi, Timothy. Great talk. Very excited about Android
Wear coming to Glass. Question. Up to this point, the timeline
has sort of been a chronology. And the cards that are to
the left of the home screen have sort of been
up-and-coming things. So Android Wear
notifications, at least how they’ve been shown
on watches, are things that you see
once and then dismiss, which is a little different than
the current paradigm for Glass timeline cards. They don’t really get to dismiss
unless you actually go through and delete them. So I guess I’m wondering,
is this changing the purpose of the
timeline in a way, or how do you see that changing
or being the same in terms of the interaction
with these cards? TIMOTHY JORDAN: So
I’m going to give you that same general answer before. But again, two
more specific ones. General, I can’t
really talk about how this is going to
change in the future. The specific one
is the four guys that are going to be on the
stage in the next session, they’re designers from teams
of both Android Wear and Glass. They might have
some more insight. Another specific
answer I’ll give you is that when you do
get an Android Wear notification on
Glass, we’ll always include a Dismiss menu item. AUDIENCE: OK. Great, thanks. AUDIENCE: Hi, Timothy. I had a question. You were talking about
the kind of maintaining a similar experience
across different types of wearable devices. So with Glass and
upcoming with Wear, we’re working with both
gestures and voice control. Do you see in the future
with the new wearable devices those remaining the
predominant form of interaction with your device,
or do you see maybe in the future we might have
different ways of interacting or focusing around
the same thing? TIMOTHY JORDAN: Yeah, if I’m
doing the hand wavy, what do I think wearable
computing is all about, I think all those
things are important. I also think context
is really important– where the user is, what
they’re doing right now, what they’re doing next. And sometimes
almost as important as when they invoke
a voice command. AUDIENCE: All right. Thank you very much. TIMOTHY JORDAN: You’re welcome. AUDIENCE: After
Google Glass supports Android Wear notifications, is
Mirror API still supported? And do you think they conflict? Or is it still useful? TIMOTHY JORDAN: I don’t think
any of these things conflict. We’re one big happy family. And Mirror API is
certainly alive and well, and there’s Glassware
that I’m running today that uses it that
I think is great for it. For example, New York
Times uses the Mirror API. So does CNN. And just for sending a
timeline item to Glass, it’s really a
stellar experience. How this stuff
evolves over time, I can’t really
speak to the future. But we’re always going to want
to make this stuff work better and better with each other. AUDIENCE: Thank you very much. TIMOTHY JORDAN: Yeah. AUDIENCE: Hi, Tim. First of all, awesome session. TIMOTHY JORDAN: Thank you. AUDIENCE: There are two
questions, actually. The first one is the app
discovery in the wearable. So like right now
in any phone, we go and have the grids of apps. So would the app discovery
be more context aware, or like user can just still
go through maybe smaller grids or different stuff or
have discovery in the wearable? TIMOTHY JORDAN: You know,
that’s a really great question. You saw that slide
where I had the Launcher on the phone and the
Launcher on the Glass. That doesn’t make sense. And both the Glass UX team
and the Android Wear team went away from the Launcher,
like this post-grid philosophy, if you will. No grid icons. When the user wants
to do something, when they want to
start an action, they use a voice command,
or they select that command from the Voice
Command Touch menu. And they’re sort of given
contextually aware cards about information
that makes sense to them then and at that moment. So you’re exactly right that
the Launcher is going away. And the base
philosophy in there is, what if we could reduce
almost to nothing the time between
intent and action? A Launcher implies–
I take out something, I look through stuff,
and I pick one. Whereas when I say, OK
Glass, get directions to the Museum of
Modern Art, it’s done. AUDIENCE: Right. The reason of that question
was sometimes even on mobile, you have so many apps. So at one point, there
would be in Glassware too, the user would have too many
things that they think useful at times, but they
tend to forget, like, was it already there? So every time giving a single
command and browsing through, it’s kind of a pain. TIMOTHY JORDAN: And that’s of
course against that philosophy that I was talking
about earlier, which is what if we can
make all this simpler? What if we could give the user
all the value from technology but with none of the fuss? So I don’t think our goal is
to fill the watch with lots of different things
that they may someday want to do, but to give
them the experiences that they need to do today. AUDIENCE: Cool. Thanks. And the second question
is– for now, it’s more adding to the design
questions previous folks had– there are guidelines
for Android app design, like nav bar and six-pack
designs and stuff. Is Android going to come
up with a suggested design? And there are, of course,
anti-design patterns. So are there any design
patterns to look forward for the wearables? Or as of now, developers and
designer have to figure it out? TIMOTHY JORDAN: Absolutely. There are design guidelines
available for both Glass and Android Wear, and you’ll
find that they’re very similar. And they give you a
lot of information– in fact, a very clear
road map of where to go with your design. So I’d look for that
on both the developer documentation for each of those. In fact, let me
back one slide just so you can remember
what those URLs are. If you add a slash design, I
think, to either one of these– but I know for Glass– they’ll
take you to the design page. AUDIENCE: Awesome. Thanks a lot. TIMOTHY JORDAN: You’re welcome. AUDIENCE: So I could
imagine many apps wanting to implement wearable
notifications. Is there going to be a central
place that a user can subscribe and unsubscribe
to notifications, or is it on a per-app basis? And is there any way to
prioritize notifications so that they get the most
important ones first? TIMOTHY JORDAN: So
what will be available when you get your
Android Wear device up and running is that when
you have the companion app on the phone, you can
choose any application that you want to mute. Certainly a lot further
we could go with there. We’ll just have to see
what the future holds. AUDIENCE: Thank you. AUDIENCE: Hi. Based on your example
for the reminder stuff, if I don’t have
the phone with me, will it be able to create
an application that uses only the wearable device
that holds information in it and present or communicate
even with other devices than the phone? TIMOTHY JORDAN: That’s
a good question. For the Android
Wear notifications, that requires the
phone, because that’s where the notifications
are coming from. But if you have the
Android Wear app, which is an APK on the phone
and an APK on the watch, that would work on the watch
even in absence of the phone. And you’d want to build
your service in such a way that you’d still have that
capability if you wanted it. AUDIENCE: And connect the
watch with other devices other than the phone? TIMOTHY JORDAN: I don’t
know that offhand. You’ll have to stay
tuned for that. AUDIENCE: From a
user’s perspective, it seems like there’s a lot
of overlap and redundancy in functionality
between Glass devices and the Android Wear devices
that we’ve seen so far. Is the expectation that
the user is typically going to have one or the other,
or do we expect both of them to kind of work together
in a lot of cases? TIMOTHY JORDAN:
Gosh, I don’t know. I mean, we do a lot of building
these things for users, but then we get to
learn how they’re going to use them
when they actually get their hands on them. So we’re going to
have to find out. I think with
wearable technology, one of the things
that we start to learn is once you do put the
human in that picture, there’s a lot of personal
preferences involved. So we’re just going to have
to see how that plays out. AUDIENCE: Two quick questions. The first, you had
talked about apps on the wearable
communicating with the phone. And there is
apparently structures for that in Android Wear. There’s nothing like that
on Glass at the moment. Can we expect or
request that sort of data interchange platform? TIMOTHY JORDAN:
So there is a way for you to exchange
data over Bluetooth between Glass and the phone– AUDIENCE: But you have
to roll your own, basically. TIMOTHY JORDAN: Yeah. It’s not simple. As far as the future, what
we’d release in the future, of course I can’t
comment on that. AUDIENCE: Sure. Consider it a request, then. TIMOTHY JORDAN: But I can say
that I’ve heard this feedback, and I definitely
appreciate the feedback. Thank you. AUDIENCE: Second
very quick question. What happened to your hat? TIMOTHY JORDAN: I left it. AUDIENCE: I can't be
the only one wondering. Thanks. TIMOTHY JORDAN: Some
days are hat days, and some days are
hatless days, I suppose. AUDIENCE: Thank you. AUDIENCE: [INAUDIBLE]. TIMOTHY JORDAN: Thank you. AUDIENCE: Notifications with
media style allow you to play and pause things through
notifications like an app, right? TIMOTHY JORDAN: Uh-huh. AUDIENCE: I was
curious, is there any limitation within the API
or anything that stops you from building buttons
that do other things? TIMOTHY JORDAN: That’s
a great question. I don’t know the answer offhand. I’m sorry. What I can tell you is
that when you play or pause that notification on your phone,
it automatically just shows up that way on the Wear device. So if there is a way, then
that likely would as well. AUDIENCE: Thanks. AUDIENCE: Is there
a left eye version of the Google Glass for people
who are weak in the right eye? TIMOTHY JORDAN: And this is
the last question that we have. No. Glass is only available
on the right eye version. AUDIENCE: And there
is another question. I don’t want to carry my
phone around everywhere. TIMOTHY JORDAN: What’s that? AUDIENCE: I don’t want to carry
the phone, the Android phone everywhere. I just want to carry my
watch and probably the Glass. Is there a way things can
work with just these two? TIMOTHY JORDAN: Well,
both those devices still work in absence of a phone. What doesn’t work is
the notification service which is relaying from the
phone to those devices. So when the phone is not around,
it of course won’t do that. But again, remember that
on both these devices, you can put APKs on them that
can run code on that device and work in offline mode. So for example, with Glass,
one of my favorite Glassware is Word Lens. And it does translations
right in front of you of whatever you’re looking at. And that works without
an internet connection and without the
phone being present. AUDIENCE: Thank you. TIMOTHY JORDAN: You’re welcome. Thank you all so
much for coming. I really appreciate it.
