We’ve all seen the Glass demonstration videos from Google, but the app-driven ecosystem we live in today demands a little more than Google juice flowing through the headset. During a recent SXSW presentation, Google took to the stage to show off early app partners for the launch of Glass.
The core concept behind Glass is delivering short bursts of information. Similar to how notifications are delivered in Android, Glass can only display small blocks of text or icons on its tiny projector screen. This design also ensures that there’s never enough information on the screen to pull your attention away from what’s in front of you, like the road. Google has made it clear in the past that the bulk of Glass apps will be based on RESTful services, so almost everything you do with those apps will require a data connection. Google’s presentation gives a deeper look at what you can expect from the experience in general.
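To make that concrete, here’s a rough sketch of what pushing a card of text into a wearer’s timeline might look like over that RESTful interface. It follows the pattern of Google’s Mirror API for Glass; the access token and card text are placeholders, and the details could shift before launch.

```python
import requests

# Hypothetical sketch: pushing a short "card" of text to a user's Glass
# timeline over plain HTTPS, in the RESTful style Google has described.
# The endpoint follows the Mirror API pattern; ACCESS_TOKEN stands in for
# an OAuth 2.0 token the wearer has granted your service.
ACCESS_TOKEN = "ya29.EXAMPLE_TOKEN"

card = {
    # Keep it short: Glass only renders small blocks of text per card.
    "text": "Your train leaves in 10 minutes."
}

resp = requests.post(
    "https://www.googleapis.com/mirror/v1/timeline",
    json=card,
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
)
resp.raise_for_status()
print("Inserted timeline item:", resp.json().get("id"))
```

Note that nothing here runs on the headset itself: the service lives in the cloud and Glass simply syncs the resulting cards, which is why a data connection is a hard requirement.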
Photos and video are an integral part of Google Glass. Using your voice, or just a touch on the side of the headset, you can capture the world around you from the perspective of your eye. Once the photo is taken, you can share it through services like Skitch or Path. Instead of forcing you to do so the moment the photo is taken, the option is added to your timeline. These apps are stripped-down, super-simple versions of those used on smartphones and tablets today. You can share to existing groups, or in the case of Path, you can interact with shared photos through Path emoticons that now take up the whole screen.
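For a service like Path or Skitch to show up as a sharing option on that timeline in the first place, it has to register itself with Glass as a share target. Here’s a speculative sketch based on the contacts (share target) resource in the Mirror API; the service name, icon URL, and token are all placeholders.

```python
import requests

# Hypothetical sketch: registering a service (a stand-in for something
# like Path or Skitch) as a share target, so it appears as an option when
# the wearer shares a photo. Field names follow the Mirror API's contacts
# resource; the id, display name, and URLs are invented for illustration.
ACCESS_TOKEN = "ya29.EXAMPLE_TOKEN"

contact = {
    "id": "example-photo-service",
    "displayName": "Example Photo Service",
    "imageUrls": ["https://example.com/icon.png"],
    "acceptTypes": ["image/jpeg"],  # only offer this target for photos
}

resp = requests.post(
    "https://www.googleapis.com/mirror/v1/contacts",
    json=contact,
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
)
resp.raise_for_status()
```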
When text messages and emails come in to Glass, a brief pop-up with the information appears on the headset. You can have the text read to you by Glass, or linger for a moment while you scan over it. When you want to reply, you can use your voice to populate the fields. This same setup can be applied to any service that displays text on the headset. For example, when demonstrating The New York Times app for Glass, the user is shown the headline of an article with its featured image in the background. When you find an article you like, you can tap and have it read to you. Sharing options from here are similar to Path or Skitch, with simple, minimal feature sets.
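Behaviors like “read it to me” and “reply by voice” map onto built-in menu actions that the Mirror API can attach to a timeline item. Here’s a hedged sketch of what a headline card like the Times example might look like; the headline, summary text, and token are made up for illustration.

```python
import requests

ACCESS_TOKEN = "ya29.EXAMPLE_TOKEN"

# Hypothetical sketch: a headline card the wearer can tap to have read
# aloud, or reply to by voice. READ_ALOUD and REPLY are built-in menu
# actions on Mirror API timeline items; the content here is invented.
article_card = {
    "html": "<article><h1>Example Headline</h1></article>",
    # speakableText is what Glass reads when the user picks READ_ALOUD.
    "speakableText": "Example headline, followed by the article summary.",
    "menuItems": [
        {"action": "READ_ALOUD"},
        {"action": "REPLY"},
    ],
}

resp = requests.post(
    "https://www.googleapis.com/mirror/v1/timeline",
    json=article_card,
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
)
resp.raise_for_status()
```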
In their presentation for developers, Google made it clear that this is how they expect all apps to function. The focus is on a distraction-free setup where information is presented to the user as a timeline, ordered by when events occurred. If you push too many notifications or otherwise pollute the user’s timeline, your service will likely be uninstalled quickly, so it will be interesting to see how many developers use, or abuse, Glass for their apps.