Leveraging Google Glass Capabilities

Giordano Pezzola

Master's Thesis in Computer Science Engineering

What it's all about

  • Multimedia Guide - provides an augmented cultural experience, offering a selection of audio/video content based on the user's position and what the user is currently looking at.
    21 Java classes - 2,843 lines of code
  • Glass Notifier - a system for notifications and analysis of messages coming from the on-site IT infrastructure.
    5 Java classes - 541 lines of code

The company I work for assigned me the task of developing two Google Glass applications in the context of a research and development project called ARCADIA 3.0.

target: improving the visiting experience of tourists exploring the Palatine Hill ("Colle Palatino"), and supporting site management, using Google Glass.

Research + Development + On-site Testing = 1604 hours (200 working days)

Multimedia Guide app

GLASS Notifier app

What's New

  • A positioning system based on image recognition.
  • A motion status detection algorithm specially crafted to work on Google Glass.
  • An unattended image acquisition system using the integrated Google Glass camera.

Positioning system based on image recognition

The user's position and what the user is looking at are determined with the SURF feature-detection algorithm from the OpenCV library.
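As a rough illustration only, the sketch below shows how SURF keypoint matching can be driven from the OpenCV 2.4-era Java bindings (SURF belongs to the nonfree module, and moved to the xfeatures2d contrib module in OpenCV 3). The class name, file paths and match-counting heuristic are assumptions for illustration, not the thesis implementation.

    import org.opencv.core.Mat;
    import org.opencv.core.MatOfDMatch;
    import org.opencv.core.MatOfKeyPoint;
    import org.opencv.features2d.DescriptorExtractor;
    import org.opencv.features2d.DescriptorMatcher;
    import org.opencv.features2d.FeatureDetector;
    import org.opencv.highgui.Highgui;

    // Counts SURF descriptor matches between a query photo and one
    // reference image; the reference with the most matches tells us
    // what the user is looking at.
    public class SurfMatcher {

        public static int countMatches(String queryPath, String referencePath) {
            Mat query = Highgui.imread(queryPath, Highgui.CV_LOAD_IMAGE_GRAYSCALE);
            Mat reference = Highgui.imread(referencePath, Highgui.CV_LOAD_IMAGE_GRAYSCALE);

            // SURF detector/extractor (requires the OpenCV nonfree module).
            FeatureDetector detector = FeatureDetector.create(FeatureDetector.SURF);
            DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.SURF);

            MatOfKeyPoint queryKeypoints = new MatOfKeyPoint();
            MatOfKeyPoint referenceKeypoints = new MatOfKeyPoint();
            Mat queryDescriptors = new Mat();
            Mat referenceDescriptors = new Mat();
            detector.detect(query, queryKeypoints);
            detector.detect(reference, referenceKeypoints);
            extractor.compute(query, queryKeypoints, queryDescriptors);
            extractor.compute(reference, referenceKeypoints, referenceDescriptors);

            // Match the two descriptor sets with a FLANN-based matcher.
            DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.FLANNBASED);
            MatOfDMatch matches = new MatOfDMatch();
            matcher.match(queryDescriptors, referenceDescriptors, matches);
            return matches.toArray().length;
        }
    }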

Motion status detection

[Accelerometer trace: 200 Hz sampling frequency over a 120-second observation window]

target: we have to avoid taking pictures while the user is walking or otherwise moving.

problem: no native support for step detection on Google Glass.

solution: development of an algorithm that monitors the data coming from the accelerometer sensor integrated into the device.

key point: an acceleration variation that persists for a good part of the temporal observation window strongly suggests a series of consecutive steps, so we can assume that the user is walking.

Motion status detection - Algorithm

  1. Storing the current acceleration along the Y axis reported by the accelerometer.
  2. Computing the absolute difference between the current value and the previous one.
  3. Computing the average of the acceleration variations over a moving temporal window.
  4. If the user was not walking and the average computed in the previous step exceeds a predefined threshold, then we can assume the user has started walking. Conversely, if the user was walking and the average falls below the threshold, then we can infer that the user has stopped.

Using a moving average lets us ignore noise caused by head movements and by individual steps.
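As a minimal sketch, the four steps above can be expressed as an Android SensorEventListener. The class name, window size and threshold below are illustrative assumptions, not the values used in the project.

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;

    // Moving-average walking detector following the four steps above.
    public class WalkingDetector implements SensorEventListener {

        // Assumed tuning values: ~2 s of samples at the 200 Hz
        // sampling frequency, and a hand-picked threshold.
        private static final int WINDOW_SIZE = 400;
        private static final float THRESHOLD = 0.8f; // m/s^2

        private final float[] window = new float[WINDOW_SIZE];
        private int index = 0;
        private int filled = 0;
        private float sum = 0f;
        private float previousY = Float.NaN;
        private boolean walking = false;

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) return;

            // Step 1: current acceleration along the Y axis.
            float y = event.values[1];
            if (Float.isNaN(previousY)) { previousY = y; return; }

            // Step 2: absolute difference with the previous sample.
            float delta = Math.abs(y - previousY);
            previousY = y;

            // Step 3: moving average over the temporal window.
            sum -= window[index];
            window[index] = delta;
            sum += delta;
            index = (index + 1) % WINDOW_SIZE;
            if (filled < WINDOW_SIZE) { filled++; return; }
            float average = sum / WINDOW_SIZE;

            // Step 4: threshold crossings toggle the motion status.
            if (!walking && average > THRESHOLD) {
                walking = true;   // the user started walking
            } else if (walking && average < THRESHOLD) {
                walking = false;  // the user stopped
            }
        }

        public boolean isWalking() { return walking; }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }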

Unattended picture taking

target: taking photos to be used in the image recognition process, without requiring user interaction and while keeping the screen off.

problem: Android allows taking pictures only if the user can see the preview, and the photo is saved only after the user grants permission.

solution: building camera management logic crafted for our purposes, designing and developing a dedicated local Android Service that implements a workaround to the imposed limitations.

The workaround consists of creating a dummy surface, a SurfaceTexture, to which the camera preview is sent instead of showing it on the display. With this dummy texture in place we can invoke the startPreview() method (which enables the camera preview); calling it is mandatory before takePicture(), the method that actually takes and stores the picture.
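A minimal sketch of this workaround, assuming the legacy android.hardware.Camera API available on Google Glass (API level 19); the class name and the simplified error handling are illustrative, and the real Service wraps more logic around these calls.

    import android.graphics.SurfaceTexture;
    import android.hardware.Camera;

    // Routes the camera preview to an off-screen SurfaceTexture so that
    // takePicture() can be called without anything appearing on screen.
    public class SilentPhotoTaker {

        public void takeSilentPicture(final Camera.PictureCallback jpegCallback) {
            final Camera camera = Camera.open();
            try {
                // Dummy off-screen surface; the texture name (10) is arbitrary.
                SurfaceTexture dummyTexture = new SurfaceTexture(10);
                camera.setPreviewTexture(dummyTexture);

                // startPreview() must run before takePicture().
                camera.startPreview();
                camera.takePicture(null, null, new Camera.PictureCallback() {
                    @Override
                    public void onPictureTaken(byte[] data, Camera cam) {
                        jpegCallback.onPictureTaken(data, cam);
                        cam.stopPreview();
                        cam.release(); // free the camera once the JPEG arrives
                    }
                });
            } catch (java.io.IOException e) {
                camera.release();
            }
        }
    }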

Unattended picture taking - workflow

  1. After a pre-set time interval, the PhotoManager starts the process.
  2. [Optional] The PhotoManager waits for a grant from the WalkingDetectorService.
  3. The PhotoManager asks the RotationVector to start collecting heading values.
  4. The PhotoService takes the picture and asks the RotationManager for the current heading value.
  5. The picture and the heading value are packed into a request and sent to the image recognition server by the RequestSender.
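To make the sequence concrete, here is a hypothetical wiring of these components. Only the component names come from the project; the interfaces and method names are assumptions for illustration (the heading source is represented here as RotationManager).

    // Only the component names come from the project; the interfaces and
    // method names below are invented for illustration.
    public class PhotoManagerSketch {

        interface WalkingDetectorService { boolean isWalking(); }
        interface RotationManager { void startCollecting(); float currentHeading(); }
        interface PhotoService { byte[] takePicture(); }
        interface RequestSender { void send(byte[] jpeg, float heading); }

        private final WalkingDetectorService walkingDetector;
        private final RotationManager rotationManager;
        private final PhotoService photoService;
        private final RequestSender requestSender;

        public PhotoManagerSketch(WalkingDetectorService w, RotationManager r,
                                  PhotoService p, RequestSender s) {
            walkingDetector = w;
            rotationManager = r;
            photoService = p;
            requestSender = s;
        }

        // Invoked periodically after the pre-set time interval (step 1).
        public void onTimerTick() {
            // Step 2 (optional): skip this cycle while the user is walking.
            if (walkingDetector.isWalking()) return;

            // Step 3: start collecting heading values.
            rotationManager.startCollecting();

            // Step 4: take the picture and read the current heading.
            byte[] jpeg = photoService.takePicture();
            float heading = rotationManager.currentHeading();

            // Step 5: pack and send to the image recognition server.
            requestSender.send(jpeg, heading);
        }
    }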

 

USER FEEDBACK

Both apps were demonstrated to Lazio Innova, the co-financer of the ARCADIA 3.0 project.

We had a one-day on-site demo session with the person responsible for the restoration of the Palatino.

Both apps were verified and approved by NSR, a third-party certification company.

We had a one-day on-site demo with the director of the "Sovrintendenza Capitolina ai Beni Culturali" (the Capitoline Superintendency for Cultural Heritage), who tested the apps and appreciated their innovative aspects and functionality.

Thank you for your time.

Leveraging Google Glass Capabilities

Q & A

NEXT GEN Glass

Google Inc. has retired the prototype version we have today and is currently working on a new version of Google Glass that will be mainly intended for business use.

The redesigned glasses have a larger glass prism, 5 GHz Wi-Fi and a faster Intel Atom processor, and are waterproof.

Main issues addressed

  • Issues related to the prototype nature of the device
    • Frequent updates of libraries and APIs
    • Unfamiliarity of new users with the device
  • Issues related to use within tourist sites
    • Poor connectivity / sites not always equipped (limited access or fragile sites)
    • The GPS signal degrades indoors or underground
    • Low luminosity makes image recognition difficult
  • Impossibility of directly porting an app designed for smartphones
    • The workflow of the application has to be totally redesigned to fit the small screen
    • Audio and video content takes priority over textual content
    • The Google Glass user interface is deeply different: a few simple gestures and voice commands are the only way of interacting with applications
    • Google Glass provides additional functionality compared to a smartphone, and this extra information has to be exploited
  • Problems related to the device's specific characteristics
    • Small screen size / low resolution
