25 October 2014

Flexible Surfaces Interaction - Group Plan & Literature Review


This is our new idea! After our initial brainstorming sessions and a deeper literature review, we have now chosen to focus on Flexible Surface Interaction, as we think this area is less heavily researched and we have a novel idea to bring forward.

Summary of Idea
  • Projecting onto a flexible surface (e.g. paper)
  • Projecting onto a paper cylinder and being able to rotate the cylinder
  • Gesture interaction with the surface (e.g. making the cylinder virtually spin)
  • Folding the paper to different sizes
  • [Any findings would be relevant to future devices that are flexible]


Use cases: dynamically sized web pages, viewing a globe, flight paths, viewing timelines/history over time (as you spin), geometry education on curved planes/cartography education


EQUIPMENT LIST
  • Kinect
  • Projectors
  • Piezoelectric sensors to stick on back of paper

Literature Review

  • Summary: the first paper is essentially what we want to do, but we can extend it with the cylinder and folding ideas listed above. Existing research either focuses on a device that changes shape itself (e.g. Bristol's Morphees), with no gesture interaction, or just presents paper mock-ups without anything that actually “works”.


Year + Title + Link
Summary/ Interesting Findings/ Implications for our project
[2013] Flexpad: Highly Flexible Bending Interactions for Projected Handheld Displays, LINK
Projects onto a flexible surface using a Kinect (depth camera) and a projector. The system detects and analyses deformations in the surface in real time, allowing it to project a correspondingly adjusted image, also in real time. The authors present an algorithm that captures complex deformations in high detail and in real time, which could be useful for our project.
[2013] Novel User Interaction Styles with Flexible/Rollable Screens, LINK
Gathers feedback on flexible screens and potential implementation technologies; there is no actual implementation.
The proposed device uses a rollable display - they do not use a projector.
It has multiple modes and changes its UI according to the mode it is in.
[2012] PyzoFlex: Printed Piezoelectric Pressure Sensing Foil, LINK
Uses piezoelectric pressure sensors in the form of a matrix to detect pressure and temperature. It differs from previous touch sensing based on capacitive and resistive surfaces because it can ‘easily’ detect pressure.
[2014] FlexSense: A Transparent Self-Sensing Deformable Surface, LINK
This paper builds on the earlier piezoelectric sensing research (above). Using machine learning, the authors are able to detect the shape of the deformed surface.
Applications described: transparent smart cover for tablets, external high-DoF (Degrees of Freedom) input device

24 October 2014

In the Literature: Flexible Surfaces & Displays

After further thought, my group have decided to explore flexible surfaces and displays. We did not feel that the wearable ring idea had enough scope to make a really good interactive device.

In recent years, there has been interest in the research community in exploring novel displays that can shape-shift. Our idea is to build a system that can project images onto a flexible material such as paper, even when it is bent. The aim, though, is not the display itself but to design new ways of interacting with these curved surfaces. For instance, if a user bends the paper into a cylinder, a map can be visualised on its surface, which the user can interact with by swiping or rotating the cylinder.

The rest of this blog post explores some of the literature in this field.

"Flexpad: Highly Flexible Bending Interactions for Projected Handheld Displays", J. Steimle et al. CHI 2013

In this paper, a flexible interactive system is built using a depth camera (Kinect) and a projector. The system detects and analyses deformations in the surface in real time, allowing it to project a correspondingly adjusted image, also in real time.

An algorithm is presented that captures complex deformations in high detail and in real time. This sort of algorithm could be used when we come to implement our interactive device. Our use cases differ from those described in the paper, so we would have to adapt the algorithm, but this paper provides a good starting point.
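As a rough illustration of the kind of real-time deformation sensing Flexpad performs, a single depth frame can be compared against a best-fit plane, with the residuals approximating where and how far the sheet is bent. This is a minimal sketch with an invented toy frame, not the paper's actual algorithm (which captures far more detailed deformations):

```python
import numpy as np

def deformation_map(depth, mask):
    """Estimate per-pixel deformation of a sheet from one depth frame.

    depth: 2D array of depth values from a depth camera (e.g. Kinect).
    mask:  boolean 2D array marking pixels that belong to the sheet.
    Fits a least-squares plane to the sheet pixels and returns the
    residual (signed distance from the plane), a crude proxy for bending.
    """
    ys, xs = np.nonzero(mask)
    zs = depth[ys, xs]
    # Least-squares plane z = a*x + b*y + c over the sheet pixels.
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    (a, b, c), *_ = np.linalg.lstsq(A, zs, rcond=None)
    residual = np.full(depth.shape, np.nan)
    residual[ys, xs] = zs - (a * xs + b * ys + c)
    return residual  # ~0 where flat, large where bent

# Toy frame: a flat 8x8 sheet with one corner lifted by 5 cm.
frame = np.zeros((8, 8))
frame[:2, :2] = 0.05
sheet = np.ones((8, 8), dtype=bool)
bend = deformation_map(frame, sheet)
```

A real pipeline would first segment the sheet out of the depth image, smooth the noisy Kinect depth values, and fit a richer deformation model than a single plane.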

"Novel user interaction styles with flexible/rollable screens", S. Nagaraju CHItaly 2013

This paper presents a concept for a device with a rollable display and novel interactions. The authors propose a display with multiple modes: full fold mode, fragment mode and stand mode. Using sensors built into the display (gyroscope, …), the device determines which mode it is currently in. Based on this, it determines which parts of the display the user can see (i.e. the exposed sides), and the UI is then adjusted to account for the new display mode.

The paper then goes on to describe new user interfaces that can exploit the new display modes. This is something my group can think about for our own interactive device - how should the interface change for each form factor?

Our own idea is different from that explained here because we are using a projector that projects onto paper whereas this uses a dedicated display device.
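To make the mode-switching idea concrete, here is a minimal sketch of how a device might classify its current display mode from sensor readings. The two inputs and all thresholds are invented for illustration and are not taken from the paper:

```python
def display_mode(roll_fraction, tilt_deg):
    """Classify a rollable screen's mode from two hypothetical sensor
    readings: how far the screen is unrolled (0..1) and its tilt angle
    in degrees. The thresholds below are illustrative only."""
    if roll_fraction < 0.25:
        return "full fold"   # mostly rolled up: show a minimal UI
    if tilt_deg > 60:
        return "stand"       # propped upright: media/viewing UI
    return "fragment"        # partially unrolled and flat: widget UI

# The UI layer would re-render whenever the returned mode changes.
modes = [display_mode(0.1, 0), display_mode(0.9, 80), display_mode(0.5, 10)]
```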

Other papers:

PyzoFlex: Printed Piezoelectric Pressure Sensing Foil [2012]
  1. Used piezoelectric sensors to detect pressure and temperature

FlexSense: A Transparent Self-Sensing Deformable Surface [2014]
  1. This paper builds on the piezoelectric sensing research above.
  2. The authors used machine learning to detect the shape of the deformed surface in real-time.
  3. Applications described in the paper include: transparent smart cover for tablets, external high-DoF (Degrees of Freedom) input device
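FlexSense's mapping from sensor readings to a surface shape can be sketched as a toy nearest-neighbour classifier. The four-sensor layout, labels and template readings below are invented; the real system uses a far richer learned model over many printed sensors:

```python
import numpy as np

# Invented templates: a vector of piezoelectric sensor readings per
# deformation label. A real system would learn these from training data.
templates = {
    "flat":        np.array([0.0, 0.0, 0.0, 0.0]),
    "corner-fold": np.array([0.9, 0.1, 0.0, 0.0]),
    "u-bend":      np.array([0.5, 0.5, 0.5, 0.5]),
}

def classify_deformation(reading):
    """Return the label whose template is closest to the reading."""
    return min(templates, key=lambda k: np.linalg.norm(reading - templates[k]))

label = classify_deformation(np.array([0.8, 0.2, 0.05, 0.0]))
```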

22 October 2014

Further Idea Discussions - Workshop 2

This week's workshop gave my group the opportunity to further discuss our ideas and also try out some hardware. Available were an Arduino kit, Intel Galileo, Kinect sensor, Leap Motion and Oculus Rift.

We discussed the initial ideas we brought into the workshop with the group. Following on from my previous blog post, we discussed customised virtual keyboards and smart pens. Between the brainstorming workshop and this week, my team and I reviewed the existing literature relating to both ideas. It was exciting to see how much research there has been into the areas we were exploring.

A paper we discussed is "Towards Keyboard Independent Touch Typing in VR" - Kuester et al. 2005. The paper describes a glove that uses Bluetooth and pressure sensors placed on the user's fingertips to detect letters (QWERTY layout). After further investigation, we found a commercial follow-up to the original research. A press article is available at this link: Kitty virtual keyboard solution.

Following this research, we decided that the idea we thought was novel had in fact already been done. We could not see any good way of improving on it further, so we decided to move on from this idea.

We fell into a similar situation with the smart pen idea. "Increasing Viscosity and Inertia Using a Robotically Controlled Pen Improves Handwriting in Children" - Hilla Ben-Pazi et al. 2010 describes a device that children hold to try to improve their handwriting. The device can increase the apparent inertia and viscosity of the pen to aid handwriting. "Teaching to Write Japanese Characters using a Haptic Interface" - Solis, J. 2002 also describes a device capable of interpreting actions and exerting a more intelligent force-feedback strategy. The system identifies the action the user is to perform and then emulates the presence of a human tutor, feeding forces back to the student.

The idea we finally settled on is a wearable ring that uses ultrasound (to be explored further) and other sensors to detect interactions, which can then be used as input to a device such as Google Glass. This allows discreet operation of another device.

19 October 2014

Group Post on 3 workshop ideas



Idea 1: Customisable virtual keyboard


Idea 2: Smart Pen



Idea 3: Ring based interaction



Customisable virtual keyboard

Year + Title + Link + Initial
Summary/ Interesting Findings/ Implications for our project
[2005] Towards Keyboard Independent Touch Typing in VR, LINK, updated version of product, DT
The KITTY glove uses Bluetooth and is similar to what we want to do: it removes the need to type on a specific pressure-sensitive surface. To detect letters (QWERTY layout) they use specific contact points. Presents the use case of virtual/augmented reality (e.g. typing while wearing an Oculus Rift).

The updated version is a little less intrusive, but we could potentially make finger socks instead of gloves, which could be less intrusive still. Users commented on haptic feedback; perhaps we could use vibrations/sound when a letter is registered? Unpractised users took 2-7 seconds per letter; perhaps we could improve on this with better sensors or parallelisation. Users also wanted a tutorial.
[2009] Fast Finger Tracking System for In-air Typing Interface, LINK, DT
Markerless tracking of finger movements from a camera, recognising typing in the air. To achieve typing in real time, they use hardware that parallelises image processing (throughput 138 fps).

(Ideally we’d want marker-based tracking so that the system could be used on a user's lap)
[2002] Designing a Universal Keyboard Using Chording Gloves, LINK, DT
A universal input device for both text (Korean) and Braille input was developed as a glove-type interface using all the joints of the four fingers and thumbs of both hands. Korean character entry showed performance comparable with mobile-phone keypads but inferior to a conventional keyboard. Letters are typed by touching certain finger combinations (not QWERTY).
PointGrab
This is not really a paper, but I wanted to investigate if someone had already tried to develop a system for home automation using gesture tracking. It turns out that there is a company called PointGrab that has been doing it for a few years. Here is the website: http://www.pointgrab.com/
Dextype
Dextype is a product that lets you type in the air. Because typing in the air is so inaccurate, Dextype helps by trying to work out which key you pressed. It also includes functions that make it very easy to complete words, draw symbols in the air and correct previously typed input. Here is the website: http://www.cnet.com/news/type-in-the-air-with-dextype-for-leap-motion/
Eye Gaze Tracking for Human Computer Interaction
This is a (very large) study of the advantages and disadvantages of using eye tracking as a pointing mechanism, as someone proposed last Tuesday. http://edoc.ub.uni-muenchen.de/11591/1/Drewes_Heiko.pdf
Hand Gesture Recognition Using Computer Vision, LINK, AM
This paper investigates the detection of hand gestures using computer vision. Recognition of one-handed sign language is then used to implement a method of typing (see sections five and six).
[CHI 2003] Typing in Thin Air: The Canesta Projection Keyboard – A New Method of Interaction with Electronic Devices, LINK, AM
This device was envisioned as a solution for typing with mobile devices, similar to our use cases. A keyboard is projected onto a surface, and the user types on the projected keyboard. Infrared light is projected in a plane slightly above the surface. The intersection of the fingers with the infrared plane is used to work out which key the user pressed, and an audible click is made. User studies show that users of this keyboard perform worse than they would with a standard mechanical keyboard but better than with ‘thumb keyboards’. However, this was before the smartphone revolution and the associated improvements in touchscreen keyboards, so this comparison may no longer be relevant.
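The key-lookup step of a projected keyboard can be sketched as a simple geometric mapping from the point where a finger breaks the infrared plane to a key cell. The key pitch and row stagger below are illustrative assumptions, not Canesta's actual values:

```python
def key_at(x_mm, y_mm, key_w=19.0, key_h=19.0,
           rows=("qwertyuiop", "asdfghjkl", "zxcvbnm")):
    """Map the (x, y) position where a finger crosses the infrared
    plane to a key on a projected QWERTY grid. A 19 mm key pitch and a
    half-key stagger per row are assumptions for illustration."""
    row = int(y_mm // key_h)
    if not 0 <= row < len(rows):
        return None                      # finger landed outside the keyboard
    stagger = row * key_w / 2            # each row shifts right by half a key
    col = int((x_mm - stagger) // key_w)
    if not 0 <= col < len(rows[row]):
        return None
    return rows[row][col]
```

A real device would also debounce crossings and emit the audible click the paper describes when a key registers.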
TiltType: Accelerometer-Supported Text Entry for Very Small Devices
TiltType is a novel text entry technique for mobile devices. To enter a character, the user tilts the device and presses one or more buttons. The character chosen depends on the button pressed, the direction of tilt, and the angle of tilt. TiltType consumes minimal power and requires little board space, making it appropriate for wristwatch-sized devices. But because controlled tilting of one's forearm is fatiguing, a wristwatch using this technique must be easily removable from its wriststrap. Applications include two-way paging, text entry for watch computers, web browsing, numeric entry for calculator watches, and existing applications for PDAs.
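TiltType's selection rule (which button is pressed plus the direction and angle of tilt) can be sketched as a lookup. The character layout and the angle bands here are invented for illustration, not taken from the paper:

```python
def tilttype_char(button, tilt_deg):
    """Pick a character from the pressed button and the signed tilt
    angle, in the spirit of TiltType. Both the layout and the five
    angle bands are invented stand-ins."""
    layout = {  # button -> characters for the five tilt bands below
        1: "abcde",
        2: "fghij",
    }
    bands = [(-90, -30), (-30, -5), (-5, 5), (5, 30), (30, 90)]
    for i, (lo, hi) in enumerate(bands):
        if lo <= tilt_deg < hi:
            return layout[button][i]
    return None  # tilt outside the usable range
```

The middle band acts as a neutral position, so a plain button press without tilting still selects a character.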
WalkType: using accelerometer data to accommodate situational impairments in mobile touch screen text entry
The lack of tactile feedback on touch screens makes typing difficult, a challenge exacerbated when situational impairments like walking vibration and divided attention arise in mobile settings. We introduce WalkType, an adaptive text entry system that leverages the mobile device's built-in tri-axis accelerometer to compensate for extraneous movement while walking. WalkType's classification model uses the displacement and acceleration of the device, and inference about the user's footsteps. Additionally, WalkType models finger-touch location and finger distance traveled on the screen, features that increase overall accuracy regardless of movement. The final model was built on typing data collected from 16 participants. In a study comparing WalkType to a control condition, WalkType reduced uncorrected errors by 45.2% and increased typing speed by 12.9% for walking participants.
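A much-simplified stand-in for WalkType's compensation idea is to shift the raw touch point against the device's instantaneous acceleration. The real system instead uses a classification model trained on 16 participants' typing data; the `gain` constant here is invented:

```python
def corrected_touch(x, y, accel_xyz, gain=2.0):
    """Shift a raw touch point (pixels) against the device's current
    acceleration (m/s^2), a toy stand-in for WalkType's learned model.
    `gain` converts acceleration to a pixel offset and is illustrative."""
    ax, ay, _ = accel_xyz  # ignore the gravity-dominated z axis
    return x - gain * ax, y - gain * ay

# A bounce to the right (+x) at touch time drags the finger right, so
# the intended target is assumed to lie left of the recorded touch.
x, y = corrected_touch(100.0, 200.0, (1.5, -0.5, 9.8))
```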

Smart Pen

Year + Title + Link + Initial
Summary/ Interesting Findings/ Implications for our project
[2010] Increasing Viscosity and Inertia Using a Robotically Controlled Pen Improves Handwriting in Children, LINK, DB
The paper aims to determine the effect of mechanical properties of the pen on quality of handwriting in children.
The child holds the device, which is used to try to improve handwriting.
“We predict that children may have either improved or worsened handwriting using the robot, but writing will not be slower.”
They use the robot to increase the apparent inertia and viscosity of the pen.
[2009] Poster: Teaching Letter Writing using a Programmable Haptic Device Interface for Children with Handwriting Difficulties, LINK, DB
The aim was to use a haptic device (a robotic arm) to improve the handwriting of children who had difficulty writing. This was done entirely in a virtual environment, so no ink was written to any paper.
The results showed that handwriting improved with the haptic device, and 3D force feedback showed an advantage over 2D force feedback, though further work is needed to confirm that 3D is superior.
[2002] A Robotic Teacher of Chinese Handwriting, LINK, DB
This paper again used a virtual environment but with a real haptic device to teach people handwriting - in this case Chinese.
[2002] Teaching to Write Japanese Characters using a Haptic Interface, LINK, DB
“The Reactive Robot technology is capable of interpreting the human actions and exerting a more intelligent force feedback strategy.”
Reactive robots are designed to emulate the presence of a human tutor, feeding forces back to the student.
The system not only reproduces the task, but should also be able to identify the action to be performed by the user.
This sounds similar to what we wanted to do but they only did it for a few letters of the Japanese alphabet. It was also still virtual.

Ring based interaction

Year + Title + Link + Initial
Summary/ Interesting Findings/ Implications for our project
An energy harvesting wearable ring platform for gesture input on surfaces

This paper presents a remote gesture input solution for interacting indirectly with user interfaces on mobile and wearable devices. The proposed solution uses a wearable ring platform worn on the user's index finger. The ring detects and interprets various gestures performed on any available surface, and wirelessly transmits the gestures to the remote device. The ring opportunistically harvests energy from an NFC-enabled phone for perpetual operation without explicit charging. They use a finger-tendon pressure-based solution to detect touch, and a lightweight audio-based solution for detecting finger motion on a surface. Two-level energy-efficient classification algorithms identify 23 unique gestures, including tapping, swipes, scrolling, and strokes for handwritten text entry. The classification algorithms have an average accuracy of 73% with no explicit user training. The implementation supports 10 hours of interaction on a surface at a 2 Hz gesture frequency. The prototype, built with off-the-shelf components, has a size similar to a large ring.
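The paper's two-level, energy-efficient classification can be sketched as a cheap gating check followed by a costlier second stage. The thresholds and the toy audio rule below are invented stand-ins for the learned classifiers:

```python
def classify_gesture(pressure, audio_energy, touch_thresh=0.2):
    """Two-level classification in the spirit of the ring paper: a
    cheap finger-tendon pressure check gates the costlier audio-based
    gesture stage, which only runs while the finger touches a surface.
    Both thresholds and the swipe/tap rule are illustrative only."""
    if pressure < touch_thresh:
        return "idle"        # level 1: no touch, skip audio processing
    # level 2: toy rule standing in for the learned audio classifier
    return "swipe" if audio_energy > 0.5 else "tap"
```

Gating this way saves energy because the expensive stage is skipped entirely whenever the ring is not touching a surface.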
Plex, LINK
A finger-worn textile sensor for eyes-free mobile interaction during daily activities. Although existing products like data gloves possess multiple sensing capabilities, they are not designed for environments where body and finger motion are dynamic. An interaction with the fingers usually couples bending and pressing. In Plex, bending and pressing are separated by placing each sensing element on a discrete face of a finger. The proposed simple, low-cost fabrication process using conductive elastomers and threads transforms an elastic fabric into a finger-worn interaction tool. Plex preserves natural inter-finger tactile feedback and proprioception. The authors also explore the interaction design and implement applications allowing users to interact with existing mobile and wearable devices using Plex.
More than touch: understanding how people use skin as an input surface for mobile computing
This paper contributes results from an empirical study of on-skin input, an emerging technique for controlling mobile devices.

Look at section “On-Skin Sensors”
The Sound of Touch: On-body Touch and Gesture Sensing Based on Transdermal Ultrasound Propagation
Can detect the pressure/distance of a touch on the skin. Requires two ultrasound transducers: one transmits, one receives. The paper focuses on the forearm but notes: “our signal propagation experiments suggest that the sensing method could be extended to various body parts”.

Others:
Hand-writing Rehabilitation in the Haptic Virtual Environment - LINK
Motor Skill Training Assistance Using Haptic Attributes - LINK
Using haptics to improve motor skills but not specific to handwriting.
Human Motion Prediction in a Human-Robot Joint Task - LINK
Optimal Kinematic Design of a Haptic Pen - LINK