19 December 2014

Projectagami CHI Work In Progress Paper


This week we worked on our CHI paper using the CHI 2015 Works in Progress template. We focused on the wider use cases of our interactive device and the implications of such a device in the future.

Below is the first page of our final paper.



References:

[1] Bekoe, D., Tan, D., Mooney, A., Kumorek, M., Amaya Garcia, A. "Projectagami: A Foldable Mobile Device With Shape Specific Applications", December 2014.





12 December 2014

Projectagami Final Literature Review


After the presentation of our device to the HCI group, we began to work on our CHI work in progress paper.

To do this, we felt it best to do a more extensive literature review. The main areas we focused on were 3D structures, shape-shifting interaction and self-contained devices.


Each entry below gives the title, authors, year of publication and link; the area of research (e.g. 3D structures, self-contained device); and a short summary of how the work is useful to Projectagami.
Estimation of Folding Operation Using Silhouette of Origami
Yasuhiro Kinoshita, Toyohide Watanabe, 2010
http://www.iaeng.org/IJCS/issues_v37/issue_2/IJCS_37_2_08.pdf
Area: Detecting origami folds
Summary: This paper investigates automated estimation of new folds in origami using ‘before’ and ‘after’ camera images. It uses a silhouette model, which is the outline of the paper from one viewpoint. When the user of the system makes a new fold, the system compares the new silhouette against a range of possible silhouettes that were reachable from the previous silhouette, and uses that to determine the likely location and type of the fold made. The paper does not investigate the use cases for this detection as part of an input device.
FoldMe: Interacting with Double-sided Foldable Displays
Mohammadreza Khalilbeigi, Roman Lissermann, Wolfgang Kleine, Jürgen Steimle, 2012
https://embodied.mpi-inf.mpg.de/files/2012/11/p33-kahlilbeigi.pdf
Area: Investigates possibilities afforded by folding displays
Summary: Khalilbeigi et al. explore a space very similar to ours, but limit themselves to low-fidelity horizontal and vertical folds of a display. They come up with a variety of interaction principles that can be applied to designing applications for a foldable display. However, they do not explore the possibilities of non-quadrilateral or multi-dimensional shapes, nor the possibilities of tangible interaction which arise when you can make more complex, high-fidelity, origami-like folds in the display.
Novel User Interaction Styles with Flexible / Rollable Screens
Samudrala Nagaraju, 2013
http://dl.acm.org/citation.cfm?id=2499152
Area: Self-contained device: flexible, rollable screens
Summary: This paper investigates the possible technologies that could be used to implement flexible screens. The authors look at sensors such as gyroscopes and accelerometers. The product they describe can only be rolled into different configurations, whereas the aim of Projectagami is to be able to fold the device, just like origami. Similar to Projectagami, this paper by Samsung Research also looks at how different device configurations can be used as a tangible interface.
Tessella: interactive origami light
Billy Cheng, Maxine Kim, Henry Lin, Sarah Fung, Zac Bush, and Jinsil Hwaryoung, 2012
http://dl.acm.org/citation.cfm?id=2148131.2148200
Area: Shape-shifting interactions
Summary: The paper demonstrates an interactive light made from origami that transforms shape. It has interesting parallels to Projectagami, as there is a discussion about how the form of the device implies its function. For example, our book use-case suggests that you have to open the device to see what is inside the book.
Flexible flat panel displays
http://books.google.co.uk/books?id=1VRuoq7h2FcC&lpg=PR5&ots=dSWCLrmlKD&dq=bending%20display&lr&pg=PA41#v=onepage&q=bending%20display&f=false
Area: Technology enabling future devices
Summary: A book covering research into technologies that could enable us to build flexible LCD displays.

MEMS-Controlled Paper-Like Transmissive Flexible Display
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5406063&queryText%3Dflexible+display
Area: Technology enabling future devices
Summary: A novel microelectromechanical-systems (MEMS)-controlled paper-like transmissive flexible display device was modelled by a combination of a cantilever with a flat plate and was realised by a roll-to-roll printing process for the first time. The model provides predictions as well as improvement suggestions for both the mechanical and electrical designs.
Tri-Foldable AMOLED displays
http://onlinelibrary.wiley.com/doi/10.1002/j.2168-0159.2014.tb00087.x/pdf
Area: Foldable display
Summary: A folding-screen type OLED display was developed to demonstrate the application of a flexible display. The display surface can be bent with a curvature radius of 4 mm. To protect the OLED against moisture, inorganic passivation layers are provided on the upper and lower sides of the flexible display. Using the authors' transfer technology, dense passivation layers can be obtained; the measured water vapour transmission rate of the layer is 7 × 10⁻⁶ g/m²/day or less, which improves OLED reliability.

An enhanced user interface design with Auto-Adjusting Icon Placement on foldable devices
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6974178
Area: Research in the area of HCI and flexible displays
Summary: Flexible electronics appear in the consumer, medical and military sectors. Thanks to the development of flexible electronics, flexible touchscreens have been widely adopted in various devices, such as mobile phones, wearable devices and hand-held tablets. (..)

On flexible touchscreens, when the display is folded, some of the touch area around the fold line is not touchable during the user's operation, which is a critical research problem for flexible touchscreens. However, to the authors' knowledge, little or no user interface research has solved this problem. To resolve it, the study designs a novel user interface, called Auto-Adjusting Placement, which can dynamically adjust objects such as icons, text and pictures on the flexible touchscreen to avoid the area around the fold line and keep the objects highly available and readable.

8 December 2014

Projectagami Demo Preparation




Today, we finalised our demonstration in preparation for our demo tomorrow! We use the Wizard of Oz technique to demonstrate our proof of concept of a self-contained device that can be folded with high fidelity.

Looking forward to presenting this tomorrow!

30 November 2014

Week 4 of The Build: Part 2 - Use-Cases


Use-cases:

Our interactive device is all about how we will interact with future devices that can transform into different shapes. Therefore, we will explore novel ways of interacting with such a device.

The future device we have in mind is a mobile phone that has apps. Each app can map to a different physical configuration of the device.

The first use-case we consider is a book. When the device is folded into a closed configuration, the book is closed and the book cover is shown. Opening the device opens the book, allowing the user to read it. More contextual information can be shown by touching a word of interest in the book. In our example, we show the Wikipedia information for the term 'Pear'.




The second use-case we consider is a shopping app that launches when the device is folded into a shop shape [1]. The app will then allow the user to navigate through the products in the store, add items to their basket and pay.







The third use-case is a game. The game we chose to demonstrate the idea is Hungry Hippos, a 4-player game where each player does their best to get the most points. The device configuration we show below matches what the actual game looks like, so we are using the flexibility of our device to mimic the real world.





The fourth use-case we consider is when multiple apps map to the same device configuration. The examples we use are a web browser and a maps application, which will typically both be in a rectangular shape. Therefore, when the device is in this configuration, the user is given a choice of whether to launch web browsing or maps. In both applications, the content is adjusted to fit the size of the display. This would be similar to Paddle [2,3].
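To make the mapping concrete, here is a purely illustrative C++ sketch of how a shape-to-app lookup with ambiguity could work; the Shape values and app names are hypothetical, not our actual implementation.

// Illustrative shape-to-app mapping: one detected configuration can map to
// several apps, in which case the user is asked to choose.
#include <iostream>
#include <map>
#include <string>
#include <vector>

enum class Shape { Book, House, Hippos, Rectangle };

std::vector<std::string> appsFor(Shape s) {
    static const std::multimap<Shape, std::string> mapping = {
        {Shape::Book,      "Reader"},
        {Shape::House,     "Shopping"},
        {Shape::Hippos,    "Hungry Hippos"},
        {Shape::Rectangle, "Web Browser"},
        {Shape::Rectangle, "Maps"},          // two apps share one configuration
    };
    std::vector<std::string> apps;
    auto range = mapping.equal_range(s);
    for (auto it = range.first; it != range.second; ++it) apps.push_back(it->second);
    return apps;
}

int main() {
    // More than one candidate means the user is asked to choose.
    for (const auto& app : appsFor(Shape::Rectangle)) std::cout << app << "\n";
    return 0;
}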







[1] "Origami House", Origami Instructions, http://www.origami-instructions.com/origami-house.html
[2] Raf Ramakers, Johannes Schöning, Kris Luyten. "Paddle: Highly Deformable Mobile Devices with Physical Controls". In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14).
[3] Raf Ramakers, "Paddle: Highly Deformable Mobile Devices with Physical Controls ", https://www.youtube.com/watch?v=zLe52PFZrtc

Week 4 of The Build: Part 1 - Paper Tracking & Projection

This week, we made big steps in tracking and projecting onto paper. We also started developing the use-cases that we will demonstrate. I'll start with paper tracking.


Paper tracking & projection:

After calibration, we use the Kinect and OpenCV to detect the paper. The contour functions of OpenCV are good for detecting the paper: a contour is a curve joining all the continuous points along the boundary of an object that have the same colour or intensity [1]. Contour detection works best when there is a strong contrast between the object you want to detect and the background, so we bought a black cloth to put behind the white paper. When a threshold [2] is applied, there is a clear distinction between the paper and the background.
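As a rough illustration of this step (not our actual code), the following OpenCV sketch thresholds a captured frame and keeps the largest external contour as the paper; the file names and the threshold value of 128 are placeholder assumptions.

// Isolate the paper outline: white paper on a black cloth gives high contrast,
// so a fixed threshold cleanly separates the two.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat frame = cv::imread("kinect_colour_frame.png");    // placeholder input image
    if (frame.empty()) return 1;

    cv::Mat grey, binary;
    cv::cvtColor(frame, grey, cv::COLOR_BGR2GRAY);
    cv::threshold(grey, binary, 128, 255, cv::THRESH_BINARY);  // paper -> white, cloth -> black

    // Extract only the outer contours; take the largest one as the paper.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return 1;

    size_t paperIdx = 0;
    for (size_t i = 1; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) > cv::contourArea(contours[paperIdx]))
            paperIdx = i;

    cv::drawContours(frame, contours, static_cast<int>(paperIdx), cv::Scalar(0, 255, 0), 2);
    cv::imwrite("paper_contour.png", frame);
    return 0;
}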

A detected contour can contain a large number of points, so we apply a contour approximation method to remove the redundant ones. With the remaining contour points and the Kinect depth information, we get coordinates for the paper in world space. A set of transformations using the view matrix and projection matrix then lets the projector project onto the paper.
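We won't reproduce our actual pipeline here, but a sketch of those two steps could look like the following; the epsilon value is a guess, the depth look-up that lifts corners into world space is omitted, and the projector parameters are assumed to come from calibration.

#include <opencv2/opencv.hpp>
#include <vector>

// Step 1: simplify the raw paper contour down to a handful of corner points.
std::vector<cv::Point> simplifyContour(const std::vector<cv::Point>& contour) {
    std::vector<cv::Point> approx;
    double epsilon = 0.02 * cv::arcLength(contour, true);  // tolerance proportional to length (a guess)
    cv::approxPolyDP(contour, approx, epsilon, true);
    return approx;
}

// Step 2: given the corners lifted into world space via the Kinect depth map,
// map them into projector pixel coordinates using the projector's intrinsic
// ("projection") and extrinsic ("view") parameters obtained from calibration.
std::vector<cv::Point2f> toProjectorPixels(const std::vector<cv::Point3f>& worldCorners,
                                           const cv::Mat& projIntrinsics,
                                           const cv::Mat& rvec, const cv::Mat& tvec) {
    std::vector<cv::Point2f> pixels;
    cv::projectPoints(worldCorners, rvec, tvec, projIntrinsics, cv::Mat(), pixels);
    return pixels;
}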


Below is a video demonstrating that we are able to track the paper and project onto it. To develop this further, we need to handle rotation: currently, the system does not handle the paper being rotated well.



[1] "Contours : Getting Started", OpenCV 3.0.0 Documentation, http://docs.opencv.org/trunk/doc/py_tutorials/py_imgproc/py_contours/py_contours_begin/py_contours_begin.html
[2] "Basic Thresholding Operations", OpenCV 2.4.9.0 Documentation, http://docs.opencv.org/doc/tutorials/imgproc/threshold/threshold.html

21 November 2014

Week 3 of The Build

Although we seemed to make good progress last week, most of our effort this week went into reimplementing a lot of what we had already done and thought we had completed.


We completed calibration of the Kinect and projector last week, but it did not incorporate the Kinect's depth sensor values, which were important to include. Our final calibration method is based on the one used by some of the team in the BIG lab.


Above is our calibration in action. A checkerboard is projected and the Kinect recognises the checkerboard pattern. The difference this time is that we take six recordings for calibration instead of the single one we used last week, each with the checkerboard at a slightly different angle and orientation (shown below). This allows the calibration program to calculate the camera's intrinsic and extrinsic parameters [1]. A projection matrix and view matrix are then calculated, and these are what we feed into our Ogre application.
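For anyone curious what this step looks like in code, below is a minimal OpenCV sketch of the multi-view calibration; the board size, square size and file names are assumptions, and our real pipeline also folds in the depth sensor values.

#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
#include <vector>

int main() {
    const cv::Size boardSize(9, 6);    // inner corners of the checkerboard (assumed)
    const float squareSize = 0.025f;   // square size in metres (assumed)

    // Corner positions of the checkerboard in its own plane (z = 0).
    std::vector<cv::Point3f> boardPoints;
    for (int y = 0; y < boardSize.height; ++y)
        for (int x = 0; x < boardSize.width; ++x)
            boardPoints.push_back(cv::Point3f(x * squareSize, y * squareSize, 0.f));

    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    cv::Size imageSize;

    // One image per recording, each with the checkerboard at a different pose.
    for (int i = 0; i < 6; ++i) {
        cv::Mat img = cv::imread("recording_" + std::to_string(i) + ".png", cv::IMREAD_GRAYSCALE);
        std::vector<cv::Point2f> corners;
        if (img.empty() || !cv::findChessboardCorners(img, boardSize, corners)) continue;
        imageSize = img.size();
        objectPoints.push_back(boardPoints);
        imagePoints.push_back(corners);
    }
    if (imagePoints.empty()) return 1;

    // Solve for the intrinsic parameters and one extrinsic pose per view.
    cv::Mat cameraMatrix, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                     cameraMatrix, distCoeffs, rvecs, tvecs);

    // We assume the "error value" discussed below is this RMS reprojection error.
    std::cout << "RMS reprojection error: " << rms << std::endl;
    return 0;
}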


To improve calibration further, we turned the projector and Kinect the correct way up on our rig. This helped us get a lower error for our calibrations. From talking to BIG researchers, we found that an error value of 2 or less is suitable. Our new rig configuration gives us error values in this range; previously, our error was about 5, which was far too high.


References:

[1] Sagawa, R.; Yagi, Y., "Accurate calibration of intrinsic camera parameters by observing parallel light pairs," Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on , vol., no., pp.1390,1397, 19-23 May 2008
doi: 10.1109/ROBOT.2008.4543397
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4543397&isnumber=4543169

20 November 2014

Pressages: Augmenting Phone Calls With Non-Verbal Messages



This week, we had a lecture from Eve Hoggan, co-author of "Pressages: Augmenting Phone Calls With Non-Verbal Messages" [1]. She discussed her research on ForcePhone, which allows users to send non-verbal messages by squeezing their phone. Vibration technology is mostly used purely for feedback; Eve and her team propose making vibrations part of the conversation itself.

A user study was conducted that showed how couples in long-distance relationships could use Pressages to augment their communication. The conclusion was that augmenting communication in this way has value in expressing greetings, presence and emotions.

References:

[1] "Eve Hoggan, Craig Stewart, Laura Haverinen, Giulio Jacucci and Vuokko Lantz",Pressages: Augmenting Phone Calls With Non-Verbal Messages,"UIST '12, October 7-10 2012", ACM 2012

15 November 2014

Week 2 of The Build


This week we focused on building a rig to hold the Kinect and projector and continued to work on tracking the paper.

When building our rig, we knew it had to support the combined weight of the Kinect and projector. In addition, the Kinect and the projector had to be aligned to make tracking easier. Initially, we felt that the Kinect and projector had to be perpendicular to the surface, and our first rig placed them that way (no picture available - we forgot to take one). However, that rig was not secure enough for our needs. Through talking to lab assistants and doing our own research, we found that we can have them at an angle and still be able to do the tracking and projection. With these constraints, we built the rig shown in the image above. As you can see, we use a camera tripod to hold the devices, which allows us to change both the angle of projection and the height.

Once we have completed tracking of the paper with the Kinect, we plan to use Ogre, an open source 3D graphics engine, to project onto the paper. To do this, we will model the paper in a 3D scene and place a camera in the scene that matches the position of the physical Kinect and projector in the real world. This will allow us to map a texture onto the paper that will then be projected.
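As a rough sketch of how this could be wired up (assuming the calibration described below already yields OpenGL-style 4x4 view and projection matrices stored as CV_64F cv::Mat), Ogre lets us override the camera's own maths with the calibrated matrices:

#include <OgreCamera.h>
#include <OgreMatrix4.h>
#include <opencv2/core/core.hpp>

// Push calibrated view/projection matrices into an Ogre camera so the
// rendered scene lines up with the physical projector.
void applyCalibration(Ogre::Camera* cam, const cv::Mat& view4x4, const cv::Mat& proj4x4) {
    auto toOgre = [](const cv::Mat& m) {
        return Ogre::Matrix4(
            m.at<double>(0, 0), m.at<double>(0, 1), m.at<double>(0, 2), m.at<double>(0, 3),
            m.at<double>(1, 0), m.at<double>(1, 1), m.at<double>(1, 2), m.at<double>(1, 3),
            m.at<double>(2, 0), m.at<double>(2, 1), m.at<double>(2, 2), m.at<double>(2, 3),
            m.at<double>(3, 0), m.at<double>(3, 1), m.at<double>(3, 2), m.at<double>(3, 3));
    };
    cam->setCustomViewMatrix(true, toOgre(view4x4));
    cam->setCustomProjectionMatrix(true, toOgre(proj4x4));
}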

Before we can accurately track the paper, we need to calibrate the Kinect. A research paper from Microsoft Research [1] describes how to perform quick calibration using a Kinect: a checkerboard is shown to the camera and a maximum likelihood framework is used to calibrate it. The checkerboard pattern is commonly used in colour camera calibration. Since we have a projector, we project the checkerboard with it instead of using a physical checkerboard.
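Since we only need the pattern as an image to project, generating it is straightforward; the sketch below (with placeholder dimensions) produces a checkerboard image that can be shown full-screen through the projector.

#include <opencv2/opencv.hpp>

// Build a simple checkerboard image; the number of squares and the square
// size in pixels are placeholders, not our actual values.
cv::Mat makeCheckerboard(int cols, int rows, int square) {
    cv::Mat board(rows * square, cols * square, CV_8UC1, cv::Scalar(255));
    for (int y = 0; y < rows; ++y)
        for (int x = 0; x < cols; ++x)
            if ((x + y) % 2 == 0)
                board(cv::Rect(x * square, y * square, square, square)).setTo(0);
    return board;
}

int main() {
    cv::imwrite("checkerboard.png", makeCheckerboard(10, 7, 80));
    return 0;
}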

The images below show the projection of the checkerboard pattern and how this pattern is fed into our Kinect software to calibrate the device.



References:

[1] "Zhang, Cha; Zhang, Zhengyou; ",Calibration between depth and color sensors for commodity depth cameras,"Multimedia and Expo (ICME), 2011 IEEE International Conference on",Pages 1-6,2011,IEEE



7 November 2014

The Start of Implementation


After completing a literature review the previous week, my group and I began getting to grips with implementing our interactive device.

To begin with, we felt there were two key parts to the project that we should start with.

The first is being able to recognise a sheet of paper through the Kinect camera and track the paper as it moves. This is what the sub-team to the right in the image above were working on. Setting up the Kinect, the programming environment and OpenCV (which we want to use for tracking) took a lot of time. By the end of the week, this sub-team had the Kinect camera feed flowing into an OpenCV program. Next week, we aim to complete the basic paper tracking.
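For reference, one way (among several) to pull Kinect frames into OpenCV is through its OpenNI capture backend; this only works if OpenCV was built with OpenNI support, and the constant names below are the OpenCV 3 spellings, so treat this as a sketch rather than exactly what we run.

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture kinect(cv::CAP_OPENNI);   // open the Kinect via the OpenNI backend
    if (!kinect.isOpened()) return 1;

    cv::Mat colour, depth;
    while (true) {
        if (!kinect.grab()) break;                          // grab one synchronised frame pair
        kinect.retrieve(colour, cv::CAP_OPENNI_BGR_IMAGE);  // colour image
        kinect.retrieve(depth, cv::CAP_OPENNI_DEPTH_MAP);   // depth map in millimetres
        cv::imshow("colour", colour);
        if (cv::waitKey(30) == 27) break;                   // Esc to quit
    }
    return 0;
}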

The second key part is being able to project onto the paper, given the coordinates of the paper provided by the first part above. This is what I worked on with another team member. By the end of the week, we were able to project a shape at coordinates that we predefined.

Next week, we aim to project onto paper and automatically adjust the coordinates of our projection based on the actual position of the paper. This will involve combining this part with the first part above, and will allow us to have a very basic application that tracks paper and projects onto it as the paper moves.

25 October 2014

Flexible Surfaces Interaction - Group Plan & Literature Review


 This is our new idea! After our initial brainstorming sessions and deeper literature reviews, we have chosen to now focus on Flexible Surface Interaction as we think there is less research in this area and we have a novel idea to bring forward.

Summary of Idea
  • Projecting onto a flexible surface (e.g. paper)
  • Projecting onto a paper cylinder and being able to rotate the cylinder
  • Gesture interaction with the surface (e.g. making the cylinder virtually spin)
  • Folding the paper to different sizes
  • [Any findings would be relevant to future devices that are flexible]


Use cases: dynamically sized web pages, viewing a globe, flight paths, viewing timelines/history over time (as you spin), geometry education on curved planes/cartography education


EQUIPMENT LIST
  • Kinect
  • Projectors
  • Piezoelectric sensors to stick on back of paper

Literature Review

  • Summary: the first paper is basically what we want to do, but we can extend it by looking at the cylinder and folding ideas listed above. Existing research either focuses on a device that changes shape (e.g. the Bristol Morphees) rather than on gesture interaction, or just presents paper mock-ups without actually having something that “works”.


Year + Title + Link
Summary/ Interesting Findings/ Implications for our project
[2013] Flexpad: Highly Flexible Bending Interactions for Projected Handheld Displays, LINK
Projects onto a flexible surface using a Kinect (depth camera) and a projector. They are able to detect and analyse deformations in the surface in real time, thereby allowing them to project an adjusted image also in real time. They present an algorithm that can be used to capture complex deformations in high detail and in real time. This could be useful for our project.
[2013] Novel User Interaction Styles with Flexible / Rollable Screens, LINK
Gathers feedback around flexible screens and potential technologies to use for implementation. No actual implementation.
The device uses a rollable display - they do not use a projector.
It has multiple modes and changes the UI according to the mode it is in.
[2012] PyzoFlex: Printed Piezoelectric Pressure Sensing Foil, LINK
Used Piezoelectric pressure sensors in the form of a matrix to detect pressure and temperature. It is different to previous touch sensing that uses capacitive and resistive surfaces because it is able to ‘easily’ detect pressure.
[2014] FlexSense: A Transparent Self-Sensing Deformable Surface, LINK
This paper builds on the earlier research on piezoelectric sensing (above). Using machine learning, the authors are able to detect the shape of the deformed surface.
Applications described: transparent smart cover for tablets, external high-DoF (Degrees of Freedom) input device

24 October 2014

In the Literature: Flexible Surfaces & Displays

After further thought, my group have decided to explore flexible surfaces and displays. We did not feel that the wearable ring idea had enough scope to make a really good interactive device.

In recent years, there has been interest in the research community in exploring novel displays that can shape-shift. Our idea is to build a system that can project images onto a flexible material such as paper, even when it is bent. The aim, though, is not the display itself but to design new ways of interacting with these curved surfaces. For instance, if a user bends the paper into a cylinder shape, a map can be visualised on the surface, and the user can interact with it by swiping the cylinder or rotating it.

The rest of this blog post explores some of the literature in this field.

"Flexpad: Highly Flexible Bending Interactions for Projected Handheld Displays", J. Steimle et al. CHI 2013

In this paper, a flexible interactive system is built that uses a depth camera (Kinect) and a projector. The system is able to detect and analyse deformations in the surface in real-time thereby allowing it to project an adjusted image also in real-time.

An algorithm is presented that can be used to capture complex deformations in high detail and in real-time. This sort of algorithm could be used when we come to implement our interactive device. Our use cases are different to the ones described in the paper so we would have to adjust the algorithms to work with our use cases. However, this paper provides a good starting point.

"Novel user interaction styles with flexible/rollable screens", S. Nagaraju CHItaly 2013

This paper presents a concept for a device that has a rollable display and uses novel interactions. The authors propose a display that has multiple modes: full fold mode, fragment mode, and stand mode. Using sensors built into the display (e.g. a gyroscope and accelerometer), the device is able to determine the mode it is currently in. Based on this data, the device determines which parts of the display the user can see (i.e. the exposed sides), and the UI is then adjusted to account for the new display mode.

The paper then goes on to describe new user interfaces that can exploit the new display modes. This is something my group can think about for our own interactive device - how should the interface change for each form factor?

Our own idea is different from that explained here because we are using a projector that projects onto paper whereas this uses a dedicated display device.

Other papers:

PyzoFlex: Printed Piezoelectric Pressure Sensing Foil [2012]
  1. Used piezoelectric sensors to detect pressure and temperature

FlexSense: A Transparent Self-Sensing Deformable Surface [2014]
  1. This paper develops on the above research paper based on piezoelectric sensors.
  2. The authors used machine learning to detect the shape of the deformed surface in real-time.
  3. Applications described in the paper include: transparent smart cover for tablets, external high-DoF (Degrees of Freedom) input device

22 October 2014

Further Idea Discussions - Workshop 2

This week's workshop gave my group the opportunity to discuss our ideas further and also try out some hardware. Available were an Arduino kit, an Intel Galileo, a Kinect sensor, a Leap Motion and an Oculus Rift.

The initial ideas we went into the workshop with were discussed with the group. Following from my previous blog post, we discussed customised virtual keyboards and smart pens. Between the brainstorming workshop and this week, my team and I carried out a review of the literature that exists at the moment relating to both our ideas. It was exciting to see that there had been lots of research into the areas that we were exploring.

A paper we discussed is "Towards Keyboard Independent Touch Typing in VR" - Kuester et al. 2005. The paper describes a glove that uses Bluetooth and pressure sensors placed on the user's fingertips to detect letters (QWERTY layout). After further investigation, we found an updated version of what began as a research project. A press article is available at this link: Kitty virtual keyboard solution.

Following this research, we decided that the idea we thought was novel had in fact already been done. We could not see any good way of improving it further, so we decided to move on from this idea.

We fell into a similar situation with the smart pen idea. "Increasing Viscosity and Inertia Using a Robotically Controlled Pen Improves Handwriting in Children" - Hilla Ben-Pazi et al. 2010 describes a device that children hold to try to improve their handwriting. The device is capable of increasing the apparent inertia and viscosity of the pen to aid handwriting. "Teaching to Write Japanese Characters using a Haptic Interface" - Solis, J. 2002 also describes a device capable of interpreting the user's actions and exerting a more intelligent force feedback strategy: the system identifies the action to be performed by the user and then emulates the presence of a human tutor, feeding forces back to the student.

The idea we finally settled on is a wearable ring device, worn on a finger, that uses ultrasound (to be explored further) and other sensors to detect interactions that can then be used as input to a device such as Google Glass. This allows for discreet operation of another device.

19 October 2014

Group Post on 3 workshop ideas



Idea 1: Customisable virtual keyboard


Idea 2: Smart Pen



Idea 3: Ring based interaction



Customisable virtual keyboard

Year + Title + Link + Initial
Summary/ Interesting Findings/ Implications for our project
[2005] Towards Keyboard Independent Touch Typing in VR, LINK, updated version of product, DT
A KITTY glove using Bluetooth that is similar to what we want to do: it removes the need to type on a specific pressure-sensitive surface. To detect letters (QWERTY layout) they use specific contact points. Presents the use case of virtual/augmented reality (e.g. typing while wearing an Oculus Rift).

The updated version is a little less intrusive, but we could potentially make finger socks instead of gloves, which could be even less intrusive. Users commented on haptic feedback - perhaps we could use vibrations/sound when a letter is registered? Unpractised users took 2-7 seconds per letter; perhaps we could improve on this with sensors or parallelisation. Users also wanted a tutorial.
[2009] Fast Finger Tracking System for In-air Typing Interface, LINK, DT
Markerless tracking of finger movements from a camera, recognising typing in the air. To achieve typing in real time, they use hardware that parallelises image processing (throughput of 138 fps).

(Ideally we'd want markered tracking so that the system could be used on a user's lap)
[2002] Designing a Universal Keyboard Using Chording Gloves, LINK, DT
A universal input device for both text (Korean) and Braille input was developed as a glove-type interface using all the joints of the four fingers and thumbs of both hands. Korean character entry showed comparable performance to cellular phone keypads, but was inferior to a conventional keyboard. Letters are typed by touching certain finger combinations (not QWERTY).
PointGrab
This is not really a paper, but I wanted to investigate if someone had already tried to develop a system for home automation using gesture tracking. It turns out that there is a company called PointGrab that has been doing it for a few years. Here is the website: http://www.pointgrab.com/
Dextype
Dextype is a product that lets you type in the air. Because typing in the air is so inaccurate, Dextype helps by trying to figure out the key you pressed. It also includes functions that make it very easy to complete words, draw symbols in the air and correct previously typed input. Here is the website: http://www.cnet.com/news/type-in-the-air-with-dextype-for-leap-motion/
Eye Gaze Tracking for Human Computer Interaction
This is a (very large) study on the advantages/disadvantages of using eye tracking as a pointing mechanism, as someone proposed last Tuesday. http://edoc.ub.uni-muenchen.de/11591/1/Drewes_Heiko.pdf
Hand Gesture Recognition Using Computer Vision, LINK, AM
This paper investigates the detection of hand gestures using computer vision. Recognition of one-handed sign language is then used to implement a method of typing (see sections five and six).
[CHI 2003] Typing in Thin Air: The Canesta Projection Keyboard – A New Method of Interaction with Electronic Devices, LINK, AM
This device was envisioned as a solution for typing with mobile devices, similar to our use cases. A keyboard is projected onto a surface and the user types on the projected keyboard. Infrared light is projected in a plane slightly above the surface; the intersection of fingers with this plane is used to work out which key the user pressed, and an audible click is made. User studies show that users of this keyboard perform worse than they would with a standard mechanical keyboard but better than they do with ‘thumb keyboards’ - however, this was before the revolution in smartphones and the associated improvements in touchscreen keyboards, so it may no longer be a relevant comparison.
TiltType: Accelerometer-Supported Text Entry for Very Small Devices
TiltType is a novel text entry technique for mobile devices. To enter a character, the user tilts the device and presses one or more buttons. The character chosen depends on the button pressed, the direction of tilt, and the angle of tilt. TiltType consumes minimal power and requires little board space, making it appropriate for wristwatch-sized devices. But because controlled tilting of one's forearm is fatiguing, a wristwatch using this technique must be easily removable from its wriststrap. Applications include two-way paging, text entry for watch computers, web browsing, numeric entry for calculator watches, and existing applications for PDAs.
WalkType: using accelerometer data to accommodate situational impairments in mobile touch screen text entry
The lack of tactile feedback on touch screens makes typing difficult, a challenge exacerbated when situational impairments like walking vibration and divided attention arise in mobile settings. We introduce WalkType, an adaptive text entry system that leverages the mobile device's built-in tri-axis accelerometer to compensate for extraneous movement while walking. WalkType's classification model uses the displacement and acceleration of the device, and inference about the user's footsteps. Additionally, WalkType models finger-touch location and finger distance traveled on the screen, features that increase overall accuracy regardless of movement. The final model was built on typing data collected from 16 participants. In a study comparing WalkType to a control condition, WalkType reduced uncorrected errors by 45.2% and increased typing speed by 12.9% for walking participants.

Smart Pen

Year + Title + Link + Initial
Summary/ Interesting Findings/ Implications for our project
[2010] Increasing Viscosity and Inertia Using a Robotically Controlled Pen Improves Handwriting in Children, LINK, DB
The paper aims to determine the effect of the mechanical properties of the pen on the quality of handwriting in children.
They used a device that the child holds to try to improve handwriting.
“We predict that children may have either improved or worsened handwriting using the robot, but writing will not be slower.”
They use the robot to increase the apparent inertia and viscosity of the pen.
[2009] Poster: Teaching Letter Writing using a Programmable Haptic Device Interface for Children with Handwriting Difficulties, LINK, DB
The aim was to use a haptic device (robotic arm) to improve the handwriting of children who had difficulty writing. This was done in a virtual environment, so no ink was written to any paper.
The results showed that handwriting improved when using the haptic device. There was an advantage of 3D force feedback over just 2D force feedback, though further work is needed to show that 3D force feedback is superior.
[2002] A Robotic Teacher of Chinese Handwriting, LINK, DB
This paper again used a virtual environment but with a real haptic device to teach people handwriting - in this case Chinese.
[2002] Teaching to Write Japanese Characters using a Haptic Interface, LINK, DB
“The Reactive Robot technology is capable of interpreting the human actions and exerting a more intelligent force feedback strategy.”
The Reactive Robots are designed to emulate the presence of a human tutor, feeding forces back to the student.
The system not only reproduces the task, but should also be able to identify the action to be performed by the user.
This sounds similar to what we wanted to do, but they only did it for a few Japanese characters, and it was still all virtual.

Ring based interaction

Year + Title + Link + Initial
Summary/ Interesting Findings/ Implications for our project
An energy harvesting wearable ring platform for gesture input on surfaces

This paper presents a remote gesture input solution for interacting indirectly with user interfaces on mobile and wearable devices. The proposed solution uses a wearable ring platform worn on the user's index finger. The ring detects and interprets various gestures performed on any available surface, and wirelessly transmits the gestures to the remote device. The ring opportunistically harvests energy from an NFC-enabled phone for perpetual operation without explicit charging. A finger-tendon pressure-based solution detects touch, and a lightweight audio-based solution detects finger motion on a surface. Two-level, energy-efficient classification algorithms identify 23 unique gestures, including tapping, swipes, scrolling, and strokes for handwritten text entry. The classification algorithms have an average accuracy of 73% with no explicit user training. The implementation supports 10 hours of interaction on a surface at a 2 Hz gesture frequency. The prototype, built with off-the-shelf components, has a size similar to a large ring.
Plex
LINK
Plex is a finger-worn textile sensor for eyes-free mobile interaction during daily activities. Although existing products like data gloves possess multiple sensing capabilities, they are not designed for environments where body and finger motion are dynamic. An interaction with the fingers usually couples bending and pressing; Plex separates bending and pressing by placing each sensing element on a discrete face of a finger. A simple and low-cost fabrication process using conductive elastomers and threads transforms an elastic fabric into a finger-worn interaction tool. Plex preserves natural inter-finger tactile feedback and proprioception. The authors also explore the interaction design and implement applications allowing users to interact with existing mobile and wearable devices using Plex.
More than touch: understanding how people use skin as an input surface for mobile computing
This paper contributes results from an empirical study of on-skin input, an emerging technique for controlling mobile devices.

Look at section “On-Skin Sensors”
The Sound of Touch: On-body Touch and Gesture Sensing Based on Transdermal Ultrasound Propagation
Can detect the pressure/distance of a touch on the skin. Requires two ultrasound transducers - one transmits, one receives. The paper focuses on the forearm but notes: “our signal propagation experiments suggest that the sensing method could be extended to various body parts”.

Others:
Hand-writing Rehabilitation in the Haptic Virtual Environment - LINK
Motor Skill Training Assistance Using Haptic Attributes - LINK
Using haptics to improve motor skills but not specific to handwriting.
Human Motion Prediction in a Human-Robot Joint Task - LINK
Optimal Kinematic Design of a Haptic Pen - LINK