Monday, January 25, 2010

UIST 2009: A Practical Pressure Sensitive Computer Keyboard


Article : A Practical Pressure Sensitive Computer Keyboard

Authors: Paul H. Dietz, Benjamin Eidelson, Jonathan Westhues and Steven Bathiche

Summary: In this article, the authors describe a modified computer keyboard design that adds pressure-sensing capabilities. To begin with, they describe how the computer keyboard hasn't really evolved much, at least at a superficial glance, from the versions being used 20 years ago, even with the abundance of newer technologies available. They attribute this lack of progress to the high cost of retraining users on a new device. Their goal from the beginning is to make the new keyboard as intuitive as possible, and to achieve this they make no modifications to the physical look or feel of the keyboard, instead opting to add pressure-sensing technology at the point of contact for each key.

To accomplish pressure sensitivity, they use a special variable-resistance material whose resistance changes depending on how hard the user presses on it. They insert this material between the contacts of each key, placed in such a way that the pressure sensing is only engaged after a normal keystroke is completed. The keys are wired into a set of rows and columns, each hooked up to an op amp, so that differences in voltage can be measured to determine which key is being pressed and how hard.
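The row/column scan described above can be sketched roughly as follows. This is a hypothetical illustration, not the authors' firmware: `read_adc` and the divider/calibration constants are assumptions standing in for the real op-amp circuitry and calibration curve.

```python
# Hypothetical sketch of the row/column scan: drive each row, read each
# column's voltage through an op-amp/ADC stage, and map the voltage across
# the variable-resistance material to a pressure estimate.
# read_adc() and all constants below are assumptions, not from the paper.

V_SUPPLY = 3.3        # drive voltage on the active row (assumed)
R_FIXED = 10_000.0    # fixed resistor in the divider, ohms (assumed)

def pressure_from_voltage(v_out):
    """Estimate key pressure from the divider output voltage.

    The sensing material's resistance drops as force increases, so a
    higher output voltage means a harder press. The linear mapping here
    is a stand-in for a real calibration curve.
    """
    if v_out <= 0:
        return 0.0
    r_sensor = R_FIXED * (V_SUPPLY - v_out) / v_out  # voltage divider
    return max(0.0, 1.0 - r_sensor / 100_000.0)      # crude 0..1 scale

def scan_matrix(read_adc, n_rows=8, n_cols=16):
    """Return {(row, col): pressure} for keys pressed past the contact point."""
    pressed = {}
    for row in range(n_rows):
        for col in range(n_cols):
            p = pressure_from_voltage(read_adc(row, col))
            if p > 0.05:          # ignore readings below a small noise floor
                pressed[(row, col)] = p
    return pressed
```

Because the sensing material sits behind the normal key contact, an unpressed key simply reads zero and the keyboard behaves exactly like an ordinary one.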

The authors then go on to describe a few instances in which this technology could be used, such as emotional instant messaging, video gaming, etc.

By adding the technology to an existing keyboard, they gained a modest set of new capabilities while altering the keyboard's design only minimally.

Shaun's Opinion: I think this article is a great example of improving on existing technology without forcing users to learn a bunch of new tricks or methods. The keyboard works much the same as a regular keyboard, just with a few added features. There are no extensive training sessions, and the keyboard should be only slightly more expensive to manufacture, probably reusing more than 90% of the current manufacturing processes. Articles like this make a person look around and think about objects in their own lives that could probably be improved without too much effort.

Sunday, January 24, 2010

UIST 2009: PhotoelasticTouch


Article: PhotoelasticTouch: Transparent Rubbery Tangible Interface using an LCD and Photoelasticity

Authors: Toshiki Sato, Haruko Mamiya, Hideki Koike, Kentaro Fukuchi

Summary: In this article, the authors present a new idea for a multitouch interface capable of a few more means of user interaction than some of its competitors. Dubbed "PhotoelasticTouch" by its creators, the system uses a high-speed camera mounted above a desk-like device with an LCD panel built into the desk. The camera is mounted directly above the LCD to capture a top-down picture of the activity occurring on the screen below. The interface uses a couple of quarter-wave plates and a Hyper-Gel sheet to create deformations of light that can be detected by the camera above. The camera's polarized lens filter blocks the light coming directly from the LCD, so only light altered by a deformation in the Hyper-Gel reaches the camera.

When the user presses on the Hyper-Gel, the deformation creates a spot of light that the camera above detects. The captured image is then analyzed by software that uses a special algorithm to measure the size of the light deformations in the material, in order to judge the amount of force being applied to the touch screen, where it originated, and the direction in which it is headed. The authors then go on to describe applications to which the new interface is particularly suited, such as a paint application, a pressure-sensitive touch panel, and a tangible face.
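The bigger-blob-means-more-force idea can be illustrated with a toy version of the image analysis: threshold the camera frame, group bright pixels into connected regions, and treat each region's area as a proxy for applied force. This is my own simplified sketch, not the authors' algorithm; the threshold and force scale are invented.

```python
# Toy sketch of force-from-blob-size: find connected bright regions in a
# thresholded image and report (centroid, force) per region, with force
# proportional to blob area. Threshold and force_per_pixel are made up.

def find_touches(image, threshold=128, force_per_pixel=0.1):
    """image: 2D list of brightness values. Returns [((cy, cx), force), ...]."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    touches = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not seen[r][c]:
                # flood-fill one bright blob
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                touches.append(((cy, cx), len(pixels) * force_per_pixel))
    return touches
```

A harder press deforms more of the gel, so its blob covers more pixels and the estimated force goes up; tracking centroids across frames would give the touch's origin and direction of travel.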

Shaun's Opinion: In my opinion, the idea proposed by these folks is unique and potentially useful. However, the current setup (an overhead camera that captures light deformations in the material below) has some inherent flaws. If users lean their head or another part of their body over the spot they are touching, no light deformations will be detected and nothing will happen, not to mention the inherent impracticality of an overhead-camera system. This system, while unique, appealing, and certainly useful in some applications, would probably never become mainstream simply due to the bulk and size of the equipment needed to make it work properly.

Tuesday, January 19, 2010

UIST 2009: Reconfigurable Ferromagnetic Input Device


Summary - The article "A Reconfigurable Ferromagnetic Input Device" is about the possibilities opened up by a new sensing device the authors have developed in their research. They describe multiple new applications that could be enabled if the device were refined, such as input devices that a user could reconfigure on-the-fly and that would then be interpreted easily by the magnetic sensing hardware they have come up with.

The device uses magnetic coils attached to circuitry to detect changes in the magnetic fields produced by the coils when ferromagnetic objects are placed on top of the device. The change in magnetic field caused by the object creates a voltage increase at the sensor input, which is then amplified, analyzed, and visualized by a C# library.
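A rough analogue of that sensing pipeline might look like the sketch below (the authors' actual software is a C# library; this Python version, and its gain and noise-floor values, are my own illustrative assumptions): compare each coil's reading against a no-object baseline, amplify the difference, and produce a 2D intensity map of where ferromagnetic material sits over the coil grid.

```python
# Hypothetical baseline-delta sensing: subtract each coil's no-object
# baseline voltage from its current reading, gate small deltas as noise,
# and amplify the rest into a 2D intensity map. GAIN and NOISE_FLOOR are
# invented for illustration, not taken from the paper.

GAIN = 50.0         # amplification applied to each voltage delta (assumed)
NOISE_FLOOR = 0.02  # volts; deltas below this are treated as noise (assumed)

def field_map(readings, baseline):
    """readings/baseline: 2D lists of coil voltages. Returns intensity grid."""
    out = []
    for row_r, row_b in zip(readings, baseline):
        out.append([
            (r - b) * GAIN if (r - b) > NOISE_FLOOR else 0.0
            for r, b in zip(row_r, row_b)
        ])
    return out
```

Rendering this grid as an image is essentially the 2-D visualization the article describes.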

So far, the device can only visualize objects in 2-D, but the authors say further research could allow it to sense in 3-D.

Shaun's Opinion - While the idea of this device is certainly appealing, at the moment its design constraints make it hard to imagine much real-world usefulness. The sensing device requires the input objects to be ferromagnetic, and it currently only senses in 2D. While it does a fine job of detecting touch on a ferromagnetic bladder, I don't see it being very practical. This idea definitely needs more research before it becomes a useful, implementable one.

UIST 2009: Ripples


Summary - The article "Ripples: utilizing per-contact visualizations to improve user interactions with touch displays" focuses on the user frustration and lack of confidence inherent in most single-touch and multi-touch displays, and on the solution the authors have come up with: a visualization system titled Ripples. The two main problems discussed in the article are:


Fat Finger Problem - Essentially, this is when you touch a touchscreen and your finger covers a larger area than the single point the "touch" signal is actually reduced down to, causing a "miss".


Feedback Ambiguity Problem - This is when a user touches a touchscreen, does not achieve the desired result, and the system provides no feedback or cues as to what might have happened. For example, when you think you've touched a button but actually missed it (according to the system), and no feedback is provided about what happened, you begin to lose confidence that the button actually works, or suspect that the system is laggy.

The authors then go on to describe the system they implemented to help users overcome these two problems, which is, simply put, to place ripples and other visual cues at the contact points on the screen when it is touched. The ripples are drawn on top of whatever application is running underneath, so they remain consistent and application-independent.
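The overlay idea can be sketched in a few lines: every touch spawns a ripple at the contact point, and the ripples animate and fade in a layer that knows nothing about the application underneath. This is a minimal sketch of the concept, not Microsoft's implementation; the growth and fade rates are arbitrary choices.

```python
# Minimal sketch of a per-contact ripple overlay: the touch layer (not the
# app) spawns a ripple wherever the system registered a contact, so the
# user sees exactly where their touch landed. grow_rate and fade_rate are
# arbitrary illustrative values.

class RippleOverlay:
    def __init__(self, grow_rate=4.0, fade_rate=0.05):
        self.grow_rate = grow_rate    # radius growth per frame
        self.fade_rate = fade_rate    # opacity lost per frame
        self.ripples = []             # each ripple: [x, y, radius, opacity]

    def on_touch(self, x, y):
        """Called by the touch layer, not the app, so feedback stays consistent."""
        self.ripples.append([x, y, 0.0, 1.0])

    def tick(self):
        """Advance the animation one frame; drop fully faded ripples."""
        for rip in self.ripples:
            rip[2] += self.grow_rate
            rip[3] -= self.fade_rate
        self.ripples = [r for r in self.ripples if r[3] > 0.0]
```

Because the ripple appears at the point the system registered (not where the user thinks they touched), a "miss" becomes immediately visible instead of silently failing.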

Shaun's Opinion - I think what is presented in this article is a very clear-cut, simple solution to problems with touch screens that I personally have experienced. Having a computer engineering background helps me grasp the source of erroneous touch inputs more easily than the average user, and even I find myself frustrated with touch screens. I think the ripples effect would have quite a large impact on user confidence in touch-screen applications, allowing users to see what the system is seeing, as opposed to what they thought they did. All in all, I think this system would let users learn from their mistakes and make better use of touch-screen applications.