Software-Based Compensation of Visual Refractive Errors of Computer Users


"Software-Based Compensation of Visual Refractive Errors of Computer Users", (2005)
Miguel Alonso, Armando Barreto, Maroof Choudhury, Julie Jacko and Malek Adjouadi

ABSTRACT: For human beings, vision is one of the most important senses for interacting with the surrounding environment, as well as with any tools that require visual communication. As such, the ability to interact effectively with computers through typical graphical user interfaces (GUIs) is greatly affected by any refractive errors present in an individual's visual system. If the refractive errors can be mathematically modeled, a system for overcoming these aberrations can be devised, improving human-computer interaction for these individuals. Several methods, such as Adaptive Optics, have been proposed to solve this problem using electro-mechanical devices. These methods are costly and impractical, preventing most visually impaired individuals from benefiting from them. In contrast, an image-processing method, based on deconvolution techniques, has recently been proposed for the pre-compensation of images to be displayed on a computer. This method is far more practical, being implemented entirely in software, and has achieved encouraging results: previous experiments yielded an average 50% increase in visual efficiency when compensating for a known artificial aberration introduced into the field of vision of experimental subjects. This paper describes the difficulties encountered with the present software-only compensation and proposes several methods for overcoming these obstacles. The difficulties, as well as the proposed solutions, are described theoretically and illustrated with examples using a lens system, showing the improvement over previous methods.
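The core idea of deconvolution-based pre-compensation can be sketched as follows. This is an illustrative example only, not the authors' exact algorithm: it assumes the eye's refractive aberration is modeled as a known point-spread function (PSF), and applies a Wiener-style inverse filter in the frequency domain so that the displayed image, once blurred by the eye, approximates the intended one. The function name and the regularization constant `k` are assumptions introduced for this sketch.

```python
import numpy as np

def wiener_precompensate(image, psf, k=0.01):
    """Pre-compensate `image` for a blur modeled by `psf`.

    Illustrative sketch: divides the image spectrum by the PSF's
    transfer function with Wiener-style regularization, so that
    blurring the result by `psf` roughly recovers `image`.
    """
    # Transfer function of the aberration; ifftshift moves the
    # PSF's center to the array origin before the FFT.
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    # Regularized inverse filter: the constant k prevents blowing up
    # frequencies where |H| is near zero.
    G = np.conj(H) / (np.abs(H) ** 2 + k)
    pre = np.real(np.fft.ifft2(np.fft.fft2(image) * G))
    # Rescale into the displayable intensity range [0, 1].
    pre -= pre.min()
    pre /= pre.max()
    return pre
```

Note the rescaling step: the raw inverse-filtered image can exceed the displayable intensity range, which is one of the practical obstacles that software-only compensation must address.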