Non-intrusive Physiological Monitoring for Affective Sensing of Computer Users
"Non-intrusive Physiological Monitoring for Affective Sensing of Computer Users", (2008)
Barreto A.

INTRODUCTION: The last two decades have, undoubtedly, brought about an amazing revolution in the relationship between computers and their users. This relationship has evolved from an initial state in which the full "burden" of the communication was placed on the shoulders of the user, when early computer models had to be programmed one instruction at a time by toggling individual switches (which restricted computer usage to very few, highly trained individuals), to the current status, in which, thanks to highly intuitive graphical user interfaces (GUIs), even young children can have meaningful interactions with the personal computers that are now present in many homes.

Further, it is now possible for users to employ alternative means, such as their speech, or even the direction of their eye gaze, to interact with computers. In cases such as these, it is clear that the computer has now taken over a larger portion of the interaction "burden", as ancillary programs (speech recognition, eye image processing, speech synthesis) run in the computer system to match the actions (speaking, shifting the point of gaze on the screen, listening) that the user naturally and almost effortlessly performs during the interaction.

One may think that computers are fast approaching a level of development at which they can recognize our speech, perceive our gaze shifts, and speak to us just as well as another human could, to the point of being able to substitute for a human counterpart in a dialog under certain scenarios. However, it is very likely that, in spite of the accuracy of the speech recognition and the fidelity and cadence of the synthesized voice, we would soon realize we are interacting with a machine. The subtle modulations and adjustments that occur in human-human interaction due to phenomena such as empathy and sympathy would be found missing, substituted instead by mechanistic and often inflexible templates that have been pre-designed for short interaction segments, disregarding what the affective state of the user might be or how it might be changing. In summary, the goal of a human-computer interaction that is inherently natural and social, following the basics of human-human interaction as proposed by Reeves & Nass [Reeves & Nass, 1996], has not yet been reached.