Abstract— Facial expressions and emotions play an important role in communication and social interaction with other human beings, delivering rich information about a person's mood. The "BLUE EYES TECHNOLOGY" aims at creating computational machines that have sensory and perceptual abilities like those of human beings, enabling the computer to gather information about humans and interact with them. This paper implements the detection of emotions (happy, sad, fear, surprised, anger, disgust) by taking into consideration human eye expressions and by using an emotion mouse. The emotion mouse obtains physiological data and the emotional state of a person through a single touch of a mouse equipped with different sensors. Emotions are also determined from human eye expressions, in which the eye region from a video sequence is analyzed. From the different frames of the video stream, the human eyes can be extracted using an edge operator and then classified using a Support Vector Machine (SVM) classifier. After the classification, a standard learning tool, the Hidden Markov Model (HMM), is used to recognize the emotions from the human eye expressions. After successful detection of an emotion, a suitable audio track is played.

 

Keywords- Blue eyes, emotion mouse, emotion
recognition, eye expressions, Support
Vector Machine (SVM), Hidden
Markov Model (HMM).

I. INTRODUCTION

The "BLUE EYES" technology aims at creating computational machines by adding extraordinary perceptual abilities to computers, helping them to verify a person's identity, feel their presence, and interact with them. Human recognition depends primarily on the ability to perceive, interpret, and integrate audio, visual, and sensory information. Blue eyes technology makes a computer sense and understand human feelings and behavior, and enables the computer to respond according to the sensed emotional level. The main aim of blue eyes technology is to give human abilities or power to a computer, so that the machine can interact with human beings as naturally as humans interact with each other.

The methodologies proposed in this paper to detect human emotions are the emotion mouse and emotion recognition from human eye expressions. The emotion mouse is an input device designed to track the emotions of a user through a simple touch. It is used to evaluate and identify the user's emotions, such as happy, sad, anger, fear, disgust, and surprised, while the user is interacting with the computer.

 

Human emotion recognition is an important component of efficient man-machine interaction. It plays a critical role in communication by allowing people to express themselves beyond the verbal domain. Analysis of emotions from human eye expressions involves the detection and categorization of various human emotions or states of mind. For example, in security and surveillance, analysts can predict an offender's or criminal's behavior by analyzing images of their face from the frames of a video sequence. The analysis of human emotions can be applied in a variety of application domains, such as video surveillance and human-computer interaction systems. In some cases, the results of such analysis can be applied to identify and categorize the various human emotions automatically from the videos.

 

II. RELATED WORK

Many approaches to blue eyes technology and human emotion recognition have been proposed in the last two decades.

Mizna Rehman Mizna et al. [1] implement a technique known as the Emotion Sensory World of Blue Eyes Technology, which identifies human emotions (sad, happy, excited or surprised) using image processing techniques, extracting the eye portion from a captured image and comparing it with stored images in a database. The paper reports two key results of the emotional sensory world. First, observation reveals that different eye colors and their intensities result in changes in emotion, without giving any information on shape or the actual detected emotion. Second, the technique is used to successfully recognize four different emotions from the eyes.

S.R. Vinotha et al. [2] use a feature extraction technique to extract the eyes, a support vector machine (SVM) classifier, and an HMM to build a human emotion recognition system. The proposed system analyzes the human eye region from video sequences. From the frames of the video stream, the human eyes are extracted using the well-known Canny edge operator and classified using a non-linear Support Vector Machine (SVM) classifier. Finally, a standard learning tool, the Hidden Markov Model (HMM), is used for recognizing the emotions from the human eye expressions.

Mohammad Soleymani et al. [3] present an approach for instantaneously detecting the emotions of video viewers from electroencephalogram (EEG) signals and facial expressions. A set of emotion-inducing videos was shown to participants while their facial expressions and physiological responses were recorded. The valence (negative to positive emotions) expressed in the videos of the participants' faces was annotated by five annotators. The stimulus videos were also continuously annotated on the valence and arousal dimensions. Long short-term memory recurrent neural networks (LSTM-RNN) and Continuous Conditional Random Fields (CCRF) were utilized to detect emotions automatically and continuously. The results from facial expressions were found to be superior to the results from EEG signals. The authors analyzed the effect of the contamination of facial muscle activities on EEG signals and found that most of the emotionally valuable content in the EEG features is a result of this contamination. However, their statistical analysis showed that EEG signals carry complementary information in the presence of facial expressions.

T. Moriyama et al. [4] propose a system capable of detailed analysis of eye region images in terms of the position of the iris, the degree of eyelid opening, and the shape, complexity, and texture of the eyelids. The system uses a generative eye region model that parameterizes the fine structure and motion of an eye. The structure parameters represent the structural individuality of the eye, including the size and color of the iris; the width, boldness, and complexity of the eyelids; the width of the bulge below the eye; and the width of the illumination reflection on the bulge. The motion parameters represent movement of the eye, including the up-down position of the upper and lower eyelids and the 2D position of the iris.

Renu Nagpal et al. [5] present the first publicly available dataset of labeled data recorded over the Internet of people naturally viewing online media. The AM-FED dataset contains: 1) 242 webcam videos recorded in real-world conditions; 2) 168,359 frames labeled for the presence of 10 symmetrical FACS action units, 4 asymmetric (unilateral) FACS action units, 2 head movements, smile, general expressiveness, feature tracker failures, and gender; 3) locations of 22 automatically detected landmark points; 4) baseline performance of detection algorithms on this dataset and baseline classifier outputs for smile; and 5) self-reported responses of familiarity with, liking of, and desire to watch again the stimulus videos. This represents a rich and extensively coded resource for researchers working in the domains of facial expression recognition, affective computing, psychology, and marketing. The videos in this dataset were recorded in real-world conditions. In particular, they exhibit non-uniform frame rates and non-uniform lighting, and the camera position relative to the viewer varies from video to video; in some cases the screen of the laptop is the only source of illumination. The videos contain viewers from a range of ages and ethnicities, some with glasses and facial hair. The dataset contains a large number of frames with agreed presence of facial action units and other labels.

 

III. METHODOLOGY USED

A. Emotion Recognition From Human Eyes

Facial expressions play an essential role in communication and social interaction with other human beings, delivering information about emotions. The most crucial feature of human interaction that grants naturalism to the process is our ability to infer the emotional states of others. Our goal is to categorize the different human emotions from their eye expressions. The proposed system presents a human emotion recognition system that analyzes the human eye region from video sequences. From the frames of the video stream, the human eyes are extracted using the well-known Canny edge operator and classified using a non-linear Support Vector Machine (SVM) classifier. Finally, a standard learning tool, the Hidden Markov Model (HMM), is used for recognizing the emotions from the human eye expressions.

 

Fig. 1: Sample eye expressions (surprised, sad, happy, anger, fear, disgust)

Human emotion recognition is an important component of efficient human-computer interaction. It plays a critical role in communication, allowing people to express themselves beyond the verbal domain. Analysis of emotions from human eye expressions involves the detection and categorization of various human emotions and states of mind. The analysis of human emotions can be applied in a variety of application domains, such as video surveillance and human-computer interaction systems. In some cases, the results of such analysis can be applied to identify and categorize the various human emotions automatically from the videos. The six primary types of emotions, shown in Fig. 1, are: surprised, sad, happy, anger, fear, and disgust. Our method is to use a feature extraction technique to extract the eyes, a support vector machine (SVM) classifier, and an HMM to build a human emotion recognition system.
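The HMM stage can be realized by training one model per emotion on sequences of per-frame eye features and then labeling a new sequence with the emotion whose model assigns it the highest likelihood. The following is a minimal Python sketch of this idea using the hmmlearn library; the feature arrays, the number of hidden states, and the training-data layout are illustrative assumptions, not details taken from this paper.

    # Sketch: one Gaussian HMM per emotion; a sequence of eye-feature vectors
    # is labeled with the emotion whose model gives the highest log-likelihood.
    import numpy as np
    from hmmlearn import hmm

    EMOTIONS = ["happy", "sad", "fear", "surprised", "anger", "disgust"]

    def train_models(train_data, n_states=3):
        """train_data: emotion -> list of (n_frames, n_features) arrays."""
        models = {}
        for emotion in EMOTIONS:
            sequences = train_data[emotion]
            X = np.vstack(sequences)                # all frames stacked
            lengths = [len(s) for s in sequences]   # frames per sequence
            m = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=100)
            m.fit(X, lengths)
            models[emotion] = m
        return models

    def recognize(models, sequence):
        """Return the emotion whose HMM best explains the feature sequence."""
        return max(models, key=lambda e: models[e].score(sequence))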

Fig. 2: Methodology of emotion recognition from human eye expressions

The methodology of emotion recognition from human eye expressions is shown in Fig. 2. In this methodology, an image of the user sitting in front of the camera is captured. The image, representing a set of frames, is then preprocessed and a noise-free image is obtained. The noise-free image is edge detected using the Canny edge operator. Using the feature extraction process, the eye regions are extracted from the resultant edge-detected image. The extracted eye regions are classified using the SVM classifier. Finally, the corresponding emotions are recognized.
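A minimal sketch of this pipeline in Python with OpenCV and scikit-learn is given below. The Haar cascade used to localize the eyes, the blur kernel, the Canny thresholds, and the dummy SVM are assumptions made for illustration; the paper does not specify these implementation details.

    # Sketch: capture a frame, denoise it, apply the Canny edge operator,
    # extract eye regions, and classify each region with an SVM.
    import cv2
    import numpy as np
    from sklearn.svm import SVC

    # Assumed eye localizer (Haar cascade shipped with OpenCV).
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    # For illustration only: a dummy SVM trained on random data. In practice
    # it would be trained on labeled eye-region edge images.
    rng = np.random.default_rng(0)
    svm = SVC(kernel="rbf").fit(rng.random((12, 24 * 24)),
                                ["happy", "sad", "fear",
                                 "surprised", "anger", "disgust"] * 2)

    def classify_eyes(frame, size=(24, 24)):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        denoised = cv2.GaussianBlur(gray, (5, 5), 0)    # preprocessing
        edges = cv2.Canny(denoised, 100, 200)           # Canny edge operator
        labels = []
        for (x, y, w, h) in eye_cascade.detectMultiScale(denoised, 1.1, 5):
            eye = cv2.resize(edges[y:y + h, x:x + w], size)  # eye region
            labels.append(svm.predict(eye.reshape(1, -1))[0])
        return labels

    cap = cv2.VideoCapture(0)    # camera in front of the user
    ok, frame = cap.read()
    if ok:
        print(classify_eyes(frame))
    cap.release()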

 

B. Emotion Mouse

One proposed, non-invasive method for gaining user information through touch is via a computer input device, the mouse. This allows the system to relate cardiac rhythm, body temperature, and other physiological attributes to the user's mood.

Fig. 3: Block Diagram
of Emotion Mouse

 

The block diagram of the emotion mouse is shown in Fig. 3. This device can measure heart rate and temperature and match them with six emotional states: happiness, surprise, anger, fear, sadness, and disgust. The mouse includes a set of sensors, including infrared detectors and temperature-sensitive chips. These components could also be built into other commonly used items such as the office chair, the steering wheel, the keyboard, and the phone handset. Integrating the system into the steering wheel, for instance, could allow an alert to be sounded when a driver becomes drowsy.

Heart rate is measured by an IR sensor on the thumb, and temperature is measured using a thermistor chip. These values are input into a series of discriminant function analyses and correlated to an emotional state. Specifically, for the mouse, discriminant function analysis is used in accordance with basic principles to determine a baseline relationship, that is, the relationship between each set of calibration physiological signals and the associated emotion.
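As a rough illustration of this calibration step, the sketch below fits a linear discriminant model to labeled (heart rate, temperature) samples and then classifies a new reading. The calibration values and the use of scikit-learn's LinearDiscriminantAnalysis are assumptions for illustration; the paper does not give the actual analysis procedure or data.

    # Sketch: discriminant function analysis over calibration data, mapping
    # (heart rate in bpm, skin temperature in deg C) to an emotional state.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Hypothetical calibration samples taken while the emotion is known.
    X = np.array([[72, 33.5], [75, 33.8],    # happiness
                  [88, 32.9], [90, 33.0],    # surprise
                  [95, 34.2], [98, 34.4],    # anger
                  [102, 31.8], [99, 31.5],   # fear
                  [65, 32.2], [63, 32.0],    # sadness
                  [80, 33.1], [82, 33.3]])   # disgust
    y = (["happiness"] * 2 + ["surprise"] * 2 + ["anger"] * 2 +
         ["fear"] * 2 + ["sadness"] * 2 + ["disgust"] * 2)

    model = LinearDiscriminantAnalysis().fit(X, y)

    # A new reading from the mouse sensors is matched to the baseline.
    print(model.predict([[93, 34.1]])[0])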

 

IV. SYSTEM MODEL

In this system, two methodologies, namely the emotion mouse and emotion recognition from eye expressions, are used. The emotion mouse considers physiological and biological parameters such as cardiac rhythm and body temperature, whereas emotion recognition from human eye expressions considers facial expression for the detection of human emotion and mood.

 

Fig. 4: Block diagram
of the system

Fig. 4 shows the block diagram of the system. In this system, the data from the heartbeat sensor and the temperature sensor of the emotion mouse is given to the microcontroller. The output of the microcontroller is then fed to the computer, where the heartbeat and temperature values are compared with the standard range of each emotion and the most suitable emotion is selected. In parallel, a webcam connected to the computer captures images of the person from a video sequence and recognizes the emotion by detecting the eye region. The captured eye section is compared with the images stored in the database to detect the mood of the person. After detecting the mood, the music or audio command is played according to the detected mood.
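The end-to-end flow could look like the following sketch, which reads sensor values from the microcontroller over a serial port, matches them against per-emotion standard ranges, and selects an audio track. The port name, baud rate, message format, numeric ranges, and track names are all hypothetical; the paper does not define them.

    # Sketch: match emotion-mouse readings against assumed standard ranges
    # and choose a track. Serial framing and ranges are placeholder values.
    import serial  # pyserial

    # Hypothetical ranges: emotion -> ((hr_min, hr_max), (temp_min, temp_max)).
    RANGES = {
        "happy":     ((70, 85),  (33.0, 34.0)),
        "sad":       ((55, 70),  (31.5, 32.5)),
        "fear":      ((95, 115), (31.0, 32.0)),
        "surprised": ((85, 95),  (32.5, 33.5)),
        "anger":     ((90, 105), (34.0, 35.0)),
        "disgust":   ((75, 90),  (32.8, 33.6)),
    }

    def match_emotion(heart_rate, temperature):
        for emotion, ((hr_lo, hr_hi), (t_lo, t_hi)) in RANGES.items():
            if hr_lo <= heart_rate <= hr_hi and t_lo <= temperature <= t_hi:
                return emotion
        return None  # no standard range matched

    with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as port:  # assumed
        reading = port.readline().decode().strip()  # assumed "<hr>,<temp>"
        hr, temp = (float(v) for v in reading.split(","))

    emotion = match_emotion(hr, temp)
    if emotion is not None:
        print(f"Detected {emotion}; playing {emotion}.mp3")  # playback stub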

 

V. RESULT

In the proposed system, there are two results from the mentioned methodologies. First, different eye expressions of different people are taken into consideration through edge detection of the eyes. Each eye expression is then categorized into a given set of emotions (happy, sad, fear, surprised, disgust, anger) so as to obtain a single standard expression for each emotion. The emotion of a person can thus be detected by comparing the eye expression of the person with the standard eye expression of each emotion. Second, the values of the heartbeat sensor and the temperature sensor are compared with the standard value range of each emotion, and the emotion whose value range matches the data values of the user is taken as the emotional state of the user. According to the detected emotion, the music or audio command is played.

 

VI. CONCLUSION

Recent research shows that the understanding and recognition of emotional expressions plays a very important role in the maintenance and development of social relationships. This paper gives an approach to creating computational machines that have perceptual and sensory abilities like those of human beings, enabling the computer to gather information about the user through techniques like facial expression recognition and the consideration of biological factors such as cardiac rhythm and body temperature. This makes it possible for computers and machines to detect the emotion of a human and respond to it.