Wednesday, October 21, 2015

face to face

 
Method of assessing people's self-presentation and actions to evaluate personality type, behavioral tendencies, credibility, motivations and other insights through facial muscle activity and expressions
US 8600100 B2

Abstract
A method of assessing an individual through facial muscle activity and expressions includes receiving a visual recording stored on a computer-readable medium of an individual's non-verbal responses to a stimulus, the non-verbal response comprising facial expressions of the individual. The recording is accessed to automatically detect and record expressional repositioning of each of a plurality of selected facial features by conducting a computerized comparison of the facial position of each selected facial feature through sequential facial images. The  detected and recorded expressional repositioning is automatically coded to an action unit, a combination of action units, and/or at least one emotion. The action unit, combination of action units, and/or at least one emotion are analyzed to assess one or more characteristics of the individual to develop a profile of the individual's personality in relation to the objective for which the individual or televised image is being assessed.

1. A method of assessing an individual through facial muscle activity and expressions, the method comprising:
(a) receiving a recording stored on a computer-readable medium of an individual's response to a stimulus;
(b) accessing the computer-readable medium for detecting and recording expressional repositioning of selected facial features by conducting a computerized comparison of the facial position of each selected facial feature through sequential facial images;
(c) coding contemporaneously detected and recorded expressional repositionings to at least one emotion; and
(d) analyzing the at least one emotion to assess one or more characteristics of the individual to develop a profile of the individual's personality in relation to the objective for which the individual or televised image is being assessed;
identifying moments of the recording that elicited emotion based on the at least one emotion; and
developing the profile of the individual's personality based on a percentage of positive versus negative emotions and the specific emotions shown during the stimulus.
 further comprising linking eye tracking data from the recording with the at least one of an action unit, a combination of action units, or at least one emotion.
The method of claim 1, wherein coding contemporaneously detected and recorded expressional repositionings to at least one of an action unit, a combination of action units, or at least one emotion comprises coding contemporaneously detected and recorded expressional repositionings to a plurality of weighted emotions.
19. A non-transitory machine-readable medium including instructions that, when executed by a machine, cause the machine to perform operations comprising:
(a) receiving a recording stored on a computer-readable medium of an individual's response to a stimulus, the recording including a non-verbal response comprising facial expressions of the individual;
(b) accessing the computer-readable medium for automatically detecting and recording expressional repositioning of each of a plurality of selected facial features by conducting a computerized comparison of the facial position of each selected facial feature through sequential facial images;
(c) automatically coding contemporaneously detected and recorded expressional repositionings to at least one of an action unit, a combination of action units, or at least one emotion;
21. The machine-readable medium of claim 20, further comprising instructions causing the machine to perform operations comprising creating a transcript of at least a portion of the individual's verbal response, wherein analyzing the at least one of an action unit, a combination of action units, or at least one emotion comprises one or more of:
identifying places of emotional response in the transcript;
identifying the valence of the emotions for places in the transcript;
identifying one or more emotions that are most predominant with respect to at least portions of the transcript; and
identifying discrepancies between the verbal response and emotive response of the individual.
CROSS REFERENCE TO RELATED APPLICATION
The present application claims priority to U.S. Provisional Patent Application No. 61/169,806, filed on Apr. 16, 2009, and entitled “Method of Assessing People's Self Presentation and Actions to Evaluate Personality Type, Behavioral Tendencies, Credibility, Motivations and Other Insights Through Facial Muscle Activity and Expressions”.
FIELD OF THE INVENTION

The present disclosure relates generally to methods of evaluating people's personality type, behavioral tendencies, credibility, motivations and other such insights. More particularly, the present disclosure relates to the use of non-verbal language to gain a better understanding of people's personality type, behavioral tendencies, credibility, motivations and other such insights.
BRIEF SUMMARY OF THE INVENTION

A method of assessing an individual through facial muscle activity and expressions is disclosed. The method includes receiving a visual recording stored on a computer-readable medium of an individual's verbal and non-verbal responses to a stimulus, the non-verbal response comprising facial expressions of the individual, so as to generate a chronological sequence of recorded verbal responses and corresponding facial images. The computer-readable medium is accessed to automatically detect and record expressional repositioning of selected facial features by conducting a computerized comparison of the facial position of each selected facial feature through sequential facial images. The contemporaneously detected and recorded expressional repositionings are automatically coded to an action unit, a combination of action units, and/or at least one emotion.
The present disclosure also relates to a method of assessing an individual through facial muscle activity and expressions. The method includes receiving a visual recording stored on a computer-readable medium of an individual's response to a stimulus, a first portion of the individual's response comprising facial expressions of the individual, so as to generate a chronological sequence of recorded facial images. The computer-readable medium is accessed to automatically detect and record expressional repositioning of each of a plurality of selected facial features by conducting a computerized comparison of the facial position of each selected facial feature through sequential facial images.
 

DETAILED DESCRIPTION

As utilized herein, the phrase “action unit” or “AU” can include contraction or other activity of a facial muscle or muscles that causes an observable movement of some portion of the face.
As utilized herein, the phrase “appeal” can include the valence or degree of positive versus negative emoting that a person or group of people show, thereby revealing their degree of positive emotional response, likeability or preference for what they are saying/hearing/seeing. The appeal score can be based on which specific action units or other forms of scoring emotional responses from facial expressions are involved.
As utilized herein, the term “coding to action units” can include correlating a detected single expressional repositioning or combination of contemporaneous expressional repositionings with a known single expressional repositioning or combination of contemporaneous expressional repositionings previously recognized as denoting a specific action unit whereby the detected single expressional repositioning or combination of contemporaneous expressional repositionings can be categorized as indicating the occurrence of that type of action unit. Types of action units utilized in the method of this invention may include for example, but are not limited to, those established by the Facial Action Coding System (“FACS”).
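By way of illustration only, the following sketch (not drawn from the patent itself) shows how detected repositionings might be correlated with known action-unit patterns; the repositioning labels and the lookup table are hypothetical.

# Sketch: correlate detected expressional repositionings with known action-unit patterns.
AU_PATTERNS = {
    frozenset({"inner_brow_raise"}): "AU1",
    frozenset({"brow_lower"}): "AU4",
    frozenset({"cheek_raise"}): "AU6",
    frozenset({"lip_corner_pull"}): "AU12",
    frozenset({"lip_corner_depress"}): "AU15",
}
def code_to_action_units(detected):
    # Return every action unit whose defining repositionings all appear
    # among the repositionings detected at this moment in the recording.
    return [au for pattern, au in AU_PATTERNS.items() if pattern <= detected]
print(code_to_action_units({"cheek_raise", "lip_corner_pull"}))  # ['AU6', 'AU12']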
As utilized herein, the term “coding to emotions or weighted emotional values” can include correlating a detected single expressional repositioning or combination of contemporaneous expressional repositionings with a known single expressional repositioning or combination of contemporaneous expressional repositionings previously recognized as denoting a specific emotion or weighted emotional value.
As utilized herein, the phrase “emotion” can include any single expressional repositioning or contemporaneous combination of expressional repositionings correlated to a coded unit. The expressional repositionings can be coded to action units and then translated to the various emotions, or directly coded to the various emotions, which may include but are not necessarily limited to anger, disgust, fear, happiness (true and social smile), sadness, contempt and surprise as set forth in the Facial Action Coding System (“FACS”), and the additional emotional state of skepticism.
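Again purely as an illustration, a minimal sketch of translating coded action units into emotion labels, using a simplified subset of commonly cited FACS pairings rather than the patent's own scheme:

# Sketch: map coded action units onto emotion labels (illustrative subset only).
EMOTION_PATTERNS = {
    "happiness (true smile)": {"AU6", "AU12"},
    "happiness (social smile)": {"AU12"},
    "sadness": {"AU1", "AU4", "AU15"},
    "surprise": {"AU1", "AU2", "AU5", "AU26"},
    "anger": {"AU4", "AU5", "AU7", "AU23"},
    "disgust": {"AU9", "AU15"},
}
def code_to_emotions(coded_aus):
    # Return each emotion whose full action-unit pattern is present.
    return [emo for emo, pattern in EMOTION_PATTERNS.items() if pattern <= coded_aus]
print(code_to_emotions({"AU6", "AU12"}))
# ['happiness (true smile)', 'happiness (social smile)']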
As utilized herein, the phrase “engagement” can include the amount or volume and/or intensity of emoting, perhaps by action unit activity, that a person or group of people show in response to a given stimulus or line of inquiry or presentation, or in the case of a group of people, the percentage of people with a code-able emotional response to a stimulus, topic, line of inquiry or presentation.
As utilized herein, the phrase “expressional repositioning” can include moving a facial feature on the surface of the face from a relaxed or rest position, or otherwise first position, to a different position using a facial muscle.
As utilized herein, the phrase “facial position” can include locations on the surface of the face relative to positionally stable facial features such as the bridge of the nose, the cheekbones, the crest of the helix on each ear, etc.
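A minimal sketch of how expressional repositioning could be measured against such stable anchor points follows; the landmark names, coordinates and threshold are illustrative assumptions, not taken from the patent.

import math
def relative_position(frame, feature, anchor="nose_bridge"):
    # Position of a facial feature relative to a positionally stable anchor.
    fx, fy = frame[feature]
    ax, ay = frame[anchor]
    return fx - ax, fy - ay
def repositioning(baseline, current, feature, threshold=2.0):
    # Displacement (in pixels) of a feature from its rest position,
    # or None if it stays within the threshold.
    bx, by = relative_position(baseline, feature)
    cx, cy = relative_position(current, feature)
    dist = math.hypot(cx - bx, cy - by)
    return dist if dist >= threshold else None
rest_frame = {"nose_bridge": (100, 80), "left_lip_corner": (85, 140)}
later_frame = {"nose_bridge": (101, 81), "left_lip_corner": (80, 135)}
print(repositioning(rest_frame, later_frame, "left_lip_corner"))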
As utilized herein, the term “impact” can include the potency or arousal or degree of enthusiasm a person or group of people show based on the nature of their emoting, based on for example, specific action units, their weighted value, and/or the duration of the action units involved when that is deemed relevant and included in the weighting formula.
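The three measures above could be tallied along the following lines; the valence table, intensity scale and weighting here are illustrative choices rather than the patent's formulas.

from dataclasses import dataclass
@dataclass
class EmotionEvent:
    emotion: str
    intensity: float   # 0.0 - 1.0, from the coder or classifier
    duration_s: float
VALENCE = {"happiness": +1, "surprise": 0, "sadness": -1, "anger": -1,
           "fear": -1, "disgust": -1, "contempt": -1, "skepticism": -1}
def appeal(events):
    # Net positive-versus-negative emoting, scaled to the range -1..+1.
    return sum(VALENCE.get(e.emotion, 0) for e in events) / len(events) if events else 0.0
def engagement(events):
    # Volume of emoting: total intensity shown in response to the stimulus.
    return sum(e.intensity for e in events)
def impact(events):
    # Potency of emoting: intensity weighted by how long each expression lasts.
    return sum(e.intensity * e.duration_s for e in events)
sample = [EmotionEvent("happiness", 0.8, 1.5), EmotionEvent("skepticism", 0.4, 0.5)]
print(appeal(sample), engagement(sample), impact(sample))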
As utilized herein, the term “interview” can include asking at least one question to elicit a response from another individual regarding any subject. For example, this can include asking at least one question relating to assessing the person's characteristic response to business situations in general, to situations likely to relate to specific traits within the Big Five Factor model, to questions or images that pertain to Behavioral Economics principles, or to creating scenarios in which the person is meant to become an actor or participant for the purpose of observing that person's behavior during the simulated situation. An interview may be conducted in any number of settings, including, but not limited to, seated face-to-face, seated before a computer on which questions are being posed, while enacting a scenario, etc.
As utilized herein, the term “Behavioral Economics” can include the school of economics that maintains that people may engage in behavior that is not driven by the classic economic principle of achieving the greatest utility but may, instead, reflect the influence of irrational emotions on their behavior.
As utilized herein, the term “Behavioral Economics principles” can include some or all, and not limited to the seven principles of fear of loss, self-herding (conformity), resistance to change, impulsivity, probability blinders (faulty evaluation based on framing, mental accounting, priming, etc.), self-deception (ego), and fairness bias.
As utilized herein, the term “Big Five Factor model” or OCEAN can include some or all of, and is not limited to, the five personality traits of openness, conscientiousness, extraversion, agreeableness and neuroticism that form the basis of the personality model developed by the academics McCrae and Costa.
 
As utilized herein, the term “scenario” shall include a case where the interview might involve not just questions to be answered but also a situation or scenario. For example, a scenario may include asking a potential sales force hire to simulate the sequence of making a cold phone call to a prospect and detecting what emotions appear on the person's face in being given the assignment, as well as in enacting it or discussing it afterwards.
Among its embodiments, the present disclosure can be directed to overcoming the problems inherent in relying on verbal input alone in assessing the personality type, behavioral tendencies, credibility, motivations, etc., of people by supplementing or replacing such verbal analysis with the analysis of people's facial muscle activity and expressions.
Facial coding originated with Charles Darwin, who was the first scientist to recognize that the face is the preferred method for diagnosing the emotions of others and of ourselves because facial expressions are universal (so hard-wired into the brain that even a person born blind emotes in a similar fashion to everyone else), spontaneous (because the face is the only place in the body where the muscles attach right to the skin) and abundant (because human beings have more facial muscles than any other species on the planet). Facial coding as a means of gauging people's emotions through either comprehensive or selective facial measurements is described, for example, in Ekman, P., Friesen, W. V., Facial Action Coding System: A Technique for the Measurement of Facial Movement (also known by its acronym of FACS), Consulting Psychologists Press, Palo Alto, Calif. (1978), which is hereby incorporated by reference in its entirety herein. Another measurement system for facial expressions is described in Izard, C. E., The Maximally Discriminative Facial Movement Coding System, Instructional Resources Center, University of Delaware, Newark, Del. (1983).
In accordance with FACS, the observation and analysis of a person's facial muscle activity or expressions can therefore be conducted by noting which specific muscle activity is occurring in relation to the FACS facial coding set of muscle activities that correspond to any one or more of seven core emotions: happiness, surprise, fear, anger, sadness, disgust and contempt, or others such as might be determined in the future. According to FACS, there are approximately 20 or so facial muscle activities that on their own or in combination with other muscle activities (known as action units or AUs) can be correlated to the seven core emotions. To engage in facial coding properly, an observer would want to be systematic by reviewing a given person's video files to establish, first, a baseline of what expressions are so typical for the person as to constitute a norm against which changes in expression might be considered. Then the video files would be watched in greater depth, with slow-motion, freeze-frame and replays as necessary to document which specific AUs happen and at what time interval (down to even 1/30th of a second), to enable review or cross-checking by a second facial coder in the case of manual coding, or by human checkers in the case of semi- or fully-automated facial coding. See U.S. Pat. No. 7,113,916 (granted Sep. 26, 2006 to the inventor), which is hereby incorporated by reference in its entirety herein.
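A rough sketch of that baseline-then-review workflow, assuming per-frame action-unit observations are already available from a coder or classifier (the frame data and the 50% baseline cutoff are illustrative assumptions):

from collections import Counter
FPS = 30  # frames per second, i.e. 1/30th of a second per frame
def establish_baseline(frames, baseline_seconds=10, cutoff=0.5):
    # Action units present in more than `cutoff` of the opening frames are
    # treated as the person's norm rather than as emotional events.
    window = frames[: baseline_seconds * FPS] or frames
    counts = Counter(au for frame in window for au in frame)
    return {au for au, n in counts.items() if n / len(window) > cutoff}
def log_au_events(frames, baseline):
    # Yield (timestamp, AU) for every departure from the baseline, frame by frame.
    for i, frame in enumerate(frames):
        for au in sorted(frame - baseline):
            yield (i / FPS, au)
frames = [{"AU12"}] * 300 + [{"AU4", "AU12"}] * 3 + [{"AU12"}] * 10
baseline = establish_baseline(frames)
for t, au in log_au_events(frames, baseline):
    print(f"{t:6.3f}s  {au}")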
Another option for analyzing emotions is disclosed in Proceedings of Measuring Behavior 2005, Wageningen, 30 Aug.-2 Sep. 2005, Eds. L. P. J. J. Noldus, F. Grieco, L. W. S. Loijens and P. H. Zimmerman, which is hereby incorporated by reference herein in its entirety. The article details a system called FaceReader™ from VicarVision that uses a set of images to derive an artificial face model to compare with the expressions it is analyzing. A neural network is then trained to recognize the expressions shown through comparison between the expression and the model.
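As a generic stand-in for that idea (and not a reproduction of FaceReader™), a small neural network can be trained to map face-model features onto expression labels; the synthetic data below exists only to make the sketch runnable.

import numpy as np
from sklearn.neural_network import MLPClassifier
rng = np.random.default_rng(0)
n_samples, n_features = 600, 20          # e.g., ten landmark (dx, dy) pairs
X = rng.normal(size=(n_samples, n_features))
# Hypothetical labels: 0 = neutral, 1 = smile, 2 = frown, loosely tied to feature 0.
y = np.digitize(X[:, 0], bins=[-0.5, 0.5])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X[:500], y[:500])                # train on the first 500 samples
print("held-out accuracy:", clf.score(X[500:], y[500:]))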
In terms of statistical output, another way that the facial coding results can be depicted is to provide a percentage of positive, neutral or negative response to a given question, scenario, etc.
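Such a breakdown could be computed roughly as follows; the valence assignments are assumptions made for illustration.

from collections import Counter
VALENCE = {"happiness": "positive", "surprise": "neutral", "sadness": "negative",
           "anger": "negative", "fear": "negative", "disgust": "negative",
           "contempt": "negative", "skepticism": "negative"}
def valence_breakdown(coded_responses):
    # coded_responses maps each question to the emotions coded during it; returns
    # the percentage of positive, neutral and negative responses per question.
    out = {}
    for question, emotions in coded_responses.items():
        counts = Counter(VALENCE.get(e, "neutral") for e in emotions)
        total = sum(counts.values()) or 1
        out[question] = {v: round(100 * counts[v] / total, 1)
                         for v in ("positive", "neutral", "negative")}
    return out
print(valence_breakdown({"Q1": ["happiness", "happiness", "skepticism"],
                         "Q2": ["surprise"]}))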
In terms of statistical output, yet another output that can be used is to consider an example like a mock jury being shown a visual aid intended for courtroom display, and to discern where the subjects look, based on the use of eye tracking, and how they feel about what they are taking in, using facial coding. For background, see U.S. pending patent application Ser. No. 11/491,535, titled “Method and Report Assessing Consumer Reaction to a Stimulus by Matching Eye Position with Facial Coding”, filed by this inventor on Jul. 21, 2006, the entirety of which is hereby incorporated by reference herein. Such synchronization of eye tracking results and facial coding results can of course be utilized in other fashions, too, for matters involving personnel, such as how a job applicant inspects and reacts to company advertising, ethics guidelines, etc.
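One simple way such synchronization could work, assuming timestamped gaze fixations and facial-coding events, is sketched below; the field names and the one-second window are illustrative.

def link_gaze_and_emotion(fixations, emotion_events, window_s=1.0):
    # fixations: list of (time_s, area_of_interest); emotion_events: list of (time_s, emotion).
    # Pair each area of interest with any emotion coded within `window_s` seconds
    # after the subject began looking at it.
    linked = []
    for f_time, aoi in fixations:
        for e_time, emotion in emotion_events:
            if 0 <= e_time - f_time <= window_s:
                linked.append((aoi, emotion))
    return linked
fixations = [(0.2, "headline"), (2.0, "damages chart"), (4.5, "photo")]
emotions = [(0.6, "happiness"), (2.4, "skepticism")]
print(link_gaze_and_emotion(fixations, emotions))
# [('headline', 'happiness'), ('damages chart', 'skepticism')]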
Another embodiment can utilize frame-by-frame, split-second measurements to aid in the detection of possible instances of lying by taking into account a variety of patterns. Natural, involuntary expressions originate in the sub-cortical areas of the brain. These sub-cortically initiated facial expressions are characterized by synchronized, smooth, symmetrical, consistent and reflex-like facial muscle movements, whereas volitional facial expressions tend to be less smooth. Thus an embodiment of this invention can account for whether a muscle activity has a natural onset (smooth and fast, versus the slow and jerky onsets of posed expressions), a peak, and an offset such that the emotion being shown flows on and off the face; a jerky onset, a sudden ending rather than a natural fade or offset, or a protracted peak (hereby dubbed a “butte”) can mark an expression that may not be authentically felt. Likewise, software, as part of a system as described herein, may aid in noting expressions that are asymmetrical, such that one side of the face reveals the expression more than the other (in most cases except for contempt expressions, which are inherently unilateral), as an indication that the expression may be forced onto the face or otherwise contrived. Identifying odd timing, such that the expression arrives too early or too late in conjunction with expressed statements and is as such out of synch; identifying mixed signals, where negative emotions accompany or are in the timing vicinity of a smile; noting when a surprise look or smile lasts longer than expected; and detecting whether multiple action units peak simultaneously, or fail to do so, can all be clues to an unnatural, posed expression.
The format to be enacted can be made easier to enact on a standard, repeatable basis without operator error by using computer software to ensure that the format involves every element (question/scenario, etc.) in either a set order sequence or an order that is intentionally randomized. Steps can also be taken to ensure that high quality images of the participant's facial expressions are obtained throughout the session. The person can be instructed, for example, to (i) look into the camera, (ii) avoid any extreme or radical head movement during the session, and (iii) keep from touching their face during the session. A reasonably close-up filming can be used, including one in which the person's face is at least ¾ths visible, as opposed to a profile filming position. Both the oral statements (audio) and the facial expressions (video) can be captured by the camera for the purposes of subsequent review, or the video files alone can be captured for the purposes of the analysis to be performed.
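A few of the posed-expression cues described above could be screened for along these lines; the thresholds and the per-frame intensity representation are illustrative assumptions, not the patent's criteria.

def posed_expression_flags(intensities, left_peak, right_peak,
                           fps=30, onset_jump=0.5, butte_s=2.0, asymmetry=0.4):
    # intensities: per-frame intensity (0-1) of a single expression episode.
    # left_peak / right_peak: peak intensity on each side of the face.
    flags = []
    # Abrupt onset: a large jump between consecutive frames instead of a smooth rise.
    if any(b - a > onset_jump for a, b in zip(intensities, intensities[1:])):
        flags.append("jerky onset")
    # Protracted peak: the expression holds near its maximum for too long (a "butte").
    peak = max(intensities, default=0.0)
    if sum(v > 0.9 * peak for v in intensities) / fps > butte_s:
        flags.append("protracted peak (butte)")
    # Asymmetry: one side of the face shows the expression far more than the other.
    if abs(left_peak - right_peak) > asymmetry:
        flags.append("asymmetric expression")
    return flags
episode = [0.0, 0.05, 0.7, 0.75, 0.8, 0.8, 0.78, 0.3, 0.0]
print(posed_expression_flags(episode, left_peak=0.8, right_peak=0.3))
# ['jerky onset', 'asymmetric expression']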

