Affective Computing
By Heath Yates, Kansas State University
Introduction
"Affective computing" is defined as the study and development of systems, computational devices, or wearables that can process, interpret, recognize, and detect human affects (Picard, 1997). This area of research is very active and is driven by several promising areas for applications such as virtual reality, smart surveillance, perceptual interfaces, and so on (Tao, 2005). As early as 1960, Page advanced a control-centric theory that proposed that users someday would be able to share their thoughts in real times with machines (Page, 1962). In that side, there is an even wider long term picture in the pursuit of affective computing.
Affective computing is also related to artificial intelligence. For example, the HAL 9000 computer in the Stanley Kubrick and Arthur C. Clarke film "2001: A Space Odyssey" is an affective computer. HAL could recognize individuals, read lips, understand the aesthetics of art sketches, and recognize the emotions expressed by his astronaut colleagues (Picard, 2001). This example is also a cautionary tale about affective abilities we might not want to give machines, such as the ability to lie and to do physical harm to humans. Ultimately, though, affective computing is not about giving machines emotions but about giving them emotional intelligence and the ability to learn about the affective states of humans. Picard and others proposed in 2001 that machine intelligence needs to include emotional intelligence, and that affective computing can provide this capability by processing physiological signals to infer information about human affect (Picard et al., 2001). The field continues to progress and to find new applications and use cases.
Arguably, the benefits outweigh the risks. For example, beyond artificial intelligence, affective computing has been proposed as a way to combat drug addiction (Gustafson, 2005). Researchers have also developed affective wearable devices that report social-emotional information in real time to assist individuals with autism (El Kaliouby, 2006). Research at Kansas State University suggests another area that deserves attention: built environments. The multifaceted nature of the research problem makes this work genuinely interdisciplinary, with something to offer researchers from many backgrounds.
Affective computing shows promise for eventually giving machines the ability to sense and recognize expressions of human emotion such as interest, distress, pleasure, and stress, with the goal of having the machine choose more helpful and less aggravating behavior (Picard, 2004). This article provides the reader an overview of this growing field, the emerging interest at Kansas State University, and a brief discussion of the field's future potential.
History
The Beginning: 1990 – 1999
Affective computing research started picking up steam in the late 1990s. In 1997, Picard and Healey demonstrated the use of wearables to detect human affect states using respiration, galvanic skin response (GSR), blood volume pressure (BVP), electromyogram (EMG), and heart rate (HR) (Picard, 1997). They also outlined several general challenges in using wearables to infer human emotions, challenges which have persisted in the field ever since. A year later, they demonstrated that the same signals could be used to detect human affect states using pattern recognition via a Fisher linear discriminant. They also considered participants' affective state changes over an average period of three minutes, when previous studies had typically examined one- to ten-second reactions (Healey, 1998). Earlier researchers had designed experiments with the goal of deliberately frustrating participants in order to observe their physiological and emotional states (Cacioppo, 1990). Detecting stress using physiological signals such as galvanic skin conductivity, together with real-time video capturing the participant's point of view, was pioneered by Healey with StartleCam (Healey & Picard, 1998). StartleCam could detect when a participant was startled by using pattern recognition that relied on a time-reversed filter and a convolution sum. In other words, the device could detect when the participant was startled or stressed by smoothing out noise in the data while preserving the core pattern produced by the sensors.
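To make the modeling concrete, here is a minimal sketch, assuming windowed physiological features and synthetic data, of how a Fisher-style linear discriminant can be applied to this kind of affect classification using scikit-learn. It is illustrative only, not the original study's pipeline, and the feature names are assumptions.

```python
# Minimal sketch (not the original study's code): classifying affect states
# from physiological features with a Fisher-style linear discriminant.
# The features and synthetic data here are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-window features: mean GSR, mean HR, respiration rate, EMG power.
n_windows = 200
X = rng.normal(size=(n_windows, 4))
y = rng.integers(0, 2, size=n_windows)  # 0 = calm, 1 = stressed (stand-in labels)

clf = LinearDiscriminantAnalysis()  # scikit-learn's Fisher-style discriminant
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```

With real data, the random arrays above would be replaced by features computed over sliding windows of the recorded signals.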
Interestingly, wearable jewelry, shoes, glasses, and other affective wearable devices were anticipated by researchers in the late 1990s as well (Picard, 2000). One example of this nascent research was a pair of glasses equipped with embedded sensors to detect confusion and interest from users with moderately high accuracy (Scheirer, 1999). User-annotated approaches were also explored in the offline and online recognition of emotion from physiological data. Specifically, Picard had a trained actress, equipped with sensors measuring EMG, BVP, and GSR, express eight states (neutral, anger, hate, grief, platonic love, romantic love, joy, and reverence) to be recorded by the computer. Sequential floating forward search (SFFS) and Fisher projection (FP) were applied, achieving recognition rates of 50% to 80% depending on whether the algorithm was run offline or online (Picard, 1999). Healey studied driver stress by recording a driver's facial expressions with a camera approximately once a minute while driving, in addition to measuring BVP, EKG, and GSR. The initial study determined that there were many confounding factors and that participants' responses to similar stimuli differed over the course of the study (Healey, 1999).
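As an illustration of the feature-selection idea, the sketch below uses scikit-learn's SequentialFeatureSelector on synthetic data. Note that this implements plain sequential forward selection rather than the floating (SFFS) variant used in the original work, and the feature counts and labels are assumptions.

```python
# Minimal sketch of sequential forward feature selection over physiological
# features. This is plain forward selection, not the "floating" SFFS variant
# from the original study; data and labels are synthetic stand-ins.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(6)
X = rng.normal(size=(160, 12))   # stand-in EMG/BVP/GSR statistics
y = rng.integers(0, 8, 160)      # stand-in labels for the eight expressed states

selector = SequentialFeatureSelector(
    LinearDiscriminantAnalysis(), n_features_to_select=4, direction="forward", cv=5
)
selector.fit(X, y)
print("Selected feature indices:", np.flatnonzero(selector.get_support()))
```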
The Early Years: 2000 – 2010
At the beginning of the early 2000s, research began to expand into other areas and grow more sophisticated. For example, affective medicine emerged, which aims to teach computers to recognize emotion and to respond to people with active listening, empathy, and sympathy (Picard, 2002). Work also continued on recognizing the affective state of a human while driving. Healey was able to recognize driver stress with about 80% accuracy by using a linear discriminant function on participants' respiration, EMG, HR, and GSR (Healey, 2000). An interesting head nod and head shake detector was created by researchers who used hidden Markov models tracking pupil positions and directions of head movement to detect when a head nod or shake occurs (Kapoor, 2001). In 2002, researchers also started looking into using affective computing to assist individuals diagnosed with autism (Blocher, 2002). Affective computing was also used to monitor stress and heart health in real time with a phone and a wearable computer as early as 2002 (Picard, 2002).
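A minimal sketch of the gesture-detection idea follows: train one hidden Markov model per gesture and classify a new sequence by which model assigns it higher likelihood. This uses the third-party hmmlearn library with synthetic pupil-motion sequences, and it is not Kapoor and Picard's implementation.

```python
# Minimal sketch (not the original detector): one hidden Markov model per
# gesture via hmmlearn. The (dx, dy) motion sequences are synthetic
# stand-ins for tracked pupil positions.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)

def make_sequences(axis, n_seqs=20, length=30):
    """Synthetic gestures: nods move mostly vertically, shakes horizontally."""
    seqs = []
    for _ in range(n_seqs):
        motion = np.zeros((length, 2))
        motion[:, axis] = np.sin(np.linspace(0, 4 * np.pi, length))
        seqs.append(motion + rng.normal(scale=0.1, size=(length, 2)))
    return seqs

def fit_hmm(seqs):
    """Fit a small Gaussian HMM to a set of gesture sequences."""
    X, lengths = np.vstack(seqs), [len(s) for s in seqs]
    model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

nod_model = fit_hmm(make_sequences(axis=1))    # vertical motion
shake_model = fit_hmm(make_sequences(axis=0))  # horizontal motion

# Classify a new sequence by which gesture model scores it higher.
test = make_sequences(axis=1, n_seqs=1)[0]
print("nod" if nod_model.score(test) > shake_model.score(test) else "shake")
```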
Real-time facial feature tracking continued to evolve, with systems tracking eyes and eyebrows in real time and extracting features using principal component analysis (PCA) (Kapoor, 2002). Modeling was also becoming more sophisticated: researchers used dynamic Bayesian networks (DBN) and mixtures of hidden Markov models to classify drivers' speech under stress (Fernandez, 2003). By 2004, the Defense Advanced Research Projects Agency (DARPA) had commissioned a study of emotional state recognition for determining potential criminal intent, which sparked debate about ethics in affective computing (Reynolds, 2004). Researchers also proposed around this time that affective systems should be minimalist in ubiquitous interface design; that is, they should combine parsimony and transparency, where parsimony means as minimal a user interface as possible and transparency means reducing cognitive demands as much as possible (Wren, 2004). It has also been shown that when agents are designed to simulate care and empathy, individuals both perceive the agents as caring and are willing to continue working with them (Bickmore, 2004).
By 2005, the field continued to evolve and find many new applications. Research was conducted into affective computing in adversarial situations such as poker and interviewing experiments (Reynolds, 2005). One interesting result showed how good typography can induce a good mood, which in turn has been shown to lead individuals to perform better cognitively and more creatively (Larson, 2005). Examination of stress also continued to advance and grow in sophistication. Healey showed that physiological signals can provide a metric of stress for individuals driving a vehicle, collecting data over drives of approximately 50 minutes (Healey, 2005). Researchers used Bluetooth and a custom sensor called HandWave to measure skin conductance and detect the emotional, cognitive, and physical arousal of users (Strauss, 2005). Affective computing was also shown in 2005 to have potential in educational learning environments and sensor fusion: Kapoor and Picard demonstrated that multimodal mixture-of-Gaussian models could achieve accuracy of over 86% in detecting whether children were interested or disinterested in solving a puzzle (Kapoor, 2005).
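The flavor of such a mixture-of-Gaussians classifier can be sketched as follows, fitting one mixture per affective state and classifying by likelihood. The fused features are synthetic stand-ins; this illustrates the general technique, not the study's actual multimodal pipeline.

```python
# Minimal sketch (illustrative, not Kapoor & Picard's implementation):
# a class-conditional Gaussian mixture classifier for "interested" vs.
# "disinterested", with synthetic features standing in for fused posture,
# facial, and game-state channels.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Stand-in fused features for each affective state.
X_interest = rng.normal(loc=1.0, size=(100, 6))
X_bored = rng.normal(loc=-1.0, size=(100, 6))

gmm_interest = GaussianMixture(n_components=2).fit(X_interest)
gmm_bored = GaussianMixture(n_components=2).fit(X_bored)

def predict(x):
    """Classify by comparing per-class mixture log-likelihoods."""
    x = np.atleast_2d(x)
    return "interested" if gmm_interest.score(x) > gmm_bored.score(x) else "disinterested"

print(predict(rng.normal(loc=1.0, size=6)))
```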
A few years later, Kapoor went further and demonstrated that a similar approach using Gaussian and Bayesian methods achieved 79% accuracy in detecting user frustration with an intelligent tutoring application (Kapoor, 2007). Research also continued into the benefits of affective computing for assisting individuals who have autism (Hoque, 2008). Sensor technology itself started to advance in useful ways. In 2009, Poh and Picard developed a wristband for ambulatory assessment of electrodermal activity (EDA) (Poh, 2009). This technology would eventually find its way into commercial products such as those offered by Empatica. Heart rate variability and electrodermal activity were also used to classify children with atypical sensory processing versus those without, using machine learning methods such as k-nearest neighbors (KNN), decision trees, support vector machines (SVM), and linear discriminants with some accuracy (Hedman, 2009). This demonstrated the potential viability of applying machine learning to affective computing problems.
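As a hedged sketch of this kind of model comparison, the snippet below cross-validates the four classifier families named above on a synthetic stand-in feature matrix; real studies would substitute HRV and EDA features and tuned hyperparameters.

```python
# Minimal sketch of comparing the classifier families mentioned above
# (KNN, decision trees, SVM, linear discriminants); the feature matrix
# is synthetic and purely illustrative.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 8))   # stand-in HRV/EDA features
y = rng.integers(0, 2, 150)     # stand-in group labels

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Decision tree": DecisionTreeClassifier(max_depth=4),
    "SVM": SVC(kernel="rbf"),
    "LDA": LinearDiscriminantAnalysis(),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())
```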
Rapid Advances and Commercialization: 2010 – 2017
Since 2010, affective computing has seen rapid advances in sensor technology, applications of machine learning, and the commercialization of wearables. In 2010, Poh and Picard continued to refine their wrist-worn electrodermal activity (EDA) band and showed that it had advantages over FDA-approved products at the time (Poh, 2010). Eydgahi, Picard, and colleagues proposed long-term continuous monitoring of physiological data and identified the potential for commercial products to fill this niche (Fletcher, 2010). In the same year, Fletcher and Picard also proposed wearable sensors used with mobile phones as a platform for advances in affective computing and low-cost health care (Fletcher, 2010).
In 2011, researchers pioneered affective computing furniture in the form of a medical mirror that provided health information by tracking a person's daily vitals to aid in their health management (Poh, 2011). Researchers also trained computers to distinguish polite smiles from genuine smiles in the context of a banking scenario with a banker and a client (Hoque, 2011). Cardiovascular monitoring using earphones and a mobile device was also developed, relying on a digital signal controller to process the signals from sensors in the headphones (Poh, 2012).
In 2013, researchers built a novel mirror sensor system and interface that gave users feedback on their inner physiological responses, with the goal of improving scientific understanding of psychophysiology in natural settings (Hernandez, 2013). One of the first applications of affective computing in political science came when researchers measured individuals' candidate preferences from their spontaneous facial reactions to debate video clips, with accuracy of over 74% (McDuff, 2013). Researchers also used mobile phone usage data and a wrist sensor, collected over five days, to detect stress with 75% binary classification accuracy, using surveys as a baseline (Sano, 2013). In 2014, development of commercial wearables with high-precision biosensors began to take off. Picard introduced the Empatica E3, a commercial wearable with photoplethysmography (PPG), electrodermal activity (EDA), 3-axis accelerometer, and temperature sensors that communicates over Bluetooth (Garbarino, 2014). This product has since been superseded by the Empatica E4. McDuff (2014) demonstrated the ability to measure cognitive stress remotely via heart rate variability, using Naive Bayes and SVM as classifiers.
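To illustrate the heart rate variability approach, here is a minimal sketch, with synthetic inter-beat (RR) interval data and assumed feature choices, of deriving standard HRV features and classifying stress with Naive Bayes and an SVM. It shows the general technique, not the cited studies' actual pipelines.

```python
# Minimal sketch (assumptions throughout): simple HRV features from
# synthetic RR intervals, classified with Naive Bayes and an SVM.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(4)

def hrv_features(rr_ms):
    """Standard HRV summaries from RR intervals in milliseconds."""
    diffs = np.diff(rr_ms)
    rmssd = np.sqrt(np.mean(diffs ** 2))  # short-term variability
    sdnn = np.std(rr_ms)                  # overall variability
    return [rmssd, sdnn, np.mean(rr_ms)]

# Synthetic RR series: "stressed" windows have a higher rate and lower variability.
X, y = [], []
for label, (mean_rr, sd) in enumerate([(850, 60), (700, 30)]):
    for _ in range(50):
        X.append(hrv_features(rng.normal(mean_rr, sd, size=120)))
        y.append(label)
X, y = np.array(X), np.array(y)

for clf in (GaussianNB(), SVC(kernel="rbf")):
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean())
```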
In 2015, Hernandez demonstrated that wearables could be used for stress detection and management, again using methods such as Naive Bayes and SVM (Hernandez, 2015). Recent research has also begun exploring the potential of affective wearables to detect seizures in outpatient settings. All evidence points toward an accelerating pace of advancement, with more powerful wearable devices and more sophisticated machine learning approaches such as deep learning. At Kansas State University, researchers are now considering how affective computing can help us better understand how humans react to different environments in urban settings.
Affective Computing at the Biosecurity Research Institute, Kansas State University
The field of affective computing is very young; arguably, it is less than 25 years old. A group of individuals with diverse backgrounds and disciplines has formed to explore affective computing problems at the Biosecurity Research Institute, Kansas State University. The diversity of their expertise is indicative of how interdisciplinary affective computing is. Dr. William H. Hsu is Director of the Knowledge Discovery in Databases (KDD) laboratory at Kansas State University and is an expert in artificial intelligence and machine learning. Will Baldwin serves as the director of IT at the Biosecurity Research Institute (BRI) and is an expert in software engineering, sensors, and the Internet of Things (IoT). Dr. Brent Chamberlain is an expert in visualization, GIS, environmental psychology, and computational approaches to landscapes. Heath Yates is a software engineer at the BRI and a research graduate assistant at KDD whose doctoral research is focused on affective computing.
Figure 2: Affective Computing Team at Kansas State University (from left to right: Dr. William H. Hsu, Heath Yates, Dr. Brent C. Chamberlain, and Will Baldwin)
The team is currently working on custom mobile sensors that will let participants record data indicating their emotional states. This will help the field better establish ground truth for the data produced by wearables, and the team hopes the results will lead to improved reproducibility and replication. The team is also refining machine learning approaches to detecting human emotion using sensor fusion informed by the participants' environment.
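As a rough sketch of what feature-level sensor fusion can look like (an assumption about the general approach, not the team's actual pipeline), per-sensor features can be normalized and concatenated before training a single classifier:

```python
# Minimal sketch of feature-level sensor fusion: per-sensor features are
# concatenated and normalized, then a single classifier is trained.
# All modalities and labels here are hypothetical stand-ins.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n = 100

# Hypothetical per-window features from three modalities.
eda = rng.normal(size=(n, 3))       # e.g., skin conductance level, peak counts
hr = rng.normal(size=(n, 2))        # e.g., mean HR, RMSSD
location = rng.normal(size=(n, 2))  # e.g., coarse environment descriptors
y = rng.integers(0, 2, n)           # self-reported emotion label

X = np.hstack([eda, hr, location])  # feature-level fusion by concatenation
model = make_pipeline(StandardScaler(), SVC()).fit(X, y)
print(model.predict(X[:5]))
```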
Figure 3: Working on a Sensor to be used in an Experiment
The sensor shown above will ultimately be used in affective computing experiments by asking users to indicate the intensity of their emotions. Interest in affective computing is growing, and other researchers at Kansas State University working in this space will soon be joining the team.
Figure 4: Dr. Carlos Castellanos, Interdisciplinary Scientist and Artist at K-State
Dr. Carlos Castellanos is an interdisciplinary scientist and artist. He is reinterpreting affective computing and exploring its potential as a medium of art, one that allows individuals who view the art to actively participate in it. In short, affective computing shows promise to evolve art into a more participatory, interactive, and fluid medium. In his piece BodyDaemon, Castellanos implemented a server that responds to a body's physiological and emotional states through a software program (Castellanos & Schiphorst, 2009).
Figure 5: BodyDaemon Art Demonstration
Join Us in Research?
Research into affective computing at Kansas State University is young, vibrant, and growing fast. New and existing collaborations are quickly forming, deepening, and expanding, and the team is looking for collaborators beyond Kansas State University. This emerging area will require interested researchers and students working together to solve novel problems in detecting human emotion across a multitude of applications.
The Future
While affective computing is a young field, there have been many advances in the last two decades. Wearables have allowed researchers to ask new questions and have provided new kinds of data that can be used to infer human emotional states and to teach computers how to recognize them. Applications have emerged in a diverse range of scenarios: assisting individuals with autism, detecting driver stress, recognizing emotion in faces, and more.
At Kansas State University, researchers hope to expand the team with individuals who share a core interest in furthering the research and exploration of affective computing. It is a nascent field of study that shows promise. Most importantly, it is hoped that this new and emerging area of research will attract students from all backgrounds and disciplines. Its interdisciplinary nature will require students from architecture, art, computer science, electrical engineering, psychology, kinesiology, and more. In the immediate future at Kansas State, there will be a great deal of work refining sensor technology and machine learning approaches for detecting human emotion.
The future is difficult to predict precisely, but some current trends are worth mentioning in the context of affective computing. The revolution in artificial intelligence and machine learning now underway is most visible in computers recently beating humans at the game of Go. Industry trends toward making computers more responsive to user needs require that these machines be aware of human emotion on at least a basic level. This has the potential to disrupt how we interact with technology in our vehicles, homes, and hospitals.
Acknowledgments
Thanks to L. Ashmore for taking the pictures of the affective computing team, and to Dr. Carlos Castellanos for his gracious assistance.
References
Bickmore, T. W., & Picard, R. W. (2004, Apr.). Towards caring machines. In CHI'04 extended abstracts on Human factors in computing systems (pp. 1489-1492). ACM.
Blocher, K., & Picard, R. W. (2002). Affective social quest. In Socially intelligent agents (pp. 133-140). Springer US.
Cacioppo, J. T., & Tassinary, L. G. (1990). Inferring psychological significance from physiological signals. American Psychologist, 45(1), 16.
Castellanos, C., & Schiphorst, T. (2009, Oct.). BodyDaemon. In Proceedings of the seventh ACM conference on Creativity and cognition (pp. 423-424). ACM.
El Kaliouby, R., Teeters, A., & Picard, R. W. (2006, Apr.). An exploratory social-emotional prosthetic for autism spectrum disorders. In Wearable and Implantable Body Sensor Networks, 2006. BSN 2006. International Workshop on (2 pp.). IEEE.
Fernandez, R., & Picard, R. W. (2003). Modeling drivers’ speech under stress. Speech Communication, 40(1), 145-159.
Fletcher, R. R., Dobson, K., Goodwin, M. S., Eydgahi, H., Wilder-Smith, O., Fernholz, D., ... & Picard, R. W. (2010). iCalm: Wearable sensor and network architecture for wirelessly communicating and logging autonomic activity. IEEE Transactions on Information Technology in Biomedicine, 14(2), 215-223.
Garbarino, M., Lai, M., Bender, D., Picard, R. W., & Tognetti, S. (2014, November). Empatica E3—A wearable wireless multi-sensor device for real-time computerized biofeedback and data acquisition. In Wireless Mobile Communication and Healthcare (Mobihealth), 2014 EAI 4th International Conference on (pp. 39-42). IEEE.
Gustafson, D. H., Palesh, T. E., Picard, R. W., Plsek, P. E., Maher, L., & Capoccia, V. A. (2005). Automating addiction treatment: Enhancing the human experience and creating a fix for the future. Studies in health technology and informatics, 118, 186-206.
Healey, J., & Picard, R. (1998, May). Digital processing of affective signals. In Acoustics, Speech and Signal Processing, 1998. Proceedings of the 1998 IEEE International Conference on (Vol. 6, pp. 3749-3752). IEEE.
Healey, J., & Picard, R. (2000). SmartCar: detecting driver stress. In Pattern Recognition, 2000. Proceedings. 15th International Conference on (Vol. 4, pp. 218-221). IEEE.
Healey, J., & Picard, R. W. (1998, Oct.). Startlecam: A cybernetic wearable camera. In Wearable Computers, 1998. Digest of Papers. Second International Symposium on (pp. 42-49). IEEE.
Healey, J. A., & Picard, R. W. (2005). Detecting stress during real-world driving tasks using physiological sensors. IEEE Transactions on intelligent transportation systems, 6(2), 156-166.
Healey, J., Seger, J., & Picard, R. (1999). Quantifying driver stress: Developing a system for collecting and processing bio-metric signals in natural situations. Biomedical sciences instrumentation, 35, 193-198.
Hedman, E., Eckhardt, M., Poh, M. Z., Goodwin, M. S., Miller, L. J., Brett-Green, B., Schoen, S. A., Nielsen, D. M., & Picard, R. W. (2009, May). Heart rate variability and electrodermal activity in children with atypical sensory processing: Exploratory pattern analysis. In Extended Abstracts of IMFAR 2009, Chicago, Illinois, USA.
Hernandez, J. (2015). Towards wearable stress measurement (Doctoral dissertation, Massachusetts Institute of Technology).
Hernandez, J., McDuff, D., Fletcher, R., & Picard, R. W. (2013, March). Inside-out: Reflecting on your inner state. In Pervasive Computing and Communications Workshops (PERCOM Workshops), 2013 IEEE International Conference on (pp. 324-327). IEEE.
Hoque, M. E. (2008, Oct.). Analysis of speech properties of neurotypicals and individuals diagnosed with autism and Down syndrome. In Proceedings of the 10th international ACM SIGACCESS conference on Computers and accessibility (pp. 311-312). ACM.
Hoque, M., Morency, L. P., & Picard, R. (2011). Are you friendly or just polite?–Analysis of smiles in spontaneous face-to-face interactions. Affective Computing and Intelligent Interaction, 135-144.
Kapoor, A., Burleson, W., & Picard, R. W. (2007). Automatic prediction of frustration. International Journal of Human-Computer Studies, 65(8), 724-736.
Kapoor, A., & Picard, R. W. (2001, Nov.). A real-time head nod and shake detector. In Proceedings of the 2001 workshop on Perceptive user interfaces (pp. 1-5). ACM.
Kapoor, A., & Picard, R. W. (2002, May). Real-time, fully automatic upper facial feature tracking. In Automatic Face and Gesture Recognition, 2002. Proceedings. Fifth IEEE International Conference on (pp. 10-15). IEEE.
Kapoor, A., & Picard, R. W. (2005, November). Multimodal affect recognition in learning environments. In Proceedings of the 13th Annual ACM International Conference on Multimedia (pp. 677-682). ACM.
Larson, K., & Picard, R. W. (2005). The aesthetics of reading. In Proceedings of the Human-Computer Interaction Consortium Conference, Snow Mountain Ranch, Fraser, Colorado.
McDuff, D., El Kaliouby, R., Kodra, E., & Picard, R. (2013, Sept.). Measuring voter's candidate preference based on affective responses to election debates. In Affective Computing and Intelligent Interaction (ACII), 2013 Humaine Association Conference on (pp. 369-374). IEEE.
McDuff, D., Gontarek, S., & Picard, R. (2014, Aug.). Remote measurement of cognitive stress via heart rate variability. In Engineering in Medicine and Biology Society (EMBC), 2014 36th Annual International Conference of the IEEE (pp. 2957-2960). IEEE.
Page, R. M. (1962). Man-Machine Coupling-2012 AD. Proceedings of the IRE, 50(5), 613-614.
Picard, R. W. (2004). Toward machines with emotional intelligence. Invited talk, International Conference on Informatics in Control, Automation and Robotics (ICINCO).
Picard, R. W. (2001, June). Building HAL: Computers that sense, recognize, and respond to human emotion. In Photonics West 2001-Electronic Imaging (pp. 518-523). International Society for Optics and Photonics.
Picard, R. W. (2002). Affective medicine: Technology with emotional intelligence. Studies in health technology and informatics, 69-84.
Picard, R. W., & Du, C. (2002). Monitoring stress and heart health with a phone and wearable computer. Motorola Offspring Journal, 1, 14-22.
Picard, R. W., & Picard, R. (1997). Affective computing (Vol. 252). Cambridge: MIT press.
Picard, R. W., & Vyzas, E. (1999). ªOffline and Online Recognition of Emotion Expression from Physiological Data, º Emotion-Based Agent Architectures Workshop Notes, Int'l Conf. In Autonomous Agents (pp. 135-142).
Picard, R. W., & Rosalind, W. (2000). Toward agents that recognize emotion. VIVEK-BOMBAY-, 13(1), 3-13.
Picard, R. W., Vyzas, E., & Healey, J. (2001). Toward machine emotional intelligence: Analysis of affective physiological state. IEEE transactions on pattern analysis and machine intelligence, 23(10), 1175-1191.
Poh, M. Z., Kim, K., Goessling, A., Swenson, N., & Picard, R. (2012). Cardiovascular monitoring using earphones and a mobile device. IEEE Pervasive Computing, 11(4), 18-26.
Poh, M. Z., McDuff, D., & Picard, R. (2011, Aug.). A medical mirror for non-contact health monitoring. In ACM SIGGRAPH 2011 Emerging Technologies (p. 2). ACM.
Poh, M. Z., Swenson, N. C., & Picard, R. W. (2009, June). Comfortable sensor wristband for ambulatory assessment of electrodermal activity. In 1st Biennial Conference of the Society for Ambulatory Assessment, Greifswald, Germany.
Poh, M. Z., Swenson, N. C., & Picard, R. W. (2010). A wearable sensor for unobtrusive, long-term assessment of electrodermal activity. IEEE transactions on Biomedical engineering, 57(5), 1243-1252.
Reynolds, C. J. (2005). Adversarial uses of affective computing and ethical implications (Doctoral dissertation, Massachusetts Institute of Technology).
Reynolds, C., & Picard, R. (2004, Apr.). Affective sensors, privacy, and ethical contracts. In CHI'04 Extended Abstracts on Human Factors in Computing Systems (pp. 1103-1106). ACM.
Sano, A., & Picard, R. W. (2013, September). Stress recognition using wearable sensors and mobile phones. In Affective Computing and Intelligent Interaction (ACII), 2013 Humaine Association Conference on (pp. 671-676). IEEE.
Scheirer, J., Fernandez, R., & Picard, R. W. (1999, May). Expression glasses: a wearable device for facial expression recognition. In CHI'99 Extended Abstracts on Human Factors in Computing Systems (pp. 262-263). ACM.
Strauss, M., Reynolds, C., Hughes, S., Park, K., McDarby, G., & Picard, R. (2005). The handwave bluetooth skin conductance sensor. Affective computing and intelligent interaction, 699-706.
Tao, J., & Tan, T. (2005, Oct.). Affective computing: A review. In International Conference on Affective computing and intelligent interaction (pp. 981-995). Springer Berlin Heidelberg.
Wren, C. R., & Reynolds, C. J. (2004). Minimalism in ubiquitous interface design. Personal and Ubiquitous Computing, 8(5), 370-373.
About the Author
Heath Yates is a Ph.D. candidate in computer science and a software engineer at the Biosecurity Research Institute (BRI). His research interests are in affective computing, wearables, machine learning, statistical learning, and artificial intelligence. He holds three master's degrees, in computer science, statistics, and mathematics. His passion is research, educational outreach, and finding ways to do data science for social good. When not researching or programming, he enjoys spending time with family, reading, and exercising. He may be reached at hlyates@ksu.edu.
(Note: The photo of Heath Yates was taken by Dan Donnert.)