Rana el Kaliouby

Born: 1978 (age 37–38)
Education: University of Cambridge
Occupation: Chief science officer

Rana El-Kaliouby, Ph.D. (born 1978) is a contributor to facial expression recognition research and technology development, a subset of facial recognition designed to identify the emotion being expressed by a face.[1] El-Kaliouby's research endeavoured to move the field away from its dependence on exaggerated, caricatured expressions produced by actors in the laboratory, in favour of the subtler expressions people make in real situations.

El-Kaliouby is currently the chief science officer of Affectiva, leading the company's Emotion Science team.[2] Her team applies computer vision, machine learning and data science to the company's facial emotion repository, which the company says is the world's largest with 2 million faces analyzed,[3] in order to understand people's feelings and behaviors.[4]

Education

El-Kaliouby earned her Bachelor of Science and Master of Science degrees from the American University in Cairo, and then her Ph.D. from the University of Cambridge.[5]

Career

El-Kaliouby worked as a research scientist at the Massachusetts Institute of Technology, helping to found its Autism & Communication Technology Initiative.[6] Her original goal was to improve human–computer interaction, but she quickly became fascinated by the possibility of applying the technology to improve human-to-human communication, especially for people with autism, many of whom struggle with emotional communication.[7] At the Affective Computing group of the MIT Media Lab, she was part of a team that pioneered the development of the "emotional hearing aid",[8] a set of emotion-reading wearable glasses that the New York Times included in its Top 100 innovations of 2006.[9]

Awards

A number of other articles have reported on El-Kaliouby's career and inventions.[10][11][12][13][14][15][16][17]

Societies

Rana El-Kaliouby was inducted into the "Women in Engineering" Hall of Fame.[2][18] She is also a member of ACM, IEEE, Association of Children's Museums, British Machine Vision Association, and Nahdet el Mahrousa.[4]

Philosophy

Rana El-Kaliouby says that computers, while good with information, fall short when it comes to reading feelings, and so require manual prompting to respond to an operator's needs. Her interest lies primarily in the subtle facial changes that people tend to make. She has identified 24 landmarks on the face, each moving in a different way depending on the emotion being expressed.[19]
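
To make the landmark idea concrete, the following is a minimal sketch of how an emotion classifier might be trained on landmark positions. It is illustrative only: the 24-landmark count is taken from the passage above, but the synthetic data generator, the three-emotion label set, and the random-forest model are assumptions standing in for a real landmark detector and for whatever methods El-Kaliouby's systems actually use.

    # Illustrative sketch only: synthetic data stands in for the output of a
    # real facial-landmark detector; none of this reflects Affectiva's code.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    N_LANDMARKS = 24                              # landmark count from the passage above
    EMOTIONS = ["neutral", "happy", "surprised"]  # assumed, illustrative label set
    rng = np.random.default_rng(0)

    def synthetic_landmarks(emotion_idx: int) -> np.ndarray:
        """Fake one face: each emotion shifts the landmarks in its own direction."""
        jitter = rng.normal(0.0, 0.02, size=(N_LANDMARKS, 2))  # neutral-face noise
        drift = emotion_idx * 0.1                              # emotion-specific movement
        return (jitter + drift).ravel()                        # 48-dim feature vector

    # Features are landmark coordinates; labels are the expressed emotions.
    X = np.array([synthetic_landmarks(i % len(EMOTIONS)) for i in range(300)])
    y = np.array([i % len(EMOTIONS) for i in range(300)])
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    probe = synthetic_landmarks(1)                # a new "happy" face from the toy generator
    print(EMOTIONS[clf.predict([probe])[0]])      # prints "happy"

In a real pipeline, the synthetic generator would be replaced by a landmark detector running on video frames, with each landmark's movement measured against the person's neutral expression rather than against an absolute origin.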

This landmark-based approach has many applications, from linguistics to video production. People with autism, whose expressions often differ from the norm, could have their moods monitored more easily by parents or caretakers. For production purposes, computer-generated imagery of faces (and presumably android projects) could render expressions with more realistic subtlety.

References

  1. "MIT Technology Review 2012". Retrieved 30 July 2014.
  2. 1 2 "Affectiva Company Team". Retrieved 9 November 2016.
  3. http://www.prnewswire.com/news-releases/affectiva-builds-worlds-largest-emotion-analytics-repository-with-2-million-faces-analyzed-280647752.html
  4. 1 2 "Linkedin of Rana el Kaliouby".
  5. El-Kaliouby, Rana (2005). Mind-reading machines: automated inference of complex mental states (Technical report). University of Cambridge, Computer Laboratory. UCAM-CL-TR-636.
  6. http://blip.tv/autismlive/rolisand-6672702
  7. http://www.psychologytoday.com/blog/you-say-more-you-think/201104/autism-spectrum-disorder-struggling-communication
  8. El-Kaliouby, Rana; Robinson, Peter (2005-12-01). "The emotional hearing aid: an assistive tool for children with Asperger syndrome". Universal Access in the Information Society. Springer-Verlag. 4 (2): 121–134. doi:10.1007/s10209-005-0119-0. Retrieved 13 November 2014.
  9. "The Social-Cue Reader". New York Times Magazine. December 12, 2006.
  10. http://www.entrepreneur.com/article/230351
  11. http://www.bizjournals.com/boston/event/100331
  12. http://www.wired.co.uk/magazine/archive/2013/12/features/the-smart-list-2013
  13. http://www2.technologyreview.com/tr35/?year=2012
  14. http://www.nytimes.com/2013/12/01/technology/when-algorithms-grow-accustomed-to-your-face.html
  15. http://www.inc.com/audacious-companies/april-joyner/affectiva.html
  16. http://techcrunch.com/2012/08/07/the-new-face-of-ad-tech-goes-consumer-emotion-tracker-affectiva-takes-12m-from-kpcb-horizon-ventures-others/
  17. http://www.fastcompany.com/1839275/does-your-phone-know-how-happy-you-are-emotion-recognition-industry-comes-giddily-age
  18. The Women in Engineering Hall of Fame
  19. Karen Weintraub. "Teaching devices to tell a frown from a smile". Innovators Under 35.