Virtual Personal Assistance on Mobile Devices

Speaker: Jerome R. Bellegarda
Affiliation: Apple Inc., Cupertino, California

When: March 27, 2014, 3:00 p.m.
Where: A103, Polo Ferrari
Abstract
Natural language interaction has the potential to considerably enhance user experience, especially on mobile devices like smartphones and electronic tablets. Recent advances in software integration and efforts toward more personalization and context awareness have brought closer the long-standing vision of the ubiquitous intelligent personal assistant. Multiple voice-driven initiatives by a number of well-known companies have now reached commercial deployment. In this talk, I will review the two major semantic interpretation frameworks underpinning virtual personal assistance, and reflect on the inherent complementarity in their respective advantages and drawbacks. I will then discuss some of the attendant choices made in the practical deployment of such systems, and speculate on their likely evolution going forward.
Biography
Jerome R. Bellegarda is Apple Distinguished Scientist in Human Language Technologies at Apple Inc., Cupertino, California. He was instrumental in the due diligence process leading to Apple’s acquisition of the Siri personal assistant technology, and subsequently helped set the initial strategic directions for Siri’s integration into iOS. His general interests span statistical modeling, voice-driven human–machine communication, multiple input/output modalities, and multimedia knowledge management. In these areas he has written close to 200 publications and holds over 70 U.S. and foreign patents. Over the past 25 years he has served on numerous international scientific committees, review panels, and advisory boards. He is a Fellow of both the IEEE (Institute of Electrical and Electronics Engineers) and ISCA (International Speech Communication Association).