At the Signals and Interactive Systems Lab (University of Trento, Italy) we are looking for highly motivated and talented graduate students to join our research team and work on Conversational Artificial Intelligence. This umbrella term includes the following research areas:
We are investigating and designing next-generation ML models for multimodal input/output processing in physical and hybrid environments and interactions.
The Evocativity of Prediction, LLMs and DeepSeek
In December 2022, I received an unprecedented number of calls and messages from friends, entrepreneurs, students, and colleagues asking me if I knew about language models and urging me to meet with them. They said: “I type, and the language model (i.e. ChatGPT) predicts and completes my sentences and requests!” Mixed feelings of emptiness, excitement, and agitation were spreading fast. The term “language model” was popularized in a blast all over the planet. How did we get here?
In the early nineties, we witnessed the dawn of statistical language models. At that time, when I talked about our research on “stochastic language models” at AT&T Bell Laboratories (USA), people would scoff at that idea. We wanted to use them to empower machines to talk to humans. The topic was controversial then, and linguists criticized and ridiculed our research path. In 2000, at AT&T Labs, we had an aha moment when we connected a machine that could listen and talk to millions of customers calling with accents from all over the USA. It was an instant research and technological breakthrough, appreciated by the scientific and business communities, with little impact on the broad audience of consumers.
Fast forward in time: in 2011...
Today was a big feel-good day for me and my colleagues Giovanni Iacca and Eleonora Aiello from the Department of Computer Science and Information Engineering at the University of Trento.
Our students' presentations delved into specific ways in which medical professionals, in fields ranging from ophthalmology, neurosurgery, cardiology, radiology, and endovascular surgery to health physics, could benefit from AI systems.
They showcased how AI may help reduce the burnout of radiologists, support neurosurgeons in delivering brain stimulation more precisely to Parkinson's patients, detect postoperative delirium, improve the tracking of white matter pathways, predict toxicity in radiotherapy, and address many more high-impact issues in medicine and health.
Go students, enjoy the rest of the ride!
Research in human-machine dialogue (aka ConvAI) has been driven by the quest for open-domain, knowledgeable and multimodal agents. In contrast, the complex problem of designing, training and evaluating a conversational system and its components is currently reduced to a) prompting LLMs, b) coarse evaluation of machine responses and c) poor management of affective signals. In this talk, we will review the current state of the art in human-machine dialogue research and its limitations. We will present the most challenging frontiers of conversational AI when the objective is to create personal conversational systems that benefit individuals. In this context, we will report on experiments and randomized controlled trials (RCTs) of so-called personal healthcare agents supporting individuals and healthcare professionals.
Longitudinal Dialogues (LD) are the most challenging type of conversation for human-machine dialogue systems. LDs include the recollections of events, personal thoughts, and emotions specific to each individual in a sparse sequence of dialogue sessions. Dialogue systems designed for LDs should uniquely interact with the users over multiple sessions and long periods of time (e.g. weeks), and engage them in personal dialogues to elaborate on their feelings, thoughts, and real-life events.
In this talk, the speaker presents the process of designing and training dialogue models for LDs, from the acquisition of a multi-session dialogue corpus in the mental health domain, through models for user profiling and personalization, to the fine-tuning of state-of-the-art pre-trained language models and their evaluation by human judges.
The Laboratory for Augmented Health Environments will develop data-, AI- and robotics-driven prototypes and train future surgeons and healthcare professionals. It is a collaboration between the University of Trento, the School of Medicine, and the Regional Healthcare Services.
17:00–19:00 | Monday, 13 November 2023
Introduction and moderation:
Paola Iamiceli, Università di Trento
Discussants:
Marco Zenati, Harvard Medical School e UniTrento;
Cesare Hassan, Hunimed; Marco Francone, Hunimed;
Carlo C. Quattrocchi, UniTrento e APSS; Lorenzo Luciani, APSS;
Andrea Passerini, UniTrento; Giuseppe Riccardi, UniTrento;
Eleonora Marchesini, FBK; Roberto Carbone, FBK;
Marta Fasan, UniTrento