Sislab » News-home-page – Signal & Interactive System Laboratory

Seminar – May 4, 2023 @ 11:00 am – Longitudinal Dialogues: Data Collection, Personalization, Dialogue Modelling, and Evaluation
Tue, 02 May 2023 21:43:28 +0000

May 4, 2023: PhD candidate talk: Mahed Mousavi

Title: Longitudinal Dialogues: Data Collection, Personalization, Dialogue Modelling, and Evaluation.

Place: In person, UniTrento DISI, Garda Room

“Longitudinal Dialogues (LDs) are the most challenging type of conversation for human-machine dialogue systems. LDs include the recollections of events, personal thoughts, and emotions specific to each individual in a sparse sequence of dialogue sessions. Dialogue systems designed for LDs should interact uniquely with each user over multiple sessions and long periods of time (e.g. weeks), and engage them in personal dialogues to elaborate on their feelings, thoughts, and real-life events.
In this talk, the speaker presents the process of designing and training dialogue models for LDs, from the acquisition of a multi-session dialogue corpus in the mental health domain, through models for user profiling and personalization, to fine-tuning SOTA pre-trained language models and evaluating the models with human judges.”

More information at link

Conversational AI for People, for Their Benefit
Wed, 01 Feb 2023 23:29:21 +0000

These days we hear a lot about conversations between machines and humans, and how powerful they could be if they brought benefits to people.

The Horizon 2020 project COADAPT was an ambitious project that brought together an Italian-Belgian-Finnish-Greek network of eleven partners. COADAPT's mission was to support aging citizens in adapting to changing conditions in the workplace and in personal life with diverse types of enabling technologies and systems.

The research group led by Prof. Giuseppe Riccardi at the Department of Information Engineering and Computer Science (University of Trento, Italy) led the design and training of human-machine dialogue systems for mental health. While there is a lot of interest in applying AI to the mental health domain, most attempts have resorted to Eliza-style interactions devoid of natural language processing and conversation abilities.

The research group designed and piloted a novel conversational artificial intelligence system in the mental health domain. One of its most important novel features is the ability to manage longitudinal conversations while engaging individuals over a long period of time. The system can understand and decode the context of the user's behavior, provide personalized therapeutic support, and manage user-specific conversations. Last but not least, the AI system interacts with psychotherapists to provide an integrated human-in-the-loop experience.

Over the research journey, quite a few novel concepts were developed and validated: a novel methodology for eliciting and collecting dialogue data in the mental health domain [1], and the concept of “Dialogue Follow-Ups” for conversational artificial intelligence systems. The team also improved several state-of-the-art models for understanding user emotions from text [2] and introduced the new concept of “Emotion Carriers” to explain emotions in natural language processing [3,4]. A model was developed to construct the Personal Space of each user throughout the dialogue, in order to model users by their real-life events and the participants in those events [5]. These innovative models and ideas were then integrated into a personalized conversational model named TEO (Therapy Empowerment Opportunity) [6]. TEO was deployed in the field using the latest low-latency human-in-the-loop AI framework and helped both patients and therapists achieve better interaction and therapy outcomes. TEO is the first conversational agent based on the latest natural language processing and machine learning achievements to be evaluated in a registered Randomized Controlled Trial (RCT) [6,7]. The research team demonstrated that participants who received traditional CBT treatment with the support of the m-health application were likely to report better satisfaction and a more stable trend of improvement, limited to the individual perception of stress-related symptoms [6].



[1] Seyed Mahed Mousavi, Alessandra Cervone, Morena Danieli, and Giuseppe Riccardi. 2021. Would you like to tell me more? Generating a corpus of psychotherapy dialogues. In Proceedings of the Second Workshop on Natural Language Processing for Medical Conversations, pages 1–9, Online. Association for Computational Linguistics.

[2] Gabriel Roccabruna, Steve Azzolin, and Giuseppe Riccardi. 2022. Multi-source Multi-domain Sentiment Analysis with BERT-based Models. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 581–589, Marseille, France. European Language Resources Association.

[3] Seyed Mahed Mousavi, Gabriel Roccabruna, Aniruddha Tammewar, Steve Azzolin, and Giuseppe Riccardi. 2022. Can Emotion Carriers Explain Automatic Sentiment Prediction? A Study on Personal Narratives. In Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, pages 62–70, Dublin, Ireland. Association for Computational Linguistics.

[4] Aniruddha Tammewar, Alessandra Cervone, Eva-Maria Messner, and Giuseppe Riccardi. 2020. Annotation of Emotion Carriers in Personal Narratives. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 1517–1525, Marseille, France. European Language Resources Association.

[5] Seyed Mahed Mousavi, Roberto Negro, and Giuseppe Riccardi. 2021. An Unsupervised Approach to Extract Life-Events from Personal Narratives in the Mental Health Domain. In Proceedings of the Italian Conference on Computational Linguistics (CLiC-it 2021).

[6] Danieli M, Ciulli T, Mousavi SM, Silvestri G, Barbato S, Di Natale L, Riccardi G. Assessing the Impact of Conversational Artificial Intelligence in the Treatment of Stress and Anxiety in Aging Adults: Randomized Controlled Trial. JMIR Mental Health. 2022 Sep 23;9(9):e38067.

[7] Danieli M, Ciulli T, Mousavi SM, Riccardi G. A Conversational Artificial Intelligence Agent for a Mental Health Care App: Evaluation Study of Its Participatory Design. JMIR Formative Research. 2021 Dec 1;5(12):e30053.


2023 Funded PhD and RA Openings
Wed, 07 Sep 2022 12:43:06 +0000

At the Signals and Interactive Systems Lab (University of Trento, Italy), we are looking for highly motivated and talented graduate students to join our research team and work on Conversational Artificial Intelligence.

Conversational Artificial Intelligence includes the following research areas:

- Natural Language Processing
- Conversational Modeling and Systems
- Machine Learning
- Affective Computing

The SIS Lab has been training intelligent machines and evaluating AI-based systems for the last three decades, in many industry sectors from fintech to health.

The lab research team is interdisciplinary and attracts researchers from computational linguistics, psychology, applied math, biomedical and electrical engineering, and computer science.

Research projects and demos can be found on the lab website here.

Candidates should have a strong background and a record of achievement in the areas of conversational AI research and/or engineering.

The official language (research and graduate teaching) of the department is English.


  1. Six-month funded research fellowships: approximately 1,600 Euro/month (gross).
  2. Three-year funded PhD fellowships: approximately 1,600 Euro/month (gross).

For more information about the cost of living in the area, please visit the website.


Openings with start dates as early as March 2023.
Positions open until filled.


MANDATORY (for both positions)
- Master's degree in Computer Science, Electrical Engineering, Computational Linguistics, Machine Learning, or similar or related disciplines
- Excellent academic records
- Excellent programming skills
- Excellent command of oral and written English
- Good knowledge of most of the following: experimental design methodology and statistics, natural language processing, machine learning methods
- Excellent team-work skills


Interested applicants should mention the position they are applying for and send their CV to:

For more info:

The Signals and Interactive Systems Lab
The PhD School
The Department of Information Engineering and Computer Science @ University of Trento

Keynote @ Summer School in Granada on Conversational AI
Wed, 07 Sep 2022 12:42:09 +0000
Title:  Conversational AI to Benefit Individuals
Speaker: Prof. Giuseppe Riccardi, U. Trento
Most research in human-machine dialogue (aka conversational AI) is driven by the challenge of open-domain, shallow, and sometimes hallucinated conversations. The complex and multi-disciplinary research problem of designing, training, and evaluating a conversational system is reduced to a) fine-tuning pre-trained language models, b) coarse evaluation of machine responses, and c) poor management of affective signals. In this talk we will review the current state of the art in human-machine dialogue research and its limitations. We will present the most challenging frontiers of conversational AI when the objective is to create personal conversational systems that benefit individuals. In this context we will report the latest research, experiments, and evaluations of so-called personal healthcare agents supporting individuals and healthcare professionals.
Slides of the talk are available HERE
Seminar – May 24, 2022 @ 04:00 pm – The AI Human Condition
Mon, 30 May 2022 13:58:37 +0000

We are glad to announce the DISI Seminar “The AI Human Condition”, to be held on 24 May at 4:30 pm.
Where: online on Zoom with projection of the speaker at Garda Room in Povo 1.
Speaker: James Brusseau - Philosophy Department, Pace University, New York City
Mandatory online registration by 23 May at 1 pm at 
The AI human condition is an ethical dilemma between authenticity and freedom. But, what does that mean for information engineers and computer scientists in the real world? To begin answering, we will follow contemporary technological case studies to their intersections with historical debates about human identity: Can I be fully described by my dataset, or does something always escape? Even if all my biological and psychological characteristics could be defined, do I really want to know who I authentically am? And, how free am I to change my identity through time? Do I remain basically the same person through the years, or can transformations in my decisions and purposes radically split me away from the person I was in the past? While these questions have been pursued for millennia, this presentation is about reimagining them today, in the language and context of advancing digital technologies.

Keywords: Ethics, AI, Artificial Intelligence, Human Identity

About the speaker
Professor Brusseau’s current theoretical work engages the human dilemmas of artificial intelligence, especially where it intersects with privacy, authenticity, freedom, and personal identity. His current practical work concentrates on AI healthcare ethics.

Contact person: Prof. Vincenzo D’Andrea

All details about the seminar at

AI Coach: virtual assistant for improving the life of people with ASD
Mon, 27 Sep 2021 08:36:01 +0000

Anffas project in collaboration with UniTrento DISI


The goal is to create, by means of an Artificial Intelligence (AI) system, a true virtual coach able to support people with autism spectrum disorders (ASD) in improving their adaptation and autonomy in the various areas of life, with particular regard to communication and interpersonal skills.
The artificial intelligence (AI) system will be developed by the research group led by Giuseppe Riccardi, full professor of Information Processing Systems at DISI and founder of the Signals and Interactive Systems laboratory. The research group has been engaged for years in the analysis of linguistic signals and biosignals and in the design of AI systems in the field of medicine and health in Italy, Europe and the USA.

Specifically, the AI Coach project implements a type of so-called blended intervention, which the laboratory has been carrying out for years. Using this approach, personalized and customizable support is provided, useful both to the person with ASD and to their family and caregivers. The system will be implemented in two versions: one to be used on the caregiver’s mobile device, the other on that of the person with ASD. The AI coach will support the user with ASD in the collection of digital diaries and in the management of situations for which help is needed; the caregivers, on the other hand, will take care of personalizing the AI Coach intervention. Through the analysis of the information exchanged, AI Coach will track progress and significant changes in the person’s behavior and preferences, and thus in the effectiveness of the interventions and the need for their reassessment.

This is a very important technological innovation that will open new horizons of support for people with autism spectrum disorder and, more generally, in the future, for all people with intellectual disabilities and neurodevelopmental disorders.

The project launch event will take place on Friday 4 June at 10 am on the Zoom platform. For more information visit the Anffas webpage.

Seminar – September 22, 2021 @ 11:00 am – Operating Room of the Future – Redux
Fri, 17 Sep 2021 11:24:11 +0000

More info at the link




Autism and emotions: artificial intelligence reveals their neural encodings
Wed, 09 Jun 2021 21:23:13 +0000

A study shows that facial emotions are encoded in the brains of people with autism (ASD). Published in the journal Biological Psychiatry, the joint University of Trento-Stony Brook University study dismantles some beliefs about brain functioning in people with ASD and opens up new scenarios to improve their relational life. With machine learning, a representation of the neural models that each brain applies to decode emotions has been created. Riccardi (University of Trento): “An interdisciplinary approach is essential.”

Emotions are a universal language and can usually be recognized easily and naturally. This is not the case for people with Autism Spectrum Disorder (ASD), for whom this simple activity is very limited at best. The reason for this difficulty has for years been the focus of scientific studies that try to shed light on the functioning of the brain in individuals affected by these disorders. A study by the University of Trento and Stony Brook University of New York, published a few days ago in pre-print version in the journal Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, questions many beliefs and opens up new scenarios to improve the living conditions and social relationships of people with ASD.

Reading facial expressions and decoding emotions is indeed difficult for those with autism spectrum disorders. But the reason lies not in the brain’s ability to encode neural signals (as has always been thought) but rather in problems in the translation of that information. This problem is also exacerbated at present by the containment measures of the pandemic. “In particular, the constant use of protective masks limits the expressiveness of the face, and this leads to less availability of information on our emotions,” explains Matthew D. Lerner, co-author of the study and professor of Psychology, Psychiatry and Pediatrics at Stony Brook University. “This is why it is important to understand how, when, and for whom comprehension difficulties arise, and what mechanisms underlie the misunderstanding.”

The study’s conclusions are the result of long analytical work that used machine learning techniques, and they could be useful for revising the approach with which people with ASD are helped to read the emotions of others. “At the moment there is a tendency to use prostheses for the recognition of emotions that help the visual perception of biological movement. Our results suggest that we should instead focus on how to help the brain transmit an intact encoding of the message that conveys the correctly perceived emotion.”

Reading emotions with machine learning

The study was conducted jointly by a group of researchers from Stony Brook University in New York and the University of Trento (Department of Information Engineering and Computer Science) on 192 people of different ages, with and without autism spectrum disorders. Their neural signals were recorded while they viewed many facial emotions and were subsequently analyzed. To do this, the research team employed a facial emotion classification system that leverages machine learning, namely Deep Convolutional Neural Networks. This machine learning approach uses an algorithm to analyze and classify the activity of the brain while it observes faces, as detected by electroencephalography (EEG). The result is a very accurate map of the neural patterns that each person’s brain applies to decode emotions.

“Technologies derived from machine learning are generally considered to be an engine of innovation in processes and products in all industrial sectors,” comments Giuseppe Riccardi, co-author of the study and professor of Information Processing Systems at the University of Trento (Department of Information Engineering and Computer Science). “And it is also evident in this case. Machine learning techniques can help us interpret brain signals in the context of emotions. First of all, they can be decisive in supporting the early stages of basic scientific research. But they can also be used directly for clinical interventions. The study we conducted shows how much a strong integration of interdisciplinary skills is necessary for artificial intelligence to have a measurable impact on people’s lives.”
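The classification approach described above (a convolutional network mapping EEG activity to emotion categories) can be sketched, very loosely, as a toy forward pass over a single trial. Everything below is illustrative only: the channel count, window length, filter sizes, and six emotion classes are hypothetical stand-ins, the weights are random and untrained, and this is not the architecture used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid-mode 1D convolution of a multi-channel signal.
    x: (channels, time); kernels: (n_filters, channels, width).
    Returns (n_filters, time - width + 1)."""
    n_filters, _, width = kernels.shape
    t_out = x.shape[1] - width + 1
    out = np.zeros((n_filters, t_out))
    for f in range(n_filters):
        for t in range(t_out):
            out[f, t] = np.sum(kernels[f] * x[:, t:t + width])
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# One synthetic EEG "trial": 8 channels x 128 time samples (hypothetical sizes).
trial = rng.standard_normal((8, 128))

# Untrained toy network: conv -> ReLU -> global average pool -> linear -> softmax.
kernels = 0.1 * rng.standard_normal((4, 8, 9))   # 4 temporal filters of width 9
W = 0.1 * rng.standard_normal((6, 4))            # 6 hypothetical emotion classes

h = np.maximum(conv1d(trial, kernels), 0.0)      # ReLU activations, shape (4, 120)
pooled = h.mean(axis=1)                          # average over time, shape (4,)
probs = softmax(W @ pooled)                      # class probabilities, shape (6,)

print(probs.shape)  # (6,)
```

In a real pipeline the convolutional filters would of course be learned from labeled trials rather than drawn at random; the sketch only shows how a multi-channel EEG recording reduces to a probability distribution over emotion classes.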

Alessandra Cervone has successfully defended her PhD on “Computational models of coherence in open-domain dialogue”
Mon, 02 Nov 2020 08:24:28 +0000

Congratulations, Dr. Cervone!


SIGDIAL 2020 paper on Dialogue Coherence
Thu, 16 Jul 2020 09:57:10 +0000

Paper:
