Live memories

Memory in the time of the Internet

Live Memories Web Site
In the digital age, our records of past and present are growing at an unprecedented pace. Huge efforts are under way to digitize data currently held on analogue media; at the same time, low-cost devices for creating records in the form of images, videos, and text, such as digital cameras and mobile phones, are now widespread. This wealth of data, combined with new technologies for sharing it through platforms such as Flickr, Facebook, and blogs, opens up completely new opportunities for access to memory and for communal participation in its experience.

Multilingual Named Entity Extraction

The Named Entity Extraction service extracts named entities from any given English or Italian text. The web service takes a plain-text document as input and produces XML output with the corresponding named entities tagged in the document. The algorithms for the NER task are based on reranking strategies that combine the strengths of Conditional Random Fields and Kernel Methods, using both structured and flat features.
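As a rough illustration of consuming the service's tagged XML output, the sketch below parses a response and collects the entities. The tag names (`ENAMEX`, `TYPE`) follow common NER markup conventions and are an assumption, not the service's documented schema.

```python
# Hedged sketch: parse a hypothetical NER-tagged XML response and list
# the (surface form, entity type) pairs it contains.
import xml.etree.ElementTree as ET

sample = """<doc>
<ENAMEX TYPE="PERSON">Alcide De Gasperi</ENAMEX> was born near
<ENAMEX TYPE="LOCATION">Trento</ENAMEX>.
</doc>"""

def extract_entities(xml_text):
    """Return (surface form, type) pairs for every tagged entity."""
    root = ET.fromstring(xml_text)
    return [(e.text, e.get("TYPE")) for e in root.iter("ENAMEX")]

entities = extract_entities(sample)
```

A real client would first POST the plain-text document to the service endpoint and feed the returned XML to a parser like this one.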

Livememories search engine

The goal of the Livememories multimedia search engine is to let users search through multiple streams of data, whether text, audio, or video. Our goal in the second year was to integrate the search results extracted from the different multimedia streams and to provide a platform for users to visualize these results.
The Livememories multimedia repository contains data from diverse domains: a large collection of lectures recorded at the University of Trento, audio recordings of the sessions of the Comune of Trento, and recordings of the daily news segment of the local channel RTTR.
The Livememories multimedia repository goes through several preprocessing steps:
a) Transcription of the audio (or of the audio track of video files), carried out with the help of our external partner Pervoice;
b) Indexing of the transcripts, for faster search results;
c) Named-Entity Recognition (NER), in which the named entities in the documents are identified and tagged;
d) Time-expression annotation, in which the transcripts and documents are annotated with TIMEX3 tags.
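The chain of steps above can be sketched as a sequence of small functions. The bodies here are toy placeholders standing in for the real components (the Pervoice ASR, the indexer, the NER tagger, and the TIMEX3 annotator); only the shape of the pipeline reflects the text.

```python
# Minimal sketch of the preprocessing chain; function bodies are
# placeholders, not the project's actual implementations.
def transcribe(audio_file):
    """a) ASR transcription (done by the Pervoice partner in practice)."""
    return "the council met on Monday in Trento"

def index(transcript):
    """b) Build a toy inverted index: term -> list of token positions."""
    idx = {}
    for pos, term in enumerate(transcript.split()):
        idx.setdefault(term, []).append(pos)
    return idx

def tag_entities(transcript):
    """c) NER tagging (placeholder rule: capitalized tokens)."""
    return [w for w in transcript.split() if w.istitle()]

def tag_times(transcript):
    """d) TIMEX3 annotation (placeholder for a single date expression)."""
    return transcript.replace(
        "Monday", '<TIMEX3 type="DATE">Monday</TIMEX3>')

transcript = transcribe("session.wav")
postings = index(transcript)
entities = tag_entities(transcript)
annotated = tag_times(transcript)
```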
What sets the Livememories multimedia search engine apart from other multimedia search engines is its ability to search through the transcripts of the data. Unlike most other search engines, which search only the names or tags associated with multimedia documents, the Livememories multimedia search engine searches the actual content of the documents, returning more relevant results to the user.
Apart from a general search-engine interface, the Livememories multimedia search engine provides two visualizations of the data. A tag-cloud visualization lets users navigate a cloud-of-clouds of named entities, offering a new dimension for traversing the data, which are retrieved based on the named entities identified in them. Users can also see how frequently one named entity in a document co-occurs with other named entities. The system also visualizes the time expressions in the documents retrieved by a query: users can browse the timeline of the results and see how a particular time expression relates to the named entities in those results. Finally, the system integrates results from external sources such as Google News, YouTube, and real-time Twitter posts.
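The statistics behind such a tag cloud boil down to entity frequencies and pairwise co-occurrence counts. This is a sketch over a toy set of tagged documents; the real system computes the same quantities from the NER-annotated repository.

```python
# Sketch: entity frequencies (tag sizes) and co-occurrence counts
# (links between named entities) over toy documents.
from collections import Counter
from itertools import combinations

docs = [
    ["Trento", "RTTR", "Trento"],   # entities tagged in document 1
    ["Trento", "Giscover"],         # entities tagged in document 2
]

freq = Counter(e for doc in docs for e in doc)

cooc = Counter()
for doc in docs:
    for a, b in combinations(sorted(set(doc)), 2):
        cooc[(a, b)] += 1
```

`freq` would drive the size of each tag in the cloud, while `cooc` gives the relatedness of one named entity to the others.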
The URL for the demo is: http://persistence.disi.unitn.it:8080/lm/livememories

For more info on the Web Service, contact: sisl-infoATdisiDOTunitnDOTit

iScout Mobile App

The iScout platform gives users a convenient way to capture and share personal memories. A version of iScout was developed for the Apple iPhone that allows users to track their location anywhere on the planet and perform location-based memory collection. It uses the built-in GPS to retrieve information such as coordinates (latitude and longitude), elevation, and speed, and presents it as trails on a virtual map on the device’s screen. Users can annotate a tour by saving audio, images, or text, which are grouped into waypoints; this allows them to later visualize what they did and where they did it. Once a tour is completed, users can also edit it offline by adding or deleting images, recording new audio messages, or adding brand-new waypoints to enrich the memory of the tour. The final tour can be uploaded to our partner community, Giscover, to share trips and memories with friends; further enrichment of the memories can be done online on the Giscover community portal.
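The tour/waypoint grouping described above might be modeled as below. The field names and structure are illustrative assumptions, not the app's actual data schema.

```python
# Illustrative model of an iScout tour: waypoints carrying GPS data and
# media annotations. Field names are assumptions, not the real schema.
from dataclasses import dataclass, field

@dataclass
class Waypoint:
    lat: float            # latitude from the built-in GPS
    lon: float            # longitude
    elevation_m: float    # elevation in metres
    notes: list = field(default_factory=list)  # text, image or audio refs

@dataclass
class Tour:
    name: str
    waypoints: list = field(default_factory=list)

    def add_waypoint(self, wp):
        """Append a waypoint; offline editing would also remove/replace."""
        self.waypoints.append(wp)

tour = Tour("Monte Bondone hike")
tour.add_waypoint(Waypoint(46.02, 11.04, 1537.0, notes=["summit.jpg"]))
```

Uploading to Giscover would then serialize such a tour for the community portal.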
