EuropeanaTech 2023
The Rijksmuseum stores vast amounts of contextual information about its collection in different systems. Most systems support specific processes, for example documentation and research. Integrating the data would have many benefits, providing users with richer experiences. Linked Data could be a technical solution to this challenge, but how does a museum go about implementing integration infrastructure?
We approached Linked Data not as an end-product, but as an essential technology at the centre of our collection data infrastructure. This brought additional requirements in terms of stability, scalability and maintainability. Designing the architecture resulted in a daunting list of skills required to implement the infrastructure, revolving around software development, infrastructure and data engineering.
This presentation will discuss how we evolved from a one-and-a-half-member team into a ten-member multidisciplinary engineering team, with experience in academia, cultural heritage and industry. Part of the Research Services department, we have close ties with the information specialists that steward data in the source systems. We adopted new practices: Scrum, Infrastructure as Code and DevOps. We collaborate with our website builder to make our Linked Open Usable Data usable for them.
In the future we want this infrastructure to be a sound foundation to build new services upon. We foresee a core team at the museum, which is scaled up on a project basis. Projects could entail adding new source systems or data services, or retrieving data from external sources.
The world is in a biodiversity crisis. Due to human activities, plant and animal species are going extinct at a rate 10-100 times higher than at the last mass extinction event 66 million years ago, when the dinosaurs vanished from the earth. To document and understand this loss of species, natural history collections are pivotal. Not only can the data on gathering sites and dates of specimens collected from the late 1600s up until the present provide scientists with information about biodiversity change over long periods of time, but the significance of mass extinction for the world's population can be made tangible when broad segments of society are involved in collecting this data. By transcribing handwritten labels to databases, they contribute significantly to making the information available to research, education and outreach.
To engage the public in unlocking the potential of collections on a large scale and as part of mass digitisation efforts, the Natural History Museum Aarhus (Denmark) and the Museum für Naturkunde (Germany) have entered a partnership, funded by the European Union, Erasmus+, to create a professional framework for volunteer projects that gather metadata from specimen labels. In this talk, we present our study of what motivates volunteers to participate in digitisation projects, how to manage volunteer programmes, what pitfalls to avoid and how other cultural institutions can – and should – engage the public in digitisation.
This presentation will be based on an exploratory project which aims to study and develop a proposal for the implementation of haptic and audio material for the inclusion of people with visual disabilities in museums. It was based on an innovative technical approach (photogrammetry, virtual reality, game engines, haptic and auditory technological devices) reflecting the possibilities offered by virtual reality as a mediating resource between the artistic object and a museum's visually disabled audience. A prototype of haptic-virtual technology was created to explore the Portuguese painting “O Grupo do Leão” (1885) by Columbano Bordalo Pinheiro, one of the most important artworks of the Museu Nacional de Arte Contemporânea in Lisbon, Portugal. A group of visually impaired participants was invited to physically explore this prototype within the museum's exhibition space and share their experiences through semi-structured interviews.
This presentation will offer attendees the opportunity to discuss and explore new lines of research and training on 3D computational technologies for museological spaces, highlighting the importance of democratising access to heritage and new technologies for visually impaired people. We will also discuss new methodologies for developing socially motivated research, involving the participation of the disabled community in academic work.
Digital or digitised cultural heritage may already seem like something obvious, thanks to decades of effort from GLAM institutions. And, thanks to human creativity and the development of technology, we can explore this cultural heritage in new ways.
But does this heritage also represent its true diversity? Cultural institutions play an extremely valuable role in preserving and curating collections. However, not all stories and communities fall within this framework. The stories of many communities, events, and places would have never been told if it weren't for the passion and engagement of hundreds of individuals and groups that form the community archives movement - outside of big heritage institutions.
At the Center of Community Archives, we support communities in telling their stories, in their own language and through sources that matter to them. The Community Collections portal allows these sources to be preserved and showcased, and grassroots, personal stories to be curated by the communities to which they relate.
By making the Open Archiving System and the Community Collections portal available, we try to make people care about the stories distributed across hundreds of communities throughout the country. These tools are part of our support system for community archivists, helping them become independent in presenting their history to a wide audience.
This presentation will trace the journey from personal letters, photos from school trips, oral histories or local poets' poems to the public, in a way that also matters to outside and larger communities.
For several years, the specialised Information Service Asia (CrossAsia) of the Berlin State Library has collected a huge amount of digital texts. The data were licensed through agreements that also included text and data mining (TDM) rights. To avoid the dilemma of providing the texts for TDM without revealing them, we identified the new decentralised Gaia-X infrastructure, based on the Ocean Protocol, as one possible solution.
The Ocean Protocol implements the Compute-To-Data (CtD) approach, where an algorithm is sent to the data and not the other way around. Users do not have access to data that are subject to licences. The data stays on premise.
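The CtD principle can be illustrated independently of the Ocean Protocol's actual APIs with a minimal sketch (all names and data below are invented): the data holder executes a user-supplied algorithm locally and returns only the derived result, never the licensed texts themselves.

```python
# Illustrative sketch of the Compute-to-Data idea: the algorithm travels
# to the data; only aggregate results leave the data holder's premises.

LICENSED_TEXTS = {  # stays on premise, never exposed to the user
    "doc1": "A licensed text about Asian history.",
    "doc2": "Another licensed text mentioning history twice: history.",
}

def run_compute_job(algorithm):
    """Execute a user-supplied algorithm against the protected corpus
    and return only its (aggregate) result."""
    return algorithm(LICENSED_TEXTS.values())

# A data scientist submits this function instead of downloading the data.
def count_term(texts, term="history"):
    return sum(text.lower().count(term) for text in texts)

result = run_compute_job(count_term)
print(result)  # an aggregate statistic only; the raw texts stay hidden
```

The real infrastructure adds authentication, licensing checks and sandboxed execution around this core exchange.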
To explore the possibilities of this approach, we carried out a small Proof-of-Concept. As part of this, we set up a dedicated portal for the Gaia-X-Test-Network (https://sbb.portal.minimal-gaia-x.eu/) and published data sets for CtD. Selected data scientists were able to run their algorithms on the data.
This presentation will report on the details of the Proof-of-Concept and give an overview of the discovered advantages and disadvantages of the dynamic Gaia-X infrastructure. At the same time, it will identify important questions around the workflows and implementation that we know we need to find answers to. This will be linked to reflections on what still needs to be accomplished for this approach to develop into an infrastructure for the digital humanities and machine learning.
This session will show how the National Archaeological Museum of Tarragona and the Giravolt program have scanned the Roman site of Centcelles combining different methods: a laser scanner has been used throughout the entire site to achieve the highest accuracy, drone flights were used for the roofs and high-resolution photogrammetry was used to obtain textures of the highest quality, especially in the Roman paintings and mosaics in the central dome.
The new historical and scientific research has been based on the new 2D plans extracted from the scans but also on the analysis of the 3D models, available online.
Using an optimised version of the 3D model, a first pilot Virtual Reality experience has been created, which has been a success in all the tests conducted with different audiences, from teachers to cultural visitors and heritage professionals. Users virtually ascend to the dome, 13 metres high, and see the mosaics face to face, with spectacular detail and realism. The museum is working to integrate this experience into the physical visit to the site and also to use it in the classrooms of schools collaborating with the museum, as part of a virtual reality educational kit.
Find out more: https://www.youtube.com/watch?v=y42mTHAY0P8
In the era of the Semantic Web, where data integration and interoperability are paramount, terminologies and thesauri play a crucial role in unlocking the full potential of linked data. This lightning talk emphasises the need for organisations and networks to effectively manage these controlled vocabularies.
This became apparent within the TAG project, an initiative of MoMu with the aim of uniting and enhancing the digital textile heritage in Flanders. We will showcase the practical implementation of Opentheso (an open-source thesaurus management tool) during the project, where it has been successfully utilised to clean, enrich and restructure the multilingual Europeana Fashion Thesaurus.
By making use of Semantic Web standards like SKOS and by using persistent identifiers (e.g. ARKs), the results become available as linked data and can easily be integrated into other applications. The tool also offers collaboration opportunities, such as a 'proposal module' that can be used to discuss and manage the workflows for adding new terms.
Given our initial lack of awareness about Opentheso at the start of the project, we believe others might also be unaware of its existence. Therefore, we would like to share the positive experiences we had.
Attendees will learn about the functionalities of Opentheso (thesaurus creation, alignment, collaboration, re-use) and why taking ownership of your own controlled vocabularies might benefit organisations or networks. We will also touch upon possible pitfalls, encountered difficulties and the importance of web standards.
'Museopolis.eu' is a digital initiative launched between 2018 and 2020 through a collaboration between the Museum of Ceramics in Bolesławiec (Poland) and the Muzeum Českého ráje in Turnov (Czech Republic). This effort, part of the 'Gate to the World of Collections' project, co-funded by the European Regional Development Fund, aimed to digitally showcase the historical heritage and aesthetic richness of the Poland-Czech border region.
A curated selection of significant regional artefacts has been digitised and exhibited with four language options on the Museopolis.eu platform. The techniques used range from traditional photography producing 2D visualisations to photogrammetrically created, immersive 360-degree visualisations of the museum ceramics archive and collections.
Museopolis.eu serves a dual purpose: not only as a virtual museum and exhibition space but also as a promotional tool for regional museum collections. To date, it is the result of a collaboration between nine museums: six in Poland and three in the Czech Republic. Its strategies include the publication of multilingual materials and the organisation of exhibitions straddling the border. As such, it encourages varied engagement with the digital archive, offering creative avenues for audiences to interact with cultural heritage.
Museopolis.eu also serves a crucial function in the preservation and exploration of collective cultural history. This is achieved through the digitisation of heritage, which enables universal accessibility. Additionally, the platform facilitates the appreciation of the distinctive cultural heritage of the Poland-Czech borderland.
Join this session to learn more.
DACE is an open-source Data Aggregation and proCessing Engine that has been developed by PSNC since 2019. It serves FBC (the Polish national aggregator), SSH Open Marketplace, Leopoldina (an institutional knowledge platform) and Ariadna (a Silesian aggregation platform). DACE is composed of a customisable, event-driven data aggregation and processing pipeline, harvesting manager and optional discovery platform.
The aggregation and processing pipeline can be adapted to specific scenarios through microservices that focus on specific and small-scale actions, like batch or single record retrieval, data transformation, text recognition or data ingestion. DACE supports data harvesting via OAI-PMH, the MediaWiki API, the WordPress API, Z39.50, CSV/XML import as well as several dedicated APIs (e.g. CLARIN resource families). The data transformation, extraction and normalisation components use data-source-level configurable XSLT or JOLT, text recognition engines for full-text search (e.g. Tesseract) as well as date, keyword or NER extraction/normalisation routines.
Technically, the main idea behind DACE is to leverage the Apache Kafka event streaming framework in order to build a loosely coupled ecosystem of microservices that receive messages and act on them, e.g. by sending new messages or ingesting data into a discovery platform. Through this approach we aim to build a reusable framework that is flexible, scalable, reliable and highly available.
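As an illustration of this pattern (topic names, message shapes and the client library here are assumptions, not DACE's actual configuration), such a microservice can be reduced to a pure processing function wired to Kafka consume/produce loops:

```python
import json

# Pure processing step: normalise a harvested record. Keeping the logic
# free of Kafka specifics makes each microservice easy to test in isolation.
def normalise_record(raw: dict) -> dict:
    return {
        "id": raw["id"],
        "title": raw.get("title", "").strip(),
        "keywords": sorted({k.lower() for k in raw.get("keywords", [])}),
    }

def main():
    # Wiring sketch using the kafka-python client; topic names are invented.
    from kafka import KafkaConsumer, KafkaProducer
    consumer = KafkaConsumer("records.harvested",
                             value_deserializer=lambda b: json.loads(b))
    producer = KafkaProducer(value_serializer=lambda v: json.dumps(v).encode())
    for message in consumer:  # loosely coupled: react to events, emit new ones
        producer.send("records.normalised", normalise_record(message.value))

if __name__ == "__main__":
    main()
```

Separating the transformation from the messaging layer is one way such services stay swappable: scaling, retries and delivery guarantees come from Kafka, not from the service code.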
After several years of development and production-level deployments (with more to come), this session will present the achievements of DACE and aims to attract the community to use and further develop the engine.
With the Linked Digital Heritage programme line, the National Library of the Netherlands wants to connect libraries’ dispersed heritage in accordance with the principles of the Dutch Digital Heritage Network, making it available for use by different target groups of education, science, the creative industries and the general public. Maastricht University Library carried out one of the eight pilot projects within this programme line. Titled ‘Het licht is rond: Pierre Kemp verbonden’, the project aimed to explore how the University Library’s collection of the poet Pierre Kemp (1886-1967) could be presented through Linked Open Data in such an open and enriched way as to reach the widest possible audience.
As part of the first phase of this programme line, our project delivered a dataset consisting of the objects to be included in an author portal developed by the Dutch Museum of Literature. The dataset contains digitised representations of letters, manuscripts, drawings and physical objects and is provided as Linked Open Data. The digitised content itself, including 3D representations of Pierre Kemp’s desk, a cabinet and an egg decorated by him with a poem, is presented in the content management system Omeka S of the University Library.
Join this session to gain insights into the processes and workflows which facilitate the digitisation, creation, enrichment and management of data, and into the standards and protocols which support the creation, sharing and reuse of Linked Data.
AI and machine learning are a point of disruption for our sector. They challenge the principles of openness and mutuality we have developed around data sharing and data reuse, from small-scale, grassroots initiatives to large data projects. The question now is how cultural heritage can respond to this disruption and start taking a position.
While uncertainties in the accuracy of cultural heritage representations have long been recognised, their significance and urgency become amplified in the development of immersive 3D experiences. Issues arise due to missing or incomplete historical records, differing interpretations, and the limitations of available technology. This session delves into the challenges posed by these uncertainties and emphasises the need for effective strategies.
Two contrasting projects will be examined to highlight different approaches. The Discovery Tour in the video game Assassin's Creed showcases how a large gaming company leverages substantial resources to create immersive experiences that entertain and educate. Ubisoft not only employs historians, but also cooperates with McGill University to provide teachers around the globe with learning material. In contrast, the “Augusta Raurica AR Experience”, developed by Augusta Raurica, demonstrates how a small cultural heritage institution employs interactive storytelling through Augmented Reality to engage visitors.
By analysing these examples, this session will examine the strengths and limitations of each approach and provide insights into managing uncertainties in 3D applications. It will emphasise the importance of transparency, providing contextual information and promoting critical thinking. The aim is to inspire cultural heritage institutions to adopt strategies that balance engagement, entertainment, and historical accuracy in their immersive applications, fostering a deeper understanding and appreciation of our shared cultural heritage.
This presentation invites participants to consider the complexities of managing uncertainties in 3D cultural heritage experiences and encourages the adoption of strategies that align with the goals and resources of their respective institutions.
This session presents the remarkable results of the Horizon 2020 ERA Chair in Digital Cultural Heritage: ‘Mnemosyne’ project conducted at the Cyprus University of Technology, funded under the programme ‘Establishing ERA Chairs’. The newly developed “Mnemosyne” methodology includes a research approach and associated techniques for the creation of the enhanced digital “memory twins” for 17 selected exemplar heritage case studies.
Extending the 2020 EU study VIGIE 2020/654, the Mnemosyne methodology proposes holistic documentation and development pipelines for the data acquisition and digitisation of cultural heritage throughout a project's lifecycle, from planning to preservation. The pipelines begin with understanding key factors before work is undertaken: the complexity and quality of the target and of the digitisation, and who the stakeholders are (a multidisciplinary community of experts and users involved in the documentation and digitisation of heritage). They continue through categorising informational needs, expertise and motivations, to how the digitised object, including its metadata and paradata, can be used and reused, maximising return on investment both intellectually and financially.
These factors, amongst others, are critical in the holistic approach to cultural heritage digitisation if meaningful, high-quality data is to be produced and the state of the art advanced from the geometric-based digital twin to the higher-order memory twin incorporating geometric, intangible and process data.
Furthermore, the methodology includes integrated taxonomies for movable and immovable tangible cultural heritage (with an aspiration to extend into intangible heritage domains) to support the representation, understanding and communication of the complex nature of cultural heritage.
Polifonia, a Horizon 2020 project, aims to highlight the implicit knowledge linking musical heritage to wider cultural heritage (including tangible assets), engaging both the general public and music domain experts in a consistent environment. The project uses novel data science methods (LOD, KG, ML/AI) to extract information on music patterns and the intrinsic features of music objects to enrich musical heritage knowledge graphs. Polifonia also maintains a stakeholder network consisting of experts in musicology, cultural heritage, public institutes and the music industry, in order to ensure reuse of the project's outputs by different end users.
This session will highlight two pilots from the project:
The BELLS pilot aims to provide tools and methods through which Italian historical bell heritage can be better known and put in relation to other parts of cultural heritage. The final interface will enable users to navigate the connections between bell sounds, tangible heritage (bells, bell towers) and intangible heritage (sound practices, the oral transmission of knowledge among communities and bearers of tradition).
The history of pipe organs is rich and diverse, and highly interrelated to economic, religious and artistic contexts. In the ORGANS pilot, a knowledge graph will be assembled that contains information about the histories and characteristics of all important historic organs in the Netherlands. The data will be a valuable resource of knowledge on Dutch organs, used by music historians, organ advisors, the Cultural Heritage Agency of the Netherlands, organ builders and the general public.
The “Arxiu Lliure” (Teatre Lliure’s archive) is the digital space that connects the theatre heritage of the past with the present to show, move, recall and thrill. Photographs, videos, artistic programmes, posters, press clippings and more form a set of +60,000 documents from 2,000 shows and events. All in all, “Arxiu Lliure” is where we store, care for and catalogue the heritage of the theatre for the present and the future: a living space that brings us closer to the documents contextualising the shows that form the memory of the Teatre Lliure.
In this project, we have combined artificial intelligence (AI) and natural intelligence (NI) to catalogue +120,000 faces in the photographic collection with facial recognition tools. We have developed a collective cataloguing project with the help of automatic facial recognition tools and a user interface aimed at facilitating the interaction between automatisms and human users. Its playful aspect has made it possible to put into action the tacit knowledge of our institution’s staff. This experience illustrates a symbiotic relationship between both types of intelligence, since the team at the Lliure takes part in the cataloguing process, which enriches the photographic collection’s metadata while also feeding the reference database used by the algorithm. In the end, we catalogued more than 25,000 faces in around 100 hours of work, carried out by around 20 different people.
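The AI/NI loop described above can be sketched schematically (the embeddings, threshold and names below are invented, not the project's actual pipeline): the algorithm proposes matches against a reference set, confident matches are catalogued automatically, uncertain ones are routed to staff, and staff confirmations grow the reference set the algorithm uses.

```python
import math

def cosine(a, b):
    """Cosine similarity between two face embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical reference embeddings built from staff-confirmed identifications
reference = {"actor_a": [1.0, 0.0], "actor_b": [0.0, 1.0]}

def propose(face_embedding, threshold=0.9):
    """Return (name, score) if confident, else None to route to a human."""
    name, score = max(((n, cosine(face_embedding, e))
                       for n, e in reference.items()), key=lambda t: t[1])
    return (name, score) if score >= threshold else None

def confirm(name, face_embedding):
    """Staff confirmation feeds the reference set used by the algorithm."""
    reference[name] = face_embedding

print(propose([0.99, 0.05]))  # a confident match is catalogued automatically
print(propose([0.7, 0.7]))    # an ambiguous face goes to a human operator
```

Each confirmed identification improves subsequent proposals, which is the symbiosis the abstract describes.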
What’s behind the curtain of this pioneering project? Let the stage play begin.
The living heritage of traditional martial arts embodies multifaceted knowledge systems across the material and immaterial, spanning kinaesthetic, somatic, physical, social, cultural and technical ideologies of different ethnicities. Recent efforts have embarked on digitally capturing martial art performances as a foundation for knowledge preservation, yet lack efficient tools to (re)present and explore the digitised content. In particular, despite being the authentic carrier of traditional practices like martial arts, the human body has often been underrepresented, and the knowledge it embodies left inaccessible.
In addressing the gap, this project inspects the combination of movement computing with ontology design to unfold kaleidoscopic knowledge dimensions in traditional martial arts. It proceeds with a dual approach: the development of a deep learning workflow to auto-classify movement series and a formal ontology conceptualising the semantic meaning of martial art movement. Integrating both allows the datafication of multimodal materials in the Hong Kong Martial Arts Living Archive (HKMALA), relating feature-based classification to semantic representation. On that basis, it instantiates an interactive knowledge system to allow users to investigate archival content with ontology-based knowledge representation and through interactive exploration via semantic and embodied clues.
This presentation will outline the methodology and showcase a series of computational experiments with the HKMALA data. Furthermore, the speaker will reflect on the notion of embodiment and how the conceptualisation and operationalisation of it may forge a new paradigm for archival interaction, facilitating the valorisation and dissemination of the intangible cultural heritage embodied.
In Gelderland, the Netherlands, over 100 museums exhibit objects that contribute to the province's historical narrative. However, physically bringing these objects together is a challenge, so 35 museums joined forces in the Schatkamers van Gelderland project. The project creates digital 3D replicas of historical objects that can be experienced in a virtual reality game. The game features 70 objects divided into seven historical periods.
Two players meet a professor in the virtual reality game, who needs their help to restore the past. The professor's twin brother invented a time machine to study archaeological objects in their original era. However, his brother's greed leads him to bring the objects to the present, resulting in their disappearance from participating museums. One player enters the VR time machine to travel back in time and search for clues, while the other player has an overview of all the missing artefacts. Together, they match clues with the right object and learn about its historical significance. With the professor's guidance, the players discover how the objects contribute to constructing the historical story of Gelderland.
The hardware for the game is placed in a small booth that resembles a time machine, measuring 3x3 metres. It has travelled across the province since 2021. Schatkamers van Gelderland offers a unique approach to showcasing history and making it accessible to a wider audience. By creating digital replicas of historical objects, the project enables people to experience history. Join this session to learn more.
Released in April this year, Science in the Making presents 250,000 images from the archives of the Royal Society. This session will introduce the new platform and discuss the way in which it makes use of existing digital infrastructure combined with new open-source technology to represent the 360 years of scientific discovery and exploration in entirely new ways. The site enables a reconceptualisation of the material archive, and facilitates new connections and new collaborations in the digital space.
Presented jointly between the Royal Society and Cogapp, our digital development partner, the paper will begin with a discussion of the development process and evolving workflow through which the traditional structures and systems of the archive are reimagined in a digital space to take full advantage of the flexibility and possibilities afforded by new technology. This includes combining archival information with external data sources such as CrossRef and Wikidata, as well as leveraging the International Image Interoperability Framework (IIIF). This will be followed by a tour of the user-facing features of the site, which have already transformed user behaviour in our reading rooms and influenced institutional practice across the world’s oldest continually-operational scientific society.
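IIIF's Image API, which the platform leverages, encodes region, size, rotation, quality and format directly in the request URL, so image derivatives can be requested declaratively from any compliant server. A small helper (with an invented server and identifier) illustrates the pattern:

```python
def iiif_image_url(base, identifier, region="full", size="max",
                   rotation="0", quality="default", fmt="jpg"):
    """Build a IIIF Image API 3.0 request URL following the pattern
    {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}"""
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# Hypothetical endpoint and identifier, for illustration only:
# request a 1000x1000px crop from the top-left corner, scaled to 500px wide
url = iiif_image_url("https://iiif.example.org/image", "MS-123_0042",
                     region="0,0,1000,1000", size="500,")
print(url)
```

Because the parameters live in the URL rather than in server-specific APIs, viewers, annotation tools and aggregators can all address the same images interoperably.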
The digital age has brought about many changes in the way we teach and learn, and the emergence of AI tools has revolutionised the field of film education.
This session explores the benefits and limitations of using AI tools in film education, with a focus on using silent movies made available through Europeana.eu as material to implement a learning scenario. AI tools offer a wealth of opportunities for film educators looking to enhance the learning experience for their students. From analysing the narrative structure of silent movies to creating personalised silent film additions and remediations (enriching them with sounds, voices and music thanks to AI apps), these tools can help students develop critical thinking skills, learn at their own pace, and personalise the learning experience. An implementation of AI in the classroom using silent movies from Europeana.eu can be seen here: https://docs.google.com/document/d/1e2k0ccpxdt9u2ZEe7DgpcjVx74o8Jufe/edit?usp=sharing&ouid=110835533992599078349&rtpof=true&sd=true
The British Library had 22,000 pages recording 18th-century parliamentary acts digitised from microfilm. For a project, they wanted these indexed and catalogued, and the solution involved a combination of machine learning, conventional programming and... human input.
After ML-based OCR, we devised an approach combining two techniques: machine learning vision recognition to classify different kinds of pages, and bespoke heuristic programming which analysed the OCR output to segment and extract particular text elements. But when we applied it to the full data set, we found some aspects of the problem that weren't susceptible to either technique; it was most efficient to use human eyeballs to answer these questions. So a third aspect was developing simple workbenches (mostly using Google Sheets) that let a human operator play their part in the most efficient way.
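To give a flavour of the heuristic side (the rules and page types here are invented, not the project's actual ones), page classification from OCR can amount to simple pattern tests, with anything ambiguous deferred to a human:

```python
import re

def classify_page(ocr_text: str) -> str:
    """Toy heuristic classifier for OCR'd pages (illustrative rules only)."""
    text = ocr_text.lower()
    if re.search(r"\banno\b.*\bregis\b", text):
        return "act_title_page"      # regnal-year formula suggests a title page
    if re.search(r"^\s*index\b", text):
        return "index_page"
    if len(text.split()) < 20:
        return "blank_or_sparse"     # too little text to classify confidently
    return "body_page"               # ambiguous cases can be routed to a human

print(classify_page("INDEX to the acts of this session"))
```

Rules like these are cheap to write and inspect, which is why they complement rather than compete with the vision-recognition classifier: each catches cases the other misses.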
The end result was a pipeline combining humans and computers which processed 20,000 images to generate over 1,500 catalogue entries.
In this session, we will describe the challenge, how our solution worked - and where it still falls short. We will discuss the expected and unexpected messiness of the data, the need for programmers who can do data entry, and when pragmatic reality should override programmer hubris! We also wish to discuss the importance of tooling, creating workbenches for evaluation and adjusting approaches, and how Google Sheets can be a powerful assistant in this kind of work, especially in combination with the IIIF imaging standard.
Digital technologies have the potential to empower Vietnamese cultural professionals by providing them with access to and inclusion in the global discourse on art and culture. However, they also face direct and indirect barriers such as a lack of resources, language barriers, geopolitics, and outdated stereotypes circulating online which impact their capacity to effectively represent their culture and engage with a national and international audience. These challenges can impede the digitisation process in Vietnam and further increase the global digitisation divide.
This session aims to address the challenges and opportunities presented by digital technologies in the representation of Vietnamese art and culture, arising from a co-designed action research project conducted in collaboration with museums in Hanoi, Vietnam. It will discuss findings from the research and potential solutions that were developed collaboratively for improving access, inclusion and engagement, such as using a system of no-cost or low-cost technology and apps to support the creation of 3D digital models of artefacts and environments. It will explore the importance of digital inclusion and the need for systematic policy, training, protocols, and standardised practices to ensure a sustainable solution for the preservation and representation of art and culture in Vietnam. This may also have broader implications for the digitisation of, and engagement with, cultural heritage in the Global South.
Restaging Fashion, a collaboration between the University of Applied Sciences Potsdam (UCLAB) and the Berlin State Museums, is an interdisciplinary project within the scope of digital cultural heritage where researchers from interface design, information science and art history address the topic of fashion representation from a Linked Data and information visualisation perspective. In this context, selected objects from the Lipperheide Costume Library in Berlin and the Germanisches Nationalmuseum in Nuremberg, realia / 3D models and reference texts from local or external sources are being contextualised and made available for research by means of visualisation and narrative components. On the basis of structured and semantically enriched data, we are developing an interface for showing relationships in a graph-based setting, giving the end user the possibility to discover new material or connections beyond the functionality of a regular search/browsing interface. Moreover, we juxtapose the visualisation with art historical texts (stories) and images, providing a new approach to curatorial communication and online exhibition. By combining the narrative form with nodes and edges, we create a prototype to be implemented and reused on other types of linked data in humanistic research - beyond vestimentary sources.
This session will address the 'exploration' theme of the conference, although it covers significant aspects of the engagement and experience themes, too. Its goal is to offer all interested attendees a design research perspective regarding the use of linked data and its incorporation in cultural heritage settings that combine representational (data-centered) and non-representational (narrative) components.
Collections of cultural heritage institutions are often formed over prolonged periods of time. Methods, norms and standards used for describing collections are therefore often heterogeneous, within and between institutions. Consequently, it is difficult for curators and audiences at large to evaluate and assess cultural heritage collections. Questions like what the gender division of artists within a collection is, or in which time periods a collection was formed, are difficult to answer. Evaluating whether metadata contains contentious terms or references to outdated vocabularies is altogether out of scope.
To remedy this we propose the CHAS pipeline (Cultural Heritage ASsessor). CHAS first converts key fields from collection metadata to linked data using the Europeana Data Model's main contextual classes, and uploads it to the Dutch Cultural Heritage Agency triplestore. A sequence of federated queries aims to connect the collection to reference datasets like thesauri, and (historical) gazetteers. A Jupyter notebook then uses these queries to generate a collection report. This report provides statistics about the quality of the collection metadata, such as how many fields could be mapped to reference data. It also gives key insights such as its gender division, geographical distribution, the use of contested terms, and the potential to map keywords to thesauri. CHAS will enable cultural heritage institutions, or session participants, to easily evaluate their collections, and use this information to better inform their audience. Because of the quality metrics CHAS also promotes further standardisation and accessibility of cultural heritage data. Join to learn more.
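The report-generation step can be sketched in a few lines of Python. This is an illustrative minimal sketch only: the field names and sample records are assumptions, standing in for the results that the federated SPARQL queries might return per object.

```python
from collections import Counter

# Hypothetical per-object reconciliation results (field names are
# illustrative, not the actual CHAS query output).
records = [
    {"creator_gender": "female", "place": "Utrecht", "term_contested": False},
    {"creator_gender": "male", "place": "Batavia", "term_contested": True},
    {"creator_gender": None, "place": None, "term_contested": False},
]

def collection_report(records):
    """Aggregate simple quality and diversity statistics for a report."""
    total = len(records)
    gender = Counter(r["creator_gender"] for r in records if r["creator_gender"])
    mapped_places = sum(1 for r in records if r["place"])   # mapped to a gazetteer
    contested = sum(1 for r in records if r["term_contested"])
    return {
        "total": total,
        "gender_division": dict(gender),
        "place_mapping_rate": mapped_places / total,
        "contested_terms": contested,
    }

print(collection_report(records))
```

A Jupyter notebook could render such a dictionary as tables and charts for the final collection report.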
The studio of an artist is a complex space that contains and reflects the individual’s artistic journey related to the creator’s materiality, philosophy, inspiration and thinking. The ephemerality and variability of artists' studios make their documentation crucial for protecting and disseminating their artistic process and creation. Digital technologies and data offer tremendous opportunities to tackle this challenge.
This session will present an ontology-based documentation framework that captures the essence of artists’ creativity in the studio using interactive digital documentation methods. The interactive survey of the studio’s physical space is based on 360° panoramic documentation, while a video interview of the artists interacting with their workspace offers a wealth of exploration and investigation opportunities to users. The ontology implements the CIDOC Conceptual Reference Model and the Art & Architecture Thesaurus structured vocabulary created by the Getty Research Institute to ensure a semantic documentation framework for the thorough analysis of artists’ studios and the long-term preservation of these rich art resources.
The above methodology was implemented at Lemba Pottery, a studio in Cyprus where the artists build on their knowledge of medieval earthenware techniques to create functional pottery and sculptures. The documentation of the studio identified connections between the potter and his studio, showing how the spatial setting of Lemba Pottery is a key locus of creativity and work and thus an integral part of the artistic process.
This session addresses the increasing demand for the restoration of audiovisual archive content, considering the abundance of visual archives, videos and films worldwide. However, the current manual restoration methods and non-AI algorithms are both time-consuming and costly. Furthermore, although AI-based restoration techniques exist, they have not yet attained the desired quality.
In response to these challenges, this study proposes a hybrid approach that combines the strengths of manual restoration with the efficiency and cost-effectiveness of AI techniques. By leveraging existing AI frameworks and adapting them to the task of restoration, this method aims to surpass the limitations of previous approaches. The proposed method relies on a workflow that integrates artificial intelligence (AI) frameworks, originally designed for other applications, to achieve efficient and high-quality restoration and enhancement. Initially, keyframes are automatically selected from the original material. These keyframes undergo AI-based restoration or enhancement, guided by human supervision. This collaborative approach ensures that the AI models produce results aligned with the desired quality standards. Finally, a style transfer Generative Adversarial Network (GAN) is employed to apply the restored/enhanced keyframes to the remaining portions of the original material, resulting in a coherent and visually pleasing restoration.
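The automatic keyframe-selection step can be sketched with a simple frame-difference heuristic. This is only an illustration under assumptions: the abstract does not specify the selection method, and real frames would be image arrays rather than the short lists of pixel values used here.

```python
def select_keyframes(frames, threshold=30.0):
    """Pick the first frame, then every frame whose mean absolute
    pixel difference from the last selected keyframe exceeds the
    threshold (a crude shot-change detector)."""
    if not frames:
        return []
    keyframes = [0]
    for i in range(1, len(frames)):
        prev = frames[keyframes[-1]]
        diff = sum(abs(a - b) for a, b in zip(prev, frames[i])) / len(frames[i])
        if diff > threshold:
            keyframes.append(i)
    return keyframes

# Two near-identical frames followed by a scene change
frames = [[10, 10, 10], [12, 11, 10], [200, 190, 180]]
print(select_keyframes(frames))  # → [0, 2]
```

The selected keyframes would then be restored under human supervision, and a style-transfer GAN would propagate the result to the in-between frames.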
The rapid digitisation of cultural heritage data has resulted in vast repositories of information that encompass diverse entities such as artworks, historical figures, institutions and landmarks. However, the effective utilisation of this data is hindered by the challenge of entity disambiguation—accurately identifying and resolving references to entities with similar names or ambiguous contexts, and linking them to external knowledge bases such as Wikidata.
This presentation aims to address this issue by leveraging the power of deep learning models for entity disambiguation in cultural heritage data. The proposed approach combines state-of-the-art deep learning techniques with advanced natural language processing algorithms to improve the accuracy and efficiency of entity disambiguation. By harnessing the contextual information embedded within the textual descriptions, metadata, and interconnections of cultural heritage data, the models can discern subtle semantic cues that aid in disambiguation.
Key points to be discussed in the presentation include an overview of the prevalent challenges in entity disambiguation within cultural heritage data, the design and architecture of the deep learning approach to the problem, and the experimental evaluation of the models' performance. Overall, this presentation highlights the potential of deep learning models to advance entity disambiguation in cultural heritage data. By resolving ambiguities and connecting related entities more accurately, these models contribute to a deeper understanding and broader accessibility of our rich cultural heritage.
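As a toy illustration of the disambiguation task itself (not of the deep learning approach the presentation describes), candidates can be ranked by the word overlap between a mention's context and each candidate's description. The identifiers and descriptions below are made up for the example.

```python
def disambiguate(mention_context, candidates):
    """Pick the candidate whose description shares the most words with
    the mention's context (a crude stand-in for learned contextual
    representations)."""
    context_words = set(mention_context.lower().split())

    def score(candidate):
        return len(context_words & set(candidate["description"].lower().split()))

    return max(candidates, key=score)

# Hypothetical candidates for an ambiguous mention of "van Gogh"
candidates = [
    {"id": "painter-1", "description": "dutch post-impressionist painter"},
    {"id": "dealer-1", "description": "art dealer and brother of a painter"},
]
best = disambiguate("dutch post-impressionist painter of sunflowers", candidates)
print(best["id"])  # → painter-1
```

A deep model replaces the word-overlap score with similarity between contextual embeddings, which is what allows it to pick up the subtle semantic cues mentioned above.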
Yale University has recently launched LUX, a standards-based discovery platform that brings together collections from our museums, libraries, archives and special collections. The system improves upon existing infrastructures by seamlessly integrating traditional record-based search with graph-based queries, and clearly demonstrates end-user value of connecting and enriching data across domain, organizational and institutional boundaries. Leveraging links in the graph, we can allow users to explore the rich connections among the collections, people, places, concepts and events with a focus on discovery, rather than on searching.
With more than 17 million objects, described using 41 million records with some 2 billion relationships, LUX relies on automated reconciliation across the internal datasets and more than a dozen external authority sources. We rely heavily on data standards such as Linked Art for descriptive metadata, and IIIF for image interoperability. The data is intentionally FAIR, being also published as CC0 at persistent URIs. The paradigm used for synchronisation is based on IIIF Change Discovery, which in turn is built upon the fediverse-supporting ActivityStreams specification, enabling ease of expansion and decentralisation.
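The synchronisation paradigm can be illustrated with a small consumer sketch. IIIF Change Discovery publishes activities in paged ActivityStreams OrderedCollections that a consumer walks from newest to oldest; the in-memory `pages` structure below is a simplified stand-in for fetched JSON pages, and the field subset shown is illustrative.

```python
# Simplified stand-in for ActivityStreams collection pages: each page
# links to the previous one, and each activity names a changed object.
pages = {
    "page-1": {"prev": None, "orderedItems": [
        {"type": "Create", "object": {"id": "https://example.org/obj/1"}}]},
    "page-2": {"prev": "page-1", "orderedItems": [
        {"type": "Update", "object": {"id": "https://example.org/obj/1"}},
        {"type": "Create", "object": {"id": "https://example.org/obj/2"}}]},
}

def changed_ids(last_page_id, pages):
    """Walk pages backwards, keeping only the most recent activity
    per object, and return the object URIs to re-fetch."""
    seen, order = set(), []
    page_id = last_page_id
    while page_id:
        for activity in reversed(pages[page_id]["orderedItems"]):
            obj = activity["object"]["id"]
            if obj not in seen:
                seen.add(obj)
                order.append(obj)
        page_id = pages[page_id]["prev"]
    return order

print(changed_ids("page-2", pages))
```

In a real consumer, each page would be fetched over HTTP and the walk would stop at the last activity already processed, which is what makes the mechanism cheap to poll and easy to decentralise.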
In this session you will learn how Yale was successful in adopting, implementing and openly publishing a large scale knowledge graph for cultural heritage, including both the social cohesion and internal structures essential for innovation, and the technologies and standards used. You will gain access to the source code and data, and understand how to leverage the graph effectively and in real time in a coherent and friendly user interface.
The common European data space for cultural heritage (Data Space, for short) is one of 14 data spaces initiated by the European Commission. Building on Europeana’s major accomplishments in open data, community building, and data aggregation, it challenges the initiative to grow, innovate, and rethink its approach to cultural heritage data.
The Data Space will become a sustainable and trusted ecosystem for producers and users of European cultural data. It will thrive by building bridges with similar initiatives on various levels: European initiatives such as the SSHOC Marketplace and the European Open Science Cloud and national initiatives such as the Dutch Digital Heritage Network (NDE). Such collaborations with new entities and coordination with local and national initiatives will require a more flexible and adaptive approach to support the inclusion of more diverse types of data. The Data Space will need to embrace a more open and decentralised approach to data sharing to enhance cooperation across the sector and accelerate its digital transformation. Semantic interoperability and technologies like Linked Data and SOLID can be enablers of this change.
In the hope of informing future decisions regarding the use of persistent identifiers (PID) in the common European data space for cultural heritage, we have conducted an analysis of the usage of PIDs in the metadata that cultural heritage institutions deliver to Europeana.eu. Our analysis focuses on the persistent identification of cultural heritage objects and their digital representations, and we have analysed the usage of five PID schemes: Archival Resource Key (ARK), Digital Object Identifier (DOI), HANDLE, Persistent URL (PURL), and Uniform Resource Name (URN).
This lightning talk presents some statistics of PID usage in Europeana.eu, which show that 13% of the records in Europeana.eu contain a PID and that ARK and HANDLE are the most frequently used PID schemes. We have also analysed the uniqueness of the existing PIDs, and identified some data quality issues.
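Detecting which scheme (if any) an identifier uses can be approximated with regular expressions. The patterns below are heuristic illustrations, not the exact rules used in the Europeana analysis.

```python
import re

# Heuristic patterns for the five analysed PID schemes (illustrative).
PID_PATTERNS = {
    "ARK": re.compile(r"ark:/\d+"),
    "DOI": re.compile(r"\b10\.\d{4,9}/\S+"),
    "HANDLE": re.compile(r"hdl\.handle\.net/|\bhdl:"),
    "PURL": re.compile(r"purl\.(org|oclc\.org)/"),
    "URN": re.compile(r"\burn:[a-z0-9][a-z0-9-]*:"),
}

def detect_pid_scheme(identifier):
    """Return the first matching PID scheme name, or None."""
    for scheme, pattern in PID_PATTERNS.items():
        if pattern.search(identifier):
            return scheme
    return None

print(detect_pid_scheme("https://n2t.net/ark:/12345/x6kd"))  # → ARK
print(detect_pid_scheme("https://doi.org/10.1000/demo"))     # → DOI
```

Counting such matches over all delivered metadata records yields usage statistics of the kind reported in the talk.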
Rhineland-Palatinate has a rich and diverse cultural heritage, but many of the preserving institutions there are smaller, with a specialised local or thematic focus. To support digital access to this cultural treasure, a superordinate state portal "Cultural Heritage Portal Rhineland-Palatinate" is being developed. This portal will aggregate and link digital objects of the various institutions in the state. It will act as a pre-aggregator to supra-regional digital cultural portals such as the German Digital Library and Europeana.eu and make the state’s cultural heritage searchable and accessible worldwide.
While the metadata are recorded in digital form in most institutions, the use of standards like LIDO or EAD and of vocabularies like Geonames remains the domain of a few large institutions.
One of the state’s preserving institutions is the Mainz Carnival Museum. The museum’s collection covers 160 years of Mainz carnival history. All objects are digitised and recorded with detailed, collection-specific metadata, but are not yet accessible to the public. In this real-life example, the extraction, mapping and enrichment of metadata is presented. The resulting EDM record is as rich as possible and follows international standards – ready for further cascading aggregation towards Europeana.eu. Join this session to learn more.
In 2022, the Complutense University of Madrid conducted a citizen participation process to investigate the state of digital cultural heritage across its university museums, collections and archives. This process aimed to learn how different stakeholders (lecturers and professors, researchers, students and curators) use heritage materials for teaching and scientific dissemination.
After evaluating the results of this process, the researchers chose the Medialab Madrid Archive as a case study for designing, developing and assessing the use of digital heritage materials through participatory methods that included all stakeholders.
This lightning talk will show the open access demo developed following a participatory design methodology. Thanks to this platform, lecturers and professors, curators, students or researchers can easily create digital educational resources with digital heritage to support their classes, studies, research or scientific dissemination, utilising effective communication strategies and tools. Evaluations of these materials have shown that they improve student understanding of educational content and help to achieve the learning outcomes.
This presentation is about Augmented Reality, what it is and what it can be. It might be silly and simplistic to refer to the "Pokémon Go effect", but in fact there is a crucial aspect that we should embrace and promote: technology that stimulates interactions with the real world and real people as opposed to sucking users into predefined echo chambers.
A new paradigm is now starting to surface in the form of persistent Augmented Reality, which allows us to experience digital content in a physical space collaboratively and, more importantly, to collectively interact with that content. LIDAR technology is important in this respect, mapping our environment and giving us a presence in that digital layer.
One of the most beautiful things in museums, besides art or historic artifacts, are corners where kids can draw and share their interpretations by hanging their creations on a wall. It gives us a glimpse into their minds and into the artworks or relics themselves.
Dropping audio, text, images or video of the beholder next to a work of art will complement it, elevate it and allow you to interact with it, leaving a longer-lasting impression and/or food for thought you can take home and digest. Persistent Augmented Reality is the technology that can do just that...
Computed tomography scans (CT-scans) are a type of dataset that provides additional information in the research and curation of non-visible characteristics of complex archaeological artifacts and cultural heritage findings. CT-scans can reveal useful details about the inner structure of archaeological objects, human remains and burial findings.
The procedure to generate and study a CT-scans dataset requires specific infrastructures, skills, computational resources and software packages. On the other hand, immersive technologies, namely the extended reality spectrum of virtual reality, augmented reality and mixed reality, have become more accessible. In addition, the progress of the video game industry and the democratisation of game engines have raised new challenges in various scientific fields.
In this context, this session introduces an innovative methodology for the transformation of CT-scans from restricted use in laboratories to immersive experiences for the general public. Starting with the cost-affordable analysis and study of the fragments of the Antikythera Mechanism, we reused the whole CT-scans dataset to generate 3D models with highly photorealistic textures. Through the combination of VR and video game technologies, we developed an interactive virtual tool for the study of the inner structure of the fragments. The evaluation by end users underlined the success of the transformation. Our methodology could be applied to a wide spectrum of archaeological artifacts, offering breathtaking experiences to scientists and museum visitors.
In times of war, such as those in Ukraine and Syria, damage and heavy losses to cultural heritage sites are observed by local populations on the ground as well as by the international cultural heritage community. In Europe, the Ukrainian conflict has brought to the fore the need to digitally store and save Ukrainian cultural heritage through a series of participatory initiatives by individual cultural heritage experts (e.g., SUCHO) and institutions (e.g., UNESCO actions for Ukraine, April 27, 2023).
In this context, motivated by our wish to create a service that would benefit the Ukrainian cultural heritage sector, we launched the Space4CC (Space for Cultural Heritage, https://space4cc.eu/) joint venture, which will produce a tool that gathers data on damage to cultural heritage sites due to warfare. By combining space and Earth observation data (Copernicus, Galileo) with citizen-generated data, such as photos of damaged heritage, we are about to deliver a service that can be used by cultural heritage professionals, public agencies and civil society organisations as an effective tool to monitor the direct and indirect impact of conflict on cultural heritage.
Our Space4CC idea, and soon-to-become an operationalised service, has been awarded in two European-wide competitions organised by the European Union Agency for the Space Programme (EUSPA).
In our session, we will provide insights into the technical innovations developed through the Space4CC tool and its data valorisation potential for the benefit of war-torn cultural heritage.
CirculAR is an Augmented Reality application that has been developed to provide a distinctive user-environment interaction in a gamified manner. Through integrating learning and entertainment elements, this application facilitates immersive exploration of cultural heritage, specifically focusing on the rich heritage of Ancient Greece.
The functionality of CirculAR relies on localised simulation technology and visual detection, enabling the augmentation of information and interactive 3D models to enhance the surrounding environment at two renowned archaeological sites and a museum. The AR app integrates seamlessly with these sites' existing infrastructure, enhancing its appeal to visitors and their relaxation and enjoyment during the visit. The unique features of CirculAR have been designed to augment the accessible and inclusive nature of the immersive experience provided to end users on site (such as visual and audio descriptions and a virtual agent). The application incorporates various gamified and educational components, including quizzes, animations, content visualisation and manipulation, and scoring mechanisms. The CirculAR authoring tool offers a user-friendly interface enabling institutions and content owners to preserve, curate and disseminate their cultural heritage data. Based on meticulously designed 3D content, augmented storylines strive to faithfully replicate the ancient sites, drawing from extensive research conducted by museums and archaeological sites. Due to the archaeological density of the selected sites, our application is anticipated to make a substantial contribution towards emphasising existing elements and recovering missing fragments essential for a comprehensive understanding of these areas as a whole. Join this session to learn more.
'Interwoven' is a pioneering platform in South Asia that merges artificial intelligence and machine learning technologies to offer a unique narrative of global textile collections (https://interwoven.map-india.org/). The platform uncovers unseen connections between artworks from diverse cultures, presented visually and intuitively to inspire exploration and discovery. By critically evaluating the AI model's architecture and interface, this session will explore how the 'Interwoven' platform can be optimised to enhance user engagement and interactivity, improve cultural heritage object annotation, and stimulate more effective storytelling experiences.
The research is based on:
1) A comprehensive literature review of existing knowledge in the field of digital humanities and digital heritage.
2) Interviews with designers and developers of 'Interwoven' to understand the platform's development process, underlying principles, and intended user experience.
3) Usability tests with the target audience, utilising heuristic principles of interactive design to evaluate the effectiveness of the platform's user interface.
Using a multi-pronged approach of enquiry, the larger objective of the presentation is to analyse the current annotation and its limitations to address the challenges in creating robust and consistent metadata for diverse cultural heritage objects; to explore the role of user vis-a-vis curators and annotators in such context; and finally, to investigate the extent to which the visualisation of cultural heritage in this platform enables a casual browser to transform into an enthusiastic and informed user through practical tools of storytelling.
In recent years we have seen a surge of interest in the Cultural Heritage (CH) sector towards exploring and adopting AI. First making waves in other professional sectors, AI solutions have come around to show promising results in different areas of operations of Cultural Heritage Institutions - from content analysis and knowledge extraction to machine translation and enrichment of metadata. However, a number of technical and knowledge barriers that limit the broader uptake of AI in the sector persist. At the same time, ICT actors encounter a number of challenges when it comes to efficiently transferring AI techniques that have been successful in other sectors to the CH domain. In this panel we invite developers, brokers, promoters and experts of AI to share their insights into the opportunities waiting to be seized and the challenges faced by both CH and technology stakeholders. Will our assumptions be confirmed or debunked?
ARSTEAMapp addresses the challenges of improving the teaching of STEAM disciplines for 12-16 year-olds, offering an innovative approach to establish connections between these disciplines in a meaningful and viable way within the educational context.
The project will develop an augmented reality (AR) educational app that can be used to explicitly reflect on the connections between STEAM disciplines through the analysis of relevant European cultural heritage (sculpture and buildings).
As an example of the app's use, students could scan the façade of a cathedral using tablets or smartphones; next, through a virtual tour, they would discover aspects related to science (type of rock), technology (tools), engineering (successes and failures in its design), mathematics (structure) and art (the historical context in which the cathedral was designed and built; the social and cultural importance of the cathedral), while making explicit connections between the STEAM disciplines' content.
Considering all beneficiaries, ARSTEAMapp provides new theoretical foundations for the didactic transposition of integrative STEAM, in the context of Augmented Reality resources, and aims to give the educational community an easily accessible teaching-learning resource to adopt this educational approach, which can be used in all European schools.
In the session, participants will have the opportunity to examine the STEAM components applied to Hagia Sophia, the Eiffel Tower, Bran Castle and the Prague Astronomical Clock using the app. In addition, by using the app with their own students, they can assess their students' knowledge of the STEAM fields through the questions in the application.
This year the Anne Frank House opened up all its research results in an online knowledge base. More than 1,100 items were added, structured in a new way and enriched with (moving) images.
To achieve this result, we had to transform all of our texts into data fields to make the information searchable, findable and reusable for others. We created a data model for four types of content: persons, locations, events and subjects.
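Such a four-type content model might be sketched as follows. The field names and the layered summary/body split are assumptions for illustration, not the Anne Frank House's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Entry:
    """One knowledge-base entry; entries reference each other by id,
    so the content can be traversed and reused as linked data."""
    id: str
    type: str            # "person" | "location" | "event" | "subject"
    title: str
    summary: str = ""    # short first layer of information
    body: str = ""       # deeper layer for interested readers
    links: list = field(default_factory=list)  # ids of related entries

anne = Entry("p1", "person", "Anne Frank", links=["l1"])
annex = Entry("l1", "location", "The Secret Annex", links=["p1"])
print([e.title for e in (anne, annex)])
```

The layered summary/body fields mirror the idea of structuring data so different audiences can stop at the depth that suits them.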
This presentation will talk the audience through this process, share our workflows and show the results. We will demonstrate:
1) How heritage organisations can use this way of working to unlock valuable information for the public (other than collection information)
2) Do’s and don’ts for data migrations -> Points of attention to make it reusable for others (Linked Open Data/ API)
3) How to present the information in such a way that it’s interesting for different types of target audiences -> structure your data in a layered way
The result of this project is an online knowledge base, accessible to everyone interested in Anne Frank and the people, locations and subjects linked to her story. Via basic search fields, advanced search options and filters, people can search through our knowledge base without having to contact an expert from our museum. Search results are structured in a list or can be plotted on a map, presenting the data from the knowledge base in an accessible, story-driven way for students and teachers.
This session will present the ongoing work of MuseIT, a Horizon Europe project which develops methodologies, services and technologies that will facilitate accessibility of cultural heritage and extend participation and inclusion for everyone; enrich cultural experiences and engagements; and generate tailored multimodal representations (e.g., 3D models for haptics, music, sound), at scale, in an interoperable and reusable way, according to the FAIR data principles.
MuseIT addresses three main challenges:
1) Extending accessibility of cultural assets: done by developing multisensory representations and alternative expressions, brought together in an integrated, interactive and immersive user experience environment (AR/VR, haptic). This aims to enable engagement by the public regardless of functional or sensory impairments, based on their own needs and preferences.
2) Broadening engagement and cultural co-creation: access to cultural heritage remains limited for those with disabilities, and participation in the creative production of cultural assets has many barriers (e.g., mobility issues, or a lack of accessible tools and technologies to enable co-creation from a distance). MuseIT proposes an accessible platform for the co-creation and performance of music at a distance with perceived zero latency, incorporating tools for intricate emotional communication and the exchange of alerts and cues needed to enable remote performances.
3) Extending methodologies for inclusive preservation of cultural heritage: this will include ways of improving accessibility to preserved material, and will enable the storage of layered multisensory representations in an integrated way, including means of storing and preserving haptic information.
Join the session to learn more.
Communities facing social, economic or political marginalisation often encounter significant challenges in asserting control over their data. This can be due to limited resources, lack of legal recognition of control, data sharing practices that disregard cultural protocols, online hate and discrimination or consequences of war and disasters.
A use case project of TIB - Leibniz Information Centre for Science and Technology and the German Documentation Centre for Art History, supported by SUCHO and funded by the German Government Commission for Culture and the Media, aims to photographically document endangered and culturally significant buildings in selected regions of Ukraine and make them accessible with descriptive data. The project makes use of Wikibase and provides:
- A secure, currently password-protected database for collecting and enriching information about at-risk buildings.
- A sustainable infrastructure for metadata, preview images and links to the full resolution images stored at Foto Marburg.
- A LIDO.xml2WB transformation process via Python scripts, which can be reconfigured for other projects.
- A multilingual system with version history and persistent identifiers.
- Built-in querying and visualisation platform via SPARQL (in federation with Wikidata or other databases providing an endpoint) with map or timeline views.
- An environment which is connectable for citizen activities in endangered areas and/or for vulnerable communities.
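The LIDO.xml2WB idea (transforming LIDO XML into Wikibase items) can be illustrated with a minimal sketch. The snippet below is a simplified, non-namespaced LIDO-like record (real LIDO is namespaced and far richer), and the output dictionary is only an approximation of a Wikibase item.

```python
import xml.etree.ElementTree as ET

# Simplified LIDO-like record; element names echo LIDO's
# titleWrap/titleSet/appellationValue but omit namespaces.
LIDO_SNIPPET = """
<lido>
  <objectIdentificationWrap>
    <titleWrap><titleSet>
      <appellationValue lang="uk">Будинок з химерами</appellationValue>
    </titleSet></titleWrap>
  </objectIdentificationWrap>
</lido>
"""

def lido_to_wikibase_item(xml_text):
    """Map appellation values to multilingual Wikibase-style labels."""
    root = ET.fromstring(xml_text)
    item = {"labels": {}, "claims": {}}
    for value in root.iter("appellationValue"):
        item["labels"][value.get("lang", "und")] = value.text
    return item

print(lido_to_wikibase_item(LIDO_SNIPPET))
```

A production pipeline would additionally map descriptive and administrative LIDO elements to Wikibase properties and push the items through the Wikibase API.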
Starting with the pilot in Ukraine mentioned above, an initiative of TIB and Avoin GLAM, Finland, aims to co-design community-managed, privacy-oriented archival data platforms for vulnerable communities to safeguard their cultural heritage. Join this session to learn more.
As part of the Danube-AI subproject of the National Digital Heritage Laboratory, the costume designs kept in the documentation repository of the Hungarian State Theatre of Cluj-Napoca were digitised in 2022.
The metadata of the costume designs related to the 94 performances of the Hungarian State Theatre of Cluj-Napoca between 1959 and 1980, as well as information related to each performance, were entered into the Wikibase-based semantic database ITIdata of the Institute of Literary Studies of the Centre for Humanities Research (https://n9.cl/c3obz). This will provide theatre historians with context for research on the social history of theatre in the 1960s and 1970s.
ITIdata is primarily a space for literary research, so hosting a theatre history project raised challenges. One of our main questions was what specification to develop for the costume design records, what new properties and entities were needed for optimal implementation, and into which existing data network to integrate the costume design records. We opted for semi-automatic data loading.
The standardised data structure developed in ITIdata, based on semantic web technology, allows us to aggregate the complex set of relationships we have mapped to the Europeana Data Model. Our system is also prepared for data enrichment with Wikidata. High-resolution digital surrogates of the costume designs have been created that meet the quality standards required by Europeana. We hope that our collection can serve as a model for other semantic web-based aggregation projects - join this session to learn more.
Since the emergence of Pokémon Go, people have understood the impact of mobile devices and augmented reality when engaging in location-based (LB) gameplay. Mapping the possibilities of LB gaming in culture, and a creative integration of this technology, provides valuable opportunities to improve cultural heritage dissemination and understanding in the context of cultural tourism among younger audiences.
This session presents a project in cooperation with the Sint Maarten Utrecht Foundation to realise a location-based game for the city of Utrecht, which is part of the European Cultural Route Via Trajectensis. The main research objective was to explore how an engaging and playful experience along this European cultural heritage route could be curated into an immersive story for young audiences using open-source data from platforms like Europeana.eu, Google Arts & Culture, OpenStreetMap or ArcGIS. A first concept of this interactive app was developed during Europeana's Low-Code hackathon and will be further developed and scaled up to other cities and heritage sites based on a feasibility study.
One of the outcomes of this study is that participants have a positive attitude towards augmented reality location-based games, and that such games have the potential to enhance social experiences and cultural heritage exploration. The game uses co-creation to convey the values of Saint Martin, such as solidarity and charity: the user can earn points, and the winner can donate these points/money to a chosen charity. Artificial intelligence is used to optimise the experience for the user.
In 2021, during a cultural hackathon named Coding da Vinci, my colleague and I developed the web application “Plantala”. With Plantala you can create your own mandala from aesthetic plant parts, save it, print it and colour it afterwards. Along the way, you will also learn something about the special characteristics of individual plants. All items are based on original biological educational panels from the collection of Göttingen University and were published under a Creative Commons license.
Since then, it has also been used by biology teachers, and by stressed people who calm themselves down by painting “Plantalas”.
Originally designed as an open-source application for a media station in an exhibition, the underlying framework of Plantala offers cultural institutions the opportunity to develop their own media station for creating item-based mandalas. This framework, called Media Station as a Service (MaaS), gives institutions an easy way to combine the technical benefits of Plantala with other digital images.
This approach was successfully tested in practice just a few months later: “Julala” was the first requested reuse of the application, and in three simple steps “Plantala” became “Julala”. With Julala you can create your own mandala with dragons, whales and unicorns. Along the way, you'll learn about an important political wedding in the 16th century and how people celebrated festivals in the Renaissance.
In my lightning talk I want to encourage cultural institutions to re-use the Media Station as a Service (MaaS) to create their own “Yourlala”.
Preserving and presenting historical clothing items in museum collections is challenging due to their sensitivity to handling, yet of great importance given their deterioration over time. Creating digital replicas of these items can give museums unique documentation and presentation possibilities.
This session will demonstrate how to create 3D digital models of historical clothing items and present them with realistic motion simulation.
The combination of cultural heritage and digital games is nothing new. The technological tools behind digital heritage and games, such as 3D scanning and photogrammetry, are similar, and cross-pollination between disciplines exists, such as archaeogaming (Caracciolo 2022). However, the current literature mostly focuses on serious games, i.e. games designed with specific educational aims in mind, which, despite their successful application, are still a minority compared with ordinary commercial games, now estimated to reach 3.2 billion players globally (Newzoo 2022). In fact, many of the most successful commercial games are rich in cultural heritage representations (Copplestone 2017), and can be powerful vehicles for the diffusion of cultural heritage within and outside designated institutions (Nizzo 2020). Nonetheless, it is important for cultural heritage to be represented appropriately, as recently demonstrated by the Sámi Council of Finland, which requested the removal of an outfit from the game Final Fantasy XIV for the unauthorised use of their cultural heritage (Saami Council 2023).
This session will discuss the use of cultural heritage data in commercial digital games, the problems that might arise in terms of cultural property and appropriateness of use, and potential solutions. Drawing on case studies from Italy, Canada, China and India, the presentation will examine various implementations of cultural heritage in digital games and propose methods of collaboration between cultural heritage institutions and the creative industries.
For small and medium-sized cultural heritage organisations (SMCHOs), it is often difficult, if not impossible, to digitise their collections, describe them with adequate metadata and deliver the results to Europeana, either through an aggregator or directly. Consequently, they lag behind: their content is missing from Europeana and from the “digital world” at large, and remains unavailable to most of the interested audience. Lacking a digital presence and digital experience also hinders these organisations from cooperating with and learning from their peers all over Europe.
To aid SMCHOs, a Europeana Task Force is developing a standardised workflow in which each step is provided with appropriate, standardised, open or free tools and best practices. This will reduce the capacity SMCHOs need, bridge the capabilities gap and decrease the budget required. In this session we will present a Digitization Handbook, a living "document" (using Static Site Generator technology instead of static PDF files), to help SMCHOs get from shelf to Europeana. We will also present and discuss the results of the questionnaire on the problems SMCHOs encounter during the process from digitisation to publication.
Many images on Europeana.eu are in low resolution and don't meet user expectations or the requirements set by the Europeana Publishing Framework. Images often suffer from degradations including camera blur, sensor noise, sharpening artifacts, and JPEG compression.
Image Super-Resolution (SR) techniques aim to enhance the visual quality and refine the details of low-resolution images by generating corresponding high-resolution versions with finer details. Recent advances in SR rely heavily on deep learning models that leverage data-driven approaches to accurately reconstruct missing details. However, assessing the quality of the SR output is challenging, as commonly used quantitative metrics like PSNR and SSIM only loosely correlate with human perception.
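As an illustration of why metrics like PSNR only loosely track perception, here is a minimal, stdlib-only sketch of how PSNR is computed for 8-bit images. The pixel data is synthetic: two error patterns that look very different to a viewer (a uniform shift versus a single badly wrong pixel) share the same mean squared error and therefore receive the same PSNR score.

```python
import math

def psnr(original, reconstructed, max_value=255.0):
    """Peak signal-to-noise ratio between two equally sized 8-bit
    images, given here as flat lists of pixel values."""
    if len(original) != len(reconstructed):
        raise ValueError("images must have the same number of pixels")
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_value ** 2 / mse)

# A uniform error of 4 grey levels and one concentrated error of 40
# levels produce the same MSE (16), hence identical PSNR values,
# although they are perceptually very different.
flat = [128] * 100
uniform = [132] * 100               # every pixel off by 4
concentrated = [128] * 99 + [168]   # one pixel off by 40
print(round(psnr(flat, uniform), 2))       # → 36.09
print(round(psnr(flat, concentrated), 2))  # → 36.09
```

This is exactly the kind of failure mode that motivates the human visual comparison described in the abstract.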
Different deep learning models may perform better on specific types of images, such as photos, drawings and prints. Following the experiments of the Europeana Foundation R&D team, we at EFHA developed a framework for evaluating various deep learning models using selected images from the EFHA collections on Europeana.eu, in order to determine the most suitable SR model for each image type. Through a crowdsourcing visual comparison tool, users can browse the EFHA test collection, zoom in on different areas of the images, and compare and rank the results obtained from different SR models.
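One straightforward way to aggregate the per-user rankings such a comparison tool collects is a Borda count, where a model ranked first among k candidates earns k-1 points and the last earns 0. The sketch below is a hypothetical illustration; the model names and vote data are invented, not the framework's actual aggregation method.

```python
from collections import defaultdict

def borda_scores(rankings):
    """Aggregate user rankings (best model first) with a Borda count."""
    scores = defaultdict(int)
    for ranking in rankings:
        k = len(ranking)
        for position, model in enumerate(ranking):
            scores[model] += k - 1 - position
    return dict(scores)

# Three users each rank three hypothetical SR models, best first.
votes = [
    ["esrgan", "swinir", "bicubic"],
    ["swinir", "esrgan", "bicubic"],
    ["esrgan", "bicubic", "swinir"],
]
print(borda_scores(votes))  # → {'esrgan': 5, 'swinir': 3, 'bicubic': 1}
```

A Borda count is robust to individual outlier votes, which matters when comparisons come from an open crowdsourcing setting.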
By leveraging user feedback and incorporating visual comparison tools, our framework aims to improve the quality of images within Europeana.eu and provide insights into selecting optimal SR models based on image type. Join this session to find out more.
This paper proposes a computational framework for a novel experience of embodied knowledge archives, an integral part of Intangible Cultural Heritage (ICH), focusing specifically on the exploration of dance performances within the audiovisual archive of the Prix de Lausanne. By applying advanced computer vision algorithms to capture human poses, the computational augmentation of moving image archives enables the extraction of rich, processable data, resulting in two main levels of augmentation: archive-level browsing and item-level movement visualisation.
First, dimensionality reduction algorithms like t-SNE and UMAP are employed to visualise the archive in 2D or 3D, enabling both a more 'generous' browsing experience and serendipitous discoveries. The effects of different parameters on the mapping results, as well as the clustering of similar poses, will be discussed.
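The project uses the nonlinear methods t-SNE and UMAP; as a minimal, dependency-free illustration of the same underlying idea (projecting high-dimensional pose vectors onto a 2D plane for browsing), the sketch below implements linear PCA via power iteration. The toy pose vectors and function names are hypothetical, not the project's actual pipeline.

```python
import math
import random

def mat_vec(m, v):
    """Multiply a square matrix (list of rows) by a vector."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

def power_iteration(cov, steps=200, seed=0):
    """Leading eigenvector of a symmetric matrix via power iteration."""
    rng = random.Random(seed)
    v = [rng.random() for _ in range(len(cov))]
    for _ in range(steps):
        w = mat_vec(cov, v)
        norm = math.sqrt(sum(x * x for x in w))
        if norm == 0:
            break
        v = [x / norm for x in w]
    return v

def pca_2d(points):
    """Project d-dimensional vectors onto their two principal axes."""
    n, d = len(points), len(points[0])
    means = [sum(p[i] for p in points) / n for i in range(d)]
    centred = [[p[i] - means[i] for i in range(d)] for p in points]
    cov = [[sum(row[i] * row[j] for row in centred) / n for j in range(d)]
           for i in range(d)]
    v1 = power_iteration(cov)
    lam1 = sum(mat_vec(cov, v1)[i] * v1[i] for i in range(d))
    # Deflate the leading component, then find the second axis.
    cov2 = [[cov[i][j] - lam1 * v1[i] * v1[j] for j in range(d)]
            for i in range(d)]
    v2 = power_iteration(cov2, seed=1)
    return [(sum(row[i] * v1[i] for i in range(d)),
             sum(row[i] * v2[i] for i in range(d))) for row in centred]

# Toy 4-D "pose vectors" (two keypoints, x and y coordinates each),
# mapped to 2-D coordinates suitable for an archive overview plot.
poses = [[0.0, 0.0, 1.0, 1.0],
         [0.1, 0.5, 1.1, 1.6],
         [0.2, 1.0, 1.2, 2.1],
         [0.3, 1.5, 1.3, 2.4]]
coords = pca_2d(poses)
```

Real pose vectors from a video archive would have dozens of dimensions per frame, which is exactly why a 2D or 3D projection is needed before the archive can be browsed spatially.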
Second, various real-time and preprocessed visualisations are generated at scale to unlock new modes of viewing dancers' movements. These novel ways of representing bodies in motion offer a captivating and aesthetic experience for the audience, revealing the ephemeral nature of dance performances.
This session will present a comprehensive framework that combines AI, audiovisual archives and computational techniques to preserve embodied knowledge, unlock new modes of access and enhance the understanding and appreciation of dance performances as an example of ICH practices. This research offers valuable insights into the potential applications of AI in the preservation and exploration of diverse forms of embodied knowledge, with methods and concepts that go beyond the specific case of the Prix de Lausanne dance performances.
For decades, GLAM institutions have been exploring the benefits of publishing their digital collections as Linked Open Data using controlled vocabularies. They have reused external repositories such as Wikidata and VIAF to enrich their content as well as to experiment with advanced visualisations. Recent advances in technology have provided a new context in which data quality has become a crucial element in a wide diversity of tasks such as the training of AI models and the use of NLP methods.
This session will present several methods to assess and describe the data quality in terms of Linked Open Data in the GLAM sector. In this context, SPARQL can be used to query an RDF dataset and retrieve the information required (e.g. number of classes, properties, etc.). More advanced approaches are based on the use of Shape Expressions that enable the definition of constraints to be tested against RDF datasets.
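As a sketch of the kind of profiling query described, the snippet below shows an illustrative SPARQL query that counts the distinct classes used in a dataset, alongside a stdlib-only Python evaluation of the same statistic over a toy in-memory triple list. The prefixes and resources are invented for the example.

```python
# A SPARQL query of the kind described: count the distinct classes
# used in a dataset, a basic profiling statistic for LOD quality.
COUNT_CLASSES = """
SELECT (COUNT(DISTINCT ?class) AS ?count)
WHERE { ?s a ?class . }
"""

RDF_TYPE = "rdf:type"

# A toy in-memory dataset of (subject, predicate, object) triples,
# standing in for an RDF graph queried through a SPARQL endpoint.
triples = [
    ("ex:mona_lisa", RDF_TYPE, "ex:Painting"),
    ("ex:mona_lisa", "dc:creator", "ex:da_vinci"),
    ("ex:da_vinci", RDF_TYPE, "ex:Person"),
    ("ex:night_watch", RDF_TYPE, "ex:Painting"),
]

def count_distinct_classes(graph):
    """Hand-evaluated equivalent of the SPARQL query above."""
    return len({o for s, p, o in graph if p == RDF_TYPE})

print(count_distinct_classes(triples))  # → 2
```

Against a real endpoint the query string would be sent via a SPARQL client; the manual evaluation here just makes the semantics of the aggregate explicit.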
The session will encourage researchers to reuse the high-quality data provided by LOD repositories made available by GLAM institutions. It will provide an overview of methods for assessing data quality in LOD, including real examples based on several research projects carried out in collaboration with other institutions. The code is available as open-source repositories so that the results can be reproduced.
http://rua.ua.es/dspace/handle/10045/109459
https://rua.ua.es/dspace/handle/10045/117374
https://doi.org/10.1002/asi.24761
In cooperation with the Leipzig Museum of Natural History, a scanning system will be established to enable the museum to digitise large parts of its collection despite financial constraints.
Various methods (structured light scanning, photogrammetry, laser scanning) are being evaluated - especially regarding personnel capacities - and groups of objects that are suitable for the different recording methods will be defined. The aim is to determine a workflow that enables the archiving of three-dimensional objects, even for small museums with limited budgets, and to establish a procedure from acquisition to archiving for various purposes.
Digital copies will be made accessible in a web-based, annotated form and used in educational contexts: for example, 'reviving' preserved insects to illustrate movement sequences, or enlarging exhibits so they can be examined in detail and from all sides. The plan is to develop a programme that makes it possible to curate one's own exhibitions in a participatory manner and to share and further develop knowledge content.
cura3D has more than ten years of experience in the development of participative museum applications as well as in the acquisition of three-dimensional exhibits. Based on this experience and our core product, intuitive exhibition planning software (used in large European museums such as the Kunsthistorisches Museum Vienna, the Staatliche Kunstsammlungen Dresden and the Kunstmuseum Basel), we are trying to establish a process that sustainably links the acquisition of three-dimensional objects with the museum didactic use. Join this session to learn more.
How can digital technologies revive a place that no longer exists?
Until 2018, a film history landmark in Moscow attracted filmmakers and researchers from all over the world: the apartment of Soviet film director Sergei Eisenstein. Eisenstein’s apartment had been, for decades, of enormous importance for Russian civil society because of its atmosphere of cultural exchange and diversity. Eisenstein was from Riga and his Jewish father’s family came from Ukraine. His understanding of cultural diversity, represented in his apartment, is especially timely in today’s world - a respectful approach to different cultures and their potential to inspire one another.
Over several decades, film historian Naum Kleiman had turned Eisenstein's apartment into a centre of Eisenstein research. The European Film Academy declared it a European Treasure. However, in the course of the political dismantling of the Moscow Film Museum, to which the Eisenstein cabinet officially belonged, the apartment was dismantled in 2018.
Our research project restores access to this unique space and to the intellectual cosmos of Sergei Eisenstein. The digital reconstruction of this cultural monument is a case study for our multidisciplinary research in Virtual Reality (VR), Information Visualisation and 3D-Sound. We explored new ways of representing and visualising it digitally. Our research lays the foundation for an international, interactive, constantly evolving web platform: Eisenstein’s House. We envision it as a digital meeting space for the entire international Eisenstein community – researchers, students, artists and interested non-professional users. Join this session to learn more.
The emergence of highly efficient and accessible technologies opens up new opportunities for generating multimodal virtual representations of musical instruments. These primarily consist of audio recordings, three-dimensional models, simulation data and documents. Using the standard for Virtual Acoustic Objects (VAO) - which is also capable of incorporating historical acoustical spaces and situations - virtual objects can be used for artistic, educational, and research purposes as interactive representations and virtually playable musical instruments.
The framework supports not only the recording, analysis, segmentation and structuring of multimodal data, including AI-based approaches, but also the conversion of photogrammetric interior models to simulate their acoustics at any virtual position. With the internal data relations and interfaces for various environments and applications, it is possible to interactively animate segmented parts of a 3D model (e.g. piano keys), triggered by interactions or MIDI signals, resulting in real-time auralisations while the corresponding mechanical actions can be observed. With its linked information system, person and institution networks, object and material provenance and other information classes can be accessed, visualised and embedded in web and app environments.
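As a purely hypothetical sketch of the MIDI-triggered animation idea described above, mapping note-on events to piano-key segments of a 3D model might look like this. The segment naming scheme and the animation callback are invented for illustration, not the VAO framework's actual interface.

```python
# MIDI note numbers for the range of a standard 88-key piano.
MIDI_A0 = 21   # lowest key
MIDI_C8 = 108  # highest key

def note_to_segment(note):
    """Return a 3D-model segment id like 'key_039' for a MIDI note,
    or None if the note lies outside the keyboard range."""
    if not MIDI_A0 <= note <= MIDI_C8:
        return None
    return f"key_{note - MIDI_A0:03d}"

def handle_note_on(note, velocity, animate):
    """Dispatch a note-on event to an animation callback, scaling
    the key-press depth by the MIDI velocity (0-127)."""
    segment = note_to_segment(note)
    if segment is not None and velocity > 0:
        animate(segment, depth=velocity / 127)

# Example: middle C (note 60) played at velocity 100 triggers the
# animation of the fortieth key segment.
pressed = []
handle_note_on(60, 100, lambda seg, depth: pressed.append((seg, round(depth, 2))))
print(pressed)  # → [('key_039', 0.79)]
```

In a real-time setting the callback would drive the 3D engine's animation of the segmented key mesh in sync with the auralisation.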
Attendees of this session will gain insights into multimodal data relations of musical instruments, acoustical spaces and interactive virtual representations. Discussions of the VAO-standard will be focused on its potential mapping to the Europeana Data Model. The session will present the outcome of the digitisation projects TASTEN and DISKOS, providing a comprehensive overview of workflows, challenges and use cases of VAOs in creative and remote research situations.
Transcribathon is a citizen science platform which allows users to engage with cultural heritage materials by transcribing and geo-tagging them. This session will focus on the new technical modules developed during the EnrichEuropeana+ project, showing how automatic handwritten text recognition (HTR) and automatic semantic enrichments work and how they were technically implemented.