CULTURA Virtual Research Environment

The Virtual Research Environment (VRE) for the Digital Humanities is the central component of the CULTURA project. It supports users with different levels of experience in using a variety of tools to interact with a number of cultural heritage collections.

A number of different cultural collections have been incorporated into the CULTURA Virtual Research Environment, which provides a suite of useful services integrated into a unified portal. These services are central to enhancing a user's engagement with an archive and include faceted search tools, annotators, social network visualisation tools, and personalised recommenders. The CULTURA portal is built on top of the Drupal content management system, which provides numerous services that, while essential to CULTURA, are not core research elements, such as user authentication and system-wide logging.

Major Features and Functionalities:

  • Provides a user-friendly interface that supports users with varying levels of experience in exploring digital cultural collections.
  • Enables multiple services to interact within one seamless digital humanities environment. The service-oriented architecture, coupled with parameterised launch of these services, allows for quick integration of new tools and features.
  • Is corpus agnostic, allowing different digital humanities collections, with varying degrees of metadata, to be incorporated.
  • Provides extensive user logging and tracking, which enriches the personalisation process.
  • Is easily extensible: new services developed as Drupal modules can slot into the environment and interact with pre-existing services.

Sample screen shots

Figure 1. An Illumination from the IPSA collection displayed within the CULTURA Virtual Research Environment

Figure 2. A witness statement from the Bureau of Military History displayed within the CULTURA Virtual Research Environment

Who to contact:

  • Prof. Owen Conlan - Email: Owen dot Conlan at cs dot tcd dot ie


Cultural Heritage Collections

The concepts developed within the CULTURA project have been proven through the use of two reference cultural heritage collections: The 1641 Depositions and IPSA. These two collections are particularly relevant for humanities scholars (historians and art historians, respectively) and were chosen as case studies because they cover different and complementary aspects of research on cultural heritage. The 1641 Depositions are composed of textual documents that required the development of tools for text processing, while IPSA is mainly a collection of digital images and metadata that required the development of tools for multimedia delivery. Yet the CULTURA approach proved effective for both collections, with a number of tools (annotations, visualisations, narratives, personalisation) shown to be independent of the content and thus easily extensible to additional collections. This was demonstrated when a third collection, a set of witness statements collected by the Irish Bureau of Military History, was integrated into CULTURA.

The behavior of users during user evaluations has been consistently tracked throughout the project, allowing the CULTURA consortium to integrate the content of the two collections with valuable information on user interest. Moreover, user annotations are stored on a separate server and constitute an important starting point for future studies of user interaction with digital cultural heritage collections.

In the case of IPSA, the CULTURA project enabled a complete normalization of names and subjects to be carried out. Moreover, a number of procedures for exporting both textual metadata and digital images (thumbnails and complete images) have been developed, creating an important foundation for the fast inclusion of new collections within the CULTURA environment.

The main outcomes of this approach are:

  • Tracking of valuable data on user interaction with cultural heritage collections, including the use of visualization tools.
  • Enrichment of the collections with guided paths, in the form of short lessons called narratives.
  • Collection of user-generated content associated with the digital objects, in the form of annotations on complete items, textual elements, and image details.
  • Development of procedures to improve interoperability between an original collection and the CULTURA environment.

 

In the context of the 1641 Depositions, CULTURA has allowed the collection to be exploited and enhanced in a number of significant ways:

  • As part of the work carried out on the normalization of the highly variable and inconsistent early modern English contained in the Depositions, a manually normalized corpus comprising approximately 10% of the total corpus has been created. This resource is ideally suited to supporting the automated training of a range of natural language processing tools, and to facilitating the validation of such tools.
  • CULTURA has allowed work to be carried out on the manual normalization of person and place names. While this is a mammoth task that remains work in progress, CULTURA has made important steps towards the creation of a reliable and comprehensive gazetteer for early modern Ireland, a resource that remains something of a holy grail for historians and genealogists.
  • Prior to their inclusion in CULTURA, the Depositions were already accompanied by detailed, manually-generated metadata. The normalization of the text, and the application of entity extraction techniques by IBM and Commetric has significantly enriched this dataset.

 

In the case of the Bureau of Military History, the existing collection was enhanced in a number of ways:

  • Text was extracted from the existing pdf files and entity extraction performed over this new corpus. This entity extraction powered new visualizations and recommenders.
  • New faceted search features were added to the collection to support user exploration.
  • Narratives that guide users to specific artifacts in the collection based around a specific theme were implemented, and annotation functionality added.

Who to contact:  

  • Prof. Micheál Ó'Siochrú for 1641 - Email: OSIOCHRM at tcd dot ie
  • Prof. Maristella Agosti for IPSA - Email: agosti at dei dot unipd dot it

 


Personalisation

Cultural heritage collections often contain a large amount of resources and support a wide community of users that have varying levels of expertise. It is vital that the various types of users who interact with collections within the CULTURA portal are supported in locating content relevant to their current interests and tasks. Hence, the application of personalisation techniques by CULTURA helps empower experienced researchers, novice researchers and the wider community to discover, interrogate, and analyse cultural heritage resources.

Major Features and Functionalities:

 

  • Uses a four-stage personalisation approach: an underlying methodology with guide, explore, reflect and suggest phases. These four stages underpin personalisation within the CULTURA environment.
  • Utilises an extensible user model (stored in MongoDB, a NoSQL database) that contains data from multiple services integrated into CULTURA.
  • Provides different Recommender Widgets: the Hybrid Recommender Widget recommends similar content based on the user model and the content the user is currently viewing; the Global Recommender Widget gives recommendations solely based on the user model. This recommender is aimed at providing initial starting points for exploration rather than providing links to complementary resources while in the middle of an exploration.
  • Combines both short tail (recent activity) and long tail (overall activity) activities in its recommendations.
  • Supports transparency regarding why recommendations are given and allows users to reflect on and adjust their user model using tag clouds. This process makes the decisions that are occurring in the background more transparent, as well as giving the user significant control over how their user model represents them. Different clouds can be rendered for different types of entities (people, organisations etc.), as well as different clouds detailing the user's overall model of interest and their current short tail model of interest.
  • Users with little experience of the underlying resources can use a narrative module within the CULTURA environment. This "narrative" module enables resources within the collection to be sequenced on a specific theme. These lessons are developed by domain experts and contain paths of various lengths (with optional and compulsory parts), so that users with different levels of interest can be accommodated. Users can be assigned a level of interest manually when starting a course (e.g. low, medium or high) or automatically based on their user model. Regardless, CULTURA monitors the user's progress through a lesson, and if the user shows sufficient interest in particular concepts, the lesson can be dynamically augmented through the addition of further relevant resources. Importantly, the lessons can also be explicitly adjusted by the user as they make progress (by choosing to see more resources on a specific concept), which gives them ultimate control of their lesson path at a very fine granularity.
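As an illustration of the short-tail/long-tail blending described above, the following sketch combines overall and recent interest counts into a single ranking. All names, weights, and entities are hypothetical; the actual CULTURA user model is stored in MongoDB and is considerably richer.

```python
from collections import Counter

# Hypothetical sketch: the weighting scheme and entity names are
# illustrative, not taken from the CULTURA implementation.
LONG_TAIL_WEIGHT = 0.4   # overall activity
SHORT_TAIL_WEIGHT = 0.6  # recent activity

def score_entities(overall: Counter, recent: Counter) -> list:
    """Blend long-tail and short-tail interest counts into one ranking."""
    entities = set(overall) | set(recent)
    scored = {
        e: LONG_TAIL_WEIGHT * overall[e] + SHORT_TAIL_WEIGHT * recent[e]
        for e in entities
    }
    return sorted(scored, key=scored.get, reverse=True)

overall = Counter({"Dublin": 10, "Cork": 4, "rebellion": 6})
recent = Counter({"Cork": 5, "deposition": 3})
print(score_entities(overall, recent))
```

Weighting recent activity more strongly lets a recommender react to the user's current task without discarding their established interests.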

 

Sample screen shots

Figure 3. An example of recommended content displayed to users within the CULTURA portal

Figure 4. User Model Rendered as a Tag Cloud

Who to contact: 

  • Prof. Owen Conlan - Email: Owen dot Conlan at cs dot tcd dot ie
  • Prof. Séamus Lawless - Email: Seamus dot Lawless at scss dot tcd dot ie


User Engagement

The concept of user-centred, user-led, and user-validated design has been a crucial and central part of the design and implementation of the CULTURA project. At the beginning of the project, all partners shared the vision of extensive cooperative engagement with users of all sorts, but did not have an established methodology to guide or underpin this engagement. Furthermore, the unusual prominence given to user interaction in the CULTURA project meant that no suitable methodology was available for adoption or adaptation.

Thus, an early priority of CULTURA was the development and refinement of a robust and rigorous methodology for user engagement that ensured that the authentic voice of end users was heard and heeded in all stages of project design, and that ensured that this input included the perspectives of users from across the end user spectrum.

CULTURA humanities researchers, in close consultation with evaluation partners, worked together to craft a methodology that addressed these requirements. This included identifying and developing communities of end users, designing a structure for eliciting responses and gathering feedback (both qualitative and quantitative), and establishing procedures for disseminating project developments back to users, ensuring that a feedback loop was fostered and that user input remained a constant and dynamic part of the project.

The main outcomes of this approach are:

  • Robust methodology for user interaction
  • Flexible adaptation to a wide range of user types
  • Balanced emphasis on eliciting input for future design and refinements, and gathering user evaluations of existing systems
  • Tracking of the authentic user voice at all stages of the project
  • Fostering of dynamic communities of researchers, with potential to ensure longer term sustainability and usefulness of project outcomes.

Who to contact: 

  • Prof. Micheál Ó'Siochrú - Email: OSIOCHRM at tcd dot ie


Entity Relationship Extraction

The Entity Relationship Recognition module is a linguistic solution, developed by the IBM team, for extracting key entities from cultural resources and understanding the relationships among them. The process is done through a UIMA pipeline for annotating textual content, using IBM LanguageWare Workbench to create custom dictionaries and parsing rules. The module was applied on the 1641 Depositions to extract persons, locations, dates, and events, as well as the relationships among them and their associated attributes.

Major features and functionalities:

  • Leveraging IBM LanguageWare (LW) Workbench for the building and customization of dictionaries and parsing rules
  • Generic component for annotating textual cultural resources, which includes a UIMA annotator that performs linguistic processing over text and a designated identification module that returns the normalized forms of UIMA's annotations
  • Configurable component that gets a collection of annotated documents and a representation of an Entity Relationship Diagram (or schema) and generates the corresponding Entity-Relationship (ER) data in an XML format
  • Entity disambiguation through post-annotation analysis, based on detecting linguistic similarities (e.g., full vs. partial spelling of a person's name) or on custom knowledge about linguistic mappings (e.g., mapping holidays to dates, or Julian dates to Gregorian)
  • Dictionary extension tool for extending and improving the initial LW dictionaries, resolving multiple entries to a single normal-and-surface-forms entry, and removing conflicts and adding semantics for entity identification
  • Exposing a set of APIs to support changes to ER data, such as adding or deleting an entity or a relationship, or updating an attribute; this supports the integration with PreMapper for manual feedback.
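To illustrate the kind of output the configurable component produces, the sketch below serialises extracted entities for one document as ER-style XML. The element names, attributes, and example entities are assumptions for illustration only, not the component's actual schema.

```python
import xml.etree.ElementTree as ET

# Illustrative sketch only: the element names and structure are
# assumptions, not the actual XML format emitted by the IBM component.
def entities_to_er_xml(doc_id: str, entities: list) -> str:
    """Serialise extracted entities for one document as ER-style XML."""
    root = ET.Element("document", id=doc_id)
    for ent in entities:
        node = ET.SubElement(root, "entity", type=ent["type"])
        ET.SubElement(node, "surface").text = ent["surface"]
        ET.SubElement(node, "normalized").text = ent["normalized"]
    return ET.tostring(root, encoding="unicode")

xml_out = entities_to_er_xml("dep-815001", [
    {"type": "person", "surface": "Sr. Henry", "normalized": "Henry"},
    {"type": "location", "surface": "Dublyn", "normalized": "Dublin"},
])
print(xml_out)
```

Keeping both the surface form and the normalized form, as the identification module does, is what later enables disambiguation and linking across variant spellings.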

Sample screen shots

Figure 5. Cultural historical documents

Figure 6. Sample parsing rule

Figure 7. Annotating deposition text

Who to contact: 

  • Ella Rabinovich - Email: ELLAK at il dot ibm dot com


Entity-oriented Search (EoS)

EoS features the Entity Relationship Data Discovery model, a data discovery extension to the classical ER conceptual model, and a new logical Document Category Sets model used to represent entities and relationships within an enhanced document model. Based on this data modelling, the IBM team developed a novel approach for exploratory search over rich entity-relationship data that utilizes a unique combination of an expressive yet intuitive query language, faceted search, and graph navigation. The team initiated a Lucene open-source project to improve EoS scalability and performance, with robust support for updating indexed documents. An extension is also being developed to enable Lucene-based search over noisy historical text by indexing raw terms together with their normalized versions, thus providing the capability to explore original and normalized texts simultaneously.

The underlying platform upon which EoS has been developed is IBM's Social Networks and Discovery (SaND), an aggregation tool that unleashes the value of information by combining content-based analysis and people-based analysis over a rich data foundation. SaND models content, people (e.g., researchers), and their activities (e.g., annotations) together, making all entities and the relationships among them searchable and retrievable. The platform is used in many domains as the base for recommendation systems, personalization, and other social services.

Major features and functionalities:

 

  • Novel data discovery model used to represent entities and relationships within an enhanced document model
  • Expressive query language that supports three basic types of query predicates: entity attribute predicates, free-text predicates, and entity-relationships predicates
  • Built-in support for a similarity predicate, aimed at discovering entities similar to a given entity with respect to content, metadata, and proximity (between entities connected by relationships)
  • Generic engine for searching ER data based on the open source Lucene search engine
  • Advanced features for entity ranking, enabling boosting of frequent entities and entities comprising a dense relations network
  • Web service that receives each HTTP request with the user's query and returns the results
  • An expressive Web UI that supports entity-related faceted search and graph navigation, featuring relationships among entities
  • Lucene open-source project to support incremental field updates (in progress)
  • Integration with IBM SaND to support social services
  • Highly configurable and applicable to textual collections, as demonstrated by the instantiation of EoS for the 1916 statements.
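The three basic predicate types of the query language can be illustrated with a toy in-memory dataset. The real EoS engine evaluates such predicates over a Lucene index; all names and data below are hypothetical.

```python
# Hypothetical sketch of the three EoS predicate types over a toy
# in-memory dataset; the real system evaluates them with Lucene.
docs = [
    {"id": 1, "type": "person", "name": "Henry Jones",
     "text": "examined witnesses in Dublin"},
    {"id": 2, "type": "location", "name": "Dublin",
     "text": "county and city"},
]

def attribute_predicate(doc, field, value):      # entity attribute predicate
    return doc.get(field) == value

def free_text_predicate(doc, term):              # free-text predicate
    return term in doc["text"]

def relationship_predicate(doc, relations, target_id):  # entity-relationship predicate
    return (doc["id"], target_id) in relations

relations = {(1, 2)}  # person 1 "mentions" location 2

# Combine all three predicate types: persons whose text mentions
# "Dublin" and that are related to entity 2.
hits = [d["id"] for d in docs
        if attribute_predicate(d, "type", "person")
        and free_text_predicate(d, "Dublin")
        and relationship_predicate(d, relations, 2)]
print(hits)
```

The composition of the three predicate kinds into one query is what lets a user start from an attribute filter, narrow by free text, and then follow relationships through the entity graph.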

 

Sample screen shots

Figure 8. EoS Web

Figure 9. 1641 Depositions Entity-Relationship Diagram

Figure 10. EoS

Who to contact: 

  • Ella Rabinovich - Email: ELLAK at il dot ibm dot com


Text Normalisation

A new, general, and language-independent approach to the noisy text correction problem has been developed and implemented within the framework of the CULTURA project.

Although the noisy text correction problem can be regarded as a specialised machine translation task, it differs from standard statistical machine translation (SMT) methods in two main respects.

Firstly, the presence of noise makes it impossible to predict the variety of noisy input words or to create, at a preliminary stage, a complete translation table of the kind that forms the basis of an SMT engine. This issue has been addressed by designing a novel machine learning technique called REBELS.

Secondly, unlike machine translation, the correction of noisy texts is monotone: the noise usually affects individual words but rarely changes the word order or induces long-distance dependencies. Hence, to select the best correction of a complete noisy text, the system applies a log-linear model (LLM) that combines the correction candidates provided by REBELS, with respect to their rank, with standard language features, while ignoring these complications. The lack of transpositions and long-distance dependencies allowed the entire LLM to be implemented by means of a new computational framework called functional automata. The system is supplied with a flexible environment for the efficient training required by the LLM and a complete search procedure that is guaranteed to find the best correction output according to the model.
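A minimal sketch of this monotone, word-by-word selection is shown below, combining a candidate table (standing in for REBELS output) with a toy language model under a log-linear score. The weights, probabilities, and vocabulary are all illustrative assumptions, and the real system uses a far richer language model and search procedure.

```python
import math

# Illustrative sketch of monotone log-linear decoding: candidate tables
# and weights are toy values, not actual REBELS/LLM output.
candidates = {  # noisy word -> [(modern form, candidate probability), ...]
    "Dublyn": [("Dublin", 0.9), ("Dublin's", 0.1)],
    "cittie": [("city", 0.8), ("cite", 0.2)],
}
lm_unigram = {"Dublin": 0.02, "Dublin's": 0.001, "city": 0.03, "cite": 0.004}
W_CAND, W_LM = 1.0, 0.5  # log-linear feature weights

def normalise(tokens):
    """Pick the best correction word-by-word (no reordering needed)."""
    out = []
    for tok in tokens:
        cands = candidates.get(tok, [(tok, 1.0)])  # unknown words pass through
        best = max(cands, key=lambda c: W_CAND * math.log(c[1])
                                        + W_LM * math.log(lm_unigram.get(c[0], 1e-6)))
        out.append(best[0])
    return out

print(normalise(["Dublyn", "cittie"]))
```

Because the correction is monotone, each word can be scored independently of word order, which is precisely what makes the exact search over the full model tractable.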

Major features and functionalities:

 

  • The REBELS module automatically extracts historical spelling variation patterns from a training corpus.
  • For any source word and phrase the REBELS module provides a ranked list of correction candidates based on the extracted spelling variation patterns and their probabilities.
  • The system exploits a language model (LM) of the contemporary language in order to measure the likelihood (with respect to the LM) of the normalisation of the source sentence.
  • The LLM implemented with the Functional Automata Framework combines the probabilities of the REBELS module and the LM and selects the best normalisation based on Entropy Maximization.
  • The REBELS module is further extended to REBELS-XRL with the addition of custom expert dictionaries for the treatment of specialized terms and expressions such as Roman dates, geographical names, etc.
  • The system is accessible through a RESTful web service for seamless integration with other systems.
  • A user-friendly web interface has been developed for direct use of the system over the Internet.

 

Sample screen shots

Figure 11. EoS Web

Who to contact: 

  • Prof. Tinko Tinchev - Email: tinko at fmi dot uni-sofia dot bg


Desktop PreMapper

The PreMapper is a custom-built tool for applying social network analysis to digital humanities resources. The desktop version of the software was originally planned as the only tool for editing entities and relations, adding new entities, arranging custom maps and exporting results. The desktop PreMapper is built on the .NET software framework.

The key functions of the desktop PreMapper include:

  • search in the 1641 depositions database, as well as entity-oriented search in the loaded network
  • creating new entities
  • selecting, editing and merging entities
  • adding or editing entity relations
  • filtering maps and data
  • measuring centrality
  • customising network map design
  • exporting and sharing network visualisation projects.
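As an example of the centrality measurement listed above, the sketch below computes normalised degree centrality over a toy entity network; the node names are illustrative.

```python
from collections import defaultdict

# Minimal sketch of the "measuring centrality" step, using degree
# centrality on a toy entity network (names are illustrative).
edges = [("Henry", "Dublin"), ("Henry", "Jane"), ("Jane", "Dublin"),
         ("Dublin", "Cork")]

def degree_centrality(edges):
    """Degree of each node, normalised by the number of other nodes."""
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    n = len(degree)
    return {node: d / (n - 1) for node, d in degree.items()}

print(degree_centrality(edges))
```

In a network of depositions, people and places, high-centrality nodes like "Dublin" here flag the entities most worth inspecting first.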

 

Figure 15. Results of a search by name for the word "henry", listing all entities featuring "henry" in their name. Using the blue links on the right, the results can be filtered by type: "Deposition", "Locations" or "Persons".

Figure 16. Search function in the loaded network in the upper right corner highlights the results in the map. The mini-map functionality in the lower left corner reveals where the highlighted nodes are situated in the full network.

Figure 17. A manually created link between two people, with custom colour, direction and label.

Figure 18. Properties toolbar node view, with one node selected.

Figure 19. Filters Toolbar, with one Node Attribute Slider Filter selected and distribution of values visualised for the attribute "weight".

Who to contact: 

  • Spyros Garyfallos - Email: spyros dot garyfallos at commetric dot com


Web PreMapper

The web PreMapper was developed in the second half of 2013, following the second-stage user trials of the desktop version. It supports the main functionalities of the desktop application: creating new entities; selecting, editing and merging entities; adding or editing entity relations; and filtering.

The key characteristics of the web PreMapper are:

  • Works in the Drupal website
  • No additional software required
  • Takes advantage of the Drupal user permissions
  • Easy to use
  • Real time updates
  • Smart editing: takes the possible relations between the nodes into account
  • Graph database edits can be fed into the EoS index.

 

Since it is integrated into the Drupal network visualisations module, the web PreMapper is easily accessible and convenient to use. Only users with editing permissions in the CULTURA environment can edit the 1641, 1916 or IPSA entity networks, which ensures that the data cannot be compromised by attempted edits by non-experts.

The changes are stored in real time in the MySQL graph database, which supports the network visualisations, and can be viewed by all other users.

Figure 20. The web PreMapper allows the properties of any node to be edited, nodes to be deleted, and new ones to be added.

Figure 21. One of the key functionalities of both desktop and web PreMapper is to merge nodes. This is especially useful when merging duplicate person entities.

Figure 22. The web PreMapper also makes it possible to create new links and thus connect entities that are not currently associated.

Who to contact: 

  • Spyros Garyfallos - Email: spyros dot garyfallos at commetric dot com


FAST Annotation Service

The Flexible Annotation Semantic Tool (FAST) service covers many of the uses and applications of annotations, ranging from metadata to full content. Its flexible and modular architecture makes it suitable for annotating general Web resources as well as digital objects managed by different digital library systems. The annotations themselves can be complex multimedia compound objects, with varying degrees of visibility, ranging from private to shared and public annotations, and different access rights.

Major features and functionalities:

 

  • An annotation is a compound multimedia object which is constituted by different signs of annotation. Each sign materializes part of the annotation itself; for example, we can have textual signs, which contain the textual content of the annotation, image signs, if the annotation is made up of images, and so on. In turn, each sign is characterized by one or more meanings of annotation, which specify the semantics of the sign; for example, we can have a sign whose meaning corresponds to the title field in the Dublin Core (DC) metadata schema, in the case of a metadata annotation, or we can have a sign carrying a question of the author about a document, whose meaning may be "question" or similar.
  • Annotations are exposed through a RESTful Web service in both XML format, to facilitate B2B development, and JSON format, to facilitate B2C and user-interface development.
  • Annotations and annotated resources constitute a hypertext which can span and cross the boundaries of the single digital library system, providing a superimposed layer of information that can connect different and heterogeneous resources together.
  • A powerful query language, based on the CQL 2.0 syntax, is offered to mix both structured and unstructured queries over annotations. The query language is also able to take into consideration the hypertext that exists among annotations themselves and annotated resources. In this way, it is possible to exploit threads of annotations on a topic to retrieve annotated resources that would not otherwise have been retrieved by a direct search over the annotated resources alone.
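To make the compound-object model concrete, the sketch below shows how a FAST-style annotation with two signs and their meanings might look when exposed as JSON. The field names and values are assumptions based on the description above, not the actual FAST serialisation.

```python
import json

# Illustrative data sketch of a FAST-style compound annotation:
# all field names and values are assumptions, not the real schema.
annotation = {
    "id": "anno-42",
    "visibility": "shared",        # private | shared | public
    "annotates": "http://example.org/ipsa/image/17",
    "signs": [
        {"media": "text", "content": "Who is the author of this illumination?",
         "meanings": ["question"]},
        {"media": "text", "content": "Herbal illumination",
         "meanings": ["dc:title"]},
    ],
}

# The service would expose this both as XML (B2B) and JSON (B2C):
payload = json.dumps(annotation, indent=2)
print(payload)
```

Each sign carries its own meanings, so one compound annotation can mix, say, a metadata field and a scholarly question about the same object.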

 

Figure 23. Example of document-annotation hypertext

Figure 24. Example of annotation

Who to contact: 

  • Dr. Nicola Ferro - Email: ferro at dei dot unipd dot it


Content Annotation Tool (CAT)

CAT is a web annotation tool developed with the goal of annotating multiple types of documents and assisting collaboration in the field of digital humanities. At present, CAT allows for the annotation of both text and images. The current granularity for annotation of text is at the level of the letter; for image annotations, the granularity is at the level of the pixel. This allows for extremely precise document annotation, which is very relevant to the Digital Humanities domain given the variety of asset types involved.

Major features and functionalities:

 

  • There are two types of annotation that may be created using CAT: a targeted annotation and a note.
  • A targeted annotation is a comment associated with a specific part of a document. This may be a paragraph, a picture or an individual word; the defining feature is that the comment is directly associated with a specific subset of the digital resource.
  • Conversely, a note is not associated with a subset of an artefact and serves as a general comment or remark about the document as a whole.
  • Annotations can also link to other, external sources. Importantly, each link has comment text associated with it, allowing, for example, an educator to explain why a specific link is important or what a student should seek to gain from reading that particular source.
  • CAT provides valuable user information to the CULTURA user model in terms of documents and entities that users have shown an explicit interest in. This is very useful for personalisation and recommendations.
  • CAT supports private annotations, public annotations and group annotations. This enables users with different needs (Educators creating annotations for their class, Students collaborating on a project, Historians documenting their private insights etc.) to have their annotations shared at an appropriate level.
  • Because CAT gives access to targeted sections of a document, the toolbar can be exposed to other services within the portal, allowing for live interfacing with a document. For example, CULTURA provides a normalization service which resolves anomalies in the archaic text of the 1641 depositions. Access to this service is now provided to the user via CAT, with users able to highlight sections of text and have it normalised to a modern form.
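A letter-level targeted annotation and a document-level note might be represented as follows; the field names and offsets are purely illustrative, not the actual CAT data model.

```python
# Hypothetical sketch of letter-level targeted annotation: the data
# model and names are illustrative, not the actual CAT schema.
document = "The examinant saith that in Dublyn the rebels assembled."

targeted = {"start": 28, "end": 34,  # character offsets into the text
            "comment": "Archaic spelling of Dublin",
            "author": "educator-1", "scope": "group"}

note = {"comment": "Compare with a related deposition",
        "author": "educator-1", "scope": "public"}  # no target span

selected = document[targeted["start"]:targeted["end"]]
print(selected)
```

Storing character offsets rather than whole-paragraph references is what gives CAT its letter-level granularity, and the same span can then be handed to other services, such as the normalisation service.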

 

Sample screen shots

Figure 25. A user can create a targeted annotation on a body of text about a person of interest.

Figure 26. A user can create an annotation on an image of interest

Who to contact: 

  • Prof. Owen Conlan - Email: Owen dot Conlan at cs dot tcd dot ie


Evaluation Methodology

The main objective of evaluation work in CULTURA is to plan and conduct empirically sound evaluations that investigate the quality and benefit of the project outcomes. The novel methodological and technical approaches integrated in the CULTURA environment, the intended reusability of the system with different digital collections, and the diversity of the target audience all require the definition of sound evaluation approaches that are suitable and comparable across diverse collections and user communities. As a basis for carrying out empirical evaluation studies, an evaluation methodology has been specified. Concretely, this comprises an evaluation model, which builds upon a state-of-the-art model for digital libraries and which has been iteratively extended to accommodate the evaluation needs and specificities of CULTURA.

The evaluation model distinguishes system, content, and users as the main components of CULTURA and locates the qualities for evaluation on the axes between those components. In this way, the evaluation model enables a systematic, in-depth examination of the quality and benefit of the CULTURA services in addition to assessing traditional quality aspects of the overall system. The evaluation model has served as a theoretical and common ground for planning and conducting user trial evaluations over different collections and user groups, which have translated the model into appropriate evaluation designs, instruments, and procedures. Measurement instruments have been selected or developed in line with the evaluation qualities defined in the model, taking into account the expertise of the targeted users and the specificities of the given test bed and evaluation setting.

Major features of the evaluation methodology:

 

  • Evaluation model defining evaluation qualities on the axes between system, content, and users of CULTURA. The evaluation qualities identify the specific aspects of interest to be addressed in evaluation and their operationalization to make them measurable in empirical data collection.
  • A set of evaluation methods specifying the evaluation design and procedure for different user groups with reference to the evaluation model as common underlying basis.
  • A collection of evaluation instruments for data collection on the individual evaluation qualities in the context of empirical user studies. These were adopted or adapted from existing standard instruments, or developed in line with the evaluation qualities and their operationalization. The instruments implement a mixed-technique approach combining subjective self-reports and objective observation of user interaction.

 

Figure 27. Diagram of the evaluation model

Who to contact: 

  • Dr. Christina Steiner - Email: christina dot steiner at tugraz dot at


Equalia

Evaluation is an important task in the context of digital libraries, because it reveals relevant information about the quality of the technology for all stakeholders and decision makers. It involves collecting and analysing information about the users' activities and the software characteristics and outcomes. Its purpose is to make judgements about the benefits of a technology, to improve its effectiveness, and/or to inform programming decisions. The evaluation process can be broken down into three key phases: (1) Planning, (2) Collecting, and (3) Analysing.

The main objective of Equalia is to support the systematic and sound evaluation of digital library systems in line with the key phases of evaluation. For this reason, its approach is based on an evaluation model, multi-modal data collection, and automatic reports. The evaluation model formally defines what should be evaluated and how, and thus specifies the evaluation questions to be answered. The multi-modal data collection approach allows data to be collected from different sources, such as log data, online ratings, and questionnaires. Automatic reports are created based on the collected data and the underlying evaluation model, and make the evaluation data and results available in graphical and tabular form. Equalia has been integrated and used with the CULTURA system in conjunction with the 1641 and IPSA collections. It has been used in several evaluation studies, where evaluation data has been collected and used to create evaluation reports.

Major features and functionalities:

 

  • Formal definition of the evaluation questions and instruments: the aspects to be evaluated and the instruments by which the evaluation is conducted are formally modelled. Additionally, the instruments are mapped to the evaluation aspects.
  • Multi-modal data collection: evaluation data is collected in different ways, both invasive and non-invasive, continuous and non-continuous. First, traditional questionnaires provided by Equalia collect data about the users' opinions on certain topics as well as demographic data. Second, continuous data collection is provided by gathering direct feedback while users interact with the system under evaluation. Third, data about navigation behaviour and the usage of system functions are collected.
  • Automatic reports: all collected data are related to the evaluation aspects according to the previously defined evaluation model. Using these relations, automatic reports are created that display the raw data as well as analyses and results for the evaluation aspects. The results are graphically displayed, can be exported, and are made accessible through the REST interface.
  • Open interface and integration with other systems: Equalia has a REST interface that is used to collect data in XML format from the evaluated system and questionnaires. Beyond CULTURA, other systems can also be evaluated with Equalia, provided they send evaluation data in the specified format.
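The mapping of instruments to evaluation aspects, and the aggregation performed for automatic reports, can be sketched as follows. Instrument names, aspects, and values are invented for illustration and do not reflect Equalia's actual model.

```python
from statistics import mean

# Toy sketch of relating collected data to evaluation aspects and
# producing an automatic report; all names and values are illustrative.
instrument_to_aspect = {
    "q_usability": "usability",
    "rating_content": "content quality",
    "log_navigation": "effectiveness",
}

collected = [("q_usability", 4), ("q_usability", 5),
             ("rating_content", 3), ("log_navigation", 4)]

def report(data):
    """Group collected values by evaluation aspect and average them."""
    by_aspect = {}
    for instrument, value in data:
        by_aspect.setdefault(instrument_to_aspect[instrument], []).append(value)
    return {aspect: mean(vals) for aspect, vals in by_aspect.items()}

print(report(collected))
```

The explicit instrument-to-aspect mapping is what lets the raw questionnaire, rating, and log data be rolled up into per-aspect results automatically.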

 

Figure 28. Sample screen shot of Equalia

Who to contact: 

  • Dr. Christina Steiner - Email: christina dot steiner at tugraz dot at
