The Semantic Visualization project provides comprehensive visualization and interactive search and analytics interfaces that exploit Semantic Web capabilities at all levels of the systems architecture. This includes (a) ontology development and usage (including filtering, search, and personalization) for light-weight, personal, and collaboratively developed ontologies, as well as highly expressive biological ontologies with support for visual modeling and display of the objects modeled (e.g., complex carbohydrate molecules); (b) graphical query formulation with support for specifying and interactively modifying context and ranking criteria for semantic (relationship-based) document search and complex semantic analytics; (c) 3D immersive visual display of semantic analytics showing "connections between the dots," with the ability to explain them; (d) tracking and associating activities; and (e) "inspecting the dots," which can be heterogeneous documents and multi-modal content (text documents in various formats, static or dynamically generated Web pages, and audio, video, image, and other media content). To date, four tools have been developed: OntoVista, SAV, SET, and PGV.
OntoVista
Ontologies form the backbone of many life-sciences applications. These ontologies, however, are represented in XML-based languages that are meant for machine consumption and are therefore difficult for humans to comprehend. For a meaningful visualization of these ontologies, it is important that the display of entities and relationships captures the cognitive representation of the domain as perceived by domain experts. OntoVista is an ontology visualization tool that is adaptable to the needs of different domains, especially in the life sciences. While keeping the graph structure as the predominant model, it provides a semantically enhanced graph display that gives users a more intuitive way of interpreting nodes and their relationships. Additionally, OntoVista provides convenient interfaces for searching, semantic edge filtering, and quick browsing of ontologies. To this end, we extended the Jambalaya plugin for Protege to allow for customization and integration of different layouts.
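The idea behind semantic edge filtering can be illustrated with a minimal sketch: edges of the ontology graph are shown or hidden by relationship type before layout. The `Edge` class, `filter_edges` function, and example triples below are illustrative assumptions, not OntoVista's actual API.

```python
# Minimal sketch of semantic edge filtering: keep only edges whose
# relationship type the user has toggled visible before display.
# All names here are hypothetical, not OntoVista's actual code.

from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    source: str
    relation: str
    target: str

def filter_edges(edges, visible_relations):
    """Return only edges whose relation is in the user-selected set."""
    return [e for e in edges if e.relation in visible_relations]

# A toy life-sciences ontology fragment (illustrative triples).
ontology = [
    Edge("Glucose", "subClassOf", "Monosaccharide"),
    Edge("Sucrose", "hasPart", "Glucose"),
    Edge("Sucrose", "subClassOf", "Disaccharide"),
]

# Show only the class hierarchy, hiding part-whole relations.
hierarchy = filter_edges(ontology, {"subClassOf"})
print([(e.source, e.target) for e in hierarchy])
```

Filtering by relation rather than by node lets the display shrink without disconnecting the entities the user is focused on.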
Semantic Analytics Visualization
"Semantic Analytics Visualization" (SAV) is a 3D visualization tool for semantic analytics. It can visualize ontologies and metadata, including annotated web documents, images, and digital media such as audio and video clips, in a synthetic three-dimensional semi-immersive environment. More importantly, SAV supports visual semantic analytics, whereby an analyst can interactively investigate complex relationships between heterogeneous information. The use of Virtual Reality technology allows it to provide a highly interactive interface. The backend of SAV supports the query processing and semantic association discovery developed in the SemDis project. Using a virtual laser pointer, the user can select nodes in the scene and either play digital media, display images, or load annotated web documents. SAV can also display the ranking of web documents as well as the ranking of paths (sequences of links). SAV supports dynamic specification of sub-queries of a given graph and displays the results based on ranking information, which enables users to find, analyze, and comprehend the presented information quickly and accurately.
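A path (semantic association) between two entities can be enumerated and ranked before display. The sketch below is a simplified stand-in for the SemDis backend: the graph, entity names, and length-based ranking are illustrative assumptions, not SemDis code.

```python
# Hypothetical sketch of enumerating and ranking semantic associations
# (paths) between two entities in a directed RDF-like graph.

def find_paths(graph, start, goal, path=None):
    """Enumerate all simple paths from start to goal."""
    path = (path or []) + [start]
    if start == goal:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # keep paths simple (no revisited nodes)
            paths.extend(find_paths(graph, nxt, goal, path))
    return paths

def rank_paths(paths):
    """Shorter associations first: a simple stand-in for SemDis ranking."""
    return sorted(paths, key=len)

# Toy graph: adjacency lists over illustrative entity names.
graph = {
    "suspectA": ["flight101", "cityM"],
    "flight101": ["suspectB"],
    "cityM": ["facilityN"],
    "facilityN": ["suspectB"],
}
for p in rank_paths(find_paths(graph, "suspectA", "suspectB")):
    print(" -> ".join(p))
```

In SAV, each ranked path would be rendered as a highlighted sequence of links in the 3D scene, with higher-ranked associations drawn more prominently.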
Semantic EventTracker
Semantic EventTracker (SET) is a highly interactive visualization tool for tracking and associating the activities (events) of suspects in a Spatially Enriched Virtual Environment (SEVE). Event information is extracted from ontologies, including RDF graphs that represent metadata and ontological terms, which enable a user to discover semantic associations between events using thematic and topological relations. For example, suspect A traveled to city M in two days, where he met another suspect, B. After three days, they took a flight on which another suspect, C, was also traveling, and they all arrived at a nuclear facility in city N. Additionally, a week prior, suspect C had visited a bomb manufacturing facility in country X. A 2D map is generated from geospatial information and applied as a texture on 3D objects. Temporal data is represented as a 3D multi-line in space that connects successive events. The slope of each line segment depends on the time-distance between two events; time-distance indicates how far apart two events occurred in time and could depict, for example, how fast a suspect traveled from one place to another. A steep slope indicates a greater time-distance. The line is semantically marked with 3D objects that can be selected with the user's virtual hand. When a marker is selected, SET displays metadata in SEVE, such as digital images, 3D models, and web documents, and it can also play audio and video clips. Multiple multi-lines can be visualized on the same map to help discover path crossings, meeting times, contacts, etc., of the tracked suspects.
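The slope encoding can be made concrete with a small sketch: treating elapsed time as the "rise" and ground distance on the map as the "run," a larger time gap over the same distance yields a steeper segment. The event tuples and the exact formula below are illustrative assumptions, not SET's actual code.

```python
# Hypothetical sketch of encoding time-distance as segment slope in a
# 3D multi-line, in the spirit of SET's temporal display.

import math

def slope(event_a, event_b):
    """Rise (elapsed time) over run (ground distance between events).

    Each event is (x, y, t): map coordinates plus a timestamp.
    A larger time gap over the same ground distance yields a
    steeper segment, matching "steep slope = greater time-distance".
    """
    (xa, ya, ta), (xb, yb, tb) = event_a, event_b
    ground = math.hypot(xb - xa, yb - ya)
    return (tb - ta) / ground

# Suspect covers 5 map units in 1 hour, then 5 more units in 10 hours.
fast = slope((0, 0, 0), (3, 4, 1))    # shallow segment: quick travel
slow = slope((3, 4, 1), (6, 8, 11))   # steep segment: slow travel
print(fast, slow)
```

An analyst reading the multi-line can thus spot slow or fast movements at a glance, before inspecting any marker in detail.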
Paged Graph Visualization
Paged Graph Visualization (PGV) is a new user-centered, semi-autonomous visualization tool for RDF data exploration, validation, and examination. PGV consists of two main components: the PGV visualizer, which is the front end of PGV, and the BRAHMS data store, which is our high-performance main-memory RDF storage system. Unlike existing graph visualization techniques that attempt to display the entire graph, PGV has been designed for the incremental, semantics-driven exploration of very large RDF ontologies. A novel, Ferris-wheel-like visualization technique is used to explore the so-called hot spots in the graph, i.e., nodes with large numbers (hundreds) of immediate neighbors. In response to the user-controlled, semantics-driven direction of the exploration, the PGV visualizer obtains the necessary sub-graphs from BRAHMS and enables their incremental visualization without having to rearrange the previously laid-out sub-graphs. PGV and BRAHMS communicate via the HTTP protocol, and to provide the necessary on-line response times, BRAHMS is connected to the Web server via the FastCGI interface, which eliminates the need to restart the BRAHMS process on every request. Additionally, PGV has been enhanced with a speech recognizer and a speech synthesizer.
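The paging idea behind the Ferris-wheel display can be sketched simply: rather than drawing all of a hot spot's hundreds of neighbors at once, the visualizer requests and shows them one fixed-size page at a time. The function name, page size, and neighbor labels below are illustrative assumptions, not PGV's actual interface to BRAHMS.

```python
# Hypothetical sketch of paging a hot spot's neighbors for incremental
# display, the idea behind PGV's Ferris-wheel-like technique.

def page_neighbors(neighbors, page, page_size=8):
    """Return one fixed-size 'page' of a hot spot's adjacency list."""
    start = page * page_size
    return neighbors[start:start + page_size]

# A hot spot with 25 immediate neighbors (illustrative labels).
hot_spot = [f"resource_{i}" for i in range(25)]

print(page_neighbors(hot_spot, 0))  # first page of 8, shown on the wheel
print(page_neighbors(hot_spot, 3))  # last, partial page
```

Because each page is fetched on demand, previously laid-out sub-graphs need not be rearranged when the user rotates to the next page.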
©2005 LSDIS and the University of Georgia. All rights reserved.