Accepted Posters and Demos

The floorplan of the Posters and Demos session is available as a PDF file.

Posters

Masao Watanabe, Kazunari Hashimoto, Seiya Inagi, Yohei Yamane, Seiji Suzuki and Hiroshi Umemoto
We established a method for quantifying working processes on manufacturing floors that uses a wearable sensor device and an ontology-based stream data processing system. Using this method, we confirmed at the Fuji Xerox factory that manufacturing process efficiency can be measured from sensor data collected from devices worn by workers on the job.
Paolo Pareti
This study presents a framework that allows human and machine agents to reason and coordinate actions without direct communication mechanisms by sharing distributed Linked Data resources. The framework addresses the problems of querying frequently updated distributed datasets and guaranteeing transactional consistency. The motivation for this framework comes from the use case of opportunistic automation of human-generated procedures. This use case is based on existing real-world Linked Data representations of human instructions and their integration with machine functionalities.
Lihua Zhao, Naoya Arakawa, Hiroaki Wagatsuma and Ryutaro Ichise
A sophisticated digital map is an essential resource for intelligent vehicles to localize themselves and retrieve environment information. However, open map sources do not contain enough information for decision making during autonomous driving. Although comprehensive commercial map data can provide precise map knowledge, the data is not in a machine-readable format. Therefore, we retrieve useful knowledge from a high-precision commercial map and convert it into ontology-based data to help intelligent vehicles perceive the driving environment and make decisions in various traffic scenarios. Besides the development of decision-making systems, the converted map data can be used as a gold standard for evaluating traffic sign detection, road mark detection, and automatic map construction.
Octavian Rinciog and Vlad Posea
Nowadays, governments and public agencies publish open data at an exponentially growing rate on dedicated portals. These open data have a problem: they don't have a well-defined structure, because the focus is on publishing data and not on how they are used. GovLOD is a platform that aims to transform the information found in these heterogeneous files into Linked Open Data using RDF triples.
Christophe Debruyne, Eamonn Clinton, Lorraine McNerney, Atul Nautiyal and Declan O'Sullivan
In this paper we present data.geohive.ie, which aims to serve Ireland's national geospatial data as authoritative Linked Data. Currently, the platform provides information on Irish administrative boundaries and was designed to support two use cases: serving boundary data of geographic features at various levels of detail and capturing the evolution of administrative boundaries. We report on the decisions taken for modeling and serving the information, such as the adoption of an appropriate URI strategy, the development of necessary ontologies, and the use of (named) graphs to support the aforementioned use cases.
Nicolas Seydoux, Khalil Drira, Nathalie Hernandez and Thierry Monteil
Semantic interoperability is an issue in heterogeneous IoT systems. The limited processing power and memory storage of constrained IoT nodes prevent them from handling enriched data. This paper proposes a method to lower complex knowledge representations into simpler structured data, based on the reuse of lifting mappings from data schemas to semantic models.
Femke Ongenae, Femke De Backere, Jelle Nelis, Stijn De Pestel, Christof Mahieu, Shirley Elprama, Charlotte Jewell, An Jacobs, Pieter Simoens and Filip De Turck
People with Dementia (PwD) exhibit Behavioral Disturbances (BD) that can be alleviated by personalized interactions, revisiting memories and promoting comfort and quality of life. However, caregivers are unable to spend a lot of time on these interactions. This work-in-progress poster details the design and deployment of a semantic Internet of Robotic Things (IoRT) platform that enables personalized interactions of a robot with a PwD to reduce and intercept BDs.
Takeshi Morita, Yu Sugawara, Ryota Nishimura and Takahira Yamaguchi
We have developed PRactical INTElligent aPplicationS (PRINTEPS), a platform for developing comprehensive intelligence applications. This paper introduces an application of PRINTEPS to a customer reception service in a robot cafe, using stream reasoning and the Robot Operating System (ROS) and integrating image sensing with knowledge processing. Based on this platform, we demonstrate that the behaviors of a robot in a robot cafe can be modified by changing the applicable rule sets.
Michał Blinkiewicz and Jaroslaw Bak
We present the recent progress of SQuaRE, the SPARQL Query and R2RML mappings Environment, which provides a graphical interface for creating R2RML mappings that can be immediately tested by executing SPARQL queries. SQuaRE is a web-based tool with an easy-to-use interface that can be applied in ontology-based data access applications. We describe SQuaRE's main features, its architecture as well as technical details.
Henning Agt-Rickauer, Jörg Waitelonis, Tabea Tietz and Harald Sack
With the switch from analog to digital technology, large amounts of data are created across the entire process of production, distribution, and archival of film and TV programs. Besides recorded and processed audiovisual information, in each single step of the production process and throughout the entire media value chain, new metadata is created, administrated, and put into relation with already existing metadata required for the management of these processes. Due to competing standards as well as proprietary and incompatible interfaces of the applied software tools, a significant amount of this metadata is lost again and not available for subsequent steps in the process chain. As a consequence, most of this valuable information has to be recreated at great cost in each single step of media production, distribution, and archival. Currently, there is no generally accepted or commonly used metadata exchange format that is applied throughout the media value chain. Moreover, the market for media production companies has changed dramatically, with the internet becoming the preferred distribution channel for all media content. The limited budgets available to media production companies today put additional pressure on them to work in a cost- and time-efficient way and not to waste resources on the costly reengineering of lost metadata. The dwerft project aims to apply Linked Data principles to all metadata exchange through all steps of the media value chain. Starting with the very first idea for a script, all metadata are mapped to either existing or newly developed ontologies to be reused in subsequent steps of the media value chain. Thus, metadata collected during media production becomes a valuable asset not only in each step from pre- to postproduction, but also in distribution and archival. This paper presents results of the dwerft project on the successful integration of a set of film production tools based on the Linked Production Data Cloud, a technology platform for the film and TV industry that enables software interoperability in the production, distribution, and archival of audiovisual content.
Corentin Jouault, Kazuhisa Seta and Yuki Hayashi
The purpose of this research is to use Linked Open Data (LOD) to support history learning on the Internet. The main issue in creating meaningful content-dependent advice for learners is that the system requires an understanding of the learning domain. Learners use the Semantic Open Learning Space (SOLS) to create a machine-understandable concept map that represents their knowledge. SOLS is able to dynamically generate questions depending on each learner's concept map. The system uses history domain ontologies to generate questions that aim to help learners deepen their historical considerations. An evaluation showed that learners using the question generation function could express deeper historical considerations after learning.
Jongmin Lee, Youngkyoung Ham and Tony Lee
There are many studies on question answering systems, which can answer natural language questions. Diverse techniques are required for building such a system, but it cannot be implemented without well-structured knowledge data. For this reason, we construct a large-scale knowledge base in Korean, with the goal of creating a uniquely Korean question answering system.
Wei Emma Zhang, Ermyas Abebe, Quan Z. Sheng and Kerry Taylor
In this paper, we propose the first system, called Open Programming Knowledge Extraction (OPKE), to automatically extract knowledge from programming Question-Answering (QA) communities. OPKE is the first step towards building a programming-centric knowledge base. Data mining and Natural Language Processing techniques are leveraged to identify paraphrased questions and construct structured information. Preliminary evaluation shows the effectiveness of OPKE.
Yuting Song, Taisuke Kimura, Biligsaikhan Batjargal and Akira Maeda
Aiming to link records that refer to the same entity across multiple databases in different languages, we address the mismatches in wording between literal translations of metadata in the source language and metadata in the target language, which cannot be handled by string-based measures. In this paper, we propose a method based on word embeddings, which can capture semantic similarity relationships among words. The effectiveness of this method is confirmed in linking the same records between Ukiyo-e (Japanese traditional woodblock printing) databases in Japanese and English. The method could be applied to other languages since it makes few assumptions about languages.
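As an editorial illustration of the underlying idea (not the authors' code), the sketch below compares two differently worded titles by averaging word vectors and taking the cosine similarity; the vectors are made up for the example.

```python
import numpy as np

# Made-up vectors standing in for pretrained word embeddings.
vectors = {
    "woodblock": np.array([0.80, 0.10, 0.30]),
    "print":     np.array([0.70, 0.20, 0.40]),
    "ukiyo-e":   np.array([0.75, 0.15, 0.35]),
}

def embed(title):
    """Average the vectors of the known words in a metadata title."""
    words = [w for w in title.lower().split() if w in vectors]
    return np.mean([vectors[w] for w in words], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Different wordings, but the averaged embeddings stay close, so the
# two records can still be linked where string measures would fail.
print(cosine(embed("woodblock print"), embed("ukiyo-e print")))
```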
Gerhard Wohlgenannt and Filip Minic
Ontology learning has been an important research area in the Semantic Web field in the last 20 years. Ontology learning systems generate domain models from data (typically text) using a combination of sophisticated methods. In this poster, we study the use of Google's word2vec to emulate a simple ontology learning system, and compare the results to an existing "traditional" ontology learning system.
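For readers unfamiliar with the approach, a minimal sketch of the word2vec step follows (ours, using gensim; the corpus file name is a placeholder): the nearest neighbours of a seed term serve as candidate domain concepts.

```python
from gensim.models import Word2Vec

# One tokenized sentence per line; "domain_corpus.txt" is hypothetical.
sentences = [line.split() for line in open("domain_corpus.txt")]
model = Word2Vec(sentences, vector_size=100, window=5, min_count=2)

# Terms closest to a seed concept are candidate concepts, emulating the
# concept-extraction step of a traditional ontology learning system.
for term, score in model.wv.most_similar("ontology", topn=10):
    print(term, round(score, 3))
```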
Motoyuki Takaai and Yohei Yamane
In the development departments of some manufacturing companies, weekly reports describe the status of events, but they are poorly structured plain text. In this report, we propose a method for constructing semantic networks of development activities from weekly reports. Our ontology-based method extracts entities such as events, statuses and agents from the reports, constructs relations between them, and creates Semantic MediaWiki pages from the semantic networks to visualize development activities. We show a use case applying the method to actual weekly reports and internal documents of a development department.
Fabien Gandon
We describe a DBpedia extractor materializing as linked data the editing history of Wikipedia pages to support historical queries and indicators.
Paramita Mirza, Simon Razniewski and Werner Nutt
While automated knowledge base construction so far has largely focused on fully qualified facts, the Web also contains extensive amounts of cardinality information, such as that someone has two children, without giving their names. In this paper we argue that the extraction of such information could substantially increase the scope of knowledge bases. For the sample of the hasChild relation in Wikidata, we show that simple regular-expression-based extraction from Wikipedia can increase the size of the relation by 178%. We also show how such cardinality information can be used to estimate the recall of knowledge bases.
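A minimal sketch of what regular-expression-based cardinality extraction can look like (our illustration, not the authors' patterns):

```python
import re

NUMBER_WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}
PATTERN = re.compile(r"ha[sd] (one|two|three|four|five|\d+) (?:sons|daughters|children)")

def child_cardinality(sentence):
    """Return the number of children stated in a sentence, if any."""
    m = PATTERN.search(sentence.lower())
    if not m:
        return None
    token = m.group(1)
    return NUMBER_WORDS.get(token, int(token) if token.isdigit() else None)

print(child_cardinality("She had two children from her first marriage."))  # 2
```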
Andrea Giovanni Nuzzolese, Anna Lisa Gentile, Valentina Presutti and Aldo Gangemi
In this paper we describe cLODg2 (conference Linked Open Data generator - version 2), a tool to collect, refine and produce Linked Data about scientific conferences with their associated publications, participants and events. Conference metadata collected from different unstructured and semi-structured resources must be expressed with appropriate vocabularies to be exposed as Linked Data. cLODg2 facilitates this task by providing a one-click workflow to generate data which is ready to be integrated into the ScholarlyData.org dataset. cLODg2 is an open source project that aims to foster the publication of scholarly Linked Open Data and encourage collaborative efforts in this direction between researchers and publishers.
Jonas Bulegon Gassen, Stefano Faralli, Simone Paolo Ponzetto and Jan Mendling
System analysis and design is concerned with the creation of conceptual models. In this paper, we introduce a novel resource called "Who-Does-What" (WDW) that supports the creation and quality assurance of such models. WDW provides a knowledge base of activities for classes of people engaged in a wide range of different occupations. The resource is semi-automatically created by populating the manually-created Standard Occupational Classification (SOC) of the US Department of Labor with activities found on the Web.
Joo Sungmin, Seiji Koide, Hideaki Takeda, Daisuke Horyu, Akane Takezaki and Tomokazu Yoshida
This paper proposes the Agriculture Activity Ontology (AAO) as a basis for the core vocabulary of agricultural activity. Since concepts of agricultural activities are formed by various contexts such as purpose, means, crop, and field, we organize the agriculture activity ontology as a hierarchy of concepts discriminated by such properties. The vocabulary of agricultural activity is then defined as a subset of the ontology. Since the ontology is consistent, extendable, and capable of some inference thanks to Description Logics, the vocabulary inherits these features. The vocabulary is also linked to existing vocabularies such as AGROVOC. It is expected to be used as the data format in agricultural IT systems. The vocabulary has been adopted as part of "the guideline for agriculture activity names for agriculture IT systems" issued by the Ministry of Agriculture, Forestry and Fisheries (MAFF), Japan.
Atsuko Yamaguchi, Kouji Kozaki, Kai Lenz, Yasunori Yamamoto, Hiroshi Masuya and Norio Kobayashi
Linked Open Data (LOD) is a powerful mechanism for linking different datasets published on the Web, and it is expected to create new value from data through mash-ups over various datasets on the Web. An important need when obtaining data from LOD is to find a path of resources connecting two given classes, each of which contains an end resource of the path. In this study, two technologies for this approach are introduced: a labeled multigraph named a class graph, to compute class-class relationships, and an RDF specification named SPARQL Builder Metadata, to obtain and store the metadata required to construct a class graph. In addition, as a practical application, we introduce the SPARQL Builder system, which assists users in writing semantic queries for LOD.
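As a toy illustration of the path-finding problem the system addresses (our sketch, with made-up classes and properties), consider searching a labelled class graph with networkx:

```python
import networkx as nx

# Edges record that some property links instances of one class to another.
G = nx.MultiDiGraph()
G.add_edge("Gene", "Protein", label="encodes")
G.add_edge("Protein", "Pathway", label="participatesIn")
G.add_edge("Pathway", "Disease", label="associatedWith")

# A path of resources connecting the two given classes.
path = nx.shortest_path(G, "Gene", "Disease")
for u, v in zip(path, path[1:]):
    label = list(G[u][v].values())[0]["label"]  # one property linking u to v
    print(f"{u} --{label}--> {v}")
```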
Satoshi Kume, Hiroshi Masuya, Yosky Kataoka and Norio Kobayashi
Imaging data are fundamental to the life sciences. We aimed to construct a microscopy ontology for an integrated metadata database of optical and electron microscopy images combined with various bio-entities. To realise this, we applied the Resource Description Framework (RDF) to the Open Microscopy Environment (OME) data model, which is the de facto standard for describing optical microscopy images and experimental data. We translated the XML-based OME metadata into the base concepts of the Web Ontology Language (OWL) as a trial of developing a microscopy ontology. We describe the OWL-based ontology of microscopy imaging data and propose 18 upper-level ontology concepts, including previously missing concepts such as electron microscopy, phenotype data, biosample, and imaging conditions.
Kai Lenz, Hiroshi Masuya and Norio Kobayashi
To promote data dissemination and integration of life science datasets produced in a general research institute, RIKEN, we developed an infrastructure database named "RIKEN MetaDatabase", which enables data publication and integration with the Resource Description Framework. We implemented a simple data-managing workflow and a relational-database-like graphical interface that represents data links across laboratories. As a result, inter-laboratory collaboration and coordination activities began to accelerate. Combined with global standardisation activities, we expect this database to contribute to data integration across the world.
Terue Takatsuki, Mikako Saito, Sadahiro Kumagai, Eiki Takayama, Kazuya Ohshima, Nozomu Ohshiro, Kai Lenz, Nobuhiko Tanaka, Norio Kobayashi and Hiroshi Masuya
We developed RDF-based databases of phenotypes and animal strains produced in Japan and a portal site termed "J-Phenome". By applying a common schema, these databases can be retrieved with the same SPARQL query across graphs. In the operation of these databases, RDF offered multiple advantages compared to conventional technologies, such as improved comprehensive search, data integration using ontologies and public data, reuse of data, and wider dissemination of phenotype data.
Jing Mei
Evidence-based medicine intends to optimize clinical decision making by using evidence. Semantic query answering can help to find the most relevant evidence. However, at the point of care, there is rarely time for humans to read the evidence. In this poster, we propose building an evidence graph for clinical decision support, in which an evidence ontology is defined and extended with SWRL rules. On top of this graph, we perform evidence query and evidence fusion to generate a ranked list of decision options. Our prototype implementation of the evidence graph demonstrates its assistance to decision making by combining a variety of knowledge-driven and data-driven decision services.
Stasinos Konstantopoulos, Angelos Charalambidis, Giannis Mouchakis, Antonis Troumpoukis, Jürgen Jakobitsch and Vangelis Karkaletsis
The ability to cross-link large-scale data with each other and with structured Semantic Web data, and the ability to uniformly process Semantic Web and other data, adds value to both the Semantic Web and the Big Data communities. This paper presents work in progress towards integrating Big Data infrastructures with Semantic Web technologies, allowing for the cross-linking and uniform retrieval of data stored in both Big Data infrastructures and Semantic Web datasets. The technical challenges involved pertain to both data and system interoperability: we need a way to make the semantics of Big Data explicit so that they can be interlinked, and we need a way to make it transparent for client applications to query federations of such heterogeneous systems. The paper presents an extension of the Semagrow federated SPARQL query processor that is able to seamlessly federate SPARQL endpoints, Cassandra databases, and Solr databases, and discusses future directions of this line of work.
David Martin and Peter Patel-Schneider
The SPARQL 1.1 Query Language permits patterns inside FILTER expressions using the EXISTS construct, specified by using substitution. Substitution destroys some of the aspects of SPARQL that make it suitable as a data access language. As well, substitution causes problems in the SPARQL algebra and produces counterintuitive results. Fixing the problems with EXISTS is best done with a completely different definition that does not use substitution at all.
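For context, the snippet below (ours) shows the construct under discussion; it runs with rdflib's SPARQL 1.1 engine. The paper's point concerns how FILTER EXISTS is formally defined, not whether it evaluates.

```python
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix : <http://example.org/> .
:alice :knows :bob .
:bob   :knows :carol .
""", format="turtle")

q = """
PREFIX : <http://example.org/>
SELECT ?person WHERE {
  ?person :knows ?someone .
  FILTER EXISTS { ?someone :knows ?other }
}
"""
for row in g.query(q):
    print(row.person)  # only :alice, since :carol knows nobody
```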
Katalin Ternai and Ildikó Szabó
Compliance checking of business processes, executed by auditors, requires analyzing documents such as log files and business process models against requirements derived from reference guidelines. This paper presents a forward compliance checking application for facilitating conformant behavior by detecting organizational operations and their deviations based on these documents in a semantic way. The application has been tested on the Internalization process in the context of Erasmus mobility.
Silvio Peroni, David Shotton and Fabio Vitali
In this poster paper we provide an overview of the OpenCitations project and of its main outcome, the OpenCitations Corpus: an open repository of scholarly citation data made available under a Creative Commons public domain dedication, which provides accurate citation information harvested from the scholarly literature in RDF.
Amna Basharat, Khaled Rasheed and I. Budak Arpinar
In this paper we illustrate how we harness the power of crowds and specialized experts through automated knowledge acquisition workflows for semantic annotation in specialized and knowledge-intensive domains. We undertake the special case of the Arabic script of the Qur'an, a widely studied manuscript, and apply a hybrid methodology of traditional 'crowdsourcing' augmented with 'expertsourcing' for semantically annotating its verses. We demonstrate that our proposed hybrid method presents a promising approach for achieving reliable annotations in an efficient and scalable manner, especially in cases where a high level of accuracy is required in knowledge-intensive and sensitive domains.
Hassan Saif, Miriam Fernandez, Matthew Rowe and Harith Alani
From its start, the so-called Islamic State of Iraq and the Levant (ISIL/ISIS) has been successfully exploiting social media networks, most notoriously Twitter, to promote its propaganda and recruit new members, resulting in thousands of social media users adopting a pro-ISIS stance every year. Automatic identification of pro-ISIS users on social media has, thus, become the centre of interest for various governmental and research organisations. In this paper we propose a semantic-based approach for radicalisation detection on Twitter. Unlike most previous works, which mainly rely on the lexical and contextual representation of the content published by Twitter users, our approach extracts and makes use of the underlying semantics of words exhibited by these users to identify their pro/anti-ISIS stances. Our results show that classifiers trained from word semantics outperform those trained from lexical and network features by 2% on average in F1-measure.
Khai Nguyen and Ryutaro Ichise
Instance matching is the problem of finding instances that describe the same object. It can be viewed as a classification problem, where a pair of instances is predicted as match or non-match. A common limitation of existing classifier-based matching systems is the absence of instance-pair ranking. We propose using a ranking feature to enhance the classifier in instance matching. Experiments on real datasets confirm a significant improvement when applying our method.
Junzhao Zhang, Xiaowang Zhang and Zhiyong Feng
Many existing approaches solve the subgraph matching problem based on a filter-and-refine strategy. The efficiency of these existing serial approaches relies on the computational capabilities of the CPU. In this paper, we propose an RDF subgraph matching algorithm based on type-isomorphism using the GPU, since GPUs have higher computational performance, more scalability, and lower price than CPUs. Firstly, we present a concurrent matching model for type-isomorphism so that subgraph matching can be tackled in a parallel way. Secondly, we develop a parallel algorithm capturing our proposed concurrent matching model and implement a prototype called IRSMG using the GPU. Finally, we evaluate IRSMG on the LUBM benchmark datasets. The experiments show that IRSMG significantly outperforms the state-of-the-art algorithms on the CPU.
Zhenyu Song, Xiaowang Zhang and Zhiyong Feng
The time needed to answer a SPARQL query with all its exact solutions over a large-scale RDF dataset can exceed users' tolerable waiting time, especially when the query contains OPT operations, since OPT is the least conventional operator in SPARQL. It thus becomes essential to make a trade-off between query response time and solution accuracy. We propose PRONA, a plugin for well-designed approximate queries in Jena, which helps users answer well-designed SPARQL queries by approximate computation. The main features of PRONA comprise a SPARQL query engine with approximate queries, as well as various approximation degrees for users to choose from.
Hong Fang and Xiaowang Zhang
In this paper, we present a query language for probabilistic RDF databases, where each triple has a probability, called pSPARQL, built on SPARQL, which is recommended by the W3C as the query language for RDF databases. Firstly, we present the syntax and semantics of pSPARQL. Secondly, we define the query problem of pSPARQL with respect to probabilities of solutions. Finally, we show that the query evaluation of general pSPARQL patterns is PSPACE-complete.
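A toy rendering of the probabilistic setting (ours, simplified; the paper's exact semantics may differ): each triple carries a probability, and a solution's probability is derived from the triples it uses, here assuming independence.

```python
triples = [
    ("alice", "worksAt", "acme",    0.9),
    ("acme",  "locatedIn", "tokyo", 0.6),
    ("alice", "worksAt", "globex",  0.2),
]

def match(pattern, triple):
    """Variables start with '?'; return a binding dict or None."""
    binding = {}
    for p, t in zip(pattern, triple):
        if p.startswith("?"):
            binding[p] = t
        elif p != t:
            return None
    return binding

# Evaluate one triple pattern: each matching triple yields a solution
# annotated with that triple's probability.
for s, p, o, prob in triples:
    b = match(("alice", "worksAt", "?x"), (s, p, o))
    if b is not None:
        print(b, "with probability", prob)
```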
Zhihui Liu, Xiaowang Zhang and Zhiyong Feng
Rule-based OWL reasoning computes the deductive closure of an ontology by applying RDF/RDFS and OWL entailment rules. In this paper, we present an approach to enhancing the performance of rule-based OWL reasoning on Spark based on a locally optimal executable strategy. Firstly, we divide all rules (27 in total) into four main classes, namely SPO rules (5 rules), type rules (7 rules), sameAs rules (7 rules), and schema rules (8 rules), since, as we investigated, the triples corresponding to the first three classes of rules are overwhelming (e.g., over 99% in the LUBM dataset) in practice. Secondly, based on the interdependence among the entailment rules in each class, we pick an optimal executable rule order for each class and then combine them into a new execution order over all rules. Finally, we implement the new rule execution order on Spark. The experimental results show that the running time of our approach is improved by about 30% compared to Kim & Park's algorithm (2015).
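A sequential toy version of rule-based closure computation (ours; the paper's contribution is the rule ordering and its Spark implementation, not this naive loop):

```python
def closure(triples):
    """Apply two RDFS-style rules to a fixpoint."""
    triples = set(triples)
    while True:
        new = set()
        for s, p, o in triples:
            for s2, p2, o2 in triples:
                if p2 == "subClassOf" and s2 == o:
                    if p == "subClassOf":   # rdfs11: subClassOf is transitive
                        new.add((s, "subClassOf", o2))
                    elif p == "type":       # rdfs9: propagate rdf:type upwards
                        new.add((s, "type", o2))
        if new <= triples:
            return triples
        triples |= new

facts = {("Dog", "subClassOf", "Mammal"),
         ("Mammal", "subClassOf", "Animal"),
         ("rex", "type", "Dog")}
for t in sorted(closure(facts)):
    print(t)
```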
Changlong Wang, Xiaowang Zhang and Zhiyong Feng
We propose a technique that combines an OWL 2 EL reasoner with an OWL 2 reasoner to classify expressive ontologies. We exploit the information implied by the ontology structure to identify a small non-EL ontology that contains the axioms necessary to ensure completeness. In the process of ontology classification, the bulk of the workload is delegated to an efficient OWL 2 EL reasoner and the small remaining part is handled by a less efficient OWL 2 reasoner. Experimental results show that our approach leads to a reasonable task assignment and offers a substantial speedup in ontology classification.
Julien Subercaze and Christophe Gravier
We present Inferray, an in-memory, cross-platform, parallel reasoner for RDFS and RDFS-Plus. Inferray uses carefully optimized hash-based join and sorting algorithms to perform parallel materialization. Designed to take advantage of the architecture of modern CPUs, Inferray exhibits very good use of cache and memory bandwidth. It offers state-of-the-art performance on RDFS materialization, outperforms its counterparts on RDFS-Plus, and can be connected with Jena. Reasons to see the poster: i) presentation of the system and how to use it; ii) discussion of the implementation, with a source code walkthrough.
Dieter De Paepe, Ruben Verborgh, Erik Mannens and Rik Van de Walle
Semantic Web reasoners are powerful tools that allow the extraction of implicit information from RDF data. This information is reachable through the definition of ontologies and/or rules provided to the reasoner. To achieve this, various algorithms are used by different reasoners. In this paper, we explain how state space search can be applied to perform backward-chaining rule-based reasoning. State space search is an approach used in the Artificial Intelligence domain that solves problems by modeling them as a graph and searching (using diverse algorithms) for solutions within this graph. State space search offers inherent proof generation and the ability to plug in different search algorithms to determine the characteristics of the reasoner, such as speed, memory usage, or guaranteed shortest-proof generation.
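A compact sketch of the idea (ours; variable renaming and other practicalities of a real reasoner are omitted): states are lists of open goals, facts and rules expand a state, and breadth-first search finds a shortest proof.

```python
from collections import deque

facts = {("socrates", "is_a", "human")}
rules = [  # (body, head): the head holds whenever the body holds
    ((("?x", "is_a", "human"),), ("?x", "is_mortal", "true")),
]

def unify(a, b, theta):
    theta = dict(theta)
    for x, y in zip(a, b):
        x, y = theta.get(x, x), theta.get(y, y)
        if x.startswith("?"):
            theta[x] = y
        elif y.startswith("?"):
            theta[y] = x
        elif x != y:
            return None
    return theta

def prove(goal):
    frontier = deque([([goal], {})])      # (open goals, substitution)
    while frontier:
        goals, theta = frontier.popleft()
        if not goals:
            return theta                  # no open goals left: proof found
        first, rest = goals[0], goals[1:]
        for fact in facts:                # close a goal against a fact...
            t = unify(first, fact, theta)
            if t is not None:
                frontier.append((rest, t))
        for body, head in rules:          # ...or replace it by a rule body
            t = unify(first, head, theta)
            if t is not None:
                frontier.append((list(body) + rest, t))

print(prove(("socrates", "is_mortal", "?y")))
```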
Ben De Meester, Anastasia Dimou, Ruben Verborgh, Erik Mannens and Rik Van de Walle
Data has been made reusable and machine-interpretable by publishing it as Linked Data. However, automatic processing of Linked Data is not fully achieved yet, as manual effort is still needed to integrate existing tools and libraries within a certain technology stack. To enable automatic processing, we propose exposing functions and methods as Linked Data, publishing them in different programming languages, using content negotiation to cater to different technology stacks, and making use of common, technology-independent identifiers to make them discoverable. As such, we can enable automatic processing of Linked Data across formats and technology stacks. By using discovery endpoints, similar to those used to discover vocabularies and ontologies, the publication of these functions can remain decentralized whilst still being easily discoverable.
Lu Fang, Qingliang Miao and Yao Meng
In this paper, we investigate how to identify entity types based on entity category information. In particular, we first calculate the statistical distribution of each category over all the types. We then generate type candidates according to the distribution probability. Finally, we identify the correct type according to the distribution probability and keywords in the category and abstract. To evaluate the effectiveness of the approach, we conduct preliminary experiments on a real-world dataset from DBpedia. Experimental results indicate that our approach is effective in identifying entity types.
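A toy sketch of the first two steps (ours, with made-up counts): estimate each category's distribution over types from labelled examples, then rank type candidates for a new entity by summing P(type | category) over its categories.

```python
from collections import Counter, defaultdict

# (category, type) pairs as observed in training data -- made-up sample.
observations = [
    ("American_novelists", "Writer"), ("American_novelists", "Writer"),
    ("American_novelists", "Person"), ("Living_people", "Person"),
]

dist = defaultdict(Counter)
for category, etype in observations:
    dist[category][etype] += 1

def type_candidates(categories):
    scores = Counter()
    for c in categories:
        total = sum(dist[c].values())
        for etype, n in dist[c].items():
            scores[etype] += n / total  # P(type | category), summed over categories
    return scores.most_common()

print(type_candidates(["American_novelists", "Living_people"]))
```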
Valeria Fionda, Melisachew Wudage Chekol and Giuseppe Pirrò
We introduce the Gize framework for querying historical RDF data. Gize builds upon two main pillars: a lightweight approach to keeping historical data, and an extension of SPARQL called SPARQ-LTL, which incorporates temporal logic primitives to enable a rich class of queries. One striking point of Gize is that its features can be readily made available in existing query processors.
Makoto Urakawa, Masaru Miyazaki, Hiroshi Fujisawa, Masahide Naemura and Ichiro Yamada
School curricula are organized by academic year. Because students need to learn many subjects every year, related topics are placed in curricula discretely. In this study, we propose a method to construct a dynamic learning path that enables learners to study related topics continuously. In this process, we define two kinds of similarity scores, an inheritance score and a context similarity score, to connect the learning paths of related topics. We also construct a curriculum ontology with the Resource Description Framework (RDF) to make the dynamic learning path accessible. Using the curriculum ontology, we develop a learning system for schools that shows a dynamic learning path with broadcast video clips.
Monika Solanki
Effective, collaborative integration of software and big data engineering for Web-scale systems is now a crucial technical and economic challenge. This requires new combined data and software engineering processes and tools. Semantic metadata standards and linked data principles provide a technical grounding for such integrated systems, given an appropriate model of the domain. In this paper we introduce the ALIGNED suite of ontologies, specifically designed to model the information exchange needs of combined software and data engineering. The models have been deployed to enable: tool-chain integration, such as the exchange of data quality reports; cross-domain communication, such as interlinked data and software unit testing; and mediation of the system design process through the capture of design intents, serving as a source of context for model-driven software engineering processes. These ontologies are deployed in web-scale, data-intensive system development environments in both the commercial and academic domains. We exemplify the usage of the suite on a complex collaborative software and data engineering scenario from the legal information system domain.
Ian Harrow, Martin Romacker, Andrea Splendiani, Stefan Negru, Peter Woollard, Scott Markel, Yasmin Alam-Faruque, Martin Koch, Erfan Younesi, James Malone and Ernesto Jimenez-Ruiz
The Pistoia Alliance Ontologies Mapping project (http://www.pistoiaalliance.org/projects/ontologies-mapping) was set up to find or create better tools or services for mapping between ontologies in the same domain and to establish best practices for ontology management in the Life Sciences. It was proposed through the Pistoia Alliance Ideas Portfolio Platform (IP3: https://www.qmarkets.org/live/pistoia/home) and selected by the Pistoia Alliance Operations Team for development of a formal business case. The project has delivered a set of guidelines for best practice which build on existing standards. We show how these guidelines can be used as a "checklist" to support the application and mapping of source ontologies in the disease and phenotype domain. Another important output of this project was to specify the requirements for an Ontologies Mapping Tool. These requirements were used in a preliminary survey, which established that tools substantially meeting them already exist. Therefore, we have developed a formal process to define and submit a request for information (RFI) to existing ontologies mapping tool providers to enable their evaluation. This RFI process is described, and we summarise our findings from the evaluation of seven ontologies mapping tools from academic and commercial providers. The guidelines and RFI materials are accessible on a public wiki: https://pistoiaalliance.atlassian.net/wiki/display/PUB/Ontologies+Mapping+Resources. A critical component of any ontologies mapping tool is the embedded ontology matching algorithm. Therefore, the Pistoia Alliance Ontologies Mapping project is supporting the development and evaluation of ontology matching algorithms through sponsorship and organisation of the new Disease and Phenotype track for OAEI 2016, which is also summarised in this poster. This new track has been organised because, currently, mappings between ontologies in a given data domain are mostly curated by bioinformatics and disease experts in academia or industry, who would benefit from automation of their procedures. This could be accomplished through the implementation of ontology matching algorithms in their existing workflow environments or investment in an ontologies mapping tool for managing the ontologies mapping life cycle. Work is in progress in the Ontologies Mapping project to develop user requirements for an ontologies mapping service. We will conduct a survey of Pistoia Alliance members to understand the need for such a service and whether it should be implemented in future.
Thomas Wilmering and Mark B. Sandler
This paper discusses an extension to the Audio Effect Ontology (AUFX-O) for the interdisciplinary classification of audio effect types. The ontology extension implements a unified classification system that draws on knowledge from different music-related disciplines and is designed to facilitate the retrieval of audio effect information based on low-level and semantic aspects. It extends AUFX-O, enabling communication between agents from different disciplines within the field of music creation and production. After briefly discussing the ontology, we show how it can be used to efficiently classify and retrieve effect types.
Xiang Nan Ren, Olivier Curé, Houda Khrouf, Zakia Kazi-Aoul and Yousra Chabchoub
Due to the growing need to timely process and derive valuable information and knowledge from data produced in the Semantic Web, RDF stream processing (RSP) has emerged as an important research domain. In this paper, we describe the design of an RSP engine built upon state-of-the-art Big Data frameworks, namely Apache Kafka and Apache Spark. Together, they support the implementation of a production-ready RSP engine that guarantees scalability, fault tolerance, high availability, low latency and high throughput. Moreover, we highlight that the Spark framework considerably eases the implementation of complex applications requiring libraries as diverse as machine learning, graph processing, query processing and stream processing.
Sejin Chun, Jooik Jung, Xiongnan Jin, Seungjun Yoon and Kyong-Ho Lee
In this paper, we propose proactive replication of Linked Data for RDF Stream Processing. Our solution achieves fast query processing by replicating subsets of remote RDF datasets before query evaluation. To construct the replication process effectively, we present an update estimation model to handle changes in updates over time. With the update estimation model, we re-compose instances of the replication process in response to problems such as outdated data. Finally, we conduct exhaustive tests with a real-world dataset to verify our solution.
Seungjun Yoon, Sejin Chun, Xiongnan Jin and Kyong-Ho Lee
The W3C RDF Stream Processing (RSP) community has proposed both a common model and a language for querying RDF streams. However, current implementations of RSP systems differ significantly from each other in terms of performance. In this paper, we propose a unified interface for optimizing a continuous query across heterogeneous RSP systems. To enhance the performance of RSP, the unified interface decomposes a query, reassembles the partial queries, and assigns them to appropriate RSP systems. Experimental results show that the proposed approach performs better in terms of memory consumption and latency.
Christophe Gravier and Julien Subercaze
As our computers embed more cores, efficient reasoners are designed with parallelization but also CPU and memory friendliness in mind. The latter contribute to making reasoners tractable in practice despite the computational complexity of logical fragments. However, creating benchmarks to monitor this CPU-friendliness for many reasoners, datasets and logical fragments is a tedious task. In this paper, we present the Université Saint-Etienne Reasoners Benchmark (USE-RB), which automates the setup and execution of reasoner benchmarks with particular attention to monitoring how reasoners work in harmony with the CPU.
Davide Lanti, Guohui Xiao and Diego Calvanese
In this paper we present an experimental evaluation of VIG, a data scaler for OBDA benchmarks. Data scaling is a relatively recent approach, proposed in the database community, that allows for quickly scaling an input data instance to s times its size, while preserving certain application-specific characteristics. The advantages of scaling are that the generator is general, in the sense that it can be re-used on different database schemas, and that users are not required to manually input the data characteristics. VIG lifts the scaling approach from the database level to the OBDA level, where the domain information of ontologies and mappings has to be taken into account as well. To evaluate the quality of VIG, in this paper we use it to generate data for the Berlin SPARQL Benchmark (BSBM), and compare it with the official BSBM data generator.
Olaf Hartig and Carlos Buil Aranda
The recently proposed Triple Pattern Fragment (TPF) interface aims at increasing the availability of Web-queryable RDF datasets by trading off an increased client-side query processing effort for a significant reduction of server load. However, an additional aspect of this trade-off is a very high network load. To mitigate this drawback we propose to extend the interface by allowing clients to augment TPF requests with a VALUES clause as introduced in SPARQL 1.1. In an ongoing research project we study the trade-offs of such an extended TPF interface and compare it to the pure TPF interface. With a poster in the conference we aim to present initial results of this research. In particular, we would like to present a series of experiments showing that a distributed, bind-join-based query execution using this extended interface can reduce the network load drastically (in terms of both the number of HTTP requests and data transfer).
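To make the batching idea concrete, here is our sketch of the client-side step: a batch of intermediate bindings becomes one VALUES clause attached to the next request, replacing one HTTP request per binding.

```python
def values_clauses(var, bindings, batch_size=30):
    """Group bindings into VALUES clauses, one per extended-TPF request."""
    for i in range(0, len(bindings), batch_size):
        rows = " ".join(f"(<{b}>)" for b in bindings[i:i + batch_size])
        yield f"VALUES ({var}) {{ {rows} }}"

cities = [f"http://example.org/city/{i}" for i in range(65)]
for clause in values_clauses("?city", cities):
    print(clause[:70], "...")  # 3 requests instead of 65
```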
Ran Yu, Besnik Fetahu, Ujwal Gadiraju and Stefan Dietze
Embedded markup based on Microdata, RDFa, and Microformats has become prevalent on the Web and constitutes an unprecedented data source. RDF statements from markup are highly redundant, co-references are very frequent yet explicit links are missing, and such statements contain numerous errors. We present a thorough analysis of the challenges associated with markup data in the context of entity retrieval. We analyze four main factors: (i) co-references, (ii) redundancy, (iii) inconsistencies, and (iv) accessibility of information in the case of URLs. We conclude with general guidelines on how to avoid such challenges when dealing with embedded markup data.
John P. Mccrae and Philipp Cimiano
This paper presents LIXR, a system for converting between RDF and XML. LIXR is based on a domain-specific language embedded in the Scala programming language. It supports the definition of transformations of datasets from RDF to XML in a declarative fashion, while still maintaining the flexibility of a full programming language environment. We directly compare this system to other systems programmed in Java and XSLT and show that the LIXR implementations are significantly shorter in terms of lines of code, in addition to being bidirectional and conceptually simple to understand.

Demos

Ruben Taelman, Pieter Heyvaert, Ruben Verborgh, Erik Mannens and Rik Van de Walle
The world contains a large number of sensors that produce new data at a high frequency. It is currently very hard to find public services that expose these measurements as dynamic Linked Data. We investigate how sensor data can be published continuously on the Web at a low cost. This paper describes how the publication of various sensor data sources can be done by continuously mapping raw sensor data to RDF and inserting it into a live, low-cost server. This makes it possible for clients to continuously evaluate dynamic queries using public sensor data. For our demonstration, we will illustrate how this pipeline works for the publication of temperature and humidity data originating from a microcontroller, and how it can be queried.
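A minimal sketch of the mapping step (ours; the URIs and the read_sensor() helper are hypothetical stand-ins for the demo's mapping pipeline):

```python
import time
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import XSD

EX = Namespace("http://example.org/sensors/")

def read_sensor():
    return 21.5  # stand-in for a real microcontroller reading

for _ in range(3):  # the demo runs this continuously
    g = Graph()
    obs = EX[f"observation-{int(time.time())}"]
    g.add((obs, EX.temperature, Literal(read_sensor(), datatype=XSD.float)))
    g.add((obs, EX.observedAt, Literal(int(time.time()), datatype=XSD.integer)))
    print(g.serialize(format="nt"))  # in the demo, pushed to a live low-cost server
    time.sleep(5)
```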
Pieter Heyvaert, Ruben Taelman, Ruben Verborgh, Erik Mannens and Rik Van de Walle
As the amount of generated sensor data increases, semantic interoperability becomes an important aspect in supporting efficient data distribution and communication. The integration and fusion of (sensor) data is therefore important, as this data comes from different data sources and might be in different formats. Furthermore, reusable and extensible methods for this integration and fusion are required in order to scale with the growing number of applications that generate semantic sensor data. Current research efforts allow mapping sensor data to Linked Data in order to provide semantic interoperability. However, they lack support for multiple data sources, hampering integration and fusion. Furthermore, the methods used are not available for reuse or are not extensible, which hampers the development of applications. In this paper, we describe how the RDF Mapping Language (RML) and a Triple Pattern Fragments (TPF) server are used to address these shortcomings. The demonstration consists of a microcontroller that generates sensor data. The data is captured and mapped to RDF triples using module-specific RML mappings, which are queried from a TPF server.
Damien Graux, Pierre Geneves and Nabil Layaida
When searching for flights, current systems often suggest routes involving waiting times at stopovers. There might exist alternative routes which are more attractive from a touristic perspective, because their duration is not necessarily much longer while offering enough time in an appropriate place. Choosing among such alternatives requires additional planning effort to make sure that, e.g., points of interest can conveniently be reached in the allowed time frame. We present a system that automatically computes smart trip alternatives between any two cities. To do so, it searches points of interest in large semantic datasets, considering the set of accessible areas around each possible layover. It then selects feasible alternatives and displays their differences with respect to the default trip.
Eugene Siow, Thanassis Tiropanis and Wendy Hall
Resource-constrained Internet of Things (IoT) devices like Raspberry Pis, with specific performance optimisations, can serve as interoperable personal Linked Data repositories for IoT applications. In this demo paper we describe PIOTRe, a personal datastore that utilises our sparql2sql query translation technology on Pis to process, store and publish IoT time-series historical data and streams. We demonstrate, for a smart home scenario with PIOTRe: a real-time dashboard that utilises RDF stream processing, a set of descriptive analytics visualisations on historical data, a framework for registering stream queries within a local network, and a means of sharing metadata globally with HyperCat and Web Observatories.
Femke Ongenae, Pieter Bonte, Jelle Nelis, Thomas Vanhove and Filip De Turck
The Internet of Things (IoT) is starting to take a prevalent role in our daily lives. Smart offices that automatically adapt their environment to make life at the office as pleasant as possible, are slowly becoming reality. In this paper we present a user-friendly semantic-based smart office platform that allows, through easy configuration, a personalized and comfortable experience at the office.
Shuya Abe, Yutaka Mitsuishi, Shinichiro Tago, Nobuyuki Igata, Seiji Okajima, Hiroaki Morikawa and Fumihito Nishino
Based on the G8 Open Data Charter, governments are publishing corporation register data as Open Data. In Japan, the government recently published a dataset covering approximately 4.4 million corporations, but the dataset is rated at only 3 stars in the 5-star rating system. Our policy, which we believe is also common in the LOD community, is that low-star datasets must be converted to 5 stars as early as possible to strengthen the power of LOD. Based on this policy, we designed a schema for corporation data, converted the Japanese dataset to 5 stars using this schema, and published the result under a Creative Commons Attribution 4.0 License on 9 December 2015, only eight days after the publication date of the original dataset. As far as we know, eight datasets currently refer to ours, which strengthens its 5-star status. For business purposes, we internally appended links between our dataset and other data such as DBpedia, and applied this enriched data to a visualization system for browsing a corporation from various perspectives.
Raf Buyle, Pieter Colpaert, Mathias Van Compernolle, Peter Mechant, Veronique Volders, Ruben Verborgh and Erik Mannens
Base registries are trusted authentic information sources controlled by a public administration or an organization appointed by the government. Maintaining a base registry comes with extra maintenance costs to create the dataset and keep it up to date. In this paper, we study the possibility of embedding the maintenance of base registries in the core of existing administrative processes, reducing the cost of maintaining a new data source. We demonstrate a method to manage Local Council Decisions as Linked Data, which creates a new base registry for mandates. We found that no extra effort was needed in the process used by local administrations. We show that an end-to-end approach for Local Council Decisions as Linked Data is feasible. Furthermore, using this proof of concept, we established momentum to roll out these ideas for the region of Flanders in Belgium.
Gregoire Burel, Lara Piccolo and Harith Alani
We introduce EnergyUse, a collaborative website designed to raise climate change awareness by offering users the ability to view and compare the actual energy consumption of various appliances, and to share and discuss energy conservation tips in an open and social environment. The platform collects data from smart plugs, and exports appliance consumption and community-generated energy tips as Linked Data. EnergyUse is supported by multiple automatic processes that semantically link related contributions, generate appliance descriptions and publish consumption data using the EnergyUse ontology.
Konstantina Bereta, Guohui Xiao, Manolis Koubarakis, Martina Hodrius and Conrad Bielski
We present Ontop-spatial, a geospatial extension of the well-known OBDA system Ontop, that leverages the technologies of geospatial databases and enables GeoSPARQL-to-SQL translation. We showcase the functionalities of the system in real-world use cases which require data integration of different geospatial sources.
Michel Buffa, Catherine Faron Zucker, Thierry Bergeron and Hatim Aouzal
The Azkar research project focuses on the remote control of a mobile robot using WebRTC, an emerging Web technology for real-time communication. One of the use cases addressed is a remote visit to the French Museum of the Great War in Meaux. For this purpose, we designed an ontology for describing the main scenes in the museum, the objects that compose them, the different trails the robot can follow in a given time period for a targeted audience, and the waypoints and observation points. This RDF dataset is exploited to assist the human guide in designing a trail, and possibly adapting it during the visit. In this paper we present the Azkar Museum Ontology, the RDF dataset describing some emblematic scenes of the museum, and an experiment that took place in June 2016 with a robot controlled by an operator located 800 km from the museum. We propose to demonstrate this work in real time during the conference by organizing a remote visit from the conference demo location.
Nandana Mihindukulasooriya, Esteban Gonzalez, Fernando Serena, Carlos Badenes and Oscar Corcho
FarolApp is a mobile web application that aims to increase awareness of light pollution by generating illustrative maps of cities and by encouraging citizens and public administrations to provide street light information in a ubiquitous and interactive way using online street views. In addition to the maps, FarolApp builds on existing sources to generate and provide up-to-date data through crowdsourced user annotations. Generated data is available as dereferenceable Linked Data resources in several RDF formats and via a queryable SPARQL endpoint. The demo presented in this paper illustrates how FarolApp maintains continuously evolving Linked Data that reflects the current status of city street light infrastructures and uses that data to generate light pollution maps.
Freddy Lecue, John Vard and Jiewen Wu
Travel expenses represent up to 7% of organizations' overall budgets. Existing expense systems are designed for reporting expense types and amounts, but not for understanding how to save and spend. We present a system, built on semantic web technologies, which aims at identifying, explaining and predicting abnormal expense claims by employees of large organizations in 500+ cities.
Michel Héon, Roger Nkambou and Mohamed Gaha
The ontology syntaxes standardized by the W3C offer the expressiveness needed to formulate complex concepts. However, the codification of an ontology is a process of formalization of thought that sometimes requires extensive knowledge and is often inaccessible to the layperson. The G-OWL (Graphical OWL) language has been designed to provide a tool that facilitates the expression of knowledge in a manner compatible with OWL 2 ontologies. This paper presents the OntoCASE4G-OWL prototype, a visual modeling tool for editing formal ontologies in G-OWL and translating them into Turtle. The executable version of OntoCASE for Windows and Mac OS X is available at http://www.cotechnoe.com/iswc2016
Fernando Florenzano, Denis Parra, Juan L. Reutter and Freddie Venegas
We demonstrate a visualisation aimed at helping SPARQL-fluent users to produce queries over a dataset they are not familiar with. The visualisation consists of a labelled graph whose nodes are the different types of entities in the RDF dataset, and where two types are related if entities of these types appear related in the RDF dataset. To avoid visual overload when the number of types in a dataset is too large, the graph groups together all types that are subclasses of a more general type, and users are given the option of navigating through this hierarchy of types, dividing type nodes into subtypes as they see fit. We illustrate our visualisation using the Linked Movie Database dataset, and offer the visualisation of DBpedia as well.
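The edges of such a type graph can be computed with a single SPARQL query; a sketch (ours) against the public DBpedia endpoint:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
# Two types are related when some entities of those types are related.
sparql.setQuery("""
SELECT DISTINCT ?t1 ?p ?t2 WHERE {
  ?s ?p ?o . ?s a ?t1 . ?o a ?t2 .
} LIMIT 100
""")
sparql.setReturnFormat(JSON)
for b in sparql.query().convert()["results"]["bindings"]:
    print(b["t1"]["value"], b["p"]["value"], b["t2"]["value"])
```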
Pankesh Patel, Amelie Gyrard, Dhavalkumar Thakker, Amit Sheth and Martin Serrano
Semantic Web of Things (SWoT) applications focus on providing wide-scale interoperability that allows the sharing of IoT devices across domains and the reuse of available knowledge on the web. However, application development is difficult because developers have to carry out various tasks such as designing an application, annotating IoT data, interpreting data, and combining application domains. To address these challenges, this paper demonstrates SWoTSuite, a toolkit for prototyping SWoT applications. It hides the use of semantic web technologies as much as possible to avoid the burden of designing SWoT applications, which involves designing ontologies, annotating sensor data, and using reasoning mechanisms to enrich data. Taking inspiration from sharing and reuse approaches, SWoTSuite reuses data and vocabularies, and leverages existing technologies to build applications. We take a hello-world naturopathy application as an example and demonstrate the application development process using SWoTSuite. The demo video is available at http://tinyurl.com/zs9flrt.
Bernardo Cuenca Grau, Evgeny Kharlamov, Sarunas Marciuska, Dmitriy Zheleznyakov and Marcelo Arenas
In this demo we present the SemFacet system for faceted search over ontology-enhanced Knowledge Graphs (KGs) stored in RDF. SemFacet allows users to query KGs with relatively complex SPARQL queries via an intuitive Amazon-like interface. SemFacet can compute faceted interfaces over large-scale RDF datasets by relying on incremental algorithms, and over large ontologies by exploiting ontology projection techniques. SemFacet relies on an in-memory triple store, and the current implementation bundles JRDFox, Sesame, Stardog, and PAGOdA. During the demonstration, attendees can try SemFacet by exploring the Yago KG.
Evgeny Kharlamov, Bernardo Cuenca Grau, Ernesto Jimenez-Ruiz, Steffen Lamparter, Gulnar Mehdi, Martin Ringsquandl, Yavor Nenov, Stephan Grimm, Mikhail Roshchin and Ian Horrocks
In this demo we present the SOMM system, which resulted from an ongoing collaboration between Siemens and the University of Oxford. The goal of this collaboration is to facilitate the design and management of ontologies that capture the conceptual information models underpinning various industrial applications. SOMM supports engineers with little background in semantic technologies in the creation of such ontologies and in populating them with data. SOMM implements a fragment of OWL 2 RL extended with a form of integrity constraints for data validation, and it comes with support for schema and data reasoning, as well as for ontology integration. We demonstrate the functionality of SOMM in two scenarios from the energy and manufacturing domains.
Wouter Beek and Jan Wielemaker
SPARQL editors make it easier to write queries and inspect their results. Notebooks already support computer and data scientists in domains like statistics and machine learning. There is currently no integrated notebook solution for Semantic Web (SW) programming that combines the strengths of SPARQL editors with the benefits of notebooks. SWISH provides an integrated notebook experience for the Semantic Web programmer.
Alo Allik, Mariano Mora-Mcginity, Gyorgy Fazekas and Mark Sandler
This demo presents MusicWeb, a novel platform for linking music artists within a web-based application for discovering associations between them. MusicWeb provides a browsing experience using connections that are either extra-musical or tangential to music, such as the artists' political affiliation or social influence, or intra-musical, such as the artists' main instrument or most favoured musical key. The platform integrates open linked semantic metadata from various Semantic Web, music recommendation and social media data sources. The connections are further supplemented by thematic analysis of journal articles, blog posts and content-based similarity measures focussing on high level musical categories.
Pasquale Lisena, Manel Achichi, Eva Fernandez, Konstantin Todorov and Raphaël Troncy
In this paper, we introduce OVERTURE, a web application for exploring the interlinked catalogs of major music libraries, including the French National Library, Radio France, and the Philharmonie de Paris. We first developed the DOREMUS ontology, an extension of the well-known FRBRoo model for describing works and expressions as well as the creation process. We then implemented the marc2rdf tool, which converts and links bibliographical entries about music works, interpretations, and expressions from their original MARC format to RDF following the DOREMUS ontology. We present an exploratory search engine prototype that enables users to browse the reconciled collection of bibliographical records of classical music, highlighting the various interpretations of a work, its derivatives, its performance casting, and other rich metadata.
Damian Bursztyn, Francois Goasdoue and Ioana Manolescu
Semantic Web data management raises the challenge of answering queries under constraints (i.e., in the presence of implicit data). To bridge the gap between this extended setting and the query evaluation provided by database engines, a reasoning step (w.r.t. the constraints) is necessary before query evaluation. A large and useful set of ontology languages enjoys FOL reducibility of query answering: queries can be answered by evaluating a SQLized first-order logic (FOL) formula (obtained from the query and the ontology) directly against the explicitly stored data, i.e., without considering the ontological constraints at evaluation time. Our demonstration showcases and analyzes the performance of several reformulation-based query answering techniques, including one we recently devised, applied to the lightweight description logic DL-LiteR underpinning the W3C's OWL 2 QL profile.
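To convey the flavor of reformulation, here is a deliberately tiny Python sketch (the classes and facts are invented; real systems reformulate full conjunctive queries, not single class atoms): a query for instances of a class is rewritten into a union over the class and everything entailed to be below it, and only then evaluated against the explicit data.

    # Toy TBox: SubClassOf(Professor, Academic), SubClassOf(Lecturer, Academic).
    subclass_of = {"Professor": "Academic", "Lecturer": "Academic"}

    def reformulate(target):
        """Return the target class plus every class entailed to be under it."""
        classes, changed = {target}, True
        while changed:
            changed = False
            for sub, sup in subclass_of.items():
                if sup in classes and sub not in classes:
                    classes.add(sub)
                    changed = True
        return classes

    data = [("alice", "Professor"), ("bob", "Lecturer")]  # explicit ABox only
    print([s for s, c in data if c in reformulate("Academic")])
    # Both individuals are answers although no explicit Academic fact exists.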
Matias Junemann, Juan L. Reutter, Adrian Soto and Domagoj Vrgoc
In this demo we present an extension of SPARQL that allows queries to connect to JSON APIs and integrate the obtained information into query answers. We achieve this by adding a new operator to SPARQL and implementing the extension on top of the Jena framework, in order to illustrate how it functions with real-world APIs.
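Since the abstract does not spell out the operator's syntax, the following Python sketch only approximates its effect (the API, its fields, and the bindings are invented): each SPARQL solution triggers a JSON API call whose response is joined back into the solution.

    def weather_api(city):
        # Stand-in for a real JSON API call made over HTTP.
        return {"tempC": 21}

    bindings = [{"city": "Athens"}, {"city": "Kobe"}]  # pretend SPARQL results
    for b in bindings:
        b.update(weather_api(b["city"]))               # extend each solution
    print(bindings)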
Rivindu Perera, Parma Nand and Gisela Klette
Linked Data has emerged as the most widely used and the most powerful knowledge source for Question Answering (QA). Although Question Answering using Linked Data (QALD) fills in many gaps in traditional QA models, the answers are still presented as factoids. This research introduces an answer presentation model for QALD that employs Natural Language Generation (NLG) to produce natural language descriptions presenting an informative answer. The proposed approach employs lexicalization, aggregation, and referring expression generation to build a human-like enriched answer, utilizing the triples extracted from the entities mentioned in the question as well as the entities contained in the answer.
Suvodeep Mazumdar and Ziqi Zhang
This paper describes an extension of the TableMiner+ system, the only open-source Semantic Table Interpretation system that annotates Web tables using Linked Data in an effective and efficient manner. The extension adds a graphical user interface to TableMiner+ to facilitate the visualization and correction of automatically generated annotations. This makes TableMiner+ an ideal tool for the semi-automatic creation of high-quality semantic annotations on tabular data, which facilitates the publication of Linked Data on the Web.
Ghislain Auguste Atemezing and Pierre-Yves Vandenbussche
Structured data is increasingly present on the web due to the adoption of Linked Data principles. At the same time, web users have different skills and want to interact with Linked Datasets in various ways, such as by asking questions in natural language. Over the last years, the QALD challenge series has become the reference for benchmarking question answering systems. However, QALD questions target datasets, not vocabulary catalogues. This paper proposes a first implementation of a question answering (QA) system for the Linked Open Vocabularies (LOV) catalogue, focused mainly on metadata information retrieval. The goal is to provide end users with yet another way to access the metadata available in LOV, using natural language questions.
Rivindu Perera, Parma Nand and Gisela Klette
DBpedia encodes massive amounts of open-domain knowledge and grows by accumulating triples at the same rate as Wikipedia. However, applications that present knowledge drawn from DBpedia often require natural language formulations of these triples. The RealText-lex2 framework offers a scalable platform to transform triples into natural language sentences using lexicalization patterns. The framework has evolved from its previous version (RealText-lex) and comprises four lexicalization pattern mining modules that derive patterns from a training triple collection. These patterns can then be applied to new triples, provided the triples satisfy a defined set of constraints.
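A lexicalization pattern in the spirit described above can be pictured as a sentence template keyed by a property, with slots for the triple's subject and object; the patterns and triple below are invented examples, not RealText-lex2's actual pattern format.

    patterns = {
        "birthPlace": "{s} was born in {o}.",
        "author":     "{o} wrote {s}.",
    }

    def lexicalize(s, p, o):
        template = patterns.get(p)
        return template.format(s=s, o=o) if template else None

    print(lexicalize("Albert Einstein", "birthPlace", "Ulm"))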
Tabea Tietz, Jörg Waitelonis, Joscha Jäger and Harald Sack
When searching for an arbitrary subject in weblogs or archives, users often do not get the information they are really looking for. They are often overwhelmed by an overflow of information, while at other times the presented information is too scarce to be of any use. Without further knowledge about the context or background of the intended subject, users are easily frustrated: they either cannot handle the amount of information, or they give up because they cannot make sense of the topic at all. Furthermore, authors of online platforms often struggle to provide useful recommendations of other articles and to motivate readers to stay on the platform and explore more of its available but often hidden content. In the demo presentation, we present refer, a semantic annotation and visualization system integrated into the Wordpress platform. With refer, content creators can (semi-)automatically annotate their texts with DBpedia resources as part of the original writing process and visualize them automatically. Users are encouraged to take an active part in discovering a platform's information content interactively and intuitively, rather than having to read all of the textual information provided by the author. They can discover background information as well as relationships among persons, places, events, and anything related to the subject in focus, and are inspired to navigate the previously hidden information on a platform.
Stefano Faralli, Christian Bizer, Kai Eckert, Robert Meusel and Simone Paolo Ponzetto
Taxonomic relations (also known as "isa" or hypernymy relations) are a fundamental atomic piece of structured information for many text understanding applications. Such structured information forms part of the basic topological structure of knowledge bases and foundational ontologies. Despite the availability of shared knowledge bases, some NLP applications (e.g., Ontology Learning) require automatic isa-relation harvesting techniques to cover domain-specific and long-tail terms. We present a web application to directly query our repository of isa relations extracted from the Common Crawl (the largest publicly available crawl of the Web). Our resource can also be downloaded for research purposes and accessed programmatically (we also release a Java application programming interface).
Dmitriy Zheleznyakov, Evgeny Kharlamov, Vidar Klungre, Martin G. Skjæveland, Dag Hovland, Martin Giese, Ian Horrocks and Arild Waaler
In ontology-based data access (OBDA), users access relational databases (RDBs) via ontologies that mediate between the users and the data. Ontologies are connected to data via declarative ontology-to-RDB mappings that relate each ontological term to an SQL query. In this demo we present our system KeywDB, which facilitates the construction of ontology-to-RDB mappings in an interactive fashion. In KeywDB, users provide examples of entities for classes that require mappings, and the system returns a ranked list of such mappings. In doing so, KeywDB relies on techniques for keyword query answering over RDBs. During the demo, attendees will try KeywDB with the NorthWind and NPD FP databases and the collections of mappings that we prepared.
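The shape of such ontology-to-RDB mappings can be pictured as a dictionary pairing each ontological term with an SQL query; the table and column names below are invented, and KeywDB would rank candidate entries like these from user-supplied example entities rather than take them as given.

    mappings = {
        ":Employee": "SELECT id FROM employees",
        ":worksFor": ("SELECT e.id, d.id FROM employees e "
                      "JOIN departments d ON e.dept = d.id"),
    }
    for term, sql in mappings.items():
        print(f"{term}  <-  {sql}")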
Jacopo Urbani, Ceriel Jacobs and Markus Krötzsch
We present VLog, a new system for answering arbitrary Datalog queries on top of a wide range of databases, including both relational and RDF databases. VLog is designed to perform intensive rule-based computation efficiently on large Knowledge Graphs (KGs). It adapts column-store technologies to attain high efficiency in terms of memory usage and speed, enabling us to process Datalog queries with thousands of rules over databases with hundreds of millions of tuples, in a live demonstration on a laptop. Our demonstration provides in-depth insights into the workings of VLog and presents important new features, such as support for arbitrary relational DBMSs.
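To make the setting concrete, here is a naive Datalog evaluation of graph reachability in Python; VLog itself uses columnar, far more efficient techniques, and the facts below are invented.

    facts = {("edge", "a", "b"), ("edge", "b", "c")}

    def step(db):
        new = set(db)
        for rel, x, y in db:                     # reach(X,Y) :- edge(X,Y).
            if rel == "edge":
                new.add(("reach", x, y))
        for r1, x, y in db:                      # reach(X,Z) :- reach(X,Y), edge(Y,Z).
            for r2, y2, z in db:
                if r1 == "reach" and r2 == "edge" and y == y2:
                    new.add(("reach", x, z))
        return new

    db = facts
    while True:                                  # iterate to the fixpoint
        nxt = step(db)
        if nxt == db:
            break
        db = nxt
    print(sorted(t for t in db if t[0] == "reach"))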
Adam Sotona
Eclipse RDF4J (formerly known as Sesame) is an open-source Java framework for processing RDF data. The RDF4J framework is extensible through its Storage And Inference Layer (SAIL) to support various RDF stores and inference engines. Apache HBase is the Hadoop database, a distributed and scalable big-data store designed to scale from single servers up to thousands of machines. We have connected RDF4J and HBase to obtain a highly scalable RDF store.
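A common design for triple stores over wide-column stores like HBase, sketched below in Python, is to write each triple under several permutation-indexed row keys so that any lookup pattern becomes a contiguous key-range scan. This illustrates the general technique only; it is not the actual RDF4J SAIL code.

    def row_keys(s, p, o):
        return [f"spo:{s}|{p}|{o}", f"pos:{p}|{o}|{s}", f"osp:{o}|{s}|{p}"]

    store = sorted(row_keys(":alice", ":knows", ":bob") +
                   row_keys(":alice", ":name", '"Alice"'))
    # A (subject=:alice, predicate=:knows) lookup becomes a prefix scan:
    print([k for k in store if k.startswith("spo::alice|:knows|")])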
Thi-Nhu Nguyen, Hideaki Takeda, Khai Nguyen, Ryutaro Ichise and Tuan-Dung Cao
Entity types are very important in DBpedia. Since this information is described inconsistently across languages, it is difficult to recognize the most suitable type for an entity. We propose a method to predict an entity's type based on a novel conformity measure that combines the specificity level of candidate types with majority voting. The experimental results show that our method suggests informative types and outperforms the baselines.
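A toy rendering of the idea in Python (the hierarchy depths and votes are invented, and the actual conformity measure is more refined): candidate types gathered across language editions are voted on, with more specific types weighted higher.

    depth = {"Agent": 1, "Person": 2, "Athlete": 3}     # deeper = more specific
    votes = ["Person", "Athlete", "Person", "Agent"]    # types across editions

    def score(t):
        return votes.count(t) * depth[t]                # majority x specificity

    print(max(set(votes), key=score))                   # suggested type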
Anastasia Dimou, Dimitris Kontokostas, Markus Freudenberg, Ruben Verborgh, Jens Lehmann, Erik Mannens, Sebastian Hellmann and Rik Van de Walle
Schema violations in RDF data generated from (semi-)structured data often derive from the mappings, which are repeatedly applied and specify how an RDF dataset is generated. The DBpedia dataset, which derives from Wikipedia infoboxes, is no exception. To mitigate such violations, we proposed in previous work to validate the mappings that generate the data, instead of validating the generated data afterwards. In this work, we demonstrate how mapping validation is applied to DBpedia: the DBpedia mappings are automatically translated to RML and validated by RDFUnit. The assessment of the DBpedia mappings can be executed frequently, because it requires significantly less time than validating the dataset. The validation results become available via a user-friendly interface, and the DBpedia community can consult them to refine the DBpedia mappings or ontology and thus increase the dataset's quality.
Anastasia Dimou, Pieter Heyvaert, Wouter Maroy, Laurens De Graeve, Ruben Verborgh, Erik Mannens and Rik Van de Walle
Linked Data generation and publication remain challenging and complicated, particularly for data owners who are not Semantic Web experts or tech-savvy. The situation deteriorates when data from multiple heterogeneous sources, accessed via different interfaces, must be integrated, and when Linked Data generation is a long-lasting activity, repeated periodically and often adjusted and incrementally enriched with new data. We therefore propose the RML Workbench, a graphical user interface that supports data owners in administering their Linked Data generation and publication workflow. The RML Workbench's underlying language is RML, since it allows the complete Linked Data generation workflow to be described declaratively. Thus, any Linked Data generation workflow specified by a user can be exported and reused by other tools that interpret RML.
Damien Graux, Louis Jachiet, Pierre Geneves and Nabil Layaida
We demonstrate SPARQLGX, our implementation of a distributed SPARQL evaluator. We show that SPARQLGX makes it possible to evaluate SPARQL queries on billions of triples distributed across multiple nodes while providing attractive performance figures.
Syed Muhammad Ali Hasnain, Qaiser Mehmood, Syeda Sana E Zainab and Aidan Hogan
There are hundreds of SPARQL endpoints on the Web, but finding an endpoint relevant to a client's needs is difficult: each endpoint acts like a black box, often without a description of its content. Herein we briefly describe SPORTAL: a system that gathers metadata about the content of endpoints into a central catalogue over which clients can search. SPORTAL sends queries to individual endpoints offline to learn about their content, generating a best-effort VoID description for each endpoint. Clients can then search and query these descriptions in the SPORTAL user interface, for example to find endpoints that contain instances of a given class, or triples with a given predicate, or to answer more complex requests such as finding endpoints with at least 1,000 images of people. We give a brief overview of SPORTAL, its design and functionality, and the features that will be demoed at the conference.
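A miniature version of the catalogue search in Python with rdflib (the dataset, endpoint URL, and counts are invented): VoID descriptions are queried to find endpoints whose class partitions meet a requirement.

    from rdflib import Graph

    g = Graph()
    g.parse(data="""
    @prefix void: <http://rdfs.org/ns/void#> .
    @prefix ex:   <http://example.org/> .
    ex:ds1 void:sparqlEndpoint <http://sparql.example.org/> ;
           void:classPartition [ void:class ex:Image ; void:entities 1500 ] .
    """, format="turtle")

    # Find endpoints with at least 1,000 instances of ex:Image.
    q = """
    PREFIX void: <http://rdfs.org/ns/void#>
    SELECT ?ep WHERE {
      ?ds void:sparqlEndpoint ?ep ;
          void:classPartition [ void:class <http://example.org/Image> ;
                                void:entities ?n ] .
      FILTER(?n >= 1000)
    }"""
    for row in g.query(q):
        print(row.ep)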
Md. Kamruzzaman Sarker, Adila A. Krisnadhi and Pascal Hitzler
Once the conceptual overview of an ontology, in the form of a somewhat informal class diagram, has been designed, adding many of the appropriate logical axioms is mostly a routine task. We provide a Protege plugin that supports this task through a visual user interface, based on established methods for ontology design pattern modeling.
Md. Kamruzzaman Sarker, David Carral, Adila A. Krisnadhi and Pascal Hitzler
In our experience, some ontology modelers find it much easier to express logical axioms using rules rather than OWL (or description logic) syntax. Based on recent theoretical developments on transformations between rules and description logics, we developed ROWL, a Protege plugin that allows users to enter OWL axioms by way of rules; the plugin then automatically converts these rules into OWL DL axioms where possible, and prompts the user when such a conversion is not possible without weakening the semantics of the rule.
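The simplest shape of this transformation can be sketched in a few lines of Python (this toy covers only rules over one variable with class atoms; ROWL handles far more, and the class names are invented):

    def rule_to_axiom(body_classes, head_class):
        body = (body_classes[0] if len(body_classes) == 1
                else "ObjectIntersectionOf( " + " ".join(body_classes) + " )")
        return f"SubClassOf( {body} {head_class} )"

    # GlassBuilding(?x) ^ Skyscraper(?x) -> Landmark(?x)
    print(rule_to_axiom([":GlassBuilding", ":Skyscraper"], ":Landmark"))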
Jędrzej Potoniec and Agnieszka Ławrynowicz
We present a Protege plugin implementing Swift Linked Data Miner, an anytime algorithm for extending an ontology with new subsumptions. The algorithm mines an RDF graph accessible via a SPARQL endpoint and proposes new SubClassOf axioms to the user.
Jędrzej Potoniec
We present an on-line system which learns a SPARQL query from a set of wanted and a set of unwanted results of the query. The sets are extended during a dialog with the user. The system leverages SPARQL 1.1 and does not depend on any particular RDF graph.
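A bare-bones rendering of the learning loop in Python (the toy graph is invented, and the real system searches a far richer space of SPARQL 1.1 constructs): enumerate candidate triple patterns and keep those matching every wanted and no unwanted resource.

    graph = {
        (":mozart", ":occupation", ":Composer"),
        (":bach",   ":occupation", ":Composer"),
        (":curie",  ":occupation", ":Physicist"),
    }
    wanted, unwanted = {":mozart", ":bach"}, {":curie"}

    candidates = {(p, o) for _, p, o in graph}
    good = [(p, o) for p, o in candidates
            if all((w, p, o) in graph for w in wanted)
            and not any((u, p, o) in graph for u in unwanted)]
    print(good)  # -> SELECT ?x WHERE { ?x :occupation :Composer }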
Mihael Arcan, Mauro Dragoni and Paul Buitelaar
To enable knowledge access across languages, ontologies, which are often represented only in English, need to be translated into different languages. Since the manual multilingual enhancement of domain-specific ontologies is very time-consuming and expensive, smart solutions are required to facilitate the translation task for language and domain experts. For this reason, we present ESSOT, an Expert Supporting System for Ontology Translation, which supports experts in the multilingual ontology management task. Unlike classic document translation, ontology label translation involves highly specific vocabulary and lacks contextual information. ESSOT therefore exploits the semantic information in the ontology to improve the translation of its labels.
Evgeny Kharlamov, Sebastian Brandt, Martin Giese, Ernesto Jimenez-Ruiz, Yannis Kotidis, Steffen Lamparter, Theofilos Mailis, Christian Neuenstadt, Özgür Lütfü Özcep, Christoph Pinkel, Ahmet Soylu, Christoforos Svingos, Dmitriy Zheleznyakov, Ian Horrocks, Yannis Ioannidis, Ralf Möller and Arild Waaler
Numerous analytical tasks in industry rely on data integration solutions, since they require data from multiple static and streaming data sources. In the context of the Optique project, we have investigated how Semantic Technologies can enhance data integration and thus facilitate further data analysis. We introduced the notion of Ontology-Based Stream-Static Data Integration and developed the Optique system to put our ideas into practice. In this demo we show how Optique can help in the diagnostics of power-generating turbines at Siemens Energy. For this purpose, we prepared anonymised streaming and static data from 950 Siemens power-generating turbines with more than 100,000 sensors, and deployed Optique in distributed environments with 128 nodes. Attendees will be able to diagnose turbines by registering and monitoring continuous queries that combine streaming and static data; to test the scalability of our dedicated stream management system, which can process up to 1024 concurrent complex diagnostic queries with a throughput of 10 TB/day; and to deploy Optique over Siemens demo data using our interactive system for creating semantic abstraction layers over data sources.
Robin Keskisärkkä
A number of RDF Stream Processing (RSP) systems have been developed to support the processing of streaming Linked Data; however, due to the lack of a standardized query language, they all provide different SPARQL extensions. The RSP Community Group is in the process of developing a standardized RSP query language (RSP-QL), which incorporates many of the features of existing RSP language extensions. In this demo paper, we describe how RSP-SPIN, a SPIN extension for representing RSP-QL queries, can be used to encapsulate RSP queries as RDF, forming a syntax-agnostic representation that supports serialization into multiple RSP language extensions. This can reduce, for example, the effort required to produce and maintain RSP benchmarks, since developers can focus on a single representation per query rather than manually implementing and validating queries for several languages in parallel.
Luca Costabello, Pierre-Yves Vandenbussche, Gofran Shukair, Corine Deliot and Neil Wilson
Considerable investment in RDF publishing has recently led to the birth of the Web of Data. But is this investment worth it? Are publishers aware of what the traffic to their linked datasets looks like? We propose an access analytics platform for linked datasets. The system mines traffic insights from the logs of registered RDF publishers and extracts Linked Data-specific metrics not available in traditional web analytics tools. We present a demo instance showing one month (December 2014) of real traffic to the British National Bibliography RDF dataset.
Francesco Osborne, Angelo Antonio Salatino, Aliaksandr Birukou and Enrico Motta
Academic publishers, such as Springer Nature, annotate scholarly products with the appropriate research topics and keywords to facilitate the marketing process and to support (digital) libraries and academic search engines. This critical process is usually handled manually by experienced editors, leading to high costs and slow throughput. In this demo paper, we present Smart Topic Miner (STM), a semantic application designed to support the Springer Nature Computer Science editorial team in classifying scholarly publications. STM analyses conference proceedings and annotates them with a set of topics drawn from a large automatically generated ontology of research areas and a set of tags from Springer Nature Classification.
Fabian M. Suchanek, Colette Menard, Meghyn Bienvenu and Cyril Chapellier
In this demo proposal, we present a system that proposes specializations of existing concepts, such as "cars that park automatically" or "skyscrapers made of glass".
John P. Mccrae
Linked data is one of the most important methods for improving the applicability of data; however, most data is not in linked data formats, and raising it to linked data is still a significant challenge. We present Yuzu, an application that makes it easy to host legacy data in JSON, XML, or CSV as linked data while providing a clean interface with advanced features. The ease of use of this framework is shown by its adoption for a number of existing datasets, including WordNet.
