Bios and Abstracts
Keynote speaker: Sonja Chirico Indrebø - Chief Information Officer – Senior Vice President IT, Statoil
Sonja Chirico Indrebø graduated from the University of Surrey, UK, in 1992 with a BEng degree in Electronic and Electrical Engineering. After graduation she worked for six years as an instrument and telecommunications engineer within the Oil and Gas business of Aker Engineering (now Aker Solutions).
Sonja joined Statoil in 1998 as a senior engineer in offshore telecommunications. Since then she has held multiple technical and leadership positions, mainly within the company’s IT business. She has been responsible for large-scale projects as well as operations of the global IT services, both applications and infrastructure. Since the beginning of 2011 she has been the company’s CIO.
Keynote speaker: Dr. James Maltby - YarcData
Dr. James Maltby is a Solution Architect for YarcData Inc., and specializes in mapping scientific and business applications to new computer architectures. He has an academic background in physics and engineering, specializing in radiation transport. He has worked for Cray since 2000, developing software for the massively multithreaded Cray XMT (and its MTA-1 predecessor) as well as other Cray systems. He also led the Bioinformatics practice at Cray for several years, using HPC to solve Life Science problems. His most recent project involved developing a highly parallel in-memory Semantic Database for the XMT architecture, now released as uRiKA by YarcData.
Complexity and Scalability in Semantic Graph Analysis
Semantic approaches offer a more natural way to combine disparate sources of data and to search for complex patterns. However, limitations of common computer hardware architectures can place an upper limit on the scalability of graph stores. Previous data management architectures have been very successful, but trade-offs related to their intended purposes have created built-in limitations. Classic relational databases offer storage efficiency, but performance can suffer as more data sources are added. Denormalized data warehouses improve performance but at a cost of complexity and increased maintenance. New storage paradigms such as Hadoop offer lower cost and higher parallel performance, but their highly partitioned nature limits the type of data that may be analyzed. This talk will present a new in-memory semantic graph database built over a unique multithreaded supercomputer, based on RDF and SPARQL, and describe its performance. Several real-world uses will be described, drawn from life sciences, financial services and cybersecurity.
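The scalability discussion above centres on graph pattern matching, the core operation of a semantic graph store. A minimal sketch of the idea, using an invented in-memory triple set (illustrative only, not the uRiKA engine itself):

```python
# Minimal in-memory triple store: triples as (subject, predicate, object) tuples.
triples = {
    ("alice", "worksFor", "acme"),
    ("bob", "worksFor", "acme"),
    ("alice", "knows", "bob"),
    ("acme", "locatedIn", "oslo"),
}

def match(pattern, store):
    """Return all triples matching a single triple pattern; None marks a variable."""
    s, p, o = pattern
    return [
        t for t in store
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# "Who works for acme?" -- analogous to SPARQL: SELECT ?x WHERE { ?x :worksFor :acme }
employees = sorted(t[0] for t in match((None, "worksFor", "acme"), triples))
print(employees)  # ['alice', 'bob']
```

Real stores evaluate joins of many such patterns at once, which is where partitioned hardware struggles and a large shared-memory, multithreaded machine helps.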
Keynote speaker: Prof. Ian Horrocks - Oxford University
Ian Horrocks is a Professor in the Oxford University Department of Computer Science and is a Fellow of Oriel College Oxford. His research interests include logic-based knowledge representation and reasoning and the semantic web, with a particular focus on ontology languages and applications. He was one of the key authors of the OIL, DAML+OIL, and OWL ontology language standards, chaired the W3C working group that standardised OWL 2, and developed many of the algorithms, optimisation techniques and reasoning systems that underpin OWL applications. He is a Fellow of the Royal Society, a member of Academia Europaea, an ECCAI Fellow, and is amongst the most highly cited authors in Computer Science.
Ontology Based Data Access: Theory and Practice
So-called "Semantic Technologies" are rapidly becoming mainstream technologies, with RDF and OWL now being deployed in diverse application domains, and with major technology vendors starting to augment their existing systems accordingly. In this talk I will review the evolution of Semantic Technologies to date, and show how research ideas from logic-based knowledge representation developed into a mainstream technology. I will then go on to examine the scalability challenges arising from deployment in large scale applications, and discuss ongoing research aimed at addressing them, focusing in particular on a layered integration of ontologies and databases known as Ontology Based Data Access.
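The layered integration the abstract mentions can be sketched in miniature: relational rows stay where they are, and a mapping exposes them as triples in the ontology's vocabulary, so queries are posed at the ontology level. The table, mapping, and vocabulary below are invented for illustration; real OBDA systems use declarative mappings and query rewriting rather than materialization.

```python
import sqlite3

# A toy relational source (schema and data invented for illustration).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employee (id INTEGER, name TEXT, dept TEXT)")
db.executemany("INSERT INTO employee VALUES (?, ?, ?)",
               [(1, "Ada", "IT"), (2, "Ben", "HR")])

# An OBDA-style mapping: each row yields triples over the ontology vocabulary.
def mapped_triples():
    for emp_id, name, dept in db.execute("SELECT id, name, dept FROM employee"):
        subject = f":emp{emp_id}"
        yield (subject, "rdf:type", ":Employee")
        yield (subject, ":hasName", name)
        yield (subject, ":memberOf", f":dept_{dept}")

# An ontology-level query ("all Employees") is answered over the virtual triples.
employees = sorted(s for s, p, o in mapped_triples()
                   if p == "rdf:type" and o == ":Employee")
print(employees)  # [':emp1', ':emp2']
```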
Keynote Speaker: Stephen Brobst - Teradata Corporation
Stephen Brobst is the Chief Technology Officer for Teradata Corporation. Stephen performed his graduate work in Computer Science at the Massachusetts Institute of Technology where his Masters and PhD research focused on high-performance parallel processing. He also completed an MBA with joint course and thesis work at the Harvard Business School and the MIT Sloan School of Management.
Stephen has been on the faculty of The Data Warehousing Institute since 1996. He was also appointed to the working group on Networking and Information Technology Research and Development (NITRD) under President Barack Obama's Council of Advisors on Science and Technology (PCAST).
Semantic Data Modeling: The Key to Re-Usable Data
There is significant industry confusion in understanding the differences between logical, physical, and semantic data modeling. This workshop will provide a clear understanding of the role for semantic data modeling when deploying an enterprise data warehouse. An approach for development of a semantic data model will be described. In addition, we will describe the VIP methodology for realization of a semantic data model using an access layer on top of an enterprise data warehouse with an optimized combination of views, indexes, and physicalization.
- Learn about the differences between logical, physical, and semantic data models.
- Learn how use of semantic data modeling can reduce data redundancy in an analytic environment.
- Learn how to make good design tradeoffs in regard to performance, maintenance, and storage costs in the realization of the access layer aligned to your semantic data model.
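The access-layer idea described above can be illustrated with a relational view that presents warehouse tables under business-friendly semantic terms while hiding join logic. The schema and data below are invented for illustration; the VIP methodology itself involves further trade-offs between views, indexes, and physicalization that are not shown.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Physical warehouse tables (normalized; names are illustrative).
db.execute("CREATE TABLE sales_fact (cust_id INTEGER, amount REAL)")
db.execute("CREATE TABLE cust_dim (cust_id INTEGER, cust_name TEXT)")
db.executemany("INSERT INTO sales_fact VALUES (?, ?)",
               [(1, 100.0), (1, 50.0), (2, 70.0)])
db.executemany("INSERT INTO cust_dim VALUES (?, ?)", [(1, "Ada"), (2, "Ben")])

# Semantic access layer: a view exposing business terms, hiding the join.
db.execute("""
    CREATE VIEW customer_revenue AS
    SELECT d.cust_name AS customer, SUM(f.amount) AS total_revenue
    FROM sales_fact f JOIN cust_dim d ON f.cust_id = d.cust_id
    GROUP BY d.cust_name
""")

result = db.execute("SELECT * FROM customer_revenue ORDER BY customer").fetchall()
print(result)  # [('Ada', 150.0), ('Ben', 70.0)]
```

Whether such a view stays virtual or is physicalized is exactly the performance/maintenance/storage trade-off the third bullet refers to.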
Graham Moore - Bouvet
After graduating top of his class with a 1st Class Honours degree in Computer Science from Southampton University, Graham worked at leading information, knowledge and content management companies (DPSL Ltd, STEP Gmbh, Empolis, Ontopia) before forming Networked Planet in 2004.
Graham has led teams of developers in building semantic web, knowledge and content management products for over 10 years. Graham's roles have included the development and productisation of leading edge technology as well as the evangelism and partner network building that is required to take these products to market.
Graham has also been heavily involved in the communities around information and knowledge management technology. He has spent 10 years on the ISO 13250 Topic Maps committee, editing the Topic Maps and Topic Maps Constraint Language standards, and is the author of the SDShare information integration protocol.
Introduction to SDShare
SDShare is a stream based protocol for exposing and consuming data in a standardised way. It uses Atom and RDF as building blocks to allow servers to expose collections of data, and clients to process and update local data stores with copies of the data.
RDF provides a very flexible data model that allows data of any form to be represented. The existing RDF family of standards define how to serialise the model, how to query the model (SPARQL) and how to update the model (SPARQL Update). However, there are no protocols that define how a machine can both expose and consume RDF in a system of interconnected nodes. A classic use case for this requirement is where one RDF server contains a collection of master data and another RDF system wishes to use that data. The master data will change over time. This protocol defines how the master data server can publish the changes to the resources it manages and how a client can consume them to ensure it is in sync.
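A client's side of the sync described above can be sketched roughly as follows: poll an Atom feed of changes, collect the resources updated since the last sync, then fetch and replace their RDF locally. The feed content below is a made-up example; the real protocol, including collection and fragment feeds, is specified at SDShare.org.

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# A made-up fragments feed: each entry names a resource whose triples changed.
feed_xml = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><id>urn:res:42</id><updated>2013-01-10T12:00:00Z</updated></entry>
  <entry><id>urn:res:7</id><updated>2013-01-11T09:30:00Z</updated></entry>
</feed>"""

def changed_resources(xml_text, since):
    """Return ids of resources updated after `since` (the client's last sync time)."""
    feed = ET.fromstring(xml_text)
    out = []
    for entry in feed.findall(ATOM + "entry"):
        updated = entry.findtext(ATOM + "updated")
        if updated > since:  # ISO 8601 UTC strings compare chronologically
            out.append(entry.findtext(ATOM + "id"))
    return out

# The client would now fetch each resource's RDF and replace its local copy.
print(changed_resources(feed_xml, "2013-01-10T23:59:59Z"))  # ['urn:res:7']
```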
While the standard uses RDF for interchange, source systems need not support RDF natively; instead they can transform data into RDF on request. This makes SDShare a solution to the general problem of enterprise information integration, and one that compares favorably with data integration via Web Services.
The protocol was originally developed in response to an eGovernment challenge of syndicating asset metadata between government agencies. Since then it has been refined and published as an open standard at SDShare.org, and it is in use in many government and commercial applications. The main business uses for SDShare include enterprise search across multiple sources, master data management, data quality management, content classification, and unified data merging and querying.
This presentation will describe the technical aspects of the standard as well as provide a high-level overview of the most common use cases.
Prof. Maurizio Lenzerini - Università di Roma La Sapienza
Maurizio Lenzerini is a Full Professor in Computer Science at Università di Roma La Sapienza. He is conducting research in data management, knowledge representation and reasoning, information integration, and service-oriented computing. He is the author of more than 250 publications in international conferences and journals, and has been an invited speaker at many international conferences. He was the Chair of the Executive Committee of the ACM Symposium on Principles of Database Systems in 2010-2012, and is currently a member of that committee. He is a Fellow of the European Coordinating Committee for Artificial Intelligence (ECCAI), a Fellow of the Association for Computing Machinery (ACM), a member of the Academia Europaea - The Academy of Europe, and the recipient of several research awards, including an IBM University Shared Research Award and an IBM Faculty Award.
Ontology-based data management
The need to effectively manage the data sources of an organization, which are often autonomous, distributed, and heterogeneous, and to devise tools for deriving useful information and knowledge from them, is widely recognized as one of the challenging issues in modern information systems. Ontology-based data management aims at accessing, using, and maintaining data by means of an ontology, i.e., a conceptual representation of the domain of interest in the underlying information system. This new paradigm provides several interesting features, many of which have already proved effective in managing complex information systems and addressing the above-mentioned challenges. In this talk we first provide an introduction to ontology-based data management, illustrating the main ideas and techniques for using an ontology to access the data layer of an information system, and then discuss several important issues that are still the subject of extensive investigations by the Knowledge Representation and Reasoning community.
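One concrete benefit of accessing data through an ontology is that query answers include members inferred via the class hierarchy, not only those asserted directly. A small sketch with an invented equipment taxonomy (the class and instance names are hypothetical, chosen only to illustrate the mechanism):

```python
# Toy ontology: subclass axioms (all names invented for illustration).
subclass_of = {
    "Pump": "RotatingEquipment",
    "Compressor": "RotatingEquipment",
    "RotatingEquipment": "Equipment",
}

# Data layer: direct instance assertions.
instance_of = {"P-101": "Pump", "K-201": "Compressor", "T-301": "Tank"}

def superclasses(cls):
    """All (transitive) superclasses of a class, including the class itself."""
    seen = {cls}
    while cls in subclass_of:
        cls = subclass_of[cls]
        seen.add(cls)
    return seen

def instances(cls):
    """A query answered *through* the ontology: includes inferred members."""
    return sorted(i for i, c in instance_of.items() if cls in superclasses(c))

# Nothing is asserted to be RotatingEquipment directly, yet the answer is non-empty.
print(instances("RotatingEquipment"))  # ['K-201', 'P-101']
```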
Axel Borge - Bouvet
Axel Borge has been responsible for the implementation of a whole range of real-world projects using semantic technologies over the last 8 years. He has been a consultant for more than 20 years, the last 10 at Bouvet, where he is now the head of the semantic integration and search department. Axel has been working with Web technologies since 1994, when he started as a consultant for Oslonett, one of the first Internet companies in Norway.
Hafslund SESAM -- Semantic Integration in Practice
Sesam is an archive system developed for Hafslund that achieves the often-sought but rarely-achieved goal of automatically enriching metadata by using semantic technologies to extract and integrate business data from business applications. The extracted data is also indexed with a search engine together with the archived documents, allowing true enterprise search.
The system is unusual both for the functionality provided, and for the architecture, which we believe provides a superior approach to information integration.
Prof. Josef Noll - University of Oslo
Josef Noll is a professor at the University of Oslo in the area of Mobile Services. His group, ConnectedLife, concentrates on mobile-based trust and authentication, personalised and context-aware service provisioning, and service continuation in 5G systems. He is a co-founder and steering board member of the Center for Wireless Innovation Norway and Mobile Monday Norway.
In the area of Internet of Things he is project leader of the JU Artemis pSHIELD project. Previously he was Senior Advisor at Telenor R&I in the Products and Markets group, Programme manager for the UMTS++ Mobile Broadband Access programme, and project leader of several European projects.
Prof. Noll received the IARIA fellow award in 2008. He is a reviewer of several EU FP6/FP7/FP8 projects, a referee for the European Research Council (ERC) Starting Grant, and an evaluator for the EU's framework programme FP7, the Joint Undertaking (JU) Artemis, the Dutch IOP, the Austrian FIT, the Qatar National Research Fund and the Cyprus research programmes.
Measurable Security for the Internet of Things
Business intelligence is moving towards the real-time handling of information, coming from both internal and external business processes. Inclusion of sensor data in automatic process control has been a topic in industry for quite a while, but was mainly limited to closed systems. Trends in collaborative industries like oil & gas show that sensor data might contribute to automatic processes in different domains, fostered by the vision of the Internet of Things.
The presentation will address the challenges of communication across domains, focusing on the challenges of new infrastructures, new ways of communication and new devices. Two main trends are visible: (i) wireless sensors contributing to automated processes and (ii) the move of control into mobile devices. The example of "bring your own device" (BYOD) is used to exemplify the trend of devices accessing processes and information in the enterprise. In the coming years not only phones, tablets and computers will demand access; sensors and embedded systems will also deliver and request information. Sensors will contribute to automated processes, and thus require knowledge management.
Classic threats such as insufficient authentication and loss of devices are commonly addressed through an approach of integrating, managing and securing mobile devices. Such a short-sighted approach, as suggested by leading IT companies, is doomed to fail. A paradigm shift in handling security is required, addressing the need to secure information instead of securing infrastructure. The paradigm shift includes the need for measurable security, and calls for a metrics-based approach for a quantitative assessment of both the potential attack scenario and the security measures of the infrastructure.
Our suggested approach is based on the semantic description of potential attack scenarios, the security-related aspects of the sensors/systems involved, and semantic policies. The outcome is a methodology for measurable security that provides composable security for sensor systems. The approach is currently applied in the areas of Railway Security and UAV operation through the European Artemis project nSHIELD (http://newSHIELD.eu).
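The quantitative flavour of measurable security can be hinted at with a deliberately simple sketch: assign each component a security level, compose the system level with a weakest-link rule, and compare it against the level a given attack scenario demands. All names, numbers, and the composition rule itself are invented for illustration; the actual nSHIELD metrics are considerably richer.

```python
# Hypothetical component security levels on a 0-100 scale (numbers invented).
components = {"sensor_node": 60, "gateway": 80, "backend": 90}

def composed_level(levels):
    """One simple composition rule: a chain is only as secure as its weakest link."""
    return min(levels)

# A (made-up) attack scenario demands a minimum level for the end-to-end system.
required_level = 70

system_level = composed_level(components.values())
print(system_level, system_level >= required_level)  # 60 False
```

The point of making security measurable is exactly this kind of comparison: the assessment flags the sensor node as the component to strengthen.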
Nils Jacob Berland - X2X Maritime AS
Nils Jacob Berland has a PhD in Computer Science from the University of Bergen, Norway. He worked for several years in various positions at what is now Telenor; for a period he was programme manager for security at Telenor R&D. In recent years he has worked with collaboration tools for oil and gas logistics. At present he is CTO at X2X Maritime AS and Theano AS. Nils Jacob has also held part-time positions at the Norwegian University of Science and Technology and Molde University College.
Building applications with Semantic Technologies
Semantic Technologies may seem quite complex in the beginning, but with existing tools it is possible to assemble useful applications quite easily. In this presentation we will explore how triple stores can manage data for useful applications. The graph properties of triple stores are a benefit for many applications, and we will explore both how to exploit this and how to visualize data from triple stores. The presentation will be practical, with several examples related to a planned information hub for sharing crucial logistics information between companies operating on the Norwegian Continental Shelf.
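The "graph properties" the abstract refers to can be exploited with simple traversals over the triples. A sketch with an invented logistics example (the resources and predicates are hypothetical, loosely echoing the information-hub scenario):

```python
from collections import deque

# Toy logistics triples (invented for illustration).
triples = [
    ("vessel_A", "deliversTo", "base_1"),
    ("base_1", "supplies", "platform_X"),
    ("vessel_B", "deliversTo", "base_2"),
    ("base_2", "supplies", "platform_X"),
]

def reachable(start, triples):
    """Breadth-first traversal following any predicate outward from `start`."""
    edges = {}
    for s, _, o in triples:
        edges.setdefault(s, []).append(o)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Everything vessel_A's deliveries ultimately touch -- a natural graph question
# that is awkward in a plain relational schema.
print(sorted(reachable("vessel_A", triples)))  # ['base_1', 'platform_X', 'vessel_A']
```

The same adjacency structure feeds directly into graph visualization, the other use the abstract mentions.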
Peter Lawrence - TopQuadrant
Dr. Peter Lawrence has 30+ years of experience in applying automation, software, and information technology to create transformational solutions to technical challenges, particularly in process manufacturing.
Most recently he was CTO of Matrikon, where the Intuition product he developed (using Semantic Web technology) was instrumental to Matrikon’s acquisition by Honeywell in 2010.
Logical Semantic Warehouse - Developing Your Own Semantic Ecosystem
Gartner has defined the Logical Semantic Warehouse (LSW) as a new data management architecture for analytics, combining the strengths of traditional repository warehouses with alternative data management and access strategies. According to Gartner, the LSW will form a new best practice by the end of 2015.
Organizations are reporting faster time to delivery, greater flexibility and an ability to include additional forms of processing such as virtualization and even content analytics in their analytics solutions — all managed by a common semantic layer or engine to deliver the assets to multiple different analytic applications.
Dr. Peter Lawrence from TopQuadrant will describe the evolution of LSW from existing data warehouse and data mart technology practices and the semantic version of LSW, its architecture and benefits.
Thomas Hubauer - Siemens
Thomas Hubauer is a Research Scientist in the technology field Business Analytics and Monitoring at Siemens Corporate Technology in Munich.
His research focuses on knowledge modeling and retrieval, with an emphasis on information interpretation and abductive reasoning. His current projects address the application of Semantic Web technologies for reactive and preventive data analytics in the industry and energy domains.
Semantic Data Analytics for Integrated Energy and Industry Automation Systems
Integration across the layers of the automation pyramid has been a key to increasing transparency and efficiency in both the industry and energy automation domains.
Nowadays, the focus is shifting from this intra-domain integration towards a broader inter-domain approach where energy provision and energy-consuming production processes are closely linked. Data interoperability is an important enabler for providing integrated management of the composed system and added-value services based on advanced business analytics.
This talk presents our current efforts in using semantic modeling and analytics in this endeavour, and its benefits in efficiency and environmental impact.
Torulf Mollestad - SAS Institute
Torulf Mollestad, Ph.D., is a Senior Consultant of Advanced Analytics for the SAS Information Management team in the Nordic region; working on analytics, data mining and text mining projects for numerous industries and many different application areas, such as churn prediction, risk modeling, fraud detection, anti money laundering, industrial equipment failure prediction etc.
He was an Associate Professor at NTNU (Norges Teknisk-Naturvitenskapelige Universitet) in Trondheim, Norway, for 8 years and has been with SAS Institute for 12 years.
Text mining and semantic tagging
In this presentation, statistical text mining methods are described: what they are and how they compare to classical, constructive semantics-based methods. We will describe methods for semantic tagging of words or documents and how this information may be utilized in subsequent (statistical) analysis of the text. Moreover, we will talk about methods and tools for designing ontologies that are consistent across the organization and for automatically classifying documents into them. Different use cases will be presented.
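A flavour of the statistical side: tf-idf weighting surfaces terms that are frequent in one document but rare across the corpus, which makes them natural tag candidates. The tiny corpus below is invented for illustration and the tokenization is deliberately naive.

```python
import math
from collections import Counter

# Toy corpus (invented); real pipelines add stemming, stop-word removal, etc.
docs = {
    "d1": "pump failure detected in pump station",
    "d2": "routine inspection of the station completed",
    "d3": "failure analysis report for the compressor",
}

def tf_idf(docs):
    """Classic tf-idf: terms frequent in one document but rare overall score high."""
    n = len(docs)
    tokenized = {d: text.split() for d, text in docs.items()}
    df = Counter()                      # document frequency per term
    for tokens in tokenized.values():
        df.update(set(tokens))
    scores = {}
    for d, tokens in tokenized.items():
        tf = Counter(tokens)
        scores[d] = {t: (c / len(tokens)) * math.log(n / df[t])
                     for t, c in tf.items()}
    return scores

scores = tf_idf(docs)
# 'pump' is a strong tag candidate for d1: frequent there, absent elsewhere.
best = max(scores["d1"], key=scores["d1"].get)
print(best)  # pump
```

Tags derived this way can then feed the subsequent statistical analysis the abstract describes, alongside tags from constructive, ontology-based methods.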
David Norheim - Computas
David Norheim heads the Information Management Department in Computas. He has experience with applying Semantic Web technologies for various user groups and application areas since 2001 and co-founded a Semantic Web startup in 2002. He is currently involved in a number of Semantic Web projects in Norway and in Europe.
Experiences with organisational linked data
Over the last couple of years, major government bodies in Norway and Sweden have been demonstrating the value of publishing national company registries as linked open data. Organizational data containing information about management and board members, number of employees, locations and various classifications has been linked to accounting information and to each other. Various demonstrators have been built over these data, showing the power and ease of developing applications on a linked data and Semantic Web infrastructure. An innovation project known as Semicolon has been a key driver for this progress, and we continue to demonstrate the value of linked data inside the firewall. In this presentation we will show how we have collaborated, how the agencies have matured, and how demonstrators focused on business intelligence become an eye-opener.
Monika L. Eknes - Skaperhuset
Monika L. Eknes is an entrepreneur with a passion for transforming organizations into super-organisms, and the founder of Skaperhuset AS. She considers ICT to serve as the nervous system of groups, organizations, societies, nations and the globe, analogous to what exists in a human body. She believes semantic technologies have the potential to simulate the human brain’s strategies for dealing with Big Data. Eknes holds a Ph.D. in offshore safety and is a certified communicologist. She has previously worked in DNV Research and held different management positions within SHE and quality, including being the safety director of the Norwegian National Rail Administration.
Arne Dybdahl - Skaperhuset
Arne Dybdahl, MSc ICT, is a board member of the SIG on Semantic Technology and Information Architecture at The Norwegian Computer Society, a certified communicologist, and a partner at Skaperhuset Semansys AS. He has extensive experience covering the management of a production enterprise, research, banking and quality management.
Can semantic technologies revolutionize administrative ICT?
Being responsible for a big organization’s management system that received low user satisfaction scores, even though it had been developed with the aid of recognized consultants and documented in one of the most advanced tools available, I asked one of my co-workers how the system could be made simpler and more user-friendly at reasonable cost. He quickly returned with the answer that this would be achievable using a semantic wiki. It proved impossible to convince the ICT department to explore the idea. However, four years later we realized the idea at Skaperhuset.
The presentation outlines how a management system was developed in a Semantic Mediawiki combining semantic technology, competence on "framing and anchoring", the Business Motivation Model and HSEQ competence. The presentation also discusses future applications far beyond management systems.
Johan Wilhelm Klüwer - DNV
Johan holds a doctoral degree in philosophy and applied logic from the University of Oslo. His work revolves around enterprise master data integration and governance, with a focus on ontology-based methods and semantic technology. He is engaged in the EU project Optique, introducing cutting-edge ontology based data access (OBDA) methods in the energy industry.
Magne Valen-Sendstad - DNV
Magne has 28 years' experience working in the field of engineering and information management for the O&G industry, including 3D CAD systems and integration with fabrication applications and engineering databases. He has been engaged in the development of ontologies for the process industry since 1995, including core contributions to the ISO 15926 standard and the POSC Caesar Reference Data Library.
Johan W. Klüwer and Magne Valen-Sendstad are information management specialists at DNV Maritime and Oil & Gas, part of a team dedicated to bringing ontologies and Linked Data to the energy industry, primarily O&G. They have different responsibilities in the work to be presented at Semantic Days: Magne in interpretation of the engineering domain, Johan in representations in RDF/OWL.
The Source for engineering information: Building Aibel's EPC ontology
Aibel AS is a service company that works within the oil, gas and renewable energy sectors. These services include Engineering, Procurement, and Construction (EPC) type projects. Projects at Aibel involve complex customer requirements, cross professional domains including process design and construction, and require constant attention to process industry standards. We will present Aibel's enterprise ontology, which is being built as a common resource for Aibel's thousands of engineers. We describe the ontology construction process, highlighting where we see the biggest differences in moving from PDF standards and traditional SQL database approaches to ontology-based, semantic models. We show how a Linked Data architecture, employing existing database infrastructure, provides a coordinated approach to a range of information integration challenges. The presentation will be illustrated with numerous examples from the subject matter of piping bulk and the ASME, ASTM, and ISO standards.
Marin Dimitrov - Ontotext
Marin Dimitrov is the CTO of Ontotext, a company providing solutions for data integration and information management with Semantic Technologies. He has experience with various industrial and research projects in the areas of Linked Data, text mining, semantic search and semantic databases.
From Big Data to Smart Data
Companies are increasingly realizing that they have more than a “big data” problem. What they really have is a “Big Metadata” problem: the barrier to timely and complex analytics is not necessarily processing speed or storage volume, but the fact that valuable metadata is spread across many inconsistent and heterogeneous repositories, or that insufficient metadata makes complex data analysis time-consuming and costly.
Semantic Technologies, and RDF data warehousing in particular, have the potential to help establish harmony among enterprise metadata and enable timely insight discovery.
Lasse Bache-Mathiesen - Capgemini
Lasse Bache-Mathiesen, CTO of Capgemini BIM Norway, has been working in the reporting, BI and data warehouse area for 25 years. He has been implementing solutions in the BI space in many large and small organizations in Norway. He does not have much experience in the semantics area, but has been working with information modeling and with helping large enterprises form a common understanding of the data used in their business.
How to achieve common semantics on business information/vocabularies
How to combine traditional business terms from business intelligence in a larger landscape throughout the business. SOA and process management have dominated integration, while Business Intelligence has focused on data and information. Today we see that companies must combine information with processes, and hence a common information model is needed. This is a talk about how some Norwegian companies have tried different approaches to achieve this.
Craig Trim - IBM Global Services
Craig Trim is an Open Group Certified Master IT Specialist and Senior Managing Consultant in IBM’s Global Services Division. Craig is focused on technical innovations in natural language processing and semantic technology and their application to client business situations. He is the author of over 20 articles on IBM’s developerWorks in the field of Language Processing. He is also an active member of the W3C Working Group for Provenance. Craig is actively involved in the development of technical assets for IBM’s Watson technology portfolio.
NLP-Driven Ontology Modeling
The Oil Industry will be changing, or will need to change, to make use of all the data it is now gathering about its fields. While the Oil Industry has always been a major consumer of processing power for seismic processing, the problems presented by understanding really large fields are forcing a change in how data will need to be processed, in the direction of pattern recognition or artificial intelligence approaches. This is an uncomfortable change for many petroleum engineers, whose approach is based on a different sort of modeling.
Ontologies and Natural Language Processing (NLP) can often be seen as two sides of the same coin. NLP-driven Ontology modeling is the process of using information discovery techniques to drive natural language understanding of unstructured data. As language is modeled, and entities are discovered and the relationships between entities understood, Ontology models can be formed.
Ontology-driven Natural Language Processing (NLP) uses the modeled entities and relationships to drive understanding of dialog in interactive systems, to provide answers to questions posed to an expert system, and to further refine the same models.
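The NLP-to-ontology direction can be hinted at with a deliberately tiny extractor: parse simple subject-verb-object sentences into triples, from which candidate classes and relations emerge. The sentences and the regex-based "parser" are invented for illustration; real pipelines use full syntactic and semantic analysis.

```python
import re

# Tiny pattern-based extractor for "X <verb-s> Y." sentences (a sketch only).
PATTERN = re.compile(r"^(\w+) (\w+s) (\w+)\.?$")

sentences = [
    "Pump drives compressor.",
    "Compressor feeds pipeline.",
]

triples = set()
for sentence in sentences:
    m = PATTERN.match(sentence)
    if m:
        subj, verb, obj = m.groups()
        triples.add((subj.lower(), verb.lower(), obj.lower()))

# As entities and relations accumulate, an ontology model can be induced:
classes = {s for s, _, _ in triples} | {o for _, _, o in triples}
print(sorted(classes))  # ['compressor', 'pipeline', 'pump']
print(sorted(triples))
```

The resulting model can then run in the other direction, ontology-driven NLP, where the known entities and relations guide the interpretation of new text.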