Abstracts & Bios
Relevant and trustworthy information – a crucial ingredient in risk management
Offshore oil and gas exploration and production has been critical to the world’s energy supply for more than 40 years. With the rapidly increasing demand for energy, it will remain so for many more years. This offshore activity is complex and not without risk; large amounts of hydrocarbons must be controlled and safely handled in an offshore environment subject to dynamic, and sometimes harsh, environmental conditions. The trend of exploring deeper waters and arctic waters adds to this complexity. How do we know the state of a subsea production plant on the seabed 1000 m below the surface? When drilling a well, how can we detect early warning signals of unstable conditions so that we can stop the operation safely and in due time? How do we monitor our environmental impact when operating in environmentally sensitive areas?
These are critical questions, which the oil and gas industry needs to answer to the authorities and to society at large. A common denominator is the need for large amounts of information. Not just any kind of information; it is crucial that the information is both relevant and trustworthy. We need to know what specific information we want, what we will use it for, the technology to collect and process it, and the methodology to analyze it. And, finally, we need the work processes and organization to ensure that we make the best use of it.
Marianne Hauso, Director of Operation - DNV Risk Management Solutions
Marianne Hauso holds an MSc in naval architecture and offshore technology from NTNU. She has been employed with DNV since 1996, where she has been widely engaged in risk management consultancy for the maritime and offshore industry. In 2006 she took on business development responsibility for DNV’s safety risk management consultancy (oil & gas) at Høvik. In 2010 she led DNV’s subsea technology unit in Norway, and since January 2011 she has been responsible for DNV’s entire risk management consultancy operation towards the oil and gas sector in Norway. This is a unit of 260 consultants offering services within the disciplines of safety, environment, asset and enterprise risk management. From its locations in Høvik, Stavanger, Bergen, Trondheim and Harstad, the unit serves customers both locally and internationally.
Using Data from the Web to Observe Communities
The Web has connected people much more than it connects machines. People form communities, and these communities evolve. The Web, and specifically the Semantic Web, has made these communities observable: contributors to a community can be identified and classified by the kind of contribution they make, using available data. This helps to identify growing and healthy communities and attitudes, but also decline and decay. Examples include a customer base in an online forum, or users talking online about a particular product or service. The evolution of communities and community interests can also be observed: e.g., how topics evolve and how new communities are formed. This enables companies to act and react quickly to emerging trends. In this talk Prof. Decker presents work done at the Digital Enterprise Research Institute on observing communities and mapping the evolution of science by observing their behavior and traces on the Web.
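As a minimal illustration of the idea (not taken from the talk itself), classifying contributors by their kind and volume of contribution can be sketched as follows; the user names, event kinds and threshold are invented for the example:

```python
from collections import Counter

# Hypothetical contribution events mined from Web data,
# as (user, kind-of-contribution) pairs.
events = [
    ("alice", "post"), ("alice", "reply"), ("alice", "reply"),
    ("bob", "reply"), ("carol", "post"), ("alice", "post"),
]

def classify(events, core_threshold=3):
    """Label users as 'core' or 'peripheral' by total contribution count."""
    counts = Counter(user for user, _ in events)
    return {user: ("core" if n >= core_threshold else "peripheral")
            for user, n in counts.items()}

roles = classify(events)  # alice is 'core'; bob and carol are 'peripheral'
```

Repeating such a classification over successive time slices is one simple way to make the growth or decline of a community visible.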
Prof. Stefan Decker
Stefan Decker is a professor at the National University of Ireland, Galway, and director of the Digital Enterprise Research Institute. Previously he worked at ISI, University of Southern California, Stanford University (3 Years), and the University of Karlsruhe. He is a widely cited Web scientist, and his current research interests include semantics in collaborative systems, Web 2.0, and distributed systems.
On demand access to Big Data through Semantic Technologies
Dealing with Big Data involves a number of different challenges, including increasing volume (amount of data), velocity (speed of data), and variety (range of data types and sources). Most Big Data solutions today focus on volume, in particular supporting vertical scalability. Yet the Big Data problem is not fully solved by vertical-scale technologies alone.
A huge problem is that of horizontal scale. Consider the wealth of data that is published in open data initiatives: we are faced with a massive number of data sources, with a high degree of variety and heterogeneity in coverage, data models, and structure. Solving these problems and enabling users to tap into this wealth of data for on-demand analytics holds enormous potential and economic opportunities.
In this talk we present building blocks for solutions that enable on-demand access to heterogeneous, distributed Big Data, in particular applying semantic technologies and the Linked Data paradigm. We demonstrate the use of these technologies in the Information Workbench, a platform for self-service analytics. Following a simple self-service process, the platform supports end users in 1) the discovery of relevant data sources, tapping into the Linked Open Data cloud and other open data sources, 2) the automated integration and interlinking of sources, and 3) on-demand and interactive exploration and analysis of data.
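The integrate-and-interlink step of such a self-service process can be illustrated with a toy sketch (the source names, fields and figures below are invented, not from the Information Workbench): records from heterogeneous sources are joined on a shared identifier so that on-demand analysis can run over the combined data.

```python
# Two hypothetical open data sources describing the same entities,
# each contributing different attributes.
source_a = [{"city": "Oslo", "population": 700000}]
source_b = [{"city": "Oslo", "co2_tonnes": 1200000}]

def interlink(*sources):
    """Merge records from all sources that share the same 'city' key."""
    merged = {}
    for source in sources:
        for record in source:
            merged.setdefault(record["city"], {}).update(record)
    return merged

linked = interlink(source_a, source_b)
# On-demand analysis over the interlinked data:
per_capita = linked["Oslo"]["co2_tonnes"] / linked["Oslo"]["population"]
```

In a Linked Data setting the shared key would be a common URI rather than a string, and the interlinking itself is what step 2 of the process automates.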
Scaling-out Semantic Data Management and Processing
Growing data volumes create new types of problems in data management and analysis. Many current solutions simply cannot scale and still provide the functionality they are supposed to deliver. In this talk we will look at some problem areas, try to understand their origin, and see what approaches can be taken in the presence of large amounts of data. We will also take a short look at how some of these areas are tackled in current research projects at CIPSI.
Dr Tomasz Wiktor Wlodarczyk
Postdoctoral Research Fellow at CIPSI - Center for IP-based Service Innovation at the University of Stavanger, Norway
His work focuses on the analysis and management of Big Data. His interests include data-intensive analysis and mining, knowledge modeling, and complex event processing. He is currently working in these areas in several research projects, including SEEDS – Self-learning Energy Efficient Buildings; [email protected] – Smart System to support Safer Independent Living and Social Interaction for Elderly at Home; and SCC-Computing – Strategic Collaboration with China on Super-computing based on Tianhe-1A. He is also the Program Committee Chair of IEEE CloudCom 2012 – International Conference on Cloud Computing Technology and Science.
Building Earth Observatories using Semantic Web and Scientific Database Technologies
Earth Observation data has been constantly increasing in size over the last few years (now reaching multiple petabytes), and has become a valuable source of information for many scientific and application domains (environment, oceanography, geology, archaeology, security, etc.). TELEIOS is a recent European project that addresses the need for scalable access to petabytes of Earth Observation data and the discovery of knowledge that can be used in applications. To achieve this, TELEIOS builds on Semantic Web and scientific database technologies. In this talk we outline the vision of TELEIOS (now in its second year), present its software architecture, and give a detailed example of a fire monitoring application that we have completed.
Prof. Manolis Koubarakis
Dept. of Informatics and Telecommunications, National and Kapodistrian University of Athens.
He has published more than 100 papers that have been widely cited in the areas of Artificial Intelligence (especially Knowledge Representation), Databases, Semantic Web and P2P Computing. His research has been financially supported by the European Commission, the Greek General Secretariat for Research and Technology and industry sources.
Challenges, Approaches, and Solutions in Stream Reasoning
The widespread deployment of pervasive systems and handheld devices is changing the nature of information from static to streaming, and the type of decisions we base on it from strategic to operational. The increasing demand for applications that analyse heterogeneous streaming data in real time, to support the concurrent decisions of a high number of users, poses new challenges and calls for Stream Reasoning, i.e., reasoning upon rapidly changing information. This talk presents the different approaches and solutions to Stream Reasoning investigated in recent years and showcases them through the applications in which they have been deployed. Special emphasis is given to practical evidence of the scalability of these solutions.
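The core idea behind most stream reasoning systems is to reason only over the facts that arrived within a recent time window. A minimal sketch of this windowed approach (an illustration only; the event names and window size are invented, and real systems such as C-SPARQL express this in a query language rather than code):

```python
from collections import deque

class WindowReasoner:
    """Keep only facts observed within the last `window` time units."""

    def __init__(self, window):
        self.window = window
        self.facts = deque()  # (timestamp, fact) pairs, in arrival order

    def push(self, timestamp, fact):
        self.facts.append((timestamp, fact))
        # Evict facts that have fallen out of the sliding window.
        while self.facts and self.facts[0][0] <= timestamp - self.window:
            self.facts.popleft()

    def holds(self, fact):
        """A fact 'holds' if it was observed inside the current window."""
        return any(f == fact for _, f in self.facts)

r = WindowReasoner(window=10)
r.push(0, "traffic_jam")
r.push(12, "rain")  # evicts "traffic_jam", which is now older than 10 units
```

The window is what makes reasoning over an unbounded stream tractable: conclusions are always drawn from a bounded, recent snapshot of the data.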
Emanuele Della Valle
Emanuele Della Valle is an assistant professor of Software Project Management at DEI - Politecnico di Milano. His research interests focus on the Semantic Web, Web Services and, more recently, Stream Management Systems. He started CEFRIEL’s Semantic Web Practice in 2001 and coordinated the group until June 2008. He is co-author of the first book in Italian about the Semantic Web. From 2006 to 2008, he was Scientific Manager of the SEEMP FP6 project and Project Coordinator of the Service-Finder FP7 project. He led the activity on streams and smart cities in the LarKC FP7 project. In 2011, he won the AI Mashup Challenge with Traffic LarKC and the Semantic Web Challenge with the Bottari Android application.
Challenging Real-time Data with Semantic Technologies
Traditionally, almost all business aspects of any large industrial company operate with real-time data, which requires stable and reliable software systems. To ensure the safety of these systems, they are strictly tailored to specific applications, and all knowledge about data, analysis algorithms and methods is part of the business logic. Introducing any new functionality is therefore always a challenge. However, the advantages of semantic technologies could be leveraged to introduce the desired flexibility into these systems and to improve the representation of real-time data. Let us consider practical and pragmatic ways to extend legacy systems and real-time data representation with semantics.
Mikhail Roshchin (PhD)
Mikhail Roshchin joined Siemens AG in 2004 to work on various R&D projects related to remote and online diagnostics, condition monitoring and predictive maintenance. His main expertise covers the following topics: intelligent situation understanding, logic-based reasoning methods for symptom-based diagnosis, and complex event processing. Currently, Mikhail Roshchin leads various Siemens projects aimed at the development of a new generation of diagnosis systems for the Energy and Industry domains.
EPIM Reporting Hub - Semantic Reporting for the Norwegian Oil and Gas Industry
This presentation gives an introduction to and overview of the business needs, approach and semantic content of the EPIM Reporting Hub application, including its use of and relationship to ISO 15926 and the POSC/Caesar Reference Data Libraries.
David Price is Managing Director of TopQuadrant Limited, the UK subsidiary of TopQuadrant Inc in the US, and is the technical lead and development project manager for EPIM Reporting Hub. Before joining TopQuadrant 2 1/2 years ago, David was a Principal Consultant with Eurostep in the UK and a Senior Software Engineer at IBM in the US. David has extensive experience in the standards arena, leading several ISO standard projects, and has been involved in ISO 15926 projects in previous consulting engagements.
Semantic challenges in environmental monitoring
Statoil has initiated a project on integrated real-time environmental monitoring. Semantics form an important part of the project, and in this presentation we will take a look at the semantic challenges facing the project.
Vidar Hepsø (PhD)
Vidar Hepsø holds a PhD in Social Anthropology from NTNU and is currently Principal Researcher/Project Manager in Statoil R&D. He is also Professor II at the NTNU IO Center for the Petroleums Industry.
Hepsø has worked for Statoil since 1991. His main interests relate to new forms of collaboration enabled by information and communication technology (ICT) in general, and ICT infrastructure development and collaboration technologies in particular. The main focus of his work has been to study how ICT is integrated into existing and new collaborative practices and how these changes are accomplished through action.
Through this work Hepsø has worked both as an organizational insider and in collaboration with vendors. Those involved have ranged from CEOs to geologists, offshore workers, crane operators, able seamen and most oil and gas related engineering disciplines.
His main work area in recent years has been 'Integrated Operations' (IO). Within this field he is involved in the planning, execution and evaluation of IO activities in Statoil, in addition to being a project manager and resource for branch-specific research and development of IO both in Norway and internationally.
Linked Data for the Enterprise: Opportunities and Challenges
Semantic Technologies, and Linked Data in particular, provide various opportunities for reducing the cost and complexity of data integration and information management within and across enterprises, but at the same time certain challenges are associated with such technologies. This talk will provide details on the successful application of Semantic Technologies in several industrial and research scenarios.
Marin Dimitrov is the CTO of Ontotext – a company providing solutions for data integration and information management with Semantic Technologies. He has experience with various industrial and research projects in the areas of Linked Data, text mining, semantic search and semantic databases.
Data Quality Challenges in Acquisition of Fighter Vehicles
The presentation will highlight data quality challenges in both the processes and the deliveries in the acquisition of fighter vehicles for the Norwegian Army.
Jarl Kvanli is a Project Manager and Advisor at Consulting Network Norway AS. He holds a Cand. Scient. degree (MSc) in Informatics from the Norwegian University of Science and Technology (Trondheim) and a Master of Management in Innovation from the BI Norwegian Business School (Oslo). He also has a degree from the Norwegian Air Force College. He has more than 10 years of experience, from several positions, in the Norwegian Air Force and the Norwegian Army, and 10 years of experience as a Project Manager and Advisor in several consulting companies.
Information quality – A key to Joint Intelligence, Surveillance and Reconnaissance
Correct and timely information is essential for effective military decision making. While information has traditionally been held quite closely within defined military organizations, it is now recognized that “need to share” must replace “need to know” for better operational efficiency. This is the case nationally as well as in coalition operations. Joint Intelligence, Surveillance and Reconnaissance (JISR) is one such initiative, in which NATO partners seek to improve their operational efficiency by sharing intelligence, surveillance and reconnaissance (ISR) information. We look at some results from applying formal information quality measurements in an experimental sharing environment. The need for information quality measures appears as soon as information sharing starts in earnest, and indeed, discussions of information quality indicate that one has moved from technical hurdles towards issues of using the shared information.
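A formal information quality measure of the kind mentioned above typically combines several dimensions, such as completeness and timeliness, into one score. A toy sketch for illustration only; the field names, weights and decay model are invented, not taken from the experiment described in the talk:

```python
# Required fields an ISR report should carry (hypothetical).
REQUIRED_FIELDS = {"source", "location", "time", "classification"}

def quality(report, age_hours, max_age_hours=24.0, w_complete=0.5):
    """Score a report in [0, 1] on completeness and timeliness."""
    # Completeness: fraction of required fields actually present.
    completeness = len(REQUIRED_FIELDS & report.keys()) / len(REQUIRED_FIELDS)
    # Timeliness: linear decay from 1 (fresh) to 0 (older than max age).
    timeliness = max(0.0, 1.0 - age_hours / max_age_hours)
    return w_complete * completeness + (1 - w_complete) * timeliness

report = {"source": "sensor-7", "location": "grid 42", "time": "08:00"}
score = quality(report, age_hours=6.0)  # 0.5*0.75 + 0.5*0.75 = 0.75
```

Even such a crude score makes quality discussable across organizational boundaries, which is exactly the point at which, as the abstract notes, sharing has moved beyond technical hurdles.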
Trygve Sparr received his PhD in physics from the University of Tromsø, Norway, in 1998. He is currently a principal scientist at the Norwegian Defence Research Establishment where his main interests are sensor systems, data, processes, information and intelligence to support military operations. He is the project manager of a project that attempts to realize the concept “Joint Intelligence, Surveillance and Reconnaissance” for the Norwegian Forces in cooperation with coalition partners.
Semantic search and reporting implementation on the .15926 platform
The goal of the .15926 (pronounced "dot15926") project is to allow business users to perform advanced tasks with data using only engineering discipline-specific or geometry-specific terms, patterns and metaphors. Currently ISO 15926 is supported, with possible extensions to STEP or discipline-level standards. To this end we have created a platform for ISO 15926 ontology programming – a set of tools to build and deploy domain-specific languages (DSLs) for easy viewing, navigation, search, editing and mapping of ISO 15926 compliant data.
Specific instruments are required to build higher-abstraction language layers and define domain-specific constructs, rising above the “assembler” level of table/RDF/OWL representations of ISO 15926 data. We use the Python programming language with special libraries and syntax tricks to build DSLs. By using a general-purpose programming language for DSL construction, it is possible to solve the problem of code reuse, be it for user interfaces or for specific ISO 15926 editing, search or mapping tasks.
One DSL, the semantic search language of the dot15926 Scanner, was created, implemented and optimized to work with various triple representations of ISO 15926 type and template instances, stored either in files or at SPARQL endpoints. During the talk we will show how the search language helps in the following tasks: data verification, pattern matching for the "raising" of templates, and the construction of high-level object information models (OIMs) for user viewing.
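The kind of triple-pattern search such a language offers can be sketched in plain Python (an illustration of the general technique only, not the dot15926 Scanner API; the identifiers are invented, not actual ISO 15926 data): patterns contain variables, written here with a leading "?", that are bound against an in-memory triple set.

```python
# Hypothetical triple data, as (subject, predicate, object) tuples.
triples = [
    ("pump_01", "rdf:type", "CentrifugalPump"),
    ("pump_01", "hasSupplier", "acme"),
    ("valve_07", "rdf:type", "GateValve"),
]

def match(pattern, triple):
    """Return variable bindings if the triple matches, else None."""
    binding = {}
    for p, t in zip(pattern, triple):
        if p.startswith("?"):
            if binding.setdefault(p, t) != t:
                return None  # same variable bound to two different terms
        elif p != t:
            return None      # constant term does not match
    return binding

def search(pattern, triples):
    """All bindings of the pattern's variables over the triple set."""
    return [b for b in (match(pattern, t) for t in triples) if b is not None]

pumps = search(("?x", "rdf:type", "CentrifugalPump"), triples)  # [{"?x": "pump_01"}]
```

A real engine would additionally join bindings across several patterns (the basic graph pattern matching of SPARQL), which is what makes tasks like template "raising" expressible as queries.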
Victor Agroskin is a Vice-President of TechInvestLab.ru. He has experience in strategy, IT and management consulting since 1990. For seven years he was in charge of business development and IT in major Russian investment banking houses. Victor Agroskin has been involved in investment and consulting projects in various industries, with an emphasis on energy, engineering and IT. He supervised the economic and technical aspects of corporate restructuring and industry reforms for private and government clients, including e-government projects, and participated in a number of infrastructure projects – financial market infrastructure, exchange trading, network capacity distribution, etc.
He is a founder of the INCOSE Russian Chapter and has served as Secretary of its Board.