POSC-Caesar FIATECH IDS-ADI Projects
Intelligent Data Sets Accelerating Deployment of ISO15926
Realizing Open Information Interoperability
RDS/WIP World View
IDS-ADI envisions the RDS/WIP as a collaborative space for the development and publication of multiple reference data libraries and the mappings between them.
This page is an informal, essay-style treatment that seeks to show what differentiates these libraries, the methodologies that produce them, the approaches to mapping between them, and the means by which all of these together address interoperability.
Model and Data
Probably the most important underpinning of the RDS/WIP as a concept is that the model is meta-data, and meta-data is data.
From this it is a short and simple step to see that if you must transmit models, meta-data and related data, it is simplest to utilise a system that treats all of that information in the same way, so that you can operate on meta-data and models just as though they were data.
To this end, we chose RDF and its supporting technologies (triplestores, OWL, SPARQL and URIs) as core elements of the RDS/WIP.
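As an illustrative sketch (plain Python rather than an actual triplestore, with hypothetical identifiers such as "Pump", "P-101" and "ratedPower"), the point is that schema statements and instance statements can live in one store and be queried with the same mechanism:

```python
# A toy triple store: every statement -- model, meta-data, or data --
# is a (subject, predicate, object) triple in the same list.
store = [
    # model / meta-data: a class and a property definition
    ("Pump", "rdf:type", "rdfs:Class"),
    ("ratedPower", "rdf:type", "rdf:Property"),
    ("ratedPower", "rdfs:domain", "Pump"),
    # data: an individual described with the same mechanism
    ("P-101", "rdf:type", "Pump"),
    ("P-101", "ratedPower", "45 kW"),
]

def match(store, s=None, p=None, o=None):
    """Query triples by pattern; None acts as a wildcard."""
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# The same query operates on model and data alike:
classes = match(store, p="rdf:type", o="rdfs:Class")  # model-level
pumps   = match(store, p="rdf:type", o="Pump")        # data-level
```

The single `match` operation standing in for SPARQL here is the essence of the argument: nothing in the store distinguishes meta-data from data except what the triples say.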
Relations, Classes and Members
Since RDF views all information as binary relationships, it becomes necessary to cast information into that mold in order to incorporate it into the RDS/WIP.
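For attribute-style information the casting is mechanical. A minimal sketch, assuming a hypothetical record with fields like "tagNumber" and "designPressure": each field of a record becomes one binary relation from the record's subject.

```python
def record_to_triples(subject, record):
    """Cast a record (field -> value mapping) into binary relations:
    one (subject, field, value) triple per field."""
    return [(subject, field, value) for field, value in record.items()]

# A row from some other system, as a field -> value mapping:
row = {"tagNumber": "P-101",
       "type": "CentrifugalPump",
       "designPressure": "16 bar"}

triples = record_to_triples("ex:P-101", row)
```

Structures that are not simple attributes of one subject, such as relationships with more than two participants, need more than this, which is where templates come in below.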
(@todo introduce notion of "system" to describe models built on the same foundation)
(@todo link to description of, differences between this and OO, ER etc.)
While it is possible to put raw RDF straight into the RDS/WIP, it's useful (and in fact easier) to leverage previous work in adapting other systems to RDF. Moreover, providing some common structure makes transformation between systems far easier.
RDF provides three core concepts: binary relations, classes, and membership in classes. The pattern most commonly present in other systems but missing from RDF is the n-ary relation.
In the RDS/WIP we borrow the term "template" from ISO 15926 to describe an n-ary relation and its related definitional rules and meta-data.
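The standard way to express an n-ary relation in binary terms is to introduce an instance node for the relation itself, with one binary role-relation per participant. A minimal sketch (the template and role names here, such as "tpl:Connection" and "tpl:hasItem", are invented for illustration and are not actual ISO 15926 identifiers):

```python
def instantiate_template(template_name, instance_id, roles):
    """Reify an n-ary relation: one node typed by the template,
    plus one binary role triple per participant."""
    triples = [(instance_id, "rdf:type", template_name)]
    triples += [(instance_id, role, value) for role, value in roles.items()]
    return triples

# A ternary fact -- "P-101 is connected to L-203 via nozzle N1" --
# cannot be a single binary relation, so it becomes four triples:
conn = instantiate_template(
    "tpl:Connection", "ex:conn-42",
    {"tpl:hasItem": "ex:P-101",
     "tpl:hasConnectedItem": "ex:L-203",
     "tpl:hasPoint": "ex:N1"},
)
```

The template's definitional rules and meta-data (role cardinalities, role types, documentation) would in turn be stated as triples about `tpl:Connection` itself, consistent with the model-is-data principle above.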
@todo link to deeper definition of template, cross-link to IDS-ADI templates.
Arriving at Definitions
To make a set of definitions work well together, it is necessary for them to have been produced via some cohesive process. We call these processes "methodologies".
Different methodologies can be used to create sets of definitions in the same model. Sometimes, these sets of definitions will overlap, but for the most part they will be substantively different.
Paths to Interoperability
There are at least three different paths to interoperability, and the choice among them is largely determined by the maturity of information handling in the particular contexts in which a language is used.
The RDS/WIP can be used to support any path to interoperability that requires the publishing of reference data - but the path chosen has important ramifications for the kind of system it is necessary to adopt.
Choice of System
The RDS/WIP is intended to hold reference data for many systems, so a submitter must choose which system (or systems) to contribute to. The most important aspect of this choice is the methodology, because it is the methodology that leads to a particular standardization path.
As expanded in the conclusions section, it is incumbent on users to decide which set of reference data best solves their specific interoperability problem.
At the core of this is choosing a system that has the methodologies and the standardization paths that support data definition and exchange at the level required to solve the problem.