POSC-Caesar FIATECH IDS-ADI Projects
Intelligent Data Sets Accelerating Deployment of ISO15926
Realizing Open Information Interoperability
The RDS/WIP can be used to hold reference data for many different systems and, within each system, for many different methodologies. This richness is intentional and desirable, because interoperability problems vary in their required fidelity, breadth of content, path to standardization, and time to delivery.
By analogy, there was no need to delay the creation of the steam engine just because fluid mechanics was not yet fully understood; but you don't want to fire neutrons at a lump of plutonium without a grasp of nuclear physics.
This richness makes it incumbent on users to limit themselves to the right set of reference data and the right approach for their particular interoperability problem.
Simple, Low-Fidelity Integration
At its lowest level, informal templates in any system, backed by a linguistic methodology, may be sufficient to create a useful point-to-point integration.
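As a minimal sketch of what such an informal template might look like in practice, consider direct field-to-field correspondences between two systems, chosen by name similarity rather than by formal ontology alignment. All names below (field names, record contents) are hypothetical illustrations, not actual RDS/WIP content.

```python
# Informal template: direct field-to-field correspondences between
# System A and System B, picked by linguistic similarity.
TEMPLATE = {
    "pump_id": "PumpTag",             # System A field -> System B field
    "design_pressure": "MaxPressure",
    "manufacturer": "Vendor",
}

def translate_a_to_b(record_a):
    """Rename System A fields to System B fields; unmapped fields are dropped."""
    return {b_field: record_a[a_field]
            for a_field, b_field in TEMPLATE.items()
            if a_field in record_a}

record = {"pump_id": "P-101", "design_pressure": "150 psi", "manufacturer": "Acme"}
print(translate_a_to_b(record))
# -> {'PumpTag': 'P-101', 'MaxPressure': '150 psi', 'Vendor': 'Acme'}
```

This kind of mapping is cheap to build and adequate for one pair of systems, but each additional system multiplies the number of templates to maintain, which is where a shared reference data set becomes attractive.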
Complex, High-Fidelity Integration
On the other hand, if the task were a high-fidelity integration across many live data systems, serving users from different disciplines, then it would be necessary to select a reference data set to use as a lingua franca. That reference data set would require a mapping language rich enough to address the union of the data systems involved. The high-fidelity requirement would in turn favor methodologies that are fine-to-coarse, rather than coarse-to-fine.
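The lingua-franca pattern can be sketched as follows: each system maintains one mapping from its local terms to a common reference-data identifier, and translation between any two systems goes through that hub. The reference IDs and local terms below are invented placeholders, not real RDS/WIP identifiers.

```python
# Each system maps its own local terms to a shared reference-data ID.
SYSTEM_A_TO_REF = {"Cntrfgl Pmp": "REF-0001", "Gt Vlv": "REF-0002"}
SYSTEM_B_TO_REF = {"CentrifugalPump": "REF-0001", "GateValve": "REF-0002"}

# Invert System B's mapping so we can go from reference ID to System B term.
REF_TO_SYSTEM_B = {ref: term for term, ref in SYSTEM_B_TO_REF.items()}

def translate(term_a):
    """Translate a System A term to System B via the shared reference ID."""
    ref = SYSTEM_A_TO_REF[term_a]   # hub step: n systems need n mappings,
    return REF_TO_SYSTEM_B[ref]     # not n*(n-1) pairwise templates

print(translate("Cntrfgl Pmp"))  # -> CentrifugalPump
```

The design choice here is the usual hub-and-spoke trade-off: each system is mapped once to the reference data set, so adding an (n+1)th system requires one new mapping rather than n new pairwise templates.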
In practice, the best solution is probably not available at the project outset, so the choice might instead be to accept an initial reference data set with a coarse-to-fine methodology, but in a system that also offers a fine-to-coarse methodology and a powerful mapping language. The intent of this approach is to allow a smooth migration towards the better solution over time.