POSC-Caesar FIATECH IDS-ADI Projects
Intelligent Data Sets Accelerating Deployment of ISO15926
Realizing Open Information Interoperability
Modeling: Coarse-to-Fine Approach
A coarse-to-fine approach takes relationships that already exist in highly agglutinated or generalized forms and, where necessary, breaks them down into finer relationships in order to explore the structure of the data.
It is important to recognize that they often need not be broken down much, if at all: models built with this approach frequently exist purely to record the data, with little emphasis on analysing its structure beyond the level necessary to solve specific problems.
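As a minimal sketch of this decomposition, consider a coarse record as it might be captured from a spreadsheet row, refined into finer-grained relationships only as far as a specific problem requires. The equipment record, field names, and relationship names (`hasCapacity`, `hasMaterial`, and so on) are hypothetical illustrations, not identifiers from ISO 15926 or the RDS/WIP.

```python
# Hypothetical "coarse" record: one agglutinated description field,
# as a human might enter it in a spreadsheet.
coarse_record = {
    "tag": "P-101",
    "description": "Centrifugal pump, 50 m3/h, carbon steel",
}

def refine(record):
    """Break an agglutinated description into finer relationships,
    expressed as (subject, predicate, object) triples.
    The parsing rule (comma-separated fields) is illustrative only."""
    tag = record["tag"]
    # Coarse level: record the description as-is.
    triples = [(tag, "hasDescription", record["description"])]
    # Finer level: pull structure out of the free text, but only the
    # structure a specific problem needs (kind, capacity, material).
    kind, capacity, material = [p.strip() for p in record["description"].split(",")]
    triples += [
        (tag, "isA", kind),
        (tag, "hasCapacity", capacity),
        (tag, "hasMaterial", material),
    ]
    return triples

for t in refine(coarse_record):
    print(t)
```

The point of the sketch is that the finer triples are derived from, and subordinate to, the coarse record; the decomposition stops as soon as the problem at hand is served.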
Close Fit to the Problem Set
Coarse-to-fine approaches are typically empirical: they model information as it is actually recorded by humans or other systems. This exploits two very important features:
- they are often purpose-driven - the model fits the data because it has been developed as an abstraction of the patterns already present in the data.
- they tend to reflect the way that humans think about the data in the disciplines that work with the problem set.
Most data model design processes follow this approach - not because it necessarily results in the best possible model, but because it quickly produces a model that fits the problem and the data.
Perhaps more importantly, it is popular because it is successful, and it is successful because it does not require the participants to alter the way they think about the structure of the data.
As a result, these kinds of approaches can only be used for interoperability where the problem set is broadly shared by the communicating parties: if one party wants to use the data to solve a different kind of problem, there is a good chance the model is useless to them.
Nevertheless, the power of these models as communicative tools, particularly to humans, cannot be overstated.