TLCMap is focused on enabling Humanities researchers to work with digital maps, with pathways from beginner to advanced. We also aim to make our systems useful for developers to interact with and to extend for further research where needed, primarily through RESTful web services and adherence to common, open standards for interoperability such as KML, GeoJSON, CSV and ROCrate, at times extending these for required functionality.
Developers of TLCMap systems, or of systems consuming them, should follow these principles to encourage uptake, re-usability, interlinking and usefulness for humanities researchers. These simple but powerful points enable TLCMap systems and projects to work together as a cohesive whole rather than as a set of disparate research, development and humanities projects. They are all expressed as 'should' rather than 'must', since the diversity of areas of development means some points may not be applicable in every case. While some of these points may seem obvious and should be standard practice, there are many cases in which they are not implemented, so they are worth stipulating.
All TLCMap projects should, where feasible and allowable:
No infrastructure without projects. No system should be built without projects to demonstrate usefulness.
No project without infrastructure. No project should be undertaken unless it is using software that can be re-used for similar projects.
Import/Export in standard formats. All systems should enable import and export of spatiotemporal data in standard formats such as KML, GeoJSON, JSON-LD, CSV, ROCrate and other relevant standards.
Layers. Systems should allow import/creation/visualisation/use of more than one dataset at a time.
Web Services. Systems should expose information through RESTful web services APIs for potential re-use in other developed systems.
Public, private, group permissions. All systems should enable at least these privacy settings for people to work comfortably and collaboratively, and make information available if and when ready.
All entities should have unique URLs. All relevant entities within a system should be uniquely addressable by URL. Through a web service, the URL returns relevant data for that entity. Through the user interface, the application should 'zoom' to or load the entity when the URL is visited in a browser. What counts as a relevant 'entity' depends on the context and application: it may be a place on a map, a word in a text, an image, a section of an image, etc.
Entity URLs in exported data. Data exported or made available through a web service should include the URL of the entity/record in the system it was exported from. This enables data to interoperate across all compliant systems. For example: I am viewing a dataset in a time visualisation alongside other datasets, and my dataset was created in a text analyser. While in the time visualiser, I can still directly access the relevant text for a point by clicking the link that goes back to the text analyser.
Entity URL chaining. Data imported and exported should not overwrite existing entity URLs, but add to a list of URLs from other systems. This enables linking among many systems through which the information has passed.
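The URL-chaining rule above can be sketched as follows. This is an illustrative sketch, not TLCMap's actual implementation; the property name "entity_urls" and the feature structure are assumptions for the example.

```python
def chain_entity_urls(feature: dict, system_entity_url: str) -> dict:
    """On import, append this system's entity URL to the feature's list of
    source URLs rather than overwriting URLs from earlier systems."""
    # Property name "entity_urls" is an assumption for illustration.
    urls = feature.setdefault("properties", {}).setdefault("entity_urls", [])
    if system_entity_url not in urls:  # avoid duplicates on re-import
        urls.append(system_entity_url)
    return feature

# A feature exported from a text analyser, then imported into a time
# visualiser, keeps both URLs, so either system can be reached from the other.
feature = {
    "type": "Feature",
    "properties": {"entity_urls": ["https://example.org/textanalyser/entity/42"]},
}
chain_entity_urls(feature, "https://example.org/timevis/entity/7")
```

After the call, the feature carries both entity URLs, preserving the chain of systems the data has passed through.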
Systems developed should enable import and export of data in standard geodata formats, particularly KML, GeoJSON and CSV. Systems should handle point, line and polygon data in these formats, and, at minimum, a start date and end date for each 'feature'.
Systems should allow for importing and/or creating more than one dataset or ‘layer’.
If development requires data structures that don't fit standard geodata formats, more general standard data formats such as XML, JSON and RDF should be used.
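As a sketch of the minimum per-feature data described above, here is a GeoJSON point feature carrying a start and end date, built and serialised with Python's standard library. The property names "datestart" and "dateend" are assumptions for illustration; an actual export may name these differently.

```python
import json

# Minimal GeoJSON FeatureCollection: one point feature with a start and end
# date. Property names "datestart"/"dateend" are illustrative assumptions.
collection = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            # GeoJSON coordinates are [longitude, latitude] (here, Sydney).
            "geometry": {"type": "Point", "coordinates": [151.2093, -33.8688]},
            "properties": {
                "name": "Sydney",
                "datestart": "1788-01-26",
                "dateend": "1788-12-31",
            },
        }
    ],
}

print(json.dumps(collection, indent=2))
```

The same name, geometry and date fields map directly onto a CSV row (name, longitude, latitude, datestart, dateend) or a KML Placemark with a TimeSpan, which is what makes round-tripping between the standard formats feasible.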