Mapping Systems

Intensive rapid prototyping development will occur over 2019 and 2020, with regular updates and changes during this time. Please use, or just play with, anything that is up and running. During this period we greatly value constructive feedback to help us make sure development is useful (email tlcmap@newcastle.edu.au). With stretched staff resources we may not be able to achieve everything, but it certainly helps us prioritise. The 'planned enhancements' are aspirational rather than guaranteed; we aim to achieve as much as possible while making sure working software is delivered, 'doing what we can with what we can get'.

Now Online

GHAP (v1.2b)
Temporal Earth (TLCM 1.2b)
Recogito/TMT (TLCM 1.0b)
HuNI
Heurist
Heurist Map Finder

Planned Development Streams

Access: GHAP v1.2b >>>

Working with the Australian National Placenames Survey (ANPS), we have cleaned data, providing well-formatted coordinates for around 200,000 of 300,000 placenames within Australia. This includes data aggregated from a variety of authoritative sources, and, crucially for the Humanities, historical placenames never before available as a large, Australia-wide collection. Work on GHAP will proceed in three phases:
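As an illustration of the kind of cleaning involved, coordinates in aggregated gazetteer data often arrive as degrees-minutes-seconds strings that must be normalised to signed decimal degrees. A minimal sketch, assuming DMS input strings (the format and function name are illustrative, not GHAP's actual pipeline):

```python
import re

def dms_to_decimal(dms: str) -> float:
    """Convert a degrees-minutes-seconds string such as 32°55'20"S
    to signed decimal degrees (south and west are negative)."""
    m = re.match(r"""(\d+)[°d]\s*(\d+)['m]\s*([\d.]+)["s]\s*([NSEW])""",
                 dms.strip())
    if not m:
        raise ValueError(f"unrecognised coordinate: {dms!r}")
    deg, mins, secs, hemi = m.groups()
    value = int(deg) + int(mins) / 60 + float(secs) / 3600
    return -value if hemi in "SW" else value
```

A real pipeline would of course need to cope with many more source formats and with missing or swapped fields; this shows only the core arithmetic.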

Planned Enhancements

As far as resources permit:

Recogito/TMT (TLCM 1.0b) >>>

Map corpora of texts. Text Map Text (TMT) will combine the functionality of a prototype developed through C21CH at the University of Newcastle (inspired by the Saga Map user interface, combined with a desire to use automated text processing to make this available to all) with the well-established Recogito software of Pelagios, enabling users to automatically generate and edit interactive maps from textual corpora.

Planned Development

As far as resources permit:

Access: Temporal Earth (TLCM 1.2b) >>>

Visualise maps over time, add KML files. This project will allow people to import data to create their own maps, similar to Matt Coller's Time Machine.

Planned Development

As far as resources permit:

Heurist and Map Finder

Building on Heurist to handle complex data behind maps.

Planned Enhancements

As far as resources permit:

HuNI provides a meta search of curated humanities datasets, enabling you to build a collection, and to establish complex networks among entities. These networks can be visualised, explored and interconnected leading to serendipitous discovery. TLCMap will extend HuNI capability to include maps and geocoding for visualisation of networks of places and events on a map, allowing import and export of data, and the ability to connect to entities within and outside of HuNI.

Planned Enhancements

Presently Under Discussion

As far as resources permit:


A range of metrics enabling statistics, quantitative analysis, and data transforms. Such metrics allow information to be derived from existing information (such as deriving a frontier from collections of points) and allow quantitative validation, or invalidation, of perceived patterns or assumptions (e.g. "these are close to those"). There are many possible metrics to be prioritised, such as:
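As a sketch of the simplest such metric, the "these are close to those" check can be made quantitative with great-circle distances. The haversine approach and the function names here are illustrative assumptions, not a committed design:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points,
    using the haversine formula and a mean Earth radius of 6371 km."""
    rlat1, rlon1, rlat2, rlon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((rlat2 - rlat1) / 2) ** 2
         + cos(rlat1) * cos(rlat2) * sin((rlon2 - rlon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def near(points_a, points_b, threshold_km):
    """Quantify 'these are close to those': the share of (lat, lon)
    points in A lying within threshold_km of some point in B."""
    hits = sum(1 for a in points_a
               if any(haversine_km(*a, *b) <= threshold_km for b in points_b))
    return hits / len(points_a)
```

A result of 1.0 means every point in the first set has a neighbour in the second within the threshold; values near 0 would tend to invalidate the assumed pattern.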

As far as resources permit:


Search all the cultural spatiotemporal data. Make cultural spatiotemporal research discoverable for researchers and provide a tool for the public to access, restore, or add to the 'meaning of place'.

Topodex aims to be a large 'big data' index of all 'points' in cultural spatiotemporal datasets. It will enable researchers to register, add metadata to, and load in their compliant datasets. A small subset of information for each data point (coordinates, name, description, keywords, link to dataset metadata, and link-back to the data source in application, web service or repository) will be retained. Users can search by bounding box (e.g. I draw a box around where I live to ask "What's here?") and other facets.
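A minimal sketch of the bounding-box search over such an index; the record shape, field names and example entries are illustrative assumptions, not a Topodex schema:

```python
def bbox_search(index, min_lat, min_lon, max_lat, max_lon):
    """Return the small retained record for every indexed point whose
    coordinates fall inside the drawn box ("What's here?")."""
    return [
        p for p in index
        if min_lat <= p["lat"] <= max_lat and min_lon <= p["lon"] <= max_lon
    ]

# A couple of illustrative index entries, holding only the small subset
# of fields described above (names and links are made up).
sample_index = [
    {"name": "Newcastle", "lat": -32.93, "lon": 151.78,
     "keywords": ["city"], "dataset": "https://example.org/ds/1"},
    {"name": "Perth", "lat": -31.95, "lon": 115.86,
     "keywords": ["city"], "dataset": "https://example.org/ds/2"},
]
```

At the intended scale a linear scan like this would not do; a spatial index (such as an R-tree or geohash buckets) is one of the scalability issues the project would need to address.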

There will be issues with scalability in terms of storage and processing, as well as noise in search results due to sheer volume, and maintenance, so part of this project involves finding ways to handle those issues.

Humanities researchers, in contrast to those using STEM digital mapping, frequently need to treat things that aren't 2D pictures with cartesian coordinates as maps: songlines, someone telling a story about hitching from Australia to England, dances, lienzo canvases, sketches, Ptolemaic maps, and so on. Understanding these requires associating them with other things - a point on a conventional digital map, a point in a text glossary, an annotation, an instance in a pictorial representation, in another AV file, etc.

This translates into a technical requirement to be able to address, by 'coordinates', any kind of multimedia with URLs (not just the media file but a 'fragment' within it) and to associate all these links - for which Linked Data formats such as JSON-LD are ideal.
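For example, an association between a temporal 'coordinate' in an audio recording and a point on a conventional map might be expressed as a JSON-LD annotation, loosely following the W3C Web Annotation and Media Fragments conventions. The URLs, names and exact shape here are illustrative:

```python
import json

annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    # the 'coordinate' within the media: seconds 30-45 of a recording,
    # addressed with a W3C Media Fragments URI (#t=start,end)
    "target": "https://example.org/story.mp3#t=30,45",
    # the thing it is associated with: a point on a conventional map,
    # expressed here as a GeoJSON Feature (lon, lat order)
    "body": {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [151.78, -32.93]},
        "properties": {"name": "Newcastle"},
    },
}
doc = json.dumps(annotation, indent=2)
```

The same pattern generalises: either end of the association can be any URL-addressable resource, with the fragment syntax supplying the 'coordinate' within it.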

Through experience working with humanities researchers, most needs come down to associating one thing with another, across panes - that's how we understand things. A point on a map opens up some metadata; we see a manuscript scan alongside a transcription and translation. Also, some problems have solutions that exist in the world of digital mapping even though they aren't about cartography per se. We want to seamlessly zoom in and out of a high-resolution scan over the web, and we want to click on specific points in it, such as marginalia that gives us a pop-up with information - which is just how map tiles and pop-ups typically work.

Generally, at a high level of abstraction this is all 'mapping' from some 'coordinate' in some resource in some media to another 'coordinate' in some resource in some media. However, in practice the requirements for specific researchers in specific domains can go far beyond basic media (such as we click a point in an audio recording of an endangered language and see the phrase in a multitiered gloss marked up in XML).

Implementation

This requires visualisation tools for media primitives such as:

There are already established conventions for addressing 'coordinates' in multimedia using URL 'fragments', and linked data formats that should be used for marking up the associations. What we need are visualisation tools for these media primitives, spawned from reading in JSON-LD files to automatically run up visualisations and make the linking among them work. It also requires designing an extensible front-end framework for developers to handle new and idiosyncratic needs (such as associating points in audiovisual media with multilayered linguistic glosses).

We also need to promote the idea of all apps having their entities uniquely addressable with a URL (which one might assume is already widespread, but commonly is not). See TLCMap compliance.

The full set of core functionality is:

The ability to link within and across media in different datasets, or to points anywhere on the web, is closely aligned with the core idea in HuNI of associating entities and engendering serendipitous links, rather than focusing on bigger and bigger 'big data'. M2M draws on HuNI's work in understanding the habitus of humanities researchers and providing them with well-adapted tools.

This is a speculative project, but with potentially transformational implications for the world wide web and how we use it. It could be thought of as a reimagining of the 'semantic web' that corrects some of its theoretical flaws, which have, arguably, held back widespread adoption. This functionality, although arising from DH mapping needs, in particular Indigenous digital humanities mapping requirements, would be of widespread usefulness across multimedia and the web generally, not limited to cartography, maps or humanities. This project aims to establish a working demonstration with implementations - not to conquer or dominate the industry, or with any empire-building agenda, but simply to establish a new paradigm and some standards which may be freely adopted and built on by anyone, anywhere, anytime.

Developers

TLCMap is focused on enabling Humanities researchers to work with digital maps, with pathways from beginner to advanced. We also aim to make systems useful for developers to interact with, to extend research further where needed, primarily through RESTful web services and adherence to common and open standards for interoperability, such as KML, GeoJSON, CSV and RO-Crate, at times extending these for required functionality.
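As a sketch of the kind of interoperability intended, a consuming system might flatten the Point features of a GeoJSON FeatureCollection into CSV. The field choices here are illustrative, not a TLCMap schema:

```python
import csv
import io
import json

def geojson_points_to_csv(geojson_str: str) -> str:
    """Flatten the Point features of a GeoJSON FeatureCollection
    into CSV rows of name, latitude, longitude."""
    fc = json.loads(geojson_str)
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["name", "latitude", "longitude"])
    for feat in fc["features"]:
        if feat["geometry"]["type"] != "Point":
            continue
        # GeoJSON stores coordinates in (longitude, latitude) order
        lon, lat = feat["geometry"]["coordinates"]
        writer.writerow([feat["properties"].get("name", ""), lat, lon])
    return out.getvalue()
```

Converting in the other direction, or to KML, follows the same shape: parse one open format, write another, preserving coordinates and names.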

Developers of TLCMap systems or systems consuming TLCMap systems should follow these simple but powerful principles to encourage uptake, re-usability, interlinking and usefulness for humanities researchers:

The following simple but powerful points will enable TLCMap systems and projects to work together as a cohesive whole, rather than as a set of disparate research, development and humanities projects. These are all indicated as 'should' rather than 'must', because the diversity of areas of development means that in reality some points may not be applicable. While some may seem obvious and should already be standard practice, there are many cases in which they are not implemented, so they are worth stipulating.

All TLCMap projects should, where feasible and allowable:

A list of data files to use for testing is here.

Also look at the FAQs for other resources to discover.

If you are a digital humanist or a developer new to mapping technology, here are a few places to start and technologies to be aware of: