Capturing and Making Available Metadata in 3D models

Most humanities scholars are intimately familiar with best practices for making data about how they reach and support their claims (loosely, metadata) available to consumers of their scholarly analogue projects.  Indeed, it is probably not possible to legitimately earn a terminal degree in the humanities without mastering the intricacies of appropriately citing primary and secondary sources, documenting the kinds of evidence that support various claims, and grading claims in terms of how well- or ill-supported they are.  We manifestly have a moral obligation to engage in appropriate citation, documentation, and grading of claims, so as to avoid various forms of academic dishonesty.  And this obligation binds us in the conduct of our digital projects as much as our analogue ones.

Citing claims and documenting evidence within 3D models

It is, however, far from obvious how to cite claims and document evidence when constructing 3D digital environments and artifacts.  Embedding footnote numbers throughout a model would be distracting, sacrificing the very immersion that at least some content creators seek to achieve in their projects.  And where would the actual footnotes go, in any case?

Such questions are evolving as the technology for creating and curating digital objects and environments evolves.  But we can get a sense for how one might answer them by considering one well-developed system for representing metadata in a 3D virtual environment: the one included in VSim, developed by the Institute for Digital Research and Education at UCLA.  VSim is a free platform (developed with the support of an NEH DH Start-Up Grant) that enables users to interact with 3D models (objects or environments), create linear presentations within 3D models, and embed resources (including metadata) in 3D models.  To get a general sense of VSim, have a look at VSim_The Movie.  Of particular interest to us is the discussion of VSim's capabilities for embedding metadata (by authors) and displaying it (to users), which starts at about 5.5 minutes into this approximately 12-minute clip.  Here is a screenshot of VSim's author interface for embedding resources.  And here is a screenshot of VSim's user interface for viewing embedded resources.  There are multiple alternatives to VSim for representing metadata within 3D models, but considering how VSim works should give you a broad sense of the available strategies for discharging authorial responsibilities to accessibly cite sources and document evidence while working with 3D models.

Representing degrees of confidence and uncertainty within 3D models

In creating a digital object or 3D digital environment representing a real object or environment about which not everything is known, gaps will necessarily have to be filled in. The ruins of a building on Delos (Greece) might suggest that it had a roof, for instance.  But various questions remain more or less wide open, like the height of the roof, the interface to the roof, the support structure for the roof, the roof type, and so on.  A navigable 3D reconstruction of this building will have to fill in all of these gaps, and the conjectured parts of the building will tend to look just like the parts that are known with certainty or hypothesized with great confidence.  But as noted earlier, as academics we have a responsibility to grade the confidence or uncertainty of the claims we make in constructing 3D models.

Responses to the challenge of how we might go about representing degrees of confidence and uncertainty in elements of 3D models are much less well-developed than responses to the challenge of capturing and representing metadata.  One option (employed by the Digital Hadrian's Villa Project discussed earlier) is just to include all of the metadata for the project on a companion website.  But I suspect that such information is even less likely to be perused than the endnotes in an academic book, because correlating the elements of a 3D model with information on a companion website is apt to take considerable effort.  Another option would be to just gray out portions of 3D models in which the designer has little confidence.  But this approach would likely militate against the immersiveness of the model involved, and it would not do well at representing alternative hypotheses about uncertain elements of models.

For a third option that may avoid the drawbacks of the two just mentioned, we return to VSim.  So far as I am aware, VSim does not explicitly include a mechanism for grading the confidence or uncertainty that designers have in the various elements of the 3D models they manipulate within VSim.  But one project made accessible within VSim, Digital Karnak (directed by Elaine Sullivan while she was at UCLA's Institute for Digital Research and Education), includes a feature that could be adapted to register either degrees of confidence in the elements of a 3D model or even an array of alternative hypotheses about such elements. The feature in question is a "time slider" used to show how the actual archaeological site at Karnak looked at different times.  For a look at this time slider, follow this link to a talk by Elaine on Digital Karnak; discussion of the time slider occurs from about 18.5 minutes into this approximately 1-hour-and-12-minute clip.  It would be fairly straightforward to repurpose something like Digital Karnak's time slider into a slider representing different hypotheses about elements of a 3D model.  And by assigning something like confidence values to the elements of a 3D model, one could alternatively repurpose the time slider into a "confidence slider," with one end of the slider showing only the parts of the model with high confidence values.  Moving the slider towards the other end might progressively overlay the high-confidence elements of the model with more speculative elements.
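The confidence-slider idea can be made concrete with a small sketch.  The code below is purely illustrative and assumes nothing about VSim's or Digital Karnak's actual internals: it simply imagines that each element of a model carries a confidence value between 0.0 (pure conjecture) and 1.0 (securely attested), and that the slider sets a threshold determining which elements are rendered.

```python
# Hypothetical sketch of a "confidence slider" filter.  All names and
# values here are invented for illustration; neither VSim nor Digital
# Karnak exposes such an interface.
from dataclasses import dataclass

@dataclass
class ModelElement:
    name: str
    confidence: float  # 1.0 = securely attested, 0.0 = pure conjecture

def visible_elements(elements, slider_position):
    """Return the elements to render at a given slider position.

    A position of 1.0 shows only securely attested elements; moving
    the slider toward 0.0 progressively overlays more speculative ones.
    """
    return [e for e in elements if e.confidence >= slider_position]

# A toy reconstruction of the Delos building discussed above.
building = [
    ModelElement("foundation walls", 0.95),
    ModelElement("roof support beams", 0.6),
    ModelElement("pitched tile roof", 0.3),
]

# At a high threshold, only well-attested parts appear.
print([e.name for e in visible_elements(building, 0.9)])
# -> ['foundation walls']

# Lowering the threshold overlays the conjectural parts as well.
print([e.name for e in visible_elements(building, 0.25)])
# -> ['foundation walls', 'roof support beams', 'pitched tile roof']
```

A real implementation would of course filter geometry in the rendering engine rather than a Python list, and might render low-confidence elements semi-transparently rather than hiding them outright, but the underlying data structure (an element-to-confidence mapping plus a threshold) would be the same.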

There are options other than these for representing degrees of confidence and uncertainty within digital 3D models, but consideration of the above should get you started in thinking about how to do so within your own projects.
