Cultural Heritage Imaging


Illumination of Material Culture: A Symposium on Computational Photography and Reflectance Transformation Imaging (RTI) at The Met, March 7-8, 2017 by chicaseyc

Our guest blogger, Emily B. Frank, is currently pursuing a Joint MS in Conservation of Historic and Artistic Works and MA in History of Art at New York University, Institute of Fine Arts. Thank you, Emily!

With the National Endowment for the Humanities (NEH) back on the chopping block in the most recent federal budget proposal, I feel particularly privileged to have taken part in the NEH-funded symposium, Illumination of Material Culture, earlier this month.

Co-hosted by The Metropolitan Museum of Art and Cultural Heritage Imaging (CHI), the symposium brought together conservators, curators, archaeologists, imaging specialists, cultural heritage and museum photographers, and the gamut of engineers to discuss and debate uses, innovations, and limitations of computational imaging tools. This interdisciplinary audience fostered an environment for collaboration and progress, and a few themes emerged.

The sold-out crowd at the symposium at The Met

(1) The emphasis among practitioners seems to have shifted from isolated techniques to integrating a range of data types.

E. Keats Webb, Digital Imaging Specialist at the Smithsonian's Museum Conservation Institute, presented "Practical Applications for Integrating Spectral and 3D Imaging," which focused on capturing and processing broadband 3D data. Holly Rushmeier, Professor of Computer Science at Yale University, gave a talk entitled "Analyzing and Sharing Heterogeneous Data for Cultural Heritage Sites and Objects," which focused on CHER-Ob, an open source platform developed at Yale to enhance the analysis, integration, and sharing of textual, 2D, 3D, and scientific data. CHI's Mark Mudge presented a technique for the integrated capture of RTI and photogrammetric data. The theme of integration also ran through the panelists' presentations and the lightning talks. Kathryn Piquette, Senior Research Consultant and Imaging Specialist at University College London, spoke on the integration of broadband multispectral and RTI data; Nathan Matsuda, PhD candidate and National Science Foundation Graduate Fellow at Northwestern University, presented work at NU-ACCESS combining photometric stereo and photogrammetry; and Chantal Stein and I, in collaboration with Sebastian Heath, Professor of Digital Humanities at the Institute for the Study of the Ancient World, gave a lightning talk on integrating photogrammetry, RTI, and multiband data into a single, interactive file in Blender, a free, open source 3D graphics and animation application.

(2) There is an emerging emphasis on big data and the possibilities of machine learning.

Paul Messier, art conservator and head of the Lens Media Lab at the Institute for the Preservation of Cultural Heritage, Yale University

The notion of machine learning and the possibilities it might unlock came up in multiple presentations, perhaps most notably in "RTI: Beyond Relighting," a panel discussion moderated by Paul Messier, Head of the Lens Media Lab, Institute for the Preservation of Cultural Heritage (IPCH), Yale University. Dale Kronkright presented work in progress at the Georgia O'Keeffe Museum, in collaboration with NU-ACCESS, that uses algorithms to track changes to the surfaces of paintings, focusing on the dimensional change of metal soaps. Paul Messier briefly described work being done at Yale to explore how machine learning might work iteratively with connoisseurs to push data-driven research forward.
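
To make "tracking change" concrete: at its simplest, such an algorithm compares registered captures of the same surface made at different times and flags the pixels that differ. Below is a minimal sketch of that idea, not the O'Keeffe/NU-ACCESS pipeline; the file names and threshold are hypothetical.

```python
# Toy change-detection sketch: compare two registered grayscale captures
# of the same painting surface taken at different times.
# File names and threshold are placeholders for illustration only.
import numpy as np
from PIL import Image

before = np.asarray(Image.open("surface_2015.tif").convert("L"), dtype=float)
after = np.asarray(Image.open("surface_2017.tif").convert("L"), dtype=float)

diff = np.abs(after - before)   # per-pixel intensity change
changed = diff > 25.0           # flag pixels above an arbitrary threshold

print(f"{100.0 * changed.mean():.2f}% of pixels changed beyond the threshold")

# Write a binary change map for visual inspection.
Image.fromarray((changed * 255).astype(np.uint8)).save("change_map.png")
```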

Mark Mudge, President of Cultural Heritage Imaging, participates in a panel discussion

(3) The development of open tools for sharing and presenting computational data via the web and social media is catching up.

Graeme Earl, Director of Enterprise and Impact (Humanities) and Professor of Digital Humanities at the University of Southampton, UK, gave a keynote entitled "Open Scholarship, RTI-style: Museological and Archaeological Potential of Open Tools, Training, and Data," which kicked off the discussion about open tools and where the field is heading. Szymon Rusinkiewicz, Professor of Computer Science at Princeton University, presented "Modeling the Past for Scholars and the Public," a case study of a cross-listed Archaeology-Computer Science course at Princeton in which students generated teaching tools and web content that provided curatorial narrative for museum visitors. CHI's Carla Schroer presented new tools for collecting and managing metadata for computational photography. Roberto Scopigno, Research Director of the Visual Computing Lab, Consiglio Nazionale delle Ricerche (CNR), Istituto di Scienza e Tecnologie dell'Informazione (ISTI), Pisa, Italy, delivered the second day's keynote, on 3DHOP, a new tool for web presentation of and collaboration on computational data.

We had the privilege of hearing from Tom Malzbender, without whose work at HP Labs in the early 2000s this symposium would never have happened.

The keynotes at the symposium were streamed through The Met’s Facebook page. The other talks were recorded and will be available in three to four weeks. Enjoy!

Tom Malzbender, the inventor of RTI, at the podium



Why Create and Use Open Source Software? Reflections from an imaging nonprofit by chicaseyc
April 29, 2016, 11:50 pm
Filed under: Commentary, Technology

This blog by Carla Schroer, Director at Cultural Heritage Imaging, was first posted on the blog of the Center for the Future of Museums, founded by Elizabeth Merritt as an initiative of the American Alliance of Museums.

My organization, Cultural Heritage Imaging (CHI), has been involved in the development of open source software for over a decade. We also use open source software developed by others, as well as commercial software.

What drove us to make our work open for other people to use and adapt?

We are a small, independent nonprofit with a mission to advance the state of the art of digital capture and documentation of the world's cultural, historic, and artistic treasures. Development of technology is only one piece of our work; we are involved in many aspects of scientific imaging, including developing methodology, training museum and heritage professionals, and engaging in imaging projects and research. Because we are small, and because the best ideas and most interesting breakthroughs can happen anywhere, we collaborate with others who share our interests and who have expertise in using digital cameras to collect empirical data about the real world.

Our choice to use an open source license for the technology produced in this kind of collaboration serves both the organizations involved in its development and the adopters of the software. By agreeing to an open source license, the people and institutions that contribute to the software's development are assured that the software will remain available and that they and others can use and modify it now and in the future. It also keeps things fair among the collaborators, ensuring that no one group exploits the joint work.

How does open source benefit its users?

It's beneficial not just because the software is free. There is a lot of free software that is distributed only as an executable, without making the source code available for others to use and modify. One advantage to users who care about the longevity of their data (in our case, images and the results of image processing) is that the processes being applied to the images are transparent: people can figure out what is being done by looking at the source code. Making the source code available also increases the likelihood that software to do the same kind of processing will be available in the future. It isn't a guarantee, but it improves the chances for long-term reuse.

Finally, open source benefits the community of people working in a particular area, because other researchers, students, and community members can add features, fix errors, and customize the software for special uses. With a "copyleft" license, like the GNU General Public License that we use, all modifications and additions to the software must be released under the same license. This ensures that when others build on the software, their modifications and additions will be "open" as well, so that the whole community benefits. (This is a generalization of the terms; read the text of a particular license if you want to understand the details.)
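
In practice, applying a copyleft license mostly means shipping the license text with the code and marking each source file with a standard notice. As a sketch, here is the standard GNU GPL (version 3) notice as it might appear at the top of a source file; the file name, year, and author are placeholders, and a given project may use a different GPL version or wording.

```python
# example_tool.py -- part of a hypothetical imaging program
# Copyright (C) 2016 Example Contributor
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
```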


Open source is a licensing model for software, nothing more. The fact that it is "open" tells you nothing about the quality of the software, whether it will meet your needs, whether anyone is still working on it, how easy it is to learn and use, or many other questions you should think about when choosing technology. The initial cost of software is only one consideration in adopting a technology strategy. For example, what will it cost to switch to another solution if this one no longer does what you need? Will you be left with a lot of files in a proprietary format that other software can't open or use?

That leads me to a related issue: open file formats. Whether you choose commercial software or open source software, you should think about how your data and resulting files will be stored and used. Almost always, you should choose open formats (like JPEG, PDF, and TIFF), because any software or application is allowed to read and write them, which protects the reuse of the data in the long term. Proprietary formats (like camera RAW files and Adobe PSD files) may not be usable in the future. The Library of Congress Sustainability of Digital Formats website has great information about this topic.
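
As a concrete illustration, migrating a file into an open format is often trivial. The sketch below, assuming the Pillow imaging library and hypothetical file names, saves an archival TIFF copy of an image using lossless LZW compression.

```python
# Save an archival copy of an image in an open format (TIFF).
# Assumes the Pillow library (pip install Pillow); file names are
# placeholders for illustration only.
from PIL import Image

img = Image.open("working_copy.png")  # source in whatever format Pillow reads
img.save("archive_copy.tif", format="TIFF", compression="tiff_lzw")
```

Whatever tool you use, the point is the same: a fully documented format can be read by any software, today or decades from now.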

At Cultural Heritage Imaging, we use an open source model for development of imaging software because it helps foster collaboration. It also provides transparency as well as a higher likelihood that the technology and resulting files will be usable in the future. If you want to learn more about our approach to the longevity and reuse of imaging data, read about the Digital Lab Notebook.