This blog post by Carla Schroer, Director at Cultural Heritage Imaging, was first published on the Center for the Future of Museums blog, founded by Elizabeth Merritt as an initiative of the American Alliance of Museums.
My organization, Cultural Heritage Imaging (CHI), has been involved in the development of open source software for over a decade. We also use open source software developed by others, as well as commercial software.
What drove us to make our work open for other people to use and adapt?
We are a small, independent nonprofit with a mission to advance the state of the art of digital capture and documentation of the world’s cultural, historic, and artistic treasures. Development of technology is only one piece of our work; we are also involved in many aspects of scientific imaging, including developing methodology, training museum and heritage professionals, and engaging in imaging projects and research. Because we are small, and because the best ideas and most interesting breakthroughs can happen anywhere, we collaborate with others who share our interests and who have expertise in using digital cameras to collect empirical data about the real world.
Our choice to use an open source license with the technology that is produced in this kind of collaboration serves both the organizations involved in its development and the adopters of the software. By agreeing to an open source license, the people and institutions that contribute to the software development are assured that the software will be available and that they and others can use it and modify it now and in the future. It also keeps things fair among the collaborators, ensuring that no one group exploits the joint work.
How does open source benefit its users?
It’s beneficial not just because the software is free. There is a lot of free software that is distributed only in the “executable” version, without making the source code available for others to use and modify. One advantage to users who care about the longevity of their data (in our case, images and the results of image processing) is that the processes being applied to the images are transparent: people can figure out what is being done by looking at the source code. Also, making the source code available increases the likelihood that software to do the same kind of processing will be available in the future. It isn’t a guarantee, but it improves the chances for long-term reuse. Finally, open source benefits the community of people working in a particular area, because other researchers, students, and community members can add features, fix errors, and customize the software for special uses. With a “copyleft” license, like the GNU General Public License that we use, all modifications and additions to the software have to be released under the same license. This ensures that when others build on the software, their modifications and additions will be “open” as well, so that the whole community benefits. (This is a generalization of the terms; read more if you want to understand the details of a particular license.)
Open source is a licensing model for software, nothing more. The fact that software is “open” tells you nothing about its quality, whether it will meet your needs, whether anyone is still working on it, how easy or difficult it is to learn and use, and many other questions you should think about when choosing technology. The initial cost of software is only one consideration in adopting a technology strategy. For example, what will it cost to switch to another solution if this one no longer does what you need? Will you be left with a lot of files in a proprietary format that other software can’t open or use?
That leads me to a related issue: open file formats. Whether you choose commercial software or open source software, you should think about how your data and resulting files will be stored and used. Almost always, you should choose open formats (like JPEG, PDF, and TIFF), because any software or application is allowed to read and write to the format, which protects the reuse of the data in the long term. Proprietary formats (like camera RAW files and Adobe PSD files) may not be usable in the future. The Library of Congress Sustainability of Digital Formats website has great information about this topic.
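One reason open formats protect long-term reuse is that their structure is publicly documented, down to the “magic bytes” at the start of every file. As a minimal sketch (a hypothetical helper, not part of any CHI software), any program can recognize these published signatures without proprietary tools:

```python
# Published file signatures ("magic bytes") for some open formats.
# Because the formats are openly documented, any software can check them.
MAGIC_BYTES = {
    b"\xff\xd8\xff": "JPEG",
    b"%PDF": "PDF",
    b"II*\x00": "TIFF (little-endian)",
    b"MM\x00*": "TIFF (big-endian)",
    b"\x89PNG\r\n\x1a\n": "PNG",
}

def sniff_format(header: bytes) -> str:
    """Return the format name matching the file's leading bytes, if any."""
    for magic, name in MAGIC_BYTES.items():
        if header.startswith(magic):
            return name
    return "unknown (possibly proprietary)"

print(sniff_format(b"\xff\xd8\xff\xe0" + b"\x00" * 8))  # → JPEG
print(sniff_format(b"MM\x00*" + b"\x00" * 8))           # → TIFF (big-endian)
```

A proprietary format, by contrast, may have no published specification at all, so even identifying the file can depend on a single vendor’s software.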
At Cultural Heritage Imaging, we use an open source model for development of imaging software because it helps foster collaboration. It also provides transparency as well as a higher likelihood that the technology and resulting files will be usable in the future. If you want to learn more about our approach to the longevity and reuse of imaging data, read about the Digital Lab Notebook.
Filed under: Commentary, News | Tags: Anna Ressman, cultural heritage imaging, Digital Imaging, Oriental Institute Museum, Reflectance Transformation Imaging, RTI, Technology, University of Chicago, visualization
Recently Anna R. Ressman, Head of Photography at the Oriental Institute Museum, University of Chicago, shared a compelling article with me, and now I’m sharing it with you.
Here is a link to the Oriental Institute newsletter (PDF), which contains the article entitled, “Behind the Scenes: Museum Photography at the Oriental Institute.”
Anna describes the process by which five very different artifacts were documented, each presenting a unique challenge. And yes, you guessed it, one of those artifacts was documented using the RTI highlight method.
The Egyptian stele, she writes, “was photographed with a method of computational photography called Reflectance Transformation Imaging (RTI).”
Anna concludes the section on RTI with these insights: “RTI files can be created in such a manner that pixel data is analyzed to show specular information rather than color data, which can reveal more information about the surface of the object than color data alone (figs. 3–4). As you can see, the inscriptions on the stele are much clearer in the specular-enhancement PTM image (fig. 3), even though the studio photograph (fig. 4) was taken using a macro lens under controlled studio lighting. The former may not be as aesthetically pleasing as the latter, but it reveals much more information than would normally be seen — and that is just a single image out of a series of forty-five.”
Be sure to download the complete article and check out the rest of the newsletter as well.
Anna R. Ressman is Head of Photography at the Oriental Institute Museum, University of Chicago, USA. Anna is also a freelance photographer and a fine artist.
[Photos by Anna R. Ressman/Courtesy Oriental Institute Museum, University of Chicago]
On August 16, 2002, we founded Cultural Heritage Imaging as a nonprofit corporation in San Francisco. Wow, it seems like yesterday and it seems like a long time ago! Our digital camera at that time was 3 megapixels and had a pretty slow autofocus. We had seen Tom Malzbender’s pioneering Polynomial Texture Mapping paper at SIGGRAPH in 2001, and we began working with him several weeks later. However, using the technique required working with command-line software and capturing images using either a lighting array (dome) or a very time-consuming template approach.
We were also capturing 3D data using structured-light software from Eyetronics, and we had been on site with Professor Patrick Hunt of Stanford University at his archaeological excavation at the Grand St. Bernard Pass in Switzerland as early as 2001.
We have come a long way since then, working with numerous museums, historic sites, archaeologists, and historians, as well as computer science researchers. In 2006 we developed (with Tom Malzbender) the Highlight RTI technique, and we worked with the team at the University of Minho in Portugal to develop open source software to support it (RTIBuilder). With a grant from the Institute of Museum and Library Services beginning in 2006, we researched a multi-view approach to RTI, and out of that collaboration with Professor James Davis et al. of UC Santa Cruz and the Visual Computing Lab in Pisa came the open source Hemispherical Harmonics fitter (section 6 in the tutorial notes) and the RTIViewer.
Also in 2006 we were contacted by folks at the Worcester Art Museum Conservation Lab interested in using RTI for art conservation. After a small pilot project, we built a light array for them and trained them in the RTI technique. To this day we appreciate this group, their vision of how this technology could be used regularly in their field, and their willingness to go out on a limb to make it happen and share their work with others.
In 2008, as interest in RTI grew on the part of museums and historic sites, CHI made a great effort to develop training programs for RTI and other computational photography techniques. We have since trained over 200 people in our full four-day RTI class, and we have introduced hundreds more to RTI through workshops and presentations at numerous conferences and lecture series.
Our current research work includes an NSF-funded project with Professor Szymon Rusinkiewicz of Princeton University to further develop the technique of Algorithmic Rendering with RTI data sets and easy-to-use software that includes a way to keep track of the full process history in a digital lab notebook. We began working on the requirements and methodology for managing this process history for all of our imaging work, and especially RTI, back in 2002, and we shared it with the computer graphics community in 2004 on a SIGGRAPH panel called “Computer Graphics and Cultural Heritage: What are the Issues?” chaired by Professor Holly Rushmeier. Our early work referred to this subject as “empirical provenance,” described in detail in this 2007 paper delivered at the CIPA conference.
So now, 11 cameras, many well-worn travel bags, and I can’t even count how many laptops later, we enter our second decade of collaboration with many wonderful people from all over the planet. We thank some of the folks who have helped us along the way on our acknowledgments web page but it isn’t and can’t possibly be a complete list. CHI was founded on the principles of collaboration and the democratization of technology, producing tools and methodology that enhance scientific reliability and long-term preservation.
We would like to say thank-you to everyone who has volunteered time, donated money or equipment, shared their work, asked us questions, answered our questions, written down how to do things, listened to us speak, formed project collaborations, or run across our path in some interesting way! We hope to meet you all again, and many others down the road.
Filed under: Commentary, Equipment | Tags: breeze software, canon, capture software, focus, nikon, software, Technology
I was recently asked, “Which DSLR camera is better for RTI data capture: Canon or Nikon?” The answer is like Godzilla vs. King Kong. It’s gonna be a good fight.
The short answer is that either camera will work.
In the hands of a professional photographer, they are both very similar. The difference is the workflow: what you’re familiar with, what high-quality lenses you own, what equipment you’ve already got in your gear bag and studio, and what capture/acquisition software you decide to start a relationship with. (Think mind/body: these two need to be pulling on the same oar.)
Before you purchase a camera, you need to examine how you’re going to interface with your DSLR when you’re shooting in tethered mode. Here’s the scenario: your stage is set up, your object is in place, you’re tethered to the camera via USB cable, and you launch your capture software. You need complete command of the basics: composition, exposure, and focus.
DSLR Remote Pro for Macs (Canon -> Mac)
http://www.breezesys.com/DSLRRemotePro4Mac/index.htm | http://www.breezesys.com/products.htm
The most stable capture software that we have used (bear in mind that we use Macs and Canons) is developed by a third party, Chris Breeze. He has taken the Canon and Nikon SDKs and developed for various combinations of Canon or Nikon with Mac or PC. I’m not going to dive deep into the setups, but I will state that, at this moment in time, the Breeze software is stable, solid, easy to use, and hardly ever crashes. The user interface is OK, a bit bare bones, but this tool gets the job done, and that’s what we all want. Again, bear in mind that we have only used the Canon/Mac version. This software is installed on all our computers and is our go-to tool for image acquisition.
The last version of the Nikon Camera Control Pro 2 software that I used worked really well, except that it was difficult to check focus and scroll around, because that particular window had a restricted pixel size. It wasn’t as small as a thumbnail, but let’s say it did not take advantage of your screen size. All of the other functions were well behaved. Check it here: http://www.nikonusa.com/Nikon-Products/Product/Imaging-Software/25366/Camera-Control-Pro-2.html
The Canon capture utility (free with the purchase of a new camera) has a great interface, looks clean, and works well, but it could be better, much better: it could be more stable. Sometimes it just flakes out and crashes. We used it for years with lots of happy moments, but toward the end we had a bitter breakup. As RTI grew and we pushed the technology, we began to experience flaws, specifically with the Live View focus controller functions (and their algorithms). Numerous frustrating crashes occurred when we asked it to perform fine focusing adjustments in the magnified mode. This is pretty important, considering that RTI requires the subject to be in focus. Software crashes were even more problematic when we used a modified IR/UV camera; for reasons we cannot explain, the software just didn’t adjust well to the different wavelengths of light under those conditions.
A few more comments:
If you use good glass (think prime lenses and superior optics), both the Canon and the Nikon are going to get you professional results. We know many, many Canon RTI shooters, as well as a few Nikon shooters (and a Hasselblad shooter or two). I think the majority of users tend toward Canon. When we are asked to purchase equipment for clients, we always steer them toward the Canon family.
That said, I have seen professionals purchase a suite of Nikon gear and then convert all the new gear over to Canon (and from ongoing conversations, they didn’t go back to Nikon).
At CHI we’re Canon all the way.
Thanks for reading. Happy f-stop!
Filed under: Commentary, Technology, Training, Workshops | Tags: paper pulp molds, reflectance imaging, Reflectance Transformation Imaging, RTI, Smithsonian, Technology, Training, virtual archaeology, visualization, Workshops
By Guest Blogger E. Keats Webb
I mentioned briefly last month some of the objects that we have been using Reflectance Transformation Imaging (RTI) on here at the Smithsonian’s Museum Conservation Institute (MCI). One project involved paper “squeezes,” paper pulp molds made from the surfaces of ancient monuments at archaeological sites.
In some cases these “squeezes” are primary resources containing rare intellectual and physical information from monuments that have deteriorated or sites that no longer exist. Unfortunately, the fragility of the paper limits researchers’ and scholars’ access to these objects. This makes them great candidates for non-destructive documentation of the 3-D characteristics of their surfaces with the RTI method.
Senior Conservator Melvin Wachowiak and I worked with conservators from a Smithsonian museum, imaging a couple of examples of paper squeezes to see what the RTI method might contribute in terms of preservation and research.
Since the squeezes are molds taken from stone inscriptions, the writing is reversed. After image acquisition we “flipped” the images using imaging software, and then processed the files so that the final RTI product could serve as a legible, rectified document for researchers to study.
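The flipping step itself is conceptually simple: because a mold is a mirror image of the inscription, each pixel row just needs to be reversed left to right. As a minimal sketch (assumed toy representation, not the actual MCI workflow, which uses full-resolution image files), a tiny 2D list can stand in for an image:

```python
# Minimal sketch: a mirror-image capture, such as a squeeze of a stone
# inscription, is made legible by reversing each pixel row horizontally.
def flip_horizontal(image):
    """Mirror an image (a list of pixel rows) left to right."""
    return [list(reversed(row)) for row in image]

# A 2x4 "image" whose rows read backwards, as a mold of an inscription would:
reversed_capture = [
    ["D", "C", "B", "A"],
    ["H", "G", "F", "E"],
]
print(flip_horizontal(reversed_capture))  # rows now read A..D and E..H
```

In practice, any standard imaging application offers this as a “flip horizontal” or “mirror” operation applied to every captured frame before RTI processing.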
We found that the RTI method increases legibility through the combination of raking light features and the specular enhancement option while also creating a surrogate that can be more extensively “handled” by researchers and scholars. (See images below.)
We continue to use RTI on a daily basis and look forward to sharing more with you about how the method is helping the scientists and conservators within MCI and the Smithsonian for the research and preservation of the collections.
Filed under: Commentary, Technology, Training, Workshops | Tags: Digital Conservation, Digital Preservation, Preservation, reflectance imaging, Reflectance Transformation Imaging, RTI, Technology, virtual archaeology
By Guest Blogger E. Keats Webb
Over the past three months I have been interning with Senior Conservator Melvin Wachowiak at the Smithsonian’s Museum Conservation Institute (MCI), exploring advanced imaging techniques for research and preservation of the collections, focusing mostly on the Reflectance Transformation Imaging (RTI) method. We started in September with an African leather shoulder bag, the RTI enhancing the faint tooling and degradation on the surface. In October we imaged a writing slate from the 1600s found in an archaeological excavation of a well at the site of Jamestown, Virginia. RTI proved an excellent tool for interpreting the drawings and writings found on both surfaces of the slate and at all orientations. Other types of objects that we have explored include paper “squeezes” (molds taken from stone inscriptions), oil paintings, a jawbone, ebony-and-ivory inlaid cabinet doors, and a daguerreotype. We work alongside scientists and conservators on a daily basis at MCI, and RTI complements the studies happening within our labs, along with other advanced imaging techniques used for research and preservation.
Set-up for the RTI of the Jamestown Slate.
E. Keats Webb left, Melvin Wachowiak right; Photo: Charles Durfor
Filed under: On Location | Tags: Digital Conservation, Digital Preservation, FAMSF, Fine Arts Museums of San Francisco, partners, reflectance imaging, Reflectance Transformation Imaging, RTI, Technology, virtual archaeology
In October 2009, Susan Grinols, Director of Photo Services and Imaging for the Fine Arts Museums of San Francisco (FAMSF), assembled a team of scholars and imaging professionals to document an anthropoid coffin with the highlight RTI technique.
Sue and her team members were thrilled with the final results. In the RTI Viewer, hard-to-decipher glyphs suddenly became clearer and easier to view. The curators, interpreters, and conservators were shocked at how much detail the RTI technology delivered, in a completely nondestructive manner.
For a brief look into the RTI capture session, be sure to view the Flickr gallery.