Recreating 1850s Vintage Photography (and IR!)

Using Careful Science and Pretty Faces

While I was visiting San Francisco, Kristy Headley, a dear friend and fellow engineer, showed me her studio. There I was lucky enough to sit for her while she did some vintage tintyping. Tintyping was one of the earliest forms of photography, popular in the 1850s. It was a kind of incredibly inconvenient Polaroid- the photos appeared almost instantly, after a quick chemical wash, on plates of iron or aluminum (never actually tin). All you needed was a very, very large camera, plates treated with “collodion” to make a light-reactive surface, and a sizeable collection of chemicals. Unfortunately, the genuine process carries a real risk of explosion, cyanide gas exposure, blindness from silver nitrate splashback, or getting whacked out on ether.

“Dusty”
Tintype/ferrotype by Quinn Jacobson – CC BY-SA 3.0

Kristy showed me each painstaking step, and I was enamoured with the result. With her mastery of lighting techniques, she’d produced tintype photographs where every detail popped- every freckle, hair, and eyelash seemed to stand out. Yet, these looked very different from modern photography. There was a certain, seemingly unquantifiable trait to them. Why?

I set out to quantify this ol’ timey look, and the fashion by which one could reproduce it with a digital camera. Perhaps I could share in this delight without keeping jugs of volatile chemicals in my room.

Look to the Spectrum!

Kristy had told me that her exposure plates were only sensitive to certain colors: UV, blue, and some green. There was a specific range of wavelengths this collodion emulsion was sensitive to- one that doesn’t match human vision.

Human vision, as we can see above, is a sensitivity to part of the electromagnetic spectrum. Electromagnetic waves between about 400 and 700 nanometers are perceived by us as light- the seven colors of the rainbow! (Interesting article about this to come later) If we dip to wavelengths shorter than 400, we get imperceptible ultraviolet (UV) light; if we rise to wavelengths above 700, we get imperceptible infrared (IR) light.

But what do vintage photographic emulsion surfaces see?

Engineer Niles Lund seems to be the only person who has researched the spectral sensitivity of collodion, the emulsion chemical stew used to make wet-plate and tintype photographs in the mid-1800s. After exchanging a few emails, Niles sent me this updated spectral chart.

Spectral sensitivity of cadmium bromide, potassium iodide, and a collodion emulsion containing both – available at https://www.lundphotographics.com/index.php/tips-and-techniques#CollodionOptics

So now we know what to recreate if we want “digital collodion”! We need to cut out the upper half of the visible spectrum, allowing nothing above about 520 nanometers into the camera.

I’ve done much work in this field, so I thought this would be a great time to blend science and art to explore this concept, and compare it to other hyper-spectral imagery!

Setting up an experiment

Now we invite some friends and models to come sit for us! I asked the models to avoid skin-concealing makeup and zinc-oxide sunscreen, a physical blocker that obstructs UV-A rays. They were asked to model under a broad-spectrum light source (the sun), with a silver reflector as a secondary light source, and to maintain a pose for a few minutes while I photographed each under the following conditions:

Visible Spectrum

This is what humans see (obviously). Using three cones, sensitive to three different ranges of wavelengths that loosely map to red, green, and blue, we see everything from violet to red. Violet is the shortest wavelength; red is the longest. Not surprisingly, this is what digital cameras see as well. Silicon sensors, whether CCD or CMOS, can theoretically take in everything from 290nm to 1100nm. However, it would be maddening to have your photography overexposed, blooming, and flaring due to lightwaves you cannot see- to say nothing of the threat of overheating- so optical filters placed directly over the sensor limit it to photographing, more-or-less, the human-visible spectrum. Photons that don’t play nice in this range are either bounced back out of the lens, or absorbed and destroyed.
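To make that concrete, the camera’s effective response is just the silicon’s broad sensitivity multiplied by the transmission of the filter stack sitting on top of it. Here’s a minimal sketch with idealized box-shaped curves (stand-ins, not measured data):

```python
import numpy as np

wavelengths = np.arange(280, 1110, 10)  # nm, a coarse grid

# Idealized "box" curves- real responses roll off gradually.
silicon = ((wavelengths >= 290) & (wavelengths <= 1100)).astype(float)
stack_filter = ((wavelengths >= 400) & (wavelengths <= 700)).astype(float)

# An unmodified camera only records what survives both.
effective = silicon * stack_filter
print(wavelengths[effective > 0].min(), wavelengths[effective > 0].max())  # 400 700
```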

Here’s what our model Kayla looked like in the visible spectrum:

Captured on an unmodified Nikon D610 through a normal UV-limiting 50mm lens.
Notice her light freckles, dark shirt, and slightly light eyes

Orthochromatic Blue (Collodion AND Squid Vision)

This could be seen as roughly the spectrum that octopuses and squid view the world in. Humans, except for rare and awesome mutants and the colorblind, are “trichromatic”, with three cones that each admit a different range of wavelengths. Cephalopods only have one- the S cone, which is sensitive in almost the exact fashion of the spectral curve of 1850s collodion (except for UV). They might make a different mechanical use of their eyes that allows for more color vision, but I digress…

By using Schott BG25 bandpass glass, bought off a gentleman who fits esoteric German glass into filters, on a lens that cuts UV (any normal lens) on any unmodified camera, we get a theoretical spectral sensitivity that looks like:

The “double bandpass” ionically colored glass made by Schott AG in Germany nicely mimics the spectral response of the collodion.

Of course, any photons the filter transmits that the camera can’t capture become irrelevant, so on an unmodified camera we will get a spectral response that looks like:

This means that on an unmodified camera, using a lens that doesn’t block UV, we’re going to get an image where the focus is more on near-UV damage to the skin- concentrations of freckles and blemishes are going to be much more pronounced. There will be a light leak of very red photons, and we’ll be missing much of the UV range, but we’re very close to vintage emulsion now.
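To make the “irrelevant photons” point concrete, here is the same kind of toy model with a stand-in for the BG25 added in front of an unmodified camera. The window and leak numbers are illustrative guesses, not Schott’s published curve:

```python
import numpy as np

wavelengths = np.arange(280, 1110, 10)  # nm
silicon = ((wavelengths >= 290) & (wavelengths <= 1100)).astype(float)
hot_mirror = ((wavelengths >= 400) & (wavelengths <= 700)).astype(float)

# Stand-in for the BG25's double bandpass: a blue-green window
# plus the deep-red leak mentioned above. Not Schott's real curve.
bg25 = (((wavelengths >= 320) & (wavelengths <= 520)) |
        ((wavelengths >= 660) & (wavelengths <= 720))).astype(float)

# What the unmodified camera records through the filter:
digital_collodion = silicon * hot_mirror * bg25
print(wavelengths[digital_collodion > 0])  # 400-520, plus a 660-700 red leak
```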

Our model, under the same lighting conditions (with an exposure compensation of +0.7, calibrated from an 18% gray card):

Captured on an unmodified Nikon D610 through a UV-permitting 50mm lens and a BG25 double bandpass filter.
Notice her pronounced freckles, dark shirt, and very light eyes

By gum, we’ve done it! With that nanty narking, I think we can say we’ve captured the ol’ timey feel of collodion!

…but it is not scientifically accurate. This is about the best we can do on an UNMODIFIED camera, as we need some expensive modifications to allow UV to reach the sensor. More on that later.

Let’s see what she looks like in some other, more bizarre wavelengths!

Near Infrared and Short Wave Infrared (Snake Vision)

This is, likely, what snakes see. This covers the orange-and-red end of the human-visible spectrum plus the infrared waves nearest to it (near infrared, or NIR). This is what I like to photograph foliage in, because it can be false-colored and remapped using the color channels your camera already knows how to deliver data in. I have a Nikon D80 that has been modified to shoot in this range, with an upper bound (unmeasured) set only by the physical limits of a silicon CCD.
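For the curious, the classic false-color remap for IR foliage is a simple red/blue channel swap. A minimal sketch of that generic technique (not necessarily the exact processing used for these shots):

```python
import numpy as np

def falsecolor_swap(rgb: np.ndarray) -> np.ndarray:
    """Swap the red and blue channels of an (H, W, 3) image.

    On an IR-converted camera the red channel carries most of the IR
    signal; moving it into blue gives foliage its familiar false-color look.
    """
    out = rgb.copy()
    out[..., 0] = rgb[..., 2]
    out[..., 2] = rgb[..., 0]
    return out
```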

For some reason, for the duration of this experiment, this setup absolutely refused to focus properly. This camera does not have live-view, and it is very difficult to assess these images until they’ve been processed on a computer. Nonetheless, these blurry images were interesting…

Captured on a modified Nikon D80 through a normal 50mm lens, no filter.
Notice there are no freckles, and her shirt appears grey.

Short Wave Infrared

I don’t think anything “sees” solely in this spectrum, though it is a difficult concept to explore, so perhaps time will tell. To capture the full breadth of short-wave infrared, or to move into longer-wave infrared, we would have to use a special sensor made of indium gallium arsenide. It’s interesting to see that sun damage and melanin deposits, such as freckles and tans, do not appear in this band. The IR photons do a good deal more sub-surface scattering in the skin than visible light does, rendering each person as a waxen figure.

Captured on a modified Nikon D80 through a normal 50mm lens and a BG25 double bandpass filter.
Notice there are no freckles, her shirt now appears white, her eyes have darkened considerably, and all of her skin appears waxen and soft.

Polishing these photos up a bit with vintage-style split-toning

You might have noticed these are not delivered in monochromatic black and white, in the style of a silver gelatin process. Instead, I chose to render them out with a bit of split-toning. This has been a common practice since the birth of photography, using various methods to bring one color to the highlights and another to the shadows.

In this case, I made the highlights gold to mimic the flame-gilding of the mid-1800s, which “fixed” the highlights of a photograph using a gold chloride solution and an open flame. As ambrotypes caught on in the 1860s, “ruby”-hued backs became more common in photography- so I enjoy using a red tint in my lowlights.
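For a sense of how simple split-toning can be, here is a sketch that blends two tints by luminance. The gold and ruby values are eyeballed stand-ins, not calibrated to gilding or ruby glass:

```python
import numpy as np

GOLD = np.array([1.00, 0.84, 0.45])  # eyeballed highlight tint
RUBY = np.array([0.55, 0.12, 0.12])  # eyeballed shadow tint

def split_tone(mono: np.ndarray) -> np.ndarray:
    """Tint a monochrome image (H, W, values in 0..1): shadows toward
    ruby, highlights toward gold, scaled back by the original luminance."""
    L = np.clip(mono, 0.0, 1.0)[..., None]
    tint = RUBY + (GOLD - RUBY) * L  # blend the tint by luminance
    return tint * L                  # re-apply brightness
```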

A Horrible, No-Good Problem Arises in the Bayer Filter!

The images I captured occasionally showed very odd artifacts.  Fine hairs would get lost, and appear in a fashion that almost looked like… a shadow?  But why would such a thing happen?

Human-visible spectrum photo (left) shows frizzy hair, as anticipated.
Limited orthochromatic spectrum photo (right) renders the hair as a weird blobby shadow. Why?

Then I realized we’ve surpassed the intention of a digital sensor, and we’re paying the price.

All digital sensors are actually monochromatic- black and white. In order to capture color, a translucent, microscopic grid of red, green, and blue tiles is spread across the sensor, with one tile to each pixel. This is a “Color Filter Array”, or CFA. Conventionally, this grid is laid out in a “Bayer pattern”, named for Bryce Bayer of Eastman Kodak, who conceived of it. This Red-Green-Green-Blue pattern mimics the human eye’s sensitivity (a range where two of our three cones’ sensitivities overlap) by letting the camera consider twice as much green. Interesting fact: this was originally supposed to be Cyan-Magenta-Yellow, to better subtract out the necessary wavelengths, but the technology to make such a CFA simply did not exist at the time.

The Bayer filter atop a sensor
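For reference, here is a sketch of that layout, assuming the common RGGB arrangement (some cameras use BGGR, GRBG, or GBRG instead):

```python
import numpy as np

def bayer_masks(h: int, w: int):
    """Boolean masks marking which photosites sample which channel
    in an RGGB Bayer layout."""
    r = np.zeros((h, w), bool); r[0::2, 0::2] = True
    g = np.zeros((h, w), bool); g[0::2, 1::2] = True; g[1::2, 0::2] = True
    b = np.zeros((h, w), bool); b[1::2, 1::2] = True
    return r, g, b

r, g, b = bayer_masks(4, 4)
print(r.mean(), g.mean(), b.mean())  # 0.25 0.5 0.25- half the pixels are green
```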

The sensor still functionally reports the incoming light as black-and-white, but now the camera- or the camera raw processor, if you move the file to a computer- performs a process called “demosaicing”, or “debayering”. This reconstructs the image from four-pixel groups, using a variety of methods that vary by camera and software, finally outputting RGB channels at full resolution.
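A deliberately crude sketch of the idea- collapsing each four-pixel RGGB group into a single RGB pixel. Real converters interpolate far more cleverly and keep full resolution, but this shows where the color comes from:

```python
import numpy as np

def superpixel_demosaic(mosaic: np.ndarray) -> np.ndarray:
    """Collapse each 2x2 RGGB cell of a raw mosaic (2D array, even
    dimensions) into one RGB pixel. Crude but illustrative; real
    demosaicing interpolates instead of downsampling."""
    return np.stack([
        mosaic[0::2, 0::2],                             # red site
        (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2,  # two green sites, averaged
        mosaic[1::2, 1::2],                             # blue site
    ], axis=-1)
```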

This means every digital photograph you have ever taken has functionally been reconstructed from quarter-resolution samples! Obviously, we haven’t noticed, so we don’t really mind- but when we’re missing an entire color channel (red), and sharply attenuating another (green), we’re effectively using only one out of every four pixels.

A closeup of an image captured in the “collodion spectrum”, pre-demosaic.
Note only 1 in 4 pixels is appropriately captured.

Using RAWTherapee, the discovery of which was a definite win in this project, I could explore my sensor data before it was demosaiced and processed into the image I would see on a screen. As you can see above, it was disheartening.

What is a demosaic process supposed to use to rebuild the image? Without the appropriate data, it simply uses blank black pixels to try and interpolate data. This is going to get weird. Without writing your own demosaic algorithm, and accompanying debayering software to run it, there is no fix for this. (Update: I fixed it by writing my own demosaic algorithm and accompanying debayering software.)

Images captured in the “digital collodion” spectrum on a color digital camera will always come out a little muddled when opened by your camera or photo-editing software, because of this simple fact. The luminance will be interpreted as roughly 60% less than it actually is, artificially flattening the image. Details will be lost, and reconstructed as goo.
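That figure is a back-of-envelope estimate. A sketch of the reasoning using Rec. 601 luma weights (the green attenuation number is an assumption for illustration):

```python
# Y = 0.299 R + 0.587 G + 0.114 B  (Rec. 601 luma weights)
R_W, G_W, B_W = 0.299, 0.587, 0.114

green_retained = 0.4  # assumed: green sharply attenuated by the filter
luma_seen = B_W + G_W * green_retained  # red contributes nothing at all
print(f"~{1 - luma_seen:.0%} of luminance lost")  # ~65% of luminance lost
```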

Immediately after posting this, I thought to look at the spectral response of each of the Bayer tiles. It turns out the blue tile perfectly matches the orthochromatic technique, so I can effectively write an algorithm that simply discards the other three pixels in each group, reducing this to a highly accurate 1/4-resolution image. Updates to follow.

Update: Fixing the Debayer Problem!

After writing a surprisingly simple script in Matlab- an embarrassingly simple downsample that throws out the three pixels in each group that are not relevant to the spectrum we captured in- we have a solution. I cannot stress enough that we got lucky. This almost teaches bad color science- you typically cannot select a wavelength in post. We just got very, very, comically lucky: my Nikon D610 uses the same spectral sensitivities in its blue Bayer tile as collodion once did. That is an odd coincidence, to say the least, and worth researching more later.
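My script was in Matlab, but the idea ports to anything. A rough equivalent sketch in Python, assuming an RGGB layout with the blue photosite at offset (1, 1)- check your own camera’s pattern:

```python
import numpy as np

def extract_blue_plane(mosaic: np.ndarray) -> np.ndarray:
    """Keep only the blue-filtered photosites of an RGGB Bayer mosaic.

    mosaic: 2D array of raw sensor values, e.g. loaded with a raw reader.
    Returns a quarter-resolution single-channel image- no interpolation,
    no made-up black pixels."""
    return mosaic[1::2, 1::2]
```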

I will later rewrite this script for free interpreters, so that anyone can follow in my footsteps without spending money on expensive coding platforms. A full new article will be posted exploring this. Update: you can find the code, rewritten to run on free software, in my next post.

The Results are Stunning

Image as brought in from the camera (left) vs run through a custom de-mosaic (right)

The differences far surpassed what I expected. Pleasantly, the math suddenly matched what my light meter had suggested in-camera. Now, without any exposure adjustments, the photograph matches the levels of the full-spectrum photo- remember, I had already taken exposure compensation into account, allowing an extra 2/3 of a stop of light, when I took the photo.
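For anyone checking the arithmetic: each stop doubles the light, so 2/3 of a stop (the +0.7 EV dialed in earlier) works out to about 1.59x:

```python
extra_light = 2 ** (2 / 3)  # each full stop doubles the light
print(f"+2/3 stop admits {extra_light:.2f}x the light")  # 1.59x
```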

There’s no bringing back the fine details- those little wisps of hair are lost to the ages- but they didn’t become a gooey mess. So much detail has been restored to the photo that I am absolutely stunned; I did not expect this level of improvement. I went back and reuploaded this spectrum in the earlier section with Kayla.

One more comparison:

Image as brought in from the camera (left) vs run through a custom de-mosaic (right).
The difference and detail recovered is nothing short of stunning.

Perhaps my short-wave infrared photos could benefit from this process as well?

All of the Photo Results

All results are delivered as near to from-camera as possible, using small exposure adjustments to match skin tones. When gradient adjustments were used, they were applied uniformly across all photos of a model, to keep lighting consistent between wavelengths.

Kaitlin

Kevin

Christine

Xach

Caitlin

Kayla

Next Steps to Take it Further

In order to fully and accurately capture the collodion sensitivity range, I need not only a lens that allows ultraviolet photons to pass, as used in this experiment, but also a camera with a sensor modified to receive UV light. Because this is not a fast or cheap prospect, I’ve been dragging my feet to see if it could be done in conjunction with another modification…

As photographing humans draws no benefit from false color, hyperspectral photography (such as UV) is best delivered in black and white. Due to the Bayer-pattern problems described above, a color sensor actively hurts collodion-spectrum portraiture- if the construction of the image relies on a missing red channel, then we need a monochromatic sensor to gather an image through these filters with due sharpness. So we have two options (update: a third option has arisen):

  • Buy a monochromatic camera, which can run between 3,000 and 50,000 dollars.
  • Do a “mono-mod”- scrape the Bayer filter off of an existing camera. Conversion shops have assured me this is impossible on the Nikon cameras that I like to use, but I hold out hope for a “full spectrum” monochromatic modification.
  • Now that a custom debayer option exists that quarters the resolution, look into intelligent upscaling, possibly using AI, to regain the resolution lost in the process.

Final Thoughts

  • As I used attractive models in this experiment, instead of the lab tools one should use to characterize optics, the spectral range of each photograph is an educated guess at best, based on theoretical readings. Lab characterizations will follow.
  • Trust the math. I know things look underexposed and incorrect, but if you’ve set your exposure compensation based off an 18% gray card- or, realistically, ignored that and just set it to +0.7- then you can trust your light meter shooting in “orthochromatic blue”.
  • Don’t trust the Bayer filter. It’s bad and it should feel bad. Consider this in any hyperspectral pursuit, and in any pursuit where you limit your input wavelengths with an optical filter.
  • Sometimes there’s no substitute for the real thing- recreating wet-plate collodion and tintypes is severely limited by digital considerations- but limitation is the mother of invention!