While I was waiting for burlesque bunny models to prepare for an upcoming shoot, a Danish traveler wandered into our set. She was kind enough to pose so I could try out my brand-new lens, which I threw on the nearest camera, my modified infrared Nikon. As past explorations have demonstrated, this is not the sharpest setup: in infrared there is no guarantee that a critical focus plane will even exist if your aperture is opened too wide.
To my surprise, the new Daguerreotype Achromat lens, a recreation of the Petzval design from 1839, worked perfectly with this modification. Apparently simple optical configurations are sharp as a tack in infrared, even wide open!
As she was captured, straight out of camera:
Of course, this is barely even a photograph. Let’s adjust this, and make it black and white:
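A straight black-and-white conversion is just a weighted sum of the color channels. As a minimal sketch (the tiny image and the Rec. 601 luma weights below are illustrative choices, not necessarily what my editor uses under the hood):

```python
import numpy as np

# Hypothetical 2x2 RGB image, 8-bit values, purely for illustration
rgb = np.array(
    [[[200, 60, 40], [10, 200, 30]],
     [[0, 0, 255], [128, 128, 128]]],
    dtype=np.float64,
)

# Rec. 601 luma weights -- one common choice for a B&W conversion
weights = np.array([0.299, 0.587, 0.114])

# Weighted sum over the channel axis gives a single gray value per pixel
gray = rgb @ weights  # shape (2, 2)

print(gray.round(1))
```

Different weightings (or a plain average) will render the same infrared frame with noticeably different contrast, which is why editors expose channel mixers for B&W work.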
But what if we want color?
As we all know, infrared pictures look really weird in traditional “false color”, which is how modern digital photographers often present their infrared work. Obviously, it’s impossible to present an invisible spectrum without modifying it to become visible, so SOME color has to be given to it. Further articles on traditional false coloring will come, but a cursory Googling will provide a litany of tutorials.
As an example, let’s look at the lovely Madeleine in this original infrared photo, after it’s been white balanced to pull the highlights into a blue range. She looks weird. And she’s not a weird-looking girl:
We swap the Red and Blue channels, the standard jumping-off point. Ummm… great? Now she looks weird AND sickly. We have done no good here.
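The swap itself is a one-liner on the pixel array. A minimal sketch in NumPy, using a made-up two-pixel "infrared" image:

```python
import numpy as np

# Hypothetical 1x2 white-balanced infrared image, channels in RGB order
img = np.array([[[180, 90, 30], [40, 120, 200]]], dtype=np.uint8)

# The classic false-color starting point: reverse the channel axis,
# which exchanges Red and Blue while leaving Green in place
swapped = img[..., ::-1]

print(swapped[0, 0])  # the first pixel with R and B exchanged
```

That single reversed slice is all most "channel swap" tutorials amount to; everything after it is taste.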
But perhaps there is another way? I’ve been poking at machine learning for some separate projects, so maybe we can use an artificial intelligence to fill in some gaps for us.
I’ve been interested in the work of Richard Zhang and the work he did with Phillip Isola and Alexei A. Efros on using artificial intelligence to colorize things. It seems a solid use of “Deep Learning” to train an AI on a large set of images, then have it color an image for you. You can read more about it here, if you want! http://richzhang.github.io/colorization/
Let’s pop the first black and white image from above into Zhang, Isola, and Efros’ automatic artificial-intelligence colorizer, a demo of which is provided freely at https://demos.algorithmia.com/colorize-photos/
Interesting. Kind of.
Ultimately we see that the AI recognizes that there is, in fact, a girl, and that its model (think of this as the memories the AI has retained from past training) recognizes foliage pretty well.
The only problem is, I’m not interested in foliage turning green, or the model’s eyes turning brown. I don’t much care for the colors bleeding out of the edges, either. I came to make a candy-colored infrared nightmare, and make a candy-colored infrared nightmare I shall!
Instead, I did more reading, and found this paper, a further exploration that involved Zhang, Isola and Efros. https://richzhang.github.io/ideepcolor/ By “steering” an AI, pointing to where certain colors should be, perhaps we can make a more interesting picture?
I followed some simple directions to run this code on a Mac, and within a few minutes of getting the program started, I had a colorized version I liked! No training or manual-reading was necessary; it was largely very intuitive.
To use iDeepColor, simply mouse over a spot on the image on the left, click, wait a second, and see what the AI recommends. Chances are pretty good that many of the points will already be a reasonable value, so click a point of color you like and, when the palette is recommended, do nothing. Click a point where you’d like to change the color, then choose a color in the a/b color gamut. I believe these to be the a* and b* channels from the 1976 CIE L*a*b* space, but we’ll explore this later. You may have to get finicky, adding a few points around an area you’ve just defined, to keep the color from bleeding out.
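If that a*/b* guess is right, each picker click is choosing a chroma pair while the lightness comes from the photo itself. As a rough sketch of how a single sRGB pixel maps into that space (this is the standard sRGB-to-L*a*b* conversion with a D65 white point, not code from iDeepColor):

```python
def srgb_to_lab(r8, g8, b8):
    """Convert one 8-bit sRGB pixel to CIE L*a*b* (D65 reference white)."""
    def linearize(c):
        # Undo the sRGB gamma curve
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    r, g, b = (linearize(c) for c in (r8, g8, b8))

    # Linear sRGB -> CIE XYZ
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b

    # Normalize by the D65 white point, then apply the Lab transfer curve
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

L, a, b = srgb_to_lab(255, 0, 0)  # a strongly red pixel
print(round(L, 1), round(a, 1), round(b, 1))
```

For a neutral gray, a* and b* sit near zero; pushing a* positive shifts toward red, and pushing b* positive shifts toward yellow, which matches how the picker's palette behaves.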
Honestly, despite the frustration of there being no “zoom” or “undo” function, this was so intuitive that there seems to be little left for me to explain.
After about 20 minutes futzing about, I was left with this:
Is it perfect? No. Is it novel? Absolutely! In fact, it gives me some ideas for colorizing more competently in Photoshop for future endeavors.
I hope this inspires some creative steps with colorization and AI in hyperspectral photography!