While I was waiting for burlesque bunny models to prepare for an upcoming shoot, a Danish traveler wandered into our set. She was kind enough to pose so I could try out my brand new lens, which I threw on the nearest camera, my modified infrared Nikon. As past explorations have demonstrated, this is not the sharpest setup: with a converted sensor there is no guarantee that a critical focus plane will even exist if your aperture is too wide open.
To my surprise, the new Daguerreotype Achromat lens, a recreation of the Chevalier design from 1839, worked perfectly with this modification. Apparently simple lens configurations are sharp as a tack in infrared, even while wide open!
As she was captured, straight out of camera:
Of course, this is barely even a photograph. Let’s adjust this, and make it black and white:
But what if we want color?
As we all know, infrared pictures look really weird in traditional “false color”, which is how modern digital photographers often present their infrared work. Obviously, it’s impossible to present an invisible spectrum without modifying it to become visible, so SOME color has to be given to it. Further articles on traditional false coloring will come, but a cursory Googling will provide a litany of tutorials.
As an example, let’s look at the lovely Madeleine in this original infrared photo, after it’s been white balanced to pull the highlights into a blue range. She looks weird. And she’s not a weird-looking girl:
We swap the Red and Blue channels, as is the standard jumping-off point. Ummm… Great? Now she looks weird AND sickly. We have done no good here.
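For anyone who wants to try that swap outside of an image editor, it's a one-liner with NumPy. This is just a sketch of the idea (the tiny sample array stands in for a real photo; NumPy is my choice here, not a tool from the article):

```python
import numpy as np

# Tiny stand-in for a white-balanced infrared photo:
# 2x2 pixels, each with R, G, B channels.
img = np.array([[[200, 50, 10], [180, 60, 20]],
                [[190, 55, 15], [170, 65, 25]]], dtype=np.uint8)

# The standard false-color starting point: swap Red and Blue.
# Reversing the last axis turns R,G,B into B,G,R; Green is untouched.
swapped = img[..., ::-1]

print(swapped[0, 0])  # the (200, 50, 10) pixel becomes (10, 50, 200)
```

With a real file you'd load the image into an array first (for example via Pillow's `Image.open`), swap, and save the result back out.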
But perhaps there is another way? I’ve been poking at machine learning for some separate projects, so maybe we can use an artificial intelligence to fill in some gaps for us.
I’ve been interested in the work Richard Zhang did with Phillip Isola and Alexei A. Efros on using artificial intelligence to colorize things. It seems a solid use of “Deep Learning”: train an AI on a large set of images, then have it color an image for you. You can read more about it here, if you want! http://richzhang.github.io/colorization/
Let’s pop the first black and white image from above into Zhang, Isola, and Efros’ automatic artificial-intelligence colorizer, a demo of which they provide freely at https://demos.algorithmia.com/colorize-photos/
Interesting. Kind of.
Ultimately we see that AI recognizes that there is, in fact, a girl, and the AI’s model (think of this as the memories the AI has retained from past studies) recognizes foliage pretty well.
The only problem is, I’m not interested in foliage turning green, or the model’s eyes turning brown. I don’t much care for the colors bleeding out of the edges, either. I came to make a candy-colored infrared nightmare, and make a candy-colored infrared nightmare I shall!
Instead, I did more reading, and found this paper, a further exploration that involved Zhang, Isola and Efros. https://richzhang.github.io/ideepcolor/ By “steering” an AI, pointing to where certain colors should be, perhaps we can make a more interesting picture?
I followed some simple directions to run this code on a Mac, and within a few minutes of getting the program started, I had a colorized version I liked! No training or manual-reading was necessary. It was largely very intuitive.
To use iDeepColor, simply mouse over a spot on the image on the left, click, wait a second, and see what the AI recommends. Chances are pretty good that many of the points will already have a reasonable value, so if the recommended palette looks right, do nothing. Where you’d like to change the color, click the point and choose a color from the a b gamut. I believe these are the a* and b* channels from the 1976 CIE L*a*b* space, but we’ll explore that later. You may have to get finicky, adding a few extra points around an area you’ve just defined to keep the color from bleeding out.
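As a rough mental model of that color picker (this is my reading of CIE L*a*b*, not anything from the iDeepColor documentation): a* runs from green to red and b* runs from blue to yellow, so a color hint is essentially an (a, b) point applied at a pixel while the lightness L comes from the grayscale image itself. A toy sketch:

```python
# Toy illustration of the a*b* plane from CIE L*a*b*:
#   a* < 0 leans green, a* > 0 leans red;
#   b* < 0 leans blue,  b* > 0 leans yellow.
# The function name and the simple sign test are illustrative only.
def describe_ab(a, b):
    horiz = "red" if a > 0 else "green"
    vert = "yellow" if b > 0 else "blue"
    return f"leans {horiz} (a*={a}) and {vert} (b*={b})"

print(describe_ab(40, -30))  # leans red (a*=40) and blue (b*=-30)
```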
Honestly, despite the frustration of not having a “zoom” ability, or an “undo” function, this was so intuitive there seems to be little left for me to explain.
After about 20 minutes futzing about, I was left with this:
Is it perfect? No. Is it novel? Absolutely! In fact, it gives me some ideas for colorizing more competently in Photoshop for future endeavors.
I hope this inspires some creative steps with colorization and AI in hyperspectral photography!
In short: astrophotography relies on letting as much light as possible into your camera before the Earth rotates too much and the celestial bodies begin to leave streaks, which typically means exposures under 30 seconds, depending on your lens.
Finding a Dark Sky
Mastery of your camera makes absolutely no difference if you aren’t shooting in an environment with very little light pollution.
Shoot Without the Moon
Ideally, you shoot between sunset and moonrise.
Don’t Forget the Weather
Living in California as I do, it becomes very easy to forget that weather exists. The skies are generally clear at night here, but when travelling this isn’t the case. Additionally, wind storms in the desert (the darker spots of California) kick up quite a bit of dust, which can become a cloying haze in even simple astrophotography.
Avoid Light Pollution
This is simple: check darkskyfinder.com and find a place that is very, very dark. Interestingly, these spots line up with ghost towns with some frequency.
Your Camera Settings
To better understand astrophotography, you need to assess the ideal settings, and then understand why those simply won’t work. Then we can dial in actual, pragmatic settings that will work.
The Ideal Settings
You want those stars to be as sharp as possible, right? And your lens likely has the best ratio of sharpness to focal depth at around f/7.
You want to let in as much light as possible, and with such a low ISO, you probably want to leave the shutter open for minutes at a time.
You want as little noise as possible, because in post, trying to differentiate between faint stars and noise may become difficult. Minimize noise by setting the ISO as low as possible, and the lowest possible is ISO100.
Try these settings, and see what you get. Blackness, with no stars. Clearly these settings don’t quite work, but we can reasonably assume that the two most important settings are the shutter speed and ISO.
Actual settings that work
Open up that aperture as much as possible. You are desperate for every bit of light, so open it up as far as the lens will allow: f/2.8 or lower, if you can.
Here you’ll need to use something called the “500 Rule”. Simply put, determine your maximum shutter speed time by dividing 500 by your lens’ focal length. I like to shoot on as wide a lens as possible for this reason, and with my Rokinon 14mm, I can shoot at (500/14), or 35 seconds. I cannot recommend the Rokinon 14mm enough for astrophotography.
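The 500 Rule reduces to one line of arithmetic. A minimal sketch (the helper name is mine, and the crop-sensor note is a common variant the text doesn't mention):

```python
# The "500 Rule": longest shutter speed (in seconds) before stars
# begin to visibly streak, for a full-frame sensor.
# On a crop sensor, a common variant multiplies the focal length
# by the crop factor before dividing.
def max_shutter_seconds(focal_length_mm, rule=500):
    return rule / focal_length_mm

# The Rokinon 14mm from the text:
print(max_shutter_seconds(14))  # ~35.7 seconds, which the article rounds to 35
```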
You very likely need to set this near your camera’s max; try around ISO3200. The lower the better, obviously, but at these shutter speeds you need all the sensitivity you can get.
Try to hit an Exposure Value (EV) of -7
The exposure value is a universal measure of how bright a scene is. There’s a formula to calculate it, but you don’t need to know it; you can use an online calculator, many of which come with a more in-depth explanation of astrophotography!
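For the curious, the formula those calculators implement is simple enough to sketch. The ISO-adjusted form below is my rendering of the standard definition, not something from the article, and the sample settings are the ones discussed above:

```python
import math

def exposure_value(f_number, shutter_s, iso=100):
    """ISO-adjusted exposure value: EV = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# Settings from this article: f/2.8, a 30-second exposure, ISO 3200.
print(round(exposure_value(2.8, 30, 3200), 1))  # about -6.9, i.e. right at EV -7
```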
Consider your foreground
You might find that shooting the dizzying expanse of the Milky Way gets old quickly; without capturing the environment you are experiencing it in, these shots often lack impact.
Consider tree silhouettes, models on rock outcroppings, or interesting structures.
Think about painting the environment with flashlights, or sprinting around firing your flashes during the long exposures.