The Math of Exposure Values

Exposure Values are a somewhat important concept, kind of, if you are trying to maintain consistent exposure while fiddling with your camera in manual mode.  Realistically, by shooting in aperture or shutter priority, with your ISO locked down, your camera will maintain the proper exposure value for you anyway.

Exposure Value Chart

A photographer might need to prepare for any value between -6 and 16, and a difference of 1 exposure value is called a “stop”. Remember, though “stopping down” generally refers to using a narrower aperture, you can also cut the exposure by one stop by halving your exposure time (“shutter speed”) or halving your ISO!
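Since a stop is just a doubling or halving of light, you can sketch the familiar full-stop f-number scale in a couple of lines of Python (a quick illustration, not from any camera spec). Light gathered scales with the square of 1/f-number, so full-stop f-numbers step by the square root of 2, not 2:

```python
import math

# Full-stop f-numbers: each step gathers half the light of the last,
# so the f-number grows by sqrt(2) per stop.
fstops = [round(math.sqrt(2) ** i, 1) for i in range(9)]
print(fstops)  # [1.0, 1.4, 2.0, 2.8, 4.0, 5.7, 8.0, 11.3, 16.0]
```

Notice lenses are marked f/5.6 and f/11, not f/5.7 and f/11.3; that rounding is a sneak preview of the dirty secret discussed below.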

The actual math of this does not matter too much to a normal photographer. Or an abnormal photographer. It’s somewhat fascinating to think about this if you are designing experiments or writing software that does crazy camera things though.

Honestly, if you’re doing crazy math-based camera experiments, you probably didn’t need to read this. However, I’m upset that the equations for the exposure triangle (which dictate the relationship between iso/shutter speed/f-number) are not readily available without you solving for these elements yourself, so I figured I’d commit them to one place on the internet. If you need these, look no further.

Note many resources will give versions of these equations using EV_100, or Exposure Value at ISO 100. No standards body has codified the use of EV_100, and while some websites claim that “Light Value” refers to non-ISO100 EV (and some claim the opposite, that it means EV_100), the definitions contradict each other around the internet. Forget about light value for now; it’s not a useful term, it’s the math-photography equivalent of “nonplussed”. Just use these equations, and if you want ISO100 or EV_100, plug in 100 for the ISO.

If these all feel like they are yielding answers that are just a littttle off-  welp, congratulations, you’ve learned the dirty secret of photography.  The f-numbers are very, very slightly rounded.  The math below is exact, where:

  • F is the relative aperture (f-number)
  • t is the exposure time (“shutter speed”) in seconds
  • S is the ISO arithmetic speed
  • EV is the exposure value


S = \frac{100 \times F^2}{t \times 2^{EV}}
iso = (100 * (fnumber ** 2))/(shutter_speed * (2 ** exposure_value))


F = \sqrt{ \frac{S \times t \times 2^{EV}}{100} }
import math
fnumber = math.sqrt(((iso*shutter_speed) * (2 ** exposure_value))/100)


t = \frac{100 \times F^{2}}{S \times 2^{EV}}
shutter_speed = (100 * (fnumber ** 2))/(iso * (2 ** exposure_value))


EV = \log_2(\frac{100 \times F^2}{S \times t})
import math 
exposure_value = math.log2((100 * (fnumber ** 2))/(iso * shutter_speed))
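To sanity-check these, here’s that last equation wrapped in a function and pointed at the Sunny 16 rule (f/16 at 1/125s, ISO 100, which should land at roughly EV 15):

```python
import math

def exposure_value(fnumber, shutter_speed, iso):
    # EV = log2(100 * F^2 / (S * t)), per the last equation above
    return math.log2((100 * (fnumber ** 2)) / (iso * shutter_speed))

# Sunny 16: f/16 at 1/125s, ISO 100 should be about EV 15.
print(round(exposure_value(16, 1/125, 100), 2))  # 14.97
```

It comes out 0.03 stops shy of 15 because 1/125 is itself a rounded shutter speed; plug in the mathematically clean 1/128 and you get EV 15 exactly.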


If these are useful for even one photography nerd blossoming into an engineer nerd, then this was completely worth it!

Demosaic for Tintype or Blue-Spectra Photography

After reading my post on how to recreate the look of tintype and wet-plate collodion digitally, you may have been left wanting the code to appropriately demosaic/debayer.

This is a photography blog, not a coding blog, so please bear with me for my first go at making this accessible.

To recap from my earlier posts:

In looking at an image captured through a B25 glass, you will only capture 1 out of every 4 pixels correctly, thanks to the Bayer color filter array (CFA) on top of your digital camera’s sensor.  That means 3 out of every 4 pixels are “dead” or, at least, “incorrect”.  Here’s a quick script to throw them out.

As we can see in this close-up of the raw data, this image has a lot of dead pixels. Let’s get rid of ’em.

I know I will get some feedback that this is not a debayer, or demosaic, in any traditional sense.  You are technically right, haters. This is simply a downsample to strip out the three dead pixels.

You will quarter the resolution of your image with this process.  I’m sorry, but it has to happen.  The resulting clarity will play nicely with any upscaling processes you then wish to engage upon, however, and, especially in portraiture, artificial intelligence can work wonders.
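If you want to see the idea without installing anything, here’s a toy sketch in pure Python (illustration only; real raw data is numbers, not letters): keep the blue sample of each 2x2 RGGB block and discard the other three.

```python
def keep_blue(mosaic):
    # For an RGGB layout, the blue sample sits at row 1, column 1 of
    # each 2x2 block, so take every other value starting there.
    return [row[1::2] for row in mosaic[1::2]]

# A 4x4 mosaic becomes 2x2: that's the quartered resolution.
mosaic = [
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
]
print(keep_blue(mosaic))  # [['B', 'B'], ['B', 'B']]
```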

I originally wrote this in MATLAB, but instead let me give it to you with freeware.  

Software Requirements

You will need the following command-line tools. These are free, but if you aren’t comfortable using a command-line interface (CLI) then I have bad news for you: this is unfortunately how you’re going to have to do it.

ImageMagick – This is THE tool for altering an image from the command line. It exists for all platforms, so whatever computer you are running, you can use it.

DCRaw – This is THE tool for opening raw files from a digital camera. RawTherapee is a front-end for this tool. Unfortunately, you still will have to install it, even if you have RawTherapee, to use the command-line interface.

I recommend, if you are on a Mac or Linux, simply using Homebrew to grab these-  open up a terminal window, and type

brew install imagemagick

and then 

brew install dcraw

On PC, this might be a bit more complex, but these tools exist for all platforms, which is why I chose to code in this fashion.

Assuming these are installed correctly, you should be able to run this single line of code to demosaic down to the blue sensor pixel.

The Code

Open a terminal window, navigate to where your RAW file lives, and type:

dcraw -c -D -T -6 -g 2.4 12.92 -o 1 YOURINPUTFILE.(NEF.RAW) | convert - -roll +1+1 -sample 50% YOUROUTPUT.TIF

This tells DCRaw:

  • (-c) Output in a fashion that can be handed off to ImageMagick (STDOUT for piping)
  • (-D) Monochromatic, untouched output
  • (-T) Output a TIFF
  • (-6) 16-bit output
  • (-g 2.4 12.92) Gamma 2.4 with a toe slope of 12.92, which is, more simply put, the appropriate setting for sRGB
  • (-o 1) Set the colorspace to sRGB with a D65 white point.  You can tinker with this, but ultimately it shouldn’t make too big of a difference, since we are compressing to a monochromatic pipeline here regardless.

Note that some tutorials will tell you to use a -4.  This will output a linear electro-optical transfer function, and unless you really know what you’re doing, you probably don’t want this.  I would argue you really don’t want this.

This then pipes ( | ) the data to ImageMagick- or rather the utility it provides named “convert”:

  • (-roll +1+1) Offset your image by the requisite number of pixels to place a blue pixel at the top left of the image.  Assuming RGGB, this is +1+1.  If you have BGGR, then this will be +0+0.  I recommend trying them both.  Heck, you can try +0+1 and +1+0 to see what would typically be the two green pixels.  The one that looks the brightest is your correct image.
  • (-sample 50%) Downsample at 2:1, meaning you drop every other pixel in both the X and Y directions
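If you’d rather have a cheat sheet than trial and error, the offset is just the blue sample’s position within the 2x2 pattern. Here’s a hypothetical helper (not part of either tool) that spells this out; because the pattern repeats every two pixels, rolling the image by the blue sample’s own column and row wraps a blue pixel around to the top left:

```python
def roll_for_blue(cfa):
    # cfa is the 2x2 layout read left-to-right, top-to-bottom,
    # e.g. "RGGB" or "BGGR".
    i = cfa.index("B")
    return f"-roll +{i % 2}+{i // 2}"

print(roll_for_blue("RGGB"))  # -roll +1+1
print(roll_for_blue("BGGR"))  # -roll +0+0
```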

To test these I ran:

dcraw -c -D -T -6 -g 2.4 12.92 -o 1 DigitalCollodion_D610-1297copy.nef | convert - -roll +0+0 -sample 50% OutputTest_A.tif

Close-up on the result: This looks dim and, dare I say, stupid. It looks like we’re peering at the data captured by the red pixel of the color filter array, which is the “leakiest” of the three color filters, accepting some blue light.

dcraw -c -D -T -6 -g 2.4 12.92 -o 1 DigitalCollodion_D610-1297copy.nef | convert - -roll +0+1 -sample 50% OutputTest_B.tif

Close-up on the result: Looking better, but still far dimmer than I’d expect.

dcraw -c -D -T -6 -g 2.4 12.92 -o 1 DigitalCollodion_D610-1297copy.nef | convert - -roll +1+0 -sample 50% OutputTest_C.tif

Close-up on the result: Looks identical to the last one- this means that, as anticipated, we are looking at one of the two G pixels on the color filter array.

dcraw -c -D -T -6 -g 2.4 12.92 -o 1 DigitalCollodion_D610-1297copy.nef | convert - -roll +1+1 -sample 50% OutputTest_D.tif

Close-up on the result: Bingo! This is the blue pixel!

Not surprisingly, OutputTest_D matches the blue debayer pixel for my Nikon.

Note this is for a traditional Bayer filter.  You might wish to review the description of color filter arrays, and see if there is a better fit for you and your sensor.  I don’t have the foggiest idea how this might be accomplished with an X-Trans color filter array with free command-line tools, unfortunately.