This is the third and final part of the Camera Jargon series. Hopefully, I have explained most of the popular terms in a way that a beginner can understand. I left out some of the technical details when I felt they were not necessary to understand the concept. Some of these terms are only explained briefly and have much more to them. As I keep writing this blog, I intend to go into them in more depth.
Depth of Field
Depth of Field, or DoF for short, is the area of an image that appears reasonably sharp or in focus. I discussed DoF briefly in the previous article and mentioned how the aperture affects it. There are a couple of other factors contributing to DoF. However, I'm going to leave most of the technical details out of this discussion.
Aperture – No point in repeating myself. See the previous post.
Sensor Size – Have you ever noticed that when you're taking pictures with your phone or your pocket camera, everything seems to be in focus no matter what you do? Hence the increased popularity of Instagram and its blur feature. But when you're using a DSLR, it's much easier to achieve a blurred background and a sharply focused subject. FYI, the blurred-out background is called bokeh, and nobody knows how to pronounce it (or spell it). Anyway, DSLRs have a much larger image sensor compared to mobile phones or pocket cameras. Larger image sensors allow shallower DoF.
Camera to Subject Distance – When you're up close and personal with the subject, you get a shallower DoF, and if you step back a couple of feet, you get a deeper DoF.
Subject to Background Distance – If the distance between your subject and the background is greater, you will get a much more blurred background. If that distance is smaller, you will get a deeper DoF. Just think of a person standing right next to a wall. The wall will almost always be in focus. But if your subject is standing out in an open field, the faraway mountains will almost always be out of focus.
Focal Length – This gets a little tricky to explain without going into technical details. Contrary to popular belief, focal length does not contribute to DoF. But there's a reason why I included it here: it APPEARS to have a significant impact on DoF. There is an apparent change in DoF with a long lens vs. a short lens because the length of the lens changes the perspective in the scene – longer lenses render the background larger in relation to the subject, making it seem more blurry and thus creating the impression of less depth of field. When you're shooting at a very wide focal length (say 16mm), your field of view is much greater. But when you're shooting at a telephoto length (300mm or so), your field of view gets narrower. Thus your subject occupies different fractions of your image. Telephoto lenses magnify the subject, so it occupies a greater portion of your image, whereas with wide angles, the subject appears smaller and occupies a smaller portion of the image. If the subject occupies the same fraction of the frame in both scenarios, the DoF will be the same. In order to do this, you have to step back a lot when you're shooting with the telephoto lens. But that kind of defeats the purpose of a telephoto lens, doesn't it? So the take-home message is that technically, focal length does not have an effect on DoF. However, artistically it does. Since photography is an artistic medium, I will leave it up to you to interpret this one. It has been raining in Sri Lanka like crazy for the last couple of days. I will demonstrate this with an example the first chance I get and post it here. Bear with me until then.
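If you want to check the "same framing, same DoF" claim yourself, the standard thin-lens approximations make it a few lines of code. This is a rough sketch, not exact optical modelling, and the 0.03 mm circle of confusion is just a commonly quoted figure for full-frame 35 mm sensors:

```python
# Rough DoF calculator using the standard thin-lens approximations.
# The circle-of-confusion value (0.03 mm) is a commonly quoted figure
# for full-frame sensors; treat these numbers as illustrative only.

def depth_of_field(focal_mm, f_number, subject_m, coc_mm=0.03):
    """Total depth of field in metres for a given lens and distance."""
    f = focal_mm
    s = subject_m * 1000.0  # work in millimetres
    hyperfocal = f * f / (f_number * coc_mm) + f
    near = hyperfocal * s / (hyperfocal + (s - f))
    if s >= hyperfocal:
        return float("inf")  # everything to infinity is acceptably sharp
    far = hyperfocal * s / (hyperfocal - (s - f))
    return (far - near) / 1000.0

# Same framing: 50 mm at 3 m vs. 200 mm at 12 m (4x the focal length,
# 4x the distance, so the subject is the same size in the frame).
print(round(depth_of_field(50, 2.8, 3), 2))    # roughly 0.6 m
print(round(depth_of_field(200, 2.8, 12), 2))  # almost exactly the same
```

Both calls come out at about 0.6 m of total DoF, which is the point made above: match the framing and the DoF matches too.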
White Balance
I think it would be fair to say that white balancing is a way to reproduce colors as accurately as possible. Each light source has a different temperature (or color) associated with it. Daylight, tungsten light, candlelight, fluorescent light, etc. all have different colors. Modern DSLRs are more than capable of doing a decent job when the available light is uniform, i.e. you have one type of light source. When you're using a couple of different types of light sources, it gets a little tricky. Imagine you're shooting inside a room that is lit by both fluorescent bulbs and tungsten bulbs. Your pictures are likely to have an unnatural color cast on them.
Have you ever taken pictures, especially with a compact digital camera, inside a room, found that all of your pictures had an amber color to them, and wondered why? This is because your camera gets confused under these conditions, unlike the human eye, which does a fantastic white balancing job. Simply put, white balancing is telling your camera what white looks like, hence the name. When you tell your camera what white looks like, it automatically puts all the other colors into their proper places and thus produces an accurate image. If you're mathematically inclined, you can think of this as a circle centered at the origin (0,0) on an XY plane. The radius is irrelevant here. Imagine that inside the circle are all the colors you need. When the camera is not properly white balanced, the circle shifts its position, so what's supposed to look white does not look white anymore. When you white balance your camera, the circle shifts back to its proper place, and you have all the natural colors again.
There are several methods to white balance properly. You can shoot in RAW format and adjust your white balance later during post-processing. This is what I personally do. If you shoot JPEGs, you limit your ability to properly white balance during post-processing, because the white balance setting is burned into the image and the file doesn't retain enough data to fully redo it later. You can adjust it a little, but not as much as when shooting RAW. If you want to get it right in the camera, you can use something called an 18% grey card. As the name suggests, it's simply a card that is grey. What's so important about the color grey? Well, grey has equal amounts of each primary color and reflects light neutrally, so the camera can use it as a reference point. What you do is place the grey card against the subject you want to shoot and take a picture of the card, properly exposed, filling the entire frame. Then use this image as a custom white balance reference. Refer to your camera manual on how to do this on your camera. When you do this, your camera will produce accurate colors.
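The arithmetic behind the grey card trick is just per-channel scaling: measure the card's average RGB in the shot, then scale the red and blue channels so the card comes out neutral. A minimal sketch, with made-up sample values standing in for a real measurement:

```python
# Grey-card white balance sketch: scale each channel so the card
# reads neutral. The card RGB values below are invented for the demo,
# not measurements from any real camera.

def white_balance_gains(card_rgb):
    """Per-channel gains that neutralize the grey card."""
    r, g, b = card_rgb
    # Use the green channel as the reference (a common convention).
    return (g / r, 1.0, g / b)

def apply_gains(pixel, gains):
    """Apply the gains to one RGB pixel, clamped to 8-bit range."""
    return tuple(min(255, round(v * k)) for v, k in zip(pixel, gains))

# Under warm tungsten light the grey card reads too red:
card = (180, 140, 100)
gains = white_balance_gains(card)
print(apply_gains(card, gains))  # the card itself becomes neutral grey
```

Once the gains are known, applying them to every pixel "shifts the circle back", to borrow the analogy above. Real raw converters do something considerably more sophisticated, but the core idea is this.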
Aspect Ratio
Aspect ratio describes the ratio between the width and the height of an image. This is something often ignored by many people because the aspect ratio is fixed. It reflects the width-to-height ratio of your camera's image sensor. Canon, Nikon, and Pentax have a 3 : 2 ratio whereas Olympus and Panasonic have a 4 : 3 ratio. The 3 : 2 ratio comes from 35 mm film, where the area that records the image is 36 mm wide and 24 mm tall. This becomes a very important subject when you're going to print your pictures. If your camera's aspect ratio is 3 : 2, you can make 2 x 3, 4 x 6, 10 x 15, 16 x 24, or 20 x 30 prints, or anything else that matches the 3 : 2 ratio, without cropping your image. However, the problem is that most of the popular print sizes, like 5 x 7, 8 x 10, and 11 x 17, don't match this ratio. There are some places that will make prints that match your ratio, so don't fret. I don't like to throw away pixels, but more importantly, when you crop a picture, it changes the composition of the image. So when you're out there taking pictures, if you plan to print them at anything other than the native ratio of your camera, you need to take into account the fact that you have to crop the image later. Personally, I always stick with the 3 : 2 ratio because that's what my camera gives me.
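To make the "throwing away pixels" point concrete, here is a quick bit of arithmetic showing what fraction of a 3 : 2 frame survives a crop to a given print ratio. Plain division, nothing camera-specific assumed:

```python
# How much of the frame survives a crop to a different aspect ratio?
# Pure arithmetic: compare the sensor's ratio with the print's ratio.

def crop_fraction(sensor_w, sensor_h, print_w, print_h):
    """Fraction of the image area kept when cropping to the print ratio."""
    sensor_ratio = sensor_w / sensor_h
    print_ratio = print_w / print_h
    if print_ratio < sensor_ratio:
        # The print is "squarer" than the sensor: width gets trimmed.
        return print_ratio / sensor_ratio
    # The print is wider than the sensor: height gets trimmed.
    return sensor_ratio / print_ratio

print(round(crop_fraction(36, 24, 6, 4), 3))   # 4 x 6 is 3:2 -> 1.0, nothing lost
print(round(crop_fraction(36, 24, 10, 8), 3))  # 8 x 10 -> about 0.833
```

An 8 x 10 print from a 3 : 2 frame keeps only about 83% of the image area, which is roughly a sixth of your composition gone, exactly the kind of crop you want to anticipate while shooting.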
Frames Per Second (fps)
In photography, fps stands for frames per second, not first-person shooters. When you look at a camera's specifications, this number is often listed under continuous mode or burst mode. This mode allows you to take a series of shots by holding down the shutter button. It is mainly used for action shots, like sports or birds in flight, and it increases the chance of getting a sharp shot; you can discard the rest of the images later if you want to. Thank goodness we're shooting digital. The fps depends on several things. One is the buffer, the temporary memory where images are stored before they are transferred to the memory card: the bigger the buffer, the longer the camera can sustain its top fps. The image processor is another factor; a faster image processor supports a higher fps. The megapixel count has an indirect impact on fps. Since more megapixels mean bigger file sizes, they fill up the buffer very quickly and thus result in shorter bursts. This is why Nikon's D800, fully equipped with all the other modern features, still shoots at a fairly slow 4 fps: it has a very large 36.3-megapixel count.
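The buffer/file-size trade-off is easy to see with some back-of-the-envelope numbers. Everything below (buffer size, file size, card write speed) is hypothetical and not the spec of any particular body:

```python
# Back-of-the-envelope burst length: how long can a camera sustain its
# top fps before the buffer fills? All numbers here are hypothetical,
# not the specifications of any real camera.

def burst_seconds(buffer_mb, file_mb, fps, card_write_mbps):
    """Seconds of continuous shooting before the buffer is full."""
    produced_per_sec = file_mb * fps   # data generated per second
    net_fill = produced_per_sec - card_write_mbps  # minus card drain
    if net_fill <= 0:
        return float("inf")  # the card keeps up; shoot indefinitely
    return buffer_mb / net_fill

# Same 1000 MB buffer and 90 MB/s card, different file sizes:
print(round(burst_seconds(1000, 25, 6, 90), 1))  # modest files: long burst
print(round(burst_seconds(1000, 75, 4, 90), 1))  # triple the file size
```

Even at a lower fps, the bigger files exhaust the same buffer several times faster, which is the indirect megapixel effect described above.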
Image Stabilization
Remember I mentioned the one-over-focal-length rule? Now that I have discussed effective focal length, I should say that it's actually one over the effective focal length. This rule gives you a rough idea of the minimum shutter speed at which you can handhold your camera and still obtain a sharp image. It is of course not a rule set in stone, and it varies from person to person. However, Image Stabilization (IS) technology gives you the option to handhold the camera at slower shutter speeds. A lens will usually claim 2-stop or 4-stop IS, meaning you can shoot at a shutter speed 2 or 4 stops slower than the rule suggests when handholding the camera with that lens. There are two main ways of stabilizing images.
Lens-Based
This is accomplished with gyroscopic sensors that move a lens element to counter the movement of the camera due to hand shake and keep the light directed onto the sensor. There are two gyroscopic sensors: one to detect horizontal movement and the other to detect vertical movement. Some high-end lenses come with a secondary image stabilization mode which allows you to turn off the horizontal gyroscopic sensor. This is useful when you are panning your camera to follow a subject, like a moving car.
Sensor-Shift
In this method, the sensor itself is shifted to compensate for the movement of the camera. The advantage of this method is that the image is stabilized regardless of which lens is used. The disadvantages are that the effectiveness of stabilization is limited by how far the sensor can move, and that if your camera has an optical viewfinder (most DSLRs do), the viewfinder image won't be stabilized. Sensor-shift IS is also called in-body image stabilization.
Different lens manufacturers call this feature by different names.
- Canon – Image Stabilization (IS)
- Nikon – Vibration Reduction (VR)
- Olympus – In Body Image Stabilization (IBIS)
- Sony Cyber Shot – Optical Steady Shot (OSS)
- Leica and Panasonic – MegaOIS
- Sony – Super Steady Shot (SSS)
- Sigma – Optical Stabilization (OS)
- Tamron – Vibration Compensation (VC)
- Pentax – Shake Reduction (SR)
Olympus, Sony, and Pentax use sensor-shift stabilization, whereas the others use lens-based stabilization. Image stabilization only compensates for camera movement. If your subject is moving, you will still get a blurry image unless you use a fast enough shutter speed. IS is especially useful when shooting in low-light conditions where handholding the camera is necessary.
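The one-over-effective-focal-length rule and the "stops of IS" claim are simple to turn into arithmetic: each stop of stabilization lets you double the exposure time. A small sketch, where the lens and crop factor are just example values:

```python
# The one-over-effective-focal-length rule, plus IS stops.
# Each stop of stabilization doubles the usable exposure time.
# The 200 mm lens and 1.5x crop factor below are example values.

def min_handheld_shutter(focal_mm, crop_factor=1.0, is_stops=0):
    """Slowest recommended handheld shutter speed, in seconds."""
    base = 1.0 / (focal_mm * crop_factor)  # 1 / effective focal length
    return base * (2 ** is_stops)          # each IS stop doubles the time

# 200 mm lens on a 1.5x crop body: the rule says about 1/300 s.
print(min_handheld_shutter(200, crop_factor=1.5))
# The same lens with 4-stop IS: about 1/19 s is claimed to be usable.
print(min_handheld_shutter(200, crop_factor=1.5, is_stops=4))
```

As noted above, this is a rule of thumb, not physics: steadier hands buy you more, shakier hands less, and a moving subject gets no help at all from IS.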
This obviously does not cover all of the technical terms used in the wonderful world of photography, but it does cover some of the most important ones. Other terms will be explained as they come along in our discussions.