
Advanced Guide

1.   Digital Imaging: What are CCDs?[1]


Over the years, astronomers have sought better ways of seeing and recording light. For hundreds of years, astronomers relied on their eyes. Then the telescope was invented and helped people to see distant objects more clearly. It did not take long for Galileo to point the telescope to the heavens and begin discovering the phases of Venus and the moons of Jupiter. Now astronomers could see more, but they could only record what they saw by sketching. Eventually, photography was invented and photographic methods were quickly adapted to astronomical needs. Astronomers could now accurately record what they saw.

Improvements in telescope design helped astronomers see deeper into the universe, while advances in photography allowed them to take shorter exposures to record the same amount of detail. From the pictures, astronomers could determine the positions of stars, the brightness of stars, and the size of craters on the Moon. Movie cameras provided a method for recording eclipses and occultations.

Unfortunately, it is hard to take pictures if the camera is in space and the people are on the ground. Therefore, remote photography had to be developed. It is easy enough to rig a camera for delayed exposures or to set up a computer to trigger the camera, but how can the film be retrieved if the spacecraft and the camera never return to Earth? One early method was to develop the film on board the spacecraft using a technique similar to the Polaroid process. The developed negative was then scanned with a high-intensity beam. A photomultiplier tube detected the varying intensities of the beam as it passed through the varying densities of the film. The light intensity at each point was converted into an electrical current (an analog signal) and then into a digital signal that could be transmitted back to Earth.

Vidicon cameras, similar to television cameras, focus a scene onto a photosensitive plate. The important property of the plate is that it has a high electrical resistance wherever no light is shining on it. By scanning the plate and recording the resistivity of small sections, or pixels (short for 'picture elements'), electrical currents are produced. The analog signal is converted into a digital signal that is transmitted back to Earth.

  1. Why is photography a better method than sketching when recording views of the night sky?
  2. Why is a movie camera beneficial for recording eclipses and occultations? (as compared to still images)

The spacecraft facsimile camera scans the scene directly. Light from each portion of the scene passes through a small slit and is detected by a photomultiplier tube producing small, varying amounts of current. As in each case before, the analog signal is converted into the digital signal that is transmitted back to Earth.

When the digital signal is received on Earth, a computer can assign shades of gray to the values to produce the image. Otherwise, the file is simply an array of numbers.
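
The idea that an image file is "simply an array of numbers" until shades of gray are assigned can be sketched in a few lines of Python. This is a modern illustration, not part of the original spacecraft hardware; the function name and the sample values are made up.

```python
# Illustrative sketch: an image file is just an array of numbers.
# Assigning a shade of gray to each value turns it into a picture.

def to_gray(values, bits=8):
    """Map raw digital counts to gray levels 0 (black) .. 2**bits - 1 (white)."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1          # avoid division by zero for a flat image
    levels = 2 ** bits - 1
    return [round((v - lo) / span * levels) for v in values]

signal = [12, 40, 300, 1023]     # hypothetical transmitted counts
print(to_gray(signal))           # darkest value -> 0, brightest -> 255
```

A display program does essentially this scaling before painting the pixels on the screen.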


A charge-coupled device is an integrated circuit chip consisting of an array of electronic light sensitive elements. There are four steps involved for a CCD to work. The first step is charge production. This is done by a physical process called the photoelectric effect.

Experiments during the late 1800s revealed an unexplainable phenomenon termed the 'photoelectric effect.' The effect occurs when a material emits negative charges after being illuminated. The composition of the negative charges was not known until the discovery of the electron by J.J. Thomson in 1897. Of the four experimental observations of the photoelectric effect, only one -- that the current increases with an increase in the intensity of the light -- could be understood using classical physics. The other three -- that electrons are emitted only when the frequency of the light is above a threshold value, regardless of the intensity of the light; that the maximum kinetic energy of the emitted electrons depends on the frequency of the light; and that the electrons are emitted as soon as the light illuminates the material -- were explained in 1905 by Albert Einstein. Early observers of the photoelectric effect could not explain the experimental results because they thought of light as being smooth and continuous, like a wave. Einstein hypothesized that what was believed to be a continuous beam of light was in fact composed of Planck's energy quanta, called photons. Today, the four observations of the photoelectric effect are fundamental to the operation of a CCD detector.


The photoelectric effect is the process whereby negative charges are emitted from a material when light shines on it. There are four points to the photoelectric effect:

  A. the current increases with an increase in the intensity of the light,
  B. electrons are emitted only when the frequency of the light is above a threshold value, regardless of the intensity of the light,
  C. the maximum kinetic energy of the emitted electrons depends on the frequency of the light, and
  D. electrons are emitted as soon as the light illuminates the material.

  1. Why is A important to the operation of a CCD?
  2. How does B affect a CCD?

Once the electrons are liberated, they are collected in capacitors. This is the second step in the CCD's operation, charge collection. Since the number of electrons in a capacitor is proportional to the number of photons striking it, measuring the amount of charge gives a count of how many photons hit. To measure the number of electrons collected in a capacitor, the third step, charge transfer, is performed: the camera's electronics manipulate the gates on each capacitor so that the charges are transferred from one capacitor to the next -- hence the name charge-coupled device -- row by row and column by column to an amplifier on the chip. The last step, charge detection, occurs when the amplifier senses the number of electrons and generates a signal proportional to the amount of charge. This electrical signal is normally only a small fraction of a volt, but it is related to the amount of light that struck a particular capacitor. The voltage, an analog signal, is converted by another device called an analog-to-digital (A/D) converter into numbers that a computer understands. Therefore, corresponding to each capacitor there is a numerical value proportional to the voltage, which is produced by a number of electrons that is in turn proportional to the amount of light hitting that capacitor.

Since the charges were transferred systematically, the pattern of light falling on the chip can be reconstructed by assigning different shades of gray to each pixel in the image based on the numerical values obtained from each capacitor in the array. The range of gray shades depends on the output of the A/D converter. For an 8-bit A/D, there are 2^8, or 256, shades. A 12-bit image has 2^12, or 4096, shades -- a much larger dynamic range that allows a greater amount of detail to be seen. Most CCD cameras used in astronomical imaging produce 8-, 16-, or 32-bit image files.
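
The chain of proportionalities described above (photons to electrons to voltage to digital counts) can be sketched as a toy calculation. All the constants here -- quantum efficiency, amplifier gain, full-scale voltage -- are invented for illustration; real values vary from chip to chip.

```python
# A toy sketch of the CCD signal chain: photons -> electrons -> voltage
# -> digital counts. All numerical constants are illustrative assumptions.

QE = 0.5                  # assumed quantum efficiency: electrons per photon
GAIN_UV_PER_E = 2.0       # assumed amplifier output, microvolts per electron
BITS = 12                 # A/D converter bit depth (2**12 = 4096 levels)
FULL_SCALE_UV = 200000.0  # assumed voltage that maps to the top count

def digitize(photons):
    electrons = photons * QE                  # charge production/collection
    microvolts = electrons * GAIN_UV_PER_E    # charge detection (amplifier)
    counts = int(microvolts / FULL_SCALE_UV * (2 ** BITS - 1))
    return min(counts, 2 ** BITS - 1)         # saturate at full scale

print(digitize(10000))  # brighter pixels give proportionally larger counts
```

Note the last line of `digitize`: once the capacitor's full-scale voltage is reached, the count cannot rise any further, which is exactly the saturation effect discussed later for overexposed images.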

A simple analogy thought up by Morley Blouke of Tektronix and Jerome Kristian of Mt. Wilson Observatory best summarizes the principle of a CCD. Imagine a field covered with buckets. After a rainstorm, the buckets are systematically moved to a metering station where the amount of water in each bucket is measured. The amount of water in a bucket and the position of the bucket are then sent to a computer. After all the buckets have been measured, the computer can draw a picture of how much rain fell on the field. The raindrops correspond to the photons, the buckets to the capacitors, and the metering station to the amplifier.


The charge-coupled device was invented in the late 1960s by W.S. Boyle and G.E. Smith. Their concept was described in a pair of papers published in the Bell System Technical Journal in 1970. Originally, the CCD was designed as an electronic analogue to the magnetic bubble device, which itself had been invented only shortly before. However, the potential of the CCD as an imaging sensor was soon recognized, so its development proceeded as a detector to replace tube-type sensors rather than as a memory device competing against the magnetic bubble device. CCDs have already replaced tube sensors in most video cameras and are slowly replacing the film in standard cameras. Although it will be several more years before CCD cameras are as common as today's point-and-shoot film cameras, they are already the camera of choice for scientists in many fields. Wherever film or tube sensors were once used as detectors, CCDs are now employed. Physicists studying high-energy particles, analytical chemists studying the spectra of chemicals, and astronomers observing the stars all use CCD detectors.

Astronomers might have been the first to recognize the potential of the CCD for scientific imaging. The capabilities of CCDs fulfilled the need of astronomers to record very faint objects in a relatively short amount of time as compared to film. Also the CCD could be operated from a remote location. These two benefits led to the initiation of a Traveling CCD Camera System in 1973 by workers at the Jet Propulsion Laboratory. The program was designed to promote the development of a large area array CCD and to promote interest in the CCD as an imaging sensor. The Traveling CCD Camera System visited major astronomical observatories all over the world and made significant discoveries each time. Astronomers and observatory engineers previously unfamiliar with CCDs quickly saw the advantages of the CCD and ordered their own system. Within a few short years, most major observatories had converted over to CCD sensors.

After CCDs were successfully introduced to the astronomy community, several new NASA/JPL proposals were written and awarded for using the sensor in space-borne imaging systems. Today, most NASA missions utilize CCD cameras. The original Wide Field/Planetary Camera (WF/PC) and its replacement, WF/PC II, on the Hubble Space Telescope are CCD cameras. The spacecraft Galileo also uses a CCD camera. A modified Nikon 35-mm body with a CCD chip in it was flown in September 1991 on STS-48, generating numerous high-resolution images of the shuttle and astronauts. In other words, most of the space pictures published today were produced by CCD cameras.


When the general public reads an article in a newspaper or scientific journal about the newest discovery made by the Hubble Space Telescope or some major ground-based observatory, they often see clear, detailed pictures of the discovered object. These pictures, which may be black and white or color, are sometimes so clear and sharp that it is easy to believe that the picture is a photograph. The accompanying article usually does not explain how the picture was made or taken. Thus the public assumes that 'picture' means 'photograph.' The word 'picture' is also used interchangeably with several other words such as 'painting' and 'drawing.' Although not incorrect, it is confusing. Therefore, CCD camera users often refer to pictures made from film as 'photographs' and pictures produced by CCD cameras as 'images.'

CCD cameras used in astronomy are not very different from everyday photographic cameras in their standard operation. In a regular camera there is typically a cover (shutter) that is opened for a period of time (exposure) during which light is allowed to hit the recording medium (film). After some intermediate steps involving chemical processing, an image is obtained from the film. In a CCD camera, there is also some type of shutter that regulates the exposure during which light falls on the chip. After some intermediate steps involving electronic processing, an image is produced.

Although CCD cameras seem to be very similar to film cameras, there are some important differences. Standard cameras normally require a very long exposure to capture an image. There is also the time involved in processing the film. A CCD chip, on the other hand, is much more sensitive to light; therefore, shorter exposures are possible to record the same details. Meanwhile, the processing time is practically eliminated since the image is digitally produced, which means that the image can be viewed on a monitor within seconds after the exposure. The CCD detector also has other advantages over film including quantitative accuracy, ability to remove noise, ability to combine multiple images without laborious dark room operations, wider spectral range, larger dynamic range, geometric stability, better resolution, linearity, reliability, and durability. The advantages of film such as larger format, color, and battery operation are slowly disappearing as larger chips are made, filters are used, and larger battery packs are designed. The one major advantage of film is the much lower cost.

Exercise A
  1. Given the numerical array in Figure 1a, appropriately shade in Figure 1b using the gray scale beside Figure 1b.
  2. Using Figure 1a again, shade in Figure 1c using the gray scale beside Figure 1c.
  3. Questions:
    1. Which shaded array corresponds to a 1-bit image?
    2. The other array is a ___-bit image.
    3. Which array shows more detail?

Technically, a CCD camera does not produce an image but rather an image file. The file consists of an array of pixels (short for 'picture elements') that correspond to the capacitors in the array on the CCD chip. Each pixel contains a number that is proportional to the amount of light that hit the corresponding capacitor. Printing out an array of numbers does not reveal the image, however; by using an image processing program, the numbers can be translated into a picture people can understand.

The numbers are not actually translated but rather assigned a brightness level -- typically a shade of gray. The lowest values are black, the highest are white, and the values in between are different shades of gray. As mentioned earlier, the output of the A/D converter determines the maximum number of shades of gray. An 8-bit image can have 256 shades while a 16-bit image can have 65,536 shades. Suppose there were a CCD camera that produced 1-bit images. These images would have only two shades -- black and white. That would be acceptable if only stars were being imaged: the pixels representing the sky would be black and those representing the stars would be white. A 2-bit image with four shades would show stars of varying brightness. Therefore, having more bits allows for more shades, resulting in a better visual representation.
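
The 1-bit versus 2-bit comparison can be made concrete with a small quantization sketch. The function name and the sample pixel row are invented for illustration; 16-bit-style counts (0 to 65,535) are assumed as input.

```python
# Sketch: quantizing pixel counts to a given bit depth shows why more
# bits reveal more detail. Sample values are illustrative.

def quantize(pixels, bits):
    """Map 16-bit-style counts (0..65535) down to 2**bits gray levels."""
    levels = 2 ** bits
    return [min(p * levels // 65536, levels - 1) for p in pixels]

row = [0, 1000, 20000, 45000, 65535]   # sky, faint star, ..., bright star
print(quantize(row, 1))  # 1 bit: only black (0) or white (1)
print(quantize(row, 2))  # 2 bits: four shades distinguish star brightness
```

At 1 bit the faint stars vanish into the black sky; at 2 bits the brighter stars begin to separate by brightness, just as the text describes.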

Because each pixel is described by a number, and numbers can be manipulated mathematically, the image itself can be adjusted or processed. Simple functions involve adding, subtracting, multiplying, or dividing each pixel value by a constant. More complicated functions involve matrix operations on the pixels. Which image processing functions can be performed depends on the program being used. However, as in standard photography, the best images have very few adjustments made.
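
The simple whole-image arithmetic mentioned above can be sketched directly. The function names and the tiny 2x2 image are invented for illustration.

```python
# Sketch of simple image-processing arithmetic: because pixels are
# numbers, operating on the whole image is just math on an array.

def add_constant(image, c):
    """Brighten (or darken) every pixel by a constant offset."""
    return [[p + c for p in row] for row in image]

def scale(image, k):
    """Multiply every pixel by a constant, stretching the contrast."""
    return [[p * k for p in row] for row in image]

image = [[10, 20],
         [30, 40]]
print(add_constant(image, 5))   # [[15, 25], [35, 45]]
print(scale(image, 2))          # [[20, 40], [60, 80]]
```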


Ideally, the CCD would detect only the light of objects within the field of view of the chip. Therefore, the resulting raw image would be a duplicate of the view and no further processing would be needed. In reality, as with any scientific instrument, there are certain errors called 'noise' inherent to a CCD camera and its operation. Factors not associated with the electronics, such as dust on the optical surfaces, also produce undesirable effects termed 'artifacts.' There are several sources of noise and artifacts, but only those that can be dealt with using the available software will be discussed.

Simply running the camera produces heat, and since most CCD chips are sensitive even to infrared energy, that heat raises the pixel values. Therefore, many cameras are cooled, either thermoelectrically or with liquid nitrogen, to reduce this thermal noise. Thermal noise is still present even in cooled cameras, but it can be removed by subtracting a dark frame: an image with the same exposure as the raw image, but taken with the camera or telescope covered so that no light strikes the CCD.
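
Dark-frame subtraction is literally a pixel-by-pixel subtraction, which can be sketched as follows. The function name and all the counts are made up for illustration.

```python
# Sketch: removing thermal noise by subtracting a dark frame,
# pixel by pixel. All counts below are illustrative.

def subtract_dark(raw, dark):
    """Subtract the dark frame from the raw image, clipping at zero."""
    return [[max(r - d, 0) for r, d in zip(raw_row, dark_row)]
            for raw_row, dark_row in zip(raw, dark)]

raw  = [[120, 415], [98, 560]]   # hypothetical raw counts (light + heat)
dark = [[20, 15], [18, 60]]      # same exposure, shutter covered
print(subtract_dark(raw, dark))  # [[100, 400], [80, 500]]
```

The clip at zero reflects the fact that a pixel cannot record a negative amount of light.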

Exercise B

A Gedankenexperiment is a test conducted in your mind instead of in a lab. Keeping the bucket analogy in mind, answer the questions about the sources of noise and artifacts.

  1. The buckets are all out and covered. It is not raining but you notice dew has formed in the buckets. The dew corresponds to what type of noise?
  2. What environmental factor(s) in the analogy would compare to a temperature change in the camera?
  3. What would happen if it is barely drizzling and you uncovered-covered the buckets too quickly?
  4. There is a heavy downpour and you cannot get out to cover the buckets. What happens?
  5. Some of the buckets are under a tree and you notice that the branches are channeling the water into certain buckets. How would this affect your measurements?
  6. During the time that the buckets were uncovered, the sun peeked out and illuminated some of the buckets. Would this affect your readings? If so, how?
  7. Your assistant has a wicked sense of humor. While you were not looking, she drilled holes in some buckets and dumped extra water in others. Explain the results.
  8. Without patching the holes or scooping out water, how could you partially rectify the false readings?
  9. Why was this a gedankenexperiment?

Another type of noise is the bias. This is simply the charge "applied to a CCD to activate its photon-collecting capacity. It is present as a false signal in every image" (Newberry 1995, 20). This errant data can be removed by subtracting a bias frame from the raw image. The frame is created by covering the telescope or camera, as when taking a dark frame, but with an extremely short exposure.

As in photography, the best CCD images are taken under clear, dark skies. At Melton Observatory, the sky background will be the greatest contributor of noise to images of very faint objects. Whether the background sky can be easily removed by subtracting a 'sky' frame has not been determined.

As mentioned earlier, dust on the optics will produce an artifact in the image. If the CCD camera is being used with a refractor, the dust specks show up as out-of-focus blobs in the image. In a reflecting system with a central obstruction, the dust specks appear as rings. Another artifact is uneven lighting across the chip because of shadowing or vignetting in the optical system. This occurs when the rectangular field of view of the chip is as large as or larger than the circular field of view of the telescope. Similar to vignetting but produced by the chip itself, high gain occurs when sections of the CCD chip consistently release electrons more efficiently than other regions, resulting in brighter sections. All of these artifacts are relatively simple to correct. A flat-field frame is an image with the same exposure time as the raw image but of an evenly lit surface or source (such as the dusk or dawn sky), revealing only the artifacts. The raw image is then divided by the flat-field (a step called flat-fielding) to remove the artifacts.
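
Putting the calibration steps together (dark subtraction followed by flat-fielding) gives a short sketch of the whole process. The function name and every pixel value are invented for illustration; the flat is normalized by its mean so that dividing by it corrects unevenness without changing the overall brightness.

```python
# Sketch of the calibration described above: subtract the dark frame,
# then divide by a mean-normalized flat-field. Values are illustrative.

def calibrate(raw, dark, flat):
    n = len(flat) * len(flat[0])
    mean_flat = sum(sum(row) for row in flat) / n
    out = []
    for raw_row, dark_row, flat_row in zip(raw, dark, flat):
        out.append([(r - d) / (f / mean_flat)
                    for r, d, f in zip(raw_row, dark_row, flat_row)])
    return out

raw  = [[110, 220], [330, 440]]
dark = [[10, 20], [30, 40]]
flat = [[1.0, 1.0], [1.0, 1.0]]   # perfectly even illumination
print(calibrate(raw, dark, flat)) # [[100.0, 200.0], [300.0, 400.0]]
```

With an uneven flat (say, a vignetted corner with a low value), the division boosts the dimmed pixels back up, which is exactly how dust rings and vignetting are removed.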

Thermal noise, bias, and the various artifacts are easy to remove by subtracting and dividing the appropriate calibration frames. Harder to correct is an underexposed image, where there is a low signal-to-noise (S/N) ratio. An overexposed image will saturate the pixels, making the entire object appear white. Very technical algorithms, such as those used at the Space Telescope Science Institute, can nearly correct an out-of-focus image like those taken by the Hubble Space Telescope before it was fixed. Unfortunately, those algorithms are available only in the more technical and advanced image processing programs, which also tend to be more expensive.


Exercise C
  1. Shade in all of the pixels marked by the triangle in each array.
  2. Questions:
    1. Which arrays are the same size?
    2. Which arrays have pixels of the same dimensions?
    3. Why is a larger array more desirable?
    4. What happened to the triangle in the top array?
    5. How does the pixel size affect the image?
    6. In which array does the triangle look best?

The physical characteristics of the CCD are also important factors in the quality of the final image. The chip is usually described as an array of a certain size composed of so many pixels of a certain size. Currently, the array dimensions are limited by the amount of time required to read the huge number of pixels. A 320x240 pixel array can be downloaded much more quickly than a 1024x1024 pixel array. Larger arrays are being developed to match photographic plates, which can record large areas of sky. The field of view of even large arrays is still much smaller than that of the plates. By building an array of arrays, larger fields of view can be obtained.

As the arrays grow larger, the pixels become smaller. Manufacturing capabilities may seem to limit how much the pixels can shrink, but the real limitation is caused by the Earth's atmosphere. When the air is steady, smaller details can be discerned than on a turbulent night. This atmospheric steadiness, called seeing, is typically described in arcseconds. For a given optical system, if the pixel has a field of view smaller than the seeing, then the resolution of the image will depend on the seeing. However, in most instances, smaller pixels will yield better resolution than larger pixels in the same optical system.

The resolution of an optical system refers to the smallest shape that can be detected by that system. The field of view refers to the amount of sky that an array or pixel can see. Both measurements are usually given in arcminutes or arcseconds. Moving the camera from one telescope to another changes its field of view and therefore its resolution.
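
The dependence of a pixel's field of view on the telescope can be sketched with the standard small-angle formula (not given in the text, but common in amateur astronomy): one pixel covers 206,265 x (pixel size / focal length) arcseconds, with both lengths in the same unit. The pixel size and focal lengths below are hypothetical.

```python
# Sketch using the standard small-angle (plate scale) formula.
# The 9-micron pixel and the focal lengths are illustrative assumptions.

ARCSEC_PER_RADIAN = 206265

def pixel_scale(pixel_size_um, focal_length_mm):
    """Arcseconds of sky seen by one pixel."""
    return ARCSEC_PER_RADIAN * (pixel_size_um / 1000.0) / focal_length_mm

# A hypothetical 9-micron pixel on a 2000 mm focal-length telescope:
print(round(pixel_scale(9, 2000), 2))   # ~0.93 arcsec per pixel
# The same camera on a 500 mm telescope sees a wider, coarser field:
print(round(pixel_scale(9, 500), 2))    # ~3.71 arcsec per pixel
```

This is why the same camera gives finer resolution on a long-focus telescope and a wider field on a short-focus one.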

Most of the brighter comets visible to the eye alone usually span several degrees in the sky making them poor targets for a high power telescope and CCD system with a small field of view. In those instances, attaching the CCD camera to a smaller (lower power) telescope or even just to a regular camera lens, will often give the best results. Exposure times will vary depending on the brightness of the comet and how quickly it is moving through the sky. Things to look for include variations in brightness within the coma, jets, knots and other structures in the tails.

Astronomical CCD cameras vs. digital cameras (quickcams, point-and-shoots)

'Astronomical CCD camera' refers to a camera that has a system in place to cool the CCD chip to help reduce noise. It is expected that the user will also take the necessary calibration frames (bias, dark, flats) in order to process the image. But CCD chips and their CMOS cousins are becoming less expensive, and many commercially available digital cameras (SLRs and point-and-shoots) and quickcams also have a CCD or CMOS chip. What's the difference? Both are "digital". The non-astro cameras (we'll keep calling them digital cameras, and the ones used in astronomy we'll call CCD cameras) are often designed for imaging during the day, when there is plenty of light and therefore a very high signal-to-noise ratio. This effectively hides some of the thermal noise of the system, so these cameras do not need to be cooled. They also have filters fixed in place so that you can get a color image. CCD cameras don't have filters fixed in place; instead, the user has the option of using a filter wheel, which might contain filters for color imaging or other types of filters for imaging other features. While CCD cameras are usually preferred if you plan on doing data acquisition, both can yield good data and spectacular images with careful work.

[1] Does this material look familiar? I wrote Unit 59 and Unit 60 for an astronomy exercise book as part of my M.S. thesis at the University of South Carolina many years ago.


Updated: 30-Jul-2013