DIGITAL CAMERA SENSORS
A digital camera uses a sensor array of millions of tiny pixels to produce the final image. When you press your camera's shutter button and the exposure begins, each of these pixels has a "photosite" which is uncovered to collect and store photons in a cavity. Once the exposure finishes, the camera closes each of these photosites and then tries to assess how many photons fell into each. The relative quantity of photons in each cavity is then sorted into various intensity levels, whose precision is determined by bit depth (0 - 255 for an 8-bit image).
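The mapping from photon counts to intensity levels can be sketched as a simple quantization. This is an illustrative model only: the full-well capacity of 10,000 photons is an assumed figure, and real cameras apply amplification and nonlinear processing before quantization.

```python
# Illustrative sketch: mapping photon counts collected by a photosite
# into discrete intensity levels determined by the bit depth.

def quantize(photon_count, full_well=10000, bit_depth=8):
    """Map a photon count to an integer intensity level.

    full_well is an assumed cavity capacity (photons);
    bit_depth sets the number of levels (256 for 8-bit).
    """
    levels = 2 ** bit_depth                         # 256 levels for 8-bit
    fraction = min(photon_count, full_well) / full_well
    return min(int(fraction * levels), levels - 1)  # clamp to 0..255

print(quantize(0))       # 0   (empty cavity -> black)
print(quantize(5000))    # 128 (half-full cavity -> mid gray)
print(quantize(10000))   # 255 (full cavity -> white)
```

A higher bit depth simply divides the same photon range into finer steps, which is why bit depth controls the precision of the recorded intensities rather than their overall range.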
Each cavity is unable to distinguish how much of each color of light has fallen in, so the sensor described above could only create grayscale images. To capture color images, each cavity has to have a filter placed over it which only permits a particular color of light to enter. Virtually all current digital cameras can only capture one of the three primary colors in each cavity, and so they discard roughly 2/3 of the incoming light. As a result, the camera has to approximate the other two primary colors in order to have information about all three colors at every pixel. The most common type of color filter array is called a "Bayer array."
A Bayer array consists of alternating rows of red-green and green-blue filters. Notice how the Bayer array contains twice as many green sensors as red or blue ones. Each primary color does not receive an equal fraction of the total area because the human eye is more sensitive to green light than to either red or blue light. Redundancy with green pixels produces an image which appears less noisy and has finer detail than could be achieved if each color were treated equally. This also explains why noise in the green channel is much lower than in the other two primary colors.
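The repeating layout above can be sketched by tiling a 2x2 pattern across the sensor, which makes the 2:1:1 green/red/blue ratio easy to verify. This is a minimal sketch of the filter layout only, not of any particular sensor's readout.

```python
# Illustrative sketch: the repeating 2x2 tile of a Bayer array and the
# resulting 2:1:1 ratio of green to red and blue filter sites.

def bayer_pattern(rows, cols):
    """Rows alternate R-G and G-B, so half of all sites are green."""
    tile = [["R", "G"],
            ["G", "B"]]
    return [[tile[y % 2][x % 2] for x in range(cols)] for y in range(rows)]

grid = bayer_pattern(4, 4)
flat = [c for row in grid for c in row]
print(flat.count("G"))  # 8 of 16 sites are green
print(flat.count("R"))  # 4
print(flat.count("B"))  # 4
```

Because every other site is green, a missing green value at a red or blue site always has four green neighbors to interpolate from, which is part of why the demosaiced green channel ends up with finer detail and less noise.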