
How to use OpenCV for exposure fusion in Python


Today, the editor will share with you some knowledge about how to use OpenCV for exposure fusion in Python. The content is detailed and the logic is clear. Since most people still know little about this topic, this article is shared for your reference; I hope you will get something out of it after reading.

1 What is exposure fusion

Exposure fusion is a method of combining images taken with different exposure settings into a single image that looks like a tone-mapped high dynamic range (HDR) image. When we take a picture with a camera, each color channel has only 8 bits to represent the brightness of the scene. However, the brightness of the world around us can in theory range from 0 (black) to almost infinite (looking directly at the sun). A point-and-shoot or mobile camera therefore determines its exposure settings from the scene so that the camera's dynamic range (values 0-255) is used to represent the most interesting part of the image. For example, many cameras use face detection to find a face and set the exposure so that the face looks good. This raises a question: can we take multiple photos under different exposure settings and capture a wider range of scene brightness? The answer is yes. Traditionally this is done with HDR imaging and tone mapping; see the previous article for details.

HDR imaging requires us to know the exact exposure times, and the HDR image itself looks dark and not very pretty. The minimum intensity in an HDR image is 0, but in theory there is no maximum, so we need to map its values into the range 0 to 255 in order to display it. The process of mapping an HDR image to a regular image with 8 bits per channel is called tone mapping. As you can see, assembling an HDR image and then tone mapping it is a bit of a hassle. Can we produce a tone-mapped-looking result from multiple images without building an HDR image at all? It turns out we can, using exposure fusion.
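For comparison, here is a minimal sketch of the traditional HDR-plus-tone-mapping pipeline described above, using OpenCV's MergeDebevec and a simple tonemapper. The filenames and exposure times below are placeholder assumptions.

import cv2
import numpy as np

# Bracketed exposures and their exposure times in seconds (placeholders).
filenames = ["under.jpg", "normal.jpg", "over.jpg"]
times = np.array([1 / 30.0, 0.25, 2.5], dtype=np.float32)
images = [cv2.imread(f) for f in filenames]

# Debevec merging needs the exact exposure times, as noted above.
merge_debevec = cv2.createMergeDebevec()
hdr = merge_debevec.process(images, times)

# Tone map the unbounded HDR values back into a displayable 0-255 range.
tonemap = cv2.createTonemap(gamma=2.2)
ldr = tonemap.process(hdr)
cv2.imwrite("ldr.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))

Exposure fusion, described next, skips both the exposure times and the tone-mapping step.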

2 The principle of exposure fusion

The steps for applying exposure fusion are as follows:

Take multiple images with different exposures:

First, we need to capture a sequence of images of the same scene without moving the camera. The images in the sequence have different exposures, which is achieved by changing the shutter speed of the camera. Usually we choose some underexposed images, some overexposed images, and one correctly exposed image.

In a "correctly" exposed image, select the shutter speed (automatically selected by the camera or photographer) so that the 8-bit dynamic range per channel is used to represent the most interesting parts of the image. Areas that are too dark are cut to 0, while areas that are too bright are saturated to 255.

In an underexposed image, the shutter speed is fast and the image is dark, so the image's 8 bits are used to capture the bright areas while the dark areas are clipped to 0. In an overexposed image, the shutter speed is slower, the sensor gathers more light, and the image is brighter; here the 8 bits are used to capture the intensity of the dark areas while the bright areas saturate at 255. Most SLR cameras have a feature called Auto Exposure Bracketing (AEB) that lets us take multiple photos at different exposures with one press of a button. When we use HDR mode on an iPhone, it takes three photos (on Android, the SuperCamera app can be downloaded).

Image alignment:

Even if the images in the sequence were taken on a tripod, they need to be aligned, because even a small amount of camera shake can degrade the quality of the final image. OpenCV provides an easy way to align these images: AlignMTB. The algorithm converts all images to median threshold bitmaps (MTBs). The MTB of an image is computed by assigning 1 to pixels brighter than the median intensity and 0 otherwise. An MTB is invariant to the exposure time, so the MTBs can be aligned without requiring us to specify the exposure times.
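A minimal Python sketch of this step (the filenames are placeholder assumptions):

import cv2

# Load the bracketed exposures.
filenames = ["under.jpg", "normal.jpg", "over.jpg"]
images = [cv2.imread(f) for f in filenames]

# AlignMTB shifts each image so that their median threshold bitmaps
# line up; note that no exposure times are required.
align_mtb = cv2.createAlignMTB()
align_mtb.process(images, images)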

Image fusion:

Images taken with different exposures capture different ranges of scene brightness. According to the paper Exposure Fusion by Tom Mertens, Jan Kautz and Frank Van Reeth, exposure fusion computes the desired image by keeping only the "best" parts of the multi-exposure image sequence.

The authors propose three quality measures:

1 good exposure: if a pixel in one image of the sequence is close to 0 or close to 255, that image should not be used to determine the final pixel value. Pixels whose values are close to the middle intensity (128) are more suitable.

2 contrast: high contrast usually means high quality, so an image in which a particular pixel has a high contrast value is given a higher weight at that pixel.

3 saturation: similarly, more saturated colors are less washed out and indicate higher-quality pixels, so an image in which a particular pixel is highly saturated is given a higher weight at that pixel.

The three quality measures are used to create a per-pixel weight map W_ij(k) that represents the contribution of the k-th image to the final intensity of the pixel at location (i, j).

The weight maps are normalized so that, for any pixel, the total contribution of all images is 1.

A straightforward way to combine the images with the weight maps is the per-pixel weighted average

R_ij = Σ_k W_ij(k) · I_ij(k)

where I(k) are the original images and R is the output image. The problem is that, because the pixels are taken from images with different exposures, the output image obtained with this equation shows many seams. The authors of the paper instead blend the images using Laplacian pyramids; we will cover the details of that technique in a future article. A rough sketch of the naive combination follows below.
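To make the three measures concrete, here is a rough NumPy sketch of the naive per-pixel combination. This is an illustration of the paper's idea, not OpenCV's internal implementation; the Gaussian width for well-exposedness and the lack of per-measure exponents are simplifying assumptions.

import cv2
import numpy as np

def naive_exposure_fusion(images, eps=1e-12):
    """Naively fuse uint8 BGR exposures using the three Mertens measures."""
    weights = []
    for img in images:
        f = img.astype(np.float32) / 255.0
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        # Contrast: magnitude of the Laplacian response.
        contrast = np.abs(cv2.Laplacian(gray, cv2.CV_32F))
        # Saturation: standard deviation across the B, G, R channels.
        saturation = f.std(axis=2)
        # Well-exposedness: Gaussian centered on mid-intensity (0.5),
        # multiplied across the channels.
        well_exposed = np.exp(-((f - 0.5) ** 2) / (2 * 0.2 ** 2)).prod(axis=2)
        weights.append(contrast * saturation * well_exposed + eps)
    # Normalize so the weights at each pixel sum to 1 across images.
    total = np.sum(weights, axis=0)
    result = sum((w / total)[..., None] * img.astype(np.float32)
                 for w, img in zip(weights, images))
    return np.clip(result, 0, 255).astype(np.uint8)

Running this on a bracketed sequence produces visible seams where the per-pixel weights switch between images, which is exactly why the paper blends with Laplacian pyramids instead.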

Fortunately, with OpenCV, this exposure fusion merge takes only two lines of code, using the MergeMertens class. Note that the name comes from Tom Mertens, the first author of the Exposure Fusion paper.
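In Python, those two lines look like this (a minimal sketch; images is assumed to be the aligned exposure sequence from the previous step):

import cv2

# MergeMertens needs no exposure times; it returns a float image
# with values roughly in the [0, 1] range.
merge_mertens = cv2.createMergeMertens()
exposure_fusion = merge_mertens.process(images)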

3 Code and result

Code address:

C++:

#include "pch.h"
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

using namespace cv;
using namespace std;

// Read Images
void readImages(vector<Mat> &images)
{
    int numImages = 16;
    static const char* filenames[] = {
        "image/memorial0061.jpg", "image/memorial0062.jpg", "image/memorial0063.jpg", "image/memorial0064.jpg",
        "image/memorial0065.jpg", "image/memorial0066.jpg", "image/memorial0067.jpg", "image/memorial0068.jpg",
        "image/memorial0069.jpg", "image/memorial0070.jpg", "image/memorial0071.jpg", "image/memorial0072.jpg",
        "image/memorial0073.jpg", "image/memorial0074.jpg", "image/memorial0075.jpg", "image/memorial0076.jpg"
    };
    for (int i = 0; i < numImages; i++)
    {
        Mat im = imread(filenames[i]);
        images.push_back(im);
    }
}

int main()
{
    // Read images
    cout << "Reading images ..." << endl;
    vector<Mat> images;
    readImages(images);

    // Align input images using median threshold bitmaps
    cout << "Aligning images ..." << endl;
    Ptr<AlignMTB> alignMTB = createAlignMTB();
    alignMTB->process(images, images);

    // Merge using Exposure Fusion (MergeMertens)
    cout << "Merging using Exposure Fusion ..." << endl;
    Mat exposureFusion;
    Ptr<MergeMertens> mergeMertens = createMergeMertens();
    mergeMertens->process(images, exposureFusion);

    // Save the result; MergeMertens outputs values in [0, 1], so scale to [0, 255]
    cout << "Saving output ... exposure-fusion.jpg" << endl;
    imwrite("exposure-fusion.jpg", exposureFusion * 255);

    return 0;
}
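Since the title of this article promises Python, here is a Python equivalent of the C++ program above: a minimal sketch assuming the same image/memorial00xx.jpg files are available.

import cv2
import numpy as np

def read_images():
    # The same 16 bracketed exposures as the C++ version above.
    filenames = ["image/memorial%04d.jpg" % i for i in range(61, 77)]
    return [cv2.imread(f) for f in filenames]

def main():
    print("Reading images ...")
    images = read_images()

    print("Aligning images ...")
    align_mtb = cv2.createAlignMTB()
    align_mtb.process(images, images)

    print("Merging using Exposure Fusion ...")
    merge_mertens = cv2.createMergeMertens()
    exposure_fusion = merge_mertens.process(images)

    print("Saving output ... exposure-fusion.jpg")
    # MergeMertens outputs floats in roughly [0, 1]; scale and clip to 8 bits.
    out = np.clip(exposure_fusion * 255, 0, 255).astype(np.uint8)
    cv2.imwrite("exposure-fusion.jpg", out)

if __name__ == "__main__":
    main()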
