
How to use OpenCV for high dynamic range (HDR) imaging in Python

2025-03-31 Update From: SLTechnology News&Howtos shulou NAV: SLTechnology News&Howtos > Development >


This article explains in detail how to use OpenCV for high dynamic range (HDR) imaging in Python. The content is detailed, the steps are clear, and the details are handled carefully. I hope this article helps resolve your doubts; follow along step by step and learn something new.

1 Background

1.1 What is high dynamic range (HDR) imaging?

Most digital cameras and monitors capture or display color images as 24-bit matrices. Each of the three color channels has 8 bits, so the pixel values of each channel lie between 0 and 255. In other words, an ordinary camera or display has a limited dynamic range.

However, there is a very wide range of colors in the world around us. When the lights are off, the garage darkens; in the sun, the garage looks very bright. Even if these extreme situations are not taken into account, 8 bits is hardly enough to capture the scene in daily situations. Therefore, the camera attempts to estimate the light and automatically set the exposure so that the most useful parts of the image have a good dynamic color range, while the too dark and too bright parts are set to 0 and 255, respectively.
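To make the limited dynamic range concrete, here is a small NumPy sketch (the radiance values and gain are made up for illustration) of how an 8-bit pixel clips scene luminance outside the exposure window the camera chooses:

```python
import numpy as np

def expose_8bit(radiance, gain):
    """Map relative scene radiance to 8-bit pixel values; anything the
    gain pushes past 255 saturates, anything below 1 rounds to black."""
    return np.clip(radiance * gain, 0, 255).astype(np.uint8)

# Hypothetical relative luminances: deep shadow, mid-tone, bright sky.
radiance = np.array([0.001, 0.5, 20.0])
px = expose_8bit(radiance, gain=255)
print(px)   # shadow crushed to 0, sky saturated at 255
```

Whatever gain the auto-exposure picks, some part of a high-contrast scene ends up at 0 or 255.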

In the following image, the image on the left is a normally exposed image. Notice that the sky in the background has completely disappeared because the camera decides to use a setting that allows the child to be photographed correctly and the bright sky to be ignored. The image on the right is the HDR image generated by iPhone.

How does the iPhone capture HDR images? It actually takes three images at three different exposures. The images are taken in quick succession, so there is almost no offset between the three shots. The three images are then combined to produce an HDR image.

1.2 How does high dynamic range (HDR) imaging work?

In this section, we will describe the steps to create an HDR image using OpenCV.

1) take multiple images with different exposure settings

When we take pictures with a camera, there are only 8 bits per channel to represent the dynamic range (luminance range) of the scene. But we can take multiple images of the scene at different exposures by changing the shutter speed. Most SLR cameras have a feature called Auto Exposure Bracketing (AEB), which lets us take multiple photos at different exposures at the press of a button. Using AEB on a camera or an auto-bracketing app on a phone, we can take the photos quickly, one after another, so the scene does not change. When we use HDR mode on the iPhone, it takes three photos (Android users can download the supercamera software).

1) Underexposed image: this image is darker than the correctly exposed image. The goal is to capture the very bright parts of the scene.

2) Correctly exposed image: this is the conventional image the camera takes based on its estimated illuminance.

3) Overexposed image: this image is brighter than the correctly exposed image. The goal is to capture the very dark parts of the scene.

However, if the dynamic range of the scene is very large, we can take more than three pictures to compose the HDR image. In this tutorial, we will use four images taken with exposure times of 1/30, 0.25, 2.5, and 15 seconds. The thumbnails are shown below.
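As an illustration with made-up scene values, the effect of bracketing can be simulated in NumPy, assuming a linear response: each exposure time captures a different slice of the scene's radiance range.

```python
import numpy as np

def simulate_bracket(scene, times):
    """Simulate shots of the same (linear) scene at several shutter times:
    short exposures keep highlights, long exposures lift shadows."""
    return [np.clip(scene * t * 255, 0, 255).astype(np.uint8) for t in times]

scene = np.array([0.04, 1.0, 20.0])     # shadow, mid-tone, highlight (made up)
times = [1 / 30.0, 0.25, 2.5, 15.0]     # the four exposures used in this tutorial
shots = simulate_bracket(scene, times)
for t, shot in zip(times, shots):
    print(f"{t:6.3f} s -> {shot}")
```

Notice that only the shortest exposure records the highlight unsaturated, and only the longest exposure lifts the shadow away from 0.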

Information about the exposure time and other settings used by SLR cameras or mobile phones is usually stored in the EXIF metadata of the JPEG file. You can view the EXIF metadata stored in JPEG files on Windows and Mac as follows.

On Windows, right-click the picture, then choose Properties > Details to see the image details, as follows:

Alternatively, you can use my favorite EXIF command-line utility, exiftool.

2 Code

2.1 Runtime environment configuration

Because some of the code in this article involves OpenCV's non-free modules (the createTonemapDurand algorithm is patent-encumbered; this article may not strictly need that code), you need to enable OPENCV_ENABLE_NONFREE when compiling opencv and opencv_contrib in order to use it.

If it is python, just install the specified version of opencv directly:

pip install opencv-contrib-python==3.4.2.17

To use the non-free code in C++, the header file and namespace are as follows:

#include "opencv2/xphoto.hpp"
using namespace cv;
using namespace xphoto;

2.2 Read images and exposure times

Enter the images, exposure times, and number of images manually.

The code is as follows:

C++:

/**
 * @brief Read images and exposure times
 * @param images
 * @param times
 */
void readImagesAndTimes(vector<Mat> &images, vector<float> &times)
{
    // Number of images
    int numImages = 3;
    // Image exposure times (seconds)
    static const float timesArray[] = {1.0 / 25, 1.0 / 17, 1.0 / 13};
    times.assign(timesArray, timesArray + numImages);
    static const char* filenames[] = {"1_25.jpg", "1_17.jpg", "1_13.jpg"};
    // Read images
    for (int i = 0; i < numImages; i++)
    {
        Mat im = imread(filenames[i]);
        images.push_back(im);
    }
}

Python:

def readImagesAndTimes():
    # List of exposure times (seconds)
    times = np.array([1/30.0, 0.25, 2.5, 15.0], dtype=np.float32)
    # List of image filenames
    filenames = ["img_0.033.jpg", "img_0.25.jpg", "img_2.5.jpg", "img_15.jpg"]
    images = []
    for filename in filenames:
        im = cv2.imread(filename)
        images.append(im)
    return images, times

2.3 Image alignment

Misalignment of the source images used to compose an HDR image can cause severe artifacts. In the figure below, the left image is an HDR image composed from unaligned images, and the right image was composed from aligned images. Zooming into a portion of the image, marked with red circles, we see severe ghosting artifacts in the left image.

Of course, when shooting the photos for an HDR image, professional photographers mount the camera on a tripod. They also use a feature called mirror lock-up to reduce additional vibration. Even then, the images may not be perfectly aligned, because there is no way to guarantee a vibration-free environment. The alignment problem gets much worse when the images are taken with a handheld camera or phone.

Fortunately, OpenCV provides an easy way to align these images using AlignMTB. The algorithm converts all images to median threshold bitmaps (MTB). An image's MTB is generated by assigning 1 to pixels brighter than the median luminance and 0 to the rest. An MTB does not change as the exposure time changes, so the MTBs can be aligned without requiring us to specify the exposure times.

The code is as follows:

C++:

// Align input images
Ptr<AlignMTB> alignMTB = createAlignMTB();
alignMTB->process(images, images);
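The median threshold bitmap idea itself is easy to sketch in NumPy (a toy illustration, not OpenCV's AlignMTB implementation): scaling all intensities, which is roughly what a change of exposure does, leaves the bitmap unchanged.

```python
import numpy as np

def median_threshold_bitmap(gray):
    """Assign 1 to pixels brighter than the median luminance, 0 to the rest."""
    return (gray > np.median(gray)).astype(np.uint8)

gray = np.array([[10.0, 200.0], [30.0, 250.0]])    # toy 1-channel "image"
mtb_long = median_threshold_bitmap(gray)
mtb_short = median_threshold_bitmap(gray * 0.5)    # simulated shorter exposure
print(mtb_long)
print(np.array_equal(mtb_long, mtb_short))         # True: MTB ignores exposure
```

Because the bitmaps agree across exposures, they can be shifted against each other to find the alignment offset without knowing the exposure times.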

Python:

# Align input images
alignMTB = cv2.createAlignMTB()
alignMTB.process(images, images)

2.4 Recover the camera response function

The response of a typical camera is not linear in the brightness of the scene. What does that mean? Suppose a camera captures two objects, one of which is twice as bright as the other in the real world. When you measure the pixel intensities of the two objects in the photo, the pixel value of the brighter object will not be twice that of the darker object! Without estimating the camera response function (CRF), we will not be able to merge the images into a single HDR image. What does it even mean to merge multiple exposure images into an HDR image?

Consider just one pixel, at some location (x, y) in the images. If the CRF were linear, the pixel value would be directly proportional to the exposure time, unless the pixel is too dark (that is, close to 0) or too bright (that is, close to 255) in a particular image. We can filter out these bad pixels (too dark or too bright), estimate the brightness at a pixel by dividing the pixel value by the exposure time, and then average this brightness estimate across all images where the pixel is not bad. Doing this for all pixels gives a single image in which every pixel is an average of the "good" pixels. But the CRF is not linear, so we need to linearize the image intensities before we can do this averaging, and that requires estimating the CRF.
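The procedure in the paragraph above (divide each good pixel by its exposure time, then average) can be sketched in NumPy under the simplifying assumption that the response really is linear; the thresholds and values here are made up:

```python
import numpy as np

def naive_linear_merge(images, times, lo=5, hi=250):
    """Estimate radiance per pixel assuming a LINEAR CRF: divide by the
    exposure time and average over exposures where the pixel is neither
    too dark (<= lo) nor too bright (>= hi)."""
    num = np.zeros(images[0].shape, dtype=np.float64)
    cnt = np.zeros(images[0].shape, dtype=np.float64)
    for im, t in zip(images, times):
        im = im.astype(np.float64)
        good = (im > lo) & (im < hi)
        num += np.where(good, im / t, 0.0)   # per-image radiance estimate
        cnt += good
    return num / np.maximum(cnt, 1)          # average over usable exposures

# A one-pixel "scene" of radiance 100 shot at 0.5 s and 2.0 s: the second
# shot saturates, so only the first contributes.
imgs = [np.array([[50.0]]), np.array([[255.0]])]
merged = naive_linear_merge(imgs, [0.5, 2.0])
print(merged)   # -> [[100.]]
```

The real pipeline does the same averaging, but only after the CRF has been used to linearize the pixel values.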

The good news is that if we know the exposure time of each image, we can estimate the CRF from the image. Like many problems in computer vision, the problem of finding CRF is set as an optimization problem, in which the goal is to minimize the objective function composed of data items and smooth terms. These problems are usually reduced to linear least squares problems solved by singular value decomposition (SVD), which is part of all linear algebraic packages. Details of CRF recovery algorithm can be found in Recovering High Dynamic Range Radiance Maps from Photographs.

Finding the CRF takes just two lines of code in OpenCV, using either CalibrateDebevec or CalibrateRobertson. We will use CalibrateDebevec in this tutorial.

The code is as follows:

C++:

// Obtain Camera Response Function (CRF)
Mat responseDebevec;
Ptr<CalibrateDebevec> calibrateDebevec = createCalibrateDebevec();
calibrateDebevec->process(images, responseDebevec, times);

Python:

# Obtain Camera Response Function (CRF)
calibrateDebevec = cv2.createCalibrateDebevec()
responseDebevec = calibrateDebevec.process(images, times)

The following image shows the CRF restored using the red, green, and blue channels.

2.5 Merge images

Once the CRF has been estimated, we can merge the exposure images into one HDR image using MergeDebevec. The C++ and Python code is shown below.

C++:

// Merge images into an HDR linear image
Mat hdrDebevec;
Ptr<MergeDebevec> mergeDebevec = createMergeDebevec();
mergeDebevec->process(images, hdrDebevec, times, responseDebevec);
// Save HDR image
imwrite("hdrDebevec.hdr", hdrDebevec);

Python:

# Merge images into an HDR linear image
mergeDebevec = cv2.createMergeDebevec()
hdrDebevec = mergeDebevec.process(images, times, responseDebevec)
# Save HDR image
cv2.imwrite("hdrDebevec.hdr", hdrDebevec)

The HDR image saved above can be loaded in Photoshop and tone mapped. An example is shown below.

2.6 Tone mapping

We have now merged the exposure images into a single HDR image. Can you guess the minimum and maximum pixel values of this image? For pitch-black conditions, the minimum is obviously 0. The theoretical maximum? Infinity! In practice, the maximum differs from case to case: if the scene contains a very bright light source, we will see a very large maximum. Although we have used multiple images to recover the relative brightness information, the challenge we now face is to squeeze this information into a 24-bit image for display.

Tone mapping: the process of converting a high dynamic range (HDR) image to an 8-bit image per channel while preserving as much detail as possible is called tone mapping.
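As a minimal illustration of what a tone mapper must do, here is a toy global operator (just normalization plus gamma compression, far simpler than the algorithms discussed below):

```python
import numpy as np

def gamma_tonemap(hdr, gamma=2.2):
    """Normalize radiance to [0, 1], apply gamma, quantize to 8 bits.
    A toy global operator, not one of OpenCV's tone mappers."""
    ldr = np.power(hdr / hdr.max(), 1.0 / gamma)
    return np.clip(ldr * 255, 0, 255).astype(np.uint8)

# Made-up radiance values spanning four orders of magnitude.
hdr = np.array([[0.01, 1.0], [10.0, 100.0]], dtype=np.float32)
ldr = gamma_tonemap(hdr)
print(ldr)   # all values squeezed into 0..255, ordering preserved
```

Real operators preserve far more local detail than this; the point is only that the output must be an ordinary 8-bit-per-channel image.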

There are several tone mapping algorithms, and OpenCV implements four of them. Keep in mind that there is no single correct way to do tone mapping. In general, we want to see more detail in a tone-mapped image than in any of the exposure images. Sometimes the goal of tone mapping is to produce realistic images, and often the goal is to produce surreal images. The algorithms implemented in OpenCV tend to produce realistic, and therefore less dramatic, results.

Let's look at the various options. Some common parameters for different tone mapping algorithms are listed below.

1) gamma: this parameter compresses the dynamic range by applying gamma correction. When gamma equals 1, no correction is applied. A gamma of less than 1 darkens the image, while a gamma greater than 1 brightens it.

2) saturation: this parameter is used to increase or decrease the amount of saturation. When saturation is high, the colors are richer and more intense. A saturation value close to zero fades the colors toward grayscale.

3) contrast: controls the contrast of the output image (i.e. log(maxPixelValue / minPixelValue)).
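Taking the definition above literally, the contrast of an image is just the log ratio of its extreme pixel values (photographers often quote the base-2 version, in stops); the numbers here are made up:

```python
import numpy as np

max_px, min_px = 256.0, 0.25        # hypothetical radiance extremes
print(np.log2(max_px / min_px))     # 10.0 stops of dynamic range
print(np.log10(max_px / min_px))    # ~3 orders of magnitude
```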

Let's explore the four tone mapping algorithms available in OpenCV.

Drago Tonemap

The parameters of Drago Tonemap are as follows:

createTonemapDrago(float gamma = 1.0f, float saturation = 1.0f, float bias = 0.85f)

Here, bias is the value of the bias function, in the range [0, 1]. Values from 0.7 to 0.9 usually give the best results; the default is 0.85. The parameters were obtained by trial and error. The final output is multiplied by 3 simply because it gave the most satisfying results. For more technical details, see:
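The bias parameter enters Drago's operator through its bias power function; here is a sketch of just that function (from Drago et al.'s adaptive logarithmic mapping paper, not OpenCV internals):

```python
import numpy as np

def bias_fn(t, bias=0.85):
    """Drago's bias power function b(t) = t^(ln(bias)/ln(0.5)).
    By construction b(0.5) = bias, so the parameter sets the curve's
    value at mid-level luminance."""
    return np.power(t, np.log(bias) / np.log(0.5))

print(bias_fn(0.5, bias=0.85))   # equals the bias value itself at t = 0.5
```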

The results are as follows:

Durand Tonemap

The parameters of Durand Tonemap are as follows:

createTonemapDurand(float gamma = 1.0f, float contrast = 4.0f, float saturation = 1.0f, float sigma_space = 2.0f, float sigma_color = 2.0f)

The algorithm is based on decomposing the image into a base layer and a detail layer. The base layer is obtained using an edge-preserving filter called a bilateral filter. sigma_space and sigma_color are the parameters of the bilateral filter, controlling the amount of smoothing in the spatial and color domains, respectively. For more technical details, see:
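To see why a bilateral filter yields a useful base layer, here is a toy 1-D version (illustrative only; the parameter values are made up, and OpenCV's implementation is far more efficient):

```python
import numpy as np

def bilateral_1d(x, sigma_space=2.0, sigma_color=0.5, radius=4):
    """Weight neighbors by both spatial distance and intensity difference,
    so smoothing stops at large jumps: edges survive, texture is smoothed."""
    out = np.empty_like(x)
    for i in range(len(x)):
        j = np.arange(max(0, i - radius), min(len(x), i + radius + 1))
        w = (np.exp(-(j - i) ** 2 / (2 * sigma_space ** 2)) *
             np.exp(-(x[j] - x[i]) ** 2 / (2 * sigma_color ** 2)))
        out[i] = np.sum(w * x[j]) / np.sum(w)
    return out

step = np.array([0.0] * 8 + [5.0] * 8)    # a signal with one hard edge
base = bilateral_1d(step)                 # base layer: edge preserved
detail = step - base                      # detail layer: the residual
print(base.round(3))
```

The base layer's dynamic range can then be compressed aggressively while the detail layer is added back unchanged, which is the core of Durand's method.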

The results are as follows:

Reinhard Tonemap

The parameters of Reinhard Tonemap are as follows:

createTonemapReinhard(float gamma = 1.0f, float intensity = 0.0f, float light_adapt = 1.0f, float color_adapt = 0.0f)

The parameter intensity should be in the range [-8, 8]. The higher the intensity value, the brighter the result. The parameter light_adapt controls light adaptation and is in the range [0, 1]. A value of 1 indicates adaptation based only on pixel values, and a value of 0 indicates global adaptation; intermediate values give a weighted combination of the two. The parameter color_adapt controls chromatic adaptation and is in the range [0, 1]. If the value is set to 1, the channels are processed independently; if set to 0, each channel receives the same level of adaptation; intermediate values give a weighted combination of the two. For more technical details, see:
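At the heart of Reinhard's operator is the simple global mapping L/(1+L); this sketch shows only that core idea (OpenCV's TonemapReinhard adds the adaptation terms described above):

```python
import numpy as np

def reinhard_simple(lum):
    """Compress luminance with L/(1+L): near-linear for dark values,
    asymptotically approaching 1 for very bright ones."""
    return lum / (1.0 + lum)

# Made-up luminances: a shadow, a mid-tone, and a very bright source.
r = reinhard_simple(np.array([0.01, 1.0, 100.0]))
print(r)
```

Dark pixels pass through almost unchanged, while arbitrarily bright pixels are squeezed below 1, ready for 8-bit quantization.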

The results are as follows:

Mantiuk Tonemap

The parameters of Mantiuk Tonemap are as follows:

createTonemapMantiuk(float gamma = 1.0f, float scale = 0.7f, float saturation = 1.0f)

Scale is the contrast scale factor. Values from 0.6 to 0.9 produce the best results. For more technical details, see:

The results are as follows:

For all the tone mapping codes above, see:

C++:

// Tonemap using Drago's method to obtain a 24-bit color image
Mat ldrDrago;
Ptr<TonemapDrago> tonemapDrago = createTonemapDrago(1.0, 0.7);
tonemapDrago->process(hdrDebevec, ldrDrago);
ldrDrago = 3 * ldrDrago;
imwrite("ldr-Drago.jpg", ldrDrago * 255);
