contact request #1

Open
Bra1nsen opened this issue Sep 14, 2022 · 15 comments
@Bra1nsen

Hey fthaler, thanks a lot for sharing your work!

I am trying to estimate solar irradiance with ground-based sky imaging.

[Image: SKYCAM, multiple LDR sky images]

Sky recordings of clouds have the greatest dynamic range of all. I was thinking about merging ~15 images. There will also be heavily overexposed and heavily underexposed images. Is there some kind of weight function with which such images are sorted out directly? Furthermore, I wanted to ask how exactly your merging works, i.e. on which mathematics it is based?

Many greetings
Paul

@Bra1nsen (Author)

[Image: error2]

Is that normal?

@fthaler (Owner) commented Nov 6, 2022

> [Image: error2]
> Is that normal?

Definitely not, see my answer on issue #3.

@fthaler (Owner) commented Nov 6, 2022

The HDR merging algorithm of rawhdr is relatively simple: it tries to find parts of the image which are neither under-exposed nor clipped (currently using a simple heuristic). Then it compares the average pixel intensity of these pixels, leading to an estimate for the relative exposure compared to the second image (for which it does exactly the same). Then it merges the two images such that it takes the well-exposed regions of both and drops over- and under-exposed regions as much as possible.

The first image passed to rawhdr is taken as the reference image, that is, the exposure of the other images is scaled to match the first one.

This is a quite simple and physically accurate procedure, but note that it only works for images with a linear color space (that is, RAW or linear high-dynamic-range images). Compared to other methods, it does not rely on (often not perfectly accurate) camera EXIF information. To estimate the absolute solar irradiance, you probably need to incorporate this information – ISO, aperture size, exposure time – for the reference image.

In your case, the algorithm could fail if you have no region which is not totally overexposed, as rawhdr cannot detect the correct exposure then. But it might be relatively straightforward to either filter these images out before feeding them to rawhdr, or alternatively to filter them out inside rawhdr. So if you run into such an issue, I could try to add this filtering directly in rawhdr.
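
For illustration only (this is not rawhdr's actual code), a minimal Python sketch of the procedure described above could look like this, assuming two already-demosaiced linear float images with values in [0, 1]:

```python
import numpy as np

def well_exposed(img, lo=0.05, hi=0.95):
    # Heuristic mask: pixels that are neither under-exposed nor clipped
    return np.all((img > lo) & (img < hi), axis=-1)

def relative_exposure(reference, other):
    # Compare average intensity over pixels that are well exposed in both
    # images (assumes the two exposures share at least some such pixels)
    mask = well_exposed(reference) & well_exposed(other)
    return reference[mask].mean() / other[mask].mean()

def merge(reference, other):
    # Scale the second image to the reference exposure, then blend,
    # keeping the well-exposed regions of each image and dropping the rest
    scaled = other * relative_exposure(reference, other)
    w_ref = well_exposed(reference).astype(float)[..., None]
    w_other = well_exposed(other).astype(float)[..., None]
    total = np.maximum(w_ref + w_other, 1e-9)
    blended = (w_ref * reference + w_other * scaled) / total
    # Where neither image is well exposed, fall back to their average
    return np.where(w_ref + w_other > 0, blended, 0.5 * (reference + scaled))
```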

@Bra1nsen (Author) commented Nov 6, 2022

Nice to hear from you.
I'm still working on this project.

The dynamic range of the sky is gigantic, but with exposure time series fusion it becomes possible to capture it.

What did you create this project for? What was your goal?

By the way, my name is Paul.

@Bra1nsen (Author) commented Nov 6, 2022

> The HDR merging algorithm of rawhdr is relatively simple: it tries to find parts of the image which are neither under-exposed nor clipped (currently using a simple heuristic). Then it compares the average pixel intensity of these pixels, leading to an estimate for the relative exposure compared to the second image (for which it does exactly the same). Then it merges the two images such that it takes the well-exposed regions of both and drops over- and under-exposed regions as much as possible.

That sounds great!

@Bra1nsen (Author) commented Nov 6, 2022

> The first image passed to rawhdr is taken as the reference image, that is, the exposure of the other images is scaled to match the first one.

Why not use an average reference image, i.e. the average of all images together?

```python
import numpy as np
import imageio.v3 as iio
from skimage import exposure
from skimage.util import img_as_ubyte

# raw_1 ... raw_4 are the individual exposures as uint16 numpy arrays
stack = np.stack([raw_1, raw_2, raw_3, raw_4]).astype(np.float64)
hdr = stack.sum(axis=0)  # sum in float to avoid uint16 overflow

# Normalize according to exposure profile
equals_exp = img_as_ubyte(exposure.rescale_intensity(hdr, out_range=(0.0, 1.0)))
iio.imwrite('hdr.tga', equals_exp)
```

@Bra1nsen (Author) commented Nov 6, 2022

> To estimate the absolute solar irradiance, you probably need to incorporate this information – ISO, aperture size, exposure time – for the reference image.

Basically I need a function to determine:

N: the number of images
e_i: the optimal exposure time for every image to be taken

Goal: the optimal point in the trade-off: minimal time / computational effort <-> HDR <-> maximal solar range (a sketch of such a function follows below).
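
A minimal sketch of such a function, assuming the usable per-shot dynamic range of the sensor (in stops) and the target scene range are known; all names and stop values below are illustrative assumptions, not measured values:

```python
import math

def exposure_series(base_exposure_s, scene_range_stops, per_shot_range_stops, overlap_stops=1.0):
    # Geometric bracketing: each shot covers per_shot_range_stops of scene range,
    # with overlap_stops of overlap so consecutive frames share well-exposed pixels
    # (which the relative-exposure estimation needs).
    step = per_shot_range_stops - overlap_stops
    n = max(1, math.ceil(scene_range_stops / step))
    return n, [base_exposure_s / 2 ** (i * step) for i in range(n)]

# Illustrative numbers: ~14 stops of sky range, ~8 usable stops per RAW frame
n_images, exposure_times = exposure_series(1 / 30, scene_range_stops=14, per_shot_range_stops=8)
```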

@Bra1nsen (Author) commented Nov 6, 2022

> In your case, the algorithm could fail if you have no region which is not totally overexposed

Could you please explain that further? What kind of filter and why?

@fthaler (Owner) commented Nov 11, 2022

> Nice to hear from you. I'm still working on this project.
>
> The dynamic range of the sky is gigantic, but with exposure time series fusion it becomes possible to capture it.
>
> What did you create this project for? What was your goal?
>
> By the way, my name is Paul.

Hi Paul :)

I use it mainly for classical HDR photography and was not happy with other available open-source HDR merging solutions. All algorithms I found/tried generated banding artifacts in smooth areas or introduced strange color shifts. There was no software available that uses the fact that RAW images use a linear color space and are thus quite easy to merge in a physically meaningful way. So I created my own…

Cheers, Felix

@fthaler (Owner) commented Nov 11, 2022

> Why not use an average reference image, i.e. the average of all images together?

My camera can be configured to first expose the first image in an exposure range as normal and then take all under- and overexposed images. Thus, when merging the exposure stack, it’s convenient to use the first image’s exposure as reference. Other approaches are of course possible.

@fthaler (Owner) commented Nov 11, 2022

> In your case, the algorithm could fail if you have no region which is not totally overexposed
>
> Could you please explain that further? What kind of filter and why?

For example, if all pixels are overexposed (e.g. pure white), the given algorithm has no chance to correctly estimate the image's relative exposure, and there is no value in using that image at all. So it should be filtered out before passing it to rawhdr.
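
A minimal sketch of such a pre-filter, assuming 12-bit RAW data as plain numpy arrays; `raw_frames`, `white_level`, and the thresholds are illustrative assumptions:

```python
import numpy as np

def is_usable(raw, white_level=4095, margin=0.02, max_bad_fraction=0.99):
    # Reject frames in which (almost) every pixel is clipped or black,
    # since no relative exposure can be estimated from them
    overexposed = np.mean(raw >= white_level * (1 - margin))
    underexposed = np.mean(raw <= white_level * margin)
    return overexposed < max_bad_fraction and underexposed < max_bad_fraction

# keep only frames that rawhdr can still match against the others
usable_frames = [raw for raw in raw_frames if is_usable(raw)]
```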

@Bra1nsen (Author)

> There was no software available that uses the fact that RAW images use a linear color space and are thus quite easy to merge in a physically meaningful way. So I created my own…
>
> Cheers, Felix

That's exactly what I am looking for; I'm really grateful for your work. Today I plotted the raw camera response function, and it is indeed pretty linear:

[Plot: raw camera response function of the IMX477 (crf_imx477)]
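
For reference, a linearity check like this can be scripted in a few lines; the exposure times and the `frames` list below are placeholders, not the actual measurement:

```python
import numpy as np
import matplotlib.pyplot as plt

# frames: list of raw IMX477 Bayer arrays, one per shutter time (placeholder data)
exposure_times_s = np.array([1/4000, 1/2000, 1/1000, 1/500, 1/250, 1/125])
mean_values = np.array([frame.mean() for frame in frames])

plt.plot(exposure_times_s, mean_values, "o-")
plt.xlabel("exposure time [s]")
plt.ylabel("mean raw pixel value")
plt.title("IMX477 raw response (should be close to linear)")
plt.show()
```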

@Bra1nsen (Author)

> My camera can be configured to first expose the first image in an exposure range as normal and then take all under- and overexposed images. Thus, when merging the exposure stack, it’s convenient to use the first image’s exposure as reference. Other approaches are of course possible.

I guess I will just choose the frame with the most average histogram as the reference image :)
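
A one-function sketch of that heuristic (the white_level of 4095 assumes 12-bit data; purely illustrative):

```python
import numpy as np

def pick_reference(frames, white_level=4095):
    # Choose the frame whose median raw value is closest to mid-range,
    # i.e. the "most average" histogram
    medians = np.array([np.median(f) for f in frames])
    return frames[int(np.argmin(np.abs(medians - white_level / 2)))]
```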

@Bra1nsen (Author)

Basically your code operates on an exposure weight function, is that correct?

[Image: exposure weight function]

@fthaler (Owner) commented Dec 7, 2022

I'm not sure what the definition of an exposure weight function is. My algorithm scales all pixel intensities of an image uniformly, independent of their value. It only uses an exposure-based weight function for merging two (or more) images.
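
For illustration, a common shape for such an exposure-based merging weight is a hat function that is large for mid-tones and falls off towards the under- and over-exposed ends (this is only an example, not rawhdr's actual weight):

```python
import numpy as np

def exposure_weight(value, lo=0.05, hi=0.95):
    # Hat-shaped weight on linear intensities in [0, 1]: zero at the
    # under- and over-exposed ends, maximal for mid-tones
    t = np.clip((value - lo) / (hi - lo), 0.0, 1.0)
    return np.sin(np.pi * t) ** 2
```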
