Why does the chromaticity diagram look like that? (jlongster.com)
209 points by samwillis 18 hours ago | 63 comments





In my opinion, plotting chromaticity on a Cartesian grid — by far the most common way — is pretty misleading, since chromaticity diagrams use barycentric coordinates (and to be clear, I blame the institution, not the author). The effect is that the shape of the gamut looks skewed, but only because of how it's plotted; the weird skewedness of a typical XYZ chromaticity diagram doesn't represent anything real about the data.

Instead, a chromaticity diagram is better thought of as a 2D planar slice of a 3D color space, specifically the slice through all three standard unit vectors. From this conception, it's much more natural to plot a chromaticity diagram in an equilateral triangle, such as the diagram at [1]. A plot in a triangle makes it clear, for instance, that the full color gamut in XYZ space isn't some arbitrary, weird, squished shape, but instead was intentionally chosen in a way that fills the positive octant pretty well given the constraints of human vision.

[1]: https://physics.stackexchange.com/questions/777501/why-is-th...
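
For anyone who wants to try plotting it that way, here's a minimal sketch (plain Python; the function name and triangle orientation are my own choices) of projecting an XYZ color onto the X+Y+Z=1 plane and mapping the barycentric coordinates into an equilateral triangle:

    import math

    def xyz_to_triangle(X, Y, Z):
        """Barycentric (x, y, z) mapped to an equilateral triangle with
        vertices Z=(0, 0), X=(1, 0) and Y=(0.5, sqrt(3)/2)."""
        s = X + Y + Z
        x, y = X / s, Y / s          # z = 1 - x - y
        u = x + y / 2.0              # horizontal position
        v = y * math.sqrt(3) / 2.0   # vertical position
        return u, v

Each of the three unit vectors lands on a corner, so nothing about the plot privileges one axis over another.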


Here's a video that shows the concept. Each frame shows the allowable colors for a particular brightness in Rec. 709 YCbCr space.

https://www.w6rz.net/chromacity.mp4


That particular version of the chromaticity diagram makes it look like the colors missing from your display are various shades of laser-pointer green, as opposed to all the shades of red and blue that are missing because really saturated red and blue primaries are too dim (per unit of energy) to use.

See https://nanosys.com/blog-archive/2012/08/14/color-space-conf...

I learned a lot more about color management than I wanted to know in the process of making red-cyan stereograms, because I found when I asked for sRGB red I was getting something like (180,16,16) on my wide-gamut monitor, which resulted in serious crosstalk between the channels.

Right now I am working with a seamstress friend on custom printed fabrics and I have a flower print where yellow somehow turned to orange in the midst of processing the image, and I want to get it debugged and thoroughly proofed before I send out the order... I am still learning more than I want to know about color management.


> as opposed to all the shades of red and blue that are missing because really saturated red and blue primaries are too dim (per unit of energy) to use.

Would another way to put that be that the chromaticity diagram could keep going southeastward (i.e. the XYZ color space could have the X and Z response functions extended leftward and rightward), but because the frequencies continue along the spectral locus, that area of the diagram would necessarily be made up mostly of infrared and ultraviolet frequencies that we can't see?


>Right now I am working with a seamstress friend on custom printed fabrics and I have a flower print where yellow somehow turned to orange in the midst of processing the image

That's why Pantone makes so much money.


And, importantly, that's why Pantone isn't just a freeloader making money off of nothing the way that some of the more clickbaity parts of the internet represent them. They're not solving an easy problem; if they were, they wouldn't get paid.

Aren’t they (present humans) simply profiting from work done decades ago (by past humans, not them) through patents or other kinds of IP / protections granted by governments? Surely it was original and useful back then, but now it’s part of humanity’s knowledge base.

They solve a very real problem with most of their products. Universal physical references and supplies are great.

But charging for the libraries that just list a basic sRGB or CMYK code for each Pantone color is a pain-in-the-ass leech move.


Here's another really interesting exploration of color spaces: https://ericportis.com/posts/2024/okay-color-spaces/

I prefer this. The failure to discuss Lab and OKLab in the main link is quite odd.

Also, I'd mention to those who think that violet/magenta aren't "real" colors that the red X decays more slowly than the blue Z at short wavelengths so you can get saturated violet/magenta single wavelength colors (not well represented on the standard chroma charts) below 400nm at high power. Of course they aren't efficient for monitors (even blue isn't) and they're dangerous to look at for any length of time. But if you see a (single wavelength) violet/magenta laser, it's time to look away or shut your eyes.


Yeah, that article is better. I'm the author, and I wrote this only for myself as I studied the topic; it's not great as a way to describe it to others.

I wanted to start from the very beginning, and as far as I know Lab and OKLab didn't come until later. Working through the 1931 studies and such was a start, and I wanted to later bring up all the other things we've learned since then, but I haven't had time to write more about it.


> What do you think a negative red light source means?

It means that the subject turned a dial to add red light to the color being matched.

Basically, you have an unknown color C, and then an R+G+B color. Sometimes, you can’t match it, so you try matching C+R = G+B. This results in “negative” R, because you’re adding R to the other side of the equation.

The same happens with green and blue, but to a lesser extent.
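
A toy example of the bookkeeping, with made-up numbers (the real 1931 data is wavelength-by-wavelength):

    # Suppose matching a monochromatic cyan C required adding red
    # to the *test* side: C + 0.3R = 0.8G + 0.6B.
    r_added_to_test = 0.3
    g, b = 0.8, 0.6
    # Rearranging gives C = -0.3R + 0.8G + 0.6B, so the red
    # color matching function is negative at this wavelength.
    r = -r_added_to_test
    print(f"C = {r}R + {g}G + {b}B")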


This is fantastic. It gave me an idea about colors, perception, and gamut.

Put simply, imagine that there is a combination of wavelengths of light that causes you to perceive the smell of ripe cheese, and another that causes you to think that there is a bear behind you. Now your diagrams must be filled in not only with colored pixels but also include a small picture of a cheese and a bear at the points where those specific perceptions occur.

I think, in real life, this is what magenta is: a non-spectral color that's more of a feeling or sensation that, in order for our brains not to get too overwhelmed, we simply perceive as another color. This is also, I believe, close to describing a real phenomenon for those living with varying degrees of synesthesia or, if you will forgive a play on words, those on the synesthesia spectrum.


It's probably good to start with XYZ, but we have much better colorspaces now that do a better job at correlating with our vision.

Mainly CIE 1976 L*u*v*, and even more recently ICtCp from Dolby research.


The xyY colour space is designed such that the colours of light you get by blending two points all lie on the line between the two corresponding points. This makes it extremely helpful when you want to figure out which colours you can make with a particular set of primaries. Similarly, you can draw the colours corresponding to pure wavelengths and figure out the entire space of physically possible colours by taking its convex closure.

These features are not really replicable in any other colour space, at best you can use a linear transformation of it (which XYZ already is, and it has almost all properties you could want of a choice of basis).
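
A quick numeric check of the straight-line property (plain Python; the XYZ values here are arbitrary):

    def chromaticity(X, Y, Z):
        s = X + Y + Z
        return X / s, Y / s

    A = (20.0, 30.0, 10.0)   # two arbitrary lights, as XYZ tristimulus values
    B = (5.0, 15.0, 40.0)

    # Physically mixing light is additive in XYZ.
    mix = tuple(a + b for a, b in zip(A, B))

    xa, ya = chromaticity(*A)
    xb, yb = chromaticity(*B)
    xm, ym = chromaticity(*mix)

    # The mix lies on the segment between the two source chromaticities;
    # its position is weighted by each light's share of X+Y+Z.
    t = sum(B) / (sum(A) + sum(B))
    assert abs(xm - ((1 - t) * xa + t * xb)) < 1e-12
    assert abs(ym - ((1 - t) * ya + t * yb)) < 1e-12

The same weighting argument is why a display's gamut is exactly the triangle of its three primaries' chromaticities.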


That's true. And I will add that the viewable color gamut of a display can be depicted with a simple triangle on the xyY plot. All you need to know are the three chromaticity values for the red, green and blue phosphors — they make up the three corners of the triangle.

I don't think starting with XYZ and color matching functions is a good idea. LMS and cone response functions are a more fundamental and intuitive description of human color response, so if you're going to bother with XYZ at all, you should arrive there from first principles, via LMS.

And when you're done with XYZ, check out XYB used in JPEG XL and jpegli:

https://giannirosato.com/blog/post/jpegli-xyb/


CAM-16. When in doubt, ask the color scientists :)


No, it's not, by definition. It's one matrix multiplication to do an approximation of it. More here: https://news.ycombinator.com/item?id=41081832

The only claim to superiority it makes is gradients, and that's a category error: they blend polar-opposite hues in the Cartesian space (i.e. x/y/z), rather than polar (i.e. h/s/l). Opposite hues mean lerping in Cartesian brings it through the center of the circle, at 0 saturation. Thus, blue and yellow do combine to an off-white. Needing to engineer around that indicates something fundamentally off, let alone that it is better. I don't ascribe ill intent, but I do worry very much about how widely this is misunderstood.
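
To make the category error concrete, here's a minimal sketch (plain Python; schematic a/b values rather than any specific space): lerping two opposite hues through Cartesian coordinates passes through zero chroma, while a polar interpolation keeps chroma up.

    import math

    # Schematic opponent-axis coordinates: blue and yellow sit on opposite
    # sides of the hue circle (roughly negated a/b in a Lab-like space).
    blue   = (-0.1, -0.3)   # (a, b)
    yellow = ( 0.1,  0.3)

    # Cartesian midpoint: chroma collapses to ~0 (a gray).
    mid = ((blue[0] + yellow[0]) / 2, (blue[1] + yellow[1]) / 2)
    print(math.hypot(*mid))   # 0.0 -> the gradient passes through gray

    # Polar midpoint: keep chroma, interpolate the hue angle instead.
    c = math.hypot(*blue)                      # both have the same chroma here
    h1, h2 = (math.atan2(b, a) for a, b in (blue, yellow))
    h = (h1 + h2) / 2                          # naive; real code handles wraparound
    print(c * math.cos(h), c * math.sin(h))    # still fully saturated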


Does anyone know of a nice "pedagogical" color space? That is, one optimized for teaching and learning, for correctness rather than for simple math? Where the space's highly-noticeable characteristics are actual features of human perception, rather than the usual mess of "nope, that too is a model artifact" (mostly from optimizing for computation). And full-gamut, well behaved out to spectral locus. And with at least somewhat linear hues and color combination. Sort of the Munsell niche, but full gamut, and this century.

I wasn't able to find anything even close, for a "maybe teach color better by emphasizing spectra?" side project, so I kludged. CAM16UCS as state-of-the-art for perceptual color, untwisted with Jzazbz for linear hues (it also sanity checked absolute luminosity), with a rather-unprincipled mashing down of CAM's IIUC-non-perceptual near-locus silly blue tail. Implemented as lookup tables. If there is any related work out there, I'd love to hear of it. Tnx.


> Does anyone know of a nice "pedagogical" color space? Where the space's highly-noticeable characteristics are actual features of human perception

When talking with students about color, I find the HSL space the easiest to employ. From a color-maths point of view I have been told that it is very messy, which is one reason why Adobe stopped using it in version 3 of Photoshop. But from the perceptual point of view, it is the artists' favorite.

Perhaps a better option is the Munsell color space. Munsell chopped up the entirety of the color domain into thousands of small chunks. The distance between adjacent chunks was a single unit of 'barely perceptible difference', which he established through meticulous user testing. Hence the green domain is much larger than the yellow. The story behind his development of this space makes for fascinating reading. He was an artist (a painter), yet his work paved the way for more modern spaces.


CAM16 (as opposed to CAM16-UCS) is perception based. It calculates chroma, lightness, and hue, and is based on the Munsell color system. Hellwig and Fairchild recently simplified the model mathematically, improving its chroma accuracy (http://markfairchild.org/PDFs/PAP45.pdf). Another, simpler model is CIELAB, which outputs parameters L, a, and b, where L is lightness, hypot(a, b) is chroma, and atan2(b, a) is the hue.
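
In code, those CIELAB polar correlates look like this (a plain-Python sketch; the function name is mine):

    import math

    def lab_to_lch(L, a, b):
        """Convert CIELAB to its polar form: lightness, chroma, hue angle."""
        C = math.hypot(a, b)                        # chroma
        h = math.degrees(math.atan2(b, a)) % 360.0  # hue angle in [0, 360)
        return L, C, h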

Thanks! IIRC (fuzzily; it's been a while), I chose -UCS for a more Euclidean color difference metric; I should review that. My even fuzzier recollection is that CIELAB's visible gamut shape is very artifacty[1], perhaps misleadingly representing the volume outside sRGB/P3, for instance.

The pedagogical objectives of playing well with full visible 3D gamut, and spectral locus, and of avoiding shape artifacts (concavities, excursions), are... non-traditional. Characteristics which could be happily traded away in traditional uses of color spaces, for characteristics like model math and simplicity which here have near-zero value (lookup tables satisficing). And were - most spaces have "oh my, that's a hard downselect" bizarre visual hulls, and topologies outside of P3 or even sRGB can get quite strange. Thus the need to untwist CAM16's curving hue lines - they're not bad within sRGB, but by the time they hit visible hull, yipes, I recall some as near parallel to hull.

Having a color space to play with as a realistic 3D whole, seems not the kind of thing we collectively incentivize. A lot of science education content difficulty seems like that.

[1] https://commons.wikimedia.org/wiki/File:Visible_gamut_within...


CAM16's hue lines are curved by design. Hue is not linear with regards to xy chromaticity, as evidenced by the Abney effect[1].

[1] https://en.wikipedia.org/wiki/Abney_effect


But maybe not this[1] non-linear? Fun if real. But perhaps fitting was done within a gamut folks care more about, and model math then induced artifacts at the margins of the full visible gamut? I'd really love to know if that blue tail represents real perception.

[1] https://www.researchgate.net/profile/Volodymyr-Pyliavskyi/pu... [png] from https://ojs.suitt.edu.ua/index.php/digitech/article/download... [PDF dl] (Curiously, bing image search has this figure, but google doesn't.)


I have a question for fellow color science nerds. I've been reading through Guild's original data: https://royalsocietypublishing.org/doi/pdf/10.1098/rsta.1932...

However, I'm having trouble understanding the meaning of the numbers in table 4. Does anyone understand all the columns there?

What I'm particularly interested in is finding the unnormalized coefficients from the color matching experiments, or some way to un-normalize those coefficients. (By "those coefficients," I mean the trichromatic coefficients u_a, u_b, u_c as functions of wavelength, listed in table 3.) I don't know if that data is in table 4, so maybe those are two separate questions.


Mostly correct, but I don't understand what the author is trying to do in the last section, where they try to fill the locus by generating spectra with two peaks and projecting them onto the chromaticity diagram. Why do it like that?

This is how you should do it:

- You pick a Y value. This is going to be the luminance of your diagram.

- For each pixel inside the area bounded by the spectral locus (and the line of purples - the line connecting the two endpoints of the locus) you take its x, y coordinates.

- Together these 3 values specify your color in the CIE xyY color space. Converting from xyY to XYZ is trivial: X = Y / y * x, Y = Y, Z = Y / y * (1 - x - y)

- You map these XYZ values into your output image's color space (e.g. sRGB). If a given XYZ value maps outside the [0,1] interval in sRGB, then it's outside the sRGB gamut, and you may clip the values to the closest valid value inside the gamut.
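
A minimal sketch of that recipe (Python with numpy; variable names are mine, and the matrix is the standard XYZ-to-linear-sRGB one for D65):

    import numpy as np

    XYZ_TO_SRGB = np.array([
        [ 3.2406, -1.5372, -0.4986],
        [-0.9689,  1.8758,  0.0415],
        [ 0.0557, -0.2040,  1.0570],
    ])

    def diagram_pixel(x, y, Y=0.4):
        """One pixel of the diagram: xyY -> XYZ -> linear sRGB, clipped."""
        X = Y / y * x
        Z = Y / y * (1 - x - y)
        rgb = XYZ_TO_SRGB @ np.array([X, Y, Z])
        return np.clip(rgb, 0.0, 1.0)   # out-of-gamut values clip to the hull

A real renderer would also apply the sRGB transfer function before writing out 8-bit values; this sketch stays linear.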


Author of the article here; I wasn't able to understand how to get that to work, and I talked about that in the post. This demo is doing that: https://jlongster.com/why-chromaticity-shape#block-31f373

The problem is "for each pixel inside the area". I could have done that, and then clipped the output by that shape. The problem is this doesn't answer why the shape is this way at all because you are using the shape itself to clip the output. It felt fake.

I do think this is what is most common though. I was trying to understand a more rigorous approach, and the one where you generate spectra and try to fill it is described here: https://clarkvision.com/articles/color-cie-chromaticity-and-...

That feels like a more rigorous approach, but clipping is probably "good enough" too


Ok, so you're asking why every visible color has to lie within the bounds of the spectral locus in the chromaticity diagram?

The reasoning is simple:

1. Spectral colors are basis vectors of the space of spectra: every possible spectrum can be thought of as a weighted sum of infinitely many Dirac deltas, with nonnegative weights in particular, so it's a so-called conical combination (i.e. a linear combination with nonnegative weights).

2. Taking the inner product with color matching functions is a linear transformation from this infinite dimensional space spanned by spectral colors to a 3-dimensional space. Linearity means that weighted sums are preserved, that is: every possible color spectrum's XYZ values are going to be the weighted sum of the spectral colors' XYZ values. And because the XYZ color matching functions are nonnegative everywhere, conical combinations are also preserved.

3. And finally, the conversion from XYZ to xyz is such that it turns conical combinations into convex combinations (i.e. conical combinations where the weights sum to 1). It's easy to verify this with pen and paper.

It follows that every color on the xy chart is going to be a convex combination of xy points corresponding to spectral colors, which geometrically means that they're going to lie inside the spectral colors' convex hull.
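
Step 3, written out (my notation; this is the pen-and-paper check): with weights w_i >= 0 and S_i = X_i + Y_i + Z_i,

    x = X / (X + Y + Z)
      = (sum_i w_i * X_i) / (sum_i w_i * S_i)
      = sum_i a_i * x_i,    where a_i = w_i * S_i / (sum_j w_j * S_j)

The a_i are nonnegative and sum to 1, i.e. a convex combination; the same holds for y.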


In the introduction (see pp. 14-16) to their wonderful book, Bengtsson & Zyczkowski make use of chromaticity diagrams to motivate their study of quantum states.

https://www.researchgate.net/profile/Karol-Zyczkowski/public...

>In a way tradition suggests that colour theory should be studied before quantum mechanics, because this is what Schroedinger was doing before inventing his wave equation.


I’ve been having problems studying this topic for years now. Is there actually an official scientific field, with official books and an official consensus, on this? It seems hard to know who to trust on this wide-but-niche topic.


Kinda related, but does someone maybe have a good set of links to help understand what HDR actually is? Whenever I tried in the past, I always got lost and none of it was intuitive.

There are so many concepts there: color spaces, transfer functions, HDR vs Apple’s XDR HDR, HLG vs Dolby Vision, mastering displays, max brightness vs peak brightness, all the different HDR monitor certification levels, 8-bit vs 10-bit, “full” vs “video” levels when recording video, etc.

Example use case - I want to play iPhone-recorded videos using mpv on my MacBook. There are hundreds of knobs to set, and while I can muck around with them and get it looking close-ish to what playing the file in QuickTime/Finder looks like, I still have no idea what any of these settings are doing.


HDR is whatever marketing wants it to be.

Originally it's just about being able to show both really dark and really bright colors. That's really easy if each pixel is an individual LED, but very hard on LCD monitors, where there's one big backlight and the pixels are just dimmable filters in front of it. Or, alternatively, on the sensor side: the ability to capture really bright and really dark spots in the same shot, something our sensors are much worse at than our eyes, though you can pull some tricks.

Once you have that ability, you notice that 8 bits of brightness information isn't that much. So you go with 10 or 16 bits. Your transfer function ("gamma") also plays a role: it's the thing that maps your linear light values to the nonlinear code values that get stored.
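
For reference, here's what one such transfer function looks like; this is a sketch of the sRGB encoding (plain Python; the constants are from the sRGB spec):

    def srgb_encode(linear):
        """Map a linear light value in [0, 1] to a nonlinear sRGB code value."""
        if linear <= 0.0031308:
            return 12.92 * linear                   # linear toe near black
        return 1.055 * linear ** (1 / 2.4) - 0.055  # power-law segment

The point is that your 8 or 10 bits get spent non-uniformly, with more code values near black, where our eyes are more sensitive.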

And of course the people who care about HDR have a big overlap with people who care about colors, so that's where your color spaces, certifying and calibrating monitors to match those color spaces etc comes in. It's really adjacent but often just rolled in for convenience.


More bits to store more color/brightness etc makes sense.

I think my main confusion has usually been that it all feels like some kind of a… hack? Suppose I set my MacBook screen to max brightness, and then open up a normal “white” png. Looks fine, and you would think “well, the display is at max brightness, and the png is filled with white”, so a fair conclusion would be that that's the whitest/brightest the screen goes. But then you open another png, this one with a more special “whiter white”, and suddenly you see your screen actually can go brighter! So you get thoughts like “why is this white brighter”, “how do I trigger it”, “what are the actual limits of my screen”, “is this all some separate hacky code path”, “how come I only see it in images/videos, and not UI elements”, “is it possible to make a native Mac UI with that brightness”.

In any case, thanks for the answer. I might be overthinking it and there’s probably lots of historical/legacy reasons for the way things are with hdr.


> So you get thoughts like [...] “what are the actual limits of my screen” [...]

Some of the limitations, at least in Apple's displays, are thermal! The backlight cannot run at full brightness continuously across the full display; it can only hit its peak brightness (1600 nits) in a small area, or for a short time.


One motivation for HDR is having absolute physical units, such as luminance in candelas per square meter. You can imagine that might be a floating point value and that 8 bits per channel might not be enough.

The problem you’re describing is that color brightness is relative, but if you want physical units and you have a calibrated display then adjusting your brightness is not allowed, because it would break the calibration.

Another reason for HDR is to allow you to change the “exposure” of an image. Imagine you take a photo of the sun with a camera. It clips to white. Most of the time even the whole sky clips to white, and clouds too. With a film camera, once the film is exposed, that’s it. You can’t see the sun or clouds because they got clamped to white. But what if you had a special camera that could see any color value, bright or dark, and you could decide later which parts are white and which are black. That’s what HDR gives you - lots of range, and it’s not necessarily all meant to be visible.

In computer graphics, this is useful for the same reason - if you render something with path tracing, you don’t want to expose it and throw away information that happens to get clamped to white. You want to save out the physical units and then simulate the exposure part later, so you don’t have to re-render.

So that’s all to say- the concept of HDR isn’t hacky at all, it’s closer to physics, but that can make it a bit harder to use and understand. Others have pointed out that productized HDR can be a confusing array of marketing mumbo jumbo, and that’s true, but not because HDR is messed up, that’s just a thing companies tend to do to consumers when dealing with science and technology.

I was introduced to HDR image formats in college while studying physically based rendering, and the first HDR image format I remember was Greg Ward’s .hdr format, which is clever: an 8-bit mantissa per channel and an 8-bit shared exponent, because if, say, green is way brighter than the other channels, you can’t see the dark detail in red & blue.
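
The shared-exponent idea, sketched (plain Python; this is the RGBE decode step from the .hdr format, not a full file parser):

    import math

    def rgbe_decode(r, g, b, e):
        """Decode one RGBE pixel: three 8-bit mantissas plus a shared 8-bit
        exponent (biased by 128) -> floating point linear RGB."""
        if e == 0:
            return 0.0, 0.0, 0.0
        scale = math.ldexp(1.0, e - 128 - 8)   # 2**(e - 128) / 256
        return r * scale, g * scale, b * scale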


> there’s probably lots of historical/legacy reasons for the way things are with hdr

That's pretty much it. If you use a HDR TV it will usually work like you describe. It would display the same white for a normal white PNG and an "even whiter" "HDR" PNG.

Apple's decision makes sense if you imagine SDR (so not-HDR) images as HDR images clipped to some SDR range in the middle of the HDR range (leading to lots of over- and underexposure in the SDR image). If you then show them side by side, of course the whitest white in the HDR range is whiter than the whitest white in the SDR image. Of course that's a crude simplification of how images work, but it makes for a great demo: HDR images really pop and look visually better. If you stretched everything to the same brightness range the HDR images wouldn't be nearly as impressive, just more detail and less color banding. The marketing people wouldn't like that.


While one commenter had it somewhat right that HDR has to do with how bright/dark an image can be, the main thing HDR images specify is how far ABOVE reference white you can display. With sRGB, 100 percent on all channels is 100 percent white (the brightness of a perfect Lambertian reflector). Rec. 2100 together with Rec. 2408 specifies modern HDR encoding, where 203 nits is 100 percent white, and above that is anything brighter (light sources, specular reflections, etc.). So if a white image encoded in SDR looks dimmer than HDR for non-specular detail, that is probably an encoding or decoding error.
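
To make "absolute" concrete, here's a sketch of the PQ encoding that Rec. 2100 uses (plain Python; constants from SMPTE ST 2084):

    def pq_encode(nits):
        """ST 2084 inverse EOTF: absolute luminance in nits -> PQ signal in [0, 1]."""
        m1, m2 = 2610 / 16384, 2523 / 4096 * 128
        c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
        y = (nits / 10000.0) ** m1
        return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

    print(pq_encode(203))   # reference white lands around 0.58

So the 203-nit reference white sits a bit above half the signal range, leaving the remaining code values for highlights.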

It is all extremely hacky.

Because HDR allows us to encode brightnesses that virtually no consumer displays can display.

And so deciding how to display those on any given display, on a given OS, in a given app, is making whatever "hacky" and totally non-standardized tradeoffs the display+OS+app decide to make. And they're all different.

It's a complete mess. I'm strongly of the opinion that HDR made a fundamental mistake in trying to design for "ideal" hardware that nobody has, and then leaving "degraded" operation to be implementation-specific.

It's a complete design failure that playing HDR content on different apps/devices results in output that is often too dark and often has a telltale green tint. It's ironic that in practice, something meant to enable higher brightness and greater color accuracy has resulted in darker images and color that varies from slightly wrong to totally wrong.


A summary, with pictures:

https://cdn.theasc.com/curtis-clark-white-paper-on-hdr-asc.p...

To better understand the "knobs", consider opening up your iPhone videos in DaVinci Resolve[1] and playing around with the scopes and tools in the color panel.

[1] https://www.blackmagicdesign.com/products/davinciresolve/tra...


You can start by understanding the physics behind high dynamic range. Any real-world analog value can have a tremendous dynamic range; it's not just light: distances, sound, weight, time, frequencies, etc. We always need to reduce / compress / limit / saturate dynamic range when converting to digital values. And we always need to expand it back when reconverting to an analog signal.

The https://oklch.com color picker shows another way to represent colors:

- The 3D version looks like a mountainscape of colors

- L(ightness), C(hroma), and H(ue) are orthogonal 2d slices of this mountainscape

---

And this software renders 3D chromaticity (gamut?) diagrams: https://youtu.be/FdFpJFSTMVw?t=679


This page is also a beautiful explanation of color spaces, with chromaticity explained toward the end: https://ciechanow.ski/color-spaces/

Note that many of the diagrams are interactive 3d graphics (I didn't realize that at first, and it makes the page more interesting.)


This article illustrates the theory and math that lead to the horseshoe diagram in a very approachable style that is as simple as possible without being too simple.

A Beginner’s Guide to (CIE) Colorimetry — Chandler Abraham

https://medium.com/hipster-color-science/a-beginners-guide-t...


I think a good explanation of color spaces might be starting at a camera sensor with a bayer array and how that’s processed.

I'm not so sure.

I think it's really important to understand spectral colors and metamerism.

If you start at the Bayer array, you are going to have an odd discussion about how the bayer filters have spectral transfer functions that aren't directly related to the cones in our eyes, nor any color space's primaries, etc.

It's going in the deep end.


Starting with the function of the human eye seems like a good choice, I mean how else would you explain why a Bayer matrix only has 3 colours?

Or why there are two green pixels in the Bayer pattern for every red and blue...

> I say "cursed" because I have no idea what that means. What the heck is that shape??

Reminds me of frinklang.

"The most-commonly used, CIE 1931, is long known to be off by a factor of 7 from average human perception at short wavelengths, (compare it to the 1978 definition at 400 nm) and is arbitrarily truncated before the limits of human perception. In addition, no one perceptually-weighted curve is possible because the human eye is differently sensitive for photopic (bright-light, cone cells) and scotopic (dark-adapted, rod cells), or if the illumination occurs over narrower or wider fields. Many incremental improvements on these systems have been proposed, but none are part of the authoritative, oversimplified definition of the candela, making it useless for unambiguous definitions that can be agreed upon or binding to any party. Pronouncements of the CIE are in no way binding on the BIPM, nor vice-versa, and the CIE has a proliferation of "standard curves," which all disagree with each other. Agreements to use one curve or another thus have to be agreed outside the definitions of the SI, and, of course, parties can disagree on which curve to use. You can use CIE 1931, or CIE 1978, or the "CIE 1988 Modified 2° Spectral Luminous Efficiency Function for Photopic Vision" or the 2005 improvements by Sharpe, Stockman, Jagla & Jägle, or ISO 23539:2005(E), or something else..."

https://frinklang.org/frinkdata/units.txt


I thought this might be a useful article because I've often had a similar question. But there's a diagram that has text:

> More simply put: imagine that you have red, green, and blue light sources. What is the intensity of each one so that the resulting light matches a specific color on the spectrum?

> ...

> The CIE 1931 color space defines these RGB color matching functions. The red, green, and blue lines represent the intensity of each RGB light source:

This seems very oddly phrased to me. I would presume that what that chart is actually showing is the response for each color of cone in the human eye?

In which case it's not a question of "intensity of the light source" but more like "the visual response across different wavelengths of an otherwise uniform-intensity light source"?

... fwiw, I'm not trying to be pedantic, just trying to see if I'm missing the point or not.


The wording in the article is correct, despite being confusing. The CIE 1931 RGB primaries each stimulate multiple types of cone in human eyes, so the RGB Color Matching Functions (CMFs) don't represent individual cone stimulations.

However, the CMFs for LMS space[1] do directly represent individual cone stimulations. Like the CIE RGB CMFs, the LMS CMFs can also be thought of as the intensities of three primary colors required to reproduce the color of a given spectrum. The reason these two definitions correspond for LMS space is that each primary would stimulate only one type of cone. However, unlike the CIE RGB primaries, no colors of light which stimulate only one type of cone physically exist.

Finally, CIE RGB and LMS space are linear transformations of each other, so the CIE RGB CMFs are linear combinations of the LMS CMFs, and each CIE RGB CMF can be thought of as representing a specific linear combination of cone stimulations (the combination excited by that primary color).

I often find it easiest to reason about these color spaces in terms of LMS space, since it's the most physically straightforward.

[1]: https://en.m.wikipedia.org/wiki/LMS_color_space
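
For the "linear transformations of each other" part, a sketch with one commonly used matrix (Python with numpy; this is the D65-normalized Hunt-Pointer-Estevez XYZ-to-LMS matrix, one of several in use, so treat the numbers as illustrative):

    import numpy as np

    XYZ_TO_LMS = np.array([
        [ 0.4002, 0.7076, -0.0808],
        [-0.2263, 1.1653,  0.0457],
        [ 0.0,    0.0,     0.9182],
    ])

    def xyz_to_lms(xyz):
        """Map XYZ tristimulus values to approximate L, M, S cone responses."""
        return XYZ_TO_LMS @ np.asarray(xyz)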


I'm the author of the article, and the intensity refers to the level of each light source used in the study that generated the data. See the study explained here: https://medium.com/hipster-color-science/a-beginners-guide-t...

But you're right: the intensity needed from each of the R, G, and B light sources to produce the correct color is directly related to how our eyes perceive each of those sources, so yes, you are correct.


I think the explanation is simple: Color is light and it is linear going from ultraviolet to blue to green to yellow to red to infrared. It's just a line.

In physical reality, there exists no purple light. Our minds make up all the shades of purple and magenta between blue and red when our eyes receive both red and blue light.

So in order to include the magentas, you need to draw another line between blue and red. Meaning you have to bend the real color line. And that's what we see in the chromaticity diagram.


> Color is light

For an ELI5 on a "maybe teach color better by emphasizing spectra?" side project, I went for hard disjointness on "color" vs "light". Distinguishing world-physics-light from wetware-perception-color. Writing not "red light", but "\"red\" light". So physical spectra were grayscale, on nm and energy. Paired with perceptual spectra in color, on hue angle and luminosity. And both could be wrapped around a 3D perceptual color space (tweening the physical spectra from nm to hue). Or along a 2D non-primate mammalian dichromat space, to emphasize the wetware dependence. Misconceptions around color are so very pervasive, K-graduate, that extreme care for clarity seems helpful.


Wavelength (or frequency) is linear but light, in general, is made up of many wavelengths -- an entire spectrum.

Each wavelength of visible light corresponds to a color on the gradient from blue-green-yellow-red. Purple or magenta colors do not exist as light and only exist in our minds. That's why rainbows do not contain any of these colors.

Purple totally exists, but isn’t a single wavelength of light. It is multiple wavelengths of light. Physical colors are all blends of wavelengths.

Displays are tricking the eye by showing three single colors that look like real color.


As a hue, magenta and purple shades do not physically exist in the electromagnetic spectrum. All hues on the gradient blue-green-yellow-red exist and can be generated by a single wavelength of radiation.

You can test this in physical reality with a prism, which will never show purple shades, because it is an extraspectral color that is made up in our minds.

Color can thus exist in pure form in physical reality. However, our eyes may not be able to perceive colors purely, since our receptors' responses overlap.


That link is so slow where I live that I had difficulty getting the site to work, but as far as I could judge it gives a rather nice and understandable explanation of what is a rather complex matter. It's in considerable contrast to those sections of my textbooks on color theory, which are so dry as to make one yawn: full of algebra, the complex operator and matrices, with precious little other explanation of what it all means.

Some of the comments have already covered most of what I'd have mentioned so I won't dwell on them now, although I'd add that I reckon GrantMoyer is on the mark with his point about the inappropriateness of displaying chromaticity on Cartesian coordinates.

It's worth noting that understanding the intricacies of chromaticity and color theory is difficult to the extent that its 'opaqueness' has been used to protect trade secrets (and likely still is for reasons I'll mention in a moment).

Commercial lab printers that print masked color negative (neg film with the orange mask) to positives—color photos and color film print stock—go to great lengths to protect their matrices (precision resistor banks) against copying. Similarly, companies like Kodak do not publish the 'film terms' for their various emulsions ('film terms' being the unique matrix information for each film emulsion).

The reason for this is that reverse-engineering the matrix with enough accuracy for a single film is a complex job, let alone doing so for a multitude of different films. Moreover, it's imperative the matrix be accurate if good color balance is to be achieved. Keeping this info secret provided a competitive edge; selling or licensing the info is worth money.

I'd add that the destructive orange mask used in color negative film is a brilliant concept for reasons I cannot cover here, however what's relevant here is that the mask makes reverse-engineering the negative's film terms that much more complicated.

I'm a bit out of touch these days but no doubt the same applies with inkjet printers and the like (matching coordinates to specific inks etc). So there's a modicum of truth to statements from Canon, Epson and HP when they say not to use third-party inks because the colors won't match properly (mind you, that's never stopped me at the exorbitant and outrageous prices they charge for inks).

My point is that if it were possible to unravel and make this chromaticity stuff simpler to understand then many of these expensive commercial decisions would disappear.

Ahh, but alas, we're stuck with it.

BTW, for those who've used scanner software like SilverFast: the manufacturer provides a list of film emulsions to select from before the film is scanned. Selecting the correct emulsion type ensures the proper 'film terms' are used for the scan, which in turn ensures the color balance is optimal.

I'm a bit cynical about SilverFast's approach to the problem: they've a limited range of film emulsions to select from (many of the old and important color negative types are missing). SilverFast's literature suggests that if one's color negative type is not listed, then one should select whichever best suits. I am at a loss how one does that except to just make a guesstimate; so much for calibration. Also, one has to wonder why SilverFast has such a limited range given they've been in the business since many of said emulsions were still in production.

There are similar issues with Hamrick's VueScan software but I've not time to address them here.

Again, all these issues further illustrate the practical complexities surrounding the chromaticity diagram.


TL;DR: Imagine color space has 3 dimensions in polar coordinates.

- hue, the angle. your familiar red, orange, yellow, green, blue...

- saturation/chroma, radial distance from center. intensity of the pigment

- lightness, top to bottom, white to black

The XY diagram shows 3d color space, from the top, in XYZ.

XYZ is a particular color space, standardized by the CIE in 1931.

There is no privileged "correct" color space; they're developed based on A/B tests and intuition by color scientists.

However, there are more correct color spaces, in that color science matters and is a real field. The commonly agreed state of the art is CAM16.

It's a significant mistake that Oklab is the first space with significant mindshare since HSL; it was a quick hack by an ex-game developer to make something akin to CAM16 with just one matrix multiplication.

CAM16 conversions involve significantly more than one matrix multiplication. But it's ~400 lines of code, and you can do millions of conversions a second on modern hardware.

Oklab's lightness scale is _way_ off from scientific color spaces, and thus it can't be used to create simple rules like "40 delta L* ~= 3.0 contrast ratio, 50 delta L* ~= 4.5 contrast ratio". Instead you're still manually plugging colors into a contrast checker :(

Then again, it's still a step forward. It's even more maddening HSL was used for so long: it's absolutely absurd, e.g. lightness = average of the highest and lowest RGB components. Great for demo hacks in 1976, not so great in 2016.
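
For the record, that HSL lightness computation (plain Python, straight from HSL's definition):

    def hsl_lightness(r, g, b):
        """HSL 'lightness': average of the highest and lowest RGB channels.
        A pure blue (0, 0, 1) gets the same L = 0.5 as a mid gray
        (0.5, 0.5, 0.5), despite wildly different perceived lightness."""
        return (max(r, g, b) + min(r, g, b)) / 2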




