Why Does Yellow Look Like White?

Found it.

It’s not actually intended to make it easier to read; it’s to reduce blue-light emissions at night, which have been linked to all kinds of things, from disrupted sleep patterns to vision problems.

I don’t know about vision problems, but the reason they call it “night” mode is the idea that blue light tends to alert the brain that the sun is up and it’s time to wake up, while more yellow light says “hey, sun’s going down” and signals that it’s time to start getting sleepy. However, further studies seem to be challenging that notion.

Math error in 100-year-old equation is changing everything we know about perceiving color

This article addresses some of the perception issues mentioned in this thread. It doesn’t seem surprising that a 3D mapping of color relationships improves understanding of them. I’m not picking up from the article what the error actually was. There’s mention of relative relationships not working out in traditional mappings, but I’d like to know what methodology is used to measure those perceived relative differences.

The core claim of the paper is that small differences don’t add up to big differences in a simple manner. Despite the breathless prose, that isn’t exactly a huge surprise.
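For what it’s worth, the way I read it, the issue is additivity: the standard metrics treat a big color difference as the sum of the small steps along a path between the two colors, while the experiments reportedly show the big difference coming out smaller than that sum (“diminishing returns”). In rough symbols, and this is my paraphrase rather than the paper’s notation:

```latex
% Standard (Riemannian-style) assumption: for a color b "between" a and c,
\Delta E(a,c) \approx \Delta E(a,b) + \Delta E(b,c)
% Reported "diminishing returns" behavior for large differences:
\Delta E_{\mathrm{perceived}}(a,c) < \Delta E(a,b) + \Delta E(b,c)
```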

I have a lot of sympathy for the guys who did the work. In a different life I spent a few long evenings trying to get colour mappings for various geophysical fields in maps and diagrams to work. There are lots of pitfalls, and getting visual cues for relative strength and contrast right is really hard. Get it wrong and the mind will see contours and structure where there are none and miss them where they exist. White versus yellow is one common problem; lots of standard colour maps in use avoid yellow for this reason.

I have not read the paper—mathematically speaking, what is the experimentally supported color space? And what does it mean for two colors to be separated by a single unit, or half a unit?

I have heard that the reason we so easily see yellow as white is that, having evolved under a yellow sun, our eyes and/or perception came to interpret bright yellow light as pure or white light.
Is that a current theory? A fallen-out-of-favor theory? Never a theory at all?

The noonday sun is white more or less by definition, and never looks yellow. (The setting sun can turn yellow, red, etc.) However, the eye can adapt to all sorts of lighting conditions:

Nor have I. It is pay-walled. I just got to the Abstract:
https://www.pnas.org/doi/10.1073/pnas.2119753119#sec-8

IMHO there is a lot of blather that covers a pretty simple reality. They mention ΔE, which is defined by the CIE, and they reference the CIEDE2000 standard, so I assume that is where they are starting.
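For anyone who hasn’t run into it, ΔE is just the distance between two colors in a nominally perceptually uniform space. The original 1976 version is plain Euclidean distance in CIELAB; CIEDE2000 layers a bunch of corrections for lightness, chroma, and hue on top of it. A minimal sketch of the 1976 formula (the 2000 version is much longer):

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB.
    lab1, lab2 are (L*, a*, b*) tuples. A value around 1 is
    roughly a just-noticeable difference for average viewers."""
    dL = lab1[0] - lab2[0]
    da = lab1[1] - lab2[1]
    db = lab1[2] - lab2[2]
    return math.sqrt(dL * dL + da * da + db * db)

# Example: two light yellows that differ mostly along b* (the blue-yellow axis)
print(delta_e_76((90.0, -5.0, 80.0), (92.0, -4.0, 70.0)))
```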

The photoreceptor basis is a very low-level explanation for this, and the “final” layer of the retina, at the ganglion-cell level, changes things further. It can’t explain a lot of this, and there isn’t a single explanation, but there’s probably a bigger contribution from higher-level processes.

The link from @DPRK 2 posts up is a pretty good reference to one of these processes. Consider that sunlight isn’t white in itself, but our vision wouldn’t be terribly helpful if we saw everything yellowish (or reddish today, fuck you smoke) depending on the light reaching us, and thus called objects different colors at different times.
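To make that concrete, the simplest textbook model of that kind of adaptation is von Kries scaling: divide each cone response (or, as a crude stand-in, each RGB channel) by its response to the illuminant, so a white page reads as white whether the light is yellowish sun or bluish sky. A toy sketch, not the specific mechanism the linked article describes, and with a made-up illuminant:

```python
def von_kries_white_balance(rgb, illuminant_rgb):
    """Toy von Kries-style adaptation: scale each channel by the
    inverse of the illuminant's value in that channel, so the
    illuminant itself maps to neutral white (1, 1, 1)."""
    return tuple(c / i for c, i in zip(rgb, illuminant_rgb))

# A grey card lit by a yellowish light looks yellowish in raw values...
raw = (0.9, 0.85, 0.6)
illuminant = (1.0, 0.94, 0.66)   # hypothetical yellowish illuminant
# ...but after adaptation it comes out close to neutral.
print(von_kries_white_balance(raw, illuminant))
```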

The “math error” link above is full of dead or worthless links, so I’m not quite sure what the premise is, but it sounds very overstated, claiming groundbreaking results when it doesn’t appear that they are.

I have it on good authority that a certain “cold-hearted orb” is responsible. Also the reason red looks grey. However, I’ll let you decide which is right.

I found the paper through a rather convoluted (but legal) process, not sure how to get others there. It’s math-dense and I’ll have to do more than a skim later.

For some reason they seem to lean on BIPM (weights and measures) and not CIE.

Some thoughts from a quick read: they say all perceptual color spaces are flawed, but don’t really specify which ones. They mention CIELAB, RGB, and XYZ/xyY, all of which are already known to be less than ideal for this. They seem to be working mostly in CIELAB.
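For context on why CIELAB keeps getting used despite its known flaws: it’s just a nonlinear remapping of XYZ that compresses lightness with a cube root so that equal coordinate steps are closer to equal perceived steps. A rough sketch of the standard XYZ-to-L*a*b* conversion (D65 white point assumed; nothing specific to the paper):

```python
def xyz_to_lab(x, y, z, white=(0.95047, 1.0, 1.08883)):
    """Standard CIE XYZ -> CIELAB conversion, D65 white point.
    The cube-root nonlinearity is what makes Lab roughly
    perceptually uniform -- roughly being the operative word."""
    def f(t):
        eps = (6 / 29) ** 3
        return t ** (1 / 3) if t > eps else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / white[0]), f(y / white[1]), f(z / white[2])
    L = 116 * fy - 16
    a = 500 * (fx - fy)
    b = 200 * (fy - fz)
    return L, a, b

# The D65 white point itself maps to (100, 0, 0): maximum lightness, no chroma.
print(xyz_to_lab(0.95047, 1.0, 1.08883))
```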

They also used MTurk, which means they’re relying on perceived color differences on individual, non-calibrated consumer monitors? That’s… going to take some justification, and I don’t see it.

Well, we “re-evolved” red-sensitive cones. Our common ancestor with other mammals had only two color receptors, and primates evolved a third, with peak sensitivity shifted toward the red end of the spectrum.

It’s complicated to diagram human color perception because, firstly, there are various mappings of inputs that happen within the neurons of the eye even before the signal gets to the brain. Then the brain performs additional “post-processing”, before somehow doing the magic trick where we have a subjective experience of color.
Yellow is actually among the “special” colors in that, for example, you can have the same subjective experience of yellow from one object reflecting just a single wavelength as from another reflecting a mixture of two different wavelengths: it exists as both a “pure” color and one formed of a mixture.
…and not just as a side effect of the cone sensitivities, but as a specific mapping (there was a very nice mapping diagram I remember from studying this, but I can’t find it right now).
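That single-wavelength-versus-mixture business is what colorimetry calls metamerism: two physically different spectra look identical whenever they produce the same three cone responses. In standard textbook notation, not anything specific to the paper under discussion:

```latex
% S_1(\lambda) and S_2(\lambda) are metamers when all three cone excitations agree:
\int S_1(\lambda)\,\bar{l}(\lambda)\,d\lambda = \int S_2(\lambda)\,\bar{l}(\lambda)\,d\lambda,\qquad
\int S_1(\lambda)\,\bar{m}(\lambda)\,d\lambda = \int S_2(\lambda)\,\bar{m}(\lambda)\,d\lambda,\qquad
\int S_1(\lambda)\,\bar{s}(\lambda)\,d\lambda = \int S_2(\lambda)\,\bar{s}(\lambda)\,d\lambda
```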

But anyway, the answer to the OP is largely that only one of the three cones is sensitive in the “blue” part of the spectrum, and it’s the one the brain subjectively gives the least weight. So the difference between a white and a yellow object is mostly just whether that “weakest” cone is activated or not.
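You can see how little that weakest channel contributes in the standard luminance weights used for sRGB/Rec. 709 displays: blue carries only about 7% of perceived brightness, so white (all channels on) and yellow (blue off) end up nearly the same in luminance and differ mainly in hue. A quick illustration using the published Rec. 709 coefficients, not anything from the paper:

```python
def rec709_luminance(r, g, b):
    """Relative luminance from linear RGB, Rec. 709 / sRGB weights.
    Blue contributes only ~7%, which is one low-level reason a
    saturated yellow reads as nearly as bright as white."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(rec709_luminance(1.0, 1.0, 1.0))  # white  -> 1.0
print(rec709_luminance(1.0, 1.0, 0.0))  # yellow -> ~0.93
```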

I have not expended too much effort trying to find this paper (am I supposed to? Are they worried someone might actually read it?), but I did type the authors’ names into a search engine and got this review

It does not include the original data, though.

There do also seem to be a number of assumptions and caveats.

Usually, if you want a paper to be open access, the authors have to pay a one-time fee to open it to the public. Some journals make this mandatory; PNAS only does for one type of paper. If the authors can’t or won’t pay, they submit for free and the publisher charges readers for access, or else institutions pay for general access and people affiliated with the institution can read the paper by logging in.

Brainard is a well-established color scientist, so if he generally likes it I’ll give it a fair shake. It’s at least clearer to me than the original. Now that I think about it, maybe the point is that even though the monitors vary, they’re measuring differences rather than the points in color space themselves, so it doesn’t matter if a color appears differently on different monitors as long as each pair is shifted by the same amount. I do wonder how they control for things like gamma correction; do they assume it based on monitor type?
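On the gamma question: the usual assumption is the sRGB transfer function, which maps stored pixel values nonlinearly to displayed light. If a study just assumes every MTurk worker’s monitor follows it, this is roughly what they’d be assuming (standard sRGB decode, not something stated in the paper):

```python
def srgb_to_linear(c):
    """Standard sRGB electro-optical transfer function:
    decode a stored channel value in [0, 1] to linear light.
    Real monitors only approximate this, which is the worry
    with uncalibrated MTurk displays."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# A mid-grey pixel value of 0.5 is only about 21% of maximum linear light.
print(srgb_to_linear(0.5))
```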