Hacker News | mpetroff's comments

Please don't use rainbow-type palettes, as they generally have poor accessibility for colorblind individuals. With my red deficiency, the middle two colors in this palette look virtually identical.


This is super interesting to me.

Is there another 12-color palette that allows you to easily distinguish between every color? If so, I'd love to see it.

I'd also appreciate it if anyone reading this who has a different variety of colorblindness - or who finds color palettes inaccessible for any reason - could share which colors in the 12-bit palette (or any others suggested in this thread) are problematic and why. That'd be awesome :)

My initial instinct is that finding 12 colors that are visually distinguishable for all users is likely impossible. That being the case, the ideal solution IMO is likely something like providing a dynamic option to change the palette (or even the representation!) and then choosing a default that the author is happy with.


Cheysson and other cross hatched patterns will get you a long way [0].

[0]: https://observablehq.com/@tomshanley/cheysson-color-palettes


> My initial instinct is that finding 12 colors that are visually distinguishable for all users is likely impossible.

Without going to lightness extremes, I agree that this likely isn't possible, at least when trying to accommodate all three types of dichromacy and for small color patch sizes (like those typically used for line and scatter plots). For example, you could take the 10-color accessible palette from work I've published [1] and add black and bright yellow to get twelve colors, but the lightness extremes of adding these colors would result in significantly-different visual weights. Based on a validation survey I conducted, I think even ten colors is pushing the limit of what's reasonable when lightness extremes aren't used.

> could share what colors in the 12-bit palette...are problematic

#9d5 and #4d8 is the color pair I find particularly problematic.

[1] https://arxiv.org/abs/2107.02270


I haven’t tried this yet, but wanted to let you know that I’m actively thinking about it and very interested.

This is far from my area of expertise, but I’ve always been interested in accessibility on the web in particular.

Thank you for responding, I’ll probably be in touch :)


I understand there's no direct way to answer this, but does this image appear the same as the live page to you?

https://imgur.com/a/nZnr0BN

If so... wow. That's not good at all; it's almost as hard to distinguish the minimum value from the maximum value as it is the two in the center.


While not completely identical, it looks very similar (I also only have strong protanomaly, not complete protanopia, so I wouldn't expect it to look identical).

Color-vision deficiency simulations collapse colors along the confusion lines, but this can be done in multiple ways. These different mappings will all look the same (and identical to the original) to a dichromat but will appear different, with different perceptual differences between colors, to a color-normal individual. Simulating in a way that accurately portrays perceived color distances is still an open research problem.
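
To make the simulation step concrete, here's a minimal Python sketch of dichromat simulation via a linear transform applied in linear RGB. The protanopia matrix is the full-severity one commonly quoted from Machado, Oliveira & Fernandes (2009); treat the exact coefficients as an assumption and verify them against the paper before relying on them.

```python
def srgb_to_linear(c):
    """Undo the sRGB transfer function (c in 0-1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Re-apply the sRGB transfer function, clamping slight out-of-gamut values."""
    c = min(max(c, 0.0), 1.0)
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Full-severity protanopia matrix, as commonly circulated from
# Machado, Oliveira & Fernandes (2009); check against the paper.
PROTAN = [[ 0.152286, 1.052583, -0.204868],
          [ 0.114503, 0.786281,  0.099216],
          [-0.003882, -0.048116, 1.051998]]

def simulate_protanopia(rgb):
    """rgb: (r, g, b) in 0-1 gamma-encoded sRGB; returns the simulated color."""
    lin = [srgb_to_linear(c) for c in rgb]
    sim = [sum(m * c for m, c in zip(row, lin)) for row in PROTAN]
    return tuple(linear_to_srgb(c) for c in sim)
```

Note that the matrix rows each sum to one, so neutral grays pass through unchanged, while pure red maps to a noticeably darker color.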


Fair point! My eyes are (mostly) fine, but even I would have a hard time telling these colors apart when used in the same chart.

I usually do very simple charts, using maybe 2 or 3 colors, and with this palette I feel the results are typically very pleasing, whichever colors I end up selecting.


I wrote a simple web-based night sky viewer a while ago [1], which renders the 750 brightest stars from coordinates in a data file (along with the moon). It uses D3.js to do fully client-side SVG-based rendering for interactive use, but it could be simplified to render server side to an SVG file. I think the main complication is that by adding stars, a projection needs to be decided on, and you'd need to consider the aspect ratio of the browser window.

[1] https://github.com/mpetroff/nightsky
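
On the projection question, a stereographic projection centered on the zenith is one common choice for all-sky charts, since it's conformal (small shapes and circles on the sky stay circles on the page). A minimal sketch, assuming altitude/azimuth input in degrees and an arbitrary output scale:

```python
import math

def stereographic(alt_deg, az_deg):
    """Project a sky position (altitude, azimuth in degrees) onto a plane
    tangent at the zenith via the stereographic projection. The zenith
    lands at the origin; the horizon maps to a circle of radius 2."""
    r = 2.0 * math.tan(math.radians(90.0 - alt_deg) / 2.0)
    az = math.radians(az_deg)
    return r * math.sin(az), r * math.cos(az)
```

The aspect-ratio issue then reduces to scaling and cropping this disk to fit the viewport.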


Figure 24 in Paul Tol's Notes is a reasonable thing to try: https://web.archive.org/web/20250201164619/https://personal....

However, properly screening for color vision deficiencies requires calibrated spectra. Thus, even a color-calibrated monitor is insufficient, since color calibration assumes that the standard cone response functions are valid, which isn't the case for anomalous trichromats (who make up the most common types of colorblindness). This is why screening, such as with the HRR test, is done with plates printed with spectrally calibrated inks under controlled lighting conditions (again with a known spectrum).


BICEP3 actually uses a >20 year old CCD camera with analog video output (BICEP Array uses newer cameras, with more modern sensors). Daytime star pointings are possible by using a low-pass filter to block visible light and take advantage of the sensitivity of CCD / CMOS sensors to the near infrared, where the daytime sky is more transparent, combined with baffling.


I would add it also uses an ancient analog TV for manual sighting in combination with the GUI for semi-auto centroiding. I always thought that was funny to see, but it seems to work well enough. Also, inserting that baffle is somewhat terrifying because it slots into a hole next to the main vacuum window and if you dropped it on the membrane, bad things would happen. Always fun to bump into Polies here :)


> Always fun to bump into Polies here :)

Definitely! I wasn't expecting to see a mention of BICEP while reading HN from Pole, particularly not on something as arcane as its star camera.


how hard would this be to set up for a total hardware noob? and how good or useful would the data be?

i know gaia data for instance is available for free but if one used just a homemade telescope could any useful celestial data be acquired?


It depends what you mean by useful. On its own, all you're doing is taking pictures of the sky and figuring out where the camera was pointing (and its field of view). Where it's useful is calibrating the pointing direction of other systems. It's fun to try the software at home (there is a public web interface), you just need a camera that can take long enough exposures to see stars without too much noise.

One of the more "useful" backyard astronomy tasks that is achievable for a dedicated amateur is variable star observation (e.g., AAVSO), because many stars don't need huge telescopes to observe, and it's very expensive for a big observatory to stare at a single patch of sky for weeks. Nowadays we have instruments like LSST, which is basically designed for this sort of surveying, but public data are still useful. And you do need to know exactly where you're pointing, so either you do this manually by pointing at a bunch of target stars, or you can use a guide scope that solves the field for you.


With images taken at night, you can run the images through Astrometry.net, which is a blind astrometric solver and will provide you with RA / Dec for most images, as long as you have at least a dozen or two stars visible. The code compares asterisms formed by multiple stars to index files built from Gaia or other similar data. This is the technique that's used more frequently for microwave telescopes located where there's a normal diurnal cycle, e.g., CLASS. The smaller the field of the view, the higher the precision, but it also works fine with a camera with a zoom lens.
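
The asterism-matching idea can be sketched in a few lines. This is only an illustration of the geometric-hashing principle, not Astrometry.net's actual code: it ignores the reflection ambiguity and the index-file machinery the real solver handles.

```python
def quad_hash(stars):
    """Rotation-, translation-, and scale-invariant hash code for a 4-star
    asterism. Simplified illustration of geometric hashing: the real
    Astrometry.net code also handles reflections and uses a specific
    A/B -> (0,0)/(1,1) frame convention."""
    zs = [complex(x, y) for x, y in stars]
    # The most widely separated pair defines the coordinate frame.
    pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
    i, j = max(pairs, key=lambda p: abs(zs[p[0]] - zs[p[1]]))
    a, b = zs[i], zs[j]
    rest = [z for k, z in enumerate(zs) if k not in (i, j)]
    # (z - a) / (b - a) is invariant under rotation, translation, and scaling,
    # so the remaining two stars' frame coordinates serve as the hash code.
    coords = sorted(((z - a) / (b - a) for z in rest),
                    key=lambda z: (z.real, z.imag))
    return tuple(v for z in coords for v in (z.real, z.imag))
```

Because the hash is invariant to where the camera points and how it's rotated or zoomed, the same code can be looked up in an index built from Gaia-like catalog positions.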

BICEP, however, is located at the South Pole on a moving ice sheet, requiring frequent updates to its pointing model, and has six months of continuous daylight, so daytime star pointing observations are required. This requires a different technique. Instead of looking at asterisms with multiple stars, the optical pointing telescope is pointed at a single star using an initial pointing model, the telescope pointing is adjusted until the star is centered, and the offset is recorded. This measurement process is repeated for the few dozen brightest stars, which acquires the data needed for refining the pointing model.


> the default color palette is colorblind-friendly

No, it very much isn't. The second and third colors, the orange and the green, look extremely similar to protanopes (red deficiency). Fortunately, there's a plan to fix this for Matplotlib 4.0.


I implemented it as part of a GSoC project a decade ago, which was first included in the 0.91 release in 2015.


You're correct, and the other responses saying otherwise are misinformed. Protanopes do not have long-wavelength cones and thus have reduced sensitivity to that end of the visible spectrum, i.e., red light appears dimmer to such individuals. This is also why red on black (or vice versa) is a color combination with poor accessibility, since it has reduced contrast for protanopes as the red appears darker and thus closer to black.
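
The contrast penalty is easy to quantify with the WCAG 2.x relative-luminance formula: even for normal color vision, pure red on black reaches only about a 5.3:1 contrast ratio (white on black is 21:1), and for a protanope the effective ratio is lower still, since the long-wavelength contribution to perceived lightness is reduced. A minimal sketch:

```python
def rel_luminance(r, g, b):
    """WCAG 2.x relative luminance from 8-bit sRGB components."""
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)

def contrast_ratio(fg, bg):
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lighter, darker = sorted((rel_luminance(*fg), rel_luminance(*bg)),
                             reverse=True)
    return (lighter + 0.05) / (darker + 0.05)
```

By the same formula, pure green on black scores about 15.3:1, which is why red-on-black reads so much darker than green-on-black even before any deficiency is considered.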


That's to be expected since the Coblis methods, particularly the v1/ColorMatrix version, are known to be very inaccurate. Unfortunately, it's usually the top search result for "colorblindness simulator," which leads folks with normal color vision who try to check for color vision deficiency accessibility to conclude that the images they're checking are accessible in cases when they're not.


For those interested, I've done some recent research on this topic [1], which tries to find cycles of colors that optimize both accessibility for color vision deficiencies, using simulations and minimum perceptual distances, and aesthetics, using crowd-sourced survey data.

[1] https://arxiv.org/abs/2107.02270


The simulation technique [1] currently used by the Firefox / Chrome dev tools is reasonable, so it's definitely much better than not checking for color vision deficiency accessibility at all. Ideally, color should only be used for progressive enhancement; if your UI can be used in grayscale, it's probably fine.

However, the method originally used by both browsers, which is also used by some other tools, had no scientific provenance and produced clearly incorrect results, so one needs to be careful about this sort of thing in general.

[1] https://doi.org/10.1109/TVCG.2009.113
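
One caveat on the grayscale check: converting by "desaturating" (e.g., via HSL lightness) is not a perceptual test, since it ignores how differently each channel contributes to lightness. A quick sketch of the gap, using Rec. 709 luma weights (strictly, these should be applied to linearized channels; gamma-encoded values are used here for brevity):

```python
import colorsys

def luminance_gray(r, g, b):
    # Perceptual-ish grayscale: Rec. 709 luma weights on 0-1 channels.
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def hsl_lightness(r, g, b):
    # "Desaturation" grayscale: HSL lightness is just (max + min) / 2.
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return l

red, green = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
```

Pure red and pure green have identical HSL lightness (0.5) but very different luminance, and it's the luminance that drives how a grayscale rendering actually reads.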


colorblind person here. making sure things look ok in greyscale is excellent advice. for me, perceptual differences in intensity are most of what my visual system pays attention to...color only if I try.

most colored interfaces are hugely garish, and the massive intensity differences actually make it really hard to read.

and conversely if two indicator colors are the same brightness and you need me to distinguish between them, we're going to struggle.


One thing I worry about here that I should probably do more research on: is colorblindness like turning off or lessening a color channel? Because I know for some graphical work I've done, if I do a pure grayscale by just turning down the saturation or something, the contrast might be good, but if I turn off one color channel, the contrast/intensity differences sometimes seem to vanish.

I do grayscale testing, but I worry that when I do grayscale tests for stuff like game graphics where there aren't clear contour lines, that I'm getting a false sense of confidence -- I worry that the test might not be an accurate representation of the level of contrast a colorblind person will actually see if they sit down in front of the non-grayscale version. Is that something I should be worried about, or am I just misunderstanding how colorblindness works?

If I do a grayscale test and the interface still seems clear/usable, how confident should I actually be that you'll be able to use the non-grayscale interface?


part of the issue here is while the physical mechanism of colorblindness is quite clear, the perceptual implications are inherently subjective.

so I don't know how much my advice is relevant to colorblind people in general.

but greyscale is a very good start. after that - if you want to show contrast, at least for me there are whole areas of the palette that you don't want to take more than one color from. for example, the whole red/green/brown/tan/orange space for me is a giant wash. purple looks nice, but don't set it against blue unless it's substantially different in saturation.

a secondary issue is that some color combinations I find confusing/distracting. that is, my brain can't really settle on the color(s). so for example casual games sometimes just don't work, because it's going to take me a second or two to even decide if I can combine these two dots...and I may not even be able to focus on anything else because there is the loud flashing thing on the screen (flashing because I'm 'trying on' colors to see if they fit)

if there is too much color going on I'll probably just reject the whole activity if I can.

sadly I've seen games with a 'colorblind mode' which shows that someone really cared - but it was no more usable than the default mode, or even worse.

but low saturation UIs with good color choices do look better than mono. I have some form of blue/yellow, but certain choices of blue and yellow are really quite readable and even look pleasant (shrug).

but I wouldn't flail too much. as other people have mentioned, it's not really that life-changing. if I have to use your garish interface to book an appointment it doesn't really matter...but if you're going to go in and colorize my emacs with nice round hex values, I wish you would have the courage to post your address so I can come visit you and have a talk.

that was all over the place. hope it helped.


That is super helpful. I've done a little bit of research on this, but apparently not nearly enough because I haven't really thought that much before about the distraction angle at all; I assumed if I got the contrast right for everyone, that there just wouldn't be any other problems with color.

I guess this is a good argument for not just testing in grayscale, but at least thinking about actually shipping the grayscale mode or individual color channel adjusters. Because I know in a worst case scenario that a grayscale test will be accurate to what people see if I provide a toggle that turns it on in the final product.

Just... yeah, to your point, need to make sure that whatever colorblind mode is shipped is actually an improvement :)


What was the original method may I ask? Some… basic RGB mixing?


It applied a linear transform to RGB values, like the current method, but the original matrices were the "colorjack.com matrices" [1] that have been floating around the internet for a while. They weren't derived scientifically and are quite inaccurate, at least for protanopia.

[1] https://github.com/MaPePeR/jsColorblindSimulator#the-colorma...

