Right now VNC will just map to the closest possible colour when it needs to reduce the colour depth. This is extremely fast, but looks like crap. We might want to consider dithering, which would give a much better result. The downsides would be more CPU time and possibly increased bandwidth, though these might be negligible.
Created attachment 390: example.png — example picture of the difference between closest colour (posterisation) and dithering.
Got bored and did a quick proof of concept. Builds upon the work in bug 4915.

diff --git a/common/rfb/PixelFormat.cxx b/common/rfb/PixelFormat.cxx
index 6ecbdd4..01912d1 100644
--- a/common/rfb/PixelFormat.cxx
+++ b/common/rfb/PixelFormat.cxx
@@ -33,6 +33,16 @@ using namespace rfb;
 
+static const int dither[8][8] = {
+  { 0, 32,  8, 40,  2, 34, 10, 42}, /* 8x8 Bayer ordered dithering */
+  {48, 16, 56, 24, 50, 18, 58, 26}, /* pattern. Each input pixel */
+  {12, 44,  4, 36, 14, 46,  6, 38}, /* is scaled to the 0..63 range */
+  {60, 28, 52, 20, 62, 30, 54, 22}, /* before looking in this table */
+  { 3, 35, 11, 43,  1, 33,  9, 41}, /* to determine the action. */
+  {51, 19, 59, 27, 49, 17, 57, 25},
+  {15, 47,  7, 39, 13, 45,  5, 37},
+  {63, 31, 55, 23, 61, 29, 53, 21} };
+
 PixelFormat::PixelFormat(int b, int d, bool e, bool t,
                          int rm, int gm, int bm, int rs, int gs, int bs)
   : bpp(b), depth(d), trueColour(t), bigEndian(e),
@@ -335,6 +345,7 @@ void PixelFormat::bufferFromBuffer(rdr::U8* dst, const PixelFormat &srcPF,
       dst += dstStride * bpp/8;
       src += srcStride * srcPF.bpp/8;
     }
+#if 0
   } else if (IS_ALIGNED(dst, bpp/8) && IS_ALIGNED(src, srcPF.bpp/8)) {
     // Optimised common case
     switch (bpp) {
@@ -387,17 +398,33 @@ void PixelFormat::bufferFromBuffer(rdr::U8* dst, const PixelFormat &srcPF,
       }
       break;
     }
+#endif
   } else {
     // Generic code
+    int redDitherShift, greenDitherShift, blueDitherShift;
+    redDitherShift = 6 - (srcPF.redBits - redBits);
+    greenDitherShift = 6 - (srcPF.greenBits - greenBits);
+    blueDitherShift = 6 - (srcPF.blueBits - blueBits);
     int dstPad = (dstStride - w) * bpp/8;
     int srcPad = (srcStride - w) * srcPF.bpp/8;
     while (h--) {
       int w_ = w;
       while (w_--) {
         Pixel p;
+        rdr::U8 r, g, b, d;
         p = srcPF.pixelFromBuffer(src);
-        p = pixelFromPixel(srcPF, p);
+        srcPF.rgbFromPixel(p, &r, &g, &b);
+
+        d = dither[w_ & 7][h & 7];
+        if (r + (d >> redDitherShift) <= 0xff)
+          r += d >> redDitherShift;
+        if (g + (d >> greenDitherShift) <= 0xff)
+          g += d >> greenDitherShift;
+        if (b + (d >> blueDitherShift) <= 0xff)
+          b += d >> blueDitherShift;
+
+        p = pixelFromRGB(r, g, b);
         bufferFromPixel(dst, p);
         dst += bpp/8;

Works fairly well. Some points:

- We will most likely have to use ordered dithering as we update small portions of the image each time. An error diffusion dither would simply lose too much of the accumulated error.
- We need to propagate the x/y coordinates down to the dithering layer somehow; otherwise we'll get noticeable patterns when we update small adjacent areas.
- Compression will most likely be worse as we add high-frequency data (it will really screw up RLE). Ordered dithering will probably still compress fairly well under dictionary algorithms (like deflate), as there are a lot of repeating patterns.
- We need to disable solid area detection when dithering, as it looks at the buffer before conversion, and an area might no longer be solid once dithered.
One more note: the above implementation gets the calculations wrong and results in increased luminance.
We also need to adjust for gamma as it has a massive impact here.
This page has some interesting information: http://bisqwit.iki.fi/story/howto/dither/jy/ The algorithms there are probably too costly for us, but it's interesting general information as well.
Another issue is CopyRect. A copied area will not be dithered with the correct offset into the matrix, and patterns can easily emerge around it. We probably don't want to disable CopyRect though, given how efficient it is. Bug 2928 might be the answer: we mark such areas as "lossy" and they will gradually be fixed with the correct matrix offset.