Zorba the Hutt (zorbathut) wrote,

This is turning into a truly byzantine compression algorithm.

The core of it is vector quantization compression, which is basically palettization on steroids. Then I've got a *second* layer of palettization on top of that, because the first layer's output is effectively truecolor.
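
(For the curious, here's a rough sketch of what that first layer means; the names are made up for illustration, and this is the general technique, not the actual codec. Palettization maps each pixel to the nearest entry in a small palette; vector quantization does the same thing to whole blocks of pixels, treated as points in a higher-dimensional space.)

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <limits>
#include <vector>

// Illustrative only: here a "vector" is a 2x2 block of RGB pixels,
// i.e. 12 values, treated as one point in 12-dimensional space.
using Vec = std::array<uint8_t, 12>;

// Squared Euclidean distance between two blocks.
static long dist2(const Vec &a, const Vec &b) {
    long d = 0;
    for (size_t i = 0; i < a.size(); ++i) {
        long diff = long(a[i]) - long(b[i]);
        d += diff * diff;
    }
    return d;
}

// Encode: replace each block with the index of the closest codebook
// entry, exactly like replacing a pixel with its nearest palette color.
std::vector<int> quantize(const std::vector<Vec> &blocks,
                          const std::vector<Vec> &codebook) {
    std::vector<int> indices;
    indices.reserve(blocks.size());
    for (const Vec &blk : blocks) {
        int best = 0;
        long bestDist = std::numeric_limits<long>::max();
        for (size_t c = 0; c < codebook.size(); ++c) {
            long d = dist2(blk, codebook[c]);
            if (d < bestDist) { bestDist = d; best = int(c); }
        }
        indices.push_back(best);
    }
    return indices;
}
```

The compressed data is then just the codebook plus one small index per block, which is where the space savings come from.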

Said palettization isn't trivial, and luckily, Ezra had found an algorithm a while back that does the job beautifully. It's far better than most of the algorithms you find on the 'net, but also incredibly hard to track down. (Search for Xiaolin Wu's Color Quantizer on Google if you really want.)

However, the two public implementations are sort of lousy, as their first step involves stripping off the low three bits of all the color values. I mean, ick. So we're not doing that. Unfortunately, they *also* involve setting up a 3d histogram of color density . . . and when you're not stripping off the low three bits, that's much, much harder: the histogram goes from 32 cells per axis to 256, which means roughly 16.7 million cells instead of 33 thousand. Ezra had a truly ghastly linked-list thing set up.
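
(A sketch of one way around that, a sparse histogram keyed on the packed full-precision color in a hash map; this is illustrative, and not the linked-list structure Ezra actually built.)

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Sparse color histogram keeping all 8 bits per channel. A dense 256^3
// array would be ~16.7M counters, almost all zero for a real image; the
// hash map only stores colors that actually occur.
struct Rgb { uint8_t r, g, b; };

std::unordered_map<uint32_t, uint32_t>
buildHistogram(const std::vector<Rgb> &pixels) {
    std::unordered_map<uint32_t, uint32_t> hist;
    hist.reserve(pixels.size());
    for (const Rgb &p : pixels) {
        // Pack the full-precision color into one 24-bit key.
        uint32_t key = (uint32_t(p.r) << 16) | (uint32_t(p.g) << 8) | p.b;
        ++hist[key];  // count occurrences of this exact color
    }
    return hist;
}
```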

Well, that didn't work, because now we're expanding it to palettizing in a 16-dimensional colorspace, and even the linked-list thing has a ton of overhead at that scale. So I'm setting it up to use a kd-tree, a structure normally designed for 3d searching in a physical environment, to help with counting 16-dimensional vertices.
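
(Roughly what that means, as an assumed sketch rather than the real code: split on one coordinate per level, cycling through the 16 dimensions, and bump a counter whenever an identical vector shows up again.)

```cpp
#include <array>
#include <cstdint>
#include <memory>

// Sketch of a kd-tree repurposed from 3-D spatial search to tallying
// exact duplicates of 16-dimensional vectors.
constexpr int DIMS = 16;
using Point = std::array<uint8_t, DIMS>;

struct Node {
    Point point;
    uint32_t count = 1;            // times this exact vector has occurred
    std::unique_ptr<Node> lo, hi;  // children below/above the split plane
};

void insert(std::unique_ptr<Node> &node, const Point &p, int depth = 0) {
    if (!node) {
        node = std::make_unique<Node>();
        node->point = p;
        return;
    }
    if (p == node->point) {        // exact match: just count it
        ++node->count;
        return;
    }
    int axis = depth % DIMS;       // cycle the splitting dimension by depth
    if (p[axis] < node->point[axis])
        insert(node->lo, p, depth + 1);
    else
        insert(node->hi, p, depth + 1);
}
```

Since the insertion path is deterministic, a duplicate vector always walks back to the node that already holds it, so counting stays exact without a dense table.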

So: a kd-tree in order to run a pair of palettizers, the output Huffman-encoded, and the entire process farmed out to a distributed computing network, because while it's not as painfully slow as it used to be, it's still not exactly incendiary.
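
(The Huffman step, at least, is the completely standard algorithm; here's a minimal sketch of building the code table over the palettized indices, illustrative rather than the actual encoder.)

```cpp
#include <cstdint>
#include <memory>
#include <queue>
#include <string>
#include <unordered_map>
#include <vector>

// Standard Huffman coding over the palettized index stream: frequent
// indices get short bit strings, rare ones get long ones. The real
// encoder presumably emits bits, not std::string; this is a sketch.
struct HNode {
    uint64_t freq;
    int symbol;  // -1 for internal nodes
    std::unique_ptr<HNode> left, right;
};

struct Cmp {
    bool operator()(const HNode *a, const HNode *b) const {
        return a->freq > b->freq;  // min-heap by frequency
    }
};

std::unordered_map<int, std::string>
buildCodes(const std::vector<int> &indices) {
    std::unordered_map<int, uint64_t> freq;
    for (int s : indices) ++freq[s];

    std::priority_queue<HNode*, std::vector<HNode*>, Cmp> pq;
    for (auto &[sym, f] : freq)
        pq.push(new HNode{f, sym, nullptr, nullptr});

    while (pq.size() > 1) {  // repeatedly merge the two rarest trees
        HNode *a = pq.top(); pq.pop();
        HNode *b = pq.top(); pq.pop();
        pq.push(new HNode{a->freq + b->freq, -1,
                          std::unique_ptr<HNode>(a),
                          std::unique_ptr<HNode>(b)});
    }

    // Walk the tree, recording the 0/1 path to each leaf symbol.
    std::unordered_map<int, std::string> codes;
    auto walk = [&](auto &&self, const HNode *n, std::string path) -> void {
        if (!n) return;
        if (n->symbol >= 0) {
            codes[n->symbol] = path.empty() ? "0" : path;
            return;
        }
        self(self, n->left.get(), path + '0');
        self(self, n->right.get(), path + '1');
    };
    std::unique_ptr<HNode> root(pq.empty() ? nullptr : pq.top());
    walk(walk, root.get(), "");
    return codes;
}
```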

On the other hand, it's still using well under 50% of the space of the original solution, I'm still not done playing with it, and so it's a fantastic success.

Just . . . interesting.