LCQuant - A perceptual image quantizer by Leandro Correia - 2025
LCQuant is an image quantizer. The idea of quantization is to reduce the number of colors in an image (thus reducing its file size) while minimizing the inevitable loss of quality. It's a lightweight command‑line tool for modern workflows, created by a designer, illustrator, and developer with a passion for uncompromising visual quality. LCQuant tries to preserve contrast and color diversity in logos, photos, and gradients, supports alpha transparency, and even allows palettes beyond 256 colors for impressive file size optimizations.
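To illustrate the basic idea, here is the remapping step that every palette quantizer performs once a palette has been chosen: each pixel is replaced by its nearest palette entry. This is a generic toy sketch in Python, not LCQuant's actual code; the function names and the squared-Euclidean RGB metric are my own assumptions.

```python
def nearest_palette_color(pixel, palette):
    """Return the palette entry closest to `pixel` in RGB space
    (squared Euclidean distance, chosen here for simplicity)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(palette, key=lambda c: sq_dist(pixel, c))

def quantize(pixels, palette):
    """Map every pixel to its nearest palette color."""
    return [nearest_palette_color(p, palette) for p in pixels]

palette = [(0, 0, 0), (255, 0, 0), (255, 255, 255)]
print(quantize([(10, 10, 10), (200, 30, 30)], palette))
# near-black maps to black, dark red maps to red
```

The hard part of quantization, and where the algorithms below differ, is not this remapping but choosing which colors go into the palette in the first place.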
Alpha support – Transparency fully respected, ideal for web and game development.
Fast & portable – Single‑file executable, optimized for speed.
Original (left) vs my algorithm
The original render contains 77,960 unique colors. Here I reduced it to a palette of 128 colors with my new algorithm. Artifacts such as banding are inevitable in these scenarios, but the result is still better than that of many of the most famous tools.
Photoshop (left) vs my algorithm
Photoshop's quantization (I assume it uses median cut) produces strong banding, especially in smooth gradients such as this sky. My approach preserves tone transitions while improving the uniformity of the color distribution. Here I'm using 256 colors in Photoshop against 128 colors in my conversion. From now on, all other comparisons use 128 colors.
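For readers unfamiliar with median cut, here is a minimal Python sketch of the classic algorithm (a textbook version, not Photoshop's actual implementation): recursively split the box of pixels along its widest channel at the median, then average each box into a palette entry.

```python
def median_cut(pixels, n_colors):
    """Classic median-cut palette builder (toy version).
    Repeatedly splits the box with the widest channel range at the
    median of that channel, then averages each box into one color.
    n_colors should be a power of two for even splitting."""
    boxes = [list(pixels)]
    while len(boxes) < n_colors:
        # pick the box with the largest range in any channel
        box = max(boxes, key=lambda b: max(
            max(p[c] for p in b) - min(p[c] for p in b) for c in range(3)))
        boxes.remove(box)
        channel = max(range(3), key=lambda c:
                      max(p[c] for p in box) - min(p[c] for p in box))
        box.sort(key=lambda p: p[channel])
        mid = len(box) // 2
        boxes += [box[:mid], box[mid:]]
    # each box contributes its mean color to the palette
    return [tuple(sum(p[c] for p in b) // len(b) for c in range(3))
            for b in boxes]
```

Because each box is averaged regardless of how colors are distributed inside it, median cut tends to merge subtle gradient steps into one entry, which is one source of the banding visible above.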
Xiaolin Wu (1991) vs my algorithm
Wu is a fast, high-quality quantization method that builds an optimized palette using variance minimization in 3D color space (RGB cube subdivision), achieving excellent results at relatively low computational cost. My conversion handles banding better, especially in smooth gradients (notice the grayish spot right above the doll's head, for example).
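Wu's real method operates on a 3D color histogram with precomputed moment tables; as a toy illustration of the underlying criterion only, here is a one-dimensional variance-minimizing split in Python (my own simplified example, not Wu's code):

```python
def variance(vals):
    """Sum of squared deviations from the mean (unnormalized variance)."""
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals)

def best_split(vals):
    """Return the two-way split that minimizes total within-group
    variance: the same criterion Wu applies when subdividing a box,
    reduced here to one dimension."""
    vals = sorted(vals)
    best_i, best_cost = 1, float("inf")
    for i in range(1, len(vals)):
        cost = variance(vals[:i]) + variance(vals[i:])
        if cost < best_cost:
            best_i, best_cost = i, cost
    return vals[:best_i], vals[best_i:]
```

In the full algorithm this search runs over all three axes of each candidate box, and the moment tables make each variance evaluation O(1), which is what keeps Wu fast.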
Dennis Lee V3 (1997) vs my algorithm
A two-pass color quantizer that uses an exhaustive search technique to minimize the error introduced at each pass. It shows less banding and fewer color aberrations (the grayish area above the doll's head is less visible than in Wu's). Still, my conversion has even less banding in the sky and is practically as good on the floor.
Neuquant (1994) vs my algorithm
Neuquant uses a self-organizing neural network to perform color quantization, "learning" representative colors from the image through iterative training. Both Neuquant and my algorithm achieve similar results in detailed, sharp areas of the image, but my conversion still has less banding and also handles the brownish tones of the sky better (notice the greenish area at the top left of the Neuquant result).
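A rough sketch of the self-organizing idea, heavily simplified from real NeuQuant (which also updates a neighborhood of neurons and decays the learning rate and bias terms over time):

```python
def som_train(samples, palette, lr=0.3):
    """One toy training epoch: for each sample color, move the nearest
    palette entry (the 'winning neuron') a fraction lr toward it.
    Real NeuQuant also nudges neighboring neurons and anneals lr."""
    palette = [list(c) for c in palette]
    for s in samples:
        win = min(palette,
                  key=lambda c: sum((a - b) ** 2 for a, b in zip(c, s)))
        for k in range(3):
            win[k] += lr * (s[k] - win[k])
    return [tuple(round(x) for x in c) for c in palette]
```

After enough samples, each neuron settles near a dense region of the image's color distribution, which is how the network "learns" its palette.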
K-Means (1982) vs my algorithm
K-Means' strength is in directly minimizing perceptual error, often outperforming older methods such as median cut or octree. Still, my approach has less banding and better color accuracy (note the odd small color patch at the top center of the K-Means result).
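For reference, a minimal K-Means (Lloyd's algorithm) palette builder might look like the Python below; the deterministic initialization is an arbitrary choice for this sketch, real implementations usually seed centroids more carefully:

```python
def kmeans_palette(pixels, k, iters=10):
    """Lloyd's algorithm on pixel colors: assign each pixel to its
    nearest centroid, then move each centroid to its cluster's mean.
    Repeating this monotonically reduces total squared error."""
    centroids = [list(pixels[i * len(pixels) // k]) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            i = min(range(k), key=lambda j: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[j])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # leave empty clusters where they are
                centroids[i] = [sum(c[d] for c in cl) / len(cl)
                                for d in range(3)]
    return [tuple(round(x) for x in c) for c in centroids]
```

The cost of this directness is speed: every iteration touches every pixel, which is why histogram-based methods like Wu's were long preferred for large images.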
208,242 unique colors (24-bit), LCQuant in 32 colors, Neuquant in 32 colors, LCQuant in 32 colors dithered, Neuquant in 32 colors dithered.
With palettes this small, we inevitably get more banding and quality loss. My algorithm shines against older quantizers at larger palettes, but it still holds up well against them at small palette sizes too.
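Dithering hides banding by trading it for high-frequency noise. Below is a minimal Floyd–Steinberg error-diffusion sketch on a grayscale image: a generic textbook version for illustration, not LCQuant's actual dithering code.

```python
def fs_dither(img, levels=2):
    """Floyd–Steinberg error diffusion on a grayscale image given as a
    list of rows with values 0-255. Each pixel is snapped to the
    nearest of `levels` gray values, and the rounding error is pushed
    onto not-yet-visited neighbors so it averages out visually."""
    h, w = len(img), len(img[0])
    img = [row[:] for row in img]          # work on a copy
    step = 255 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = round(old / step) * step  # snap to nearest level
            img[y][x] = new
            err = old - new
            # diffuse the error with the classic 7/16, 3/16, 5/16, 1/16 weights
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return img
```

Run on a flat 50%-gray image with only black and white available, this produces a checkered mix of 0s and 255s whose local average approximates the original gray, which is exactly the effect visible in the dithered comparisons above.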
A 24-bit image with 20,889 unique colors vs my algorithm with 486 colors.
Although JPEGs are far more effective in quality and compression for photographic images, they don't support alpha channels (transparency). Applying LCQuant to transparent images can reduce their file size dramatically. In this example, the quantized image is virtually identical to the original, yet its file size is almost 8x smaller. This makes LCQuant an invaluable tool for both web designers and game devs.
The interesting part is that my algorithm takes a much simpler approach to choosing colors than many of the famous quantizers, and there's still some room for improvement. Stay tuned, and if you're interested in my other work, visit my site: www.leandrocorreia.com