Designer News
Where the design community meets.
Founder at Optimage · Joined about 9 years ago
Thanks!
PNGyu uses pngquant internally, so the result should be comparable to ImageOptim in lossy mode.
It's unfortunate, though, that some other tools get ahead of ImageOptim due to aggressive configuration. But that comes at an even greater cost in visual quality.
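For context, pngquant itself has a quality floor that guards against exactly that kind of over-aggressive output: it refuses to save a file when the requested minimum quality can't be met. A minimal sketch of driving it from Python, assuming pngquant is installed and on PATH (file names are hypothetical):

    import subprocess

    src, dst = "input.png", "output.png"  # hypothetical paths

    # Lossy palette reduction; pngquant exits with code 99 when the
    # result would fall below the requested quality floor.
    result = subprocess.run(
        ["pngquant", "--quality=60-80", "--speed=1", "--force",
         "--output", dst, src]
    )
    if result.returncode == 99:
        print("Skipped: quality floor not reachable for this image")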
I made the Clean History plugin for this. It uses the official Versions API and has an option to automatically remove document versions on close. It's free and open source.
Optimage is different on many levels.
Lossless compression. PNG is about 3-10% smaller on average. The best test I could find is https://css-ig.net/png-tools-overview. Total savings: Optimage 1 787 523 B vs ZopfliPNG (used in ImageOptim) 1 507 775 B. That's about 3%. In the reduction subset, it's about 7% on top of ImageOptim. JPEG and GIF are about the same, although individual GIFs can be smaller. Optimage also supports SVG, PDF, ICO and ICNS out of the box. ImageOptim requires Node.js to be installed for SVGO.
Lossy compression. Optimage, at least for my own projects, is good enough for automatic lossy compression on arbitrary images. That means gradients won't be broken by color quantization, and JPEGs won't end up overly compressed just because the average error is low (which is what happens with many other tools; see the error-metric sketch after these points). This is where the numbers deviate the most. My bar is high here: not JPEGmini and TinyPNG, but pngquant, Zopfli, Guetzli and beyond.
App. Pausing, file renaming, a configurable destination folder, auto-scaling parallel processing (image compression is slow, but it should not stand in the way of other activities), etc.
Things like color management, conversion to sRGB (only done if the attached profile actually differs), auto rotation for photos using Orientation tags (lossless in lossless mode), etc.
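To illustrate the average-error point from the lossy paragraph above: a low mean error can hide exactly the kind of localized banding that ruins gradients. A toy comparison with Pillow and NumPy, not Optimage's actual metric (file names are hypothetical, and both images must have the same dimensions):

    import numpy as np
    from PIL import Image

    orig = np.asarray(Image.open("original.png").convert("RGB"), dtype=np.int16)
    lossy = np.asarray(Image.open("compressed.png").convert("RGB"), dtype=np.int16)

    err = np.abs(orig - lossy)
    print("mean abs error:", err.mean())  # what many tools optimize for
    print("peak abs error:", err.max())   # where broken gradients show up

And for the color-management point, the sRGB and orientation steps look roughly like this with Pillow. A sketch only, with a crude profile comparison and a hypothetical file name, not Optimage's implementation:

    import io
    from PIL import Image, ImageCms, ImageOps

    img = Image.open("photo.jpg")  # hypothetical input

    # Apply the EXIF Orientation tag to the pixels. (Doing this
    # losslessly for JPEG requires transposing the DCT coefficients.)
    img = ImageOps.exif_transpose(img)

    # Convert to sRGB only if an embedded profile is attached and its
    # description differs from sRGB. (A crude check; a real tool would
    # compare the actual profile contents.)
    icc = img.info.get("icc_profile")
    if icc:
        src = ImageCms.ImageCmsProfile(io.BytesIO(icc))
        srgb = ImageCms.createProfile("sRGB")
        if ImageCms.getProfileDescription(src) != ImageCms.getProfileDescription(srgb):
            img = ImageCms.profileToProfile(img, src, srgb, outputMode="RGB")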
States eliminate repeating work. They naturally expand to artboards and allow exportable symbols with per-resolution adjustments.
I even implemented the latter in the plugin for Photoshop a while back. But that alone would not fix it.
It is time-consuming, on the order of seconds. You need all the performance you can get.
Type trials, palette sorting, filter brute forcing, nearly-optimal Deflate compression, etc. It's way more than 6 lines of code to do it properly.
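To give a sense of just one of those steps, here's a toy sketch of per-scanline filter trials for 8-bit grayscale rows, using the minimum-sum-of-absolute-differences heuristic from the PNG spec. Real optimizers also try bit depths and palette orderings and use stronger Deflate encoders such as Zopfli:

    import zlib

    def paeth(a, b, c):
        p = a + b - c
        pa, pb, pc = abs(p - a), abs(p - b), abs(p - c)
        if pa <= pb and pa <= pc:
            return a
        return b if pb <= pc else c

    def filter_row(ftype, cur, prev):
        out = bytearray()
        for x, raw in enumerate(cur):
            a = cur[x - 1] if x else 0   # left neighbour
            b = prev[x]                  # pixel above
            c = prev[x - 1] if x else 0  # above-left
            pred = (0, a, b, (a + b) // 2, paeth(a, b, c))[ftype]
            out.append((raw - pred) & 0xFF)
        return bytes(out)

    def filter_and_deflate(rows):
        """rows: equal-length bytes objects, one per scanline."""
        prev = bytes(len(rows[0]))
        filtered = bytearray()
        for cur in rows:
            # Pick the filter whose output has the smallest sum of
            # absolute values, reading filtered bytes as signed.
            cost, ftype, data = min(
                (sum(v if v < 128 else 256 - v for v in s), f, s)
                for f in range(5)
                for s in [filter_row(f, cur, prev)]
            )
            filtered.append(ftype)
            filtered += data
            prev = cur
        return zlib.compress(bytes(filtered), 9)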
With bit-optimal parsing and elaborate heuristics you can probably get close and keep top performance.
The 23.3 kB is 254 colors. The 20.4 kB is just 25 colors, and just 15.1 kB (-26.1%) losslessly compressed with a better tool. That's the difference.
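If you want to check color counts yourself, Pillow can do it in a few lines (file names are hypothetical):

    from PIL import Image

    for path in ("original.png", "quantized.png"):
        img = Image.open(path).convert("RGB")
        colors = img.getcolors(maxcolors=img.width * img.height)
        print(path, len(colors), "unique colors")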
Thanks! Do you have a link?
Edit: Found it. It’s all straight to the point! For anyone interested, here's the link.
Special thanks for including test images. Let's see if I can improve on that one.
"hard problem to solve in the browser" - what do you mean by that?
Just check out the code complexity in the linked projects.
I don't think the tools you mentioned can do a better job than UPNG.
I've updated the comment with the link to the image. Can you make it this small without changing the number of colors?
Dithering may improve the visual quality, but it also usually increases the file size (by creating noise); that is why I avoid it.
It's not for everything. But without it, images containing gradients have noticeable quality degradation. Dithering can also be applied selectively.
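The size trade-off is easy to reproduce with Pillow: quantize the same image to the same palette size with and without Floyd-Steinberg dithering and compare the encoded sizes (the input file name is hypothetical):

    import io
    from PIL import Image

    img = Image.open("gradient.png").convert("RGB")
    for dither in (Image.Dither.NONE, Image.Dither.FLOYDSTEINBERG):
        q = img.quantize(colors=64, dither=dither)
        buf = io.BytesIO()
        q.save(buf, format="PNG", optimize=True)
        print(dither, len(buf.getvalue()), "bytes")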