JPEG is king. For now. But Google's got their WebP image format, and Microsoft's got their JPEG XR. But as images get higher-res and users get less patient with downloads (even as higher bandwidth becomes more of a household thing and, hopefully, more affordable), what needs to be done with the Internet "image" to make it a little more efficient?
Here’s my idea for a new Internet image format:
JPEGs already do a sufficient job of compressing a single image. The problem is that to see the same image at multiple resolutions, you have to download it multiple times; essentially, you're downloading different files from the server. This is how Google has set up Blogger, which keeps multiple resolutions of the same image as separate files on the server. So my idea is to reduce this clutter by keeping all these different resolutions in one file on the server. That's not the same as a single JPEG. My idea is to store multiple JPEGs inside a container, similar to how a JPEG already stores a thumbnail.
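Just to make that concrete, here's a rough sketch in Python of what packing several already-encoded JPEGs into one container might look like. The MJPG magic number and the exact layout are things I made up for illustration, not any existing standard:

```python
import struct

MAGIC = b"MJPG"  # made-up signature for this hypothetical multi-resolution container


def pack_container(orig_width: int, orig_height: int, jpeg_chunks: list[bytes]) -> bytes:
    """Pack already-encoded JPEG chunks (smallest resolution first) into one file.

    Layout: magic | original width | original height | chunk count,
    then each chunk as a 4-byte big-endian length followed by its JPEG bytes.
    """
    out = bytearray()
    out += MAGIC
    out += struct.pack(">IIH", orig_width, orig_height, len(jpeg_chunks))
    for chunk in jpeg_chunks:
        out += struct.pack(">I", len(chunk))
        out += chunk
    return bytes(out)
```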
Header
The header will contain the original width and height. However, users won't normally download the entire file unless they choose to. Unfortunately, not all servers support partial file downloading and resuming. Incremental downloading is ideal for this image format, but if the client can't start reading the file from an arbitrary byte offset without using up server resources for file handling, it might get complicated.
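For what it's worth, partial downloading is exactly what HTTP Range requests are for. Here's a small sketch using the Python requests library (the URL and byte count are just placeholders) of grabbing only the front of the container when the server cooperates:

```python
import requests  # third-party HTTP client, used here only to illustrate Range requests


def fetch_prefix(url: str, num_bytes: int) -> bytes:
    """Ask the server for just the first num_bytes of the container file.

    This only saves bandwidth when the server honours Range requests
    (it answers 206 Partial Content); otherwise it sends the whole file
    and we simply slice off the part we wanted.
    """
    resp = requests.get(url, headers={"Range": f"bytes=0-{num_bytes - 1}"}, timeout=10)
    resp.raise_for_status()
    if resp.status_code == 206:          # Partial Content: the server cooperated
        return resp.content
    return resp.content[:num_bytes]      # server ignored the Range header


# e.g. grab enough bytes for the header plus the first couple of resolution chunks
# prefix = fetch_prefix("https://example.com/photo.mjpg", 64 * 1024)
```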
Data chunks
Anyway, each chunk would contain only the pixels needed to interpolate the previous chunk's pixels up to the next resolution, and the user would download the file only up to the desired chunk. Each chunk would be stored as a JPEG for the best compression; the container file wrapping these JPEGs wouldn't be compressed again, of course.
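One way those delta chunks could work is shown below. This is just my guess at the mechanics, sketched with Pillow and NumPy; the shift-by-128 trick and the quality setting are assumptions, and the result is approximate because of clipping plus JPEG's own loss:

```python
import io

import numpy as np
from PIL import Image  # Pillow


def delta_chunk(prev_level: Image.Image, next_level: Image.Image, quality: int = 85) -> bytes:
    """Encode only what's needed to go from one resolution to the next.

    The previous level is upscaled to the next size, and the pixel difference
    is shifted into 0..255 so it can itself be stored as a JPEG chunk.
    """
    upscaled = prev_level.resize(next_level.size)
    diff = np.asarray(next_level, dtype=np.int16) - np.asarray(upscaled, dtype=np.int16)
    residual = Image.fromarray(np.clip(diff + 128, 0, 255).astype(np.uint8))
    buf = io.BytesIO()
    residual.save(buf, format="JPEG", quality=quality)
    return buf.getvalue()


def merge_chunk(prev_level: Image.Image, chunk: bytes) -> Image.Image:
    """Rebuild the next resolution from the previous level plus a delta chunk."""
    residual = np.asarray(Image.open(io.BytesIO(chunk)), dtype=np.int16) - 128
    upscaled = prev_level.resize((residual.shape[1], residual.shape[0]))
    merged = np.asarray(upscaled, dtype=np.int16) + residual
    return Image.fromarray(np.clip(merged, 0, 255).astype(np.uint8))
```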
So as each chunk is decoded, the interpreter places the pixels automagically where they need to be. And all of this would happen client-side, as users' computers keep getting more graphics horsepower.
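Putting it together, the client-side decode might walk the container something like this. It's a sketch that assumes the made-up layout from the packing example and the merge_chunk() helper from the previous sketch:

```python
import io
import struct

from PIL import Image

MAGIC = b"MJPG"  # same made-up signature as in the packing sketch


def decode_prefix(data: bytes, levels_wanted: int) -> Image.Image:
    """Walk the container bytes and stop once the desired resolution is reached.

    The first chunk is a complete small JPEG; each later chunk is a delta that
    merge_chunk() (from the previous sketch) combines with the level before it.
    """
    assert data[:4] == MAGIC
    width, height, count = struct.unpack(">IIH", data[4:14])  # original dimensions, handy for page layout
    offset = 14
    image = None
    for _ in range(min(levels_wanted, count)):
        (length,) = struct.unpack(">I", data[offset:offset + 4])
        chunk = data[offset + 4:offset + 4 + length]
        offset += 4 + length
        if image is None:
            image = Image.open(io.BytesIO(chunk))   # base level: a normal small JPEG
        else:
            image = merge_chunk(image, chunk)       # grow it to the next resolution
    return image
```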
Final thoughts
Ultimately, a JPEG alone is sufficient for offline viewing. However, once it's transferred over the wire, it becomes something else, especially when there are kazillions of JPEGs getting downloaded at any given moment. Because users are on the Internet 24/7 now, it becomes like our hard drives, and the physical bandwidth becomes our bus (Google foresaw this; that's why they rolled out their Google OS on netbooks). And you don't want to download an entire JPEG unless you have to. It's not efficient to wait for a client-side buffer to fill before graphics processing when that same amount has to come over the line anyway.
Tags: compression, download, efficient, faster, format, image, internet, jpeg, optimize, smaller size, webp