Let’s start with something simple – there are three universally supported image formats: GIF, PNG, and JPEG. In addition to these formats, some browsers also support newer formats such as WebP and JPEG XR, which offer better overall compression and more features. So, which format should you use?
First, we should do a good job with the images themselves (this part is fairly tedious, so I’ll just provide a short list of recommendations):
- Sizing. Developers and designers – make sure the images you deliver exactly fit their required dimensions on the page. Even if the same image has to appear as different-sized thumbnails on different pages, it’s well worth creating all those thumbnails rather than delivering one large image and relying on the browser to resize it.
- Quality. Don’t be afraid to experiment with lower JPEG quality levels. For certain websites we found that a 50% JPEG quality setting yielded a perfectly reasonable result.
- Correct format. Once again: use PNG for computer-generated images (charts, logos, etc.) and JPEG when you are showing a captured photograph. Notice that, despite the common belief, PNG will outperform GIF in almost every aspect.
- Compression tools. Run your images through an image compression tool before publishing them.
- Useless data. Make sure you strip the metadata off your images and user-uploaded photos.
- Images where CSS3 would do. This point is more interesting, but we are still far from algorithms: simply use CSS3 effects (gradients, shadows, rounded corners) instead of images wherever possible. If your graphics designer is responsible for the markup, ask for CSS3-based elements where they make sense. Treat this as a non-functional requirement.
- Correct image cache settings. We highly recommend aggressive caching for all your website images: set the images’ HTTP Expires header as far in the future as possible.
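As a minimal sketch of that last point, here is what far-future caching headers could look like. The one-year lifetime (31536000 seconds) is a common convention, not a requirement, and the function name is my own:

```javascript
// Build far-future caching headers for static images.
// One year is a typical "as far in the future as possible" choice.
function imageCacheHeaders(now = new Date()) {
  const oneYearMs = 365 * 24 * 60 * 60 * 1000;
  return {
    'Cache-Control': 'public, max-age=31536000, immutable',
    'Expires': new Date(now.getTime() + oneYearMs).toUTCString(),
  };
}
```

You would then attach these headers to every response your image route serves (or configure the equivalent directly in your web server).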
Delivering static icons one by one
Besides photos and thumbnails, your website most likely includes many icons (arrows, stars, signs, marks) and auxiliary images. Google’s search results page alone contains over 80 (!) tiny icons.
A simple solution to this problem is a CSS sprite: a single image that contains all your smaller icons. Your web page downloads this single image from your server, and the page’s CSS uses different class names to display the individual icons within the larger image.
Now, instead of 80 images, Google’s visitors download just a single image. Their browser will quickly download and cache this single image from Google’s servers and all images will be immediately visible.
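As a sketch, a sprite sheet might be wired up like this – the file name and icon offsets below are made up for illustration:

```css
/* All icons live in one sprite image; each class shows a 16x16 window into it. */
.icon {
  display: inline-block;
  width: 16px;
  height: 16px;
  background-image: url("/img/sprite.png");
}
/* Offsets select the individual icons inside the sprite. */
.icon-arrow { background-position: 0 0; }
.icon-star  { background-position: -16px 0; }
.icon-mark  { background-position: -32px 0; }
```

In the markup, <span class="icon icon-star"></span> then displays just the star region of the sprite.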
To be honest, I would also highly recommend implementing Font Awesome on your page.
Delivering images straight from your servers
Once your website’s content is in place, your next goal is to make sure that all your website’s images are delivered as fast as possible to your visitors.
One of the most common website problems we see is developers hosting images on their own servers, usually on the same machine as the website itself. Two things happen here: first, your server strains to deliver images instead of focusing on delivering your unique website content; and second, you’re missing out on one of the most effective image delivery solutions out there – Content Delivery Networks.
Content Delivery Networks are simple-to-use services that serve your website images much faster than your hosting service can. CDNs are built on a large number of worldwide servers, or “edges”. When visitors open your website, they are automatically routed to the nearest edge location, so images are delivered with the best possible performance and much lower latency. Almost every large company – Amazon, Macy’s, eBay, etc. – uses this technology.
Other key problems that CDN software has to solve:
- Synchronization. So you have all your neat farms in the US, in Europe and in Asia, but how do you make sure that they all have the same versions of the files you’re trying to serve? And if one of the farms does not have the current version, how do you tell the load balancer which farm to use instead?
- Logging. In a CDN you usually want to bill your customers, so you need to measure traffic and file accesses. But with multiple farms and multiple web servers in each farm, you somehow need to centralize logging.
- Authentication. After all, a CDN is not just a Web Server delivering HTTP Content to everyone. What if you have a CDN for video streaming that actually restricts access to only certain users?
- Load-balancing. While this is usually handled separately, it also ties into the synchronization part. Say I am a user from South Korea trying to access the content. The load balancer finds that the farm in Seoul is the nearest – but unfortunately, Seoul’s farm does not have the content yet. So the CDN and load balancer need to figure out the nearest farm that does have it. Let’s see… both Paris, France and Los Angeles, USA have the content. Which one should serve it?
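A toy sketch of that last decision: among the farms that actually hold the file, pick the nearest one. The farm records, coordinates, and function names are invented for illustration; real CDNs route on measured latency, not raw distance:

```javascript
// Great-circle distance in km (haversine) – a crude proxy for latency.
function distanceKm(a, b) {
  const toRad = d => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h = Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * 6371 * Math.asin(Math.sqrt(h));
}

// Pick the nearest farm that already has the file; null means "fall back to origin".
function pickFarm(farms, user, fileId) {
  const candidates = farms.filter(f => f.files.has(fileId));
  if (candidates.length === 0) return null;
  return candidates.reduce((best, f) =>
    distanceKm(f, user) < distanceKm(best, user) ? f : best);
}
```

In the Seoul example above, the Seoul farm simply drops out of `candidates`, and the haversine comparison decides between Paris and Los Angeles.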
Progressive or Lazy image loading
This is an extremely widespread technique, used by almost every high-load website; YouTube does something similar during video playback.
Here is what is going on:
- Render a div where the image will be displayed.
- Load a tiny version of the image. At the moment, they seem to be requesting small JPEG thumbnails with a very low quality (e.g. 20%). The markup for this small image is returned in the initial HTML as an image tag, so the browser starts fetching them right away.
- Once the image is loaded, it is drawn into a canvas tag. The image data is then passed through a custom blur() function – you can see it, a bit scrambled, in the main-base.bundle JS file. This function is similar, though not identical, to StackBlur’s blur function. At the same time, the main image is requested.
- Once the main image is loaded, it is shown and the canvas is hidden.
All the transitions are quite smooth, thanks to the CSS animations applied. A bird’s eye view of the markup for an image:
<figure> <div> <div/> <img/> <canvas/> <img/> <noscript/> </div> </figure>
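The steps above can be sketched roughly as follows. The thumbnail URL scheme (a q/w query suffix) and the blur() callback are assumptions for illustration – the real code lives in the site’s own bundle:

```javascript
// Derive a low-quality placeholder URL from the full image URL.
// Purely illustrative: assumes the image server honours ?q=<quality>&w=<width>.
function tinyUrl(fullSrc, quality = 20, width = 40) {
  const sep = fullSrc.includes('?') ? '&' : '?';
  return fullSrc + sep + 'q=' + quality + '&w=' + width;
}

// Browser-only wiring for the blur-up effect; skipped outside the DOM.
if (typeof document !== 'undefined') {
  const progressiveLoad = (container, fullSrc, blur) => {
    const canvas = container.querySelector('canvas');
    const tiny = new Image();
    tiny.onload = () => {
      // Draw the tiny placeholder stretched to full size, then blur it.
      canvas.getContext('2d').drawImage(tiny, 0, 0, canvas.width, canvas.height);
      blur(canvas); // e.g. a StackBlur-style blur function
      // Request the full image while the placeholder is on screen.
      const full = container.querySelector('img.full');
      full.onload = () => { canvas.style.display = 'none'; }; // CSS fades it out
      full.src = fullSrc;
    };
    tiny.src = tinyUrl(fullSrc);
  };
}
```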
Here is a pretty nice implementation example: https://stackoverflow.com/questions/18906266/fast-image-loading-methods-low-to-high-res-with-multiple-backgrounds-javascri
or some additional theory: https://www.sitepoint.com/five-techniques-lazy-load-images-website-performance/
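For plain lazy loading (as in the SitePoint article above), a common modern sketch uses IntersectionObserver; the data-src attribute, the 200px look-ahead, and the helper name are conventions chosen here, not a standard:

```javascript
// Decide whether an element is close enough to the viewport to load.
// Pure helper so the policy is easy to test; margin is a look-ahead in px.
function shouldLoad(elementTop, scrollY, viewportHeight, margin = 200) {
  return elementTop < scrollY + viewportHeight + margin;
}

// Browser-only wiring: swap data-src into src when an image nears the viewport.
if (typeof window !== 'undefined' && 'IntersectionObserver' in window) {
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      entry.target.src = entry.target.dataset.src;
      obs.unobserve(entry.target); // load once, then stop watching
    }
  }, { rootMargin: '200px' }); // start loading 200px ahead of the viewport
  document.querySelectorAll('img[data-src]').forEach(img => observer.observe(img));
}
```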
Prefetching, Preloading and Prerendering
We could also predict what data we will need to show. For example, if you run a search and get results, the most likely link to be opened is the first one. That means the system could prepare to open that exact page in advance.
Prefetching is for telling the browser to look up DNS records ahead of when they’re needed. In practice, most of the requests your browser makes do not require a DNS lookup, because the server’s IP address has already been cached. Browsers like Google Chrome actually comb over webpages for links and automatically look up the DNS for unknown domains. When you’re trying to build an application that is as fast as possible, you don’t want to wait for a DNS lookup on one of your subdomains or on an external domain (lookups can take between 20 and 250 milliseconds). DNS prefetching uses the following syntax: <link rel="dns-prefetch" href="//example.com">
This is the least useful of the three techniques, because you should only use it in these cases:
- When you’re making AJAX requests to a domain other than the one you’re on. This includes subdomains.
- When you’re dynamically rendering content on a page from a different domain.
Preloading is for telling the browser to download assets like images and scripts that you’re not currently using but might be using soon. It uses the following syntax: <link rel="prefetch" href="image.png"> (confusingly, the rel="prefetch" hint is the one meant for likely-future assets, while the newer rel="preload" is for assets needed on the current page).
When to use preloading:
- When you have different stylesheets for different pages and know a visitor is likely to visit another page.
- When you know a visitor is likely to visit a “next page” with lots of images or other large files.
When not to use preloading:
- When the asset is referred to somewhere else on the same page.
- When you’re not sure the user will actually require that asset – for example, a page visitors only reach 3% of the time.
Prerendering goes one step further: the browser fetches and renders the entire next page in a hidden tab, using the following syntax: <link rel="prerender" href="http://example.com/next-page">
When to use prerendering:
- When you’re certain that a user is going to visit a page, and that page has lots of assets that take a while to load.
- When there’s a “next page” that’s very likely to be visited such as in a sign-up flow or the first link in a series of search results.
- When an increase in page load speed would dramatically increase conversions.
When not to use prerendering:
- When a user is not likely to visit a page.
- When the amount of network data is limited (you don’t want users to pay for pages they don’t even visit).
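The guidance above can be condensed into a tiny, purely illustrative decision helper. The thresholds are my own interpretation of the text (skip pages visited only ~3% of the time; never spend a metered connection’s data; reserve prerendering for near-certain next pages):

```javascript
// Pick a resource hint for a predicted next page, or null for none.
// visitProbability: estimated chance the user opens that page (0..1).
// saveData: true when the user signalled a limited data plan.
function chooseHint(visitProbability, saveData) {
  if (saveData) return null;                        // don't waste metered data
  if (visitProbability >= 0.9) return 'prerender';  // near-certain next page
  if (visitProbability > 0.03) return 'prefetch';   // likely enough to warm up
  return null;                                      // rarely-visited pages: skip
}
```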
There are some limitations – Google and Mozilla have lists of user/site actions that will cancel any prefetching/preloading/prerendering actions.