Representing more than 75% of the videos served on the Internet, MP4 is the most commonly used format today. However, MP4 is often used improperly, which can have an unfortunate impact on the User Experience. Let’s see how we can improve this.
Whether you are a luxury brand wishing to showcase extremely high-quality videos or a news platform looking for eye-catching content, video has become an essential part of the Web in just a few years. But adding video to a site can be challenging.
The most widely used and supported encoding is undoubtedly H.264, served by an MP4 file. This format is supported by most video manipulation software: Handbrake, MPEG Streamclip, OpenShot… even VLC! But few of them offer a dedicated export for the Web.
However, you don’t broadcast a video on the Internet like you do on a desktop computer. There are a few things to be careful about.
Disclaimer: the following examples make intensive use of ffmpeg, one of the most popular video processing programs among developers, but most of these optimizations should be available in your favorite software.
Reducing file weight: the right balance between quality and performance
A heavy video increases the total page weight. Keep in mind that some Internet providers sell plans with limited data allowances. Making your pages unnecessarily heavy will not help your visitors.
To optimize your video, you need to ask yourself how it delivers its content over the network. Even if the file size seems small, it may be possible to improve the bitrate for optimal delivery.
What is the bitrate? Simply put, bitrate is the amount of data needed to encode one second of video. The more data you allocate per second, the better the quality, but the heavier the file. I don’t have a magic formula that computes the perfect bitrate. Instead, I invite you to ask yourself what quality each individual video usage actually requires.
For example, it is worth evaluating the necessary bitrate according to the amount of change in the video: the more static the content, the less data you need to allocate to each second of video. Conversely, if the video contains a lot of motion, the bitrate needed for equivalent quality increases.
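One way to let the encoder make this trade-off for you is x264’s constant-quality mode. This is only a sketch, not part of the workflow above: the file names are placeholders, and the CRF value of 23 is simply libx264’s default middle-ground (lower means better quality and a heavier file):

```shell
# Constant-quality encoding: the encoder spends more bits on high-motion
# scenes and fewer on static ones, instead of enforcing a fixed bitrate.
# Typical CRF range for H.264 is 18-28; 23 is the libx264 default.
ffmpeg -i origin.mp4 -c:v libx264 -crf 23 -preset slow -c:a copy output.mp4
```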
You can easily anticipate the weight of a video after encoding by using either a constant bitrate over the entire video or multi-pass encoding. Here is a comparison between an original 10-second extract of footage from the June 2009 Endeavour liftoff and a two-pass encoding with ffmpeg. The left part weighs 85MB; the right video weighs 1.2MB after optimization:
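For reference, a two-pass H.264 encoding of this kind can be sketched as follows; the file names and the 1Mb/s target bitrate are illustrative, not the exact values used for the comparison above:

```shell
# Pass 1 analyzes the video and writes a statistics log; its output is discarded.
ffmpeg -y -i origin.mp4 -c:v libx264 -b:v 1M -pass 1 -an -f mp4 /dev/null
# Pass 2 reuses the log to distribute the bitrate budget where the motion needs it.
ffmpeg -i origin.mp4 -c:v libx264 -b:v 1M -pass 2 -c:a copy optimized.mp4
```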
This example shows what can technically be done to improve the weight of a video, but we can also extrapolate optimizations from the video purpose. It is quite common, for example, to visit web pages containing a large centered content banner with a welcoming message. Sometimes, behind this “Hero Container”, a background video is played.
These videos are neither intended to be watched closely nor useful for conversion. They are often a subtle enhancement, only intended to beautify the page, not to distract. Does such a video need to be of the highest possible quality? Using a blurring effect such as the frei0r iirblur filter, you can slightly fog the original content and gain precious kilobytes:
ffmpeg -i origin.mp4 -vf frei0r=iirblur:0.4 -c:a copy blurred.mp4
The -vf frei0r=iirblur:0.4 option tells ffmpeg to blur using a 40% factor, while the -c:a copy option tells it to keep the audio track as it is.
Another possible optimization: the audio track. If your video is not meant to play sound, why keep this track? Don’t hesitate to remove it:
ffmpeg -i origin.mp4 -an -vcodec copy muted.mp4
The -an option tells ffmpeg to disable the audio track, while the -vcodec copy option tells it to keep the video track as it is. Don’t forget to also make the absence of sound explicit to the browser by adding the muted attribute to the <video> element.
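In the markup, this could look like the following sketch; the src file name and the autoplay/loop attributes are assumptions for a typical decorative background video:

```html
<!-- muted tells the browser upfront that no audio will play;
     playsinline avoids forced fullscreen playback on iOS -->
<video muted autoplay loop playsinline src="muted.mp4"></video>
```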
Even if your video is lightweight, your work is not finished. You need to focus on the video purpose, which is quite often to be streamed.
Start playback before downloading the entire content
Here’s how streaming works: when a video playback is requested, the browser will fetch the file to find the video metadata. MP4 video metadata includes such things as display characteristics, time scale and duration. Without this information, the browser cannot start playback!
If your server is configured to accept Byte Serving, which means it has included an Accept-Ranges header in its initial HTTP response, the browser will fetch the file piece by piece through several partial content requests (HTTP Code 206) to the resource. As soon as it finds the video metadata, it will be able to start playback while downloading the complete file.
If your server does not support Range Requests, the browser has no choice but to download the entire file. If your video is set to autoplay, the bandwidth available to download the other resources needed to render the page will be reduced, increasing display time and degrading the user experience.
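You can quickly check whether your server advertises Byte Serving; the URL below is a placeholder:

```shell
# A server that supports Byte Serving includes "Accept-Ranges: bytes"
# in its response headers.
curl -sI https://example.com/video.mp4 | grep -i "accept-ranges"
# Asking for a single byte should return status 206 (Partial Content).
curl -s -o /dev/null -w "%{http_code}\n" -H "Range: bytes=0-0" https://example.com/video.mp4
```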
Where is my video metadata stored?
An MP4 file breaks down into several units of data called atoms. The metadata is contained in the movie atom, also called the moov atom. Software such as MP4creator or AtomicParsley can help you visualize the atoms of an MP4 file.
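For instance, AtomicParsley can print a file’s atom tree, and ffprobe’s trace output mentions each top-level atom as it is parsed; the file name is a placeholder, and the grep pattern assumes ffprobe’s trace log format:

```shell
# Print the atom tree; check whether "moov" appears before "mdat".
AtomicParsley video.mp4 -T
# Alternative with ffprobe: trace-level logs name each atom as it is read.
ffprobe -v trace -i video.mp4 2>&1 | grep -e "type:'moov'" -e "type:'mdat'"
```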
There are several methods to move the moov atom to the first position. Software such as Handbrake offers a Web Optimized option. In other software, this option is called MP4 “Fast Start”.
ffmpeg can quickly correct a video through the option -movflags faststart, that runs a second pass moving the moov atom to the beginning of the file (see the documentation):
ffmpeg -i origin.mp4 -acodec copy -vcodec copy -movflags faststart fast_start.mp4
If you want to learn more about the movie atom, don’t miss “Understanding the MPEG-4 Movie Atom” by Maxim Levkov.
Multiple sources for targeted performance
Although H.264 is the most widely used and supported codec, it is not necessarily the most effective in every case. We have already seen that the <picture> element accepts several sources, allowing the browser to fetch WebP images for Chrome users. The <video> element can also accept multiple sources, and the WebP equivalent for videos is WebM.
ffmpeg can encode WebM files, provided it is built with the --with-libvpx option. Here is an example of a two-pass encoding with a 1Mb/s target bitrate, using the VP9 video encoder (on Windows, replace /dev/null with NUL):
ffmpeg -i source.mp4 -c:v libvpx-vp9 -b:v 1M -pass 1 -f webm /dev/null && ffmpeg -i source.mp4 -c:v libvpx-vp9 -b:v 1M -pass 2 output.webm
From the optimized 1.2MB video of the Endeavour shuttle presented at the beginning of the article, this command generates a WebM file of 715KB, i.e. roughly a 40% weight cut. Too bad WebM is not more widely supported.
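With both encodings at hand, the markup could look like this sketch (file names are placeholders); the browser plays the first source it supports:

```html
<video controls>
  <!-- Browsers with WebM support pick the lighter file -->
  <source src="endeavour.webm" type="video/webm">
  <!-- Everyone else falls back to MP4/H.264 -->
  <source src="endeavour.mp4" type="video/mp4">
</video>
```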
Last pieces of advice
Be careful with autoplay. Not only is this practice perceived negatively by many users when it is not used properly (that is, in a subtle and unobtrusive way, behind a hero background for example), but video playback will always consume some bandwidth, slowing down the download of other resources.
Sometimes, the best video is no video. Do not hesitate to hide the video in some situations, especially if it is decorative. A well-placed CSS media query and you’ll save your mobile users from a bad experience. Also, consider supporting the Save-Data Client Hint as it is an explicit browser opt-in into a reduced data usage mode.
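A minimal sketch of such a media query, assuming a hypothetical .hero-video class on the decorative video:

```css
/* Hide the decorative background video on small viewports:
   mobile users get the poster image or background color instead. */
@media (max-width: 768px) {
  .hero-video {
    display: none;
  }
}
```

Note that display: none alone may not prevent the download in every browser; removing the <video> element or its src attribute with a script is more reliable.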
- Reduce the weight of your files by using optimization strategies that depend on the content and purpose of your videos.
- Optimize streaming by encoding your videos to serve metadata as soon as possible.
- Propose alternatives to MP4, such as WebM.
- Be careful with autoplay, consider dedicated solutions for Full HD and don’t hesitate to hide videos when needed.