x265 turns 5!

As it turns out, the x265 project turned 5 a couple of months back; our first commits from the HM encoder date back to March 2013. And being the geeks that we are, we didn’t even realize it!

Many thanks to all those who have enabled x265 to accomplish all that it has over 5 years! We continue to innovate inside x265 to improve both quality and performance, and look forward to celebrating our 10th anniversary with you.

Cheers,

Team-x265.

Thanks for a great NAB 2018

By Pradeep Ramachandran

Now that the dust has settled, it is time to thank all the contributors for enabling a great showing by x265 at NAB 2018. We showed off our new ML-accelerated content-adaptive encoding for ABR, AVX-512 acceleration, and the recently added support for HDR10+/HLG at NAB. We received great feedback on what people would like to see in the coming releases, and will be working hard to continue to innovate in that space. We’ve also formed a committee to guide the future development of x265 and other open-source media codecs; we will blog about it more in the coming weeks, and you can read this article for an initial idea of what it is about.

Until our next showing, don’t hesitate to reach out to us on the developer mailing list, doom9, or Facebook to talk.

And finally, AVX-512 in x265!

By Pradeep Ramachandran

Finally, the acceleration that we’ve all been waiting for is here! We’ve been working extensively with Intel for the last few months to use Intel Advanced Vector Extensions 512 (AVX-512) to accelerate x265. After much effort, we’re delighted to share that we’ve been able to accelerate 4K HDR encoding in the main10 profile by over 15% for high-quality offline encoding. Check out this white paper on the Intel site for more information.

The patches will be pushed to the default branch soon. Let us know the results of your tests – you know where to find us!
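
Once the patches land, we expect AVX-512 to be an explicit opt-in through x265’s existing asm/cpuid override rather than on by default. Here is a minimal sketch of that opt-in via the C API; the "avx512" token and the off-by-default behavior are assumptions based on how later releases document it, so verify against the build you actually test.

```c
/* Minimal sketch (not the official patch): opt in to the AVX-512 kernels
 * through the existing asm/cpuid override.  The "avx512" token is an
 * assumption; check the documentation of the release you build. */
#include <stdio.h>
#include <x265.h>

int main(void)
{
    x265_param *param = x265_param_alloc();
    if (!param)
        return 1;

    /* A high-quality offline starting point, as in the white-paper scenario */
    x265_param_default_preset(param, "slower", NULL);

    /* Hypothetical opt-in: AVX-512 is expected to stay off by default */
    if (x265_param_parse(param, "asm", "avx512") < 0)
        fprintf(stderr, "this build does not accept 'avx512'\n");

    x265_param_free(param);
    return 0;
}
```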

Excited about AV1 closing doors, but…

By Pradeep Ramachandran

After what seems to have been a long delay, AV1 finally froze its bitstream last week! Like many folks in the industry, we have been waiting for this moment for a long time to see what a truly ‘royalty-free’ codec can bring in terms of tools to the encoding space.

A few months ago, we stopped looking further into how AV1’s tools compare to those of HEVC because of this peer-reviewed paper published in a leading journal by leading researchers in the field of video coding. That paper reported that the HM encoder provides average bitrate savings of 30% relative to AV1, with an encoding speed 25x that of the AV1 encoder; the informed would know that x265’s veryslow preset is very comparable in encoder efficiency to the HM encoder. While optimizations could bridge the gap in speed, bridging the gap in encoder efficiency would be hard unless some fundamental improvements (that are not covered by the HEVC patents, mind you) are made. Now that the bitstream is frozen, we will be digging in to see what new tools have been brought to the table that were not included in this comparison, in the hope of answering the question “Is AV1 fundamentally better than HEVC as a standard?”. Stay tuned here to hear more, or share your thoughts on doom9 or the x265-devel mailing list.

And of course, there is the issue of patent royalties and licensing, but we will leave that up to the lawyers to deliberate and decide; let’s talk more tech here!

NAB 2018, of course we will be there!

By Pradeep Ramachandran

MulticoreWare, the developers of x265, will be available to discuss all things media at NAB 2018 in Las Vegas from April 9th to 12th, at booth SU-14708. Swing by to talk about the soon-to-be-published AVX-512 acceleration, content-adaptive optimizations for ABR encoding with x265, or just to grab a selfie with the creators of the world’s most popular HEVC encoder.

See you in Vegas!

PS: Make sure to mention this blog post if you stop by to stand a chance to win some open-source memorabilia!

x265 Incorrectly Represented in MSU’s 2017 Codec Comparison I

By Pradeep Ramachandran

MSU recently released their 2017 codec comparison, in which they compared x265 against other HEVC encoders, VP9 encoders, and the AV1 encoder. While the effort that goes into such large-scale tests is appreciated, we, as the x265 developers, have to respectfully disagree with the conclusions drawn in the MSU report, as we believe it is incomplete.

Note that these tests use v1.9 of x265, which is over 20 months old! Since then, x265 has had 7 releases, with an 8th imminent. As anyone would expect, x265 has made considerable progress in speed and quality during this time. Specifically, we’ve made big changes to the lambda tables that considerably improved visual quality, as reported by both consumers and customers.

That said, it is possible that AV1 is a better codec than HEVC (at least in quality); maybe VP9 is too. Maybe they have tools that are competent and can challenge HEVC. However, for the reasons above, these results do not conclusively prove so, in our opinion!

Now, as to why MSU used a 20-month-old encoder: there is some history there about the x265 team’s reservations regarding the validity of MSU’s past tests, as only objective scores were looked at. Quoting my dear friend, ‘we look at video and not at graphs!’. It is encouraging to see the MSU tests taking a turn towards subjective testing, which, in our opinion, is the right direction. Hopefully we will work with their newly mended ways in the future for a fairer and more realistic evaluation!

x265 version 2.6 released

By Pradeep Ramachandran

New features
1) x265 can now refine analysis from a previous HEVC encode (using the options --refine-inter and --refine-intra), or a previous AVC encode (using the option --refine-mv-type). The previous encode’s information can be packaged using the x265_analysis_data_t data field available in the x265_picture object.
2) Basic support for segmented (or chunked) encoding added with --vbv-end, which can specify the status of the CPB at the end of a segment. String this together with --vbv-init to encode a title as chunks while maintaining VBV compliance (see the sketch after this list)!
3) --force-flush can be used to trigger a premature flush of the encoder. This option is beneficial when the input is known to be bursty and may arrive at a rate slower than the encoder can consume.
4) Experimental feature --lowpass-dct that uses truncated DCT for transformation.
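
To make feature 2 concrete, here is a minimal sketch (not taken from the x265 source or docs) of how chunked encoding might be wired up through the C API: each chunk’s --vbv-end fullness matches the next chunk’s --vbv-init, so the concatenated title stays VBV compliant. The fractional values and the helper name are illustrative assumptions; check the 2.6 documentation for the exact semantics of these options.

```c
/* Hypothetical helper: open one encoder per chunk so that the CPB fullness
 * at the end of chunk N (vbv-end) matches the declared starting fullness of
 * chunk N+1 (vbv-init).  Values are illustrative, not recommendations. */
#include <stdio.h>
#include <x265.h>

static x265_encoder *open_chunk_encoder(double init_fullness, double end_fullness)
{
    char buf[16];
    x265_param *param = x265_param_alloc();
    if (!param)
        return NULL;

    x265_param_default_preset(param, "slow", NULL);

    /* Typical VBV-constrained ABR setup */
    x265_param_parse(param, "bitrate",     "7000");
    x265_param_parse(param, "vbv-maxrate", "7000");
    x265_param_parse(param, "vbv-bufsize", "14000");

    /* Where this chunk starts, and where it must end up */
    snprintf(buf, sizeof(buf), "%.2f", init_fullness);
    x265_param_parse(param, "vbv-init", buf);
    snprintf(buf, sizeof(buf), "%.2f", end_fullness);
    x265_param_parse(param, "vbv-end", buf);

    x265_encoder *enc = x265_encoder_open(param);
    x265_param_free(param);   /* the encoder keeps its own copy */
    return enc;
}
```

The chunks themselves would be stitched back together by the packaging step; the encoder only guarantees that the buffer model is respected at each boundary.
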
Encoder enhancements
1) Slice-parallel mode gets a significant boost in performance, particularly in low-latency mode.
2) x265 now officially supported on VS2017.
3) x265 now supports all depths from mono0 to mono16 for Y4M format.
API changes
1) Options that modified the PPS dynamically (--opt-qp-pps and --opt-ref-list-length-pps) are now disabled by default to enable users to save bits by not sending headers. If these options are enabled, headers have to be repeated for every GOP.
2) Rate-control and analysis parameters can now be reconfigured dynamically and simultaneously via the x265_encoder_reconfig API (see the sketch after this list).
3) New API functions to extract intermediate information such as slice-type, scenecut information, reference frames, etc. are now available. This information may be beneficial to integrating applications that are attempting to perform content-adaptive encoding. Refer to the documentation on x265_get_slicetype_poc_and_scenecut and x265_get_ref_frame_list for more details and suggested usage.
4) A new API to pass supplemental CTU information to x265 to influence analysis decisions has been added. Refer to the documentation on x265_encoder_ctu_info for more details.
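
As a rough illustration of item 2 (our assumed usage pattern, not sample code from the x265 documentation), a running encoder’s rate-control target could be lowered mid-stream roughly like this:

```c
/* Minimal sketch: copy the active settings, adjust the rate-control target,
 * and push the change with x265_encoder_reconfig().  Which fields may change
 * mid-stream is documented per parameter; this only illustrates the call. */
#include <stdio.h>
#include <x265.h>

static int drop_bitrate(x265_encoder *enc, int new_kbps)
{
    x265_param *updated = x265_param_alloc();
    if (!updated)
        return -1;

    x265_encoder_parameters(enc, updated);   /* copy the active settings */
    updated->rc.bitrate = new_kbps;          /* new ABR target, kbps */
    updated->rc.vbvMaxBitrate = new_kbps;    /* keep VBV consistent */

    int ret = x265_encoder_reconfig(enc, updated);
    if (ret)
        fprintf(stderr, "reconfig was rejected (%d)\n", ret);

    x265_param_free(updated);
    return ret;
}
```

Analysis-related fields could be adjusted in the same call, which is what “simultaneously” refers to above.
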
Bug fixes
1) Bug fixes when --slices is used with VBV settings.
2) Minor memory leaks fixed in HDR10+ builds, and in default x265 when the pools option is specified.
3) HDR10+ bug fix to remove dependence on the POC counter to select metadata information.

Beamr compares their HEVC encoder to x265

By Tom Vaughan

Recently, a competitor (Beamr) published a blog post comparing their HEVC encoder to x265.  They claimed that their HEVC encoder is faster, AND it produces better video quality.

Of course, we’re used to people comparing other HEVC encoders to x265.

  • x265 is available under an open source license, and therefore it is widely available for any competitor to use in such comparison tests.
  • x265 is by far the most widely known and widely used HEVC encoder, and so other companies want to build their brand by comparing to x265.
  • But most importantly, they know that x265 is the “gold standard” of HEVC encoders.  It’s the benchmark that all others must try to beat.

So, did Beamr’s encoder really beat x265?  Or are their claims just a bunch of marketing bluster, that isn’t backed up by the facts?

This latest test was conducted in a way that mostly followed the encoder comparison guidelines we recently published.  But there were many flaws.

  • The bit rates produced were not identical.  On average the Beamr encodes had 6% higher bit rates.  In one case, the Beamr encode had a bit rate that was 18% higher.  This was due to the fact that the test video sequences were very short, and one-pass encoding was used (giving the encoder less time to “dial in” to the target bit rate).  This problem could have been avoided by using 2-pass encoding, which can achieve a more accurate bit rate while leveling quality across the encode (see the sketch after this list).
  • All of the test parameters were hand-picked by Beamr: the test content, the test hardware, and the settings used, including the bit rate, the number of hardware threads, and the x265 presets.  This is known as “cherry picking” (the fallacy of incomplete evidence).  We wonder how many tests were run under other conditions in order to determine which tests showed the Beamr encoder in the most favorable light.
    • Beamr chose to use x265’s ultrafast, medium and veryslow presets.  They claimed a big speed advantage against x265’s veryslow preset.  Of course, as the name implies, we designed our veryslow preset to be very slow.  It is focused on achieving the highest possible quality, and no compromises are made that would improve performance.  The next preset, slower, is twice as fast, but has nearly identical encoding efficiency.  Why didn’t Beamr compare their encoder to x265 with the slower preset?  [Because they would have lost on a speed comparison, as well as on quality.]
    • Beamr only chose video sequences with relatively low motion and detail.  Clips like Bar Scene, Dinner Scene, and Wind And Nature are exceptionally easy to encode.  Why no sports, or other high detail + high motion content?  Perhaps it’s because Beamr relies on using Tiles, which cannot use inter-prediction across tile boundaries.  x265 uses Wavefront Parallel Processing, which is more efficient.
    • Beamr chose only 4K content, at 7 Mbps bit rates.  Why didn’t they compare the 2 encoders for 1080P, 720P and lower picture sizes?  Why not compare the encoders at a wider range of bit rates?
    • Beamr chose a 4-year-old hardware architecture (Xeon E5 v2, code-named “Ivy Bridge”) to run their comparison test on.  These processors don’t support AVX2 instructions, which newer Xeon generations (Haswell, Broadwell, Purley) do.  We wonder how the Beamr encoder compares to x265 on modern machines.
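
On the first point above, a 2-pass run with x265 is straightforward; here is a minimal sketch via the C API (the standard --pass/--stats pair; the helper and the values are illustrative, not prescribed settings):

```c
/* Illustrative two-pass setup: the same source is encoded twice with the
 * same target, the first pass writing statistics and the second pass
 * reading them, which lets rate control hit the target bit rate much more
 * accurately than a single pass on short content. */
#include <x265.h>

static x265_param *make_pass_param(int pass)   /* pass = 1 or 2 */
{
    x265_param *param = x265_param_alloc();
    if (!param)
        return NULL;

    x265_param_default_preset(param, "slow", NULL);
    x265_param_parse(param, "bitrate", "7000");            /* target, kbps */
    x265_param_parse(param, "stats",   "x265_2pass.log");  /* shared stats file */
    x265_param_parse(param, "pass",    pass == 1 ? "1" : "2");
    return param;
}
```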

Beamr claims a speed advantage over x265 under these carefully selected conditions.  Most commercial companies use a preset like “slow” or “slower”, but rarely use our veryslow preset for their high quality offline encoding.  For real-time encoding, we have a more advanced encoding library called UHDkit that can run multiple encoding instances in parallel to achieve high performance on a many-core server, allowing for higher quality settings than x265’s ultrafast preset.  On modern Haswell, Broadwell or Purley generation Xeon powered servers, customers who have tried competing HEVC encoders tell us that UHDkit outperforms the competition under a range of scenarios.

Beamr made the bitstreams available, so that anyone could compare the quality of the video produced.  You should download the videos and compare for yourself.   Beamr claimed that the visual quality of their encodes was clearly superior, but we are confused by this claim. Perhaps their definition of higher visual quality is different from ours.  Perhaps by “higher quality”, they mean “softer, with less detail.”  If you prefer video with more detail and accuracy, x265 is the clear winner.

To compare video, you can’t look at still images – you need to actually watch the video at full speed.  We have a special video comparison tool (UHDcode Pro Player) that we make available to customers and partners which can play 2 streams simultaneously, letting you hide or reveal more or less of each stream.  This makes it easy to see which encode is better.  But the screen shots below are fairly representative of the difference in detail you will see in the competing encodes.  Take a look at the texture on all of the surfaces (the road, the buildings, the water).  Take a look at the sharpness of the detail on every object.  Everyone we’ve talked to so far agrees that x265 produced the better video.  That matches the feedback we get from our customers and prospective customers that have compared x265 to Beamr, and all of our other competitors.  We consistently win these customer shoot-outs, based on the quality and performance of x265 and our premium encoding library, UHDkit.

On the Beamr blog, they’ve posted some screen shots, which you can download to see for yourself how much more detail the x265 encodes have.  Here are some additional examples…

x265: Ritual Dance 4K, medium, frame 412

Beamr: Ritual Dance 4K, medium, frame 412

x265: Pier Seaside 4K, medium, frame 337

Beamr: Pier Seaside 4K, medium, frame 337

x265: Driving 4K, medium, frame 486

Beamr: Driving 4K, medium, frame 486

x265: Aerial 4K, veryslow, frame 568

Beamr: Aerial 4K, veryslow, frame 568


How to compare video encoders

By Tom Vaughan

Whether you want to compare two encoders, or compare different settings for the same encoder, it’s important to understand how to set up and run a valid test.  These guidelines are designed to allow anyone to conduct a good test, with useful results.  If you publish the results of an encoder comparison and you violate these rules, you shouldn’t be surprised when video professionals point out the flaws in your test design.

  1. You must use your eyes. Comparing video encoders without visually comparing the video quality is like comparing wines without tasting them.  While it’s tempting to use mathematical quality metrics, like peak signal to noise ratio (PSNR) or Structural Similarity (SSIM), these metrics don’t accurately measure what you are really trying to judge: subjective visual quality.  Only real people can judge whether test sample A looks better than test sample B, or whether two samples are visually identical.  Video encoders can be optimized to produce the highest possible PSNR or SSIM scores, but then they won’t produce the highest possible visual quality at a given bit rate.  If you publish PSNR and SSIM values, but you don’t make the encoded video available for others to compare visually, you’re not conducting a valid test at all.
    Note:  If you’re running a test with x264 or x265, and you wish to publish PSNR or SSIM scores (hopefully in addition to, and not instead of, conducting subjective visual quality tests), you MUST use --tune PSNR or --tune SSIM, or your results will be completely invalid (a configuration sketch appears after these guidelines).  Even with these options, PSNR and SSIM scores are not a good way to compare encoders.  x264 and x265 were not optimized to produce the best PSNR and SSIM scores.  They include a number of algorithms that are proven to improve subjective visual quality, while at the same time reducing PSNR and SSIM scores.  Only subjective visual quality, as judged by real humans, matters.
    Of course, subjective visual quality testing is very time-consuming.  But it’s the only valid method for true comparison tests. If, however, you have a very large quantity of video content, and you need to compare the quality of content A against content B, or set up an early warning indicator in an automated quality control system, objective metrics are useful.  Netflix has done some valuable work in this area, and we would recommend their VMAF (Video Multimethod Assessment Fusion) metric as the best available today.  At best, objective metric scores should be considered only a rough indication of visual quality.
  2. Video must be evaluated as video, not still frames. It’s relatively easy to compare the visual quality of two decoded frames, but that’s not a valid comparison.  Video encoders are designed to encode moving images.  Things that are obvious when you are examining a single frame may be completely invisible to any observer when viewed as part of a sequence of images at 24 frames per second or faster.  Similarly, you’ll never spot motion inaccuracy or other temporal issues such as pulsing grain or textures if you’re only comparing still frames.
  3. Use only the highest quality source sequences. A source “sequence” is a video file (a sequence of still pictures) that will serve as the input to your test.  It’s important for your source video files to be “camera quality”.  You can’t use video that has already been compressed by a camcorder, or video that was uploaded to a video sharing site like YouTube or Vimeo that compresses the video to consumer bit rates so that it can be streamed efficiently from those websites.  Important high frequency spatial details will be lost, and the motion and position of objects in the video will be inaccurate if the video was already compressed by another encoder.
    In the early days of digital video, film cameras were able to capture higher quality images than video cameras, and so the highest quality source sequences were originally shot with film movie cameras, and then scanned one frame at a time.  Today, high quality digital video cameras are able to capture video images that rival the highest quality film images.  Modern professional digital video cameras can either record uncompressed (RAW) or very lightly compressed high bit rate (Redcode, CinemaDNG, etc.) video, or transfer uncompressed video via HDMI or SDI to an external recording device (Atomos Shogun, Blackmagic Video Assist), which can store the video in a format that utilizes very little video compression (ProRes HQ, DNxHD, DNxHR).  Never use the compressed video from a consumer video camera (GoPro, Nikon, Canon, Sony, Panasonic, etc.).  The quality of the embedded video encoder chips in consumer video cameras, mobile devices and DSLRs is not good enough.  To test video encoders, you need video that does not already include any video compression artifacts.
  4. Use a variety of source sequences. You should include source video that is a representative sample of all of the scenarios that you are targeting.  This may include different picture sizes (4K, 1080P, 720P, etc), different frame rates, and different content types (fixed camera / talking heads, moving camera, sports/action, animation or computer generated video), and different levels of complexity (the combination of motion and detail).
  5. Reproducibility matters.  Ideally, you should choose source sequences that are available to others so that they can replicate your test, and reproduce and validate your results.  A great source of test sequences can be found at https://media.xiph.org/video and https://media.xiph.org/video/derf/.  Otherwise, if you are conducting a test that you will publish and you have your own high quality source sequences, you should make them available for others to replicate your test.
  6. Speed matters. Video encoders try many ways to encode each block of video, choosing the best one.  They can be configured to go faster, but this will always have a trade-off in encoding efficiency (quality at a given bit rate).  Typically, encoders provide presets that enable you to choose a point along the speed vs. efficiency tradeoff function (x264 and x265 provide ten performance presets, ranging from --preset ultrafast to --preset placebo).  It’s not a valid test to compare two encoders unless they are configured to run at a similar speed (the frames per second that they encode) on identical hardware systems.  If encoder A requires a supercomputer to compare favorably with encoder X running on a standard PC or server, or if both encoders are not tested with similar configurations (fast, medium, or slow/high quality), the result is not valid.
  7. Eliminate confounding factors.  When comparing encoding performance (speed), it’s crucial to eliminate other factors, such as decoding, multiplexing and disk I/O. Encoders only take uncompressed video frames as input, so you can decode a high quality source sequence to raw YUV, storing it on a fast storage system such as an SSD or array of SSDs so that I/O bandwidth will be adequate to avoid any bottlenecks.
  8. Bit Rate matters. If encoders are run at high bit rates, the quality may be “visually lossless”.  In other words, an average person will not be able to see any quality degradation between the source video and the encoded test video.  Of course, it isn’t possible to determine which encoder or which settings are best if both test samples are visually lossless, and therefore visually identical.  The bit rates (or quality level, for constant quality encoding) you choose for your tests should be in a reasonable range.  This will vary with the complexity of the content, but for typical 1080P30 content, for HEVC encoder testing, you should test at bit rates ranging roughly from 400 kbps to 3 Mbps, and for 4K30 you should cover a range of roughly 500 kbps to 15 Mbps.  It will be easiest to see the differences at low bit rates, but a valid test will cover the full range of quality levels applicable to the conditions you expect the encoder to be used for.
  9. Rate Control matters.  Depending on how the video will be delivered, and the devices that will be used to decode and display it, the bit rate may need to be carefully controlled in order to avoid problems.  For example, a satellite transmission channel has a fixed bandwidth, and if the video bit rate exceeds this channel bandwidth, the full video stream will not be able to be transmitted through the channel, and the video will be corrupted in some way.  Similarly, most video is decoded by hardware video decoders built into the device (TV, PC, mobile device, etc.), and these decoders have a fixed amount of memory to hold the incoming compressed video stream, and to hold the decoded frames as they are reordered for display.  Encoding a video file to an overall average target bit rate is relatively easy.  Maintaining limits on bit rate throughout the video, so as not to overfill a transmission channel or overflow a video decoder memory buffer, is critical for professional applications (the sketch after these guidelines shows a VBV-constrained configuration).
  10. Encoder Settings matter. There are many, many settings available in a good video encoder like x265.  We have done many experiments to determine the optimal combinations of settings that trade off encoder speed for encoding efficiency, and have captured them in our ten performance presets, which make it easy to run valid encoder comparison tests.  If you are comparing x265 with another encoder, and you believe you have the need to modify default settings, contact us to discuss your test parameters, and we’ll give you the guidance you need.
  11. Show your work. Before you believe any published test or claim (especially from one of our competitors), ask for all of the information and materials needed to reproduce those results.  It’s easy to make unsubstantiated claims, and it’s easy for companies to run hundreds of tests, cherry-picking the tests that show their product in the most favorable light.  Unless you are given access to the source video, the encoded bitstreams, the settings, the system configuration, and you are able to reproduce the results independently with your own test video sequences under conditions that meet your requirements, don’t believe everything you read.
  12. Speak for yourself.  Don’t claim to be an expert in the design and operation of a particular video encoder if you are not.  Recognize that your experience with each encoder is limited to the types of video you work with, while encoders are generally designed to cover a very wide range of uses, from the highest quality archiving of 8K masters or medical images, to extremely low bit rate transmission of video through wireless connections.  If you want to know what an encoder can or can’t do, or how to optimize it for a particular scenario, you should ask the developers of that encoder.
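
Putting guidelines 1 and 9 together, here is a minimal sketch (our illustration, not prescribed settings) of a comparison-friendly x265 configuration: a preset chosen so both encoders run at a similar speed, --tune psnr applied only because PSNR will be reported, and a VBV-constrained target within the suggested 1080P30 range.

```c
/* Illustrative comparison setup: preset matched to the competing encoder's
 * speed, tune applied only when publishing PSNR, and a VBV-constrained ABR
 * operating point so rate control behaves like a real delivery scenario. */
#include <x265.h>

static x265_param *comparison_param(void)
{
    x265_param *param = x265_param_alloc();
    if (!param)
        return NULL;

    /* Omit the "psnr" tune for purely subjective viewing tests */
    x265_param_default_preset(param, "slower", "psnr");

    /* A mid-range 1080P30 operating point from the 400 kbps - 3 Mbps span */
    x265_param_parse(param, "bitrate",     "1500");
    x265_param_parse(param, "vbv-maxrate", "1500");
    x265_param_parse(param, "vbv-bufsize", "3000");

    return param;
}
```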

Hardware vs. Software encoders. 
It is a bit silly to compare hardware encoders to software encoders. While it’s interesting to know how a hardware encoder compares to a software encoder at any given point in time on a given hardware configuration, there are vast differences between the two types of encoders.  Each type has distinct advantages and disadvantages.  Hardware encoders are not cross-platform; they are either built into or added on to the platform. Hardware encoders are typically designed to run in real time, and with lower power consumption than software encoders, but for the highest quality video encoding, hardware encoders can NEVER beat software encoders, because their algorithms are fixed (designed into the hardware), while software encoders are infinitely flexible, configurable and upgradeable.  There are many situations where only a hardware encoder makes sense, such as in a video camera or cell phone.  There are also many situations where only a software encoder makes sense, such as high quality video encoding in the cloud, on virtual machines.

x265 Receives Significant Boost from Intel Xeon Scalable Processor Family

By Pradeep Ramachandran

Today Intel launched the next generation of Xeon processors, the Intel Xeon Scalable Processor Family (code-named “Purley”), based on the Skylake CPU architecture.  The Intel Xeon Scalable Processor Family is a powerful new generation of 14nm chips which provide significant improvements over the previous generation of Xeon processors (Xeon E5 v4 and E7 v4, code named “Broadwell”), including many fundamental  CPU architectural improvements, a much faster internal data transfer architecture (a mesh architecture with 2x the bandwidth instead of a ring architecture), AVX-512 vector processing, improved cache, and improved I/O architecture with six DDR4 memory channels and 48 PCIe lanes.

With x265 pushing the previous generation of processors to the edge in memory bandwidth and threading, the benefits that these new Xeons provide for x265 users will be game changing. Our initial results with the latest build of x265 show a 67% average per-core gain for encoding with the HEVC Main profile, and a 50% average gain with the Main10 profile across different presets. In particular, offline encoding of 4K content is seeing tremendous benefits due to the higher memory bandwidth that the CPUs are able to utilize from cache and system memory. Intel’s Xeon Scalable Processor Family makes x265 and UHDkit the ideal option for a wider range of scenarios, including both live and offline HEVC encoding, and they double the performance/cost you’ll get with our software-based encoding libraries.  We’re also seeing significant performance improvements with x264 – roughly 40% higher performance per core on average.

As we enhance x265 to take advantage of the new technologies that these new processors bring, including AVX-512, we expect that users of x265 will love the benefits that they see with these new Xeons.  This even extends to the Core i9 (Skylake-X) consumer processor family, which is based on the same Purley architecture.  Give them a spin, and let us know what you think!