Beamr compares their HEVC encoder to x265

Recently, a competitor (Beamr) published a blog post comparing their HEVC encoder to x265.  They claimed that their HEVC encoder is faster, AND it produces better video quality.

Of course, we’re used to people comparing other HEVC encoders to x265.

  • x265 is available under an open source license, and therefore it is widely available for any competitor to use in such comparison tests.
  • x265 is by far the most widely known and widely used HEVC encoder, and so other companies want to build their brand by comparing to x265.
  • But most importantly, they know that x265 is the “gold standard” of HEVC encoders.  It’s the benchmark that all others must try to beat.

So, did Beamr’s encoder really beat x265?  Or are their claims just marketing bluster that isn’t backed up by the facts?

This latest test was conducted in a way that mostly followed the encoder comparison guidelines we recently published.  But there were many flaws.

  • The bit rates produced were not identical.  On average the Beamr encodes had 6% higher bit rates, and in one case the Beamr encode’s bit rate was 18% higher.  This was because the test video sequences were very short and one-pass encoding was used (giving the encoder less time to “dial in” to the target bit rate).  This problem could have been avoided by using two-pass encoding, which achieves a more accurate bit rate while leveling quality across the encode.
  • All of the test parameters were hand-picked by Beamr: the test content, the test hardware, and the settings used, including the bit rate, the number of hardware threads, and the x265 presets.  This is known as “cherry picking” (the fallacy of incomplete evidence).  We wonder how many tests were run under other conditions, in order to find the tests that showed the Beamr encoder in the most favorable light.
    • Beamr chose to use x265’s ultrafast, medium and veryslow presets.  They claimed a big speed advantage against x265’s veryslow preset.  Of course, as the name implies, we designed our veryslow preset to be very slow.  It is focused on achieving the highest possible quality, and no compromises are made that would improve performance.  The next preset, slower, is twice as fast, but has nearly identical encoding efficiency.  Why didn’t Beamr compare their encoder to x265 with the slower preset?  [Because they would have lost on a speed comparison, as well as on quality.]
    • Beamr only chose video sequences with relatively low motion and detail.  Clips like Bar Scene, Dinner Scene, and Wind And Nature are exceptionally easy to encode.  Why no sports, or other high detail + high motion content?  Perhaps it’s because Beamr relies on using Tiles, which cannot use inter-prediction across tile boundaries.  x265 uses Wavefront Parallel Processing, which is more efficient.
    • Beamr chose only 4K content, at 7 Mbps bit rates.  Why didn’t they compare the two encoders at 1080P, 720P and lower picture sizes?  Why not compare the encoders at a wider range of bit rates?
    • Beamr chose a four-year-old hardware architecture (Xeon E5 v2, code named “Ivy Bridge”) to run their comparison test on.  These processors don’t support AVX2 instructions, which newer Xeon generations (Haswell, Broadwell, Purley) do.  We wonder how the Beamr encoder compares to x265 on modern machines.
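To make the two-pass point above concrete, here is a minimal sketch of how a two-pass x265 run could be assembled to hit a target bit rate accurately.  The `x265` binary name, the file names, and the 7 Mbps target are illustrative assumptions, not the exact test setup.

```python
# Sketch: two-pass x265 encoding to land close to a target bit rate.
# Binary name, input file and target are assumptions for illustration.
target_kbps = 7000                # e.g. a 7 Mbps 4K operating point
source = "source_4k.y4m"          # hypothetical camera-quality input
common = ["x265", "--preset", "slower", "--bitrate", str(target_kbps)]

# Pass 1 analyzes the whole clip and writes a stats file; pass 2 uses
# those stats to distribute bits, landing much closer to the target
# than a single pass can on a short sequence.
pass1 = common + ["--pass", "1", source, "-o", "/dev/null"]
pass2 = common + ["--pass", "2", source, "-o", "out.hevc"]

for cmd in (pass1, pass2):
    print(" ".join(cmd))
```

Run this way, both encoders would have been measured at essentially identical bit rates, removing the 6–18% overshoot from the comparison.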

Beamr claims a speed advantage over x265 under these carefully selected conditions.  Most commercial companies use a preset like “slow” or “slower”, but rarely use our veryslow preset for their high quality offline encoding.  For real-time encoding, we have a more advanced encoding library called UHDkit that can run multiple encoding instances in parallel to achieve high performance on a many-core server, allowing for higher quality settings than x265’s ultrafast preset.  On modern Haswell, Broadwell or Purley generation Xeon powered servers, customers who have tried competing HEVC encoders tell us that UHDkit outperforms the competition under a range of scenarios.

Beamr made the bitstreams available, so that anyone could compare the quality of the video produced.  You should download the videos and compare for yourself.   Beamr claimed that the visual quality of their encodes was clearly superior, but we are confused by this claim. Perhaps their definition of higher visual quality is different from ours.  Perhaps by “higher quality”, they mean “softer, with less detail.”  If you prefer video with more detail and accuracy, x265 is the clear winner.

To compare video, you can’t look at still images – you need to actually watch the video at full speed.  We have a special video comparison tool (UHDcode Pro Player) that we make available to customers and partners which can play 2 streams simultaneously, letting you hide or reveal more or less of each stream.  This makes it easy to see which encode is better.  But the screen shots below are fairly representative of the difference in detail you will see in the competing encodes.   Take a look at the texture on all of the surfaces (the road, the buildings, the water).  Take a look at the sharpness of the detail on every object.  Everyone we’ve talked to so far agrees that x265 produced the better video.  That matches the feedback we get from our customers and prospective customers that have compared x265 to Beamr, and all of our other competitors.  We consistently win these customer shoot-outs, based on the quality and performance of x265 and our premium encoding library, UHDkit.

On the Beamr blog, they’ve posted some screen shots, which you can download to see for yourself how much more detail the x265 encodes have.  Here are some additional examples…

x265: Ritual Dance 4K Medium Frame 412

Beamr: Ritual Dance 4K Medium Frame 412

x265: Pier Seaside 4K Medium Frame 337

Beamr: Pier Seaside 4K Medium Frame 337

x265: Driving 4K Medium Frame 486

Beamr: Driving 4K Medium Frame 486

x265: Aerial 4K Veryslow Frame 568

Beamr: Aerial 4K Veryslow Frame 568

How to compare video encoders

Whether you want to compare two encoders, or compare different settings for the same encoder, it’s important to understand how to set up and run a valid test.  These guidelines are designed to allow anyone to conduct a good test, with useful results.  If you publish the results of an encoder comparison and you violate these rules, you shouldn’t be surprised when video professionals point out the flaws in your test design.

  1. You must use your eyes. Comparing video encoders without visually comparing the video quality is like comparing wines without tasting them.  While it’s tempting to use mathematical quality metrics, like peak signal to noise ratio (PSNR) or Structural Similarity (SSIM), these metrics don’t accurately measure what you are really trying to judge: subjective visual quality.  Only real people can judge whether test sample A looks better than test sample B, or whether two samples are visually identical.  Video encoders can be optimized to produce the highest possible PSNR or SSIM scores, but then they won’t produce the highest possible visual quality at a given bit rate.  If you publish PSNR and SSIM values, but you don’t make the encoded video available for others to compare visually, you’re not conducting a valid test at all.
    Note:  If you’re running a test with x264 or x265, and you wish to publish PSNR or SSIM scores (hopefully in addition to, and not instead of, conducting subjective visual quality tests), you MUST use --tune psnr or --tune ssim, or your results will be completely invalid.  Even with these options, PSNR and SSIM scores are not a good way to compare encoders.  x264 and x265 were not optimized to produce the best PSNR and SSIM scores.  They include a number of algorithms that are proven to improve subjective visual quality while at the same time reducing PSNR and SSIM scores.  Only subjective visual quality, as judged by real humans, matters.
    Of course subjective visual quality testing is very time consuming.  But it’s the only valid method for true comparison tests. If, however, you have a very large quantity of video content, and you need to compare the quality of content A against content B, or set up an early warning indicator in an automated quality control system, objective metrics are useful.  Netflix has done some valuable work in this area, and we would recommend their VMAF (Video Multimethod Assessment Fusion) metric as the best available today.  At best, objective metric scores should be considered only a rough indication of visual quality.
  2. Video must be evaluated as video, not still frames. It’s relatively easy to compare the visual quality of two decoded frames, but that’s not a valid comparison.  Video encoders are designed to encode moving images.  Things that are obvious when you are examining a single frame may be completely invisible to any observer when viewed as part of a sequence of images at 24 frames per second or faster.  Similarly, you’ll never spot motion inaccuracy or other temporal issues such as pulsing grain or textures if you’re only comparing still frames.
  3. Use only the highest quality source sequences. A source “sequence” is a video file (a sequence of still pictures) that will serve as the input to your test.  It’s important for your source video files to be “camera quality”.  You can’t use video that has already been compressed by a camcorder, or video that was uploaded to a video sharing site like YouTube or Vimeo that compresses the video to consumer bit rates so that it can be streamed efficiently from those websites.  Important high frequency spatial details will be lost, and the motion and position of objects in the video will be inaccurate if the video was already compressed by another encoder.
    In the early days of digital video, film cameras were able to capture higher quality images than video cameras, and so the highest quality source sequences were originally shot with film movie cameras, and then scanned one frame at a time.  Today, high quality digital video cameras are able to capture video images that rival the highest quality film images.  Modern professional digital video cameras can either record uncompressed (RAW) or very lightly compressed high bit rate (Redcode, CinemaDNG, etc.) video, or transfer uncompressed video via HDMI or SDI to an external recording device (Atomos Shogun, Blackmagic Video Assist), which can store the video in a format that utilizes very little video compression (ProRes HQ, DNxHD, DNxHR).  Never use the compressed video from a consumer video camera (GoPro, Nikon, Canon, Sony, Panasonic, etc.).  The quality of the embedded video encoder chips in consumer video cameras, mobile devices and DSLRs is not good enough.  To test video encoders, you need video that does not already include any video compression artifacts.
  4. Use a variety of source sequences. You should include source video that is a representative sample of all of the scenarios that you are targeting.  This may include different picture sizes (4K, 1080P, 720P, etc), different frame rates, and different content types (fixed camera / talking heads, moving camera, sports/action, animation or computer generated video), and different levels of complexity (the combination of motion and detail).
  5. Reproducibility matters.  Ideally, you should choose source sequences that are available to others, so that they can replicate your test and reproduce and validate your results.  Otherwise, if you are conducting a test that you will publish using your own high quality source sequences, you should make them available for others to replicate your test.
  6. Speed matters. Video encoders try many ways to encode each block of video, choosing the best one.  They can be configured to go faster, but this will always have a trade-off in encoding efficiency (quality at a given bit rate).  Typically, encoders provide presets that enable you to choose a point along the speed vs. efficiency tradeoff function (x264 and x265 provide ten performance presets, ranging from --preset ultrafast to --preset placebo).  It’s not a valid test to compare two encoders unless they are configured to run at a similar speed (the frames per second that they encode) on identical hardware systems.  If encoder A requires a supercomputer to compare favorably with encoder X running on a standard PC or server, or if the two encoders are not tested with similar configurations (fast, medium, or slow/high quality), the result is not valid.
  7. Eliminate confounding factors.  When comparing encoding performance (speed), it’s crucial to eliminate other factors, such as decoding, multiplexing and disk I/O. Encoders only take uncompressed video frames as input, so you can decode a high quality source sequence to raw YUV, storing it on a fast storage system such as an SSD or array of SSDs so that I/O bandwidth will be adequate to avoid any bottlenecks.
  8. Bit Rate matters. If encoders are run at high bit rates, the quality may be “visually lossless”.  In other words, an average person will not be able to see any quality degradation between the source video and the encoded test video.  Of course, it isn’t possible to determine which encoder or which settings are best if both test samples are visually lossless, and therefore, visually identical.  The bit rates (or quality level, for constant quality encoding) you choose for your tests should be in a reasonable range.  This will vary with the complexity of the content, but for typical 1080P30 content, for HEVC encoder testing, you should test at bit rates ranging roughly from 400 kbps to 3 Mbps, and for 4K30 you should cover a range of roughly 500 kbps to 15 Mbps.  It will be easiest to see the differences at low bit rates, but a valid test will cover the full range of quality levels applicable to the conditions you expect the encoder to be used for.
  9. Rate Control matters.  Depending on the method that the video will be delivered, and the devices that will be used to decode and display the video, the bit rate may need to be carefully controlled in order to avoid problems.  For example, a satellite transmission channel will have a fixed bandwidth, and if the video bit rate exceeds this channel bandwidth, the full video stream will not be able to be transmitted through the channel, and the video will be corrupted in some way.  Similarly, most video is decoded by hardware video decoders built into the device (TV, PC, mobile device, etc.), and these decoders have a fixed amount of memory to hold the incoming compressed video stream, and to hold the decoded frames as they are reordered for display.  Encoding a video file to an overall average target bit rate is relatively easy.  Maintaining limits on bit rate throughout the video, so as not to overfill a transmission channel, or overflow a video decoder memory buffer is critical for professional applications.
  10. Encoder Settings matter. There are many, many settings available in a good video encoder, like x265.  We have run many experiments to determine the optimal combinations of settings that trade off encoder speed for encoding efficiency, and captured them in our ten performance presets, which make it easy to run valid encoder comparison tests.  If you are comparing x265 with another encoder and you believe you need to modify default settings, contact us to discuss your test parameters, and we’ll give you the guidance you need.
  11. Show your work. Before you believe any published test or claim (especially from one of our competitors), ask for all of the information and materials needed to reproduce those results.  It’s easy to make unsubstantiated claims, and it’s easy for companies to run hundreds of tests, cherry-picking the tests that show their product in the most favorable light.  Unless you are given access to the source video, the encoded bitstreams, the settings, the system configuration, and you are able to reproduce the results independently with your own test video sequences under conditions that meet your requirements, don’t believe everything you read.
  12. Speak for yourself.  Don’t claim to be an expert in the design and operation of a particular video encoder if you are not.  Recognize that your experience with each encoder is limited to the types of video you work with, while encoders are generally designed to cover a very wide range of uses, from the highest quality archiving of 8K masters or medical images, to extremely low bit rate transmission of video through wireless connections.  If you want to know what an encoder can or can’t do, or how to optimize it for a particular scenario, you should ask the developers of that encoder.
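Several of the guidelines above map directly onto command-line options.  The sketch below illustrates three of them; the `x265` and `ffmpeg` binary names and all file names are assumptions for illustration.

```python
# Sketches of command lines for three of the guidelines above.
# Binary names and file names are assumptions, not a prescribed setup.

# Guideline 1 (note): metric scoring with x265 is only meaningful with
# the matching tune option; --psnr enables PSNR reporting.
psnr_cmd = ["x265", "--preset", "medium", "--tune", "psnr",
            "--psnr", "in.y4m", "-o", "psnr_test.hevc"]

# Guideline 7: decode the source to raw frames first, so decode time
# never pollutes the encoder speed measurement.
decode_cmd = ["ffmpeg", "-i", "mezzanine.mov",
              "-pix_fmt", "yuv420p", "in.y4m"]

# Guideline 9: cap the instantaneous bit rate with VBV so the stream
# fits a fixed channel or decoder buffer, not just an average target.
vbv_cmd = ["x265", "--bitrate", "3000", "--vbv-maxrate", "3500",
           "--vbv-bufsize", "7000", "in.y4m", "-o", "capped.hevc"]

for cmd in (psnr_cmd, decode_cmd, vbv_cmd):
    print(" ".join(cmd))
```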

Hardware vs. Software encoders. 
It is a bit silly to compare hardware encoders to software encoders. While it’s interesting to know how a hardware encoder compares to a software encoder at any given point in time on a given hardware configuration, there are vast differences between the two types of encoders.   Each type has distinct advantages and disadvantages.  Hardware encoders are not cross platform;  they are either built in or added on to the platform. Hardware encoders are typically designed to run in real-time, and with lower power consumption than software encoders, but for the highest quality video encoding, hardware encoders can NEVER beat software encoders, because their algorithms are fixed (designed into the hardware), while software encoders are infinitely flexible, configurable and upgradeable.  There are many situations where only a hardware encoder makes sense, such as in a video camera or cell phone.  There are also many situations where only a software encoder makes sense, such as when it comes to high quality video encoding in the cloud, on virtual machines.

Haivision to Demonstrate Breakthrough Performance of Live 4K HEVC/H.265 Software Encoding at 2017 NAB Show

Haivision contributions to the x265 open-source initiative have pushed boundaries on quality and performance of live video streaming on Intel processors

MONTREAL, CANADA – APRIL 19, 2017 – At the 2017 NAB Show, Haivision will demonstrate a breakthrough in live 4Kp60 HEVC software-only video streaming performance, leveraging the unparalleled quality of x265 software encoding, while running at a performance level that was previously only possible with dedicated hardware. This demonstration will be presented by Haivision’s HaiGear Labs, the company’s technologies research group, at the Renaissance Hotel (suite Ren Deluxe – B) next to the Las Vegas Convention Center.

Through the use of commodity off-the-shelf processing capabilities, Haivision will showcase how x265 software encoding, running on readily available dual-socket servers from the Intel® Xeon® Processor E5-2600 v4 product family, addresses the growing demand for high-quality 4K video streaming. This development brings down the costs associated with encoding live 4K video and enables 4K video streaming on ubiquitous Intel cloud compute architectures.

The foundation for these video streaming innovations comes from the company’s four years of active involvement in the x265 open source project, a commercially backed initiative founded with the goal of producing the highest performance, most efficient HEVC/H.265 video encoder software implementation. Haivision is an original charter licensee of the x265 project and has made significant contributions to the x265 initiative through tight technology collaboration with MulticoreWare, the primary developer of the widely adopted open-source codec.

Haivision’s quality-to-performance in its live 4K HEVC demonstration leverages UHDKit, MulticoreWare’s extended encoding library built on top of the x265 HEVC encoder. By heavily investing in advancing the UHDKit for low-latency live encoding, Haivision has been able to push the boundaries on what has been possible in HEVC software encoding.

“Haivision has been an active contributor to x265 and UHDKit and has helped MulticoreWare push the envelope with regard to live encoding performance,” said Tom Vaughan, vice president, general manager for video, MulticoreWare. “Haivision’s numerous contributions are invaluable to every user of x265.”

“Haivision’s long-term association with MulticoreWare’s x265 project and our tuning of the UHDKit for high performance streaming on the Intel platform has enabled our customers to benefit from software-only or CPU/GPU balanced performance,” said Mahmoud Al-Daccak, chief technology officer, Haivision. “We will continue to pioneer and contribute to these development communities that rely on open-source initiatives to move the streaming video industry forward.”

As a pioneer in high performance streaming solutions, Haivision innovates in the areas of live hardware and software encoding/decoding, video stream transport and management. The company is dedicated to pushing the technology envelope, and fostering partnerships and collaboration within the industry to expand the ecosystems of performance video that its customers depend on. To learn more or book a demonstration, visit

About Haivision
Haivision, a private company founded in 2004, provides media management and video streaming solutions that help the world’s leading organizations communicate, collaborate and educate. Haivision is recognized as one of the most influential companies in video by Streaming Media and one of the fastest growing companies by Deloitte’s Technology Fast 500. Haivision is headquartered in Montreal and Chicago, with regional offices located throughout the United States, Europe, Asia and South America. Learn more at

Meet the x265 Development Team at NAB 2017

MulticoreWare, the developers of x265, will be at the National Association of Broadcasters convention in Las Vegas, Nevada, April 24-27th.  You can find us in the South Hall, Upper, booth SU14002.  We’ll be demonstrating the latest advances to x265, and our premium video encoding framework, UHDkit (which includes both x264 and x265, plus many extended capabilities).  If you haven’t registered, you can get a free guest pass by using Guest Pass Code: LV6642.  Contact us through our x265 Facebook page if you would like to schedule a meeting.

HEVC Advance Announces ‘Royalty Free’ HEVC Software

Major initiative designed to rapidly accelerate widespread HEVC (H.265)/UHD adoption on mobile devices and personal computers

BOSTON, Nov. 22, 2016 /PRNewswire/ — HEVC Advance, an independent licensing administrator, today announced a major software policy initiative to rapidly accelerate widespread HEVC/UHD adoption in consumer mobile devices and personal computers.  Under the software initiative, HEVC Advance will not seek a license or royalties on HEVC functionality implemented in application layer software downloaded to mobile devices or personal computers after the initial sale of the device, where the HEVC encoding or decoding is fully executed in software on a general purpose CPU.  Examples of the types of software within the policy include browsers, media players and various software applications.

According to Peter Moller, CEO of HEVC Advance, “We are very pleased to offer this initiative.  A critical goal of HEVC Advance is to encourage widespread adoption of HEVC/UHD technology in consumer devices.  While HEVC technology implemented in specialized hardware circuitry provides the best and most efficient user experience, there are millions of existing mobile devices and personal computers that do not have HEVC hardware capability.  Our initiative is tailored to enable software app and browser providers to include HEVC capability in their software products so that everyone can enjoy HEVC/UHD video today.  I’d like to specifically thank Tom Vaughan at MulticoreWare for his guidance in our development of this initiative.”

Tom Vaughan, VP and GM of Video at MulticoreWare remarked: “We have invested heavily in the development of our HEVC codecs and associated libraries.  From the start, HEVC Advance worked very hard to listen, understand and develop solutions to respond to the market’s concerns about HEVC Adoption.  We are thankful for their efforts and believe this initiative will encourage and facilitate software developers, web service and content providers, mobile device manufacturers and others to take advantage of the tremendous competitive advantage that HEVC provides as quickly as possible.”

Subject to certain exceptions/conditions.  Full details on the software policy initiative are available on the HEVC Advance website:

About the HEVC Advance Licensing Program
In addition to the above described software initiative, HEVC Advance offers incentives to encourage consumer device manufacturers to include HEVC functionality at initial sale.  For example, HEVC Advance only seeks one device royalty for a consumer device, even if that device includes multiple HEVC decoders or encoders at the time of the initial sale (subject to limited exceptions).  Therefore, any number of HEVC software products may be included in a consumer device at initial sale without incurring additional royalties, providing the applicable device royalty has been (or will be) paid by the consumer device manufacturer.  In addition, even after initial sale, HEVC Advance Licensees can receive waivers for device royalties on HEVC software products if these software products are installed on a consumer device for which the applicable device royalty has been (or will be) paid.

For more details or questions, please contact  To request a license or for a detailed summary of the HEVC Advance licensing structure and incentive program, please visit the HEVC Advance website:

About HEVC Advance
HEVC Advance is an independent licensing administrator company formed to lead the development, administration and management of an HEVC/H.265 patent pool for licensing essential patents. HEVC Advance provides a transparent and efficient licensing mechanism for HEVC patented technology. For more information about HEVC Advance, visit

x265 2.1 Released

x265 version 2.1 has been released.  Full documentation is available at

Release Notes for 2.1

Encoder enhancements

  1. Support for qg-size of 8
  2. Experimental support for slice parallelism
  3. Ability to insert non-IDR I-frames at scene changes when encoding with fixed GOP lengths (min-keyint = keyint)
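The enhancements above can be exercised from the command line.  As a sketch (the binary name and input file are assumptions):

```python
# Sketch: exercising the 2.1 encoder enhancements from the CLI.
# Binary name and input file are assumptions for illustration.
cmd = ["x265",
       "--qg-size", "8",       # 1: finer 8x8 quantization groups
       "--slices", "4",        # 2: experimental slice parallelism
       "--keyint", "60",       # 3: fixed GOP length; with
       "--min-keyint", "60",   #    min-keyint = keyint, scene changes
                               #    get non-IDR I-frames instead
       "in.y4m", "-o", "out.hevc"]
print(" ".join(cmd))
```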

API changes

  1. Encode user-defined SEI messages passed in through the x265_picture object
  2. Disable SEI and VUI messages in the bitstream
  3. Specify qpmin and qpmax
  4. Control the number of bits used to encode the POC

Bug fixes

  1. QP fluctuation fix for the first B-frame in a mini-GOP, for 2-pass encoding with tune grain
  2. Assembly fix for crashes in 32-bit builds from dct_sse4
  3. Thread pool creation fix on the Windows platform

A Proposal to Accelerate HEVC Adoption

Clogged pipes  Annual global Internet traffic is no longer measured in Megabytes, Gigabytes, Terabytes, or even Petabytes.  This year, global IP traffic will be more than one Zettabyte (a trillion billion, or 1,000,000,000,000,000,000,000 bytes)[1].  By far the biggest driver of IP traffic growth is video.  Today, more than 70% of all IP traffic is video.  By 2020, video will consume an estimated 82% of 2.3 Zettabytes of IP traffic[2].  Video has become a strategic priority for many companies, including leading social media and messaging services.  In addition to the explosive growth in on-demand video, we are in the middle of a transition from high definition video to ultra high definition (UHD) video, with higher resolution, higher color accuracy, and higher dynamic range.  UHD content is forecast to grow to 20.7% of global IP video traffic by 2020.  Internet bandwidth is never as cheap or plentiful as we would want, so there is a serious need for more efficient video compression.  Fortunately, a new standard is available which can deliver identical quality to consumers using half the bandwidth of previous video compression standards.  Imagine a technology that could theoretically free up more than 30% of worldwide IP bandwidth, reducing congestion and allowing that bandwidth to be used to deliver a much higher quality of experience.  This technology is ready, mature, and optimized, but it’s hardly being utilized.
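The “more than 30%” figure follows directly from the two numbers in the paragraph above; the arithmetic is simple enough to check:

```python
# Rough arithmetic behind the "free up more than 30% of worldwide IP
# bandwidth" claim, using the article's own figures.
video_share = 0.70    # video as a fraction of all IP traffic today
hevc_savings = 0.50   # HEVC needs roughly half the bit rate of AVC

freed = video_share * hevc_savings
print(f"Potential IP bandwidth freed: {freed:.0%}")  # prints 35%
```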

Twice the Efficiency  HEVC, also known as H.265, is a new video coding standard, ratified by the ITU and ISO in January 2013.  Many leading technology companies and research groups contributed to the new standard, including Microsoft, Apple, and Samsung.  These organizations contributed their researchers’ time, efforts, and intellectual property to create a significantly more powerful video compression standard, and the results are outstanding.  A 1080P movie that required 6 Mbps to be delivered in high quality with AVC/H.264 (today’s most widely used video compression standard), typically only requires about 3 Mbps to be delivered in the same quality with HEVC/H.265.  The improved efficiency of HEVC results in significantly higher quality at any fixed bandwidth.  So, for anyone trying to watch a video over a congested or bandwidth limited Internet connection, HEVC is able to deliver much better picture quality than AVC.

Ready to Rock  3 ½ years after the standard was ratified, HEVC hardware and software implementations are full-featured, efficient and widely available.  HEVC hardware decoders are built into the latest smartphone SOCs, high-end PC graphics chipsets, and most high-end televisions.  Hardware HEVC encoders are embedded in smartphones, PCs, cameras and broadcast encoders.  The x265 HEVC encoder software is available under the GPL v2 open source license, and it’s been incorporated in FFMPEG, VLC, and dozens of open source and commercial applications.  There are many other commercial HEVC encoder and decoder software implementations.  HEVC is ready to rock and roll.  Sadly, HEVC is not in widespread use.  There isn’t a whole lot of HEVC encoded content, or HEVC enabled software applications reaching end-users.  The exception to this rule is the 4K content streamed by Netflix, Amazon and other movie streaming services to the latest generation of 4K TVs.  But although billions of devices are capable of supporting HEVC, in almost every application we continue to use AVC/H.264, which requires roughly twice the bit rate (and file size) to achieve the same quality as HEVC.  Consumers and enterprises aren’t getting the benefits of this incredibly powerful new technology.  There is no technical reason that HEVC isn’t being used for most video applications.  The holdup is the cost and uncertainty associated with HEVC patent licensing.

Patent Licensing Impasse  The HEVC video coding standard was developed by a team of experts jointly managed by two standards bodies: the ITU and ISO.  It benefits consumers when multiple companies collaborate to develop a new industry standard.  Standards development involving many contributing organizations reduces the overall cost of developing a new technology and can produce a better overall result by combining valuable, proprietary innovations from many contributors, while providing compatibility across many different companies’ products.  As per the policies of the ITU and ISO, the companies involved in setting the HEVC standard must disclose any patents they have or may file on the techniques that they contributed to the standard, and they pledge to license their patents on reasonable and non-discriminatory (RAND) terms.

Thirteen years ago, the majority of companies that developed AVC agreed to license their patents through one organization, MPEG LA.  Today, some of the companies that developed HEVC have agreed to make their patents available through MPEG LA, and some have formed a second patent pool, called HEVC Advance.  A number of important companies with HEVC patents have not yet joined one of the patent pools.  This fragmentation, combined with total patent royalties that are potentially many times greater than video solution developers currently pay for AVC, has caused many potential HEVC adopters to hold off for now.  We’ve all got too much invested, and HEVC is far too good to let the patent licensing situation delay adoption much longer.

Available Options  HEVC is the best video compression standard available today, and it is likely to remain the best available for the next several years.  That isn’t to say that it doesn’t have any competition.  Google’s VP9 is a very capable video codec.  Google, Microsoft, Netflix, Amazon, Intel, Cisco and Mozilla have formed the Alliance for Open Media (AOM), a group dedicated to developing a next-generation video compression standard.  Some may have thought that the AOM was organized simply to gain leverage in HEVC patent negotiations, but the work the AOM is doing makes it clear that it is a very serious project.  The AOM represents the merger of three royalty-free video codec development efforts (Google’s VP10, Cisco’s Thor, and Xiph/Mozilla’s Daala formats).  It has the backing of many industry leaders, including Adobe, ARM, AMD, NVIDIA and Vidyo.  More leading companies will join in the coming months, and I expect the AOM to be successful in its goal of establishing a widely adopted, royalty-free next-generation video standard.  But it will take time to finalize this standard, and more time to develop and deploy implementations.  There is a window of opportunity for HEVC to achieve the widespread, pervasive adoption we all want to enjoy, in order to ensure a long shelf life and a good return on the massive combined industry investment.

Moving Forward  I manage the video software business for MulticoreWare, developers of x265, the world’s most widely adopted HEVC encoder.  This has enabled me to build strong relationships with many of the leading adopters of HEVC, including movie studios and post production companies, semiconductor companies, broadcast and streaming video encoding system vendors, web video streaming services, web video processing services, device OEMs, and major cloud and device platform owners.  I have had many discussions on the topic of HEVC adoption with key players, and I’ve found that all involved are intelligent, reasonable people, but their perspectives differ widely.  It’s clear from the people I’ve spoken with that there is a big gap in the patent licensing discussions.  It’s time for a détente.  HEVC is poised for breakout success, but it won’t happen unless we see significant improvements in the patent licensing situation.  I’m optimistic that most organizations involved recognize this, and are ready to find ways to accelerate adoption.

Some believe the only way to solve the problem we see today is to move to royalty-free standards developed outside of international standards bodies.  While that can work well, it’s not the only way a great standard can be developed.  I believe that it’s perfectly reasonable for inventors to be given an incentive to contribute their valuable developments to a global standard, and to be compensated for their contributions.  Every company recognizes that intellectual property, whether it is a movie, a TV show, software, hardware or an invention, takes a lot of time, money, and unique talent to develop.  So it’s unreasonable to expect that the only way to develop new standards is to force contributors to donate their IP to a new standard royalty-free.  However, in standards-setting organizations, contributors shouldn’t come to the party with the goal of earning a big windfall on their R&D investment.  If that is your goal, you might be ‘persona non grata’ when it’s time for the next party.  A reasonable ROI is fine, but patents for techniques contributed to a technical standard are RAND-encumbered, and must be reasonably priced.

So, how can the patent licensing situation be resolved?  There are several possibilities.  The stalemate could continue, with many companies sticking with H.264 or VP9 until AV1 is available.  In this scenario, almost everyone loses.  It could be resolved in court; of course, a legal battle is the least attractive option for all concerned.  The best option is for patent holders, including those on the sidelines, to come together, compromise, and offer a licensing solution acceptable to the vast majority of potential licensees.

It’s a Web and Mobile World  AVC/H.264 was finalized as a specification in March 2003, roughly ten years before HEVC/H.265.  Ten years ago, at this stage in the life cycle of AVC/H.264, things were quite different.  AVC/H.264 patent holders pooled their patents together, enabling licensing through a single organization, MPEG LA.  In terms of video distribution, it was a “set-top box and DVD” world.  The Blu-ray Disc format was competing with HD-DVD to become the next-generation optical disc standard.  The VC-1 video codec (developed by Microsoft and others) was competing with AVC, and both were supported by HD-DVD and Blu-ray Disc.  VC-1 patents were offered under very competitive terms.  The iPhone did not exist.  YouTube was less than eight months old, offering 320 x 240 pixel videos.  Netflix streaming, Amazon Video and Hulu were not available.  Facebook was a closed social network for college students.  Today, we live in a web and mobile world, where Internet video streaming is no longer a science experiment; it’s the primary method of accessing video content for billions of people.  Support from the leading web browsers and mobile platforms is essential for any new video standard to succeed.

Key to the success of AVC/H.264 was adoption by all of the leading device OEMs and web browsers.  Once H.264 was supported natively on every popular computing device, software developers could utilize it without an additional patent license or added cost.  Today, popular web services don’t charge for the client software required to access the service.  Instead, you just access the service through your browser, or download a free client app.  HEVC will not achieve critical mass if streaming video services, social media services and video conferencing services can’t continue to provide free client apps.  Web browsers will not support HEVC if there is an added royalty cost.  Unless they are content to cede the consumer PC and mobile video compression market to royalty free codecs, HEVC patent holders need to recognize the reality of today’s technology ecosystem, and adjust their license terms to compete successfully.  Treating software the same as hardware is a mistake.  Hardware is the engine, but software and content are the fuel.  Without the fuel, the engine won’t start, and it won’t run.

Proposal  To accelerate HEVC adoption, I propose that HEVC patent licensors agree to the following principles:

  • All HEVC patent holders should make their patents available through a patent pool
  • Ideally, all patent holders should join one patent pool
  • Only one reasonable royalty should be paid per device
  • Software decoding on consumer devices must be royalty free
  • Software encoding on consumer devices must be royalty free
  • Content distribution must be royalty free
  • There must be a reasonable cap on total royalties owed for HEVC implementations

The HEVC standard was first ratified 3 ½ years ago.  It’s time for all HEVC patent holders to make their patents available under reasonable terms.  Holdouts need to fulfill their obligation to license their patents on reasonable and non-discriminatory terms.  Ideally, all patent holders should join a single patent pool, to eliminate redundancy and to ensure more reasonable royalty rates.  It is much easier for licensees to track and maintain compliance if they don’t have to sign two or more patent license agreements.

I’m going to hold off suggesting the ideal compromise for patent royalty rates.  This is something that needs to be worked out in discussions between licensors and licensees.  Patent license revenues are a function of both royalty rates and adoption rates.  At this point, patent licensors should be concerned with accelerating adoption.

Royalty free software decoding would immediately enable more than a billion legacy devices to support HEVC playback, massively accelerating adoption.  This proposal would enable all web browsers, social media and video player apps to immediately add support for HEVC.  Royalty free software encoding would mean that video chat, video conferencing and video sharing services wouldn’t hesitate to significantly improve the quality of their services.  For clarification, this would not be a workaround for device manufacturers to offer HEVC support.  Under my proposal, only software distributed to hardware devices by third party companies after the first sale of the device would be considered royalty free.

Native Hardware Support  Of course, software decoding and encoding are just a good first step to unleash the ecosystem.  Consumers will recognize that dedicated video decoding hardware reduces power consumption, drastically improves battery life, and ensures reliable playback.  Hardware-accelerated encoding is desirable on battery-powered mobile devices for video recording and live video messaging or streaming applications.  As HEVC adoption quickly spreads to every app and service that utilizes video, native HEVC support will be essential for any device to remain competitive.  This is not a problem, as every leading semiconductor manufacturer already supports HEVC in their PC, mobile device and embedded graphics.

Is this proposal unrealistic?  Not at all.  HEVC patent holders understand that it is in their interest to see HEVC adoption accelerate quickly and significantly, and that it is critically important for HEVC to become common for web video distribution and mobile applications.  Effectively, this proposal mirrors the current state of AVC patent licensing.  AVC has long been supported natively on every device that matters, and app developers don’t have to worry about AVC royalties.  The vast majority of software and web service developers currently do not, and will not, pay “device royalties” for client applications, or for video playback in a web browser.  Once leading browsers, apps and services are using HEVC, consumer demand for a fast, reliable experience will ensure that every leading device OEM offers native HEVC hardware support.  With roughly 2.5 billion consumer compute-capable devices (PCs, tablets and mobile phones) sold each year[3], plus hundreds of millions of consumer electronics devices (TVs, cameras, set-top boxes), it’s clear that the total addressable market for licensing HEVC is more than sufficient to provide a reasonable return on investment to HEVC patent holders.  It should be clear to all that the benefits of adopting this proposal will far outweigh any perceived cost.  I believe this proposal represents a win/win situation for licensors and licensees, and the biggest winners of all would be consumers worldwide.  I look forward to the conversation that is sure to follow.

Tom Vaughan
VP and GM, Video




x265 v2.0 released

x265 version 2.0 has been released. This release adds many new features, including ARM assembly optimizations for most basic pixel and motion estimation (ME) operations, SAO cleanups, and fully tested reconfigure functionality.

Full documentation is available at

=========================================== New Features =========================================

• uhd-bd: Enable Ultra HD Blu-ray support
• rskip: Enables skipping recursion to analyze lower CU sizes using heuristics at different rd-levels. Provides good visual quality gains at the highest quality presets.
• rc-grain: Enables a new ratecontrol mode specifically for grainy content. Strictly prevents QP oscillations within and between frames to avoid grain fluctuations.
• tune grain: A fully refactored and improved option to encode film grain content including QP control as well as analysis options.
• asm: ARM assembly is now enabled by default, native or cross compiled builds supported on armv6 and later systems.
==================================== API and Key Behaviour Changes ==================================
• x265_rc_stats added to x265_picture, containing all RC decision points for that frame
• PTL: high tier is now allowed by default, chosen only if necessary
• multi-pass: First pass now uses slow-firstpass by default, enabling better RC decisions in future passes
• pools: fix behaviour on multi-socketed Windows systems, provide more flexibility in determining thread and pool counts
• ABR: improve bits allocation in the first few frames, abr reset, vbv and cutree improved
=============================================== Misc ==============================================
• An SSIM calculation bug was corrected

x265 version 1.9

x265 version 1.9 has now been released. This release adds many new features, as well as additional assembly optimizations for Main12, intra prediction and SAO. The recently added features lookahead-slices, limit-refs and limit-modes are now enabled by default in the supported presets. Full documentation is available at

New Features

  • Quant offsets: Allows block-level quantization offsets to be specified for every frame. An API-only feature.
  • intra-refresh: Keyframes can be replaced by a moving column of intra blocks in non-keyframes.
  • limit-modes: Intelligently restricts mode analysis.
  • --max-luma and --min-luma: luma clipping options, useful for HDR use-cases.
  • Emergency denoising is now enabled by default for very low bitrate, VBV encodes.

API Changes

  • x265_frame_stats returns many additional fields: maxCLL, maxFALL, residual energy, scenecut and latency logging.
  • qpfile now supports frametype ‘K’.
  • x265 now allows CRF ratecontrol in pass N (N greater than or equal to 2).
  • Chroma subsampling format YUV 4:0:0 is now fully supported and tested.
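For reference, a qpfile is a plain-text file with one line per overridden frame: a frame number, a frame type (I, i, K, P, B or b), and an optional QP. A minimal sketch using the newly supported ‘K’ type might look like this (the frame numbers and QP below are illustrative, not from a real encode):

```
0 K
250 K 24
375 B
```

Frames not listed in the qpfile are left to the encoder’s own decisions.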

Presets and Performance

  • Recently added features lookahead-slices, limit-modes, limit-refs have been enabled by default for applicable presets.
  • The default psy-rd strength has been increased to 2.0
  • Multi-socket machines now use a single pool of threads that can work cross-socket.

Performance Presets

x265 has ten performance presets which enable anyone to make a good choice between encoding speed and compression efficiency.  These presets are combinations of x265 settings that should provide the best possible result at the encoding speed that you want to achieve.

If you want the highest compression efficiency (the best quality at your desired bit rate), you can select “--preset veryslow”.  Of course, “--preset veryslow” will run much slower than one of the faster x265 presets, so you will need either more time or more compute power (a more powerful PC or server).  If you’re trying to encode in real time, x265 must maintain an encoding speed faster than the frame rate of your video, so you’ll want to choose one of the faster presets, like “--preset faster” or “--preset veryfast”.
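The real-time rule of thumb above (encoding speed must exceed the video’s frame rate) can be sketched in a few lines of Python. The preset names are x265’s actual ten presets, ordered fastest to slowest, but the throughput figures below are hypothetical placeholders, not benchmark results:

```python
# x265's ten performance presets, ordered from fastest to slowest.
PRESETS = ["ultrafast", "superfast", "veryfast", "faster", "fast",
           "medium", "slow", "slower", "veryslow", "placebo"]

def slowest_realtime_preset(measured_fps, video_fps, headroom=1.2):
    """Return the slowest (most compression-efficient) preset whose
    measured encode speed still beats the video frame rate by a safety
    margin, or None if no preset is fast enough on this machine."""
    for preset in reversed(PRESETS):  # prefer slower = more efficient
        fps = measured_fps.get(preset)
        if fps is not None and fps >= video_fps * headroom:
            return preset
    return None

# Hypothetical benchmark results for one clip on one machine:
measured = {"veryfast": 90.0, "medium": 40.0, "veryslow": 3.0}
print(slowest_realtime_preset(measured, video_fps=30.0))  # → medium
```

The 20% headroom guards against content-dependent dips in encoding speed; on a faster machine, more presets clear the bar and the function returns a slower, more efficient one.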

Over the past year we’ve added a number of new capabilities to x265 designed to allow it to run faster with very little tradeoff in encoding efficiency.  These include --limit-refs, --limit-modes, and --lookahead-slices.  We’ve performed extensive testing using a set of videos at various sizes (720p, 1080p and 2160p) on a range of machines.  We tested many possible improvements to our performance presets, trying to find the right combination of settings at each performance level.  The result is an update to our performance presets that incorporates some of our new algorithms, and a few changes to some of the existing settings.  The following charts illustrate the benefits of the new presets.  Your mileage may vary depending on your machine and your content.  In some cases you’ll notice a big improvement in speed, with a small tradeoff in quality, and in other cases you’ll notice both improved quality and speed.

The data points below show the average encoding speed and efficiency relative to the old (v1.8) veryslow preset.