Digital video now represents a vast, complex and expanding landscape covering broadcast VOD, in-stream/in-read VOD, publisher VOD, social VOD and YouTube. Put simply, if it plays on a screen and loads via the internet, it counts as digital video.
Typically, the performance of digital video is measured in terms of view-through rate (total completed views/total served impressions). This intuitively makes sense – if you make a video, you want people to watch it. However, when you look a little more closely at how video is measured, things start to unravel.
Let’s start with the Views themselves. It is important to realise that they are not all equal: some digital video is autoplay and some user-initiated; some views have the sound turned on, some don’t; some are played full-screen, some are crammed into a 300×250 display format in the corner of a page. All these factors have associated metrics and are easily measurable, yet they are rarely utilised.
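As a sketch of what utilising those metrics might look like, each view could be tagged with its playback context and completion rates broken out by segment. The record fields below are hypothetical, not taken from any particular ad server:

```python
from collections import defaultdict

# Hypothetical per-view records: each view carries its playback context.
views = [
    {"autoplay": True,  "sound_on": False, "fullscreen": False, "completed": False},
    {"autoplay": True,  "sound_on": False, "fullscreen": False, "completed": True},
    {"autoplay": False, "sound_on": True,  "fullscreen": True,  "completed": True},
    {"autoplay": False, "sound_on": True,  "fullscreen": False, "completed": True},
]

def completion_rate_by(attribute, records):
    """Completion rate split by one playback attribute."""
    totals = defaultdict(lambda: [0, 0])  # attribute value -> [completed, total]
    for r in records:
        totals[r[attribute]][0] += r["completed"]
        totals[r[attribute]][1] += 1
    return {value: done / total for value, (done, total) in totals.items()}

print(completion_rate_by("autoplay", views))
# {True: 0.5, False: 1.0} — user-initiated views complete far more often
```

Segmenting like this makes it obvious when a headline completion rate is propped up by cheap autoplay, muted or in-banner views.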
Next up for debate is the Impression metric. A served impression is not the same as a viewed impression. Viewability averages in the UK tend to fluctuate between 45% and 55% depending on who you ask, so roughly half the served impressions in the UK are not seen by anyone at all. Apply this logic to the view-through rate calculation and the denominator suddenly halves (100 views/1,000 served impressions becomes 100 views/500 viewable impressions), doubling the rate from 10% to 20%.
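The arithmetic above can be sketched in a few lines; the 50% viewability figure is illustrative, drawn from the UK averages quoted above:

```python
def view_through_rate(completed_views, impressions):
    """View-through rate: completed views divided by impressions."""
    return completed_views / impressions

completed_views = 100
served_impressions = 1_000
viewability = 0.5  # illustrative UK average (45-55% range)

viewable_impressions = int(served_impressions * viewability)

print(f"VTR on served impressions:   {view_through_rate(completed_views, served_impressions):.0%}")   # 10%
print(f"VTR on viewable impressions: {view_through_rate(completed_views, viewable_impressions):.0%}")  # 20%
```

The same 100 completions look twice as good once the unseen half of the inventory is stripped out of the denominator.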
Viewability is not consistent and varies by platform, publisher, device and format. This means you can’t apply generic averages to your video “estate”, and unless you implement viewability tracking across all your video inventory it will be incorrectly measured. However, until recently viewability tracking could not be applied to all video inventory, since third-party tracking could only be implemented for VPAID-tagged video formats, not VAST. UK Broadcast VOD, for example, is currently delivered through VAST tagging, so none of that inventory can be tracked for viewability. VAST 4.1 has recently been released to address the issue, but we will have to wait and see how seamlessly the update is rolled out. In-app viewability tracking for mobile is also in its infancy, so video delivered in that environment is unlikely to be adequately tracked.
Determining if a video was viewable is only the first step, understanding the media Environment is equally important, particularly:
- The content around which the video is being served on the page
Was it brand safe? Was it in-geo? Was it the only video on the page, or one of many?
- The format of the video
Was it pre-roll (i.e. was it served before a longer piece of video content), in-stream or hosted in-banner?
We need to take account of the Impact the video has on the viewer, where we know that success is heavily influenced by factors such as creative length, whether the ad is skippable, and whether it starts automatically or is user-initiated. Lastly, we need to consider the effects of the video being served to the same user a number of times. Metrics that help here are frequency (ideally measured weekly rather than at campaign level) and, just as importantly, sentiment; otherwise you run the risk of damaging your brand by repeatedly exposing viewers to a video they have already voiced their objection to.
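Weekly frequency, as opposed to campaign-level frequency, can be sketched from a raw impression log like this. The log structure is hypothetical, assuming only a user identifier and a serve date per impression:

```python
from collections import Counter
from datetime import date

# Hypothetical impression log: (user_id, date the ad was served).
impressions = [
    ("user_a", date(2024, 1, 1)),
    ("user_a", date(2024, 1, 3)),
    ("user_a", date(2024, 1, 10)),
    ("user_b", date(2024, 1, 2)),
]

def weekly_frequency(log):
    """Average impressions per user per ISO week they were reached in."""
    per_user_week = Counter(
        (user, d.isocalendar()[:2])  # key on (user, (ISO year, ISO week))
        for user, d in log
    )
    return sum(per_user_week.values()) / len(per_user_week)

print(f"{weekly_frequency(impressions):.2f}")  # 1.33
```

A campaign-level average over the same log (4 impressions across 2 users, i.e. 2.0) would hide that no user saw the ad more than twice in any single week, which is the exposure pattern that actually drives fatigue.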
There is a lot to consider! The good news is that the metrics are out there to deliver a robust, insightful analysis of video performance. Putting the right metrics in place, combined with strong guidelines around the environment and regular monitoring of audience sentiment, can help brands measure their video activity better for greater success.