For a class with such a small interface, it seems remarkable that I would feel it deserves a blog post all of its own. But a mixture of appreciation for what it does and the pain I have had working with it requires catharsis.

AVAssetWriterInputPixelBufferAdaptor objects take in pixel data and provide it to an AVAssetWriterInput in a format suitable for writing that pixel data into a movie file.

One of the best features of AVAssetWriterInputPixelBufferAdaptor is its pixel buffer pool. The documentation informs you that it is more efficient to create pixel buffers from the adaptor’s pixel buffer pool than to create your own; what it leaves out is that the pool also does some memory management of the pixel buffers for you. You still need to call CVPixelBufferRelease after appending a pixel buffer, but after that you can forget about it.
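In code, the pool route looks something like the following sketch (the `adaptor` and `presentationTime` variables are assumed to exist in your writing loop; the drawing step is elided):

```objc
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                     adaptor.pixelBufferPool,
                                                     &pixelBuffer);
if (result == kCVReturnSuccess && pixelBuffer)
{
    // ... draw your frame into pixelBuffer ...
    [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
    CVPixelBufferRelease(pixelBuffer); // safe here: the pool manages the buffer's lifetime
}
```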

If, instead of using the adaptor’s pixel buffer pool, you naively create pixel buffers using CVPixelBufferCreate, you will crash if you call CVPixelBufferRelease after appending the pixel buffer to the adaptor, because the adaptor hasn’t yet finished with it; but if you don’t call CVPixelBufferRelease, you will leak like the Titanic. When you create pixel buffers yourself you can supply a CVPixelBufferReleaseBytesCallback function to manage the allocation of the pixel data. But as I said, the advantage of the pool is that the adaptor manages the lifetime of the pixel data for you. Using the pixel buffer pool makes it easier to manage memory, and it is faster.
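For completeness, the manual route looks roughly like this sketch (the callback name and dimensions are mine; you free the pixel data only when CoreVideo tells you it is done with it):

```objc
// Called by CoreVideo when the pixel buffer is finally released.
static void MyPixelBufferRelease(void *releaseRefCon, const void *baseAddress)
{
    free((void *)baseAddress);
}

size_t width = 1920, height = 1080;
size_t bytesPerRow = width * 4;
void *pixelData = malloc(bytesPerRow * height);

CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreateWithBytes(kCFAllocatorDefault, width, height,
                             kCVPixelFormatType_32BGRA,
                             pixelData, bytesPerRow,
                             MyPixelBufferRelease,
                             NULL,  // releaseRefCon
                             NULL,  // pixelBufferAttributes
                             &pixelBuffer);
```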

The biggest issue is the plain lack of documentation about the pixel buffer adaptor and its associated AVAssetWriterInput. Nothing informs you that specifying the AVVideoColorPropertiesKey when creating an AVAssetWriterInput object for codecs other than ProRes4444 and ProRes422 will result in failure.
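So, per the restriction above, color properties can only go in the output settings when the codec is one of the ProRes pair. A sketch of settings that should be accepted (dimensions are mine):

```objc
NSDictionary *outputSettings = @{
    AVVideoCodecKey : AVVideoCodecAppleProRes422,
    AVVideoWidthKey : @1920,
    AVVideoHeightKey : @1080,
    AVVideoColorPropertiesKey : @{
        AVVideoColorPrimariesKey : AVVideoColorPrimaries_ITU_R_709_2,
        AVVideoTransferFunctionKey : AVVideoTransferFunction_ITU_R_709_2,
        AVVideoYCbCrMatrixKey : AVVideoYCbCrMatrix_ITU_R_709_2,
    },
};
AVAssetWriterInput *videoInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:outputSettings];
```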

No information is provided to tell you that on OS X, on older hardware (not older OSes), the adaptor can give up generating the color profile data it needs for accepting pixel data with one color profile and producing video output with another. The way I found out was that the pixel buffer adaptor returned a NULL pixelBufferPool after failing to convert a previous pixel buffer. This failure doesn’t happen immediately: because the pixel buffers are processed asynchronously, you can have appended multiple pixel buffers to the adaptor before the failure happens. The adaptor itself holds no error state, and AVAssetWriterInput objects hold no error state; the information you’re after is in the associated AVAssetWriter object. But even then the information you are provided with is a joke: “The operation could not be completed because an unknown error occurred -11800 with OSStatus -12918.”
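The error-checking dance therefore looks something like this sketch (the `adaptor`, `assetWriter`, and `presentationTime` names are assumed from your own writing code):

```objc
if (![adaptor appendPixelBuffer:pixelBuffer
           withPresentationTime:presentationTime])
{
    // Neither the adaptor nor the input carries the error;
    // ask the writer that the input was attached to.
    if (assetWriter.status == AVAssetWriterStatusFailed)
    {
        NSLog(@"Writing failed: %@", assetWriter.error);
    }
}
```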

Searching the system frameworks with the find command-line tool told me that -11800 is an AVFoundation error (AVError.h) meaning “unknown”. The OSStatus code -12918 is kVTCouldNotCreateColorCorrectionDataErr (VTErrors.h), which is how I found out that the adaptor could not convert my pixel data.

I’ve had numerous errors occur, been provided with unhelpful information, and learnt that the frameworks in which to find the error codes are:

  • VideoToolbox
  • CoreMedia
  • CoreVideo
  • AVFoundation

After I get the CVPixelBuffer from the pixel buffer pool, I attach the CGColorSpace object containing a color profile to the CVPixelBuffer using the CVBufferSetAttachment function. This color space is the one that matches the color space of the bitmap the CVPixelBuffer will be backing.
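The attachment step is a one-liner; a sketch, using sRGB as a stand-in for whatever color space your bitmap actually uses:

```objc
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
CVBufferSetAttachment(pixelBuffer,
                      kCVImageBufferCGColorSpaceKey,
                      colorSpace,
                      kCVAttachmentMode_ShouldPropagate);
CGColorSpaceRelease(colorSpace);
```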

The answers on Stack Overflow that relate to using AVAssetWriterInputPixelBufferAdaptor are mostly poor. This Code From Above blog post is far better, though it does not use the pixel buffer pool.

Last but not least, a thank you to @invalidname on Twitter, who sent me some sample code to help me get going with AVAssetWriter.