Alternatives to #pragma mark in Swift

Are you used to using the #pragma mark pre-processor directive in C/C++/Objective-C and have found yourself stuck in Swift? If you don’t know why you should be using #pragma mark then I recommend this NSHipster article on #pragma mark.

In Swift use the // MARK: feature instead:

// MARK: This is the comment you want to end up in the jump bar.

Having said that, in Swift it is recommended that you break your code up into logical units through the liberal use of extensions where possible.

So, for example, you could implement all the delegate methods that a class conforms to in a class extension; see this Stack Overflow Q&A and ignore the comments about // MARK: not yet being implemented.
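
A minimal sketch of that approach (the view controller and the table view conformance are purely illustrative), combining an extension per protocol with a // MARK: comment for the jump bar:

import UIKit

class PhotoListViewController: UIViewController {
    // MARK: Properties and view lifecycle live in the main class body.
}

// MARK: - UITableViewDataSource
extension PhotoListViewController: UITableViewDataSource {
    func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return 0
    }

    func tableView(tableView: UITableView,
        cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
        return UITableViewCell()
    }
}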

As well as

// MARK:

there is:

// TODO:, and // FIXME:

And a final update via @TheCocoaNaut: // MARK: etc. all work in Objective-C as well.


Nullability and Objective-C

I’m annotating an Objective-C framework for nullability to improve the framework’s interaction with Swift code. I have found that annotating the Objective-C code has also helped a little to clarify my thinking.

I have religiously kept to the following pattern where I always check the return value of [super init]:

@interface Timer : NSObject
@property (assign) NSTimeInterval startTime; // property type assumed
@end

@implementation Timer

// Initialiser signature assumed to match the body shown in the post.
- (instancetype)initWithStartTime:(NSTimeInterval)start {
    self = [super init];
    if (self) {
        self->_startTime = start;
    }
    return self;
}

@end

Because I’ve been reviewing Objective-C code to annotate it for nullability, I now question why I am checking whether self is nil here at all.

There is no need to check for nil here: if [super init] returns nil when super refers to NSObject, the program is going to crash in the very near future anyway.
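
Whether the initialiser can actually return nil is exactly what the nullability annotation records, and it is what Swift callers see. A small sketch, assuming the hypothetical initWithStartTime: initialiser above:

// If the initialiser is annotated nonnull, Swift imports it as non-failable:
let timer = Timer(startTime: 0)
let start = timer.startTime

// If it is annotated nullable, Swift imports it as a failable init?,
// and the result has to be unwrapped before use:
// if let timer = Timer(startTime: 0) {
//     let start = timer.startTime
// }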

I think some time ago I was guided to always follow this pattern so that if I changed the class that Timer inherits from, the pattern would keep me safe from potential changes in behaviour. Somewhere along the line I forgot that reasoning and this just became a pattern I followed.

The consequence now is that when annotating for nullability I’ve got to break through the anxiety* this causes me: I have to accept that these checks for nil in many cases do nothing, and look beyond this test to determine whether the initialiser can return nil.

*Cognitive dissonance like this creates a strange anxiety for me that brings work to a halt. The writing of this blog post is the action that breaks the connection.

The Pleasure and Pain of AVAssetWriterInputPixelBufferAdaptor

For a class with such a small interface it seems remarkable that I would feel it deserves a blog post all on its own. But a mixture of appreciation for what it does and the pain I have had working with it requires catharsis.

AVAssetWriterInputPixelBufferAdaptor objects take in pixel data and provide it to an AVAssetWriterInput in a suitable format for writing that pixel data into a movie file.
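
For context, a minimal sketch of how an adaptor is typically created and used (pre-Swift 3 style; writerInput, pixelBuffer and frameTime are assumed to exist elsewhere and are not from the original post):

import AVFoundation

// writerInput is assumed to be an already configured AVAssetWriterInput for a
// video track that has been added to an AVAssetWriter.
let adaptor = AVAssetWriterInputPixelBufferAdaptor(
    assetWriterInput: writerInput,
    sourcePixelBufferAttributes: [
        String(kCVPixelBufferPixelFormatTypeKey): Int(kCVPixelFormatType_32BGRA)
    ])

// Once the writer session has started and writerInput is ready for more data,
// frames are handed over as CVPixelBuffers with a presentation time:
// adaptor.appendPixelBuffer(pixelBuffer, withPresentationTime: frameTime)

The adaptor also vends a pixelBufferPool, which is normally where you obtain the pixel buffers that get appended.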



Drawing rotated text with CoreText is broken

Here is a video where the text is being rotated at different angles. You can see from the flickering that at certain angles the text just doesn’t get drawn.


And here is a video where I’ve created a bitmap of the text drawn unrotated and then I draw the bitmap rotated at different angles, which works:


I am drawing the Core Text within a path rather than from a point and I think that is the issue. I have seen in other situations that when drawing multi-line wrapping text in a path that is a column, if the drawing happens while the context is rotated then at certain angles the line wrapping for the first line of text behaves oddly.

I get exactly the same behaviour on iOS as I do on OS X.

This is definitely an issue with drawing the text within a path; drawing the text from a point works as expected.
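
A minimal sketch of the workaround used for the second video above (pre-Swift 3 Core Graphics calls; the function and its parameters are illustrative): render the text unrotated into a bitmap once, then draw that bitmap rotated.

import CoreGraphics

// Draw a pre-rendered bitmap of the text rotated about a centre point,
// instead of rotating the context and drawing the Core Text frame directly.
func drawRotatedTextImage(context: CGContext, textImage: CGImage,
    angle: CGFloat, center: CGPoint) {
    CGContextSaveGState(context)
    CGContextTranslateCTM(context, center.x, center.y)
    CGContextRotateCTM(context, angle)
    let width = CGFloat(CGImageGetWidth(textImage))
    let height = CGFloat(CGImageGetHeight(textImage))
    // Centre the image on the rotation point.
    let drawRect = CGRectMake(-width * 0.5, -height * 0.5, width, height)
    CGContextDrawImage(context, drawRect, textImage)
    CGContextRestoreGState(context)
}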


AV Foundation editing movie file content

First, a link to a useful new technote for AV Foundation, released 1 December 2014.

TechNote 2404 – a short note on AV Foundation API added in Yosemite; see in particular the section on AVSampleCursor and AVSampleBufferGenerator.

Now on to WWDC 2013 session 612, Advanced Editing with AV Foundation, available on this page.

Talk Overview

  • Custom Video compositing
    • Existing architecture
    • New custom video compositing
    • Choosing pixel formats
    • Tweening
    • Performance
  • Debugging compositions
    • Common pitfalls

Existing Architecture

AV Foundation editing today

  • Available since iOS 4.0 and OS X Lion
  • Used in video editing apps from Apple and in the store
  • Video editing
    • Temporal composition
    • Video composition
    • Audio mixing

Custom Video Compositor

What is a Video Compositor?

  • Unit of video mixing code
    • A chunk of video mixing code that takes multiple sources.
  • Receives multiple source frames
  • Blends or transforms pixels
  • Delivers single output frame
  • Part of the composition architecture

What is a Composition model


Instruction objects in an AVVideoComposition


The Video compositor takes multiple source frames in and produces a single frame out.

For example we can encode a dissolve as a property of an instruction: an opacity ramp from 1 down to 0.

New Custom Video Compositing

As of Mavericks there is a new custom compositing API. You can replace the built-in compositor with your own. Instructions with mixing parameters are bundled up together with the source frames into a request object. You’ll be implementing the AVVideoCompositing protocol, which receives the new AVAsynchronousVideoCompositionRequest object, and also implementing the AVVideoCompositionInstruction protocol.

func startVideoCompositionRequest(_ asyncVideoCompositionRequest: AVAsynchronousVideoCompositionRequest!)

Once you have rendered the frame you deliver it with

func finishWithComposedVideoFrame(_ composedVideoFrame: CVPixelBuffer!)

But you can also finish with one of:

func finishCancelledRequest()
func finishWithError(_ error: NSError!)

Choosing Pixel Formats

  • Source Pixel Formats – small subset
    • YUV 8-bit 4:2:0
    • YUV 8-bit 4:4:4
    • YUV 10-bit 4:2:2
    • YUV 10-bit 4:4:4
    • RGB 24-bit
    • BGRA 32-bit
    • ARGB 32-bit
    • ABGR 32-bit

When decoding H.264 video your source pixel format is typically YUV 8-bit 4:2:0.

You may not be able to deal with that format, or with whatever the native format of the source pixels happens to be. You can specify the format your custom video compositor requires with the method sourcePixelBufferAttributes, which should return a dictionary. The key kCVPixelBufferPixelFormatTypeKey should be specified, and it takes an array of possible pixel formats; if you want the compositor to work with a Core Animation video layer then you should provide a single entry in the array with the value kCVPixelFormatType_32BGRA.

This will cause the source frames to be converted into the format required by your custom compositor.

Output Pixel Formats

For the output pixel formats there is also a method requiredPixelBufferAttributesForRenderContext where you specify the formats your custom renderer can provide.

To get hold of a new empty frame to render into, we go back to the request object and ask it for the render context, which contains information about the aspect ratio and size we are rendering to and also the required pixel format. We ask the render context for a new pixel buffer, which comes from a managed pool, and we can then render into it to produce our dissolve.

The Hello World equivalent for a custom compositor

#import <AVFoundation/AVFoundation.h>
#import <CoreVideo/CoreVideo.h>

@interface MyCompositor1 : NSObject <AVVideoCompositing>
@end

@implementation MyCompositor1

// Sources as BGRA please
- (NSDictionary *)sourcePixelBufferAttributes {
    return @{ (id)kCVPixelBufferPixelFormatTypeKey :
                  @[ @(kCVPixelFormatType_32BGRA) ] };
}

// We'll output BGRA
- (NSDictionary *)requiredPixelBufferAttributesForRenderContext {
    return @{ (id)kCVPixelBufferPixelFormatTypeKey :
                  @[ @(kCVPixelFormatType_32BGRA) ] };
}

// Render a frame - the action happens here on receiving a request object.
- (void)startVideoCompositionRequest:(AVAsynchronousVideoCompositionRequest *)request {
    if (request.sourceTrackIDs.count != 2) {
        [request finishWithError:[NSError errorWithDomain:@"MyCompositor1"
                                                      code:-1
                                                  userInfo:nil]];
        return;
    }

    // There'll be an attempt to back the pixel buffers with an IOSurface, which
    // means the pixel data may be in GPU memory.
    CVPixelBufferRef srcPixelsBackground =
        [request sourceFrameByTrackID:[request.sourceTrackIDs[0] intValue]];
    CVPixelBufferRef srcPixelsForeground =
        [request sourceFrameByTrackID:[request.sourceTrackIDs[1] intValue]];
    CVPixelBufferRef outPixels = [[request renderContext] newPixelBuffer];

    // Because we want to manipulate the pixels ourselves, lock the base
    // addresses so the pixel data is accessible from main memory.
    CVPixelBufferLockBaseAddress(srcPixelsForeground, kCVPixelBufferLock_ReadOnly);
    CVPixelBufferLockBaseAddress(srcPixelsBackground, kCVPixelBufferLock_ReadOnly);
    CVPixelBufferLockBaseAddress(outPixels, 0);

    // Calculate the tween value for this frame.
    CMTime renderTime = request.compositionTime;
    CMTimeRange range = request.videoCompositionInstruction.timeRange;
    CMTime elapsed = CMTimeSubtract(renderTime, range.start);
    float tween = CMTimeGetSeconds(elapsed) / CMTimeGetSeconds(range.duration);

    size_t height = CVPixelBufferGetHeight(srcPixelsBackground);
    size_t foregroundBytesPerRow = CVPixelBufferGetBytesPerRow(srcPixelsForeground);
    size_t backgroundBytesPerRow = CVPixelBufferGetBytesPerRow(srcPixelsBackground);
    size_t outBytesPerRow = CVPixelBufferGetBytesPerRow(outPixels);
    const unsigned char *foregroundRow = CVPixelBufferGetBaseAddress(srcPixelsForeground);
    const unsigned char *backgroundRow = CVPixelBufferGetBaseAddress(srcPixelsBackground);
    unsigned char *outRow = CVPixelBufferGetBaseAddress(outPixels);
    for (size_t y = 0; y < height; ++y) {
        // Blend each foreground row over the background row using tween as the
        // foreground opacity, writing the result into outRow. (The row-by-row
        // blending code was hand-waved in the session demo.)
        foregroundRow += foregroundBytesPerRow;
        backgroundRow += backgroundBytesPerRow;
        outRow += outBytesPerRow;
    }

    CVPixelBufferUnlockBaseAddress(srcPixelsForeground, kCVPixelBufferLock_ReadOnly);
    CVPixelBufferUnlockBaseAddress(srcPixelsBackground, kCVPixelBufferLock_ReadOnly);
    CVPixelBufferUnlockBaseAddress(outPixels, 0);

    // Deliver the output frame.
    [request finishWithComposedVideoFrame:outPixels];
    CVPixelBufferRelease(outPixels);
}

@end


Tweening is the parameterisation of the transition from one state to another. When the generated video starts with images from one video track and ends with images from another, as in a dissolve transition, the tween is an opacity ramp whose input is time.


The image above shows the two input video tracks and the opacity ramp. The image below shows the calculation of the tween value once you are 10% of the way through the transition. In this case the output video frame will display the first video at 90% opacity and the second video at 10%.
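
For reference, a small Swift sketch (not from the session) of the same tween calculation that the Objective-C example above performs:

import CoreMedia

// How far through the instruction's time range the current composition time
// is, clamped to the range 0...1.
func tweenFactor(renderTime: CMTime, timeRange: CMTimeRange) -> Float {
    let elapsed = CMTimeSubtract(renderTime, timeRange.start)
    let fraction = CMTimeGetSeconds(elapsed) / CMTimeGetSeconds(timeRange.duration)
    return Float(max(0.0, min(1.0, fraction)))
}

// 10% of the way through the transition the tween is 0.1, so the foreground
// frame is composited at 10% opacity over the background at 90%.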



Instruction properties of the AVVideoCompositionInstruction protocol help the compositor optimise performance.

@protocol AVVideoCompositionInstruction<NSObject>
    @property CMPersistentTrackID passthroughTrackID;
    @property NSArray *requiredSourceTrackIDs;
    @property BOOL containsTweening;
@end

By setting these values appropriately there are performance wins to be had.


Some instructions are simpler than others: they might take just one source and often not even change the frames. For example, in the frames leading up to a transition, the output frames are just the input frames from a particular track. If, in the instruction, you set passthroughTrackID to the id of that track then the compositor will be bypassed.


Use requiredSourceTrackIDs to specify the required tracks and to indicate that we do want the compositor to be called. If we have just a single track but want to modify the contents of each frame in some way then requiredSourceTrackIDs will contain just that single track. If you leave requiredSourceTrackIDs set to nil then that means deliver all frames from all tracks.


Even if the source frames are the same, two static images for example, containsTweening needs to be set to YES if we want a picture-in-picture effect where the smaller image moves within the bigger picture: we have a time-extended source producing time-varying output. If the smaller image doesn’t move and we leave containsTweening set to YES then we are just re-rendering identical output, so instead containsTweening should be set to NO. Then, after the initial frame is rendered, the compositor can optimise by reusing the identical output.

Pixel buffer formats

  • Performance hit converting sources
    • H.264 decodes to YUV 4:2:0 natively.
    • For best performance, work in YUV 4:2:0
  • Output format less critical, display can accept multiple formats, for example:
    • BGRA
    • YUV 4:2:0

The AVCustomEdit example code is available here.

Debugging Compositions

  • Common pitfalls
    • Gaps between segments
      • Results in black frames or hanging onto the last frame.
    • Misaligned track segments
      • Rounding errors when working with CMTime etc.
      • Results in a short gap between the end of one segment & the beginning of the next.
    • Misaligned layer instructions
      • Tracks/layers are rendered in the wrong order.
    • Misaligned opacity/audio ramps
      • Opacity/audio ramps over- or undershoot their final value.
    • Bogus layer transforms
      • Errors in your transformation matrix so layers disappear
      • Or end up outside the boundaries of the output frame

Being able to view the structure of the composition is useful and this is where AVCompositionDebugView comes in.

There is also the composition validation API, which you can adopt to receive callbacks when something in the video composition appears to be incorrect.


Image generation performance

I want random access to individual movie frames, so I’m using the AVAssetImageGenerator class, but for this part of the project generateCGImagesAsynchronously is not appropriate. Clearly performance is not the crucial component here, but at the same time you don’t want to do something that is stupidly slow.

I’d like not to have to hold onto an AVAssetImageGenerator object to use each time I need an image, but instead just create one at the time an image is requested. So I thought I’d find out the penalty of creating an AVAssetImageGenerator object each time.

To compare the performance I added some performance tests and ran them on my i7 MacBook Pro with an SSD and on my iPad Mini 2. I’ve confirmed that the images are generated; see the code at the end.

On my iPad Mini 2 the measure block in performance test 1 took between 0.25 and 0.45 seconds to run, with most results clustering around 0.45 seconds; it was the second run that returned the 0.25 second result. Performance test 2 on the iPad Mini 2 was much more consistent, with times ranging between 0.5 and 0.52 seconds. But reversing the order in which the tests run reverses these results. I’m not sure what to think about this, but in relation to what I’m testing for I feel comfortable that the cost of creating an AVAssetImageGenerator object before generating an image is minimal in comparison to generating the CGImage.

Strangely my MacBook Pro is slower, but doesn’t have the variation observed on the iPad. The measure block in both performance tests takes 1.1 seconds.

Whatever performance difference there is in keeping an AVAssetImageGenerator object around or not is inconsequential.

    func testAVAssetImageGeneratorPerformance1() {
        let options = [
            AVURLAssetPreferPreciseDurationAndTimingKey: true
        ]
        // movieURL is defined elsewhere in the test class.
        let asset = AVURLAsset(URL: movieURL, options: options)
        self.measureBlock() {
            // Create the generator once, then request ten images from it.
            let generator = AVAssetImageGenerator(asset: asset)
            var actualTime: CMTime = CMTimeMake(0, 600)
            for i in 0..<10 {
                let image = generator.copyCGImageAtTime(
                    CMTimeMake(Int64(i * 600 + 30), 600),
                    actualTime: &actualTime, error: nil)
            }
        }
    }

    func functionToTestPerformance(#movieAsset: AVURLAsset, index: Int) -> Void {
        // Create a new generator for every image request.
        let generator = AVAssetImageGenerator(asset: movieAsset)
        var actualTime: CMTime = CMTimeMake(0, 600)
        let image = generator.copyCGImageAtTime(
            CMTimeMake(Int64(index * 600 + 30), 600),
            actualTime: &actualTime, error: nil)
    }

    func testAVAssetImageGeneratorPerformance2() {
        let options = [
            AVURLAssetPreferPreciseDurationAndTimingKey: true
        ]
        let asset = AVURLAsset(URL: movieURL, options: options)
        self.measureBlock() {
            for i in 0..<10 {
                self.functionToTestPerformance(movieAsset: asset, index: i)
            }
        }
    }

CGImageSource behaviour

I need to update a protocol that advertises that an object can create a CGImage.

I want my movie-asset wrapping object to conform to this protocol, but that means I need a way to pass in more information as arguments than either of the two methods of the protocol currently allows. The protocol methods currently take an image index, which objects conforming to the protocol that only have one possible image just ignore.

When requesting images from a movie we need to be able to specify a time in the movie, and also whether we want the image rendered from all video tracks or just some or one. So I think it makes sense to get rid of the image index argument and replace it with a dictionary containing properties relevant to the object providing the image.

This means I’ve been revisiting the behaviour of the ImageIO framework and the CoreFoundation object CGImageSource. Like last time, I find no way to crop whilst creating a CGImage from a CGImageSource. I still find this a big weakness: if you have a very large image and you want just part of the image at the original resolution then you first have to create a CGImage containing the full image and then crop it from there. The QuickTime image importer component allowed you to crop when rendering, so this feels like a loss of useful functionality.

CGImageSource provides two functions for creating a CGImage. These are:

  • CGImageSourceCreateThumbnailAtIndex
  • CGImageSourceCreateImageAtIndex

The CGImageSourceCreateThumbnailAtIndex function provides a way to generate a scaled image up to the dimensions of the original, while CGImageSourceCreateImageAtIndex doesn’t. Both of these functions take an options dictionary as well as the image index. The scaling functionality of CGImageSourceCreateThumbnailAtIndex is limited. To generate a scaled image you need to specify two properties in the options dictionary passed into CGImageSourceCreateThumbnailAtIndex.


let imageSource = CGImageSourceCreateWithURL(...)
let thumbnailOptions = [
    String(kCGImageSourceCreateThumbnailFromImageAlways): true,
    String(kCGImageSourceThumbnailMaxPixelSize): 2272.0
]
let tnCGImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, thumbnailOptions)

The property with key kCGImageSourceCreateThumbnailFromImageAlways with a value of true means that any thumbnail image embedded in the image file is ignored and the generated image comes from the full size image. If you don’t do this and the image file contains a thumbnail image, then the max size you specify is ignored and you get the embedded thumbnail image, which is rarely what you want.

The value of the property with key kCGImageSourceThumbnailMaxPixelSize is the maximum size in pixels of either the width or height of the generated image. There is a limitation in that you can’t scale an image up using this option, only down. If you specify a size greater than both the width and height of the image then the generated CGImage will have the same dimensions as the original.

For very large images (I played with images of 21600 x 10800 pixels), creating a CGImage at full size using CGImageSourceCreateThumbnailAtIndex failed on my iPad Mini 2, whereas CGImageSourceCreateImageAtIndex succeeded; CGImageSourceCreateThumbnailAtIndex successfully created the image on the simulator and OS X. If the options dictionary does not contain the kCGImageSourceThumbnailMaxPixelSize property then CGImageSourceCreateThumbnailAtIndex will create an image at full size up to a maximum size of 5000 pixels on OS X and iOS.

CGImageSourceCreateImageAtIndex takes different dictionary options. Peter Steinberger recommends against setting kCGImageSourceShouldCacheImmediately to true, but that was in 2013 and the situation may have changed; I’ll add an addendum to this when I know for sure. There is little documentation for the kCGImageSourceShouldCache property, so I’d only be guessing as to what it does exactly. On 64-bit systems this value is true by default and false on 32-bit systems. Leaving it at its default value is probably best.

Creating CGImages on OS X from an image with dimensions: 21600 x 10800 using CGImageSourceCreateThumbnailAtIndex

  • 1.87 seconds to create a thumbnail image with a width of 21600 and height 10800 pixels.
  • 0.81 seconds to create a thumbnail image with a width of 10800 and height 5400 pixels.
  • 0.54 seconds to create a thumbnail image with a width of 2800 and height 1400 pixels.

Drawing the scaled down CGImage in OS X

Creating the 2800 wide CGImage and drawing the image to a bitmap context took 0.57 seconds.

Drawing the thumbnail image to the context took 0.010506 seconds the first time and 0.00232 seconds for subsequent draws.

Creating CGImage using CGImageSourceCreateImageAtIndex and drawing the image to a bitmap context on OS X

Creating an image using CGImageSourceCreateImageAtIndex doesn’t allow scaling of the image so we will always get the full size image, in this case 21600 x 10800.

  • Creating the image for the first time took 0.000342 seconds
  • Subsequently creating the image took 0.00006 seconds
  • Creating and drawing the image for the first time took: 1.67 seconds
  • Subsequent image drawing took 0.15 seconds.

Creating the CGImage using CGImageSourceCreateImageAtIndex doesn’t decompress the image data; that doesn’t appear to happen until you attempt to draw the image.

Creating CGImages and drawing in iOS on an iPad Mini 2

The 21600 x 10800 image was too large to render at full scale on the iPad Mini 2; a big enough bitmap context couldn’t be created. The same goes for a 12000 x 12000 image.

Creating and drawing a 2800 x 1400 thumbnail CGImage on the iPad Mini 2 to a bitmap context

  • Creating the thumbnail took 0.7 seconds
  • Drawing the thumbnail image took 0.030532 seconds for the first draw
  • Subsequent draws took 0.0052 seconds
  • Creating and drawing the thumbnail image took 0.71 seconds

The time taken to generate the 2272 x 1704 thumbnail CGImage from the 2272 x 1704 image file is 0.001169 seconds.

Drawing the non thumbnail 2272 x 1704 CGImage to the bitmap context on the iPad Mini took 0.163412 seconds to draw the first time and 0.11 seconds for subsequent draws. If the CGImage has to be generated each time then the time taken is: 0.117 seconds.

Conclusions in relation to using CGImageSource

  • Use CGImageSourceCreateThumbnailAtIndex where possible, but beware of its limitations.
    • You need to know the max size you want (width or height)
    • For very large images CGImageSourceCreateThumbnailAtIndex can fail when creating large CGImages
    • To generate a CGImage at the size of a very large original image use CGImageSourceCreateImageAtIndex instead
    • You can’t scale up
    • The CGImage generated by CGImageSourceCreateThumbnailAtIndex is not cached, so requesting it a second time isn’t any faster
    • Cropping is better done by creating a CGImage with CGImageSourceCreateImageAtIndex and then calling CGImageCreateWithImageInRect (see the sketch below)
    • If you are generating a CGImage that has any dimension greater than 5000 pixels it is better to use CGImageSourceCreateImageAtIndex, especially on iOS
  • Advantages of CGImageSourceCreateThumbnailAtIndex over CGImageSourceCreateImageAtIndex:
    • You can scale the generated CGImage to any size up to the size of the original
    • Drawing of the image is faster as the image data is already decompressed and decoded

CGImageSourceCreateImageAtIndex appears to do very little other than create the CGImage wrapper. The time taken to create the CGImage is very short but drawing from the CGImage is very slow. Creating a CGImage using CGImageSourceCreateThumbnailAtIndex is slower, but drawing afterwards is faster.
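
A rough sketch of the cropping approach recommended above (pre-Swift 3 style; imageURL and the crop rectangle are placeholder values, not from the original post):

import ImageIO
import CoreGraphics

// imageURL is assumed to be an NSURL pointing at the large image file.
if let imageSource = CGImageSourceCreateWithURL(imageURL, nil),
       fullImage = CGImageSourceCreateImageAtIndex(imageSource, 0, nil) {
    // Creating the full-size CGImage is cheap; the pixel data is not decoded
    // until the image is actually drawn.
    let cropRect = CGRectMake(1000.0, 1000.0, 2272.0, 1704.0)
    let croppedImage = CGImageCreateWithImageInRect(fullImage, cropRect)
    // croppedImage is the region of interest at the original resolution.
}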

As well as the objc.io article by @steipete mentioned above, this article by @mattt at @nshipster is also helpful: iOS image resizing techniques.


Getting tracks from an AVAsset

I’ve been playing around with the AVAsset AVFoundation API.

The AVAsset object is at the core of representing an imported movie. An AVAssetTrack is AVFoundation’s representation of a track in a movie. There are multiple ways to get AVAssetTracks from an AVAsset.

You can get a list of all the tracks:

let movie:AVAsset = ...
let tracks:[AVAssetTrack] = movie.tracks

You can get a list of tracks with a specific characteristic, for example a visual characteristic:

let movie:AVAsset = ...
let tracks:[AVAssetTrack] = movie.tracksWithMediaCharacteristic(AVMediaCharacteristicVisual)

Or you can get a list of tracks which have a specific media type, for example audio:

let movie:AVAsset = ...
let tracks:[AVAssetTrack] = movie.tracksWithMediaType(AVMediaTypeAudio)

You can obtain a single AVAssetTrack object if you know its persistent track identifier value.

The persistent track identifier is of type CMPersistentTrackID, which is a 32-bit integer typedef; the invalid track reference kCMPersistentTrackID_Invalid is an anonymous enum with value 0.

Unfortunately the only way to get the track id of a track in an imported movie is by querying an AVAssetTrack object, so the persistent track id is useful when later on you want to reference a track that you have previously identified.

From what I understand AVAssetTrack objects are fairly lightweight, so keeping a list of AVAssetTrack objects is not going to be too much of a drain, but you might still prefer to keep a list of persistent track id values and request an AVAssetTrack object when you need it rather than holding onto a reference to an AVAssetTrack object.

To get a track using the track’s persistent identifier:

let track:AVAssetTrack = movie.trackWithTrackID(2)

Tracks have segments; a segment specifies where one piece of content in a track starts and finishes within the time range of the track. Each segment contains a time mapping between the source and the target. You can get a list of all the segments in a track; this will often be a list of one segment lasting the full length of the track, but that is not necessarily the case.

To get a list of segments from a track:

let segments = track.segments

You can get the segment that corresponds to a specific track time:

let trackTime = CMTimeMake(60000, 600)
let segment = track.segmentForTrackTime(trackTime)
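
Each segment’s timeMapping relates a time range in the source media to a time range in the track’s timeline. A small sketch of inspecting it (pre-Swift 2 style, to match the code above):

// The mapping's source range is in the media's own timeline, the target range
// is in the track's timeline.
if let mapping = segment?.timeMapping {
    println("source start: \(CMTimeGetSeconds(mapping.source.start)), " +
            "duration: \(CMTimeGetSeconds(mapping.source.duration))")
    println("target start: \(CMTimeGetSeconds(mapping.target.start)), " +
            "duration: \(CMTimeGetSeconds(mapping.target.duration))")
}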

I’ve created a gist which is a very simple command line tool written in Swift demonstrating this blog post.


Accessing CoreImage transition filters from the command line

As part of MovingImages I wrote a number of command line tools that take advantage of it. The command line tools are written in Ruby and most of them come with the MovingImages ruby gem, so after installing MovingImages these command line tools are installed. If you haven’t already, you’ll need to download and install MovingImages.

This blog post is about the ‘dotransition’ command line tool. This tool provides access to CoreImage’s transition filters. A CoreImage transition filter takes a source and a target image and provides a way of transitioning from the source image to the target image by generating a sequence of images that represents the transition between the two.

The command line tools all take the “--help” option, which describes how to call the command line tool with the various options, and they all also take the “--verbose” option, which can be useful when you get an error calling the script.

In all the examples I am going to assume that the source image is a JPEG image file in your Pictures folder called “sourceimage.jpg” and that the destination image is an image file in your Pictures folder called “targetimage.jpg”. The generated images are saved in a sub folder of a folder called transition on your desktop. The first example uses the long form of the command line options as they are more descriptive, but it does result in longer commands. The other examples use a mixture of long and short forms, demonstrating that they can be mixed; the filter-specific options are only available in the long form. The ‘dotransition’ command line tool assumes that the source and target images are the same size.

The bar swipe transition uses the CoreImage filter CIBarsSwipeTransition

dotransition --transitionfilter CIBarsSwipeTransition --basename image --outputdir ~/Desktop/transition/barswipe --baroffset 60 --angle 2.0 --width 20 --sourceimage ~/Pictures/sourceimage.jpg --targetimage ~/Pictures/targetimage.jpg --exportfiletype public.jpeg --count 20

I’ll cover the command line options that are common to all the transition filters first. The “--transitionfilter” option specifies the transition filter to use. To get a list of the available transition filters, call the dotransition command line tool like so:

dotransition -l

The ‘dotransition’ command generates a sequence of image files. These images are created in the directory specified after the --outputdir option. If the path to the directory contains a space or other non-standard character then the path will need to be quoted or escaped. To generate the file names ‘dotransition’ takes a --basename option which specifies the beginning of the filename; this is followed by the image sequence number and then the file extension for the file type of the generated image. The --sourceimage option specifies the image file that the transition starts with, and the --targetimage option specifies the image file that the transition ends with. Like the --outputdir option, if the path contains spaces or other non-standard path characters then the path will need to be quoted or escaped. The --exportfiletype option is optional and defaults to public.tiff if not specified. The --count option specifies the number of images in the image sequence to be generated.

The bar swipe transition takes a number of filter-specific properties: --baroffset, --angle, and --width. To determine the filter properties that belong to a specific filter you can request them from the dotransition command line tool.

dotransition --filterproperties CIBarsSwipeTransition

This outputs information about each property. You can use the filter key value to match a property with its equivalent command line option.

The ripple transition filter creates a ripple effect when transitioning from one image to another. The following example assumes that the input images have a width of 2272 and a height of 1704 pixels, and centers the ripple effect on the image.

dotransition --transitionfilter CIRippleTransition -b image -o ~/Desktop/transition/ripple --center 1136,852 --scale=40.0 --width 50 --extent 0,0,2272,1704 -s ~/Pictures/sourceimage.jpg -d ~/Pictures/targetimage.jpg -c 20

The swipe transition replaces the source image with the destination by swiping away the source image.

dotransition --transitionfilter CISwipeTransition -b image -o ~/Desktop/transition/swipe --angle 2.0 --width 150 --color 0.3,0.2,0.8,1.0 --extent 0,0,2272,1704 --opacity 0.7 -s ~/Pictures/sourceimage.jpg -d ~/Pictures/targetimage.jpg -t public.jpeg -c 20

The copy machine transition imitates the action of a photocopier, but replaces the source image with the target as it passes.

dotransition --transitionfilter CICopyMachineTransition -b image -o ~/Desktop/transition/copymachine --angle=-2.0 --width 100 --color 0.8,0.2,0.6,1.0 --extent 0,0,2272,1704 --opacity 0.85 -s ~/Pictures/sourceimage.jpg -d ~/Pictures/targetimage.jpg -c 30

The disintegrate with mask transition takes a mask image which it applies to the transition effect. For best results the mask image should be black and white and the same dimensions as the source and destination images.

dotransition --transitionfilter CIDisintegrateWithMaskTransition -b image -o ~/Desktop/transition/disintegratemask --maskimage ~/Pictures/maskimage.jpg --shadowradius 80 --shadowdensity 0.8 --shadowoffset 10,25 -s ~/Pictures/sourceimage.jpg -d ~/Pictures/targetimage.jpg -c 20

The flash transition filter generates a flash image over the source image which is then replaced by the target image.

dotransition --transitionfilter CIFlashTransition -b image -o ~/Desktop/transition/flash --extent 0,0,2272,1704 --color 0.0,0.2,0.8,1.0 --striationcontrast 2.0 --fadethreshold 0.8 --striationstrength 1.0 --center 1136,852 -s ~/Pictures/sourceimage.jpg -d ~/Pictures/targetimage.jpg -c 40 -t public.jpeg

The dissolve transition is the simplest of the transition filters: the source image is gradually replaced with the target image. This example shows exporting the images as a PNG sequence. Exporting as PNG files is considerably slower than JPEG or TIFF.

dotransition --transitionfilter CIDissolveTransition -b image -o ~/Desktop/transition/dissolve --exportfiletype public.png -s ~/Pictures/sourceimage.jpg -d ~/Pictures/targetimage.jpg -c 20

The mod transition filter creates oblate oval shapes that move and grow, gradually replacing the source image with the target.

dotransition --transitionfilter CIModTransition -b image -o ~/Desktop/transition/mod  -s ~/Pictures/sourceimage.jpg -d ~/Pictures/targetimage.jpg -c 20 --angle 3.14159 --radius 340 --compression 300 --center 1136,852

The page curl transition filter creates a page turning effect to go from the source to the target image.

dotransition --transitionfilter CIPageCurlTransition -b image -o ~/Desktop/transition/pagecurl  -s ~/Pictures/sourceimage.jpg -d ~/Pictures/targetimage.jpg -c 20 --angle 3.2 --radius 250 --extent 0,0,2272,1704 --backsideimage ~/Pictures/sourceimage.jpg

The page curl with shadow transition filter is very similar to the above but generates the shadow of the page being turned differently.

dotransition --transitionfilter CIPageCurlWithShadowTransition -b image -o ~/Desktop/transition/pagecurlwithshadow  -s ~/Pictures/sourceimage.jpg -d ~/Pictures/targetimage.jpg -c 20 --angle 3.2 --radius 350 --extent 0,0,2272,1704 --shadowamount 0.7 --shadowsize 250 --backsideimage ~/Pictures/sourceimage.jpg

Information about all the scripts that work with MovingImages.

And that’s it, please enjoy.


XCTest with Xcode 6

These are my notes from the 2014 WWDC Session 414 Testing with Xcode 6

What is covered

  • Benefits of testing
  • Getting started and adding tests to old projects
  • Asynchronous testing
  • Catch performance regressions


Why Test?

  • Find bugs
  • Hate regressions
  • Capture performance changes
  • Codify requirements – tests help to codify the range of functionality of code.
  • The tests themselves will tell other engineers what expected behaviour is.

Getting started

  • Add tests to an existing project
  • In a new project
    • You can write tests first
    • Write code that passes those tests
    • This is known as test driven development

Continuous integration workflow

  • Start off in a green state
  • Green state represents known quality
  • Add features, fix bugs, add new tests
    • At some point a test will fail
    • This will be flagged immediately

Xcode testing with Framework XCTest

  • Expectations passes/failures
  • To create tests subclass XCTestCase
  • Implement test methods
  • Test methods return void, the method name starts with “test”, and the rest of the name should describe the purpose of the test
    • -(void)testThatMyFunctionWorks
  • Use assertion APIs to report failures
    • XCTAssertEqual(value, expectedValue); // Compares two scalar values
    • On a failure, outputs a string and reports the failure to the test harness.
  • Xcode has test targets for managing tests
  • Test targets build bundles
    • They contain the compiled test code
    • And any resources you use in the tests
    • These go in the test bundle, not in the application
    • Test targets are automatically included in new projects
    • You can have as many test targets as you want, so you can break tests up into groups.
  • Test bundles
  • Test hosting
    • Tests are hosted in an executable process
    • The tests are usually injected into your app.
    • That means the tests have available all the code in the application.
    • Alternatively you can run the tests in a hosting process provided by Xcode
    • When you go to load resources for tests the resources are not in the main bundle
      • Don't do [NSBundle mainBundle]
      • Instead: [NSBundle bundleForClass:[MyTest class]]
  • Running tests
    • Simplest way is Command-U
      • Takes the active Scheme and runs the tests for that scheme
    • In the editor window gutter there are also run-test buttons. You can run a single test or all the tests in a test class
    • There is a similar set of buttons in the test Navigator
    • You can also run tests using xcodebuild
      • You can create your own automation setup
      • xcodebuild test -project ~/MyApp.xcodeproj -scheme MyApp -destination 'platform=iOS,name=iPhone'
    • Where are the results displayed
      • The test navigator where you’ll have green/red checkmarks against each test
      • In the issue navigator
        • You’ll get the failure, and the reason for the failure.
      • In the Source editor gutter.
      • In the test reports, which show all tests that are run and associated logs.
    • Demo – Add tests to an existing project
    • One major point: keep tests simple so it is clear why a test failed.
      • E.g. if you’re testing the parsing of data from the internet, you don’t want to test internet access.
      • Download & save the data and add the saved data to the test target.
      • Then load the data from a file.
  • Asynchronous Tests
    • More and more APIs are asynchronous
      • Block invocations
      • Delegate callbacks
      • Make network requests
      • Background processing
    • Unit tests run synchronously so this creates a challenge
    • XCTest adds an “expectation” object API to Xcode 6 testing
      • The expectation object describes events that you expect to happen at some point in the future
      • - (XCTestExpectation *)expectationWithDescription:(NSString *)description
    • XCTestCase waits for expectations to be fulfilled.
      • - (void)waitForExpectationsWithTimeout:(NSTimeInterval)timeout handler:(XCWaitCompletionHandler)handlerOrNil
      • This will wait until all the expectations have been fulfilled or the timeout interval expires
      • Testing opening of a document asynchronously:
- (void)testDocumentOpening {
  XCTestExpectation *expectation = [self expectationWithDescription:@"open doc"];
  IDocument *doc = ...;
  [doc openWithCompletionHandler:^(BOOL success) {
    [expectation fulfill];
  }];
  [self waitForExpectationsWithTimeout:5.0 handler:nil];
}

See the demo code for writing an asynchronous test; basically, look for the earthquake parser.

Performance testing

  • It can be easy to introduce performance regressions in code.
  • Catching these regressions is difficult.
  • Performance testing allows you to automate this
  • New APIs to measure performance
  • New Xcode UI to interpret results
    • Can profile tests with instruments
    • Use the new measure block API `- (void)measureBlock:(void (^)(void))block;`
    • Takes a block of code, runs it 10 times, and shows the results in Xcode (see the sketch at the end of these notes)


  • See the demo code for testing performance. This is the Mac version of the earthquake parser.
    • Call -measureBlock: to detect performance regressions
    • View results in Source Editor and Test Report
    • Profile tests with instruments.
    • Setting Baselines – needs a fixed point to compare against
    • Standard deviation – how much variation is allowed before a performance test is considered to have failed
    • Measuring precisely
    • Baseline is the Average from a previous run
      • Set Baseline Average to detect regressions
      • Fail if > 10% increase from Baseline Average
      • Regressions less than 0.1 seconds are ignored – to remove false positives
    • Baselines are stored in source
    • Baselines are per-device configuration
    • Include device model, CPU, and OS
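
As referenced above, a minimal sketch of a performance test using measureBlock (a hypothetical test, written in pre-Swift 2 style to match the code earlier in these posts):

import XCTest

class SortingPerformanceTests: XCTestCase {
    func testSortPerformance() {
        // Build some work to measure: an array in reverse order.
        var numbers = [Int]()
        for i in 0..<100000 {
            numbers.append(100000 - i)
        }
        // measureBlock runs the closure ten times and reports the average,
        // which Xcode compares against the stored baseline.
        self.measureBlock() {
            let sortedNumbers = sorted(numbers)
            XCTAssertEqual(sortedNumbers.count, numbers.count)
        }
    }
}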

