Archive for category CoreFoundation

Drawing rotated text with CoreText is broken

Here is a video where the text is being rotated at different angles. You can see from the flickering that at certain angles the text just doesn’t get drawn.

 

And here is a video where I’ve created a bitmap of the text drawn unrotated and then drawn that bitmap rotated at different angles, which works:

 

I am drawing the Core Text within a path rather than from a point, and I think that is the issue. I have seen in other situations that when drawing multi-line wrapping text into a path that is a column, if the drawing happens while the context is rotated then at certain angles the line wrapping for the first line of text behaves oddly.

I get exactly the same behaviour on iOS as I do on OS X.

This is definitely an issue with drawing the text within a path; drawing the text from a point works as expected.
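For reference, here is a rough sketch of the workaround shown in the second video, written in current Swift syntax rather than the code actually used for the videos: lay the text out into a path and draw it into an unrotated bitmap context, then draw the resulting CGImage rotated. The bitmap size and path rectangle are arbitrary values for illustration.

Swift

import CoreGraphics
import CoreText
import Foundation

// Draw the attributed string into an unrotated bitmap, framed by a rectangular path.
func makeTextImage(_ text: NSAttributedString, width: Int, height: Int) -> CGImage? {
    guard let bitmap = CGContext(data: nil, width: width, height: height,
                                 bitsPerComponent: 8, bytesPerRow: 0,
                                 space: CGColorSpaceCreateDeviceRGB(),
                                 bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    let framesetter = CTFramesetterCreateWithAttributedString(text as CFAttributedString)
    let path = CGPath(rect: CGRect(x: 0, y: 0, width: width, height: height), transform: nil)
    let frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, nil)
    CTFrameDraw(frame, bitmap)   // the text is drawn with no rotation applied
    return bitmap.makeImage()
}

// Draw the pre-rendered text image into a destination context, rotated about a centre point.
func draw(_ image: CGImage, in context: CGContext, rotatedBy angle: CGFloat, around centre: CGPoint) {
    context.saveGState()
    context.translateBy(x: centre.x, y: centre.y)
    context.rotate(by: angle)
    let size = CGSize(width: image.width, height: image.height)
    context.draw(image, in: CGRect(x: -size.width / 2, y: -size.height / 2,
                                   width: size.width, height: size.height))
    context.restoreGState()
}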


Image generation performance

I want random access to individual movie frames, so I’m using the AVAssetImageGenerator class, but for this part of the project generateCGImagesAsynchronously is not appropriate. Clearly performance is not the crucial component here, but at the same time you don’t want to do something that is stupidly slow.

I’d like to not have to hold onto an AVAssetImageGenerator object to use each time I need an image, but instead create one at the time an image is requested. So I thought I’d find out the penalty of creating an AVAssetImageGenerator object each time.

To compare the performance I added some performance tests and ran them on my i7 MBP with an SSD and on my iPad Mini 2. I’ve confirmed that the images are generated. See the code at the end.

On my iPad Mini 2 the measure block in performance test 1 took between 0.25 and 0.45 seconds to run, with most results clustering around 0.45 seconds; it was the second run that returned the 0.25 second result. Performance test 2 on the iPad Mini 2 was much more consistent, with times ranging between 0.5 and 0.52 seconds. But reversing the order in which the tests run reverses these results. I’m not sure what to make of this, but in relation to what I’m testing for I feel comfortable that the cost of creating an AVAssetImageGenerator object before generating an image is minimal in comparison to the cost of generating the CGImage.

Strangely my MBP is slower, but doesn’t show the variation observed on the iPad. The measure block in both performance tests takes 1.1 seconds.

Whatever performance difference there is in keeping an AVAssetImageGenerator object around or not is inconsequential.

    func testAVAssetImageGeneratorPerformance1() {
        let options = [
            AVURLAssetPreferPreciseDurationAndTimingKey: true,
            AVURLAssetReferenceRestrictionsKey:
                AVAssetReferenceRestrictions.RestrictionForbidNone.rawValue
        ]
        
        let asset = AVURLAsset(URL: movieURL, options: options)!
        self.measureBlock() {
            // Creating the generator inside the measure block includes its
            // creation cost in the measurement.
            let generator = AVAssetImageGenerator(asset: asset)
            var actualTime: CMTime = CMTimeMake(0, 600)
            for i in 0..<10 {
                let image = generator.copyCGImageAtTime(
                    CMTimeMake(Int64(i * 600 + 30), 600),
                    actualTime: &actualTime, error: nil)
            }
        }
    }
    
    // Creates a new generator and grabs a single frame; called from test 2.
    func functionToTestPerformance(#movieAsset: AVURLAsset, index: Int) -> Void {
        let generator = AVAssetImageGenerator(asset: movieAsset)
        var actualTime: CMTime = CMTimeMake(0, 600)
        let image = generator.copyCGImageAtTime(
            CMTimeMake(Int64(index * 600 + 30), 600),
            actualTime: &actualTime, error: nil)
    }
    
    func testAVAssetImageGeneratorPerformance2() {
        let options = [
            AVURLAssetPreferPreciseDurationAndTimingKey: true,
            AVURLAssetReferenceRestrictionsKey:
                AVAssetReferenceRestrictions.RestrictionForbidNone.rawValue
        ]
        
        let asset = AVURLAsset(URL: movieURL, options: options)!
        self.measureBlock() {
            // A new generator is created for every frame inside
            // functionToTestPerformance.
            for i in 0..<10 {
                self.functionToTestPerformance(movieAsset: asset, index: i)
            }
        }
    }


CGImageSource behaviour

I need to update a protocol that advertises that an object can create a CGImage.

I want to have my movie asset wrapping object conform to this protocol, but it means I need a way to pass in more information as arguments than either of the two methods of the protocol allows. The protocol methods currently take an image index, which objects conforming to the protocol that have only one possible image simply ignore.

When requesting images from a movie we need to be able to specify a time in the movie, and also whether we want the image rendered from all video tracks or just some or one of them. So I think it makes sense to get rid of the image index argument and replace it with a dictionary containing properties relevant to the object providing the image.
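To make that concrete, here is a hypothetical sketch of the shape I have in mind, in current Swift syntax. The protocol and key names are made up for illustration and are not the actual protocol being updated.

Swift

import CoreGraphics

// Hypothetical option keys a movie-backed image provider might understand.
enum ImageProviderOption {
    static let frameTime = "frameTime"   // the time in the movie to grab a frame from
    static let trackIDs = "trackIDs"     // the video tracks to render, or absent for all tracks
}

// Hypothetical replacement protocol: an options dictionary instead of an image index.
protocol CGImageProvider {
    func createCGImage(options: [String: Any]) -> CGImage?
}

// An object with only one possible image just ignores the options.
struct StillImageProvider: CGImageProvider {
    let image: CGImage
    func createCGImage(options: [String: Any]) -> CGImage? {
        return image
    }
}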

This means I’ve been revisiting the behaviour of the ImageIO framework and the CoreFoundation object CGImageSource. Like last time, I find no way to crop whilst creating a CGImage from a CGImageSource. I still find this a big weakness: if you have a very large image and you want just part of it at the original resolution, you first have to create a CGImage containing the full image and then crop from there. The QuickTime image importer component allowed you to crop when rendering, so this feels like a loss of useful functionality.

CGImageSource provides two functions for creating a CGImage. These are:

  • CGImageSourceCreateThumbnailAtIndex
  • CGImageSourceCreateImageAtIndex

The CGImageSourceCreateThumbnailAtIndex function provides a way to generate a scaled image up to the dimensions of the original, while CGImageSourceCreateImageAtIndex doesn’t. Both functions take an options dictionary as well as the image index. The scaling functionality of CGImageSourceCreateThumbnailAtIndex is limited: to generate a scaled image you need to specify two properties in the options dictionary passed to CGImageSourceCreateThumbnailAtIndex.

Swift

let imageSource = CGImageSourceCreateWithURL(...)
let thumbnailOptions = [
    String(kCGImageSourceCreateThumbnailFromImageAlways): true,
    String(kCGImageSourceThumbnailMaxPixelSize): 2272.0
]
let tnCGImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, thumbnailOptions)

The property with key kCGImageSourceCreateThumbnailFromImageAlways set to true means that any thumbnail image embedded in the image file is ignored and the generated image comes from the full size image. If you don’t do this and the image file contains a thumbnail, then the max size you specify is ignored and you get the embedded thumbnail image, which is rarely what you want.

The value of the kCGImageSourceThumbnailMaxPixelSize property is the maximum size in pixels of either the width or height of the generated image. There is a limitation in that you can only scale an image down with this option, not up. If you specify a size greater than both the width and height of the image then the generated CGImage will have the same dimensions as the original.

For very large images (I played with images of 21600 x 10800 pixels), creating a CGImage at full size using CGImageSourceCreateThumbnailAtIndex failed on my iPad Mini 2, whereas CGImageSourceCreateImageAtIndex succeeded; CGImageSourceCreateThumbnailAtIndex did successfully create the image on the simulator and on OS X. If the options dictionary does not contain the kCGImageSourceThumbnailMaxPixelSize property then CGImageSourceCreateThumbnailAtIndex will create an image at full size, up to a maximum of 5000 pixels, on both OS X and iOS.

CGImageSourceCreateImageAtIndex takes different dictionary options. Peter Steinberger recommends against setting kCGImageSourceShouldCacheImmediately to true, but that was in 2013 and the situation may have changed; I’ll add an addendum to this when I know for sure. There is little documentation for the kCGImageSourceShouldCache property, so I’d only be guessing as to what it does exactly. Its value defaults to true on 64-bit systems and false on 32-bit systems. Leaving it at its default value is probably best.

Creating CGImages on OS X from an image with dimensions: 21600 x 10800 using CGImageSourceCreateThumbnailAtIndex

  • 1.87 seconds to create a thumbnail image with a width of 21600 and height 10800 pixels.
  • 0.81 seconds to create a thumbnail image with a width of 10800 and height 5400 pixels.
  • 0.54 seconds to create a thumbnail image with a width of 2800 and height 1400 pixels.

Drawing the scaled down CGImage in OS X

Creating the 2800 wide CGImage and drawing the image to a bitmap context took 0.57 seconds.

Drawing the thumbnail image to the context took 0.010506 seconds the first time and 0.00232 seconds for subsequent draws.

Creating CGImage using CGImageSourceCreateImageAtIndex and drawing the image to a bitmap context on OS X

Creating an image using CGImageSourceCreateImageAtIndex doesn’t allow scaling, so we always get the full size image, in this case 21600 x 10800.

  • Creating the image for the first time took 0.000342 seconds
  • Subsequently creating the image took 0.00006 seconds
  • Creating and drawing the image for the first time took: 1.67 seconds
  • Subsequent image drawing took 0.15 seconds.

Creating the CGImage using CGImageSourceCreateImageAtIndex doesn’t decompress the image data; that doesn’t appear to happen until you attempt to draw the image.
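A sketch of the sort of code behind these timings, in current Swift syntax; the file path is a placeholder. Creating the CGImage is quick, and the decompression cost only shows up when the image is drawn into a bitmap context.

Swift

import CoreGraphics
import ImageIO
import Foundation

let url = URL(fileURLWithPath: "/path/to/largeimage.jpg")   // placeholder path

guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
      let image = CGImageSourceCreateImageAtIndex(source, 0, nil)   // fast: no decode yet
else { fatalError("Couldn't create the image source or image") }

// Drawing is where the image data actually gets decompressed.
let context = CGContext(data: nil, width: image.width, height: image.height,
                        bitsPerComponent: 8, bytesPerRow: 0,
                        space: CGColorSpaceCreateDeviceRGB(),
                        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
context.draw(image, in: CGRect(x: 0, y: 0, width: image.width, height: image.height))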

Creating CGImages and drawing in iOS on an iPad Mini 2

The 21600 x 10800 image was too large to render at full scale on the iPad Mini 2; I couldn’t create a bitmap context big enough. The same goes for the 12000 x 12000 image.

Creating and drawing a 2800 x 1400 thumbnail CGImage on the iPad Mini 2 to a bitmap context

  • Creating the thumbnail took 0.7 seconds
  • Drawing the thumbnail image took 0.030532 seconds for the first draw
  • Subsequent draws took 0.0052 seconds
  • Creating and drawing the thumbnail image took 0.71 seconds

The time taken to generate the 2272 x 1704 thumbnail CGImage from the 2272 x 1704 image file is 0.001169 seconds.

Drawing the non-thumbnail 2272 x 1704 CGImage to the bitmap context on the iPad Mini 2 took 0.163412 seconds the first time and 0.11 seconds for subsequent draws. If the CGImage has to be generated each time then the time taken is 0.117 seconds.

Conclusions in relation to using CGImageSource

  • Use CGImageSourceCreateThumbnailAtIndex where possible but beware of its limitations.
    • You need to know the max size you want (width or height)
    • For very large images CGImageSourceCreateThumbnailAtIndex can fail when creating large CGImages
    • To generate a CGImage at the size of a very large original image use CGImageSourceCreateImageAtIndex instead
    • You can’t scale up
    • The CGImage generated by CGImageSourceCreateThumbnailAtIndex is not cached so requesting it a second time isn’t any faster
    • Cropping is better done by creating a CGImage using CGImageSourceCreateImageAtIndex and then calling CGImageCreateWithImageInRect (see the sketch after this list)
    • If you are generating a CGImage that has any dimension greater than 5000 pixels it is better to use CGImageSourceCreateImageAtIndex especially on iOS
  • Advantages of using CGImageSourceCreateThumbnailAtIndex over CGImageSourceCreateImageAtIndex are:
    • You can scale the generated CGImage to any size up to the size of the original
    • Drawing of the image is faster as the image data is uncompressed and decoded
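Here is the cropping approach from the list above, sketched in current Swift syntax, where cropping(to:) is the Swift spelling of CGImageCreateWithImageInRect. The URL and crop rectangle are placeholders.

Swift

import CoreGraphics
import ImageIO
import Foundation

let url = URL(fileURLWithPath: "/path/to/largeimage.jpg")           // placeholder
let cropRect = CGRect(x: 4000, y: 2000, width: 1600, height: 900)   // placeholder

guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
      let fullImage = CGImageSourceCreateImageAtIndex(source, 0, nil),
      // cropping(to:) is CGImageCreateWithImageInRect in Swift.
      let croppedImage = fullImage.cropping(to: cropRect)
else { fatalError("Couldn't create or crop the image") }
print("Cropped image is \(croppedImage.width) x \(croppedImage.height)")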

CGImageSourceCreateImageAtIndex appears to do very little other than create the CGImage wrapper. The time taken to create the CGImage is very short, but drawing from that CGImage is slow. Creating a CGImage using CGImageSourceCreateThumbnailAtIndex is slower, but drawing it afterwards is faster.

As well as the objc.io article by @steipete mentioned above, this article by @mattt at @nshipster is also helpful: iOS image resizing techniques.


Thinking about my tests

I’ve installed Yosemite and of course the first thing I did was to run my tests.

Almost every test failed. The generated images are all different. They look the same to my poor eyesight, but pixel values can be quite different: the compare tolerance had to be increased from 0 to 26* for an image to be identified as the same. I had previously only needed to do this when comparing images created from windows on different monitors. I think perhaps I need to have a think about exactly what it is I’m testing. These tests have saved me a lot of time and given me confidence that I’ve not been breaking stuff, but having so many break with an OS upgrade doesn’t help.
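The comparison the tests rely on amounts to something like the sketch below, in current Swift syntax rather than my actual test code: walk the two bitmaps and treat them as the same image if no 8-bit channel value differs by more than a tolerance.

Swift

import CoreGraphics

// Returns true if every byte of two same-sized 8-bit bitmap contexts differs by
// no more than `tolerance` (a tolerance of 0 means the bitmaps must be identical).
func imagesMatch(_ a: CGContext, _ b: CGContext, tolerance: Int) -> Bool {
    guard a.width == b.width, a.height == b.height, a.bytesPerRow == b.bytesPerRow,
          let aData = a.data, let bData = b.data else { return false }
    let count = a.bytesPerRow * a.height
    let aBytes = aData.assumingMemoryBound(to: UInt8.self)
    let bBytes = bData.assumingMemoryBound(to: UInt8.self)
    for i in 0..<count where abs(Int(aBytes[i]) - Int(bBytes[i])) > tolerance {
        return false
    }
    return true
}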

For now, the failure of the tests, beyond the image generation differences described above, has informed me about the following changes to ImageIO and the CoreImage filters.

Information returned about functionality provided by ImageIO and CoreImage

ImageIO can now import three new formats: “public.pbm”, “public.pvr”, “com.apple.rjpeg”
ImageIO has lost one import format: “public.xbitmap-image”

I’ve no idea what these formats are and I’ve been unsuccessful at finding information about them.

ImageIO has added export formats: “public.pbm”, “public.pvr”, “com.apple.rjpeg”

Apple has added these new CoreImage filters:

CIAccordionFoldTransition, CIAztecCodeGenerator, CICode128BarcodeGenerator, CIDivideBlendMode, CILinearBurnBlendMode, CILinearDodgeBlendMode, CILinearToSRGBToneCurve, CIMaskedVariableBlur, CIPerspectiveCorrection, CIPinLightBlendMode, CISRGBToneCurveToLinear, CISubtractBlendMode

There are minor configuration or filter property changes to the filters listed below with a brief description of the change:

  • CIBarsSwipeTransition inputAngle given updated values for default and max. Identity attributes removed for inputWidth and inputBarOffset.
  • CIVignetteEffect inputIntensity slider min changed from 0 to -1.
  • CIQRCodeGenerator has spaces added to description of one property, and a description added for another.
  • CILanczosScaleTransform has a fix for the filter display name.
  • CIHighlightShadowAdjust inputRadius has minimum slider value changed from 1 to 0.
  • CICMYKHalftone inputWidth attribute minimum changed from 2 to -2. inputSharpness attribute type is CIAttributeTypeDistance not CIAttributeTypeScalar.
  • CICircleSplashDistortion inputRadius has a new identity attribute with value 0.1
  • CIBumpDistortionLinear inputScale, inputRadius and inputCenter given slightly more rational default values.
  • CIBumpDistortion inputScale, and inputRadius are given slightly more rational defaults.

*This is comparing images created from an 8-bit-per-color-component bitmap context. So out of a range of 256 possible values, images generated on Mavericks compared to ones generated on Yosemite differ by up to 26 of those 256 values. That’s huge.

Core Image Filter Rendering. Performance & color profiles

The Apple documentation for rendering a Core Image filter chain notes that allowing the filter chain to render in the generic linear color space is faster. If you need better performance and are willing to trade it off against better color matching, then letting the filter chain render in the generic linear color space should help.

I thought I’d better look at the impact of this, both on performance and on color matching. I also wanted to see what difference it made whether the Core Graphics context that the filter chain rendered to was created with an sRGB color profile or a Generic Linear RGB profile when the context bitmap was saved out to an image file.
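In code, the two knobs being compared boil down to something like the sketch below, in current Swift syntax rather than my actual test code: the CIContext’s working color space and software renderer option, with the color space of the destination CGContext as the other variable.

Swift

import CoreImage
import CoreGraphics

// Build a CIContext that renders into a destination CGContext. The working color
// space and the software renderer flag are the variables being compared.
func makeRenderContext(destination: CGContext,
                       linearWorkingSpace: Bool,
                       softwareRenderer: Bool) -> CIContext {
    let workingSpace = linearWorkingSpace
        ? CGColorSpace(name: CGColorSpace.genericRGBLinear)!
        : CGColorSpace(name: CGColorSpace.sRGB)!
    return CIContext(cgContext: destination, options: [
        .workingColorSpace: workingSpace,
        .useSoftwareRenderer: softwareRenderer
    ])
}

// Render the filter chain's output image into the destination context.
func render(_ filterOutput: CIImage, with context: CIContext, into rect: CGRect) {
    context.draw(filterOutput, in: rect, from: filterOutput.extent)
}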

All the tests were done on my laptop with the following configuration:

OS: Mavericks 10.9.2
System information: MacBookPro non retina, model: MacBookPro9,1
Chipset Model:	NVIDIA GeForce GT 650M 500MByte.
Chipset Model:	Intel HD Graphics 4000
A 512GByte SSD, 16GByte RAM.

I installed the gfxCardStatus tool some time ago, which allows me to manually switch which graphics card to use and also informs me when the system automatically changes which card is in use. I used to get changes reported regularly, but after one of the Mavericks updates this happened much less. After that update the only consistent way to get the discrete card switched on automatically by the system was to have an external monitor plugged in. I think the OS is trying much harder to keep the discrete graphics card turned off. I have the NSSupportsAutomaticGraphicsSwitching key in my Info.plist set to YES. I have tried setting the value to NO, and if I run the tests then, as long as software rendering is not specified, I’m informed that the system has turned the discrete graphics card on, but the Core Image filter render performance is still poor.

The consequence is that I’m not really sure the discrete graphics card is being used for these tests. Perhaps I’d get different results as to whether GPU rendering or software rendering is faster if I had a more complex filter chain; what I might be seeing here is the time needed to push the data to the graphics card and then pull it back dominating the timing results.

First up, when comparing images where the only difference in how they were generated is whether they were rendered to a CGContext with an sRGB profile or a Generic Linear RGB profile, the images look identical when I view them in Preview. The reported profiles are different though: the image generated from a context with Generic Linear RGB has a reported profile of Generic HDR profile, while the image from a context with an sRGB profile has a reported profile of sRGB IEC61966-2.1.

When the filter chain has the straighten filter rotating the image 180 degrees, the colors of the output image are exactly the same as the input image when viewed in Preview, no matter the options used to generate the output image.

When the filter chain has the box blur filter applied with a radius of 10 pixels, the image rendered in the Generic Linear RGB profile is lighter than the one rendered using the sRGB profile when viewing the output images in Preview. The image rendered using sRGB looks to better match the original colors of the image; the Generic Linear RGB profile appears to lighten it. The color change is not large and would probably be acceptable for real-time rendering purposes.

Setting kCIContextUseSoftwareRenderer to YES or NO when creating the CIContext makes no difference in terms of the color changes.

However I get the opposite of what I’d expect with speed.

Asking the filter chain with a CIBoxBlur filter of radius 10 to render 200 times to a Core Graphics context with an sRGB color profile:

Software render using sRGB profile: 4.1 seconds
Software render using Linear Generic RGB profile: 5.3 seconds
GPU render using sRGB profile: 7.0 seconds
GPU render using Linear Generic RGB profile: 7.5 seconds

If I create a Core Graphics context with a Generic Linear RGB color profile then:

Software render using sRGB profile: 4.0 seconds
Software render using Linear Generic RGB profile: 5.3 seconds
GPU render using sRGB profile: 7.3 seconds
GPU render using Linear Generic RGB profile: 7.7 seconds
These results are completely 180º turned around from the results I’d expect. If I were to accept them as unquestioned truth then I’d always just work using the sRGB profile, do all rendering in software, and not worry about using the GPU unless I needed to offload work from the CPU.

A later observation (Friday 2nd May 2014): when drawing text into a bitmap context while running off battery power, I’m informed that the system has switched temporarily to the discrete graphics card, and then informed soon after that it has switched back.


SSD versus HDD, Movie Frame Grabs and the importance of profiling

There was an e-mail to Apple’s cocoa-dev mailing list that provoked a bit of discussion. The discussion thread starts with this e-mail to cocoa-dev by Trygve.

Basically Trygve wanted to get better performance from his code for taking frame grabs from movies and drawing those frame grabs as thumbnails to what I call a cover sheet. He was using NSImage to do the drawing and complained that, based on profiling, his code was spending 90% of its time in a method called drawInRect.

Read the rest of this entry »


Creating GIF animations using CoreGraphics. Quartz.

I’ve had far more trouble getting the generation of GIF animations to work satisfactorily than I thought was necessary, so I decided to share my conclusions and hopefully you’ll waste less time than I did. I’ve provided source code for a command line tool that you can run to generate GIF animations.

The source is provided as a gist.

My major problem was that there is little or no documentation about which properties relate to the GIF file and which to individual frames. Adding an UnclampedDelayTime property to the GIF properties for a frame is pointless: it is ignored. It is a property you can read from an individual frame of an already generated GIF animation, but not one you can set when adding a frame.
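You can see the UnclampedDelayTime property when reading frames back from an existing GIF, which is what makes its absence on the writing side confusing. A small sketch in current Swift syntax; the file URL is a placeholder.

Swift

import ImageIO
import Foundation

let existingGIF = URL(fileURLWithPath: "/tmp/animation.gif")   // placeholder
if let source = CGImageSourceCreateWithURL(existingGIF as CFURL, nil),
   let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [String: Any],
   let gifProperties = properties[kCGImagePropertyGIFDictionary as String] as? [String: Any] {
    // Readable when inspecting a frame, but ignored if supplied when adding a frame.
    print(gifProperties[kCGImagePropertyGIFUnclampedDelayTime as String] ?? "no unclamped delay")
    print(gifProperties[kCGImagePropertyGIFDelayTime as String] ?? "no delay")
}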

The other issue is that CoreGraphics (Core Graphics, Quartz) limits the values you can specify for the delay time. Any frame delay time greater than 0.1 seconds needs to be a multiple of 0.5 seconds (so 0.5, 1.0, 1.5, 2.0 … etc.); if it isn’t, the frame delay time is set to 0.1 seconds. The frame delay time also cannot be set to less than 0.1 seconds. None of this is covered in the ImageIO documentation; to work it out I needed to write this tiny command line tool, as I was confused about the results I was getting.
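For reference, the basic shape of writing an animated GIF with ImageIO is sketched below in current Swift syntax (this is not the source of the command line tool linked above): the loop count goes in the file-level properties set on the destination, while the delay time goes in the per-frame properties passed with each image. The URL, frame images and 0.5 second delay are placeholders.

Swift

import ImageIO
import CoreGraphics
import CoreServices   // for kUTTypeGIF on OS X
import Foundation

let gifURL = URL(fileURLWithPath: "/tmp/animation.gif")   // placeholder
let frames: [CGImage] = []                                // the frame images would go here

guard let destination = CGImageDestinationCreateWithURL(
    gifURL as CFURL, kUTTypeGIF, frames.count, nil)
else { fatalError("Couldn't create the image destination") }

// File-level GIF properties: the loop count (0 means loop forever).
let fileProperties = [
    kCGImagePropertyGIFDictionary as String: [
        kCGImagePropertyGIFLoopCount as String: 0
    ]
]
CGImageDestinationSetProperties(destination, fileProperties as CFDictionary)

// Per-frame GIF properties: the delay time, subject to the clamping described above.
let frameProperties = [
    kCGImagePropertyGIFDictionary as String: [
        kCGImagePropertyGIFDelayTime as String: 0.5
    ]
]
for frame in frames {
    CGImageDestinationAddImage(destination, frame, frameProperties as CFDictionary)
}
if !CGImageDestinationFinalize(destination) {
    print("Failed to write the GIF animation")
}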

I don’t add the functionality for adding customised colour tables to the GIF animation, but to see how to do that you can read this Stack Overflow discussion (custom gif colour table).

 
