Improvement to Trevor Harmon’s UIImage+Resize.m

from the “Resize a UIImage the right way” post on Trevor’s Bike Shed
Trevor says:
December 24, 2009 at 9:18 pm
True; I didn’t bother with handling the imageRotation setting in croppedImage. I changed the method to include a source code comment explaining this.
If anyone would like to contribute an improved croppedImage function that fixes this oversight, I’d be happy to include it in the distribution.
I found a good post on Robert Clark’s Niftybean blog, “Selecting regions from rotated EXIF images on iPhone”, with code for adapting a crop rect based on a UIImage’s imageOrientation. After adding this to croppedImage: I only needed to rotate the resulting CGImageRef to match the original imageOrientation, which meant slightly generalizing the private resizedImage:…transform:… helper method and reusing it for that final rotation. It would be much better if the original UIImage’s metadata, imageOrientation included, could be retained so the final rotation step could be avoided.
// Returns a copy of this image that is cropped to the given bounds.
// The bounds will be adjusted using CGRectIntegral.
// JPMH - This method no longer ignores the image's imageOrientation setting.
- (UIImage *)croppedImage:(CGRect)bounds {
    CGAffineTransform txTranslate;
    CGAffineTransform txCompound;
    CGRect adjustedBounds;
    BOOL drawTransposed;
    
    // Map the crop rect from the displayed (orientation-corrected) coordinate
    // space into the raw CGImage's coordinate space.
    switch (self.imageOrientation) {
        case UIImageOrientationDown:
        case UIImageOrientationDownMirrored:
            txTranslate = CGAffineTransformMakeTranslation(self.size.width, self.size.height);
            txCompound = CGAffineTransformRotate(txTranslate, M_PI);
            adjustedBounds = CGRectApplyAffineTransform(bounds, txCompound);
            drawTransposed = NO;
            break;
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
            txTranslate = CGAffineTransformMakeTranslation(self.size.height, 0.0);
            txCompound = CGAffineTransformRotate(txTranslate, M_PI_2);
            adjustedBounds = CGRectApplyAffineTransform(bounds, txCompound);
            drawTransposed = YES;
            break;
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            txTranslate = CGAffineTransformMakeTranslation(0.0, self.size.width);
            txCompound = CGAffineTransformRotate(txTranslate, M_PI + M_PI_2);
            adjustedBounds = CGRectApplyAffineTransform(bounds, txCompound);
            drawTransposed = YES;
            break;
        default:
            adjustedBounds = bounds;
            drawTransposed = NO;
    }
    
    // Crop in the raw bitmap's coordinate space.
    CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], adjustedBounds);
    UIImage *croppedImage;
    if (CGRectEqualToRect(adjustedBounds, bounds))
        croppedImage = [UIImage imageWithCGImage:imageRef];
    else
        // The cropped CGImage is still in the raw orientation, so draw it through
        // the orientation transform to produce an upright result.
        croppedImage = [self resizedImage:imageRef
                                     size:bounds.size
                                transform:[self transformForOrientation:bounds.size]
                           drawTransposed:drawTransposed
                     interpolationQuality:kCGInterpolationHigh];
    CGImageRelease(imageRef);
    return croppedImage;
}
Where the resizedImage: utility method was changed to take a CGImageRef as a parameter instead of always using self.CGImage:
- (UIImage *)resizedImage:(CGImageRef)imageRef
                     size:(CGSize)newSize
                transform:(CGAffineTransform)transform
           drawTransposed:(BOOL)transpose
     interpolationQuality:(CGInterpolationQuality)quality;
- (CGAffineTransform)transformForOrientation:(CGSize)newSize;
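For context, the body of the generalized helper stays very close to Trevor's original; the sketch below is a reconstruction rather than a verbatim copy, and the only substantive change is that it draws the passed-in imageRef instead of self.CGImage:

- (UIImage *)resizedImage:(CGImageRef)imageRef
                     size:(CGSize)newSize
                transform:(CGAffineTransform)transform
           drawTransposed:(BOOL)transpose
     interpolationQuality:(CGInterpolationQuality)quality {
    CGRect newRect = CGRectIntegral(CGRectMake(0.0, 0.0, newSize.width, newSize.height));
    CGRect transposedRect = CGRectMake(0.0, 0.0, newRect.size.height, newRect.size.width);

    // Build a bitmap context matching the target size and the source's pixel format.
    CGContextRef bitmap = CGBitmapContextCreate(NULL,
                                                newRect.size.width,
                                                newRect.size.height,
                                                CGImageGetBitsPerComponent(imageRef),
                                                0,
                                                CGImageGetColorSpace(imageRef),
                                                CGImageGetBitmapInfo(imageRef));

    // Rotate and/or flip the drawing into the source image's coordinate system.
    CGContextConcatCTM(bitmap, transform);
    CGContextSetInterpolationQuality(bitmap, quality);

    // Draw the passed-in CGImageRef; previously this was always self.CGImage.
    CGContextDrawImage(bitmap, transpose ? transposedRect : newRect, imageRef);

    CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    CGContextRelease(bitmap);
    CGImageRelease(newImageRef);
    return newImage;
}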

6 Comments on “Improvement to Trevor Harmon’s UIImage+Resize.m”

  1. rowol says:

    Thanks, that worked really well and made Trevor Harmon’s code truly excellent! One minor detail: after you change the resizedImage method to take a CGImageRef as the first parameter, I believe you’ll also need to change the call around line 140

    from

    return [self resizedImage:newSize
                    transform:[self transformForOrientation:newSize]
               drawTransposed:drawTransposed
         interpolationQuality:quality];

    to

    return [self resizedImage:self.CGImage
                         size:newSize
                    transform:[self transformForOrientation:newSize]
               drawTransposed:drawTransposed
         interpolationQuality:quality];

  2. srool says:

    Thanks for this – saved me some time…
    Instead of modifying the resizedImage method, you could use this code inside the croppedImage implementation:

    croppedImage = [UIImage imageWithCGImage:imageRef];
    croppedImage = [croppedImage resizedImage:bounds.size
                                    transform:[self transformForOrientation:bounds.size]
                               drawTransposed:drawTransposed
                         interpolationQuality:kCGInterpolationHigh];

    • smallduck says:

      Well that makes sense 🙂 However, it creates an extra, autoreleased, intermediary UIImage, something I didn’t want to do because my images are large. [Edit: I think this was a red herring for me (or some other fishy metaphor); I should have measured the difference before claiming this as a valid reason.]

      I’m revising this code for iOS 4, which has scaled UIImages and a new initWithCGImage variant that takes an orientation parameter, making some of this code redundant. I’ll post the revision soon.
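      Roughly, the idea is something like this (a sketch only; it assumes the crop rect has already been mapped into the raw bitmap’s coordinate space as in croppedImage: above, and the variable names are placeholders):

      CGImageRef croppedRef = CGImageCreateWithImageInRect(self.CGImage, adjustedBounds);
      // Preserves the orientation (and scale) metadata, so no final rotation pass is needed.
      UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                             scale:self.scale
                                       orientation:self.imageOrientation];
      CGImageRelease(croppedRef);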

      • smallduck says:

        “Soon”, ha! As it turns out I never did that. My app needed to do resize and related operations on a background thread and I wanted it to be compatible with iOS v3.x. This necessitated going down to CoreGraphics across the board, passing around CGImageRefs instead of UIImage objects. I never did encapsulate the code into a utility class or category like Trevor did.
        Looking back at the code in this post, and specifically my changes, I’m liking it less and less. I eagerly await someone contributing something better.
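        For what it’s worth, a pure-CoreGraphics resize along those lines might look like the sketch below; the function name and parameter choices are illustrative, not code lifted from my app:

        static CGImageRef CreateResizedCGImage(CGImageRef source, CGSize newSize) {
            // Draw into an offscreen RGBA bitmap context; safe to call off the main thread.
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            CGContextRef context = CGBitmapContextCreate(NULL,
                                                         (size_t)newSize.width,
                                                         (size_t)newSize.height,
                                                         8,   // bits per component
                                                         0,   // let CoreGraphics pick bytes per row
                                                         colorSpace,
                                                         kCGImageAlphaPremultipliedLast);
            CGColorSpaceRelease(colorSpace);
            CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
            CGContextDrawImage(context, CGRectMake(0.0, 0.0, newSize.width, newSize.height), source);
            CGImageRef result = CGBitmapContextCreateImage(context);
            CGContextRelease(context);
            return result;   // caller is responsible for CGImageRelease
        }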

  3. Byron says:

    Hi,

    Thanks for sharing this tweak to Trevor’s code.

    Question… I’m using Trevor’s code (which really solved some issues, btw) in an app that uploads to PhotoSmash galleries on WordPress. I did a comparison between an image uploaded to Flickr via Flickit and an image I uploaded via my app, and the Flickit image is noticeably sharper. Even the downsized images on Flickr are sharper. The comparison is pretty much apples to apples since I’m using the same image on the same iPod Touch.

    I don’t think the image that is hitting the WordPress site is being resized on the site at all.

    So, the question is: why is the Flickit image sharper?

    Do you think Flickit is loading the full-unresized image and letting Flickr do the resize? Or are they using a better algo than Trevor’s code?

    Cheers,
    Byron

    • smallduck says:

      The algorithm in Trevor’s code is CGContextDrawImage with the interpolation quality set to high, which could indeed be less than the best algorithm, but I wouldn’t think it’d be terribly worse than anything else. I’d investigate elsewhere, like double-checking the sizes you’re comparing. I don’t know anything about Flickit myself.

