I Broke My Blog

ouch

Well, I’m not sure what happened here yet, but whatever it was I’m not planning to give a round of high-fives over it. The comments are currently missing in action, which is a real shame because a lot of them were helpful and added a lot of value to the posts.

So, sorry if your news reader got spammed, your comments got deleted, and your dreams of making it to stardom via these hallowed pages were dashed against the cold, hard rocks of some weird-assed technical crap.

Scoped Objects in Objective-C

Object Scope and Lifetime

In Objective-C the lifetime of an object is not governed by the scope in which it appears – it is managed manually by the programmer. -(id)retain, -(void)release, and -(id)autorelease are the methods we use to let the runtime know whether we’re still interested in an object or not.
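For example, here’s a contrived snippet (not code from this post) just to show those three calls in play:

NSMutableArray *list = [[NSMutableArray alloc] init];  // we own this; we alloc'd it
[list retain];      // express extra interest; the retain count goes up
[list release];     // balance the retain
[list release];     // balance the alloc; the object may now be deallocated

// Or hand an object to the current autorelease pool and have it released later:
NSString *label = [[[NSString alloc] initWithFormat:@"%d items", 42] autorelease];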

Other variables in Objective-C don’t have to be managed in this way. Structures can be declared and will cease to exist once they leave the lexical scope in which they’re defined. Other languages, such as C++, can have instances that do the same thing – when the scope of a C++ stack-allocated object is exited, its destructor is called. That behaviour can be quite handy in that it allows the programmer to see the lifetime of the object simply by looking at the braces in which it’s defined. It also means the programmer is less likely to accidentally leak an object since the scope will clean it up anyway. But we can’t do that in Objective-C.

How To Do That In Objective-C

{
    NSObject *myObject KBScopeReleased = [[NSObject alloc] init];
    NSLog( @"%@", myObject );
} // myObject is sent a release message here.

How does it work? Well, first we notice that there’s something special in the declaration of myObject. KBScopeReleased lets the compiler know that we want this instance to be sent a release message as it leaves the scope it is defined in. That seems like magic, but here’s all that is involved:

#define KBScopeReleased __attribute__((cleanup($kb_scopeReleaseObject)))

We use the __attribute__ feature of the GCC compiler to define a cleanup function. This cleanup function will be called when a variable leaves scope and will be passed a pointer to the variable. I’ve defined the cleanup function to be $kb_scopeReleaseObject and here’s what it looks like:

void $kb_scopeReleaseObject( id *scopeReleasedObject )
{
    [*scopeReleasedObject release];
    *scopeReleasedObject = nil;
}

You don’t even really need to set the object to nil at the end there but I do because I’m crazy that way.

“I’ve Already Got Autorelease So I Don’t Care”

Good point. Using scope released objects will keep your peak memory usage down – the objects are released immediately rather than waiting for the autorelease pool to be drained – but, in general, autorelease pools are perfectly fine and are a pattern that Cocoa developers are accustomed to. So let’s put two good things together and see what we come up with.

KBScopeAutoreleased();

Drop that at the top of your scope and anything autoreleased between it and the closing of the scope will be released when the scope exits. That’s handy if you’ve got a loop and want to keep your memory overhead down. As my friend and Rogue Amoeba colleague points out – Autorelease Is Fast. So just dropping a KBScopeAutoreleased() at the top of a loop will keep your memory overhead down at a very tiny speed cost.
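For example, something like this (just a sketch; the file-reading loop and bigListOfPaths are made up) keeps each pass through the loop from piling autoreleased objects into the outer pool:

for ( NSString *path in bigListOfPaths )
{
    KBScopeAutoreleased();
    // Everything autoreleased below lands in the pool created above, and that
    // pool is released at the bottom of each iteration as it leaves scope.
    NSString *contents = [NSString stringWithContentsOfFile:path
                                                   encoding:NSUTF8StringEncoding
                                                      error:NULL];
    NSLog( @"%lu characters in %@", (unsigned long)[contents length], path );
}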

Here’s what KBScopeAutoreleased() looks like:

#define KBScopeAutoreleased() \
    NSAutoreleasePool *$kb_autoreleasePool##__LINE__ KBScopeReleased = \
        [[NSAutoreleasePool alloc] init]

There’s a bit of C Macro Voodoo in there aimed at making the variable name unique, but otherwise all it does is allocate a new NSAutoreleasePool and fix it up so it’ll be released when it exits scope.
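One wrinkle worth knowing about: pasting __LINE__ straight onto another token with ## stops the preprocessor from expanding it, so the variable ends up literally named $kb_autoreleasePool__LINE__. That’s harmless if you only use the macro once per scope, but if you want the name to be genuinely unique per line you need an extra level of macro indirection, something like this (the KB_CONCAT names are my own):

#define KB_CONCAT_( a, b ) a##b
#define KB_CONCAT( a, b ) KB_CONCAT_( a, b )

#define KBScopeAutoreleased() \
    NSAutoreleasePool *KB_CONCAT( $kb_autoreleasePool, __LINE__ ) KBScopeReleased = \
        [[NSAutoreleasePool alloc] init]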

Garbage Collection

Under Garbage Collection this becomes less useful but you can still use this trick to keep peak memory consumption down by changing the definition of KBScopeAutoreleased to be:

#define KBScopeAutoreleased() \
    NSAutoreleasePool *$kb_autoreleasePool##__LINE__ \
        __attribute__((cleanup($kb_scopeDrainAutoreleasePool))) = \
        [[NSAutoreleasePool alloc] init]

void $kb_scopeDrainAutoreleasePool( NSAutoreleasePool **pool )
{
    [*pool drain];
}

With that in place the NSAutoreleasePool will be drained when exiting scope which will trigger a garbage collection cycle.

That’s It

Really, not much to it, but it’s kind of cool. For fun, sprinkle KBScopeAutoreleased() around your code and your peak memory usage will drop.

Views: Post Processed

Visual Cues

If your application presents data to the user then there are some valuable tools you’ve been leaving on the table, because they have traditionally been difficult to achieve in computer graphical interfaces. Mac OS X has some great graphics technology that changes the game – and I don’t mean Core Animation.

Core Image – Not Just For Images Anymore

Core Image is incredible. The amount of complexity that it abstracts and offers through a straightforward interface is staggering. I may be a little more appreciative of it since I’ve had occasion to do a lot of low-level graphics programming directly against OpenGL or DirectX or whatever – Core Image exposes, with a minimum of fuss, all the best parts of playing with a modern pixel pipeline.

The basics of Core Image are dead simple – you provide a recipe for how you’d like your image cooked and it’ll give you a result. The recipes are in the form of a few simple objects strung together with some settings tweaked. If you’ve played with a modern Mac OS X image editor like Acorn (from Flying Meat) what you see in the filters window (developed by Rogue Sheep) is pretty much how the underlying implementation works. (Yes, I did just pimp friends.) So if you’ve got a basic grasp of stringing filters together in a graphics program then you’re in good shape to do so in your own application.

“But I don’t write a graphics app!”

Well, yes, in fact, you do. A Mac OS X desktop application uses a graphical interface to communicate information to the user. One of the wonderful things about graphics is that you can communicate lots of information very quickly. Even better, the information doesn’t need to be accurate. That sounds preposterous – why on Earth would you want to show the user inaccurate information? Well, clearly, you don’t – but you don’t need to show exact information either. Your job is to provide an environment with enough graphical cues that the user can interact quickly and effectively with whatever data you’re presenting.

If you’re not sold, look at scroll bars – they shrink in proportion to the size of the displayed information versus the total amount of information. They’re pretty close to accurate, but not really – there’s a minimum size on the scroll bar thumb, so it breaks down for huge amounts of data. And the other thing is, really, you don’t care – you get the idea, but a pixel here or there doesn’t matter to you. It’s the same for Dock icons bouncing on launch or the classic lying progress meter – none of these are precise, but they allow you, the human, to infer information quickly.

Humans are damn good at picking out important visual information rapidly. We’re good at it because otherwise 6,000 years ago we’d all have been eaten by Dinosaurs. We have some built-in cues that help us distinguish relevance – objects outside our focus are blurrier, and brightly colored objects draw our attention more than muted, subdued ones. These cues are used daily by every sighted person to make sense of the vast amount of visual clutter we’re bombarded with.

“Where are you going with this?”

Core Image can be leveraged to provide your users with richer visual cues to help them more quickly grasp the information you are displaying. The graphics architecture of Mac OS X allows us to draw into different contexts – once we’ve got our data in a graphics context we can operate on it graphically as if it were an image. We’re going to use this ability to apply some fancy post-process effects to our views, adding visual cues to the data they represent. I’ll use an NSTableView subclass to demonstrate, but it’s important to recognize that this technique is applicable to any view – and may even work better with your own custom views.

Post Processed Table Views

The idea is simple – we’ll present a typical table view to the user but allow various ‘focusing’ effects to be applied to it. For each item in the table we’ll provide a degree indicating how much of the effect to apply. For a search results table, for example, we can blur less relevant results more heavily. Or if we’re looking at time-based data we could apply a sepia tone to older entries – a common visual cue indicating that an item has aged.

Implementation

We need three things to make this work: the table as it would have appeared without any post processing, a set of floats indicating how much we’d like the effect applied, and a mask image we’ll use to graphically represent those floats. A good way to go about this on Mac OS X is to use CGLayers – they’re fast, efficient, and will be cached on the graphics card when possible. The process is reasonably simple: if we’re applying a post process effect, allocate the viewLayer, which we’ll render the base NSTableView into; then allocate the viewMaskLayer, in which we’ll draw the various shades of gray representing how much to apply the effect. Since the effect is multiplicative, 1 (or white) represents a full application of the effect while 0 (or black) represents not applying the effect at all.
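Sketched out, that set-up might look something like this. This is my rough reconstruction of the step above, not the downloadable source; viewLayer and viewMaskLayer are the CGLayerRef instance variables just described, and the method name is made up:

- (void)ensurePostProcessLayersForContext:(CGContextRef)windowContext
{
    CGSize layerSize = CGSizeMake( NSWidth( [self visibleRect] ),
                                   NSHeight( [self visibleRect] ) );

    // viewLayer will hold the table as the superclass would have drawn it;
    // viewMaskLayer will hold the grayscale "how much effect" image.
    if ( viewLayer == NULL )
        viewLayer = CGLayerCreateWithContext( windowContext, layerSize, NULL );
    if ( viewMaskLayer == NULL )
        viewMaskLayer = CGLayerCreateWithContext( windowContext, layerSize, NULL );
}

The windowContext here would just be the view’s current context, i.e. (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort] from inside -drawRect:.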

So far, so simple. The next step isn’t much harder either. First, set the current graphics context to the viewLayer and ask our superclass to draw. That way we capture what it would have drawn in our layer context rather than having it sent to the screen. Next we need our table view data source to provide us with per-item blending information. To achieve this we set our mask layer as the current graphics context, then call -(CGFloat)tableView:(NSTableView *)tv postProcessFactorForRow:(NSInteger)row on our data source. We take the result, turn it into a shade of gray, and fill the row rect with that color. Once we’ve iterated over the visible rows we have a mask image containing bands of gray, each indicating how much to apply our filter.
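Again as a rough sketch rather than the shipping code (the method name is mine, it assumes the data source implements the method above, and it glosses over translating between view coordinates and layer coordinates):

- (void)renderLayersForRect:(NSRect)rect
{
    // 1. Have the superclass draw, but into viewLayer instead of the window.
    NSGraphicsContext *previousContext = [NSGraphicsContext currentContext];
    NSGraphicsContext *layerContext =
        [NSGraphicsContext graphicsContextWithGraphicsPort:CGLayerGetContext( viewLayer )
                                                   flipped:[self isFlipped]];
    [NSGraphicsContext setCurrentContext:layerContext];
    [super drawRect:rect];
    [NSGraphicsContext setCurrentContext:previousContext];

    // 2. Ask the data source how strongly the effect applies to each visible row
    //    and paint that factor as a band of gray into viewMaskLayer.
    CGContextRef maskContext = CGLayerGetContext( viewMaskLayer );
    NSRange visibleRows = [self rowsInRect:[self visibleRect]];
    NSInteger row;
    for ( row = visibleRows.location; row < (NSInteger)NSMaxRange( visibleRows ); row++ )
    {
        CGFloat factor = [[self dataSource] tableView:self postProcessFactorForRow:row];
        NSRect rowRect = [self rectOfRow:row];
        CGContextSetGrayFillColor( maskContext, factor, 1.0 );
        CGContextFillRect( maskContext,
                           CGRectMake( NSMinX( rowRect ), NSMinY( rowRect ),
                                       NSWidth( rowRect ), NSHeight( rowRect ) ) );
    }
}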

Finally we apply the post process effect – we’ve got our source view and our mask, and we just use a CIBlendWithMask filter to combine the two. The effect itself is up to you – it’s irrelevant to this approach. The code provided shows a sepia tone, a blur, a contrast adjustment, and provides a way to set custom effects. You can download the source code here, play around and see what works for you. Remember – we happen to be using a table view here but this technique is generally applicable: if your view presents a set of data then you can use it. This also works on Tiger – in fact the code just recently got Objective-C 2.0-ized, but it was originally all written against Tiger without Core Animation in mind. If you want to use it on Tiger, go wild – you’ll just need to reverse the ObjC 2.0 @properties and data source @protocol a bit, but it’s dead easy.
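Going back to the blend step for a moment, here it is in sketch form (again my reconstruction with a made-up method name; CISepiaTone stands in for whichever effect you’ve chosen, and flipping/coordinate details are glossed over):

- (void)drawPostProcessedResult
{
    CIImage *baseImage = [CIImage imageWithCGLayer:viewLayer];
    CIImage *maskImage = [CIImage imageWithCGLayer:viewMaskLayer];

    // The fully applied version of the effect...
    CIFilter *effect = [CIFilter filterWithName:@"CISepiaTone"];
    [effect setDefaults];
    [effect setValue:baseImage forKey:@"inputImage"];

    // ...blended against the untouched view, using the gray bands as the mask.
    // White rows get the full effect, black rows get none.
    CIFilter *blend = [CIFilter filterWithName:@"CIBlendWithMask"];
    [blend setDefaults];
    [blend setValue:[effect valueForKey:@"outputImage"] forKey:@"inputImage"];
    [blend setValue:baseImage forKey:@"inputBackgroundImage"];
    [blend setValue:maskImage forKey:@"inputMaskImage"];

    // Draw the composite into the real (window) context.
    CIImage *result = [blend valueForKey:@"outputImage"];
    CGRect extent = [result extent];
    [result drawAtPoint:[self visibleRect].origin
               fromRect:NSMakeRect( 0.0, 0.0, extent.size.width, extent.size.height )
              operation:NSCompositeSourceOver
               fraction:1.0];
}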

Also – I’m not sure if I’ve made this explicit: if there’s code I’ve put up on my blog then it’s yours to use as you wish. That said, if you’re making an awesome UI for your Nuclear Armageddon machine I’d prefer you look elsewhere – Nuclear is so passé.

Update

Shamed by Gus for not having any visuals, here’s a screenshot of a test app I just built. It shows a listing of the files on the Desktop – files of the same type as the selection are not blurred, files of a similar type (as determined by a UTI check) are blurred a little, and totally dissimilar files are blurred even more. The effect is overstated for the sake of the example; it’s the technique that’s really of interest.

[Screenshot: KBPostProcessTableView.png]

And here’s a movie of it in action: KBPostProcessTableView.mov

I also updated the code and included the test app and a project. Download it here.