Bug Responder

When the user takes an action in a Cocoa app the framework looks for the first object in the responder chain that can handle that action. Each object in the responder chain is asked whether it implements the given action selector and, if not, passes the buck on to the next object in the chain. This works great since you can put code to handle user actions in the higher levels of your application’s controller classes without having to worry about calling into them yourself from button handlers. The trade-off here is that the further removed from the action you put the code, the less contextual information you may have with which to make your decisions.

Generally this isn’t a problem. Your NSWindowController will handle an action for the relevant document or you handle it higher up for the application in general. Things become a little more complicated, though, when you have view controllers. These are very much like window controllers except they are responsible for only a sub-portion of the view hierarchy. As an example consider a view controller in charge of a date picker. The date picker may have several subviews: one for the day, one for the month, one for the year, maybe some more for the time. The idea of a view controller is that it can create a view hierarchy and manage it in a way that’s opaque to the rest of the app. The date picker is a trivial example – you can imagine view controllers that manage a complex amount of state and touch their model area in a non-trivial way.

Often you’ll want to give the user the ability to report bugs in your software, sort of like what Safari does with its Bug Button. Ideally we’d click the bug button or choose an item from a menu and we’d get a nice bug report with contextual information as to what the user was looking at. There are a couple of ways to approach this. The first is to add code in each controller that’ll handle a ‘reportBug:’ action, gather its information and send it to the centralized bug reporting class. This fails though because if I implement ‘reportBug:’ in my date picker then I lose the context in which I was trying to pick a date – and the bug report loses some of its usefulness. If I implement ‘reportBug:’ in my window controller then I know the big picture but I don’t know what’s going on with the view controllers below me – they may have pertinent information that hasn’t been committed to the model layer yet. So it looks like we’re screwed either way.

My solution to the problem was to implement my ‘reportBug:’ action at the highest level in my application. It looks like this:


- (IBAction) reportBug: (id) sender
{
    KBBugReportController *bugReport = [KBBugReportController sharedBugReportController];
    [bugReport gatherBugReportInformation];
    [bugReport sendBugReport];
}

Nice and simple, isn’t it? Except how can ‘gatherBugReportInformation’ work? The trick is we walk the responder chain ourselves and ask each responder to contribute some context information. By the time we’ve walked the entire chain we’ll have travelled from the most specific information to the most general.

Here’s what that’d look like:

- (void) gatherBugReportInformation
{
    KBCoalescingDictionary *bugReportInformation = [KBCoalescingDictionary dictionary];
    NSResponder *responder = [[NSApp mainWindow] firstResponder];
    while ( responder != nil ) 
    {
        if ( [responder respondsToSelector: @selector( submitBugReportInformation: )] )
        {
            [(id)responder submitBugReportInformation: bugReportInformation];
        }
        responder = [responder nextResponder];
    }

    NSArray *titles = [bugReportInformation objectsForKey: kKBBugReportTitleKey];
    [self setTitle: [titles componentsJoinedByString: @" "]];

    NSArray *contents = [bugReportInformation objectsForKey: kKBBugReportContentKey];
    [self setContent: [contents componentsJoinedByString: @" "]];

    [bugReportInformation setObject: [NSDate date] forKey: kKBBugReportTimeStampKey];

    [self setBugReportInformation: bugReportInformation];
}

The method I use here is that responders implement an informal protocol of one method – ‘submitBugReportInformation:’. The method takes one argument, a special NSMutableDictionary into which they can write whatever bug report info they want. I’ve got a few special keys defined for the bug report title, contents and time stamp. If you send the bug report in an email the title and contents come in handy. There is one trick here and that’s using a coalescing dictionary. All that does is that when it gets asked to set an object for a key that already exists it instead makes an array and sticks both objects into the key slot. Asking it for objectForKey: will return the last object set and asking it for objectsForKey: (note the ‘s’) will return the array of all objects set for that key. This makes it easy to just write whatever you want into it without worrying about blowing away other objects’ data.
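A minimal sketch of how such a coalescing dictionary might work – this is an assumption on my part, not the actual KBCoalescingDictionary implementation, and it only shows the three methods discussed above:

```objc
// Hypothetical sketch: each key maps to a mutable array of every value
// set for it, so repeated sets coalesce rather than replace.
@interface KBCoalescingDictionary : NSObject
{
    NSMutableDictionary *_storage;
}

+ (id) dictionary;
- (void) setObject: (id) object forKey: (id) key;
- (id) objectForKey: (id) key;          // the last object set for the key
- (NSArray*) objectsForKey: (id) key;   // every object set for the key

@end

@implementation KBCoalescingDictionary

+ (id) dictionary
{
    return [[[self alloc] init] autorelease];
}

- (id) init
{
    if ( (self = [super init]) )
        _storage = [[NSMutableDictionary alloc] init];
    return self;
}

- (void) dealloc
{
    [_storage release];
    [super dealloc];
}

- (void) setObject: (id) object forKey: (id) key
{
    NSMutableArray *values = [_storage objectForKey: key];
    if ( values == nil )
    {
        values = [NSMutableArray array];
        [_storage setObject: values forKey: key];
    }
    [values addObject: object];
}

- (id) objectForKey: (id) key
{
    return [[_storage objectForKey: key] lastObject];
}

- (NSArray*) objectsForKey: (id) key
{
    return [_storage objectForKey: key];
}

@end
```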

By the end of ‘gatherBugReportInformation’ we’ve travelled the chain and given each responder the opportunity to help us out with some context info. All we need to do now is send our dictionary off to our server somehow. We can either upload it with an HTTP POST request or send it off in an email. Either way I suggest letting the user have a look at what you’re going to send; you can get quite a backlash for not being polite about this kind of thing.
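As an example of a responder contributing, the date picker’s view controller might implement the informal protocol like this (a sketch – the ivar and the particular strings are made up for illustration):

```objc
// Hypothetical contributor. Each responder describes only what it knows;
// walking the chain supplies the rest of the context.
- (void) submitBugReportInformation: (KBCoalescingDictionary*) info
{
    [info setObject: @"Date Picker" forKey: kKBBugReportTitleKey];
    [info setObject: [NSString stringWithFormat: @"Pending date selection: %@",
                        [_datePicker objectValue]]
             forKey: kKBBugReportContentKey];
}
```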

Shader Source Code

I’ve posted the source code to a basic implementation of the Cocoa Shaders I talked about here. There’s code for conditional shaders, shader lists, clip shaders, and affine transform shaders. As far as shaders that actually draw there’s a solid color shader and one for drawing an image. The image shader lets you tweak the compositing operation, source rect, and set a drawing scale.

Included is a sample app to show you how you could use this stuff in practice. The app has a custom view which draws using a shader. The demo shader will change its background from blue to red when the view is clicked. It’s implemented using only the simple shaders provided.

Fancier gradient fill shaders, CoreImage based shaders and that kind of thing are left, for now, as an exercise for the reader. I mentioned it in the original post but if you’re looking to do gradients you’d do yourself a favour by looking at CTGradient.

Cocoa Shaders

Keeping up with the Joneses’ UI trends on Mac OS X can take a fair amount of programming and artistic effort. With the HIG becoming antiquated and new implicit design guidelines becoming more prevalent, we find that Cocoa doesn’t provide us with au courant controls right out of the box. So we’re left with either rolling our own or making do.

One of the first things I do when I download an app is poke around its resources. Often you’ll find all manner of bitmap images that lay out portions of the app’s controls. Left sides, middles, right sides. Sometimes circles where the center pixel will be stretched for the length of the control. Apple’s applications are often stuffed full of these custom little images, but it’s not just them – I’m sure that many of your favorite apps have similar resources.

“Well so what,” you may ask, “We want nice looking controls, the cost of downloading a few extra images is next to nothing, and we can tweak the images and change the look of the controls; it’s no big deal.” And then Apple keeps talking about resolution independence and you curse as you have your artist (or yourself) laboriously pound out multi-rep tiffs containing all the DPIs you need. And your download bloats. But whatever, it’s the 21st Century and my binaries and tiffs are fat and it’s all good. Then a user complains that the left side of the “SomethingCool” button in your app is all distorted and you need to go and check the tiffs and see that one of the reps is screwy and fix it. These little pains are common to development but, often, we can avoid them.

Will Shipley wrote this in a post about implementing a gradient table view:

Which leads me to an important rule: In general, if I can replace an image with code, I do so. This is not something that’s intuitive, and it took me many years to decide that this is the best policy. Code is easier to change and understand than images are.

I read that and thought, “Shipley, my boy, you’re just so wrong”. A large chunk of my career has been spent writing video games. If you’ve ever looked around a video game there are resources that control pretty much everything. One of the guiding design philosophies of modern game engine design is to push as many decisions out of the code and into the hands of the designer as possible. Given this approach the current resource-heavy Mac app development makes perfect sense. Until you consider the context. On the games I worked on artists could outnumber the programmers 5 to 1. Apple, whose apps are littered with resources, has the budget to employ however many artists it needs. Indie Mac Shop, Inc. does not. Indie Mac Shop has, likely, one or maybe two programmers and some contact with a part-time artist who may well be on contract. Perhaps they do the artwork themselves, taking time away from their coding. Either way art, in the indie dev world, is harder to come by than code. Shipley, of course, is speaking from the perspective of a programmer whose career has been spent at a relatively small shop and his advice is spot on for that environment. So it turns out he’s onto something. Let’s see if I can save a little face by extending the concept he puts forward.

The Setup

Drawing involves two things: the resources used by graphical elements and the logic required to arrange these elements the right way. Resources include the colors, images and perhaps font information used in the element. The logic dictates in what order and where we employ these resources. If we call our resources “data” and our logic “code” it becomes clear that we can encapsulate our drawing away into a nice, reusable and, importantly, polymorphic class.

The Shader

In 3D graphics a “shader” is used to define and control how a surface is rendered. The basic principle is that for each point on a surface inputs are given to the shader and it returns the final color for that point. Typical shaders involve looking up a pixel from a texture map, modifying that pixel’s color based upon light intensity and direction, and perhaps adding a brightening color for a specular highlight (shiny!). This method has a valuable trait: the renderer doesn’t know or care what the shader is actually doing. All it does is offer the shader a ton of information (point on the surface, direction to the light, ambient light conditions, stuff like that) and the shader does its thing and spits out an answer. Internally the shader may be using a texture map or it may be doing some fancy math to fake a marble texture. It just doesn’t matter; the result is the same.

It’s a good idea; let’s steal it.

Thanks to the flexibility of Objective-C and Cocoa we can apply this approach to our drawing and build an elegant encapsulation. Our recipe involves Key Value Coding and NSPredicates.

A Cocoa Shader

Our first stab at a Cocoa shader might be as simple as:

@interface KBShader : NSObject
{
}

- (void) drawInRect: (NSRect) rect;

@end

We can subclass that and add NSImages or whatever and when we need to draw we’ll just call drawInRect: on our instance. Simple, but it doesn’t quite meet the flexibility we’re aiming for. Ideally we want a way to pass information into the shader so it can make decisions based on the environment it’s being used in. Like knowing the ambient light level in a room, we’d like to know if our button is disabled or not. So we change our drawing method to look like this:

- (void) drawInRect: (NSRect) rect input: (id) input;

We pass input as an id rather than an NSDictionary. A dictionary would have been great and could easily have encapsulated any state information we’d like to send to the shader. Its shortcoming, however, is that we’d need to maintain its state and make sure it matched the state of the element we’re trying to draw. That’s just error-prone and boring code to write, so we won’t do it. Instead we leverage our nifty Cocoa Key Value Coding methods.

Using KVC we can treat any object as if it were a dictionary. Even better, by using key paths we can traverse the objects its accessors return and dig deep into its guts. So it’s a great fit for our shader – lots of environment information available to us in an easy-to-get-to way. Using KVC we could write a shader implementation like this:

- (void) drawInRect: (NSRect) rect input: (id) input
{
    if ( [[input valueForKey: @"isEnabled"] boolValue] == YES )
    {
        [[NSColor redColor] set];
    }
    else
    {
        [[NSColor blueColor] set];
    }

    NSRectFill( rect );
}

Now that’s pretty decent. We can change the way we draw based upon the state of the object we’re supposed to be drawing. We can, of course, query a lot more attributes than just isEnabled.

Naming Shaders

I always liked how you could ask NSImage for an image just by calling imageNamed:. Shaders deserve the same treatment. It’s pretty easy – add a name ivar to the shader and some class methods to register and deregister a shader with a given name and you’re done. Then you can just say: [view setShader: [KBShader shaderNamed: @"CurrentEnVogueGlossEffect"]]; and you’re good.
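A sketch of what that registry might look like – the category and method names here are my assumptions, not part of the posted source:

```objc
// Hypothetical named-shader registry, in the spirit of NSImage's imageNamed:.
static NSMutableDictionary *sNamedShaders = nil;

@implementation KBShader (KBNamedShaders)

+ (void) setShader: (KBShader*) shader forName: (NSString*) name
{
    if ( sNamedShaders == nil )
        sNamedShaders = [[NSMutableDictionary alloc] init];

    if ( shader != nil )
        [sNamedShaders setObject: shader forKey: name];
    else
        [sNamedShaders removeObjectForKey: name]; // pass nil to deregister
}

+ (KBShader*) shaderNamed: (NSString*) name
{
    return [sNamedShaders objectForKey: name];
}

@end
```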

Cocoa Shader Tree

There’s still room for improvement though. We don’t necessarily want our shaders to be tied to one control. We’d like a ‘WhiteGloss’ shader, a ‘GrayGloss’ shader and maybe a ‘FlatWhite’ shader or something. And we’d really like to be able to say: “Button, when you’re enabled use the ‘WhiteGloss’ shader, when you’re pressed use ‘GrayGloss’”. We can do that by nesting some ifs and forwarding the call to other shaders but there’s a better way.

What we want is for our shaders to be able to decide if they want to draw or not based on what’s been input. Cocoa has a nice way of giving us YES or NO answers based on a complex chain of tests – NSPredicate. Let’s use it for something better than some namby-pamby CoreData query – we’ll use it to draw something shiny.

(Just as a disclaimer, don’t take this code literally – I’m not going to write the inits and deallocs because they’re boilerplate and distracting.)

@interface KBConditionalShader : KBShader
{
    NSPredicate *_predicate;
    KBShader *_yesShader;
    KBShader *_noShader;
}

- (id) initWithPredicate: (NSPredicate*) predicate
              ifShader: (KBShader*) yesShader
            elseShader: (KBShader*) noShader;

@end

@implementation KBConditionalShader

- (void) drawInRect: (NSRect) rect input: (id) input
{
    if ( [_predicate evaluateWithObject: input] == YES )
    {
        [_yesShader drawInRect: rect input: input];
    }   
    else
    {
        [_noShader drawInRect: rect input: input];
    }
}

@end

Now we’ve got a shader that doesn’t actually draw – it does control flow based on a test of the input. That’s kind of cool, but so what? Let’s expand on that control flow idea.

@interface KBShaderList : KBShader
{
    NSMutableArray *_shaders;
}

- (id) initWithShaders: (NSArray*) shaders;

@end

@implementation KBShaderList

- (void) drawInRect: (NSRect) rect input: (id) input
{
    unsigned currentShaderIndex, numberOfShaders;
    numberOfShaders = [_shaders count];
    for ( currentShaderIndex = 0; currentShaderIndex < numberOfShaders; currentShaderIndex++ )
    {
        [[_shaders objectAtIndex: currentShaderIndex] drawInRect: rect input: input];
    }
}

@end

Now we can string shaders together. Why on earth would we want this? Well, let’s say we want a nice gradient background with an image in the top-left corner. We can reuse our gradient background shader and our draw-an-image shader and just chain them together. And we’ve got a conditional shader too…

KBConditionalShader *conditionalBackground =
    [KBConditionalShader shaderWithPredicate: [NSPredicate predicateWithFormat: @"(isEnabled==YES)"]
                                ifShader: [KBShader shaderNamed: kKBBlueGradientShader]
                              elseShader: [KBShader shaderNamed: kKBGrayGradientShader]];

KBShaderList *shader = [KBShaderList shaderWithShaders:
                            [NSArray arrayWithObjects: conditionalBackground, 
                                            [KBShader shaderNamed: kImageBadgeShader],
                                            nil]];

The resulting shader will draw a blue or gray gradient background depending on whether the input object is enabled. Regardless of state it will draw an image badge. Conditional shaders and shader lists can be nested as much as you’d like so you can make some pretty ridiculously complicated shaders if you really want.

Benefits

Shaders offer some real benefits over bundling up the drawing code with whatever element you’re drawing. First the obvious one – they consolidate the drawing code into one place where it can easily be shared by whatever code needs it.

Second is abstraction. The shader draws; that’s what it does. If you’re using CTGradient today and want to switch to something else in the future you change it in your gradient shader and you’re done. You can even use different gradient shader implementations depending on platform if you want. Uh, you know, if that was something that’d make sense.

Third, they offer optimization and profiling opportunities. You can instrument your shader and discover that each time it’s asked to draw it’s drawing into the same size rect. With a shader you can cache the result and just draw the cached bitmap instead.
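A caching shader could wrap any other shader. A rough sketch of the idea, under the assumption that the wrapped shader’s output depends only on the rect (a real version would also invalidate the cache when the input’s state changes):

```objc
// Hypothetical caching wrapper: renders the wrapped shader into an
// offscreen image once, then blits the cached image on subsequent draws
// as long as the rect hasn't changed.
@interface KBCachingShader : KBShader
{
    KBShader *_shader;      // the shader whose output we cache
    NSImage  *_cachedImage; // last rendered result
    NSRect    _cachedRect;  // rect the cache was rendered for
}
@end

@implementation KBCachingShader

- (void) drawInRect: (NSRect) rect input: (id) input
{
    if ( _cachedImage == nil || !NSEqualRects( rect, _cachedRect ) )
    {
        [_cachedImage release];
        _cachedImage = [[NSImage alloc] initWithSize: rect.size];
        [_cachedImage lockFocus];
        [_shader drawInRect: NSMakeRect( 0.0f, 0.0f,
                                         rect.size.width, rect.size.height )
                      input: input];
        [_cachedImage unlockFocus];
        _cachedRect = rect;
    }

    [_cachedImage compositeToPoint: rect.origin
                         operation: NSCompositeSourceOver];
}

@end
```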

And there’s an immense flexibility behind the shader model. Once you can set a shader for a view it’s relatively easy to write a shader picker that can dynamically change the shader. No, really, honest, I’m not talking about skinning – even though it’s a natural fit – but it’s a valuable tool to have during development so you can quickly experiment with different looks. Or consider a ‘debug’ shader that when asked to draw dumps a bunch of control state to the console.

And, saving a big one for last, it’s a big step towards making Resolution Independence a far lesser burden. Some resources are inescapable (and for what it’s worth – multi res Tiff files are what the Apple boys recommend) but many of the images I see when I’m poking around could be replaced by shaders. When Apple changes some random UI element and you want to keep up it’s easier to pull out the color picker and change some RGB values in a shader than it is to ask an artist to whip something up in Photoshop. Your code will scale and the artist will be stuck doing x number of reps. Be kind to artists, they’ve got better things to do.