Keeping up with the Joneses of UI trends on Mac OS X can take a fair amount of programming and artistic effort. With the HIG becoming antiquated and new implicit design guidelines becoming more prevalent, we find that Cocoa doesn’t provide us with au courant controls right out of the box. So we’re left with either rolling our own or making do.
One of the first things I do when I download an app is poke around its resources. Often you’ll find all manner of bitmap images that lay out portions of the app’s controls. Left sides, middles, right sides. Sometimes circles where the center pixel will be stretched for the length of the control. Apple’s applications are often stuffed full of these custom little images, but it’s not just them; I’m sure that many of your favorite apps have similar resources.
“Well, so what,” you may ask. “We want nice looking controls, the cost of downloading a few extra images is next to nothing, and we can tweak the images to change the look of the controls. It’s no big deal.” And then Apple keeps talking about resolution independence and you curse as you have your artist (or yourself) laboriously pound out multi-rep TIFFs containing all the DPIs you need. And your download bloats. But whatever, it’s the 21st century and my binaries and TIFFs are fat and it’s all good. Then a user complains that the left side of the “SomethingCool” button in your app is all distorted, and you need to go and check the TIFFs and see that one of the reps is screwy and fix it. These little pains are common to development but, often, we can avoid them.
Will Shipley wrote this in a post about implementing a gradient table view:
Which leads me to an important rule: In general, if I can replace an image with code, I do so. This is not something that’s intuitive, and it took me many years to decide that this is the best policy. Code is easier to change and understand than images are.
I read that and thought, “Shipley, my boy, you’re just so wrong”. A large chunk of my career has been spent writing video games. If you’ve ever looked around inside a video game, there are resources that control pretty much everything. One of the guiding philosophies of modern game engine design is to push as many decisions out of the code and into the hands of the designer as possible. Given this approach, the current resource-heavy style of Mac app development makes perfect sense. Until you consider the context. On the games I worked on, artists could outnumber the programmers 5 to 1. Apple, whose apps are littered with resources, has the budget to employ however many artists it needs. Indie Mac Shop, Inc. does not. Indie Mac Shop has, likely, one or maybe two programmers and some contact with a part-time artist who may well be on contract. Perhaps they do the artwork themselves, taking time away from their coding. Either way, art, in the indie dev world, is harder to come by than code. Shipley, of course, is speaking from the perspective of a programmer whose career has been spent at a relatively small shop, and his advice is spot on for that environment. So it turns out he’s onto something. Let’s see if I can save a little face by extending the concept he puts forward.
Drawing involves two things: the resources used by graphical elements and the logic required to arrange these elements the right way. Resources include the colors, images and perhaps font information used in the element. The logic dictates in what order and where we employ these resources. If we call our resources “data” and our logic “code” it becomes clear that we can encapsulate our drawing away into a nice, reusable and, importantly, polymorphic class.
In 3D graphics a “shader” is used to define and control how a surface is rendered. The basic principle is that for each point on a surface, inputs are given to the shader and it returns the final color for that point. Typical shaders involve looking up a pixel from a texture map, modifying that pixel’s color based upon light intensity and direction, and perhaps adding a brightening color for a specular highlight (shiny!). This method has a valuable trait: the renderer doesn’t know or care what the shader is actually doing. All it does is offer the shader a ton of information (point on the surface, direction to the light, ambient light conditions, stuff like that) and the shader does its thing and spits out an answer. Internally the shader may be using a texture map or it may be doing some fancy math to fake a marble texture. It just doesn’t matter; the result is the same.
It’s a good idea; let’s steal it.
Thanks to the flexibility of Objective-C and Cocoa we can apply this approach to our drawing and build an elegant encapsulation. Our recipe involves Key Value Coding and NSPredicates.
A Cocoa Shader
Our first stab at a Cocoa shader might be as simple as:
@interface KBShader : NSObject
- (void) drawInRect: (NSRect) rect;
@end
We can subclass that and add NSImages or whatever, and when we need to draw we’ll just call drawInRect: on our instance. Simple, but it doesn’t quite deliver the flexibility we’re aiming for. Ideally we want a way to pass information into the shader so it can make decisions based on the environment it’s being used in. Like knowing the ambient light level in a room, we’d like to know if our button is disabled or not. So we change our drawing method to look like this:
- (void) drawInRect: (NSRect) rect input: (id) input;
We pass input as an id rather than an NSDictionary. A dictionary would have been great and could easily have encapsulated any state information we’d like to send to the shader. Its shortcoming, however, is that we’d need to maintain its state and make sure it matched the state of the element we’re trying to draw. That’s just error-prone and boring code to write, so we won’t do it. Instead we leverage our nifty Cocoa Key Value Coding methods.
Using KVC we can treat any object as if it were a dictionary. Even better, by using key paths we can traverse the objects its accessors return and dig deep into its guts. So it’s a great fit for our shader – lots of environment information available to us in an easy-to-get-to way. Using KVC we could write a shader implementation like this:
- (void) drawInRect: (NSRect) rect input: (id) input
{
    if ( [[input valueForKey: @"isEnabled"] boolValue] == YES )
        [[NSColor redColor] set];
    else
        [[NSColor blueColor] set];
    NSRectFill( rect );
}
Now that’s pretty decent. We can change the way we draw based upon the state of the object we’re supposed to be drawing. We can, of course, query a lot more attributes than just isEnabled.
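For instance, key paths let the shader reach through the control into its cell. A hedged sketch (the key paths here are assumptions about what the input object responds to – an NSButton answers isEnabled, and its cell answers isHighlighted through the "cell" key path):

```objc
// Sketch only: picks a fill color from two pieces of control state.
- (void) drawInRect: (NSRect) rect input: (id) input
{
    BOOL enabled     = [[input valueForKey: @"isEnabled"] boolValue];
    BOOL highlighted = [[input valueForKeyPath: @"cell.isHighlighted"] boolValue];

    if ( !enabled )
        [[NSColor grayColor] set];
    else if ( highlighted )
        [[NSColor darkGrayColor] set];
    else
        [[NSColor whiteColor] set];
    NSRectFill( rect );
}
```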
I always liked how you could ask NSImage for an image just by calling imageNamed:. Shaders deserve the same treatment. It’s pretty easy – add a name ivar to the shader and some class methods to register and deregister a shader with a given name and you’re done. Then you can just say: [view setShader: [KBShader shaderNamed: @"CurrentEnVogueGlossEffect"]]; and you’re good.
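A minimal sketch of that registry – the category, static dictionary, and method names are my own invention, not anything Cocoa provides:

```objc
// Maps names to shader instances, in the spirit of +[NSImage imageNamed:].
static NSMutableDictionary *sNamedShaders = nil;

@implementation KBShader (KBNamedShaders)

+ (void) setShader: (KBShader*) shader forName: (NSString*) name
{
    if ( sNamedShaders == nil )
        sNamedShaders = [[NSMutableDictionary alloc] init];
    [sNamedShaders setObject: shader forKey: name];
}

+ (KBShader*) shaderNamed: (NSString*) name
{
    return [sNamedShaders objectForKey: name];
}

@end
```

Register once at launch with setShader:forName: and look shaders up by name everywhere else.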
Cocoa Shader Tree
There’s still room for improvement though. We don’t necessarily want our shaders to be tied to one control. We’d like a ‘WhiteGloss’ shader, a ‘GrayGloss’ shader and maybe a ‘FlatWhite’ shader or something. And we’d really like to be able to say: “Button, when you’re enabled use the ‘WhiteGloss’ shader; when you’re pressed use ‘GrayGloss’”. We could do that by nesting some ifs and deferring the call to other shaders, but there’s a better way.
What we want is for our shaders to be able to decide whether they want to draw or not based on what’s been input. Cocoa has a nice way of giving us YES or NO answers based on a complex chain of tests – NSPredicate. Let’s use it for something better than some namby-pamby Core Data query – we’ll use it to draw something shiny.
(Just as a disclaimer, don’t take this code literally – I’m not going to write the inits and deallocs because they’re boilerplate and distracting.)
@interface KBConditionalShader : KBShader
- (id) initWithPredicate: (NSPredicate*) predicate
                ifShader: (KBShader*) yesShader
              elseShader: (KBShader*) noShader;
@end

@implementation KBConditionalShader

- (void) drawInRect: (NSRect) rect input: (id) input
{
    if ( [_predicate evaluateWithObject: input] == YES )
        [_yesShader drawInRect: rect input: input];
    else
        [_noShader drawInRect: rect input: input];
}

@end
Now we’ve got a shader that doesn’t actually draw – it does control flow based on a test of the input. That’s kind of cool, but so what? Let’s expand on that control flow idea.
@interface KBShaderList : KBShader
- (id) initWithShaders: (NSArray*) shaders;
@end

@implementation KBShaderList

- (void) drawInRect: (NSRect) rect input: (id) input
{
    unsigned currentShaderIndex, numberOfShaders;

    numberOfShaders = [_shaders count];
    for ( currentShaderIndex = 0; currentShaderIndex < numberOfShaders; currentShaderIndex++ )
        [[_shaders objectAtIndex: currentShaderIndex] drawInRect: rect input: input];
}

@end
Now we can string shaders together. Why on earth would we want this? Well, let’s say we want a nice gradient background with an image in the top left-hand corner. We can reuse our gradient background shader and our draw-an-image shader and just chain them together. And we’ve got a conditional shader too…
KBConditionalShader *conditionalBackground =
    [KBConditionalShader shaderWithPredicate: [NSPredicate predicateWithFormat: @"(isEnabled==YES)"]
                                    ifShader: [KBShader shaderNamed: kKBBlueGradientShader]
                                  elseShader: [KBShader shaderNamed: kKBGrayGradientShader]];

KBShaderList *shader = [KBShaderList shaderWithShaders:
    [NSArray arrayWithObjects: conditionalBackground,
                               [KBShader shaderNamed: kImageBadgeShader],
                               nil]];
The resulting shader will draw the background in blue or gray depending on whether the input object is enabled or not. Regardless of state, it will draw an image badge. Conditional shaders and shader lists can be nested as much as you’d like, so you can make some pretty ridiculously complicated shaders if you really want.
Shaders offer some real benefits over bundling the drawing code up with whatever element you’re drawing. First the obvious one – they consolidate the drawing code into one place, and it can be shared easily by any code that needs it.
Second is abstraction. The shader draws; that’s what it does. If you’re using CTGradient today and want to switch to something else in the future, you change it in your gradient shader and you’re done. You can even use different gradient shader implementations depending on platform if you want. Uh, you know, if that was something that’d make sense.
Third, they offer optimization and profiling opportunities. You can instrument your shader and discover that each time it’s asked to draw, it’s drawing into a rect of the same size. With a shader you can cache the result and just draw the cached bitmap instead.
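As a sketch of that caching idea (the class name and ivars are mine, and a real version would also need to invalidate the cache when the input’s state changes, not just when the size does):

```objc
// Wraps another shader and caches its output in an NSImage,
// re-rendering only when the requested size changes.
@interface KBCachedShader : KBShader
{
    KBShader *_shader;
    NSImage  *_cachedImage;
}
@end

@implementation KBCachedShader

- (void) drawInRect: (NSRect) rect input: (id) input
{
    if ( _cachedImage == nil || !NSEqualSizes( [_cachedImage size], rect.size ) )
    {
        [_cachedImage release];
        _cachedImage = [[NSImage alloc] initWithSize: rect.size];
        [_cachedImage lockFocus];
        [_shader drawInRect: NSMakeRect( 0, 0, rect.size.width, rect.size.height )
                      input: input];
        [_cachedImage unlockFocus];
    }
    [_cachedImage drawInRect: rect
                    fromRect: NSZeroRect
                   operation: NSCompositeSourceOver
                    fraction: 1.0];
}

@end
```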
And there’s an immense flexibility behind the shader model. Once you can set a shader for a view, it’s relatively easy to write a shader picker that can dynamically change the shader. No, really, honest, I’m not talking about skinning, even though it’s a natural fit; it’s a valuable tool to have during development so you can quickly experiment with different looks. Or consider a ‘debug’ shader that, when asked to draw, dumps a bunch of control state to the console.
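That debug shader is about as simple as shaders get. A sketch (the isEnabled key is, again, an assumption about what the input responds to):

```objc
// Logs control state instead of drawing anything useful,
// then frames the rect so you can see where it would have drawn.
@interface KBDebugShader : KBShader
@end

@implementation KBDebugShader

- (void) drawInRect: (NSRect) rect input: (id) input
{
    NSLog( @"shader input: %@ rect: %@ enabled: %@",
           input,
           NSStringFromRect( rect ),
           [input valueForKey: @"isEnabled"] );
    [[NSColor magentaColor] set];
    NSFrameRect( rect );
}

@end
```

Drop it in anywhere via the shader picker when a control is misbehaving.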
And, saving a big one for last, it’s a big step towards making Resolution Independence a far lesser burden. Some resources are inescapable (and for what it’s worth – multi-rep TIFF files are what the Apple boys recommend), but many of the images I see when I’m poking around could be replaced by shaders. When Apple changes some random UI element and you want to keep up, it’s easier to pull out the color picker and change some RGB values in a shader than it is to ask an artist to whip something up in Photoshop. Your code will scale, while the artist would be stuck redoing x number of reps. Be kind to artists; they’ve got better things to do.