Views: Post Processed

Visual Cues

If your application presents data to the user, there are some valuable tools you’ve been leaving on the table because they have traditionally been difficult to achieve in a graphical interface. Mac OS X has some great graphics technology that changes the game – and I don’t mean Core Animation.

Core Image – Not Just For Images Anymore

Core Image is incredible. The amount of complexity it abstracts behind a straightforward interface is staggering. I may be a little more appreciative of it since I’ve had occasion to do a lot of low-level graphics programming directly against OpenGL or DirectX – Core Image exposes, with a minimum of fuss, all the best parts of playing with a modern pixel pipeline.

The basics of Core Image are dead simple – you provide a recipe for how you’d like your image cooked and it’ll give you a result. The recipes take the form of a few simple objects strung together, with some settings tweaked. If you’ve played with a modern Mac OS X image editor like Acorn (from Flying Meat), what you see in the filters window (developed by Rogue Sheep) is pretty much how the underlying implementation works. (Yes, I did just pimp friends.) So if you’ve got a basic grasp of stringing filters together in a graphics program, you’re in good shape to do the same in your own application – something like the sketch below.

“But I don’t write a graphics app!”

Well, yes, in fact, you do. A Mac OS X desktop application uses a graphical interface to communicate information to the user. One of the wonderful things about graphics is that you can communicate lots of information very quickly. Even better, the information doesn’t need to be accurate. That sounds preposterous – why on Earth would you want to show the user inaccurate information? Well, clearly, you don’t – but you don’t need to show exact information either. Your job is to provide an environment with enough graphical cues that the user can interact quickly and effectively with whatever data you’re presenting. If you’re not sold, look at scroll bars: the thumb shrinks in proportion to the size of the displayed information versus the total amount of information. They’re pretty close to accurate but not really – there’s a minimum size on that scroll bar thumb, so it breaks down for huge amounts of data – and the other thing is, really, you don’t care. You get the idea; a pixel here or there doesn’t matter to you. It’s the same for Dock icons bouncing on launch, or even the classic lying progress meter – none of these are precise, but they allow you, the human, to infer information quickly.

Humans are damn good at picking out important visual information rapidly. We’re good at it because otherwise 6,000 years ago we’d all have been eaten by Dinosaurs. We have some built-in cues that help us distinguish relevance: objects outside our focus are blurrier, and brightly colored objects draw our attention more than those which are muted and subdued. Every sighted person uses these cues daily to make sense of the vast amount of visual clutter we’re bombarded with.

“Where are you going with this?”

Core Image can be leveraged to provide your users with richer visual cues, helping them grasp the information you’re displaying more quickly. The graphics architecture of Mac OS X allows us to draw into different contexts – once we’ve got our data into a graphics context we can operate on it graphically, as if it were an image. We’re going to use this ability to apply some fancy post-process effects to our views, effects that add visual cues to the data represented. I’ll use an NSTableView subclass to demonstrate, but it’s important to recognize that this technique is applicable to any view – and may work even better with your own custom views.

Post Processed Table Views

The idea is simple – we’ll present a typical table view to the user but allow various ‘focusing’ effects to be applied to it. For each item in the table we’ll provide a degree indicating how much of the effect to apply. For a search results table, for example, we can blur results in proportion to their irrelevance: the less relevant a result, the blurrier it gets. Or, if we’re looking at time-based data, we could apply a sepia tone to older entries – a common visual cue indicating that an item has aged.

Implementation

We need three things to make this work: the table as it would have appeared without any post-processing, a set of floats indicating how much we’d like the effect applied, and a mask image we’ll use to graphically represent those floats. A good way to go about this on Mac OS X is to use CGLayers – they’re fast, efficient, and will be cached on the graphics card when possible. The process is reasonably simple: if we’re applying a post-process effect, allocate the viewLayer, into which we’ll render the base NSTableView; then allocate the viewMaskLayer, in which we’ll draw the various shades of gray representing how much to apply the effect. Since the effect is multiplicative, 1 (or white) represents a full application of the effect while 0 (or black) represents not applying the effect at all.
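In code, the lazy layer setup might look something like this sketch. The viewLayer and viewMaskLayer ivar names mirror the prose above; the helper’s name is my own, not necessarily what’s in the downloadable project.

    // Lazily create the two CGLayers, sized to the visible area. Creating
    // them from the destination context lets Quartz cache them on the
    // graphics card when it can.
    - (void)ensureLayersForContext:(CGContextRef)destination
    {
        if (viewLayer != NULL)
            return;

        NSRect visible = [self visibleRect];
        CGSize size = CGSizeMake(NSWidth(visible), NSHeight(visible));

        viewLayer = CGLayerCreateWithContext(destination, size, NULL);      // base rendering
        viewMaskLayer = CGLayerCreateWithContext(destination, size, NULL);  // grayscale mask
    }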

So far, so simple. The next step isn’t much harder either. First, set the current graphics context to the viewLayer and ask our superclass to draw; that way we capture what it would have drawn in our layer context rather than having it sent to the screen. Next we’ll need our table view data source to provide us with per-item blending information. To achieve this we set our mask layer as the current graphics context, then call -(CGFloat)tableView:(NSTableView *)tv postProcessFactorForRow:(NSInteger)row on our data source. We take the result, turn it into a shade of gray, and fill the row rect with that color. Once we’ve iterated over the visible rows we have a mask image containing bands of gray, each indicating how much to apply our filter.
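Here’s a sketch of those two capture passes inside -drawRect:, assuming the ivars and helper above plus the data source method just named; the compositePostProcessedViewInRect: call at the end is another hypothetical helper, shown in the next listing. Coordinate flipping between the view and the layer contexts is glossed over to keep the sketch short.

    - (void)drawRect:(NSRect)dirtyRect
    {
        CGContextRef screen = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
        [self ensureLayersForContext:screen];

        // Pass 1: point the current context at viewLayer so the superclass
        // draws into the layer instead of onto the screen.
        [NSGraphicsContext saveGraphicsState];
        [NSGraphicsContext setCurrentContext:
            [NSGraphicsContext graphicsContextWithGraphicsPort:CGLayerGetContext(viewLayer)
                                                       flipped:[self isFlipped]]];
        [super drawRect:dirtyRect];
        [NSGraphicsContext restoreGraphicsState];

        // Pass 2: paint one gray band per visible row into the mask layer.
        CGContextRef maskContext = CGLayerGetContext(viewMaskLayer);
        NSRange visibleRows = [self rowsInRect:[self visibleRect]];
        NSInteger row;
        for (row = visibleRows.location; row < NSMaxRange(visibleRows); row++) {
            CGFloat factor = [(id)[self dataSource] tableView:self
                                      postProcessFactorForRow:row];
            CGContextSetGrayFillColor(maskContext, factor, 1.0);  // 1 = white = full effect
            NSRect rowRect = [self rectOfRow:row];
            CGContextFillRect(maskContext, CGRectMake(NSMinX(rowRect), NSMinY(rowRect),
                                                      NSWidth(rowRect), NSHeight(rowRect)));
        }

        // Composite the two layers back to the screen – see the next listing.
        [self compositePostProcessedViewInRect:dirtyRect];
    }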

Finally we apply the post-process effect – we’ve got our source view and our mask, and we just use a CIBlendWithMask filter to combine the two. The effect itself is up to you – it’s irrelevant to this approach. The code provided shows a sepia tone, a blur, and a contrast adjustment, and provides a way to set custom effects. You can download the source code here, play around, and see what works for you. Remember – we happen to be using a table view here, but this technique is generally applicable: if your view presents a set of data then you can use it. It also works on Tiger – in fact, the code just recently got Objective-C 2.0-ized, but this was all originally written against Tiger without Core Animation in mind. If you want to use this on Tiger, go wild – you’ll just need to unwind the ObjC 2.0 @properties and the data source @protocol a bit, but it’s dead easy.
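The final composite might look like the sketch below – here with the sepia effect hard-coded, whereas the real code lets you swap effects in. The CIBlendWithMask keys are the real ones: where the mask is white the filtered image shows through, and where it’s black you get the untouched original.

    // Blend the filtered view over the original using the grayscale mask,
    // then draw the result into the current (screen) context. Intended to
    // be called at the end of -drawRect:.
    - (void)compositePostProcessedViewInRect:(NSRect)rect
    {
        CIImage *base = [CIImage imageWithCGLayer:viewLayer];
        CIImage *mask = [CIImage imageWithCGLayer:viewMaskLayer];

        CIFilter *effect = [CIFilter filterWithName:@"CISepiaTone"];
        [effect setValue:base forKey:@"inputImage"];

        CIFilter *blend = [CIFilter filterWithName:@"CIBlendWithMask"];
        [blend setValue:[effect valueForKey:@"outputImage"] forKey:@"inputImage"]; // where mask is white
        [blend setValue:base forKey:@"inputBackgroundImage"];                      // where mask is black
        [blend setValue:mask forKey:@"inputMaskImage"];

        CIImage *result = [blend valueForKey:@"outputImage"];
        [result drawInRect:[self visibleRect]
                  fromRect:NSRectFromCGRect([result extent])
                 operation:NSCompositeSourceOver
                  fraction:1.0];
    }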

Also – I’m not sure if I’ve made this explicit: if there’s code I’ve put up on my blog then it’s yours to use as you wish. That said, if you’re making an awesome UI for your Nuclear Armageddon machine, I’d prefer you look elsewhere – Nuclear is so passé.

Update

Shamed by Gus for not having any visuals, here’s a screenshot of a test app I just built. It shows a listing of the files on the Desktop – files of the same type as the selection are not blurred, files of a similar type (as determined by a UTI check) are blurred a little, and totally dissimilar files are blurred even more. The effect is overstated for the sake of the example; it’s the technique which is really of interest.

[Screenshot: KBPostProcessTableView.png]

And here’s a movie of it in action: KBPostProcessTableView.mov

I also updated the code and included the test app and a project. Download it here.
