“Copland 2010” refers to a series of articles, written in 2005, and a recent episode of Hypercritical, in which John Siracusa argues that at some point down the line Objective-C and the Cocoa APIs are going to have to be replaced with more modern technologies. Failing to address this sooner rather than later, as was the case with the Classic Mac OS, will leave Apple in a vulnerable, perhaps insurmountable, position relative to their competitors who have adopted “modernized” development technologies. Apple’s last-ditch attempt to bring modern technologies to the Classic Mac OS was codenamed Copland. Siracusa hints at the severity of the situation, if his argument turns out to be sound, by calling his projected development platform nadir “Copland 2010”.
In broad strokes: Siracusa is right and Siracusa is wrong. There are two parts to the Copland 2010 argument — a statement of the problem and a call to action with some suggestions. In the broadest sense it’s basically impossible to seriously debate the merits of the problem as he lays it out. Undoubtedly, at some point, Objective-C and the Cocoa APIs will be in the dust-bin of history. At issue, however, is the time scale. It’s unfair to hold Siracusa to 2010; in Avoiding Copland 2010, part 2 he writes, “I actually think the year 2010 is a bit too early, but I didn’t want to use a date that was too far in the future”. We’ll be fair and take 2010 to mean “near future”, in computer industry terms. Let’s call it between five and fifteen years.
I don’t think it’s at all likely that we’ll see the utility of Objective-C and the Cocoa APIs become totally eroded over the next ten years (the argument was first made in 2005, so we’ve used up five years of the near future). I do think it’s likely that during that period it will become more and more apparent what the next big thing is — where the industry is going. I don’t, however, think that any of the popular “modern” languages fit any better with the future of computing than Objective-C does. We’ll revisit this point.
Siracusa, during his Dark Age of Objective-C episode of Hypercritical, lays out the following criteria for a modern language:
- Automatic memory management
- Native strings
- Regular Expressions native (but a library is acceptable)
- Native object system
- Named parameters
- Succinct syntax for common operations
- Acknowledgement of concurrency
- Single vendor driving the development
I agree with most of these, as I think many developers would. We could quibble over the weights we assign to each but, by and large, that’s a decent list. Let’s use it as a score-card for modern Objective-C.
✓ Automatic memory management
Objective-C 2.0 can be garbage collected.
✓ Native strings
Objective-C has @"" strings which are mapped, via a compiler option, to NSString instances. If the requirement is for special operators to work on strings then I’d argue that’s actually an issue of not having a “succinct syntax”.
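To illustrate the point, a constant string isn’t merely a character array — it’s a full NSString instance and can be sent messages directly (a minimal sketch; the variable names are mine):

```objc
// String literals are NSString instances, so they respond to
// the full NSString API with no conversion step.
NSString *name  = @"Copland";
NSString *upper = [name uppercaseString];
NSString *label = [NSString stringWithFormat:@"%@ 2010", name];
```

What you don’t get is operator syntax like `a + b` for concatenation — which, as argued above, is a succinctness complaint rather than a native-strings complaint.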
½✓ Regular Expressions native (but a library is acceptable)
Regular expressions are not native to Objective-C. There are third-party libraries available and, as of iOS 4.0, Foundation itself includes a regular expression facility.
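The Foundation facility in question is NSRegularExpression. Usage looks roughly like this (a sketch; the pattern and input are my own examples):

```objc
// NSRegularExpression, added to Foundation in iOS 4.0.
NSString *input = @"Copland 2010";
NSError *error = nil;
NSRegularExpression *regex =
    [NSRegularExpression regularExpressionWithPattern:@"\\d+"
                                              options:0
                                                error:&error];
NSUInteger matches =
    [regex numberOfMatchesInString:input
                           options:0
                             range:NSMakeRange(0, [input length])];
// matches is 1: the single run of digits, "2010".
```

Hence the half point: the capability is there in the frameworks, but there’s no regex literal or operator in the language itself.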
✓ Native object system
½✓ Named parameters
The Objective-C message sending syntax provides half a solution to this feature. Ideally, one would want to simply name their parameters, omit any that aren’t required in the call, and have it all just work. While Objective-C doesn’t go this far it does go far enough to significantly increase the readability of any given message send.
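A concrete example of what the message syntax buys you — each argument is labelled at the call site by its piece of the selector:

```objc
// The selector interleaves its name with its arguments, so the
// role of every argument is visible where the message is sent.
NSMutableString *s = [NSMutableString stringWithString:@"Copland 2010"];
[s replaceOccurrencesOfString:@"2010"
                   withString:@"2020"
                      options:NSLiteralSearch
                        range:NSMakeRange(0, [s length])];
```

What it doesn’t buy you is true named parameters: the arguments can’t be reordered or omitted, because the labels are part of the selector itself — hence the half point.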
✗ Succinct syntax for common operations
This is a nice feature that has the potential for utter ruin. Too far down this road and one re-creates the C++ operator overloading insanity. Not far enough and we’re stuck typing an awful lot of code. Objective-C has many merits but being succinct is not among them. I would be happy to see some syntactic sugar added in future versions so that boxing basic types and creating arrays and dictionaries required less code. While verbose code makes it far more clear what is happening, too much verbosity can distract from the intention by flooding the page with needless detail.
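For a sense of what the complaint looks like in practice, here is the current ceremony for boxing a number and building a small array and dictionary (current Objective-C 2.0 idiom, with my own example values):

```objc
// Boxing a basic type and building collections today: every value
// goes through an explicit class method, with a trailing nil sentinel.
NSNumber *count = [NSNumber numberWithInt:42];
NSArray *items = [NSArray arrayWithObjects:@"a", @"b", @"c", nil];
NSDictionary *info = [NSDictionary dictionaryWithObjectsAndKeys:
                         count, @"count",
                         items, @"items", nil];
```

The sugar I have in mind would shrink each of these to a line a fraction of the length without changing semantics — exactly the kind of change that could be layered on without invalidating older code.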
✓ Acknowledgement of concurrency
With the addition of blocks and Grand Central Dispatch, Objective-C / Cocoa not only acknowledges concurrency but is ahead of most of the pack on this one. Yes, it’s true — GCD and blocks are not actually Objective-C additions; they are built into the C layer. One could argue that Objective-C itself doesn’t address concurrency directly. I’d point to @synchronized and then tell you that being able to pick up on improvements at the C level is one of the great strengths of Objective-C.
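The canonical shape of blocks-plus-GCD code — do the work on a background queue, hop back to the main queue for the UI (a sketch; `expensiveWork` and `updateUI` are hypothetical helpers standing in for real application code):

```objc
// Submit a block to a global concurrent queue, then bounce the
// result back to the main queue for presentation.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSData *result = expensiveWork();      // hypothetical background work
    dispatch_async(dispatch_get_main_queue(), ^{
        updateUI(result);                  // hypothetical main-thread update
    });
});
```

That this reads as two nested closures rather than a thread class, a run loop and a callback registration is precisely the “ahead of the pack” claim.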
✓ Single vendor driving the development
For all intents and purposes the only entity that matters in the Objective-C / OpenStep / Cocoa world is Apple. They are the tastemakers, the architects and the largest consumer.
So, modern Objective-C scores 6 points out of a possible 8. Not too shabby. The only area it totally missed scoring in was succinctness, and the fix there isn’t one of doom and gloom; it’s one of implementing some relatively simple syntactic sugar. Without knowing for a fact, I’m quite sure this sugar doesn’t exist because Apple largely tries to keep Objective-C distinct from the Cocoa frameworks. There’s not a lot added to Objective-C that’s specifically Cocoa-flavoured — @properties with retain / copy semantics are the only thing that comes to mind.
Since Copland 2010 was originally written Objective-C has gained, by adding concurrency and regular expression support, 1½ points on the Siracusa Scale and I believe it is well positioned to pick up another half, or even full, point by adding some syntactic sugar that wouldn’t invalidate older code. Scoring 6 out of 8 now with a good clean shot at 6½ to 7 within a few years is hardly a disaster in slow-motion.
Consider the troubles the Classic Mac OS had. It had no memory protection, so any process had, and in many cases relied upon, the ability to read and write other processes’ memory spaces. It had no pre-emptive multitasking, which meant that any process could refuse, either intentionally or via a bug, to relinquish control of the computer back to the operating system and other applications. These are two fundamental underpinnings of modern computing; Classic Mac OS didn’t have them and, worse, had, explicitly or implicitly, promised developers that they could depend on the traits and symptoms exposed under such a model.
Objective-C is not so similarly hemmed in on such vital ground. For the most part Objective-C has managed to take on board what I would argue are the two key technologies for the next few years: garbage collection and concurrency. Now, one can argue how usable garbage collection is right now in relation to the frameworks and existing code, and one could argue that GCD isn’t a first-class concurrency abstraction. And, sure, perhaps there are some points to be scored in either of those arguments. But it’s possible to sit down and write an Objective-C application today that is garbage collected and highly concurrent by leveraging blocks and GCD. With the controversial dot-notation one could argue that even the succinct syntax deficit is being addressed. Certainly @property and @synthesize and the new ability to not even declare instance variables are all indicative of a direction that reduces the amount of boilerplate a developer is required to write.
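To make the boilerplate-reduction point concrete (a minimal sketch; the Person class is my own example):

```objc
@interface Person : NSObject
// No explicit instance variable declared; with the modern runtime
// the compiler can synthesize the backing storage.
@property (nonatomic, copy) NSString *name;
@end

@implementation Person
@synthesize name;   // generates -name and -setName: accessors
@end

// Dot-notation is sugar for the accessor message sends:
//   person.name = @"Ada";      is   [person setName:@"Ada"];
//   NSString *n = person.name; is   [person name];
```

Compare that with the pre-@property world of hand-written getters, setters with retain/copy semantics, and matching ivar declarations, and the direction of travel is clear.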
Five years in from the near future of 2005 and we find Objective-C faring quite well, and it’s become more apparent where further improvements can come from. Given all of this do I predict a crisis within the decade? No, I don’t. A decade is a long, long time in computer industry terms and I’ve no doubt I’ll be surprised by many of the changes to come — I just don’t believe that Objective-C has any obvious handicap that will prevent it from taking advantage of them. Indeed it’s possible, as with the iPhone, that Objective-C may find itself being able to cover enough of the sweet spot between lower-level pedantry and higher-level abstraction that it remains suitable for use in surprising ways.
Let’s set the argument that Objective-C is still viable aside for now. Let’s take the Siracusa position and accept that, eventually, we will need to transition off Objective-C. Because one day we will. The question then becomes — what do we do now?
Siracusa argues for a move to a more “dynamic” language. Probably something interpreted, something that scores an 8 out of 8 on the Siracusa Scale, something that is not bridged to Cocoa APIs but is the basis of those APIs. I don’t disagree, I think such a thing could be nice. Where we disagree is that I see Objective-C providing much of that right now and seeing potential in the future while he sees a need for a re-write.
When Windows 95 was introduced it was, despite the ads Apple ran, the end of the road for the Mac OS. Windows 95 had many, many problems. I wrote a lot of lower-level code that had to run under Windows 95 — believe me, it had problems and would crash and burn and reboot and be a bitch. At home I preferred OS/2 and then Windows NT 4. Classic Mac OS had serious fundamental problems that were insurmountable fifteen years after the launch of the Macintosh. While there’s a lot of nail-biting about this I see it as a real achievement — the Classic Mac OS, designed in 1984 as a cheaper, faster and more nimble version of the Lisa and the Xerox stuff, managed to last until 2002. A feat so impressive that I’ll go out on a limb and say it was the only piece of software that ever got a coffin and a funeral service.
By the late ’90s it was abundantly clear where computers were going: memory protection between processes, pre-emptive multitasking, advanced graphics capabilities and the Internet as a constant. Classic Mac OS was poorly positioned to provide any of these things. Many popular applications, and indeed the way the operating system switched tasks, relied upon there being no memory protection. There were applications that could be made fantastically responsive, such as audio apps, because they could deny any other process access to the CPU while they were busy. Despite having been the early leader in graphics, by the late ’90s it had become apparent that the QuickDraw model was up against its limits and would require a reboot. With regard to 3D graphics the Macintosh of that era was woefully unprepared to deal with the revolution in GPU acceleration and rendering. The Classic Mac OS was hitting the wall on almost every front all at once.
This was apparent to Apple at the time. They’d started a number of projects but Copland is the most famous among the Mac crowd.
Copland was an attempt to fix the sins of the Classic Mac OS while retaining as much backwards compatibility as possible. Copland had a form of protected memory, preemptive multitasking, a new QuickDraw GX, and on and on. It was a full frontal assault on each and every bullet point that could be counted against Classic Mac OS.
It was a total failure and a largely unmitigated disaster. Its failure was so bad that at one point Apple looked into licensing Windows NT, then looked at buying BeOS and, finally, bought NeXT and that’s where we are today.
Siracusa worries that Objective-C is the weak link in today’s Apple technology stack. One day I suppose he may well be right. Here’s the thing though, and it’s the second part of the Copland 2010 argument — Siracusa suggests a complete, from the language-ground-up, re-write of the frameworks, and by implication the entire user level of the operating system and all third-party applications. If anything screams “Copland” to me it’s the suggestion that we all engage in a ground up reboot without really knowing where we’re going.
The Siracusa Scale is decent but, as I mentioned previously, I believe the weighting of each factor is up for debate. If a language lets one write code that runs asynchronously across dozens of cores yet demands less succinct syntax, which factor is to be favoured? I have an opinion on that and, as you may’ve guessed by how much typing I’ve done to make this point, I value the ease of leveraging multicore assets over the brevity of my code, especially when such brevity may simply make my code more obtuse.
Given what I’ve seen I’d predict that computers are going to become more heavily multi-core and distributed. Just as betting on Quartz in 1999 seemed like a given because it could eventually be backed by GPU textures (see Core Animation for a re-do once the realities became apparent), betting on many cores seems obvious now. I don’t believe that the Siracusa Scale weighs this probability correctly. Simple, accessible, multi-core code will, I believe, be the most important facet of language and API design in the near future — starting now, give me fifteen years. It is my belief that Objective-C, with garbage collection, sitting atop a blocks- and GCD-enabled C substrate, is in a terrific position to make the most of what is to come.
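“Simple, accessible, multi-core code” is not hypothetical — GCD already ships a data-parallel primitive. A sketch of fanning a loop out across cores (where `transform`, `input` and `results` are hypothetical stand-ins for real per-item work and storage):

```objc
// dispatch_apply runs the iterations of a loop concurrently on a
// global queue and returns when all iterations have completed.
dispatch_queue_t q =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply(count, q, ^(size_t i) {
    results[i] = transform(input[i]);   // independent per-item work
});
```

One line of ceremony to parallelize a loop is exactly the sort of accessibility I’m arguing will dominate language and API design.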
The Copland 2010 argument presupposes that the problem space is going to change drastically below the technology developed to service it. By that I mean that the Classic Mac OS was developed to service the requirements of a single user working in a single application at a time. Computers evolved and the assumptions behind the Classic Mac OS changed drastically. Eventually there was enough change that, by 2002 according to Apple, it was dead.
Here’s the thing with Objective-C and the Cocoa APIs — I believe that UIKit and AppKit (well, mostly) are well enough abstracted that they can serve their purpose for many more years. Ultimately, when dealing with an interface API the permutations are limited to what can be expected from a given user control. Making fancier and more syntactically efficient ways of expressing the same behaviours encounters the problem of diminishing returns. Back in the Win32 / Classic Mac OS / Presentation Manager days this might have mattered. The Cocoa APIs basically ask the developer to fill out a form when they’re asked to do so. I’m not sure how much higher level and more abstracted one can get without relinquishing control of how the data is displayed. I think Cocoa has largely nailed a sweet spot between the en-vogue 4GL languages of years gone by and the explicit configuration of other approaches.
So, in the end, while I appreciate the thinking behind Copland 2010 I don’t believe it’s quite the issue Siracusa believes it is. Objective-C continues to evolve, and in directions I believe will be increasingly important in the future. I don’t believe we’re anywhere near the level of crisis that Apple hit with Classic Mac OS and I don’t believe that a total second-system re-write without a clear goal is the best prescription for the platform.
Ultimately, I believe the future of languages, APIs and computing is likely to move towards massive parallelism. I’d even predict that parallelism by task is the ultimate future — compute tasks will be routed to the component best suited to handle the request. Massively parallel computations might be handed to a GPU-style device, simple math and branching handled by a CPU core, and location refinement handled by something entirely different. Siracusa argues that abstraction is the all-consuming beast of computer science; I think he’s mostly correct, but I can’t help but feel he’s more worried about abstracting yesterday’s issues than tomorrow’s.
My money is on “asynchronous encapsulation of independent computation”. Now that is an abstraction — but it’s a more refined guess as to what the future might hold, and it’s one that I believe leaves Objective-C and Cocoa in good shape for the near future.