
"Good enough" is good enough! by Alex Martelli

* Description *

Our culture's default assumption is that everybody should always be striving for perfection -- settling for anything less is seen as a regrettable compromise. This is wrong in most software development situations: focus instead on keeping the software simple, just "good enough", launch it early, and iteratively improve, enhance, and refactor it. This is how software success is achieved!

* Abstract *

In a 1989 keynote speech at a Lisp conference, Richard Gabriel had a "light relief" section where he caricatured a software development approach he called "worse is better" (AKA the "New Jersey approach") and contrasted it with what he called "the right thing" (AKA the "MIT/Stanford approach")... and, despite the caricature, he reluctantly concluded that the NJ approach was the more viable one, identifying several of the actual reasons: speed of development, less monolithic designs, systems more easily adaptable to a variety of uses (including changes in the underlying requirements), ease of gradual incremental improvement over time, and so on.

The debate hasn't died down since (Gabriel himself contributing richly to both sides (!), sometimes under the pseudonym "Nickieben Bourbaki"). My favorite Gabriel quote is "The right-thing philosophy is based on letting the experts do their expert thing all the way to the end before users get their hands on it [snip] Worse-is-better takes advantage of the natural advantages of incremental development. Incremental improvement satisfies some human needs".

However, while the debate is still raging, reality has steadily been shifting away from "the right thing" (inherently "Cathedral"-centralized, with "Big Design Up Front" a must, conceived with academia and large firms in mind, and quite unsuited to always-shifting real-world requirements) and towards "the NJ approach" (suited to "Bazaar"-like structures, agile and iterative enhancement, dynamic start-ups and independent developers, in a world of always-shifting specs).

In this talk, I come down strongly on the side of "the NJ approach", illustrating and defending it on both philosophical and pragmatic grounds.

I draw technical examples from several areas where the systems that won the "mind-share battles" did so by focusing on pragmatic simplicity ("good enough") at the expense of theoretical refinement and completeness (the quest for elusive perfection), leading to large ecosystems of developers bent on incremental improvement -- the TCP/IP approach to networking contrasted with ISO/OSI, the HTTP/HTML approach to hypertext contrasted with Xanadu, and early Unix's simplistic (but OK) approach to interrupted system calls versus Multics' and ITS's perfectionism.
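
To make the last example concrete, here is a small Python sketch (an illustrative sketch only; the function name is arbitrary) of the classic retry idiom that early Unix's "fail with EINTR" choice pushes onto callers:

    import os

    def read_retry(fd, nbytes):
        # Early Unix chose to let a signal make a system call fail with EINTR
        # and have the caller retry, rather than transparently restarting the
        # call in the kernel: simpler for the OS, a small burden for programs.
        while True:
            try:
                return os.read(fd, nbytes)
            except InterruptedError:  # errno EINTR: a signal arrived mid-call
                continue              # just retry the call

Incidentally, since Python 3.5 (PEP 475) the interpreter retries such interrupted calls automatically, itself a nice case of incremental improvement.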

Within Python, I show how metaclasses' quest for completeness yielded excessive complexity (and 80% of their intended uses can now be obtained via class decorators for 20% of the complexity), and how well incremental improvement worked instead in areas such as sorting, generators, and "guaranteed"-finalization semantics.
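
For a concrete (and merely illustrative) sketch of that 80/20 point, consider a simple class-registration task, done first with a metaclass and then with a class decorator; the registry and class names here are arbitrary examples, not code from the talk:

    registry = {}

    class RegisterMeta(type):
        # metaclass route: hook into class creation itself
        def __new__(mcs, name, bases, namespace):
            cls = super().__new__(mcs, name, bases, namespace)
            registry[name] = cls
            return cls

    class PluginA(metaclass=RegisterMeta):
        pass

    def register(cls):
        # class-decorator route: a plain function applied after the class exists
        registry[cls.__name__] = cls
        return cls

    @register
    class PluginB:
        pass

    # both classes end up in registry: {'PluginA': ..., 'PluginB': ...}

The decorator does the common job with far less machinery; the metaclass remains necessary only when class creation itself must be altered, e.g. to affect all subclasses automatically.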

The talk is not about lowering expectations: our dreams must stay big, bigger than we can achieve. It's about the best practical track towards making such dreams reality -- think grandiose, act humble. "Rightly traced and well ordered: what of that? // Speak as they please, what does the mountain care? // Ah, but a man's reach should exceed his grasp // Or what's a heaven for? All's silver-grey // Placid and perfect with my art: the worse!"

This talk is probably not perfect, but I do think it's good enough.

* Speaker *

Author of "Python in a Nutshell", co-author of "Python Cookbook", frequent speaker at Python conferences, once-prolific contributor to StackOverflow, and recipient of the 2006 Frank Willison Memorial Award for contributions to Python, Alex currently works as Senior Staff Engineer at Google.


  • Alex

    I'm not sure I learned anything new from the presentation. Was the audience expected to be junior-ish?

    June 12, 2014

    • Alex

      Jerry, in your opinion, does memory safety include fully controllable allocation/deallocation or, as an alternative, automatic memory defragmentation? If you run out of memory due to fragmentation (just that, no leaks) after a period of time, you may have serious reliability/security issues.

      June 20, 2014

    • Jerry M.

Alex, by memory safety I mean at least catching all memory problems, including allocation failures, even if that thread then has to exit (hopefully gracefully, and maybe gets restarted or failed over). The worst thing is to press on, causing potential damage and opening exploitable holes (like Heartbleed). To run for a long time, a process mustn't fragment memory. Compaction is one approach. Many real-time threads do all their dynamic allocation from fixed pools of recyclable nodes to avoid fragmentation and to get predictable allocation times.

      June 20, 2014
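
A minimal Python sketch of the fixed-pool pattern Jerry describes; the FixedPool name and API are illustrative assumptions, not anything from the discussion above:

    from collections import deque

    class FixedPool:
        # Preallocate a fixed number of nodes and recycle them: every
        # acquire/release is O(1) and bounded, and the pool never grows,
        # so a long-running thread avoids fragmenting the heap on this path.
        def __init__(self, node_factory, size):
            self._free = deque(node_factory() for _ in range(size))

        def acquire(self):
            if not self._free:
                raise MemoryError("pool exhausted")  # fail fast; caller handles it
            return self._free.popleft()

        def release(self, node):
            self._free.append(node)  # recycle, never free

Usage: pool = FixedPool(dict, 1024); node = pool.acquire(); ...; pool.release(node).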
