All Bits Considered: data to information to knowledge

28 Jul 2014

Agile Methods for Disposable Software

I had a conversation the other day about agile software development with a friend of mine who is, by his own admission, a “real hardware engineer”. The focus was the Agile Manifesto:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

According to my friend, these statements are “either trivial, or naive, or plain wrong, or all of the above”.

NB: He also coined a hilarious term – ADHDDD, Attention Deficit Hyperactivity Disorder (ADHD) Driven Development – and wrote a manifesto worth reading, if only for its literary merits 🙂

Of course, as a certified Scrum Master, I beg to differ – but his attitude illustrates how a method (or the perception of it) can be taken ad absurdum. One of the common themes is that in the past software was “designed to last”, while Agile tells developers “don’t think – code”… As any agile practitioner knows, nothing could be further from the truth. Most of the COBOL written in the 1960s is still running – but is that a good thing? Borrowing from my friend’s area of expertise: electronic tubes can work instead of ASICs – and, arguably, with greater transparency (pun intended 🙂) – but would you really want them to? Agile development does not preclude solid software architecture – quite the opposite, it demands it! The fundamental quality attributes of the system are defined (and designed!) before a single line of code is written, and then – in close collaboration with the stakeholders – the rest of the requirements are fleshed out.

We are living in the age of disposable software. This is a trade-off we’ve made to get the “latest and greatest” now – and cheap (preferably, free!). Just take a look at how things have progressed since the 1980s: CPU, storage, RAM – all have plummeted in price, while packaged software prices went through the roof... (the Open Source movement only changes the cost model – instead of “paying for software” one starts “paying for support and additional features”).

It might appear that we are rushing development à la the Netscape experiment of the ’90s, where a barely compiled program was foisted upon unsuspecting customers to debug… but that is a superficial analogy. We have come a long way since those heady days. We have a number of tools and frameworks at our disposal to shorten the development cycles – requirements gathering, build, testing, release – until we reach the nirvana of “continuous build” (and just when we think things cannot get any better, we are ushered into the wondrous world of “continuous deployment”).

Is it better – or worse – than a top-heavy process that takes the time to spell out every requirement in minute detail? The answer is, of course: it depends. Working software delivered just in time for the market – albeit a buggy one – trumps a comprehensive but obsolete one every time!

“For to him that is joined to all the living there is hope: for a living dog is better than a dead lion.” – Ecclesiastes 9:4

16 Feb 2013

An excellent bit of advice from the trenches: Startup DNA

Yevgeniy Brikman, a staff engineer at LinkedIn who has had "front row seats at very successful start-ups" (LinkedIn, TripAdvisor), shares his observations and insights in a few (well, 106) slides here (http://www.slideshare.net/brikis98/startup-dna).

A very interesting insight into minimizing the "trial and error" cycle with dynamic languages (#35) and development methodologies/frameworks (#37): shortening the feedback loop allows for earlier maturity. Or, as he puts it, quoting Jeff Atwood (of Stack Overflow fame), "Speed of iteration beats quality of iteration". Of course, without the discipline and support of well-defined processes and frameworks, speed alone could become a runaway train wreck 🙂

The second observation that struck a chord with me is on slide #50: "If you cannot measure it, you cannot fix it". [NB: Of course, this has a long history (and an even longer attribution list), having been said at different times and in various contexts by Lord Kelvin, Bill Hewlett, Tom Peters and Peter Drucker.] The advice to "Measure Everything" should not be taken too far, though:

"Not everything that counts can be counted, and not everything that can be counted counts. "  Albert Einstein

but in the context of the presentation Yevgeniy's advice ought to be taken to heart: collect server metrics, database metrics, client-side metrics, profiling metrics, activity metrics, bug metrics, build metrics, test metrics, etc.!
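
What "measure everything" can look like in code – a minimal sketch of my own, not taken from the slides: the Metrics class and the metric names are invented for illustration, and a real project would more likely reach for an established metrics library. The point is that every interesting event gets a name and a number attached to it.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.LongAdder;

    // Minimal metrics collector: thread-safe named counters plus a helper
    // that times an operation and counts its invocations.
    public class Metrics {
        private static final Map<String, LongAdder> COUNTERS = new ConcurrentHashMap<>();

        public static void increment(String name) {
            COUNTERS.computeIfAbsent(name, k -> new LongAdder()).increment();
        }

        public static long timeMillis(String name, Runnable operation) {
            long start = System.nanoTime();
            operation.run();
            increment(name + ".calls");
            return (System.nanoTime() - start) / 1_000_000;
        }

        public static void main(String[] args) {
            long elapsed = timeMillis("db.query", () -> { /* pretend to hit the database */ });
            increment("page.view");
            System.out.println("db.query took " + elapsed + " ms");
            COUNTERS.forEach((name, counter) -> System.out.println(name + " = " + counter.sum()));
        }
    }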

And the last (but not least) observation on sharing is arguably the most important one (slides #93-102).

"The best way to learn is to teach" -

Frank Oppenheimer

12 Jul 2011

A SCRUM lesson from my horse

A couple of weeks ago I went horseback riding at Black Butte Ranch, Oregon. This was the second time I found myself trying to control a beast with a will of its own and a physical strength vastly exceeding my own. Only “control” is a poor choice of words – I learned quickly that I can’t control my horse (Big Jake was the name)… After an initial disagreement on several points, I found that I was able to influence it, and in the end we both finished the trail in one piece.

Then it occurred to me that this might be a good metaphor for Agile Software Development (SCRUM and such). You can’t control your team without killing off all that makes it agile; you have to work with it, finding just the right balance that will get you to the destination, alive.

13 Apr 2010

Agile Calculus

One of the Agile tenets is cycle shortening: the development cycle, the communication cycle, the integration cycle… It occurred to me that taking these to their logical conclusion results in the cycles becoming continuous.

Continuous development, continuous integration, continuous peer review (think XP)… All discrete activities meld into continuous processes.
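
One way to picture the metaphor – my own illustration, not part of the original post: treat delivered value as a sum of per-cycle increments, with v(t) the rate at which value is produced and Δt the length of one cycle; as the cycle length shrinks toward zero, the discrete sum becomes an integral, i.e. a genuinely continuous process.

    V(T) \;=\; \lim_{\Delta t \to 0} \sum_{i=1}^{N} v(t_i)\,\Delta t \;=\; \int_{0}^{T} v(t)\,dt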

17 Dec 2009

Prescription for Healthy Code

The following is a PDF version of the presentation I gave in October 2009 at an event organized by the Software Association of Oregon. It outlines general principles for creating a software quality culture within the development team, and lists specific examples of the tools and processes available:

Prescription for Healthy Code

Here is the absolute minimum without which any software development effort becomes amateurish:

  1. Thou shalt not develop without version control
  2. Thou shalt not develop without an issue tracking system
  3. Thou shalt perform code and design reviews
  4. Thou shalt use patterns and frameworks

These apply to professional software development regardless of methodology, technology and acquired tastes. Highly recommended (in no particular order) are the following:

  • Unit testing (see the sketch after this list)
  • Coding standards
  • Continuous integration
  • Automated testing (functional, integration etc)
  • Developer documentation compiler
  • Coverage analysis
  • Refactoring tools/frameworks
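
To make the unit-testing item above concrete, here is a minimal sketch of my own (not from the presentation). It assumes JUnit 4 on the classpath; PriceCalculator is a class invented purely for the example. In TDD the tests would be written first, fail, and only then would discounted() be implemented to make them pass.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Trivial class under test, invented for this example.
    class PriceCalculator {
        // Applies a percentage discount to a price.
        static double discounted(double price, double percentOff) {
            return price * (1.0 - percentOff / 100.0);
        }
    }

    // Each test states a single expectation about the behaviour; a continuous
    // integration server can run the whole suite on every commit.
    public class PriceCalculatorTest {
        @Test
        public void tenPercentOffOneHundredIsNinety() {
            assertEquals(90.0, PriceCalculator.discounted(100.0, 10.0), 0.0001);
        }

        @Test
        public void zeroDiscountLeavesPriceUnchanged() {
            assertEquals(42.0, PriceCalculator.discounted(42.0, 0.0), 0.0001);
        }
    }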

Linkapalooza!

Introduction into Test-Driven Development
TDD in C# with NUnit
Best practices for test-driven development [examples in Java]

8 Sep 2009

The once and future king…

As Labour Day is winding down, I am pondering the miracle of self-organization.

It has been forgotten and rediscovered many times in the course of human history. A hierarchical system is usually very stable – as long as its composite pieces are happy (or have no choice but) to stay in their assigned places in the hierarchy; the moment a subject moves, the hierarchy crumbles. Once free will is accepted as a founding principle – as it is in most democratic societies today – a hierarchical organization becomes all but impossible.

Coming down to a 100-foot view, I am looking into my own garden of agile software development. Agile teams by definition do not have fixed roles; the dynamics within the group define a current role that can last through the project or change for the next sprint. There is no manager to assign tasks – that role has been taken over by the Scrum Master (a.k.a. the agilitator) – and the group itself disintegrates once the purpose that brought it together is fulfilled.

Back to Labour Day. Some 120 years ago, Peter J. McGuire, General Secretary of the Brotherhood of Carpenters and Joiners, proclaimed the purpose of the holiday as an occasion to honour those "who from rude nature have delved and carved all the grandeur we behold." The human role as organizer of nature was clearly taken for granted. Fast forward to the present day: the Dutch town of Drachten did away with all the traffic lights in the city, leaving it up to motorists and pedestrians to negotiate the road space. And the number of accidents went down! Agility in action.

I see it in trends where, instead of beating nature into submission, humans learn to “go with the flow”, discovering that there might be a different way. Maybe we do not need a king after all; maybe all we need is an agilitator.

29 Aug 2009

Reviewing code for fun and profit

Everybody would agree that a review is a good idea – code review, design review, performance rev…let’s stop right there 🙂

Yet few team activities are as universally hated as code review. A programmer is a solitary animal – XP methodology notwithstanding – and generally does not like explaining his/her code to the group. Now, this does not apply to a truly agile, engaged team where people are fully aware of each other’s strengths and weaknesses, trust each other to do the best job possible, and do not get defensive at critique. The rest – read: most teams – do have these problems, to some degree.

As I see it, the defensiveness can have several plausible explanations, none of them flattering:

  • incompetence – the programmer knows or suspects he’s not up to the team’s standards, and does not want this fact to be discovered by his peers and managers
  • arrogance – the programmer believes that the team is not “qualified” to review his code; usually comes with prima donna syndrome
  • laziness – the programmer is getting paid to “produce code, not to discuss it”; implies the equivalence of good and bad code (design)
  • lack of reviewing skills – the programmer was never educated on the value of code review (or the education did not sink in), much less in reviewing methods

NB: The list is far from being complete – feel free to add your own entries.

All of these could – and should – be addressed sooner rather than later if the team is to succeed. The first two dwell mostly in the project management domain, while the fallacy of the last two can be rectified with proper education and process automation.

I will talk about automating the review process in upcoming blogs.

21 Aug 2009

Release as if there is no tomorrow

Product releases should come in meaningful increments, with each release potentially becoming the last in the line. This gives the customer (even if the customer is you) a sense of control, and produces a relatively self-contained system at each release... and the possibility to call it quits at any release point.

Branching, in my opinion, is not for releases but rather for investigating alternatives; branches will either be merged back into the trunk or abandoned. The last thing you need is multiple versions of the product floating around... Every release should be tagged, and – if there were changes to the environment – wrap up the environment (VM) and store it alongside the product.

15 Aug 2009

Hey! You! Get off of my cloud

Cloud computing is a fancy name for virtualization on demand. And a private cloud means that you own it lock, stock and barrel.
Software is "soft" because it – at least in theory – is infinitely pliable, as opposed to hardware, which is much less so... With the advent of virtualization the grand divide between software and hardware becomes ever more blurred. Suddenly you can run software on software; the hardware is abstracted to mere plumbing – a software computer to run a software application.

Cloud computing is all about efficient allocation of resources on demand, and what makes it "cloudy" is the absence of any need to know where the required resource will be allocated and how it is managed. In the cloud, virtual computers can be created and destroyed at a moment's notice (figuratively speaking), and you can string together a cluster of virtual computers to deploy an ambitious enterprise infrastructure – something that would have required a significant investment in hardware just a few short years ago...
Of course, virtualization comes with a bunch of caveats (performance, complexity, etc.), and, to be sure, cloud computing adds a few of its own (resource allocation and management come to mind...).

Public vs. private cloud computing boils down to whether you want to control the hardware that the cloud resides on, or let somebody else do so and thus achieve economies of scale. All the drawbacks and benefits of owning vs. renting apply.
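
To make the "resources on demand" idea concrete, here is a deliberately hypothetical sketch: CloudClient and VirtualMachine are invented names standing in for whatever API a real public or private cloud would expose, and the cluster size is arbitrary. The point is that infrastructure becomes something you create and destroy from code.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.UUID;

    // Hypothetical "resources on demand": provision a small cluster of
    // virtual machines, use it, and release it when done.
    public class ElasticCluster {

        // Invented stand-in for a real provisioning API.
        static class CloudClient {
            VirtualMachine provision(String size) {
                return new VirtualMachine(UUID.randomUUID().toString(), size);
            }
            void destroy(VirtualMachine vm) {
                System.out.println("Released " + vm.id);
            }
        }

        static class VirtualMachine {
            final String id;
            final String size;
            VirtualMachine(String id, String size) { this.id = id; this.size = size; }
        }

        public static void main(String[] args) {
            CloudClient cloud = new CloudClient();
            List<VirtualMachine> cluster = new ArrayList<>();

            // "String together a cluster of virtual computers" on demand...
            for (int i = 0; i < 5; i++) {
                cluster.add(cloud.provision("medium"));
            }
            System.out.println("Cluster of " + cluster.size() + " virtual machines is up");

            // ...and tear it down the moment it is no longer needed.
            cluster.forEach(cloud::destroy);
        }
    }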

I would also argue that cloud computing brings agility (think SCRUM) into the hardware world, and I will explore this topic later.