Sunday, March 22, 2009

How to Develop Good Software

Here are a few thoughts on where we are going right and wrong with commercial software development. Enjoy, agree, disagree, leave comments...

Test driven design (TDD)
  • I really like the concept of TDD. Get the test team in early to think about the proposed software, and to commit that they can in fact test the software to be developed (how often are the QAers brought in right at the end of the project?). One recent project I was on ended up costing about three times the projected amount, as it turned out that we simply couldn't test the software (it sent updated itineraries to customers based on changes to airline schedules... it worked fine in development, but was hard to test in pre-production as we didn't really have a way of simulating airlines cancelling flights... except for calling in bomb threats to airports, that is...). Other good things about TDD include designing the system for testing, so there are no nasty surprises when it comes time to test, and mocking out each component so you can test each part of the system (including the core) in isolation (unit tests have to run fast... otherwise there is little point in having them).
  • Test driven design works, if done well. Martin Fowler suggests taking very small steps (the red/green/refactor cycle), but if you are more experienced, I suggest taking bigger steps. If you are a competent developer, there is no point writing a test that doesn't compile just to get to the "red bar" stage. If you know that you need to implement a stub service before any of the tests compile, then do so. This won't break the project, and it keeps your vision on the goal of producing working software. Too often I see very good engineers get bogged down by the religion and process of Test Driven Design/Agile Methods.
  • I do, however, like the idea of writing unit tests that fail the first time around (note the difference between failing and simply not compiling). A red-bar (failed) test on the first run proves that your unit test, in its initial state, will not give you a false positive (the first sketch below makes this concrete). This is a lot better than writing unit tests that mistakenly give you a positive result, meaning that the defects will only be found at a later, more expensive stage of development.
  • Be careful not to get too carried away with unit tests. Be prepared to throw your unit tests out just as quickly as BAs throw out requirements. It is easy to get quick coverage (the 80/20 rule), but don't spend too long on any one test or part of the system.
  • Watch out for over-architecting the test set-ups. If you have a central system, for example, that takes requests from a UI, makes calls to a web service, and perhaps writes to a database and/or message queue, there is already an exponential number of possible test combinations. The web service alone can be mocked out on the client side or on the server side, or you could use SoapUI to mock both the server and the client, etc. So one test setup could be a unit test that calls the business layer directly (bypassing the UI), mocks out the web service client but invokes the real web service implementation directly, which in turn writes to a mock database. This is a valid test, and it certainly tests the business layer. However, you could equally configure the test to call the real web service client, which calls a fake web service implementation. So the real question to ask is "what am I trying to test within this unit test?", and then focus on not getting confused between unit testing and integration testing (the second sketch below mocks just the web service client, for exactly this reason). Integration testing, on the other hand, I see as a superset of unit testing: you might not want to test every test case and input, but you do want to test each pair of closely related components, as well as a complete (end-to-end) run of the components (if that is possible). The only catch is that if you are calling third-party/high-latency/non-controllable services, you probably want to mock these out at the far end of the component if possible.
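
A minimal sketch of the red-bar idea (JUnit 4 assumed; the FareCalculator class and its numbers are hypothetical). The stub compiles, so the test compiles and runs, but the first run fails, which is exactly the point:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Hypothetical stub: it compiles (so the test compiles too), but it is
    // deliberately wrong, so the first run gives a red bar.
    class FareCalculator {
        public double totalFare(double baseFare, double taxes) {
            return 0.0; // stub implementation, to be replaced with real logic
        }
    }

    public class FareCalculatorTest {
        @Test
        public void totalFareAddsTaxesToBaseFare() {
            FareCalculator calc = new FareCalculator();
            // Fails on the first run, proving this test cannot pass vacuously
            // and so will not hand you a false positive later.
            assertEquals(250.0, calc.totalFare(200.0, 50.0), 0.001);
        }
    }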
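And a minimal sketch of mocking on the client side, in the spirit of the itinerary example above (all names are hypothetical, JUnit 4 assumed again). The thing under test is the business layer alone; the web service is mocked out at its near end:

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    // Hypothetical client interface for the airline schedule web service.
    interface ScheduleClient {
        boolean isFlightCancelled(String flightNumber);
    }

    // Hypothetical business-layer component, written against the interface
    // rather than against a concrete web service client.
    class ItineraryService {
        private final ScheduleClient client;

        ItineraryService(ScheduleClient client) {
            this.client = client;
        }

        boolean needsReissue(String flightNumber) {
            return client.isFlightCancelled(flightNumber);
        }
    }

    public class ItineraryServiceTest {
        @Test
        public void cancelledFlightTriggersReissue() {
            // Hand-rolled mock: simulates a cancelled flight with no real
            // airline web service (and no bomb threats) involved.
            ScheduleClient mock = new ScheduleClient() {
                public boolean isFlightCancelled(String flightNumber) {
                    return true;
                }
            };
            ItineraryService service = new ItineraryService(mock);
            assertTrue(service.needsReissue("QF001"));
        }
    }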
Disclaimer: the caveat to all of this is that the business must be supportive of up-front investment in TDD, which of course ties up BA resources, testing resources, etc. On a blue-skies project this is not an issue, but in your typical large-scale enterprise (where systems are already in place, multiple versions must be supported, and there are large areas of code that were never designed for test at all), it can take time (if it is possible at all) to gain traction with TDD.

Continuous Integration
  • I am a fan of Continuous Integration. Use something like FinalBuilder, set up triggers so that your code is compiled on every check-in, run all your unit tests on every build, have an installer built automatically, and push the installer out to a clean VM on every build as well (making sure that it extracts, installs, and runs correctly). At each company where I have set this up, it has taken a week and saved a year. One of the most expensive parts of software development is QA, and when QAers don't have a reliable, trustworthy platform to test on, developers don't believe that the bugs are "real bugs"... they often suspect that the QAer has just hit an environment set-up issue, and often this is the case. It truly surprises me how much money companies are willing to lose on badly set up/under-resourced QA environments before they finally sort things out and become productive.
Architectural Overkill
  • Don't get bogged down by architecture and buzzwords. Nearly every company I have worked for has had some "glory project" where a framework/architecture is designed to revolutionise the way the company writes code. There are two problems with this. First, architectures designed by architects end up too complex and too abstract; architectures need to come from developers, based directly on business needs... trust me on this one. Second, architectures need to be maintained as technologies are updated or replaced. I have never once seen a proprietary architecture/framework actually used successfully in a large organisation... they tend to be pipe-dreams that are sold to management but have little useful content for developers (and can actually stifle productivity).
  • Don't get carried away with new technologies such as dependency injection/inversion of control, etc. The use of dependency injection is nice, and there are some great frameworks out there (check out SEAM for Java and the Unity application block for .Net); however, the frameworks and related hype can far outweigh the actual point of the technique. Dependency injection, for example, is just the concept of assigning objects at runtime... for example, creating an instance of a TextStreamWriter instead of an instance of a database writer, and passing this reference to the consuming component. That's about it. Inversion of control is basically the same concept, but the decision about which instance to instantiate is made from another context (perhaps a unit test, a manager class, or even an XML configuration file). A minimal sketch follows below.
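
Here is that sketch, framework-free, covering both ideas (all class names are hypothetical):

    // The consuming component is handed its collaborator at runtime
    // instead of constructing one itself.
    interface ReportWriter {
        void write(String report);
    }

    class TextStreamReportWriter implements ReportWriter {
        public void write(String report) {
            System.out.println(report);
        }
    }

    class DatabaseReportWriter implements ReportWriter {
        public void write(String report) {
            // would INSERT the report into a database table
        }
    }

    class ReportGenerator {
        private final ReportWriter writer;

        // Dependency injection: the writer is assigned from outside.
        ReportGenerator(ReportWriter writer) {
            this.writer = writer;
        }

        void run() {
            writer.write("nightly summary");
        }
    }

    public class Wiring {
        public static void main(String[] args) {
            // Inversion of control: the decision about which instance to
            // instantiate lives out here (or in a unit test, a manager class,
            // or an XML configuration file), never inside ReportGenerator.
            ReportGenerator generator = new ReportGenerator(new TextStreamReportWriter());
            generator.run();
        }
    }

Swapping in the DatabaseReportWriter, or letting a container do this wiring from configuration, changes nothing inside ReportGenerator... which is the whole point, frameworks or not.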
Watch your Language(s)
  • Don't go crazy with complicated language constructs. Certainly, you can do a lot with annotations, XML configuration, dependency injection, etc., and your code will be more elegant and concise, but chances are it is harder to read, took just as long to write, and fewer people will be able to maintain or extend it. So where is the real gain?
Software Development Processes
  • Don't micromanage software teams. If you are using Scrum (check out this gem of a Scrum parody), then great, but don't let the project manager rule the team (if he/she does, then you don't have Scrum, you have conventional software development). The importance of self-organising teams is underrated and overlooked. The problem with micromanagement is that the team ends up working on project planning, estimates, and meetings rather than actually coding. A Scrum master really is just someone to keep the meetings on track, to report externally to the group, and to prioritise the next stories in the sprint. There is still very much a role for project management in Scrum... the same role as traditional project managers: to ask if there are any problems/impediments within the team, and to fix those impediments as pragmatically as possible.
  • Everyone in a software development group must be professional. This includes the IT support staff, the BAs, the PMs, etc. Without formal, tertiary training in software development, the team is unfortunately reduced to the lowest common denominator. This might sound high and mighty, but software is hard to write and most projects fail. There is a reason for this... software is a hard topic to master, and the risk of failure is higher as the team size increases. Having uneducated people in the team - adopting a "let's just see how this goes" attitude - is a recipe for disaster. I guarantee that teams who design planes or perform surgery all have years of study behind them.
  • Be careful of the term "agile". Agile doesn't mean being simplistic; it means doing the simplest, most logical thing because the business doesn't know which way it wants to go right now. If you know that the system must have certain performance characteristics, support multi-threading, or support a particular set of business functions and/or complex workflows, don't ignore those requirements for the sake of being agile and "taking small steps". Take steps appropriate to your skill level and judgement.
  • I am incredibly wary of any technology or process that is hailed as the solution to guarantee the success of a project. A small, competent and fluid team of developers can make nearly anything work using any process or technology, whereas a technology or development process will fail with 100% certainty if the people behind it are not competent. It's all about getting what you paid for.
Writing the Good Stuff
  • Design patterns work. Use them. But don't get sucked in by every pattern book out there. Stick to the highly rated ones (the GoF book, Fowler's Patterns of Enterprise Application Architecture, etc.).
  • Stay consistent with your code. Don't use every language construct just because you can. If you do, your code will look bad, and probably be harder to debug.
  • Refactoring is useful, but it is always better to write things correctly the first time. This might involve thinking about code before you write it, spending time on design, writing pseudo code, writing prototypes (and then happily throwing them away).
  • I like the idea of writing as little code as possible. If there is a way to use existing code, an existing API, etc., then do it. Spend time searching around and playing with existing libraries (the Microsoft Application Blocks, for example, are great), but at some point you will need to call it a day and just write what you need. As long as you write your code in a modular fashion, you should be able to swap it out when someone points out the library that you should have used all along (the sketch below shows the idea).
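
A small sketch of what "modular" means here (the Cache interface and SimpleCache class are hypothetical): keep your own code behind an interface you control, so swapping in the better library later is a one-class change.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // The rest of the codebase depends on this small interface,
    // never on any particular caching library.
    interface Cache {
        void put(String key, Object value);
        Object get(String key);
    }

    // Today's hand-rolled implementation. When someone points out the
    // library we should have used all along, only this class changes.
    class SimpleCache implements Cache {
        private final Map<String, Object> map = new ConcurrentHashMap<String, Object>();

        public void put(String key, Object value) {
            map.put(key, value);
        }

        public Object get(String key) {
            return map.get(key);
        }
    }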
Project Estimates
  • This idea is taken from Joel Spolsky and I like it a lot. Record your estimated hours to complete a task, along with the estimate of anyone who disagrees. Then, once the task is complete, record the actual hours. Keep doing this for every task. The more history you build up, the more weight you will have when discussing estimates in the future (a small sketch of the bookkeeping follows below). It often surprises me that BAs and PMs with very little software development experience will argue over how many hours to put into an estimate. I don't argue with my dentist when he tells me that it will cost $400 to remove a tooth, or my lawyer when she tells me that I will need to keep records of all income earned by my family trust. I figure they went to university for a reason, and know better than me. Why are software engineers not given the same respect?
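
A minimal sketch of that record-keeping (the tasks and numbers are made up):

    import java.util.ArrayList;
    import java.util.List;

    // One entry per task: my estimate, the dissenter's estimate,
    // and the hours the task actually took.
    class EstimateRecord {
        final String task;
        final double mine;
        final double theirs;
        final double actual;

        EstimateRecord(String task, double mine, double theirs, double actual) {
            this.task = task;
            this.mine = mine;
            this.theirs = theirs;
            this.actual = actual;
        }
    }

    public class EstimateLog {
        public static void main(String[] args) {
            List<EstimateRecord> log = new ArrayList<EstimateRecord>();
            log.add(new EstimateRecord("build installer", 16, 8, 18));
            log.add(new EstimateRecord("schedule feed parser", 40, 24, 44));

            // Ratio of actual to estimated hours: whoever stays closest
            // to 1.0 over time wins the next estimating argument.
            for (EstimateRecord e : log) {
                System.out.printf("%s: me %.2f, them %.2f%n",
                        e.task, e.actual / e.mine, e.actual / e.theirs);
            }
        }
    }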
