Why elaboration should be done with a SMALL team

A [wikipedia:RUP] project consists of four phases. The elaboration phase (the second phase in the RUP project lifecycle) is normally done with a small team. One of the important reasons for this is that you want to define the basic form of your architecture: you think about concepts, elaborate on them, develop proofs of concept and select the ones that work best for your specific project.

If there is one thing you cannot use (early) in the elaboration phase of a project, it’s too many team members: there’s probably not enough work to go around. Because of that, the members who do not or cannot contribute to defining the architecture or developing the proofs of concept will eventually end up being assigned production (or construction) tasks.

That’s where things go wrong: in the elaboration phase the architecture is designed and takes its basic form, but in construction the architecture should be stable and proven. In elaboration you define the standards and guidelines, while construction needs those things to be available. Tasks in elaboration take time because they have to ‘grow’, which makes them harder to plan, while tasks in construction have a hard deadline. And as one of the ‘elaborators’, it’s pretty darn annoying when one of the ‘constructors’ comes up to your desk every two hours to ask if the part they are waiting on is ready. That kinda kills the whole goal of the elaboration phase…

‘100% code coverage, unless…’

When asked for my point of view on a guideline concerning code coverage, my answer is always: go for 100% code coverage. 100%, unless… Here is why.

When the guideline for code coverage is 80% (not an uncommon guideline), the devil is in the details. Or, to be exact, in the 20% which is not covered. When a developer ‘only’ needs to cover 80% of his code, you can expect him (or her) to start with the easiest scenarios and work his way towards the more difficult scenarios until the coverage guideline has been met. This way, the scenarios which are the most difficult to realize as a test aren’t hit. And, more importantly, the developer didn’t think about the most difficult scenarios and how to test them. In that case the remaining 20% is completely unknown*. I know several testers and I can assure you: they don’t like that.
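To make that concrete, here is a small hypothetical sketch (in Java, purely for illustration; names and numbers are invented): a method whose happy path is trivial to cover, while the guard clauses are exactly the ‘difficult scenarios’ a developer aiming for 80% is tempted to skip.

```java
// Hypothetical example. A single 'easy' test exercises the last line
// and pushes coverage up, while the two guard clauses -- the hardest
// scenarios to think about -- stay untested and therefore unknown.
public class Discount {

    static double applyDiscount(double price, double percent) {
        if (price < 0) {                      // difficult scenario 1
            throw new IllegalArgumentException("price must be >= 0");
        }
        if (percent < 0 || percent > 100) {   // difficult scenario 2
            throw new IllegalArgumentException("percent must be 0..100");
        }
        return price * (1 - percent / 100.0); // the easy path
    }

    public static void main(String[] args) {
        // The 'easy' test: decent coverage, zero insight into the guards.
        System.out.println(applyDiscount(100.0, 20.0)); // prints 80.0
    }
}
```

With a ‘100%, unless…’ guideline, those guard clauses would either get their own tests (passing bad input and asserting the exception) or an explicit ‘unless’ entry explaining why they were left out.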

If developers need to come up with 100% code coverage, or a good reason why they can’t get there (with a reasonable amount of effort), all code paths have been seen and analyzed. When a certain part of the code is not covered, you can assess the risk, assess how much effort is needed to get the code coverage up to par, and make an informed decision to do so, or not. In the latter case you have an entry for the ‘unless’ category.

Sure, sometimes the tooling doesn’t allow us to achieve a clean 100% code coverage. Visual Studio 2005, for instance, sometimes skips a closing curly bracket, so you can only come close to 100% for a specific method. That is one for the ‘unless’ category: the risk of not making the 100% has been assessed, the reason for it has been identified and a decision has been actively made.

– You should not only look at the code coverage numbers. The quality of the asserts is pretty important too, as I mentioned earlier.
– When using [wikipedia:Test Driven Development] (strictly), these situations should be rare. Normally you start out with test cases, and then write code to make the tests pass. Some even think that every line of code should contribute to making a test pass and that, when a line of code doesn’t, you shouldn’t have written it.

* ‘Completely unknown’ is a bit heavy on the drama, but it adds to the story, don’t you think so? ;)

By |November 26th, 2007|Methodology, Test|1 Comment

This is NOT a test.

Test Driven Development is hot, just like unit testing your software and other kinds of (automated) testing. And as we all know: sometimes stuff that’s hot is misinterpreted, badly explained or simply used in a really bad way. Unfortunately, testing is no exception to this rule…

Not too long ago I had a discussion with a project manager who told me the code in his project had a code coverage close to 100%. Because of that, so he felt, his software would be darn close to perfect. I then asked him if testers had validated the unit tests his developers had made. The answer was no, but he didn’t see the need for that because “the code coverage was so high”. I’ll spare you the details of the discussion, but let’s say it led to a few new insights for at least one of us ;).

I’ll try to make my point here with some examples. Let’s first write a very simple (trivial) method.

static string LowercaseConcatenateStrings(string first, string second)
{
    string result = first.ToLower();
    result += second.ToLower();

    return result;
}

As you can see, this is a simple method which converts both input parameters to lowercase and concatenates them. Now let’s look at a unit test which will result in 100% code coverage:

[TestMethod]
public void TestLowercaseConcatenate()
{
    string result = UnitTestSample.LowercaseConcatenateStrings("test1", "test2");
    Assert.IsNotNull(result, "Return value is null!");
}

This unit test does result in 100% code coverage, but it does not test all the scenarios applicable to the method. A null value for either parameter results in an unhandled exception. It doesn’t even check whether the value of the returned string is anything like it should be! These tests (and the code) are far from perfect, even though the coverage is 100%.

This is a simple example where it’s easy to spot the flaws, but you can imagine a more complex method would require an expert’s eye to ensure the quality of the asserts is up to par. A method of 20 lines with lots of calculations and object modifications cannot be asserted with a check that merely sees if the object exists.
Of course, and this should come as no surprise, testers are able to look at a piece of code or software a little differently than developers do, so why not use their expertise? In short: check the quality of your assertions!
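As a sketch of what better assertions could look like, here is the same kind of concatenation method rewritten in Java (for illustration only; the example above is C#), with checks that assert the actual return value and make the null scenario explicit instead of leaving it unknown:

```java
// Illustrative sketch, not the original C# code: the same concatenation
// logic, tested on its value and on its null behaviour.
public class ConcatTest {

    static String lowercaseConcatenateStrings(String first, String second) {
        return first.toLowerCase() + second.toLowerCase();
    }

    public static void main(String[] args) {
        // Assert the actual value, not merely that something came back.
        String result = lowercaseConcatenateStrings("Test1", "TEST2");
        if (!"test1test2".equals(result)) {
            throw new AssertionError("unexpected result: " + result);
        }

        // Make the null scenario explicit: today the method throws, and
        // the test documents that instead of leaving it unknown.
        try {
            lowercaseConcatenateStrings(null, "x");
            throw new AssertionError("expected a NullPointerException");
        } catch (NullPointerException expected) {
            // current behaviour: nulls are not handled by the method
        }

        System.out.println("all checks passed");
    }
}
```

The point is not the extra lines; it is that every behaviour of the method, including the unpleasant ones, is now pinned down by an assertion.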

Our postal service is like a service bus

I ordered some stuff over the internet a week ago. The parcel was sent through the Dutch postal service. They have a track and trace website that shows where your parcel is at that moment, and where it has been. And that website made me realise the postal service is much like a service bus. I’ll compare it to a single service, to emphasise the differences. Let’s have a look, shall we…?

Single service: a courier
Just like with a single service, sending a parcel with a courier makes it pretty obvious what route the parcel will travel. You tell the courier where the packet should go in their own terms, by filling in their specific form for sending a packet (comparable to a WSDL). You also know the courier will put the parcel in his car, and drive directly to the destination you entered. There are no stops in between and nothing is translated into internal codes to process the parcel.

Service bus: the postal service
When sending a parcel through the postal service, you drop it off at the nearest post office, or even a mailbox. You put the address where the parcel is supposed to go on the envelope, but you don’t do that in a way that is specific to the postal service you would like to use: you use the normal address conventions (which you could compare to the SOAP protocol). You don’t know which stops it will make, or where it will go.
I saw my parcel first went to a sorting centre in the far east of our country, where it was stamped for internal processing. Those stamps don’t mean anything outside the company, because they are only used for internal routing. Next, it was sent to the south-west for another round of sorting and internal handling. From there, it went to the third and final sorting centre, close to where I live in the centre south of The Netherlands. It was probably stored there for the night. The next day, the parcel was put in the truck of the driver who eventually brought it to my house. The sender doesn’t know where the parcel went, but they know it was delivered. As will the next…

I used web services/SOAP/XML to make comparisons because I know something about them. I could have used other protocols or standards as well, but I wanted to stick to one.
I know not all couriers work the way I described, nor does every postal service. And yes, traffic can influence the route the parcel travels. But I had to dramatize a bit to get my point across, ok? ;)

Get maintenance involved in your project. Early.

Last week we had a meeting with our contact at the department that will be maintaining the software we’re going to develop in our upcoming project. Normally, these kinds of meetings take place at the end of a project, sometimes even under some (release) stress. At least, that was the case for all the previous projects I was involved in at my current customer. Most of the time this is because people want to start developing the software instead of thinking about things that are not necessarily their area of expertise.

For this project we did things a bit differently, because the rough outline of our project is pretty clear while the business analysts are still working on some (pretty important) details. Because of that we had the time to get a pretty good idea of when we will be done, and to, for instance, make a proposal for the deployment model of our solution. We made a diagram, took it to our contact and asked his opinion. Thanks to his expertise, he asked us some questions which would normally be forgotten until the moment the solution was about to be released, or would never be asked at all. He also gave us some pointers, and a few wishes from a maintenance point of view. Because of this we can help maintenance by taking their hints and making the software easier to maintain.

I already have the feeling this is a better way to work. It might be because it’s easier to “get things done” from maintenance (there’s lots of time before it has to be done?). But I’m absolutely sure it is because the software was thought through from different viewpoints. Not only the testers and developers have had a hand in it; the people who have to keep it running did too.