Software-Quality Discussion List
Digest # 016


           Software-Quality Discussion List

      S O F T W A R E - Q U A L I T Y   D I G E S T

      "Cost-Effective Quality Techniques that Work"
List Moderator:                      Supported by:
Terry Colligan                       Tenberry Software, Inc.     
April 22, 1998                        Digest # 016




    Duplicated Issues and =3D

    Change in format

    ==== CONTINUING ====

    Re: Puddings

      John Cameron 

    RE: Software-Quality Digest # 015
      David Bennett 

    Re: Software-Quality Digest # 015 -- moderator
      Jerry Weinberg 

    Re: Software-Quality Digest # 015 -- David Bennett
      Jerry Weinberg 

    ===== NEW POST(S) =====

    Continuous Learning/Perceptions

    ==== BOOK REVIEW ====

    "Software Runaways", by Robert L. Glass



  For the first time ever, we have more material than I am
  comfortable sending out in a single digest.  Therefore,
  I will defer a few articles until the next issue -- which
  will follow shortly.

  We now have more than 350 subscribers, even though I haven't
  done any promotion yet.  If this keeps up, I'll have to get
  a bigger internet connection! ;-)

  Duplicated Issues and =3D

  I may have solved the mystery of why a few subscribers would
  get a duplicated and possibly garbled issue (equals characters
  replaced with =3D's).  It's a long story, but I think it involves
  a load-related bug in my list server, improper handling of rich
  text format, and a silly default in my text editor.

  If anyone gets duplicated and/or garbled copies of this issue,
  please let me know.  (And no, I don't consider my normal writing
  to be garbled! ;-)

  Change in format

  I am making a slight change in the format of this issue.  When
  someone makes a very long post, I am trying to add "Moderator
  Comments" nearer to the text being commented on, even though this
  breaks up the long post.  I think it will be clear, but let me
  know if you don't like it.

==== CONTINUING ====

++++ New Post, New Topic ++++

From: Roger Moore
Subject: Re: Puddings

Dear Terry,
Re your comment to Jerry Weinberg in the 14 April edition:
>  the British have an expression for this idea: "The proof
>  is in the pudding" -- although I never could figure out
>  what it meant!

The correct saying is: "The proof of the pudding is in the eating."
I.e. never mind the process, it's the End-user Acceptance Test that
counts.  So you have understood its context correctly.

It can be used in a pejorative sense, "Never mind how slap-dash* a
cook I am, and what unmentionable ingredients go in the pot, if you
like the result, then so what?"

Roger Moore

P.S.  *Slap-dash = careless

++++ New Post, New Topic ++++

From: John Cameron 
Subject: Potpourri

>  I believe the British have an expression for this idea: "The proof
>  is in the pudding" -- although I never could figure out what it
>  meant!

While almost everyone says the proof is in the pudding, it is a
corruption of "The proof of the pudding is in the eating".  I don't
think I need to explain what that means.

>  I still argue that for most applications, memory is so cheap that
>  it is a false economy to not have checking/logging/QA code always
>  built into the application.
I expect that for most of you, the reason that memory is really
cheap is because someone else is paying for it.  In my situation,
embedded systems, we pay for the memory that our customers use.  And
we will struggle mightily to save a dime on a board.  Regardless, we
recognize the economy of allowing space for checking and
instrumentation.  Does anyone else use memory you actually purchase?
And how do you feel about using it?

>  So I'm opening this for discussion -- do we want  job postings?
Job postings are probably OK, if they are from developers or QA
people.  I don't think that Directors of Corporate Recruiting have
anything to contribute to this list.  Particularly when accompanied
by signatures that are pure spam.

One last thing, a note to Phillip.  I am older than dirt, but am
still learning.  There are a lot of us out there.  It is just a
coincidence that Earl was elderly.  If you met him at 22 you might
have thought the same thing.

++++ Moderator Comment ++++

  I can vouch for John being older than dirt -- he's even older
  than me!  ;-)

++++ New Post, New Topic ++++

From: David Bennett 
Subject: RE: Software-Quality Digest # 015

Hi Terry

A couple of further attempts at controversial statements, then a few
replies to specific points from # 015.

(Hopefully) Controversial statements

1. If you can't test it, you can't write it.

If you design or write code, you MUST devise a way to test it.  If
you cannot figure out a way to test it, you MUST NOT write it. An
excellent example is error/exception handling.  If you write code to
handle out of memory conditions, or disk full conditions, or
signals, you MUST devise a way to test the code.  If you don't have
a way to trigger the out of memory or disk space or signal, then you
MUST NOT attempt to handle it.  You'll get it wrong and make things
worse.

++++ Moderator Comment ++++

  While I agree that you should test your code if at all possible,
  I'm not sure I agree with how strongly you state things here.
  In particular, I think a "can't happen" case is important in
  complicated decision nets, and I don't see any way to test
  these cases automatically.  I do think the programmer should
  manually force these kinds of error cases during the development
  process.

2. Shorter code is always better code.

The human mind has serious limitations in understanding code.  One
very complex line of code can exceed that limit, but 10,000 very
simple lines of code will always do so. Divide 10,000 lines into 100
functions of 100 lines each and you may understand it piece by
piece, but you probably won't understand it as a whole. Find a new
algorithm (or external pre-written component) and rewrite the 10,000
lines as 100 lines or 10 lines or 1 line and there is an excellent
chance you will understand it, even if it is significantly more
difficult code.  For difficult, read: requires more background, more
training, more intellect, whatever.

Understandable is good, but shorter is better.

++++ Moderator Comment ++++

  I think that Jerry Weinberg has it right here -- being
  understandable is what matters.  While there is a strong positive
  correlation between shortness and understandability, squeezing the
  last few lines out of a segment of code can reduce (sometimes
  dramatically) understandability.

  Shorter is good, but understandable is much better!


* I believe the British have an expression for this idea: "The proof
* is in the pudding"-although I never could figure out what it meant!
* (Perhaps someone with more worldly experience could 1) correct my
* quote; and 2) explain what it means! ;=)

The saying properly is "The proof of a pudding is in the eating".  I
think the meaning is now obvious - the way to tell whether a project
has been successful is to try out the final product.

I stated that I am reluctant to leave verbose tracing/logging/assert
code in shipping product because it makes reverse engineering much
easier.

* Like Terry, I can't imagine that tracing code would help reverse
* engineer the code.  It might even make it more difficult to reverse.
* One of the best techniques to inhibit reverse engineering is to add
* lots of modules that aren't part of the product.  It's like
* encryption, someone can always do it, all you can do is raise the
* cost to prohibitive levels.  Extra code raises the cost.

If the code contains large amounts of easily readable, textual
logging and debugging material then anyone with a dump program can
easily find such things as: module and function names; general
module structure; which functions are in which modules; where to
look for key functions of interest; etc.  For example, if you want
to find and disable the registration mechanism, or the user counting
mechanism, or the time bomb, your job is made much easier if there
are trace statements with text like this:
    "Activation code: %s\n"
    "After 1st level encrypt: %s\n"
    "After 2nd level encrypt: %s\n"
    "Checksum part 1 %d part 2 %d\n"

++++ Moderator Comment ++++

  First, let me say that I don't really believe in disabling
  mechanisms, so I find it a little hard to be sympathetic.  Second
  (or perhaps it's the same point), I think these kinds of issues
  should be resolved by the licenses and/or sales terms.  If my
  customers want to disassemble/decompile the code I write, I say
  let them.
  Another way of stating your point is "If the code contains large
  amounts of easily readable, textual logging and debugging
  material, it's easier to understand."  Exactly why I think it's
  important to write code this way!

And on the same theme...

* Regarding the ASSERT macros, how much benefit do you get from the
* stringized expression?  That's the first thing to go in our builds,
* since the file and line number are all the developer typically uses
* anyway.

* What I am suggesting is that there is a definition of ASSERT which
* works equally well for development and release.  It probably isn't
* the default implementation provided by most compiler vendors, for
* the reasons you state.

Now we get to the nub of the thing.  What you are basically saying
is that the goal of having the shipped version identical to the
developer version is sufficiently attractive that you are prepared
to make compromises or be selective in what you trace.  The
difference is, I'm not.

++++ Moderator Comment ++++

  No, that's not what I am saying.  What I am saying is that it
  isn't very hard to write code which is easy to trace, but meets
  the other goals you have laid out.  I don't think it's an either-
  or situation.  I do think you might have to spend an extra ten
  minutes on your tracing/logging/debugging code, if you must worry
  about keeping things disguised.

The registration code in our product is tricky, hard to disassemble,
and spread out to make it hard to find.  I am sure you can break it,
but only by a determined effort.  I hope you'll get discouraged
early on and give up.

[BTW Yes, I have tested that this is so!  It was when I attempted to
reverse-engineer my own code that I discovered how much easier bits of
text can make it.  Have you tried it on your own code?]

++++ Moderator Comment ++++

  Actually, I find a lot of my old code hard to reverse engineer,
  and I wasn't even trying!  ;-)

When you write code like that it becomes even more important that it
is well structured, well commented and well traced or you will never
get it right.  Over the years, that code has broken about 3-4 times
(in QA) and without good tracing (in the debug version) we'd never
have been able to find out where and how.  That very tracing is what
I don't want to ship.

I'm not advocating a hard and fast position.  I was wavering there
for a while, but I'm becoming more and more convinced that there
will always be code (lots of code!) I want to put in the debug build
but less code (maybe only a little bit less) which I want to ship.

++++ Moderator Comment ++++

  Neither am I.  I'm just saying that, in my experience, the less
  difference there is between a debug version and the final product,
  the better things work.  I'm also saying that as we have moved to
  this position, we haven't noticed any increase in development
  time or effort.

I think the price paid is low.  Just occasionally, we get a
difference between QA'd product and debug product and it costs us a
few hours or a day or two to track it down.

The payoff is significant.

1. We can set a budget for space and speed of the tracing code in
the production build, and then cheerfully put in 10 times as much
and make it 10 times slower in the debug build.

2. We can be as expressive as we like in the tracing and assert text,
including jokes, insults, profanity, and comments on competitors'
products, knowing that we are the only ones who will ever read it.

3. We can be sloppier in writing the debug code - it will never need
to pass QA!  However, the logging code which is going to be part of
the shipping product must be designed and implemented with care,
because it must be tested as well as the product itself.

++++ Moderator Comment ++++

  Point 1. has some merit.  I think point 2 is moot -- do it if
  it makes you happy -- it certainly isn't a "significant payoff"
  to me.

  However, I think that point 3 can be dangerous!  It might not
  cause problems for an experienced professional like you, but for
  lots of programmers I've known, there is a danger of the
  "sloppiness" spreading into release code.  Furthermore, I used
  to believe as you do, but I find that there is little time saved
  by writing sloppy code, even in temporary code.

The bottom line is that I think we should put more logging in our
production build, but I think you others should consider doing some
real debug builds.

And finally...

* (I have only written two Cobol programs in my life; I like
* learning new languages; but I do use command windows to copy
* files... ;-)

About 15 years ago I worked out that I had written at least a few
lines of code in at least 15 dialects of Basic.  I haven't written
in any of those dialects of Basic now for over 10 years.

I have written upwards of 1000 lines in at least 10 different
languages (Fortran, Cobol, Algol, PL/I, assembly, Pascal, etc., etc.).
In the last 5 years I have written only C and POWERflex (our own
language), awk and Perl, and a little bit of VB and VBA.

I once wrote a chess program on punched cards.  I wrote an
edit-merge-update on digital cassette tape, booting from paper tape.
I wrote over 20,000 lines of UCSD Pascal, over 5,000 lines of
Digital Research PL/IG, over 10,000 lines of assembly language for a
Z80 Micro.  I haven't used any of those tools for at least 10 years.

I know the DOS command line processor extremely well, but now I
rarely use it.  I use Windows Explorer, Find, open dialogs and many
other new tools.  And I do it for just one reason.  I'm not trying
to impress anybody.  It's a question of attitude.

We are selling Windows software.  You cannot produce (specify,
design, code, test) top quality Windows software unless you live,
think and breathe Windows.  I won't argue with you about whether
Windows is worse than/better than Unix/Macintosh/etc.  We do
Windows, and I believe strongly you can't do good Windows unless
your tools are Windows.

As manager of this particular team, I actively discourage people
from sticking to old tools.  I'm the oldest programmer here, with
the greatest residue of old and bad habits.  If I can learn to use a
whole new bunch of tools every few months, then so can everyone else!

[was that controversial statement number 3?]

David Bennett
[ POWERflex Corporation     Developers of PFXplus ]
[ Tel:  +61-3-9888-5833     Fax:  +61-3-9888-5451 ]
[ E-mail: ]
[ Web home:   me: ]

++++ New Post, New Topic ++++

From: Jerry Weinberg 
Subject: Re: Software-Quality Digest # 015

Moderator wrote:
>  I don't agree with the *demonstrate* idea at all.  At least for
>  the customers I typically have (and the managers I have worked
>  for), there was no one to demonstrate the code to who could
>  read/understand it.  The concern is/was completely on the
>  resulting program, not on the code.  The decision about whether
>  the code is defect-free is/was made by using it, or watching it
>  being used.

For the customers I typically work with, running the code is merely
part of the demonstration that the code is defect free.  To quote
Edsger Dijkstra, "Program testing can be used to show the presence
of bugs, but never to show their absence."  It's hard to believe
that in this day and age
people don't use "white box" (sometimes called "transparent box")
testing.  (I work with clients ranging from internal IT
organizations to shrink wrap vendors to 24-hour-day service
providers to embedded system builders to life-critical system
builders.  All of them understand this principle.)

++++ Moderator Comment ++++

  Who said anything about not using white-box testing?

  We certainly use white-box testing during our development process!

  My point was that for most of the programs I have been involved
  with, the person paying for the program either never saw the
  source code, or couldn't/didn't read it, or both!  (The both
  case is by far the most common.)

>  I believe the British have an expression for this idea: "The proof
>  is in the pudding" -- although I never could figure out what it
>  meant!  (Perhaps someone with more worldly experience could 1)
>  correct my quote; and 2) explain what it means! ;=)

Being half British, perhaps I can correct the quote to "The proof of
the pudding is in the eating."  I always presumed it meant that you
have to taste pudding to know if the recipe worked - and that's
certainly part of the truth.  But, it assumes pudding is relatively
homogeneous, which is not always the case.  If it's a plum pudding
(like little Jack Horner tested), it may contain whole plums.  If
you cut a small piece of pudding and eat it as a test of whether or
not it has plums inside, getting a piece without a plum doesn't tell
you very much about the absence of plums.  If, like Jack, you stick
in your thumb and pull out a plum, then you are indeed a good boy
for showing the presence of (at least one) plum.  Even if you're not
allergic to plums (as I am), I think you can see the significance of
the plum pudding example to software testing, with plums as bugs.
(If you like, try apples and worms.)

>  I am curious about what kind of environment you are imagining
>  where the decision to use code comes from looking at the code
>  itself, rather than the effect of running the code.

I don't mean, obviously, that it comes *solely* from looking at the
code. Though cleanroom people claim that this is possible, I still
believe you need some execution testing to validate the

Look, I'm not stating something new here.  Charles Babbage (!) knew
about this, and used to demonstrate the principle by setting his
difference engine to compute consecutive integers up to 999,999;
1,000,000; 1,000,001.  Then he would ask spectators to predict the
next number coming out - and, to their surprise, it would not be
1,000,002.  In other words, the successful passing of 1,000,000 test
cases did not guarantee that there were no errors in the program.
That was as true in 1837 as it is today, and presumably always will
be true.

No amount of testing can guarantee the correctness of a black box.
You show me the sequence of tests you have done on a black box; and
I will show you a program that can be inside that box, give all the
correct answers to your tests, and never give another correct answer
again.  Without looking inside, you could never know which program is
in the box.

From another point of view, being able to understand the program
greatly reduces the number of test cases you need to reach a certain
level of confidence in the program.

website =
email =

++++ Moderator Comment ++++

  Who ever said anything about black box testing?  Of course you
  can defeat it!

  I don't see how your comments about the well-understood failures
  of black-box testing support your conclusion about being able
  to demonstrate the understandability of code being so important.
  Maybe it's just because I don't usually have anyone to demonstrate
  it to...

++++ New Post, New Topic ++++

From: Jerry Weinberg 
Subject: Re: Software-Quality Digest # 015

David Bennett quoted me:
>* first to your design, not your code.  You can seldom find large
>* space savings by tweaking code, though sometimes you can get huge
>* speed savings by analyzing performance and changing one place in
>* the code.  After all, there can't be more than one place that uses
>* over 50% of the time.

David replied:
>We have frequently tweaked code to make large space savings.
>Inexperienced programmers (and even good programmers who think it
>doesn't matter this time) are often quite careless about memory
>allocation.  In Windows 16, changing a program from static to
>dynamic allocation, or from small(er) to large(r) memory model can
>free up 50% or more of your 64K DSEG.

Sure, it's just that I consider the choice of static or dynamic
allocation a design choice, not a coding choice.

Then David said:
>Changing from a fixed array of fixed-sized strings to a linked list
>of pointers to packed text in a string table can reduce a compiler's
>memory utilisation dramatically.  A typical compiler with (say)
>1,000 * 32 byte names uses 32K; if the names average 12 bytes
>you're down to around 17K.  Multiply that by N (you pick N) for
>modern 32-bit systems.

Again, I consider this a design choice.

And finally, David said:
>In a badly performing application, it is not unusual to find a
>section consuming 70-80% of the time.  Fix that, and you may find
>another section now consuming 70-80%.  And again.  And again.  I
>know it isn't quite what you had in mind, but the net effect is the
>same.  We recently reduced the runtime for an SQL job by a factor of
>more than 10 times by changing middleware code.  We kept fixing
>stuff until the performance distribution no longer had any section
>over about 25% (except the SQL engine itself!)

We agree entirely here, if you read my original statement with
respect to time savings.  Each iteration, there can only be one such
saving place. I do worry about a programmer (and the program that
programmer writes) who writes code that allows you to take more than
one or two fruitful iterations like this.  But, except for our
definition of "design," I think David and I are in complete
agreement on this subject.

website =
email =

===== NEW POST(S) =====

++++ New Post, New Topic ++++

From: Leslie Pearson
Subject: Continuous Learning/Perceptions

Phillip Senn's  post about querying the
backgrounds of the people on the list is an interesting idea. It got
me thinking of how many different systems I've used in the 20(!)
years since I started using computers - going back to junior year in
high school.

In the 14 years since college graduation, almost inevitably when I
changed jobs (9 jobs!) I changed systems (CPU/operating
systems/software) to something I knew little or nothing about, and
picked it up quickly.  Not all of these were Q/A jobs, but still it
has been a process of continuous learning, which has kept me
interested.  How much time do we as a group spend on continuous
learning and keeping up to date?  It could be a job in itself.

I wonder, though, how much our initial learning environment, whether
in a classroom or on the job, affects our perceptions in terms of
testing, coding, or just plain using computers.  I consider myself of
a mini-computer, pre-GUI, pre-object-oriented background, as that is
what I learned on in college.  Would someone fresh out of school,
knowing the latest and greatest stuff, have a different view of
these things?
Just contributing some thoughts,

Leslie Pearson

==== BOOK REVIEW ====

Review of "Software Runaways"
by Robert L. Glass
Prentice Hall PTR 1998
278 pages
ISBN: 0-13-673443-X

Subtitled "Lessons Learned from Massive Software Project Failures",
this book covers a number of large software projects which failed
in fairly dramatic fashion.  The definition used in this book for a
runaway is

  "a project that goes out of control primarily because of the
   difficulty of building the software needed by the system."

Robert Glass has written more than ten books on software
development and software quality.  He writes a column on
programming for the "Communications of the ACM."  He believes that
we can learn about how to do projects well from studying those
projects which have failed.

The Chapters are:

  1. Introduction -- 18 pages

  2. Software Runaway War Stories -- 2 pages

    2.1 Project Objectives Not Fully Specified -- 66 pages

    2.2 Bad Planning and Estimating -- 15 pages

    2.3 Technology New to the Organization -- 36 pages

    2.4 Inadequate/No Project Management Methodology -- 45 pages

    2.5 Insufficient Senior Staff on the Team -- 28 pages

    2.6 Poor Performance by Suppliers of Hardware/Software -- 1 page

    2.7 Other -- 15 pages

  3. Software Runaway Remedies -- 22 pages

  4. Conclusions -- 4 pages

 There is also a seven-page index.

The projects covered include the baggage system at the new Denver
airport, a Florida welfare system, the FAA's attempt at new
software, the IRS's various project failures, and several different
large banking systems, ranging in time from the early 1970's to the
present.

I found this book quite disappointing in a number of ways.  First,
I have thoroughly enjoyed Mr. Glass's writing in his previous books
and columns.  I was quite surprised to find that Mr. Glass only
wrote chapters 1, 3 and 4.  The bulk of the book (206 out of 253
pages) was either taken from various magazine articles not written
by Mr. Glass, or from other papers and studies.  In effect, Mr.
Glass is the editor, rather than the author.  Unfortunately, none
of the included text is up to Mr. Glass's high standards of
software development insight.

My second disappointment is that because the various descriptions
come from such different sources, different times and different
authors that I found it hard to make comparisons between projects
and to thereby learn from the examples.

My third disappointment is that some of these project were so long
in the past that I found it hard to relate to today's projects.
Some of the systems which failed did so for performance-related
issues -- at a time when a large, fast machine had 512KB of memory
and a 1 MIPS processor.  Should we conclude that these projects
would not be failures now?  (I don't think that is the message of
this book, but it sure isn't clear!)

My fourth disappointment is that the "lessons learned" aren't
really present in this book.  There is a list of remedies the
projects tried while they were failing, and a list of remedies that
the project teams planned to apply in the future.  There was no
evidence of whether or how effective these remedies might be.

I do like disaster movies, in general.  There are a lot of
disasters described in this book, but, unlike disaster movies,
there isn't much attention paid here to heroism.

My advice:  Read "Software Runaways" if you like disaster stories
or if you want to feel better about your own development problems
-- it's unlikely that they are going as poorly as the ones
described in this book.  This book may be useful for building a
case to scare your management into addressing problems of one of
your potential disasters.  I suspect that most readers won't find
much here to help them in their daily software quality work.

The Software-Quality Digest is edited by:
Terry Colligan, Moderator.

And published by:
Tenberry Software, Inc.     

Information about this list is at our web site,
maintained by Tenberry Software:

To post a message ==>

To subscribe ==>

To unsubscribe ==>

Suggestions and comments ==>

=============  End of Software-Quality Digest ===============
