
           Software-Quality Discussion List

      S O F T W A R E - Q U A L I T Y   D I G E S T

      "Cost-Effective Quality Techniques that Work"
List Moderator:                      Supported by:
Terry Colligan                       Tenberry Software, Inc.     
March 23, 1998                       Digest # 010



    An Apology

    ==== CONTINUING ====

    Re: Understandability
      Jerry Weinberg 

    Re: Understandability
      "Jared M. Spool" 

    ===== NEW POST(S) =====

    S/W Maintainability Measures

      "Phillip Senn" 

      "Ernesto Torres Amador" 

      "Phillip Senn" 

    Any OS testers on this list? (Mark Wiley)

    Software-Quality email list
      "Danny R. Faught" 

    ==== BOOK REVIEW ====



  An Apology

  When I sent out issue 009, I had every intention of putting
  out an issue in a few days.  However, I got involved in a
  mini "death-march" for a project due April 1.  Also, I had
  a visit from the "death-march flu."  (Also, the dog ate
  my homework...  ;-)

  In the interest of getting this out, I have postponed the
  next book review and the discussion of cost-effective
  quality techniques.

==== CONTINUING ====

++++ New Post, New Topic ++++

From: Jerry Weinberg 
Subject: Re: Understandability

Terry said,
>  But most importantly, your proposed measurement, although
>  bounded, is still too long for the purpose I want to put it
>  to -- to use during a code review.  Since we do code
>  reviews every day, I need measurements that can be in five
>  minutes, or even ideally one minute.  I think that the
>  desire for measurements that can be quickly taken is the
>  main reason that LOC is so popular, in spite of the various
>  objections people have raised.

Oh, but if you're using it for code reviews, then a slight
modification of the measurement of understandability costs
you NOTHING (okay, a few seconds).  At the beginning of the
review, the review leader asks, "How long did it take you to
understand this to the point you were confident in your
conclusions?"  In the review report, you present these
answers. They can be quantitative or qualitative ("too long,"
"much too long," "I never did understand it," "I had no real
confidence in my conclusions").


++++ Moderator Comment ++++

  Yes, if you are disciplined enough to be doing formal code
  reviews, your suggestion would work.

  Unfortunately (or, we think, fortunately), we are using an
  entirely different form of code reviews -- one that I
  created after trying to get the programmers in my previous
  company to adopt the "formal" process you have documented
  so well. I call the process "informal peer" code reviewing.
  It is a one-on-one process, where the programmer who has
  the code to be reviewed presents the code to a second
  programmer -- no pre-reading.  (I will provide more details
  in a longer article -- later! (sigh!))

  In any case, in our informal, peer, one-on-one code
  reviews, there is no independent attempt to understand the
  code. Therefore, your intelligent and reasonable answer
  doesn't work.  So, I'm still looking for something quick
  and objective to address understandability.  I understand
  ;-) that size is a poor substitute -- I just haven't found
  anything better.

  (For those who might not know it, Jerry is the co-author of
  a fine book on formal code reviews, called "Handbook of
  Walkthroughs, Inspections, and Technical Reviews :
  Evaluating Programs, Projects, and Products" which is a
  candidate for reviewing.)

++++ New Post, Same Topic ++++

From: "Jared M. Spool" 
Subject: Re: Understandability

Jerry Weinberg wrote:

> Out of all the things you need to measure in software,
> "understandability" is probably the *easiest*.  You take a
> sample of several people from the population of people whose
> job it is to understand the product.

Jerry describes a basic usability test, as it is applied to a
code module.  In this case, the users are the other
developers, as they "use" the code to maintain/support it.

Usability testing is a science, but it is not very hard at
all. Jerry's description is fairly accurate, except that you
really don't need to worry about "x" (the time it takes to
understand).  In usability testing, what you worry about are
the reasons that prevent understandability.  As you identify
them, you change the artifact (in this case, the module), and
then try the test again to see if you've removed the
obstacles.  Once you've removed all the important obstacles,
you're done.

The trick is to have the authors of the code there as you
perform the test.  That way, they learn what habits they have
that become obstacles and how to avoid them in the future.


p.s.  Usability testing is not Rocket Science.  (We know this
because NASA is one of our clients.  They suggested it might
be Brain Surgery, but we checked with our friends at the
Lahey Clinic -- it's not that either.  So, our current theory
is that it's Pool Cleaning.)

 Jared M. Spool
 User Interface Engineering
 800 Turnpike Street, Suite 101
 North Andover, MA 01845  USA
 (978) 975-4343    fax: (978) 975-5353

      If you send me your postal address, you'll get
     the next issue of our newsletter, Eye For Design.

++++ Moderator Comment ++++

  I'm not sure I understand the connection you are making
  between usability testing and understandability testing.
  Your writing suggests they are the same for code -- is this
  what you mean?

  P.S. I *LOVE* your p.s.!  I have already shamelessly used
  it in several emails -- although I changed the client
  names, so you can't sue for look and feel! ;-)

===== NEW POST(S) =====

++++ New Post, New Topic ++++

Subject: S/W Maintainability Measures

In the software quality literature, the term Maintainability
is defined as the ease with which a software system can be
changed to correct errors and enhanced to meet new
requirements.

I am interested in formulating a measure for the
maintainability of a system (or a module) based on its
use of data structures.

For example, if the data structures in the module are
deeply nested then they are "intuitively" more difficult
to understand and thus to maintain.

Following are the factors that I can see affecting the
maintainability based on data structures:
1. Depth of the nesting.
2. Breadth of the data structures.
3. Relations between the data, i.e., if data structure A
   depends on data structures B and C, then maintaining
   A is more difficult.
4. Access to the data structures. The more exposed a data
   structure, the more difficult its maintenance.
5. Chunking of the data fields. Logical chunking makes for
   better maintenance.

Looking for the list members' contributions.
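As one possible starting point, here is a minimal sketch (in Python, with invented record contents) of how factor 1, depth of nesting, could be made measurable. The other factors would need similar agreed-upon counting rules.

```python
# Hypothetical sketch: measuring factor 1 (depth of nesting) for a
# data structure expressed as nested Python dicts/lists/tuples.

def nesting_depth(structure):
    """Return the maximum nesting depth of dicts/lists/tuples."""
    if isinstance(structure, dict):
        children = structure.values()
    elif isinstance(structure, (list, tuple)):
        children = structure
    else:
        return 0  # a scalar field adds no depth
    return 1 + max((nesting_depth(child) for child in children), default=0)

# A flat record vs. a deeply nested one (invented examples):
flat = {"id": 1, "name": "widget", "weight": 7000}
nested = {"order": {"customer": {"address": {"city": "Boston"}}}}

print(nesting_depth(flat))    # 1
print(nesting_depth(nested))  # 4
```

A module whose structures score high on this measure would, intuitively, be harder to maintain.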

++++ Moderator Comment ++++

  How do you define 2, 3, 4 and 5 in a measurable way?
  (I'm not arguing that you can't -- I'm just curious!)

  Another characteristic, which is even more important in my
  experience, is how consistently the data structure is
  accessed, created, and modified.  Data structures which are
  consistently accessed are much easier to maintain.
  Sometimes I think the biggest quality and maintainability
  boost provided by C++ is that it provides mechanisms to
  ensure consistent data accesses.

  Don't flame me for dumping on C++ -- I use it on new code,
  because I like it and it works well for us. Also don't
  flame me for liking C++!

  However, if you'd like to post a message about how C++
  helps or hinders software quality, this is the email list
  for you!

++++ New Post, New Topic ++++

From: "Phillip Senn" 
Subject: Diagnostics

I want to thank Terry for not editing my last letter. I used
to work at a newspaper, where we would correct people's
letters to the editor.  It gave me mixed feelings because
there were so many blatant errors, and we didn't want to
embarrass the poor souls who wrote them. But I also recognize
people's need to be heard "as is". So I applaud Terry's
decision to publish your emails and mine "as is", warts and all!

I'm an avid fan of Star Trek, so while watching the umpteenth
episode of "The Next Generation", I was struck by how much
the crew relied upon their computers and how often they ran
diagnostic routines.  The scenario would go something like:

Captain: "Fire photon torpedoes, Mr. Worf!"
Worf: "Photon torpedoes inoperable, sir!"
Captain: "Geordi, better run a level 2 diagnostic on the
photon torpedoes!"

This got me to thinking - what in blazes is a diagnostic
anyway?  And it's not only found in STTNG either.  In "The
Hunt for Red October," the captain asked the sonar man to run
a diagnostic.  The reply came back almost immediately:
"Sonar's working, sir!"  So a diagnostic can be run anytime.
For the sake of Hollywood, I assume the response was almost
immediate because you don't want to have an audience sitting
in the theater for 20 minutes waiting for the results of a
diagnostic subroutine.

So after much cogitating, I've come up with a couple of
guidelines for diagnostic programs.

A level 3 diagnostic is internal.  In other words, let's say
you have a file that should always contain ASCII data.  A
level 3 diagnostic would read through the entire file and
report any records that have characters that fall outside the
bounds of A-Z, a-z, 0-9, etc.  Don't take this too lightly!
Once you start getting cozy with your database, you'll find
all kinds of things to check in an internal diagnostic:
fields that should either be 0 or 1, fields that should only
contain numeric data, fields that don't conform to your
business's rules.
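A level 3 diagnostic of this kind could be sketched as follows (in Python, with invented field names and rules for illustration):

```python
# Hypothetical level 3 (internal) diagnostic: scan every record and
# report values that violate the rules the data should always obey.
# Field names and rules here are invented for illustration.

def level3_diagnostic(records):
    """Yield (record_number, complaint) for each rule violation."""
    for i, rec in enumerate(records, start=1):
        if rec.get("active") not in (0, 1):
            yield i, f"'active' should be 0 or 1, got {rec.get('active')!r}"
        if not str(rec.get("qty", "")).isdigit():
            yield i, f"'qty' should be numeric, got {rec.get('qty')!r}"
        if not str(rec.get("name", "")).isascii():
            yield i, "'name' contains non-ASCII characters"

records = [
    {"name": "widget", "qty": "12", "active": 1},
    {"name": "gadget", "qty": "twelve", "active": 3},
]
for recno, complaint in level3_diagnostic(records):
    print(f"record {recno}: {complaint}")
```

The diagnostic only reports; nothing is modified.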

A level 2 diagnostic is external.  In many systems, there are
detail records, and then there are summary buckets.  Sum the
detail records and make sure the total matches the summary
bucket.  In other words, check for referential integrity.

A level 1 diagnostic is the same as a level 2 diagnostic
except it cleanses itself after reporting the error.  It
posts the sum of the detail records into the summary bucket.
I've never created a level 1 diagnostic, but it is my
unfortunate position to have inherited a couple.  At least I
can say that I added a reporting feature to them so that I
could track down why they were being run to begin with.
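The difference between the two levels might be sketched like this (Python, with invented detail/summary structures; the "reporting feature" is the print in the level 1 routine):

```python
# Hypothetical sketch of level 2 vs. level 1 diagnostics on
# detail records and a summary bucket (invented structures).

def level2_diagnostic(details, summary):
    """Report, but do not fix, a detail/summary mismatch."""
    total = sum(d["amount"] for d in details)
    if total != summary["amount"]:
        return f"summary says {summary['amount']}, details sum to {total}"
    return None

def level1_diagnostic(details, summary):
    """Same check, but repair the bucket -- and log why it ran."""
    complaint = level2_diagnostic(details, summary)
    if complaint is not None:
        print("repairing:", complaint)   # report before cleansing
        summary["amount"] = sum(d["amount"] for d in details)
    return complaint

details = [{"amount": 40}, {"amount": 60}]
summary = {"amount": 95}

level1_diagnostic(details, summary)
print(summary["amount"])  # 100
```

Note that the level 1 routine silently hides the underlying bug unless it also reports, which is exactly the problem described above.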

And I don't think the users mind if you ask them "Why is this
the only item that weighs 7,000 lbs?"  I've found that
they're appreciative of you sweeping behind them to clean up
the data, and if it is in fact a 7,000 lb item, they're happy
to explain that part of their business.

++++ Moderator Comment ++++

  Are you proposing that these diagnostics be part of the
  program, or that they be separate programs?

  I'm a huge fan of internal checks (asserts in C/C++),
  and of building testability support into the application.
  It's an integral part of our quality process.
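  For readers unfamiliar with the technique: an internal check
  verifies an invariant inside the program itself and halts
  loudly when it fails.  A tiny sketch in Python (function and
  rule invented for illustration; C's assert() works the same
  way):

```python
# Hypothetical internal checks in the spirit of C's assert():
# invariants verified inside the program itself.

def post_payment(balance, amount):
    """Apply a payment, asserting the invariants we rely on."""
    assert amount > 0, "payments must be positive"
    new_balance = balance - amount
    assert new_balance <= balance, "balance should never grow on a payment"
    return new_balance

print(post_payment(100, 30))  # 70
```

  A bad call such as post_payment(100, -5) fails immediately at
  the check, close to the bug, instead of corrupting data
  quietly.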

++++ New Post, New Topic ++++

From: "Ernesto Torres Amador" 
Subject: (questions)


I have been reading all the Digests, and I still don't
understand what topic is being discussed. I think this
listserv is for discussing methods to warranty the quality of
software, right?

I'm a programmer. When I create my systems (usually database
apps), I create the structure of the tables (DBF) in EXCEL,
and for all the code (I use FoxPro 2.6) I keep track of the
links between one routine and another in a tree (I use the
INFOTREE32 system). That lets me work a lot more easily, BUT
I am still using my pen and paper all the time.

I also put a routine in my systems that lets me log every
error in the system; moreover, this routine closes that
system's menu option to avoid a second error until I
discover and repair it.
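[Moderator's sketch of the idea, for readers who don't use
FoxPro -- all names here are invented for illustration:]

```python
# Hedged sketch of the technique described above: log every error,
# and disable the offending menu option until someone investigates.

import datetime

disabled_options = set()
error_log = []

def run_option(name, action):
    """Run a menu option, logging failures and locking the option out."""
    if name in disabled_options:
        print(f"option '{name}' is disabled pending repair")
        return
    try:
        action()
    except Exception as exc:
        error_log.append((datetime.datetime.now(), name, repr(exc)))
        disabled_options.add(name)
        print(f"error in '{name}' logged; option disabled")

run_option("post-invoices", lambda: 1 / 0)   # fails, gets disabled
run_option("post-invoices", lambda: None)    # refused until repaired
```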

Is there a good system that would make this work better for me?

Ernesto Torres Amador

++++ Moderator Comment ++++

  My goal for this email discussion group is to help improve
  the quality of the software we produce by talking about
  what works for each of us.  It's not particularly about
  warrantying the quality of software -- at least not yet!
  Not many software developers are confident enough to
  try warrantying their work.

++++ New Post, New Topic ++++

From: "Phillip Senn" 
Subject: Worries

Did I get kicked out of the software-quality listserv for
being too verbose? I had great plans of writing down all the
thoughts that have been running around in my mind for the
last 10 years.

++++ Moderator Comment ++++

  No, not at all!

  The main reason that there are so few issues is that I am
  writing 1/4 to 1/2 of each issue because we don't have
  enough members (or at least members who write!).   The
  *last* thing I would do is kick someone off for being too
  verbose!

  In fact, other than being *very* abusive, I can't imagine
  kicking someone out of software-quality -- we will do
  better with more help!

  Writing down all your thoughts sounds good, even great, to
  me!

++++ New Post, New Topic ++++

From: (Mark Wiley)
Subject: Any OS testers on this list?

I'm interested in discussing test automation issues and
related topics with other people in the OS testing field.
Anybody out there?

 Mark S. Wiley
 Manager of Quality Assistance, nCUBE
 	*** Software Testing, Death From Above ***

++++ Moderator Comment ++++

  We have this problem in our DOS extender products -- how
  do you automate the testing of asynchronous interrupts?

  Okay, Mark, how about some suggestions or questions?

  What kind of automation are you doing now?

++++ New Post, New Topic ++++

From: "Danny R. Faught" 
Subject: Software-Quality email list

Someone pointed out the Software-Quality mailing list to me,
and I went over and browsed the archives.  It's certainly
interesting - sort of an interactive newsletter, with quite a
bit of editorial content. In maintaining the swtest-discuss
mailing list, I've set up a number of automated features that
have greatly increased the quality of the content without
requiring active moderation.  The growth in membership that
you report is pretty phenomenal, though I suspect that like
swtest-discuss, most participants are lurkers and are
unwilling to post anything.

I was informed about your list with a query about possible
competition with swtest-discuss, but since my list covers
software testing, yours seems to have a broader charter.  In
fact, there have been participants asking where to find a
list covering broader software quality issues. If your list
takes hold, perhaps there are some opportunities for
cooperation between the two lists.
For more information about swtest-discuss, send a "help"
message to


==== BOOK REVIEW ====

  (Not this time!)

The Software-Quality Digest is edited by:
Terry Colligan, Moderator.

And published by:
Tenberry Software, Inc.     

Information about this list is at our web site,
maintained by Tenberry Software:

To post a message ==>

To subscribe ==>

To unsubscribe ==>

Suggestions and comments ==>

=============  End of Software-Quality Digest ===============

Last modified 1998.3.26. Your questions, comments, and feedback are welcome.