============================================================
           Software-Quality Discussion List

      S O F T W A R E - Q U A L I T Y   D I G E S T

      "Cost-Effective Quality Techniques that Work"
============================================================
List Moderator:                      Supported by:
Terry Colligan                       Tenberry Software, Inc.
moderator@tenberry.com               http://www.tenberry.com
============================================================
March 25, 1998                       Digest # 011
============================================================

====IN THIS DIGEST=====

    ==== MODERATOR'S MESSAGE ====

    Moving Along!

    Strange Formatting

    ==== CONTINUING ====

    Self diagnostics
      "Phillip Senn" 

    Re: Any OS testers on this list?
      markw@ncube.com (Mark Wiley)

    Re: Any OS testers on this list?
      John Cameron 


    ===== NEW POST(S) =====

    Software-Quality questions
      "David C. Wan" 

    Questions and challenges
      David Bennett 

    S/W Maintainability Measure
      BIKER ON THE ROAD 


    ==== BOOK REVIEW ====

    (none)




==== MODERATOR'S MESSAGE ====

  Moving Along!

  We have a number of interesting posts from different people
  -- which means I don't have to write as much!  YEA!!!
  Since I am still savoring the mini-death-march project,
  that's good.  In about a week, things should ease up
  for me.  In the meantime, there's lots of good stuff in
  this issue -- both to agree with, and to argue with.
  Please do so.

  Strange Formatting

  After I sent Software-Quality #010 to the list daemon, I
  got back two confirmations, and I also got two copies on
  my internal accounts (I subscribe myself so I can QA the
  digests!)  One of the doubled postings was correct, while
  the second seemed to contain the same text, but with fairly
  random sets of characters turned into hex!  I'm not sure
  what you saw, but if you got either the doubled posting or
  the "hexified" version:

    1) Please tell me via email

    2) I'm sorry.  (I *thought* I sent #010 the same way
       as all the other issues...)

  I will reboot the List Server tomorrow.  Please let me
  know if this issue arrives strangely.



==== CONTINUING ====

++++ New Post, New Topic ++++

From: "Phillip Senn" 
Subject: Self diagnostics

I used to work on an accounting package (called FACTS) that
was, overall, a pretty good bookkeeping system.  Although it
was complete in its ability to regurgitate the same
information that you had so tediously typed in, it had the
annoying habit of locking up every once in a while when
something that should NEVER happen actually did happen.

Let's say, for instance, you're reading along through an
index, and the record that the index is pointing to just
isn't there.  This should NEVER happen, right?  Well, their
conclusion was, "If you've got a corrupted file, then you
need to fix it before the system can finish this report"
(my wording).  I guess they were consumed with the fear of
producing a report that had a wrong total or something, so
their solution was to abend the program by branching to a
statement that said X=1/0 REM Corrupted Index.

Of course this NEVER happened.  Until you start customizing
the software. Then you might get it about once a month.  Add
30 clients out there and you start getting these error
messages about once a day.

I can hear what you're thinking: "Well, dumdum, you changed
the code and brought this problem upon yourself!  You have
no right to complain!"  I agree that the problem is
self-inflicted.  What I want to talk about is how to handle
this same problem more gracefully than by committing
hara-kiri.

This goes back to my previous email about running
diagnostics often.  If a diagnostic were run nightly, then
you could potentially see a problem before it became
critical.  But let's say the power goes out, the file gets a
corrupt record, and the report is printing as we speak.
You're going to run a 'rebuild' program, right?  That's your
only option after everything is said and done, isn't it?
You've got to rebuild the index.  So instead of saying
"You've got a corrupted file, press  to exit", why not say
something like, "You've got a corrupted file; have everyone
sign off and press  to rebuild the index"?

I can hear what you're thinking: "You give users that
ability and they're going to be rebuilding indices every
day, and you'll never know that there's a bug in the changes
you made."  Really?  I don't think so.  But let's say
they've done it twice this past month.  That's where you've
got to log these errors.  Nothing fancy, just a plain ASCII
file with date, time, user, and error condition.  This log
file makes you look like a cybergod.  At your next visit, as
soon as you sit down at your client's desk, you will say
something like, "I see you had an error on the 28th.  What
happened?"  Your client won't remember, trust me, but in
their eyes you've grown in stature.
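
A minimal sketch of that kind of error log, written in C for
illustration (FACTS was not written in C; the file name, the
user argument, and the rebuild routine named in the comments
are assumptions):

  /* Append date, time, user, and error condition to a plain
     ASCII log file.  Logging must never crash the program it
     is trying to protect. */

  #include <stdio.h>
  #include <time.h>

  void log_error(const char *user, const char *condition)
  {
      FILE *log = fopen("errorlog.txt", "a");  /* plain ASCII */
      if (log == NULL)
          return;

      time_t now = time(NULL);
      char stamp[32];
      strftime(stamp, sizeof stamp, "%Y-%m-%d %H:%M:%S",
               localtime(&now));

      fprintf(log, "%s  user=%s  error=%s\n",
              stamp, user, condition);
      fclose(log);
  }

  int main(void)
  {
      /* Where the report code would otherwise abend with
         X=1/0, it could instead log and offer a rebuild: */
      log_error("PHILLIP", "index points to missing record");
      return 0;
  }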

++++ Moderator Comment ++++

  This seems like a great idea to me!  We use built-in and
  automated diagnostics for the new software that we develop,
  and find them unbelievably valuable, particularly when
  coupled with daily running of the diagnostics.

  You are using the same "daily running of automated
  diagnostics" to ensure the quality of both an operational
  system and the data itself!

  I'm not really building or supporting those kinds of
  systems, but it seems like it would be very effective.

  What do our readers think about this?



++++ New Post, New Topic ++++

From: markw@ncube.com (Mark Wiley)
Subject: Re: Any OS testers on this list?

> From: markw@ncube.com (Mark Wiley)
> Subject: Any OS testers on this list?
>
> I'm interested in discussing test automation issues and
> related topics with other people in the OS testing field.
> Anybody out there?
>
> Markw
>
> ++++ Moderator Comment ++++
>
>   We have the problem in our DOS extender products -- how
>   do you automate the testing of asynchronous interrupts?
>
>   Okay, Mark, how about some suggestions or questions?
>
>   What kind of automation are you doing now?

Well, I have never tested DOS (in fact I have hardly used it
:-) since I have been in the UNIX camp for some time (since
about 1980).  Now I'm testing Plan 9, a research OS from
AT&T which is fairly similar to UNIX, but with a few
interesting differences.

The computer platform I test is a massively parallel
hypercube architecture supercomputer. Details about our
products can be found at www.ncube.com.

In order for the testing to have maximum validity, it must
run on the version of the OS we ship, rather than an
instrumented version with hooks for special testing.  This
means that our testing relies on the execution of tests in
user space which mimic aspects of the applications normally
run on our system.

Since my experience is that most of the problems in an OS
are caused by race conditions or resource starvation issues,
our testing is oriented towards finding those kinds of
problems.

The above problems tend to be load sensitive and
intermittent in nature.  So, I run different mixes of the
small test applications to generate a high load on the
system.  That tends to flush out problems with interrupt
handling in the devices that are tested, as well as other
problems.

The test framework relies on computer generated system call
wrapper routines which hide some of the error checking and
handling logic from the programmer. Template programs for
testing system calls were also generated by computer, and
then combined by hand into more comprehensive programs. Most
of the system call parameters can be specified on the
command line, making the tests highly flexible.
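
As an illustration only, here is a hand-written sketch of
what one such generated wrapper might look like in C; the
names, the reporting format, and the exit-on-failure
convention are assumptions, not Mark's actual framework:

  /* The wrapper hides the error checking so that the test
     program itself reads like the scenario being exercised. */

  #include <errno.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int checked_open(const char *path, int flags,
                   const char *test_name)
  {
      int fd = open(path, flags);
      if (fd < 0) {
          fprintf(stderr, "FAIL %s: open(%s) -> %s\n",
                  test_name, path, strerror(errno));
          exit(1);   /* the harness counts non-zero exits */
      }
      return fd;
  }

  int main(int argc, char **argv)
  {
      /* The path to exercise comes from the command line, so
         the same test binary covers many cases. */
      const char *path = (argc > 1) ? argv[1] : "/etc/hosts";
      int fd = checked_open(path, O_RDONLY, "open-sanity");
      printf("PASS open-sanity: fd=%d\n", fd);
      return 0;
  }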

The test driver has a specialized language, which is
designed to simplify the control of large numbers of
programs executing in parallel. Combined with the highly
parameterized tests, we can do a good deal of test
development without writing additional C code.

That's probably enough of a description for now, but of
course I welcome additional comments and questions (just
don't ask me how to port it to DOS :-).

Markw
=============================================================
Mark S. Wiley                          Email: markw@ncube.com
        *** Software Testing, Death From Above ***
Manager of Quality Assistance                           nCUBE
=============================================================

++++ Moderator Comment ++++

  Mark's doing OS testing "in the large."

  I'm not sure I agree with the idea that you can't leave
  most, if not all, of the instrumentation in today's OS's.
  I doubt the extra code is more than a megabyte, and I
  really doubt that anyone other than you, Mark, would notice.

  (I presume you can't buy a nCUBE with only 4MB of RAM? ;-)

  Do you use a torture test, like the one that seemed to
  bring so much stability to Linux?



++++ New Post, New Topic ++++

From: John Cameron 
Subject: Re: Any OS testers on this list?

>I'm interested in discussing test automation issues and
>related topics with other people in the OS testing field.
>Anybody out there?

When I was in grad school learning about operating systems,
we were given an interrupt script, complete with timings,
and had to write an operating system that would be driven by
the script.  It should be fairly easy to write a test bed
that could be used for regression testing on a bare machine.
The problem always comes down to creating something complex
enough to make you believe you are really exercising the
system and simple enough to give repeatable results.
Usually repeatability means avoiding ambiguities; in an
operating system that could be two tasks with the same
priority.  Even in our very controlled environment there
turned out to be two solutions.
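
A minimal sketch, in C, of the script-driven idea described
above: a fixed table of timed interrupt events is replayed in
order, so every run presents the code under test with exactly
the same stimulus.  The event table, the names, and the stub
handler are assumptions made only for the illustration:

  #include <stdio.h>

  struct event { unsigned long tick; int irq; };

  static const struct event script[] = {
      { 10, 3 },   /* timer  */
      { 12, 5 },   /* disk   */
      { 12, 5 },   /* deliberate back-to-back interrupt */
      { 40, 7 },   /* serial */
  };

  /* Stand-in for the code under test; a real test bed would
     call the kernel's interrupt entry point here. */
  static void os_handle_interrupt(int irq)
  {
      printf("  handled irq %d\n", irq);
  }

  int main(void)
  {
      size_t i;
      for (i = 0; i < sizeof script / sizeof script[0]; i++) {
          printf("tick %lu: raise irq %d\n",
                 script[i].tick, script[i].irq);
          os_handle_interrupt(script[i].irq);
      }
      return 0;
  }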

Certainly you would want unit tests, i.e., can you write a
byte to a disk, can you read it back, can you put something
on a monitor?  And then build up from there.  You may be
thinking that you just can't repeatably simulate a busy
operating system, and you are probably right.  But you can
find most errors with rather simple tests.

I am currently writing network firmware.  Devising strings
of bytes for loopback tests (tests where your characters are
sent right back to you) is an interesting exercise.  The
nice thing is that the signals are controlled and repeatable,
and the software is monitored and produces a log.
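
A minimal loopback-test sketch in C; here the "loop" is
simulated with a static buffer so the example is
self-contained, and send_bytes()/recv_bytes() are assumed
names standing in for whatever the firmware actually
provides:

  #include <stdio.h>
  #include <string.h>

  static unsigned char loop_buf[64];
  static int loop_len;

  static int send_bytes(const unsigned char *buf, int len)
  {
      memcpy(loop_buf, buf, (size_t)len);   /* "transmit" */
      loop_len = len;
      return len;
  }

  static int recv_bytes(unsigned char *buf, int len)
  {
      if (len > loop_len)
          len = loop_len;
      memcpy(buf, loop_buf, (size_t)len);   /* "receive" */
      return len;
  }

  int main(void)
  {
      /* All-zero, all-one, alternating, and walking-bit bytes. */
      static const unsigned char pattern[] =
          { 0x00, 0xFF, 0x55, 0xAA, 0x01, 0x02, 0x04, 0x08 };
      unsigned char echo[sizeof pattern];

      if (send_bytes(pattern, (int)sizeof pattern)
              != (int)sizeof pattern) {
          printf("FAIL: transmit error\n");
          return 1;
      }
      if (recv_bytes(echo, (int)sizeof echo)
              != (int)sizeof echo) {
          printf("FAIL: nothing came back\n");
          return 1;
      }
      if (memcmp(pattern, echo, sizeof pattern) != 0) {
          printf("FAIL: data corrupted in the loop\n");
          return 1;
      }
      printf("PASS: loopback of %u bytes\n",
             (unsigned)sizeof pattern);
      return 0;
  }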

Probably the most reliable technique I use is to walk
through every line of my code with some sort of debugger.
My stuff is part of an embedded system running under a
real-time operating system, so I have to use emulators, and
the log comes from a serial port built into the board.  But
I still exercise EVERY line of code and look at EVERY
result.

Most bugs are dumb.  As our host Terry has commented to me,
he's never seen a smart bug.  They are really easy to spot
by exercising and watching the code.

So, am I blathering away here?  Most of the discussion seems
to be theoretical and that's just not my style.

John
Not bug free, but getting awfully close.

++++ Moderator Comment ++++

  John's doing OS testing "in the small."

  John and I worked together for a number of years.  To
  explain the "smart bug" comment a bit more:  In my many
  years of managing programmers, and in finding and fixing
  my own bugs I have said and heard many others say many
  times, "What a silly/dumb bug!"  One day it occurred to me
  that I had never heard anyone say "Come look at this bug!
  It's really clever!"

  So I started keeping count -- so far dumb bugs are winning,
  by more than 7000 to zero!

  Since John was fond of saying "What a DUMB bug!", I
  probably drove him crazy by pointing out that we'd never
  seen a smart bug -- each and every time!  ;-)  (I never met
  a cliche I didn't like!)

  Have you ever experienced a smart or clever bug?
  If so, would you please share it with us?



===== NEW POST(S) =====

++++ New Post, New Topic ++++

From: "David C. Wan" 
Subject: Software-Quality questions

Hello Terry, or to whom it may concern:

I have the following questions about quality for scientific
and engineering software:

  o Are there any recent formal standards (e.g., metrics,
    process, etc.)?

  o Who has the best tools to assist with quality enforcement
    and assurance tasks?

Thank you, and I look forward to getting your feedback at
your earliest convenience!

Regards,
David C. Wan (c00dcw00@nchc.gov.tw) 3/24/98

++++ Moderator Comment ++++

  The IEEE has defined a bunch of QA and software
  engineering standards.  I haven't taken the time to look
  at them yet, but a number of folks over in the
  comp.software.engineering newsgroup have said positive
  things about them.



++++ New Post, New Topic ++++

From: David Bennett 
Subject: Questions and challenges

Hi Terry

So far I'm not detecting a great deal of contribution to
software quality rocket science, apart from your own.  The
other threads seem to be
* use structured programming
* use code walk throughs
* format your e-mails in plain text

Maybe you should chop out that bit.  Here are two topics for
inclusion.

Topic 1 - write testing logic right into the product.
----------
This bit of yours I agree with 100%...
* I'm a huge fan of internal checks (asserts in C/C++), and
* of building testability support into the application.
* It's an integral part of our quality process.

PFXplus (our major product) contains about 10% of the source
code volume in testing, logging, assertions, validation
hooks, etc.

Comments anyone?  Is this too much?  Not enough?  About
right?

Most of the code gets removed in a production build
(pre-processor stuff), but a significant chunk stays in
there for validation.
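
For illustration, a minimal sketch of the kind of check that
can be compiled out of a production build; the CHECK macro
and the INTERNAL_CHECKS switch are assumptions, not PFXplus's
actual mechanism:

  #include <stdio.h>
  #include <stdlib.h>

  #ifdef INTERNAL_CHECKS
  #define CHECK(cond, msg)                                  \
      do {                                                  \
          if (!(cond)) {                                    \
              fprintf(stderr,                               \
                      "internal check failed: %s (%s:%d)\n",\
                      msg, __FILE__, __LINE__);             \
              abort();                                      \
          }                                                 \
      } while (0)
  #else
  #define CHECK(cond, msg) ((void)0)  /* removed in production */
  #endif

  /* Example use: validating a linked-list invariant. */
  struct node { int value; struct node *next; };

  int list_length(const struct node *head)
  {
      int n = 0;
      while (head != NULL) {
          CHECK(n < 1000000, "list appears to be circular");
          n++;
          head = head->next;
      }
      return n;
  }

  int main(void)
  {
      struct node c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };
      printf("length = %d\n", list_length(&a));  /* prints 3 */
      return 0;
  }

Swapping the #ifdef for a run-time flag would keep the checks
in the shipped build, which is the trade-off Terry raises in
his comment below.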

Topic 2 - attempts at controversial statements.
----------
Please agree with/disagree with/ignore the following
statements.

1. Every production program is granted a budget of at least
5% of code volume, 5% of data volume and 5% of execution
time reserved for internal testing, consistency checks,
post-production debugging and similar functions (which are
not required to contribute at all to the primary purpose of
the software).

2. Any defect detected in QA testing *must* be incorporated
into a process of automated testing in such a way that the
chances of that same defect appearing and not being detected
in a future QA testing of that product are less than 1 in
1,000,000.

3. Any defect notified by a customer and successfully
reproduced and corrected must be treated as in item (2).

4. A programmer (or team) responsible for designing and
implementing a function (or program or module) must also be
responsible for designing and implementing test procedures.
QA is responsible both for measuring the accuracy and
coverage of test procedures, and (by running the test
procedures) for measuring the degree to which the function
is in compliance.

5. Every function or feature that has ever been in a product
and is still part of its specification, and every defect
which has ever been corrected, *must* be validated in every
release of the product.  This requirement demands automated
testing.

Question:

1. Does anyone disagree?  If so, please justify.

2. Does anyone comply?  If not, please defend.

Regards
David Bennett
[ POWERflex Corporation     Developers of PFXplus ]
[ Tel:  +61-3-9888-5833     Fax:  +61-3-9888-5451 ]
[ E-mail: sales@pfxcorp.com   support@pfxcorp.com ]
[ Web home: www.pfxcorp.com   me: dmb@pfxcorp.com ]


++++ Moderator Comment ++++

  Wow!

  My responses:

  We do about 10% internal checking code.  We also budget
  3-5% for record, playback and logging code to support fully
  automated QA.  We would never consider removing it from
  a production build, since we feel like we wouldn't have
  been testing what we ship!  Since we don't remove it, we
  use run-time checks to enable it, rather than the
  preprocessor.

  1. I think your numbers are too low.  With memory prices
     at $3-4 per megabyte, and 300MHz processors, we should
     be budgeting more for self-checking.  So what if your
     program is 15% slower? (it won't be!)  It's probably
     not noticeable, and next week Intel will have a chip
     that more than makes up for it!

  2. Absolutely agree!

  3. Absolutely agree!

  4. We have both the programmers *and* the QA team create
     tests.  I find that they think sufficiently differently
     that you don't really get much duplication.  Since we
     push full automation, too many tests is not a problem.

  5. Absolutely agree!



++++ New Post, New Topic ++++

From: BIKER ON THE ROAD 
Subject: S/W Maintainability Measure

A simple and "automatable" way to find the measure of a
data structure is the following process (a small sketch in C
follows the list):

1. Assign a value of 1 to each basic type in the data
   structure.
2. The breadth of the structure is the sum of its sub-data
   breadths.
3. If the sub-data is itself an aggregate data structure,
   apply the process recursively.
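
A minimal sketch of that process in C; the field_desc
structure used here to describe a data structure is an
assumption made only for the illustration:

  #include <stdio.h>

  struct field_desc {
      int is_aggregate;                 /* 0 = basic type    */
      int n_members;                    /* when is_aggregate */
      const struct field_desc *members; /* when is_aggregate */
  };

  int breadth(const struct field_desc *f)
  {
      if (!f->is_aggregate)
          return 1;                     /* rule 1: basic = 1 */

      int sum = 0;                      /* rule 2: sum subs  */
      for (int i = 0; i < f->n_members; i++)
          sum += breadth(&f->members[i]);  /* rule 3: recurse */
      return sum;
  }

  int main(void)
  {
      /* Model of: struct { int a; struct { int x, y; } b; } */
      struct field_desc inner[] = { {0,0,0}, {0,0,0} };
      struct field_desc outer_members[] = { {0,0,0}, {1,2,inner} };
      struct field_desc outer = { 1, 2, outer_members };

      printf("breadth = %d\n", breadth(&outer));  /* prints 3 */
      return 0;
  }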

We can apply similar measurements to each of the other
factors I mentioned in my previous mail to the list.

The most important step for formulating the measure is to
locate the factors that affect the maintainability of
the data structures of the program.

I am seeking the list members' experience in determining
such factors.  The currently identified factors are:

1. Depth of the nesting.
2. Breadth of the data structures.
3. Relations between the data, i.e., if data structure A
   depends on data structures B and C, then maintaining
   A is more difficult.
4. Access to the data structures.  The more exposed a data
   structure, the more difficult its maintenance.
5. Chunking of the data fields.  Logical chunking makes for
   better maintenance.

++++ Moderator Comment ++++

  I still vote for consistency of access as being more
  important than any of your 5.

  Have you been able to connect your measurements to your
  subjective opinions of the data structures?



==== BOOK REVIEW ====

  (Not this time!)

  (This slot is here to help in future issues, since I create
  each issue by starting with the previous one.)


=============================================================
The Software-Quality Digest is edited by:
Terry Colligan, Moderator.      mailto:moderator@tenberry.com

And published by:
Tenberry Software, Inc.               http://www.tenberry.com

Information about this list is at our web site,
maintained by Tenberry Software:
    http://www.tenberry.com/softqual

To post a message ==> mailto:software-quality@tenberry.com

To subscribe ==> mailto:software-quality-join@tenberry.com

To unsubscribe ==> mailto:software-quality-leave@tenberry.com

Suggestions and comments ==> mailto:moderator@tenberry.com

=============  End of Software-Quality Digest ===============
