Software-Quality Discussion List
Digest # 015



============================================================
           Software-Quality Discussion List

      S O F T W A R E - Q U A L I T Y   D I G E S T

      "Cost-Effective Quality Techniques that Work"
============================================================
List Moderator:                      Supported by:
Terry Colligan                       Tenberry Software, Inc.
moderator@tenberry.com               http://www.tenberry.com
============================================================
April 14, 1998                        Digest # 015
============================================================

====IN THIS DIGEST=====

    ==== MODERATOR'S MESSAGE ====


    ==== CONTINUING ====

    Re: leaving tracing in
      John Cameron 

    Re: understandability
      Jerry Weinberg 

    RE: Software-Quality Digest # 014
      David Bennett 

    Sizeof SW
      Rick Pelleg 

    Re: Software-Quality Digest # 011
      Danny R. Faught 

    Loved the diatribe!
      Charlotte Helander (helander@San-Jose.ate.slb.com)



    ===== NEW POST(S) =====

    Who are you?
      "Phillip Senn" 

    Re: QA Challenges!
      Rick Hower 

    can I post a job to the list???
      Sean Lally 

    Re: Book Reviews
      Danny R. Faught 


    ==== BOOK REVIEW ====

    (out of space)



==== MODERATOR'S MESSAGE ====

  This issue was delayed a bit while I finished my little
  death-march project, and then caught up on all the things which
  didn't get done while the project was running.

  We should be back to a 2 or 3 issues per week schedule, assuming
  that you continue to make posts, of course!

  I have a review of Robert L. Glass's book "Software Runaways"
  ready (well, okay, *almost* ready! ;-), but this issue was
  already too big, so I've saved it for next issue.  (How's that for
  building anticipation?)



==== CONTINUING ====

++++ New Post, New Topic ++++

From: John Cameron 
Subject: Re: leaving tracing in

>The presence of large amounts of tracing and assertions makes this
>much easier to do, which is one of the reasons we remove it.
>
>Question to those who leave tracing and checking code in shipped
>product: doesn't this worry you at all?
>
In my case, it is firmware on a proprietary machine.  There is
nothing they could do with it if they did crack it.  And if I ever
have to know what is happening in the field, all that must be done
is to hook up a serial port on a PC. Notebooks make great
analyzers.

Like Terry, I can't imagine that tracing code would help reverse
engineer the code.  It might even make it more difficult to
reverse. One of the best techniques to inhibit reverse engineering
is to add lots of modules that aren't part of the product.  It's
like encryption: someone can always break it; all you can do is
raise the cost to prohibitive levels.  Extra code raises the cost.

John



++++ New Post, New Topic ++++

From: Jerry Weinberg 
Subject: Re: understandability

>++++ Moderator Comment ++++
>
>  While you make valid and useful comments (as usual! :) about
>  optimization, I think you are counting too much on
>  understandability.  Two points:
>
>   - Code can easily be understandable, but wrong.  If the algorithm
>     or assumptions are wrong/inappropriate, all the understanding
>     in the world won't help.  (Think of a bubble sort of a hundred-
>     million element file.)

Absolutely.  Of course, if it's not understandable, it's far more
likely to be wrong, and far less likely to have someone notice
that it's wrong. Understandability is a necessary, but not
sufficient condition for being right.
>
>   - While understandability is positively correlated with many
>     positive attributes (maintainability, reliability, etc.),
>     it's merely a correlation -- being understandable doesn't
>     guarantee any of the other properties.

I totally agree.  Again, though, lack of understandability
guarantees that none of the other properties can be assured.
>
>  I use the following principles:
>
>    Most important:  meeting specifications & defect-free (in case
>       defects weren't mentioned in specification.)

I think this isn't quite stated correctly.  Most important is being
able to *demonstrate* that you've met specifications in a
defect-free way. The code must be *convincing*, otherwise we
wouldn't dare to use it, or if we dared, we'd be exposing ourselves
to potential harm/loss.
>
>    Important:  understandability, maintainability, robustness,
>       testability, consistency, locality of concepts
>
>    Nice-to-have: speedy implementation, small size, fast
>       execution.  (Although, as Gerry points out, these may well
>       be part of the requirements.)

With the above modification, I say "hurrah" to this list.

Jerry
website = http://www.geraldmweinberg.com
email = hardpretzel@earthlink.net


++++ Moderator Comment ++++

  I don't agree with the *demonstrate* idea at all.  At least for
  the customers I typically have (and the managers I have worked
  for), there was no one to demonstrate the code to who could
  read/understand it.  The concern is/was completely on the
  resulting program, not on the code.  The decision about whether
  the code is defect-free is/was made by using it, or watching it
  being used.

  I believe the British have an expression for this idea: "The proof
  is in the pudding" -- although I never could figure out what it
  meant!  (Perhaps someone with more worldly experience could 1)
  correct my quote; and 2) explain what it means! ;=)

  I am curious about what kind of environment you are imagining
  where the decision to use code comes from looking at the code
  itself, rather than the effect of running the code.



++++ New Post, Same Topic ++++

From: David Bennett 
Subject: RE: Software-Quality Digest # 014

Hi Terry

Great newsletter!  I don't know whether I'm learning a whole lot,
but I sure am enjoying agreeing with the moderator!  Problem is, I
find too much to comment on.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
A comment
- - - - - - - - -
* What can you do?
*
*   Rather than complain about how stupid or short-sighted your
*   management is, develop an explanation of how your quality
*   programs either reduce costs, for example, by reducing
*   maintenance and customer support, or increase revenues, for
*   example by increasing customer satisfaction and repeat business.
*   If you can't make a case that your quality saves money or
*   increases revenues, change your quality program until it does!

Absolutely.  Right on.  My line is...

We're too small a company to be able to afford to throw money away
by doing things twice, so we just have to do things right the first
time.

*   Marketing
*   ---------
*
*   (Repeat the above diatribe, substituting "customer satisfaction"
*   for "increased profits".)

Half an answer - works on old customers but hard to use on new ones.
Unfortunately, quality is measured more by what doesn't happen
(amount of post-sales support) than what does.  Here in Australia,
ISO 9000 certification is a selling point.  We don't have it, but at
least (some of) the customers know about it and differentiate.  The
US doesn't seem to be doing that much yet.

*   Development
*   -----------
*
*   (Repeat the above diatribe, substituting "reduced delivery
*   times" and/or "more reliable schedules" for "increased profits"

The major point for us is that they get fewer bugs to fix.  Because
no one really likes fixing bugs in code they thought they had
finished, that's a definite plus for the developers.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
To Jerry...

* first to your design, not your code.  You can seldom find large
* space savings by tweaking code, though sometimes you can get huge
* speed savings by analyzing performance and changing one place in
* the code.  After all, there can't be more than one place that uses
* over 50% of the time.

We have frequently tweaked code to make large space savings.
Inexperienced programmers (and even good programmers who think it
doesn't matter this time) are often quite careless about memory
allocation.  In Windows 16, changing a program from static to
dynamic allocation, or from small(er) to large(r) memory model can
free up 50% or more of your 64K DSEG.

Changing from a fixed array of fixed-sized strings to a linked list
of pointers to packed text in a string table can reduce a compiler's
memory utilisation dramatically.  A typical compiler with (say)
1,000 32-byte names uses 32K; if the names average 12 bytes you're
down to around 17K, pointers included.  Multiply that by N (you
pick N) for modern 32-bit systems.

In a badly performing application, it is not unusual to find a
section consuming 70-80% of the time.  Fix that, and you may find
another section now consuming 70-80%.  And again.  And again.  I
know it isn't quite what you had in mind, but the net effect is the
same.  We recently reduced the runtime for an SQL job by a factor of
more than 10 times by changing middleware code.  We kept fixing
stuff until the performance distribution no longer had any section
over about 25% (except the SQL engine itself!)

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Regarding your 3 points, I use the same, but I phrase them this way.

A posting on the 3 priorities for software development.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1. Absolute priority: completed to specification (passes its
acceptance tests) within time and budget

2. High priority: well built.  Includes maintainable, robust,
understandable, testable, re-usable.

3. Lower priority: low resource usage: fast, small.  Includes all
use of optimisation, assembly language, "tweaking", etc

Item (1) can only be varied by management or the customer, if at
all.  There can be no extension on software to run the Olympic
Games; no budget overruns if a company has no more money.  If you
fail on this item, you fail.  Period.  If you succeed you get paid,
but you may have problems for the future.

Item (2) covers the things you do which were not in the specification,
to minimise the risk of failing on item (1) and to minimise the
lifetime cost of the software, especially if you own it.  There is
no absolute failure in this category, just bad software with costs
or defects lying in wait.

Item (3) covers things you do which were not in the specification and
were not overridden by something in Item (2) to minimise the running
cost of the software.  There is no absolute failure in this category
either, just slow software.

Microsoft Word is IMHO a good example of software which passes (1)
with flying colours and scores badly on (2) and (3).  I believe
DOS/4GW and our product (PFXplus) have succeeded on (1) and much of
(2), with some success in (3).

Where does your product fit?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Re my previous posting...

*   But I was mostly talking about the debugging traces, dumping
*   commands, etc., not the compiler-generated stuff.  I see little
*   benefit to removing this code from the release version.

No, we don't remove the code.  But we do compile it out.

* Obviously you have to protect your trade secrets.  However:
*     - My experience is that users are surprisingly smart and that
*       the few who will reverse-engineer things are easily daunted.
*       We've had more than one person reverse-engineer the guts of
*       the switching code of our DOS extender -- just so that they
*       could do a better job of reporting a bug.
*
*   My advice is:  don't worry about it, except that you might want
*   your assert() macro to not stringize the expression.

But that's the point.  For easiest debugging our ASSERTs do
stringise the expression, but we don't want to ship it that way.  It
takes a lot of effort to disassemble a page of C code, but you could
read our debug executable like a book!  It tells you everything that
happens, where it happens, lots of detail.  We won't ship that.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Regards
David Bennett
[ POWERflex Corporation     Developers of PFXplus ]
[ Tel:  +61-3-9888-5833     Fax:  +61-3-9888-5451 ]
[ E-mail: sales@pfxcorp.com   support@pfxcorp.com ]
[ Web home: www.pfxcorp.com   me: dmb@pfxcorp.com ]


++++ Moderator Comment ++++

  Regarding the ASSERT macros, how much benefit do you get from the
  stringized expression?  That's the first thing to go in our
  builds, since the file and line number are all the developer
  typically uses anyway.

  What I am suggesting is that there is a definition of ASSERT which
  works equally well for development and release.  It probably isn't
  the default implementation provided by most compiler vendors, for
  the reasons you state.



++++ New Post, New Topic ++++

From: Rick Pelleg 
Subject: Sizeof SW

I would like to point out another aspect of code size, important
to companies such as where I work:

Terry Colligan wrote in Digest #13:
> ...
>   Rather I was trying to give an example that spending lots
>   (or what seems like lots) of resources on testing, logging,
>   and other quality-specific code might be a good economic
>   trade-off.  I chose a megabyte because I thought it would
>   both seem wantonly wasteful to people, while actually only
>   costing $3 to $10 per system.
>
>   Actually, I just looked on the web at my favorite sites,
>   and the price range was $1.50 to $4 per megabyte for standard
>   SIMMs.  At these prices, it seems like you could afford
>   almost any amount of quality code...  But wait, maybe it's
>   disk space?  Here the prices are $.05 to $.10 per megabyte!

Since our applications are multimedia ones, our typical
customer has plenty of RAM for running our software. However,
an important distribution channel for us is the Internet.
This means that both our trial versions, and full, paid-for
versions (from our web store) are usually downloaded over
the customers' regular Internet connection. We have measured
the success rate of the very first customer encounter with
our code: downloading it. There is a clear inverse
relationship between the total download size, and the chance
of successfully finishing the download.

Thus, the sheer size of our software can directly cause our
potential customer to abort his attempt to try out the
software. The typical web surfer will quickly move on to
another site after maybe only one download attempt, and we
may have lost him forever.

In this case there is an "all or nothing" effect; an
apparently insignificant increase in code size might be
just what makes us lose the potential customer.

Thus, until now we have always removed the extra
testing/logging/diagnostics code from the shipping version of our
products.

However, there is a "but" to this story: recently, for
technical support reasons, this exact issue is being
re-considered. It is very likely that future versions
of at least some products will have some of the trace/log
functions/macros selectively left in the production code,
despite what was said above.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Rick Pelleg, mailto:rickp@vdo.net
VDOnet Corp. Ltd. http://www.vdo.net
DISCLAIMER: My opinions, not necessarily my company's


++++ Moderator Comment ++++

  Several points:

   1) The internet bandwidth will be coming along, just like CPU
      speeds and memory.  My at-home connection is a cable modem that
      routinely gets me 40-80 KBytes per second, with occasional
      downloads at 120 KBytes per second (roughly a megabit per second).
      This costs double the typical internet connection ($40/month
      rather than $20/month), but I think it's worth it.

   2) I bet you can't demonstrate your "all or nothing" effect.  It
      seems much more likely to me that each significant size increase
      (say each additional megabyte) will lose some percent of your
      surfers (say 20% per megabyte.)

   3) If you didn't tell anyone, how could they tell the difference
      between reliable code which was 10% bigger and code which had
      10% quality-related code (and was therefore reliable)?

   4) I suspect that downloads are not how you make money.  If you
      could download a product in a second, but the product crashed
      every 10 minutes, I suspect that you wouldn't be in business
      long.  If you could download the product in 10 minutes, but it
      never crashed, I suspect you could have quite a nice business.

  Your point of reduced support costs is another way that quality-
  oriented code can save money/resources.



++++ New Post, New Topic ++++

From: "Danny R. Faught" 
Subject: Re: Software-Quality Digest # 011

> ++++ New Post, New Topic ++++
> From: John Cameron 
> Subject: Re: Any OS testers on this list?
>
> Certainly you would want unit tests.  ie can you write a
> byte to a disk, can you read it, can you put something on a
> monitor?  And then build up from there. You may be thinking
> that you just can't repeatably simulate a busy operating
> system, and you are probably right.  But you can find most
> errors with rather simple tests.

Yes, when designing a new operating system from the ground up, you
definitely want to allow for testability at the unit level.  But how
often do you create a totally new code base for an OS?  We have to
improve the testability of our legacy code a little at a time.
Meanwhile, I would say that for most operating systems sold today,
you have to boot the whole thing to test any part of it.  I've been
trying to convince people not to call this unit testing.  Functional
testing, maybe, or when I'm in a strange mood, I call it "focused
system testing".

> ++++ Moderator Comment ++++
>   So I started keeping count -- so far dumb bugs are winning,
>   by more than 7000 to zero!
>
>   Since John was fond of the "What a DUMB bug!", I probably
>   drove him crazy by pointing out that we'd never seen a
>   smart bug -- each and every time!  ;-)  (I never met a
>   cliche I didn't like!)
>
>   Have you ever experienced a smart or clever bug?
>   If so, would you please share it with us?

I have seen a number of bugs where the response was, "Wow that one
was strange, I'm not surprised we allowed that one in."  The state
of the art nowadays is that you can get out most of the "dumb"
unit-level bugs (excepting operating systems, maybe :-), so the
interesting bugs are found during integration or later, and they're
much less intuitive, or perhaps, "smarter" than us.

> From: David Bennett 
> Subject: Questions and challenges

...

> ++++ Moderator Comment ++++
...
>   1. I think your numbers are too low.  With memory prices
>      at $3-4 per megabyte, and 300MHz processors, we should
>      be budgeting more for self-checking.  So what if your
>      program is 15% slower? (it won't be!)  It's probably
>      not noticeable, and next week Intel will have a chip
>      that more than makes up for it!

In my area, we design supercomputers using the fastest available
chips.  Fudging performance by any amount means that we reduce
the performance of the company's fastest available systems - the
systems we use to announce world record benchmark results, and this
is definitely noticeable.  So keep in mind that there are some
markets where a 15% performance hit is not acceptable.

-Danny


++++ Moderator Comment ++++

  I understand that there are some markets where a 15% time penalty
  would be unacceptable, and so in our systems, we provide a
  run-time switch to disable any checking that seriously impacts
  performance.

  I still argue that for most applications, memory is so cheap that
  it is a false economy to not have checking/logging/QA code always
  built into the application.  Note that I'm *not* saying that we
  haven't done a quality job if we haven't slowed things down by 15%
  and added an extra megabyte of quality-related code -- I'm just
  saying that we ought to think about how much space and performance
  we should budget for quality-support, and that it ought not be
  zero.



++++ New Post, New Topic ++++

From: helander@San-Jose.ate.slb.com (Charlotte Helander)
Subject: Loved the diatribe!

Hi,
  One way to sell management, developers, and ourselves is by
devising measurements that can be converted to dollars, a universal
language.   For example, we made our case for having peer review
training to management and sold the value of peer reviews to the
developers through dollars.

  About three years ago we added three fields to our bug reporting
system: defect-cause, defect-origin, and hours-to-fix.  We limited
the choices for defect-cause to things like overlooked, logic,
unfinished, etc. The choices for defect-origin included
requirements, design, code, test cases, etc.   Hours-to-fix is
supposed to include everyone's time, not just the fixer's.  For
example, if I spend an hour discussing a solution with you, that
counts as 2 hours (one for each of us) in hours-to-fix.

  Whenever a software developer fixes a bug, he/she is forced by the
bug system to supply these three pieces of data.  Over the years,
the four top categories have been design-logic, design-overlooked,
code-overlooked, code-logic.  These categories lend themselves to
early discovery by peer reviews.   We made the case for peer reviews
by totaling the hours-to-fix for the four top origin-cause
categories and converting the total to dollars.

  We now track the number of "issues" found during a peer review and
the time spent in a peer review.  We can compute things like "issues
found per hour of inspection" and compare that value to the time to
fix during system testing.  Everyone can see that peer reviews save
money.

  Quantifying peer reviews is relatively easy.  Other QA activities
are more difficult to quantify, but definitely worth the effort.

  Have other readers devised quantification mechanisms?  If so, for
what?

Charlotte Helander

*******************************************************
Charlotte Helander
Staff Software Process Engineer
Schlumberger - Automated Test Equipment
1601 Technology Drive
San Jose, CA  95110-1397
Phone:  (408)437-5129
FAX:  (408) 437-5246
email:  helander@san-Jose.ate.slb.com
********************************************************


++++ Moderator Comment ++++

  I think actual numbers would be a great help here. If we had real
  numbers from different organizations, people could see what
  techniques had positive paybacks, and perhaps the numbers could be
  used by all who are trying to justify a new quality program to
  management. Therefore, I propose:

    1) Charlotte should provide real numbers for what her group is
       experiencing to justify code reviews.

    2) I'll do the same for Tenberry in the next issue.

    3) I'll collect the data, and present it on our web site, so
       that others will be able to see actual numbers, and see how
       the justification works.



===== NEW POST(S) =====

++++ New Post, New Topic ++++

From: "Phillip Senn" 
Subject: Who are you?

In a group such as this, with a bunch of old grizzled veterans of
the code wars, we could very easily slip into dreams down memory
lane to talk about how it was when WE first got into computers.  I
know that I'm on the precipice of a slippery slope when I begin to
talk about my first job, but allow me to bring in an anecdote to
help illustrate a point.

At my first job, there was an elderly man named Earl.  He was a code
jockey from way back, and couldn't understand why everyone was going
to CRTs.  He liked having his card deck because it was something he
could feel - it was real.

"God help me," I thought to myself, "I will strive to never get in
the place where I'm like Earl."  Now I'm 39 years old, still not as
old as Earl was when I first met him, and I find myself shelling out
to DOS to do copy commands rather than running Windows Explorer.

And so I'm wondering about the demographics of the listserv.  I
kinda have a sneaking suspicion that most of the readers are COBOL
junkies or some other language that they've been comfortable in for
20 years.  Am I right?  I think that in building a system, a lot of
quality is sacrificed in the process of learning a new language.
Thus, it is the people that are "in their zone", that are most
intrigued by improving the quality of their work, provided it's in
the framework of what they're already doing.

Now I realize that a lot of these articles might end up being only
for my own benefit.  In other words, I can say to people "Oh yeah,
I'm a regular contributor to the Software Quality list server".  In
fact, I might want to show the editorial staff of a few magazines
where the SQ list archive is.  But I think that it would help a lot
if we could put together a questionnaire that kind of tells us who
we are.  I'll let Terry decide on the final output, which he no
doubt will put up on his web site, but here are a few questions that
I can come up with off the top of my head:

(Optional) email address:

A.  What hardware have you ever in your life programmed in?
(This will help stroke some people's egos).

B.  What hardware do you still program in to this day?

C.  What Operating Systems have you ever in your life worked with?

D.  What Operating Systems do you still work with to this day?

E.  What languages have you ever in your life programmed in?

F.  What languages do you still program in to this day?

This could be posted as three tables showing the breakdown of the
group.  If we/Terry/you wanted to, he could add the ability to drill
down and find the email address of people who have filled out the
optional field.

I think it's a good idea because we would all like to let people
know our qualifications, in case something terrible should go wrong
at work, and it might help define the group a little bit better.


++++ Moderator Comment ++++

  If this is interesting, I think I'll do it as a form on our web
  site.  Please don't answer Phillip's questions in posts to the
  list -- rather just address the issue of whether this information
  would be useful and/or interesting.

  Regarding Phillip's other points:

  I doubt that many magazine editors would be impressed by being
  published here, but I do discard posts that I think aren't helpful.
  As we get a larger group, I may have to start trimming useful posts.
  So far, that hasn't happened.

  (I have only written two Cobol programs in my life; I like learning
  new languages; but I do use command windows to copy files... ;-)



++++ New Post, New Topic ++++

From: Rick Hower 
Subject: Re: QA Challenges!

Good response!
I hear the same griping from QA folks all the time, and I also hear
the QA managers constantly requesting more people, more money, etc.
My approach is always along the lines of what you're saying, i.e.
"How can I help get better products out faster and cheaper and
improve the bottom line."

--Rick Hower   author of The Software QA/Test Resource Center
                at http://www.charm.net/~dmg/qatest/

++++ Moderator Comment ++++

  And does it work?



++++ New Post, New Topic ++++

From: Sean Lally 
Subject: can I post a job to the list???

This job:

CNET seeks SQA pros in San Francisco and Bridgewater, New Jersey.
For more details check out www.cnet.com/jobs



CNET is hiring!

http://www.cnet.com/Jobs

Sean Lally
Director, Corporate Recruiting
seanl@cnet.com
http://www.cnet.com

You're one click away from a totally new Internet.
Try CNET's new service: Snap! Online.
http://www.snap.com


++++ Moderator Comment ++++

  I wrote back to Sean saying that I didn't think it was appropriate.
  Upon reflection, I thought maybe others might have different
  views than me.  So I'm opening this for discussion -- do we want
  job postings?

  My reasons why not:

    1) We are geographically very diverse, from many different
       countries, not to mention states, territories, etc.  Any one
       job would be likely to interest only a very few readers.

    2) We are positionally diverse: so far, I've identified testers,
       programmers, Software Legends, QA engineers, educators,
       students, and moderators!  Plus managers of all the preceding
       (except for students and Software Legends! ;-).  Again, any
       one job would likely only interest a very few readers.

    3) There are other places where jobs can be posted.

    4) Posting jobs doesn't scale very well, and I'd prefer to keep
       focused on the ways of improving software quality.

  What do you think?



++++ New Post, New Topic ++++

From: "Danny R. Faught" 
Subject: Re: Book Reviews

Terry Colligan wrote:
> ==== BOOK REVIEW ====
>
>   (Let's lynch that moderator guy!
>   Where is our book review!?  ;-)

Surely lots of participants have opinions on books they've read, and
can contribute them.

I wrote up a review of The Craft of Software Testing by Brian Marick
that I could contribute.


++++ Moderator Comment ++++

  So what's keeping you from contributing it???  ;-)

  Seems logical, but so far there have been no volunteers, and I have
  been too busy for the last two weeks to do one.  I hope to do two
  this week.

  If you (or anyone!) were to send in a book review, I would
  certainly publish it!




==== BOOK REVIEW ====

  (Review of Software Runaways, by Robert L. Glass coming soon!)


=============================================================
The Software-Quality Digest is edited by:
Terry Colligan, Moderator.      mailto:moderator@tenberry.com

And published by:
Tenberry Software, Inc.               http://www.tenberry.com

Information about this list is at our web site,
maintained by Tenberry Software:
    http://www.tenberry.com/softqual

To post a message ==> mailto:software-quality@tenberry.com

To subscribe ==> mailto:software-quality-join@tenberry.com

To unsubscribe ==> mailto:software-quality-leave@tenberry.com

Suggestions and comments ==> mailto:moderator@tenberry.com

=============  End of Software-Quality Digest ===============
