Hi Beth:
> Can you point me to where this is written?
I certainly can.

Most of the popular standards are listed here:
http://www.12207.com/test1.htm
> And when you say "software industry", are you talking about some
> unilateral organization? I wasn't aware that such a thing existed.
Are you serious? Ask some of the coders you know what industry they work
in. Yes, there are published standards that employees in the software
industry need to adhere to, enforceable in various ways, just as there are
road rules. Some of these standards are required of software testers and
business analysts. One of those standards is an official glossary of
industry terms and definitions.
> Are you saying that every software company accepts "your" definition of
> these terms?
No. They all accept the ISO/IEEE definitions, and so do I.
> And if the ISO defines "bug" as you have stated, but the majority of
> people understand it otherwise, how is their definition useful?
The official term is "defect". The term "bug" is officially interchangeable
with it, and has the same definition. People who work in the software
industry give such terms a precise meaning. The majority of the lay public
misuse medical and scientific terms too. For example, "schizophrenia" does
NOT mean "multiple personalities"; the temperature outside is NOT 63
degrees "above zero"; and an aircraft's flight data recorder is NOT a
"black box", it's bright orange.
> As I've said before, the term "bug" is a commonly accepted term for a
> type of flaw (or defect) which was not intended by the designer of the
> software.
Nope. It has NEVER had that meaning within the industry. A Defect
describes software behaviour that does not conform to the CUSTOMER's
requirements, and it always has. Designers screw up too. In fact, their
errors are often the most costly and the most difficult to find.
> The fact that *some* software manufacturers/designers choose to redefine
> the term so that it includes what are commonly referred to as "design
> flaws" and thus equate the term "bug" with "flaw" or "defect" does not
> make the *commonly accepted* meaning any less commonly accepted.
Sorry, Beth: that's not germane to this argument. More than 99 per cent of
people working in the software industry use the same very precise
definitions for the terms used in software quality assurance. They have
to: they write those definitions into the contracts under which they
deliver software.
> If we were all to accept "your" definition (a bug is a defect is a flaw
> is a bug), then we would need *new* terms to distinguish between a defect
> that was by design and a defect that was not intended. Since we already
> have those terms ("design flaw" and "bug"), why create new ones? And
> what terms would *you* use to distinguish between the two?
Sorry: you need to understand how the commercial software development
industry works. "Design flaw" is not an industry term. However, that's
not the point. The point is that you are confusing the steps in a
tightly-defined process. The first step is to find an "unexpected
behaviour". When we find one, it is formally logged as a "Defect" in the
Defect Management database. The Defect Review Committee first reviews it
to see whether it really is a "Defect". It may be closed at that stage,
without further action. (This, by the way, can be a weakness in current
software engineering methodologies, but it is commonly allowed.)
If the Defect Review Committee formally "accepts" the situation as a
"Defect", the committee assigns it a Severity and refers it for Analysis.
Then, and not until then, can we begin to ascertain the cause, and thus
determine which stage of the process failed. It's not until then that you
find out whether it's a coding error, a design error, a specification
error, a requirements error, or a simple misunderstanding by the tester.
In software engineering, they are ALL "Defects". You are trying to say
that a Design Defect (it sux, but we meant it to be that way) is somehow
different from a Coding Defect (it sux because the programmer made a
mistake) or a Specification Defect (it sux because the business analyst
got the wrong end of the stick).
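
To make that lifecycle concrete, here's a rough sketch in Python. Every
name in it (State, Cause, Defect.review and so on) is my own invention for
illustration, not a term from any standard or any particular Defect
Management tool:

from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class State(Enum):
    """Lifecycle stages, as described above."""
    LOGGED = auto()       # unexpected behaviour recorded in the database
    CLOSED = auto()       # review committee rejected it; no further action
    ACCEPTED = auto()     # review committee agrees it really is a Defect
    IN_ANALYSIS = auto()  # severity assigned; cause being ascertained


class Cause(Enum):
    """Which stage of the process failed -- ALL of these are Defects."""
    CODING = auto()         # the programmer made a mistake
    DESIGN = auto()         # it sux, but we meant it to be that way
    SPECIFICATION = auto()  # the analyst got the wrong end of the stick
    REQUIREMENTS = auto()   # the customer's need was captured wrongly
    TESTER_ERROR = auto()   # a simple misunderstanding by the tester


@dataclass
class Defect:
    description: str
    state: State = State.LOGGED
    severity: Optional[int] = None  # assigned only on acceptance
    cause: Optional[Cause] = None   # known only after analysis

    def review(self, accepted: bool,
               severity: Optional[int] = None) -> None:
        """The Defect Review Committee accepts or closes the report."""
        if accepted:
            self.state, self.severity = State.ACCEPTED, severity
        else:
            self.state = State.CLOSED

    def analyse(self, cause: Cause) -> None:
        """Then, and not until then, can we ascertain the cause."""
        if self.state is not State.ACCEPTED:
            raise ValueError("cannot ascertain a cause before acceptance")
        self.state, self.cause = State.IN_ANALYSIS, cause

Note that whether the cause turns out to be DESIGN or CODING makes no
difference to the record: it went into the database as a Defect, and it
stays a Defect.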
The only time you could classify bad software as "not" a bug would be if
the Customer asked for it to be that way. There are lots of examples of
THAT in the industry. The industry still doesn't let itself off the hook
for those: it's our professional responsibility to advise the customer
that what they are asking for is rubbish. If we don't do that, it's still
a bug. In these litigious times, we take great care to advise the customer
of this in WRITING and make them SIGN any instruction to go ahead anyway.
We would have a very short career otherwise, because when the users hate
it, the customer is very likely to hire a lawyer to find someone ELSE to
blame.
> Respectfully (but in total disagreement on the basis of usefulness in
> communication and discourse and regardless of the ISO or Carnegie Mellon
> University
Well, I totally respect your position, and your courage in defending it.
However, following the debacle of the dot-bombs that burned money and
disappeared, the software industry has made huge strides in improving the
professionalism of its activities. One of the areas it has put a lot of
effort into is the whole subject of Software Quality Assurance, and as
part of that activity it has sought to tightly define the terms used in
the industry.
Like World Peace, it remains a work in progress, but we're all agreed that
a "Bug" is when it doesn't do what the customer asked for. Debate
continues as to whether we should extend that further, to say a bug is
when it doesn't do what the customer "needed".
Cheers
--
Please reply to the newsgroup to maintain the thread. Please do not email
me unless I ask you to.
John McGhie <[email protected]>
Microsoft MVP, Word and Word for Macintosh. Consultant Technical Writer
Sydney, Australia +61 (0) 4 1209 1410