By Gerald M. Weinberg, 1991, ISBN 0-932633-22-6. Gerald Weinberg understands the process of programming, and why the people aspects are so important. This isn’t a book about programming, but about ways to think about programming. He covers a great number of topics that are still relevant today. It’s too bad this book is out of print.

[pNN] Listing of Laws, Rules, and Principles

Crosby’s Definition of Quality: Quality is “conformance to requirements.” (p. 5)

The Quality Statement: Every statement about quality is a statement about some person(s). (p. 5)

The Political Dilemma: More quality for one person may mean less quality for another. (p. 6)

The Quality Decision: Whose opinion of quality is to count when making decisions? (p. 7)

The Political/Emotional Dimension of Quality: Quality is value to some person. (p. 7)

The Inadequate Definition of Quality: Quality is the absence of error. (p. 9)

Crosby’s Economics of Quality: “It is always cheaper to do the job right the first time.” (p. 19)

The Quest for Perfection: The quest for unjustified perfection is not mature, but infantile. (p. 21)

Boulding’s Backward Basis: Things are the way they are because they got that way. (p. 22)

The Superprogrammer Image: There is no knowledge of management as a development tool. (p. 25)

Using Models to Change Thinking Patterns: When the thinking changes, the organization changes, and vice versa. (p. 35)

The Formula for a System’s Behavior: Behavior depends on both state and input. (p. 59)
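
The formula Weinberg is summarizing is Behavior = f(State, Input): you cannot predict a system’s reaction from the input alone. A minimal sketch of that idea as a toy state machine in Python (the states, events, and reactions are invented for illustration, not taken from the book):

    def step(state, event):
        """Return (new_state, behavior) for a toy change-control system."""
        transitions = {
            ("open", "fix_submitted"): ("review", "schedule a code review"),
            ("review", "approved"): ("closed", "ship the fix"),
            ("review", "rejected"): ("open", "send back to the developer"),
        }
        # The same event yields different behavior depending on current state.
        return transitions.get((state, event), (state, "no reaction"))

    state = "open"
    for event in ["fix_submitted", "rejected", "fix_submitted", "approved"]:
        state, behavior = step(state, event)
        print(event, "->", state, ":", behavior)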

The First Law of Bad Management: When something isn’t working, do more of it. (p. 62)

Brooks’s Model (Rephrased): Lack of calendar time has forced more failing software projects to face the reality of their failure than all other reasons combined. (p. 74)

Brooks’s Model (Rephrased Again): Lack of calendar time has forced more failing software projects to face the incorrectness of their models than all other reasons combined. (p. 74)

Why Software Projects Go Wrong: More software projects have gone awry for lack of quality, which is part of many destructive dynamics, than for all other causes combined. (p. 76)

Why Software Projects Go Wrong (Part 2): More software projects have gone awry from management’s taking action based on incorrect system models than for all other causes combined. (p. 76)

The Scaling Fallacy: Large systems are like small systems, just bigger. (p. 77)

The Reversible Fallacy: What is done can always be undone. (p. 89)

The Causation Fallacy: Every effect has a cause… and we can tell which is which. (p. 90)

Decisions by People: Whenever there’s a human decision point in the system, it’s not the event that determines the next event, but someone’s reaction to that event. (p. 111)

The Square Law of Computation: Unless some simplification can be made, the amount of computation to solve a set of equations increases at least as fast as the square of the number of equations. (p. 130)
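
A rough way to see the law: an n-equation linear system already has n × n coefficients, so merely reading the problem is quadratic, and naive Gaussian elimination does on the order of n³/3 multiply-adds. A small Python sketch (the operation count is a simplified model, not a benchmark) shows the cost growing faster than n²:

    def elimination_ops(n):
        """Approximate multiply-add count for naive Gaussian elimination."""
        ops = 0
        for k in range(n):             # for each pivot column...
            for i in range(k + 1, n):  # ...update every row below it,
                ops += (n - k) + 1     # touching the remaining columns
        return ops                     # plus the right-hand side

    for n in [10, 20, 40, 80]:
        # The ratio ops/n**2 keeps rising, so growth is super-quadratic.
        print(n, elimination_ops(n), round(elimination_ops(n) / n**2, 1))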

The Natural Software Dynamic: Human brain capacity is more or less fixed, but software complexity grows at least as fast as the square of the size of the program. (p. 135)

The Size/Complexity Dynamic: Ambitious requirements can easily outstrip even the brightest developer’s mental capacity. (p. 144)

The Log-Log Law: Any set of data points forms a straight line if plotted on log-log paper. (p. 146)

The Helpful Model: No matter how it looks, everyone is trying to be helpful. (p. 154)

The Principle of Addition: The best way to reduce ineffective behavior is by adding more effective behavior. (p. 155)

An Additional Model: The way people behave is not based on reality, but on their models of reality. (p. 156)

The First Principle of Programming: The best way to deal with errors is not to make them in the first place. (p. 184)

The Absence of Errors Fallacy: Though copious errors guarantee worthlessness, having zero errors guarantees nothing at all about the value of software. (p. 185)

The Controller Dilemma: The controller of a well-regulated system may not seem to be working hard. (p. 197)

The Controller Fallacy: If the controller isn’t busy, it’s not doing a good job. If the controller is very busy, it must be a good controller. (p. 197)

The Difference Detection Dynamic: First, the smallest share of the test time is spent on the easy problems; and second, most of the easy problems are found early in the test cycle. (p. 202)

The Failure Detection Curve (the Bad News): There is no testing technology that detects failures in a linear manner. (p. 205)

The Failure Detection Curve (the Good News): Combining different detection technologies creates an improved technology. (p. 205)
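
One way to make the good news concrete: if each detection technique catches a given failure with some probability, and the techniques miss independently, the miss probabilities multiply. Both the independence assumption and the rates below are illustrative, not figures from the book:

    from functools import reduce

    def combined_detection(rates):
        """Chance that at least one technique detects a given failure."""
        miss = reduce(lambda acc, p: acc * (1 - p), rates, 1.0)
        return 1 - miss

    print(combined_detection([0.55]))              # reviews alone: 0.55
    print(combined_detection([0.55, 0.50]))        # + unit tests: 0.775
    print(combined_detection([0.55, 0.50, 0.40]))  # + system tests: 0.865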

The Army Principle: There are no bad soldiers; there are only bad officers. (p. 212)

The Army Principle (modified): There are no bad programmers; there are only bad managers who don’t understand the dynamics of failure. (p. 213)

The Self-Invalidating Model: The belief that a change will be easy to do correctly makes it less likely that the change will be done correctly. (p. 236)

The Ripple Effect: The number of separate code areas that must be changed to effect a single fault resolution. (p. 237)

The Modular Dynamic: The more modular you make the system, the fewer side effects you need to consider. (p. 238)
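
The arithmetic behind the modular dynamic: n parts that can all interact give n(n − 1)/2 potential interactions to consider, while walling the same parts off into modules leaves only the within-module pairs plus the module-to-module interfaces. A back-of-the-envelope Python sketch (the part and module counts are made up):

    def pairwise(n):
        """Number of distinct pairs among n interacting parts."""
        return n * (n - 1) // 2

    parts, modules = 24, 4
    print(pairwise(parts))  # fully entangled: 276 interactions
    # Four isolated modules of six parts each, plus the interfaces
    # between the four modules themselves: 4*15 + 6 = 66 interactions.
    print(modules * pairwise(parts // modules) + pairwise(modules))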

The Titanic Effect: The thought that disaster is impossible often leads to an unthinkable disaster. (p. 241)

The Pressure/Judgment Dynamic: Pressure leads to conformity, which leads to misestimating, which leads to lack of control, which leads to more pressure. (p. 255)

The Law of Diminishing Response: The more pressure you add, the less you get for it. (p. 262)

Weinberg’s Zeroth Law of Software: If the software doesn’t have to work, you can always meet any other requirement. (p. 275)

Managers Not Available: Busy managers mean bad management. (p. 276)

No Time to Do It Right: Why is it we never have time to do it right but we always have time to do it over? (p. 278)

The Boomerang Effect: Attempts to shortcut quality always make the problem worse. (p. 279)