Monday, February 13, 2017

Should I learn C++ or Python?

When I first saw this question on Quora, there were already 47 answers, pretty much all of them wrong. But the number of different answers tells you something: choice of programming language is more of a religious question than a technical one. The fact is that if you want to be a professional programmer, you should learn both—and at the same time.

When we teach programming, we always teach at least two languages at the same time, in parallel. Assignments must be done in both (or more) languages, submitted along with a short essay on why the solutions are different, and why the same. That’s the way to develop some wisdom and maturity in the coding part of your professional work.
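To make that concrete, here's a sketch of the kind of paired assignment I have in mind. The task (counting word frequencies in a file) is my illustration, not one of our actual assignments; the comments mark the places where a C++ solution would have to make different choices, which is exactly what the accompanying essay should explore.

```python
# Word-frequency counter: one half of an illustrative paired-language
# exercise. A C++ solution would declare a std::map<std::string, int>,
# read tokens from a std::ifstream, and sort the results itself;
# Python's dynamic typing and library Counter hide all of that.
from collections import Counter
import sys

def word_counts(path):
    with open(path) as f:
        # split() tokenizes on whitespace; a C++ version would loop
        # over stream extraction (in >> word) to the same effect.
        return Counter(f.read().split())

if __name__ == "__main__":
    for word, count in word_counts(sys.argv[1]).most_common(10):
        print(f"{count:6d}  {word}")
```

Writing both versions, then asking why one needs explicit types and memory decisions while the other does not, is where the wisdom comes from.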

Some of the respondents asserted that programming languages are tools. If that’s an appropriate metaphor, then how would you answer this question of a wannabe carpenter:

"Should I learn saws or screwdrivers?"

Do you think someone could be a top-flight carpenter knowing only one?

So, stay out of this quasi-religious controversy, which can never be settled. Instead, spend your valuable time learning as many different programming languages as possible, at least 5 or 6. You won’t necessarily use all of them, but knowing their different approaches will put you far above those dullards who say:

“I only know Language X, but I still think it’s the best language in the world.”

Sunday, February 05, 2017

Fuzz Testing and Fuzz History

In 2016 I added a paragraph to the Wikipedia page on "fuzz testing." Later, the paragraph was edited out because it "lacked reference." The editor, however, suggested that I blog the paragraph and then use the blog as a reference, so the paragraph could be included. So, here's the paragraph:

(Personal recollection from Gerald M. Weinberg) We didn't call it fuzzing back in the 1950s, but it was our standard practice to test programs by inputting decks of punch cards taken from the trash. We also used decks of random number punch cards. We weren't networked in those days, so we weren't much worried about security, but our random/trash decks often turned up undesirable behavior. Every programmer I knew (and there weren't many of us back then, so I knew a great proportion of them) used the trash-deck technique.
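For readers who have never seen the modern form, here's a minimal sketch of the same idea: generate a random "deck," feed it to the program under test, and watch for undesirable behavior. The `./target` program is hypothetical, standing in for whatever you're testing.

```python
# Minimal fuzz-testing sketch: the modern analogue of feeding a program
# decks of random punch cards. "./target" is a hypothetical program
# under test; a nonzero exit code stands in for undesirable behavior.
import random
import subprocess

def random_deck(n_bytes=80):
    # One 80-column "card" worth of random bytes.
    return bytes(random.randrange(256) for _ in range(n_bytes))

for trial in range(1000):
    data = random_deck()
    result = subprocess.run(["./target"], input=data, capture_output=True)
    if result.returncode != 0:
        print(f"trial {trial}: exit code {result.returncode} on input {data!r}")
```

Real fuzzers are far more elaborate, of course, but the trash-deck principle is all there.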

The subject of software testing has many myths and distortions. This story of fuzz testing has several morals:

1. This type of testing was so common that it had no name. Apparently, it was given the name "fuzz testing" around 1988, and the namers were thus given credit in the Wikipedia article for "inventing" the technique.

2. This is just one example of how "history" is created after the fact by human beings, and what they write becomes "facts." That's why I believe there are no such things as "facts"—not in the sense of "truths."

3. In any case, this is one example of why we ought to be wary of labeling "inventors" of various techniques and technologies. For instance, Gutenberg is often labeled the "inventor" of moveable type, though moveable type existed and was widely used long before Gutenberg. Gutenberg used this idea in ways that hadn't been employed before. That was his "invention," and a worthy one it was, but if we're to understand the way technology develops, we have to be more precise in our definition of what was invented and by whom.

Finally, I have no idea who "invented" fuzz testing. It certainly wasn't me.

NOTE: If someone would like to update the fuzz testing article on Wikipedia, they're welcome to reference this blog post.

Wednesday, January 25, 2017

What is the right reason to leave a job?

As a consultant, I frequently leave jobs. I also help many people decide whether or not to leave their jobs. I have learned there is no one "right" reason for leaving, but I've accumulated a list of many "good" reasons for leaving. I'll give some examples:

In my career, I have left jobs when 

- the job I was hired to do was finished.

- the job I was hired to do could not be finished.

- the job I was hired to do would be finished just fine without me.

- I was not able to do the job I was hired to do.

- the job I was hired to do wasn't worth doing.

- I was no longer learning new things (that's my most frequent reason for leaving).

- they told me that my pay was going to be "temporarily" delayed.

- they asked me to do something illegal or unethical.

You may notice that I never leave just because someone is going to pay me more money. If I was hired on to do a job, I feel committed to see that the job is finished, or going to be finished, or will never be finished. Only when my commitment is fulfilled am I ready to move on to bigger things. I don't think it's a good idea to leave behind me a trail of broken commitments.

Another good reason for leaving, though not one I've experienced yet, is when they ask you to do something dangerous to your life or health. Very few jobs are worth dying for.

And here's a useful principle when leaving: If possible, don't quit until you have the next job set up. Why? Because it's much easier to get a new job when you already have a job. Employers tend to be suspicious of unemployed people.

Wednesday, January 11, 2017

Foreword and Introduction to ERRORS book


Ever since this book came out, people have been asking me how I came to write on such an unusual topic. I've pondered their question and decided to add this foreword as an answer.

As far as I can remember, I've always been interested in errors. I was a smart kid, but didn't understand why I made mistakes. And why other people made more.

I yearned to understand how the brain, my brain, worked, so I studied everything I could find about brains. And then I heard about computers.

Way back then, computers were called "Giant Brains." Edmund Berkeley wrote a book by that title, which I read voraciously.

Those giant brains were "machines that think" and "didn't make errors." Neither turned out to be true, but back then, I believed them. I knew right away, deep down—at age eleven—that I would spend my life with computers.

Much later, I learned that computers didn't make many errors, but their programs sure did.

I realized when I worked on this book that it more or less summarizes my life's work, trying to understand all about errors. That's where it all started.

I think I was upset when I finally figured out that I wasn't going to find a way to perfectly eliminate all errors, but I got over it. How? I think it was my training in physics, where I learned that perfection simply violates the laws of thermodynamics.

Then I was upset when I realized that when a computer program had a fault, the machine could turn out errors millions of times faster than any human or group of humans.

I could actually program a machine to make more errors in a day than all human beings had made in the last 10,000 years. Not many people seemed to understand the consequences of this fact, so I decided to write this book as my contribution to a more perfect world.

Not perfect, of course, but more perfect. I hope it helps.


For more than a half-century, I’ve written about errors: what they are, their importance, how we think about them, our attempts to prevent them, and how we deal with them when those attempts fail. People tell me how helpful some of these writings have been, so I felt it would be useful to make them more widely known. Unfortunately, the half-century has left them scattered among several dozen books, so I decided to consolidate some of the more helpful ones in this book.

I’m going to start, though, where it all started, with my first book where Herb Leeds and I made our first public mention of error. Back in those days, Herb and I both worked for IBM. As employees we were not allowed to write about computers making mistakes, but we knew how important the subject was. So, we wrote our book and didn’t ask IBM’s permission.

Computer errors are far more important today than they were back in 1960, but many of the issues haven’t changed. That’s why I’m introducing this book with some historical perspective: reprinting some of that old text about errors along with some notes with the perspective of more than half a century.

1960’s Forbidden Mention of Errors
From: Leeds and Weinberg, Computer Programming Fundamentals, Chapter 10: PROGRAM TESTING
When we approach the subject of program testing, we might almost conclude the whole subject immediately with the anecdote about the mathematics professor who, when asked to look at a student’s problem, replied, “If you haven’t made any mistakes, you have the right answer.” He was, of course, being only slightly facetious. We have already stressed this philosophy in programming, where the major problem is knowing when a program is “right.”

In order to be sure that a program is right, a simple and systematic approach is undoubtedly best. However, no approach can assure correctness without adequate testing for verification. We smile when we read the professor’s reply because we know that human beings seldom know immediately when they have made errors—although we know they will at some time make them. The programmer must not have the view that, because he cannot think of any error, there must not be one. On the contrary, extreme skepticism is the only proper attitude. Obviously, if we can recognize an error, it ceases to be an error.

If we had to rely on our own judgment as to the correctness of our programs, we would be in a difficult position. Fortunately the computer usually provides the proof of the pudding. It is such a proper combination of programmer and computer that will ultimately determine the means of judging the program. We hope to provide some insight into the proper mixture of these ingredients. An immediate problem that we must cope with is the somewhat disheartening fact that, even after carefully eliminating clerical errors, experienced programmers will still make an average of approximately one error for every thirty instructions written.

We make errors quite regularly
This statement is still true after half a century—unless it's actually worse nowadays. (I have some data from Capers Jones suggesting one error in fewer than ten instructions may be typical for very large, complex projects.) It will probably be true after ten centuries, unless by then we've made substantial modifications to the human brain. It's a characteristic of humans that would have been true a hundred centuries ago—if we'd had computers then.

1960’s Cost of errors
These errors range from minor misunderstandings of instructions to major errors of logic or problem interpretation. Strangely enough, the trivial errors often lead to spectacular results, while the major errors initially are usually the most difficult to detect.

“Trivial” errors can have great consequences
We knew about large errors way back then, but I suspect we didn't imagine just how much errors could cost. For examples of some billion-dollar errors along with explanations, read the chapter "Some Very Expensive Software Errors."

Back to 1960 again
Of course, it is possible to write a program without errors, but this fact does not obviate the need for testing. Whether or not a program is working is a matter not to be decided by intuition. Quite often it is obvious when a program is not working. However, situations have occurred where a program which has been apparently successful for years has been exposed as erroneous in some part of its operation.

Errors can escape detection for years
With the wisdom of time, we now have quite specific examples of errors lurking in the background for thirty years or more. For example, read the chapter on “predicting the number of errors.”

How was it tested in 1960
Consequently, when we use a program, we want to know how it was tested in order to give us confidence in—or warning about—its applicability. Woe unto the programmer with “beginner’s luck” whose first program happens to have no errors. If he takes success in the wrong way, many rude shocks may be needed to jar his unfounded confidence into the shape of proper skepticism.

Many people are discouraged by what to them seems the inordinate amount of effort spent on program testing. They rightly indicate that a human being can often be trained to do a job much more easily than a computer can be programmed to do it. The rebuttal to this observation may be one or more of the following statements:
  1. All problems are not suitable for computers. (We must never forget this one.)
  2. The computer, once properly programmed, will give a higher level of performance, if, indeed, the problem is suited to a computer approach.
  3. All the human errors are removed from the system in advance, instead of distributing them throughout the work like bits of shell in a nutcake. In such instances, unfortunately, the human errors will not necessarily repeat in identical manner. Thus, anticipating and catching such errors may be exceedingly difficult. Often in these cases the tendency is to overcompensate for such errors, resulting in expense and time loss.
  4. The computer is often doing a different job than the man is doing, for there is a tendency—usually a good one—to enlarge the scope of a problem at the same time it is first programmed for a computer. People are often tempted to “compare apples with houses” in this case.
  5. The computer is probably a more steadfast employee, whereas human beings tend to move on to other responsibilities and must be replaced by other human beings who must, in turn, be trained.
In other words, if a job is worth doing, it is worth doing right.

Sometimes the error is creating a program at all.
Unfortunately, the cost of developing, supporting, and maintaining a program frequently exceeds the value it produces. In any case, no amount of fixing small program errors can eliminate the big error of writing the program in the first place. For examples and explanations, read the chapter on “it shouldn’t even be done.”

The full process, 1960
If a job is a computer job, it should be handled as such without hesitation. Of course, we are obligated to include the cost of programming and testing in any justification of a new computer application. Furthermore we must not be tempted to cut costs at the end by skimping on the testing effort. An incorrect program is indeed worth less than no program at all because the false conclusions it may inspire can lead to many expensive errors.

We must not confuse cost and value.
Even after all this time, some managers still believe they can get away with skimping on the testing effort. For examples and explanations, read the section on “What Do Errors Cost?”

Coding is not the end, even in 1960
A greater danger than false economy is ennui. Sometimes a programmer, upon finishing the coding phase of a problem, feels that all the interesting work is done. He yearns to move on to the next problem.

Programs can become erroneous without changing a bit.
You may have noticed the consistent use of “he” and “his” in this quoted passage from an ancient book. These days, this would be identified as “sexist writing,” but it wasn’t called “sexist” way back then. This is an example of how something that wasn’t an error in the past becomes an error with changing culture, changing language, changing hardware, or perhaps new laws. We don’t have to do anything to make an error, but we have to do a whole lot not to make an error.

We keep learning, but is it enough?
Thus as soon as the program looks correct—or, rather, does not look incorrect—he convinces himself it is finished and abandons it. Programmers at this time are much more fickle than young lovers.
Such actions are, of course, foolish. In the first place, we cannot so easily abandon our programs and relieve ourselves of further obligation to them. It is very possible under such circumstances that in the middle of a new problem we shall be called upon to finish our previous shoddy work—which will then seem even more dry and dull, as well as being much less familiar. Such unfamiliarity is no small problem. Much grief can occur before the programmer regains the level of thought activity he achieved in originally writing the program. We have emphasized flow diagramming and its most important assistance to understanding a program but no flow diagram guarantees easy reading of a program. The proper flow diagram does guarantee the correct logical guide through the program and a shorter path to correct understanding.

It is amazing how one goes about developing a coding structure. Often the programmer will review his coding with astonishment. He will ask incredulously, “How was it possible for me to construct this coding logic? I never could have developed this logic initially.” This statement is well-founded. It is a rare case where the programmer can immediately develop the final logical construction. Normally programming is a series of attempts, of two steps forward and one step backward. As experience is gained in understanding the problem and applying techniques—as the programmer becomes more immersed in the program’s intricacies—his logic improves. We could almost relate this logical building to a pyramid. In testing out the problem we must climb the same pyramid as in coding. In this case, however, we must take care to root out all misconstructed blocks, being careful not to lose our footing on the slippery sides. Thus, if we are really bored with a problem, the smartest approach is to finish it as correctly as possible so we shall never see it again.

In the second place, the testing of a program, properly approached, is by far the most intriguing part of programming. Truly the mettle of the programmer is tested along with the program. No puzzle addict could experience the miraculous intricacies and subtleties of the trail left by a program gone wrong. In the past, these interesting aspects of program testing have been dampened by the difficulty in rigorously extracting just the information wanted about the performance of a program. Now, however, sophisticated systems are available to relieve the programmer of much of this burden.

Testing for errors grows more difficult every year.
The previous sentence was an optimistic statement a half-century ago, but not because it was wrong. Over all these years, hundreds of tools have been built attempting to simplify the testing burden. Some of them have actually succeeded. At the same time, however, we’ve never satisfied our hunger for more sophisticated applications. So, though our testing tools have improved, our testing tasks have outpaced them. For examples and explanations, read about “preventing testing from growing more difficult.” 

If you're as interested in errors as I am, you can obtain a copy of Errors here:

ERRORS, bugs, boo-boos, blunders

Thursday, December 22, 2016

What does it take to become a good management consultant?

When the question in the title came up, all the answers seemed to be answering a different question, something like 

What does it take to get a good job as a management consultant in a large consulting firm?

Lots of people who are not good consultants get jobs as management consultants in large consulting firms. And lots of good consultants can't get jobs with such firms. I know these things because I've been a consultant (a consultant's consultant) to a number of such firms.

If you really want to know what it takes to become a good management consultant, the answer begins with the observation that there are many different styles of good management consulting. The most important quality good management consulting requires is the ability to know yourself, both the good and the bad, along with the ability to retain the good things and improve the bad ones.

To take one example, consider health. I’ve watched many would-be management consultants fail because they couldn't control their drinking or eating habits when traveling on an expense account.

Another example is would-be consultants who think their domain knowledge will be sufficient for doing a good job, but who lack "people skills." They believe, erroneously, that they can be arrogant, offensive, non-empathetic consultants and that people will hire them anyway because they're so smart and well-informed. They're wrong, and most of them never realize why they're failing. That's why I've written a number of books for consultants who wish to improve their consulting success:

Want to see a list of my books?

Saturday, December 17, 2016

What's the most touching thing to say to a teacher?

The question was, "What's the most touching thing to say to a teacher?" There were many fine answers, and all of them said more or less the same thing: thank your teacher for changing your life.

I agree that telling your teacher “thank you” can be touching, but in my 60+ years of teaching, it’s only the second most touching thing. So, what’s the most touching?

What touches me most is when a student teaches me something I did not know. That shows me that the student has become a contemporary, a grown-up person who will go on to teach others, part of the great chain of “paying forward.” When that happens, I know that I have succeeded, in some small way, in helping that student and the world in which we all live. That’s what touches me the deepest.

Moreover, in my career such learning has happened thousands of times. If I am a better teacher today, a better human being, I owe it all to my students. Thank you, students. Thank you.

See Experiential Learning for more on how and why I learn from my students, so you, too, can be touched by what they teach you.

Monday, November 28, 2016

How do I find cheap freelance hardware and software developers?

The question was, "How do I find cheap freelance hardware and software developers?"

I warned the questioner to be very careful about what he was asking for:

First of all, you don’t want “cheap” developers; you want inexpensive developers.

Second, the expense of developers is not their hourly or daily rate. It’s the total cost of building and delivering the software and hardware you want.

In my experience, the least expensive developers have much higher rates than the more costly ones. They deliver what you want, the first time, in less time, with less trouble.

However, a high hourly rate doesn’t guarantee an inexpensive product. Freelance developers can charge anything they want, so price doesn’t necessarily indicate value.

Instead, speak to references about any developer you’re considering. Find out first hand what you’re going to get for what you’re paying.

And, by the way, don't think you'll save money by hiring individual developers. Your best bet will generally be to choose a team, perhaps an Agile team, but in any case, a team that has a history of working well together.