Many years ago, I read Donald Trump's The Art of the Deal (New York: Random House, 1987).
The book is often cited as one of the best-selling business books ever written. Others use the content of the book to register complaints about his presidency, claiming that what Trump wrote is inconsistent with what he now says and does.
But, the larger question is, “does The Art of the Deal even qualify as a business book?” And, exactly how big of a best-seller is it? As of this writing, the book is in the top 100 of three Amazon.com best-seller sub-categories.
I found some information that addresses these questions; click here to read it.
"It's difficult to weigh Trump's opus against other 'business books' for two reasons."
Put this in my “I wish I had time to read every book that interests me” category.
Here’s a question: “can you trust the numbers you read?” The answer, apparently, is “no.” And this stance of “distrust” is one that we should probably take in the business book arena.
Here's the info. It comes from Jack Shafer, writing this morning at Slate.com: By the Numbers: A terrific new book of essays encourages us all to be skeptical about statistics. Here's an excerpt:
If you’re a journalist, a gluttonous consumer of news, or are easily swayed by the slapdash, stop what you’re doing and go buy a copy of Sex, Drugs, and Body Counts: The Politics of Numbers in Global Crime and Conflict. Set aside a couple of hours tonight to read three or four of the essays that academics Peter Andreas and Kelly M. Greenhill collected in it. Then, sit down in front of your computer and send me an e-mail to thank me for helping to end your enslavement to the dodgy numbers that taint journalism and public policy. It’s not just a good book. It’s a great book. And it belongs forever on your bookshelf.
That is just about the strongest endorsement of a book you can read. Will I have time to read it? It sounds like I need to make time.
What are the implications for business books/business studies? I think the thinking goes something like this. We are all starved for data. We have been taught to distrust anecdotal data. We want hard data. But, what if the hard data is softer than we think? What if the numbers are not reliable?
In the book (according to Shafer), journalists and authors have a tendency to accept numbers that are repeated by others, without going back to get the hard, firm source. And the editors/authors of this book discovered, in many instances, that numbers were… well, consider this paragraph:
Sex, Drugs, and Body Counts performs similar forensics on the assertion, oft-repeated in government reports, that al-Qaida allots 10 percent of its budget to operational costs and 90 percent to administration and infrastructure. When you trace the claim to its origin—a report on terrorism—you find no footnote or sourcing at all. The author apparently concocted it from thin air.
"Concocted from thin air." That qualifies as a reason to be skeptical about the numbers you read. And it reminds us of this cautionary tale: did Tom Peters, in In Search of Excellence, fake some data? Maybe he did. A Fast Company article quotes him as saying he "faked the data":
This is pretty small beer, but for what it’s worth, okay, I confess: We faked the data.
Whether Peters faked some data or not, there is little doubt that other authors have relied on data that was faked, fudged, made up, or manipulated in a multitude of ways. So here, at least, is my recommendation: be very wary of accepting the numbers you read. If "distrust" is too strong a warning, then at least make "Trust, but verify" your mantra.
One of the ongoing discussions that I am having, in my own head and with others, is just how reliable all of these studies about success and efficiency really are. Does anybody know anything? In The Black Swan, Nassim Nicholas Taleb quotes “the legendary screenwriter William Goldman, who was said to have shouted ‘Nobody knows anything.’”
The Black Swan describes the problem this way: the world is not predictable, the world is random, and black swans genuinely throw us into a new set of problems, reminding us that "nobody knows anything." (All swans were known to be white until someone went to Australia and saw a black swan; thus, a black swan is any new happening or discovery that throws all of our previous "knowledge" overboard.) Taleb argues that we are so in need of certainty that we invent it after the fact: "We have seen how good we are at narrating backward, at inventing stories that convince us that we understand the past."
I write this to discuss another aspect of this problem – do management consultants really know anything? I admire the work they do. They work hard, approach their tasks with great seriousness, and genuinely try to help companies do better. But the question is one of actual capability – do they really know what they think they know?
That is the question raised by an extensive article in The New Yorker, NOT SO FAST: Scientific management started as a way to work. How did it become a way of life? by Jill Lepore. (Read it here).
The article quotes from the book The Management Myth: Why the Experts Keep Getting It Wrong by Matthew Stewart, which I have blogged about before in my post Are Consultants Worth Their Pay? — Are there genuine experts that provide value? (which you can read here).
The New Yorker article begins with this:
Ordering people around, which used to be just a way to get things done, was elevated to a science in October of 1910, when Louis Brandeis, a fifty-three-year-old lawyer from Boston, held a meeting at an apartment in New York with a bunch of experts who, at Brandeis’s urging, decided to call what they were experts at “scientific management.” Everyone there—including Frank and Lillian Gilbreth, best known today as the parents in “Cheaper by the Dozen”—had contracted “Tayloritis”: they were enthralled by an industrial engineer from Philadelphia named Frederick Winslow Taylor, who had been ordering people around, scientifically, for years. Speedy Taylor, as he was called, had invented a new way to make money. He would get himself hired by some business; spend a while watching people work, stopwatch and slide rule in hand; write a report telling them how to do their work faster; and then submit an astronomical bill for his services. He is the “Father of Scientific Management” (it says so on his tombstone), and, by any rational calculation, the grandfather of management consulting.
Whether he was also a shameless fraud is a matter of some debate, but not, it must be said, much: it’s difficult to stage a debate when the preponderance of evidence falls to one side. In “The Management Myth: Why the Experts Keep Getting It Wrong” (Norton; $27.95), Matthew Stewart points out what Taylor’s enemies and even some of his colleagues pointed out, nearly a century ago: Taylor fudged his data, lied to his clients, and inflated the record of his success. As it happens, Stewart did the same things during his seven years as a management consultant; fudging, lying, and inflating, he says, are the profession’s stock-in-trade.
And here is a description of how this approach "worked":
Taylor is the mortar, and the Gilbreths the bricks, of every American business school. But it was Brandeis who brought Taylor national and international acclaim. He could not, for all that, have saved the railroads a million dollars a day—the number was, as a canny reporter noted, the “merest moonshine”—because, despite the parade of experts and algorithms, the figure was based on little more than a ballpark estimate that the railroads were about five per cent inefficient. That’s the way Taylorism usually worked. How did Taylor arrive at forty-seven and a half tons for Bethlehem Steel? He chose twelve “large, powerful Hungarians,” observed them for an hour, and calculated that, at the rate they were working, they were loading twenty-four tons of pig iron per man per day. Then he handpicked ten men and dared them to load sixteen and a half tons as fast as they could. They managed to do it in fourteen minutes; this yields a rate of seventy-one tons per man per ten-hour day. Taylor inexplicably rounded up the number to seventy-five. To get to forty-seven and a half, he reduced seventy-five by about forty per cent, claiming that this represented a work-to-rest ratio of the “law of heavy laboring.” Workers who protested the new standards were fired. Only one—the best approximation of an actual Schmidt was a man named Henry Noll—loaded anything close to forty-seven and a half tons in a single day, a rate that was, in any case, not sustainable. After providing two years of consulting services, Taylor billed the company a hundred thousand dollars (which works out to something like two and a half million dollars today), and then he was fired.
It reminds me of the fact, now widely known, that in the modern classic In Search of Excellence, Peters and Waterman "made up" some of the numbers – "faked the data." Though the phrase "faked the data" did not come directly from the mouth of Peters, there is certainly an acknowledgement that the numbers were not fully pure. Here's the quote by Peters, from a 2001 article in Fast Company:
This is pretty small beer, but for what it’s worth, okay, I confess: We faked the data. A lot of people suggested it at the time. The big question was, How did you end up viewing these companies as “excellent” companies? A little while later, when a bunch of the “excellent” companies started to have some down years, that also became a huge accusation: If these companies are so excellent, Peters, then why are they doing so badly now? Which I’d say pretty much misses the point.
[In] Search [of Excellence] started out as a study of 62 companies. How did we come up with them? We went around to McKinsey’s partners and to a bunch of other smart people who were deeply involved and seriously engaged in the world of business and asked, Who’s cool? Who’s doing cool work? Where is there great stuff going on? And which companies genuinely get it? That very direct approach generated a list of 62 companies, which led to interviews with the people at those companies. Then, because McKinsey is McKinsey, we felt that we had to come up with some quantitative measures of performance. Those measures dropped the list from 62 to 43 companies. General Electric, for example, was on the list of 62 companies but didn’t make the cut to 43 — which shows you how “stupid” raw insight is and how “smart” tough-minded metrics can be.
Were there companies that, in retrospect, didn’t belong on the list of 43? I only have one word to say: Atari.
Was our process fundamentally sound? Absolutely! If you want to go find smart people who are doing cool stuff from which you can learn the most useful, cutting-edge principles, then do what we did with Search: Start by using common sense, by trusting your instincts, and by soliciting the views of “strange” (that is, nonconventional) people. You can always worry about proving the facts later.
Notice that last line – "You can always worry about proving the facts later" – and remember the Taleb quote: "We have seen how good we are at narrating backward, at inventing stories that convince us that we understand the past."
We already know that not all the companies in In Search of Excellence, and later in Good to Great, maintained their place of excellence/greatness. Collins has followed up with How the Mighty Fall, wrestling with this problem.
I think we need to keep seeking the knowledge and wisdom to discover what makes a company great, and then what keeps a company great. But at the end of the day, I'm not sure we will ever "know." And that is okay – if we acknowledge our limitations. But when "experts" imply that they do know, and then their predictions and projections do not come true, or do not remain true, it calls their other observations and findings into question. And when numbers are fudged, or even made up, it undermines credibility altogether – and it leaves companies thinking they know more than they actually know.
Management consultants can be valuable. But ultimately, who really knows anything? Or, to put it another way, is management consulting actually a science? Or is it more of an art, imprecise, requiring an extra-heavy dose of ethics because of the imprecision involved?
Here’s one more thought from the New Yorker article:
Business schools have been indicted before. Earning an M.B.A. has been found to have little correlation with later business success. Business isn’t a science, critics say; it’s a set of skills, best learned on the job. Some business schools, accused of teaching nothing so much as greed, now offer ethics courses. Stewart argues that this whole conversation, about people, production, wealth, and virtue, is a conversation about ethics, and is better had within a liberal-arts curriculum.
You might want to also read my post Dehumanized — A Cause for Alarm in Education, and in the World of Business Books.
You can purchase my synopsis of The Black Swan, and Karl Krayer's synopsis of Good to Great, each with audio + handout, at our companion site, 15minutebusinessbooks.com.