NY Times runs another Op/Ed column arguing science should not try to extend human lifespan

The New York Times opinion page seems to be on a full-bore campaign against radical extension of human life.

For the second time in less than a month, the Times has decided that the voices in favor of not pursuing life-extending technologies and therapies need to be heard. Three weeks ago it was so-called bioethicist Daniel Callahan who questioned the value of extending human life much beyond the 78 years that the average American now enjoys. Now the Times has found room for a column by Roger Cohen—a supporter of the U.S. invasion of Iraq and defender of Rupert Murdoch—to make exactly the same argument that Callahan made.

Like Callahan, Cohen depicts radical life extension not as a blessing and a sign of success as a species, but as a burden on society because of current limitations on both natural and medical resources and a lack of jobs in society. Cohen is unable to exercise even an iota of imagination to conjure a world powered by renewable resources in which there is fairer distribution of the rewards of work, people have fewer children and everyone regardless of age has access to education, food, medical care and adequate shelter. All he sees are the problems of taking care of the elderly instead of the great joy that life can provide at any age.

Cohen cites statistics suggesting that 56% of Americans don’t want to undergo medical treatments to live to 120 or more. Of course the question is theoretical. I know a lot of very active people in their 80’s and 90’s—some with pain or illness, some without—but not one of them is sitting around waiting for or longing for death.

At the end of his article, Cohen waxes philosophical about the relationship between death and meaning. Like many before him, he claims that human life has no meaning without death. His exact words: “This resistance to the super-centenarian dream demonstrates good sense. Immortality — how tempting, how appalling! What a suffocating trick on the young! Death is feared, but it is death that makes time a living thing. Without it life becomes a featureless expanse. I fear death, up to a point, but would fear life without end far more.”

That’s fine for him, and I also know that many long for death because they believe in an afterlife that will be a better, happier place.

But for me, human life is the ultimate value and extending it and making it more comfortable is the greatest good. I for one would not be bored with a longer life, even with eternal life: I could study more about human history, human society, evolution and science. I could learn more languages. I could visit more of the world—at a more leisurely pace than current junkets abroad since I would have more time. I might even travel in space. I love playing games and watching sports, but even more, I get a great sensual pleasure out of preparing food and eating. As for sex—even if I ever became unable to achieve an erection, I would still take immense joy in the many other pleasures we label as sexual. Cohen says that death gives our lives meaning. I disagree: I think the knowledge we are going to die imbues all pleasure with melancholy or sadness. I’m not the first to express this belief—it was part of the philosophy of the ancient Roman and Greek Epicureans.

I love life and I don’t want anyone to take even a minute away from me. The thought that humans keep extending our lives through the pursuit of knowledge keeps me from despair. The idea that the human species could survive the destruction of the earth when the sun burns out by transporting large numbers of people to another planet in another solar system sustains my hope.

But I also realize that we have to change our ways for humans to survive as a species and for us to attain radical life extension for all. It will take a more equitable distribution of wealth, a focus on renewable resources, replacement of the accumulation of material things as the ultimate goal of life, an end to expensive and destructive wars, the basing of community decisions on science and not on convenience or the best interests of a few—in short it will take a repudiation of our wasteful, materialistic, war-mongering society. That’s something that those advocating against life extension don’t seem willing to contemplate.


Small problem with Joseph Epstein’s complaint about meritocracy: where is it?

Every once in a while, a white male who has made his living as a “responsible conservative” or a conservative parading as a centrist produces an article bemoaning the fact that we are now ruled by a meritocracy. Through the years, George Will, Irving Kristol and William Buckley Jr. can count themselves among the many so-called public intellectuals who have bemoaned the coming of the meritocracy.

The latest is Joseph Epstein, a long-time conflater of civic virtues with the rights of the privileged, in a Wall Street Journal article titled “The Late, Great American WASP.” Like most of his predecessors, Epstein contrasts the current meritocracy with the former system in which the most powerful people were likely to be male, Protestant, of British descent, from wealthy and well-established families with many connections to business opportunities, and to have attended an Ivy League school. Epstein defines WASP as the ruling class that dominated politics, economics (by which I think he means business) and education until it was gradually replaced by a meritocracy starting after World War II. By putting a right-wing slant on carefully-selected anecdotes, Epstein hopes to prove that when WASPs ruled we muddled through pretty well and that now that we have a meritocracy, as witnessed by the Clinton and Obama presidencies, we are pretty much going to hell in a hand basket.

The problem is that we do not have a real meritocracy, and certainly not in politics, business or education. Epstein can’t make his argument without this assumption, which is patently false.

In the days of WASP ascendancy, the most powerful people in most fields did go to Ivy League or Ivy-type schools, and that’s still the case. If you don’t believe me, pick any field outside sports, even entertainment, and start investigating the backgrounds of the most powerful people in it. In all cases you’ll find that an inordinate percentage, and often a majority, came from wealthy families or went to a top-echelon school, be it Harvard, Yale, Duke or Stanford.

In the old days, mostly rich and well-connected kids—kids from the ruling elite—got to go to this handful of schools, and that’s still the case. As many researchers have noted, legacies get bigger breaks in admissions decisions at Ivy League schools than do athletes and minorities. That’s what got our second President Bush into Yale (and got his opponent in the 2000 election, Al Gore, into Harvard), a fact that Epstein ignores in substantiating his side argument that Bush II turned himself into a non-WASP.

There is a very good reason that so many kids who get into the top schools are wealthy: they have all the advantages. The latest research shows that kids from the poorest of backgrounds lose 10 to 13 IQ points because they have to dedicate too much of their brains to thinking about their next meal. That point spread spans the difference between being a smart kid and a genius. The wealthy have an edge over the middle classes because they can afford to spend more in the ever-escalating race to prepare children: The more money the family makes, the more likely the child will get special classes, travel abroad, summer camps with intellectual enrichment, SAT tutors, SAT prep courses, educational consultants, subject tutors. The wealthy parents are also more likely to make large contributions to the university.

Take a look at the statistics: the U.S. currently has less mobility between the classes and less upward mobility than at any time in more than a century. The social mobility in today’s United States is lower than that of any other westernized industrial or post-industrial nation. Poor people move up to the middle or upper classes less frequently here than in any of the nations that had royalty and a rigid class system for centuries.

Parts of our American society do operate as a meritocracy. Bill Clinton, Barack Obama and Joe Biden all prove that the brightest and most talented do achieve positions of power. Harvard, Yale and Stanford do accept the “best and the brightest” alongside the merely good who come from money. But that was always the case when the WASPs ruled as well. Even in the days of European royalty, even in the bad old slave days of ancient Rome, if you had a near photographic memory, could compute large sums instantaneously or displayed perfect musical pitch, the rich folk were going to find you and make sure you could help them run their society. That hasn’t changed one bit. But despite what you may have heard from your parents or may think about your own children, those extremely talented people are so rare as to be statistically irrelevant when discussing whether or not we have a meritocracy.

What has changed is that it’s not just the white males anymore in the positions of power. An increasingly ethnically and racially diverse ruling elite has emerged, but it is an elite based more on money and connections than on true merit.

Epstein’s argument fails both in its logic and in its details. He calls Laura Bush a “middle class librarian.” It’s true that Laura’s profession was/is librarian, but I would not call her background middle class by any means: Her father was a home builder and successful real estate developer, two professions that lead to both wealth and power in the local economy. In his latest book, The Myth of Liberal Ascendancy, William Domhoff documents the enormous political influence that real estate interests have had on local and regional politics. By the way, Laura’s maiden name was Welch and her mom’s was Hawkins. She was raised a Methodist. Sounds like an upper class (for Midland, Texas society) WASP to me.

Later in the article, Epstein claims that the two strongest presidents since 1950 are Truman, who never attended college, and Reagan, who went to the antithesis of Ivy—a small Christian college. Epstein states Truman and Reagan’s greatness matter-of-factly as if it’s common knowledge and readily accepted by most people. In the case of Reagan, believing that he was a great or a detestably awful president is a litmus test for political views: right-wingers and right-wingers-in-centrist-clothing rate him highly; progressives rate him as one of our worst presidents. Now most people do rate Truman highly, but I personally consider him the worst president in American history by virtue of his having approved dropping two atom bombs on civilian targets. The larger point is that Epstein pretends that his own opinion is evidence that the meritocracy doesn’t work as well as the old WASPocracy did.

Articulate and well-bred conservatives railing against the so-called meritocracy reflect the broader anti-intellectualism that the ruling elite imposes on American society via the mass media. But whereas the reason for the anti-intellectual message in movies and ads remains hidden, it stands out crystal clear in arguments such as Epstein’s: It’s about power. In a true meritocracy, the most talented are in charge in whatever the field, not the rich and connected. In even the least complex of agrarian societies, talent manifests itself as knowledge and the ability to accumulate and use knowledge. Conservatives represent traditional society in which the wealthy rule. They fear a society in which the most capable for each job gets that job as opposed to keeping themselves and their offspring in the best and best-paying positions. So when the wealthy aren’t busy buying up the best and the brightest to do their bidding and justify their hold on power, they try to disparage intellectual activity.

Thumbs up to A&E for suspending “Duck Dynasty” celebrity, thumbs down for ever creating the show

When Sean Hannity, Bobby Jindal, Sarah Palin and other right-wingers come out in favor of freedom of speech, you know that someone has just said something false, stupid and insulting about a group routinely demonized by ultra-conservatives.

In this case, these Christian right illuminati are standing up for a bearded and backward backwoodsman’s right to slur gays.

The latest right-wing freedom fighter to speak his mind and stand up for religious values is Phil Robertson, one of the stars of “Duck Dynasty,” a reality show about a family business that sells duck calls and other duck hunting paraphernalia in the swampy backwoods of Louisiana. The Robertson family thrives by displaying rural values and wearing their fundamentalist Christianity on both their overalls and their long, untamed beards.

Robertson’s outrageous views emerged in answer to this question by a GQ interviewer, “What, in your mind, is sinful?” Robertson’s response was not that growing inequality was sinful, not that chemical warfare was sinful, not that cutting food stamp benefits for children was sinful, not that herding people into camps was sinful, not that torture or bombing civilians were sinful, not that paying immigrants less than minimum wage was sinful, not that polluting our atmosphere and waterways was sinful.

No, in answering this softball of a question, none of these horrible sins came top of mind to Robertson. What did was male homosexuality: “Start with homosexual behavior and just morph out from there. Bestiality, sleeping around with this woman and that woman and that woman and those men…It seems like, to me, a vagina—as a man—would be more desirable than a man’s anus. That’s just me. I’m just thinking: There’s more there! She’s got more to offer. I mean, come on, dudes! You know what I’m saying? But hey, sin: It’s not logical, my man. It’s just not logical.” Note that women never enter the picture except as preferred receptacle—it’s all about his antipathy to male homosexuality.

There can be no doubt that Robertson has the right to speak these ugly opinions. But shame on the public figures who have decided to select this particular instance to defend the right to free speech. I suppose it’s easier for them to defend his right to speak than to defend his views, which they may or may not believe but certainly want certain voters to think they believe.

And there can be no doubt that A&E had the right to suspend Robertson. I’m delighted they did, but whether they should have or not is not that interesting a question, certainly not as interesting as considering whether A&E ever should have run the series in the first place. “Duck Dynasty” is the most popular reality TV show ever on cable TV. As in all reality TV, the storylines are scripted, so what we’re seeing is not reality but a kind of cheaply-produced semi-fiction shot in a quasi-documentary style that lends a mantle of credibility to the insinuation that we are viewing reality. The great invention of reality TV is the divorcing of fame from any kind of standard: these people are not actors, sports stars, born wealthy or royalty. They haven’t even slept with the famous, as the Kardashians have. Like the Jersey wives, the Robertsons represent the purest form of celebrity—famous for nothing more than being famous.

A&E and the show’s producers have always sanitized and romanticized the harsh aspects of the Robertsons’ lives, even to the point of bleeping out “Jesus” from the speech of the bearded boys. Suspending Robertson is part of the continuing strategy of smoothing down the rough spots of rural American life. Besides, the network had no choice but to act quickly or risk a boycott of the entire network by sponsors and gay rights groups.

Moreover, A&E had everything to gain and nothing to lose by suspending Robertson. Those offended by Robertson’s views either will never tune in or stopped watching a long time ago, but perhaps there are still viewers out there who haven’t watched yet and share Phil Robertson’s views. After all, even the premiere of the fourth season—the most watched nonfiction program in cable history—drew only 11.8 million viewers. That’s a drop in the bucket of the 45% of the population who believe homosexuality is a sin (or so reports a recent Pew study).

(Having lived only within the borders of large cities for more than 40 years I find these numbers shocking, but in many ways, we have two societies now: blue and red, urban and suburban, multicultural and religious fundamentalist. I’m a resident of the blue, urban, multicultural world and tend to interact only with others who share my views on social and political issues.)

The gay-bashing controversy also serves as this week’s “Duck Dynasty” media story. Only the Kardashians seem to generate more stories than the Robertsons do.

I won’t blame A&E for developing shows for the rural market, but I do blame it for developing these particular shows. Reality TV is the end game of the Warhol aesthetic—the apotheosis of branding elements into human deities called celebrities through a medium that has ostensibly avoided the distortions created by the artist’s mediation. But the avoidance is only apparent, since it is not reality we see but an imitation of reality made to seem real by the suppression of most artistic craft.

Suburbanites, denizens of new cities, rural hunters—every major demographic group gets its own lineup of reality TV in post-modern America. In all cases, the producers varnish reality and give it a dramatic shape that at the end of the day feeds on commercial activity and conspicuous consumption. You wouldn’t catch Snooki squatting in a duck blind, nor Phil Robertson clubbing in South Beach. But they represent the same value of undeserved celebrity selling mindless consumption.

NY Times uses anecdotal thinking to create feeling food stamp fraud is rampant in article saying it’s minor

News features often use examples or anecdotes to highlight a trend that is the subject of the story. Sometimes all the writer has as proof of his or her thesis are the examples, so the article strings together a couple of anecdotes to demonstrate that a new trend is unfolding, such as people eating strange foods in expensive restaurants or craving limited edition cosmetics. Quite often, though, the anecdote depicts the reality of a real trend; for example, more families in homeless shelters or the problems signing up for health insurance on an exchange.

In the case of either a real or false trend or idea, it is common for the article to start with an anecdote that shows us the trend or idea at work. Instead of saying, “people are eating ants,” we get a description of a dish or a pleased gourmand crunching away. Beginning inside an anecdote brings the story alive and makes the reader react emotionally before the mind engages with the facts of the matter. An early advocate of starting inside a case history instead of with a statement of thesis was the Roman poet Horace, who suggested in Ars Poetica about two thousand years ago that the writer “begin in the middle” (in medias res). Horace, like most great writers, understood that showing something is much more powerful than merely telling people about it.

How strange, then, that the New York Times would publish an article that reports a fact, but only provides case histories that go counter to that fact. Moreover, the article begins with a case history counter to the facts under report, which means that by the time most readers get to the facts, the anecdote has convinced them of the very opposite of what the facts prove.

What isn’t surprising is that the article disproves a long-held right-wing belief and that the anecdotes in the article support the disproved belief.

The issue is food stamp fraud, people illegally using food stamps to buy liquor, gasoline or other forbidden items. In “Food Stamp Fraud, Rare but Troubling,” Kim Severson correctly reports that food stamp fraud is practically non-existent, a mere 1.3% of the total of food stamp aid given, down from more than 4% in the 1990s before debit cards replaced paper food stamps. Compare this paltry 1.3% to 10%, the current figure for Medicare and Medicaid fraud (typically by physicians, as Severson’s article does not mention). Or compare the $3 billion lost to food stamp fraud, overpayments and government audits combined to the estimated $100 billion a year that insurance fraud costs insurers and their customers.

I’m not denying that the anecdotes occurred. Certainly, a relatively small number of people try to defraud the government by misusing food stamps, but the statistics suggest that the problem is practically non-existent and not worth mentioning or worrying about. The demagogues stating that food stamp fraud is an enormous problem are trying to promote antipathy toward recipients of social benefits, the so-called “welfare queens” accusation. The facts of the article demolish this view as it concerns food stamps.

We can only speculate on how this story developed. Did the editor assign the writer an article meant to support the right-wing view that food stamp fraud is rampant, a reason conservatives want to cut the program (and let hundreds of thousands face food insecurity), only for the facts to turn the article a different way, leaving the writer with nothing but anecdotes to support the editor’s goal? Or was it the opposite: a conservative reporter trying to put a right-wing face on the facts through anecdotes that run counter to those facts?

Or did the writer pit anecdotes against facts as a way to present a “fair and balanced” story? If so, the writer forgot that anecdotes are as much like facts as apples are like oranges.

Unless of course, the writer has read Daniel Kahneman’s Thinking, Fast and Slow, in which the eminent social scientist uses numerous controlled experiments to prove that people will believe a single anecdote that conforms to their ideas over multiple facts that disprove them. In other words, the writer could have cleverly constructed a story to influence the reader to believe the very thing that the article disproves, by providing vivid anecdotes that run counter to the underlying facts. The facts say, “No food stamp fraud,” but the richly detailed case histories may convince us otherwise.

“Food Stamp Fraud, Rare but Troubling” is thus a masterpiece in deniable deception. The article claims to prove one thing—and it does, except for those heartstrings plucked so expertly by the anecdotes that sing to right-wingers that they were right all along.

Detroit’s bankruptcy latest attempt of wealthy to steal from poor

Kudos to Ross Eisenbrey of the Economic Policy Institute for rejecting the notion that overly generous pensions led to Detroit’s bankruptcy.

Instead of pensions, Eisenbrey cites several reasons for Detroit’s financial problems:

  • A depleted revenue stream as wealthy people moved to nearby municipalities, taking advantage of the city as an economic driver while destroying the city’s tax base.
  • Bad financial deals with banks, including interest rate swaps: contracts in which two parties agree to exchange interest-rate cash flows on a specified notional amount, trading a fixed rate for a floating rate, a floating rate for a fixed one, or one floating rate for another (see the sketch after this list). Each side is betting that a certain set of economic conditions will prevail, so that it comes out ahead on the swap. As Eisenbrey details, these swaps were profitable for Wall Street banks and exposed Detroit to financial risks that ended up costing the city $600 million in additional interest.
  • Corporate subsidies and tax loopholes for businesses that did not create enough jobs to justify these gifts to private sector companies.
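To make the swap mechanics concrete, here is a minimal sketch in Python from the fixed payer’s side. Every number in it (the notional, the fixed rate, the path of the floating benchmark) is a hypothetical illustration, not a term from Detroit’s actual contracts:

```python
# Minimal fixed-for-floating interest rate swap, seen from the fixed payer's side.
# All figures are hypothetical illustrations, not Detroit's actual contract terms.

NOTIONAL = 100_000_000   # notional amount: never exchanged, only used to size payments
FIXED_RATE = 0.05        # the fixed payer owes 5% of the notional each year

# Hypothetical path of the floating benchmark rate. Paying fixed is a bet that
# floating rates will stay high; after 2008 they fell, so cities locked into
# fixed payments kept paying full freight while receiving next to nothing.
floating_rates = [0.050, 0.035, 0.020, 0.010, 0.005]

for year, floating in enumerate(floating_rates, start=1):
    pays = NOTIONAL * FIXED_RATE    # fixed leg owed
    receives = NOTIONAL * floating  # floating leg received
    net = receives - pays           # negative means the fixed payer loses this year
    print(f"Year {year}: pay ${pays:,.0f}, receive ${receives:,.0f}, net ${net:,.0f}")
```

Run over this rate path, the fixed payer loses more each year as the floating rate drops, which is the shape of the bet that went against Detroit.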

Unmentioned by Eisenbrey is the fact that all three of these forces represent the same theme: rich folk squeezing a city dry of its wealth and then leaving it to flounder. Wealthy suburbanites benefited from living near Detroit without paying taxes to the city. Wealthy banks essentially benefited from selling Detroit’s politicians a bill of goods. Wealthy company owners lowered their operating costs without giving back enough in new jobs.

As Eisenbrey advocates, the burden of solving Detroit’s financial problems should not fall on the Motor City’s middle class and working class people who have worked long years for pensions that they negotiated and upon which they depend to survive. Funny, isn’t it: while it’s not okay to break the financing contracts with the banks, politicians think nothing of breaking the contracts the city signed years ago with its workers. Eisenbrey wants Detroit to say “enough is enough” to the banks and walk away from the onerous interest rate swaps and other financing gimmicks. The banks have made enough money on the Motor City already.

Eisenbrey also wants to end the loopholes and special deals to corporations and have the state of Michigan chip in more money to pay Detroit’s bills. I would add a special regional tax based on income (or, as in France, on wealth) that the state would collect for the city from Bloomfield Hills, Grosse Pointe, Birmingham, Franklin and the other nearby and distant Detroit suburbs.

In his very perceptive article, Eisenbrey also suggests that Detroit’s emergency manager Kevyn Orr, Michigan Governor Rick Snyder and other civic leaders are mischaracterizing Detroit’s problems by focusing on the $18 billion in long-term debt the city owes. It’s another example of right-wing politicians defining the issue in terms that benefit their constituencies. Let’s set aside the possibility that $18 billion may be a grossly overstated estimate. Eisenbrey correctly reasons that municipalities cannot liquidate the way private companies can, so the size of the debt is not the issue. All that matters is the cash flow—how much money Detroit needs to pay its bills each month. Right now Detroit faces a $198 million cash flow shortage.

Cash flow is easy for municipalities to deal with, at least in theory—raise taxes or lower costs. The city has already cut costs not only to the bone, but to the marrow. Now it’s time to raise taxes, but on a regional level. For too long, wealthy suburbanites have sucked Detroit dry. It’s now time for them to give something back.

But that’s not going to happen. More likely is that Detroit will become a model for the latest way for the rich to continue their 30+ year war on the rest of us: declare a city in financial trouble and use that excuse to gut pensions and workers’ salaries, thus putting even more downward pressure on the wages of private sector workers and ensuring the continuation of the low-tax regime that has a financial chokehold on most families.

Why did the FDA make its new antibiotic restrictions voluntary instead of mandatory?

Were you as delighted as I was when I read the headline that the Food and Drug Administration has a new policy prohibiting the use of antibiotics to speed the growth of pigs and other animals cultivated for human consumption? Trace antibiotics in the animals we eat have contributed to the increasing resistance of bacteria to the antibiotics we use to treat infections. The new policy forbids use of antibiotics as growth stimulants and also requires farmers to get prescriptions each and every time they want to treat a sick animal with antibiotics.

On the surface it looks like a great victory for every American because it is going to make all of us safer and less likely to die of an illness. The New York Times version of the announcement points out that two million people fall sick and 23,000 die every year from antibiotic-resistant infections. CNN reports that in April the FDA said that 81% of all the raw ground turkey the agency tested was contaminated with antibiotic-resistant bacteria. Currently every hospital patient encounters the danger of opportunistic infections that don’t respond to antibiotics.

Every one of the 15 news reports I read hails it as big news: “major new policy,” “broad measures” and “sweeping plan” are some of the descriptions of the FDA action.

But before we break out the champagne, let’s read the fine print: It’s all voluntary.

Virtually all the news stories bury this fact or downplay it. For example, the Times says that, based on comments made during the discussion period that precedes all federal regulations, rules and advisories, the FDA was confident that drug companies would comply (which I suppose means refusing to sell antibiotics to farmers without prescriptions for specific animals).

Then there’s the matter of a three-year phase-in period. No one has bothered to explain why anyone would need three years to implement this plan: just stop doing it, right away.

As some reports have noted, health officials have warned about the overuse of antibiotics leading to increased resistance since the 1970’s. In other words after 40 years of warnings, studies, discussions and negotiations regarding a major public health challenge, the best we can come up with is a voluntary plan.

Have no doubts about it: Some drug company somewhere in the world will continue to sell this stuff to farmers and farmers will still use it.

If the federal government were really serious about lowering the amount of antibiotics humans ingest in their food and water, it would have set mandatory regulations that took effect within 30 days. But such an action would take a cash stream from drug manufacturers and raise the cost of raising domesticated animals. Farmers and meat processors would make less money and consumers would likely pay a little more for their ground round and chicken nuggets. It’s worth it, though, as ending this use of antibiotics will make everyone in the United States (and the world) safer from the threat of contracting a life-threatening infection every time they have an operation and safer from the risk of an epidemic of virulent and untreatable infections.

Industry pressures most assuredly caused the wishy-washy action of asking drug makers to resist the urge to make more money. The news behind the news, then, is that once again our government has compromised the health, safety and economic well-being of its citizens to enable a small group of companies to continue making money. The additional illnesses and deaths are paid for by all of society, while costs are held down and profits propped up for a small segment of it. It’s another example of the shifting of costs from companies to society at large, and it demonstrates once again that unfettered free market capitalism does not lead to the greatest good for the most people.

Serious economists must be laughing at Wall Street Journal attempt to use Laffer Curve to support tax cuts

Wall Street Journal editorials often twist facts, leave out key facts, make incorrect inferences from facts or just plain get the facts wrong.  But the editorial titled “Britain’s Laffer Curve” shows that sometimes the editorial writers simply have no idea what the facts are saying.

In this editorial, the Journal wants to show that cutting taxes leads to increased tax revenues, and it invokes the notorious Laffer Curve to do so. Laffer Curve theory has been around for ages but is associated with right-wing economist Arthur Laffer, who supposedly drew the curve on a paper cocktail napkin for some government luminaries during the 1970’s. When I interviewed Laffer in 1981 for a television news report, he denied the myth.

What the Laffer Curve postulates is that as taxes are raised, less money circulates in the economy and rich folk are less likely to invest to make more money, since they are keeping so little of it. Research suggests that neither of these statements is true, but by assuming that they are, one could imagine a situation in which taxes are so high that lowering them raises the amount of revenue the government collects. Laffer Curve theory proposes that there is a theoretical point at which the tax rate produces the most revenue possible from an economy. It also predicts that there are occasions when raising taxes will indeed raise significantly more revenue and lowering taxes will indeed lower revenues—it depends on whether we are on the upward or downward slope of the imaginary curve.

President Ronald Reagan and a slew of right-wingers since him have used the theory of the Laffer Curve to justify cutting taxes. They assume that no matter what the conditions are, we are always on the side of the imaginary Laffer Curve on which a cut in taxes always leads to an increase in revenues.

The Journal of course takes it for granted that taxes are always too high, especially on businesses, even though they are currently still much lower than during most of the last hundred years and certainly far lower than when Laffer supposedly put Montblanc to napkin.

The editorial in question proudly states that since Great Britain cut its corporate tax rate from 28% to 22% in 2010, the British Treasury has gained 45 to 60 cents in additional taxes for every dollar of revenue lost by cutting the tax rate. In other words, economic growth (or more people paying all their taxes) compensated for 45%-60% of the revenues lost through the tax cut.

Now that may or may not prove the existence of a Laffer Curve that can describe the relationship between tax rates and taxes collected. But it does prove that you cannot use Laffer Curve economics to justify this tax break. Even after the Laffer Curve effects, the British government is still 40 to 55 cents in the hole for every dollar of revenue given up, meaning it must find other sources of revenue or cut government spending by that amount.
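The arithmetic is worth spelling out. A minimal sketch in Python, using only the editorial’s own recovery figures (the one dollar of static loss is just a normalization):

```python
# The editorial's own numbers: for every $1.00 of revenue given up by cutting
# the rate, growth claws back only $0.45 to $0.60. Netting the two shows the
# cut never pays for itself.
static_loss = 1.00                # revenue lost per dollar of rate cut, before feedback
for recovery in (0.45, 0.60):     # the Journal's claimed Laffer-effect recapture
    shortfall = static_loss - recovery
    print(f"Recovery of {recovery:.0%} still leaves a {shortfall:.0%} net revenue loss")
```

Either way the Treasury ends up short: the Laffer effect softens the loss but never reverses it.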

And where did the shortfall go? To businesses and their owners, AKA rich folk, who history suggests will invest their additional wealth in the secondary stock market and luxury goods, neither of which really helps the economy to grow.

The Journal wants us to believe that the experience of Britain should make us want to cut taxes to raise government revenues. But what the example shows is that cutting taxes leads to a loss of government revenue and a net transfer of an enormous amount of wealth from the poor and middle class to the wealthy. It’s as if the editorial writers have looked at a blue sky and declared, “Look at that blue sky. It proves that the sky is always yellow.” They see the facts, but that doesn’t dissuade them from believing what they want to believe is true.

Real economists the world over must be laughing at the Journal and its editorial board’s gross misinterpretation of the facts. Except, of course, those economists in the pay of right-wing think tanks.

Increase in adults reading juvenile fiction another sign of infantilization of Americans

The title of Alexandra Alter’s Wall Street Journal article on adults reading fiction written for middle-schoolers describes the situation perfectly. “See Grown-ups Read. Read, Grown-ups, Read” suggests not middle school, but an elementary school reading level. Alter’s story describes one of the many ways that our mass culture is infantilizing adults, turning them into oversized children.

Alter finds several reasons why adults like reading fiction written at the reading, intellectual and maturity level of 12- to 15-year-olds:

  1. The Harry Potter series of books continues to influence reading choices.
  2. There is less of a difference in tastes between generations today than in the past.
  3. There is less of a stigma in adults reading children’s books for pleasure.
  4. The quality of literature for middle-school children has increased and the themes have become more mature.

The first three reasons are euphemistic ways to say that many adults now maintain the interests of childhood or pursue childhood interests. Of course, Alter avoids the negative judgment implied—and meant—by my expression, “the infantilization of adults.”  As one of the several experts Alter quotes puts it, “It used to be kids who would emulate what their parents were reading, and now it’s the reverse.”

The fourth reason is worth analyzing further. Let’s accept the premise that the quality of the writing in books for the middle school audience has improved and the themes and situations are more complex than in the past. The easy rhetorical response is that these books are still for children and not for adults. There is no stream of consciousness writing, no shifting of perspectives without signaling the shift (known as free indirect discourse), no long elegant Proustian sentences, and no modernistic imagery. Even today’s new and improved middle school fiction falls short of the best of fictional writing for adults. In addition, the themes covered are those of interest to the middle schooler and thus inherently less complicated than what should be of interest to adults.

Alter peppers the article with quotes from experts, but all of them are authors, editors or publishers of juvenile fiction. No place does she have room for the views of a sociologist, psychologist or philosopher, who might fear, as I do, that adults are losing their capacity for complex thought by reverting to their childhood joys and activities, be it juvenile fiction, theme parks or shoot-shoot-bang-bang video games. In fine Wall Street Journal free-market tradition, the article is about a growing market. In the Journal’s view, all free markets are good and the results of free market growth are always good. The editorial slant of the newspaper reflects a modern version of Voltaire’s buffoonish professor, Dr. Pangloss. He’s the one who keeps repeating in Candide that everything is for the best in the best of all possible worlds. For the Journal, everything is for the best when the free market is operating.

Besides, infantilization of adults is good for Journal advertisers and the American consumer economy in general. Infantilization makes people less able to understand the fine print, less able to understand if what is for sale is really of value. It leaves people less in control of their emotions and more insecure and susceptible to manipulation, just as children and teens are when compared to mature adults. In short, it’s easier to sell products and services—especially useless ones—to the less mature mind.


While celebrating the life of Nelson Mandela, let’s not forget that segregation still exists

Segregation is the separation or isolation of individuals or groups from a larger group or from society. Segregation has taken many forms throughout history: refugee camps, work camps, concentration camps, castes, class systems, quarantines, slave quarters, homelands, ghettos, pales, redlined districts, housing development covenants, mass transit seating and classrooms, to name some of the more prevalent means of denying people the right to enter or leave.

Except for medical quarantines, not one of the myriad means to segregate is fair, moral, ethical, humanistic, righteous or tolerable to the fair, moral, ethical, humanistic, righteous and tolerant person. While it enriches a pluralistic society when individuals of a group—say Jews or Pakistanis—move to the same neighborhood and open specialty stores catering to their cultural predilections, restricting these or other groups to certain areas undermines any society or nation. The same is true if a group tries to keep others out, either everyone or another specific group. A free society demands free access for everyone to all areas that offer access to anyone, except of course for private property not engaged in civic affairs, commerce or other public ends.

Nelson Mandela defeated a particularly pernicious form of segregation called apartheid.  He resolutely withstood years of jail to lead a movement that eventually negotiated with the defenders of apartheid and defeated them in a democratic election. He fulfilled the vision of Gandhi, the dream of Martin Luther King.  That he began his public career supporting violence only makes more poignant the story of his achieving the good he sought peacefully. It also demonstrates the caliber of the man—always growing, always improving, always questioning.

In celebrating Mandela’s long life, however, let us not forget the many forms of segregation that still exist today throughout the world, including the abominable irony of an apartheid-like system in a nation controlled by a national group that suffered one of the most horrifying examples of segregation in recorded history.

In the United States, our most harmful form of segregation is the separation of rich from poor in access to education. Educational segregation, enforced by expensive private schools, private lessons and gerrymandered public school districts, has unleveled the playing field, helping to create what is the least socially mobile country in the western world. In the United States, it is harder for people to leave the lowest fifth in income and wealth, and easier for someone in the highest fifth to remain there, than in any other industrialized country. It makes a mockery of our democratic ideals for it to be so hard to climb the economic ladder. Education has usually been the way that the poor have become rich in open societies; thus the connection between educational segregation and growing inequality of wealth and opportunity.

But educational segregation is merely one form of this pox on society that we need to address. The situation in Israel and the occupied lands is morally intolerable.  The Wikipedia article titled “Racial Segregation” details legal and de facto segregation in Bahrain, Canada, Fiji, India, Malaysia, Mauritania, the United Kingdom and Yemen. This list doesn’t include prisoner and refugee camps.

The mass media is already trying to homogenize Nelson Mandela, as it has successfully done for Martin Luther King, turning the day of remembering King’s life into a general day of service to the community, which whitewashes the fact that he dedicated his life to one particular kind of service: peaceful disobedience to oppose racial discrimination. In the same way, the mass media is already focusing on Mandela the peaceful fighter for democratic elections and freedom. But freedom for South African Blacks involved much more than getting the right to vote. Mandela’s fight was to create a pluralistic post-racial society of equal access, equal treatment, equal rights and equal opportunity.

The only way to appropriately honor Nelson Mandela is to continue the fight—the peaceful fight—against segregation of every kind, wherever it is.

What current media fascination is most like AIDS news coverage in the 1990’s? Hint: Lots of K’s involved

To those old enough to remember the 1990’s, the phrase “AIDS story of  the day” will resonate, because in fact there was a new story about some aspect of AIDS virtually every day of the week in the mass media: research into its origin or cure, its spread, measures to prevent it, art and literature about AIDS or by artists with AIDS, changing cultural patterns, types of condoms, famous people outed because they contracted AIDS, protests by AIDS victims, the impact of AIDS on communities and cities, the spread to the heterosexual community, vignettes of sufferers and their families, the overcoming of prejudices, funding challenges, studies and reports from other countries. Every day it was something new as reporters, magazines, newspapers and TV programs tried to top each other with the new or unusual related to this dreaded plague.

That there was a constant onslaught of news stories over pretty much an entire decade was understandable. It was a worldwide epidemic of a horrible disease of then-unknown cause, one related to sexual practices and intravenous drug use. The story of the world’s reaction to AIDS—finding its cause and then the means to ameliorate if not prevent it, while gaining a new respect and tolerance for its victims—represents humanity at its best.

How ironic then that the contemporary news phenomenon that most resembles the AIDS story in its longevity and number of story angles is not a monumental medical epic involving millions, but the private bantering and peccadilloes of a family of rich but garish narcissists.

Only those who ignore the mass media don’t know to whom I’m referring: It’s the Kardashians.

Every day, a story about one or more Kardashians appears on the Yahoo! home page, Google News, the news pages of popular email portals such as Verizon’s and Time Warner’s, many of our finest tabloid newspapers like The Daily News, and gossip-based television shows like Entertainment Tonight and The Wendy Williams Show. More staid and serious news media such as the Wall Street Journal and New York Times cover the family with some frequency.

Their loves, flirtations and breakups, frustrations, life events and parties, purchases, vacations, clothes, cars and other toys, family relationships, faux pas and ignorant statements, rumors, popularity and the very fact that they are a phenomenon are all grist for the Kardashian mill. Even the Kennedy family at its height did not command so much constant attention, partially because they flourished before the age of 24/7 Internet and television media.

And why so much news coverage for a pack of uneducated conspicuous consumers of luxury products?

  1. Their parents are rich.
  2. They tend to couple with famous people, mostly second-rate professional athletes.
  3. They have starred in a succession of reality TV programs in which they inelegantly portray garishly ostentatious lives of conspicuous consumption and family bickering.

In short they are pure celebrities, famous for being famous, or more bluntly, famous for sleeping with famous people.  The fact that much of the detail of their lives and adventures may be created by a stable of reality show and public relations writers matters little. The post-modern blending of reality and fantasy is accepted as gospel by so much of the news media that the Kardashian universe has become the fulfillment of the Karl Rove dream of replacing a reality-based world with an ideologically determined one.

The Kardashian ideology, embraced by the show’s sponsors and the owners of the many media outlets that cover their antics, is worship of the commercial transaction. Peruse the stories (but not too many) and you will find that virtually all of them involve buying or giving/taking something someone has bought. The Kardashians’ many complex but frangible relations all boil down to shopping: what Lamar got Khloé, where Kourtney shopped, what designer jewelry Kris was wearing.

Every day the sheer volume of Kardashian stories overwhelms coverage of more important matters. Just now, for example, I found 69.7 million stories about the Kardashians in Google News, but only 140,000 on the car bomb attack in Yemen and a mere 6,000 about the Illinois pension overhaul. Several months ago I reported on a study by Stanford scientists demonstrating how wind power could provide enough electricity for the entire world; it garnered exactly one news story throughout the Googlesphere.

Even the most ostensibly high-minded mainstream news media are prisoners of the need to make money by appealing to advertisers. And advertisers like stories that exhort readers to buy expensive toys. And even more do they like stories which advocate the idea that every emotion and human expression must manifest itself in a commercial transaction—buying something.  And most of all they like stories which glorify the shopper as the person to be most admired and honored.