Thumbs up to A&E for suspending “Duck Dynasty” celebrity, thumbs down for ever creating the show

When Sean Hannity, Bobby Jindal, Sarah Palin and other right-wingers come out in favor of freedom of speech, you know that someone has just said something false, stupid and insulting about a group routinely demonized by ultra-conservatives.

In this case, these Christian right illuminati are standing up for a bearded and backward backwoodsman’s right to slur gays.

The latest right-wing freedom fighter to speak his mind and stand up for religious values is Phil Robertson, one of the stars of “Duck Dynasty,” a reality show about a family business that sells duck calls and other duck hunting paraphernalia in the swampy backwoods of Louisiana. The Robertson family thrives by displaying rural values and wearing their fundamentalist Christianity on both their overalls and their long, untamed beards.

Robertson’s outrageous views emerged in answer to this question by a GQ interviewer, “What, in your mind, is sinful?” Robertson’s response was not that growing inequality was sinful, not that chemical warfare was sinful, not that cutting food stamp benefits for children was sinful, not that herding people into camps was sinful, not that torture or bombing civilians were sinful, not that paying immigrants less than minimum wage was sinful, not that polluting our atmosphere and waterways was sinful.

No, in answering this softball of a question, none of these horrible sins came top of mind to Robertson. What did was male homosexuality: “Start with homosexual behavior and just morph out from there. Bestiality, sleeping around with this woman and that woman and that woman and those men…It seems like, to me, a vagina—as a man—would be more desirable than a man’s anus. That’s just me. I’m just thinking: There’s more there! She’s got more to offer. I mean, come on, dudes! You know what I’m saying? But hey, sin: It’s not logical, my man. It’s just not logical.” Note that women never enter the picture except as preferred receptacle—it’s all about his antipathy to male homosexuality.

There can be no doubt that Robertson has the right to speak these ugly opinions. But shame on the public figures who have decided to select this particular instance to defend the right to free speech. I suppose it’s easier for them to defend his right to speak than to defend his views, which they may or may not believe but certainly want certain voters to think they believe.

And there can be no doubt that A&E had the right to suspend Robertson. I’m delighted they did, but whether they should have or not is not that interesting a question, certainly not as interesting as whether A&E should ever have run the series in the first place. “Duck Dynasty” is the most popular reality show ever on cable TV. As in all reality TV, the storylines are scripted, so what we’re seeing is not reality, but a kind of cheaply produced semi-fiction shot in a quasi-documentary style that lends a mantle of credibility to its insinuation that we are viewing reality. The great invention of reality TV is the divorcing of fame from any kind of standard: these people are not actors, sports stars, born wealthy or royalty. They haven’t even slept with the famous, as the Kardashians have. Like the Jersey wives, the Robertsons represent the purest form of celebrity—famous for nothing more than being famous.

A&E and the show’s producers have always sanitized and romanticized the harsher aspects of the Robertsons’ lives, even to the point of bleeping out “Jesus” from the speech of the bearded boys. Suspending Robertson is part of the continuing strategy of sanding down the rough spots of rural American life. Besides, the network had no choice but to act quickly or risk a boycott of the entire network by sponsors and gay rights groups.

Moreover, A&E had everything to gain and nothing to lose by suspending Robertson. Those offended by Robertson’s views will never tune in or stopped watching long ago, but perhaps there are still viewers out there who haven’t watched yet and share Phil Robertson’s views. After all, even the premiere of the fourth season—the most watched nonfiction program in cable history—drew only 11.8 million viewers. That’s a drop in the bucket compared to the 45% of the population who believe homosexuality is a sin (or so reports a recent Pew study).
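For the numerically inclined, here is the back-of-envelope arithmetic; the 315 million population figure is my own round assumption, not a number from the Pew study:

```python
# Rough arithmetic comparing the premiere audience to Pew's 45%.
# The 315 million U.S. population is an assumed round figure.
population = 315_000_000
sin_believers = 0.45 * population       # ~142 million per the Pew finding
premiere_viewers = 11_800_000           # the season-four premiere audience
print(f"{premiere_viewers / sin_believers:.0%} of the potential audience")  # ~8%
```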

(Having lived only within the borders of large cities for more than 40 years, I find these numbers shocking, but in many ways, we have two societies now: blue and red, urban and suburban, multicultural and religious fundamentalist. I’m a resident of the blue, urban, multicultural world and tend to interact only with others who share my views on social and political issues.)

The gay-bashing controversy also serves as this week’s “Duck Dynasty” media story. Only the Kardashians seem to get more stories about them than the Robertsons.

I won’t blame A&E for developing shows for the rural market, but I do blame it for developing these particular shows. Reality TV is the end game of the Warhol aesthetic—the apotheosis of branding elements into human deities called celebrities through a medium that has ostensibly avoided the distortions created by the artist’s mediation. But the avoidance is only apparent, since it is not reality we see but an imitation of reality made to seem real by the suppression of most artistic craft.

Suburbanites, denizens of new cities, rural hunters—every major demographic group gets its own lineup of reality TV in post-modern America. In all cases, the producers varnish reality and give it a dramatic shape that at the end of the day feeds on commercial activity and conspicuous consumption. You wouldn’t catch Snooki squatting in a duck blind, nor Phil Robertson clubbing in South Beach. But they represent the same value of undeserved celebrity selling mindless consumption.

NY Times uses anecdotes to create the feeling that food stamp fraud is rampant in an article saying it’s minor

News features often use examples or anecdotes to highlight the trend that is the subject of the story. Sometimes all the writer has as proof of his or her thesis are the examples, so the article strings together a couple of anecdotes to demonstrate that a new trend is unfolding, such as people eating strange foods in expensive restaurants or craving limited edition cosmetics. Quite often, though, the anecdote depicts a real trend; for example, more families in homeless shelters or the problems signing up for health insurance on an exchange.

In the case of either a real or a false trend, it is common for the article to start with an anecdote that shows us the trend or idea at work. Instead of saying, “people are eating ants,” we get a description of a dish or a pleased gourmand crunching away. Beginning inside an anecdote brings the story alive and makes the reader react emotionally before the mind engages with the facts of the matter. An early advocate of starting inside a case history instead of with a statement of thesis was the Roman poet Horace, who suggested in his Ars Poetica about two thousand years ago that the writer “begin in the middle” (in medias res). Horace, like most great writers, understood that showing something is much more powerful than merely telling people about it.

How strange, then, that the New York Times would publish an article that reports a fact, but only provides case histories that run counter to that fact. Moreover, the article begins with a case history counter to the facts being reported, which means that by the time most readers get to the facts, the anecdote has convinced them of the very opposite of what the facts prove.

What isn’t surprising is that the article disproves a long-held right-wing belief and that the anecdotes in the article support the disproved belief.

The issue is food stamp fraud: people illegally using food stamps to buy liquor, gasoline or other forbidden items. In “Food Stamp Fraud, Rare but Troubling,” Kim Severson correctly reports that food stamp fraud is practically non-existent: a mere 1.3% of total food stamp aid, down from more than 4% in the 1990s, before debit cards replaced paper food stamps. Compare this paltry 1.3% to 10%, the current figure for Medicare and Medicaid fraud (typically committed by physicians, a fact Severson’s article does not mention). Or compare the $3 billion that government audits attribute to food stamp fraud and overpayments combined to the estimated $100 billion a year that insurance fraud costs insurers and their customers.
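To put the article’s own numbers side by side, a quick sketch; all figures come from the paragraph above:

```python
# Comparing the fraud figures cited in Severson's article.
food_stamp_fraud_rate = 0.013    # 1.3% of food stamp aid, down from >4% in the 1990s
medicare_medicaid_rate = 0.10    # the comparable Medicare/Medicaid figure
food_stamp_dollars = 3e9         # fraud and overpayments combined, per audits
insurance_fraud_dollars = 100e9  # estimated annual cost of insurance fraud
print(medicare_medicaid_rate / food_stamp_fraud_rate)  # ~7.7x the fraud rate
print(insurance_fraud_dollars / food_stamp_dollars)    # ~33x the dollars lost
```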

I’m not denying that the anecdotes occurred. Certainly, a relatively small number of people try to defraud the government by misusing food stamps, but the statistics suggest that the problem is practically non-existent and not worth mentioning or worrying about. The demagogues stating that food stamp fraud is an enormous problem are trying to promote antipathy toward recipients of social benefits, the so-called “welfare queens” accusation. The facts of the article demolish this view as it concerns food stamps.

We can only speculate on how this story developed. Did the editor assign an article meant to support the right-wing view that food stamp fraud is rampant (a pretext for cutting the program and letting hundreds of thousands face food insecurity), only to have the facts turn the article a different way, leaving the writer with nothing but anecdotes to support the editor’s goal? Or was it the opposite: a conservative reporter trying to put a right-wing face on the facts through anecdotes that run counter to them?

Or did the writer pit anecdotes against facts as a way to present a “fair and balanced” story? If so, the writer forgot that anecdotes are as much like facts as apples are like oranges.

Unless, of course, the writer has read Daniel Kahneman’s Thinking, Fast and Slow, in which the eminent social scientist uses numerous controlled experiments to show that people will believe a single anecdote that conforms to their ideas over multiple facts that disprove them. In other words, the writer could have cleverly constructed the story to lead readers to believe the very thing the article disproves, by supplying vivid anecdotes that run counter to the underlying facts. The facts say, “No food stamp fraud,” but the richly detailed case histories may convince us otherwise.

“Food Stamp Fraud, Rare but Troubling” is thus a masterpiece of deniable deception. The article claims to prove one thing—and it does, except for those internal heartstrings plucked so expertly by the anecdotes that sing to right-wingers that they were right all along.

Detroit’s bankruptcy latest attempt by the wealthy to steal from the poor

Kudos to Ross Eisenbrey of the Economic Policy Institute for rejecting the notion that overly generous pensions led to Detroit’s bankruptcy.

Instead of pensions, Eisenbrey cites several reasons for Detroit’s financial problems:

  • A depleted revenue stream as wealthy people moved to nearby municipalities, taking advantage of the city as an economic driver while destroying the city’s tax base.
  • Bad financial deals with banks, including interest rate swaps: contracts in which two parties agree to exchange interest rate cash flows on a specified notional amount, trading a fixed rate for a floating rate, a floating rate for a fixed one, or one floating rate for another. Each side is betting that a certain set of economic conditions will prevail, so that it comes out ahead on the swap (a bare-bones sketch of how one period of a swap nets out follows this list). As Eisenbrey details, these swaps were profitable for Wall Street banks and exposed Detroit to financial risks that ended up costing the city $600 million in additional interest.
  • Corporate subsidies and tax loopholes for businesses that did not create enough jobs to justify these gifts to private sector companies.
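Here is the bare-bones sketch promised above: one settlement period of a fixed-for-floating swap. The numbers are hypothetical, not the actual Detroit deal terms, but they show how a borrower locked into paying a fixed rate loses when floating rates collapse, as they did after 2008:

```python
# One settlement period of a hypothetical plain-vanilla interest rate swap.
# All figures are made up for illustration; Detroit's deals were more complex.
notional = 100_000_000   # the "specified amount" the rates apply to
fixed_rate = 0.05        # the city pays 5% fixed...
floating_rate = 0.02     # ...and receives a floating rate, here 2%
# The parties net the two legs; the fixed payer owes the difference:
net_owed_by_city = notional * (fixed_rate - floating_rate)
print(f"${net_owed_by_city:,.0f}")   # $3,000,000 owed this period
```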

Unmentioned by Eisenbrey is the fact that all three of these forces represent the same theme: rich folk squeezing a city dry of its wealth and then leaving it to flounder. Wealthy suburbanites benefited from living near Detroit without paying taxes to the city. Wealthy banks essentially benefited from selling Detroit’s politicians a bill of goods. Wealthy company owners lowered their operating costs without giving back enough in new jobs.

As Eisenbrey advocates, the burden of solving Detroit’s financial problems should not fall on the Motor City’s middle class and working class people, who have worked long years for pensions that they negotiated and upon which they depend to survive. Funny, isn’t it: while it’s not okay to break the financing contracts with the banks, politicians think nothing of breaking the contracts the city signed years ago with its workers. Eisenbrey wants Detroit to say “enough is enough” to the banks and walk away from the onerous interest rate swaps and other financing gimmicks. The banks have made enough money on the Motor City already.

Eisenbrey also wants to end the loopholes and special deals for corporations and have the state of Michigan chip in more money to pay Detroit’s bills. I would add a special regional tax based on income (or, as in France, on wealth) that the state would collect for the city from Bloomfield Hills, Grosse Pointe, Birmingham, Franklin and the other nearby and distant Detroit suburbs.

In his very perceptive article, Eisenbrey also suggests that Detroit’s emergency manager Kevyn Orr, Michigan Governor Rick Snyder and other civic leaders are mischaracterizing Detroit’s problems by focusing on the $18 billion in long-term debt the city owes. It’s another example of right-wing politicians defining the issue in terms that benefit their constituencies. Let’s set aside the possibility that $18 billion may be a grossly overstated estimate. Eisenbrey correctly reasons that municipalities cannot liquidate the way private companies can, so the size of the debt is not the issue. All that matters is the cash flow—how much money Detroit needs to pay its bills each month. Right now Detroit faces a $198 million cash flow shortage.
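A quick calculation shows why the framing matters; the ratio below uses only the two figures just cited:

```python
# Solvency framing vs. cash-flow framing, using the article's two numbers.
long_term_debt = 18e9        # the scary headline figure; moot if you never liquidate
cash_flow_shortage = 198e6   # the gap the city actually has to close
print(f"{cash_flow_shortage / long_term_debt:.1%}")  # 1.1% -- a far smaller problem
```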

Cash flow is easy for municipalities to deal with, at least in theory—raise taxes or lower costs. The city has already cut costs not only to the bone, but to the marrow. Now it’s time to raise taxes, but on a regional level. For too long, wealthy suburbanites have sucked Detroit dry. It’s now time for them to give something back.

But that’s not going to happen. More likely, Detroit will become a model for the latest way for the rich to continue their 30+ year war on the rest of us: declare a city in financial trouble and use that excuse to gut pensions and workers’ salaries, thus putting even more downward pressure on the wages of private sector workers and ensuring the continuation of the low-tax regime that has a financial chokehold on most families.

Why did the FDA make its new antibiotic restrictions voluntary instead of mandatory?

Were you as delighted as I was to read the headline that the Food and Drug Administration has a new policy prohibiting the use of antibiotics to speed the growth of pigs and other animals raised for human consumption? Trace antibiotics in the animals we eat have contributed to the increasing resistance of bacteria to the antibiotics we use to treat infections. The new policy forbids the use of antibiotics as growth stimulants and also requires farmers to get prescriptions each and every time they want to treat a sick animal with antibiotics.

On the surface it looks like a great victory for every American, because it is going to make all of us safer and less likely to die of an infection. The New York Times version of the announcement points out that two million people fall sick and 23,000 die every year from antibiotic-resistant infections. CNN reports that in April the FDA said that 81% of all the raw ground turkey the agency tested was contaminated with antibiotic-resistant bacteria. Currently every hospital patient faces the danger of opportunistic infections that don’t respond to antibiotics.

Every one of the 15 news reports I read hailed it as big news: “major new policy,” “broad measures” and “sweeping plan” are some of the descriptions of the FDA action.

But before we break out the champagne, let’s read the fine print: It’s all voluntary.

Virtually all the news stories bury this fact or downplay it. For example, the Times says that, based on comments made during the discussion period that precedes all federal regulations, rules and advisories, the FDA was confident that drug companies would comply (which I suppose means refusing to sell antibiotics to farmers without prescriptions for specific animals).

Then there’s the matter of a three-year phase-in period. No one has bothered to explain why anyone would need three years to implement this plan: just stop doing it, right away.

As some reports have noted, health officials have warned since the 1970’s that overuse of antibiotics leads to increased resistance. In other words, after 40 years of warnings, studies, discussions and negotiations regarding a major public health challenge, the best we can come up with is a voluntary plan.

Have no doubts about it: Some drug company somewhere in the world will continue to sell this stuff to farmers and farmers will still use it.

If the federal government were really serious about lowering the amount of antibiotics humans ingest in their food and water, it would have set mandatory regulations that took effect within 30 days. But such an action would cut off a cash stream for drug manufacturers and raise the cost of raising domesticated animals. Farmers and meat processors would make less money and consumers would likely pay a little more for their ground round and chicken nuggets. It’s worth it, though, as ending the routine use of antibiotics in livestock will make everyone in the United States (and the world) safer from the threat of contracting a life-threatening infection every time they have an operation and safer from the risk of an epidemic of virulent and untreatable infections.

Industry pressure most assuredly caused this wishy-washy action of asking drug makers to resist the urge to make more money. The news behind the news, then, is that once again our government has compromised the health, safety and economic well-being of its citizens to enable a small group of companies to continue making money. The additional illnesses and deaths are paid for by all of society, while the lower costs or higher profits accrue to a small segment of it. It’s another example of shifting costs from companies to society at large, and it demonstrates once again that unfettered free market capitalism does not lead to the greatest good for the most people.

Serious economists must be laughing at Wall Street Journal attempt to use Laffer Curve to support tax cuts

Wall Street Journal editorials often twist facts, leave out key facts, make incorrect inferences from facts or just plain get the facts wrong.  But the editorial titled “Britain’s Laffer Curve” shows that sometimes the editorial writers simply have no idea what the facts are saying.

In this editorial, the Journal wants to show that cutting taxes leads to increased tax revenues and invokes the notorious Laffer Curve to do so. Laffer Curve theory has been around for ages but is associated with right-wing economist Arthur Laffer, who supposedly drew it on a paper cocktail napkin for some government luminaries during the 1970’s. When I interviewed Laffer in 1981 for a television news report, he denied the myth.

What the Laffer Curve postulates is that as taxes rise, less money circulates in the economy and rich folk are less likely to invest to make more money, since they keep so little of it. Research suggests that neither of these statements is true, but if we assume they are, we can imagine a situation in which taxes are so high that lowering them raises the amount of revenue the government collects. Laffer Curve theory proposes that there is a theoretical point at which the tax rate produces the most revenue an economy can yield. It also predicts that there are occasions when raising taxes will indeed raise significantly more revenue and lowering taxes will indeed lower revenues—it depends on whether we are on the upward or downward slope of the imaginary curve.
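A toy model makes the shape of the claim clear. This is purely illustrative arithmetic, not real tax data; it assumes, for the sake of argument, a taxable base that shrinks as rates rise:

```python
# A toy Laffer curve: revenue = rate x base, where the base is assumed
# (purely for illustration) to shrink linearly as the rate approaches 100%.
def revenue(rate, base=100.0):
    return rate * base * (1.0 - rate)

rates = [i / 100 for i in range(0, 101, 5)]
peak = max(rates, key=revenue)
print(f"revenue-maximizing rate in this made-up model: {peak:.0%}")  # 50%
# Below the peak, cutting rates LOWERS revenue; only above it do cuts raise it.
```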

President Ronald Reagan and a slew of right-wingers since him have used the theory of the Laffer Curve to justify cutting taxes. They assume that no matter what the conditions are, we are always on the side of the imaginary Laffer Curve on which a cut in taxes always leads to an increase in revenues.

The Journal of course takes it for granted that taxes are always too high, especially on businesses, even though they are currently still much lower than during most of the last hundred years and certainly far lower than when Laffer supposedly put Montblanc to napkin.

The editorial in question proudly states that since Great Britain cut its corporate tax rate from 28% to 22% in 2010, the British Treasury has gained from 45 to 60 cents in additional taxes for every dollar of revenue lost by cutting the rate. In other words, economic growth (or more people paying all their taxes) compensated for 45%-60% of the revenues lost through the tax cut.

Now that may or may not prove the existence of a Laffer Curve describing the relationship between tax rates and taxes collected. But it does prove that you cannot use Laffer Curve economics to justify a tax break. Even after the Laffer Curve effects, the British government is still 40%-55% in the hole, meaning it must find other sources of revenue or cut government spending by that amount.
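Worked through per dollar, using the editorial’s own figures:

```python
# Per dollar of revenue statically lost to the rate cut, growth claws back
# 45 to 60 cents (the editorial's figures), leaving a net loss either way.
recovered_low, recovered_high = 0.45, 0.60
print(1.00 - recovered_high)   # 0.40 -- best case, still 40 cents lost per dollar
print(1.00 - recovered_low)    # 0.55 -- worst case, 55 cents lost per dollar
```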

And where did the shortfall go? To businesses and their owners, AKA rich folk, who history suggests will invest their additional wealth in the secondary stock market and luxury goods, neither of which really help the economy to grow.

The Journal wants us to believe that the experience of Britain should make us want to cut taxes to raise government revenues. But what the example shows is that cutting taxes leads to a loss of government revenue and a net transfer of an enormous amount of wealth from the poor and middle class to the wealthy. It’s as if the editorial writers have looked at a blue sky and declared, “Look at that blue sky. It proves that the sky is always yellow.” They see the facts, but that doesn’t dissuade them from believing what they want to believe is true.

Real economists the world over must be laughing at the Journal and its editorial board’s gross misinterpretation of the facts. Except, of course, those economists in the pay of right-wing think tanks.

Increase in adults reading juvenile fiction another sign of infantilization of Americans

The title of Alexandra Alter’s Wall Street Journal article on adults reading fiction written for middle-schoolers describes the situation perfectly. “See Grown-ups Read. Read, Grown-ups, Read” suggests not middle school, but an elementary school reading level. Alter’s story describes one of the many ways that our mass culture is infantilizing adults, turning them into oversized children.

Alter finds several reasons why adults like reading fiction written at the reading, intellectual and maturity level of 12- to 15-year-olds:

  1. The Harry Potter series of books continues to influence reading choices.
  2. There is less of a difference in tastes between generations today than in the past.
  3. There is less of a stigma in adults reading children’s books for pleasure.
  4. The quality of literature for middle-school children has increased and the themes have become more mature.

The first three reasons are euphemistic ways to say that many adults now maintain the interests of childhood or pursue childhood interests. Of course, Alter avoids the negative judgment implied—and meant—by my expression, “the infantilization of adults.”  As one of the several experts Alter quotes puts it, “It used to be kids who would emulate what their parents were reading, and now it’s the reverse.”

The fourth reason is worth analyzing further. Let’s accept the premise that the quality of the writing in books for the middle school audience has improved and that the themes and situations are more complex than in the past. The easy rhetorical response is that these books are still for children and not for adults. There is no stream of consciousness writing, no shifting of perspectives without signaling the shift (known as free indirect discourse), no long elegant Proustian sentences, and no modernistic imagery. Even today’s new and improved middle school fiction falls short of the best fictional writing for adults. In addition, the themes covered are those of interest to the middle schooler and thus inherently less complicated than what should be of interest to adults.

Alter peppers the article with quotes from experts, but all of them are authors, editors or publishers of juvenile fiction. Nowhere does she make room for the views of a sociologist, psychologist or philosopher, who might fear, as I do, that adults are losing their capacity for complex thought by reverting to their childhood joys and activities, be it juvenile fiction, theme parks or shoot-shoot-bang-bang video games. In fine Wall Street Journal free-market tradition, the article is about a growing market. In the Journal’s view, all free markets are good and the results of free market growth are always good. The editorial slant of the newspaper reflects a modern version of Voltaire’s buffoonish professor, Dr. Pangloss, who keeps repeating in Candide that everything is for the best in the best of all possible worlds. For the Journal, everything is for the best when the free market is operating.

Besides, infantilization of adults is good for Journal advertisers and the American consumer economy in general. Infantilization makes people less able to understand the fine print, less able to judge whether what is for sale is really of value. It leaves people less in control of their emotions and more insecure and susceptible to manipulation, just as children and teens are when compared to mature adults. In short, it’s easier to sell products and services—especially useless ones—to the less mature mind.

 

While celebrating the life of Nelson Mandela, let’s not forget that segregation still exists

Segregation is the separation or isolation of individuals or groups from a larger group or from society. Segregation has taken many forms throughout history: refugee camps, work camps, concentration camps, castes, class systems, quarantines, slave quarters, homelands, ghettos, pales, redlined districts, housing development covenants, mass transit seating and classrooms, to name some of the more prevalent means of denying people the right to enter or leave.

Except for medical quarantines, not one of the myriad means of segregation is fair, moral, ethical, humanistic, righteous or tolerable to the fair, moral, ethical, humanistic, righteous and tolerant person. While it enriches a pluralistic society when individuals of a group—say Jews or Pakistanis—move to the same neighborhood and open specialty stores catering to their cultural predilections, restricting these or other groups to certain areas undermines any society or nation. The same is true if a group tries to keep others out, either everyone or another specific group. A free society demands free access for everyone to all areas that offer free access to anyone, except of course for private property not engaged in civic affairs, commerce or other public ends.

Nelson Mandela defeated a particularly pernicious form of segregation called apartheid.  He resolutely withstood years of jail to lead a movement that eventually negotiated with the defenders of apartheid and defeated them in a democratic election. He fulfilled the vision of Gandhi, the dream of Martin Luther King.  That he began his public career supporting violence only makes more poignant the story of his achieving the good he sought peacefully. It also demonstrates the caliber of the man—always growing, always improving, always questioning.

In celebrating Mandela’s long life, however, let us not forget the many forms of segregation that still exist today throughout the world, including the abominable irony of an apartheid-like system in a nation controlled by a national group that suffered one of the most horrifying examples of segregation in recorded history.

In the United States, our most harmful form of segregation is the separation of rich from poor in access to education. Educational segregation—enforced by expensive private schools, private lessons and gerrymandered public school districts—has unleveled the playing field, helping to make the United States the least socially mobile country in the western world. Here it is harder for people to leave the lowest fifth in income and wealth, and easier for someone in the highest fifth to remain there, than in any other industrialized country. It makes a mockery of our democratic ideals for it to be so hard to climb the economic ladder. Education has usually been the way the poor have become rich in open societies; hence the connection between educational segregation and growing inequality of wealth and opportunity.

But educational segregation is merely one form of this pox on society that we need to address. The situation in Israel and the occupied lands is morally intolerable.  The Wikipedia article titled “Racial Segregation” details legal and de facto segregation in Bahrain, Canada, Fiji, India, Malaysia, Mauritania, the United Kingdom and Yemen. This list doesn’t include prisoner and refugee camps.

The mass media is already trying to homogenize Nelson Mandela, as it has successfully done for Martin Luther King, turning the day of remembering King’s life into a general day of service to the community, which whitewashes the fact that he dedicated his life to one particular kind of service: peaceful disobedience to oppose racial discrimination. In the same way, the mass media is already focusing on Mandela the peaceful fighter for democratic elections and freedom. But freedom for South African Blacks involved much more than getting the right to vote. Mandela’s fight was to create a pluralistic post-racial society of equal access, equal treatment, equal rights and equal opportunity.

The only way to appropriately honor Nelson Mandela is to continue the fight—the peaceful fight—against segregation of every kind, wherever it is.

What current media fascination is most like AIDS news coverage in the 1990’s? Hint: Lots of K’s involved

To those old enough to remember the 1990’s, the phrase “AIDS story of the day” will resonate, because in fact there was a new story about some aspect of AIDS virtually every day of the week in the mass media: research into its origin or cure, its spread, measures to prevent it, art and literature about AIDS or by artists with AIDS, changing cultural patterns, types of condoms, famous people outed because they contracted AIDS, protests by AIDS victims, the impact of AIDS on communities and cities, the spread to the heterosexual community, vignettes of sufferers and their families, the overcoming of prejudices, funding challenges, studies and reports from other countries. Every day it was something new as reporters, magazines, newspapers and TV programs tried to top each other with the new or unusual related to this dreaded plague.

That there was a constant onslaught of news stories over pretty much an entire decade was understandable. It was a worldwide epidemic of a horrible disease of then-unknown cause, related to sexual practices or intravenous drug use. The story of the world’s reaction to AIDS—finding its cause and then the means to ameliorate if not prevent it, while gaining a new respect and tolerance for its victims—represents humanity at its best.

How ironic then that the contemporary news phenomenon that most resembles the AIDS story in its longevity and number of story angles is not a monumental medical epic involving millions, but the private bantering and peccadilloes of a family of rich but garish narcissists.

Only those who ignore the mass media don’t know to whom I’m referring: It’s the Kardashians.

Every day, a story about one or more Kardashians appears on the Yahoo! home page, Google News, the news pages of popular email portals such as Verizon’s and Time Warner’s, many of our finest tabloid newspapers like The Daily News and gossip-based television shows like Entertainment Tonight and The Wendy Williams Show. More staid and serious news media such as the Wall Street Journal and New York Times cover the family with some frequency.

Their loves, flirtations and breakups, frustrations, life events and parties, purchases, vacations, clothes, cars and other toys, family relationships, faux pas and ignorant statements, rumors, popularity and the very fact that they are a phenomenon are all grist for the Kardashian mill. Even the Kennedy family at its height did not command so much constant attention, partially because they flourished before the age of 24/7 Internet and television media.

And why so much news coverage for a pack of uneducated conspicuous consumers of luxury products?

  1. Their parents are rich.
  2. They tend to couple with famous people, mostly second-rate professional athletes.
  3. They have starred in a succession of reality TV programs in which they inelegantly portray garishly ostentatious lives of conspicuous consumption and family bickering.

In short, they are pure celebrities, famous for being famous, or more bluntly, famous for sleeping with famous people. The fact that much of the detail of their lives and adventures may be created by a stable of reality show and public relations writers matters little. The post-modern blending of reality and fantasy is accepted as gospel by so much of the news media that the Kardashian universe has become the fulfillment of the Karl Rove dream of replacing a reality-based world with an ideologically determined one.

The Kardashian ideology, embraced by the show’s sponsors and the owners of the many media outlets that cover their antics, is worship of the commercial transaction. Peruse the stories (but not too many) and you will find that virtually all of them involve buying or giving/taking something someone has bought. The Kardashians’ many complex but frangible relations all boil down to shopping: what Lamar got Khloé, where Kourtney shopped, what designer jewelry Kris was wearing.

Every day the sheer volume of Kardashian stories overwhelms coverage of more important matters. Just now, for example, I found 69.7 million stories about the Kardashians in Google News, but only 140,000 on the car bomb attack in Yemen and a mere 6,000 about the Illinois pension overhaul. Several months ago I reported on a study by some Stanford scientists that demonstrated how to provide enough electricity for the entire world through wind power; it garnered exactly one news story throughout the Googlesphere.

Even the most ostensibly high-minded mainstream news media are prisoners of the need to make money by appealing to advertisers. And advertisers like stories that exhort readers to buy expensive toys. And even more do they like stories which advocate the idea that every emotion and human expression must manifest itself in a commercial transaction—buying something.  And most of all they like stories which glorify the shopper as the person to be most admired and honored.

So-called bioethicist would rather see people die than change society

In a New York Times Op/Ed column, Daniel Callahan, co-founder of the Hastings Center, a bioethics research institution, questions the wisdom of extending human life.

Callahan rightfully calls aging “a public issue with social consequences” and mentions two of the ramifications of more people living into old age: 1) More medical costs for society; 2) Fewer jobs for the young, as the old extend their working lives.

But instead of seeing health care and the workforce as challenges to overcome as we extend the amount of time people can live, he sees them instead as reasons not to extend life. He doesn’t say it explicitly, but his underlying argument essentially throws people under the bus when their usefulness to the economy appears to end.

The increase in medical costs to treat the elderly should not be seen as society’s burden, but rather as our joyous reward for having created a world in which people can live longer and continue to thrive. That so many people live longer is a sign of success, not a reason to stop the advance of medical research. We expanded educational institutions to meet the large increase in the population of children when the Baby Boomers started popping out. What is so different about expanding medical and social programs for our increasing population of the elderly?

The jobs issue is a little more complicated, primarily because our automated economy does not create enough jobs for everyone willing and able to work. But instead of artificially creating job openings by kicking out people at a certain age, we could fix what’s wrong with our economic system. Here are some thoughts:

  • As more people live to 90, 100 and beyond, they will need more caregivers, which creates jobs for younger people.
  • Local organic farming requires more human labor. If we created an agricultural system that relied on a mix of industrial and older techniques, it would create many more jobs for the young. The key, of course, is to make certain that these jobs pay a decent wage.
  • Unless there is a pressing financial reason, those in their 60’s, 70’s and beyond typically don’t want to work 40-hour weeks. Job-sharing, especially between the old experienced hand and the young go-getter, makes a lot of sense.
  • We are currently not spending enough on many job-creating enterprises, such as fixing our roads and bridges, hiring enough teachers to decrease class sizes, exploring outer space and developing renewable energy sources and systems.

To make any of these ideas work requires two actions that give conservatives the willies: 1) More government management of the economy; and 2) A more equitable distribution of the wealth.

When people use economic arguments to justify denying people basics such as nutrition, healthcare or education, I always wonder if they include themselves. Evidently the 83-year-old Callahan does not, as he admits to having received a seven-hour heart operation and to using oxygen at night for his emphysema.  In our current world, the rich—and I include Callahan—can afford to keep themselves alive and have nice cushy jobs from which they can keep drawing income for decades after turning 65.

Callahan’s sole concern is that as currently constructed, our society and economy cannot afford to extend such privileges to everyone. While he seems to care about the social good, he argues from the point of view of someone who neither believes nor wants society to improve or change. He is happy living in a world dominated by the politics of selfishness, the idea that “I got mine, who cares about anyone else.” He sees an increase in the very old as a threat to that world, as opposed to a sign that we are making progress towards a better one.

We all know people whose lives are so filled with pain and suffering that to the outsider it seems as if they would be better off dead. Focus on these poor souls (and don’t ask if they want to remain alive in their pain) and Callahan’s argument that life extension may not be an absolute good makes a tad of sense.

But instead, try focusing on the many vibrant 80- and 90-year-olds around. Even those who are not so active can still enjoy their friends, their favorite foods, music, outings and games, sports teams, reading, the changing of the seasons, the chirping of birds, the affection of pets, the delight in seeing the flowers pop up in the spring, in short, the sheer joy of existence. We should be doing as much as possible to extend that joy for all people.

What is the biggest cause of the drop in crime rates?

The latest statistics demonstrate that New York City’s Draconian stop-and-frisk policy has not been the cause of the precipitous drop in the rate of violent crime in the five boroughs. Even after NYC’s finest curtailed the practice of stopping and frisking people without cause, crime rates continued to plummet.

I’ve been meaning for some time to analyze why crime rates have dropped and continue to drop across the United States, but especially in urban areas outside of Chicago.  Despite the right’s wails and lamentations about unsafe communities, most of us live in far safer places than we did a decade or two ago. Interestingly enough, the crime rate is down most precipitously in that modern Sodom or Gomorrah, the Big Not-So-Rotten Apple.

Why has crime decreased?

First, I want to discount the idea that crime fell as a result of the increased incarceration of individuals, victims of the many 3-strikes-you’re-out and anti-crack laws passed in the late 70’s and 80’s. We have filled our prisons with a bunch of people—black males to a large extent—who don’t deserve to be incarcerated. All they have done is minor kids’ stuff or drugs. We have the highest incarceration rate in the western world and yet we still have the highest rate of violent crime. No doubt, some small percentage of those locked up for years for tooting crack might have committed future crimes, but some percentage of those locked up learned criminal ways in prison and became lost to society. I’m thinking the net effect disproves the idea that locking up more people than any other industrialized nation led to a drop in crime rates.

One of the gun lobby’s many fantasies is that the increase in open carry and other gun rights leads to a decrease in crime, because criminals won’t want to run into someone who would shoot back. This absurd claim crumbles as soon as we look at the facts. Forget that incidents of citizens stopping criminals by pulling out a gun are extremely rare. Consider that the higher the prevalence of guns in any country in the world, the higher that country’s rate of deaths and injuries from guns. More guns equal more violent deaths. Also consider that while there are more guns out there now, fewer households own guns today than 20 years ago, continuing a trend that is more than 50 years old. Fewer people own more guns. I think it’s likely that the decline in gun owners may have contributed to the drop in crime.

So far, I’ve considered some bogus arguments conservatives make about the drop in crime. Now let’s take a look at three legitimate arguments, each of which I think has been a factor in the continued drop in crime, though none is the primary cause.

Let’s start with the end of the use of lead paint. The theory goes that crime increased soon after we started using lead-based paint in apartment buildings, because children would eat the paint chips and suffer one or more of the side effects, which include learning disabilities resulting in decreased intelligence, attention deficit disorder and behavior issues, all predictors of criminal behavior. Once we stopped using lead paint, the crime rate went down (even though the rate of diagnosing ADD continues to soar). It’s a very believable theory, backed by evidence that suggests but does not prove causality. Not enough research has been done on the effect of lead exposure on human adherence to social norms, but the explanation does sound plausible.

We can also look at the growth of dispute resolution programs in the schools as another factor in lowering the rate of crime. I think it was some time in the 80’s when these programs began, first in urban areas. Having sixth grade kids mentor first-graders, throwing middle school kids in with high schoolers, bringing together groups of students from different schools to talk about race, religion and other hate issues, the growth in organized sports leagues—all of this additional socialization had to turn many marginal children away from crime.

My own pet theory is that the growth of video game play helped to lower the crime rate. The idea is that people work out their anger and anti-social urges playing Grand Theft Auto and Call of Duty: Black Ops. So while I despair that most video games tend to infantilize young men, preventing their ideas and thought processes from maturing, I do think that the games have kept many young men busy and out of trouble.

I do reject one non-conservative theory: a professor has postulated that the legalization of abortion resulted in fewer unwanted children being born, and that unwanted children commit more crimes. The problem with this theory is that the introduction of the birth control pill assuredly prevented the birth of more children than the legalization of abortion did, yet the pill’s introduction paralleled the increase in the crime rate in the 1960’s and early 70’s, at least at first.

Lead paint, the growth in socialization programs and video games all played a role in the decrease in crime, without being the main cause. Sociologists and historians who calculate crime rates across many cultures and centuries report that the rate of crime is primarily a function of the number of 16-29 year old males in the population. Most crime is committed by young men, so the higher the percentage of young males in the population, the higher the crime rate.

The facts certainly match this theory until about 2003. When the Baby Boom turned 16, crime rates started to soar. Males aged 16-29 represented the largest percentage of our population in our nation’s history. When Generation X—otherwise known as the Baby Bust—started to turn 16 and Baby Boomers started turning middle-aged, crime rates started dropping. The birth rate increased again with the Millennial generation (AKA Generation Y, although judging from the high achievements of its female members, maybe Generation Non-Y is a better moniker!). But when the Millennials started turning 16, the crime rate did not pick up again.

My thought is that the impact of the Millennials on the overall population is far less than that of the truly outsized Baby Boom generation. So while we have more 16-29 year old males, this demographic segment is not as great a percentage of the whole as it was at the height of Boomer young adulthood. The end of lead paint, greater socialization, the growth of video games, a decline in gun ownership and other factors still unidentified all combined to keep the crime rate going down. By this theory, if the Millennials had been as large a factor as the Baby Boom generation, the crime rate might have started rising again, though not to Boomer levels, because of these additional factors.
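To make the reasoning concrete, here is a toy version of the argument. Every number is invented purely to show the shape of the logic, not real demographic or crime data:

```python
# A toy model of the demographic theory: crime tracks the share of
# 16-29-year-old males, damped by the other factors listed above
# (lead removal, socialization programs, video games, fewer gun owners).
def toy_crime_index(young_male_share, other_factor_discount):
    return young_male_share * (1.0 - other_factor_discount)

boomer_peak = toy_crime_index(0.14, 0.0)     # big cohort, none of the damping factors
millennial_era = toy_crime_index(0.11, 0.3)  # smaller cohort share plus the damping
print(boomer_peak, millennial_era)           # 0.14 vs ~0.077: the index stays down
```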