« I'm on the Edge of Something Shattering | Main | There Was a Pregnant Pause before He Said OK »

May 07, 2010

Comments

Dr. Pangloss' nose fell into his soup a long time ago.

The syphilis infection is in the American bone now.

Too late.

Heck, we're going to take suspected terrorists' citizenship rights away, except of course for their Second Amendment rights.

In a few years, we'll be taking out streakers on the baseball fields with cruise missiles.

Iraqi civilian deaths: Between 96,037 and 104,754

Odd that a post about the damage done by the Iraq war should repeat one of the most damaging lies about the Iraq war: the minimizing of Iraqi casualties.

To June 2006, the Iraq Family Health Survey found 151,000 violent deaths.

The Lancet survey, also to June 2006, found 601,027 violent deaths out of 654,965 excess deaths.

The ORB survey, to August 2007, found 1,033,000 violent deaths.

I found it ironic that the same people who wanted to minimize the damage done by the US attack on Afghanistan decried Marc Herold and his methods when he concluded that, six months after October 2001, the US had killed at least 6,000 Afghans.

Since then, when more accurate methods showed that Marc Herold's methods, used by the Iraq Body Count website, would tally no more than 10% of the true body count, this useful undercount became the preferred method of counting casualties for those who support the war. Those supporters attack as "political" anyone who attempts to publish the more accurate figures, so violently that even people who do not support the war find it safer to use the count the war's supporters prefer.

There's no basis for using Iraq Body Count numbers now. Actual scientists who have PhDs and who teach at major universities have done real work to figure out the numbers. They've conducted real studies and published them in reputable peer reviewed journals. We don't need to rely on the work of ego-driven amateurs who have methodological flaws large enough to drive a truck through.

I can understand why the authors would prefer to avoid getting into sterile arguments about the true death toll, but "Total deaths: Between 110,663 and 119,380" is simply wrong. That 119,380 figure is not a maximum, nor does Iraq Body Count pretend that it is; it relates only to reported deaths.

A better way of putting it would be: at least 110,663 violent deaths, per Iraq Body Count, with an unknown number of (a) unreported violent deaths and (b) non-violent deaths due to increased mortality resulting from the conflict.

Nice New Pornographers lyric for the post title, Eric. You and von are going to eventually run out of songs!

i notice how that chart fails to list the increase in peance and freance. the bias is palpable.

That's OK Phil, they'll just make more ;)

Isn't there some sort of rule regarding Asia and land wars? Like, not doing it, or something?

Sure does seem simple.

The easiest way to handle the death toll issue without taking a stand on any particular study is simply to say "hundreds of thousands of deaths" and that's what a lot of people do. Even the New England Journal of Medicine study that is cited by critics of the 2006 Lancet paper found a death toll of 100,000-225,000 by June 2006, which was 2 to 5 times greater than IBC's number for the same period. That study also agreed with the numbers in Lancet 1 for the first 18 months of the war.

Anyway, it's safe to say that everyone who has tried to measure it has gotten numbers much higher than IBC's, so citing their numbers as though they are authoritative is moronic.

" found a death toll of 100-225,000 "

I should have said 100,000-225,000 killed by violence.

The easiest way to handle the death toll issue without taking a stand on any particular study is simply to say "hundreds of thousands of deaths" and that's what a lot of people do.

That's what I do, FWIW.

I hate dandelions. When I find one on my lawn, I rip it out of the ground. Then, just to show it how much I hate it, I shake its little puff-balls until all the white stuff has flown off and scattered all over the place. They'll get the message sooner or later.

Eric, if you had it to do over again, how would you have handled the Taliban situation in the 9/11 aftermath? No snark here, I am just wondering what alternative scenario in Afghanistan/Pakistan would have left us in a markedly different situation? Further, if there is, in hindsight, a better way to have reacted to the Al Qaeda/Taliban, who was advocating it at the time, if anyone? I am clear that you would not have invaded Iraq, so that is not any part of the question. Thanks.

McTex: I think I would have gone about the Afghanistan/al-Qaeda/Taliban strategy about the same way initially.

It's more the mission creep that I have an issue with. That, and the absolutely unrealistic goals that we've now established, the time the occupation will need to take to achieve them, the costs, the unlikelihood that Pakistan will buy in, etc. Someone asked me this on another thread on this site last week. This is how I answered:

I was for the invasion. I'd say I began turning around during the Prez campaign (too late, in retrospect, I believe). What turned me was a sense of mission creep, and also a sense of the unlikelihood of success as defined.

I supported going in and breaking up al-Qaeda, which we did. I supported the overturning of the Taliban regime, and the initial attempt at establishing a working order. However, mission creep began to set in with respect to the latter goal, and it became increasingly clear that: (1) the factions we were backing were and are corrupt, violent and often as retrograde as the Taliban with respect to women's rights and other liberal sensibilities (in fact, there are former Taliban factions in Karzai's government); (2) the people were not enthused with that government and there was too much indigenous support for the Taliban and other non-Talib resistance groups; (3) trying to impose this government on the Afghan people would require too much carnage in the name of "helping" those Afghans lucky enough to escape with life and limb; (4) Pakistan wasn't going to play ball so much as pay lip service; and (5) the strategists were talking about another 10-15 years of intense, extremely expensive military occupation, for which even they gave mixed prospects of success.

Just wanted to point out that Iraq has a population of about 30M. So even a civilian death toll of 100K would be like a million civilian deaths in this country.

Plus a million and a half internal dislocations and almost 2 million fleeing the country. Scale those up by ten for comparison.
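The scale-up arithmetic above can be checked directly. A minimal sketch, assuming round 2010-era figures (Iraq ~30M people, US ~310M) and using the casualty and displacement counts quoted in the comment:

```python
# Scale Iraqi losses to the US population for comparison.
# Assumed round figures: Iraq ~30M, US ~310M (circa 2010).
iraq_pop = 30e6
us_pop = 310e6
scale = us_pop / iraq_pop  # roughly 10x

figures = {
    "violent deaths (low-end count)": 100_000,
    "internally displaced": 1_500_000,
    "refugees abroad": 2_000_000,
}

for label, n in figures.items():
    # Each Iraqi figure corresponds to roughly 10x that number in US terms.
    print(f"{label}: {n:,} -> US-equivalent ~{n * scale:,.0f}")
```

Even the lowest reported death count scales to roughly a million US-equivalent deaths, which is the comparison being drawn.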

I just don't think Americans have any concept whatsoever of how what we do affects other people in the world.

Eric: That's what I do, FWIW.

Why claim that in a post where you didn't?

(Sure, you're quoting. But you're quoting these people with approval as "detailed and informative" - even though they're lying about the number of Iraqi deaths, and, it appears, you know they're lying.)

McKinneyTex: Eric, if you had it to do over again, how would you have handled the Taliban situation in the 9/11 aftermath?

Not Eric, obviously, but it required a fair degree of madness to think that attacking a country that had not attacked the US and presented no possible threat to the US was any kind of sensible response to a terrorist attack on the US. Missile attacks on civilians are not, and never have been, a good response to terrorist attacks, if by "good" we mean "ensuring no further terrorist attacks take place".

But Americans wanted a military response, and they got it. Never mind that considered as a response to 9/11, it was as insane a waste of resources as was the Iraq war.

I just don't think Americans have any concept whatsoever of how what we do affects other people in the world.

we barely give a crap about how what we do affects other Americans.

Thanks, Eric. Can't argue with you on this one.

Jes, are you saying the Taliban/Afghanistan was "a country that had not attacked the US and presented no possible threat to the US"?

I think the figure for veterans' health and disability care counts direct costs only. That would fail to include economic losses from unemployment or underemployment due to war injuries. It would fail to include damage to families - higher divorce rates, the impact of death or injury of a parent, etc. And it would fail to include what seems likely (to me, anyway) to be a shortened life span for some wounded veterans.

I just don't think Americans have any concept whatsoever of how what we do affects other people in the world.

And when some Iraqi blows himself up in some department store in NYC in a few years, we'll be all like "Dude, after all we did for you getting rid of Saddam Hussein?"

Getting into an argument over the most accurate number distracts from the larger point that Eric is making here, which is that -- even at the minimum figure that supporters of the war can agree upon (which is quite likely a gross undercount) -- the cost is staggering. Getting agreement on that point is far more important than having the fight about the numbers *again.*

Nothing changes, whatever the actual figures are, until there is agreement over the larger point. So you either keep hammering on an argument that failed to convince (whatever its validity) or you let that lie for the time being in order to find the grounds that will allow enough consensus to achieve some positive change.

I don't care if war supporters will only believe a number that is 25%, 90% or 99% off of the reality on the ground so long as they agree that that number is too big, that we need to back down from our current belligerent stance and that they sour on the idea of repeating this approach again the next time someone kicks us in the shins to get a rise out of us.

Getting into an argument over the most accurate number distracts from the larger point that Eric is making here

If someone said "boy, the Holocaust was awful, over 600,000 Jews were killed", I'd be inclined to beat them with a 2x4. Then I'd look at them and say "what, what's your problem? I just tapped you with this No. 2 pencil." I'm not Jewish. I just find this cute attempt to purge our victims from history and memory to be morally obscene. Hundreds of thousands of people were brutally murdered. Pretending that 9 out of ten of those murders never happened is wrong. And I don't care if doing the right thing is a distraction. No one ever says, "eh, we should say that the Holocaust killed over 3,000,000 Jews because it would be distracting to have a big argument with all the Holocaust deniers" -- that's absurd. But because Jews are people and Iraqis aren't, everyone appreciates the absurdity in that context.

Look, there is no scientific basis for believing the IBC numbers. A bunch of amateurs made up their own methodology that has enormous flaws. It's as if you asked a 5-year-old to audit the Federal Reserve: the result is complete garbage.

We all know the truth: if the IBC numbers were 2,000,000 instead of 100K, we would not be having this discussion. No one, and I mean no one at all, would be citing the IBC numbers. And no one would be questioning our refusal to cite the IBC numbers. There wouldn't be anguished comments about what a horrific distraction it is to even consider basic credibility. Am I right? Does anyone care to disagree?

Getting agreement on that point is far more important than having the fight about the numbers *again.*

My religious beliefs compel me to be a dick about this. I'm really sorry that my refusal to help whitewash the extermination of 900K people irritates you. I trust that you'll get over it.

Turbo,

Did any of the studies conclude that 1 million Iraqi civilians died of violent causes?

(note: civilians + violent causes)

Did any of the studies conclude that 1 million Iraqi civilians died of violent causes?

ORB might have but I don't usually consider that one since it wasn't published in a peer reviewed journal. The others did not. However, the studies were all completed by mid 2006. After that, it became too dangerous to do fieldwork in Iraq. Because that's when the violence increased. Tim Lambert has done some simple calculations taking the death rates calculated in the various studies and projecting them out through mid 2008. Lancet2 projections for violent deaths over this wider time range do in fact exceed 1 million. I don't believe the Lancet studies distinguished between combatant and civilian deaths. In fact, I'm not sure that it is ethically possible to conduct a survey that asks that question (i.e., one that does not endanger the lives of respondents).

I'm confused as to why we would limit our concern to violent civilian deaths only. If grandma can't get medical treatment because the ministry of health has been taken over by Shiite fanatics who have decided that Sunnis don't get medical care, she's just as dead as if a Shiite fanatic put a bullet in her head. If you can't afford formula for your baby because you've been driven from your home and have lost your business, her death is still a problem. Right? And in conflicts where state authority completely breaks down and large numbers of people are forced to carry weapons and engage in violence just to survive, I'm not sure why we should only care about civilians. If Dad had to join a local militia to defend against rival militias that were trying to ethnically cleanse the neighborhood, does his death somehow count less? I mean, do you insist that 9/11 death totals aren't legitimate unless military/DOD deaths are excluded from the count? Or that the only 9/11 deaths that count are those on the planes since everyone else was killed by fire or building collapses?

What is the principled justification for these restrictions?

My religious beliefs compel me to be a dick about this. I'm really sorry that my refusal to help whitewash the extermination of 900K people irritates you.

I'm sorry that you absurdly chose to characterize this dispute as an attempt to whitewash mass murder. Sometimes people disagree about stuff that's important, but disagreeing with you on this point does not make someone complicit in mass murder. This is the inane "objectively pro-X" argument in different clothing.

disagreeing with you on this point does not make someone complicit in mass murder.

I think there's a pretty big distinction between whitewashing mass murder and being complicit in mass murder. Don't you? Obviously, people who disagree with me on this point are not complicit in mass murder (well, not any more than your average US/UK citizen). But they are participating in a purging of hundreds of thousands of deaths from memory, a purging that would be shocking if done to groups of more favored victims.

And I might not raise the whitewashing issue if the cost (ZOMG! A DISTRACTION! On a blog on the INTERNET!) wasn't so trivial. But since we're not currently engaged in open heart surgery, I think the terrible distraction cost is bearable.

Turb -- But they are participating in a purging of hundreds of thousands of deaths from memory, a purging that would be shocking if done to groups of more favored victims.

And here is the heart of our disagreement. We are not "participating in the purging of hundreds of thousands of deaths from memory." We are, at worst, bracketing them off and suspending their utility as an obstruction for the time being in an effort to get enough buy-in to stop killing even more people for no gain.

I think there's a pretty big distinction between whitewashing mass murder and being complicit in mass murder. Don't you?

Not so much as you, I guess. Whitewashing mass murder makes one a moral accessory after the fact, so I find it a pretty serious accusation to level at someone.

Not to get all Stalinesque, but for the purposes of arguing that the war was a stupid, stupid thing with all manner of bad consequences, it's not important whether there were 100k or 1000k civilian deaths- either is a horrific consequence. That's not to say that it's not terribly important historically that we keep the real numbers in mind, or that I don't appreciate the initial correction to Eric's quoting the 100k number as a ceiling without some addendum. Just that getting bogged down in that number helps apologists deflect the real argument- it's more straightforward to say "let's just stipulate that there were at least 100k civilian deaths", at least in my experience.
And you can disagree with that, and I'm totally sympathetic to that position. Wanting to say it each time- something like 500k human lives- is a noble instinct. But not agreeing with that stance doesn't mean that Eric *wants* to minimize the body count for some reason, particularly if his goal is to make the overall argument that the war was a terrible decision without adding additional complexities around statistics etc.

And here is the heart of our disagreement. We are not "participating in the purging of hundreds of thousands of deaths from memory." We are, at worst, bracketing them off and suspending their utility as an obstruction for the time being in an effort to get enough buy-in to stop killing even more people for no gain.

We would never do this for Jews killed in the Holocaust or Americans killed on 9/11. Don't you agree? I've asked this question several times and have yet to receive an answer.

Perhaps I'm confused, but I don't see how "bracketing" the number of deaths advances the discussion. The causal mechanism eludes me. I mean: the IBC methodology is garbage. Do you really believe that there are a great many people who prefer obviously flawed amateur methodologies over respected scientists'? And do you think these same people will take any argument about Iraqi suffering seriously? If Jes and I and Donald had been silent on this thread, can you explain precisely what events would have turned out differently that would have gotten more buy-in?


Whitewashing mass murder makes one a moral accessory after the fact, so I find it a pretty serious accusation to level at someone.

I don't see it that way at all. A mid ranking functionary in the German government who is ordered to destroy some government records is doing a very bad thing. But his behavior is really not comparable to that of an SS officer in the camps. Destroying information is not the same as destroying human beings. Our relationship with history and memory is a fairly complex thing and I suspect that by your standard, there are billions of people in the world today who are accessories to ethnic cleansing. I mean, lots of Japanese people deny that atrocities in China ever happened...do you think that every one of those is an accessory to those atrocities?

But not agreeing with that stance doesn't mean that Eric *wants* to minimize the body count for some reason

Er...did I say anything about Eric? I'm a little shocked that CAP's staff would make this sort of mistake, but since Eric didn't write the quoted parts, I can't really blame him for it. Especially since he did say that he describes the death toll by saying "hundreds of thousands". Are you confusing me with Jes?

Turbulence writes:

"There's no basis for using Iraq Body Count numbers now. Actual scientists who have PhDs and who teach at major universities have done real work to figure out the numbers. They've conducted real studies and published them in reputable peer reviewed journals."

Indeed. Here's one showing that the Lancet study data are fabricated:
http://www.informaworld.com/smpp/content~db=all~content=a921401057

"We all know the truth: if the IBC numbers were 2,000,000 instead of 100K, we would not be having this discussion. No one, and I mean no one at all, would be citing the IBC numbers. And no one would be questioning our refusal to cite the IBC numbers."

Well, if the IBC numbers were 2 million, then each of those 2 million deaths would be supported by cited reports with documentary evidence of the time, place and identity of the persons killed, and therefore could not be ignored.

(And then some other people would be fabricating "estimates" of 20 million)

I have to confess I've done the bracketing thing myself, Turb--to increase the chances of getting a letter to the editor published I said "at least hundreds of thousands of Iraqis have been killed or wounded". That would be consistent with IBC's numbers (100,000 dead and three times that wounded). I felt a little guilty doing it. I did say "at least".


You're right about the double standard. I remember one graph in the NYT early in the Iraq War when the IBC number was about 30,000 (the first Lancet paper already showed it was probably far greater) and they were comparing this to various other conflicts and what irritated me was that in those other conflicts they used numbers that clearly weren't based on IBC-style methodologies. Consequently the Iraq numbers came out looking much smaller in comparison.

As for the importance, I think it does matter that we at least be honest about the range of estimates and, when we compare numbers, be clear about the methodology used to derive them. We had a lot of people justifying the invasion of Iraq on the grounds that Saddam had killed 300,000 or 500,000 or 1 million or even more of his own people. I think the Human Rights Watch number was around 300,000 and the rest were either made up or concocted by adding together guesstimates of the Iran/Iraq war death toll and the toll under sanctions. How do we know any of those numbers? I'm not sure. But people casually threw around these immense figures, sometimes almost in a competitive spirit, just to see who could condemn Saddam as the biggest mass murderer. I have never once seen an article that went into detail about where the estimates for Saddam's murders came from, and I also know from reading that there was a pretty wide range of estimates for his various crimes. For instance, the number of Kurds killed in the Anfal campaign has been claimed to be anywhere from 50,000 to 200,000. Usually you'd see the largest number cited.

There needs to be some sort of epistemological consistency here, because otherwise you have a casual acceptance of gigantic numbers attributed to Saddam, and much smaller (and almost certainly incorrect) numbers attributed to our own actions.

On the civilian/insurgent distinction, it's meaningless. If there were, say, 100,000 insurgents killed it just means that a huge number of people died fighting the invading Americans or the government we put into place.

We would never do this for Jews killed in the Holocaust or Americans killed on 9/11. Don't you agree? I've asked this question several times and have yet to receive an answer.

I think there's a lot less uncertainty in the body count for 9/11; the uncertainty that exists (eg deaths afterwards due to respiratory issues) shouldn't cloud the issue either.
If we were discussing 9/11, and I used one count and someone else insisted on adding some number of post-attack deaths to the count, I'd not want to quibble on the issue. A lot of people died, and the exact number probably isn't relevant to the moral, political, etc. questions around the attack.
Now if that's the question itself, eg as with Holocaust deniers, then by all means let's debate the numbers.

Destroying information is not the same as destroying human beings. Our relationship with history and memory is a fairly complex thing and I suspect that by your standard, there are billions of people in the world today who are accessories to ethnic cleansing.

That sounds right to me. There were/are people who minimized the atrocities of the contras, for example. Doing so not only apologizes for past terror, it enables terror in the future by making it clear that those who practice terror or back those who commit terror can hope to cover their tracks and not live out their days in infamy &/or a jail cell.
As a society, we mostly recognize Holocaust deniers as a special breed of scum. But we repeatedly fail to label similar activities as beyond the pale when it's less politically convenient. Ronald Reagan was a piece of human filth, and if there is a Hell he is roasting there alongside Goebbels. Why we view the one as a monster and the other as a genial, well-intentioned uncle is beyond me.

Actions that enable mass murder are morally complicit in mass murder. Apologists for terrorism are morally complicit in terror. Is it the same as pulling the trigger? I'm not sure- but I am sure that it's the mass of men willing to use evil and then wash their hands who enable the few genuine crazies to act out their destructive fantasies.

Indeed. Here's one showing that the Lancet study data are fabricated:
http://www.informaworld.com/smpp/content~db=all~content=a921401057

Where "fabricated" means "man, I sure wish this wasn't true".
To save anyone the trouble of reading this particular pile of dogcrap, I will provide an example:
The author notes that the L2 death rate/pop/month falls almost on a straight line with two other well-known studies. He then estimates the probability that this almost-coincidence was random (fairly low, he finds), and suggests that this is indicative of fraud.
Of course, there have been more than two well-known studies of death rates in conflict regions. What the author has proven is that he was able to cherry-pick two such studies, and cherry-pick a parameter, such that a near-straight line could be drawn.
I cannot describe how desperate or innumerate someone would have to be to fall for this sort of hand-waving.
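The cherry-picking objection above is a multiple-comparisons point, and a back-of-the-envelope calculation illustrates it. This is a sketch with made-up inputs, not figures from the paper: assume a pool of ten candidate studies and a 5% chance that any one pre-chosen pair lines up "suspiciously" well with the study under scrutiny.

```python
from math import comb

# Illustrative assumptions: 10 candidate mortality studies, and a 5%
# chance that any single pre-chosen pair yields a "suspicious" coincidence.
n_studies = 10
p_single_pair = 0.05

pairs = comb(n_studies, 2)  # number of pairs available to search through
p_at_least_one = 1 - (1 - p_single_pair) ** pairs

print(f"{pairs} candidate pairs")  # 45
print(f"P(at least one 'suspicious' coincidence) = {p_at_least_one:.2f}")  # ~0.90
```

A low probability for one hand-picked pair tells you little when dozens of pairs were available to choose from.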

Fred, maybe you've missed it somehow, but even if one shoots down L2 there is still L1 and the IFHS survey which both found death tolls that were several times higher than the IBC figures for the same time period.

That's what's consistently odd about the Lancet skeptics. The critics of Lancet 1 could not believe the true violent death toll could be several times greater than IBC's number. But that's the conclusion of the IFHS study that they cite nowadays to refute L2--it is in good agreement with L1 on the first 18 months.

I think IBC ended up doing more harm than good. We would know about as much about the death toll in Iraq if they had never existed, and their number is now cited as though it were an exact count.

Donald writes,

"Fred, maybe you've missed it somehow, but even if one shoots down L2 there is still L1 and the IFHS survey which both found death tolls that were several times higher than the IBC figures for the same time period."

Donald, I haven't missed them, and by "several" you mean 3 (or possibly 2, or possibly 5, or possibly less, or possibly more). Estimates are estimates, and in this context they have a lot of problems which means they aren't very definitive.

And really, the IFHS and L1 "find" similar things mainly via particular interpretations that are somewhat heavy handed. Similarity is not found in the data itself. In the case of L1, this meant excluding outlier data which results in a lower (but perhaps more reliable) central estimate. In the case of IFHS, contrarily, this meant adding on an arbitrary, but large, upward adjustment for presumed "under-reporting" in surveys, which L1 does not do. A seeming close similarity is created by these particular, and debatable interpretations. It is not found in the data. And I never said L1 or the IFHS had to be wrong anyway. Maybe they're right. And maybe not.

That's one of the reasons why it's good to also have IBC that gives a detailed record of facts that doesn't rest on these kind of abstract arguments and speculations about what might be likely or not likely.

Carleton says:

"To save anyone the trouble of reading this particular pile of dogcrap"

Yes, wouldn't want people to read it.

"I cannot describe how desperate or innumerate someone would have to be to fall for this sort of hand-waving."

As innumerate and desperate, I suppose, as the author, the journal editors and the peer reviewers. But not everyone can be as serious and scholarly as someone in a comment thread on the internet.

I support any and all uses of the New Pornographers in blog post titles.

Fred--That's a fair point about the agreement between L1 and IFHS. I would also add that the IFHS interviewers told people they came from the government--that wouldn't encourage me to be forthcoming if I had a son who had died at the hands of either the Americans or the Iraqi government or associated death squad.

"That's one of the reasons why it's good to also have IBC that gives a detailed record of facts that doesn't rest on these kind of abstract arguments and speculations about what might be likely or not likely."

It would be good to have them, if people (including, unfortunately, IBC spokespeople sometimes) didn't claim more for their data than they should. And anyway, if I recall correctly IBC data about most of their deaths ultimately comes from official sources. It wasn't the case that reporters on the scene tracked down most of those 100,000 civilian dead--there were numbers released from Iraqi government agencies. Sometimes reporters would go to morgues, but ultimately they reported the numbers they were given.

What I have taken away from the debate about the Iraq death toll is increased skepticism about most of the numbers I see in the papers. How do we know anything at all about Saddam's death toll? I remember reading one story in the Guardian (typically, not in an American paper) that of the mass graves which had been investigated from the Saddam era, the number of bodies uncovered was far less than had been expected. Which didn't keep Tony Blair from talking as though we had actually uncovered several hundred thousand bodies.

Here's the Guardian article I was remembering about Saddam's mass graves. I have never seen a similar article in the American press explaining how little we know for sure about Saddam's death toll--we just get large numbers reported with no background about where the numbers come from.

link

That's one of the reasons why it's good to also have IBC that gives a detailed record of facts that doesn't rest on these kind of abstract arguments and speculations about what might be likely or not likely.

If a guaranteed undercount is what you want, they've got just the product for you. Not that their product is necessarily a bad thing in and of itself- but it's very useful as propaganda for those who want to minimize the human tragedy of Iraq for political purposes.

In the case of L1, this meant excluding outlier data which results in a lower (but perhaps more reliable) central estimate

iirc L1 excluded outlier data from Fallujah to avoid biasing the results upwards, not the reverse.

Yes, wouldn't want people to read it.

Of course, people are free to read it; it's like Red State with footnotes, yippee. But I wouldn't want someone to read it thinking it was informative in any way.

As innumerate and desparate, I suppose, as the author, the journal editors and the peer reviewers. But not everyone can be as serious and scholarly as someone in a comment thread on the internet.

Congrats on looking as if you were responding without bothering to contest the single criticism I did make of the study. If only there were an award for intentionally ignoring things.

To my surprise, Morning Edition did a decent story on the Iraqi casualty controversy back in 2009. Note what the Iraqi morgue worker says about the statistics that are given to the media--

link

ah, misread that one piece, that's what you said about Fallujah.

I really wish people wouldn't use IBC as anything but a verified minimum. The problem statistically is that there's no way of knowing
1) at any given point in time, how good a proxy it is for overall violent deaths
2) whether that relationship changes over time as, eg, the violence shifted from insurgents v US to sect v sect
[nb this is similar to what is done by climate scientists with eg tree core series; by comparing recent series with known temperatures and other proxies, we can estimate how well our tree core models compare with known temperatures before using them to help extrapolate unknown temperatures]

It's clear that it's very likely a large undercount (although it probably contains incidents of overcounts or repeated counts as well). But the only way to know how much of an undercount would be to either count all of the deaths somehow, or use some statistical method to estimate the deaths and compare.
Unfortunately, there are those who use IBC to attempt to invalidate the very studies that ought to be measuring its success as a proxy.
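The calibration idea described above can be put in miniature. This is only a sketch: the figures below are invented for illustration, and real calibration would need matched coverage periods and matched definitions of a countable death.

```python
# Where a passive count (IBC-style) overlaps in time with a statistical
# estimate, their ratio gives a "capture rate" for that period. Watching
# that rate across periods shows whether the proxy relationship is stable.
# Both pairs of numbers below are hypothetical.
overlap_periods = {
    "period A": (14_000, 24_000),   # (passive count, survey estimate)
    "period B": (30_000, 90_000),
}

for period, (count, estimate) in overlap_periods.items():
    capture_rate = count / estimate
    print(period, round(capture_rate, 2))
# period A 0.58
# period B 0.33
```

If the capture rate drifts that much between periods, no single scaling factor can turn the passive count into an estimate of total violent deaths.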

ah, misread that one piece, that's what you said about Fallujah.

Although, in my defense, I can't imagine how you say this with a straight face. Are you saying that you wouldn't have criticized the study for leaving in an obvious high outlier and basing their headline statistical conclusion on that?

Turb- We would never do this for Jews killed in the Holocaust or Americans killed on 9/11. Don't you agree? I've asked this question several times and have yet to receive an answer.

Okay, here's my answer. How much I'd accept depends on the duration and scale of the conflict and the associated destruction.

I'd expect the number of Jews killed in the Holocaust would fluctuate for several years following a major war as people's hopes, fears, and spin settled down to more verifiable numbers and historians had a crack at building and vetting the archive. I think the info that ended up on the wrong side of the Berlin wall would take longer to firm up than those we had possession of. Having years to pore and wrangle over how best to interpret the records helps.

9/11 would never be in that sort of flux for long being so much smaller in scale, happening over such a short time, and being unaccompanied by widespread destruction of the related infrastructure. We have fairly complete records of who was there and our ability to keep and analyze those records was undamaged by the events.

So while I agree with you that those who cling to the IBC numbers do so out of a vested ideological interest in minimizing culpability I also don't think that their vested interest will carry the day in the years immediately following the end of this conflict.

Becoming curious, I do a little googling- who is Michael Spagat and what does he write? Apparently, he writes things such as studies of Iraqi war casualties (using IBC as a definitive source), or how Amnesty International has an anti-government bias in their reporting from Colombia (using a US State Dept source as a definitive source), or a study suggesting a high bias in cross-street cluster sampling (based on- get this- "estimated parameters" rather than any actual new data, estimating that it's "plausible" the surveyed population is about 5 times as exposed to violence as the non-surveyed population, with no quantifiable justification whatsoever).
And here I am, having to work for a living.

"Are you saying that you wouldn't have criticized the study for leaving in an obvious high outlier and basing their headline statistical conclusion on that?"

They excluded it from one of the headline conclusions, "100,000", but then smuggled it back in for another one, "84% by US forces", without really telling anyone. So it wound up looking to most people like 100,000 killed in violence, with 84,000 of them killed by US forces. I suspect there might have been somewhat less skepticism of it if the authors didn't try to spin it so much this way, and, for that matter, didn't try to pretend that their 100,000 was some kind of rock-bottom figure because it didn't contain the outlier, rather than the central estimate within a huge range of uncertainty, which it was.

"Congrats on looking as if you were responding without bothering to contest the single criticism I did make of the study. If only there were an award for intentionally ignoring things."

I think your criticisms are kind of a silly tantrum so I was going to just let them pass, but if you insist: "there have been more than 2 well-known studies of death rates in conflict regions." True, but not many more. This would be "cherry picking" 2 from maybe 3 or 4 relevant cases that might also be sensible to use for such an extrapolation.

This "cherry-pick a parameter where a near-straight line could be drawn," line just doesn't really make sense. It just plots the lines based on the number of months and population estimates actually used in the studies. Footnotes 20 and 21 discuss what might happen to the picture if you used a wrong number of months for one of the studies or a wrong number for a population estimate used in another of the studies, but concludes that it would barely change it. The person who quantified the chances of such an alignment cited in the paper is apparently Mark van der Laan, who is "Professor of Biostatistics and Statistics at UC Berkeley."
http://www.stat.berkeley.edu/~laan/Laan/laan.html

Clearly another innumerate.

This particular piece of the paper is somewhat separate from the rest in the nature of the argument too. This part is discussing the possibility of "falsification" which is defined as the creation of false data by one or more of the study authors (such as pre-determining the estimate by extrapolation from other studies), while most of the other issues focus on fabrication, which is defined as the creation of false data by one or more of the field workers (with or without the knowledge of any of the authors).

Spagat is pretty cautious about this particular falsification point, saying: "this three-point diagram (Figure 2) provides statistical evidence of data falsification although it is not definitive; we reject the hypothesis that the alignment arose by chance at the 5% level but not at the 1% level."

More broadly, the paper discusses "evidence relating to data fabrication and falsification, which falls into nine broad categories." You make some (weak) complaints about one of these categories, which constitutes about 2 pages in a 42 page paper (about 4% of the argument). Not a very good sign for someone complaining about others "intentionally ignoring things".

"Becoming curious, I do a little googling- who is Michael Spagat and what does he write?..."

Geez, a whole 15 posts went by before getting to the ad-hominems. You must be slacking. The one you reference on the issue of cross-street sampling is this one:

http://www.prio.no/Research-and-Publications/Journal-of-Peace-Research/Article-of-the-year/Article-of-the-Year-2008/

It was the 2008 Article of the Year in a peer-reviewed journal, in which "the authors show convincingly that previous studies [Lancet 2006] which are based on a cross-street cluster-sampling algorithm (CSSA) have significantly overestimated the number of casualties in Iraq." - Clearly another set of innumerate and desperate PhD authors, journal editors, peer-reviewers and award juries. The conspiracy is vast.

I suspect there might have been somewhat less skepticism of it if the authors didn't try to spin it so much this way, and, for that matter, didn't try to pretend that their 100,000 was some kind of rock-bottom figure because it didn't contain the outlier, rather than the central estimate within a huge range of uncertainty, which it was.

Yeah, they hid it so carefully that you had to actually read the abstract of the paper to find their secret. If only they weren't so incompetent at hiding it maybe it would never have got out. I think their error was publishing it- it's really tough to keep a secret when you f&$*ing publish it.
But you certainly are an ace detective, to have found their secret out. They tried to fool you with their publishing the secret, but there's no stopping you when you're on the trail. Ferreting out things that are hiding in plain sight.

True, but not many more. This would be "cherry picking" 2 from maybe 3 or 4 relevant cases that might also be sensible to use for such an extrapolation.

You really think there have been about 4 statistical studies of death rates in areas of conflict? You want to stick with that answer, or maybe try something else that's remotely plausible?
Or, we could make a little side bet- if I can name more than 5 articles, will you write a short essay about how great Barack Obama is and post it? Whereas if I can't find more than 5, I'll write one on Himmler or whomever your fav celebrity is these days.

This "cherry-pick a parameter where a near-straight line could be drawn," line just doesn't really make sense. It just plots the lines based on the number of months and population estimates actually used in the studies.

The point that you've missed is that the authors picked the studies (from a pool of more than 5), and picked the parameter to measure, and found a good fit, and then tested it statistically as if these were three data points measured independently.
If someone told you that George W Bush did a great job on the economy because the 11th, 15th, and 27th months of his presidency had more economic growth than the similar months of the administration of Chester Arthur, you would wonder if maybe those months and that president weren't picked for a reason.
Or, normal people would wonder that. You, apparently, would not.

The person who quantified the chances of such an alignment cited in the paper is apparently Mark van der Laan, who is "Professor of Biostatistics and Statistics at UC Berkeley." Clearly another innumerate.

If you want to attack papers written by a bunch of PhDs, you probably shouldn't be relying on the argument from authority. Why do you trust this authority and not the authority of the Lancet authors? Aside from the fact that one is telling you what you want to hear and the other isn't, that is.

Not a very good sign for someone complaining about others "intentionally ignoring things".

If I said that I'd reviewed the whole paper you might have something resembling a point. That was the first thing I saw, and it was pretty awful- one does *not* manually select data points beforehand and then subject them to a statistical analysis as if they were randomly selected.
I mean, I could show that the article in question has *exactly* the same number of words as several other published articles, and claim that the odds of this are vanishingly small. And they would be vanishingly small, if I selected them at random. If I hand-pick the articles to measure, then there is no statistically valid measurement to make, because the sample is not randomly selected from the universe of possible candidate articles.
That is literally stats 101. Have you *taken* stats 101?
I mean, even if we took your silly claim that there are only 5 papers of this type other than the one under examination, that's already 10 different combinations of 2- if he tried all 10 and manually selected the best fit (which he certainly appears to have done) that would have a strong effect on his probability assessment.
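The size of that selection effect is easy to compute. A minimal sketch, assuming (unrealistically) that the C(n,2) pairwise comparisons are independent, which they would not be in practice:

```python
import math

def familywise_error_rate(n_candidates: int, alpha: float = 0.05) -> float:
    """Chance that at least one of the C(n,2) candidate pairs passes an
    alpha-level test by luck alone, under an independence assumption."""
    n_pairs = math.comb(n_candidates, 2)
    return 1 - (1 - alpha) ** n_pairs

# With 5 candidate studies there are 10 pairs an analyst could try:
print(math.comb(5, 2))                     # 10
print(round(familywise_error_rate(5), 3))  # 0.401
```

So an alignment "significant at the 5% level", selected after the fact from ten possible pairings, would be expected by chance roughly 40% of the time under these simplifying assumptions.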

It was the 2008 Article of the Year in a peer-reviewed journal, in which "the authors show convincingly that previous studies [Lancet 2006] which are based on a cross-street cluster-sampling algorithm (CSSA) have significantly overestimated the number of casualties in Iraq." - Clearly another set of innumerate and desperate PhD authors, journal editors, peer-reviewers and award juries.

First, argument from authority again. Kind of embarrassing for you to keep going back to that particular well.
Second, that paper does not contain a single shred of new data. Or new analysis. It merely speculates about how far off the method used could be if the author's speculative theory that this method of sampling produces overcounts is correct. It is literally this simple:
1) I think that this method overcounts casualties because cross-streets probably have more casualties
2) If cross-streets have 5 times as many casualties, then the study's estimates will be about 5 times too large (with some corrections for e.g. movement from one area to another)
3) I think my assumptions are probably correct, but let's vary them up and down to make them look like some kind of range of results that would arise from actual data rather than speculation.
I mean, when I said "speculated parameters" and "plausible" parameters, I wasn't using scare quotes. Those are quotes from the paper itself.
To wit, here is the money paragraph, where the author takes a wild-ass guess as to how much more mortality is likely on cross-streets than on other streets:

(1) The relative probability of death for anyone present in Si (regardless of their zone of residence) to that of So is q = qi/qo. It is likely that the streets that define the samplable region Si are sufficiently broad and well-paved for military convoys and patrols to pass, are highly suitable for street-markets and concentrations of people and are, therefore, prime targets for improvised explosive devices, car bombs, sniper attacks, abductions and drive-by shootings. Given the extent and frequency of such attacks, a value of q = 5 is plausible. Indeed, many cities worldwide have homicide rates which vary by factors of ten or more between adjacent neighbourhoods (Gourley et al., 2006).

That is literally all he has. There is some valid statistical work in pointing out the errors possible in this type of sampling, what factors will cause what magnitude of errors, etc. But pulling a number totally out of his ass and claiming that this is a likely number, and then using that ass-generated number to claim that the study vastly overstates mortality, is sheer grandstanding. It has no basis in the data.
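For what it's worth, the structure of the argument being disputed here (as opposed to the disputed value of q) can be written down as a toy model. This is my own simplification, not the paper's actual formula, and the fraction f below is a purely hypothetical input:

```python
def overstatement_factor(q: float, f: float) -> float:
    """Toy model: a survey samples only a zone holding fraction f of the
    population, where the death rate is q times the rate elsewhere. If the
    zone's rate is extrapolated to everyone, the estimate exceeds the true
    national total by this factor."""
    true_avg_rate = f * q + (1 - f)  # elsewhere-rate normalized to 1
    return q / true_avg_rate

# With the paper's illustrative q = 5 and a guessed f = 0.1 (one tenth of
# the population on samplable cross-streets; my number, not the paper's):
print(round(overstatement_factor(5, 0.1), 2))  # 3.57
```

The point of contention in the thread is visible in the code: the whole output is driven by q and f, neither of which comes from measured data.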

I'm imagining how Dr. Fred would take it if one of the Lancet paper's authors just yanked a number out of their asses and used it to estimate hundreds of thousands of casualties. Presumably, he wouldn't be so amazingly amazed by their credentials that he wouldn't notice this. Yet when faced with what amounts to a well-publicized *guess* that the 2006 Lancet paper was off, he somehow misses the call.
And what's really amazing is that they *published* it, and usually you're pretty good about finding out stuff that's been *published* for you. But not this time.

Whereas if I can't find more than 5

ooh, I've got a better idea than an essay as your reward if I can't find 5- I'll give you your nuts back. Deal?

nous: We are, at worst, bracketing them off and suspending their utility as an obstruction for the time being in an effort to get enough buy-in to stop killing even more people for no gain.

We had an election in the UK on Thursday and while there are much larger aspects to it which I want to blog about on my own journal, there was one tiny piece of news that nevertheless stuck out for me.

In England, besides being a Parliamentary election, it was also a council election. In Barking, 12 councillors from one party all lost their seats. I don't live or work there - I don't live anywhere near there or shop there or take public transport there - but it made the national news because those 12 councillors were all British National Party (BNP).

The BNP are a party of white supremacists, holocaust deniers, aggressive right-wingers. They would be fascists, except they have never had a state to be a fascist party of. Sporadically, they gain a little power (they got 12 councillors in Barking in a local council election with a horrifically low turn-out: they have two MEPs who got in at the end of a list vote, also in a very low turn-out) but they are a party vulnerable to high turn-outs and tactical voting, since, for the most part, their views are anathema to the great majority of people in the UK, and they never get tabloid support because they don't have any political clout to be worth pandering to.

I understood far more about US politics when I realized that the Democratic party are the US equivalent of the Conservatives: the Republican party are the US equivalent of the BNP.

The Republicans have managed to make it unthinkable in the US to seriously discuss the number of Iraqis who have been killed by the US war on Iraq. Just as the BNP would, if they had the political power of the Republicans, make it unthinkable in the UK to seriously discuss the number of Jews killed by the Nazis in the Holocaust.

That it's regarded now as quite sensible to stick to the lowest count, the only one that your holocaust-deniers will allow you to use without attacking you for it, just shows what power like that will do to honest discussion.

I don't accuse Eric of anything more than being sensible: had he cited his own beliefs as he quoted the even-more sensible people who wrote this kindasortacount of the costs, doubtless he would have found himself under attack by worse than me. Much more sensible to let me or someone like me put forward the facts in a comment for which he cannot be held responsible.

Just because the Berlin Wall was mentioned in passing: there is a heated debate in Germany about how many people died at the Berlin Wall, and it's not one between GDR apologists and everyone else (i.e. not a mini Holocaust-denial analogue). One would assume that it would be much easier to get agreement on such a small sample, where no one has to risk death assembling the data, but that's obviously not the case.

"Yeah, they hid it so carefully that you had to actually read the abstract"

If this is supposed to mean that the large CI for the 100k is in the abstract and the paper, of course it is. I was referring to how the authors, and many of their supporters, insisted on presenting their 100k as some kind of rock-bottom minimum figure. One quote that comes to mind is "There’s no chance it could have been only 30 or 40 [thousand].", and attacking people for pointing out that their 100k was really just a central estimate within a huge range.
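For concreteness, the 2004 study's widely reported numbers illustrate what a confidence interval does and does not say (figures as commonly cited, excluding the Fallujah outlier):

```python
# Lancet 2004 (excluding Fallujah), as widely reported: ~98,000 excess
# deaths, 95% CI of roughly 8,000-194,000. A point estimate sitting
# inside an interval is not a floor: values below it are consistent
# with the data at the same confidence level.
point_estimate = 98_000
ci_low, ci_high = 8_000, 194_000

assert ci_low < point_estimate < ci_high
# Values like 30,000 or 40,000 fall inside the interval too:
for claim in (30_000, 40_000):
    print(claim, ci_low <= claim <= ci_high)
# 30000 True
# 40000 True
```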

"You really think there have been about 4 statistical studies of death rates in areas of conflict?" No, but I do think there have been very few national estimates of violent deaths in wars produced by sample surveys (and certainly "well-known" ones), which is what lines up so strangely here. If you extend this to ones that measured just overall mortality rates during a war, or "excess deaths" you could add more examples, but this would be bringing in oranges to compare to apples in this context. In any case, I'd be interested to see a list of ones you believe are relevant.

On the issue of "appeal to authority", I completely agree with you. Such appeals were what I was responding to, and why I was stressing such appeals in a way where I thought the sarcasm would not be difficult to see. If we must accept something like the Lancet article because it was written by persons with PhDs or because it was in a peer-reviewed journal, then...

"If I said that I'd reviewed the whole paper you might have something resembling a point. That was the first thing I saw,"

...because this was the only thing attacked on Deltoid, while the other 96% of the paper was again ignored. And so this carries over to your regurgitations here.

"But pulling a number totally out of his ass and claiming that this is a likely number, and then using that ass-generated number to claim that the study vastly overstates mortality is sheer grandstanding. It has no basis in the data."

I don't think the paper settles on which bias factor is the most likely one. However, the authors of the Lancet paper did do exactly what you describe. They picked a bias factor for their methodology of 0 (which they "ass-generated", to use your charming phrase). Then they proceed not just as if this is likely, but as if it is a certainty, and build their estimates accordingly. The contribution of the paper on cross-streets is to show that this "ass-generated" assumption about their sampling methodology, on which all of the Lancet estimates rest, has no foundation, and the true bias factor could be very extreme in the context of the Iraq war.

Fred: I was referring to how the authors, and many of their supporters insisted on presenting their 100k as some kind of rock bottom minimum figure.

You know, Fred, I would suggest you go educate yourself on how cluster sample surveys (the method used for the Lancet survey) will invariably produce an undercount, or what the term "confidence interval" actually means, but I gather that your ideological beliefs have made it impossible for you so to educate yourself up to now, and I can't see how that could change.

Suffice to say: those who understood the methodology of cluster sampling (which is simple enough that anyone capable of throwing stones at the side of a barn can understand it, if they want to) could see why the Lancet results had to be an undercount of mortality: and anyone who understands confidence intervals in data, would understand that 100,000 was in fact a rock-bottom figure.

Anyone determined not to understand either had a rooted ideological objection to the figures, or had read only the "explanations" of those with that rooted ideological objection.

Go read How To Lie With Statistics, Fred. It's a great little book, and it will mean that you only need to be fooled by ideologues if you want to be fooled...

"cluster sample surveys ... will invariably produce an undercount" ... "anyone who understands confidence intervals in data, would understand that 100,000 was in fact a rock-bottom figure." ...etc.

Sheesh. You are really misled.

Criticism of L2 would be easier to take if the critics would admit that the IBC number is almost certainly a huge undercount. It would not surprise me in the slightest if there is some sort of bias in L2's data, but then, it wouldn't surprise me if there was a bias in the study published in the NEJM. Saying that you work for the very government that is torturing and killing many of its own citizens may not be the best method for encouraging full disclosure.

As for IBC, every single poll and survey I've read about seems to indicate they're missing the majority of the violence and the funny thing is, at times they at least pay lip service to this, though much less so after L1 came out. There was, for instance, a poll in early 2007 that found that 17 percent of the households had suffered at least one death or serious injury, which suggests a minimum of over half a million dead or wounded, and possibly a lot more if casualties are not randomly spread among households, but tend to cluster (as would be the case if family members are often found together when a bomb goes off or a death squad pays a visit). Plus the worst hit households, the ones that were wiped out or dissolved, would not show up in that poll.
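The arithmetic behind that "over half a million" reading can be sketched. Note that the population and household-size figures below are my own rough assumptions for illustration, not numbers from the poll:

```python
# All inputs here are rough assumptions, not poll data:
population = 27_000_000        # approximate Iraqi population at the time
avg_household_size = 7         # assumed average household size
harmed_share = 0.17            # households reporting >= 1 member harmed

households = population / avg_household_size
# Floor: count only one casualty per affected household.
minimum_casualties = harmed_share * households
print(round(minimum_casualties))  # 655714
```

Under these assumptions the one-casualty-per-household floor lands above 650,000; clustering of casualties within households, and households wiped out entirely, would push the real total higher, as the comment notes.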

@nous:
We are not "participating in the purging of hundreds of thousands of deaths from memory." We are, at worst, bracketing them off and suspending their utility as an obstruction for the time being in an effort to get enough buy-in to stop killing even more people for no gain.

A somewhat relevant exercise: go grab a random American voter and ask them what the death toll for the US's intervention in Vietnam was. I'm going to guess if they don't give you a blank stare, they may well throw out a number like 50k (or 58k if they're trying to be precise). If you're clear about wanting a total or civilian figure, I've the utmost confidence you'll be straight back into blank stare territory.

Maybe I'm just cynical, but I can't even imagine there being a strong push in the future for public opinion to acknowledge that the US invasion of Iraq resulted in more deaths than what's proclaimed by the conventionally accepted narrative that this sort of "pragmatic" bracketing off helps legitimize.

I guess I'm in this monolith of "the critics", so...

"every single poll and survey I've read about seems to indicate they're missing the majority of the violence"

From page 39 of the Spagat paper cited above:

(2) The ILCS estimated 24,000 war-related deaths of civilians and combatants compared to an IBC figure of about 14,000 deaths of civilians for the ILCS coverage period.

(3) Benini and Moulton (2004), a study of Afghanistan since 2001 done by colleagues of the L2 authors at Johns Hopkins, compared mortality estimates from a population-based survey with a body count based on media monitoring that used methods that inspired IBC’s approach (Herold, 2004). The survey found 5576 killed. This compares to a media-based count of 3620 civilians killed for the same period.

But as I said before, I think most of the surveys (let alone polls) have a lot of problems in this context.

"There was, for instance, a poll in early 2007 that found that 17 percent of the households had suffered at least one death or serious injury, which suggests a minimum of over half a million dead or wounded"

Interpretation is again a big question here. I think you're talking about this BBC poll:
http://abcnews.go.com/US/story?id=2954886
http://news.bbc.co.uk/2/shared/bsp/hi/pdfs/19_03_07_iraqpollnew.pdf

And your data point comes from question 35. This question does not ask about "death or serious injury", but about "physical harm", which can be interpreted much more loosely than you allow. It also gives no way to separate what portion might be deaths vs. other types of "physical harm". Another issue is that the household boundaries are not set in a credible way, but are set by requesting in the question that the respondents set them for the purpose of that question. It's unlikely that 100% of respondents followed this request. That approach may be fine for the purpose of getting a percentage for an opinion poll, but not for conversion into a national estimate. Another issue is that the response rate is 56% for the poll, leaving it unclear what the numbers on physical harm might be for 44% of the sample. Another issue is that there is some oversampling of some cities which are violent, but I can't tell whether the percentages given for q35 are weighted accordingly. In short, it will take a lot of wild guessing to say what number of deaths is correct or incorrect, reasonable or unreasonable, relative to this poll. I think people may just read into it what they'd like to see.

Fred: “...if the IBC numbers were 2 million, then each of these 2 million deaths would be supported by cited reports with documentary evidence on the place, time and location of the persons killed....”

Anyone who believes this should just try looking at the IBC database. There are numerous “incidents” such as this one (incident code x073): 1482-2009 deaths recorded by 19 Baghdad hospitals. So the place is somewhere in Baghdad. The time is any time from 20th March to 9th April 2003. The location of the bodies is not specified, but if you follow the link to The Age, one of the two newspapers cited as a source (the other link has rotted), you’ll learn that some of the bodies were buried in the hospital grounds.

Large “block entries” like this account for a substantial proportion of IBC’s numbers. (I checked a couple of years ago and found the proportion was about one-third.) If the total was embarrassingly large, rather than conveniently small, warbloggers would be derisively waving this “documentary evidence” in the faces of critics of the war, dismissing the numbers as mere guesses, many of them supplied by parties with an axe to grind. Nor would they be wrong: many estimates come from the Ministry of Health during the period when it was controlled by the Sadrists. It seems to have suited them to play down the numbers. In other circumstances it might have suited them to do the opposite; then we would be hearing that cluster surveys supervised by health specialists are really a much more trustworthy source.

"Anyone who believes this should just try looking at the IBC database. There are numerous “incidents” such as this one (incident code x073): 1482-2009 deaths recorded by 19 Baghdad hospitals."

I've looked at the IBC database, and I believe it (hey, I wrote it). Even a bulk entry of that type already has more documentary details on the victims, and a greater basis for verification, than has been provided for the (smaller, even than that one entry) number of violent deaths recorded by any household survey in Iraq to date. Nor do such surveys resolve questions about parties supplying the raw information. In the case of the Lancet, the data is supplied by an anonymous Iraqi field team and by anonymous respondents. If people can find a way to question or dismiss raw data supplied by "19 Baghdad hospitals" to some reputable news agencies, they can certainly do the same for the former. (Unless you believe data becomes more reliable the less you know about who's providing it.)

I'd probably agree that it would be easier to question the veracity of something like IBC data if the proportion of bulk entries of that type is higher rather than lower.

I think that's the poll I was remembering, but I don't think "physically harmed" is that hard to interpret--it means dead or wounded and so one should only strike "seriously wounded" from what I said. I didn't say one could distinguish between dead and wounded. I somehow doubt that the 44 percent who didn't respond would be mostly people happy with how things were going in Iraq. The poll can't provide anything resembling a precise number, but when one household out of six responding says that they've had at least one person physically harmed, I think it's some evidence that at least one household out of six has suffered a casualty.

" Even a bulk entry of that type already has more documentary details on the victims, and a greater basis for verification, than has been provided for the (smaller, even than that one entry) number of violent deaths recorded by any household survey in Iraq to date."


The number comes from a government which tries to minimize the number of deaths. Believing such figures are reliable is exactly like believing whatever numbers might have been provided by the government in Afghanistan in the 1980's. Anyone citing such a source as superior to "any household survey" would have been (rightly) mocked by all the people who embrace such a source in this particular war.

Supposing one could show that the L2 survey was fraudulent, it's going way too far to embrace the officially provided numbers of a lying government over any household survey.

This is a pretty good example of the harm IBC could do--we've now got people talking about a methodology that amounts to a combination of some limited honest reporting combined with officially provided statistics as the gold standard for casualty statistics, at least if it is for one of our wars. If it is some other war, then surveys or guesses pulled out of the air (the bigger the better) will be preferred.

"I think that's the poll I was remembering, but I don't think "physically harmed" is that hard to interpret--it means dead or wounded and so one should only strike "seriously wounded" from what I said."

"It means" = "I guess it to mean". What about someone who was raped, or kidnapped? There's plenty that can come into that besides violent deaths and injuries.

"I didn't say one could distinguish between dead and wounded. I somehow doubt that the 44 percent who didn't respond would be mostly people happy with how things were going in Iraq."

The issue is their unknown proportion of physical harm.

"The poll can't provide anything resembling a precise number, but when one household out of six responding says that they've had at least one person physically harmed, I think it's some evidence that at least one household out of six has suffered a casualty."

That's what you'd choose to guess, obviously.

"The number comes from a government which tries to minimize the number of deaths."

The number looks like it comes from 19 separate hospitals in Baghdad in 2003. I've seen no evidence at all that they were trying to minimize anything. I have seen some allegations of this type much later than that. But the issue I was discussing was not completeness, but whether the deaths reported in such a case, or in the raw data of a survey, were themselves true deaths. Whether some number of true deaths had been "minimized" to some smaller number of true deaths was not the issue.

"This is a pretty good example of the harm IBC could do--we've now got people talking about a methodology that amounts to a combination of some limited honest reporting combined with officially provided statistics as the gold standard for casualty statistics, at least if it is for one of our wars. If it is some other war, then surveys or guesses pulled out of the air (the bigger the better) will be preferred."

I can't make much sense of this. I don't think there is a "gold standard" for casualty statistics. Nor does it make sense to me why one would only be interested in making sure any statistics were only equally good or (as you seem to be fine with) equally bad across different wars. Seems like you'd prefer a race to the bottom as long as it brings some kind of weird form of equality across wars. Doesn't make sense to me. Seems like people should use whatever methods help get as much quality information about it as possible.

I can't make much sense of this. I don't think there is a "gold standard" for casualty statistics.

You rather seem to be arguing that media-reported casualties (generally derived 2nd to 4th hand from governmental sources) would be just that.

Seems like people should use whatever methods help get as much quality information about it as possible.

...where "quality information" means... what? "Information that supports my preconceptions?"

" Seems like you'd prefer a race to the bottom as long as it brings some kind of weird form of equality across wars. Doesn't make sense to me. Seems like people should use whatever methods help get as much quality information about it as possible."

This is stupid. Deliberately so. I complain that a method guaranteed to produce undercounting is used for the deaths inflicted in US wars, and other methods (sometimes accurate, sometimes just guesses) which produce larger numbers are used for those of our enemies. You come back with the above. You aren't arguing in good faith.

Fred: Sheesh. You are really misled.

Yeah: both my education and my upbringing have misled me to prefer a scientific understanding of the data available. Sad really, but there you go.

Or, what Donald said: This is stupid. Deliberately so.

If you extend this to ones that measured just overall mortality rates during a war, or "excess deaths" you could add more examples, but this would be bringing in oranges to compare to apples in this context.

This is just a classic example of what people who don't understand stats try to do. If you've got a theory, you test the theory. You don't cast your net just wide enough to catch a sample that turns out to be statistically significant and then, post-facto, declare that that's the pool you wanted to sample, presenting the results as if the criteria were decided without knowledge of the data.
If you hear someone say "5 of the top 7 recipients of funds from X industry belong to party Y", you can be pretty sure that the 8th is from the other party, and maybe the 9th too (leaving 5 of 9, not that impressive). And it's entirely possible that they've chosen their other criteria (exact definition of "the industry" in question, the time window, etc) to produce exactly that split. They've manually chosen the screen to emphasize the distribution they want to prove, but they've actually demonstrated nothing, really, other than their desire to see a predetermined outcome.
In any given case, the post-facto screen can probably be defended on specific points, but that's not how it works in statistics. You don't hunt for the relationship and then define the terms- not if you're honest about it. At a minimum, you don't then analyze via statistics that assume an honest predefined screen.
To recap, the basic errors committed by this chart:
1)He's clearly choosing his sample, not randomly selecting or putting up the entire universe of possible results; even if there are only 5 papers which could possibly fit his criteria, that's 10 possible 2-point combinations. He used his numbers to reject the null at 5% but not 1%. But with 10 trials and picking the 'best' candidate, his actual p-value would be much higher.
2)There's no reason to think that he wouldn't have expanded his search to a larger pool of candidates if he didn't find the result he wanted to see. Or change the item being measured- maybe he'd look for correlations in the rate of civilian/military casualties over time, ratios of females/males, adults/children, etc. The point is, when one is dishonest, one can probably find statistical 'evidence' to support a case by hunting carefully enough. Does the economy do worse when Democrats are in power? First, you check periods when Dems control the WH. Or Congress. Or, just the Senate or the House. You expand your time periods- maybe there's a 2 year lag, or a 4 year lag. You expand your correlations to state governments (more data = more possible combinations of screens). You find that "When Dems control a state legislature, economic growth in that state is 25% lower than average (given a 3-year lag for policies to take effect) in the study period of 1983-1996". That looks impressive to the layman, but it's garbage from a statistics standpoint.
3)Even worse statistically, he ignores that these points aren't independent of each other. That is, we absolutely expect more people to be killed over longer periods of violence. If you plotted every possible study on a chart, we'd expect a strong general trend from lower-left to upper-right. But he treats them as independent. This is, once again, really basic stuff, obvious even to the layman.
If the points aren't truly independent (ie the underlying process drives some basic trends such as this one), then you overestimate the rejection of the null. Or, to put it another way- the null hypothesis for unrelated casualties-over-time studies is a general trend from lower left to upper right, not a random collection of data points.
4)Finally, on a non-stats point, I'm not sure why the Lancet authors would commit fraud in this manner. There's absolutely no value to their paper in having this value fit a perfect line with two unrelated studies.
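Point 1 above is easy to demonstrate with a toy Monte Carlo. The candidate counts below are made up for illustration, not taken from any of the papers under discussion:

```python
import random

random.seed(0)

def best_of_candidates(n_candidates, trials=20000):
    """Simulate a null world: draw n_candidates independent p-values
    (uniform when no real effect exists), keep only the smallest one,
    and report how often that 'best' p-value comes in under 0.05."""
    hits = 0
    for _ in range(trials):
        best = min(random.random() for _ in range(n_candidates))
        if best < 0.05:
            hits += 1
    return hits / trials

print(round(best_of_candidates(1), 2))   # one pre-registered test: roughly 0.05
print(round(best_of_candidates(10), 2))  # best of 10 screens: roughly 0.40
```

So a nominal 5% test, applied after screening 10 candidate pairings, really carries a false-positive rate of about 1 - 0.95**10, or 40%, which is the sense in which the "actual p-value would be much higher."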

On the issue of "appeal to authority", I completely agree with you. Such appeals were what I was responding to, and why I was stressing such appeals in a way where I thought the sarcasm would not be difficult to see.

Credible, if I had made such an argument. Weird if you're using this in a response to my argument, since I hadn't. Downright bizarre when you only use this 'sarcasm' argument as a defense and fail to offer your real defense of these questions.
Sarcasm doesn't mean "making a bad argument and then pretending to have not meant it when it gets wrecked".

...because this was the only thing attacked on Deltoid, while the other 96% of the paper was again ignored. And so this carries over to your regurgitations here.

I didn't read it on Deltoid, actually; your track record of being wrong gets another notch. But there's a general principle at work here- if one part of a paper shows profoundly poor quality, there's not much chance of the rest being good.
Also, it's the easiest to analyse without referring to data- I've got no way to check their assertions about the data, but I can easily spot a hack stats job.

I don't think the paper settles on which bias-factor is the most likely one. However, the authors of the Lancet paper did do exactly what you describe. They picked a bias factor for their methodology of 0 (which they "ass-generated", to use your charming phrase). Then they proceeded not just as if this were likely, but as if it were a certainty, and built their estimates accordingly.

They chose a sampling regime that they thought produced a reasonably representative sample. And, as I said, they could reasonably be criticized by someone for that point. But real criticism would require some actual data, not a totally unsupported guess that their sample wasn't representative. The only support he provides is a homicide gradient found in some cities, but 1)there's no evidence that this homicide gradient has anything to do with cross-street v non-cross-street homes (and casual experience suggests that this is not the case: Parts of Manhattan are safe and parts of Brooklyn aren't- it's not that big streets in NYC are very safe and small streets are very dangerous) 2)there's no evidence that any Iraqi site has a similar gradient, or a demonstration that such gradients are common and 3)there's no reason to think that criminal homicide rates and death rates due to an occupation/civil war are similar in distribution.
So, it's a totally trivial result: yes, if this method of cross-street cluster sampling oversamples violent areas, then it will produce a high estimate. That is so obvious that it's barely worth saying out loud. It's like saying that a drug trial would overestimate the effect of the drug if the study allowed people who weren't showing positive effects to drop out- useless without data demonstrating that this actually occurred.
His criticism can be generalized to any cluster sampling- the nature of cluster sampling makes it vulnerable to that sort of attack. But in order to mount a credible attack, one has to show that the sampling technique *actually* shows bias, not that it *can*. Cluster sampling *can* always have bias. Any sampling regime other than totally random selection *can* have bias.

Nor does it make sense to me why one would only be interested in making sure any statistics were only equally good or (as you seem to be fine with) equally bad across different wars.

So, we don't want to be able to make apples-to-apples comparisons of various conflicts across the world and over time? That seems like an incredibly valuable thing to be able to do in and of itself.

Donald, I am arguing in good faith whether you want to believe it or not. What I wrote is what your statements suggest to me.

Jes, If you'd prefer scientific understanding you really need to re-evaluate the comments you made above, which I quoted, because they are completely wrong, and I think plenty of the other resident experts here could tell you the same. They just won't pipe up to say so because you're a 'good guy' and that would take time away from smearing 'bad guys' like me, or IBC.

Jes: They just won't pipe up to say so because you're a 'good guy' and that would take time away from smearing 'bad guys' like me, or IBC.

I think you'll find that I really don't have "good guy" status on Obsidian Wings: more like "supremely irritating guy" status. AFAIK, every single regular commentator on this thread is normally absolutely delighted to point out when they think I'm wrong.

But I'm not. And you are.

Enough meta! Back to demolishing your misunderstanding of statistical and scientific methodology!

Jes, If you'd prefer scientific understanding you really need to re-evaluate the comments you made above, which I quoted, because they are completely wrong, and I think plenty of the other resident experts here could tell you the same. They just won't pipe up to say so because you're a 'good guy' and that would take time away from smearing 'bad guys' like me, or IBC.

OK, now we're moving from tragedy to farce. Are you high? Because the notion of people not arguing with Jes because she's the 'good guy' is just...well, it takes my breath away. Let me clue you in: you are very very wrong here.

Look Fred, I'm sure your trolling is a lot of fun, but seeing as how you've been completely outclassed by several people on this thread, why don't you scurry on along back under whatever rock you came from? I mean, at least until you learn something about statistics or epidemiology or demography. Because the level of ignorance you're displaying is just plain sad.

"If you extend this to ones that measured just overall mortality rates during a war, or "excess deaths" you could add more examples, but this would be bringing in oranges to compare to apples in this context.

This is just a classic example of what people who don't understand stats try to do. If you've got a theory, you test the theory. You don't cast your net just wide enough to catch a sample that turns out to be statistically significant and then, post-facto, declare that that's the pool you wanted to sample, presenting the results as if the criteria were decided without knowledge of the data."

"People who don't understand stats" think we should be comparing apples to apples. While, apparently, people who "understand" think we should be comparing trends in violent deaths to ones in crude mortality, or maybe to rates of heart attacks, or car purchases. To do otherwise is "cherry picking".

In any case, I hope the confusion over what I said about the issue of how many examples are actually relevant here has been resolved. I would have considered the relevant examples to be ones measuring the same variable, and thus pointed out correctly that there aren't very many viable candidates to choose from. You, on the other hand, consider the viable candidates to be ones conflating measurements of all kinds of different variables. So, for example, Spagat should have selected the points randomly from some universe of survey estimates of violent deaths, survey estimates of crude mortality, poll questions on how many approve of Obama's job performance, etc. etc.

Maybe though what you should do is write a letter to the journal. If these grave problems are as "obvious" and "innumerate" as you suggest, it should be easy for you to convince the editors of their and their reviewers' huge mistake here, get yourself published, and move the discussion forward.

"3)Even worse statistically, he ignores that these points aren't independent of each other. That is, we absolutely expect more people to be killed over longer periods of violence. If you plotted every possible study on a chart, we'd expect a strong general trend from lower-left to upper-right. But he treats them as independent. This is, once again, really basic stuff, obvious even to the layman."

I think if this is obvious it is mistaken. They are three separate wars and there isn't really a reason why the one that went longer than the other should be higher on this chart. For example, if you believe the Lancet article, then about 2.4% of the country were killed in about 3.5 years. But another war may not kill that many over 10 years or 20 years.

"I didn't read it on Deltoid, actually"

Don't believe you, sorry.

On the CSSA paper:

"They [Lancet authors] chose a sampling regime that they thought produced a reasonably representative sample."

And the argument of the paper on CSSA is that this is not a reasonable choice. Earlier you wrote:

"I'm imagining how Dr. Fred would take it if one of the Lancet paper's authors just yanked a number out of their asses and used it to estimate 100ks of casualties."

In essence, that is precisely the argument about their choice of a sampling scheme. By choosing a CSSA scheme, this assumes that the bias for streets favored by this scheme is 0. This assumption is then used to estimate over 600k deaths.

Dr. Fred didn't really react to this, but the authors of the CSSA paper did, and good for them. They wrote an academic paper (rather than just a tantrum on a blog) showing that the "Lancet paper's authors just yanked a number out of their asses and used it to estimate 100ks of casualties."

But hey, I'd suggest you write a letter here as well. This one was even given a major award by the journal. If it is so obviously "trivial" and "useless", you should be able to again get yourself published easily, and move the discussion forward again.

Fred: I am arguing in good faith whether you want to believe it or not.

Fred: Don't believe you, sorry.

Res ipse loquitur.

Turb, putting aside the vacuous insults which waste most of the space in your post (you could have at least thrown in some words like "bad faith" or "poopy-head" to make it more fun), Jes' comments were,

"cluster sample surveys ... will invariably produce an undercount" ... "anyone who understands confidence intervals in data, would understand that [L1's] 100,000 was in fact a rock-bottom figure."

I know, and I'm sure several others here who've been very interested in attacking me know as well, that such statements are wrong. Yet no one cares to point this out. Why? I come to the simple conclusion that it is because Jes' comments (here, at least) are seen as helpful in this little revival of a witch hunt against IBC, and she is therefore a 'good guy' here, even if a 'bad guy' elsewhere.

On the point at hand I think that Prof Spagat yet again provides an excellent demonstration, in this case of the falsehood of something like "cluster sample surveys ... will invariably produce an undercount".

Really, something so simply wrong as that comment should not need to have evidence presented against it, especially when I know many people attacking me here know better too, but I found this here:
http://personal.rhul.ac.uk/uhte/014/Research.htm

Which lists all kinds of published papers relevant to conflict study (this 'hack' has been fooling just tons of journals, apparently). It also has some good draft papers.

One is this one:
http://personal.rhul.ac.uk/uhte/014/Pittsburgh%202009.pdf

It evaluates random cluster samples of various sizes drawn from a known universe. Needless to say, many such samples, particularly smaller ones, badly overestimate.
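That behavior is easy to reproduce in a toy simulation. The universe below is invented purely for illustration: when deaths are concentrated in a few hotspots, small cluster samples scatter wildly, badly overestimating whenever extra hotspots land in the sample and badly undercounting when none do, which is why "invariably produce an undercount" cannot be right as a general claim:

```python
import random

random.seed(1)

# Toy universe: 1,000 neighborhoods, each with a death count; violence
# is concentrated in a few hotspots (numbers are purely illustrative).
universe = [2] * 950 + [200] * 50
true_total = sum(universe)  # 950*2 + 50*200 = 11,900

def cluster_estimate(n_clusters):
    """Draw n_clusters neighborhoods at random and scale up to the
    whole universe, the basic logic of a cluster survey."""
    sample = random.sample(universe, n_clusters)
    return sum(sample) * len(universe) / n_clusters

estimates = [cluster_estimate(20) for _ in range(10000)]
over = sum(e > true_total for e in estimates) / len(estimates)
under_half = sum(e < 0.5 * true_total for e in estimates) / len(estimates)
print(round(over, 2), round(under_half, 2))  # sizable fractions both ways
```

The estimator is unbiased on average, but any individual small cluster sample can land far above or far below the truth.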

Lots of other good and informative papers there on the Lancet study too, besides the two I already mentioned earlier. Many here will read with great interest I'm sure.

"Does the economy do worse when Democrats are in power? First, you check periods when Dems control the WH. Or Congress. Or, just the Senate or the House. You expand your time periods- maybe there's a 2 year lag, or a 4 year lag. You expand your correlations to state governments (more data = more possible combinations of screens). You find that "When Dems control the a state legislature, economic growth in that state is 25% lower than average (given a 3-year lag for policies to take effect) in the study period of 1983-1996". That looks impressive to the layman, but it's garbage from a statistics standpoint."

I'd like to register this opportunity to completely agree with you Carleton, and wish you had been around for the at least 10-15 times I had that exact discussion with cactus at the angrybearblog. (And we won't even touch on the problem of identifying economic programs by which a comparison of Presidents by party would lead you to believe that Clinton had more economic policy in common with FDR than he did with Reagan.)

envy - "A somewhat relevant exercise: go grab a random American voter and ask them what the death toll for the US's intervention in Vietnam was. I'm going to guess if they don't give you a blank stare, they may well throw out a number like 50k (or 58k if they're trying to be precise). If you're clear about wanting a total or civilian figure, I've the utmost confidence you'll be straight back into blank stare territory."

Yeah, I get this a lot in the classes I teach about war, rhetoric and media. You are right that this is a problem, but it's not *the same problem* that Turb focuses on above. It's a question of pedagogy, really, and about how to get to where you are asking and answering the right questions at the right time. Questions of what war looks like and who is affected by it are different than questions of policy. Students are much more resistant to the latter if they sense any ideology at work.

Probably, not much of an answer to things and a bit of a threadjack itself, but it's where I'm coming from here.

You are right that this is a problem, but it's not *the same problem* that Turb focuses on above. It's a question of pedagogy, really, and about how to get to where you are asking and answering the right questions at the right time.

I'd not argue that this isn't the matter at hand. What I would argue is that the American body politic (to say nothing of the keepers of its conventional wisdom) seems unlikely to decide at a later date that now it's finally the right time to be asking these questions. Past experience from all our recent wars very strongly suggests otherwise; the tendency is to entrench as pleasing a narrative as can be managed, then ram fingers firmly into the collective ears.

In any case, I hope the confusion over what I said about the issue of how many examples are actually relevant here has been resolved. I would have considered the relevant examples to be ones measuring the same variable...

You are missing the point in a fairly obtuse way. Perhaps intentionally. The point is that there is more than one variable in the study besides deaths per month over time (nb I gave some examples). So an unscrupulous statistician could pick any one of those variables and analyse it for a fit against any one of a number of other studies. Finding a good candidate fit, he then analyses this fit as if it were the only candidate- but obviously, if you analyse eg 100 candidate papers and variables, you would expect to find on average one candidate that reached a p value of .01.
Because that's what p value *means*- that he expected to find that degree of correlation about 1% of the time. Hardly surprising when selected from a significant pool of possible matches.
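The arithmetic behind that expectation checks out directly; the candidate counts here just echo the example above:

```python
def prob_false_hit(n, alpha=0.01):
    """Chance of at least one spurious p < alpha among n independent
    candidate comparisons when the null is true for every one of them."""
    return 1 - (1 - alpha) ** n

print(round(prob_false_hit(1), 3))    # a single pre-registered test
print(round(prob_false_hit(100), 3))  # screening 100 candidates: roughly 0.63
```

And the expected number of spurious hits at p < .01 over 100 candidates is 100 * 0.01 = 1, exactly the "on average one candidate" above.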

So, for example, Spagat should have selected the points randomly from some universe of survey estimates of violent deaths, survey estimates of crude mortality, poll questions on how many approve of Obama's job performance, etc. etc.

Well, he's going to have to pick from a pool of studies that measure similar variables. Which presumably you know, but one can never tell with you what you understand, what you mistakenly think you understand, and what you're merely rambling around as if you understood.

The general point, which you do not grasp, is that statistical analysis is invalid if the sampling is not random. If one preselects items for inclusion, screening for exactly the result that one wishes to find, it's almost impossible not to be able to do so. You can prove eg that women, on average, are taller than men- if you get to measure the subjects beforehand and pick which ones are included in the analysis.
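That height example can be made concrete in a few lines; the populations and numbers here are invented purely to illustrate the screening trick:

```python
import random

random.seed(2)

# Made-up populations (heights in cm); men taller on average.
men = [random.gauss(175, 7) for _ in range(1000)]
women = [random.gauss(162, 7) for _ in range(1000)]

def mean(xs):
    return sum(xs) / len(xs)

# Honest random samples preserve the real relationship.
m_sample = random.sample(men, 30)
w_sample = random.sample(women, 30)
print(mean(m_sample) > mean(w_sample))  # men taller, as expected

# Dishonest screen: measure everyone first, then "sample" only the
# shortest men and the tallest women, and the comparison flips.
m_screened = sorted(men)[:30]
w_screened = sorted(women)[-30:]
print(mean(w_screened) > mean(m_screened))  # women now "taller"
```

Identical sample sizes, identical arithmetic; the only difference is that the screened version looked at the data before deciding what counted as the sample.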

I've tried to explain this with simple examples such as the one above. If you have the reading comprehension skills to process this, then understanding is within your grasp if you choose to go down that road.

I think if this is obvious it is mistaken. They are three separate wars and there isn't really a reason why the one that went longer than the other should be higher on this chart. For example, if you believe the Lancet article, then about 2.4% of the country were killed in about 3.5 years. But another war may not kill that many over 10 years or 20 years.

Again, basic comprehension: the longer a conflict goes on, the more people will have been killed. The lines from the graph from any such study will trend from lower to higher numbers as time goes on. Some short wars will kill more people than other longer wars- but the general trend will certainly exist.
This is not the same as a random selection of points within the entire possible space.
Ergo, we would expect a dozen totally unrelated studies to show a very rough fit to an upward-sloping line.
Ergo, the proper test here for rejection of the null is not a purely random one. If I were to do such a thing, I would gather all of the possible relevant studies and test for significance against those data.
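A minimal sketch of that null: generate two completely unrelated cumulative casualty series (monthly tolls invented at random) and check how well they correlate anyway:

```python
import random

random.seed(3)

def cumulative_toll(months, lo=500, hi=1500):
    """A fake conflict: independent monthly tolls, accumulated over time.
    (The toll range is arbitrary; only the upward accumulation matters.)"""
    total, series = 0, []
    for _ in range(months):
        total += random.randint(lo, hi)
        series.append(total)
    return series

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Two completely unrelated "conflicts" still track each other closely,
# because cumulative counts can only go up.
a = cumulative_toll(40)
b = cumulative_toll(40)
print(round(corr(a, b), 2))  # close to 1 despite zero real connection
```

So a strong-looking fit among a handful of cumulative casualty figures is close to the default outcome, not evidence by itself; the null here is an upward trend, not a random scatter.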

Maybe though what you should do is write a letter to the journal. If these grave problems are as "obvious" and "innumerate" as you suggest, it should be easy for you to convince the editors of their and their reviewers' huge mistake here, get yourself published, and move the discussion forward.

Is that an argument? It looks like another distraction. Are you going to tell me that it was another attempt at sarcasm?

"I didn't read it on Deltoid, actually"
Don't believe you, sorry.

Ok, so let's drop that as the irrelevancy that it is and move on. Don't really know why you brought it up, but you seem to do that quite a bit- non sequiturs in place of actual arguments.

They wrote an academic paper (rather than just a tantrum on a blog) showing that the "Lancet paper's authors just yanked a number out of their asses and used it to estimate 100ks of casualties."

I said that it's a reasonable general criticism. But, as pointed out, it is a criticism that can be levied in general against any cluster sampling scheme.
What the author has done here is say "cluster sampling gives bad estimates when the clusters aren't representative". That is not exactly news. And for no reason other than apparent bias or desire for headlines, he's included his own 'plausible' number. Which really looks to me like an unlikely guess- why not 10, or 50?
He hasn't added anything to the discussion that anyone with stats 101 didn't understand- anything other than random sampling can produce bias if the samples aren't representative.

You should be able to again get yourself published easily, and move the discussion forward again.

There is always another hack paper, and always another person willing to believe it because it suits their political purposes. And no one who understands these things has any illusions about their worth.

I'd like to register this opportunity to completely agree with you Carleton, and wish you had been around for the at least 10-15 times I had that exact discussion with cactus at the angrybearblog.

I don't read that. But you probably don't believe me- you have very selective control of your organ of belief.

Seb,
Sorry, I ran your comment together with Fred's- yeah, getting people to understand that cherry-picked statistics aren't actually 'statistics' in a meaningful sense is an ongoing uphill battle on the internets. It's like so many other sociopolitical discussions where I think, as a society, we fall down because the argument for (eg "these are undisputed facts") is much quicker to make and easier to understand than the argument against.

Tell you what Dr. Wu, I think you convinced me.

When I read the Lancet study back in 2006, "the first thing I saw" was their Figure 4 graphic, which I believed was wrong and a piece of hackwork. So therefore I discarded the rest of whatever might have been in there as also hackwork, which is of course a reasonable thing to do. Of course, it would be strange for a graph way down in the body of the paper to be the first thing I happened upon (unless directed specifically to it by something else), but that's just the way it went. When I read scientific papers I usually count up the total number of paragraphs, weighted by word-count, and do a random selection of a starting paragraph, and here I just happened to land on Figure 4. In your case, it happened to land you somewhere in the middle of a 42 page paper. In either case, we each have a good basis for judging the papers as a whole based on our respective judgments of hackwork in the first things we saw, and can discard both.

On the CSSA paper, I now think you've got it right here too. This kind of case could be made against the sampling methodology of any cluster survey. There's nothing special here. Indeed, I might do a survey of Iraq and decide as a reasonable sampling methodology that I will select a start house by choosing the house in the area covered in the most bullet holes or bomb damage. Someone might challenge this as not a reasonable choice of sampling methodology for getting a representative sample of violence, but such a case could be made for any survey. Unless my critics themselves get funding, hire teams of interviewers and go to Iraq to collect data on houses with lots vs. little bullet and bomb damage, it's just trivial criticism and does not detract in any way from the sound scientific estimates I might publish based on my data.

Progress at last.

And the same old tune goes on . . . but now, with improved spelling!

Fred: I am arguing in good faith whether you want to believe it or not.

Fred: Don't believe you, sorry.

Res ipsa loquitur.

In your case, it happened to land you somewhere in the middle of a 42 page paper

I explained this to you: I scanned past the points that I couldn't verify until I found something that I could easily analyze without referring to the underlying data. I'm not in a position to verify their info re the death certificates.

Indeed, I might do a survey of Iraq and decide as a reasonable sampling methodology that I will select a start house by choosing the house in the area covered in the most bullet holes or bomb damage.

I think we all recognize that no amount of work by the Lancet authors could produce a work that you would accept. So, on a judgment-related matter such as this one, I would naturally expect you to take such a ludicrous position.
Would the Lancet studies have been better if they had more detail on street selection? Sure. If they had done so, would you be satisfied with the result? Of course not. I think that even you recognize this- this is not about some specific detail of the study, this is about denial.

When I read the Lancet study back in 2006, "the first thing I saw" was their Figure 4 graphic, which I believed was wrong and a piece of hackwork

Close, but not quite correct. Naturally, you've replaced my suggested methodology (analysis of method) with your preferred method: outcome-based belief.
You've also become confused between a poorly-designed graph and a basically dishonest statistical analysis. Everyone makes mistakes re making things more confusing than they need to be. Not everyone commits intellectual fraud. Prescreening data and then applying a statistical test that assumes random selection is dishonest.

Indeed, I might do a survey of Iraq and decide as a reasonable sampling methodology that I will select a start house by choosing the house in the area covered in the most bullet holes or bomb damage.

For a moment there, I thought that you had finally understood, and were using this as an analogy to show that you grasped the importance of random selections v intentionally-biased ones.
But no, on a second reading I see that you merely have accidentally produced a work of ironic ignorance.

I do notice that you've abandoned any explicit support of the idea that prescreening data that ought to show a general trend and then performing a statistical analysis as if they were independent and randomly-selected is acceptable. Not sure if you've actually come around and find it embarrassing to admit it, or if you've preferred to, X-Files style, continue to believe despite not being able to defend the proposition.

Returning upthread past the Fred madness.

McKinneyTexas: Jes, are saying the Taliban/Afghanistan was "a country that had not attacked the US and presented no possible threat to the US"?

McK, are you attempting by this question to allege that Afghanistan did attack the US?

Because if so, you are wrong: there has never, in the entire history of the United States from 1776 to the present day, been any occasion when Afghanistan was ever able either to attack the US or to present any possible threat to the US. (To individual Americans actually in Afghanistan, sure. But to the US or to anyone in the US, no.)

If you are, as I suspect you are, attempting to claim that the al-Qaeda terrorist group which attacked the US quite successfully on September 11 2001 was somehow equivalent to "Afghanistan/the Taliban", that's about as accurate a claim as asserting that the pro-life terrorist movement is equivalent to "the United States/the Republican party".

The pro-life terrorist movement has killed far more people than al-Qaeda (70,000 women a year for decades). Supposing there were such a thing as a feminist military, would it be justified in launching a missile attack on major US cities until the US surrendered pro-lifers such as George W. Bush for trial in the Hague and execution on IWD?

(I hope it's clear that's offered as a fantasy counter-example, not an excuse to start arguing about abortion. Though I'm sure Fred has stats about that, too.)

Al-Qaeda is not the Taliban. Terrorist attacks by al-Qaeda do not require a "safe harbor" to plan. The US attack on Afghanistan in revenge for 9/11 killed thousands of Afghans who probably would have had difficulty finding New York on a map - if they'd ever in their lives seen a map.

(And while I respect Carleton Wu's efforts enormously, his effort merely demonstrates that Fred's mind is made up and can't be confused by facts.)

Again, basic comprehension: the longer a conflict goes on, the more people will have been killed.

You seem to be ignoring the possibility of unkilling people at some point in a conflict. How convenient. (I'll let Fred take it from here.)

9/11 was an inside job.
Most of the world knows it by now.
And a huge number of Americans as well, now understand that 9/11 was a classic false-flag operation, augmented by a massive mainstream media psyop.

Get with the reality ... before it's too late ..

http://losalamos911truth.blogspot.com/
