Drone panic: New weapon, old anxieties
Poor drones and drone operators: as the latest generation of weaponry on the battlefield, it’s now their turn to be subjected to that great generational ritual of moral and ethical suspicion. Poor thinking public: we get treated to these arguments as if they were original or unique to drones.
This piece from the Atlantic addresses (among other things) many of these themes. Unfortunately, it also perpetuates a number of very tired tropes about military technology and tactics.
The first overused and under-scrutinized argument is the fear that drones make war “easier to wage” because “we can safely strike from longer distances.” We’ve had that ability since the birth of air power and missile power; drones just make it a lot easier to hit certain kinds of targets at a certain tempo. After all, it’s only the pilots who are “far away”: the drones themselves still operate from bases with real, flesh-and-blood people who are potentially exposed to retaliation, and those bases are not necessarily any further away than bases for manned aircraft (in most cases they service both). It’s notable, in fact, that the US requires significant international cooperation to maintain the bases it wants to launch drone strikes from, and only in Libya has the US ever used drones against a government it admitted to being at war with (as opposed to Yemen, Pakistan, and Somalia, where the US strikes non-state actors with some degree of compliance from the local government).
This does, of course, make war-making easier to some extent, but why we should find this alarming is unclear. Similar effects could be achieved with advances in manned aircraft, stand-off guided missiles, advanced naval weapons, and so on. Drones are an incremental advancement in technology; so far they have not produced a major alteration in the offense-defense balance. They have merely made military operations we already preferred – such as the US targeting of al Qaeda with air strikes in 1998 – more effective. Without the acquisition of bases capable of generating large volumes of combat sorties, a sustained war would not be “easy” by any means – and those bases would also increase manned-aircraft sortie generation. The loss-of-strength gradient still applies.
Lin, in the same thought, notes:
He compared our use of drones with the biblical David’s use of a sling against Goliath: both are about using missile or long-range weapons and presumably have righteousness on their side. Now, whether or not you’re Christian, it’s clear that our adversaries might not be. So rhetoric like this might inflame or exacerbate tensions, and this reflects badly on our use of technology.
I’m going to have to assume he’s alluding to Buddhists or Hindus or some other faith, because the story of David and Goliath, being in the Old Testament (it’s a Jewish reference, if that’s not clear), is present not just in Christianity but in Islam! Let’s review Quran 2:251:
So they defeated [the Philistines] by permission of Allah, and David killed Goliath, and Allah gave him the kingship and prophethood and taught him from that which He willed. And if it were not for Allah checking [some] people by means of others, the earth would have been corrupted, but Allah is full of bounty to the worlds.
So, that covers that. I really doubt that China is concerned about Abrahamic rhetoric by American policymakers, even if it does distrust the expansion of Christianity in China (the CCP is far more piqued by Americans’ secular political self-righteousness) – and wait, weren’t we talking about drones?
On that subject, we get to one of my favorite charges against new military technology, that it will embolden our enemies because they will think we are cowards (emphasis mine):
Relatedly, we already hear criticisms that the use of technology in war or peacekeeping missions aren’t helping to win the hearts and minds of local foreign populations. For instance, sending in robot patrols into Baghdad to keep the peace would send the wrong message about our willingness to connect with the residents; we will still need human diplomacy for that. In war, this could backfire against us, as our enemies mark us as dishonorable and cowardly for not willing to engage them man to man. This serves to make them more resolute in fighting us; it fuels their propaganda and recruitment efforts; and this leads to a new crop of determined terrorists.
This has been said, I think, about virtually every military innovation since the machine gun, and if it wasn’t said about artillery, it should have been. Sebastian Junger, in his excellent War, explains it well:
A man with a machine gun can conceivably hold off a whole battalion, at least for a while, which changes the whole equation of what it means to be brave in battle…. Machine guns forced infantry to disperse, to camouflage themselves, and to fight in small and independent units. All that promoted stealth over honor and squad loyalty over blind obedience….
As a result much of modern military tactics is geared toward maneuvering the enemy into a position where they can essentially be massacred from safety. It sounds dishonorable only if you imagine that modern war is about honor: it’s not. It’s about winning, which means killing the enemy on the most unequal terms possible. Anything less simply results in the loss of more of your own men.
Does anyone think it matters that soldiers of counter-insurgent or counter-guerrilla forces think insurgent tactics are dishonorable? Do you think Taliban or Badr Brigade members wrote missives to each other worrying whether Dragunovs, L-shaped ambushes, and IEDs with explosively formed penetrators were going to cause them to lose the respect of the American occupiers? War is not won with style points. It is conducted with the maximum amount of lethal force a country feels appropriate for the accomplishment of its political goals, and policymakers must intervene to ensure it does not exceed that. Now, yes, avoiding being hated by the civilian population does matter, but if you replace “robot patrols” with “tank patrols” or “white men with body armor and wraparound sunglasses,” you get a similar answer. The problem isn’t unique to drones, because people tend to have a basic and healthy distrust of armed foreigners anyway.
As for winning the enemy’s respect, we should remember, as Betty White once put it, “I’m going to attack you with this, and you use respect to defend yourself.” The enemy already thinks American soldiers are cowards with expensive machines; this is nothing new. Beyond the tanks, the over-the-horizon artillery barrages, and the enormous bases, there is air power. To the average terrified insurgent soldier, there is little moral difference between a Harvest Hawk and a Reaper slinging Griffins at them. In either case, it is a nigh-invincible machine from which the Americans can dispense death. To a guy with a Kalashnikov or a DShK, that Harvest Hawk’s crew might as well be in Creech – they can’t be stopped. It’s not that we’re killing people with drones that drives terrorist recruitment; it’s that we’re killing people. If we sent American fighting men and women into hostile villages wearing nothing but loincloths and swinging heavy sticks, people would still be furious when they beat somebody to death.
The additional disrespect that weapons technology earns soldiers is generally more than outweighed by the casualties it inflicts and the fear it induces in the enemy. To the extent that it causes anger, it does so through collateral damage and the death of compatriots, but drones are hardly unique in that. The way in which weapons – any weapons – are used in war generally matters far more than the weapons themselves in the overall psychological effect.
Many of these new ethical questions are in fact very old. Take this example:
Without defenses, robot could be easy targets for capture, yet they may contain critical technologies and classified data that we don’t want to fall into the wrong hands. Robotic self-destruct measures could go off at the wrong time and place, injuring people and creating an international crisis. So do we give them defensive capabilities, such as evasive maneuvers or maybe nonlethal weapons like repellent spray or Taser guns or rubber bullets? Well, any of these “nonlethal” measures could turn deadly too. In running away, a robot could mow down a small child or enemy combatant, which would escalate a crisis. And we see news reports all too often about unintended deaths caused by Tasers and other supposedly nonlethal weapons.
So what we’re worrying about is a robot creating a situation where an American covert asset uses lethal force against armed pursuers and tries to evade capture, creating a massive diplomatic crisis? Gee, I didn’t know Ray Davis was a Terminator!
An additional misplaced concern the drone program is disproportionately saddled with is that of collateral damage:
Another worry is that the use of lethal robots represents a disproportionate use of force, relative to the military objective. This speaks to the collateral damage, or unintended death of nearby innocent civilians, caused by, say, a Hellfire missile launched by a Reaper UAV. What’s an acceptable rate of innocents killed for every bad guy killed: 2:1, 10:1, 50:1? That number hasn’t been nailed down and continues to be a source of criticism. It’s conceivable that there might be a target of such high value that even a 1,000:1 collateral-damage rate, or greater, would be acceptable to us.
How on earth is this a new problem because of drones? This was a problem when we were using manned aircraft, and it is a problem with naval missile and gunfire support and artillery as well. We’ve never had a magic ratio that resolved this debate, and we never will. This problem, again, precedes drone warfare and will outlive it. The concern is particularly strange because drones actually allow us to reduce collateral damage significantly compared to many alternatives: it is easier for a drone to loiter and strike with greater precision than it is for the pilot of a jet aircraft.
This next problem isn’t Lin’s, but it is reflective of one of the stranger standards in IHL:
Let’s say we were able to create a robot that targets only combatants and that leaves no collateral damage–an armed robot with a perfectly accurate targeting system. Well, oddly enough, this may violate a rule by the International Committee of the Red Cross (ICRC), which bans weapons that cause more than 25% field mortality and 5% hospital mortality.
This rule does not make much sense to me, since, again, the lethality of a weapon depends on the circumstances under which it is employed. A well-trained sniper with a rifle, setting up his shots to kill, is going to be more lethal than an infantryman using the same rifle to provide suppressing fire. In some cases, ammunition which is extremely lethal is effectively banned – hollowpoints and exploding bullets come to mind (although countries have no qualms about firing explosive-tipped rounds from vehicular chain guns at people). But again, I don’t see this problem as unique to robots, since in theory a sufficiently advanced targeting system on another vehicle-mounted weapon would accomplish the same thing, except the human pressing the button would be physically closer to the action.
Then there’s the final argument, proliferation, which I’ve addressed tangentially in a previous post.
Related to this is the all-too-real worry about proliferation, that our adversaries will develop or acquire the same technologies and use them against us. This has borne out already with every military technology we have, from tanks to nuclear bombs to stealth technologies. Already, over 50 nations have or are developing military robots like we have, including China, Iran, Libyan rebels, and others.
So what? Our adversaries are going to develop technologies to fight us with regardless of what we do. Does anyone really think that if we stopped using drones, our enemies would do the same? Of course not; countries develop military technologies to further their own interests, not to spite Americans. My previous skepticism about the drone war “coming home” still holds: it should be extremely clear by now that any country stupid enough to host a drone base outside US borders for striking American targets, let alone one stupid enough to fly the drones in the first place, would be on the receiving end of a very large amount of firepower.
The piece concludes:
Integrating ethics may be more cautious and less agile than a “do first, think later” (or worse “do first, apologize later”) approach, but it helps us win the moral high ground–perhaps the most strategic of battlefields.
No, the moral high ground is not the “most strategic of battlefields.” War is not a morality play. In every war, the victor has done horrible, horrible things, not necessarily any better than the things the vanquished did to avoid their fate. There are plenty of legitimate ethical trade-offs in war, particularly because ethical questions intersect with the political nature of war itself. However, the notion that we can “win the moral high ground” as a “strategic battlefield” is quixotic. What adversaries, neutrals, and observers understand as “moral” is not going to be the same in war, and war can absolutely be won without proving one’s moral worth to the enemy population: just ask the Japanese after World War II. Yes, there are ethical considerations about drones, but these are largely the same questions that have dogged us since the advent of the machine gun and air power. Overemphasizing the supposed uniqueness of drone war ethics, and the importance of being “respected” by the enemy the US is trying to maim and kill, will just set us up for another round of disappointment when the new, supposedly ethical and humanitarian variations on warfare fail, yet again, to make the violent imposition of a foreign power’s will a palatable experience for those on the receiving end.