Did We Almost All Die in November 1983?

[Image: Soviet SS-20 missile]

For years now I’ve been telling my students that the closest the United States and the Soviet Union came to using nuclear weapons against each other was during the 1962 Cuban Missile Crisis.  But a new book by British television-producer-cum-historian Taylor Downing has convinced me that perhaps the closest the world came to an exchange of nuclear weapons was on November 9th, 1983.  Does that date not ring any special bells for you?  That’s precisely what makes Downing’s 1983: Reagan, Andropov, and a World on the Brink such a frightening and unsettling read.

Downing begins by ably chronicling how US-Soviet relations reached a nadir during the first three years of Ronald Reagan’s presidency (1981-1983).  On the American side, the Reagan administration kept upping the Cold War ante: giving harsh speeches denouncing the Soviet Union as an “evil empire”; pursuing new weapons systems that threatened to completely upend the existing strategic balance of power; and evincing little interest in conducting normal diplomatic relations with the USSR.  On the Soviet side, the top leadership was “geriatric,” sick, and increasingly paranoid.  Soviet leader Yuri Andropov spent most of his time in office hooked up to a dialysis machine in a heavily guarded military hospital.

The Kremlin’s mistrust of the West was fed by a KGB intelligence network that had strong incentives to tell its masters what they wanted to hear, as opposed to what the KGB actually believed to be true.  Beginning in 1981, the Soviets launched “Operation RYaN,” a worldwide spying program designed to spot any clues that the US and its NATO allies were preparing a surprise attack against the Eastern Bloc.  While this option was never seriously considered by the Reagan administration, Downing shows how over the course of 1983 the Soviet leadership interpreted many US actions as preparations for war.  For example, following the October 23rd terrorist attack on the US Marine barracks in Beirut, all US military facilities around the world went to a heightened state of alert as a precaution.  But the Soviets ascribed this worldwide increase in US military readiness not to the terrorist bombing, but instead to preparations for a surprise attack.  Similarly, another key indicator that the KGB tracked was the amount of communications traffic between Washington and London, on the grounds that if NATO were gearing up for war there would be an increase in contact between the US and its most important ally.  At the end of October, communications traffic between Washington and London spiked off the charts… but Soviet analysts did not connect this uptick to the fact that the US had just invaded the former British colony of Grenada, much to the consternation and vociferous protests of Margaret Thatcher and her government.

Matters came to a head during early November, as NATO began its annual Able Archer military exercises from its headquarters in Brussels.  The Soviet leadership became genuinely convinced that that year’s exercise—Able Archer 83—was merely a ploy, cover for the surprise all-out nuclear assault that the West had been organizing.  As the West’s practice war games reached their apex, the Soviets went to their highest level of military preparedness: the top leadership descended into bunkers deep underground; fighter jets were pre-positioned on tarmacs with their engines left running, capable of being airborne within 3 minutes; the Soviet Navy left port and took up its battle stations; and mobile ballistic missile launchers were instructed to leave their bases, disperse across the Russian countryside, and stand by for orders to launch their deadly payloads.

In the end, the long night of November 9th, 1983 passed without any nuclear missiles being launched.  Able Archer 83 quietly concluded without incident two days later, and slowly the Soviets relaxed their guard.  But for Downing the kicker is that the West had absolutely no idea at the time of how afraid the Soviets were.  What was just another pretty routine week in Brussels and Washington had been experienced as near-existential terror in Moscow, but no one in either the CIA or the Pentagon realized it.

It wasn’t until 1990 that the first reports began to circulate in American intelligence circles of how close the November 1983 war scare had come to turning into the real thing, and not until 1996 that the CIA commissioned an internal review of the episode.  Nowadays senior American intelligence officials like Robert Gates agree that the CIA’s inability to pick up on the extreme panic that had gripped the Soviet leadership in 1983 was an immense intelligence failure: “We may have been at the brink of nuclear war and not even known it.”

All of this of course makes for very timely reading today, as a new American administration casually insults foreign countries, refuses to engage in the day-to-day work of diplomacy, plays politics with the findings of the intelligence community, and engages in needless bellicosity.  But while Downing nods in this direction at the very end of his book, the main strength of his work lies in rendering this era of the Cold War in vivid, engaging prose combined with excellent historical insights.  Downing doesn’t write like an academic at all, and I very much mean that as a compliment.  He’s particularly adept at weaving disparate historical events into a single narrative; he shows how episodes like the downing of Korean Air Lines Flight 007 and Mikhail Gorbachev’s visit to London in December 1984 all interconnect.  All in all, 1983: Reagan, Andropov, and a World on the Brink is an excellent piece of historical writing perfect for students and the general public; it’s well worth your time if you’re interested in the Cold War, nuclear weapons, or espionage.

The Rise of a New Regulatory Power in the East?

One form of power states can have in global politics is regulatory power (also sometimes called market power).  The idea is that if your state contains a large, rich domestic market, you can derive influence over other parts of the world eager to gain access to it.  For instance, the United States is by far the largest market in the world for pharmaceuticals, which gives decisions taken by the U.S. Food and Drug Administration a significant international impact.  Pharmaceutical companies from all over the world have lobbyists in Washington D.C. who carefully scrutinize the agency’s every move (in fact, Big Pharma is the industry that spends the most on federal lobbying).

Scholars of the European Union (EU) in particular have seized upon the idea of regulatory power.  Most international observers agree that the EU is a powerful actor on the global stage, but few agree as to why.  The EU doesn’t have the most jaw-dropping military in the world (as Donald Trump seems to have recently discovered, the U.S. accounts for the bulk of NATO’s combat readiness – one NATO estimate claims that US defense expenditures effectively represent 72% of the Alliance’s overall defense spending).  And while the EU collectively provides just over half of the Official Development Assistance in the world, claims that it is a significant “civilian power” have yet to attract many adherents.  Others argue that the EU’s power stems from its consistent commitment to human rights and other global norms, but critics retort that the EU is just as hypocritical as any other great power when its interests are on the line.

All of which leaves some EU-philes to fall back on the size of its market and argue that the EU’s main influence in the world comes in the form of regulatory power.  And it is true that the EU’s Common Market is the richest market in the world, even with the U.K. poised to leave in a few years’ time.  Industries in developing countries sometimes live and die as the result of internal EU regulatory whims.  On topics like vehicle emissions standards and food safety regulations, when the EU speaks, the world listens (especially at places like the WTO).

As with so many other things, however, the emergence of China as the major new economic power threatens to disrupt this regulatory status quo.  You can see it in lots of places (for example, regulations surrounding renewable energy), but recently it’s become apparent to me in an unusual corner of the world economy: digital games.

Both the Chinese government and Chinese society more broadly are worried about their children spending too much time playing digital games, particularly on mobile phones.  In 2008, China became the first country to officially declare internet addiction a clinical disorder, and the country’s relationship with digital gaming has only become more complicated in the years since.

For instance, earlier this year the Chinese government forced all digital game companies that release games in the PRC to publicly report the formulas that calculate their in-game item drop rates.  For context, in many kinds of digital games, you get rewards for accomplishing various in-game tasks: perhaps a better sword or a cool-looking suit of armor if it’s an RPG, or perhaps a unique color scheme for your character in an MMO.  Game makers discovered decades ago that having an element of randomness to these rewards kept people more engaged (and playing longer) than if doing X simply and predictably yielded Y.  Accordingly, semi-randomly generated items that “drop” when the player is successful remain a core mechanic for many of the world’s leading digital games.

While the government’s announcement didn’t give a lot of detail about its rationale for the move, most observers agree that the main goal is to try to limit excessive gaming: if players can do the math themselves and realize that it will on average take them dozens or even hundreds of hours of performing the same repetitive action to obtain a given piece of loot, they might just give up on the whole thing.  (Or they might just decide to buy the desired loot at the in-game store using real-world currency, but that’s a separate problem.)
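That math is simple enough to sketch.  Here’s a minimal example of the calculation a player armed with a published drop rate could now do; the 0.5% rate and the five-minute run length are numbers I’ve invented for illustration, not figures from any actual game.

```python
# A toy calculation of expected grind time from a published drop rate.
# The 0.5% rate and 5-minute run length below are invented numbers.

def expected_grind_hours(drop_rate, minutes_per_attempt):
    """Expected hours until one drop, assuming each attempt succeeds
    independently with probability drop_rate (a geometric distribution,
    so the mean number of attempts is 1 / drop_rate)."""
    expected_attempts = 1.0 / drop_rate
    return expected_attempts * minutes_per_attempt / 60.0

# A 0.5% drop from a boss run that takes five minutes:
print(f"{expected_grind_hours(0.005, 5):.0f} hours of grinding, on average")
# -> 17 hours of grinding, on average
```

Faced with a number like that in black and white, “just give up on the whole thing” starts to look like the rational response.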

Chinese gaming companies are increasingly paying attention to these signals emanating from Beijing.  Last month, the world’s biggest digital game maker, Tencent Holdings, took the unprecedented step of voluntarily restricting how many hours a day its younger users could play King of Glory, the leading mobile game in China.  Henceforth, players younger than 12 will be restricted to only one hour of playtime per day, and those between 12 and 18 will be limited to two hours a day.  (In addition, the age-verification system, which is already linked to real-world identities, will be beefed up.)
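Expressed as code, the announced policy boils down to a simple lookup on a player’s verified age.  The sketch below is mine, not Tencent’s: the one- and two-hour tiers come from the reports above, but exactly how the cutoffs treat 12- and 18-year-olds is an assumption on my part.

```python
# A sketch of Tencent's announced playtime tiers for King of Glory.
# The hour limits come from press reports; which tier a 12- or
# 18-year-old falls into is my assumption, not a detail Tencent gave.

def daily_playtime_cap_hours(verified_age):
    if verified_age < 12:
        return 1.0   # one hour per day for the youngest players
    elif verified_age <= 18:
        return 2.0   # two hours per day for teens
    return None      # no cap for adults

for age in (10, 15, 30):
    print(age, daily_playtime_cap_hours(age))  # 1.0, 2.0, None
```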

Why would a publicly traded company interested in its bottom line volunteer to limit access to one of its most profitable products?  Perhaps because the influential state-run newspaper People’s Daily had recently run a slew of editorials against the game, calling it “poison” – editorials that were followed by a predictable drop in the company’s share price.

Overall, the big takeaway here is that the Chinese government is displaying a willingness to directly regulate a global media industry in a way that used to largely be the domain of Western nations.  What we are witnessing emerging in China right now has the potential to re-shape the global entertainment industry in a way not seen since Hollywood adopted the Production Code in the 1930s under the predecessor of today’s Motion Picture Association of America (MPAA).  With digital games having overtaken movies in terms of both sales and cultural salience, China is taking the lead on regulating an industry that promises to be one of the most dynamic of the 21st century, with consequences that will likely ripple out for decades to come.  Stay tuned… and don’t spend too much time grinding for that loot.

The Coming Personalization of American Foreign Policy

If 2016 has taught me anything, it’s the folly of making predictions.  Accordingly, this post represents not a prediction about the future, but instead a way of thinking about how President-Elect Donald Trump seems to be approaching foreign policy and especially diplomacy as the January inauguration draws near.

Feminist thinkers have long used the phrase “The personal is political.”  In an unintended way, this phrase arguably captures a great deal of Trump’s mindset.  Many observers have noted Trump’s preference for people over institutions; he seems to put his trust in flesh-and-blood individuals over disembodied organizations, and loyalty and personal connections go a long way with him.  Furthermore, while his own promises seem important to him (although perhaps selectively), policies, practices, and traditions he has not personally helped develop seem to hold little sway.  All of this leads to a personalization of policy-making: an environment where Trump and a small inner band of confidants formulate policy on topics that directly matter to him while keeping established stakeholders at arm’s length.

It will of course not be the first time in American history that the Diplomat-in-Chief has evinced these tendencies: the Nixon White House was permeated by a thick atmosphere of paranoia, racism, xenophobia, and anti-intellectualism that even Trump and company will have trouble rivaling.  (Those interested in India and desirous of a close-up look at the Nixon White House should pick up Gary Bass’ excellent The Blood Telegram.)  And look, even that human-rights-abusing, genocide-enabling administration managed to generate a few foreign policy successes.  So perhaps not all is lost.

Yet personalizing American foreign policy opens the door to a wide range of potential pitfalls.  Perhaps counter-intuitively, it can become difficult for third-party observers to separate the signal from the noise in highly personalized atmospheres, since the lower echelons of the bureaucracy aren’t there to consistently reinforce the desired message.  Did Trump’s decision to accept a congratulatory phone call from the Taiwanese president represent a drastic rethinking of America’s diplomatic stance towards the island?  No one really knows (including perhaps Trump), because the policy does not emanate from across the communications apparatus of the American state.

Making politics personal carries other risks.  For instance, in a thoughtful article Bloomberg Businessweek discusses the heightened risks that Trump-branded properties, particularly skyscrapers, are likely to face during his administration.  Who should pay to secure these highly visible, newly prominent buildings?  (For a map of their locations around the world, see here.)

One way increased personalization may be measurable in the near future could involve seeing if Trump nominates a larger-than-usual number of political appointees at the ambassadorial level.  Over the past half-dozen administrations, the percentage of American ambassadors drawn from outside the State Department’s pool of career diplomats has varied between 26% and 38%, according to data maintained by the American Foreign Service Association.  (For those curious, after several bungled nominations early on, the Obama administration ended up clocking in at around 30% political appointees over its two terms – which, depending on the exact data you use, is either the lowest or second-lowest share of political appointees among modern two-term presidents.)

A larger number of political appointees by the Trump Administration would signal a desire to bypass the State Department and keep its “experts” at bay, as well as political patronage on a larger-than-usual scale, even for Washington, D.C.  Indeed, we’ve seen Trump advocate for political appointees like Anna Wintour (!?) in the past.  And in an interesting twist, Trump is not limiting himself to nominating American ambassadors: he suggested a few weeks ago (via a tweet, of course) that he wouldn’t mind if the United Kingdom appointed former UKIP leader Nigel Farage as its ambassador to the United States.

Fortunately, there is probably a ceiling on how many political appointees Trump could name, if only because few well-heeled Americans are clamoring to be America’s top representative in Dushanbe.

The Amazingness of Imperial British Record-Keeping

Would you like to know who passed their driver’s license exam in Uganda in June of 1912?  I know, who wouldn’t?!?  Well, thanks to the amazing thoroughness of Britain’s colonial records as well as the fantastic Africana archive at Northwestern University, now you can!

[Image: a cropped page from the register of driver’s licenses issued in Uganda, May 1912]

I find it interesting that while the historian in me loves the completeness and detail of records preserved in old archives, the civil libertarian in me is aghast when the U.S. government collects infinitely larger troves of such data today.  If the NSA simply recast its mission as one of aiding future historians, I would be way more on board.  It will be fascinating to see how historians 50 years from now will view the present era with the help of Big Data.

(Also, when did “motorcycle” become a single word?)

GDP Sure Stinks as Our Go-To Measure of Economic Activity

Measuring the size of national economies is hard.  That’s clearly true in the case of developing countries, where the underlying economic data is often unavailable, made up, or deliberately manipulated.  But even for rich countries it’s difficult to know how to factor in all the different kinds of economic activity humans engage in.  How should the value of multinational corporations be divvied up across the various countries they are present in?  How should public goods provided by the state be valued?  Should you attempt to measure non-market transactions, like the labor traditionally provided by “stay-at-home” mothers?  What about accounting for negative externalities, like the increasing threat of climate change?  What base year should you use?  And how should you deal with (highly variable) exchange rates?  The Economist recently asked if Brexit had helped France’s economy overtake the U.K.’s, and the best it could come up with was a tepid “probably”.

Every now and then methodological changes by national statistical authorities visibly highlight the artificiality of GDP figures. Consider the following few cases:

  • On November 5, 2010, Ghanaians went to bed thinking their country had a GDP per capita of about $753, placing them among the poorest countries in the world.  The next morning they woke up, however, to newspaper accounts proclaiming that the National Statistics Office had changed the base year for calculating GDP from 1993 to 2006, which (along with other methodological changes) had caused the country’s per capita GDP estimate to jump to $1318.  Overnight Ghana had become a solidly middle-income country!  Woohoo!  (For a stylized sketch of how rebasing can move the numbers this much, see the code after this list.)
  • A recent European change in the way the investments of multinational corporations are counted in GDP figures caused Ireland’s GDP to grow by 26% in 2015…  at least on paper.  But, as an economist at University College Dublin tactfully put it, “It’s complete bullshit!”
  • Speaking of bovine shit, India’s 2015 GDP revisions for the first time officially included the value of the “organic manure” that the country’s livestock produce.  Just like that, India’s GDP increased by 9.1 billion rupees (roughly $135 million), but not before some serious academic work had been done calculating the “average evacuation rates” of various species (who says academics never have any fun!).  The Wall Street Journal has a good primer on India’s new GDP figures… and how other “real-world” statistics like the quantity of exports don’t seem to corroborate them much.
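The Ghana case looks like magic until you remember what rebasing actually does: it re-prices current output using a more recent year’s prices, and it usually widens the net of measured activity at the same time.  Here’s a toy sketch of that mechanic; every figure in it is invented, and the sector names are only loosely inspired by the Ghanaian episode.

```python
# A stylized sketch of a GDP rebasing.  All figures are invented; the
# point is only that re-pricing output and widening sectoral coverage
# can jointly shift the measured level a great deal.

old_accounts = {            # output valued at old base-year prices
    "agriculture": 40.0,
    "manufacturing": 25.0,
    "services": 20.0,
}
new_accounts = {            # same economy: new prices, wider coverage
    "agriculture": 45.0,
    "manufacturing": 30.0,
    "services": 38.0,       # better survey coverage of informal services
    "telecoms": 12.0,       # a sector the old accounts missed entirely
}

old_gdp = sum(old_accounts.values())   # 85.0
new_gdp = sum(new_accounts.values())   # 125.0
print(f"GDP jumps by {new_gdp / old_gdp - 1:.0%} overnight, "
      "without a single extra good or service being produced")
# -> GDP jumps by 47% overnight, ...
```

Nothing in the real economy changes overnight; only the measuring stick does.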

Perhaps the solution, then, should be to just get rid of GDP altogether, as more and more people are suggesting.  But then how would the hordes of quantitatively-minded political science Ph.D.s indulge in their favorite pastime of building econometric castles out of data made of sand?