How the 2016 United States Presidential Election Redefined Risk Management for the Better

The results of the 2016 United States Presidential election shocked many observers because the polling statistics, and the mainstream media's narrative built on them, pointed to a highly probable outcome that turned out to be wrong.

No matter what you think of Trump or the final result, the election will be studied for generations because it defied conventional wisdom: members of the mainstream media asserted that a Trump victory was a long shot.  Ultimately, there was an industry-wide, systematic failure in the way the polling models were designed and executed.  So how did an obscure professor from Stony Brook University prove the big polling companies wrong?

Professor Helmut Norpoth's Primary Model

On March 7, 2016, Professor Norpoth predicted that Donald Trump would defeat Hillary Clinton or Bernie Sanders, with confidence levels of 87% and 99% respectively.  Professor Norpoth turned out to be right, and he proved that many of the major polling and news organizations were flat out wrong.  He didn't get everything right, however: his model predicted that Trump would win the popular vote, which he did not, but Trump did win the Electoral College.

[Figure: Primary Model]

Professor Norpoth’s primary model is based on two major inputs: the results of election primaries and the “swing of the pendulum,” the tendency for the party holding the White House to change after two consecutive terms.  In an interview with Fox News, Professor Norpoth explained that because Barack Obama did not do as well in his reelection as in his initial election, there was a strong indication that the 2016 Presidential race would be a swing election.  In other words, based on the historical data, the momentum already favored Republicans regardless of the nominee.
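The way these two inputs might combine can be caricatured in a few lines of code.  To be clear, this is not Norpoth's published model; the weights, threshold, and function names below are invented purely to illustrate how primary performance and the "pendulum" could feed a single prediction.

```python
def primary_model_sketch(incumbent_party_primary_share: float,
                         challenger_party_primary_share: float,
                         incumbent_party_terms: int) -> str:
    """Toy caricature of a two-input election model.

    Inputs: each party's nominee's share of their primary vote, and
    how many consecutive terms the incumbent party has held the
    White House.  The 0.05 'pendulum bonus' is made up.
    """
    # Stronger primary performance favors the challenger party.
    score = challenger_party_primary_share - incumbent_party_primary_share
    # After two consecutive terms, history favors a change of party.
    if incumbent_party_terms >= 2:
        score += 0.05
    return "challenger party" if score > 0 else "incumbent party"

# A weak incumbent-party primary plus a two-term pendulum tips the call.
print(primary_model_sketch(0.45, 0.50, 2))  # → challenger party
```

The point of the sketch is not the numbers but the structure: a deliberately simple model driven by a small set of historically grounded signals, rather than a daily stream of polls.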

What does this mean for risk management?

According to ISO 31000, risk is defined as the “effect of uncertainty on objectives.”  Risk management, then, is the attempt to manage uncertainty, and across many sectors of the economy this is done by building quantitative models.  The results of the 2016 US Presidential election are a sobering reminder of one crucial but easily forgotten fact: the models we build are only approximations of reality, not reality itself.  And what happens when an entire industry (pollsters) relies on essentially the same fundamental model without modifying it as new information arrives?
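A small simulation makes the danger concrete.  The numbers below are made up for illustration: if every pollster's model carries the same systematic bias, averaging many polls shrinks the random noise but leaves the shared error untouched.

```python
import random

random.seed(42)

TRUE_SHARE = 0.48    # hypothetical true vote share for a candidate
SHARED_BIAS = 0.03   # systematic error common to every pollster's model
SAMPLE_NOISE = 0.01  # independent sampling noise per poll
N_POLLS = 20

# Each poll = truth + the industry-wide bias + its own random noise.
polls = [TRUE_SHARE + SHARED_BIAS + random.gauss(0, SAMPLE_NOISE)
         for _ in range(N_POLLS)]

avg = sum(polls) / N_POLLS
print(f"true share:       {TRUE_SHARE:.3f}")
print(f"poll average:     {avg:.3f}")
print(f"error remaining:  {avg - TRUE_SHARE:+.3f}")
```

Averaging twenty polls cuts the independent noise by roughly a factor of four, yet the remaining error still sits near the full three-point shared bias, because a bias common to every model cannot be averaged away.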

The map is not the territory

Alfred Korzybski was a Polish scholar who famously said that “the map is not the territory.”  Korzybski was highlighting the fact that a map is an abstraction of the territory: however useful it may be to us, it is not the territory itself.  This is an obvious but increasingly blurry distinction as our reliance on technology continues to grow.

Have you ever driven to an address under the guidance of a global positioning system (GPS) and been taken to the wrong location?  While it may have been an inconvenience at the time, it illustrates both our reliance on models and algorithms and the unrealistic expectation that they be 100% accurate all of the time.

Is there really strength in numbers?

Two common questions that a financial institution's senior leadership asks of consultants implementing regulatory compliance systems are:

  • How are other banks doing it?
  • What is the industry standard?

These are both good questions, and senior leadership should be asking them of their consultants and of peers at other financial institutions.  The underlying idea is that the collective intelligence of a diverse group of practitioners will add up to a more comprehensive and accurate model than any one financial institution can build on its own.  There is merit in this notion, but what if the collective gets it wrong because of complacency, inadaptability, and a general banking culture that discourages dissenting opinions?  If risk management is truly about managing uncertainty, then isn't there an inherent risk in strict adherence to an industry standard?  Or should financial institutions meet their regulatory expectations and adhere to industry standards while simultaneously striving to identify the limitations of their current models and creating a culture that embraces innovation and creativity?
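The trade-off can be sketched numerically.  In this illustrative simulation (all parameters invented), an ensemble of models with independent errors beats any single model, but an ensemble whose members herd around one shared approach is no better than a lone model.

```python
import random
import statistics

random.seed(0)

TRUTH = 100.0    # the quantity every model is trying to estimate
ERR_SD = 5.0     # standard deviation of each model's error
N_MODELS = 10
TRIALS = 2000

def ensemble_error(shared_weight: float) -> float:
    """Mean absolute error of an ensemble average.

    shared_weight = 0.0: every model errs independently.
    shared_weight = 1.0: every model repeats one common error (herding).
    """
    errors = []
    for _ in range(TRIALS):
        common = random.gauss(0, ERR_SD)
        estimates = [TRUTH + shared_weight * common
                     + (1 - shared_weight) * random.gauss(0, ERR_SD)
                     for _ in range(N_MODELS)]
        errors.append(abs(sum(estimates) / N_MODELS - TRUTH))
    return statistics.mean(errors)

print("diverse, independent models:", round(ensemble_error(0.0), 2))
print("herding models:             ", round(ensemble_error(1.0), 2))
```

There really is strength in numbers, but only when the numbers disagree for independent reasons; an industry standard followed uncritically turns ten models into one.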