Keith Furst

Why "Know Your Geography" (KYG) Is Key in Fighting Financial Crime

The FFIEC BSA/AML Examination Manual frames institutional risk assessment across four pillars: products and services, customers, transactions, and geographic locations. Most compliance programs invest heavily in the first three. Geography, despite being explicitly called out as a core risk dimension, remains the least developed.

That gap is getting harder to defend. In the first quarter of 2026 alone, FinCEN issued an expanded Southwest Border Geographic Targeting Order covering counties and zip codes across Arizona, California, New Mexico, and Texas, imposed a separate GTO on banks and money transmitters in Hennepin and Ramsey Counties in Minnesota tied to a $300 million child nutrition fraud ring, and published advisories on Chinese money laundering networks facilitating cartel proceeds. Every one of these enforcement actions is geographic at its core. And every one exposes institutions that treat geography as a checkbox rather than a data-driven risk dimension.

The Binary Flag Problem

Most banks assess geographic risk using county-level HIDTA designations. A county is either HIDTA or it is not. This approach has two problems.

First, it is too coarse. A county can contain dozens of zip codes with wildly different risk profiles. Flagging an entire county as high-risk generates thousands of unnecessary alerts on low-risk activity, while missing concentrated pockets of risk in counties that fall just outside the designation. The result is noise — and noise erodes analyst trust in the system.

Second, it is static. County-level flags change infrequently. Drug trafficking patterns, MSB concentrations, and financial crime typologies shift faster than county designations can keep up. Institutions relying on binary flags are always looking at yesterday's risk landscape.

The alternative is machine-learning-driven risk scoring at the zip code level. Instead of a binary flag, each zip code receives a five-tier classification — Very Low through Very High — derived from over a billion data points across government, financial, and proprietary sources. The result is up to a 67 percent reduction in false positive alerts compared to county-level flagging, because the scoring isolates actual risk concentrations rather than painting entire regions with a single brush.
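
As a minimal sketch of the tiering step, assuming a trained model that outputs a risk score between 0 and 1 per zip code (the cutoffs below are hypothetical, not actual production thresholds), the five-tier mapping might look like this:

```python
# Minimal sketch of mapping a model score to a five-tier classification.
# Assumes a trained model that outputs a risk score in [0, 1] per zip code;
# the cutoffs below are hypothetical, not the actual production thresholds.
from bisect import bisect_right

TIERS = ["Very Low", "Low", "Medium", "High", "Very High"]
CUTOFFS = [0.2, 0.4, 0.6, 0.8]  # hypothetical tier boundaries

def risk_tier(score: float) -> str:
    """Map a risk score in [0, 1] to one of the five tiers."""
    return TIERS[bisect_right(CUTOFFS, score)]

print(risk_tier(0.83))  # -> "Very High"
```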

Why Geography Matters More Now

Three recent FinCEN actions illustrate why geographic intelligence has moved from "nice to have" to regulatory expectation.

The first is the expanded Southwest Border GTO, effective March 7, 2026. FinCEN now requires MSBs in designated counties and zip codes across four states to file Currency Transaction Reports for cash transactions between $1,000 and $10,000. The March 2026 expansion added Maricopa and Pima Counties in Arizona along with Bernalillo, Doña Ana, and San Juan Counties in New Mexico — areas not covered by the 2025 orders. FinCEN explicitly tied the expansion to evolving patterns in cartel-related cash movement and fentanyl trafficking proceeds. The key detail: the GTO targets specific zip codes within those counties, not the counties wholesale. FinCEN itself is operating at the zip code level.

The second is the Minnesota fraud GTO issued in January 2026. Fraud rings operating through the Feeding Our Future program stole at least $300 million from federal child nutrition funds, laundering proceeds through shell companies, MSBs, and wire transfers to foreign jurisdictions. FinCEN responded with a GTO covering Hennepin and Ramsey Counties, requiring reporting on transactions of $3,000 or more sent outside the United States. The agency also issued four notices of investigation to Minnesota MSBs and published an alert with red flag indicators for financial institutions. The geographic concentration of the fraud — centered in Minneapolis and St. Paul — was itself a detectable signal that traditional transaction monitoring missed because it was not looking at geographic clustering.

The third is FinCEN's December 2025 announcement of a "data-driven border operation" using advanced data processing to identify illicit networks along the Southwest border. FinCEN signaled that geographic targeting is not a temporary tool but a scalable enforcement model that could be replicated in other regions. The Minnesota GTO, issued weeks later, proved the point.

Cross-Attribute Mismatch: The Signal Not Everyone Is Looking For

Geographic risk is not just about where a transaction occurs. It is about whether the geographic attributes of a transaction are internally consistent.

When a customer's address is in Ohio, their phone area code maps to New York, their IP address geolocates to Eastern Europe, and their counterparty banks through a Florida institution with no branches within 500 miles of the counterparty's stated address — that constellation of mismatches is itself a risk signal, independent of whether any single attribute triggers an alert.

This is cross-attribute anomaly detection: comparing address to phone, address to IP, phone to IP, customer to counterparty, and counterparty to their financial institution. Each comparison yields a match flag and a distance. When multiple dimensions disagree, the probability of legitimate activity drops sharply.
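
A hedged sketch of the pairwise comparison, assuming each attribute has already been geocoded to a latitude/longitude (the attribute names and the 100 km match radius are illustrative, not a production standard):

```python
# Illustrative cross-attribute consistency check: every geographic attribute
# of a transaction is compared pairwise, yielding a distance and a match flag.
# Assumes attributes are already resolved to (lat, lon); the attribute names
# and the 100 km match radius are illustrative, not a production standard.
from itertools import combinations
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def cross_attribute_checks(points, match_radius_km=100):
    """Yield ((attr1, attr2), distance_km, match_flag) for every attribute pair."""
    for (n1, p1), (n2, p2) in combinations(points.items(), 2):
        d = haversine_km(p1, p2)
        yield (n1, n2), d, d <= match_radius_km

attrs = {
    "address": (40.0, -83.0),  # Ohio
    "phone": (40.7, -74.0),    # New York area code
    "ip": (50.4, 30.5),        # Eastern Europe geolocation
}
mismatches = [pair for pair, d, match in cross_attribute_checks(attrs) if not match]
print(f"{len(mismatches)} of 3 attribute pairs disagree")  # escalate when several disagree
```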

Traditional monitoring systems evaluate each attribute in isolation. They check whether an IP is on a blocklist. They check whether a phone number is valid. They check whether an address matches KYC records. What they do not check is whether all of those attributes point to the same geography. That gap is where money mules, shell companies, and synthetic identities hide — because the individual data points pass validation even when the composite picture is incoherent.

What "Know Your Geography" Actually Means

KYG is the geographic parallel to KYC. Just as Know Your Customer requires understanding who is transacting, Know Your Geography requires understanding where — and whether "where" is consistent across every data point in a transaction.

In practice, KYG means collecting over a billion data points from government sources (DEA, FinCEN, Census, ONDCP, FDIC), normalizing them across zip code, county, CBSA, state, and country layers, engineering predictive features, and applying machine learning to produce risk scores at the zip code level. The output is not a single flag but a multi-dimensional risk profile: drug trafficking tier, industry risk concentration, border proximity, TBML vulnerability, elderly population concentration, MSB density, and more.
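
The shape of such a profile might be sketched as follows; the field names and example values are hypothetical, not a published schema:

```python
# Hypothetical shape of a multi-dimensional zip-code risk profile.
# Field names and example values are illustrative, not a published schema.
from dataclasses import dataclass

@dataclass
class ZipRiskProfile:
    zip_code: str
    drug_trafficking_tier: str    # "Very Low" .. "Very High"
    industry_risk: float          # 0-1 concentration of high-risk industries
    border_proximity_km: float    # distance to the nearest border crossing
    tbml_vulnerability: float     # 0-1 trade-based money laundering exposure
    elderly_concentration: float  # share of population over 65
    msb_density: float            # MSBs per 10,000 residents

profile = ZipRiskProfile("85001", "Very High", 0.72, 290.0, 0.61, 0.18, 4.3)
```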

This matters operationally in three places.

At onboarding, geographic risk scores feed directly into CDD risk rating. A customer in a Very High drug trafficking zip with elevated MSB concentration gets proportionate enhanced due diligence — not because the county is flagged, but because the specific zip code warrants it.

In transaction monitoring, geographic enrichment provides new rule dimensions. A wire to a high-elderly-concentration zip from a distant, unknown sender triggers differently than local activity. A counterparty banking 400 miles from their stated address triggers differently than one banking locally. These are signals that do not exist without geographic enrichment.
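
As an illustration, a rule of the first kind might be sketched like this, with all thresholds and field names hypothetical:

```python
# Sketch of a monitoring rule that only exists with geographic enrichment:
# a wire into a high-elderly-concentration zip from a distant, unknown sender
# alerts, while local activity does not. All thresholds and field names are
# hypothetical.
def elder_targeting_alert(txn, zip_profiles, sender_distance_km, known_sender):
    """Return True when the wire warrants an alert under this rule."""
    profile = zip_profiles.get(txn["beneficiary_zip"])
    if profile is None:
        return False
    return (
        profile["elderly_concentration"] >= 0.25  # high-elderly zip
        and sender_distance_km > 400              # distant sender
        and not known_sender                      # no prior relationship
    )

profiles = {"33139": {"elderly_concentration": 0.31}}
txn = {"beneficiary_zip": "33139", "amount": 9500}
print(elder_targeting_alert(txn, profiles, sender_distance_km=850, known_sender=False))  # True
```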

In investigations, geographic context turns a suspicious activity report from a narrative into a map. When an analyst can see that five SARs in a quarter all involve counterparties in the same three zip codes, and those zip codes sit in a HIDTA region with elevated TBML vulnerability, the investigation has a geographic thesis before the first interview.

The Regulatory Direction Is Clear

FinCEN's actions in 2025 and 2026 are not subtle. The agency is issuing geographic targeting orders with increasing frequency, expanding their geographic scope, and explicitly stating that this model is scalable. The Southwest border GTO has been renewed and expanded three times in twelve months. The Minnesota GTO applied the same framework to benefits fraud in the Midwest. FinCEN's healthcare fraud advisory published in March 2026 noted a 330 percent increase in BSA reporting on healthcare fraud since the pandemic, with geographic concentration as a key detection indicator.

For financial institutions, the question is no longer whether geographic risk matters. It is whether your program can demonstrate that it assesses geographic risk with the same rigor it applies to customer, product, and transaction risk. If your geographic risk assessment still consists of county-level HIDTA flags and a few lines in your BSA/AML risk assessment, you are behind where regulators expect you to be — and behind where the data can take you.

Geography is not a checkbox. It is a risk dimension. And it is time to start treating it like one.

Keith Furst

THE FUTURE OF ENTITY DUE DILIGENCE


Introduction

The world has gone through an incredible amount of technological transformation over the past ten years.  While it may seem hard to imagine that change will continue at this pace, it’s not only likely to continue, but it will accelerate. There are various functional areas within institutions that support global commerce, but some have been laggards in adopting new technology for a plethora of reasons.

Structural market trends will force organizations to innovate or they will be subject to consolidation, reduction of market share, and, in some circumstances, complete liquidation.  Future-proofing the entity due diligence process is one key functional area that should be part of an organization's overall innovation road map because of the impact of trends such as rising regulatory expectations, disruptive deregulation initiatives, the emergence of novel risks, the explosion of data, quantifiable successes in artificial intelligence (AI), and changing consumer expectations.

Entity due diligence continues to be a struggle for many financial institutions as regulatory requirements such as beneficial ownership continue to expand in breadth and depth.  One of the fundamental struggles is tying the entity to the information used during the due diligence process, which can be scattered across various data sources, manual to access and screen, and at times riddled with data quality issues.

The future of entity due diligence is not written in stone; as will be argued later, there is an opportunity to shape it.  That future is not only about technology, but also about us, people, and the role we play in it.

This paper will outline some of the key trends that will drive the transformation of the entity due diligence process and what the future could start to look like.

Modeling Possible Futures

To understand the future of entity due diligence, we should attempt to understand, to a certain degree, some of the history which led up to its current state.  While it would be impractical to document every historical incident which led up to the current legislative and institutional frameworks, it is useful to imagine entity due diligence as the result of actions and interactions among many distributed components of a complex adaptive system (CAS).

The study of CAS, or complexity science, emerged out of a scientific movement whose goal was to understand and explain complex phenomena beyond what the traditional and reductionist scientific methods could offer.  The movement’s nerve center is the Santa Fe Institute in New Mexico, a trans-disciplinary science and technology think tank founded in 1984 by the late American chemist George Cowan.  Researchers at the institute believe they are building the foundational framework to understand the spontaneous and self-organizing dynamics of the world like never before.  The institute’s founder, Mr. Cowan, described the work they are doing as creating “the sciences of the 21st century.”

CAS are made up of a large number of components, sometimes referred to as agents, that interact and adapt.  Agents within these systems adapt to and evolve with a changing environment over time.

CAS are complex by their very nature, meaning they are dynamic networks of interactions, and knowing the behavior of the individual components doesn’t mean the behavior of the whole can be predicted, or even understood.  They are also adaptive, so that individual and collective behavior can self-organize based on small events or the interaction of many events.  Another characteristic of these systems is that recognizable emergent patterns begin to form, e.g. the formation of cities.

Another key element of CAS is that they are intractable.  In other words, we can’t jump into the future because we need to go through the steps.  The white line in the image below shows the steps of a system from the past to the present, and the red dendrite-like structures are other possible futures that could have happened if a certain action had been taken at a particular point in time.

Source: YouTube TEDxRotterdam - Igor Nikolic - Complex adaptive systems

These models suggest that no one knows everything, can predict everything, or is in total control of the system.  Some entities have greater influence over the evolution of the system as a whole than others, but these models imply that everything can influence the system, even a single person.

The entity due diligence process involves many agents interacting and responding to one another, including, but not limited to: financial institutions, companies, governments, corporate registries, formation agents, challenger firms, criminals, terrorists, and many more.  By examining the complexity of the due diligence space, and how technology is constantly reforming agents’ relationships to one another, can firms, and the people within those firms, help chart a course for the future of due diligence and their place within it?

The Evolution of Criminals and Terrorists and Reactions from Law Enforcement

The history of the anti-money laundering (AML) regime in the United States can be understood through the lens of a CAS.  Before any AML legislation existed, the US government prosecuted criminals such as Al Capone for tax evasion as opposed to other crimes.

In 1970, the US passed the Bank Records and Foreign Transactions Act, known as the Bank Secrecy Act (BSA), to fight organized crime by requiring banks to do things such as report cash transactions over $10,000 to the Internal Revenue Service (IRS).

Another key development in the history of the AML regime was the prosecution of a New England bank for noncompliance with the BSA.  This seemingly small event had far-reaching consequences, as it prompted Congress to pass the Money Laundering Control Act of 1986 (MLCA).

But as we know today, even with the BSA in place, criminals and criminal organizations continue to evolve; cash structuring and the use of money mules are common methods to avoid the reporting requirement.

Drug Cartels

Clearly, the rise of the Colombian and Mexican drug cartels in the 1970s and 80s shows how agents of a CAS act and react to one another.  In 1979, Colombian drug traffickers were killed in a shootout in broad daylight at the Dadeland Mall in Miami.  This event and many others got the attention of US law enforcement, and in 1982 the South Florida Drug Task Force was formed with personnel from the Drug Enforcement Administration (DEA), Customs, Federal Bureau of Investigation (FBI), Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF), Internal Revenue Service (IRS), Army, and Navy.

By the mid-1980s, as the South Florida Drug Task Force started to succeed in reducing the flow of drugs into the US via South Florida, the Colombian cartels reacted and outsourced much of the transportation of cocaine to the US via Mexico with the help of marijuana smugglers.

Mexican drug cartels continue to evolve and innovate: they have reportedly used unmanned aerial vehicles (UAV), more commonly referred to as drones, to fly narcotics from Mexico over the southwest border to San Diego, and have even experimented with weaponizing drones with an improvised explosive device (IED) equipped with a remote detonator.

9/11 and the Evolving Threat of Terrorism

The above examples pale in comparison to the impacts of the 9/11 terrorist attacks on the United States, which killed almost 3,000 people and caused billions of dollars in property damage, economic volatility, cleanup costs, health problems for people living or working near the site, job loss, tax revenue loss, and many other cascading effects.  Shortly after the 9/11 terror attacks, Congress passed the Patriot Act, which was designed to combat terrorism, including its financing.

However, as the US and other countries have enacted laws to prevent the funding of terrorism, terrorists have reacted or evolved by opting to use legitimate funding sources such as government benefits, legitimate income, and small loans to launch low-cost attacks.  The French government estimated the November 2015 Paris attacks cost a maximum of 20,000 euros.

Vehicles have been used as ramming weapons in terror attacks in London, Berlin, Nice, Barcelona, and New York, at a cost amounting to little more than fuel and possibly a rental charge.  This also points to the changing nature of the people engaging in these attacks, who usually have a criminal background and are radicalized by content online, as opposed to operating within a well-financed and organized cell of a larger terror group.

But is it only content online that radicalizes people, or does most terrorist recruitment happen face-to-face?  According to research by Washington University’s Professor Ahmet S. Yayla, Ph.D., just over 10 percent of the 144 people charged with ISIS-related offenses in US courts were “radicalized online.”  Professor Yayla ran a program in a Turkish city on the southern border with Syria to intervene with school-aged children at risk of recruitment by terrorist organizations.  He found that most families weren’t aware that their children were being approached by terrorist recruiters and were open to intervening to ensure their children were not radicalized.

The Professor’s assertion that most terrorist recruitment happens face-to-face seems to make sense because we have all experienced the power of a personal referral.  If a person you like and trust makes a recommendation to you, then you are much more likely to act on it than if you were prompted to act through some passive media online.  This also implies that networks of people exist, in countries where attacks take place, that believe in various terrorist ideologies, but not all of those believers take up arms.

Naturally, this leads to the idea that even if internet service providers (ISP) and social media companies could remove much of the terrorist propaganda online, it wouldn’t stop so-called ‘lone wolf’ terror attacks, as the system would evolve and rely more on face-to-face recruitment while small communities of people with shared beliefs and values find ways to congregate.

However, this doesn’t mean that laws such as the Patriot Act are not effective at preventing terrorism; without a doubt, they create barriers against large-scale terror attacks, but terrorism and its agents are constantly evolving.  While the Patriot Act and similar laws allow for broader surveillance powers by governments, terrorists know this and have reacted by using encrypted messenger applications such as Telegram to communicate and spread propaganda.

All of this suggests that the fight against terrorism needs to be a multi-pronged approach including, but not limited to: preventing terror groups’ access to the financial markets, intervening early with at-risk youth, tackling the socio-economic issues that can contribute to feelings of alienation among impressionable youth, and countering compelling social media delivered by ISIS through public-private partnerships (PPP) that launch an ongoing and strategic counter-narrative.

Drivers of Regulation

The motivations for new regulations can come in many different forms such as combating terrorism, drug trafficking, money laundering, tax evasion, securities fraud, financial fraud, acts of foreign corruption, etc.  As discussed in the previous section, terrorism continues to evolve through a growing number of low-cost attacks, which will keep it at the top of political agendas for years to come.

Data and technology are integral parts of what is driving new regulations on various fronts.  The rise of smartphones, the explosion of data, the proliferation of the internet of things (IoT), and other sensor technology have fundamentally changed the speed at which data can be accessed, transferred, analyzed, and acted upon.  Various regulatory rules examined later in this paper can, in one way or another, be linked to technological transformations.

The most pressing example of how technological transformation can influence regulation can be observed with the Panama Papers data leak.  While it could be argued that the leak was simply the actions of one person, or a group of people, it would only have been possible if the technological framework was already in place to store, distribute, and analyze all of those documents rapidly and truly understand their implications.

Would the Panama Papers data leak even be possible in 1940?

Atomic Secrets Leak

Maybe not on the scale of the Panama Papers, but data leaks are not something completely new: several Americans and Britons helped the Union of Soviet Socialist Republics (USSR) become a nuclear power faster than it could have on its own by leaking military secrets.  Some scientists contend that the USSR and other countries would have obtained a nuclear bomb on their own, but the leaks likely accelerated the process by 12 to 18 months or more.

Klaus Fuchs is commonly referred to as the most important atomic spy in history.  Fuchs was born in Germany and was actively engaged in the politics of the time.  He immigrated to England in 1933 and earned his PhD from the University of Bristol.  Eventually, he was transferred to the Los Alamos labs in the 1940s, where he began handing over important documents about the nuclear bomb’s design and dimensions to the USSR.  There were other spies giving nuclear secrets to the USSR with various motivations, such as communist sympathies or the belief that the more countries had access to nuclear technology, the lower the probability of nuclear war.

The amount of nuclear secrets leaked to the USSR may not have been that massive in terms of storage space, if we imagine all of those documents scanned as images or PDF files, but the implications of the USSR having that information during World War II were very serious.  So, data leaks have not emerged out of nowhere, but the scale and speed at which information can be distributed is, clearly, very different today than it was 70 years ago.  It’s much easier to stick a universal serial bus (USB) drive into a computer than to walk out of an office building with boxes full of files.

The Data Leak Heard Round the World

The Panama Papers had a tsunami effect on global regulations as they prompted countries around the world to re-evaluate their corporate registry requirements and the use of shell companies to hide beneficial ownership.  The first news stories about the Panama Papers leak appeared on April 3, 2016, and just over a month later the US Financial Crimes Enforcement Network (FinCEN) issued the long-awaited Customer Due Diligence (CDD) final rule.  The magnitude of the Panama Papers is revealed in the sheer volume of documents, entities, and high-profile individuals involved, as shown in the image below.

Source: International Consortium of Investigative Journalists

Data is a significant driver of new regulation, which stems from a wide range of activities including money laundering, corruption, human trafficking, tax evasion, etc.  There are a wide variety of regulatory rules coming into full force in 2018 which institutions across the globe had to prepare for, such as the Payment Services Directive (PSD2) in the European Union, the CDD final rule, the New York State Department of Financial Services (NYDFS) risk based banking rule, and others.

The impact of the Paradise Papers is yet to be fully realized, but it's fair to assume that it will contribute to the trend of increased regulation and scrutiny of the financial services industry.  Whether or not an immense amount of explicit wrongdoing is identified, the public perception of offshore tax havens and shell companies continues to take on a negative light.

Based on current regulatory expectations for financial institutions and other structural and market trends, such as the digital experience, new and evolving risks, increased competition, technological progress, the growth of data, and the need to make better decisions and control costs, the entity due diligence process will be drastically different than it is today for the financial services industry.

The Murder of a Journalist

The Panama Papers are still having cascading effects across the globe, and one of the most tragic examples in 2017 was the assassination of the Maltese journalist Daphne Caruana Galizia by a car bomb.  Mrs. Galizia was a harsh critic of Maltese political figures, accusing some of corruption and international money laundering, much of which was revealed by the Panama Papers.  She also highlighted the links between Malta’s online gaming industry and the mafia.  The only other journalist killed in the EU during 2017 was Kim Wall, who was allegedly killed and dismembered by the Danish inventor Peter Madsen.

The difference between the murders of the two journalists is that in the case of Mrs. Galizia, there is a strong indication that her reporting on corruption and money laundering could be the underlying motive for her death.  The death of Mrs. Wall, however, appears to be a more random and unplanned event, as the circumstances that led up to it are still unclear.

According to the Committee to Protect Journalists (CPJ), 248 journalists who reported on corruption have been killed since 1992, and only 7 of those killings happened in the EU.  The top 5 countries for murdered journalists who reported on corruption since 1992 were the Philippines, Brazil, Colombia, Russia, and India, totaling 34, 26, 24, 21, and 17 respectively.

Outside of reporting on corruption, one of the main drivers of journalist murders in Europe from 1992 to the present was war and terrorism, which makes the case of Mrs. Galizia all the more shocking.  The war in Yugoslavia created a high-risk reporting environment for journalists and left 23 of them dead.  On January 7, 2015, two brothers marched into the Charlie Hebdo offices and massacred 12 people, 8 of them journalists, in the worst attack against the Western media since 1992.

Source: http://www.voxeurop.eu/en/2017/freedom-press-5121523

The worst attack against the media worldwide in recent memory occurred in the Philippines in 2009, when gunmen killed 32 journalists and 25 civilians in the Maguindanao province, a predominantly Muslim region of Mindanao.  Terrorism was the common factor that linked the Charlie Hebdo attack and the Maguindanao massacre.  However, there is another common factor underlying the murder of Mrs. Galizia in Malta, the Charlie Hebdo journalists in France, and the reporters in the Philippines, which is that terrorism and corruption can be linked to shell companies.

This is not to say that any of those specific attacks were explicitly linked to the use of shell companies, but there is a common theme that criminals, corrupt politicians, and terrorists use shell companies as a tool to hide their identities, as exposed by the Panama Papers.

Action is already being taken by the European Parliament, which passed a motion in November 2017 stating that Malta’s police and judiciary “may be compromised.”  The murder of this journalist is extremely tragic, and it could prompt more aggressive moves from the EU to bring its member states up to regulatory standards, especially in the area of beneficial ownership.

It could be argued that the EU is only as strong as its weakest link, so can the EU accept its members exhibiting low standards of justice and law enforcement?

Source: Bureau Van Dijk - A Moody’s Analytics Company

Regulations Coming into Force in 2018

There are three important financial service regulations coming into force in 2018 which are the CDD final rule in the United States, PSD2 in the European Union, and the NYDFS risk based banking rule in New York.

The CDD final rule was on the US legislative radar for some time, but it appears the Panama Papers expedited its approval.  While the CDD final rule is a step in the right direction, it places additional burdens on financial institutions without addressing other issues, such as specific US states still allowing companies to incorporate without collecting and verifying beneficial ownership.

The report titled Global Shell Games: Testing Money Launderers’ and Terrorist Financiers’ Access to Shell Companies stated that, “It is easier to obtain an untraceable shell company from incorporation services (though not law firms) in the US than in any other country, save Kenya.”  Two bills introduced in the Senate and House are supposed to address the weak state corporate formation laws, and these will be discussed in a later section.

PSD2 is one of the most unique financial regulations because it is arguably one of the few instances where legislators adopted a proactive as opposed to reactive regulatory framework.  The motivations for the regulation were to increase market competition, consumer protection, and the standardization of infrastructure in the payments sector by mandating that banks create mechanisms for third party providers (TPP) to access customers’ bank information.  What’s interesting about this regulation is that it will have a significant impact on the market, yet there wasn’t a discernible negative event which drove its adoption.

Instead, the regulation is forward-looking, as its supporters realized, quite astutely, that incredible amounts of change are on the horizon for the financial services sector due to technological disruptions.  Since EU regulators can see the future coming, to a certain degree, they have enacted legislation to help shape the future of payments with sensible rules which will help increase standardization, security, and consumer protection.

The NYDFS risk based banking rule is also very interesting because it introduces more accountability for senior leadership at New York-regulated financial institutions to certify the institution’s AML program.  The rule highlights that the reviews the NYDFS conducted identified many shortcomings in transaction monitoring and filtering programs and a lack of governance, oversight, and accountability.

Customer Due Diligence (CDD) Final Rule

In the US, the long-awaited CDD final rule adds the fifth pillar to an effective AML program.  In reality, many financial institutions have already integrated CDD into their overall AML program.  The other four pillars of an effective AML framework, which are built upon an AML risk assessment, are: internal controls, independent testing, responsible individual(s), and training personnel.

Source: Data Derivatives

For example, in terms of transaction monitoring, some banks use the risk rating of a customer as another way to prioritize their alert queues.  In other words, a customer perceived by the institution as higher-risk would have their activity investigated before similar activity of a lower-risk client.
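
A minimal sketch of that prioritization, assuming a hypothetical alert feed where each alert carries a customer risk rating and an alert score:

```python
# Illustrative risk-weighted alert queue: comparable alerts are worked
# highest customer risk first, then by alert score. Ratings and scores
# are hypothetical.
RISK_ORDER = {"high": 0, "medium": 1, "low": 2}

alerts = [
    {"id": "A-101", "customer_risk": "low", "alert_score": 88},
    {"id": "A-102", "customer_risk": "high", "alert_score": 80},
    {"id": "A-103", "customer_risk": "medium", "alert_score": 95},
]
queue = sorted(alerts, key=lambda a: (RISK_ORDER[a["customer_risk"]], -a["alert_score"]))
print([a["id"] for a in queue])  # ['A-102', 'A-103', 'A-101']
```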

One of the major impacts on financial institutions in 2017 was the preparation to comply with the new beneficial owner requirements ahead of the impending May 11, 2018 deadline.  There were definite challenges in complying with this new rule from an operational perspective, especially at large banks, because of the siloed structure of client onboarding and account opening processes.  Banks can have dozens of onboarding and account opening systems, and trying to coordinate where the know your customer (KYC) process stands for a particular customer, at a given point in time, and when the handoff to the next system happens, can be extremely challenging.

Also, the account opener certification created wrinkles in the operations process, and questions arose regarding how long a certification was active and if one certification could support multiple account openings.

Also, the CDD final rule is based on trusting the customer, but as discussed in a recent article in the Journal of Financial Compliance (Volume 1, Number 2) by Anders Rodenberg, Head of Financial Institutions & Advisory in North America for Bureau van Dijk - A Moody's Analytics Company, there are three inherent flaws in asking companies to self-submit beneficial ownership information:

  1. lack of knowledge;

  2. missing authority; and

  3. no line of communication when changes happen.

While US banks would be in compliance with the rule if they simply collected beneficial ownership based on what their customers supplied, that information may not be accurate all of the time.  This is why it would be prudent for a financial institution to collect beneficial ownership from the client and also use a third party data source, such as Bureau van Dijk’s Orbis, as another method of coverage and verification.

If there is a gap between what the customer provided and what Orbis has, it could be a factor to consider when determining which customers should undergo enhanced due diligence (EDD).  Also, every additional beneficial owner identified, whether disclosed or uncovered, creates an opportunity to screen more individuals against adverse media, politically exposed persons (PEP), and sanctions lists.  This allows for greater confidence in an institution’s risk-based approach given the broader coverage.
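
A hedged sketch of that gap check, comparing the customer-disclosed owner list against a third-party extract; exact-name matching is a simplification, as production systems would use fuzzy entity resolution:

```python
# Hedged sketch of a beneficial-ownership gap check: compare what the
# customer disclosed against a third-party extract and treat any gap as
# one input into the EDD decision. Exact normalized-name matching is a
# simplification; real systems would use fuzzy entity resolution.
def ownership_gap(disclosed, third_party):
    """Return owners present in one source but not the other."""
    norm = lambda names: {n.strip().lower() for n in names}
    d, t = norm(disclosed), norm(third_party)
    return {"undisclosed": t - d, "unverified": d - t}

gap = ownership_gap({"Jane Roe", "John Doe"}, {"jane roe", "Acme Holdings Ltd"})
if gap["undisclosed"]:
    print("Candidate for enhanced due diligence:", gap["undisclosed"])
```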

Finally, financial institutions in the US and elsewhere should be monitoring global standards and consider that the CDD Final Rule or similar regulations could be amended from the trust doctrine to the trust and verify concept as detailed under the Fourth Anti Money Laundering Directive (AML4).

Sanctuary Corporate Formation States

There are two bills aimed at addressing beneficial ownership gaps in the US that were introduced in the Senate and House: the True Incorporation Transparency for Law Enforcement (TITLE) Act (S. 1454) and the Corporate Transparency Act (H.R. 3089/S. 1717).  Both proposals cite similar findings on how criminals can exploit weaknesses in state formation laws.  The TITLE Act specifically mentions the Russian arms dealer Viktor Bout, who used at least 12 companies in the US to, among other things, sell weapons to terrorist organizations trying to kill US citizens and government employees.  Additionally, the TITLE Act refers to other major national security concerns, such as Iran using a New York shell company to purchase a high-rise building in Manhattan and transferring millions of dollars back to Iran, an Office of Foreign Assets Control (OFAC) sanctioned country, until authorities found out and seized the building.

Both bills have reasonably good definitions of beneficial ownership and of the requirements to collect identifiable information.  Also, both bills make up to $40 million available for implementation costs, funded by asset forfeiture funds accumulated from criminal prosecutions.  There are common sense proposals in both bills, such as exemptions for publicly traded companies and for companies with a physical presence, a minimum number of employees, and minimum annual revenue.

The one distinction between the two bills is that the TITLE Act requires states to comply with the bill, but it doesn’t penalize states if they fail to comply.  If history is any guide, the TITLE Act could lead to a scenario similar to sanctuary cities, where local city governments fail to cooperate with federal immigration authorities.

If the TITLE Act were adopted, how do we know that states such as Delaware would comply?  Could the US end up with ‘sanctuary corporate formation states’?

The Corporate Transparency Act takes a slightly different approach: states are not required to collect beneficial ownership, but if a state is in noncompliance with the statute, then companies incorporating in that state are required to file their beneficial ownership information with FinCEN.  While it could be argued this bill is slightly better than the TITLE Act, it’s not clear how FinCEN would monitor and enforce against companies out of compliance with this regulation.

The ideal scenario for the US would be for all states to collect beneficial ownership in the same standardized fashion or, in another extreme example, if some states fail to comply, for the federal government to take over the authority to form corporations and remove that right from the states.  The latter scenario would not be ideal and would be very unlikely to pass Congress unless there were some extreme circumstances, possibly equivalent to another 9/11 terrorist attack tied to the use of shell companies in the US.  There would be significant economic and social implications of such a change, so it appears that beneficial ownership doesn’t have enough political capital to initiate such a drastic move.

It also gets into the nuts and bolts of legal theory.  Would the supremacy clause of the US constitution hold up or would states try to nullify federal law, arguing that it was unconstitutional?

Payment Services Directive (PSD2)

On October 8, 2015, the European Parliament adopted the European Commission’s updated PSD2.  The regulation is ushering in a new era of banking regulation positioned to increase consumer protection and security through innovation in the payments space.  The idea is that consumers, both individuals and businesses, are the rightful owners of their data, not the banks.  The regulation mandated that banks provide payment services to TPP via an application programming interface (API) by January 13, 2018.

Essentially, PSD2 will allow TPP to create customer-centric and seamless interfaces on top of banks’ operational infrastructure.  On a daily basis, individual and business customers interact with various social and messaging platforms such as Facebook, LinkedIn, WhatsApp, Skype, etc.  This regulation will allow a whole array of TPP to enter the payments space, accessing the banking data of their loyal customer base and even initiating payments on their behalf, assuming the consumer has given authorization to do so.

Consumers are demanding real time payments because many other parts of the digital experience are real time, so why should it be any different for payments?  Financial institutions have made some progress in the payments space, as evidenced by the successful launch and market penetration of Zelle, backed by over 30 US banks.  Zelle allows customers of participating banks to send money to another US customer, usually within minutes, with an email address or phone number.

The integration is rather seamless because Zelle can be accessed through the customer’s existing mobile banking application and a separate Zelle application doesn’t have to be downloaded, though one is now offered for customers of non-participating banks.  It appears that the launch of Zelle, which required extensive collaboration among many leading US banks, was initiated by the risk that challenger payment providers such as Venmo and Square Cash posed to the industry.

The European Parliament astutely realized that to encourage innovation and competition in the payments space, new rules would need to be enacted.  The fact that PSD2 was initiated by the government actually strengthens cybersecurity as opposed to diluting it.  The reason is that, whether we like it or not, innovation will march on and new FinTech players will continue to emerge.  Since the European Parliament is taking a proactive role in the evolution of the payments space, it allows the industry as a whole to think about best practices for security and authentication.

PSD2 requires strong customer authentication (SCA), which falls into three basic categories:

  • Knowledge (e.g. something only the user knows)

  • Possession (e.g. something only the user possesses)

  • Inherence (e.g. something the user is)

The third category, inherence, opens up one of the most promising applications of artificial intelligence which is biometric authentication, but more specifically facial recognition.  This will apply to both individual customers and entities because financial institutions may start to store biometric data of executives who are authorized to perform specific transactions.  Biometrics and how it could impact entity due diligence will be discussed later in this paper.
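
As a hedged illustration of how SCA composition works (PSD2 defines SCA as authentication based on two or more independent elements drawn from these categories; the element names below are hypothetical), a checker might look like this:

```python
# Illustrative SCA composition check. PSD2 defines strong customer
# authentication as two or more independent elements drawn from the
# knowledge / possession / inherence categories; the element names
# below are hypothetical.
FACTOR_CATEGORIES = {
    "password": "knowledge",
    "pin": "knowledge",
    "registered_device": "possession",
    "otp_token": "possession",
    "face_match": "inherence",
    "fingerprint": "inherence",
}

def satisfies_sca(presented):
    """True when the presented elements span at least two categories."""
    categories = {FACTOR_CATEGORIES[e] for e in presented if e in FACTOR_CATEGORIES}
    return len(categories) >= 2

print(satisfies_sca(["password", "face_match"]))  # True: knowledge + inherence
print(satisfies_sca(["password", "pin"]))         # False: one category only
```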

New York State Department of Financial Services (NYDFS) Risk Based Banking Rule

The NYDFS risk based banking rule is requiring covered institutions to certify they are in compliance with the regulation by April 15, 2018.  The rule is the first of its kind and identifies the key components of effective transaction monitoring and filtering programs.  As mentioned earlier, this rule has highlighted the need for senior management to have greater governance, oversight, and accountability into the management of the transaction monitoring and sanctions screening programs of their institution.

Some of the requirements are straightforward and well known in the industry, such as conducting a risk assessment and ensuring a robust KYC program is in place.  However, this rule is unique in the sense that it highlights very specific system, data, and validation requirements.  Transaction and data mapping were highlighted as key activities specific to the implementation of transaction monitoring and sanctions screening systems.

The focus on data and sound quantitative methods is not new from a regulatory perspective, because many of the requirements in the NYDFS rule can be found in the Office of the Comptroller of the Currency’s (OCC) paper, Supervisory Guidance on Model Risk Management.  The OCC’s paper has traditionally applied to market, credit, and operational risks, such as ensuring the financial institution is managing its risk properly so that it has enough capital to satisfy reserve requirements and has conducted sufficient stress testing to ensure different market scenarios can be endured.

Many of the high-dollar enforcement actions by the NYDFS have involved foreign banks and specifically cite the AML risk of correspondent banking.  There have been several phases to the enforcement actions: some banks actually engaged in systematic wire-stripping, the deleting or changing of payment instructions, to evade US sanctions.  As large financial institutions have moved away from these practices, the enforcement actions have cited deficiencies in other aspects of institutions’ AML programs.

This brings us back to a common theme of CAS.  For anyone who has worked with many aspects of an institution’s AML program, it’s complex.  It's not only the systems which are supposed to screen for sanctions and monitor suspicious activity; there are other factors at play such as the interactions between interdepartmental staff, vendor management, emerging financial crime trends, new regulatory rules, staff attrition, siloed information technology (IT) systems, and general technology trends, among other things.

In a sense, one interpretation of what the NYDFS rule says, at its essence, is that institutions need to do a better job at managing complexity.  And to manage complexity effectively, in complex organizations, support is needed from senior leadership so requiring the annual certification does make sense.

Derisking and Hawala Networks

An interesting consequence of the enforcement actions is that they have led to the derisking of correspondent banking relationships by US and European banks seeking to reduce the risk of severe financial penalties and reputational damage.  According to a 2017 Accuity research report, Derisking and the demise of correspondent banking relationships, there has been a 25% decrease in global correspondent banking relationships, largely due to strategies executed by US and European banks.

This has reportedly left a huge window of opportunity for other countries to move in, and China has been at the forefront, increasing its correspondent banking relationships by 3,355% between 2009 and 2016.  The reduction in correspondent banking relationships increases the pressure on individuals and businesses in emerging markets to seek out alternative methods of finance, which could increase the power of criminal groups and other nefarious actors offering shadow banking services.  Accuity’s Global Head of Strategic Affairs, Henry Balani, opined on the consequences of derisking by stating, “Allowing de-risking to continue unfettered is like living in a world where some airports don’t have the same levels of security screening – before long, the consequences will be disastrous for everyone.”

There are informal money transfer systems outside the channels of traditional banking, such as hawala in India, hundi in Pakistan, fei qian in China, padala in the Philippines, hui kuan in Hong Kong, and phei kwan in Thailand.  Hawala is an informal money transfer system that has been used for centuries in the Middle East and Asia; it facilitates money transfer without money movement through a network of trusted dealers, hawaladars.  The origins of hawala can be found in Islamic law (Shari’a); it is referred to in texts of Islamic jurisprudence as early as the 8th century and is believed to have helped facilitate long distance trade.  Today, hawala or hundi systems can be found operating in parallel with traditional banking systems in India and Pakistan.

According to a report issued by FinCEN, hawala transactions have been linked to terrorism, narcotics trafficking, gambling, human trafficking, and other illegal activities.  In John Cassara’s book, Trade-Based Money Laundering: The Next Frontier in International Money Laundering Enforcement, he cites evidence that funding for the 1998 US embassy bombing in Nairobi flowed through a hawala office in the city's infamous Eastleigh neighbourhood, an enclave for Somali immigrants.

In 2015 it was reported that across Spain, groups such as ISIS and the al-Qaeda-affiliated Nusra Front are funded through a network of 250 to 300 shops - such as butchers, supermarkets and phone call centres - run by mostly Pakistani brokers.

Could the decline of global correspondent banking relationships strengthen informal remittance systems such as Hawala because the average business or person in an impacted geographic area or industry has fewer options now?

This is not to say that US and European regulators should not have aggressively enforced against financial institutions’ compliance failures, but they may need to reevaluate their strategies going forward to ensure the integrity of the global financial system without causing the unintended, and sometimes hard to predict, consequences of derisking.  It’s not only about preventing terrorist financing; it's also about allowing law abiding businesses and citizens access to the global financial system so they don’t have to use shadow banking networks, which can strengthen various aspects of black markets and criminal networks.

Correspondent banking became increasingly costly because correspondent banks started reaching out to their customers (respondent banks) to get more information about the respondent banks’ customers.  The correspondent bank would contact the respondent bank through a request for information (RFI) to understand who the customer was and the nature of the transaction.  This created a long and laborious process for correspondent banks to complete, and the small revenues generated didn’t justify the operational costs and compliance risks of serving high-risk jurisdictions and categories of customers.

One of the main components of the US Patriot Act was the customer identification program (CIP), but in the case of correspondent banks, how do they really know if the respondent banks are conducting effective due diligence on their own customers?  While it wasn’t the correspondent bank’s responsibility to verify the identity of each and every one of the respondent bank’s customers, during some AML investigations, at certain banks, it almost went to that length.

This brings up an interesting question about innovation in the correspondent banking space: could there be a way for respondent banks to verify their own customers’ identities and somehow share that with their correspondent banks?  A number of problems arise with this, such as data privacy, but there have been companies exploring this possibility to prove, to a certain extent, that respondent banks’ due diligence procedures are robust and accurate.

The Financial Action Task Force (FATF) has stated that, “de-risking can result in financial exclusion, less transparency and greater exposure to money laundering and terrorist financing risks.”  If punitive fines against financial institutions by regulators was a major contributor to the derisking phenomenon, then what is the role of regulators in supporting innovation to reverse the effects of derisking to ensure the integrity and transparency of the global financial system?

Regulatory Sandboxes

Regulatory sandboxes may be one of the key ingredients needed for compliance departments to innovate, especially in the financial services sector.  Financial institutions are under severe scrutiny for money laundering infractions and the United States regulatory regime is notoriously punitive based on the amount of fines levied against institutions which failed to comply.

It was reported that HSBC Holdings engaged the artificial intelligence (AI) firm, Ayasdi, to help reduce the number of false positive alerts generated by the bank’s transaction monitoring system.  According to Ayasdi, during the pilot of the technology HSBC saw a 20% reduction in the number of investigations without losing any of the cases referred for additional scrutiny.

The bank’s Chief Operating Officer Andy Maguire stated the following about the anti-money laundering investigation process:

the whole industry has thrown a lot of bodies at it because that was the way it was being done

HSBC is one of the world’s largest banks which was fined $1.92 billion in 2012 by U.S. authorities for allowing cartels to launder drug money out of Mexico and for other compliance failures.  

So, why would HSBC be one of the first banks to test out an unproven technology and vendor in an area with so much scrutiny and risk?

First, it's worth noting that the pilot took place in the United Kingdom and not the United States.

While there are many factors at play in a bank’s decision making process, clearly the regulatory sandbox offered by the United Kingdom’s Financial Conduct Authority (FCA) could have been a key factor in piloting the new technology.

In November 2015, the FCA published a document titled the ‘Regulatory Sandbox’ which describes the sandbox and the regulator’s need to support disruptive innovation.  The origin of the regulatory sandbox goes beyond supporting innovation because the FCA perceives their role as critical to ensure the United Kingdom’s economy is robust and remains relevant in an increasingly competitive global marketplace.  The FCA’s perception of their essential role in the continued growth and success of the United Kingdom’s financial services sector is succinctly summarized below:

To remain Europe’s leading FinTech Hub, we have to ensure that we continue to be an attractive market with an appropriate regulatory framework.

The FCA is not the only regulatory regime to discuss the importance of innovation, as government officials in other countries have publicly discussed technology as a means to advance various functions within the economy.  On September 28, 2017, at the Institute of Singapore Chartered Accountants’ Conference, the Deputy Prime Minister of Singapore, Teo Chee Hean, made the following comments about the use of technology in the fight against transnational crime:

"We will increasingly have to use technology and data analytics (to strike) back at them, to detect and pursue transnational crime and money laundering, strengthen our regulatory framework enforcement approach, and collaborate more closely with our international counterparts…

Mr. Teo’s comments about transnational crime make sense because Singapore is arguably the safest country in Southeast Asia, and one of the safest countries in the world.  Hence, Singapore’s touchpoint with crime is at the transnational level, in the form of trade finance and other financial instruments through its extensive banking system.

Singapore also has its own regulatory sandbox and on November 7, 2017 just on the heels of Mr. Teo’s comments, it was reported that OCBC engaged an AI firm which helped it reduce its transaction monitoring system alert volume by 35%.

It's not only financial institutions that are experimenting with AI to fight financial crime: the Australian Transaction Reports and Analysis Centre (AUSTRAC) collaborated with researchers at RMIT University in Melbourne to develop a system capable of detecting suspicious activity across large volumes of data.

Pauline Chou at AUSTRAC indicated that criminals are getting better at evading detection and the sheer transaction volume in Australia requires more advanced technology.  Ms. Chou told the New Scientist: "It's just become harder and harder for us to keep up with the volume and to have a clear conscience that we are actually on top of our data."

It’s worth noting that Australia supports its own innovation hub and sandbox to explore the prospects of fintech while doing a better job of combating financial crime, terrorist financing, and various forms of organized crime.

On October 26, 2016, the Office of the Comptroller of the Currency (OCC) announced it would create an innovation office to support responsible innovation.  While several countries have embraced the notion of a regulatory sandbox, US regulators prefer different terminology to ensure financial institutions are held responsible for their actions or lack thereof.  The OCC Chief Innovation Officer Beth Knickerbocker was quoted as saying that the OCC prefers the term “bank pilot” to “regulatory sandbox,” which could be misinterpreted as experimenting without consequences.

According to a paper published by Ernst and Young (EY), China has leapfrogged other countries in terms of fintech adoption rates which is mainly attributed to a regulatory framework that is conducive to innovation.

As discussed, financial institutions are beginning to leverage AI to reduce false positive alerts from transaction monitoring systems and identify truly suspicious activity.  The same concept also applies to the future of entity due diligence.  As new technology is developed to simplify and optimize the due diligence process, organizations operating within jurisdictions that support innovation will be more likely to be early adopters and reap the benefits.

Digital Transformation

It’s hard to imagine surviving a day without our phones, as they have become such integral parts of our daily lives.  Being able to send an email, make a phone call, surf the internet, map directions, and watch a video all from one device that fits into our pockets is somewhat remarkable.  At the same time, it’s somewhat expected, given that computing power has roughly doubled every two years in line with Moore’s law.  It’s not only mobile phones: we can see the digital transformation everywhere, even in the public sector, which can be slow to adopt new technology.

The Queens Midtown Tunnel, which connects the boroughs of Queens and Manhattan in New York, has converted to cashless tolling, joining facilities in California, Utah, and Washington.  Cashless tolling carries a large upfront cost to implement the necessary supporting systems, and construction is needed to renovate the infrastructure.  Also, other bridges have reported a spike in uncollected tolls and a loss in revenue after going cashless.

However, there are significant benefits, such as reducing traffic congestion in the city, which helps alleviate some of the air pollution from idling vehicles.  Also, the staff needed to facilitate the collection, transportation, and counting of bills and coins represents a high administrative cost for the city.  This highlights a concern many people share: that automation, and technological innovation in general, will take jobs away from people.  Legislators, however, believe the advantages of cashless tolling outweigh the disadvantages in the long term.

Similarly, as we go through various digital transformations as individuals, businesses need to follow suit or risk being disrupted, given that consumer expectations are evolving or, at the very least, open to easier ways of doing things.  Uber, along with other assetless firms, is a strong example of a company that leveraged technology to completely disrupt a historically stagnant market, the taxi industry.

In 2001, 2006, and 2011, there was only one technology company, Microsoft or Apple, listed among the top 5 companies in the US by market capitalization.  In 2016, only five years later, all 5 were technology companies, which clearly shows the speed at which times have changed.  The Economist published an article, “The world’s most valuable resource is no longer oil, but data,” which highlights that all 5 of those companies have created elaborate networks, based on physical devices, sensors, or applications, to collect enormous amounts of data.

Source: http://www.visualcapitalist.com/chart-largest-companies-market-cap-15-years/

Google, a subsidiary of Alphabet, derives a major share of its revenue from advertising.  The reason Google is able to deliver so much economic value is the massive amount of data collected on users, combined with advanced machine learning algorithms that make relevant and meaningful ad recommendations based on the user’s preferences while surfing the internet.  Google has fundamentally transformed the advertising industry because all of its metrics can be measured and tracked, as opposed to traditional advertising media such as print, television, and radio, which can be measured to a degree, but not to the extent of knowing whether the user acted upon the ad or not.

The Rise of Alternative Data

The natural byproduct of governments, companies, and people going through these various forms of digital transformation is that it creates enormous amounts of data.  According to research conducted by SINTEF ICT in 2013, it was estimated that 90% of the world’s data had been created in the preceding two years.

Dr. Anasse Bari, a New York University professor, contends that big data and deep learning are creating a new data paradigm on Wall Street.  Dr. Bari explains that diverse data sets such as satellite images, people-counting sensors, container ships’ positions, credit card transaction data, jobs and layoffs reports, cell phones, social media, news articles, tweets, and online search queries can all be used to make predictions about future financial valuations.

Source: https://www.promptcloud.com/blog/want-to-ensure-business-growth-via-big-data-augment-enterprise-data-with-web-data/

For example, data scientists can mine satellite images of the parking lots of major retailers over a period of time to predict future sales.  Professor Bari was part of a project that analyzed nighttime satellite images of the earth as a way to predict gross domestic product (GDP) per square kilometer, the theory being that more light in a given area implies a higher GDP.

Many institutions have already incorporated social media as an alternative data source for AML and fraud investigations, and even for merger and acquisition (M&A) deals.  During the analysis phase of an M&A deal, social media could be one of the data sources to leverage by conducting a sentiment analysis of the target company.  Sentiment analysis is a type of natural language processing (NLP) algorithm which can determine, among other things, how people feel about the company.  Insights such as people loving the company’s products, or sharing similar complaints about poor customer service, may have an impact on the final valuation.
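
To make the idea concrete, below is a minimal sketch of lexicon-based sentiment scoring aggregated over social media posts about a hypothetical target company.  The word lists, posts, and company name are invented for illustration; a production system would use a trained NLP model rather than a hand-built lexicon.

    # Minimal lexicon-based sentiment sketch over posts mentioning a
    # hypothetical M&A target ("Acme Corp"). All data is invented.
    POSITIVE = {"love", "great", "excellent", "reliable", "recommend"}
    NEGATIVE = {"terrible", "slow", "rude", "broken", "complaint"}

    def score_post(text):
        """Count +1 per positive word and -1 per negative word."""
        words = text.lower().split()
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    posts = [
        "I love Acme Corp products, would happily recommend",
        "Acme Corp customer service was rude and slow",
        "Excellent build quality, very reliable",
    ]

    scores = [score_post(p) for p in posts]
    overall = sum(scores) / len(scores)
    print(f"per-post scores: {scores}, average sentiment: {overall:+.2f}")

Averaging the per-post scores gives a crude overall sentiment signal for the target company, which is the aggregation idea described above.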

Sentiment analysis can be a particularly powerful method, as illustrated by an anecdote in James Surowiecki’s book, The Wisdom of Crowds.  In 1907, Francis Galton observed that when he averaged all the guesses from participants in an ox weight-judging competition at a local fair, the average was more accurate than the individual guesses, including those of supposed cattle experts.  The method was not without its flaws, because if people were able to influence each other’s responses it could skew the accuracy of the results.  However, it does point to the powerful insights that could be extracted from social media platforms, which host large crowds.

Another potential area for alternative data in the financial services sector is a network of nanosatellites that leverage synthetic-aperture radar technology to, among other things, track ships on Earth.  ICEYE is a startup that has raised funding to launch a constellation of synthetic-aperture-radar-enabled satellites which can observe the Earth through clouds and in darkness, something traditional imaging satellites cannot do.

Source: ICEYE - Problem Solving in the Maritime Industry

Trade finance, supported by financial institutions, is one of the key mechanisms facilitating international trade, and much of that trade travels from one country to another on vessels.  North Korea’s aggressive missile tests and nuclear ambitions are creating a volatile political environment and increasing the focus on, and exposure to, sanctions risk.  Financial institutions don’t want to be linked to vessels dealing with North Korea, but in some circumstances it can be hard to know for sure when the due diligence process is limited to traditional data sources.

Marshall Billingslea, the US Treasury Department’s assistant secretary for terrorist financing, explained to the US House of Representatives Foreign Affairs Committee that ships turn off their transponders when approaching North Korea, stock up on commodities such as coal, then turn them back on as they sail around the Korean peninsula.  The ships would then stop at a Russian port to make it appear that their contents came from Russia, and then sail back to China.  The other challenge is that North Korea doesn’t have radar stations that feed into the international tracking systems.  This type of satellite technology could be used by multiple stakeholders to help identify which vessels behave suspiciously near North Korea and could, to a certain extent, make sanctions more effective by applying pressure on noncompliant vessels.

Source: https://www.businessinsider.nl/north-korea-why-un-sanctions-not-working-2017-9/

A recent report by the US research group C4ADS highlights another tactic that North Korea uses to evade sanctions: creating new webs of shell and front companies to continue operations.  This sentiment is echoed in an October 13, 2017 report by Daniel Bethencourt of the Association of Certified Anti-Money Laundering Specialists (ACAMS) publication MoneyLaundering.com, which notes that North Korean front companies "use a series of perpetually evolving sanctions-evasion schemes" to continue the country’s proliferation of its nuclear weapons program.

Changing Consumer Expectations

The new paradigm for the customer experience is Disney World.  For anyone who has travelled to Disney World recently, the MagicBand sets the bar for our expectations as consumers.  The MagicBand is based on radio-frequency identification (RFID) technology, is shipped to your home ahead of your upcoming trip, and can be personalized with names, colors, and other designs for everyone in your party.

The MagicBand can be used to buy food and merchandise, enter your hotel room, look up photos taken by Disney staff, and even enter your preselected rides.  There are water rides at Disney, and the last thing you want is to carry belongings that get soaked.  The idea is to make the experience personalized, frictionless, and pleasurable.  Since companies are merely groups of individuals, as our individual expectations change so will our organizational ones, even if, at some firms, it happens at a slower pace.

The speed at which consumers adopt new technologies has advanced rapidly when compared with the historical penetration of earlier technologies such as electricity and the refrigerator.  The adoption rates of the cell phone and the internet exhibit a steeper incline over a shorter timeframe than earlier technologies, as seen in the image below.

Source: https://hbr.org/2013/11/the-pace-of-technology-adoption-is-speeding-up

Bank in a Phone

While banks have had various levels of success with digital transformation, one stands out from the rest of the pack.  DBS Bank launched Digibank, which is essentially a bank in your phone, in India in April 2016 and in Indonesia in August 2017.  As a testament to the bank’s success, in August 2016, at a Talent Unleashed competition, DBS’ chief innovation officer, Neal Cross, was named the most disruptive Chief Innovation Officer (CIO)/Chief Technology Officer (CTO) globally.  Mr. Cross was judged by prominent business figures including Apple co-founder Steve Wozniak and Virgin Group founder Sir Richard Branson.

Source: https://www.dbs.com/newsroom/DBS_Neal_Cross_recognised_as_the_most_disruptive_Chief_Innovation_Officer_globally

DBS’ strategy with Digibank is succinctly summarized by its chief executive officer, Piyush Gupta, in the quote below:

"With digibank, we've built a bank that pulls together the power of biometrics, natural language, artificial intelligence and in-built security in one offering. We believe this mobile-led offering represents the future of banking."

While Digibank is targeted at retail banking in emerging markets, it sets a clear precedent of what’s to come in the future, even for institutional banking in developed economies.  For example, as customer service chatbot functionality advances, banks will begin to deploy chatbots as an option to serve large banking customers.  While some customers will prefer to speak to an actual person, the fact that these chatbots are serving those same people in other sectors and businesses will drive adoption by financial institutions as customer behavior changes.

This has had profound effects on the financial services industry, which, in some sense, is struggling to keep up with the fervent pace of digitalisation across various aspects of its operations.

Self Service in Spades

Self-service machines are not new, as we have been interacting with automatic teller machines (ATMs) for years.  The first ATM appeared in Enfield, a suburb of London, on June 27, 1967 at a branch of Barclays bank.  John Shepherd-Barron, an engineer at a printing company, is believed to have come up with the cash vending machine idea and approached Barclays about it.

The first ATMs were not very reliable and many people didn’t like them, but banks continued to install them despite the lack of customer satisfaction.  In the United Kingdom in the 1960s and 70s, there was growing pressure from the trade unions for banks to close on Saturdays.  Hence, the executive leadership of banks seemed to think that ATMs were a good way to appease the unions and customers while reducing labor costs.  This is another historical example of how different agents of a CAS act, react, and learn from each other, and how automation was seen as a remedy for all those involved.

This brings up a number of points about the transformative effect of new technology.  First, ATMs are densely present in most major cities today, and their transformative effect on us has become almost invisible.  Second, it seems that most bank employees were not concerned about being replaced by an ATM.  How could they be?  It performed so poorly and couldn’t do the same quality of job as a human bank teller, at least at that time.

After society accepted ATMs as a part of banking, did we even notice as they got better?

Today, some pharmacies in low traffic areas will have only one employee working in the front of the store, acting like an air traffic controller and directing customers to self-service checkout systems.  The self-service systems in most pharmacies today are similar to ATMs in the sense that they have predefined responses and can’t learn; they are not embedded with intelligence.  These systems can only give feedback based on the user’s physical interaction, usually via a touch screen, as opposed to a customer’s voice.  Occasionally, the employee will intervene if the customer attempts to buy an age-restricted item, but many times it's possible to complete a transaction with no human interaction at all.

A similar transformation will continue to evolve in the entity due diligence space, where the consumer, or the account opener, is directed to a system and guided through the onboarding process by a series of prompts.  Self-service onboarding systems are already beginning to emerge, prompting customers to update their information based on policy timeframes, refresh expired documents, or reverify information required for the bank to conduct its periodic review, if applicable.  While this trend appears to be strongest in the retail banking market right now, it will certainly become the standard for commercial and wholesale banking in the future; it has to.
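
As a rough illustration, the sketch below implements a risk-based periodic review trigger of the kind such a portal might use.  The review intervals and risk tiers are hypothetical policy values, not regulatory requirements.

    from datetime import date, timedelta

    # Hypothetical policy intervals: high-risk reviewed yearly, medium
    # every two years, low every three years.
    REVIEW_INTERVALS = {"high": timedelta(days=365),
                        "medium": timedelta(days=730),
                        "low": timedelta(days=1095)}

    def needs_refresh(risk_rating, last_review, today):
        """Return True when the customer should be prompted to reverify."""
        return today - last_review >= REVIEW_INTERVALS[risk_rating]

    if needs_refresh("high", last_review=date(2016, 11, 1), today=date(2017, 12, 1)):
        print("Prompt customer to refresh documents and reverify information.")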

The Rise of Non-documentary Evidence

As discussed in previous sections, consumer expectations are changing and the new customer experience model is Disney’s MagicBand.  Additionally, financial institutions can develop platforms that encourage their customers to get more engaged in the due diligence process by refreshing their own documentation collected for regulatory purposes.  However, as highlighted earlier, regulatory expectations are likely to increase over time, which means that offloading all of the heavy lifting to the client is not the best long-term strategy.

Hence, other activities need to be planned to reduce the amount of work the customer has to do on their own and to continue to evolve the due diligence process so it becomes as seamless as possible.  There are other factors at play as well, such as the risk of losing customers to more mature digital banks with better onboarding processes, or even losing business to fintechs.  Again, fintechs are taking advantage of the current regulatory environment, which puts enormous regulatory pressure on depository institutions and creates an opportunity for fintechs to offer snap-on financial products linked to a customer’s bank account.

Another strategy that can be developed for the due diligence process, in the US and other countries, is to leverage non-documentary methods of identification as described in the Federal Financial Institutions Examination Council (FFIEC) Bank Secrecy Act (BSA)/Anti-Money Laundering (AML) Examination Manual.

The determination of what data could satisfy non-documentary requirements, and when to require a potential client to supply the actual document, could follow the risk-based approach.  For example, if during onboarding the customer triggers the EDD process based on the institution's risk rating methodology, then physical documents could be requested.  On the other hand, if a potential customer is onboarded and determined to be low-risk, this could allow for more flexibility in satisfying the policy requirements, including using non-documentary evidence.
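
A minimal sketch of how this routing decision might look in code follows.  The risk score threshold, field names, and verification labels are illustrative assumptions, not a prescribed methodology.

    # Route an applicant to documentary or non-documentary verification
    # under a risk-based approach. Threshold and inputs are hypothetical.
    def verification_method(risk_score, edd_triggered):
        """Higher-risk applicants must supply physical documents; low-risk
        applicants may be verified against independent data sources such
        as credit bureaus or public records."""
        if edd_triggered or risk_score >= 70:
            return "documentary"        # request certified physical documents
        return "non-documentary"        # verify via independent data sources

    print(verification_method(risk_score=35, edd_triggered=False))  # non-documentary
    print(verification_method(risk_score=80, edd_triggered=False))  # documentary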

As will be discussed later in this document there will be opportunities to leverage the advances in robotic process automation (RPA) and AI to further streamline the due diligence process by automating specific portions of it.

Emerging Risks

There are many emerging risks for financial institutions as the transformative effects of the digital age continue to shape and reshape the world, business, consumer expectations, and risk management.  In the OCC’s Semiannual Risk Perspective for Spring 2017 report, four key risk areas were highlighted: strategic risk, loosening credit underwriting standards, operational (cyber) risk, and compliance risk.

The theme underlying all of these key areas was the influence of digitalisation, which amplified the complex, evolving, and non-linear nature of risk.  For example, while strategic risk and loosening credit underwriting standards were listed separately, one leads to the other.  In other words, strategic risk was created by nonfinancial firms, including fintech firms, offering financial services to customers, which forced banks to respond and adjust credit underwriting standards in an attempt to retain market share.

Cyber risk wasn’t a result of the interactions between competitors, but something that arose out of the increased digitalisation of the modern world, including business processes.  Cyberattacks succeed, however, because of the immense human and financial capital invested in them by foreign governments and large networks of bad actors.

Financial Technology (Fintech)

Financial technology, or fintech, is not something new, because innovation has always been a part of finance.  What’s emerging is that an increasing number of firms are offering services based on fintech innovations directly to customers and are threatening traditional financial institutions’ market share.

The modern multi-purpose credit card has its roots in a 1949 dinner in New York City, when the businessman Frank McNamara forgot his wallet, so his wife had to pay the bill.  In 1950, Mr. McNamara and his partner returned to the Major's Cabin Grill restaurant and paid the bill with a small cardboard card, now known as the Diners Club Card.  During the card’s first year of business, membership grew to 10,000 cardholders, with 28 participating restaurants and 2 hotels.

On February 8, 1971, the National Association of Securities Dealers Automated Quotations (NASDAQ) began as the world’s first electronic stock market, trading over 2,500 securities.  In 1973, 239 banks from 15 countries formed the cooperative, the Society for Worldwide Interbank Financial Telecommunication (SWIFT), to standardize communication about cross-border payments.

There were other companies, such as Bloomberg and Misys, which built risk management infrastructure or data services to serve the financial services sector.

The difference between the major changes of the past and today is that, in the past, innovation came from within, and if it did come from an external company it was designed to support the financial services sector rather than threaten its market dominance.  Today, some fintechs are looking to make the financial services sector more efficient with better software, but other firms are looking to take market share by offering complementary financial service products directly to consumers.

Arguably, the first sign that fintech could be in direct competition with the financial services sector was the emergence of Confinity, now PayPal, in 1998.  Confinity was originally designed as a mobile payment platform for people using Palm Pilots and PDAs.  The company was acquired by eBay in 2002, transforming the business of payments.  PayPal disrupted the sector by building a payments ecosystem that allows electronic funds transfers that are typically faster and cheaper than traditional paper checks and money orders.

The global financial crisis of 2008 was a pivotal point in the history of fintech, laying the foundation for more challenger firms to enter the financial services space.  The collapse of major financial institutions such as Lehman Brothers, and the acquisitions that consolidated other major financial institutions in the US and Europe, created negative public sentiment towards the industry, and regulators demanded sweeping changes to ensure that type of crisis wouldn’t happen again.

Given the sheer magnitude of the 2008 financial crisis, many institutions were so focused on remediation efforts and meeting new regulatory demands that innovation was not a top priority.  At the same time, however, there were significant technological advances in smartphones, big data, and machine learning, which allowed fintechs to disrupt the sector - as it lay stagnant for a few years - with consumers more open to digital alternatives.

One financial service that is ripe for disruption is the mortgage loan process, which is extremely laborious and, according to CB Insights, costs an average of $7,000, involves more than 400 pages of documents, requires more than 25 workers, and takes roughly 50 days to complete.  Clearly, this is a concern, and consumers may opt for nontraditional online lenders to avoid the painful experience of traditional banking.

According to a report by PricewaterhouseCoopers (PwC), large financial institutions across the world could lose 24 percent of their revenues to fintech firms over the next three to five years.  Today, fintechs are focusing on a wide array of applications that can be categorised as consumer-facing or institutional, including services such as business and personal online lending, payment applications, mobile wallets, and robo advisors.

Some fintechs are marketing directly to consumers and taking advantage of the power of digital platforms and the current regulatory environment, which puts enormous pressure on depository institutions.  As discussed in the digital transformation section, this creates an opportunity for fintechs to offer snap-on financial products linked to a customer’s bank account without the regulatory burden of being a bank.  CB Insights created the graphic below to highlight 250 fintech startups transforming financial services.

Source: https://www.cbinsights.com/research/fintech-250-startups-most-promising/

Financial institutions are not only competing with fintechs for market share; given the impacts of the digital era, they are also in a fight for talent with technology companies more generally.  Goldman Sachs CEO Lloyd Blankfein hosted an event in September 2017 for 250 students at a New York public college which traditionally wouldn’t have been on the prestigious firm’s recruiting radar.  Mr. Blankfein stated the following about Goldman’s current outlook and modified talent sourcing strategy:

"It wasn't an act of kindness on my part, or generosity, or trying to create diversity; it was as pure selfish, naked self-interest, we wanted to really extend our net further because everybody’s involved pretty much in a war for talent. And we compete against obviously all the other financial services firms, but we compete against all the technology firms."

Some financial institutions are taking an alternative approach, rather than competing directly or doing nothing, by investing in or partnering with fintech companies; fewer have acquired fintechs directly.  According to data compiled by CB Insights, since 2012 Citi has invested in 25 fintechs and Goldman Sachs in 22, while fintech acquisitions have been less frequent.  However, in October 2017 JPMorgan Chase agreed to acquire the fintech company WePay for $220 million.

Model Risk

The OCC’s paper, Supervisory Guidance on Model Risk Management, outlines an effective framework to manage model risk.  The basic components for managing model risk can be broken into three parts:

  • Model development, implementation, and use

  • Model validation

  • Governance, policies, and control

As the digital era changes the way financial products and services are offered, it also changes the way risk is evaluated and managed.  While these robust frameworks have historically applied to market, credit, and operational risk, there is clearly a place for them in due diligence, in both third party risk management and AML programs.

The irony about model risk here is that while new technologies will help automate repetitive tasks, and advanced algorithms will assess various risks more accurately, they will greatly increase the complexity of the governance and validation processes.  To calculate risk more effectively, more external data sources will be interrogated by more complex algorithms.  This increases the risk management surface area, and potential issues could become more cumbersome to identify and remediate.

Another emerging technology trend is regulatory technology, or regtech.  Regtech’s goal is to enable institutions to make better and more informed decisions about risk management and to give them the capability to comply with regulations more efficiently and cost effectively.  However, as discussed previously, the threat posed by fintechs is putting pressure on financial institutions to loosen their credit underwriting procedures to avoid losing more market share.  This trend will continue to drive the incorporation of more external data sources and more complex algorithms to arrive at more accurate assessments of risk, faster.

Financial institutions also have to deal with legacy IT systems, which require data to flow through many toll gates before reaching the risk management systems.  It’s akin to trying to fit ‘a square peg in a round hole,’ which can result in various data quality issues.  While adopting new technology can increase the overall complexity of the model risk management process, it could, at the same time, increase the framework’s overall effectiveness, when implemented properly.

Cyber Risk

Cyberattacks have been called one of the greatest threats to the United States, even beyond the risks that a nuclear North Korea may pose.  2017 was a blockbuster year for data breaches, and no organization seems to be immune from the threat.

Some high profile data breaches in 2017 include, but are not limited to, entities such as:

IRS, Equifax, Gmail, Blue Cross Blue Shield / Anthem, U.S. Securities and Exchange Commission (SEC), Dun & Bradstreet, Yahoo, Uber, and Deloitte.

While the extent of the data breach varied for each organization, it's abundantly clear that cyberattacks are affecting a wide range of institutions in the public and private sector at an alarming rate.

The foreword to Daniel Wagner’s book, “Virtual Terror: 21st Century Cyber Warfare,” was written by Tom Ridge, the former Governor of Pennsylvania and first Secretary of the US Department of Homeland Security, who described the threat as follows:

“The Internet is an open system based on anonymity - it was not designed to be a secure communication platform.  The ubiquity of the Internet is its strength, and its weakness.  The Internet’s malicious actors are known to all of us via disruption, sabotage, theft, and espionage.  These digital trespassers are motivated, resourceful, focused, and often well financed.  They eschew traditional battlefield strategy and tactics.  They camouflage their identity and activity in the vast, open, and often undefended spaces of the Internet.  Their reconnaissance capabilities are both varied and effective.  They constantly probe for weaknesses, an authorized point of entry, and a crack in the defenses.  They often use low-tech weapons to inflict damage, yet they are able to design and build high-tech weapons to overcome specific defenses and hit specific targets.”

As institutions move towards a complete end-to-end digital experience, they will need to monitor the digital footprint of their customers, counterparties, and service providers to ensure nothing has been compromised.  Today, some internet marketplace companies maintain a list of internet protocol (IP) addresses, either on their own or with the help of a vendor, known to be compromised or associated with cybercrime groups, in order to avoid fraudulent charges, account takeovers, and other forms of cyberattack.

For example, if a credit card was issued by a bank in the US but the IP address associated with the online transaction was coming from another country, it could increase the risk of fraud, assuming the individual was not travelling.
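
A simple version of that check might look like the sketch below.  The transaction fields and the watchlist are hypothetical stand-ins for a geolocation service and a vendor-maintained list of compromised IP addresses.

    # Hypothetical IP-versus-issuing-country check. 203.0.113.0/24 is a
    # reserved documentation range, used here as a stand-in entry.
    KNOWN_BAD_IPS = {"203.0.113.7"}

    def ip_risk_flags(card_country, ip_country, ip_address, traveling=False):
        flags = []
        if ip_address in KNOWN_BAD_IPS:
            flags.append("IP on compromised/cybercrime list")
        if card_country != ip_country and not traveling:
            flags.append("IP country differs from card issuing country")
        return flags

    print(ip_risk_flags("US", "RO", "203.0.113.7"))  # both flags fire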

There have been cases of fraudsters impersonating an executive of an existing customer by calling the wire room of a financial institution and requesting that a high priority wire be sent overseas.  This is a low-tech form of fraud because the fraudster used social engineering to convince the bank’s employee that they were a legitimate representative of the company.  The fact that the fraudster had so much information regarding the bank’s operations, the customer’s account, and the customer’s organizational structure made the scheme that much more believable.

This is another serious risk for business-to-business (B2B) commerce, because with all of the data breaches happening, it’s relatively straightforward to gather or purchase the confidential information needed to conduct an effective B2B fraud.

Cybersecurity has become a core part of third party risk management, as a potential business partner’s cyber resiliency can impact the decision making process.  The 2013 data breach of Target’s systems, which compromised 40 million credit and debit card numbers, highlights the fact that, in terms of entity due diligence, cyberattacks are a serious risk to third party risk management.  Target’s systems were reportedly compromised by first stealing the credentials of a heating and ventilation (HVAC) company in Pittsburgh.  Target used computerized heating and ventilation systems, and it was apparently more cost effective to allow a third party remote access to its systems rather than hire a full time employee on-site.

The risks posed by cyberattacks are forcing organizations to rethink how they engage with customers and other businesses in their supply chain.  The future of a robust entity due diligence process will have to include an assessment of cyber resiliency, which could include cybersecurity insurance policies, although that industry is still in its infancy due to a lack of comprehensive standards for assessing cyber risk exposure.  This is beginning to change in the US, where the NYDFS cybersecurity regulation (Part 500) will require covered entities to submit a certification attesting to their compliance with the regulation beginning on February 15, 2018.

Technology and Innovation

The story of human history can be traced back to technological innovations that allowed early humans to exert greater control over the environment to meet their needs or desires.  The most basic technological innovation was the discovery and control of fire, which provided warmth, protection, and cooking, and led to many other innovations.  Scientists found evidence at a site in South Africa that Homo erectus was the first hominin to control fire, around 1 million years ago.

If we consider ideas as inventions or innovations, then the next big shift in human history was the concept of agriculture and the domestication of animals, which laid the foundation for civilization to flourish.  Mesopotamia was the cradle of civilization, as no known evidence for an earlier civilization exists.  Mesopotamia was a collection of cultures situated between the Tigris and Euphrates rivers, corresponding mostly to today’s Iraq, which supported irrigation, and existed between roughly 3300 BC and 750 BC.  One theory also credits Mesopotamia with another revolutionary innovation, the wheel, based on the archaeologist Sir Leonard Woolley’s discovery of the remains of two-wheeled wagons at the site of the ancient city of Ur.

There is a difference between groundbreaking innovations and incremental improvements to existing products or processes.  Historically, there has arguably been less innovation, in terms of volume, in ancient civilizations than in the modern societies of the 20th and 21st centuries.  However, the discovery of fire and the invention of the wheel, paper, the compass, and the printing press had an extraordinary impact on social structures, the evolution of culture, and value systems.  For example, the wheel was a groundbreaking innovation because it rearranged social structures by revolutionizing travel, trade, agriculture, and war, among other things.

One of the most groundbreaking innovations of the modern world was electricity, because many other revolutionary technologies such as the radio, television, computer, and internet wouldn’t be possible without a way to power those devices.  When analyzing the impact of a technology on the future, it’s worth considering whether the technology is a groundbreaking advancement or an incremental improvement.

There are four key technology areas which will have a transformative effect, on different time horizons, on the future of entity due diligence: robotic process automation (RPA), big data platforms, AI, and blockchain.  The general trend of digitalisation will de-emphasise the need for specific aspects of the due diligence process, such as collecting and processing data and customer interaction.  Other new technologies, such as blockchain, will introduce the concepts of shared work and consensus, which will rearrange how transactions are conducted.  This general trend will be amplified by other adoptions such as the integration of more alternative data sources, self-service systems, acceptance of non-documentary evidence, and more complex algorithms to reduce false positive alerts and re-prioritise risk management queues.

Robotic Process Automation (RPA)

RPA is an important stepping stone in the evolution of the due diligence process, but it's more of an incremental improvement than a groundbreaking innovation.  RPA is the application of technology that allows employees of an organization to program a ‘robot’ to complete repetitive and mundane tasks such as processing a transaction, transferring data from one application to another, transforming data, and triggering automated responses.

The entity due diligence process can be extremely cumbersome, especially the KYC process, because the workflow can include hundreds of keystrokes and mouse clicks.  For example, the CIP and CDD components of the KYC process can include over 100 data points to complete for a legal entity.  If the entity is determined to be high-risk based on its business type or the products being used, then the EDD process can include hundreds of data points.  Depending on the policy of the institution, higher-risk customers will also have to be reviewed periodically.

There are many aspects of the KYC workflow that are repetitive, where RPA solutions can be designed to complete specific tasks in a queue.  The graphic below shows a potential use case for RPA where the robot completes several different tasks and transfers data between siloed applications.  In this example, there is potential for no human interaction at all, if an escalation to an employee is not needed.

Source: https://www.roboyo.de/en/robotic-process-automation/
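
The sketch below illustrates the rule-based flavor of RPA described above: a ‘robot’ works through a task queue, copies and transforms data between two siloed systems (stubbed here as dictionaries), and escalates anything it cannot complete.  All system, field, and customer names are invented.

    # Stub "systems": in practice these would be two separate applications.
    crm_system = {"cust-001": {"legal_name": "Acme Pte Ltd", "country": "SG"}}
    kyc_system = {}

    def process_task(customer_id):
        record = crm_system.get(customer_id)
        if record is None or not record.get("legal_name"):
            # The robot cannot proceed, so a human analyst takes over.
            return f"ESCALATE {customer_id}: missing source data"
        # Transfer and transform data between the siloed applications.
        kyc_system[customer_id] = {"name": record["legal_name"].upper(),
                                   "jurisdiction": record["country"]}
        return f"COMPLETED {customer_id}"

    queue = ["cust-001", "cust-002"]
    for task in queue:
        print(process_task(task))   # cust-002 escalates to a human analyst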

Rising regulatory expectations and increased competition from fintechs are forcing financial institutions to explore different ways to increase efficiency and speed and to reduce costs.  The types of RPA being implemented today are mainly rule-based, meaning the process being automated is stable, predictable, and repeatable.

Big Data

One popular historical definition of big data, described in a META Group report, is data which exhibits the three ‘V’s: a large volume, a wide variety, and a velocity that can reach high speeds or real time.  The definition of big data, like many emerging technology trends, is a moving target, and two other V’s have been added to the historical definition.  The fourth V is veracity, a byproduct of another V, variety, which refers to the trustworthiness or integrity of the data.  The last V is value, which is simply the ability to turn data into actionable intelligence.

The last two V’s, veracity and value, most likely arose out of the need to address the real-world challenges of implementing big data platforms and deriving real value from them.  Data integrity is highlighted as a key component to ensure the success of big data initiatives.  This also implies that simply storing massive amounts of varied information doesn’t mean an organization is engaged in big data initiatives.  The key to big data is the last V, value, because storing a lot of information has no intrinsic value unless insights can be extracted and acted upon quickly.

The concept of processing large amounts of data has a longer history, but the practical application of dealing with large semi-structured data sets emerged out of internet search companies’ need for an elegant solution to rank millions of web pages efficiently.  In 2004, when the internet included roughly 51 million websites, Google published a research paper about how it used the MapReduce programming model to address the complex problem of parallelizing a computation across a network of computers.  Google’s paper prompted Doug Cutting and Mike Cafarella to create Apache Hadoop in 2005, which is used as a big data platform by many organizations today.

The fundamental issue is that, for big data problems, there is more data to process than can be handled on a single computer.  Hence, the explosion of the internet created a need to optimize parallel processing to address the bottleneck imposed by the physical hardware limitations of a single computer processing massive amounts of data.

Traditional computer programs have followed a serial computation workflow, where an algorithm is executed in steps on a single computer, proceeding to the next step only after the completion of the previous one.  The explosion of the internet and the continued rise of alternative data have created the need to store, process, transfer, and analyze massive amounts of data, or what has been called ‘big data.’  Again, to put the growth of data in context, research conducted by SINTEF ICT in 2013 estimated that 90% of the world’s data had been created in the preceding two years.

The top graphic in the image below shows the traditional programming paradigm where algorithms follow a serial method of processing instructions one-by-one in a queue.  The bottom graphic in the image below shows the enhanced programming paradigm which follows a parallel processing method where data (or the problem) is broken into discrete parts that can be processed concurrently on different machines, and the parts are recombined later in the process to arrive at the final result.
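
The sketch below illustrates the map-and-reduce idea on a toy problem: a list of records is split into chunks, each chunk is counted concurrently by a separate worker process, and the partial counts are recombined at the end.  The keyword and records are invented.

    from multiprocessing import Pool

    def count_keywords(chunk):
        """Map step: count keyword occurrences in one chunk of records."""
        return sum(1 for record in chunk if "wire transfer" in record)

    if __name__ == "__main__":
        records = ["wire transfer to SG", "cash deposit", "wire transfer to KY"] * 1000
        chunks = [records[i::4] for i in range(4)]       # split across 4 workers
        with Pool(processes=4) as pool:
            partials = pool.map(count_keywords, chunks)  # processed concurrently
        print("total matches:", sum(partials))           # reduce step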

While this is a fairly technical concept, it’s essential for everyone to understand, especially users of systems that support various business functions.  For example, senior management of the compliance department in financial institutions are users of various systems, such as CDD, transaction monitoring, and sanctions screening, that are relied on to meet regulatory requirements.

The general trend is that big data and AI will transform all sectors and industries, but it's important to note that parallel processing at scale often implies cloud computing.  Processing large amounts of data very fast means the algorithms must split the tasks over a network of computers, and that network doesn’t necessarily reside within the institution’s firewall.

Hence, institutions may need to send some of their data outside of their network to harness the power of big data through parallel computing.  Some institutions will opt to implement advanced software within their own data centers and manage the parallel computing process themselves.  Others may opt to leverage cloud providers to scale up their computing capacity, but this raises questions such as: what data can be sent outside of the institution and what data can’t?  And does it need to be anonymised first?

The term big data is a bit of a misnomer because it's not really about the data at all.  Big data is about the ability to extract value for decision making, and one of the most effective ways to do this today is by leveraging the power of parallel processing.

The massive amounts of data generated by the digital revolution, together with the advances in parallel computing, laid the groundwork for major successes in AI, which will be explored in the next section.

Artificial Intelligence (AI)

AI is one of the most exciting technological trends of our time: machines have ‘learned’ to recognize faces, classify objects in images, recommend advertisements that align with our preferences, understand and act on voice commands, drive cars autonomously, beat humans at chess back in 1997, and beat them at the notoriously complex game of go in 2017.

With all of these impressive feats, what exactly is AI?  This is actually a complicated question, but one of the simplest definitions is that AI is the study of making computers great at tasks associated with human intelligence.

The term “artificial intelligence” was coined in a 1955 academic proposal for a two-month study of how machines could simulate learning and intelligence, submitted by John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Nathaniel Rochester (IBM), and Claude Shannon (Bell Telephone Laboratories) ahead of the 1956 Dartmouth workshop.  According to the proposal, the goal of the workshop was: “An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”

While the term AI first appeared in the 1950s, interest in objects exhibiting intelligence appears throughout antiquity, such as in the civilizations of ancient Egypt and Greece.  In the Greek myths, Daedalus used quicksilver to put a voice in his statues, Hephaestus created automata for his workshop, and Talos was an artificial man of bronze.

The independent invention of calculus by Isaac Newton and Gottfried Leibniz was another precursor to AI, because it created the foundation for the idea that rational thought could be systemised or mechanised.

There were other precursors to AI, such as research by the British physician Richard Caton, who discovered the electrical nature of animal brains by recording impulses.  Caton’s work led to the research of the physician Hans Berger, who recorded the first human electroencephalogram (EEG) in 1924.  The discovery of the electrical nature of the brain inspired early work in the fields of neural networks and cybernetics; the idea was that, if the brain was an electrical system, then a machine could be built to replicate it.  To this end, the development of AI has been, at its core, about replicating human intelligence, a form of biomimicry.  But our definition of intelligence may need to be modified in the context of AI to avoid anthropomorphising it and distorting its current capabilities and limitations.

Intelligence and Thinking

There seems to be some confusion around what AI can and can’t do today, which could partially stem from the use of the word intelligence.  Intelligence is a nebulous concept: we could think of intelligence in a narrow human sense, in terms of an intelligence quotient (IQ), as the ability to break down and solve complex problems quickly.  Other interpretations of human intelligence frame it more broadly as a multitude of abilities beyond book smarts, such as the ability to ‘read people,’ a type of emotional intelligence.

But intelligence is not limited to humans, even though humans clearly exhibit a special form of it: animals can communicate with one another, coordinate attacks on prey, and save food for the future.  It could even be argued that plants exhibit a form of intelligence, as their branches tend to grow away from darkness towards sunlight.  The Santa Fe Institute President, David Krakauer, offered another definition, stating that intelligence “is making hard problems easy.”

In 1950, Alan Turing, one of the pioneers of computer science, proposed the idea of the imitation game to address the question of whether machines could think.  The game, commonly referred to as the Turing test, has an evaluator review the written conversation between players A and B and determine which player is the human and which the computer.  If the computer is mistaken for a human 30% or more of the time, the test is passed.  In 2014, at an event hosted by the University of Reading, it was reported that a computer program, Eugene Goostman, which simulates a 13-year-old Ukrainian boy, passed the Turing test.

While the Turing test is an important milestone to gauge the development and capabilities of AI, it doesn’t actually show that machines think.  This was even stated by Alan Turing himself, in the same paper that proposed the game, when he wrote:

“The original question, "Can machines think?" I believe to be too meaningless to deserve discussion.”

In a discussion about thought, Noam Chomsky highlighted this sentence from Turing’s paper to show that Turing was well aware of the limits of human language and the ridiculousness of the question itself.  The phenomenon of thought is not well understood today, so to claim that passing the Turing test shows machines can think presupposes that we know what thought is, and we really don’t understand it that well.

Or, as Noam Chomsky explained, saying machines can think is sort of like saying submarines can swim: you can say it, but it really doesn’t make sense.  This doesn’t mean that computers don’t exhibit forms of intelligence, because clearly they do, but it does show the pitfalls of language and the need to carefully choose the words we use, and in what context, to achieve the highest level of understanding.

Limits of AI Today and Policies to Protect the Future

It’s important to note that AI is highly domain specific today.  For example, an AI can process human voice commands, but the same AI program wouldn’t be able to drive a car autonomously or determine whether an image contains a cat.  The only potential exception to this was demonstrated by AlphaGo, created by Google’s DeepMind, which beat a world champion at the ancient Chinese, and notoriously complex, game of go.

Despite the game’s simple rules, there are 10 to the power of 170 possible board configurations, making go a googol (10 to the power of 100) times more complex than chess.  The traditional approach to having AI programs learn complex games has been to construct a search tree over all possible positions, but the sheer number of configurations makes this method impossible for go.  This is why the creators of AlphaGo combined a search tree with deep neural networks, with millions of neuron-like connections mimicking the human brain.

In 1997, when IBM’s Deep Blue beat a world champion at chess, there was a brute-force quality to it, in terms of processing all of the potential scenarios and arriving at the optimal solution.  Also, some of the highest economic value created by AI systems today, such as targeted digital advertising, requires processing amounts of data beyond human cognition.  So, in this sense, there are cases where AI exhibits greater intelligence than humans in a very specific problem domain, but it’s the multitude of intellectual abilities, emotions, and the experience of the physical world that makes us human.

The media exacerbates fears that AI is positioned to take over the world, in some type of Terminator Skynet scenario, in the not too distant future.  While there are serious social and economic implications to consider with the continued advancement of AI capabilities and applications, most leading academic experts agree that machines becoming sentient is not likely in the near future.  However, leading figures in the technology world, such as Elon Musk, have warned about the dangers of AI, and other prominent figures such as Bill Gates and Stephen Hawking have expressed similar concerns.

Their comments do point to the need for the regulation of AI, to ensure power is not concentrated among a few companies and that policies are in place to address the displacement of people’s jobs.  A December 2017 report by the McKinsey Global Institute estimated that as many as 375 million workers globally (14 percent of the global workforce) may need to transition to new occupational categories and learn new skills in the event of rapid automation adoption.  While AI is just one technology within the wider automation trend, it contributes significantly to the overall job displacement concerns over the next 12 years.

Many have discussed the idea of some type of universal income to support people who are displaced, and allow them the opportunity to study and gain skills relevant for the economies of the future.

Beyond the concern of income replacement, there are also concerns about activity replacement.  If people are truly displaced by AI, would they even want to study to gain new skills?  Some have speculated that the AI revolution will create new jobs that haven’t existed in the past, but can all of those jobs really be filled by the workers of today?

And perhaps more philosophically, if someone loses their job and has little prospect to replace it with another, then how will they spend those core 8-10 hours of each day that they typically spent working?  Could we see a spike in drug use and civil disobedience?  What policies will be in place to not only replace people’s income, but also to fill people’s schedule with relevant activities to ensure a person’s dignity is intact?

Types of AI Today

One of the world’s leading AI experts, Professor Andrew Ng, has stated during numerous talks and conferences that “AI is the new electricity.”  Professor Ng explains that electricity was an invention that impacted almost every industry, including transportation, medicine, agriculture, and food storage.  In essence, Professor Ng is arguing that just as electricity was a groundbreaking innovation that led to many other innovations, AI is in the same revolutionary category.

There are three areas of AI which will likely have the most impact on the entity due diligence process over the next 10 years: NLP, machine learning, and computer vision.  There are also sub-categories of machine learning that could be used in certain situations in the due diligence process and not in others.  The image below shows the various fields of study within AI, including several sub-categories of machine learning, some of which are more likely than others to be applied in various portions of the due diligence process.  Also, for specific problems, various types of AI may need to be used at different points in the process to reach the optimal solution.

Source: https://www.datavisor.com/portfolio-items/webinar-practical-approaches-to-apply-machine-learning-to-aml/

NLP is the study of applying technology to process human language, which includes, but is not limited to: sentence understanding, automatic question answering, machine translation, syntactic parsing and tagging, sentiment analysis, and models of text and visual scenes.  Sentiment analysis has become increasingly accurate and sophisticated as these algorithms have been fine-tuned over time.  For example, in 2011 it was observed that mentions of the actress Anne Hathaway appeared to influence the stock price of Berkshire Hathaway.

This suggests that the NLP algorithms executing automated trading strategies based on news sentiment were somewhat unsophisticated and crude, unable to distinguish between an actress and a company sharing one word in their names.  This brings up a key distinction about using AI for different purposes, such as trading versus due diligence.  For hedge funds, it’s not about understanding how the algorithm works; it’s simply about profit and loss (P&L).  If the algorithm is profitable, then understanding it becomes secondary, but in many aspects of the due diligence process, understanding needs to come first and results later.

Today, sentiment analysis is becoming more sophisticated, and the more advanced algorithms would not make the Anne Hathaway and Berkshire Hathaway mistake.  As shown in the graphic below, there are single words in each sentence that suggest the overall sentiment is positive or negative, but the final word reveals the author’s true feeling.

Source: Where AI is today and where it's going. | Richard Socher | TEDxSanFrancisco https://youtu.be/8cmx7V4oIR8
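
The toy example below shows how a single shared token can produce exactly the kind of confusion described above, and how even a slightly stricter entity match avoids it.  The headlines are invented.

    headlines = [
        "Anne Hathaway wins award for stellar performance",
        "Berkshire Hathaway reports quarterly earnings",
    ]

    # Naive approach: any headline containing "hathaway" feeds the stock signal.
    naive_matches = [h for h in headlines if "hathaway" in h.lower()]
    print(len(naive_matches))   # 2 -- the actress wrongly influences the signal

    # Slightly better: require the full entity name, not a single token.
    entity_matches = [h for h in headlines if "berkshire hathaway" in h.lower()]
    print(len(entity_matches))  # 1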

Machine learning can be broken into several categories, such as supervised, unsupervised, reinforcement, and neural-network-based learning.  As discussed previously, supervised machine learning (SML) has brought a tremendous amount of economic value to the advertising industry by ‘learning’ internet users’ preferences based on their search and website browsing behavior and tailoring advertisements to their specific interests.

SML includes several techniques, such as decision trees, random forests, nearest neighbors, Support Vector Machines (SVMs), and Naive Bayes, which can solve complex computations with hundreds of variables (a high-dimensional space).  SML models require data to be labeled, meaning each input’s expected output is already predefined, which works best when the problem is well understood and somewhat stable.  As new data is introduced to the model, it can predict what the output should be based on how it has been trained, which works for many, but not all, scenarios.

The image below shows a high-level flow of the SML process, where data collection, training data preparation, feature extraction, model training, and model evaluation all make up core components of the initial learning process.  As new data is introduced, predictions are made and labels classified based on the model’s historical training.

Source: https://www.datavisor.com/technical-posts/rules-engines-learning-models-and-beyond/
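
As a concrete illustration of that flow, the sketch below trains a random forest on a handful of labeled examples and then classifies new, unseen accounts.  The features, labels, and values are invented and far smaller than any real training set.

    from sklearn.ensemble import RandomForestClassifier

    # Features: [account_age_days, daily_transaction_count]
    X_train = [[5, 40], [10, 55], [400, 3], [700, 2], [12, 60], [900, 4]]
    y_train = [1, 1, 0, 0, 1, 0]    # 1 = previously flagged as suspicious

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # New, unseen accounts are classified from what the model has learned.
    print(model.predict([[7, 50], [650, 1]]))  # expected: suspicious, not suspicious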

There are limitations to SML, such as dealing with new scenarios that are not part of the existing landscape.  For example, new typologies can emerge in fraud and money laundering which didn’t exist in the past or were unknown when the model was trained.  Dealing with these unknown unknowns is one of the limitations of SML, which is why unsupervised machine learning (UML) is used to close the gap on CAS such as fraud and money laundering networks.

UML leverages techniques such as clustering and anomaly detection without prior knowledge of the input and expected output.  In other words, no training data is needed for the algorithms to identify useful insights.  UML can identify networks of bad actors acting similarly and hyper-segment customers into categories, which is important in the fraud prevention and AML space.

The below image represents a simple use case of UML in the fraud and AML space.  It is common for bad actors wanting to commit fraud or launder money to let their accounts age before they start transacting, as many institutions treat new accounts as higher-risk.  Traditionally, rule-based systems would be used to identify accounts older than X days conducting Y transactions over a time period Z.  The problem with naive rule-based systems is that the hard-coded thresholds can miss suspicious behavior (false negatives) and tend to include non-suspicious behavior (false positives).

Source: https://www.datavisor.com/portfolio-items/aml-whitepaper/?portfolioID=6839

UML doesn't cling to thresholds; it looks for patterns of behavior among clusters of entities.  Entity due diligence use cases will leverage both SML and UML for specific problems, but the overall process will most likely use a combination of both to achieve the end goal of better entity risk management.
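
A hedged sketch of that contrast, using synthetic account data and scikit-learn's IsolationForest as one possible anomaly detector (the thresholds and feature choices are illustrative assumptions):

```python
# Contrast of a hard-coded rule with an unsupervised anomaly detector.
# Data is synthetic: columns are account age in days and transaction count.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[200, 20], scale=[80, 8], size=(500, 2))
aged_then_active = rng.normal(loc=[95, 90], scale=[5, 10], size=(10, 2))
accounts = np.vstack([normal, aged_then_active])

# Naive rule: flag accounts older than X=90 days with more than Y=50
# transactions.  Thresholds are brittle: shift the behavior slightly
# and the rule misses it, or it floods analysts with false positives.
rule_flags = (accounts[:, 0] > 90) & (accounts[:, 1] > 50)

# UML alternative: learn what "normal" looks like and score outliers,
# with no labels and no hand-tuned thresholds.
detector = IsolationForest(contamination=0.02, random_state=0)
uml_flags = detector.fit_predict(accounts) == -1  # -1 marks anomalies

print("rule flagged:", rule_flags.sum(), "| UML flagged:", uml_flags.sum())
```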

There are other exciting areas of AI, such as robotics, but this area is not likely to have as much of an impact on the entity due diligence process in financial services when compared with other types of AI.  The Federal Financial Institutions Examination Council (FFIEC) manual references considering on-site visits as part of the due diligence process for cash-intensive businesses, which could potentially be conducted by UAVs in the future, but there are legal and technical limitations preventing these devices from operating completely autonomously today.  As discussed in the rise of alternative data section, networks of nanosatellites could be used as another data source to verify if an entity is violating sanctions beyond what traditional trade documents show.

Similarly, UAVs could be used as another data source in the entity due diligence process, but the market needs time to mature, because financial institutions are more likely to purchase data showing images and video of locations and entities than to directly commission a UAV to check up on a prospect or customer.  The UAV scenarios today are more aligned with assessing the integrity of physical assets, such as real estate for M&A, or conducting audits in the retail and manufacturing sectors.  Other use cases for UAVs, commonly referred to as drones, are emerging to combat modern slavery and forced labour.  In 2015, it was reported that Brazil would use drones with cameras to help combat forced labour in slave-like conditions in rural areas.

According to the International Labour Organization, at any given time in 2016 there were an estimated 40.3 million people in modern slavery and 24.9 million in forced labour.  Hence, drones give governments and companies the ability to manage risk in their supply chains by holding suppliers accountable for forced labour conditions, creating more capabilities to monitor the activities of those companies remotely.

Politically Exposed Persons (PEPs) and Adverse Media Collection

NLP and various types of machine learning are positioned to disrupt the way politically exposed persons (PEPs) and entities associated with adverse media are identified, curated, and verified.  PEPs have traditionally been identified, curated, and verified by human analysts who comb through various data sources to determine if a given person is a PEP.  This approach has its limitations because there is a bottleneck to the number of PEPs that can be verified: each analyst can only do X investigations each day, so X investigations multiplied by Y analysts is the daily throughput for an organization verifying PEP data via a manual approach.

Another challenge to the manual approach is the timeliness of the data, because the status of a PEP could change on a daily basis, and this may or may not be reflected immediately in the data set which organizations consume for their risk management programs.  By merging the increased accuracy of NLP algorithms with SML models that receive feedback from analysts reviewing potential PEP and adverse media matches, the system can learn and improve over time.

Collective Learning through Data Sharing

This brings up another interesting point about the next generation of PEP and adverse media matching technology based on AI, which is the notion of collective learning.  As described in the big data section, the massive amounts of information produced by the internet created a scalability challenge for the internet search companies.  This led to extreme levels of parallel processing, where a task was broken into parts and worked on concurrently to vastly increase throughput and speed to completion.

While the large internet search companies can support massive data centers, not all financial institutions would want to support the type of data centers required to get the most out of AI-embedded technologies.  Hence, it's likely that certain aspects of the PEP screening and adverse media process will be outsourced to cloud computing solutions.  If the same vendor is doing PEP and adverse media matching for different institutions on the same cloud platform, then is there a way to share true positive and false positive matches from all of the participants without violating data privacy regulations?  This way the algorithm can learn from the collective, as opposed to limiting the data to a particular organization's data set and searches.  This also implies that the large banks will have better risk management algorithms in the future when compared to the smaller ones, simply from having more data.

The idea of sharing data among financial institutions is the holy grail of the KYC process, because the same customer could be onboarded by 10 different banks and would have to replicate the procedure, including the supply of all the relevant documentation, 10 separate times.  The KYC utilities have attempted to address the duplicative nature of the KYC process, to a certain degree, by collecting specific information that can be reused by institutions.  However, the KYC utilities tend to be specialized for specific types of KYC data, stronger in certain geographic locations, and focused on collecting information for specific types of customers such as hedge funds, asset managers, financial institutions, and corporates.

Obviously, any type of customer data sharing leads to privacy concerns.  However, some financial institutions may argue that cloud computing solutions are simply extensions of their own networks, and that their own data centers are no more or less secure than the cloud providers'.  Other institutions may opt to send to the cloud all of the data which is not sensitive and do the last mile of matching, with sensitive customer data, on their own networks.  Also, the algorithm could be designed in a way where the data was not necessarily shared, but simply exposed to the algorithm itself, and not to other banks, which could lessen the data privacy concerns.  Data sharing without violating privacy regulations will be explored again in the blockchain section.
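
One way to 'expose data to the algorithm without sharing it' is federated learning.  The following is a minimal federated-averaging sketch in pure NumPy, an illustration of the concept rather than any vendor's actual implementation; all data and model choices are synthetic assumptions:

```python
# A minimal federated-averaging sketch (pure NumPy): each institution
# fits a local linear model on data that never leaves its environment;
# only the model weights are shared and averaged centrally.
import numpy as np

rng = np.random.default_rng(7)
true_w = np.array([2.0, -1.0, 0.5])  # hidden relationship to recover

def local_fit(n_rows: int) -> np.ndarray:
    """One bank's least-squares fit on its own private data."""
    X = rng.normal(size=(n_rows, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n_rows)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Three banks of different sizes train locally; the central party sees
# only their weights, weighted by data volume, never the raw records.
sizes = [100, 500, 2000]
local_weights = [local_fit(n) for n in sizes]
global_w = np.average(local_weights, axis=0, weights=sizes)
print("federated estimate:", np.round(global_w, 3))
```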

Due Diligence, Behavior, and Networks of Entities

Regulators commonly cite the 'risk-based approach' as an important litmus test for the health of an AML program.  AI and the rise of big data are redrawing the lines of what the risk-based approach actually means.  For instance, in some financial institutions today the customer risk rating process is overly reliant on a customer's static attributes, such as operating in a high-risk jurisdiction, stating the intention of using specific high-risk financial products, or operating a cash-intensive business.  However, the future of entity due diligence will likely focus more on the actual transactional activity of the entity as opposed to simply relying on static attributes.

For example, a customer could be identified as high-risk based on their demographic information, but their transactional activity is so minuscule that it 'should' automatically lower the risk score and scrutiny for that entity.  Similarly, a customer who appears totally innocuous could be transacting in a way associated with networks of suspicious actors.  This is why the line between entity due diligence and transaction monitoring will blur: they are essentially two sides of the same coin, and changes in one can influence the other.

The other advantage that SML will bring to both the due diligence and transaction monitoring process is the ability to take thousands of disparate attributes and make more accurate predictions about risky customers and behavior.  As discussed in an extended blog post, beneficial ownership can add crucial context to an AML investigation.  For example, some of the data that institutions could have access to, but don't always leverage for customer risk rating, are:

  • Suspicious activity reports (SARs)

  • Hidden links among transaction networks

  • Entities linked to suspects who are PEPs

  • Adverse media

  • Section 314(a) of the USA Patriot Act 2001

  • Section 314(b) of the same Act

  • Subpoenas

Beneficial ownership data provides networks of legal entities and their beneficial owners to allow for sanctions, PEP, and adverse media screening.  The SAR, 314(a), 314(b), and subpoena data helps add context to the customer's behavior and their associated network inside and outside the financial institution.  If financial institutions can identify which customers are considered suspicious (labeled data) and supply their associated data, then SML can make more accurate predictions about the risk future customers pose.
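
As an illustrative sketch only, the data sources above could be encoded as binary risk features for a supervised model; everything below, including the feature names and synthetic labels, is hypothetical:

```python
# Sketch: encode the data sources above as binary features and train a
# simple supervised risk model.  All data here is synthetic; in practice
# each column would be derived from SAR history, 314(a)/(b) hits, etc.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
features = ["prior_sar", "hidden_network_link", "pep_association",
            "adverse_media", "314a_hit", "314b_referral", "subpoena"]
X = rng.integers(0, 2, size=(n, len(features)))

# Synthetic ground truth: risk driven mostly by SARs and network links.
logits = 2.5 * X[:, 0] + 2.0 * X[:, 1] + 0.8 * X[:, 2] - 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>20}: {coef:+.2f}")  # learned weight per risk signal
```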

UML can take this process even further by clustering segments of customers together based on similar behavior derived from thousands of data points, and even identifying hidden links among bad actors, as shown in the graphic below.  For example, if a financial institution filed a SAR against Client C, then the investigation may or may not uncover the link between Client C and Client A based on the movement of funds through Client B.  This may be too much for a human investigator to uncover manually, and that information may or may not be relevant for a given investigation.  UML can automatically identify these links and present the information to the investigator through a graphic interface to incorporate into the investigation seamlessly.

Source: DataVisor
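
A minimal sketch of surfacing such indirect links with the networkx graph library; the clients and transfer amounts are hypothetical, mirroring the Client A/B/C example above:

```python
# Sketch of surfacing an indirect link between clients via fund flows,
# using networkx (assumed available).  Edges are synthetic transfers.
import networkx as nx

g = nx.DiGraph()
g.add_edge("Client C", "Client B", amount=50_000)
g.add_edge("Client B", "Client A", amount=48_000)
g.add_edge("Client D", "Client A", amount=1_200)

# If a SAR is filed on Client C, enumerate everyone reachable through
# the movement of funds, including multi-hop paths an investigator
# might never assemble manually.
subject = "Client C"
for target in nx.descendants(g, subject):
    for path in nx.all_simple_paths(g, subject, target):
        print(" -> ".join(path))
```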

Julian Wong, VP of Customer Success at DataVisor, says that “previously unknown cases of fraud and money laundering can be uncovered using UML by finding hidden linkages and correlations between bad actors and accounts. These results can in turn be used as labels to train SML models/algorithms and further increase coverage and improve precision. The combination of UML and SML offers companies better protection by catching known patterns of fraud while also casting a safety net to defend against never-seen-before fraud techniques.”

Brian Monroe, Director of Content for the Association of Certified Financial Crime Specialists (ACFCS), highlighted the evolving nature and intersection of fraudsters and money launderers in a recent article.  Fraudsters collaborated with unscrupulous hosts and used stolen credit cards to book rental rooms online, where a portion of the proceeds was returned from the host to the fraudster.  This merged fraud and money laundering typologies, because using the stolen credit cards was the fraud, and the return of funds from the host to the fraudster was the money laundering, which made the received funds appear legitimate.  In summary, the interplay of UML and SML can address the evolving nature of CAS beyond what traditional rule-based systems can offer.

Human Readable Reports

Another emerging area is the ability of various NLP and AI programs to create human readable reports from data sources on the internet and other subscription-based databases.  For example, say a user wanted a report about coal mining in Brazil: a simple graphical interface allows the user to enter the request, and a report is produced by the algorithms behind the scenes.  Primer, an AI company, is doing just that, making sense of the endless information on the internet by summarizing important topics into a one-page report with graphs, charts, and drill-down options.

The same idea could be applied to the entity due diligence space, where all public and private database information is fed into these types of algorithms to create a human readable report.  There are third party companies willing to produce in-depth due diligence reports on companies or individuals for a fee.  These reports can be extremely detailed for an M&A deal or very basic for the screening needs of an individual customer opening a brokerage account.  Regardless of whether a person or an algorithm creates the report, it still needs to be read, understood, and decisioned by a human.

The sanctions, PEP, and adverse media screening results could potentially be incorporated into these types of human readable reports in the future.  Similarly, if any of the SML or UML algorithms identify suspicious activity, the report could provide drill-down options.  The only potential downside of this approach is overreliance on these types of reports: could they overemphasize specific data sources and miss important information a human investigator would have found on their own during the investigative process?  This is why model validation and calibration will become such important activities as AI technologies are embedded and deployed in regulatory compliance settings, to ensure models don't exhibit systematic bias or other shortcomings.

Biometrics

Computer vision covers a wide area of applications, but biometrics, and specifically facial recognition, is beginning to make its way into financial services.  Biometric authentication is the process of verifying an individual's identity by matching their fingerprint, retina, or face to pre-collected or pre-approved physiological identifiers.  Facial recognition involves a number of different problems which are generally broken into parts and solved separately, such as:

  1. Identify faces in an image

  2. Focus on the face, regardless of the direction, tilt, and lighting

  3. Identify unique features of the face such as the width of the eyes and height of eyebrows

  4. Compare unique features from the face in the image against a database of all known faces to find a match, and determine the identity or return unknown (a minimal sketch of this matching step follows below)
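
A minimal sketch of that final matching step, comparing face embeddings by cosine similarity; the 128-dimensional vectors here are synthetic stand-ins for the output of a real face-embedding model, and the 0.8 threshold is an arbitrary assumption:

```python
# Sketch of step 4 only: comparing face embeddings by cosine similarity.
# The vectors below are synthetic stand-ins for the feature vectors a
# real face-embedding network would produce.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, database: dict, threshold: float = 0.8):
    """Return the best match above the threshold, else 'unknown'."""
    best_name, best_score = "unknown", threshold
    for name, stored in database.items():
        score = cosine_similarity(probe, stored)
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Demo with synthetic embeddings standing in for model output.
rng = np.random.default_rng(3)
db = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = db["person_a"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(identify(probe, db))
```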

Facial recognition software has advanced at a rapid pace: Chinese researchers reportedly created an algorithm that can recognize faces better than humans.  Facebook's system (DeepFace) was tasked with comparing the faces from two images to determine if they were the same person, and it reached 97.35% accuracy, compared to an average accuracy of 97.53% for humans.  Researchers describe the DeepFace system as based on a nine-layer deep neural network with a mind-boggling 120 million parameters.

There are already organizations such as Thorn and Marinus Analytics tackling complex issues using facial recognition to fight child sex trafficking.  Marinus Analytics has released a FaceSearch feature, powered by the Amazon Web Services Rekognition platform, which allows law enforcement to upload a photo of a potential victim which is searched against internet sex advertisements to determine if the child has become a victim.  It's logical that law enforcement has turned to the latest advances in AI technology to fight child sex trafficking, given limited resources and the sheer volume of missing children and online advertisements.

Despite the advances in facial recognition technology there are still limitations, as demonstrated by researchers in Vietnam who reportedly fooled Apple's Face ID with a mask that cost $150.  This suggests that fraud could decrease for a time by raising the cost to fool the software, but clearly the criminal underworld will react and could innovate by using 3D printers to create custom masks in the future.

Shaun Moore, one of the creators of TrueFace.AI, highlighted that some facial recognition technology can be 'spoofed' and can't tell the difference between a real face and a photo.  There are other concerns with the collection of biometric data for authentication purposes because of the prevalence of data breaches.  If a hacker gets access to your password it can be changed, but your retina, iris, and fingerprints are permanent.  Facial recognition is probably the least invasive of all biometrics to collect, given the prevalence of social and professional photos floating around cyberspace.

Financial institutions have been providing biometric options for account access, as consumers seem to prefer it over remembering a password.  Traditionally, financial institutions have offered some access controls on a company's operating account by requiring two signatures from authorized account signers.  However, this has been more of a symbolic act than a real control, because most institutions didn't have ways to systematically validate all signatures; at most, a dollar threshold might trigger a manual review of a check or wire by an analyst.

In the future, as part of the onboarding process, financial institutions could begin to offer the collection of biometrics for authorized account signers, beneficial owners, etc.  Then fund transfers or other transactions could be initiated and logged with the requestor's biometrics, which creates greater transparency and could reduce the prevalence of employee fraud.  Financial institutions in the US, and many other countries, are required by law to collect and verify the name, address, and tax ID of potential customers.

Could biometric data fall under the same category in the future?

If institutions begin to collect biometric data on individuals and even beneficial owners for access control purposes, then the question becomes: what else can be done with the data?

What could develop over the next 10 to 30 years is that adverse media is expanded to include alternative data such as faces, as opposed to simply relying on textual references to an individual in a news article, court filing, PEP database, etc.  As facial recognition algorithms get better and processing speed continues to double every 18 months, searching for a face could become as routine as a Google search is today.

Sanctions lists could evolve to include not only textual information but also a high-resolution photograph of a sanctioned individual.  Then violating sanctions would not be limited to what gets represented in trade documents and fund transfers; simply being in the same photo as a sanctioned individual could lead to reporting by financial institutions and investigation by regulators.

For example, if photos of the meeting between Joaquin "El Chapo" Guzman Loera, the OFAC-sanctioned drug kingpin, Kate Del Castillo, and Sean Penn were leaked 30 years from now, it could create an adverse media hit in the sanctions screening programs of the future, assuming photos of sanctioned individuals are indexed and compared against facial data of an institution's customer base, including beneficial owners acting on behalf of a company.

Source: http://abcnews.go.com/International/kate-del-castillo-describes-sean-penns-meeting-el/story?id=37729504

If biometric data is used for screening purposes in the future, it could feed other emerging trends, such as the demand for the privacy and anonymity offered by the darknet and other encrypted applications.

Blockchain

AI received an incredible amount of media coverage in 2017, but it was eclipsed by the interest in blockchain, the underlying technology powering the Bitcoin protocol.  The real driver of interest in blockchain was the breakout year that Bitcoin had in 2017, which drove people to search related terms such as blockchain to understand why the price kept increasing and what was behind it all.

Source: Google Trends

At the most basic level, blockchain is a form of mutualized record-keeping in a near irrevocable, time-stamped ledger.  To understand blockchain it's worth taking a closer look at the emergence of Bitcoin, which was the first successful implementation of the blockchain idea.  It's useful to think of blockchain as an idea as opposed to a technology, because this may make it more intelligible, to a certain degree.

Despite some attempts to separate blockchain from Bitcoin, it was the world's first decentralised cryptocurrency, Bitcoin, that propelled the idea of blockchain into the public consciousness.  In October 2008, Satoshi Nakamoto, a pseudonym for a person or group of people, published a paper describing how the Bitcoin protocol would allow for online payments without having to go through a trusted third party such as a financial institution.  Bitcoin solved the problem of trustless consensus by providing a self-governing network through the innovative use of several different ideas such as proof of work (POW), cryptographic signatures, merkle chains, and peer-to-peer (P2P) networks.

The classic problem that Bitcoin solved is referred to as the 'Byzantine Generals Problem', a logical dilemma of trying to reach consensus described by Leslie Lamport, Robert Shostak, and Marshall Pease in an academic paper published in 1982.  The goal of the paper was to highlight that reliable computer systems must be able to handle malfunctioning parts that give conflicting information to different parts of the system.  This concept was expressed abstractly by imagining the generals of the Byzantine army camped outside an enemy city they intend to attack.

Since the different groups under each general's command are dispersed over a wide geographic area, centralized command is difficult.  Hence, the generals are forced to relay instructions to one another through messengers so they can come to a consensus on when to attack the city, which will only succeed if all the generals strike at the same time.  The problem is that the generals know there is a traitor among them, so how will they know the message they receive has not been tampered with?  If different messages are sent to different generals, it will break down the army's cohesion on how and when to act, and could cause the attack to fail.

Source: https://medium.com/@DebrajG/how-the-byzantine-general-sacked-the-castle-a-look-into-blockchain-370fe637502c

The same idea applies to a P2P payment system which doesn't have any trusted third party validating the transactions.  In a shared and distributed ledger, any payments (messages) to the ledger (coordinated attack time) must be trusted.  But large distributed networks can have millions of users (generals), so how can payments made through an open ledger system be trusted with no third party facilitating the activity?  Blockchain essentially solves the Byzantine Generals Problem through an innovative combination of existing ideas including cryptography, P2P networks, and game theory.

A Closer Look at Bitcoin

Again, Bitcoin is essentially a digital ledger of all transactions that have been recorded since its genesis; transactions are grouped together to form blocks, which are cryptographically secured and linked to previous blocks in the blockchain (ledger).  The Bitcoin ledger is distributed across a network of computers commonly referred to as nodes.  In order for bitcoin transactions to complete successfully, the nodes on the network must agree, or reach consensus, that the transaction is accurate and true.  To conduct a bitcoin transaction, a person must have a bitcoin wallet that can read and initiate transactions on the ledger.

To move bitcoin from one wallet to another, the network of nodes will examine the transaction to verify its authenticity, ensure the bitcoin is available, and prevent double-spending.  Once the individual transaction is determined to be authentic, it gets sent to a queue with other pending transactions that will form the next block in the blockchain.  For transactions to be added to a block they must be approved by the network.  The final approval is based on a computational puzzle (the proof-of-work) which is solved by specific nodes known as miners.  Once a miner solves the proof-of-work for a pending block, the solution is sent to the network for verification.  Once the solution is verified by the network, the block is added to the blockchain.

Bitcoin uses the Secure Hash Algorithm 2 (SHA-2) family, developed by the National Security Agency (NSA), to hash transactions.  For example, if you wanted to send 1 bitcoin to your friend, the transaction itself can be thought of as a string of characters which is put through a cryptographic hash function that generates a hash value.  The interesting thing about hash functions is that they are nearly impossible to invert.  In other words, it's nearly impossible to know what the input (transaction) was by analyzing the hash function's output.  Furthermore, if only one character of the input changes, the hash function will create a completely different output.

After each pending transaction is put through the hash function, the individual hash values are routed through a merkle tree to form one combined hash value from all of the pending transactions that will form the new block.  The computational puzzle is then solved over a character string built from that combined hash value, the hash value from the previous block, a timestamp, and a cryptographic nonce.

Source: https://medium.com/@DebrajG/how-the-byzantine-general-sacked-the-castle-a-look-into-blockchain-370fe637502c
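
Both properties can be demonstrated with Python's hashlib.  The merkle root below is deliberately simplified; real Bitcoin uses double SHA-256 and specific byte-ordering rules:

```python
# Sketch of the hashing mechanics described above, using Python's
# hashlib (SHA-256, a member of the SHA-2 family).
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Avalanche effect: changing one character of the input yields a
# completely different, effectively unpredictable output.
print(sha256(b"Send 1 BTC to Alice"))
print(sha256(b"Send 2 BTC to Alice"))

# A simplified merkle root: pairwise-hash transaction hashes upward
# until a single combined hash remains (illustrative only).
def merkle_root(tx_hashes: list) -> str:
    level = tx_hashes
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate last hash if odd
            level = level + [level[-1]]
        level = [sha256((a + b).encode())
                 for a, b in zip(level[::2], level[1::2])]
    return level[0]

txs = [sha256(t.encode()) for t in ["tx1", "tx2", "tx3"]]
print("merkle root:", merkle_root(txs))
```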

The cryptographic nonce is where part of the game theory comes in: to solve the computational puzzle, miners search for a nonce value, made arbitrarily difficult to find, that produces an output meeting the Bitcoin protocol's standards.  The only way for a miner to solve the puzzle is by brute force, through trial and error and a lot of computing resources.  Miners are incentivised to compete with other miners to be the first to solve the puzzle, despite it being very difficult, because the winner is rewarded with bitcoins.  The graphic below shows the miners' goal: append a nonce value to the end of the combined hash value from the pending block and pass the concatenated value through the hash function again, producing another hash value with 5 leading zeros.

Source: https://medium.com/@DebrajG/how-the-byzantine-general-sacked-the-castle-a-look-into-blockchain-370fe637502c
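
A toy version of this nonce search in Python; real Bitcoin difficulty targets are vastly harder, and the block data here is a placeholder string:

```python
# Toy proof-of-work in the spirit of the graphic above: brute-force a
# nonce until the hash of (block data + nonce) has 5 leading zeros.
# Expect on the order of a million attempts, a few seconds in Python.
import hashlib

def sha256_hex(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

block_data = "combined_tx_hash|prev_block_hash|timestamp"  # placeholder
nonce = 0
while not sha256_hex(block_data + str(nonce)).startswith("00000"):
    nonce += 1  # trial and error is the only way forward

print("winning nonce:", nonce)
print("block hash:", sha256_hex(block_data + str(nonce)))
```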

Issues with Bitcoin

Bitcoin has proven that the interaction between core concepts such as cryptography, P2P networks, and game theory can be applied to create a functional system where consensus is possible without a trusted third party.  In this sense Bitcoin has been a success because it has inspired a wave of investment and research into what else might be possible.  

However, there are still fundamental issues with the current Bitcoin protocol; the three major ones are scalability, sustainability, and money laundering.  Bitcoin can only process between 3 and 7 transactions per second, based on an arbitrary limit of a 1 megabyte (MB) block size.  How can Bitcoin be a viable medium for transactions with an upper limit of 7 transactions per second?  This is far below the scale of transactions that can be processed through the Visa and PayPal infrastructures.
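
The arithmetic behind the 3-to-7 figure, assuming an average transaction size of roughly 250 bytes and one block every 10 minutes (both assumptions, not protocol constants):

```python
# Back-of-the-envelope throughput check for the figures above.
block_size_bytes = 1_000_000   # the 1 MB block size limit
avg_tx_bytes = 250             # assumed average transaction size
block_interval_s = 600         # roughly 10 minutes between blocks

tx_per_block = block_size_bytes / avg_tx_bytes
print("tx per second ~", round(tx_per_block / block_interval_s, 1))  # ~6.7
```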

Bitcoin enthusiasts would argue that the block size could be increased if some of the nodes, or members, on the Bitcoin network agree to new rules, which is referred to as a hard fork.  Ironically, for participants on the Bitcoin protocol, where consensus has been predefined and governed by mathematics, it's really hard to come to consensus on Bitcoin outside of the protocol itself.  Bitcoin Cash is an example of a hard fork, which occurred in August 2017, where some nodes follow the new rules defined by the fork and other nodes follow the old rules of Bitcoin.  This essentially creates two blockchains with a common history which are no longer connected and will create their own unique transaction histories going forward.  Not all hard forks have been successful: the SegWit2x proposal was cancelled because supporters felt consensus had not been reached among a wide enough user base of the Bitcoin community.

Another major concern with Bitcoin, and cryptocurrencies in general, is sustainability, because the computational puzzle which needs to be solved requires a lot of computing power.  According to Digiconomist, it was estimated that the Bitcoin network consumes more electricity in a year than all of Ireland.  There are various energy sources for electricity, but fossil fuels still hold a majority share in the US.  According to the Institute for Energy Research (IER), 41% of electricity is generated by coal, and burning coal creates carbon dioxide, which is something governments are trying to regulate.

Based on the estimated energy consumption of Bitcoin, it would take roughly 250 to 300 kilowatt hours to process a single bitcoin transaction.  This would equate to powering the average US household for 8 days or burning 244 pounds of coal.  Regulators may try to impose energy consumption standards on cryptocurrency transactions, similar to emission standards imposed on automobiles.  However, the risk is that artificially decreasing the difficulty of the computational puzzle could make the protocol more vulnerable to a cyberattack.
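
A quick check of that household comparison, assuming the low end of the estimate and an average US household consumption of roughly 30 kWh per day (an assumption):

```python
# Rough arithmetic behind the comparison above.
kwh_per_tx = 250               # low end of the 250-300 kWh estimate
household_kwh_per_day = 30     # assumed average US household usage

print("days of household power per tx ~",
      round(kwh_per_tx / household_kwh_per_day, 1))  # ~8.3 days
```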

The last major concern with Bitcoin is its connection to money laundering and crime.  Ironically, it was exactly the criminal element that were the early adopters of Bitcoin, and this laid the foundation for its legitimacy as a medium for the transfer of value.  Silk Road was the first darknet market used primarily for selling illegal drugs, and it was shut down by the FBI in 2013.  The darknet is built on top of the internet and is only accessible through special tools such as Tor and via direct links, invitations, or passwords, allowing anonymity for both the website host and the user browsing the web.  Some users use software like Tor to browse the web anonymously in countries where censorship of the internet is common, such as China and Iran.  The darknet allowed dark markets to emerge, but the one missing ingredient was a way to exchange value anonymously, which Bitcoin solved, to a certain degree.

Silk Road was very successful: it had over 13,000 items for sale in October 2014, most of them drugs, categorized by types such as stimulants, opioids, precursors, etc.  As a testament to Silk Road's success, the US Department of Justice (DOJ) collected $48 million in proceeds from an auction where 144,336 confiscated bitcoins were sold.  Bitcoin was a much more effective medium for conducting illicit transactions when it was not well known in the public mind.

For example, one of the main ways to buy bitcoin today is to go through an exchange, and many exchanges perform some basic form of KYC procedures and may collect identification, addresses, etc.  This makes it easier for law enforcement to piece together who is actually behind a given bitcoin transaction.  However, it's still possible to purchase bitcoin anonymously if someone can deal directly with a seller and circumvent the need to go through an exchange that will try to verify their identity.

Perhaps the leading indicator that Bitcoin will trend downward in 2018 and beyond is that criminals, who laid the foundation for its mass adoption, are beginning to abandon it for more anonymous cryptocurrencies.  Law enforcement is ringing the alarm bells to policy makers, as can be seen in testimony made to the Senate Judiciary Committee on modernising money laundering laws by Matthew Allen, ICE's special agent in charge of Homeland Security Investigations (HSI):

"HSI agents are increasingly encountering virtual currency, including more recent, anonymity enhancing cryptocurrencies (AECs), in the course of their investigations. AECs are designed to better obfuscate transaction information and are increasingly preferred by [transnational criminal organizations]."

Regardless of what happens to Bitcoin, bubble or not, and despite the rise of criminal dark markets, Bitcoin was the first successful implementation of a blockchain protocol, and it has inspired a lot of smart people to imagine what types of applications are possible.  Hopefully this will lead to positive social benefits and sustainable innovation.

First Digital Government Doubles Down after Cyberattack

A recurring theme in this paper has been CAS and, more subtly, the interconnectedness of things.  It's very interesting to see what prompted Estonia to digitise nearly its entire government, and how it earned the title of 'the most advanced digital society in the world' from Wired magazine.  Estonia has historically been a battleground for various countries such as Denmark, Germany, Russia, Sweden, and Poland because of its important geographic position in the Baltics.  During World War II, Nazi Germany launched its invasion of the USSR, which included Estonia, in 1941.  Eventually, the Soviet army was able to push the Nazi troops out of Estonia by 1944 and reoccupy the country.

In 1947, after World War II had ended, the Soviet authorities introduced the Bronze Soldier, which was supposed to represent the USSR's victory over the Nazis.  Monuments erected after the war were to be expected, especially for the USSR, which suffered the most human casualties of any country.  However, for ethnic Estonians the monument didn't represent liberation at all; it was a solemn reminder of years of Soviet occupation and oppression.  In 2007, the Estonian government decided to relocate the Bronze Soldier from the center of its capital to another site on the outskirts of the city.

The decision to move the monument created outrage in Russian media outlets and prompted two nights of protesting, rioting, and looting by Russian speakers.  Then, on April 27, 2007, Estonia was hit with a series of cyberattacks, some of which lasted weeks.  The websites and online servers of different institutions across Estonia were taken down through denial-of-service attacks, where servers are overwhelmed by a large number of artificially created online requests.

The result of the cyberattacks during that time period was that ATMs didn’t work reliably, government employees had issues communicating via email, and newspapers and broadcasters couldn’t get the news out.  Liisa Past, a cyber-defence expert at Estonia's state Information System Authority, explained that “cyber aggression is a different type of kinetic warfare” which can be deployed with lower risk of creating tensions with Western powers and other Nato nations than traditional forms of warfare.  

Russian aggression in Crimea resulted in more US sanctions, but cyber aggression didn't, because it's not always easy to prove who was behind an attack.  While Estonia strongly suspects the 2007 cyberattacks were coordinated by the Kremlin, it doesn't have any definitive proof.  The US is beginning to see cyber aggression as a real geopolitical risk, which can be mitigated, when the perpetrators are known, to a certain degree, through its cyber-related sanctions program launched on April 1, 2015.

Estonia had the advantage of being a young nation when it gained its independence in 1991, and it was not burdened by legacy IT systems, because its digital infrastructure, a remnant of the Soviet era, was feeble.  The country's population, just over 1 million people, is much smaller than that of the majority of other nations, which makes it more nimble in changing and reaching consensus among heterogeneous stakeholders.

These conditions and the 2007 cyberattack prompted Estonia to take steps to further secure its digital infrastructure by adopting radically new technology and becoming more cyber resilient.  There were also economic considerations from the beginning of the country's independence: by becoming a leader in technological innovation, it would create greater economic opportunities for a country with few other growth options available.

The digital infrastructure that Estonia built is based on public key infrastructure (PKI) and an eID system which leverages encryption technology and requires 2-factor identification for access.  Information can be shared among different government agencies on a need-to-know basis, and audit trails are kept of what data is being accessed and by whom.  Also, the cost savings and efficiency of having this system are profound, because the data is decentralised and is not duplicated across the various participating institutions, but simply accessed and used on demand.  Blockchain was not part of Estonia's initial core digital infrastructure, but the country began testing it in 2008 and put blockchain into production in 2012 for various use cases including national health, judicial, legislative, security, and commercial code systems.  The below graphic shows key milestones in the digital evolution of Estonia.

Source: https://e-estonia.com/

Blockchain in Financial Services

Financial services is one sector that has jumped on the blockchain bandwagon.  This can be easily illustrated by the amount of investment in blockchain startups and the number of banks participating in blockchain-related projects.  There was a surge of participation in blockchain startups in 2015 and 2016, which still continues today, as seen in the graphic below.

Source: CB Insights

Large existing customer bases and the ability to cooperate with other financial institutions are distinct advantages that banks have over their fintech competitors.  This ability to cooperate can be traced back to, among other things, the creation of SWIFT, where financial institutions had to agree on standard ways to communicate with one another regarding payments and other types of financial transactions.  This is an important distinction to make, because the success of many of these blockchain projects requires the group of participants reaching consensus on the rules that govern the system.

Based on the financial services industry's history of coming up with standards to interact with one another, banks are poised to be one of the groups of institutions that can derive tremendous economic value out of new applications of blockchain.  While cost savings may be one of the motivations to participate in blockchain-related projects, there are also other benefits such as a better customer experience.  There is also a clear distinction between the bitcoin blockchain and the consortium blockchains such as R3, Hyperledger, and the Enterprise Ethereum Alliance.  The bitcoin blockchain is completely public and only requires the minimum system requirements, an application download, and an internet connection to participate, but the consortium blockchains are a hybrid because they are 'member-only' (not public) and are managed, to a certain degree, by a central authority.  So it depends on how you define blockchain, but the consortium blockchains are probably better described as distributed ledger technology (DLT).  In a 2016 white paper, The Depository Trust & Clearing Corporation (DTCC) defined DLT as having the following characteristics:

  • A common shared version of the truth – every trusted member has a copy of the same history of all transactions in an asset.

  • All data is encrypted in a common manner according to modern standards and can only be decrypted and inspected by the owner of the required keys to the data.

  • The shared ledger, used by every trusted party in trading a particular asset, establishes a network and data standard that can be integrated with tools, workflows and asset management systems in a simplified, consistent manner.

  • The transaction distribution model defines a paradigm for always-on, active:active processing, which is more resilient to local database corruption than existing hardware replication models.

There are many use cases for DLT, including trade settlement, payment processing, trade finance, etc.  It really comes down to the lack of transparency and interoperability between financial platforms, both within institutions themselves and externally at other entities, that creates spiraling costs and inefficiencies.  Consider what SWIFT actually accomplished: it allowed for the standardisation of message communication between participating members in the network.  Each member has its own technology infrastructure and data storage facilities.  So in essence, the data was duplicated across every member's infrastructure involved at some point in the message chain.  This is what Estonia did away with, the unnecessary duplication of data, by leveraging an interoperable digital platform where data is shared and accessed securely on demand.

Historically, SWIFT messages would have to be routed through multiple banks' infrastructures, which was clunky, and if a payment did get hung up for compliance or some other reason, there would be no transparency into where it sat within the collective processing queue.  Sharing a distributed ledger would greatly reduce processing times and the costs of old legacy systems, if consensus on the rules is reached and enough participants sign up to the same platform.  As the technology matures and the consortium blockchains start to see some small incremental successes, there will be renewed interest in tackling large scale problems.

Recommendations

At the end of the Channeling big data through RegTech webinar, 8 recommendations were listed, but only a few were explored briefly given time constraints.  Each recommendation will be explored in more detail below, in the context of the points highlighted in this paper thus far.

Source: https://www.bvdinfo.com/en-gb/knowledge-base/webinars-on-demand

(1) Create a compliance and operations innovation committee (COIC)

As discussed, there are multiple threats that financial institutions face on various fronts, including rising consumer and regulatory expectations.  This means banks have to transform their infrastructures to become completely integrated and digital, able to satisfy the on-demand and near real-time delivery of various requests.  As regulatory expectations increase, so do the costs to comply, since the underlying technology being used hasn't drastically changed in the last 10 years.  Large institutions can be categorized as CAS in their own right, with conflicting priorities across business lines, so a COIC is needed to drive change effectively.

As mentioned earlier, a December 2017 report by the McKinsey Global Institute estimated that as many as 375 million workers globally (14 percent of the global workforce) will likely need to transition to new occupational categories and learn new skills, in the event of rapid automation adoption.  Hence, another key priority for any innovation committee needs to be enterprise policies on human resources.

Not all jobs can be tracked by utilization rates, but customer onboarding, fraud, and money laundering investigations are good job categories where an employee's utilization percentage could be estimated, to a certain degree, given most of the work is recorded in an investigation tool.  So, an institution might estimate that the average utilization percentage within this department is 70%.  If the utilization percentage drops down to an average of 60%, then what will the institution do?  There are several options:

  • Reduce employee headcount to raise the group’s total utilization percentage back to 70%

  • Retain employees and reallocate the extra 10% of time gained among training, extra vacation time, or maybe a 4-day work week

Institutions need to start thinking about how they will retain good employees for the long term and how they can train people to do completely new jobs.  Also, European companies tend to offer more vacation time than ones in the US, so could increased efficiency mean more time given back to employees for a better work-life balance?

Also, it's important for institutions to have a utilization rate policy, whenever possible, and to monitor compliance with it, so heads of business don't see automation as an opportunity to reduce headcounts disproportionately.  This could presumably be done in an effort to reduce the unit's costs and artificially inflate profits, which may lead to bigger bonuses for upper management.

(2) Ensure the Chief Data and Technology Officers regularly brief the COIC regarding the organisation’s big data strategy

This point assumes that one of the main objectives of the COIC is a big data strategy.  And it should be.  To get the most out of the latest advances in AI and machine learning, a big data platform, on premise or in the cloud, is a prerequisite for some of the more computationally intensive algorithms.

Big data platforms offer other opportunities such as better master data management (MDM).  Another way to think about these central repositories is that they are flexible and don't need to adhere to a monolithic data model.  They also offer the opportunity to manage the meaning of data as opposed to simply its movement.  Once an organization knows what its data means, it can protect and value it.

The next step in the evolution of analytics is the reusability of data, similar to how SpaceX was able to reuse rockets.  Elon Musk realized that one of the major costs of space exploration was that rockets were throwaway items.  If rockets could be fired into space multiple times, it would drastically reduce the cost of exploring space.  It's essentially the same idea with data from different source systems, because today the same mortgage file could be sent to 10 different analytic systems and would need to be shaped 10 different ways to fit within the data models of those systems.  Once analytic systems can take data 'as is', that will be the next phase in the evolution of analytics, and it will drastically reduce operational overhead costs.

(3) Request investment from the Board to fund AI research and use cases

It won't be a surprise to anyone who has worked on large scale enterprise projects that support from the Board of Directors is essential to the success of the initiative.  This support can take many forms, such as financial or agenda-based.  The Board also needs to accept that they could fund AI research internally and the project could fail to deliver anything of value besides a learning experience.  There needs to be a mind shift that failure is not necessarily a bad thing, if the group learns from it and can document its findings for the next project to reference.  It's the willingness to fail, learn from that failure, and try again that will lead to the best possible use cases for new and cutting-edge technology such as AI.

(4) Begin to interview and hire staff with competencies in big data, AI, and machine learning

Institutions will need to invest in projects, but also in people, to help put proofs-of-concept (POCs) in motion.  Since talent in big data, AI, and machine learning is limited, the institution may need to consider hiring an expert in AI to help guide a select group of developers to learn how to develop different types of AI algorithms.  It may seem odd that an institution would need to create a type of internal university, but completely relying on external consultants is not a viable option either.

There are other options available, such as sending developers to AI bootcamps or short-term fellowships.  The Data Incubator, founded by Tianhui Michael Li, offers a data science fellowship specifically designed to teach highly educated professionals the practical and applicable quantitative skills that companies need.

The whole idea of work and training needs to be completely rethought as technology and automation rewrite the rules for what work will be in the future.  As companies send some of their existing developers to get trained as data scientists, they should also offer existing employees education options to learn to code.  If there is high demand, decisions will have to be made on whether selection is a meritocracy or maybe a random raffle.

(5) Identify small use cases in compliance to leverage AI

Another thing that the COIC can help drive is finding real pain points for operations and compliance.  What are some potential use cases for AI?  One could be to hyper-segment, or cluster, groups of customers together based on many data attributes using UML, for fraud, money laundering, or other purposes.  Another pain point is false positive alerts in transaction monitoring systems (TMS), which new technology could potentially reduce.  Identifying false negatives could also increase an institution's coverage by catching instances that existing algorithms may be missing.

(6) Research AI vendors to help with small use cases

Since the AI vendor market is still growing, institutions can take advantage of this by becoming early adopters and requesting specific functionality that some vendors may be willing to accommodate to get some POCs going.

(7) Refocus existing compliance staff and train them on model validation, quality assurance, and data governance

As mentioned in the first point, technology and automation are likely to displace millions of jobs, so institutions should do what they can to retain their existing employees.  There would be a natural migration of investigators or client onboarding staff into quality assurance, model validation, and data governance, because they know what the process looks like from the inside.  Obviously, they would need to learn some new concepts, but this goes back to a key question of how institutions want to define learning, knowledge, and work.

If there are enterprise standards established for core activities such as model validation, and there should be, then it's reasonable to think that people can be trained to do this work.  Furthermore, data governance is often thought of as an activity for the IT department to deal with.  This idea needs to change, and the tools need to be given to the business lines to manage the meaning of their own data.  This is the whole point of effective challenge in model risk management.

There have been compliance failures that simply stem from the disconnect between compliance and technology: because compliance had no transparency into the black box, the problem was allowed to fester, which exacerbated the consequences when it was found.  Allowing compliance some access to big data platforms, and to the data's metadata or meaning, will allow compliance executives to conduct a better effective challenge and protect themselves from professional liability.

(8) Evaluate goals and progress on, at least, a quarterly basis and produce yearly report of activities and findings

The COIC should clearly define its objectives and timeline, and report on them regularly.  But innovation implies that things may not always work the first time, so expectations need to be set in a way that values learning and progress as opposed to quantifiable economic results in the short term.  Also, the word 'failure' may need to be replaced in an organization's vernacular with another type of phrase.  So, when reporting about AI projects to the board, it shouldn't be said that 'our AI project failed', but something more to the effect of 'our hypothesis was incorrect and here is what we learned'.

Boards of directors will want to see quantifiable results, so metrics could be created to measure intangible assets such as AI knowledge within the firm.  One way regulators have measured an institution's AML knowledge is to count the number of people it has on staff with anti-money laundering certifications.  Similarly, the number of PhDs, the number of individuals who went through AI fellowships, and the number of failed and successful AI projects could all be potential inputs to help quantify an institution's intangible asset of AI knowledge.

Conclusion

If we accept the assertions made about CAS, then we can conclude that the future of due diligence is unknown, but we can model some of its potential futures and guess where it could be going.  What's more interesting is that if we buy into the idea of CAS, it suggests that all of the entities which make up the system, including legislators, regulators, executives, employees, consultants, institutions, software firms, non-profit entities, and others in the supply chain, can help shape its future by taking action and allowing the system to evolve to a better and more sustainable future.

Criminal networks are adapting rapidly in many ways such as with the use of drones to smuggle drugs or leveraging dark markets to sell stolen data.  Also, the heightened risks associated with asymmetric terrorism and cyberwarfare suggest that institutions should double down on technological innovation, as Estonia did, quite impressively.

Regulatory expectations continue to evolve in scope, and some, such as PSD2, are disruptive and pose real strategic risks to firms that don't go all in on their digital transformation initiatives.

The general trend is that, as AI matures in its ability to process massive amounts of data and is used for scenarios such as biometric identification, human readable report generation, more accurate risk classification, and other advanced NLP sentiment analysis, alternative data sources will continue to expand the scope of compliance screening programs, including entity due diligence.  It will not be enough to simply trust what your customer tells you and check what's available online in a limited number of data sources, as the physical world will come into play with the rise of alternative data such as networks of nanosatellites that leverage synthetic-aperture radar technology.

Innovation is a key element that governments and legislators need to support, so the institutions which need to stay in compliance with rules and laws can be given some leeway to experiment.  This doesn't mean that regulators shouldn't hold institutions accountable for governance failures, but the only way to push the industry forward and truly manage risk is to evolve and try new ways of doing things.  However, even if an institution operates in a jurisdiction that doesn't offer regulatory sandboxes with safe harbour protections, that shouldn't stop experimentation, which could happen in the background.

The most successful organizations of the future will be the ones that can manage and accelerate their perpetual transformation from one state to another seamlessly.  The leading companies of the future will manage change, as if they are flying in an airplane that never lands and refuels in mid-air and when the transformation of the organization to its next state does occur everyone skydives out of the old plane, at different intervals, into the new plane that’s also moving in mid-air.  It’s the organizations that land their plane to refuel and move people safely on the ground from one plane to another that will fall behind in the innovation game.

Institutions also need to think seriously about their human resource policies and use greater efficiencies and reduced costs as an opportunity to improve the quality of their employees' lives and improve staff retention.  Technological innovation will transform how organizations function, and similarly, education and training can transform what functions people can perform.  So, education and training need to take a front row seat in the companies of the future, and it can't be limited to a capped, tax-deductible reimbursement for getting a good grade in a graduate-level course.  Education needs to become a core part of the organization's long-term strategy, and hiring too much talent from outside the firm could lead to a dilution of the firm's collective business knowledge and historical progression, including challenges, successes, and failures.

In terms of the place of people in the future of entity due diligence, it should be acknowledged that logical reasoning and human intuition are still the crown jewels of the decision-making and investigative process.  The advances in technology, automation, and AI are all powerful tools which should be handled with care.  This is why governance and oversight need to be implemented to continually assess the impact of technological innovation on the workforce: it should enable an individual to make better, more informed, and more accurate decisions faster, as opposed to marginalizing a person’s logical reasoning and intuition.  With all power comes responsibility, and the technological innovation discussed throughout this paper has great power to transform the future of entity due diligence for the better, if handled with care, and maybe even make the world a little safer.

Read More
Keith Furst

LESSONS LEARNED: RECORD HACKS, BREACHES TO CONTINUE IN 2018 AS MORE CRIMINALS MONETIZE STOLEN DATA

Editor Note: This article originally appeared on The Association of Certified Financial Crimes Specialists (ACFCS) website on January 18, 2018.

Written By: Brian Monroe with feedback from Keith Furst

As ACFCS surveys the landscape of what new challenges and opportunities in financial crime 2018 will bring, we are continuing our “Lessons Learned” series, asking key thought leaders what last year taught the community and how that knowledge should help arm compliance professionals for the year ahead.

Not surprisingly, a good predictor of what will happen in 2018 is rooted in trends from 2017, a year where criminals made history with record hack attacks and equally massive data hauls that put millions of people and companies at risk.

These groups – whether large organized criminal outfits, rogue nation state regimes or small-time criminals – didn’t discriminate.

Their targets spanned the spectrum of small and large businesses alike, including banks, law firms, and household name companies, even gaining access to the vast treasure trove of information held by a credit reporting agency.  

That likely will only continue this year, and potentially even get worse.

In one word, the sheer magnitude of the data obtained in 2017 was “unprecedented,” said Keith Furst, founder of Data Derivatives, a boutique consulting firm that helps institutions implement, fine-tune, and validate financial crime systems.

Furst was kind enough to lend his thoughts and insight on these issues and others in a chat with ACFCS Director of Content, Brian Monroe. Here is an edited transcript of that conversation.

What do you think were the biggest financial crime trends in 2017 and why?

One of the biggest financial crime trends of 2017 was the commoditization of sensitive data. While cyberattacks have been increasing in sophistication and frequency for the past few years, the sheer magnitude and quality of the data obtained in 2017 was unprecedented. 

For example, the Equifax hack will undoubtedly change the rules of the identification game. In other words, how do I know you are who you say you are? Simply possessing the correct data is not enough anymore.

One way to address identification is knowledge-based authentication (KBA), where the person is asked questions that the real person, and not the average cybercriminal, should know.

It also implies that the answers to some of those questions should not be easily accessible in cyberspace. The other emerging trend is biometrics, which could help address some of the identity problem but could create other issues. For example, if one day your fingerprints can authorize a money transfer, open your car, and unlock your phone, then what happens when your fingerprints are stolen?
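
To make the KBA idea above concrete, here is a minimal sketch of how such a gate might work, assuming salted answer hashes and a two-of-three pass policy; the question bank, field names, and threshold are illustrative assumptions, not a reference design.

```python
import hmac
import hashlib

# Illustrative question bank; a real deployment would draw on
# out-of-wallet data that is hard to find in breached data sets.
QUESTION_BANK = {
    "q1": "Which street did you live on in 2009?",
    "q2": "What was the approximate amount of your last mortgage payment?",
    "q3": "Which of these phone numbers have you used before?",
}

def hash_answer(answer: str, salt: bytes) -> str:
    """Store only salted hashes, so the KBA answers themselves do not
    become another breach liability."""
    normalized = answer.strip().lower()
    return hmac.new(salt, normalized.encode(), hashlib.sha256).hexdigest()

def verify_kba(responses: dict, stored_hashes: dict, salt: bytes,
               required_correct: int = 2) -> bool:
    """Pass if the user answers at least `required_correct` questions
    correctly; the two-of-three threshold is an assumed policy choice."""
    correct = sum(
        1 for qid, answer in responses.items()
        if stored_hashes.get(qid) == hash_answer(answer, salt)
    )
    return correct >= required_correct

salt = b"per-customer-random-salt"  # assumption: one random salt per customer
enrolled = {"q1": "Maple Street", "q2": "1450", "q3": "555-0199"}
stored = {qid: hash_answer(a, salt) for qid, a in enrolled.items()}
print(verify_kba({"q1": "maple street", "q2": "1450", "q3": "wrong"}, stored, salt))  # True
```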

The sad thing about the current state of the world is that everything is for sale - including sensitive data - and nothing is off limits. There are various marketplaces on the darknet that specialize in the sale of sensitive data, credit card information, child sexual exploitation material, hacker-for-hire services, etc.

The fact that data breaches happened with greater frequency and success in 2017 fed the demand as ordinary criminals learned how to monetize these data sources.

For anyone who wants a comprehensive, readable, and non-technical account of cyberwarfare, cybersecurity, and cyberattacks then I highly recommend the book, “Virtual Terror: 21st Century Cyberwarfare,” by Daniel Wagner.

How did the industry respond to those vulnerabilities, regulatory focal points or criminal tactics?

Cybersecurity is a very complex issue because it involves many disparate and amorphous actors, including other nations initiating cyberattacks. Hence, imposing regulations on the private sector can help strengthen protections and controls, but it may not address all of the actors, issues, and challenges in a comprehensive way. 

For example, it has been documented that the Chinese government initiates cyberattacks against companies in the United States and steals intellectual property, which is shared with its private and academic sectors to help fuel economic growth.

So simply imposing regulatory requirements on private companies, which have to protect themselves from an adversary with the financial resources, technical expertise, and determination of a foreign government, is not a fair fight.

In other words, cybersecurity is also a topic of foreign policy, and the US government should clearly define parameters of what types of aggression fall into what category and what types of responses are permissible from the private sector.

That being said, there is a lot of value in creating a framework for cybersecurity best practices and the New York Department of Financial Services (NYDFS) was the first US financial services regulator to propose one with its part 500 regulation.

Let’s examine the case of Equifax to understand what responsibility it bears: a situation where the company got hacked, reportedly because of a vulnerability that was identified but never patched.

Equifax failed to deploy a patch that could have prevented the hack, which means there was an internal governance failure. The other major failure was that the company didn’t encrypt the Social Security numbers of millions of people, leaving them in plain text.

Hence, if a hacker did breach their system, accessing the data was that much easier. However, the one thing that Equifax can’t control is the quality of software available on the market.  

Could subjecting institutions to more stringent regulatory rules be unfair, to a certain degree, by not holding the software industry accountable for the products they produce and the cybersecurity standards adhered to?

In summary, it's a good thing that the NYDFS created the part 500 cybersecurity rule, but policy makers must not lose sight of the fact that this is a complex problem with many interrelated actors and penalizing specific agents within the ecosystem could obfuscate the problem. 

The financial crime resulting from data breaches also reemphasizes the urgent need for more robust information sharing mechanisms among foreign governments, financial intelligence units (FIU), corporations, and other law enforcement groups.

What else do you think financial crime compliance professionals, regulators and FIs should be doing to better detect and prevent financial crime?

The Clearing House published an excellent paper in February 2017 titled “A New Paradigm: Redesigning the U.S. AML/CFT Framework to Protect National Security and Aid Law Enforcement,” which outlines some key recommendations.

I don’t agree with all of the recommendations proposed, but a good majority of them make a lot of sense.

The paper discusses information sharing, clarifying regulatory rules, the need for a central repository of beneficial ownership information, regulatory sandboxes, etc. I agree with the paper’s recommendation that regulators should offer institutions the option to participate in regulatory sandboxes under a safe harbor rule that prevents penalties if something goes wrong. 

US regulators seem to be worried that allowing sandboxes will give institutions the opportunity to wiggle their way out of responsibility.

The reality is that identifying money laundering and other types of financial crime is very complex, and using more advanced technology, such as machine learning, natural language processing (NLP), and computer vision, can aid in that process.

Many enforcement actions reference governance as one of the main causes of serious compliance failures. But why are compliance programs so hard to govern effectively? Because they are complex systems, and managing complexity is not easy. This leads to another question: can new technology help reduce complexity and make governance easier?

Artificial intelligence (AI) and regulatory technology (regtech) are full of hype right now, and sometimes it’s hard to parse out the prize from the promise. However, institutions should be cautiously optimistic, as I am, and should start by focusing on innovation with small use cases regardless of the regulatory environment they are in.

There have been some incredible advances and achievements of AI-embedded technology, so institutions need to start experimenting now so they don’t fall behind. 

Also, big data platforms can help address one of the major issues that has plagued financial crime programs for years: data integrity. In these central repositories, institutions can manage the enterprise meaning of their data, not only its movement.

What is an example you have seen using these technologies?

There was an AI vendor which helped a leading global financial institution reduce false positive alerts from its transaction monitoring system (TMS) by 20%. This is an important step in the right direction because it frees up capital to invest in other areas of a compliance program, such as risk assessments, model risk management, quality assurance, etc.
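
As a minimal sketch of the general technique, assuming a supervised model trained on historical analyst dispositions; the features, synthetic data, and the 0.2 hibernation cutoff below are illustrative assumptions, not the vendor’s actual method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Synthetic stand-in for historical TMS alerts: amount, weekly velocity,
# counterparty risk score, plus the analyst disposition (1 = escalated).
n = 5000
X = np.column_stack([
    rng.lognormal(8, 1.2, n),   # transaction amount
    rng.poisson(3, n),          # transactions in the trailing week
    rng.uniform(0, 1, n),       # counterparty risk score
])
# Assumed ground truth: large amounts to risky counterparties get escalated.
y = ((X[:, 0] > 6000) & (X[:, 2] > 0.7)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)
model = RandomForestClassifier(n_estimators=100, random_state=7).fit(X_tr, y_tr)

# Route only alerts above a score threshold to analysts; everything below
# is auto-hibernated (the 0.2 cutoff is an assumed policy choice).
scores = model.predict_proba(X_te)[:, 1]
print(f"share of alerts hibernated: {(scores < 0.2).mean():.0%}")
```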

What do you think will be the big issues to tackle in 2018?

There will probably be a spike in new corporation registrations, including shell companies, as Trump’s new tax plan, the Tax Cuts and Jobs Act (TCJA), incentivizes people to open corporations as vehicles to hold assets, shield income, pay dividends, etc.

It’s ironic that, on one hand, US policymakers are pushing for more transparency on the beneficial owners of legal entities, as proposed by the TITLE Act and the Corporate Transparency Act, but on the other hand pass a law that will likely increase the number of legal entities designed to play tax games.

This actually creates more work for financial institutions because they will have to conduct more due diligence on opaque legal entities. Financial institutions should plan on using automated solutions and robust reference data to deal with the increasingly complex and burdensome problem of beneficial ownership.

Lastly, do you have any tips to help banks maximize resources and better keep their teams strong in a time of tight budgets?  

A colleague of mine once told me that some banks don’t have time to look at new technology because they are too busy managing their current program. Well, this is exactly the reason why innovation needs to be a top priority for compliance teams in 2018. 

The regulatory requirements and the nature of the problem continue to increase in complexity, so doing things the same way is not sustainable.  

While some regulatory regimes have embraced the notion of a regulatory sandbox, this should not prevent institutions operating within other jurisdictions from experimenting. This doesn’t mean that anything needs to get deployed into production, but what it does mean is there should be activity and proof of concepts (POCs) happening in the background. 

Read More
Keith Furst Keith Furst

TREND ROUNDUP: CRIMINALS TURNING TO ONLINE RENTALS TO LAUNDER MONEY, SOCIAL MEDIA TO ENLIST YOUTH

Editor Note: This article originally appeared on The Association of Certified Financial Crimes Specialists (ACFCS) website on December 1, 2017.

Written by: Brian Monroe

In the last month, organized criminal groups, fraudsters and identity thieves have shown their creativity to launder money and monetize stolen credit card data, in some cases using online rental services to cleanse funds, while in others duping millennials into becoming “money mules” through sham social media job posts.

Early last month, in the aftermath of the Paul Manafort indictment, more than two dozen New York city and state lawmakers sent a letter to online home rental site, Airbnb, pressuring the company to identify and remove illegal listings on its site that could be used by thieves and criminals to launder money, according to the New York Daily News.

This week, The Daily Beast reported that illicit groups are using Russian crime forums to look for colluding hosts on Airbnb to launder cash from stolen credit cards, according to posts on underground forums and cybersecurity researchers. This adds a new wrinkle in the broader trend of using high end real estate in hot markets to legitimize sullied funds.

At the same time, United Kingdom fraud prevention group, Cifas, reported this week it is seeing a massive increase in criminal organizations attempting to use younger people, often referred to as millennials, to unwittingly move money on their behalf.

The analysis revealed a 75 percent increase in the “misuse of bank accounts involving 18 to 24-year-olds during the first nine months of 2017, compared to the same period last year,” according to the group. To read the full report, please click here.

The most common example of the increasing trend is when a person acts as a “money mule,” meaning they “allow their bank account to be used to facilitate the movement of criminal funds. Young people and students are particularly vulnerable as fraudsters know they are often short of cash.”

Criminals may approach them with what looks like a genuine job offer, asking them to receive money into their bank account and transfer it onto someone else, keeping some of the cash for themselves.

They can approach them through social media and online job postings.

Overall, there were 8,652 cases of “misuse of facility” amongst 18 to 24-year-olds between January and the end of September this year, according to the report. The 2017 figures also demonstrate a dramatic rise in money mule fraud over the last five years, with cases involving 18-24-year-olds more than doubling since 2013.
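
A monitoring rule for this typology might look something like the following sketch, where the 48-hour window and 80 percent pass-through threshold are assumed purely for illustration.

```python
from datetime import datetime, timedelta

def looks_like_mule_activity(credits, debits,
                             max_gap_hours: float = 48.0,
                             min_passthrough: float = 0.8):
    """Flag cases where an inbound credit is followed quickly by an
    outbound debit of most of the amount. `credits` and `debits` are
    lists of (timestamp, amount); both thresholds are illustrative."""
    flags = []
    for c_ts, c_amt in credits:
        for d_ts, d_amt in debits:
            gap_hours = (d_ts - c_ts).total_seconds() / 3600
            if 0 <= gap_hours <= max_gap_hours and d_amt >= min_passthrough * c_amt:
                flags.append((c_ts, c_amt, d_ts, d_amt))
    return flags

received = datetime(2017, 11, 1, 9, 0)
credits = [(received, 2000.0)]
debits = [(received + timedelta(hours=5), 1800.0)]  # 90% passed on, cut kept
print(looks_like_mule_activity(credits, debits))
```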

“Our new figures show that money muling amongst young people is on the rise,” said Simon Dukes, Chief Executive of Cifas, in a statement. “This is a serious issue that not only has consequences for the money mule, but for society as a whole. The criminals behind money mules often use the cash to fund major crime, like terrorism and people trafficking.” 

Airbnb’s Russian money laundering problem

The Daily Beast stated in its story it “found a number of recent posts on several Russian-language crime forums, in which users were looking for people to collaborate with to abuse Airbnb’s service.”

These operations typically work by relying on an individual or group using “legitimate or stolen Airbnb accounts to request bookings and make payments to their collaborating Airbnb host. The host then sends back a percentage of the profits, despite no one staying in the property,” or in some cases double books the property to make even more money.  

“The money is 50/50,” one apparent scammer wrote on a Russian crime forum in August, according to the Daily Beast. “You receive the money within two days after the booking date,” it continues, and adds that there are “story-telling hosts” ready, likely referring to hosts who are proactively participating in the money laundering scheme.

The issue of Airbnb money laundering came up in the Manafort indictment as part of his alleged strategies to launder tens of millions of dollars he made from overseas activities. Prosecutors say he funneled nearly $3 million to buy a Manhattan apartment he said would be for personal use, but later rented out on Airbnb.

For the many entities involved in a transaction for Airbnb – the company itself, a credit card company, merchant acquirer, payment processor and bank – it’s likely that the rental company is in the best position to see potentially fraudulent transactions that could be tied to money laundering, said Keith Furst, Founder of Data Derivatives, a boutique consulting firm focused on financial crimes technology.

The company “is the one accepting the credit card payments, so they should be the ones doing the screening,” he said, adding that they would be able to see what is happening with the host, such as if rental revenues are rising in a seasonally slow time, if an address is double booked, or if there are cards tied to the same IP address or even an IP address in Russia.

On the bank, credit card or merchant acquirer side, those entities would likely be the ones getting calls from irate customers saying that Airbnb has charged their credit cards but they had never used the service or approved the charge, Furst said, adding that these groups should be looking for a higher rate of “chargebacks” involving Airbnb.
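
As a minimal sketch of that kind of chargeback monitoring, assuming a 1 percent baseline rate and a 3x spike trigger, both of which are illustrative policy choices rather than industry figures:

```python
def chargeback_rate(transactions: int, chargebacks: int) -> float:
    return chargebacks / transactions if transactions else 0.0

def flag_merchants(stats: dict, baseline: float = 0.01, multiple: float = 3.0):
    """Flag merchants whose chargeback rate exceeds `multiple` times the
    assumed baseline; both parameters are illustrative policy choices."""
    return [
        merchant for merchant, (txns, cbs) in stats.items()
        if chargeback_rate(txns, cbs) > baseline * multiple
    ]

stats = {
    "host_a": (400, 2),   # 0.5% chargeback rate - normal
    "host_b": (350, 21),  # 6% chargeback rate - worth a review
}
print(flag_merchants(stats))  # ['host_b']
```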

“With a stolen credit card, there is a small window, a certain timeline the criminals have to act before the customer or bank shuts it down,” he said, adding that one of the reasons fraudsters would pick Airbnb is that it’s a large company used a lot by people across the world and would quickly accept and process the stolen credit card details.

That makes it harder for the company, banks or credit card companies to immediately realize something is amiss – putting the onus on the customers to better monitor themselves.

More information sharing needed

The scenario also illustrates gaps in information sharing across different entities, Furst said.

Although banks have Patriot Act Section 314(b) safe harbors to share information on customers suspected of money laundering and terrorism, it’s not as clear if, when and how banks, credit card companies and a private company like Airbnb can share data to put all of the various puzzle pieces of a fraud like this together.

On the banking side, sniffing out that side of the fraud is a challenge, particularly if the institution didn’t know a customer was an Airbnb host.

If they did at least know that, they could get a hint of criminal wrongdoing when, say, a person receives a batch of credit card transactions in larger amounts, and then immediately wires the money back to Russia or to anonymous shell companies in offshore secrecy havens, Furst said.

“The bank would only see the money going into the person’s account, but wouldn’t know it was from fraudulent credit cards” as it would probably just look like funds from Airbnb to a host.

“Online marketplaces such as Airbnb are attractive to money launderers for several reasons,” said Alison Jimenez, President of Tampa, Fl.-based Dynamic Securities Analytics, a consulting firm specializing in AML and financial services litigation issues.

“One factor money launderers look for is the ability to conduct cross-border transactions,” she said. “Many online marketplaces facilitate legitimate cross-border transactions such as vacation home rentals that can be used as a cover for a criminal trying to remit illicit proceeds across borders.”

For instance, a criminal gang could smuggle fentanyl into the USA and then remit the proceeds back home via a fake “IT consulting gig” or via multiple fake home rentals run through a legitimate online marketplace, Jimenez said.

“Money launderers also look for the ability to launder large sums at once,” she said. “Home rentals can cost several thousand dollars and serve as a quick way to cash out a stolen credit card if you have a corrupt or hacked host account on the other end.”

She agrees with Furst that banks could be hard pressed to uncover the schemes, potentially having to be exceedingly creative.

“A financial institution that ultimately holds the account where the illicit proceeds land would need to follow KYC best practices,” Jimenez said. “One red flag could be a host that only receives payments but does not have normally associated costs like paying a cleaning service.”

Read More
Keith Furst

How beneficial ownership can add crucial context to suspicious activity identification

The fight against money laundering and counter-terrorist financing is evolving like never before, and more external data sources are being integrated with compliance systems. Why is this and how can we make better use of beneficial ownership-related information?

As we discussed in the Bureau Van Dijk's "Beneficial ownership – have you got it right?" webinar, on which I was a panellist, one of the key drivers for the evolution of anti-money laundering (AML) and compliance programmes is the rising regulatory burden financial institutions face, such as from the Customer Due Diligence (CDD) Final Rule and the Fourth Anti-Money Laundering Directive (AML4).

Beneficial ownership data empowers financial institutions to identify, validate and monitor companies and their owners throughout the entire relationship life-cycle, covering prospects as well as active, inactive and former customers.

Beneficial ownership data can transform the three fundamental components of an AML programme: know your customer (KYC), sanctions screening and transaction monitoring. While the KYC and sanctions screening use cases are quite compelling, this article will focus on the various ways beneficial ownership data can be used as an external data source to help identify suspicious activity for an institution's transaction monitoring programme.
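
As a simple illustration of how ownership data could add that context, here is a sketch that walks an assumed in-memory registry down to ultimate natural-person owners; a production system would draw on a vendor feed or a central repository, and the entity names are hypothetical.

```python
# Assumed in-memory registry mapping an entity to its direct owners;
# a production system would use a vendor feed or central repository.
OWNERSHIP = {
    "ACME TRADING LLC": ["J. Doe", "Oceanic Holdings Ltd"],
    "OCEANIC HOLDINGS LTD": ["J. Doe"],
}

def ultimate_owners(entity: str, registry: dict, seen=None) -> set:
    """Walk the ownership chain down to natural persons, guarding
    against circular corporate structures."""
    seen = seen if seen is not None else set()
    owners = set()
    for owner in registry.get(entity.upper(), []):
        if owner in seen:
            continue
        seen.add(owner)
        if owner.upper() in registry:   # intermediate legal entity
            owners |= ultimate_owners(owner, registry, seen)
        else:                           # natural person
            owners.add(owner)
    return owners

# Two counterparties that look unrelated at the account level may share
# an ultimate owner - useful context when scoring an alert between them.
print(ultimate_owners("Acme Trading LLC", OWNERSHIP))  # {'J. Doe'}
```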

To read this extended blog post in full, please visit this landing page to download a PDF, which you can also print out.

The sections it covers are:

  • The risk-based approach and external data
  • Market abuse and external data
  • The spectrum of suspicion
  • The small business owner
  • How banks detect businesses commingling illicit cash
  • Adding context one beneficial owner at a time
  • How beneficial owner data can add context to suspicious activity
  • The burden of context
  • Comprehensive coverage and certainty

Download the PDF

 

Read More
Keith Furst

MERCHANT-BASED MONEY LAUNDERING PART 3: THE MEDIUM IS THE METHOD

Editor's Note: This article originally appeared on the Association of Certified Financial Crime Specialists website on September 21, 2017.

The previous editions of this series on merchant-based money laundering explored the many manifestations of the dark side of the terminal, including suspicious transactions merchants may see that could be tied to fraud groups and the risks tied to both closed loop and open loop prepaid cards.

To read the first story, covering “phantom shipments,” please click here. To read the second story on “prepaid gift card smurfing,” please click here

Merchants can be involved with phantom shipments to move value across borders and cash can be anonymously loaded on prepaid gift cards through smurfing operations and used at US merchants to make sales revenue appear legitimate. 

The rules and actions of the payment sector have direct implications on bank anti-money laundering programs.

How? Because while banks are technically not liable for the illicit actions of their customers’ customers – the customers of a merchant or payment processor – the bank is on the hook for properly inquiring about the risk of that customer base and compliance procedures, if any, of the merchants.

At issue is that if a merchant or fraudulent site is later tied to a particular financial institution, and that bank never took the time to engage in the proper level of due diligence, creating a defend-able risk score and adequately tuning the transaction monitoring system, in the eyes of regulators, the bank could have a weak financial crime compliance program.

This article will focus on transaction laundering (TL) in its various forms, which I would argue is a subset of the broader problem of merchant-based money laundering (MBML).  While it may appear that MBML is just another form of trade-based money laundering (TBML), they are actually quite different for one reason.

To sum up a key mantra we will explain more on later, keep this in mind: the medium is the method.

In the 1960s, Marshall McLuhan coined the iconic phrase, “The medium is the message,” as he became the oracle of the electric age. But what did he really mean, when he said the medium is the message?

Fundamentally, McLuhan was pointing to the fact that how information is delivered to us through different mediums influences how we interpret the message itself, how it portrays social structures, and our understanding of the world.

For instance, let’s take a look at the tectonic shift in the human experience of conveying information when the world went from hand-writing and copying information to the printing press – which allowed for more wide-scale distribution of knowledge and ideas.

To be sure, the invention of the printing press was arguably one of the most important moments in human history and drastically influenced the development of the modern world. 

Before the printing press, text would have to be copied manually by hand, which was inefficient, costly, and led to low rates of literacy. 

Once printing was mechanized, it allowed for high rates of literacy and the rapid exchange of ideas. In that same vein, we can think of money or value transfer as a medium which followed a similar evolution through the acoustic, written, mass production, and electric ages – going from a physical, spatially-limited form of value to a digital, internationally-fluid funding mechanism.

That idea is important to remember because one of the earliest adopters of new monetary technologies is the criminal element.

But let’s look for a moment at how different mediums affect our sensibilities to better understand the challenges to crafting criminal defenses against all the many ways money can move.

Just as television and radio have completely different effects on our senses, laundering value through cash and through merchant terminals leaves completely different signatures, something banks, regulators and investigators have to realize to balance the challenge of stopping criminal groups without creating customer friction and delays.

This is one of many fundamental struggles in the fight against money laundering, because many of the models we use today treat all forms of value transfer the same when fighting financial crime and building compliance programs, looking at only a few basic data points.

Additionally, regulators don’t want to stifle innovation, but they need to find ways to impose sensible regulations to keep pace with new mediums of money or value transfer.

The payments ecosystem as a new medium

As we said, however, in order to create current, relevant and agile ways to counter increasingly aggressive and creative organized criminal and terror groups, you need to understand how the United States structures its payments and settlements systems, and the panoply of players in the game, including banks, retailers, merchants, money services businesses, prepaid card providers, third-party payment processors and others.  

The payment ecosystem in the US is complex and has a whole host of entities involved.

When a consumer makes a card purchase at a store or online, the payment flows through the payments ecosystem with the end goal of funding the merchant’s account, assuming the transaction is approved. 

A consumer-initiated card purchase is commonly referred to as a “pull-payment” because the funds are pulled from the consumer’s account and deposited into the merchant’s account[1]. The three main steps of the payments process initiated by a consumer card purchase are:

  • Authorization
  • Funding
  • Settlement

All of the above steps in the payments ecosystem involve various entities including, but not limited to, the customer, merchant, gateway, processor, association and issuer.
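
A stripped-down sketch of the happy path through those three steps follows; the entities and sequencing are heavily simplified assumptions for illustration only.

```python
from enum import Enum, auto

class Step(Enum):
    AUTHORIZATION = auto()  # the issuer approves or declines the card
    FUNDING = auto()        # the merchant's account is funded
    SETTLEMENT = auto()     # money moves between the banks involved

def pull_payment(amount: float, available_credit: float):
    """Happy-path walk through a consumer-initiated 'pull' payment;
    real flows add gateways, processors and associations at each step."""
    if amount > available_credit:
        return Step.AUTHORIZATION, "declined"
    for step in (Step.AUTHORIZATION, Step.FUNDING, Step.SETTLEMENT):
        print(f"completed: {step.name}")
    return Step.SETTLEMENT, "approved"

print(pull_payment(120.00, 500.00))
```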

[Diagram: the payments ecosystem]

[1] http://www.knowyourpayments.com/transaction-basics/

Source: Know Your Payments

What is transaction laundering?

Now that you have a better sense of the players in the payment chain and who does what, we need to look at how criminals and fraudsters are trying to game the system.

Transaction laundering happens when a known merchant processes transactions for an undisclosed business.

This clandestine business is usually selling illegal products or services, and leverages the known merchant’s card processing accounts either through collusion or coercion – or simply because the merchant’s card processing systems are not tuned to be sensitive to financial crime and fraud red flags.

As a point of context, while banks, money services businesses and other entities considered a “financial institution” are subject to anti-fraud and anti-money laundering (AML) requirements, merchants typically are not, along with most third-party payment processors.

However, under current AML structures, some banks have foisted AML duties onto payment processors as a condition of continuing to hold the account in the face of rampant de-risking in the financial sector, while third-party processors themselves may have to shoulder some counter-financial crime duties depending on how a prepaid payment chain is structured under recently-enacted rules.

Now, back to some of the red flags that can be employed by miscreant merchants.  

The unknown businesses selling illegal products and services can disguise themselves in a number of ways, but here are some common examples described in a video by Dan Frechtling from G2 Web Services[1]:

  • Cannabis sales intermingled with toy transactions
  • Pirated movies appearing as software
  • Prohibited injections posing as vitamin sales

Similar to red flags in the AML context, a guiding principle for determining whether something is suspicious is whether the transaction details make sense for what the merchant should be doing or where they should be doing it.
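
That guiding principle can be sketched as a simple profile comparison; the declared fields and the 3x average-ticket multiple below are illustrative assumptions, not a standard.

```python
from statistics import mean

def profile_mismatch(declared: dict, observed: list) -> list:
    """Compare observed card activity against the merchant's declared
    profile; the fields and the 3x multiple are illustrative."""
    reasons = []
    tickets = [t["amount"] for t in observed]
    if mean(tickets) > 3 * declared["avg_ticket"]:
        reasons.append("average ticket far above declared profile")
    unexpected = {t["country"] for t in observed} - set(declared["countries"])
    if unexpected:
        reasons.append(f"activity from undeclared countries: {sorted(unexpected)}")
    return reasons

toy_store = {"avg_ticket": 25.0, "countries": ["US"]}  # declared at onboarding
observed = [
    {"amount": 240.0, "country": "US"},
    {"amount": 310.0, "country": "RU"},
]
print(profile_mismatch(toy_store, observed))
```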

There are a number of challenges in identifying transaction laundering, but one fundamental difficulty is that the complex payments ecosystem allows illicit transactions to enter through a variety of channels including, but not limited to: carts, gateways and virtual terminals.  The below diagram illustrates a common example of transaction laundering:

[Diagram: transaction laundering in four steps]

[1] https://www.g2webservices.com/acquiring/g2-portfolio-protection/transaction-laundering/

Source: Transaction laundering in four steps https://www.g2webservices.com

But transaction laundering doesn’t end there.

The payments made for illicit products or services through the known merchant on behalf of the unknown business will be withdrawn from the known merchant’s bank account at some point in the future.

This is the touchpoint with the traditional banking world because the known merchant must have an account with a bank which receives the settled payments. 

The ill-gotten gains can exit the merchant’s bank account through a number of methods, but the bank wouldn’t know of any suspicious activity and potential transaction laundering scenarios, unless the merchant acquirer informed the bank of the situation. 

This obviously creates the need for a great deal of collaboration and information sharing between the merchant acquirer and the banks which hold the merchant accounts.

Who's responsible for transaction laundering?

Transaction laundering or credit card laundering is seen as a variation of money laundering by the U.S. Treasury’s Financial Crimes Enforcement Network (FinCEN) subject to suspicious activity reporting (SAR) requirements. 

Transaction laundering violates the Federal Trade Commission Act and the Telemarketing Sales Rule, among other federal laws, and some states have their own laws to address this problem. But that raises the question: which institution is supposed to file SARs?

Here is the answer, according to payment industry experts:

“Yet except for certain Money Services Businesses (“MSBs”), non-bank Third-Party Organizations such as ISOs/MSPs, Payment Facilitators/Payment Service Providers, data processors and network providers (collectively “TPOs”) generally are not subject to BSA requirements [highlighting mine]. Thus, it is the acquiring bank’s responsibility to (1) ensure that a TPO’s incident reporting and management program contains clearly documented processes and accountability for identifying, reporting, investigating, and escalating incidents of credit card laundering and other suspicious activity; and (2) monitor TPO compliance and processing information on an ongoing basis to ensure compliance with the acquirer’s SAR obligations.”[1]

As stated above, the answer is the acquiring bank. 

This seems oddly familiar because this situation sounds a lot like correspondent banking. In correspondent banking, the correspondent bank provides services to the respondent bank’s customers or the “customer’s customers.” 

Essentially, the correspondent bank is relying on the strength of the respondent bank’s AML program, but ultimately the correspondent bank is held accountable for the payments processed by regulators in their local jurisdiction.

But the U.S. correspondent bank – the operation could also be, say, a New York branch of a foreign bank – processing the overarching transactions will be held accountable for properly risk-ranking the respondent’s AML program and divining the overall risk score, something several foreign banks have been penalized for recently.

Similarly, the acquiring bank is relying on the payment processors to have adequate controls in place to detect transactions derived from illegal activity. 

As regulatory actions focus more on payment processors, they could also face a round of de-risking by banks in the payments ecosystem, similar to what is occurring now in the correspondent banking space.

Hence, it’s clearly in the best interest for payment processors to vigorously monitor their merchants’ activity and inform the acquiring bank of any instances of suspicious activity – lest they find they are tied to an illicit organized criminal group and become radioactive to global banks.

Beware cloaked illicit online gambling portals

Beyond shady and unscrupulous online businesses looking to dupe consumers and merchants, actors in the payment supply chain must also worry about illicit online gambling sites hiding their activities behind front-company sites that seem to sell an array of innocuous items so as not to bring attention to themselves – in one recent case hiding behind a site selling household items.

On June 22, 2017, Reuters published an exclusive story which described an elaborate transaction laundering scheme used to circumvent local online gambling laws. Here is a short excerpt from the article below:

“The scheme found by Reuters involved websites which accepted payments for household items from a reporter but did not deliver any products. Instead, staff who answered helpdesk numbers on the sites said the outlets did not sell the product advertised, but that they were used to help process gambling payments, mostly for Americans.”[2]

This story was important because it was one of the first times a major publication detailed a transaction laundering scheme with real investigative reporting. 

As these stories keep coming out from major publications and are linked to more heinous crimes, they could help shine a spotlight on the risks of e-commerce and its connections to the criminal underworld.

Another challenge the story indirectly highlighted is that even if merchant acquirers and payment processors could identify transaction laundering, they may not be able to identify the actual people behind the scheme due to the minimal customer due diligence being done in the industry.

Rather than the race to the top for compliance best practices seen in the traditional banking space, the payments industry has almost become a race to the bottom to offer no hassles and low fee structures in a highly competitive marketplace.

This can be illustrated by some entities in the payments ecosystem, as Dan Frechtling put it, offering “frictionless onboarding.” 

Frictionless onboarding is not necessarily a bad thing in itself, but if a minuscule amount of customer information is required to open a merchant account and the information is not verified, then it becomes a problem. 

Acquirers and payment service providers that wish to implement frictionless boarding without compromising their review policies may offer conditional approval followed by more stringent scrutiny in a post-boarding “containment” area.
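
One way to picture such a containment area is as a small state machine; the volume cap and review triggers in this sketch are assumed policy parameters, not an industry standard.

```python
from enum import Enum, auto

class BoardingState(Enum):
    CONDITIONAL = auto()  # frictionless approval, capped volume
    APPROVED = auto()     # enhanced review passed
    TERMINATED = auto()   # enhanced review failed or cap breached

def review_after_boarding(state: BoardingState, volume: float,
                          verification_passed: bool,
                          cap: float = 10_000.0) -> BoardingState:
    """Keep a conditionally approved merchant inside a capped
    'containment' area until enhanced review completes."""
    if state is not BoardingState.CONDITIONAL:
        return state
    if volume > cap and not verification_passed:
        return BoardingState.TERMINATED
    if verification_passed:
        return BoardingState.APPROVED
    return BoardingState.CONDITIONAL

print(review_after_boarding(BoardingState.CONDITIONAL, 12_000.0, False))
# BoardingState.TERMINATED - crossed the cap before verification completed
```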

This issue hits on a perennial debate in the compliance community: the potentially negligible value of an extensive customer review and risk assessment process versus defining risk by the transactions the customer actually engages in, including going outside expected boundaries or dealing with countries and entities historically considered high risk.

At the same time, the current momentum to more quickly create relationships that lead to new business creates a new quandary: How can you prevent the same bad actors from opening new fictitious websites and merchant accounts, if you don’t always know who’s behind the scheme?

Don't die from a tie dye high

The risks of illicit groups working behind seemingly legitimate sites were brought into stark relief when investigators uncovered that a psychedelic t-shirt site, appropriately enough, was in actuality selling a tightly-controlled mind-altering drug.

Lysergic acid diethylamide (LSD) was created by Albert Hofmann in Switzerland in 1938 from ergotamine, a chemical found in the fungus ergot. Dr. Hofmann accidentally discovered the psychedelic effects of LSD in 1943.

The drug was experimented with for psychiatric reasons and the Central Intelligence Agency (CIA) even tested subjects to determine what type of mind control and wartime applications it may have. 

In the 1960s, the counterculture movement popularized its mind-altering power and it was subsequently prohibited in both its use and distribution. Currently, LSD is listed as a Schedule I drug by the United States Controlled Substances Act, sitting alongside heroin, cocaine and, more controversially, marijuana.

LSD has been steeped in controversy, with some leaders of the counterculture, such as Timothy Leary, touting its life-changing power and skeptics highlighting its dangers and links to accidental deaths caused by a profound state of altered consciousness.

For example, a student from Northern Illinois University was reported to have died as a result of LSD use when he fell out of a window[3].

Clearly, LSD is a very powerful drug, but unscrupulous merchants are still willing to sell it over the internet by disguising the real purpose of their websites. 

Just imagine: if one were so inclined, one could find a website selling LSD, a Schedule I drug, order it with the click of a button and a credit card, and have it delivered right to the door.  Makes you wonder what else goes through the mail.

The below screenshot shows a real website appearing to sell tie dye t-shirts, but it was actually a front for a business selling LSD. 

[Screenshot: a website appearing to sell tie dye t-shirts that was actually a front for LSD sales]

[1] https://www.g2webservices.com/blog/11865/acquirer-third-party-sar-obligations-transaction-laundering/

[2] https://www.reuters.com/article/us-gambling-usa-dummies-exclusive/exclusive-fake-online-stores-reveal-gamblers-shadow-banking-system-idUSKBN19D137

[3] https://www.inquisitr.com/2559752/lsd-becoming-popular-again-while-still-dangerous/

Source: G2 Web Services

The website itself had a number of red flags: the t-shirts were only offered in bulk, and sizes were described in odd ways.

As well, the website had a checkout cart where, if the credit card option was selected, it would send the visitor an email and redirect them to a separate website with a specific URL link. This separate website was specifically designed to take credit card payments. One of the most interesting parts of this scheme was revealed in a statement by the website operator:

“This is for the avid researcher who doesn’t like dealing with Bitcoin[1].”

[1] Source: https://www.g2webservices.com/blog/14723/real-life-launderers-tripping-transaction-laundering/

Source: G2 Web Services

This is actually quite a profound statement because it reveals the experience of the website operator conducting online drug deals was primarily with Bitcoin. 

In other words, if a purchaser was so inclined to buy drugs online they could access the darknet via Tor and use Bitcoin to conduct their transactions almost completely anonymously as the only link to the illicit purchase would be the shipping address.

The fact that online drug dealers are accepting credit card payments shows they are serving a larger and less technically savvy segment of the drug market.

Anyone can make online purchases, and the problem will only grow as people tell their friends about reliable drug-dealing websites. Buyers don’t get the anonymity that the darknet and Bitcoin offer, but that doesn’t seem to be slowing down the market.

Also, the cautious, low-value purchaser could load cash onto a prepaid card almost completely anonymously and would only be potentially linked to the illicit purchase through the address provided.

But the United States has in recent months been targeting darknet drug bazaars and the virtual currency exchanges they use – the key link to the real world and the formal international financial system – in one case taking over a site undercover, watching and detailing the users and their online and fiscal exploits.

Drug dealer accepts credit card payments

One of the most brazen abuses of a merchant processing terminal was perpetrated by a local drug dealer in the United Kingdom. 

Gloucestershire Police raided the home of Mark Slender on August 19, 2016 and seized cash, cocaine, cannabis, digital scales, and a chip’n’pin reader used to take credit card payments[1].  The police were shocked because they had never seen a drug dealer take credit cards as payment for drugs.

Slender even issued his customers receipts with the message, “Cheers, Gup.”

The Express article didn’t explain how Slender obtained access to a mobile chip’n’pin reader, but he could have opened a merchant processing account on his own. This highlights one important point about the payment processing industry: there is simply no easy way to know if merchants are selling illegal products or services through merchant processing terminals.

While most people buying illegal products would probably prefer some level of anonymity such as using cash in person or bitcoin on the darknet, some people may not even care or are so desperate to buy drugs that they use a credit card in the absence of cash.

Keep in mind that all many individuals need to process credit card transactions is an attachment for their smart phones and a bank account.

The publication reported that Slender was subject to a longer prison sentence due to previous drug dealing convictions. This raises an interesting point about the due diligence process for opening a merchant processing account and whether a criminal background check should factor into the calculation of the fraud and money laundering risk profiles.

This is not to say that anyone with a criminal background should be prevented from opening a merchant account and processing credit cards, but they could pose additional risks to the institution. 

Prohibiting new customers with criminal backgrounds may not be the answer at all, and could encourage more criminality, as such a practice, endorsed broadly, would push many suspicious actions underground, losing key intelligence federal investigators can use to take down larger criminal groups.   

Ultimately, the customer with a criminal background poses additional fraud and money laundering risks, but they could be trying to rebuild their life, and prohibiting them from opening an account could prevent their reintegration into society and thus lead them back to the criminal life they may have been trying to escape.

Corporations are becoming more socially aware and active, so this could be a situation where the institution absorbs the additional compliance costs of serving higher-risk customers for the greater good, as opposed to simply de-risking whole categories of customers.

Don't wait for central registries and information sharing

While companies, retailers, processors, merchants and others try to juggle risk and find bad actors on an individual basis, countries as a whole must realize that larger organized crime groups and savvy fraudsters work internationally.

So the only way to stop them is to forge stronger cross-border relationships with other firms and law enforcement, because most countries don’t have central registries detailing high-risk or potentially criminal entities, which remain the purview of third-party AML risk and list providers.

As well, while many large jurisdictions like the United States, the United Kingdom and the European Union have created country-wide financial intelligence units to store bank reports of potentially suspicious activity – and have attempted to better link these FIUs together – formatting, data privacy and resource constraints can conspire to limit their overall effectiveness.

Is the lack of central registries and information sharing between countries a serious problem in the fight against money laundering and terrorist financing?  Of course.

However, the problem with this argument is that it lessens the responsibility for each country, the country’s regulators and organizations operating within its jurisdiction to push the boundaries of what’s possible in the fight against financial crime.

There is a wealth of external data sources that can be incorporated into AML programs to enhance detection capabilities, including negative news, beneficial ownership and other open source data.

The advent of artificial intelligence, machine learning, and big data also opens up a whole host of new surveillance and analytic capabilities.

As with other forms of fraud, transaction laundering is more quickly exposed when firms use all their organizational eyes and ears. This includes sales representatives, underwriters, customer support staff and account monitors. 

For example, G2 Web Services has observed adept organizations bringing these professionals together to compare notes weekly or monthly, similar to the growing trend of convergence in the financial institution context, where AML, fraud and cyber teams connect, cooperate and collaborate to better uncover illicit funds flowing through the bank and risks against the institution itself.

In the merchant-laundering arena, these notes may reveal conclusions about the same suspect business that were insignificant when singular but convincing when combined. In the event transaction laundering has occurred, cross-functional post mortems to look back for clues help banks avoid repeating mistakes.

[1] http://www.express.co.uk/news/uk/712043/Drug-dealer-uses-chip-pin-machine-take-twelve-thousand-pounds-customer-payments

Source: Collaboration across functions to spot transaction laundering, via G2.

Clearly, the payments industry doesn’t face the same type of AML and terrorist financing challenges as traditional banks. 

However, this should not exempt the entities in the payments ecosystem from taking more proactive steps to identify and report suspicious activity. One of the challenges for organizations that have AML risk, but not to the extent of banks, is that it’s a slippery slope, and the cost of maintaining a comprehensive AML program can outweigh the perceived risks.

AML lite: On the periphery

What’s really needed for entities on the periphery of financial services such as attorneys, accountants, real estate brokers, merchant acquirers, payment processors, and FinTech firms is the idea of an AML lite program. 

The traditional AML programs that have evolved in banks over the years tend to be top heavy, hierarchical, and slow to adapt to new trends.

While there will be significant challenges in coming up with standards and solutions that smaller entities can adopt, additional AML coverage is needed across more industries to increase the identification of suspicious activity and help law enforcement better put the pieces of the puzzle together.

Regulators also play a key role here.

These influential bodies sit in a tough spot because if they impose stricter AML regulations on entities that can’t adapt fast enough, then they could cause serious economic harm and put companies out of business. 

On the other hand, if these entities, which sit on the periphery of financial services, are not required to comply with any rules, then it's likely they won’t do anything.

One strategy for regulators to continue to take is to impose small, incremental regulations on targeted industries, let the regulated institutions react, and allow businesses to innovate and create services and solutions to meet those new requirements.

A recent example of this strategy was the action taken by FinCEN which renewed the “existing Geographic Targeting Orders (GTO) that temporarily require U.S. title insurance companies to identify the natural persons behind shell companies used to pay “all cash” for high-end residential real estate in six major metropolitan areas.”[1]

British Columbia implemented its own form of geographic targeting order for foreigners buying real estate in the Greater Vancouver Regional District (GVRD)[2].

Foreign purchasers are required to pay an additional property transfer tax of 15%, which was implemented as an effort to cool real estate market prices and to keep housing more affordable for the regular people of British Columbia.[3]

While it appears that British Columbia’s primary objective of the additional property transfer tax was to cool real estate market prices, it also likely reduced, to a certain degree, the amount of illicit funds flowing into the Vancouver real estate market.

Conclusion: A delicate balancing act for all involved

The fight against money laundering and terrorist financing is a delicate policy balancing act for regulators. 

The AML industry is still in its infancy to a certain degree, because the U.S. Patriot Act was only signed into law on October 26, 2001 by President George W. Bush, in response to the horrific 9/11 terror attacks.

That attack on the U.S. also pushed stronger global AML and counter-financing of terror standards, emboldening bodies like the Paris-based Financial Action Task Force (FATF), which is now the international standard-bearer of country-wide compliance structures.

So it seems Marshall McLuhan was right when he talked about the “Global Village,” because the world is smaller today and we are all more interconnected. 

The technological advances of cars, trains, and buses allowed people to move farther away from the city into the suburbs. The internet and e-commerce allow us to buy almost anything, even LSD, with the click of a button.

The global village or the shrinking of the world has contributed to the difficulty in thwarting terror attacks because of the speed and variety of travel options available today. The evolution of how “value” is transferred is similar to transportation in the sense that value can move faster and in a wide variety of mediums today. 

The car and airplane fundamentally restructured economies, cultures, and our perceptions of reality. Have we and society in general undergone a similar and perhaps more subtle transformation from the mushrooming mediums of value transfer?

No doubt, the human race has been shaped by these new ways to move money, just as we currently look for even newer, faster and cheaper ways to transact regionally and internationally. Just look at the advances of Bitcoin and its underlying technology, the blockchain.

But in step with the greater ability to move money quickly, easily and even, in some cases, nigh anonymously, detecting real instances of money laundering and terrorist financing in a reliable and automated fashion has grown incredibly complex.

Our understanding of how value is transferred, and of the potential exploits and weaknesses of each medium, also must evolve to arrive at a more sophisticated approach to combating financial crime.

In other words, the medium is the method, requiring regulators, the private sector and watchdog bodies to craft new methods to better foster compliance, investigative and cooperative standards, best practices and methodologies to counter the entire spectrum of financial crime.

Such moves could formally or voluntarily nudge the payments sector to follow suit, making it harder for sham sites, fraudulent operators and illicit online casinos to engage in transaction laundering by arming merchants, processors, acquirers and others in the payments supply chain with the tools and resources to counter an array of criminal groups while supporting global commerce.     

[1] https://www.fincen.gov/news/news-releases/fincen-renews-real-estate-geographic-targeting-orders-identify-high-end-cash

[2] http://www2.gov.bc.ca/gov/content/taxes/property-taxes/property-transfer-tax/understand/additional-property-transfer-tax#gvrd

[3] https://www.theguardian.com/world/2016/aug/02/vancouver-real-estate-foreign-house-buyers-tax

Keith Furst

Data reusability: The next step in the evolution of analytics

Editor's Note: This article originally appeared on The Asian Banker on July 20, 2017.

By Keith Furst and Daniel Wagner

Data reusability will lessen the response time to emerging opportunities and risks, allowing organisations to remain competitive in the digital economies of the future.

  • If data's meaning can be defined across an enterprise, the insights that can be derived from it expand exponentially
  • When financial institutions work together to identify useful data analytics solutions they can produce great results and add a lot of value to their customers
  • The analytic systems of tomorrow should be able to take the same data set and process it without modifying it

If data is the new oil, then many of the analytical tools used to extract value from data each require their own specific grade of gasoline, akin to having to drive to the one gas station within a 500-mile radius that stocks your particular grade. It sounds completely ridiculous and unsustainable, but that is how many analytical tools are set up today.

Many organisations have data sets that can be used with a myriad of analytical tools. Financial institutions, for example, can use their customer data as an input to determine client profitability, credit risk, anti-money laundering compliance, or fraud risk. However, the current paradigm for many analytical tools requires that the data conform to a specific model in order to work. That is often like trying to fit a square peg into a round hole, and there are operational costs associated with maintaining each custom-built tunnel of information.

The advent of big data has opened up a whole host of possibilities in the analytics space. By distributing workloads across a network of computers, complex computations can be performed on enormous volumes of data at a very fast pace. For information-rich and regulatory-burdened organisations such as financial institutions, this has value, but it doesn’t address the wasteful costs associated with inflexible analytic systems.

What are data lakes?

The "data lake" can provide a wide array of benefits for organisations, but the data that flows into the lake should ideally go through a rigorous data integrity process to ensure that the insights produced can be trusted. The data lake is where the conversation about data analytics can shift from what it really ought to be.

Data lakes are supposed to centralise all the core data of the enterprise, but if each data set is replicated and slightly modified for each risk system that consumes it, then the data lake’s overall value to the organisation is diminished. The analytic systems of tomorrow should be able to take the same data set and process it without modifying it. If any data modifications are required, they can be handled within the risk system itself.

That would require robust new computing standards, but at the end of the day, a date is still a date. Ultimately, it doesn’t matter what date format convention is being followed, because it represents the same concept. While there may be a need to convert data to a local date-time in a global organisation, some risk systems enforce date format standards which may not align with the original data set. This is just one example of pushing the responsibility of data maintenance onto the client as opposed to handling a robust array of data formats seamlessly in-house.
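As a minimal sketch of the "a date is still a date" point, consider a risk system that absorbs format variation at ingestion instead of pushing it back on the client. The list of conventions here is an illustrative assumption; a production system would cover many more and would need per-source rules for time zones and ambiguous dates.

```python
from datetime import datetime

# Illustrative conventions only; a real system would cover many more
# and would need per-source rules for time zones and ambiguous dates
# (e.g. 01/02/2017 resolves to the first matching format below).
KNOWN_DATE_FORMATS = [
    "%Y-%m-%d",   # ISO 8601: 2017-07-25
    "%d/%m/%Y",   # European: 25/07/2017
    "%m/%d/%Y",   # US:       07/25/2017
    "%d-%b-%Y",   # 25-Jul-2017
    "%Y%m%d",     # compact:  20170725
]

def normalize_date(raw: str) -> datetime:
    """Parse a date in any supported convention into one canonical type."""
    for fmt in KNOWN_DATE_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt)
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {raw!r}")

# Source systems keep their native formats; the risk system absorbs
# the variation instead of pushing maintenance back on the client.
for raw in ["2017-07-25", "25/07/2017", "25-Jul-2017", "20170725"]:
    print(normalize_date(raw).date())
```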

The conversation needs to shift to what data actually means, and how it can be valued. If data’s meaning can be defined across the enterprise, the insights that can be derived from it expand exponentially. The current paradigm of the data model in the analytics space pushes the maintenance costs onto the organisations which use the tools, often impeding new product deployment. With each proposed change to an operational system, risk management systems end up adjusting their own data plumbing in order to ensure that they don’t have any gaps in coverage.

Data analytics solutions add value

When financial institutions work together to identify useful data analytics solutions they can produce great results and add a lot of value to their customers. The launch of Zelle is a perfect example, since customers from different banks can now send near real-time payments directly to one another using a mobile app.

A similar strategy should be used to nudge the software analytics industry in the right direction. If major financial institutions banded together with economies of scale to create a big data consortium where one of the key objectives was to make data reusable, then the software industry would undoubtedly create new products to accommodate it, and data maintenance costs would eventually go down. Ongoing maintenance costs would eventually migrate from financial institutions to the software industry, which has the operational and cost advantages.

There are naturally costs associated with managing risk effectively, but wasteful spending on inflexible data models diverts money from other priorities and stymies innovation. US regulators are notoriously aggressive when it comes to non-compliance, so reducing costs in one area could encourage investment in other areas, and ultimately strengthen the risk management ecosphere. Making data reusable and keeping its natural format would also increase data integrity and quality, and improve risk quantification based on a given model’s approximation of reality.

Reusable data will allow institutions to have a "first mover advantage"

Is the concept of reusable data too far ahead of its time? Not for those who use it, need it, and pay for it. Clearly, the institution(s) that embrace the concept will have the first mover advantage, and given the speed with which disruptive innovations are proceeding, it would appear that this is an idea whose time has come. As the world moves more towards automation and digitisation it is becoming increasingly clear that the sheer diversity and sophistication of risks makes streamlining processes and costs a daunting organisational task.

The time organisations have to react to risks in order to remain competitive, cost-efficient and compliant is shrinking, while response times are increasing, right along with a plethora of inefficiencies. Being in a position to recycle data for risk and analytics systems would decrease response times and enhance overall competitiveness. Both will no doubt prove to be essential components of successful organisations in the digital economies of the future.

Keith Furst is founder and financial crimes technology expert of Data Derivatives; and Daniel Wagner is managing director of Risk Cooperative.

Keith Furst

Guest blog: answers to 15 extra questions from our beneficial ownership webinar

Editor's Note: This article originally appeared on the Bureau Van Dijk blog on July 18, 2017.

Last month I was delighted to join Bill Hauserman as a panellist on Bureau van Dijk's webinar, Beneficial ownership – have you got it right?

Bill and I discussed smarter ways to integrate beneficial ownership information into our viewers' compliance processes, so they could start focusing on higher-level decision-making and spend less time on data discovery, and the webinar is now free to watch on-demand.

During the broadcast we received dozens of open-ended questions from our worldwide audience of compliance professionals. We only had a chance to address a few of them on the day. But we couldn't let the rest go to waste, so I offered to answer some in this guest blog. Bill will tackle some of the others in a follow-up blog.

So, in no particular order – and noting that these are my personal views – here they are. You're welcome to contact me for clarification at info@dataderivatives.com.

Your questions answered

1. "You can't know what you don't know, so if you initially find limited information about an entity, what are the best practices for ensuring that continued monitoring efforts aren't missing vital information about that entity?"

Actively monitoring sanctions, politically exposed persons (PEPs) and negative news lists is a good way to provide coverage even with somewhat limited information. However, the more data a company has on an entity, the better the matching algorithms can perform, so limited identifying information could lead to more false positive alerts.

Hence, getting as much accurate data as possible on the entity upfront will reduce the operational costs of screening entities on an ongoing basis.

Beneficial ownership adds another layer of coverage to this framework by revealing more persons or entities to screen against sanctions, PEPs and negative news lists.

Ultimately, it comes down to an institution's regulatory requirements and applying a risk-based approach which could be based on its business model, geographic footprint and risk appetite.
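To make the false-positive mechanics concrete, here is a minimal sketch using Python's standard-library difflib; the names, the 0.7 threshold and the matching approach are illustrative assumptions, not how any particular screening engine works. With only a name to match on, every near-miss alerts; a second attribute such as a date of birth would eliminate most of them.

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Crude string-similarity score; real screening engines are far richer."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

watchlist_entry = "Mohammed Hassan"   # hypothetical list entry
customers = ["Mohamed Hasan", "Mohammad Hassan Ali", "Maria Hansen"]

for customer in customers:
    score = name_similarity(watchlist_entry, customer)
    # With nothing but a name, every score above the threshold becomes
    # an alert that an analyst must disposition.
    if score > 0.7:
        print(f"ALERT: {customer!r} matches with score {score:.2f}")
```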

2. "How should a financial institution determine when and for whom to gather information on ownership exceeding 10%?"

In the United States, the Financial Crimes Enforcement Network (FinCEN) has defined the 25% beneficial ownership threshold as the minimum amount in order to meet basic customer due diligence (CDD) standards, but it is not intended to undermine stricter internal CDD practices. 

Financial institutions can take the initiative to go below the 25% threshold, but doing so is not currently required in the US.

Generally, financial institutions conduct enhanced due diligence (EDD) on customers they perceive to pose greater money laundering and terrorist financing risk, which can include factors such as industries, sectors, products used and associated jurisdictions. EDD procedures are based on the financial institution's policies, but this is one area where the 25% beneficial ownership threshold can potentially be lowered to apply a risk-based approach.

3. "What should a company do if beneficial owners are not disclosable to be screened?"

There are instances where no beneficial owners with 25% or more equity interests will exist. In this case, FinCEN requires financial institutions to collect information on at least one individual with significant responsibility to control, manage, or direct a legal entity customer.

This is another area of risk, because customers have some discretion on how to identify an individual that fits that criteria. At the very least, the individual that the legal entity customer identifies can be screened against sanctions, PEPs and negative news lists.

4. "What about subsidiaries of the large Chinese corporations that are almost all led by CCP appointees, which makes them a PEP? This leads to over-population of high-risk entities."

There are many definitions of a politically exposed person (PEP), but generally a legal entity would not be considered a PEP. However, you may be finding that many Chinese companies have individuals in leadership positions who would meet the definition of a PEP, based on your jurisdiction's requirements, because of their association with the Chinese Communist Party (CCP).

There could be ways to assign risk levels to PEPs based on their influence, status and position held. For example, a PEP who holds a local position may have less influence than one at the national and international level.

Also, a high concentration of PEPs controlling Chinese companies does not automatically imply that all Chinese companies with PEP associations are high-risk; this would be based on the company's policy. Creating a policy that outlines the company's definition of a PEP based on regulatory requirements, and that also defines criteria for distinguishing high- and low-risk PEPs, is one possible way to apply a more risk-based approach.

5. "Since there is now a requirement for the beneficial ownership form, do you know if there is a specific regulation number that is referenced for this and when are all institutions supposed to comply?"

FinCEN's customer due diligence final rule (PDF) requires financial institutions to have a certification form completed by an individual opening the account on behalf of the entity.

The individual will be required to sign the form and certify that the information is complete and correct.

The CDD final rule became effective on 11 July 2016, and financial institutions covered by the rule must comply with it by 11 May 2018.

6. "Regarding the 314(a) list, does the bank also need to include the beneficial owners information in the database list?"

As per FinCEN guidance (page 9 [PDF]), "[t]he regulation implementing section 314(a) does not require the reporting of beneficial ownership information associated with an account or transaction matching a named subject in a 314(a) request."

There has been some debate over the limits of using 314(b) requests between financial institutions. Section 314(b) allows financial institutions to share information with one another under a 'safe harbor' that offers protection from liability, to help better identify and report potential money laundering and terrorist financing. However, some financial institutions appear to apply a broad interpretation of what potential money laundering and terrorist financing can encompass, and use 314(b) requests to try to validate beneficial ownership.

7. "Most of the databases with beneficial ownership information only contain information on listed companies. How do you identify beneficial owners of privately held companies?"

In the United States, some states may or may not require entities to disclose beneficial ownership. Also, even if states do require beneficial ownership today, that might not have been the case historically and companies formed many years ago may still be missing beneficial ownership information.

Ultimately, the responsibility of providing beneficial ownership information in the United States is not on the state registries or financial institutions, but on the clients themselves.

This highlights the need for an automated approach, because leveraging databases such as Orbis, which Bureau Van Dijk has built, can reveal the beneficial owners of complex corporate structures in jurisdictions which require disclosure. In other words, by monitoring all jurisdictions in an automated way you have the best chance of identifying the ultimate beneficial owners without having to rely on a representative of the customer entity or a third party, who may have limited knowledge of the organisation’s actual corporate structure and ownership.

8. "Can enhanced due diligence (EDD) be used as a procedure to conduct beneficial ownership investigation?"

For financial institutions, collecting beneficial ownership information is now required as part of the CDD final rule in the United States. Enhanced due diligence (EDD) is generally conducted on high-risk customers that pose additional money laundering and terrorist financing risks.

For organisations that may collect beneficial ownership, but are not considered regulated institutions under the CDD final rule, the EDD process may be a place where beneficial ownership can be collected. This would depend on the organisation's policy and regulatory requirements.

9. "In terms of identifying the individual in the 'control prong', what specific officers should be identified? One that is a set list for all customers?"

According to FinCEN the control prong is defined as "a single individual with significant responsibility to control, manage, or direct a legal entity customer, including an executive officer or senior manager (e.g., a Chief Executive Officer, Chief Financial Officer, Chief Operating Officer, Managing Member, General Partner, President, Vice President, or Treasurer); or any other individual who regularly performs similar functions (i.e., the control prong)."

FinCEN goes on to state that the list is not all inclusive and there could be significant differences in how legal entities are structured.

10. "Identifying beneficial owners is very important but various agencies have a threshold level of ownership that triggers a rule (e.g. the OFAC '50% Rule'). Is there a 'rule of thumb' for level of control that should be of concern, e.g., a sanctioned individual who owns only 20-30% verses 5-10%?"

The FinCEN CDD final rule requires covered financial institutions to collect beneficial ownership information for individuals, if any, who directly or indirectly own 25% or more of a legal entity customer.

There is no rule of thumb, but financial institutions have been known to lower the beneficial ownership threshold for high-risk customers, which is generally triggered by an EDD process.
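To make the "directly or indirectly own 25% or more" test concrete, here is a minimal sketch that multiplies equity fractions along chains of holding companies. The ownership structure, names and threshold parameter are hypothetical; real corporate trees are far messier.

```python
# Hypothetical ownership graph: entity -> list of (owner, equity fraction).
# Owners that are themselves entities appear as keys, so stakes multiply
# along each chain of intermediate holding companies.
OWNERS = {
    "Customer LLC": [("HoldCo", 0.50), ("Bob", 0.10), ("Public Float", 0.40)],
    "HoldCo":       [("Alice", 0.60), ("Bob", 0.40)],
}

def effective_stakes(entity, fraction=1.0, out=None):
    """Accumulate each ultimate owner's direct plus indirect stake."""
    out = {} if out is None else out
    for owner, pct in OWNERS.get(entity, []):
        stake = fraction * pct
        if owner in OWNERS:          # intermediate entity: keep walking up
            effective_stakes(owner, stake, out)
        else:                        # terminal owner (e.g., a natural person)
            out[owner] = out.get(owner, 0.0) + stake
    return out

THRESHOLD = 0.25   # FinCEN CDD minimum; EDD policies may lower it
stakes = effective_stakes("Customer LLC")
print(stakes)  # roughly {'Alice': 0.30, 'Bob': 0.30, 'Public Float': 0.40}
print({owner: s for owner, s in stakes.items() if s >= THRESHOLD})
```

Note that Bob crosses the threshold only when his 10% direct stake is combined with his indirect stake through HoldCo, which is exactly the case a purely direct-ownership check would miss.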

11. "Why is there no agreed standard on translation of, e.g., Cyrillic?"

Leonard Shaefer, PhD, principal of Onomastic Resources LLC, opined on why there is no agreed-upon standard for the romanisation of Cyrillic, i.e., converting Cyrillic script into the Latin alphabet used in English.

Dr Shaefer stated: "Same reason as the one behind differing standards for TV signals, data encoding, weights and measures, temperature measurement, etc. Standards are not like the speed of sound, but more like peace or trade treaties which are best-guess arrangements that depend on fastidious human commitment and co-operation, both of which tend to erode over time. And then a new standard/treaty is devised, to fix everything that was wrong with the last one. And, as with the old one, some people play by the rules and some don't."

12. "How can we validate what our member is putting on the business account form as far as who the beneficial owner is?"

Third-party data sources can be used to corroborate the beneficial ownership information provided by the account opener. For legal entities with complex global corporate structures this can become a very labour-intensive and time-consuming validation exercise.

Also, the more you start to look at the complexities of identifying beneficial ownership, the more it becomes apparent that a holistic, automated and risk-based approach is needed. Some of you may have read headlines from major publications that proclaim that 'data is the new oil'. There is some validity to this claim, but let's focus on beneficial ownership for a moment.

One way to think about Bureau Van Dijk's Orbis database, which contains beneficial ownership data among other things, is to think about the infrastructure and effort needed to extract oil. Bureau Van Dijk has built up the global infrastructure and network of relationships to collect, mine and refine corporate information which can be used for a number of purposes, but clearly this has tremendous value to the compliance industry.

Verifying beneficial ownership information provided on an account form is one of many use cases for Orbis data, which can be leveraged to help confirm that the information on the form is accurate.

13. "I would like to know whether we must perform a drill-down into the beneficial ownership of a listed company whose CEO is a PEP. Or does it depend only on our RBA?"

Generally, from a financial institution's perspective a publicly traded company's anti-money laundering (AML) risk would be lower than a private company. However, there could be two scenarios where the CEO of a listed company could be identified as a PEP. In the first scenario, the CEO may also be a beneficial owner with 25% or greater equity interests in the legal entity customer or identified as an individual with significant responsibility to control, manage, or direct the legal entity customer. If the institution screens all of the beneficial owners, including ownership and control prongs, against sanctions, PEPs and negative news lists, the CEO's current or former PEP status could be identified.

In the second scenario, the CEO may not meet the criteria of a beneficial owner, but the financial institution may become aware, during its normal CDD process, that the CEO is a PEP. Rita Gemayel, CAMS, a financial crimes specialist, stated that 'if a financial institution becomes aware that beneficial owners below the 25% ownership threshold are PEPs or associated with negative news then it's industry practice to include this in the documentation and act accordingly.'

Analysts can come across this information by following their standard procedures and using various database tools including Google searches or during the EDD process, which may lower the ownership threshold requirement based on the institution's internal policy.

14. "Do you know if the US rule contradicts any European beneficial ownership requirements or regulations?"

At a high level the US and EU beneficial ownership requirements are very similar. There is a push in the EU to move towards central registries that report beneficial owners and share that data, but the US has not made any commitments regarding a central registry. This could be due to the legislative framework and the division of federal and state laws. There are proposals by the EU to lower the threshold to 10% for high-risk entities as well. It would be beyond the scope of this article to give an in-depth comparative analysis of the two regulatory frameworks.

15. "Do the following 'entity' types count as an entity under the new CDD rule: Formal Club accounts (e.g. Girl Scouts), memorial/benefit accounts, UTMA, conservatorship/guardianship accounts, estate accounts and/or informal club accounts (e.g. volley ball league account? What are some triggering events? How do you as a financial institution define what is a triggering event?"

Formal and informal club accounts do not fall under the legal entity definition. As per FinCEN, the CDD final rule (PDF) "defines a legal entity customer as a corporation, limited liability company, other entity created by the filing of a public document with a Secretary of State or similar office, a general partnership, and any similar entity formed under the laws of a foreign jurisdiction that opens an account. The definition also includes limited partnerships, business trusts that are created by a filing with a state office, and any other entity created in this manner. A legal entity customer does not include sole proprietorships, unincorporated associations, or natural persons opening accounts on their own behalf. Similarly, trusts do not fall under the legal entity definition."

FinCEN also explains that "the definition of legal entity customers only includes statutory trusts created by a filing with the Secretary of State or similar office. Otherwise, it does not include trusts. This is because a trust is a contractual arrangement between the person who provides the funds or other assets and specifies the terms (i.e., the grantor/settlor) and the person with control over the assets (i.e., the trustee), for the benefit of those named in the trust deed (i.e., the beneficiaries). Formation of a trust does not generally require any action by the state."

Triggering events may include the filing of a suspicious activity report (SAR) or currency transaction report (CTR), a 314(a) request, unusual account activity, an increase in wire transactions, a change in a customer's account information (such as a change to a foreign address or a change in signers), and changes in beneficial owners.

That's it... for now

Look out for Bill's follow-up and do let me know if I can answer any more of your questions on beneficial ownership specifically or fin- and reg-tech more generally. Here are my contact details again.

Recording of last month's webinar

This is available for free to view for the next 12 months.

 

Keith Furst

10 things I learned at the Future of Finance Summit in Singapore

1 - Don’t say the F word. 

No, not that word. Fintech. The Founder of The Asian Banker, Emmanuel Daniel, drove this point home by holding up a jar during his opening speech on the first day of the event and demanding that a person pay S$1 each time they said the f word. 

The point he was making is that we have come to use fintech so loosely that it has lost its meaning. Fintech is short for financial technology, and the term is so broad and all-encompassing that we, especially in financial services, lose sight of the gravity of the digital transformation happening before our eyes.

He stressed the new power of the consumer demanding friction-less financial services in many different verticals including payments, lending and investing. He left the audience with a somber warning that the banks that become leaders in the digital economy will survive and the ones that don’t will die within the next 20 years.

Founder of The Asian Banker Emmanuel Daniel giving his opening speech

2 - Fintech could lead to a decrease in systemic risk, at least in the beginning

Barney Frank predicted that the innovations coming out of fintech will likely lead to regulatory frameworks more focused on consumer protection as opposed to rules addressing systemic risk.

Mr. Frank argued that peer-to-peer (P2P) lending could create regulatory concerns around due diligence and consumer protection, but that it wouldn’t pose the type of systemic risk created by the mortgage-backed securities of the 2007-2008 financial crisis. Then again, will we see derivatives spin out of these P2P lending platforms?

3 - The regulatory tug of war could get worse

The fight between national and state regulators in the US could heat up as Trump finishes appointing financial regulators.

4 - Being a regulator is a tough job, but somebody's got to do it

Regulators should not wait too long to update the rules and need to ensure the rules they do impose don't stifle innovation.

Former Congressman Barney Frank

5 - If you work in operations, you may want to take a few online courses to learn some new skills

Blockchain has the potential to reduce middle and back office costs by $20 billion or more.  A good portion of these costs can be attributed to salaries.

David Shrier giving a fast paced and data rich presentation

6 - The future of cyber attacks is about to get worse

Cyber attacks are likely to increase in the future, and we will probably start seeing impacts on the physical world and critical infrastructure.  WannaCry was just the tip of the iceberg.

7 - Even Somali pirates are innovating, are you?

Less sophisticated criminals can partner with hackers to reduce risk and increase profits from physical robberies. This was illustrated by Somali pirates partnering with a hacker who was able to breach a content management system (CMS) and identify high value items on vessels in specific shipping containers.

8 - Incident response plans must be real

It’s not enough to simply have a nice slide deck describing what would be done when something goes wrong. Each step needs to be carefully planned, but also rehearsed. For example, does the organization have the ability to press a button and get a phone call and text message out to all of its employees announcing that the incident response plan has been activated?  If company laptops are quarantined due to a malware incident, does the organization have a backup site it can get up and running in a matter of hours?

Howard Mannella presenting on incident response plans

9 - Artificial intelligence (AI) is even coming for lawyers! 

I had the honor of being on the Cybersecurity Thinking Global, Acting Local panel after Professor Marco Gercke gave his presentation on “The Future of Banking – Between the Opportunities of Artificial Intelligence/Machine Learning and the Challenges of Cyber Threats.”

Professor Gercke described an experiment he participated in where lawyers were put in a room and asked to negotiate a contract within 12 hours and provide a price for it. After the experiment they were informed that an AI system had been asked to do the same task. Remarkably, when the two contracts were compared they were almost identical, except for one clause.

A law firm was hired to determine which contract was better, and it took them two weeks to figure out that the AI had produced the better contract. Why? Because the lawyers didn’t know that the clause they included in their version was contested in two small jurisdictions, which is why the AI changed it.

He ended by mentioning that the fundamental issue with the real world applications of AI is trust. Can I trust what the machine is telling me?

10 - Estonia is a global leader in digital government

Still confused about blockchain?  Watch this short video on the technology Estonia uses that has propelled it to global leadership in digital government. While this system would have serious hurdles to overcome in large and complex jurisdictions such as the US, it does provide some interesting food for thought. How much money do governments waste on inefficient and outdated processes?

 

Many thanks to Emmanuel Daniel, Robin Lee 李显龙, David Gyori, Howard Mannella, CBCP, Ka Wei Yuen, David Shrier, Gerlinde Gerber, Theo Nassiokas and all of the other speakers, moderators, delegates and staff. The Future of Finance Summit #FOFSummit17 was truly a great event and I enjoyed meeting so many great people!!

Former Congressman Barney Frank and Keith Furst selfie

Keith Furst presenting on digital identity

 

 

Keith Furst

Data integrity can no longer be neglected in anti-money laundering (AML) programs

Editor's Note: This article originally appeared on the Gresham Tech blog on May 15, 2017.

The New York State Department of Financial Services (NYDFS) risk-based banking rule went into effect on January 1, 2017 and will have a significant impact on the validation of financial institutions’ transaction monitoring and sanctions filtering systems.  The final rule requires regulated institutions to annually certify that they have taken the necessary steps to ensure compliance. 

Data integrity is particularly interesting because it arguably hasn’t been given the same emphasis as other components of an effective anti-money laundering (AML) program, such as a risk assessment. 

There has always been an interesting dynamic in the way compliance and technology departments interact with one another.  This new rule will force institutions to trace the end-to-end flow of data into their compliance monitoring systems, which could be a painful exercise.  It will demand interaction between groups which may have stayed isolated in the past, and it will require some parts of the organization to ask tough and uncomfortable questions of others.  Clearly, gaps will be found and remediation projects will have to be launched to address them. 

But does data integrity need to be such a painful endeavor?  Could there be a better way to streamline the process - and even create value to the institution as a whole - increasing the enterprise value of data (EvD) by ensuring its integrity?

Can New York regulators really have a global impact?

Any financial institution which is regulated by the NYDFS should have already reviewed this rule in detail.  Some have even suggested that if financial institutions haven’t launched projects to ensure they can certify by April 15, 2018, then they are already behind.

What about financial institutions with no physical presence in the United States?  Should these banks be paying close attention to this regulation?  The answer is unequivocally – yes.  First of all, if you read the rule it’s straight to the point and makes a lot of sense (it’s only five pages long). But it goes beyond just common sense and raising the standards and accountability for AML programs. 

Imogen Nash, of Accuity, explored the global implications of NYDFS regulations, in a blog post back in August 2016, which are succinctly summarized below:

“On the surface, one regulatory body that oversees one state would seemingly have little influence on a global scale. Look a little deeper and you’ll find that New York City is not only the financial capital of the United States but also 90% of foreign-exchange transactions and 80% of trade finance transactions (which often involve global parties and cross-border currency exchange) involve the US dollar and flow through financial institutions in New York.”[1]

Ultimately, the argument is that as long as the U.S. dollar continues to hold its dominance among the world’s currencies then the reverberations of what the NYDFS requires will be felt across the globe.

What is data integrity?

Data integrity can be described as the accuracy and consistency of data throughout its entire lifecycle, as it is used by business and technical processes.  The importance of accurate data in a financial institution can be easily demonstrated with a simple mortgage use case.   

On day one, a customer is approved for a $1,000,000 mortgage with a monthly payment of $5,000.  Let’s imagine that the loan system which approved that mortgage had a “bug” in it and on day thirty, the outstanding loan balance suddenly went to $0.  Obviously, maintaining an accurate balance of outstanding loans is an important data integrity task for financial institutions.
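As a sketch of the kind of control that would catch that bug, an integrity check can assert that a balance never moves in a way no recorded payment explains. The field names are illustrative and the rule is grossly simplified: it ignores interest accrual, fees and prepayments.

```python
def check_loan_balance(prev_balance, payment, new_balance, tolerance=0.01):
    """Flag a balance movement that no recorded payment can explain."""
    # Simplified invariant: the balance may fall by at most the full payment
    # and should never rise (interest, fees and prepayments not modelled).
    if new_balance < prev_balance - payment - tolerance or \
       new_balance > prev_balance + tolerance:
        return f"INTEGRITY BREAK: {prev_balance:,.2f} -> {new_balance:,.2f}"
    return "OK"

# Day 30 of the example: the outstanding balance suddenly reads $0.
print(check_loan_balance(prev_balance=995_000, payment=5_000, new_balance=0))
```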

Do banks really make these types of errors?

It wouldn’t be the norm, but there have been documented cases of unusual bank glitches, such as the Melbourne man whose account was mistakenly credited $123,456,789.

The miraculous $123M credit to a customer’s account is definitely an outlier because most core banking systems function fairly well.  However, the reliability of a core banking system can’t necessarily be correlated to transaction monitoring and sanction screening systems. 

The fundamental difference between the two types of systems is that the latter need to aggregate data from many different banking applications which are in scope for compliance monitoring.  While each core banking application may function reliably well on its own - when all of these disparate banking systems are aggregated together into one compliance monitoring system, then various data integrity and quality issues arise.

But it’s not only the aggregation of disparate systems which is a problem; it’s how these systems are aggregated, which is like “fitting a square peg into a round hole.”  Bank systems come in many shapes and sizes, but compliance monitoring systems are one-size-fits-all.  Just imagine Shaquille O'Neal trying to fit into a toddler’s sneakers and then playing a full-court basketball game.  Some things just don’t fit, and when things don’t fit, performance suffers. It’s the same idea when forcing data into compliance monitoring systems.

Why is the extract, transform and load (ETL) process a serious risk for compliance?

The process of moving data from the core banking applications to the transaction monitoring and sanctions screening systems presents several challenges and risks which should be validated on an ongoing basis.  The process which moves data from one source system to another is generally referred to as the extract, transform and load (ETL) process.  There are questions which arise at each stage of the ETL process such as:

  1. Was all of the data extracted properly?
  2. Was all of the data transformed from the source to the target system as designed?
  3. Was all of the data loaded from the source to the target system successfully?

Unless a financial institution’s information technology (IT) department has already implemented data integrity and quality reports for its compliance partners, the ETL process which supports financial crimes compliance is nothing more than a “black box.”  In other words, there would be no reliable way to determine the effectiveness of the data migration processes and controls without some level of reporting outside of IT.
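A minimal sketch of the reporting that turns that black box into evidence: record counts and amount totals compared at each hop of the ETL process. The stage names and fields are illustrative assumptions; real reconciliation would also cover checksums, key-level matching and rejected-record queues.

```python
def reconcile(stage_a, stage_b, label):
    """Compare record counts and summed amounts between two ETL stages."""
    count_a, count_b = len(stage_a), len(stage_b)
    total_a = sum(r["amount"] for r in stage_a)
    total_b = sum(r["amount"] for r in stage_b)
    ok = count_a == count_b and abs(total_a - total_b) < 0.01
    print(f"{label}: {'OK' if ok else 'BREAK'} "
          f"(records {count_a} vs {count_b}, "
          f"amounts {total_a:,.2f} vs {total_b:,.2f})")

source    = [{"amount": 100.00}, {"amount": 250.00}, {"amount": 75.50}]
extracted = [{"amount": 100.00}, {"amount": 250.00}, {"amount": 75.50}]
loaded    = [{"amount": 100.00}, {"amount": 250.00}]   # one record dropped

reconcile(source, extracted, "Extract vs. source")   # OK
reconcile(extracted, loaded, "Load vs. extract")     # BREAK: audit evidence
```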

Does the below interaction sound familiar?

The screenshot below highlights two requirements in the NYDFS rule, but the key word to notice is “validation.”  Validating the integrity, accuracy and quality of data is actually a significant effort, because validation not only implies that a process is taking place; it’s safe to assume that evidence will be requested to prove that the validation itself was complete and accurate.  Since the rule requires the Board of Directors or Senior Officer(s) to certify that their institution is complying, senior management should clearly require reporting around the ETL process, or “black box,” which feeds data into the transaction monitoring and sanctions screening systems.

2. validation of the integrity, accuracy and quality of data to ensure that accurate and complete data flows through the Transaction Monitoring and Filtering Program;

3. data extraction and loading processes to ensure a complete and accurate transfer of data from its source to automated monitoring and filtering systems, if automated systems are used;
Source: http://www.dfs.ny.gov/legal/regulations/adoptions/dfsp504t.pdf

It’s just data. No big deal, right?

Implementing data integrity and quality checks is no simple feat and transaction monitoring and sanctions screening systems will have their own unique requirements.  Transaction monitoring systems (TMS) generally require some level of transformation logic which categorizes various transaction types into groups, which are monitored by different AML models. 

There are other common transformations which occur to support the TMS, such as converting foreign currencies to a base currency used for calculation logic.  Obviously, you can’t sum transactions completed in Euros (€) and Japanese Yen (¥) together because the result would be erroneous. 
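In code, the base-currency point is a one-liner, but only once every amount carries its currency and a usable rate. The rates below are illustrative placeholders, not market data; a real TMS would use dated rates from a feed, keyed to each transaction's value date.

```python
# Illustrative FX rates to a USD base; a real TMS would use dated rates
# from a market data feed, keyed to each transaction's value date.
FX_TO_USD = {"USD": 1.0, "EUR": 1.10, "JPY": 0.0090}

transactions = [(5_000, "EUR"), (1_200_000, "JPY"), (3_000, "USD")]

# Summing the raw amounts would give 1,208,000 of nothing in particular;
# converting first yields a figure that AML threshold logic can act on.
total_usd = sum(amount * FX_TO_USD[ccy] for amount, ccy in transactions)
print(f"Aggregate for monitoring: ${total_usd:,.2f}")   # $19,300.00
```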

These transformation rules are also susceptible to data integrity issues if the source systems make changes to their application without informing the owners of the TMS, who also may need to make changes to their own processes.

Why data integrity projects have failed in the past

Data integrity is not a new concept for those who work with AML systems.  In fact, if industry professionals were asked to name the major impediments to the effectiveness of AML systems, data integrity issues would likely rank among the most popular responses.  Some banks are still using the data integrity processes they built internally or with third parties, with varying degrees of success and sustainability.

One of the main issues is that many of these data integrity processes are built within the databases themselves, or a dedicated data warehouse is constructed for the purpose.  This may seem like a minor detail, but there are four ramifications:

  1. Exclusions: If any exclusions are applied to customer, account or transaction data, it could be difficult to monitor what data is in scope and whether it was extracted properly.
  2. Square peg in a round hole: The transform process is susceptible to significant data integrity issues because the source system is forced to conform to the compliance monitoring system’s data structure.
  3. Data validation: Generally, if a financial institution has any data validation processes at all, they will be performed in the “staging area” of the compliance monitoring system to ensure basic data integrity. Because validation begins at the staging area, it essentially skips steps one and two of the ETL process, which can leave data integrity issues unidentified.  Another question which arises is whether the process is ongoing or was performed only once.
  4. Data lineage: When financial institutions use inflexible monitoring systems with strict data models, data lineage becomes a challenge. How can the financial institution trace the data in the monitoring system back to what’s in the source system?  Even if they could follow the breadcrumbs, did the transform step in the ETL process manipulate the data so extensively that it has become unrecognizable? (One mitigation is sketched below.)
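On the lineage point (item 4 above), one mitigation is to carry the untouched source record alongside every transformed field, so anything in the monitoring system can be traced back to what the source actually said. The field names and code mappings here are illustrative assumptions.

```python
def transform_with_lineage(source_record, source_system):
    """Normalize a record while keeping the raw original for trace-back."""
    return {
        "source_system": source_system,
        "source_key": source_record["txn_id"],
        "raw": dict(source_record),     # untouched copy preserves lineage
        "amount": float(source_record["amt"]),
        "type_group": {"WIRE-OUT": "WIRE", "WIRE-IN": "WIRE"}.get(
            source_record["tx_cd"], "OTHER"),
    }

record = transform_with_lineage(
    {"txn_id": "A-1001", "amt": "2500.00", "tx_cd": "WIRE-OUT"}, "core_dda")
# An investigator can always recover what the source system actually said:
print(record["raw"]["tx_cd"], "->", record["type_group"])
```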

A path forward under the spotlight

Data integrity doesn’t seem to be going away anytime soon, given that the NYDFS has just shone a spotlight on the issue.  In our highly fragmented and compartmentalized modern world we may need to see the challenges we face through a new and objective lens. 

Traditional approaches to data integrity projects have yielded limited quantifiable results in the past.  Isn’t it about time to try a more flexible solution which can evolve seamlessly with the institution’s regulatory requirements and technological landscape? 

Cost, flexibility and speed are three components of monitoring systems that financial institutions must manage effectively as regulatory requirements continue to expand due to growing threats such as cybersecurity.  Data integrity is much broader than just compliance, but building the foundation to get it right for compliance monitoring can cross pollinate into other areas of the enterprise.  Ultimately, data integrity can become a strategic business advantage when trying to enter new markets and launch new products.

[1] Onaran, Yalman. “Dollar Dominance Intact as U.S. Fines on Banks Raise Ire.” Bloomberg, 16 July 2014. http://www.bloomberg.com/news/articles/2014-07-15/dollar-dominance-intact-as-u-s-fines-on-banks-raise-ire. Accessed 13 July 2016.

Risk Management · Keith Furst

Micro-jurisdictional risk: One of AML’s missing links

High-level jurisdiction risk assessments alone are often too broad in scope to include in anti-money laundering policies; micro-jurisdictional risk analysis could help allay model bias.

  • Some aspects of AML policies and procedures are in need of an overhaul, to capture variables important to making accurate analysis
  • Low risk jurisdictions have high-risk neighbourhoods and, conversely, not all customers and transactions from high-risk jurisdictions warrant heightened scrutiny
  • Financial institutions should go beyond existing AML guidelines and apply a more granular risk-based approach to geography to stay ahead of upcoming best practices

Editor's Note: This article originally appeared on The Asian Banker on May 9, 2017.

Financial institutions tend to build their anti-money laundering (AML) frameworks based on regulatory guidelines and commonly accepted industry standards. This can include jurisdiction risk, a common input banks use when evaluating customers and their transaction activity to determine degrees of AML risk, or to identify suspicious activity. Jurisdiction risks are based on a number of factors, including links to sanctions, terrorism, narcotics, corruption and other legislative and government deficiencies. The problem with attempting to include jurisdiction risk in AML policies is that it is simply too broad in scope, ranging from entire countries to subcomponents of cities, which leaves such analysis either overemphasised or underemphasised as a data input.

As an example, there is strong evidence from the US that crime often occurs in High Intensity Drug Trafficking Areas (HIDTA), given the link between violent crime, competing drug distributors, and addicts looking to finance their next fix. HIDTA is an example of a high-risk micro-jurisdiction within the US, and banks may factor this into their lending decisions.

By contrast, foreign micro-jurisdictions have generally not been a component of AML programmes because of the large number of geographic areas which would need to be monitored on an ongoing basis, and the difficulty of defining how a standard of a high-risk micro-jurisdiction could apply across the globe. This leaves a gaping hole in the AML process, which severely limits its potential global impact.

As an example, while Belgium is generally considered to be a country with low credit, compliance, and traveller risk, it has also proven to be a breeding ground for Islamic extremism, with a 2016 report by the International Centre for Counter Terrorism in The Hague noting that an estimated 13% of EU foreign fighters in Syria have roots in Belgium - the greatest number per capita of any country in the EU. An issue for Belgium is that information is not consistently shared between law enforcement agencies, making it easy for leads to fall through the cracks. Brussels has six police forces, each reporting to a different mayor. Given that sharing even rudimentary intelligence is such a difficult process, what real chance do national or global law enforcement authorities realistically have to apply AML standards to the transfer of funds between terrorism suspects in Belgium, and between Belgium and destinations in the Middle East?

Other high-risk micro-jurisdictions have been tied to terrorism. Evidence has shown that funding for the 1998 U.S. embassy bombing in Nairobi was channelled through a hawala office in the city’s infamous Eastleigh neighbourhood, an enclave for Somali immigrants. Kenya is generally considered to be high risk for travellers, with Eastleigh at elevated risk due to violent crime and terrorist activity. Another example is El Salvador, which became the world’s most violent country in 2016, with its capital, San Salvador, the world’s most homicidal city. San Salvador is clearly another high-risk micro-jurisdiction, where law enforcement has itself become a victim of the drug and gang war: 49 police officers were killed in 2015 alone. High murder rates can imply greater AML risk in specific geographic areas.

Even though a correlation can undoubtedly be made between high-crime neighbourhoods and money laundering, it is worth asking whether financial institutions should be responsible for applying a risk-based approach to these areas, as they would be required to do for other, non-high risk jurisdictions. Clearly there is great need for stripping bias from decision making to enable managers to make better decisions about whether, and how, to apply AML methodologies. Placing greater significance on managing jurisdiction risk would appear to be an important way to create greater awareness among investigators who review customers and their transactions.

However, imagine two publicly traded multinational companies conducting transactions with one another from high-risk jurisdictions. That should not automatically be considered high-risk by virtue of geography alone, because the nature of the risk depends on the entities involved, their reputations, histories, and potential AML exposure. Should there turn out to be a strong correlation between customer risk ratings and jurisdiction risk, it could be a symptom that jurisdiction risk is being overemphasised. The same bias could apply to an institution’s transaction monitoring system, which may generate alerts in proportion to the risk level of the jurisdictions transactions originate from or terminate in.

Model biases can manifest themselves in any number of areas within an AML programme, including customer due diligence, transaction monitoring and sanctions. Focusing on micro-jurisdictions and other factors linked to transactions can allow for the recalibration of existing models to adapt a more granular risk-based approach. While it may seem like a daunting task to identify high-risk micro-jurisdictions, data is available and can be made useful more easily than many institutions may imagine, with the right approach. It is incumbent upon all financial institutions to go beyond existing AML regulatory guidelines to find the most effective approach to AML. Eventually, the regulations will catch up with emerging best practices.

Keith Furst is founder and financial crimes technology expert of Data Derivatives. Daniel Wagner is managing director of Risk Cooperative and co-author of the book “Global Risk Agility and Decision Making”. The views expressed herein are strictly those of the authors.

Keith Furst

B2B Fintech: Payments, Supply Chain Finance & E-invoicing Guide 2017

Editor's Note: This guide post originally appeared on The Paypers on March 13, 2017.

A new guide produced by The Paypers explores the evolving world of transaction banking and the B2B payments, supply chain finance and e-invoicing markets.

By downloading the 2017 Guide, you will learn:

  • what shapes digital transaction banking: the journey towards customer centricity (Innopay, BNY Mellon), the benefits of digital solutions for (currently underserved) corporate customers (Aite Group, Nordea), how banks are preparing for the upcoming regulations of PSD2 (Open Banking and APIs), KYC & the 4th AML directive (Deutsche Bank, Accenture, Data Derivatives);

  • what are the driving forces in B2B payments: instant payments (Tieto, UniCredit), unified APIs (NACHA), the drivers for banks and corporates in developing their payment strategies (Strategic Treasurer), the opportunities in the European commercial card and payments (CleverAdvice);

Furthermore, you will learn about:

  • the trends in supply chain finance: how to successfully start a supply chain finance programme (Orbian), how logistics service providers are shaping the future of supply chain finance (Supply Chain Finance Community), the evolution of terminology (ICC Banking Commission), and the reason why supply chain finance is not meeting its promise to SMEs (Capital Chains);

  • what are the opportunities and challenges in achieving treasury maturity (Zanders), how fintech transforms supply chain finance (Hitachi Capital America), and the supply chain growth strategies in the emerging markets (Standard Chartered Bank);

Finally, get insights on:

  • how to reduce double financing risks in SME financing with blockchain (Innopay);

  • the strategic approaches to B2B e-communication optimization (Comarch EDI), the European standard on electronic invoicing (Peter Potgieser), and the strategies to implement and comply with the Directive 2014/55 (EESPA).

The 2017 edition is endorsed by Innopay, an independent consulting firm, and Supply Chain Finance Community, a not-for-profit group for all those involved in supply chains.

This Guide, carefully created by The Paypers, puts together the most recent and relevant information in transaction banking, payments & finance, so make sure you download your complimentary copy!

Download report

Keith Furst

AML Data Quality: The Challenge of Fitting a Square Peg into a Round Hole

Editor's Note: This article originally appeared on the DataVisor blog on February 12, 2017.

As mentioned in my previous articles, traditional rule-based transaction monitoring systems (TMS) have architectural limitations which make them prone to false positives and false negatives.

This article focuses on the third drawback of existing TMS solutions: how their inflexible data models lead to poor data quality, resulting in additional false positives and false negatives.

I think many of us working in the anti-money laundering (AML) technology space have experienced the frustration of spending many hours retrofitting new data types to squeeze into the rigid data model of a TMS. Unfortunately, the more effort we spend retrofitting data, the more likely we introduce data quality issues. Further, when we don’t complete it in a timely fashion, we’re exposed to risk of large fines from regulators. That said, there’s hope on the horizon from machine learning solutions that are more forgiving of disparate data formats.

Square peg in a round hole

Sending data from source systems to many of the existing TMS is like trying to fit a square peg into a round hole. There are two major reasons why this is the case.

First, TMS require large volumes of data of many different types. Financial institutions typically have many disparate customer, account and transaction systems that feed data into the TMS to satisfy monitoring requirements. Second, existing TMS have a monolithic data model that’s generally difficult to adjust without significant customization.

This forces the financial institution to change its data to conform. However, this is difficult because each source system will have its own unique characteristics and ultimately serve a different business purpose. For example, a mortgage lending application may function differently than a system handling retail demand deposit accounts (DDA). Furthermore, each system will have its own data model or way to store and update information.

Unfortunately, these challenges result in a long, arduous process filled with subtle gotchas, where missed AML events leave you exposed to huge fines from regulators. For example, imagine that a financial institution purchased a commercial loans company. The financial institution must integrate the acquired company’s data into its existing TMS, but the process takes longer than anticipated. During a regulatory exam, the regulator uncovers that the purchased company’s data is still not being monitored by the existing TMS. The regulator views the acquired firm’s lack of integration into the existing AML framework as a red flag and decides to probe the program deeper than it had in the past.

Even worse, the more the data is reshaped to fit the TMS data model, the greater the likelihood of developing additional data issues. And as you know, this will lead to false positives and false negatives down the line.

The best solution is to minimize data transformations. If the files are kept as close to the source system’s original format as possible, data integrity issues remain isolated to that system. While a certain degree of data transformation will be required before the detection algorithms are run, this can be accomplished within the TMS itself. However, this would require a TMS that is not based on a monolithic data model and has some flexibility and adaptability.
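As a minimal Python sketch of that principle, consider two hypothetical source feeds landed in their native shape, with only a thin per-source adapter applied at read time (all field names below are invented for illustration, not taken from any real system):

# Minimal sketch (hypothetical field names): land each source record as-is
# and normalize lazily, instead of forcing every feed into one rigid schema.

RAW_FEEDS = {
    "mortgage_system": [{"loan_id": "M-1", "borrower": "Acme LLC", "amt": "250000"}],
    "retail_dda": [{"acct_no": "D-9", "holder": "Acme LLC", "balance": 1200.50}],
}

# Per-source adapters: the only transformation is a thin mapping applied at
# read time, so data quality issues stay isolated to the source that caused them.
ADAPTERS = {
    "mortgage_system": lambda r: {"entity": r["borrower"], "value": float(r["amt"]), "source": "mortgage"},
    "retail_dda": lambda r: {"entity": r["holder"], "value": float(r["balance"]), "source": "dda"},
}

def normalized_records():
    for source, records in RAW_FEEDS.items():
        adapt = ADAPTERS[source]
        for record in records:
            yield adapt(record)  # the original record remains untouched upstream

for row in normalized_records():
    print(row)

The point of the sketch is that each adapter is small enough to audit on its own, so a mapping mistake in one feed never contaminates the others.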

How unsupervised machine learning (UML) leads to a more flexible TMS

There are some promising AI-based TMS solutions that are designed to solve this data inconsistency problem. Using unsupervised machine learning (UML) allows the TMS to have flexible data requirements. (For more information about how UML works in the context of AML, read my first blog post on the subject.)

To understand why, consider their differences. Traditional TMS with rule-based models look for specific scenarios and require specific fields structured in certain ways to map them to their internal data model. UML does not have a strict data model that inputs must adhere to; rather, it works with the data that it’s given.

Consider the scenario where an account was previously dormant and then suddenly began transacting very quickly. A rule would require several highly specific data fields and encode strict thresholds in order to try to match the scenario. However, the rigidity of the data fields makes the initial integration difficult, which increases the likelihood of data quality issues. A secondary issue is the strict thresholds, which lead to false positives and false negatives.

On the other hand, a TMS that leverages UML can take in a variety of data fields to find hidden networks of accounts with anomalous behavior. For example, UML may uncover a network of accounts that were previously dormant and started transacting quickly.

Note this example is simplified, as in practice the UML model would take into account hundreds to thousands of different data attributes to uncover the network.

There are three major benefits of using UML to power or supplement a TMS. First, with little data integration effort required, there are few chances to make mistakes that lead to data quality issues (and ultimately, false positives and false negatives). Second, it’s faster to get the TMS up and running. And third, it’s much easier to add new data fields or entirely new use cases over time, whether to accommodate changing business logic (for example, when new product offerings are launched) or to keep pace with relentless criminals adapting their methods.

The future of TMS technology

Ultimately, detecting money laundering is extremely complex. To make matters worse, customers, customer behaviors, product offerings, regulatory requirements, and even institutions themselves are under a constant state of change. We must consider that the tools we use to fight financial crime today not only limit our technical capabilities, but may actually influence the way we think about the problem itself.  As Marshall McLuhan said, “We shape our tools and afterwards our tools shape us.” It’s time we got some better tools.

Read More
Keith Furst

BankThink De-risking shows failure of AML teams to innovate

Anti-money-laundering rules have always been a challenge in the financial services arena, with regulatory bodies demanding high standards of compliance and levying fines for noncompliance. Financial institutions have long struggled to meet those demands.

But the high regulatory burden of satisfying these rules is not an excuse for the current de-risking phenomenon, in which financial institutions are pulling out of regions and client relationships seen as carrying money laundering risk, rather than face the costs and regulatory risk of maintaining those relationships. The conundrum associated with satisfying AML regulations has as much to do with a failure of imagination in efforts to follow the rules as it does with how onerous the regulatory requirements are.

Editor's Note: This article originally appeared on the American Banker on April 12, 2017.


Banks have a propensity to blame regulators and excessive compliance costs for their pulling out of business lines without necessarily trying to find a way to make it work. Compliance teams and other stakeholders have resisted being honest about the need to innovate. There has been a failure to experiment and an overreliance on “conventional” AML standard operating procedures.

Banks’ frustration with current industry tools, practices and standards has prompted a lobbying effort to call for reforms. A February 2017 report released by The Clearing House provided some excellent suggestions regarding information sharing, prioritization of AML/combating-the-financing-of-terrorism (AML/CFT) standards and beneficial ownership reporting requirements, among others. However, the report failed to acknowledge the possibility that existing regulations may be purposely broad and open to interpretation. The risk-based approach is a recurring theme in regulatory guidelines, so why must regulators clearly define priorities and standards for banks?

Banks can test hypotheses and discover new trends where guidelines may not exist, but the industry has discouraged deviations from accepted norms. The Clearing House report suggests that banks are afraid to innovate for fear of regulatory sanction. One can only wonder whether this is a palpable risk or a manufactured fiction spread by the AML industry. The explosion of funding in regtech startups by venture capital firms demonstrates that investors realize there is an opportunity to redefine the AML industry because it is evident that innovation will not come from within.

In fairness, the banks are not completely to blame, as the currently accepted tools to fight financial crime do not allow them to innovate. The failure of imagination extends across a whole host of entities in the AML supply chain, not just to onerous regulatory requirements and compliance burdens. AML stakeholders — including banks, certification organizations, technology companies, regulators and the compliance community as a whole — have resisted being honest about the need to innovate.

To move the innovation needle forward, all parts of the supply chain will need to do their part. For the banks, part of the answer is to integrate AML/CFT with risk management processes more generally. This should not be a difficult task, particularly as so much of what banks do is transactional in nature. Nearly all banks have developed formal programs to manage transactional risk, and most of these are centralized so as to establish and maintain control over an entire network of operations. Transactional risk management is usually integrated with the credit risk management function, but larger banks tend to integrate it into their overall risk management process.

Most banks take a comprehensive view of risk, but tend to differ in terms of how specific risks affect their risk-rating systems. Almost all banks assign formal country ratings, most of which cover a broad definition of risk; ratings are typically assigned to all types of credit and investment risk, including local currency lending. Many banks apply a single country rating to all types of exposure, while distinguishing between foreign and local currency funding, and formal exposure limits tend to be set annually and managed through the use of aggregate country exposures.

Transactional risk ratings establish a ceiling that also applies to credit risk ratings. Most banks do not have formal regional limits on lending, but some monitor exposures for a given region informally, and most have specific country limits.

Few banks can say they have fully, effectively and efficiently integrated AML into the larger risk management process, or have linked AML with their country risk management programs. But such steps are needed, rather than merely continuing the de-risking blame game. The AML sphere is ripe for transformation on the part of all players in the supply, delivery and user chain. It may just be that the very regulatory oversight used to enforce compliance can be turned into the vehicle driving AML innovation.


Read More
Keith Furst

The Other Elephant in the Room: Defeating False Negatives in AML Systems

False positives have a terrible reputation among anti-money laundering (AML) circles. As mentioned in my previous article on ending the false positive alerts plague, approximately 90-95 percent of alerts generated by Transaction Monitoring Systems (TMS) are false positives. So, why don’t we tighten our rule thresholds to let fewer alerts through?

Unfortunately, tightening thresholds typically increases false negative alerts, which are real money laundering activities that the TMS didn’t catch. Though false positive alerts lead to high operational overhead, false negative alerts can cost you both in reputation and in major fines and penalties. As AML teams know, regulators have no qualms handing over hefty fines, which have risen considerably in the last decade.

Editor's Note: This article originally appeared on the DataVisor blog on March 5, 2017.


Source: https://www.wsj.com/articles/no-more-regulatory-nice-guy-for-banks-1419957394

The fight against producing false negative alerts while trying to manage false positive alerts is like playing the world’s worst game of AML whack-a-mole. Just when you think you’re fixing one of the problems, the other pops up with a vengeance. This raises the question: is it possible to reduce false negative alerts without increasing false positive alerts?

Is it a snake or an elephant? Seeing the whole picture

To answer this question, we first need to understand the root cause of false negative alerts. Think about the three general steps for a money laundering scheme: placement, layering, and integration. The big reason for false negative alerts is that most TMS only look at the symptoms of money laundering rather than the whole picture.

For example, an existing TMS might implement a rule that triggers if an account has multiple transactions close to, but less than, $10,000. Sophisticated money launderers know about these rules, so they get around them by using stolen identities, multiple accounts, different transaction types, and low transaction amounts. Ideally, the TMS would not hard-code a cut-off threshold. Rather, a more sophisticated TMS would find the suspicious patterns to conclude that one person is behind the 100 accounts that are transferring money to each other.
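To make that brittleness concrete, here is a minimal Python sketch of such a hard-coded rule (the field names, $500 margin and three-hit trigger are illustrative assumptions, not drawn from any vendor TMS):

# A hedged sketch of the kind of hard-coded structuring rule described above.
THRESHOLD = 10_000
MARGIN = 500        # "close to, but less than, $10,000"
MIN_HITS = 3        # alert after this many near-threshold transactions

def structuring_alert(transactions):
    """Flag an account whose cash transactions cluster just under the CTR line."""
    near_threshold = [t for t in transactions
                      if THRESHOLD - MARGIN <= t["amount"] < THRESHOLD]
    return len(near_threshold) >= MIN_HITS

account = [{"amount": 9_800}, {"amount": 9_650}, {"amount": 9_900}]
print(structuring_alert(account))   # True

# A launderer who spreads $9,000 deposits across many accounts never trips
# this rule -- exactly the false negative described above.
evasive = [{"amount": 9_000}, {"amount": 8_700}]
print(structuring_alert(evasive))   # False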

Trying to catch money laundering with the current rudimentary rules systems we’re used to is similar to the parable of the blind men trying to identify an elephant. One blind man feels the trunk and thinks it’s a snake. Another feels a leg and thinks it’s a tree, and so on. Without a global picture of what’s going on, it’s impossible to spot the real money launderers (the “elephants”). The TMS is blind to many of the suspicious signals that money launderers unknowingly broadcast.

The Solution: global network detection with unsupervised machine learning (UML)

The solution to the false negative alerts problem is using unsupervised machine learning (UML) for global network detection. UML can view all of the activity at once, and in the context of all other accounts and activity. This is accomplished by linking accounts and activity together, which makes sense to humans but cannot be accomplished efficiently and comprehensively by existing TMS. By linking accounts, UML can detect hidden global networks of money laundering activity.

To understand what global network detection is, consider this example. Say Client A sends a payment to Client B, who then sends a payment to Client C. In current TMS, Client C’s demographic profile and other payment activity would not influence the rules contributing to an alert for Client A.

However, UML can uncover this hidden link. Not only is Client B associated with Client C, but Client A is as well. Of course, this is an oversimplified example. Hidden links can be created from some combination of thousands of data fields to connect seemingly unrelated accounts and customers. The below image, which you may recognize from my first post, depicts customers that are linked based on shared demographic information such as an email address, physical address, phone number, internet protocol (IP) address and a common beneficiary.
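A stripped-down Python sketch of that linking idea might look like the following (the customer records and shared attributes are invented, and a production system would link on far more fields; networkx is used here as a convenient stand-in for whatever graph engine a real TMS would employ):

import networkx as nx

# Toy records: three "different" customers sharing a phone number and an IP.
customers = [
    {"id": "C1", "phone": "555-0100", "ip": "203.0.113.7"},
    {"id": "C2", "phone": "555-0100", "ip": "198.51.100.4"},
    {"id": "C3", "phone": "555-0199", "ip": "203.0.113.7"},
    {"id": "C4", "phone": "555-0142", "ip": "192.0.2.55"},
]

G = nx.Graph()
for c in customers:
    G.add_node(c["id"])
    # Link each customer to its attribute values; shared values create paths.
    for attr in ("phone", "ip"):
        G.add_edge(c["id"], f'{attr}:{c[attr]}')

# Connected components containing more than one customer are candidate
# hidden networks that an investigator can triage as a single case.
for component in nx.connected_components(G):
    linked = sorted(n for n in component if n.startswith("C"))
    if len(linked) > 1:
        print("linked customers:", linked)   # ['C1', 'C2', 'C3']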

The manual alerts queue will also change. Rather than looking at the account level, items in the review queue may contain multiple accounts, each with its own set of events. These accounts will be suspiciously linked in several ways. A human investigator will still review the abnormal patterns presented to them, but this approach saves the investigator time by removing the need to manually comb through data and discover new patterns. Imagine the time savings if, instead of reviewing 100 alerts individually, you can review one hidden network with 100 linked accounts at once. Further, with all of the information presented upfront, there’s a much lower chance of accidentally missing something.

Ultimately, the major benefits of UML and global network detection make the decision clear. AML teams need to push themselves to embrace this new technology. By doing so, AML teams will:

  1. Have the sophistication required to see the bigger picture, taking in hundreds to thousands of attributes at once
  2. Automatically adapt to new suspicious activity patterns and newly launched products
  3. Have more information about each account presented upfront, reducing the time required to consolidate information for a case
  4. Triage many accounts and their coordinated activity at once, reducing the items in their queue

Existing TMS are too rudimentary to effectively catch money launderers, who are very good at quickly adapting and blending in. Money launderers have no choice but to be very careful and effective; the stakes are very high if they get caught. It’s up to the AML community to raise these stakes even more, and deter these money launderers from using our financial systems to funnel illicit funds.

Read More
Keith Furst

End the False Positive Alerts Plague in Anti-Money Laundering (AML) Systems

Based on a report issued by PricewaterhouseCoopers (PwC), 90 to 95 percent of all alerts generated by transaction monitoring systems (TMS) are false positives. Not only does this translate into operational overhead, it may also lead to missing real alerts hiding under the mountain of false positive alerts. This is not news to those in the anti-money laundering (AML) space. I often hear the same complaint from my colleagues who implement TMS at other financial institutions. It’s not uncommon to hear:

“Our TMS was generating a few hundred alerts every month, but after we went through the upgrade, it’s generating thousands!”

The problem with false positive alerts is that they create huge operational overhead that translates into absolutely zero substantive suspicious activity report (SAR) filings. At a certain point, there are diminishing returns for alerts generated. A bank can only investigate so many alerts and still conduct effective investigations.

Editor's Note: This article originally appeared on the DataVisor blog on February 12, 2017.


Not only do the problems described below generate massive amounts of false positive alerts, they also make it easy for criminals to get around existing TMS. (A topic we will explore in a future post.)

Why are there so many false positive alerts?

There is a fundamental technical barrier in traditional TMS that leads to a flood of false positive alerts. These TMS rely on rules or simple models which have a myopic view of global trade, human behavior, the complexity of transactional networks and the hidden links between nefarious actors. They take a very simplistic view of the activity being monitored, distilling it down into only a few dimensions for the rule to interrogate.

Here are the two fundamental problems of existing TMS:

  1. Coarse-grained rules that result in detecting many scenarios, most of which are not actually suspicious
  2. Using only a subset of event types and data available, which limits the number of signals they can use for detection

For example, by looking at all the information available in the following diagram, it’s clear that in these ten transactions, only one is suspicious:

However, existing TMS only look at a subset of the data available to them. One rule in the TMS may be to flag as suspicious all transactions within a specific timeframe that fall between $9,500 and $9,999. To the rule, all ten transactions look the same, so all ten are flagged as suspicious. This is a 90% false positive rate.
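The arithmetic of that example can be made explicit in a few lines of Python (the ground-truth label is hypothetical, standing in for the one genuinely suspicious transaction in the diagram):

# Ten near-threshold transactions; only the first carries a hypothetical
# ground-truth "suspicious" label.
transactions = [{"amount": 9_750, "suspicious": i == 0} for i in range(10)]

# The coarse rule flags everything in the $9,500-$9,999 band.
flags = [9_500 <= t["amount"] <= 9_999 for t in transactions]

false_positives = sum(f and not t["suspicious"] for f, t in zip(flags, transactions))
print(f"false positive rate: {false_positives / sum(flags):.0%}")   # 90%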

Is it possible for existing TMS to make their rules less coarse-grained? No, because TMS only look at a subset of event types, so if the rules are designed to be more specific, then they will miss real suspicious activity. Casting a wide net means that they will be able to detect some suspicious accounts, but will also result in alerts on a lot more good accounts. It’s not enough to simply tweak the existing rules or simple models. Rather, it’s necessary to look to a new technical solution to address the false positive alerts plague.

The promise of unsupervised machine learning

Unsupervised machine learning (UML), if implemented properly, can solve these problems for AML teams. UML can be leveraged to reduce false positives by looking at all activity within your financial institution from a global view and linking common bad actors together. This drastically reduces false positive alerts without compromising on compliance with regulatory guidelines.

To see how this is possible, it’s important to understand the technology. UML is a category of machine learning that can detect hidden patterns in large data sets, such as fraudulent user accounts, without prior knowledge of what a fraudulent account looks like. This is different from supervised machine learning, which requires knowledge of previous patterns to catch similar ones in the future. In the context of AML, UML automatically finds these hidden patterns to link seemingly unrelated accounts and customers. Each link can be formed from any of the thousands of data fields that the UML model ingests. The below image depicts customers detected by UML because they are linked through shared attributes such as an email address, physical address, phone number, internet protocol (IP) address and a common beneficiary.

So, in contrast to using coarse-grained rules, UML considers thousands of data fields to detect complex networks. This allows UML to look at a vast array of attributes and sift the real signal (suspicious activity) from the noise. Furthermore, UML can ingest all event data, which enables it to determine if accounts have similar suspicious related activity. For example, UML can link accounts together that have similar high transaction volume with low dollar amounts in the same time window—without being programmed to look for this specific case.
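As a toy illustration, the Python sketch below groups accounts by behavioral similarity without any labeled examples. The feature values are invented, and DBSCAN is simply one convenient density-based clustering algorithm; the article does not prescribe a specific model, and a real deployment would scale its features and use many more of them:

import numpy as np
from sklearn.cluster import DBSCAN

# One row per account: [transactions per day, average dollar amount].
features = np.array([
    [120, 45], [115, 50], [118, 42],   # dense cluster: high volume, low value
    [3, 2500], [5, 1800], [2, 3200],   # ordinary, unrelated accounts
])

# Accounts sharing a non-negative label form a behavioral cluster; -1 means
# "noise", i.e., no similar peers. eps and min_samples are illustrative.
labels = DBSCAN(eps=15.0, min_samples=2).fit_predict(features)
print(labels)   # [ 0  0  0 -1 -1 -1]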

UML also decreases the prevalence of false positive alerts because it can catch a group of related accounts, which gives it more confidence that those accounts are bad. Think about it: if you saw one account do something weird, you might be unsure whether it’s bad, but if you see fifty accounts linked together doing similar suspicious activity, you become extremely confident that they’re all bad. UML is better at differentiating between good and bad activity, and when an alert is generated, you can be much more confident that it’s a real alert.

What this means for compliance departments

With rapidly growing compliance department costs and no decrease to regulatory fines in sight, it’s becoming increasingly clear that we need a new approach to TMS. UML can reduce compliance costs by lowering false positive alerts and reprioritizing time spent on investigations. At the same time, it can increase the quality of suspicious activity report (SAR) filings. As the number of alerts to investigate decreases, existing compliance resources can be reallocated to other important activities such as quality control, analyst training and risk assessments.

From a practical perspective, the transition from traditional TMS to one that uses UML does not have to happen overnight. Instead, using UML alongside another TMS can be a great place to start. This can be an easy and more gradual solution, which I’d recommend when implementing any new TMS, unsupervised or not.


Ultimately, financial institutions have embraced UML in other areas of banking such as fraud, credit risk and trading, so it is only a matter of time before compliance departments do the same. It’s now a question of when and which institutions will lead the pack out of traditional TMS to raise the stakes in the fight against money laundering.

Read More
Keith Furst

Merchant-based money laundering part 2: Prepaid gift card smurfing

During the holiday season, when you were in a time crunch to get something for that special someone, or an acquaintance you felt obligated to shop for, you may have entered a pharmacy and noticed how easy it was to purchase a prepaid gift card.

For you, it was an easy gift, because the recipient could likely use the plastic cash anywhere to get themselves something they actually wanted.

You could easily have gotten several prepaid cards and spent a few hundred dollars. In that scenario, everyone is happy because you get to shop en masse, and that ridiculously long Christmas list finally started to shrink – about the same time as the weight of your wallet.  But that freedom of choice also extends to the criminal element, who also want the financial freedom and anonymity prepaid cards can offer – that is, if you employ classic smurfing techniques used to launder drug money and adapt them to the prepaid card, or stored value, front.

Editor's Note: This article originally appeared on the Association of Certified Financial Crime Specialists on January 26, 2017.


The problem is a challenging one with real life repercussions. In many recent high-profile criminal and terror acts, such as those in Paris and other countries, groups used prepaid cards to fund their operations.

These groups can act as “smurfs” by fanning out to many retailers without them putting all of the pieces together – in many instances choosing pharmacies, not unlike drug dealers scouring pharmacies in decades past for cold pills that, at the time, were a critical precursor for meth.

At issue is a sometimes disjointed relationship between the banks issuing the cards, the merchant acquirers processing the cards, the card companies with their name on the cards and the merchants accepting the cards. In this second piece on merchant-based money laundering, we will be looking at the role, responsibilities and some potential best practices for merchant acquirers.

In short, while some merchant acquirers may be banks, be part of a bank or be owned by a bank and thus subject to financial crime compliance rules, in certain cases they simply perform designated functions through third-party relationships, creating a possible opening for criminals.

Many acquirers are third-party companies that have visibility into what and how prepaid cards are being used and can gain insight into, say, whether a large number of cards are being processed by a certain merchant, or even by a high-risk foreign company or nebulous online operation.

But these merchant acquirers are typically more worried about a merchant that could be engaged in fraud, not the merchant processing cards as part of a broader international network to launder money.

Depending on the makeup of the company, merchant acquirers may not be subject to formal anti-money laundering (AML) obligations under state, federal or contractual rules.

Organized criminal groups are increasingly exploiting this perceived gap in the stored value supply chain to move illicit funds and make them look clean, a dynamic even more insidious if the unsavory characters already have a merchant account at a brick-and-mortar or online business, or have co-opted a previously unsullied business through bribery or corruption.

'Open loop' is often most attractive for criminal ends

They do this by exploiting certain oversight mechanics and transaction thresholds created as part of AML and anti-fraud rules.

Typically, identification rules in bank transactions, such as a deposit or withdrawal of more than $10,000, or a wire transfer of more than $3,000, come with obligations to collect individual details and, in the case of large cash transactions, record them in a currency transaction report (CTR).

As well, if a person goes into a bank and tries to deposit more than $10,000 in prepaid cards or uses more than $3,000 in cards to wire money, the bank would ask for identification.

On that note, as a point of context, if a seller of prepaid access, such as a pharmacy, sells a prepaid reloadable card, then the issuing bank needs to collect basic KYC information on the purchaser within a certain time period, or the card will be deactivated and the individual won't be able to load more money onto it.

But, in many instances, if a criminal goes into any of the array of small and large stores that sell prepaid cards – the preferred method due to their ubiquity is pharmacies – and only spends between $300 and $500, they can purchase a card that can be used at nearly any merchant in the United States with nary a second glance.

Some retailers have set certain ID thresholds as low as $300 or $500, a figure a card smurf would quickly learn in order to determine his laundering ceiling.

The preferred method is through what are termed “open-loop” prepaid cards, typically associated with a major card or bank brand, which can be used across the United States and in some cases internationally. In the prepaid, or stored value, space, there are also “closed-loop” cards, such as a non-reloadable card that can only be used at a specific restaurant or retailer.

Those carry a significantly lower risk of being used by criminals to store and launder money, but they still can be.

The open-loop prepaid gift cards discussed here are issued by the major card brands and are non-reloadable, meaning you can only load cash onto them one time. There are other open-loop prepaid debit cards which are reloadable, but reloading them would require some type of identity verification with the issuing bank.

Also, you may have noticed that if you purchased an open-loop prepaid gift card with cash that there was no attempt by the store’s employees to verify your identity and the card was activated before you left the store. 

The non-reloadable open-loop prepaid gift cards typically have a maximum load amount of $500 and these cards can only be used at a physical or online merchant located within the United States. 

However, even with these sensible restrictions, these cards are susceptible to a money laundering scheme which can be described as prepaid gift card smurfing.

As we said, this article will explore the money laundering risks associated with a non-reloadable open-loop prepaid gift card, which we have shortened for editorial purposes to “prepaid gift card.” 

Even though it won’t be covered in this story, there is still an ongoing debate regarding prepaid cards which are carried across the United States border without counting towards the cross border reporting requirement of $10,000, though they clearly should, say experts.

Currently, the U.S. Treasury’s Financial Crimes Enforcement Network (FinCEN) is trying to create rules that would subject prepaid cards to the same rules as cash at border crossings, but the agency has run into political, economic and logistical hurdles to craft a system that would allow smooth travel and trade, but create a filter for card-carrying criminals.

The Scheme

Despite the dollar limit and geographic restrictions placed on prepaid gift cards, there are still money launderers that will exploit the ability to load cash anonymously onto these cards. 

This scheme requires effort, and the money launderers will need to have their own physical or virtual card terminals, or a relationship with a business that does, to extract the value from the cards into a bank account.

The effort comes into play because an individual would need to travel from store to store purchasing prepaid gift cards with cash. 

This is not a new idea as this type of smurfing scheme has been well documented by various law enforcement agencies.

As we have stated, “smurfing” is where individuals – in the classic case, agents working for drug producers – make small purchases of Sudafed at different pharmacies to support illicit methamphetamine production, making it nearly impossible for any one pharmacy to put all the pieces together and report the activity to authorities.

Mexico restricts import of pseudoephedrine and ephedrine

But to truly understand the lengths that criminals will go to employ lower-ranking, expendable agents, or even dupe vulnerable or homeless people, to smurf gift cards, you need a quick history lesson on how pharmacies have been victimized in the past, in this case by Mexican cartels seeking a vital chemical precursor to meth.

Here is the story.

In decades past, Mexican drug trafficking organizations (DTO) were producing massive amounts of methamphetamine with the help of imported chemicals and smuggling their end product into the United States.

To combat the DTOs, the Mexican government restricted the import of chemicals used to produce methamphetamine, but ironically this led to production operations being moved back to the United States.

“Pseudoephedrine and ephedrine import restrictions in Mexico resulted in decreased Mexican methamphetamine production in 2007 and 2008,” according to the Justice Department.

In 2005, the Government of Mexico (GOM) began implementing progressively increasing restrictions on the importation of pseudoephedrine and ephedrine. In 2007, the GOM announced a prohibition on pseudoephedrine and ephedrine imports into Mexico for 2008 and a ban on the use of both chemicals in Mexico by 2009.

Due to the strict regulation of these chemicals in the United States, the DTOs found it easier to source what they needed by having smurfs purchase small amounts of Sudafed from various pharmacies – so doing the same thing to get their hands on dozens or even hundreds of prepaid cards is no problem.

For instance, in October 2007, a Fresno County investigation revealed that a couple “had been conducting daily pre-cursor chemical smurfing operations, soliciting homeless individuals to travel from store to store to purchase pseudoephedrine. In exchange, the couple paid each person approximately $30 and sometimes gave the individuals alcohol,” according to the Justice Department.

Source: Merced County Sheriff's Department.

Note: This waste was left alongside the road in a commercial orchard in Merced County in 2008. Among the waste were approximately 10,000 empty pseudoephedrine blister packs.

The Sudafed smurfing operations illustrate that some people are willing to go out and purchase small amounts of a product from various store locations if they are compensated with money or drugs. 

However, thanks to the “Combat Methamphetamine Epidemic Act of 2005” and the National Precursor Log Exchange (NPLEx), which track an individual’s over-the-counter (OTC) medication purchases that contain the precursors used to manufacture methamphetamine, the illicit production of this destructive drug has been hindered to a certain degree. 

Density of Pharmacies in major cities

But that law has no bearing on prepaid cards, and they are just as available as precursor chemicals were in pharmacies.

Traveling from pharmacy to pharmacy to make small incremental purchases of prepaid gift cards would work best in major cities given the density of stores within a given radius.

For instance, just look at how easy it would be to jump from pharmacy to pharmacy in Gotham.

New York City is densely populated and based on a list issued by the Department of Health and Mental Hygiene, there were 253 pharmacies in the borough of Manhattan. 

These 253 pharmacies are only a subset of the total number of pharmacies in Manhattan because they were identified as the ones which provide the drug Naloxone without a prescription and generally most major pharmacies sell prepaid gift cards.

Map of NYC Pharmacies

Sammy the Smurf

Let’s examine a scenario where a fictitious Smurf named Sammy travels to 50 different pharmacies in one day and 250 over the course of the week as a way to see how much a smurf can launder. 

Sammy plans his route so that he visits the same pharmacy only once per week, which diminishes suspicion from the store’s employees because it appears as if he is loading his weekly wage onto a prepaid gift card.

If Sammy loaded $500 onto 50 prepaid gift cards per day, 5 days per week, it would total $25,000 per day and $125,000 per week.

If Sammy the Smurf did this over a 52-week period, he would have loaded $6,500,000 of cash onto prepaid gift cards.

Furthermore, if Sammy was compensated at the New York minimum wage of $11.00 per hour for his efforts – something a criminal group acting as a real company could do – then the cost of activating 250 prepaid gift cards over the course of one week would amount to approximately three times his weekly wage for a 40-hour work week.

The total annual cost of implementing this smurfing scheme – card activation fees plus a full-time smurf earning the New York minimum wage – would be roughly $87,880, or about 1.35 percent of the $6,500,000 in cash loaded onto prepaid gift cards.
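For the arithmetic-minded, a short Python sketch reproduces the back-of-the-envelope math (the $5 per-card activation fee is an assumption chosen so the total matches the $87,880 figure above):

CARD_LIMIT = 500          # max load per non-reloadable open-loop gift card
CARDS_PER_DAY = 50
DAYS_PER_WEEK = 5
WEEKS = 52
ACTIVATION_FEE = 5.00     # assumed per-card fee
WAGE = 11.00              # New York minimum wage used in the example
HOURS_PER_WEEK = 40

loaded_per_week = CARD_LIMIT * CARDS_PER_DAY * DAYS_PER_WEEK      # $125,000
loaded_per_year = loaded_per_week * WEEKS                         # $6,500,000

fees = ACTIVATION_FEE * CARDS_PER_DAY * DAYS_PER_WEEK * WEEKS     # $65,000
wages = WAGE * HOURS_PER_WEEK * WEEKS                             # $22,880
total_cost = fees + wages                                         # $87,880

print(f"loaded: ${loaded_per_year:,.0f}")
print(f"cost:   ${total_cost:,.0f} ({total_cost / loaded_per_year:.2%} of the load)")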

Evading the Currency Transaction Report (CTR) threshold

Sammy the Smurf was a rather extreme example of prepaid gift card smurfing, but merchants controlled by a criminal group, or working in concert with criminals for a fee, could use this scheme on a smaller scale to evade the $10,000 CTR threshold.

There is nothing wrong with a customer depositing more than $10,000 in cash at their local bank, but it will need to be reported to FinCEN, the nation’s financial intelligence unit and arbiter of anti-money laundering (AML) laws, so some business owners are wary of depositing that much cash at one time.

If businesses were motivated to remain below the $10,000 threshold, for whatever reason, they could load a couple of thousand dollars onto prepaid gift cards and process those cards at their physical or virtual card terminals.

But that raises the question: should a prepaid gift card be considered equivalent to cash?

Well, of course it should. If anyone can load cash onto a prepaid gift card at various retail locations, and a merchant can process those cards to produce an automated clearing house (ACH) transfer directly into its bank account, then what is the difference between that process and walking into a bank and depositing the physical cash?

Clustering customers and activity for outlier detection

To answer that, you have to get a better understanding of how prepaid card networks operate.

If a co-opted merchant were engaging in a prepaid gift card smurfing scheme, there would be discernible red flags which could be detected by the merchant acquirer.

Merchant acquirers enable merchants to “process credit and debit card payments and help in increasing sales by accepting the most popular cards to attract customers to their businesses,” according to a report by Capgemini.

Typically, a card payment transaction involves two sides: the first between the cardholder and the bank that issued their card; and the second between the merchant and the acquiring bank, according to the report.  

On the whole, cardholders only deal with merchants and the issuing bank while performing card transactions; they are not concerned with the merchant acquiring side of the industry, according to the technology and consulting firm.

However, this second acquiring side of the industry contains a network of highly advanced intermediaries who handle card transactions via authorization, clearing and settlement, and dispute management, according to the firm.

But merchant acquirers may need some help when it comes to identifying certain patterns that could be tied to money laundering.

To detect this type of behavior, the merchants would need to be clustered into similar segments that could be based on business classification, card type and processing volumes.

The businesses could be classified by standard industry lists such as the merchant category code (MCC) or the North American Industry Classification System (NAICS) code. 

The card processing volumes would be the total volume and value of transactions processed over a specific period of time. Merchants could be grouped into volume categories such as low, medium, high and very high. 

The business classification and volume category would be used to determine the average processing profile for each category of credit, debit and prepaid cards. The card type can be determined by the first six digits of the card number, known as the bank identification number (BIN).

As an example, let’s say the merchant acquirer grouped all of their merchants with a MCC of 5812 (Eating places and Restaurants) together with an average card processing volume between $10,000 and $30,000 per month. 

When all of the merchants within the 5812 code and the $10,000 to $30,000 processing band are grouped together, suppose the average prepaid gift card share of volume is calculated to be 1.5 percent.

By parsing out these data points, if a particular merchant has a prepaid gift card share of 40 percent, a merchant acquirer could flag it as potentially suspicious because it is a significant deviation from the other members of the group.
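A minimal Python sketch of that peer-group comparison might look like the following (merchant names, volumes and prepaid shares are invented, and the five-times-peer-average cutoff is an illustrative choice, not an industry standard):

from collections import defaultdict

# Hypothetical merchant feed: (name, MCC, monthly card volume, prepaid share).
merchants = [
    ("Bistro A", "5812", 18_000, 0.015),
    ("Bistro B", "5812", 22_000, 0.020),
    ("Bistro C", "5812", 25_000, 0.400),   # 40% prepaid -- the outlier
    ("Diner D",  "5812", 12_000, 0.010),
]

def volume_band(volume):
    return "10k-30k" if 10_000 <= volume <= 30_000 else "other"

# Group peers by (MCC, volume band), as described above.
groups = defaultdict(list)
for name, mcc, vol, prepaid_share in merchants:
    groups[(mcc, volume_band(vol))].append((name, prepaid_share))

DEVIATION_FACTOR = 5   # illustrative: flag shares 5x the peer average

for key, members in groups.items():
    for name, share in members:
        peers = [s for n, s in members if n != name]
        if not peers:
            continue
        peer_avg = sum(peers) / len(peers)
        if share > DEVIATION_FACTOR * peer_avg:
            print(f"{name}: prepaid share {share:.1%} vs peer average {peer_avg:.1%}")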

The cost of regulation

But in order to nudge merchant acquirers to engage in ever more rigorous and technical analysis, it could take new, costly regulations or explicit AML obligations – something the sector would likely fight.

As the cost of regulation continues to balloon across various sectors of financial services, a question bubbles to the surface: is the problem really the regulation, or the way organizations attempt to comply?

Merchant acquirers and the major card brands do not have the same money laundering exposure as traditional banks, but this doesn’t mean that their products and services can’t be exploited for illicit purposes. 

The cash generated by illicit activities can derive from the most atrocious crimes, and the ability to point law enforcement in the right direction through a suspicious activity report (SAR) has its own intrinsic value, regardless of the dollar amount.

Read More
Keith Furst

How the 2016 United States Presidential Election Redefined Risk Management for the Better

The results of the 2016 United States presidential election were shocking to some, given the polling statistics and the mainstream media's narrative around a highly probable election outcome that turned out to be wrong.


No matter what you think of Trump or the final result, the election will be studied for generations because it defied conventional wisdom; members of the mainstream media asserted it would be a long shot for Trump to pull off a victory. Ultimately, there was an industry-wide systematic failure in the way the polling models were designed and executed. So how did an obscure professor from Stony Brook prove the big polling companies wrong?

Professor Helmut Norpoth's Primary Model

On March 7, 2016, Professor Norpoth predicted that Donald Trump would defeat Hillary Clinton or Bernie Sanders with a confidence level of 87% or 99%, respectively. Professor Norpoth turned out to be right and proved that many of the major polling and news organizations were just flat out wrong. Well, he didn't get everything right: his model predicted Trump would win the popular vote, which turned out to be false, but Trump did win the Electoral College.


Professor Norpoth's primary model is based on two major inputs: the results of election primaries and the "swinging of the pendulum," or the tendency for the White House party to change after two consecutive terms. In an interview with Fox News, Professor Norpoth explained that since Barack Obama didn't do as well getting reelected as he did getting elected, it was a strong indication that the 2016 presidential race would be a swing election. In other words, the momentum was already in favor of Republicans based on historical data, regardless of the nominee.

Helmut Norpoth returns to 'Fox & Friends' after his prediction became reality

What does it mean to risk management?

According to ISO 31000, risk is defined as the "effect of uncertainty on objectives". Hence, risk management is an attempt to manage uncertainty, and the way this is done across many sectors of the economy is to build quantitative models. The results of the 2016 US presidential election are a solemn reminder of one crucial but seemingly easy-to-forget fact: the models we build are only approximations of reality, not reality itself. But what happens when an entire industry (pollsters) essentially relies on the same fundamental model without making any modifications based on new information?

The map is not the territory

Alfred Korzybski was a Polish scholar who famously said that "the map is not the territory."  Korzybski was highlighting the fact that a map is an abstraction of the territory, and while it may be useful to us, it is not the territory itself. This is an obvious but increasingly blurry distinction as our reliance on technology continues to grow.

Have you ever had the experience of driving to an address under the guidance of a global positioning system (GPS) and it took you to the wrong location? While it may have been an inconvenience at the time, it illustrates our reliance on models and algorithms and the expectation we set of 100% accuracy all of the time.

Is there really strength in numbers?

Two common questions that a financial institution's senior leadership asks consultants implementing regulatory compliance systems are:

  • How are other banks doing it?
  • What is the industry standard?

These are actually both really good questions which senior leadership should be asking their consultants and peers at other financial institutions. It's the idea that the collective intelligence of a diverse group of practitioners will sum to a more comprehensive and accurate model than any one financial institution can come up with by itself. There is merit in this notion, but what if the collective gets it wrong because of complacency, inadaptability and a general banking culture which discourages dissenting opinions? If risk management is truly about managing uncertainty, then isn't there an inherent risk in strict adherence to an industry standard? Or should financial institutions meet their regulatory expectations and adhere to industry standards, but simultaneously strive to identify limitations in their current models and create a culture which embraces innovation and creativity?

Read More
Keith Furst

Trade-Based Money Laundering in Southeast Asia: Risks, Trends and Mitigation Measures

Trade-based money laundering (TBML) in Southeast Asia has been on the rise in recent years, driven by a confluence of factors including robust economic growth, a relatively weak regulatory environment (with the key exception of Singapore), corruption issues, and the presence of sophisticated transnational criminal networks. These factors, as well as political, socio-economic and cultural dynamics, put the region at heightened risk for TBML, which financial institutions, corporations and governments should play a greater, more proactive role in combating.

Editor's Note: This article originally appeared on the Access Asia Consulting blog on November 28, 2016.


According to the Organization for Economic Co-operation and Development (OECD), the economies of the Association of Southeast Asian Nations (ASEAN) are expected to grow at a rate of 5.2 percent from 2016 to 2020, led by Vietnam and the Philippines. The major contributors to growth include strong fixed investment, foreign direct investment and increasing demand for goods from both domestic and foreign customers. But as domestic and international trade increases in the region, coupled with enduring signs of bribery and corruption issues, so does the risk of TBML.

The United States Department of Homeland Security has defined TBML as “disguising criminal proceeds through trade to legitimize their illicit origins.” The characteristics of this misconduct include misrepresenting the price, quantity and the quality of either imports or exports. Some of the red flags are over-invoicing and under-invoicing exports and imports, multiple invoicing of goods, manipulating the quantity and even phantom shipments.  In addition, payments to a vendor by an unrelated third party and unusual shipping routes are also suspicious indicators.

So, why does TBML matter to financial institutions, corporations and governments? The first reason is its potential adverse consequences for the domestic economy, such as governments receiving lower tax revenues when companies under-invoice the value of their shipments. Misrepresenting the value or the quantity of imports and exports may allow criminals and corrupt government officials to transfer capital more easily in order to legitimize the source of funds. Regulatory scrutiny around TBML continues to increase across the globe, and financial institutions should take the necessary steps to ensure they do not facilitate nefarious trade deals.

According to research by the Washington-based group Global Financial Integrity (GFI), three of the top 10 countries for illicit financial flows (IFFs) are in ASEAN: Malaysia, Thailand and Indonesia (ranked 5, 8 and 9, respectively). However, when the top 10 countries for IFFs are compared to their gross domestic product (GDP), the magnitude of the issue is put into perspective. Malaysia can be categorized as the country with the potentially highest risk of TBML because 14.1 percent of its GDP has been identified as potential IFFs.

Another susceptible country is Cambodia. GFI estimated that over $15 billion was lost to illicit financial outflows between 2004 and 2013 – including US$4 billion in 2013 – most of it secretly shifted offshore using a technique known as trade misinvoicing. Access Asia also puts Myanmar and Vietnam at heightened risk for TBML in Southeast Asia, where misinvoicing (including overpricing on imports sold by a member company incorporated in the import country) is a common way of reducing recorded profits to evade taxes. Thus, it is important for financial institutions in such countries to conduct thorough know-your-client due diligence.

Illicit financial flows can have a detrimental effect on a country's economy by removing revenue from governments that could otherwise be used for various development initiatives. In Cambodia, one opposition lawmaker was quoted in local media as saying that the US$4 billion reportedly lost to illicit financial outflows in 2013 was more than the entire national budget.

TBML is difficult to detect because of the complexity of the trade finance process itself and the number of entities involved. The unstructured format of the required documentation, such as Word documents, PDF files and scanned images, creates additional automation and screening challenges.

So, what more can financial institutions do in the fight against TBML? Access Asia recently engaged with Keith Furst, the founder and financial crimes technology consultant at Data Derivatives, who provided us his views on the issue.

According to Furst, utilizing advanced algorithms should be considered as an effective resource to augment a financial institution’s existing anti-money laundering program specifically to address the unique challenges presented by trade finance.

Furst explained:

Much of the unstructured data in the form of PDF files and scanned images can be converted to machine-readable text by leveraging optical character recognition (OCR) software. Once these documents are converted into machine-readable text, other algorithms can be applied to them, such as natural language processing (NLP), where key data elements are extracted for analysis, such as geographies, entities, individuals, ports, names of products, quantities and unit prices. There are other opportunities to leverage advanced algorithms, such as unit price and unit weight analysis.

The unit price analysis would focus on product pricing falling outside what would be considered normal for that industry and product type in the transaction.  This is an incredibly complex task given name similarity among dissimilar products, range of quality, volume discounts, etc.  However, certain products and industries would be easier to accumulate pricing profiles and determine discrepancy red flags.  Similarly, unit weight analysis would also focus on discrepancy identification, but for volume as opposed to price.  Nefarious actors may try to understate or overstate the quantity of goods shipped when compared to the actual payments made.  If the payment amount was abnormally low when compared to the product and container it was shipped in then this could be a red flag for an undervalued shipment.
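As a rough illustration of the unit price analysis Furst describes, the Python sketch below flags an invoice whose implied unit price deviates sharply from a historical profile (the product, reference prices and z-score cutoff are invented for the example):

import statistics

# Hypothetical per-unit price history for one product, built from past trades.
reference_unit_prices = {"frozen shrimp (kg)": [8.2, 8.5, 7.9, 8.8, 8.4]}

def unit_price_red_flag(product, invoice_value, quantity, z_cutoff=3.0):
    """Flag an invoice whose implied unit price deviates sharply from profile."""
    history = reference_unit_prices[product]
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    unit_price = invoice_value / quantity
    z = abs(unit_price - mean) / stdev
    return z > z_cutoff, unit_price

# An invoice of $95,000 for 2,000 kg implies $47.50/kg -- a classic
# over-invoicing signal against a profile centered near $8.40/kg.
flagged, price = unit_price_red_flag("frozen shrimp (kg)", 95_000, 2_000)
print(flagged, round(price, 2))   # True 47.5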

TBML is a very effective tool for transnational organized crime groups to move value across international borders, and the prevalence of such groups operating throughout Southeast Asia – many with links to domestic and international terrorism – is another factor that puts the region at heightened risk for TBML.

Financial institutions tend to be reactive as opposed to proactive when it comes to complying with regulatory requirements, yet taking a strategic approach to compliance can actually be a competitive advantage.  Access Asia believes that financial institutions which forecast upcoming trends in the compliance and regulatory space and prepare accordingly will be better equipped to deal with the increased regulatory expectations when compared to their competitors.

Ultimately it will be up to the banks serving Southeast Asia to spearhead the campaign against TBML in their local jurisdictions; a failure to do so could eventually lead to a de-risking process by the large global banks, which could affect the availability and cost of trade finance products in the region. Currently, the Monetary Authority of Singapore (MAS) is taking the lead in combating TBML in Southeast Asia, yet Access Asia believes it is only a matter of time before other countries follow suit. If they fail to do so, such countries will fall behind in terms of perception in the eyes of the global financial community, and this will negatively affect investment and trade opportunities.


Read More
Keith Furst

Money Laundering in Canada 2016 Conference

Data Derivatives was featured as one of the sponsors at the Money Laundering in Canada 2016 Conference.

Keith Furst gave presentations on "The SWIFT transparency problem" and "How to conduct an anti-money laundering (AML) system assessment." Both presentations were filmed and will be posted in the coming weeks.

The presentation from the event is below.

Read More