Banking on Your Data: The Role of Big Data in Financial Services


Christopher Gilliard

Prepared Testimony and Statement for the Record by Christopher Gilliard, Ph.D., before the House Financial Services Committee Task Force on Financial Technology


My name is Dr. Chris Gilliard, and I have spent the last six years studying, teaching, and writing about digital privacy and surveillance. I focus on the ways that digital technologies perpetuate and amplify historical systems of discrimination. Too often, digital technology renders systems invisible and inscrutable under the guise of proprietary code, black box algorithms, or artificial intelligence. There are now countless documented examples of algorithmic discrimination,[1] data breaches, violations of consumer privacy,[2] and extractive practices on the part of platforms.[3] At present, under a de facto ethic of “move fast and break things” that operates behind codewords like innovation and disruption, and in an environment where the few existing regulations are seldom enforced, companies have been able to use consumer data in whatever ways serve their financial interests. Moving forward, the onus for addressing these problems must shift onto companies: before they move a product to market, they should be required to provide evidence that it will not harm consumers, much as food and drug safety regulation operates now.


When we consider how Big Data operates in the financial marketplace today, it may be neither possible nor useful to draw a distinction between “financial big data” and all other data. Financial “big data” plays a role not only in finance, insurance, and real estate, but also in employment, transportation, education, retail, and medicine. Because the market does not make that distinction, we cannot either. In addition, third-party data brokers accumulate all manner of data, to the point that even where certain categories of data are protected, processing massive amounts of data often creates proxies that allow for discrimination against protected classes within or among systems that may not appear to be “financial.” For example, Cracked Labs reports that “Oracle claims to have data on billions of purchase transactions from 1500 leading retailers.”[4]


The primary reason that many people remain unbanked is historical inequality. While new forms of banking and credit may provide access to systems from which those people have traditionally been excluded, many of these technologies offer that access in exchange for people’s privacy, or create opaque systems that offer consumers little opportunity for redress. It is telling that the Apple Goldman Sachs card[5] received so much attention: opaque algorithms affect marginalized populations all the time, yet those populations lack the reach and power to trigger massive media coverage and an investigation by the state. And the stakes could not be more different. For the rich, an opaque algorithm may mean being denied a larger credit limit; for the poor, it may determine whether they can pay for medicine, shelter, or food.


The notion that companies like Facebook, Google, and Amazon are entering banking in order to benefit the unbanked, or people who lack access to traditional credit markets, is absurd on its face. As one recent Bloomberg report on Google’s proposal to partner with banks to offer checking accounts through its Google Pay app put it:[6] “For Google, the bank partnerships will give the tech behemoth a better ability to show advertisers how marketing dollars spent on its system can drive purchases…”


There are two crucial frameworks for understanding these technologies and their impacts on marginalized communities: digital redlining[7] and predatory inclusion. Digital redlining is the creation and maintenance of technology practices that further entrench discriminatory practices against already marginalized groups. One example among many: journalists at ProPublica[8] uncovered that Facebook’s ad targeting could be used to prevent Black people from seeing ads for housing, despite the Fair Housing Act prohibiting such conduct.


Predatory inclusion is a term coined by scholars Louise Seamster and Raphaël Charron-Chénier to refer to a phenomenon whereby members of a marginalized group are offered access to a good, service, or opportunity from which they have historically been excluded, but under conditions that jeopardize the benefits of that access: “… the processes of predatory inclusion are often presented as providing marginalized individuals with opportunities for social and economic progress. In the long term, however, predatory inclusion reproduces inequality and insecurity for some, while allowing already dominant social actors to derive significant profits.”[9] As an example, consider the cash advance app Earnin, which offers loans for which users can “tip” the app. As the New York Post reported, “If the service was deemed to be a loan, the $9 tip suggested by Earnin for a $100, one-week loan would amount to a 469 percent APR.”[10] As Princeton professor Ruha Benjamin has argued, “our starting assumption should be that automated systems will deepen inequality unless proven otherwise.”[11]
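To make the arithmetic behind that 469 percent figure concrete, here is a minimal sketch, in Python, of how a flat fee on a short-term loan annualizes into a simple APR. This is my own illustration, not Earnin’s pricing model or any regulator’s formula:

    def simple_apr(fee, principal, term_days):
        """Annualize a flat fee on a short-term loan as a simple APR, in percent."""
        period_rate = fee / principal        # e.g., $9 on $100 = 9% per period
        periods_per_year = 365 / term_days   # a one-week loan recurs ~52 times a year
        return period_rate * periods_per_year * 100

    # The $9 "tip" suggested for a $100, one-week advance:
    print(f"{simple_apr(9, 100, 7):.0f}% APR")  # prints "469% APR"

The point of the calculation is that a seemingly small flat fee, repeated weekly, prices the product like a high-cost payday loan, whatever the fee is called.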

Because of how algorithms are created and trained, historical biases make their way into systems even when computational tools do not use identity markers as inputs for decision-making: preexisting social realities mean that many different data points can serve as proxies for prohibited categories. Further, the notions of consent as currently constructed (“notice and consent” or “informed consent”) are not sufficient, for a number of reasons: privacy policies mainly serve to protect companies; credit scoring companies operate without the express consent of the consumers they purportedly serve (I cannot, for instance, opt out of being a “customer” of Experian, Equifax, and TransUnion); data is extracted, collected, combined, processed, and used in ways that go beyond the stated purpose provided to consumers; companies face limited accountability when they have been irresponsible with consumer data; and companies rarely disclose, and consumers even more rarely understand, the full range of uses for their data.
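To illustrate the proxy dynamic described above, here is a minimal, hypothetical sketch in Python. The population, the ZIP codes, and the approval rule are all invented for illustration; the point is only that a rule that never sees a protected attribute can still sort people by it when a correlated variable, here a ZIP code shaped by historically segregated housing, stands in for it:

    import random

    random.seed(0)

    # Synthetic population: group membership is never shown to the scoring
    # rule, but it is strongly correlated with ZIP code.
    people = []
    for _ in range(10_000):
        protected = random.random() < 0.5
        # Members of the protected group live in ZIP 48203 ~90% of the time.
        zip_code = "48203" if protected == (random.random() < 0.9) else "48009"
        people.append((protected, zip_code))

    # A "neutral" approval rule that looks only at ZIP code.
    def approve(zip_code):
        return zip_code == "48009"

    for group in (True, False):
        decisions = [approve(z) for p, z in people if p == group]
        print(f"protected={group}: approval rate {sum(decisions) / len(decisions):.0%}")

    # Prints roughly 10% vs. 90% approval by group, even though the rule
    # never touches the protected attribute.

Real systems combine hundreds of such variables, which makes the proxies harder to spot, not easier.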


We must reject the notion that regulations stifle innovation: those harmed during “innovation” phases tend to be the most marginalized, and policies are addressed only later, with no repair of the harms already done. The idea that corporate innovation, rather than the rights of historically marginalized groups, is the interest Congress must protect turns ideas of citizenship and civil rights upside down. The typical life cycle of a technological harm runs as follows: human decision-making leads to a technical failure; because these systems are proprietary, the harms are difficult to detect; companies offload the responsibility for detecting harms onto researchers and journalists; companies correct the harm only after their discrimination or failures have been pointed out, and even then grudgingly and often incompletely; and finally, the entrenchment of the unregulated system is used as an argument that there should be no further regulation.


While at the beginning of this statement I called for companies to provide evidence that their products first do no harm, this should not be mistaken for a call for corporate self-regulation. That model is unsafe and unsustainable. Consumers and regulators alike must be empowered in order to create an environment that fully protects individuals’ rights.


  1. For more information, see Safiya Noble, Algorithms of Oppression (2018); Virginia Eubanks, Automating Inequality (2018).
  2. See Carole Cadwalladr’s work on Facebook and Cambridge Analytica, https://www.theguardian.com/news/series/cambridge-analytica-files
  3. For more information, see Shoshana Zuboff, The Age of Surveillance Capitalism (2019).
  4. Wolfie Christl, “Corporate Surveillance in Everyday Life,” Cracked Labs (2017), https://crackedlabs.org/dl/CrackedLabs_Christl_CorporateSurveillance.pdf
  5. For more information, see https://www.washingtonpost.com/business/2019/11/11/apple-card-algorithm-sparks-gender-bias-allegations-against-goldman-sachs/
  6. Jennifer Surane, “Google Checking Accounts May Give Banks an Edge in Deposit Wars,” Bloomberg, https://www.bloomberg.com/news/articles/2019-11-17/google-checking-accounts-may-give-banks-an-edge-in-deposit-wars
  7. Chris Gilliard and Hugh Culik, “Digital Redlining, Access, and Privacy,” Common Sense Education, https://www.commonsense.org/education/articles/digital-redlining-access-and-privacy
  8. Julia Angwin and Terry Parris Jr., “Facebook Lets Advertisers Exclude Users by Race,” ProPublica, https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race
  9. Louise Seamster and Raphaël Charron-Chénier, “Predatory Inclusion and Education Debt: Rethinking the Racial Wealth Gap,” Social Currents 4, no. 3 (2017).
  10. Kevin Dugan, “Popular cash advance app Earnin operating in payday loan ‘gray area,’ critics claim,” New York Post, https://nypost.com/2019/03/21/popular-cash-advance-app-earnin-operating-in-payday-loan-gray-area-critics-claim/
  11. “Bonus: Breaking the Black Box,” Rework, a podcast by Basecamp, https://podcasts.apple.com/us/podcast/bonus-breaking-the-black-box/id1264193508?i=1000456947960

License


Making Sense of Digital Humanities Copyright © 2022 by Christopher Gilliard is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
