Exploring Intentional Bias in the Marketing of Consumer Products

Epstein Becker & Green

Advances in artificial intelligence (“AI”) continue to present exciting opportunities to transform decision-making and targeted marketing within the world of consumer products. While AI has been touted for its capabilities in creating fairer, more inclusive systems, including with respect to lending and creditworthiness, AI models can also embed human and societal biases in a way that can result in unintended, and potentially unlawful, downstream effects.

When we talk about bias in AI, we usually focus on accidental bias. But what about intentional bias? The following hypothetical illustrates the problem as it relates to the marketing of consumer products.

In targeted advertising, an algorithm learns all sorts of things about a person through social media and other online sources, and then targets ads to that person based on the data collected. Let’s say that the algorithm targets ads to African Americans. By “intentional” we don’t mean to suggest that the software developer has racist or otherwise nefarious objectives relating to African Americans. Rather we mean that the developer simply intends to make use of whatever information is out there to target ads to that particular population (even if that data is specifically race or data that correlates with race, such as ZIP Code). This raises a number of interesting questions.
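To make the proxy-variable point concrete, consider the following minimal sketch. Everything in it is invented for illustration: the ZIP codes, the demographic figures, and the `target_audience` rule are hypothetical, and no real targeting platform is being described. The point is only that a rule that never mentions race can still, in effect, select by race through a correlated proxy.

```python
# Hypothetical sketch: ZIP code as a proxy for race in ad targeting.
# All ZIP codes and demographic figures below are invented.

# Invented demographics: fraction of residents in each ZIP code who are
# African American -- a proxy variable correlated with race.
zip_demographics = {
    "60619": 0.93,  # hypothetical majority-Black ZIP
    "30310": 0.88,  # hypothetical majority-Black ZIP
    "60614": 0.05,  # hypothetical comparison ZIP
    "98052": 0.03,  # hypothetical comparison ZIP
}

def target_audience(zip_codes, threshold=0.5):
    """Select ZIP codes whose proxy demographic share exceeds a threshold.

    The rule never references race directly, yet the selection
    effectively singles out a racial group via the correlated proxy.
    """
    return [z for z in zip_codes if zip_demographics.get(z, 0.0) > threshold]

targeted = target_audience(zip_demographics)
print(targeted)  # the selected ZIPs are, in effect, a racial segment
```

Here the developer's stated intent is simply "use whatever data predicts responsiveness," but the output is indistinguishable from targeting on race itself, which is why the legal questions below arise.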

Setting aside certain situations involving bona fide occupational qualifications (for those familiar with employment law), would this be okay legally? What if the product is certain hair care products or a particular genre of music? What about rent-to-own furniture based on data that suggest that African Americans are greater than average consumers of such furniture? Taking this scenario a step further, what if it is well documented that rent-to-own arrangements are a significant contributing factor to poverty among African Americans?

Bias can also be introduced into the data through the way in which the data are collected or selected for use. What if the data, collected from predominately African American ZIP Codes, suggest that African Americans typically are willing to pay higher rental rates, and so the advertisements directed to African Americans include those higher rates? Could the companies promoting these advertisements based on those statistical correlations be subject to liability for predatory or discriminatory lending practices? Do we still need human judgment to make sure that AI-supported decision-making is fair?
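The feedback loop described above can be sketched in a few lines. Again, everything here is hypothetical: the rental figures and ZIP codes are invented, and `advertised_rate` is a deliberately simplistic stand-in for a pricing model. The sketch shows how rates sampled from particular neighborhoods get echoed back to those same neighborhoods as the "advertised" price.

```python
# Hypothetical sketch: selection bias in collected pricing data.
# All rental rates and ZIP codes below are invented.

# Invented observed rental rates, keyed by the ZIP code where the data
# were collected -- a product of where sampling happened, not ground truth.
observed_rates = {
    "60619": [1450, 1500, 1480],  # hypothetical majority-Black ZIPs
    "30310": [1420, 1460],
    "60614": [1200, 1180, 1210],  # hypothetical comparison ZIPs
    "98052": [1150, 1190],
}

def advertised_rate(zip_code):
    """Quote the average rate previously observed in that ZIP.

    Because the collected data differ by ZIP, the system "learns" to
    quote higher prices to some neighborhoods -- a feedback loop that
    can look like discriminatory pricing even though no racial field
    appears anywhere in the data.
    """
    rates = observed_rates[zip_code]
    return sum(rates) / len(rates)

print(advertised_rate("60619"))  # higher quoted rate
print(advertised_rate("60614"))  # lower quoted rate
```

Nothing in this sketch involves intent to discriminate; the disparity emerges purely from how the training data were gathered, which is precisely why human review of AI-supported pricing and advertising decisions remains important.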

These are among the questions that we'll explore in our upcoming panel on targeted advertising, and we invite you to join us.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Epstein Becker & Green | Attorney Advertising
