Contents:
Exponential Bias Automated
What is Bias
Vicious Cycle - Automating Bias
71% of YouTube's COVID-19 misinformation was recommended by algorithm
A crowdsourced investigation into YouTube's recommendation algorithm
Dissecting racial bias in an algorithm used to manage the health of populations
Bias & Sepsis - Unfortunately Prevalent
Double Standard in Medicine
Algorithmic Bias Cheat Sheet
Step 1: Inventory Algorithms
Step 2: Screen for Bias
Step 3: Retrain Biased Algorithms (or Throw Them Out)
Step 4: Set Up Structures to Prevent Future Bias
Further Reading
The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency. - Bill Gates
“Data is never this raw, truthful input and never neutral. It is information that has been collected in certain ways by certain actors and institutions for certain reasons.” - Catherine D’Ignazio, Assistant Professor at Massachusetts Institute of Technology (MIT)
“the degree to which a reference value deviates from the truth”
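That definition can be made concrete with a toy numeric example (the numbers are made up): bias is the systematic offset between a measurement process and the true value.

```python
# Toy illustration of bias as a systematic deviation from the truth.
# All values are invented for the example.
truth = 100.0
measurements = [103.0, 102.0, 104.0, 103.0]  # a miscalibrated instrument

# Bias: how far the average reading sits from the true value.
bias = sum(measurements) / len(measurements) - truth
print(bias)  # 3.0
```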
"When it's actively suggesting that people watch content that violates YouTube's policies, the algorithm seems to be working at odds with the platform's stated aims, their own community guidelines, and the goal of making the platform a safe place for people." Brandi Geurkink, Mozilla's Senior Manager of Advocacy
71% of YouTube's COVID-19 misinformation was recommended by algorithm, study says
YouTube's recommendation algorithm still regularly suggests videos with COVID-19 misinformation, according to research published July 7 by nonprofit Mozilla Foundation. The nonprofit crowdsourced its investigation by having more than 37,000 YouTube users report content that contained misinformation, violence or hate.
A health care algorithm affecting millions is biased against black patients
A health care algorithm makes black patients substantially less likely than their white counterparts to receive important medical treatment - @colinlecher
We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias.
Remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7% to 46.5%.
The bias arises because the algorithm predicts health care costs rather than illness.
Unequal access to care means that we spend less money caring for Black patients than for White patients.
Thus, despite health care cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise.
Cost Is A Reasonable Proxy For Health, But It’s A Biased One
We suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts.
We must change the data we feed the algorithm—specifically, the labels we give it. Because labels are the key determinant of both predictive equality and predictive bias, careful choice can allow us to enjoy the benefits of algorithmic predictions while minimizing their risks - Obermeyer
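The cost-as-proxy mechanism described above can be shown with a toy simulation (synthetic data, not the paper's): if one group incurs lower costs for the same level of illness, ranking patients by cost under-selects that group, and those of its members who are selected are sicker than others at the same cutoff.

```python
# Synthetic sketch of label-choice bias: illness is the ideal target,
# cost is the convenient proxy actually used as the label.
import random
import statistics

random.seed(0)

patients = []
for group in ("A", "B"):
    for _ in range(5000):
        illness = random.gauss(0, 1)  # true health need
        # Group B receives less care for the same illness (unequal access),
        # so observed cost understates their need. The 0.6 factor is invented.
        access = 1.0 if group == "A" else 0.6
        cost = access * illness + random.gauss(0, 0.3)
        patients.append((group, illness, cost))

# "Algorithm": rank everyone by cost and flag the top quartile for extra help.
patients.sort(key=lambda p: p[2], reverse=True)
flagged = patients[: len(patients) // 4]

# At the same cost cutoff, flagged group-B patients are sicker on average.
need = {g: statistics.mean(ill for grp, ill, _ in flagged if grp == g)
        for g in ("A", "B")}
print(need)
```

With this setup, group B is both under-represented among flagged patients and sicker within the flagged set, mirroring the pattern the paper describes.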
A health care algorithm makes black patients substantially less likely than their white counterparts to receive important medical treatment. The major flaw affects millions of patients, and was just revealed in research published this week in the journal Science.
Obermeyer et al. 2019 - Dissecting racial bias in an algorithm used to manage the health of populations.pdf
Dissecting racial bias in an algorithm used to manage the health of populations
The U.S. health care system uses commercial algorithms to guide health decisions. Obermeyer et al. find evidence of racial bias in one widely used algorithm, such that Black patients assigned the same level of risk by the algorithm are sicker than White patients (see the Perspective by Benjamin).
The Epic Sepsis Model (ESM), a proprietary sepsis prediction model, is implemented at hundreds of US hospitals. The ESM’s ability to identify patients with sepsis has not been adequately evaluated despite widespread use.
27,697 patients with 38,455 hospitalizations met inclusion criteria.
Incidence: Sepsis occurred in 2,552 of the 38,455 hospitalizations (7%).
Low Sensitivity: The ESM failed to identify 1,709 (67%) of the patients with sepsis; it identified only 183 patients with sepsis who had not already received timely antibiotics.
Alert Fatigue: The ESM generated alerts on 6,971 of all 38,455 hospitalizations (18%), creating a large burden of alert fatigue.
Conclusions and Relevance: This external validation cohort study suggests that the ESM has poor discrimination and calibration in predicting the onset of sepsis. The widespread adoption of the ESM despite its poor performance raises fundamental concerns about sepsis management on a national level.
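The headline performance figures follow directly from the counts above; a quick arithmetic check:

```python
# Recomputing the ESM's performance from the counts reported above
# (Wong et al. 2021). This is arithmetic on the stated numbers, not new data.
hospitalizations = 38_455
sepsis_cases = 2_552
missed = 1_709          # sepsis cases the ESM never flagged
alerts = 6_971          # hospitalizations that triggered an ESM alert

caught = sepsis_cases - missed            # true positives
sensitivity = caught / sepsis_cases       # fraction of sepsis cases flagged
ppv = caught / alerts                     # fraction of alerts that are real
alert_rate = alerts / hospitalizations    # alert burden on clinicians

print(f"sensitivity={sensitivity:.0%}, PPV={ppv:.0%}, alert rate={alert_rate:.0%}")
```

Roughly one in three sepsis cases is caught, and only about one in eight alerts corresponds to a real case, which is why alert fatigue dominates.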
The REAL Problem: The algorithm used billing records for sepsis to define which patients had sepsis, not the measure of sepsis that researchers would ordinarily use.
We need to stop using billing, cost, and other proxy data for algorithm development.
We need to independently evaluate and report real-world results before algorithms are implemented in medicine.
Wong et al. 2021 - External Validation of a Widely Implemented Proprietary Sepsis Prediction Model in Hospitalized Patients.pdf
A popular algorithm to predict sepsis misses most cases, study finds
It was a win-win. Hospitals needed to prevent patient deaths from sepsis, a complication of infection; and Epic, the nation's largest seller of medical records, needed users for its new product - an algorithm that could predict which patients would develop the condition so doctors could intervene earlier.
A hospital algorithm designed to predict a deadly condition misses most cases
The biggest electronic health record company in the United States, Epic Systems, claims it can solve a major problem for hospitals: identifying signs of sepsis, an often deadly complication from infections that can lead to organ failure. It's a leading cause of death in hospitals.
"I don’t know how bad this is yet, but I think we’re going to keep uncovering a bunch of cases where algorithms are biased and possibly doing harm.” - Heather Mattie, Harvard University
A clear double standard in medicine: While health care institutions carefully scrutinize clinical trials, no such process is in place to test algorithms commonly used to guide care for millions of people.
'Nobody is catching it': Algorithms widely used in hospitals are rife with bias
The algorithms carry out an array of crucial tasks: helping emergency rooms nationwide triage patients, predicting who will develop diabetes, and flagging patients who need more help to manage their medical conditions.
Step 1A: Talk to relevant stakeholders about how and when algorithms are used: Create a list of algorithms within your organization; consider broad definitions of algorithms and ask open-ended questions.
Step 1B: Designate a ‘steward’ to maintain and update the inventory: Choose a person to be responsible for keeping the inventory current, in consultation with a diverse group.
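One lightweight way to start the Step 1 inventory is a structured record per algorithm. The fields and the example entry below are illustrative, not prescribed by the playbook:

```python
# A minimal sketch of an algorithm inventory (Step 1A/1B) kept as CSV rows.
# Field names and the sample entry are invented for illustration.
import csv
import io

FIELDS = ["name", "vendor", "purpose", "target_variable",
          "populations_affected", "steward", "last_reviewed"]

inventory = [{
    "name": "readmission-risk-v2",
    "vendor": "in-house",
    "purpose": "flag patients for care-management outreach",
    "target_variable": "cost next year (a proxy - see Step 2A)",
    "populations_affected": "all adult inpatients",
    "steward": "clinical-analytics team",
    "last_reviewed": "2021-06-01",
}]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(inventory)
print(buf.getvalue())
```

Recording the target variable explicitly is what makes the Step 2A ideal-vs-actual comparison possible later.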
Step 2A: Articulate the ideal target (what the algorithm should be predicting) vs. the actual target (what it is actually predicting): Consider whether there is a mismatch that can cause bias.
Step 2B: Analyze and interrogate bias: Choose comparison groups (e.g., race) and perform some basic checks of how well the algorithm predicts its actual target. Then, investigate how label choice might create bias in how well the algorithm predicts its ideal target.
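The core of Step 2B can be sketched in a few lines: within each risk-score bin, compare the mean ideal-target outcome across groups. This assumes per-patient records with a score, a group label, and some measure of the ideal target; field names and data are illustrative, not from any real system.

```python
# Sketch of a Step 2B bias check: group outcomes within matched score bins.
from collections import defaultdict

def bias_check(records, n_bins=10):
    """Mean ideal-target outcome per group, within risk-score bins.

    records: iterable of dicts with keys 'score' (0-1), 'group', 'need'.
    A persistent gap inside the same score bin suggests label-choice bias.
    """
    bins = defaultdict(lambda: defaultdict(list))
    for r in records:
        b = min(int(r["score"] * n_bins), n_bins - 1)
        bins[b][r["group"]].append(r["need"])
    return {b: {g: sum(v) / len(v) for g, v in groups.items()}
            for b, groups in bins.items()}

# Toy data: at the same score, group "B" has higher true need.
records = [
    {"score": 0.9, "group": "A", "need": 2.0},
    {"score": 0.9, "group": "B", "need": 3.5},
    {"score": 0.2, "group": "A", "need": 0.5},
    {"score": 0.2, "group": "B", "need": 1.1},
]
print(bias_check(records))
```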
Step 3A: Try retraining the model on a label closer to the ideal target: Assess possible mitigations to label-choice bias by comparing results between different labels.
Step 3B: Consider alternative options (if necessary): If you are unable to improve or retrain the algorithm, consider other possible solutions. If data is the problem (a non-representative dataset, or no variables that match the ideal target), consider collecting new data.
Step 3C: Consider suspending or discontinuing the use of the algorithm (if necessary): If you are unable to improve the algorithm and/or its inputs, pause the use of the algorithm until you find a solution — or discontinue its use altogether.
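Step 3A's core move, swapping the proxy label for one closer to the ideal target and comparing who gets flagged, can be sketched as follows (all names and data are illustrative):

```python
# Sketch of Step 3A: score patients by the biased proxy label (cost) and by
# a label closer to the ideal target (an illness index), then compare flags.

def flag_top(patients, label, k):
    """Return the k patients ranked highest on the given label."""
    return sorted(patients, key=lambda p: p[label], reverse=True)[:k]

patients = [
    {"id": 1, "group": "A", "cost": 9.0, "illness": 5.0},
    {"id": 2, "group": "B", "cost": 4.0, "illness": 7.0},  # sick but low-cost
    {"id": 3, "group": "A", "cost": 6.0, "illness": 4.0},
    {"id": 4, "group": "B", "cost": 3.0, "illness": 6.0},  # sick but low-cost
]

by_cost = {p["id"] for p in flag_top(patients, "cost", 2)}
by_illness = {p["id"] for p in flag_top(patients, "illness", 2)}
print(by_cost, by_illness)
```

Relabeling shifts the flags from the high-cost patients to the sick-but-low-cost patients whom the proxy label overlooked; comparing the two flag sets is the before/after evaluation Step 3A calls for.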
Step 4A: Implement best practices for organizations working with algorithms: Under the aegis of the steward and a diverse team, conduct recurring audits and ensure rigorous documentation of current and future models.
Algorithmic Bias Initiative
This playbook will teach you how to define, measure, and mitigate racial bias in live algorithms. By working through concrete examples (cautionary tales), you'll learn what bias looks like. You'll also see reasons for optimism: success stories that demonstrate how bias can be mitigated, transforming flawed algorithms into tools that fight injustice.
EFL Playbook: Mitigating Bias in Artificial Intelligence
Artificial intelligence (AI) is increasingly employed to make decisions affecting most aspects of our lives, particularly as digital transformation is accelerating in the face of COVID-19.
What Do We Do About the Biases in AI?
Over the past few years, society has started to wrestle with just how much human biases can make their way into artificial intelligence systems, with harmful results. At a time when many companies are looking to deploy AI systems across their operations, being acutely aware of those risks and working to reduce them is an urgent priority.