The idea of using predictive analytics in child welfare easily conjures images of child abuse investigators targeting parents a machine deems most likely to harm their children.
Because black families are so disproportionately likely to be involved with the child protection system, critics credibly argue that predictive risk modeling will only exacerbate existing racial bias. But what if the tool were used in the other direction: to identify child abuse investigators who disproportionately target black children for removals?
Earlier this month, I moderated a panel on predictive analytics at a child welfare conference at the University of Pennsylvania. A member of the audience – a black woman – asked why police departments don’t use this technology to target, for example, trigger-happy cops predisposed to kill young black men.
One of the panelists, Richard Berk, an expert on statistical analysis within the criminal justice system, said that it is possible to use predictive analytics to do just that, but such applications have been met with fierce resistance from “police unions.”
Later, over email, Berk pointed me to two studies that make it clear that big data can be an ally in identifying troublesome and dangerous cops.
One examined 500,000 police stops in New York City in 2006. Aside from the troubling fact that nearly 90 percent of those stops involved nonwhite pedestrians, the researchers’ algorithms were able to identify “15 NYPD officers who appear to be stopping an unusually large fraction of nonwhite pedestrians, and flag[s] 13 officers who appear to be stopping substantially fewer nonwhite pedestrians than expected.”
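For readers curious how that kind of flagging works in principle, here is a minimal sketch – not the study’s actual method, which relied on a more careful internal benchmarking design – of how one might compare each officer’s share of nonwhite stops against peers working the same precinct and shift. All column names here are hypothetical.

```python
# Simplified, illustrative sketch only -- NOT the method used in the NYPD study.
# Assumes a stop-level table with hypothetical columns: officer_id, precinct,
# shift, and suspect_nonwhite (1 if the person stopped was nonwhite).
import pandas as pd
from scipy.stats import binomtest

def flag_outlier_officers(stops: pd.DataFrame, alpha: float = 0.001) -> pd.DataFrame:
    """Flag officers whose share of nonwhite stops deviates sharply from the
    rate of other officers working the same precinct and shift."""
    flagged = []
    for (precinct, shift), group in stops.groupby(["precinct", "shift"]):
        for officer, own in group.groupby("officer_id"):
            peers = group[group["officer_id"] != officer]
            if len(peers) == 0 or len(own) < 50:  # skip thin comparisons
                continue
            benchmark = peers["suspect_nonwhite"].mean()  # peers' nonwhite share
            result = binomtest(int(own["suspect_nonwhite"].sum()), len(own), benchmark)
            if result.pvalue < alpha:
                flagged.append({
                    "officer_id": officer,
                    "precinct": precinct,
                    "shift": shift,
                    "officer_rate": own["suspect_nonwhite"].mean(),
                    "benchmark_rate": benchmark,
                    "p_value": result.pvalue,
                })
    return pd.DataFrame(flagged)
```

The point of benchmarking against peers on the same precinct and shift is to compare officers facing roughly similar conditions, rather than against a citywide average that ignores where and when they work.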
The second study, which also used NYPD data, focused squarely on police – both shooters and non-shooters – on the scene when shots were fired. After controlling for the uncomfortably common occurrence of all officers at the scene discharging their weapons, the researchers focused on “106 shooting incidents involving 150 shooters (cases) and 141 non-shooters (controls).” Black officers and those with chronic disciplinary problems were about three times more likely to fire their weapon.
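Purely as an illustration – not the study’s actual model – a simple logistic regression on a hypothetical officer-level table is one way a figure like “about three times more likely” can be derived, keeping in mind that an odds ratio only approximates “times more likely.”

```python
# Illustrative sketch only -- not the study's actual analysis. Assumes a
# hypothetical table of officers present at shooting scenes with columns:
# fired (1 = shooter, 0 = non-shooter), officer_black (0/1), and
# prior_discipline (0/1 for a history of disciplinary problems).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def shooter_odds_ratios(officers: pd.DataFrame) -> pd.Series:
    """Fit a logistic regression of firing vs. not firing and return odds
    ratios; a ratio near 3 corresponds to the kind of finding described above."""
    model = smf.logit("fired ~ officer_black + prior_discipline", data=officers)
    result = model.fit(disp=False)
    return np.exp(result.params)  # exponentiated coefficients = odds ratios
```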
Clearly, it is possible to isolate the officers who need discipline, and the same techniques could be applied to child welfare. Ironically, big data itself is supplying the empirical case for using predictive analytics to identify which workers are most likely to treat parents differently based on race.
A 2017 analysis out of Broward County, Florida, used a predictive risk model to determine that a full 40 percent of the child welfare cases referred for removal or for intensive services aimed at keeping a family together could have been handled less intrusively. Such a finding is particularly disturbing for black families, who are twice as likely as white families to be investigated for suspected child abuse. A groundbreaking 2017 study estimated that 53 percent of all black children in America will see their parents investigated for child maltreatment by age 18, while only 23.4 percent of white children will have the same frightening experience.
While socioeconomic conditions are a significant driver of such disparities, there is no doubt that racial bias plays a role in what happens after a referral reaches the child abuse hotline.
One of the panelists at the UPenn conference was Emily Putnam-Hornstein, who has made an art of linking all types of administrative data to better understand the statistical contours of child abuse and neglect. Her work with vast California data sets is showing just how deeply racial bias is baked into the system.
“Data indicate that we are currently screening in more low-risk black and Latino children for investigation than low-risk white children,” Putnam-Hornstein said in an email. “And we are screening out more high-risk white children than high-risk black or Latino children. There may not be an objectively ‘right’ rate of screen-in or screen-outs, but risk models can help us document unwarranted variation in practice decisions, providing information critical to addressing bias.”
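As a rough illustration of what documenting “unwarranted variation” can look like – not Putnam-Hornstein’s actual analysis – one could band referrals by model risk score and compare screen-in rates across groups within each band. The column names below are hypothetical.

```python
# Illustrative sketch only. Assumes a referral-level table with hypothetical
# columns: risk_score (from a predictive model), race, and screened_in
# (1 if the hotline referral was screened in for investigation).
import pandas as pd

def screen_in_rates_by_risk_and_race(referrals: pd.DataFrame,
                                     n_bands: int = 5) -> pd.DataFrame:
    """Cross-tabulate screen-in rates by risk band and race. Large gaps within
    the same risk band are one way to surface possible unwarranted variation."""
    df = referrals.copy()
    # Split referrals into equal-sized risk bands (quintiles by default).
    df["risk_band"] = pd.qcut(df["risk_score"], q=n_bands,
                              labels=[f"band_{i+1}" for i in range(n_bands)])
    table = (df.groupby(["risk_band", "race"], observed=True)["screened_in"]
               .agg(rate="mean", referrals="size")
               .reset_index())
    return table

# Example: within the lowest risk band, compare screen-in rates across groups.
# low_risk = table[table["risk_band"] == "band_1"]
```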
Allegheny County, Pennsylvania, which includes Pittsburgh, is home to the most advanced predictive analytics tool in all of child welfare. Since the launch of the Allegheny Family Screening Tool (AFST) in 2016, the rate at which cases were opened on black children has gone down, while the rate for white children has gone up, according to an evaluation released in May. Much like Putnam-Hornstein’s findings, this seems to indicate “unwarranted variation in practice decisions.”
In a press release announcing the Allegheny evaluation, Rhema Vaithianathan, who led the development of the tool (and was the third UPenn panelist), said:
“We often hear about concerns that predictive analytics might worsen racial disparities, so we are pleased the evaluation finds that the AFST can support decisions that actually reduce racial disparities in the child welfare system.”
When it comes to systemic racism, the culprit is too often framed as organizational rather than individual. Yet it seems entirely plausible to use predictive analytics to isolate the individuals most likely to inflict unnecessary pain on black children and families.
When it comes to stopping state-sanctioned violence – whether an unjustified police shooting or child removal – shouldn’t we use the most advanced tools at hand? It seems to me that predictive analytics, so often maligned as the harbinger of automated racism, could actually be a key to eroding racism’s hold.