The AI-powered system uses 279 variables to score families for risk, based on cases from 2013 and 2014 that ended in a child being severely harmed. Some factors might be expected, like past involvement with ACS. Other factors used by the algorithm are largely out of a caretaker’s control, and align closely with socioeconomic status. The neighborhood that a family lives in contributes to their score, and so does the mother’s age. The algorithm also factors in how many siblings a child under investigation has, as well as their ages. A caretaker’s physical and mental health contributes to the score, too.
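It can be hard to picture how such a score is assembled from a list of factors. The sketch below is a hypothetical illustration of a feature-weighted risk score of the general kind described above; the feature names, weights, and logistic formula are invented for illustration and do not reflect ACS's actual 279-variable model.

```python
# Illustrative sketch only: a minimal feature-weighted risk score in the spirit
# of the system described above. The feature names, weights, and formula are
# hypothetical and are NOT ACS's actual model.
import math

# Hypothetical weights for a few of the kinds of factors the article describes
# (prior agency involvement, neighborhood, mother's age, siblings, caretaker
# health). A real model would learn weights from historical case data.
WEIGHTS = {
    "prior_acs_involvement": 1.2,
    "neighborhood_risk_index": 0.8,
    "mother_age_under_21": 0.6,
    "num_siblings": 0.3,
    "caretaker_health_flag": 0.5,
}
BIAS = -3.0


def risk_score(features: dict) -> float:
    """Return a probability-like score between 0 and 1 via a logistic link."""
    z = BIAS + sum(WEIGHTS[name] * float(value) for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))


if __name__ == "__main__":
    example_family = {
        "prior_acs_involvement": 1,      # any past contact with the agency
        "neighborhood_risk_index": 0.9,  # a proxy that tracks poverty, as critics note
        "mother_age_under_21": 0,
        "num_siblings": 3,
        "caretaker_health_flag": 1,
    }
    print(f"risk score: {risk_score(example_family):.2f}")
```

Even in this toy version, the concern critics raise is visible: factors like neighborhood and family size push the score up regardless of anything a caretaker actually does.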
The tool is new, but civil rights groups and advocates for families argue that both the data it's built on and the factors it considers raise the concern that artificial intelligence will reinforce, or even amplify, the racial discrimination that taints child protection investigations in New York City and beyond.
Joyce McMillan, executive director of Just Making A Change for Families and a prominent critic of ACS, said the algorithm “generalizes people.”
“My neighborhood alone makes me more likely to be abusive or neglectful?” she said. “That’s because we look at poverty as neglect and the neighborhoods they identify have very low resources.”
The same data, critics argue, could instead be used to identify families who could be offered help, such as money, food, or child care, but that is not how the tool is used.