Florida DCF’s numbers are out for June and, in terms of investigations and removals, they were spectacularly normal. The predicted wave of removals isn’t here yet, if it’s ever coming at all. To the contrary, the system is struggling to discharge kids, as reunification and adoptions have both significantly decreased. We also have our first look at medical care, dental care, placement stability, sibling separation, and other measures that DCF only reports quarterly. Go ahead and form your own theory about what happened on those before you read on. You might be surprised.
Here are the details.
There were 10,054 initial in-home abuse calls accepted in June 2020. That is only 230 calls below June 2019, making it a fairly normal summer. You can see in the chart above that June calls have been dropping since at least 2016. The rate of decrease actually slowed this month, coming in at 11% above the seasonal trendline. That bump could be due to delayed calls from April and May or the drumbeat of media stories about how people need to report. The short version: people can’t say that calls are down this month, but I’m not quite ready to say they’re rebounding either.
Looking at the pipeline effects, with fewer calls in April and May, there were fewer investigations to close in June — though some investigations may have also been closed faster due to reduced workload. The number of verifications also went down while the number of removals rose slightly. None of this is particularly unusual, except it happened to occur during a pandemic.
And that’s the takeaway thus far: we haven’t seen the drastic increase in removals that people whose jobs depend on caseloads keep predicting. The chart below shows that there has been a gradual but steady decrease in removals since around 2016. June was about 2% up, which is right in line with the trend.
Now that we have more data for context, it appears that April 2020 was our lowest month for removals, as everyone was getting adjusted to the lockdown. That 15% dip in April was still smaller than the one from Hurricane Irma in November 2017, which shut down four of the five regions and reduced removals by 19%. There was a slight rebound in 2017, but it came two months later and quickly leveled out.
Meanwhile, we are seeing lower numbers of kids exiting care across the board. Total exits were about 18% below the trendline. Reunifications were 14% below, and guardianships were pretty even. Adoptions in June, typically one of the biggest adoption months right before the end of the fiscal year, were down a whopping 42% from the expected value. I’ve got a few theories on why. First, trials have been postponed, which could reduce the number of kids free for adoption. Countering that, though, appeals usually take 4-8 months, so we should still be clearing out adoptions from December 2019 to maybe March 2020. Second, the summer adoption campaigns that normally result in lots of adoptions happening all at once probably got paused this year. All those adoptions may wind up spread over the summer months more evenly. The lower rate of adoptions for June isn’t a headline — yet.
New this month, we have data from DCF reports that are run quarterly. Take a look at the DCF chart below. The pandemic seems to have solved placement instability. We don’t yet know why that is, but I suspect foster parents were less likely to ask for a kid to be removed, and agencies were less likely to act on requests that were made. Whatever the reason, the decrease in placement changes happened statewide. The Southern Region hit the 4.12 moves per 1,000 bed days standard for the first time in five years. It will be interesting to see if we can hold these numbers down.
The lockdown appears to have significantly disrupted dental care. Note that the measure covers the entire quarter and looks back seven months, so some of these kids may have gone to the dentist prior to the lockdown.
The lockdown only slightly reduced medical care. Maybe more kids wound up going to the doctor with suspicious symptoms, or maybe the expansion of telehealth made it possible to see a doctor without going in. The window here is 12 months, so maybe we’ll see a dip next quarter if the lockdown continues.
As far as the other reports, there was no noticeable change in the percent of kids placed in their removal circuit, seen by case managers every 30 days, sibling groups placed together, or just about any of the other quarterly reported measures. You can review DCF’s dashboards here.
A critical step in reducing racial disparity in the child welfare system is having a way of measuring it. The standard methods, including the one used by DCF in its public dashboards, all have documented weaknesses that either under- or over-estimate disproportionality and produce numbers that cannot be directly compared between subgroups like regions or CBCs. This post uses a standardized measure proposed by Rolock (2011) to show where in Florida’s system we need to focus. There will be numbers and lots of graphs, so here are the main points.
In fiscal year 2018-19, non-white kids in Florida had 1.80 times the risk of experiencing a child abuse investigation as white kids.
White kids under investigation were then slightly more likely to have their allegations verified (1.07x) and white kids who were verified were slightly more likely to be removed (1.04x), but not at rates that overcame the initial disparity in investigations. White children also had a higher chance of being discharged from care once in it (1.06x).
The result is that non-white children had 1.73 times the risk of being in out-of-home care compared to white children. The disparity was even higher for non-white teens at 2.20 times the risk of white teens.
Disparity rates varied widely by CBC, with risk of out-of-home care ranging from nearly even in one CBC (1.008x) to almost five times as high (4.720x) in the most racially disparate CBC. Those differences were driven largely by disparity in investigations and then an inability to discharge non-white children at equal rates. This was especially true of the CBCs with the highest disparity in out-of-home care.
In terms of placement, non-white children in care were at higher risk of incarceration, Baker Act, and running away. White children were at higher risk of being placed in a therapeutic placement under most CBCs.
The takeaway is that if non-white children were in care in the same proportions as white children, there would be 4,000 fewer kids in the system. That would save $23.4 million per year in board rate payments alone. The National Council for Adoption estimated that in 2010 it cost a state approximately $25,000 per year to keep a child in foster care. If that’s true today, racial disparity in foster care costs Florida $100 million annually.
The chart below shows the disparities by CBC. The values answer the question “How many times greater are non-white children at risk of the action compared to white children when accounting for state demographics?” It calculates each stage of the case using the group of kids who entered from the previous stage — for example, the risk of removal is calculated using the group of children whose allegations were verified. This lets us see how each stage contributes to or corrects for previous disparities. You can see clearly: racial disparity begins at the beginning. The rest of this post explains this chart.
First, let’s discuss disparity
Conversations around measuring disparity often turn into a debate about how many children should be in foster care or need to be the target of child welfare services. One argument in that debate says that not all difference is disparity. Disparity, in that view, is only “a bad difference,” and some kids need to be in foster care for their protection. Under that view, if more Black kids are in foster care, then that could be reflective of their needs and not necessarily a bad thing. A stronger version says that Black kids are in care more because they need to be, full stop.
The argument is rarely that blunt, but it’s always lurking. For example:
One issue that clouds this discussion is that there is no clear standard for child welfare involvement. One cannot say, for instance, that because less than 1% of children in the United States are in foster care that this is the correct percentage—nor is there any evidence that this percentage should necessarily be higher or lower. While it is often assumed that less contact with the child welfare system is good, both under and over representation of specific ethnic or racial groups should raise questions…
As such, much of the literature frames child welfare services as either a neutral or positive public health project that either helps or does not help families. It is rarely posited as something that harms. Under that neutral-good view there is some “correct” number of kids in foster care, even if the goal is to get that number as low as possible. For example, Bywaters et al. (2015) describes and then rejects a framework of supply and demand in which families have needs that create demand for child welfare services and the government and communities in turn supply those needs. Instead, Bywaters frames the issue as one of the inequities that drive kids into care — and by doing so, the moral and ethical problems of the system become more clear. Roberts, Cloud, Phillips & Pon, Burrell, Cooper, and many more writers have put the child welfare system into the context of the people who experience it and the communities that pay for it most heavily.
We also know foster care is a system of inequity because, while there are lots of privileged people seeking to make it unavoidable for other families, they are not simultaneously demanding access to the system for their own kids. Families need community safety, good physical and mental health, social support, material wealth, and political power to create better lives. If you have that, you don’t need DCF. Nobody calls DCF to put their child in foster care for a few days while they go on a business trip, and there is no Operation Varsity Blues for rich people trying to scam their kids into care. That’s because foster care is not a good thing.
Take the responses of Black mothers in Florida’s system, who described the ways the system takes and keeps kids for reasons completely unrelated to parenting:
The sense of powerlessness and helplessness was profound as parents described being trapped by personal limitations and systemic unresponsiveness. Concentrated poverty translated into a series of severe deficits: lack of sound housing, nutritious food, accessible healthcare, adequate transportation, and childcare services. Concomitant with high standards set by the courts, such factors combined conspired to decrease the likelihood these families would ever see their children returned
When a bad thing happens almost exclusively to poor, disfavored, and marginalized people, the morality and ethics of the situation are clear. Difference in a punitive, inequitable system is disparity. Further parsing of who “needs” to be in the system is no more instructive than debating who “needs” to be poor, unhealthy, or alone. We can discuss what drives more kids into foster care as a question of inequity, but no kid should be there at all. Now let’s get to the numbers.
The simplest way to measure difference is to just count. In the chart below you can see that non-white children (the sum of DCF’s Black and Other categories) made up about 28% of the population, whereas they made up 42% of investigations and 40% of out-of-home care. Their percentage of discharges was slightly lower at 39%. These patterns are going to play out again and again — racial disparity starts on the telephone.
Before we go further, the three DCF categories Black, White, and Other should be interpreted with lots of caution. Schmidt et al. (2015) found that one state’s system tended to label kids as white at higher rates than school districts labeled the same kids, and even higher than the kids described themselves. The study also showed that 20% of the children’s racial self-identification changed over time. To my knowledge, the race labels in Florida DCF’s system are entered when the case comes in — usually for a baby — and not updated again.
DCF doesn’t limit case managers or investigators to just the three categories. They actually have the six below, plus three extras: Unable to Determine, Declined to Respond, and Unknown. They additionally capture whether the child is or is not Hispanic. (They have a marginally helpful FAQ about it here.) In DCF’s public-facing dashboards, they roll these options into the three categories above. The “Other” category is entirely too broad — an Asian child and a child who is multiracial Black and white will have very different experiences in care, but are lumped into the same group. I can’t undo that here. Future work needs to be done.
Back to the measures. To determine disparity, you might next look at proportions, the number of kids per 1,000 in the general population who find themselves at each stage in the system. You can see below that 86.59 per 1,000 non-white kids went through investigations, but only 47.15 white kids did.
There is an important fact hiding in this chart. If non-white kids were in foster care at the same rate as white kids (4.52 per 1,000), there would be around 4,000 fewer kids in care. That would reduce our foster care rolls by about 17%. At a board rate of $16 per day, that would save the state $23.4 million per year in foster care payments alone. Racial disparity costs money. If you use the National Council for Adoption’s 2010 estimate that it cost a state approximately $25,000 per year to keep a child in foster care, racial disparity costs Florida $100 million annually.
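The arithmetic behind those savings figures is easy to check. This is just a back-of-the-envelope sketch using the numbers cited above (the $16/day board rate, the 2010 NCFA per-child estimate, and the rounded 4,000-kid gap); none of these are official DCF projections.

```python
# Rough cost of racial disparity in Florida's foster care rolls,
# using the figures cited in the post. Illustrative only.

excess_kids = 4_000          # approx. kids in care beyond the white-rate baseline
board_rate_per_day = 16      # dollars per day, board rate payments only

board_savings = excess_kids * board_rate_per_day * 365
print(f"Board-rate savings: ${board_savings:,}")   # $23,360,000, i.e. ~$23.4M/year

# National Council for Adoption's 2010 estimate of total annual cost per child
annual_cost_per_child = 25_000
total_cost = excess_kids * annual_cost_per_child
print(f"Total-cost estimate: ${total_cost:,}")     # $100,000,000/year
```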
Now on to our next measure. Taking out-of-home care as an example, if you divide the proportion of Black children (6.9 per 1,000) by the proportion of all children (5.45 per 1,000), you get a value called the disproportionality index (DI) or the disproportionate representation index (DRI). In this example, 6.9 / 5.45 = 1.27. This is the measure that DCF uses on its dashboards. It has the benefit of being easy to understand: there are 1.27 times as many Black kids in out-of-home care as there “should be” compared to all kids in the community. It has the drawback of making it harder to compare regions or CBCs of different sizes and racial compositions. As an extreme example, 9.0 / 3.0 gives a ratio of 3.0, but 49.0 / 43.0 gives 1.14, even though both pairs are separated by 6 per 1,000. Conversely, I can get a ratio of 3.00 from 9.0 / 3.0 or from 60.0 / 20.0. The value 3.0 can represent vastly different experiences on the ground.
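To make the calculation concrete, here is a minimal sketch of the disproportionality index using the out-of-home care numbers above (rates are per 1,000 children in the general population):

```python
def disproportionality_index(group_rate: float, overall_rate: float) -> float:
    """DI = a group's rate at a system stage divided by the overall rate.

    A value of 1.0 means the group appears at exactly the rate you would
    expect from its share of the population.
    """
    return group_rate / overall_rate

# Black children in out-of-home care: 6.9 per 1,000 vs 5.45 per 1,000 overall
di = disproportionality_index(6.9, 5.45)
print(round(di, 2))  # 1.27
```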
Below is what it looks like on DCF’s dashboards. Graphing the DI is messy.
If you divide one group’s disproportionality index by another’s, you get a new number that is sometimes called the disparity index (Shaw et al., 2008); it is a calculation of relative risk. A value of 1.0 would mean equal risk in the two groups. The chart below shows that non-white kids are anywhere from 1.64 to 1.84 times as prevalent in the system as they are in the general population (and white kids are underrepresented by the same amount). You can see that the disparity is highest in investigations (1.84), goes down a little in removals (1.71), and rises again as you move through the stages of a case to out-of-home care (1.73) and being in care for over 12 months (1.80). This is exactly what we saw in our very first chart of measures above. It’s just easier to see the relationship now.
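As a sketch, the disparity index reduces to one group’s rate divided by the other’s, because the shared overall rate cancels out of the two DIs. Using the per-1,000 investigation rates from the proportions chart earlier:

```python
def disparity_index(group_a_rate: float, group_b_rate: float) -> float:
    """Relative risk: how many times more prevalent group A is than group B.

    Dividing the two groups' disproportionality indices gives the same
    number, since the overall population rate appears in both and cancels.
    """
    return group_a_rate / group_b_rate

# Investigations: 86.59 per 1,000 non-white kids vs 47.15 per 1,000 white kids
rr = disparity_index(86.59, 47.15)
print(round(rr, 2))  # 1.84
```

That 1.84 matches the investigations value in the chart, which is a nice sanity check that the two framings are the same math.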
The disparity index, just like the disproportionality index, has its own quirks that make it hard to compare among different CBCs or regions. Specifically, it tends to under-estimate the risk when the percentage of the measured population is small and over-estimate it when the percentage is large (Rolock, 2011). To DCF’s credit, it does not provide side-by-side comparisons on its dashboards. But side-by-side comparison is exactly what we need, and doing it requires more complicated math that yields simpler, more comparable measures.
On a final note, we also want to measure how disparity creeps in at each step through the system. That means we should use a step-wise approach, calculating the disparity at each stage based on the population that entered the stage from the previous step. For example, to get a disparity measure for children who were removed, we will use the group of children who had verified maltreatment — not all kids in the general population. This is sometimes called decision-point based enumeration (Thurston & Miyamoto, 2020). Now, here we go. This is what this post is actually about.
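In code, decision-point based enumeration just means each stage’s risk uses the previous stage’s population as its denominator. The funnel counts below are made up purely for illustration:

```python
# Hypothetical funnel counts for one group, illustrating decision-point
# based enumeration: each stage's denominator is the population that
# entered it from the stage before, not the general population.
stages = [
    ("population",     100_000),
    ("investigations",   8_000),
    ("verifications",    1_200),
    ("removals",           400),
]

risks = {}
for (prev_name, prev_n), (name, n) in zip(stages, stages[1:]):
    risks[name] = n / prev_n  # conditional risk given the prior stage
    print(f"{name}: {n:,} of {prev_n:,} {prev_name} = {risks[name]:.3f}")
```

Comparing two groups’ conditional risks at one stage then shows whether that stage adds to, or corrects for, the disparity inherited from earlier stages.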
Harder math, better graphs
To compare CBCs to each other, let’s use a measure called the weighted risk ratio (WRR). The WRR allows you to more directly compare groups that are themselves not homogeneous. Nancy Rolock at the University of Illinois at Chicago wrote in favor of using WRRs in child welfare back in 2011. They are also recommended in the special education arena for measuring differences across lots of different schools and districts, which helps focus resources and scrutiny on the places that need them. I’m using the special ed formula.
WRRs answer questions like: “How many times greater is a specific racial group’s risk of being removed in comparison with all other racial groups under the same CBC, weighted by the demographics of the state?” The math-magic is that WRRs adjust for the variability between different CBCs to give you a number you can compare even when the CBCs don’t have the same group percentages (our 9/3 and 60/20 example above). The downside is that the formula doesn’t work well for small numbers, so you have to use an alternate formula that compares the local risk to the comparison group’s statewide risk.
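Here is a sketch of both formulas as I understand the special-education method; treat it as my reading, not DCF’s or Rolock’s exact implementation, and the rates are hypothetical. The idea is that the denominator is a weighted average of the other groups’ local risks, with weights taken from each group’s share of the state population (renormalized over the comparison groups). When local counts are too small, the alternate formula compares the local risk to the comparison group’s statewide risk instead.

```python
def weighted_risk_ratio(group_risk, other_risks, state_shares):
    """Group's local risk over a state-demographics-weighted average of
    the other groups' local risks. Sketch of the special-ed formula."""
    total_share = sum(state_shares)
    weighted_comparison = sum(
        r * (s / total_share) for r, s in zip(other_risks, state_shares)
    )
    return group_risk / weighted_comparison

def alternate_risk_ratio(group_local_risk, comparison_state_risk):
    """Fallback when local counts are too small: compare the group's
    local risk to the comparison group's STATEWIDE risk."""
    return group_local_risk / comparison_state_risk

# Hypothetical CBC: non-white kids at 80 per 1,000 vs white kids at
# 45 per 1,000, with white kids making up 55% of the state population.
wrr = weighted_risk_ratio(80 / 1000, [45 / 1000], [0.55])
print(round(wrr, 2))  # 1.78
```

With only two groups the weighting collapses to a plain risk ratio; the weights matter once you compare a group against several others at once.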
So what does that get us? The chart below shows the weighted risk ratios for children in Florida at each listed stage of the system (based on the population from the previous step in most cases). All of the data comes right from DCF’s dashboards, just calculated in a different way. I’m using the categories White vs. Non-white (i.e., Black + Other) to make it easier, and also because I don’t trust DCF’s “Other” category to stand on its own. A risk ratio of 1.0 means that white and non-white children have equal risk at that stage. If a value is less than 1 (orange), then white children have the higher risk; if it is greater than 1 (blue), then non-white children do.
The top row of the graph shows that during fiscal year 2018-19, non-white children in Florida had 1.836 times the risk of an investigation as white children in Florida. White children were then slightly more likely to have their allegations verified (-1.072) and slightly more likely to be removed (-1.043), but neither number was large enough to make up for the original disparity. And, importantly, white children were also more likely to be discharged (-1.057). The result is that non-white children in Florida were at 1.733 times the risk of out-of-home care placement compared to white children.
The graph also breaks it down by age in the next rows. You can see that the risk was highest for non-white babies and decreased slightly through age 14. White babies were more likely to be verified and removed (albeit not at rates that overcame the original disparity in investigations), but around age 10 things shift. Non-white tweens and teens with verified allegations were at higher risk of removal than white ones (1.023 and 1.131). White teens were then significantly more likely to be discharged than non-white teens (-1.374). The result is that non-white teens in Florida had 2.196 times the risk of out-of-home care placement compared to white teens, a risk much higher than any other age group’s.
Weighted risk ratios by CBC
Now for the bigger picture. Here is the same chart by CBC, again for fiscal year 2018-19. The CBC with the highest WRR for investigations, Citrus Family Care Network (3.427), is almost three times as high as the CBC with the lowest, Kids First of Florida (1.196). The WRR in the out-of-home care measure ranges from nearly even at -1.008 to a whopping 4.72. Racial disparities are happening everywhere, but not in the same ways.
You can see so much in these graphs. For example, you see that disparity starts in investigations under every single CBC. By the verification and removal stages, you can see how the local systems work either towards or against disparity, but only one CBC managed to overcome that initial skew. The systems with lower racial disparity in their investigations seemed to correct for it better than the areas with higher disparities. A few areas at the top of the chart actually compounded the disparity through disproportionate removals (but so did the CBC at the very bottom).
Finally, the graph shows that the areas with the highest levels of disproportionality in their out-of-home care populations (seen at the top of the chart) significantly struggled to correct for it through discharges. Only one CBC — Kids First of Florida — was close to equal on its out-of-home care rates, and it also had the highest rate of discharging non-white children.
Let’s look at the same chart from the perspective of Black children vs. non-Black children below.
A few things leap out when centering on Black children. First, the disparity in investigations is lower but still high. Black children under Kids First of Florida in Clay County (Circuit 4) actually had a lower risk of investigation than non-Black kids. Again, removals of Black children tended to be correlated with areas with higher out-of-home care disparity. There are even CBCs where Black children have lower risk of being in out-of-home care than non-Black children, but that may be due to the over-representation of kids in the “Other” category. Those areas still had higher risk for non-white kids.
The most striking thing about this graph, though, is the insanely low risk of discharges (i.e., high risk of not being discharged) for Black children in those areas with the highest out-of-home care disparity. It’s like there’s a jetstream pushing Black kids into care and keeping them there. I wonder what that could be.
One more chart — the CBC chart organized by DCF regions.
Weighted Risk Ratios of Discharge Types
The obvious next step is to try to look at discharges by race as well. To do this, I used DCF’s Exits from Care dashboard and added up the values for fiscal year 2018-2019. Here’s what I got. You can see that CBCs with high disparity have trouble discharging non-white children across all discharge types. CBCs with lower disparity do manage to discharge non-white kids at higher rates here and there, especially into guardianships and reunification. Only three CBCs managed to adopt non-white kids out at slightly higher rates: Big Bend CBC (1.08), ChildNet Broward (1.06) and Heartland for Children (1.03).
One CBC really stands out above because its risk ratio numbers are huge. A white child with Family Integrity Program in St. Johns County (Circuit 7) had 7.29 times the risk of being adopted (I recognize that’s a weird way to say it) and 5.08 times the risk of guardianship as a non-white child. That’s because in 2018-2019 only 5 non-white children were adopted and 2 non-white children went into guardianships, compared to 56 and 15 white children respectively. That means 55% of white kids in St. Johns County exited through adoption or guardianship, but only 15% of non-white kids did. Yes, it’s a small CBC with about 180 kids, 80% of whom are white. It only discharges about 15 kids per month, but as you can see below, it rarely breaks 5 non-white kids exiting per month, mostly to reunification, and many months have 0. Someone should ask questions about that.
This last part doesn’t come from DCF’s dashboards. Instead I used the Public FSFN Database from February 2020 to look at the placement history of every kid in foster care. Specifically, I ran queries for correctional, therapeutic, mental health (Baker Act), and runaway episodes. I limited the queries to kids who came into care in fiscal year 2018-2019. That cut out kids who had been in care for years prior, but it also lets us say that these events happened early in the removal period, and it avoids biasing toward kids who were in care longer and had more chances to be arrested, etc. I used DCF’s numbers for removals, which in retrospect may not have been the right call because DCF’s numbers were slightly lower than what is found in the database. It’ll still get us in the ballpark.
The graph below shows a significant difference in the risk of non-white kids being placed in correctional placements, along with very extreme skews toward white children for therapeutic placements and toward non-white children for mental health placements at many CBCs. Non-white children were at higher risk of running away almost everywhere, which could reflect the larger number of non-white teens in care or a sense among those kids that the system isn’t helping them.
The May numbers are out and they begin to show how the pandemic is affecting things later in the pipeline from investigation to removal. Remember that investigations take about 60 days from intake. Intakes were down 6% in March and 16% in April from their expected numbers based on historical trends. Closures on investigations in May, however, dropped nearly 34% from the expected trend. That suggests investigations may be taking longer to close. Meanwhile, verifications were down 22% from trend, and removals were only 7% below trend.
Having removals come in 7% under the trendline is very normal. It happened in November 2019 and didn’t make headlines. This suggests that the reduction in intakes was more heavily weighted toward low-risk cases that would not have resulted in a removal anyway. DCF only verifies about 15% of calls and only removes about 4-6% of children under investigation, so a reduction in frivolous cases is a win for everyone. We need to wait another month to see how the remaining March-April intakes come out.
On the back-end, exits from care are still very low. They were 40% lower than last year and 20% below the trend line. Reunifications and adoptions were both down. Courts should probably set a few rocket docket days to hear the cases that are pending an uncontested reunification, adoption, and guardianship.
Looking ahead, intakes in May dropped about 33% from 2019 levels, or 9.9% below trend. That puts us back to 2010-level numbers, and it’s still too early to say whether that is an anomaly or a new normal.
Intakes were down about 9.9%. It’s very hard to put that decrease in context because the intake numbers started behaving oddly in mid-2019. The “turn” may have predated the virus, and we’ll only know with more time whether this was a dip or a new normal. The numbers for April have been adjusted up to -16.3% (from -26.5% last month) based on the new data. That’s still hurricane-level low.
Investigation closures were down 34%. We are starting to see the pipeline effects of the lockdown. Intakes were low in March, so closures — which take up to 60 days — are predictably low in May. Based on the numbers of intakes in April, closures should be down even more in June.
Verifications were down 22%. Verifications have been steadily decreasing for years. The fact that verifications didn’t drop as much as closures suggests (but definitely doesn’t prove) that the lockdown kept a larger share of lower-risk cases from being called in at all. It doesn’t fully prove that because we can’t say how the pandemic affected DCF’s perception of what should be verified.
Removals were only down 7%. That’s well within normal ranges for removals. This is the strongest indication yet that the pandemic resulted in fewer low-risk cases being reported — with the same caveat that we don’t know how DCF’s risk analysis changed. Removals per 100 victims went up a little, again suggesting that low-risk cases dropped out of the investigation pool.
There’s been a lot of discussion about homebound children and sexual abuse cases, and sure enough removals were up for sexual abuse (+23%), inadequate housing (+14%), and inadequate supervision (+10%). That could mean more egregious sexual abuse cases or less risk-tolerance from DCF. Removals came in lower than expected for physical abuse (-11%), drug abuse (-7%), and domestic violence (-3%). All of these, however, were within normal ranges, meaning we saw similar ups and downs throughout the last 16 years. The removal of Black children was down 7.7% in May and 22% in April. That April number was very low.
Discharges from care were down 20%. That’s very low. Courts are starting to get back up and running, so this number should go back up over time as cases that were ready to close get put on the docket. Here are the numbers by permanency type: reunifications were down 13%, adoptions were down 13%, and guardianships were flat.
That’s it for now. The full dashboard lets you see the numbers all the way down to the county level.
We are finally starting to see what impact a nationwide quarantine will have on the foster care system. DCF handled 18,894 intakes in April, which was the lowest number in at least a decade. You could call that 40% lower than last year, or you could say that it’s 26% lower than expected, given that the numbers have been on a downward trend anyway. Either way, as you can see in the full dashboard, we’re beyond hurricane numbers, which typically give a month of lower intakes and then a return to normal. We are instead moving into something unprecedented: what happens if the foster care system doesn’t get new kids?
Ins and outs
For context to the numbers below, I want to give an analogy. The foster care system is a pipeline. Kids come in through the front and leave out the back. Governmental policies, more than anything else, determine the rates at which kids enter and exit. Over the decades, various federal and state administrations have prioritized removals, adoptions, reunifications, in-home services, licensed placements, family placements, and a slew of other policies that had the primary effect of changing the rate of kids coming in and the rate of kids going out.
The middle of the system is like a firehose. If you push water into the hose faster than it can exit, then your hose will expand and eventually burst. If you don’t keep enough pressure in the hose, the water will sit and become stagnant. Pressure comes from pumps and valves and other mechanisms that keep things moving.
Putting that water hose in child welfare terms, the policies around permanency timelines, judicial review, and funding limitations are designed to keep kids moving through the system. The policies around placements are meant to provide enough flexibility that the system can expand and contract without suffering serious shocks.
The most significant expansions and contractions in systems have largely been a result of intentional governmental policies. Since the acculturation of Americans to mandatory child abuse reporting from the 1960s through the 1990s, there has been a steady stream of reports, but not always a consistent level of governmental response. People have built careers, contracts, and budgets around handling a certain number of cases or children per month. Foster parents make work decisions based on the board rate, and programs hire staff on the reasonable assumption that every child who leaves will be replaced by a new one.
A lot of discussions have centered on the question of whether kids will be left in unsafe homes due to the pandemic. I think that’s a valid question. We don’t know whether those 26% of calls would have resulted in removals or not. We don’t even have a good baseline, because we don’t know how the pandemic will change the rates and nature of child maltreatment. We have reason to believe financial insecurity raises the risk of abuse and neglect, but we also know that income inequality and racial disparities impact the numbers by targeting certain families for scrutiny. We have no idea what happens when everyone goes through social and financial trauma at the same time.
The pandemic also raises a second question of what happens to the system if people don’t call cases in and that steady stream of kids suddenly dries up. Will investigators start removing children on facts that would have otherwise gotten a pass? Will the system hold onto existing children longer to minimize the time that beds stay empty? Will programs close or change their focus to take in different kids? Will foster parents not take one child because they had built their home’s budget around two or they now have their own families to look after? Will children who had been bounced around before suddenly find stability when there is no other child waiting to take their place?
We have never in modern history had a prolonged supply-side shock to the child welfare system. How it responds will tell us a lot about how it works.
Here are the numbers for Florida.
Intakes were down 26.5% from the expected rate. You can see that the March number has been adjusted up to -4%; it no longer looks so low compared to the new trend. The numbers were down in every region of the state.
Verifications were normal, maybe even a little up (+1%). They’ve been going down for a very long time. That dip in December 2019 predates any serious talk of pandemics. I have no explanation for that. Verifications follow this pattern everywhere except the Southern and Northeast Regions, where they were flat.
Removals were down 19%. There’s usually a two-month lag between an intake and a removal. These investigations would have happened during the pandemic, but the calls likely came in right before it. We are still seeing hurricane levels here. Removals for physical abuse (-30%), inadequate housing (-11%), inadequate supervision (-18%), and sexual abuse (-14%) were all significantly down. Drug abuse (-4%) and domestic violence (-6%) removals were low but normal.
Removals per 100 intakes were normal (-2%). Again, these removals are based on intakes that came in right before the lockdown. We will be watching this number over the next months to see what happens.
Exits were low-normal (-5%). This will also be interesting to watch.
Reunifications were normal (-1%). But, you can see that drop-off getting ready to happen. If that line jumps back up next month, we’re continuing our normal downward trend. If it falls off even more, we have something new happening.
Guardianships may be back to normal (-0.4%). You can see that guardianships took a huge hit in March, but hopped back up in April. Again, that is either part of the old decrease or it’s a new normal. Only time will tell.
Adoptions recovered (+3%). I suspected that judges would handle the adoptions that got postponed, and that seems to be what happened. Those cases were already ready, so it’s easy to just set the one final hearing. We do not yet know how the pandemic will affect the permanency timelines of cases that come in now.
The full dashboard is available here. You can check the numbers for your region, CBC, circuit, or county. You can also see the breakdown of how removals are different for different maltreatment types. I hope you’re doing well in these strange times.
A lot of people have been asking, hypothesizing, and, frankly, guessing about what effect a global pandemic and quarantine coupled with unprecedented levels of governmental and community response will have on child welfare measures. Anyone who claims certainty right now is selling something. We really don’t know.
We do have a few new data points, however. Florida DCF released its dashboard numbers last week, and they show a reduction in intakes. It’s not nearly as large as I expected. You can get different numbers depending on how you count, so there’s plenty of room for salesmanship, but I would say anywhere from 10-17% down is defensible. A lot of other measures were affected way more. Many were not affected at all.
I created a dashboard to look at all of this. I made it for COVID-19 but you can also review other events in Florida child welfare history. Below are the things I noticed in the data.
Intakes were down
I’m going to use investigation intakes as the measure for this discussion, mostly because DCF has a dashboard on those. Intakes, as far as I understand DCF’s documentation, are completed calls to the hotline that are either accepted for investigation or screened out. I don’t think it includes abandoned calls.
First, let’s look at the actual numbers. Did the bottom fall out of intakes in March? Not at all. They were in the middle of the decade’s high and low. Back in January 2010, we got 8,000 fewer calls per month and removed a lot more kids.
It’s hard to read charts with all those ups and downs. To smooth things out, let’s take the year-over-year change in March of each year instead. What you see below is that intakes were down about 10% from March 2019 to March 2020. But, they were also down 8% from March 2018 to March 2019 after being up for the five prior years. You could call that a 2-point or a 25% increase in the rate of decline (from 8 to 10), but it doesn’t really look like that much to me. It could be part of an ongoing trend that started when Governor DeSantis took office. Again, the number was somewhere in the middle of the decade’s high and low.
Maybe March is just weird. Maybe the previous administration did a huge child abuse awareness campaign in March that increased calls. The next chart looks at all the months at once. It’s a little spaghetti-esque, but you can see the general trends: January and February were pretty normal, but March is down over the last two years. It’s not lower than the summer months, but it’s definitely been dropping in the last two years.
We need a way to compare months to each other over different years. We can do that using seasonal decomposition. I’ve used this before and always get a lot of questions about it. So, as an example, if you are in the business of selling sunscreen, your sales will go up and down during the year because of the weather (these are called “seasonal factors”). Those seasonal factors are pretty stable over time — summer sales will be more than winter sales every single year. But your sales also change over time due to other forces like the economy or how people feel about tanning (“trends”). Those effects are not the same year to year. To really determine if you had a good month in sales compared to the past, you would want to remove the seasonal effects and the trends so that you’re comparing what is left over (sometimes called the “noise” or “error” value). So let’s do that for abuse calls.
Below are the seasonal factors for statewide intakes. These are the predictable ups and downs you see year to year. The colors represent the seasons to make it prettier. You can see that abuse investigation intakes go up about 12% in April, then down about 10-11% in the summer. That corresponds to those very regular ups and downs in the first chart above. There are still around 20,000 abuse calls per month in the Summer — it’s just 22% fewer than in the Spring. Just like in the sunscreen example, we will adjust the numbers to account for these normal ups and downs that are caused by the seasons.
Next we have to filter out the bigger trends by taking a moving average. (I’m using a +/- 6 month median window here.) You can see that abuse calls have remained mostly flat. There was a slight rise at the beginning of 2016 and then again in June 2017. Since January 2019 (a new administration), we’ve seen a slight decrease. The ends of trend lines are always bumpy, but especially when you have a giant worldwide pandemic sitting right off the edge. As we get more data, this line will get smoother.
Now for the magic: if you take the original DCF values and remove both the seasonal effects and the trend line, you get a flat line with easily comparable ups and downs. I turned it into a bar chart to make it easier to read below. I also added two horizontal lines to mark four standard deviations from the mean. (Thanks to anomaly.io for sharing their work.) Anything higher or lower than that is probably an anomaly.
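Here is a sketch of that last step on synthetic data: subtract the seasonal factors and a +/- 6-month rolling-median trend, then flag residuals beyond four standard deviations. (In the real analysis the seasonal factors are estimated from the data; here I subtract the planted pattern directly to keep the sketch short.)

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for ~10 years of monthly intakes (Jan 2010 - Mar 2020).
rng = np.random.default_rng(1)
n = 123
months = pd.date_range("2010-01-01", periods=n, freq="MS")
pattern = np.array([0, 2, 5, 12, 3, -10, -11, -9, -2, 4, 4, 2])
seasonal = np.resize(pattern, n)  # repeat the seasonal factors each year
values = pd.Series(100 + seasonal + rng.normal(0, 1, n), index=months)
values.iloc[-1] -= 16  # plant a March-2020-style drop

# Trend: a +/- 6-month rolling median (13-month centered window).
trend = values.rolling(13, center=True, min_periods=7).median()

# Remove seasonal factors and trend; what remains is the "noise".
resid = values - seasonal - trend

# Flag anything beyond 4 standard deviations of the residuals.
threshold = 4 * resid.std()
anomalies = resid[resid.abs() > threshold]
print(anomalies.index.strftime("%Y-%m").tolist())
```

Only the planted drop clears the four-sigma bar; the ordinary month-to-month wiggle does not, which is exactly why that threshold makes a useful anomaly marker.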
So, what does the graph show? Intakes were probably about 16% below the decade’s seasonal average in March. And, that was very, very rare.
There is one other similar dip in intakes. It occurred in September 2017 during Hurricane Irma when intakes were 15% lower than expected. There are two smaller dips in June 2018 and June 2019 that I have no explanation for. The highest spike in the other direction was March 2011 (+13%) right after the Barahona case hit the news.
What does all this tell us? That intake numbers dropped in March: 10% from last year, or 16% below a typical March in the last decade. Those are hurricane-level numbers.
What doesn’t it tell us? Whether those lost calls would have resulted in removals or whether they were low-risk calls that easily got pushed aside when other things became more pressing. There were still 25,000 intakes done. Investigations take about 60 days, so we won’t know more until later. Even then the picture will be murky because we won’t know what wasn’t called in or how to account for a workforce that can’t do in-home inspections except in urgent cases.
What else don’t we know? We don’t know what April will look like. I’ve heard people predict that cases will go up, which makes some intuitive sense. But the data does not show that actually happens after hurricanes, and we don’t know whether this downturn will be a passing moment or a new normal that changes the numbers forever.
What else changed?
Intakes have been getting all the press, but here are some other statewide child welfare measures that are also worth looking at.
Verifications jumped up 12%. You don’t even need math — you can see this in the DCF data. Verifications had been higher than expected for a couple of months, but this may be the March 2020 numbers making the rest of 2020 look stronger than is real. For comparison, verifications went down significantly (-12%) the month of Hurricane Irma. There was no rebound, which is slight evidence that maybe they weren’t that bad.
Removals were pretty normal. They were down, but not by much more than a ton of other months that didn’t have a pandemic in them. For comparison, removals were down 20% during Hurricane Irma and never really bounced back after that. Again, that might be evidence that those were lower-risk removals. The only extraordinary spike in removals was in March 2011 after the Barahona case. I wonder if any of those were removals from adoptive parents?
Exits from care were low but normal. Exits were down 11%. For Hurricane Irma, exits went down 20% and rebounded over 4 months.
Reunifications were very normal. It’s interesting that there were lots of times in the last decade when reunifications dipped low, but this wasn’t one of them. I can’t explain that spike in 2010.
Guardianships were down 34%. You can see this right in the data, too. During Hurricane Irma, guardianships went down 12%, so this really is a notable decline. That other giant dip is in February 2015, after Phoebe Jonchuck was found dead.
Adoptions were normal, maybe… I really like this one, because it shows how tricky this stuff is. First, adoptions are very seasonal — tons get done in June and November of each year. Also, when hurricanes and other events have suppressed adoptions in the past, they have shown large spikes a few months later to make up for the backlog.
We expect February to be 9.4% below the yearly average. So it is unusual that February 2020 saw a giant spike (+64%) in adoptions right before the event we expect to dampen the numbers. The increase was driven largely by the Central Region cranking out a lot of adoptions in February. If they saw a lockdown coming and rushed to finish pending adoptions, then that is an effect. If the spike happened because a few large sibling groups got adopted, then that is a coronavirus coincidence. Notice that March 2020 registers in the numbers as normal, but looks very low. I expect these numbers to level out when we get a few more months of data to determine the wider trends.
Everything else? The Tableau dashboard lets you filter by region, CBC, circuit, or county. In addition to the measures above, it also lets you look at major maltreatment categories like substance abuse, domestic violence, physical abuse, sexual abuse, and inadequate supervision. I’ll update it next month when the new numbers come out.
Back in August, an ad hoc committee of the juvenile justice board in Hillsborough issued a lengthy report on the state of placement instability in their circuit. The report diagnosed the phenomenon of children refusing placement as a major source of that instability.
The committee concluded that “children under the care and custody of Florida’s child welfare system should not have the ability to refuse temporary placements that have been determined to be in their best interest by the parties charged with their care.” The committee recommended a new law to expand the “children in need of services” statute to permit foster children to be placed into staff secure facilities (i.e., 24-hour supervision) if they refused placement, were chronic runners, didn’t go to school, or didn’t comply with the treatment recommended. They could then be placed in a physically secure (i.e., lock-down) facility if they didn’t comply with the staff secure program.
The proposal initially struck me as well-intentioned but misinformed. Foster kids can already be placed in staff secure facilities by DCF without a court order. Any group home can be converted to staff secure by upping the staffing ratio and making sure someone is awake at all times in the home. You don’t need a new legal regime for that — you need money.
Second, foster kids can be placed in physically secure programs through a DJJ commitment, a SIPP placement, a Marchman Act placement, or a Baker Act. All of those placements are extremely expensive and have a limited number of beds. The gatekeepers for those programs have to do constant triage to limit use of the programs to the neediest children. If you want to increase access to secure settings with intensive treatment, it will take money to expand the placement array. Once you have a sufficient array, you can start working with the less extreme cases described by the committee.
The report correctly notes that Florida’s foster care placement system is reactionary. It recommends data collection and analysis to create predictive models for which children will have future placement challenges, because “indisputable data on risk factors is not available and would be beneficial to decision making.” If you know me or have read anything on this blog, you know that I couldn’t pass that challenge up.
It turns out that Hillsborough has been logging when children refuse placement in FSFN since 2017. (This should be required for all CBCs.) There were 49 kids in the most recent version of the public database who had ever refused placement. My intrepid team of students and I reviewed the complete history of all 49 kids, then the history of their most common placements, and then the whole placement array in Hillsborough. What we learned filled over 150 pages. I have broken it into a main report and an addendum that gives a narrative of the placement history of all 49 children. (The addendum will come out soon — it’s over 100 pages.) This post is a summary.
What we found was consistent with existing research on placement instability in foster care. The refusal children were disproportionately non-white teens with significant time in group home settings, but there were children as young as seven who refused. For the most part they were well-known to the system: the median number of placements prior to refusal was 21 (31 in total), and nearly 70% of their placements prior to refusal (75% in total) ended because the provider requested the child be removed. Being ejected by providers after placement was the most pronounced feature of children who refused placement. Most of the children refused only one or two times, and many spent more time at the agency’s office because no appropriate placement could be found than they did because they refused.
What surprised us was that children were slightly more stable after refusing placement than they were before, at least temporarily. The refusal episodes seemed to elicit an agency response that children’s previous placement instability did not. Usually the added stability was from the agency obtaining therapeutic placements, but sometimes stability came from negative causes, as some kids ran away for extended periods of time or were arrested. In some cases, it appeared that the children refused with a specific placement in mind that they wanted to get into.
Speaking of arrest, the committee was mostly composed of DJJ professionals, so it makes sense that they were focused heavily on that population. However, it turned out that the children who refused placements were not any more seriously involved in the delinquency system than other unstable kids in foster care. They did, however, appear to have higher levels of mental health needs.
And that’s where Hillsborough is particularly failing. Our review of the system as a whole found a placement array that routinely played hot potato with high-needs children, bouncing them around in circles, often back to placements they were just kicked out of. When I first read the committee’s report, my thought was “you’re describing an STGC — just open one.” Hillsborough has no Specialized Therapeutic Group Care programs, and instead sends children across the state to therapeutic programs. Locally it relies on group homes that rarely keep children more than a month on average and enhanced rate foster homes that keep kids a median of 4 days per placement. The problem isn’t the law; the problem is the array.
Our review disagrees with the committee’s report in another way: instability was in fact highly predictable and often began long before a child refused placement. Existing research shows that a child with 4+ placements has a 70% chance of additional instability, and a child with 6+ daily maladaptive behaviors has a 25% increase in risk of disruption per additional behavior. By reviewing the placement histories we found 131 highly unstable children in Hillsborough since 2017, many of whom had more than 50 placements. Only 38 of them had ever refused a placement, though. Refusal and non-compliance are not appropriate triggers for intervention — instability is.
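As a toy illustration of how predictable this is (not our report’s actual methodology, and with invented function and field names), the research thresholds above are simple enough to encode as an early-warning check:

```python
def instability_warning(num_placements, daily_maladaptive_behaviors):
    """Flag children whose histories match the research-backed risk
    thresholds cited above, long before any refusal occurs."""
    reasons = []
    if num_placements >= 4:
        # Existing research: ~70% chance of further instability.
        reasons.append("4+ placements")
    if daily_maladaptive_behaviors >= 6:
        # Each behavior past this point adds ~25% disruption risk.
        reasons.append("6+ daily maladaptive behaviors")
    return reasons

# A child with 21 prior placements trips the first threshold on
# placement count alone -- no refusal needed to see the risk.
print(instability_warning(21, 2))
```

The point of the sketch is that a check this simple, run against placement counts already in FSFN, would surface at-risk children years before a refusal ever happens.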
Where we agree with the committee is this: Hillsborough needs to regain control over its array and create a clear escalation process for children who are serially ejected from placements. This will take a significant effort to change the culture. The burden should not be on the children to accept a 22nd placement failure or risk civil commitment for disobedience. The burden should be on the providers to work with children to never get to that level of instability.
Our full report is linked here. The addendum containing the narratives for all 49 children will be released soon. I hope this is helpful to people on the ground in Hillsborough.
I sat on this post until after National Adoption Month because it seemed like the polite thing to do and because I didn’t have time to finish it until now. It’s about adoption. Not whether adoption is good or bad. My feelings about adoption are whatever my client’s feelings are — and it should be no surprise that some kids want to be adopted and others do not. Instead, this post is about adoption as a policy, a specific tool of the system to accomplish certain goals. It is specifically about how much that tool costs.
The Guardian ad Litem Program started as a scrappy community of advocates and gadflies who sought to bring attention and change to the dependency system. It is now a state agency that’s been appropriated over $600 million in the last 15 years. There has never been a comprehensive study to determine whether the Program accomplishes its goal of improving the lives of foster children. I looked at the Program’s performance numbers, first out of irritation, then curiosity, and ultimately the realization that we need more fact-based information about the “Second DCF” to answer the core questions surrounding its continued funding and structure. The questions explored here are whether the GAL Program is ethical, effective, and good for children. The answer to all three questions, it turns out, is the same.
This post introduces a new public FSFN dashboard on permanency timing in Florida’s child welfare system. If you want to just play with the dashboard, you can find it here. All but one of the graphics in this post come from the dashboard.
Every year, in legislatures across the country, well-meaning people propose bills to speed up permanency for foster kids. Permanency is a psychological concept focused on attachment, belonging, and community. Those are hard to legislate, so people focus instead on procedural definitions. The legal meaning of permanency is to close the court case and get the state out of a family’s life for as long as possible, hopefully leaving the child better than the system found them.
That closure could happen by returning a child home to a parent, placing the child in a guardianship, or having the child adopted. It could also mean a child aging out. The end result is largely the same to the state: one less case on its docket, and varying ongoing financial obligations depending on the way the child exited care. The path a case takes can change a child’s life.
This post introduces a new public FSFN dashboard: the Placement Provider Info Dashboard. If you want to jump straight there, feel free. You should click the fullscreen button in the bottom right corner. Below is the why and how of it.
There is a well-meaning bill working through the legislature that would exempt the names of foster parents from Florida’s public record laws. (Current law exempts their addresses, financials, and the floorplans of their houses.) The bill cites four “public necessities” to bar access to foster parents’ names: (1) it will help keep foster children’s names confidential, (2) it will prevent “unwanted contact” by the press, (3) it will prevent “unwanted contact” by the child’s relatives [i.e., parents], (4) not doing so would compromise foster parents’ privacy. The reasons don’t really stand up to scrutiny. More importantly, public access to information on foster placements is actually a good thing.
The elephant in the room is named Candi Johnson
Let’s start by acknowledging that Candi Johnson, the mother of two children in foster care, orchestrated the shooting of an elderly foster parent in Miami. She went to the foster home with her teenage son and demanded the children. When the foster parent fought back, the son shot her and fled with Candi Johnson and the kids. The foster parent is a hero for defending the kids even when she had no idea who was after them. The media reports that Candi Johnson had a long history of violence and had absconded with her children before. She is currently pending trial for attempted murder, kidnapping, armed burglary, and interfering with child custody.
The public records exemption would not have prevented Candi Johnson and her son from shooting the foster parent. The list of foster parents in the FSFN database has nearly 68,000 people on it. Candi Johnson’s kids would have aged out before she figured out which provider was caring for her children that way. More importantly, Candi Johnson did not use public records to find the foster home. Everyone in the neighborhood knew the woman was a foster parent. Everyone on the case knew Candi Johnson was violent. The foster mother didn’t know who Candi Johnson was when she banged on the door — or else she wouldn’t have opened it and maybe wouldn’t have accepted the placement. Having foster caregivers meet with parents in a supervised setting when they first take in kids could have actually prevented this. Further increasing the separation between them would not.
The bill isn’t about the kids
Candi Johnson stirred up a lot of latent anxieties that some (but certainly not all) foster caregivers feel about the families of the children they take in. The sponsor of the bill says that DCF received calls from “several” foster parents that they would quit if their names were not protected. I received a comment from one foster parent saying the same. The problem is that the bill doesn’t protect foster parents from the people they (rightly or wrongly) are afraid of. It does, however, make it harder to identify wrongdoing by DCF or other foster parents toward the kids in their care.
Let’s start with the bill’s first goal: protecting children’s privacy. The bill doesn’t actually do that. Having the names of foster parents does not tell me the names of the kids in their homes. If we want to keep foster children’s information confidential, then we would also include provisions making it illegal for foster care providers to post about the child online, including pictures and over-sharing Facebook posts. That’s how most parents in the system find their kids — a mutual friend spots the pictures and forwards them.
If we were serious about not disclosing a child’s foster care status, the bill would have provisions aimed at school personnel who tell a child’s classmates and protective investigators who question neighbors and disclose more than they should. We would also shut down National Adoption Day and Heart Gallery events where kids are brought to one place with giant signs that say “foster care” and television cameras rolling. Nobody is particularly worried about any of that.
A public records request on foster parents would currently give you a name and maybe a zip code, but nothing on the individual kids. You can get that much from a Google search (and more). There is actually a much bigger and more immediate leak of information about foster children: the court hearings are open to the public. I have been in countless hearings where essentially this exchange happened in front of a room full of strangers waiting on other cases:
CLERK: Calling the Case of [insert actual name of the child or parents]. All parties please announce.
[everyone, including the parents and children go around and say their actual legal names]
JUDGE: Are the foster parents here? Please just use their initials.
[nobody mentions that there is no law that says foster parents get to be anonymous in court hearings]
FOSTER PARENT: J.M. Good morning, Your Honor.
JUDGE: Ok, we’re here today for a status on medical treatment. Did the child go to the gynecologist for an STD check? She was sexually assaulted. I am very concerned that you did not take her sooner.
CASE MANAGER: Yes, the child is present and can report. She tested negative.
We accept that the hearings are open because we believe the system is better when it works in public. In that context, foster parents’ names are not more sensitive than a child’s history of abuse. Foster parents are good people who largely volunteer to help kids and families that need it. They do not, however, have stronger privacy interests than the actual children and families in the system. When you sign up for this work, you sign up for it in public.
Second, the bill seeks to limit foster parents’ names because disclosure could lead to “unwanted contact” from the child’s “relatives.” This also won’t work and is actually a bad policy goal. Foster parents, parents, relatives, case managers, guardians ad litem, and therapy dogs all sit outside court together, sometimes waiting hours for the case to be called. In most courthouses it’s impossible to hide. And they shouldn’t want to hide: if foster parents are following the Quality Parenting Initiative co-parenting guidelines, then they use that time to talk with the parents and relatives to get to know them and the children better. I hope this sounds as direct as I mean it: a foster parent who doesn’t want contact with a child’s relatives should look for other ways to help children. Fostering isn’t for you.
There may be times when it’s not safe to engage in co-parenting with the child’s family, such as in Candi Johnson’s case. In those situations, court orders and injunctions directed at the parties on the case are the right remedy. Limiting the public’s access to information about foster care providers doesn’t solve the problem: the child’s family already has the foster caregiver’s identity information, or can easily get it by reading the Case Plan or by waiting in the parking lot for 20 minutes. In cases where there are serious safety concerns, there should be serious security responses. A general public records exemption is not a cure.
Finally, I’m not a First Amendment scholar, but “unwanted contact by the press” is why we have public records laws — the press and other watchdogs are supposed to investigate and sometimes that investigation is unwanted. The foster care system is a billion-dollar-a-year government industry. The fact that it recruits and underpays volunteers to perform some of its essential functions does not insulate it from scrutiny.
Now for why it’s actually good to make this information public.
Public information helps make better decisions
After publishing the Visualizing Foster Care Instability project, I received a lot of comments asking for a dashboard that gives information about the foster care providers. I attended a Florida Youth Shine quarterly meeting where a young person still in foster care said in a session, “There should be some way for us to know about placements before we go there. We should know as much about them as they do about us.” That resonated with me. We wouldn’t stay at a hotel without reading the reviews, but we expect foster kids to just show up at a house in the middle of the night and take it on faith that it will be safe.
So I made something: The Florida Foster Care Provider Dashboard. (I’m not really good at naming things.) The goal is to put everything we know about foster care providers in one place so that advocates and the public can make better decisions on how the system operates.
It’s functional, not pretty. I recommend viewing it in full screen because it has a lot of parts. Here’s what it looks like:
Here’s what you can learn from it:
How long do kids stay with ____? This chart in the top-right shows the distribution of how long a provider’s placements lasted. A provider’s average placement length may be skewed due to a few kids they kept for years. This chart shows the real breakdown. You can set it to measure in days, weeks, months, or years. Above, you can see that this provider had 315 placements in total and 134 that lasted less than a week. The average placement length was 45 days, but the median was only 9 days. This is not a stable placement for most kids who go there.
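The average-versus-median point deserves a tiny numeric demonstration (the durations below are made up, not this provider’s):

```python
import statistics

# Ten short stays plus two multi-year outliers, in days (invented data).
durations = [3, 5, 6, 7, 9, 9, 11, 14, 20, 30, 400, 800]

mean_days = statistics.mean(durations)      # pulled way up by two outliers
median_days = statistics.median(durations)  # the typical stay
print(mean_days, median_days)  # 109.5 10.0
```

Two long placements drag the mean above 100 days while the typical child still leaves in about a week and a half, which is why the dashboard shows the full distribution rather than a single average.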
What kind of placement is this? The two boxes in the bottom left break down DCF’s own designation of the placement type. Above you can see that this provider was almost exclusively a foster care provider (orange bar), but spent 16 days as a relative placement, and 4 as a group home. The Service Type shows that this home was mostly for kids aged 13-17.
Why do kids leave this placement? The third box on the bottom left shows the reasons that placements with this provider ended. You can see above that 101 placements ended “in accordance with the case plan,” which usually means pursuant to a court order, while 61 placements ended because the child ran away. Twenty kids aged out of this home.
How has it changed over time? The box in the bottom right corner shows the complete placement history for this provider. You can see that they started fostering in 2003 and had their last placement in 2018. Over time, placements with this provider have gotten shorter and shorter. That’s fairly normal for the long-time placements. The first few placements are usually the longest.
What about some summary stats? Right in the middle are the summary stats that I’ve calculated for the provider: number of children, number of placements, average placement length, median placement length, average miles kids moved from the last placement, and average concurrent kids (meaning how many children were placed there simultaneously on average).
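Of these summary stats, “average concurrent kids” is the least obvious to compute. One plausible reading (my sketch below, not necessarily how the dashboard does it) is to count, for each placement, how many other placements at the same provider overlap it in time, and then average those counts. The intervals here are invented for illustration:

```python
from datetime import date

# Hypothetical placement intervals (start, end) at one provider; not real data.
placements = [
    (date(2020, 1, 1), date(2020, 3, 1)),
    (date(2020, 2, 1), date(2020, 2, 15)),
    (date(2020, 2, 10), date(2020, 4, 1)),
]

def avg_concurrent(placements):
    """For each placement, count the other placements that overlap it
    in time, then average those counts across all placements."""
    counts = []
    for i, (s1, e1) in enumerate(placements):
        overlap = sum(
            1
            for j, (s2, e2) in enumerate(placements)
            if i != j and s1 < e2 and s2 < e1  # intervals intersect
        )
        counts.append(overlap)
    return sum(counts) / len(counts)

print(avg_concurrent(placements))  # 2.0 — each placement overlaps both others
```

Under this definition, a high-capacity group setting and a high-turnover foster home can both produce a large number, which is exactly the ambiguity flagged in the High Concurrency criterion below.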
I’ve also created Provider Flags that alert you to certain questions you might want to ask about a placement before putting a child there. They are found in the pink box right in the middle of the dashboard. The flags are based on the objective criteria below.
Death of Child: The provider had at least one placement where the end reason was “Death of Child”.
High Reunification, Adoption, Age Out, or Guardianship: The provider was a placement for at least 6 children and more than 50% of them went on to reach the stated permanency goal. This does not mean the children exited care from the provider directly.
High Runaway: The provider had at least 6 placements and more than 25% of placements ended in the child running away.
High Turnover: The provider had at least 25 placements and more than 50% of placements ended in under 30 days.
High Disruption: The provider had at least 25 placements and more than 50% of placements ended because the provider requested a change, the child requested a change, or the placement “disrupted.”
High Concurrency: Children with this provider had an average of 10 or more other children placed there concurrently. This could be either because the provider’s capacity is 10 or more children or because the high turnover rate caused 10 or more children to pass through the provider.
High Mileage: The provider had at least 25 placements and the average child moved more than 50 miles from their previous placement. Miles are calculated from the center of a provider’s zip code region.
High Hospitalization: The provider had at least 6 placements and more than 25% of placements ended due to hospitalization of the child.
First Run Warning: The provider had at least 6 placements and more than 10% of children placed there ran for their first time while with the provider.
Baker Act Warning: The provider had at least 6 placements and more than 5% of placements ended because the child was Baker Acted. Baker Acts were calculated by finding children whose next placements were for “Routine/Emergency Mental Health Services”. Note that for small placements, even one Baker Act will raise this warning.
Arrest Warning: The provider had at least 6 placements and more than 5% of placements ended because the child was arrested. Arrests were calculated by finding children whose next placements were for “Correctional Placement” or whose placement end reason was “Incarceration/Detention”. Note that for small placements, even one arrest will raise this warning.
Night to Night Warning: The provider had at least 25 placements and more than 25% of placements were for 2 or fewer days.
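Because every flag is a simple threshold over the placement records, the whole set can be implemented as a small pass over the data. Here is a hypothetical sketch of a few of the rules above; the field names (`days`, `end_reason`) are my own invention, and the real dashboard’s implementation may differ:

```python
# Sketch of a few of the provider flags, using invented field names.
# Thresholds mirror the criteria listed in the text.
def provider_flags(placements):
    """placements: list of dicts with 'days' (placement length in days)
    and 'end_reason' (DCF's recorded reason the placement ended)."""
    n = len(placements)
    flags = []
    if any(p["end_reason"] == "Death of Child" for p in placements):
        flags.append("Death of Child")
    if n >= 6:
        runaways = sum(p["end_reason"] == "Runaway" for p in placements)
        if runaways / n > 0.25:
            flags.append("High Runaway")
    if n >= 25:
        short = sum(p["days"] < 30 for p in placements)
        if short / n > 0.50:
            flags.append("High Turnover")
        overnight = sum(p["days"] <= 2 for p in placements)
        if overnight / n > 0.25:
            flags.append("Night to Night Warning")
    return flags

# Made-up provider: 10 short runaway placements plus 20 longer ones.
sample = (
    [{"days": 1, "end_reason": "Runaway"}] * 10
    + [{"days": 45, "end_reason": "In accordance with case plan"}] * 20
)
print(provider_flags(sample))  # ['High Runaway', 'Night to Night Warning']
```

Note how the minimum-placement thresholds (6 or 25) keep a single bad outcome at a tiny provider from tripping a percentage-based flag, while the Death of Child flag fires on even one occurrence.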
I joked that this would be like Yelp for Foster Care, so I went ahead and added three more tabs to make that a reality:
School Map: This tab allows you to click on a provider and see the greatschools.org map for the school in its zip code.
Walk Score: This tab allows you to click on a provider and see the walkscore.com ratings for the zip code.
Yelp: This tab allows you to click on a provider and see the yelp.com most popular places in the zip code.
What can we learn from this?
The Herald Tribune did a story on the public records bill a few days ago. In it, the sponsor is quoted as saying:
“The foster parents are not the people who have been suspected of doing anything wrong,” Roach said. “It’s the parents themselves. … Those are the people that need scrutiny, not the foster care parents.”
Hold on now, nobody said bio parents shouldn’t be scrutinized — and they are heavily scrutinized, in the form of evaluations, classes, supervised visits, and home inspections. The question is whether a higher-than-zero level of public scrutiny of foster parents is warranted.
Yes, it is.
If we didn’t know who they were, we wouldn’t know that foster parent “Lor. Hic” has taken in 525 kids for 626 placements and asked for their removal 360 times (“High Disruption”, “Night to Night Warning”). If she’s agreeing to be a night-by-night placement, then she’s running a shelter with a foster care license. We need to talk about that practice.
We wouldn’t know that foster parent “Tif. Gip.” has an average placement length of 29 days, but a median placement length of only 2 days (“High Turnover”). The average child placed with Tif. Gip. would experience over 7.5 roommates during their time there. That’s essentially a group home with a foster home license. We need to talk about that, too.
We wouldn’t know that foster parent “Sha. Rob.” experienced the Death of a Child in 2002, or know whether the teams involved in placing kids there for the next three years were aware of that fact.
We also wouldn’t know that foster parent “Kat. Mel.” had over 10% of her 138 kids run for the first time while in her care (“First Run Warning”). Same for “Pat. Fau.”, “Ann. Gre.”, “Gen. Zie.” and many others. What’s going on in these homes?
If the confidentiality gets expanded to institutional providers, then we wouldn’t know that the Hibiscus Vero Group Home has six flags: Arrest Warning, High Turnover, High Concurrency, High Mileage, First Run Warning, and the Death of a Child in 2012. It may be a perfectly lovely place, but I would want to ask questions about all of those issues before I sent a child there.
I’ve requested the provider payment database from DCF as well, and it’s pending. I plan to add an overlay showing how much providers get paid. An earlier version of the payment data showed a foster parent in Miami (Jef. Hor.) receiving $15,000 per month to care for one child. That is not a typo. If foster parent info is exempted from public records, we wouldn’t know that. The public has the right to scrutinize how the government spends its money. Fifteen thousand a month is either excessive or what everyone else should get.
I’m also working on a version that overlays the Florida Sex Offender database. The foster home for “Tra. Dav.” that I used in the first example above is in a zip code with nearly 250 registered offenders. Most zip codes have a fraction of that. Maybe we want to think about that when we place kids there, especially certain kids.
Beyond just risk factors, there’s also the problem of the foster parents who have literally hurt kids. If their names are exempt from public records, we don’t know who they are unless they kill a child. (And kids are placed with providers even after others die, as seen above.) I appreciate what the sponsor is saying, but the statewide Guardian ad Litem Program was created and funded with taxpayer money and then significantly strengthened in response to a foster parent murdering a child. The potential for misconduct with our most vulnerable children warrants constant vigilance regardless of who the caregiver is. Trust should not be blind.
Okay Robert Latham, you’re a hypocrite for not publishing the foster parents’ names
Here is where some astute commenter sends me a pointed note: since I have chosen to use initials instead of names, I must, apparently, agree that publishing the names would be dangerous.
I believe that publishing the full names would kick the anthill and close down access to a valuable source of public information. I’m using the initials so that readers can focus on the importance of the information and not the hypothetical problems that can come from releasing it (even though it’s been public literally forever). For the public version, I think the three-letter initials are sufficient to find a specific foster parent you’re looking for if you already know the name. I am happy to share the unredacted version with anyone working directly in the system who could use the information.
Public information helps us make the system better, but we can’t do that if we’re not allowed to know things.