By Erica C. Barnett
In 2022, the King County Regional Homelessness Authority did away with the longstanding but flawed practice of physically counting people experiencing homelessness on a single night. By replacing the physical “point in time” count with a statistical model based on Department of Commerce data, combined with interviews with people recruited through the broad-ranging social networks that exist among unsheltered people, the KCRHA hoped to produce a more accurate picture of homelessness in King County.
The interviews, which contributed to the KCRHA’s estimate of more than 53,000 people experiencing homelessness in King County, also served a second, arguably more impactful, purpose: They formed the basis for an overarching plan that will guide the authority’s use of public dollars for the next five years. The Five-Year Plan, which includes recommendations for specific temporary housing types (and initially came with a $12 billion price tag), was based largely on 180 of these interviews, which researchers used to “identify specific temporary and permanent housing models directly from the voices of people living unsheltered, interpreted in partnership with people with lived experience,” according to the final five-year plan.
PubliCola has obtained the transcripts of more than 80 of these interviews, which took place in the early spring of 2022, through a records request. The interviews range from terse question-and-answer sessions to lengthy, discursive conversations in which interviewers abandon the Q&A format to offer opinions, give advice, and tell people they can help them access services—something qualitative researchers are generally cautioned not to do. We also consulted two experts on qualitative research to learn more about how interviews like the ones KCRHA oversaw can best be used, and to learn some best practices for the kind of evaluation the KCRHA was attempting to do.
Additionally, PubliCola talked to an experienced data analyst at the KCRHA, who explained how the project worked. Initially, the interviews (which former KCRHA CEO Marc Dones called “oral histories”) were the sole focus of the research project, which the KCRHA titled “Understanding Unsheltered Homelessness.” Later on, after the Department of Housing and Urban Development rejected the KCRHA’s request to skip the point-in-time count altogether, Dones decided to “combine efforts between doing the Point in Time Count and this qualitative data collection,” Owen Kajfasz, the KCRHA’s acting chief community officer, said. “So we really merged two projects into one data collection.”
To conduct the interviews, KCRHA recruited members of the Lived Experience Coalition, a group that advocates for the inclusion of people with personal experience being homeless in policy and decision-making processes. (KCRHA staff also conducted some of the interviews.) Most interviewers received a two-part training led by Dones, who served as the “primary investigator,” or lead researcher, on the project. Those who couldn’t make the training or came on board later were instructed to read the training documents, which included a list of 31 questions, before starting work.
LEC members also held all three seats on the advisory board that oversaw the project, and later made up a majority of the team that “coded” the interviews in order to translate them into a set of recommendations for the five-year plan. As we’ve reported, the KCRHA has recently tried to distance itself from the LEC, but at the time—early 2022—the group was deeply integrated into the agency’s operations.
Although the researchers conducted more than 500 interviews, they ended up using just 180 transcripts. Some of the earliest interviews, which took place in South King County, didn’t include the proper consent forms or had transcripts that couldn’t be traced back to interview subjects. Overall, the interviews oversampled straight white men and undersampled women, people of color, and LGBTQ+ people, forcing researchers to go back and add some incomplete interviews to the pool to correct the imbalance—overrepresenting marginalized groups because they are the least served by the current shelter and service system.
“This wasn’t a perfect process,” Kajfasz acknowledged. “We did have more of those [interviews] that we couldn’t use than I was anticipating.”
Once the interviews were complete, a group of LEC members and KCRHA staff, aided by technical assistance from a Washington, D.C.-based firm called the Cloudburst Group, read transcripts of the interviews and “coded” them to correspond with different shelter and housing types, using the codes “to identify specific temporary and permanent housing models directly from the voices of people living unsheltered, interpreted in partnership with people with lived experience,” according to the Five-Year Plan. Although the final plan no longer includes specific dollar figures or numeric recommendations (eliminating, for example, a chart that suggested building no new tiny house villages), it still represents a proposal that would, if implemented, reverse many longstanding policies and invest heavily in new approaches, like many more permanent parking spaces for people living in RVs and cars across the city.
The interview transcripts show many interviewers engaged in patient, compassionate attempts to elicit clear responses from people who were often discursive, rambling, and hard to follow. Interviewers from the Lived Experience Coalition used their own experiences to guide conversations and make their interview subjects comfortable—a key reason for including people with lived experience in data collection.
In one such conversation, the interviewer expresses concern and empathy when the person they’re talking to describes a series of traumatic situations, while still keeping the overall conversation on track. “I’m sorry you experienced that in such a tragic way. Thank you for just being vulnerable and open and sharing that because it’ll give me a glimpse of who you are and what you’ve been through,” the interviewer says, then moves on to the next question.
Researchers who use qualitative methods say it’s important to allow the conversation to flow and to use the questions as a guide rather than reading them word by word.
“In a qualitative interview, so much depends on the amount of trust and empathy that the interviewer can show,” said New York University School of Social Work professor Dr. Deborah Padgett, an expert on qualitative research who has written several books on the subject. “If you’re there in a trusting way, and you’re there as a researcher as opposed to a case worker or outreach worker or more official person, it gives you some legitimacy.”
Dr. Tyler Kincaid, a research assistant professor at the Department of Psychiatry and Behavioral Sciences at the University of New Mexico who has led qualitative research about people experiencing homelessness, said qualitative interviews can’t be scripted to the extent that an ordinary survey can. “There’s an art to making the participant comfortable enough to respond, to keep the conversation going,” Kincaid said, and going “off script” is just part of the process. “If you have, say, 10 semi-structured questions, hopefully there’s followup questions and side questions and things within those ten standard questions on a piece of paper to help to bring out more information,” Kincaid said.
But the transcripts also revealed troubling practices. In the transcripts, interviewers often cut people off, talk at length about themselves, or offer unsolicited advice. Several times, interviewers suggest they or someone else at the interview site can directly connect people with services, such as housing vouchers or a workaround for King County’s hated 211 system, or jump in with answers before the person has had time to respond.
The experts we spoke to said it’s important for researchers not to involve themselves in people’s lives or promise things they can’t deliver. Kajfasz said researchers were told that the point of the research was data collection, not problem-solving, but “folks with lived experience, when they know they have a solution for somebody, they offer it.” Some interview locations had housing navigators or other services on site, Kajfasz added.
In many cases, interviewers suggested answers to their own questions before people had a chance to speak. In one representative transcript, an interviewer repeatedly appears to cut their subject, a Native American man, off—suggesting, for example, that the reason the man is homeless is because he “prefer[s] to be in the woods” and doesn’t “want to be acclimated in society.” Although the man says “yeah” in response to both those statements, he objects when the interviewer continues, “You don’t want an apartment.” “Well, I do eventually,” he says.
Later in the transcript, a second interviewer offers their opinion about the man’s substance use, calling it “impressive” that “it’s just Everclear now” and adding, “you’ll wean yourself off that soon enough,” prompting the man to say he isn’t so sure. With drugs, the second interviewer continues, “when you wake up in the morning, you hate yourself.” “I don’t ever hate myself,” the man retorts. After a tangent about the concerns police have raised about encampment fires, the second interviewer tells the man he should join the military. “You can still do it. You’re young enough. You understand?”
In another transcript, the interviewer suggests that their subject, a Latino man who appears to struggle with English, find work as a day laborer—a stereotypical job for Spanish-speaking immigrants. None of the interviews PubliCola reviewed were conducted in a language other than English; Kajfasz said the KCRHA offered “language services,” but that “the majority of folks, even if English was not their first language, were choosing English.”
Padgett says that while good qualitative research requires an interviewer to be patient, “take a lot of time,” and build trust and empathy, interviewers should never weigh in with their own opinions or advice. “If you’re giving opinions about what they’re saying, you’re taking up valuable space in the conversation—and they may not want that level of pity,” Padgett said. “When I’m training people, the idea is to be empathic. In the moment, you might say, ‘I’m really sorry that happened to you,’ but [you shouldn’t] go down the rabbit hole of ‘tell me more about your trauma.'”
In an interview that appeared to cross this line, an interviewer jumped in when the man he was interviewing, whose race is not identified in the transcript, said he was probably homeless because he’d been in the foster care and prison systems. “Now you know that’s not true,” the interviewer said. The conversation continued:
Interviewer: You can’t tell me that, because you’re here for a reason. You got kids. We went through the pandemic. You got sick two times, you just said. Hell to the no, ain’t nobody in their life ever in my face will ever say that. Not while I’m standing there. I’m sorry. That just hurt. That just touched me.
Subject: No, I, I—
Interviewer: Don’t ever say that shit again to me.
Subject: My apologies. I will not say that again.
Interviewer: To anybody, because I want you to know, that’s something that’s instilled in you and that you’re going to instill every person that come through your life, especially your children. Because my kids, they already know like I’m their ride or die. They tell people, you don’t know my mama when they tell them, oh you know how your mom- No. You don’t know my mom. So, it’s a different generation. When we got to teach that generation, there is something to live for. You’re not here for nothing. You what I’m saying? I don’t know where we went off of. Let’s see. Where did you go to school?
Padgett, who looked over the interview questions before we spoke, said the questions themselves were “pretty good, but it’s qualitative, so what’s good on paper only comes out as good if the interviewer does it well so there’s a lot more onus put on the interviewer” to keep things on track. The KCRHA did not provide its training materials, but Padgett said she usually has trainees do mock interviews, then supervises them and provides feedback on their methods throughout a project so they can adjust and improve.
In addition to asking leading questions and interrupting, a number of transcripts include interviewers skipping past questions, making assumptions about people’s gender identity or sexual orientation, and speaking excessively about themselves. In one transcript, an interviewer provides a detailed roster of their own family members’ birthdays; in another, the interviewer tries to recruit the person they’re interviewing to join the Lived Experience Coalition and the KCRHA’s Vehicle Residency Policy Group.
“It’s not about the interviewer,” Padgett said. “You should think of yourself as wearing a hat that says ‘researcher’ on it, and if you take that hat off and become a comrade of lived experience, then you’re losing what qualitative [research] does best, which is having some distance but also empathy. It’s a juggling act.”
“You really don’t want to make some sort of big, generalized governmental or programmatic decisions just based off qualitative research.” —Dr. Tyler Kincaid, University of New Mexico
Once the interviews were complete, another team of researchers, which included several members of the LEC, translated them into housing types, using specific keywords and concepts that people brought up during their conversations to create a roster of shelter types that might be appropriate. People who are using drugs but want to get sober might end up in a box titled “recovery housing,” while those with medical problems might end up in another box labeled “medical respite.” Many of the 180 interviews were with people living in their vehicles or RVs, who often ended up in separate boxes for safe parking and RV safe lots.
“It wasn’t directly, ‘hey, I need medical respite,’ so these people get medical respite, or ‘hey, I need RV parking,’ so this person gets RV parking. It was looking at all of these types of challenges folks are facing,” Kajfasz said. “And for some of those, we’re having to take pieces of information across the interview.” People who were employed but couldn’t afford rent, for example, suggested a need for more housing with supported employment services, while people struggling to stay sober suggested a need for sober housing.
At a glance, some of these solutions can seem overly determinative—some people who want to quit drinking or using other drugs might do better living independently than moving into group recovery housing, for example. Others, like RV parking lots, are widely viewed as short-term solutions, not permanent homes. Although these may seem like minor issues—shouldn’t people trying to avoid drugs and alcohol jump at an opportunity for a room in sober living, even if they would prefer a private apartment?—they translate into real policy choices, and ultimately into real money.
The initial version of the Five-Year Plan called for nearly 4,000 medical respite beds ($2.7 billion over five years); 2,570 units of recovery housing ($1.8 billion); and nearly 5,000 permanent parking spots for passenger vehicles and RVs ($192 million). The specific numbers and dollar figures may have been excised from the final plan, but the mix of shelter, or “temporary housing,” types—based on an “analysis of PIT interviews and input of the Lived Experience Commission advisory group,” according to an internal memo—remains the same, so it seems important to get it right.
“You really don’t want to make some sort of big, generalized governmental or programmatic decisions just based off qualitative research,” Kincaid, from the University of New Mexico, said. For example, he said, it would be “so difficult to [use] any sort of qualitative research” as the basis for investing in one type of shelter over another, unless people consistently identified a specific type of shelter they wanted.
Padgett, from NYU, said she believes strongly that “well-done, rigorous qualitative research can play a strong and scientifically valid role, but it’s all in how you handle the information so that you’re not coming to conclusions with no basis in the data.”
In a memo from September 2022, consultants from the Cloudburst Group summarized some of the lessons the LEC and KCRHA learned from the Understanding Unsheltered Homelessness project. Among their conclusions: If the KCRHA does another series of interviews in the future, researchers need to identify the intent of the project before starting interviews, and trainers should emphasize the need to ask questions consistently, “as well as allowing participants to speak and not be interrupted.”
The way the researchers recruited participants—by identifying an initial “wave” of subjects who recruited new people through their social networks—was flawed and contributed to an interview pool that was disproportionately made up of straight, white men.
Finally, the memo noted, it was hard to interpret some interviews because they included multiple people (interviewers as well as people who approached and started talking during interviews); in the future, the memo says, the protocol for interviews “should establish that these are individual interviews.”
Kajfasz said that without Dones’ “significant expertise” in the area of qualitative research, the KCRHA isn’t planning to do an interview project of similar size and scope any time in the near future. Nor will in-person interviews form the basis for the next point-in-time count, which the KCRHA must conduct next year. The KCRHA is “currently conferring with HUD about what the next PIT count” will involve, Kajfasz said, but it probably will never look like last year’s count again. “Never do I ever want to do the point-in-time count and large-scale qualitative interviewing together,” Kajfasz said.