
I have this assignment

Sociology

I have this assignment. It's a very important assignment and the professor is a tough one, so please be careful with copying and pasting. She once told me that I copied and pasted, and also said that I changed some words of the original source but kept the same text, so please be careful.

I have attached the syllabus of the course so you know what it is about.

So, the assignment is 10 pages long. You should use six evaluations. I have attached a full description of it. Please note: this assignment has three drafts, but you will only write the final one, taking into account the points the professor noted on them. Everything is detailed in the attached summary of the assignment, and I have attached two papers by students from a previous year as examples. Please do as they did; I attached them just as examples.

The topic I chose to write about is the No Child Left Behind Act, so please write about it. Once you accept the request, I will attach the PowerPoints we used in class so they can help you as you work on this paper.

SYLLABUS

Course Description
This course introduces students to policy and program evaluation. Topics include understanding the nature and rationale of, and the need for, program evaluation; assessing program theory and design; assessing program process and implementation; measuring and monitoring program outcomes; understanding the effects of impact evaluation, comparison group designs, and designs with strict controls; detecting, interpreting, and exploring program effects; assessing the economic efficiency of a program; planning an evaluation; and understanding the social and political context of evaluation. The final project will be a program evaluation synthesis of a (a) federal, (b) state, (c) county, or (d) municipal program based on the student's choice.

Learning Outcomes
At the conclusion of this course, students should
• be knowledgeable about the general fundamental concepts of policy and program evaluation;
• be knowledgeable about select data utilized in select policy and program evaluations;
• be knowledgeable about select methods utilized in select policy and program evaluations;
• possess the skills needed to interpret results of select policy and program evaluations; and
• possess the skills needed to communicate and present research questions/goals, data, methods, and findings of select policy and program evaluations to public policy makers and other non-technical audiences in a way understandable to them.

Class Website
This class uses Blackboard and Blackboard Collaborate Ultra to enhance the online learning experience of students. The Blackboard website contains the class syllabus, worksheets, and lecture slides. The Blackboard Collaborate Ultra website contains the venues for the synchronous class sessions. Recordings will be available for 150 hours.
Source for the Blackboard website: https://mymasonportal.gmu.edu/ -> courses
Source for the Blackboard Collaborate Ultra website: https://mymasonportal.gmu.edu/ -> courses -> tools -> Blackboard Collaborate Ultra -> session

Course Materials
Required Readings: Allen Rubin (2020) Program Evaluation: Pragmatic Methods for Social Work and Human Service Agencies (New York: Cambridge University Press). 978-1-108-79909-6.

Course Requirements

Prerequisite
Students who have not yet taken POGO 511, PUBP 511, PUBP 704, or an equivalent class need permission of the instructor.

Class Attendance/Participation/Behavior
Students are highly encouraged to attend class. Many studies have shown a strong correlation between class participation and high grades. Students are expected to prepare for class by studying the reading assignments before class, to arrive on time, and to participate in class discussions.

Writing Assignments/Final Paper/Deadlines
The final project will be a program evaluation synthesis of six (6) evaluations of a (a) federal, (b) state, (c) county, or (d) municipal program based on the student's choice. Class grades will be based on
• a first draft of the introduction of the synthesis of six (6) evaluations of the chosen policy or program (15% of the grade):
o Microsoft Word document;
o name of student;
o title with name of discussed program and the term "synthesis;"
o one page minimum, five pages maximum;
o double spaced;
o 12 pt. font;
o document page number(s);
o direct quotes need a page number; if there are no page numbers, state "n.p.;"
o references in reference section in alphabetical order by last name;
o American Psychological Association (APA) 7, author/year of publication style https://www.apastyle.org/;
o to be submitted to the instructor by e-mail by February 15th, 4:30 pm;
• a second draft of the introduction and a first draft of the data and methods sections of the synthesis of six (6) evaluations of the chosen policy or program (15% of the grade):
o Microsoft Word document;
o name of student;
o title with name of discussed program and the term "synthesis;"
o two pages minimum, ten pages maximum;
o double spaced;
o 12 pt. font;
o document page numbers;
o direct quotes need a page number; if there are no page numbers, state "n.p.;"
o references in reference section in alphabetical order by last name;
o American Psychological Association (APA) 7, author/year of publication style https://www.apastyle.org/;
o to be submitted to the instructor by e-mail by March 15th, 4:30 pm;
• a final presentation (10 minutes minimum, 15 minutes maximum; five slides minimum, 25 slides maximum; ppt; 25% of the grade);
• a final paper, i.e., a program evaluation synthesis (i.e., introduction; background/literature* review; data; methods; results; public policy (optional); conclusion) of the six (6) evaluations of the chosen policy or program (45% of the grade):
o Microsoft Word document;
o name of student;
o title with name of discussed program and the term "synthesis;"
o ten pages minimum, thirty pages maximum;
o double spaced;
o 12 pt. font;
o document page numbers;
o direct quotes need a page number; if there are no page numbers, state "n.p.;"
o references in reference section in alphabetical order by last name;
o American Psychological Association (APA) 7, author/year of publication style https://www.apastyle.org/;
o to be submitted by 10 pm on the day of the presentation.

* "The literature," i.e., what others, not the author/s of a study, have done/found.

Assignment Submission, Late or Missing Assignments
Drafts are due as an e-mail submission to the instructor at the beginning of class. A draft that was submitted after the deadline is considered late. Students will lose 20 (out of 100) points every 24 hours after the deadline. After five days, assignments will not be evaluated by the instructor (i.e., zero grade). Final papers are due as an e-mail submission to the instructor by 10 pm on the day of the presentation. On the first day of class, a sign-up list for the final presentations will be made available. On presentation day, students are expected to arrive at the beginning of class, i.e., students are discouraged from arriving at the presentation time predicted by them. A presentation that was not held in person, i.e., in absence, is not considered a presentation (i.e., zero grade). A final paper that was submitted after the deadline is considered late. Students will lose 20 (out of 100) points every 24 hours after the deadline. After five days, assignments will not be evaluated by the instructor (i.e., zero grade). Appeals on the paper grade must be made in writing within 72 hours after grades have been posted on patriotweb.gmu.edu. Final class grades are non-negotiable.

Academic Accommodation for a Disability
Students with a disability or who need academic accommodations are encouraged to see the instructor and contact the Office of Disability Services. All academic accommodations must be arranged through Disability Services (http://ds.gmu.edu).
GMU/Schar School Policy on Plagiarism
The profession of scholarship and the intellectual life of a university, as well as the field of public policy inquiry, depend fundamentally on a foundation of trust. Thus, any act of plagiarism strikes at the heart of the meaning of the university and the purpose of the Schar School. It constitutes a serious breach of professional ethics, and it is unacceptable. Plagiarism is the use of another's words or ideas presented as one's own. It includes, among other things, the use of specific words, ideas, or frameworks that are the product of another's work. Honesty and thoroughness in citing sources are essential to professional accountability and personal responsibility. Appropriate citation is necessary so that arguments, evidence, and claims can be critically examined. Plagiarism is wrong because of the injustice it does to the person whose ideas are stolen. But it is also wrong because it constitutes lying to one's professional colleagues. From a prudential perspective, it is shortsighted and self-defeating, and it can ruin a professional career. The faculty of the Schar School takes plagiarism seriously and has adopted a zero-tolerance policy. Any plagiarized assignment will receive an automatic grade of "F." This may lead to failure for the course, resulting in dismissal from the University. This dismissal will be noted on the student's transcript. For international students who are on a university-sponsored visa (e.g., F-1, J-1, or J-2), dismissal also results in the revocation of their visa. To help enforce the Schar School policy on plagiarism, all written work submitted in partial fulfillment of course or degree requirements must be available in electronic form so that it can be compared with electronic databases, as well as submitted to commercial services to which the School subscribes. Faculty may at any time submit a student's work without prior permission from the student.
Individual instructors may require that written work be submitted in electronic as well as printed form. The Schar School policy on plagiarism is supplementary to the George Mason University Honor Code; it is not intended to replace it or substitute for it. http://schar.gmu.edu/current-students/masters-advising/academic-policies-forms/

Notice of Mandatory Reporting of Sexual Assault, Interpersonal Violence, and Stalking
As a faculty member, I am designated as a Responsible Employee and must report all disclosures of sexual assault, interpersonal violence, and stalking to Mason's Title IX Coordinator per University Policy 1412. If you wish to speak with someone confidentially, please contact one of Mason's confidential resources, such as the Student Support and Advocacy Center (703.380.1434) or Counseling and Psychological Services (CAPS) (703.993.2380). You may also seek assistance from Mason's Title IX Coordinator by calling 703.993.8730 or by e-mail.

Schar School Master of Public Administration (MPA) Diversity Statement
The Schar School MPA program is committed to creating a learning environment that reflects the growing diversity of the modern workplace and of the communities that are being served by public service organizations. We welcome, value, and foster respect for all individuals and their differences, including race and ethnicity, socio-economic status, sex, sexuality, gender expression and identity, national origin, first language, religion, ideology, age, and ability. We encourage all members of the learning environment to engage with the material personally, but also to be open to exploring and learning from experiences different from their own.
Resources:
Mason Writing Center Arlington http://writingcenter.gmu.edu/
Mason Libraries Arlington http://library.gmu.edu/
Mason Libraries Arlington POGO646: infoguides.gmu.edu/evaluation
University Life Arlington https://ularlington.gmu.edu/
Mason Patriot Pantry https://ssac.gmu.edu/patriot-pantry/
Mason Student Support and Advocacy Center (SSAC) https://ssac.gmu.edu/
Mason Student Health Services Arlington https://shs.gmu.edu/
Mason Counseling and Psychological Services Arlington http://caps.gmu.edu/
Mason Learning Resource Center https://into.gmu.edu/learning-resource-center
YMCA Arlington https://www.ymcadc.org/locations/ymca-arlington/?bid=03 ($2 per visit w/ Mason student ID)
Mason Emergency Preparedness Guides https://ehs.gmu.edu/emergencymanagement/plans-guides/
Mason Safe Return to Campus https://www2.gmu.edu/Safe-Return-Campus

Course Outline

January 25
Topic: Introduction to Class, Introduction to Policy and Program Evaluation
Study Assignment for January 25th: Eleanor Chelimsky (2017) "Credibility, Policy Use, and the Evaluation Synthesis." In Credible and Actionable Evidence: The Foundation for Rigorous and Influential Evaluations. Eds. Stewart I. Donaldson, Christina Christie, & Melvin M. Mark (pp. 177-200). Thousand Oaks, CA: SAGE. http://mutex.gmu.edu/login?URL=http://methods.sagepub.com.mutex.gmu.edu/base/download/BookChapter/credible-and-actionable-evidence/i780.xml
Assignment (due February 1st): Think about a (a) federal, (b) state, (c) county, or (d) municipal program to be synthesized for the final project. Be prepared to talk about your final paper in class.

February 1
Topic: Introduction and Overview
Study Assignments for February 1st: Allen Rubin (2020) Program Evaluation: Pragmatic Methods for Social Work and Human Service Agencies (New York: Cambridge University Press). 978-1-108-79909-6. Chapter 1: Introduction and Overview. 3-22.
Patrick J. Wolf, Brian Kisida, Babette Gutmann, Michael Puma, Nada Eissa, and Lou Rizzo (2013) "School Vouchers and Student Outcomes: Experimental Evidence from Washington, DC." Journal of Policy Analysis and Management 32.2, 246-270. http://mutex.gmu.edu/login?URL=http://search.ebscohost.com.mutex.gmu.edu/login.aspx?direct=true&db=bth&AN=86235898&site=bsi-live
Please bring the completed worksheet "Wolf et al. (2013)" to class, available on Blackboard.
Class visit: Kimberly MacVaugh, Policy and Government Librarian.
Assignment (due February 8th): Think about a (a) federal, (b) state, (c) county, or (d) municipal program to be synthesized for the final project. Be prepared to talk about your final paper in class.

February 8
Topic: Ethical and Cultural Issues in Program Evaluation
Study Assignments for February 8th: Allen Rubin (2020) Program Evaluation: Pragmatic Methods for Social Work and Human Service Agencies (New York: Cambridge University Press). 978-1-108-79909-6. Chapter 2: Ethical and Cultural Issues in Program Evaluation. 23-42.
Rosalie L. Pacula, David Powell, Paul Heaton, and Eric L. Sevigny (2015) "Assessing the Effects of Medical Marijuana Laws on Marijuana Use: The Devil is in the Details." Journal of Policy Analysis and Management 34.1, 7-31. http://mutex.gmu.edu/login?URL=http://search.ebscohost.com.mutex.gmu.edu/login.aspx?direct=true&db=bth&AN=99922611&site=bsi-live
Please bring the completed worksheet "Pacula et al. (2015)" to class, available on Blackboard.
Assignment (due February 15th, 4:30 pm): Prepare a first draft of the introduction of the synthesis of the chosen policy or program. Submit this draft to the instructor by e-mail.

February 15
Topic: Needs Assessment
Study Assignments for February 15th: Allen Rubin (2020) Program Evaluation: Pragmatic Methods for Social Work and Human Service Agencies (New York: Cambridge University Press). 978-1-108-79909-6. Chapter 3: Needs Assessment. 45-62.
William G. Bowen, Matthew M. Chingos, Kelly A. Lack, and Thomas I. Nygren (2014) "Interactive Learning Online at Public Universities: Evidence from a Six-Campus Randomized Trial." Journal of Policy Analysis and Management 33.1, 94-111. http://mutex.gmu.edu/login?URL=http://search.ebscohost.com.mutex.gmu.edu/login.aspx?direct=true&db=bth&AN=92866901&site=bsi-live
Please bring the completed worksheet "Bowen et al. (2014)" to class, available on Blackboard.
Assignment (due February 22nd): Be prepared to talk about your final paper in class.

February 22
Topic: Survey Methods for Program Planning and Monitoring
Study Assignments for February 22nd: Allen Rubin (2020) Program Evaluation: Pragmatic Methods for Social Work and Human Service Agencies (New York: Cambridge University Press). 978-1-108-79909-6. Chapter 4: Survey Methods for Program Planning and Monitoring. 63-83.
Stephanie P. Browne and Sara LaLumia (2014) "The Effects of Contraception on Female Poverty." Journal of Policy Analysis and Management 33.3, 602-622. http://mutex.gmu.edu/login?URL=http://search.ebscohost.com.mutex.gmu.edu/login.aspx?direct=true&db=bth&AN=96667219&site=bsi-live
Optional: The Journey to the Evidence Act 2019, Part I. Event: Urban Institute 10/03/17: Realizing the Promise of Evidence-Based Policymaking https://www.urban.org/events/realizing-promise-evidence-based-policymaking
Please bring the completed worksheet "Browne & LaLumia (2014)" to class, available on Blackboard.
Assignment (due March 1st): Be prepared to talk about your final paper in class.

March 1
Topic: Selecting and Measuring Outcome Objectives
Study Assignments for March 1st: Allen Rubin (2020) Program Evaluation: Pragmatic Methods for Social Work and Human Service Agencies (New York: Cambridge University Press). 978-1-108-79909-6. Chapter 5: Selecting and Measuring Outcome Objectives. 87-108.
Carolyn J. Heinrich, Patricia Burch, Annalee Good, Rudy Acosta, Huiping Cheng, Marcus Dillender, Christi Kirshbaum, Hiren Nisar, and Mary Stewart (2014) "Improving the Implementation and Effectiveness of Out-of-School-Time Tutoring." Journal of Policy Analysis and Management 33.2, 471-494. http://mutex.gmu.edu/login?URL=http://search.ebscohost.com.mutex.gmu.edu/login.aspx?direct=true&db=bth&AN=94874275&site=bsi-live
Optional: The Journey to the Evidence Act 2019, Part II. Podcast: The Lab @ DC 11/05/17: Recommendations of the Commission on Evidence-Based Policymaking https://soundcloud.com/user-768286365/nick-hart-recommendations-of
Please bring the completed worksheet "Heinrich et al. (2014)" to class, available on Blackboard.
Assignment (due March 8th, 4:30 pm): Be prepared to talk about your final paper in class.

March 8
Topic: Inference and Logic in Pragmatic Outcome Evaluation
Study Assignments for March 8th: Allen Rubin (2020) Program Evaluation: Pragmatic Methods for Social Work and Human Service Agencies (New York: Cambridge University Press). 978-1-108-79909-6. Chapter 6: Inference and Logic in Pragmatic Outcome Evaluation: Don't Let the Perfect Become the Enemy of the Good. 109-121.
Steven W. Hemelt and Dave E. Marcotte (2013) "High School Exit Exams and Dropout in an Era of Increased Accountability." Journal of Policy Analysis and Management 32.2, 323-349. http://mutex.gmu.edu/login?URL=http://search.ebscohost.com.mutex.gmu.edu/login.aspx?direct=true&db=bth&AN=86235901&site=bsi-live
Optional: The Journey to the Evidence Act 2019, Part III. Podcast: The Lab @ DC 11/05/17: Encouraging Government to Use a Portfolio of Evidence https://soundcloud.com/user-768286365/kathryn-newcomer-how-do-we
Please bring the completed worksheet "Hemelt & Marcotte (2013)" to class, available on Blackboard.
Assignment (due March 15th): Prepare a second draft of the introduction and a first draft of the data and methods sections of the synthesis.
Submit this draft to the instructor by e-mail.

March 15
Topic: Impact Evaluation: Feasible Outcome Evaluation Designs
Study Assignments for March 15th: Allen Rubin (2020) Program Evaluation: Pragmatic Methods for Social Work and Human Service Agencies (New York: Cambridge University Press). 978-1-108-79909-6. Chapter 7: Feasible Outcome Evaluation Designs. 122-136.
Lori S. Bennear, Jonathan M. Lee, and Laura O. Taylor (2013) "Municipal Rebate Programs for Environmental Retrofits: An Evaluation of Additionality and Cost-Effectiveness." Journal of Policy Analysis and Management 32.2, 350-372. http://mutex.gmu.edu/login?URL=http://search.ebscohost.com.mutex.gmu.edu/login.aspx?direct=true&db=bth&AN=86235902&site=bsi-live
Optional: The Journey to the Evidence Act 2019, Part IV. Event: Urban Institute 04/24/18: Using Evidence in Policy and Program Decisions https://www.urban.org/events/using-evidence-policy-and-program-decisions
Please bring the completed worksheet "Bennear et al. (2013)" to class, available on Blackboard.
Assignment (due March 22nd): Be prepared to talk about your final paper in class.

March 22
Topic: Single-Case Designs for Evaluating Programs and Practice
Study Assignments for March 22nd: Allen Rubin (2020) Program Evaluation: Pragmatic Methods for Social Work and Human Service Agencies (New York: Cambridge University Press). 978-1-108-79909-6. Chapter 8: Single-Case Designs for Evaluating Programs and Practice. 137-152.
Jacob Leos-Urbel (2014) "What is a Summer Job Worth? The Impact of Summer Youth Employment on Academic Outcomes." Journal of Policy Analysis and Management 33.4, 891-911.
http://mutex.gmu.edu/login?URL=http://search.ebscohost.com.mutex.gmu.edu/login.aspx?direct=true&db=bth&AN=98198695&site=bsi-live
Optional: The Journey to the Evidence Act 2019, Part V. Event: Urban Institute 10/18/18: Building Evidence and Learning Agendas https://www.urban.org/events/building-evidence-and-learning-agendas-federal-agencies
Please bring the completed worksheet "Leos-Urbel (2014)" to class, available on Blackboard.
Assignment (due March 29th): Be prepared to talk about your final paper in class.

March 29
Topic: Practical and Political Pitfalls in Outcome Evaluations
Study Assignments for March 29th: Allen Rubin (2020) Program Evaluation: Pragmatic Methods for Social Work and Human Service Agencies (New York: Cambridge University Press). 978-1-108-79909-6. Chapter 9: Practical and Political Pitfalls in Outcome Evaluations. 153-172.
Laura E. Grant and Matthew Potoski (2015) "Collective Reputations Affect Donations to Nonprofits." Journal of Policy Analysis and Management 34.4, 835-852. http://mutex.gmu.edu/login?URL=http://web.a.ebscohost.com.mutex.gmu.edu/bsi/pdfviewer/pdfviewer?vid=2&sid=4716ba01-7088-40ee-9320-77dffe88d530%40sdc-v-sessmgr02
Optional: The Journey to the Evidence Act 2019, Part VI. Event: BPC 03/15/19: A New Era for Federal Evaluation: Implementing the Evidence Act https://bipartisanpolicy.org/event/a-new-era-for-federal-evaluation-implementing-the-evidence-act/
Please bring the completed worksheet "Grant & Potoski (2015)" to class, available on Blackboard.
Assignment (due April 5th): Be prepared to talk about your final paper in class.

April 5
Topic: Analyzing and Presenting Data from Formative and Process Evaluations
Study Assignment for April 5th: Allen Rubin (2020) Program Evaluation: Pragmatic Methods for Social Work and Human Service Agencies (New York: Cambridge University Press). 978-1-108-79909-6. Chapter 10: Analyzing and Presenting Data from Formative and Process Evaluations. 175-185.
Terri J. Sabol and P. Lindsay Chase-Lansdale (2015) "The Influence of Low-Income Children's Participation in Head Start on Their Parents' Education and Employment." Journal of Policy Analysis and Management 34.1, 136-161. http://mutex.gmu.edu/login?URL=http://search.ebscohost.com.mutex.gmu.edu/login.aspx?direct=true&db=bth&AN=99922617&site=bsi-live
Optional: The Journey to the Evidence Act 2019, Part VII. Event: Urban Institute 04/02/19: Using Evidence for Improvement in the Foundation for Evidence-Based Policymaking Act https://www.urban.org/events/using-evidence-improvement-foundations-evidence-based-policymaking-act
Please bring the completed worksheet "Sabol & Chase-Lansdale (2015)" to class, available on Blackboard.
Assignment (due April 12th): Be prepared to talk about your final paper in class.

April 12
Topic: Analyzing Data from Outcome Evaluations; In-class presentations of final project
Study Assignment for April 12th: Allen Rubin (2020) Program Evaluation: Pragmatic Methods for Social Work and Human Service Agencies (New York: Cambridge University Press). 978-1-108-79909-6. Chapter 11: Analyzing Data from Outcome Evaluations. 186-209.
Liana Fox, Christopher Wimer, Irwin Garfinkel, Neeraj Kaushal, and Jane Waldfogel (2015) "Waging War on Poverty: Poverty Trends Using a Historical Supplemental Poverty Measure." Journal of Policy Analysis and Management 34.3, 567-592. http://mutex.gmu.edu/login?URL=http://search.ebscohost.com.mutex.gmu.edu/login.aspx?direct=true&db=bth&AN=103144829&site=bsi-live
Optional: The Journey to the Evidence Act 2019, Part VIII. Event: Urban Institute 01/23/20: Delivering on the Evidence Act: How Agencies Can Engage Stakeholders in the Learning Agenda Process https://www.urban.org/events/delivering-evidence-act-how-agencies-can-engage-stakeholders-learning-agenda-process
Please bring the completed worksheet "Fox et al. (2015)" to class, available on Blackboard.
Assignment (due April 19th): Be prepared to talk about your final paper in class.
April 19
Topic: Writing and Disseminating Evaluation Reports; In-class presentations of final project
Study Assignment for April 19th: Allen Rubin (2020) Program Evaluation: Pragmatic Methods for Social Work and Human Service Agencies (New York: Cambridge University Press). 978-1-108-79909-6. Chapter 12: Writing and Disseminating Evaluation Reports. 210-227.
Optional: The Journey to the Evidence Act 2019, Part IX. Event: AEI 01/27/20: How is the Evidence Act Changing Federal, State, and Local Policymaking? https://www.youtube.com/watch?v=lNe5wtI5sTk

April 26
Topic: In-class presentations of final project

May 3
Topic: In-class presentations of final project

May 10
Topic: In-class presentations of final project

Grades will be posted on patriotweb.gmu.edu after May 12th, 10 pm. Graded finals will be available after May 12th, 10 pm.

Contact

Please read the course summary very well. I have attached an example from another student who did the same work before, and please be aware of the APA style for the references.

Writing Assignments
The project will be a program evaluation synthesis of six (6) evaluations of a (a) federal, (b) state, (c) county, or (d) municipal program based on the student's choice.
• a first draft of the introduction of the synthesis of six (6) evaluations of the chosen policy or program:
o Microsoft Word document;
o name of student;
o title with name of discussed program and the term "synthesis;"
o one page minimum, five pages maximum;
o double spaced;
o 12 pt. font;
o document page number(s);
o direct quotes need a page number; if there are no page numbers, state "n.p.;"
o references in reference section in alphabetical order by last name;
o American Psychological Association (APA) 7, author/year of publication style https://www.apastyle.org/;
• a second draft of the introduction and a first draft of the data and methods sections of the synthesis of six (6) evaluations of the chosen policy or program:
o Microsoft Word document;
o name of student;
o title with name of discussed program and the term "synthesis;"
o two pages minimum, ten pages maximum;
o double spaced;
o 12 pt. font;
o document page numbers;
o direct quotes need a page number; if there are no page numbers, state "n.p.;"
o references in reference section in alphabetical order by last name;

Here is what the teacher wants us to do in the paper: The second draft of the paper is supposed to focus on data and methods. You may want to visit the slides from Week 1; Slides 30 and 31 have questions that you could answer in the data and methods sections. You may also want to visit the student papers from past semesters, which I attached here, to see how these students solved these challenges. Also, please be aware of the APA style for the references.

• a final paper, i.e., a program evaluation synthesis (i.e., introduction; background/literature review; data; methods; results; public policy (optional); conclusion) of the six (6) evaluations of the chosen policy or program:
o Microsoft Word document;
o name of student;
o title with name of discussed program and the term "synthesis;"
o ten pages minimum, thirty pages maximum;
o double spaced;
o 12 pt. font;
o document page numbers;
o direct quotes need a page number; if there are no page numbers, state "n.p.;"
o references in reference section in alphabetical order by last name;
o American Psychological Association (APA) 7, author/year of publication style https://www.apastyle.org/;

This is from PowerPoint 1 of Session 1.

Slide 22: Do you have the current structure in your paper?
• Final paper (i.e., program evaluation synthesis) with:
o Data (required; discussed by data set):
o Data set #1 (SNAP administrative data set)
o Data set #2 (American Community Survey, ACS)
o Data set #3 (Current Population Survey Food Security Supplement)
o Methods (required; discussed by method):
o Method #1 (descriptive statistics)
o Method #2 (difference-in-difference method)
o Method #3 (regression analysis)

Slides 30 and 31: Does your current paper answer most of these questions?
• Quantitative data:
o What is the formal, specific name of the data set? (example: "U.S. Census data" is unspecific; "American Community Survey of the U.S. Bureau of the Census" is specific)
o Is it a primary or secondary data set?
o Since when has the data set existed?
o What triggered the data collection in the first place?
o Who collected the data?
o How often is the data set collected?
o Where is the data set housed?
o Who funds the data set?
o What is the dependent variable? [as you may have regressions in your methods section]
o What are the independent variables? [paraphrase if there are many]
o What is the response rate?
o What is n?
• Qualitative data:
o What is the formal, specific name of the data set? (example: "qualitative data" is unspecific; "expert interviews conducted by Katrin Anacker in Columbus, Ohio in October 2013" is specific)
o Is it a primary or secondary data set?
o Since when has the data set existed?
o What triggered the data collection in the first place?
o Who collected the data?
o How often is the data set collected?
o Where is the data set housed?
o Who funds the data set?
o What type of survey is this (mail? phone? Internet? something else?)
o What is the response rate?
o What is n?

• Program evaluation synthesis of a (a) federal, (b) state, (c) county, or (d) municipal program based on the student's choice.
• One program only (more than five years old, details below).
• Six evaluations (details below).

Final Paper: What are acceptable evaluations?
• Program or policy evaluations (oftentimes labeled as such) published by the GAO, an evaluation "shop" (examples: Westat, Mathematica, Abt Associates, Impaq, ICF, etc.), or by academics;
• Peer-reviewed academic journal articles or working papers that evaluate a program or policy (examples: Journal of Policy Analysis and Management, Housing Policy Debate, NBER, etc.).

Final Paper: What are not evaluations?
• Tweets;
• Posts on Facebook;
• Blog posts;
• Opinions (labels: "Opinion" or "Perspective");
• Policy briefs;
• Newspaper articles (examples: The New York Times, The Washington Post, The Los Angeles Times, etc.);
• Magazine articles (examples: Time, Newsweek, Salon, The Atlantic, etc.);
• Non-peer-reviewed journal articles (examples: National Journal, Planning, etc.);
• Book chapters in edited volumes.

Final Paper: What are the required sections of the final paper?
1. Introduction (required!)
2. Literature Review (required!)
o Theme or subtopic #1 (required!)
o Theme or subtopic #2 (required!)
o Theme or subtopic #3 (possibly more, possibly less)
3. Data (required!)
4. Methods (required!)
5. Public policy (optional)
6. Results (required!)
o Theme or subtopic #1 – same as in Literature Review
o Theme or subtopic #2 – same as in Literature Review
o Theme or subtopic #3 – same as in Literature Review
7. Conclusion (required! -- not a summary)
Appendix: Tables – see below (required)

• First draft of the introduction of the synthesis of six (6) evaluations of the chosen policy or program (15% of grade);
• Second draft of the introduction and first draft of the data and methods sections of the synthesis of six (6) evaluations of the chosen policy or program (15% of grade);
• Final paper (i.e., program evaluation synthesis) with:
o Introduction (required) [based on 6 evaluations, other sources];
o Background/literature review (required) [a synthesized background/literature review of the 6 evaluations structured by theme/subtopic; not 6 "stand-alone" summaries of the 6 evaluations];
o Data (required) [by data set, based on 6 evaluations];
o Methods (required) [by method, based on 6 evaluations];
o Results (required) [by theme/subtopic, based on 6 evaluations];
o Public Policy (optional);
o Conclusion (required) [based on 6 evaluations, other sources];
o Appendix (required): Tables – see below.

• Final paper (i.e., program evaluation synthesis) with:
o Introduction (required; topic: impact of SNAP on ….);
o Background/literature review (required) [a synthesized background/literature review of the 6 evaluations structured by theme/subtopic; not 6 "stand-alone" summaries of the 6 evaluations]:
o Topic or subtheme #1 (impact of SNAP on birth weight)
o Topic or subtheme #2 (impact of SNAP on BMI)
o Topic or subtheme #3 (impact of SNAP on vegetable consumption)

• Final paper (i.e., program evaluation synthesis) with:
o Data (required; discussed by data set):
o Data set #1 (SNAP administrative data set)
o Data set #2 (American Community Survey, ACS)
o Data set #3 (Current Population Survey Food Security Supplement)
o Methods (required; discussed by method):
o Method #1 (descriptive statistics)
o Method #2 (difference-in-difference method)
o Method #3 (regression analysis)
Final paper (i.e., program evaluation synthesis) with:
- Results (required; results of the evaluations, using the same themes/subtopics as in the Background/Literature Review):
  - Theme or subtopic #1: impact of SNAP on BMI
  - Theme or subtopic #2: impact of SNAP on birth weight
  - Theme or subtopic #3: impact of SNAP on food intake
- Conclusion (required);
- Appendix (required): Tables (see below).

Checklist of required sections:
- Introduction;
- Literature Review (at least two subtopics; a synthesized background/literature review of the 6 evaluations structured by theme/subtopic; not 6 "stand-alone" summaries of the 6 evaluations);
- Data (discussed by data set);
- Methods (discussed by method);
- Results (at least two subtopics, discussed by theme/subtopic);
- Public Policy (optional);
- Conclusion (not a Summary);
- Appendix (Tables; see below).

Matrix: What are the elements of a well-written introduction?
- provides an interesting concept and presentation;
- is clear and complete;
- discusses the history and evolution of the program to "set the stage";
- briefly discusses the two or three themes or subthemes;
- provides the outline of the paper (introduction, literature review, data, methods, results, (public policy), conclusion);
- see the examples from previous semesters on Blackboard.

Matrix: What are the elements of a well-written data section?
A challenge: your future boss may not be a subject-matter expert and is thus not familiar with your synthesized program. He/she will read about the data sets and the methods for the first time, so you cannot expect him/her to know much about them; you will need to patiently explain aspects so he/she can follow your train of thought.

The data section is grouped by data set, not by evaluation!

Quantitative data:
- What is the formal, specific name of the data set? (Example: "U.S. Census data" is unspecific; "American Community Survey of the U.S. Bureau of the Census" is specific.)
- Is it a primary or secondary data set?
- Since when has the data set existed?
- What triggered the data collection in the first place?
- Who collected the data?
- How often is the data set collected?
- Where is the data set housed?
- Who funds the data set?
- What is the dependent variable? [as you may have regressions in your methods section]
- What are the independent variables? [paraphrase if there are many]
- What is the response rate?
- What is n?

Qualitative data:
- What is the formal, specific name of the data set? (Example: "qualitative data" is unspecific; "expert interviews conducted by Katrin Anacker in Columbus, Ohio, in October 2013" is specific.)
- Is it a primary or secondary data set?
- Since when has the data set existed?
- What triggered the data collection in the first place?
- Who collected the data?
- How often is the data set collected?
- Where is the data set housed?
- Who funds the data set?
- What type of survey is this (mail? phone? internet? something else?)
- What is the response rate?
- What is n?

Matrix: What are the elements of a well-written results section?
- presents program evaluations in a clear, concise, synthesized, and thorough fashion;
- focuses on results grouped by theme/subtopic, not by evaluation;
- makes good use of insights provided by the literature review;
- makes good use of insights provided by the data and methods sections;
- see the student papers from previous semesters on Blackboard.

Matrix: What is included in a well-written conclusion section?
- accurately summarizes the final paper;
- makes clear and logical evaluation or policy recommendations, based on the synthesis;
- conclusions and recommendations are adequate and supported by the synthesis;
- provides plausible program and public policy recommendations;
- discusses the limitations;
- see the examples from previous semesters on Blackboard.

Matrix: What is included in a well-written reference section?
- https://www.apastyle.org/ (APA 7);
- In-text citations, not footnotes;
- Author-year style. Example: "In the U.S., many communities lack affordable housing (Anacker, 2018)."

Matrix: What are common writing pitfalls?
- Chaining direct quotes is NOT writing;
- Use direct quotes only when the original sentence is "earth-shattering" and almost impossible to paraphrase;
- Writing does NOT have flow, i.e., sentences are not connected;
- One sentence = one paragraph;
- Each sentence starts the same way (The study…. The study…. The study….);
- Unclear references ("This," "They," [insert a noun]);
- Fillers that can be left out (and do not change anything);
- Repeated and redundant statements throughout the paper.

Matrix: What are common formatting pitfalls?
- Not providing your name on the paper;
- Not using 12 pt. font;
- Not using double spacing;
- Not inserting page numbers;
- Not providing direct quotes with a reference, including a page number.

Matrix: What are common reference pitfalls?
- References do not follow APA 7 style;
- Having references in footnotes;
- Having references in footnotes that are website links only (instead, use author-year style);
- Not providing references to substantiate statements;
- References do not follow the author-year format (example: bla bla bla (Anacker, 2020));
- References are not in alphabetical order;
- References are ordered by first name;
- References are incomplete;
- References have journal titles in all caps.

An Evaluation Synthesis of the Homelessness Prevention and Rapid Re-Housing Program

INTRODUCTION

The McKinney-Vento Homeless Assistance Act of 1987 was passed by Congress to fundamentally change the way the U.S. addressed homelessness and "aimed to provide 'urgently needed assistance to protect and improve the lives and safety of the homeless, with special emphasis on families and children'" (U.S. Interagency Council on Homelessness, 2016, p. 1). Throughout the law's history, slight adjustments were made that changed the way U.S.
Department of Housing and Urban Development (HUD) homeless programs were administered. It was not until the early 2000s that Congress implemented a comprehensive overhaul of McKinney-Vento through the Homelessness Emergency Assistance and Rapid Transition to Housing (HEARTH) Act. Initially introduced in 2001, the HEARTH Act was finally passed in 2009 under the Obama Administration. Under this law, HUD's homeless programs transitioned from treating those who were currently homeless to preventing people from experiencing homelessness. Extensive changes included expanded eligibility for those considered at risk, an emphasis on outcomes over activities, incentivized behavior, expanded funding for rapid re-housing and emergency homelessness prevention, and more (Berg, 2013).

During the HEARTH Act's journey to passage, the Great Recession began (Berg, 2013). Between 2007 and 2009, the total number of people who experienced homelessness in the U.S. ranged from approximately 630,000 to 647,000 (National Alliance to End Homelessness, 2020). As a result of the economic crisis that struck the U.S. between December 2007 and June 2009, Congress passed the American Recovery and Reinvestment Act of 2009 (Recovery Act) (Federal Reserve, 2013). Under this law, $13.6 billion was allocated for projects and programs administered under HUD. Specifically, $1.5 billion was appropriated for the Homelessness Prevention and Rapid Re-Housing Program (HPRP), a one-time federally funded program that was active from 2009 to 2012 (U.S. Department of Housing and Urban Development, n.d.). According to Berg (2013), "The HPRP … used exact language that had appeared in earlier versions of what became the HEARTH Act … [meaning that] … these models of RRH and homelessness prevention were tested and used in virtually every community in the country" (p. 322).
To respond effectively to the financial crisis, HPRP's income eligibility requirements were adjusted to offer services to a wider range of the population. Specifically, the income threshold for HPRP was set to 50 percent of the Area Median Income (AMI), which was 20 percent higher than the income threshold established under the HEARTH Act. Individuals and families who fell within HPRP's income eligibility (a formula used for the Emergency Shelter Grants Program) were at high risk of becoming homeless, and this income adjustment offered them immediate assistance. The Emergency Shelter Grants Program, an established regulatory framework, allowed HUD to distribute the funds quickly to states, municipalities, counties, and U.S. territories (U.S. Department of Housing and Urban Development, 2016).

This synthesis will review six studies that evaluate the effectiveness of the HPRP on program participants' likelihood of returning to a state of homelessness or re-entering homeless services. The paper will include a literature review followed by an analysis of the data, methods, and results from the evaluations. Two sub-themes will be analyzed as part of the greater analysis. The first is the effect of HPRP on a family's likelihood of re-entry to homelessness/homeless services. The second is the effect of HPRP on homeless veterans and their likelihood of re-entry to homelessness/homeless services. Finally, the synthesis will review possible policy implications, policy alternatives, and limitations of the evaluations.

LITERATURE REVIEW

Authors of all evaluations, except Rodriguez and Eidelman (2017), reported a shift in homelessness policies from temporary to long-term solutions in the years leading up to the beginning of HPRP.
Earlier programs were heavily focused on treating those who were already homeless through temporary means (emergency shelters or transitional housing) instead of preventing people from becoming homeless (Piña & Pirog, 2018). The policy landscape shifted toward programs that offered emergency rental assistance, landlord mediation, housing cost assistance, credit repair, legal services, and utility payments (Brown et al., 2017; Byrne et al., 2016; Piña & Pirog, 2018). Piña and Pirog (2018) cited research that stressed the importance of the relationship between spatial accessibility and program utilization: low-income families rely heavily on public transportation, and, as a result, eligible recipients who did not live close to the service provider were less likely to take advantage of the housing program (Allard et al., 2003). Brown et al.'s (2017) review of the existing literature revealed positive outcomes for HPRP participants shortly after the program's implementation: 89.9 percent of participants exited the program into permanent housing (U.S. Department of Housing and Urban Development, 2016). Finally, all six evaluations noted a limited body of empirical research assessing the long-term effects of HPRP on improving outcomes for individuals and families, further stating that more rigorous and in-depth research is required, especially among subpopulations.

Subtheme 1: Effect of HPRP on a Family's Likelihood of Re-entry to Homelessness/Homeless Services

Traditionally, most prevention programs focus on family outcomes, while permanent housing support often prioritizes single adults (Brown et al., 2017; Byrne et al., 2016). Earlier research revealed that families experiencing homelessness who received assistance through subsidized housing and other forms of intervention were more likely to stabilize their housing situation than those who did not receive assistance (Vaclavik et al., 2018).
Piña and Pirog (2018), Rodriguez and Eidelman (2017), and Vaclavik et al. (2018) reviewed the Family Options Study conducted by Gubits et al. (2015) and Gubits et al. (2016). The researchers randomly assigned families in homeless shelters to a housing intervention (treatment group) or to "usual care" (control group). The control group was not assigned to any intervention and was left to seek services on its own. The study found similar outcomes for families assigned to a rapid re-housing (RRH) intervention and those assigned to "usual care" after the 18-month follow-up period. Also, upon comparing those receiving RRH to those who received permanent subsidies, the Family Options Study reported that RRH recipients experienced greater shelter use after 18 months than their counterparts. Brown et al. (2017) cited research emphasizing that the demographics, risk factors, and psychological needs of families and individuals varied; for instance, individuals were more likely to suffer from mental illness or substance abuse and required additional services (Culhane et al., 2007; Shinn et al., 1998). The researchers' review of the existing literature also found a higher rate of return to homelessness among single adults compared to families who received similar HPRP interventions. Specifically, Brown et al. (2017) refer to an evaluation of Connecticut's RRH program that reported that the rate of return to homelessness after three years was only five percent for families, while single adults experienced an 18 percent rate of return. Similarly, research cited by Byrne et al. (2016) showed low rates of re-entry to homelessness for households compared to individuals. Byrne et al.'s (2016) overall reading of the literature was that previous studies observed re-entry rates between 2 and 5 percent for families in prevention programs and between 5 and 15 percent for families in RRH programs.
Subtheme 2: Effect of HPRP on a Homeless Veteran's Likelihood of Re-entry to Homelessness/Homeless Services

The U.S. Department of Veterans Affairs (VA) Supportive Services for Veteran Families (SSVF) program was an initiative that offered both veterans and veteran families homelessness prevention (HP) and rapid re-housing (RRH) services similar to those in the HPRP. All six of the evaluations reviewed the existing literature about this program. The HPRP was a catalyst for the transformation of federal homeless programs and also influenced the creation of the SSVF (Berg, 2013). Brown et al. (2017), Brown et al. (2018), Piña and Pirog (2018), Rodriguez and Eidelman (2017), and Vaclavik et al. (2018) all cited a study conducted by Byrne et al. (2016). The study results revealed that 16 percent of individual veterans returned to a shelter within one year of exiting RRH programs and 26.6 percent returned to a shelter within two years (Rodriguez & Eidelman, 2017). Vaclavik et al. (2018) cited research indicating that veterans and individuals who participated in HPRP and were unable to secure a higher source of income before the services ended were more likely to re-experience homelessness (Brown et al., 2017). Specifically, the research showed that these individuals were more likely to re-enter a state of homelessness if they received RRH than if they received HP services (Brown et al., 2017). Further, Vaclavik et al. (2018) cited previous research indicating that veterans were likely to need more resources in addition to homeless services to enable their transition to civilian life (O'Connell et al., 2008; Washington et al., 1997).

DATA

Data Set 1: Indianapolis Homeless Management Information System (HMIS), 2009-2012

Brown et al. (2017), Brown et al. (2018), and Vaclavik et al. (2018) used the Indianapolis HMIS data set, collected from 2009 to 2012.
This data system collects demographic, socioeconomic, and disability data (age, sex, race/ethnicity, monthly income, disabling condition, etc.) and is used as a case management and performance measurement tool for individuals and families who receive homelessness services in the Indianapolis, Indiana area. To receive funding under the McKinney-Vento Act, the system is subject to the U.S. Department of Housing and Urban Development's (HUD) national data and technical standards for participation, data collection, privacy, and security (Indianapolis Continuum of Care (CoC)). Brown et al.'s (2017) and Vaclavik et al.'s (2018) follow-up periods spanned an average of 4.5 years. While all three evaluations used the same data set, Vaclavik et al.'s (2018) original sample consisted of 1,812 individuals, comprising 682 adults and 1,130 children in 511 families. Brown et al. (2018) was a sub-study of Brown et al. (2017) that used the same data set. The larger of the two, Brown et al. (2017), was a longitudinal study that included a more complex sample population than Brown et al. (2018). Both of their original samples were obtained from the Indianapolis HMIS, which included 2,477 adults and children. Brown et al.'s (2017) inclusion criterion was single adult households who participated in HPRP between 2009 and 2012 and who entered permanent housing upon leaving the program. With these parameters, Brown et al.'s (2017) final data set consisted of 370 participants. In addition, Brown et al. (2017) selected a separate sample of 71 participants from the larger data set who exited the HPRP into unstable living situations but were neither technically homeless nor living in a shelter. Brown et al.'s (2018) study only included single unaccompanied adults, totaling 515 observations.

Data Set 2: Georgia Homeless Management Information System (HMIS), 2011-2012

Rodriguez and Eidelman (2017) utilized data from Georgia's HMIS collected from July 2, 2011 to June 30, 2012.
This data set included information from all 159 counties in the state and was developed by an Atlanta-based organization, Pathways Community Network Institute, Inc. The researchers studied households that exited rapid re-housing (RRH), transitional housing (TH), and emergency shelter (ES) programs. If households received services from multiple programs during the sampling period (July 2, 2011 to June 30, 2012), the researchers counted the program enrollment that occurred the earliest. Furthermore, to avoid selection bias, Rodriguez and Eidelman (2017) excluded observations of households who were considered chronically homeless at the start of the program. Rodriguez and Eidelman (2017) reviewed households in RRH, TH, and ES with children (N = 248, 473, and 1,470, respectively) and households in RRH, TH, and ES without children (N = 131, 2,016, and 7,881, respectively) separately.

Data Set 3: School District Counts of Homeless Students in 26 States, as Defined by the U.S. Department of Education, 2005-2014

Piña and Pirog (2018) requested data on the homeless status of students directly from all fifty states. Annually, school districts track the housing status of students and report these numbers to their state based on criteria for homeless status defined by the U.S. Department of Education. The study only included data from states that had begun to collect data prior to the implementation of HPRP in 2009. Piña and Pirog (2018) received responses from 38 states, but only 26 states (6,679 school districts) were included in the final data set. Of the 12 states that did not send their data, some had not collected data on students' homeless status prior to 2009 and others failed to respond to the researchers' request. To check the accuracy of the aggregated data, the researchers also used the state-level homeless student counts from the National Center for Homeless Education for the 2006-2007 to 2013-2014 academic years.
Data Set 4: Veterans Affairs (VA) National Homeless Registry, 2011-2013, and VA Electronic Medical Records (Obtained via the VA Corporate Data Warehouse)

Byrne et al. (2016) used two sources to create the data set for their study: the VA's National Homeless Registry and the VA's Electronic Medical Records. The National Homeless Registry adheres to HUD's HMIS technical standards and includes individual-level data on participants in the Supportive Services for Veteran Families (SSVF) program who exited the program between October 1, 2011 and September 30, 2013. It includes information on the SSVF-rendered services as well as individuals' demographic data. The demographic data encompassed general information such as age, sex, ethnicity, and race, and included whether the participant had a disabling condition (physical, mental, or deployment disabilities, substance abuse disorders, HIV/AIDS, etc.). The final sample included N = 39,337 veterans, who were divided into four subgroups: single veterans in RRH programs, single veterans in homelessness prevention programs, veterans in families in RRH programs, and veterans in families in homelessness prevention programs. In addition to the VA's National Homeless Registry, Byrne et al. (2016) used data from the VA's Electronic Medical Records, which were used to evaluate a veteran's level of engagement with VA medical services prior to exiting the SSVF program.

METHODS

Statistical Analysis

Descriptive Analysis

Only one evaluation, Rodriguez and Eidelman (2017), used a statistical method other than regression. They conducted a descriptive analysis of their samples as well as a calculation of the simple impacts of the treatments on participants' likelihood of returning to a shelter.
They then fit generalized linear mixed models (GLMMs), building on the descriptive analysis and simple treatment impacts, with parameters estimated using the Laplace approximation. Prior to analysis, the researchers used propensity score matching to avoid selection bias. They categorized those receiving TH or RRH as the study's treatment groups and those only receiving ES as their control group (i.e., the absence of treatment beyond shelter). Their dependent variable was whether the family returned to a shelter within two years of leaving their treatment program. Further, they dichotomized the characteristics of the head of household, the independent variables, which included disabling conditions, veteran status, race, ethnicity, and sex. Each household in their data set was given a total of three propensity scores, one for each intervention (ES, TH, and RRH).

Regression Analysis

Five of the six evaluations used regression analysis to analyze their data (Brown et al., 2017; Brown et al., 2018; Byrne et al., 2016; Piña & Pirog, 2018; Vaclavik et al., 2018). Piña and Pirog (2018) used the following model to evaluate the district-level proportion of homeless students: Y_it = β1·HPRP_it + β2·C_it + α_i + ϑ_t + ε_it, where i indexes school districts, t indexes years, C_it is a vector of controls, α_i and ϑ_t are district and year fixed effects, and ε_it is the error term. Their dependent variable was the number of homeless students per 100 students by district. Piña and Pirog (2018) analyzed the data by year and measured HPRP treatment with three variables: whether the district is in a county with an HPRP organization, whether the district was within 10 miles of an HPRP organization, and whether the district was within 20 miles of an HPRP organization. The authors also controlled for other programs that existed or expanded during the observation period that could influence the data.
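The two-way fixed-effects structure of this kind of specification can be illustrated with a small simulation. The sketch below is not the authors' code: the number of districts and years, the "true" treatment effect (-0.8), and the noise level are all invented, and plain NumPy least squares with explicit district and year dummies stands in for whatever estimation routine the authors actually used.

```python
import numpy as np

# Toy two-way fixed-effects OLS: y_it = b1*HPRP_it + alpha_i + theta_t + e_it.
# All values below are invented for illustration only.
rng = np.random.default_rng(0)
n_districts, n_years = 50, 8
district = np.repeat(np.arange(n_districts), n_years)   # district index i
year = np.tile(np.arange(n_years), n_districts)         # year index t
hprp = ((year >= 4) & (district < 25)).astype(float)    # treated districts, later years

alpha = rng.normal(0.0, 1.0, n_districts)[district]     # district fixed effects
theta = rng.normal(0.0, 0.5, n_years)[year]             # year fixed effects
y = -0.8 * hprp + alpha + theta + rng.normal(0.0, 0.1, len(hprp))

# Design matrix: treatment dummy + district dummies + year dummies
# (one year dummy dropped to avoid perfect collinearity).
X = np.column_stack([hprp,
                     np.eye(n_districts)[district],
                     np.eye(n_years)[year][:, 1:]])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"estimated HPRP effect: {beta[0]:.2f}")  # recovers roughly -0.8
```

Because the district and year dummies absorb α_i and ϑ_t, the coefficient on the treatment dummy is identified from within-district, within-year variation and recovers the simulated effect.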
In addition, they controlled for variables such as the county or district's unemployment rate, the percentage of students receiving free or reduced-price lunch, the cost of renting a two-bedroom apartment relative to the median household income, and the percentage of unoccupied rental units. All of Piña and Pirog's (2018) regression models used robust standard errors to account for heteroscedasticity and serial correlation.

Regression Analysis – Kaplan-Meier Method

Brown et al. (2017), Byrne et al. (2016), and Vaclavik et al. (2018) used the Kaplan-Meier method, a survival-analysis technique that estimates the probability of "survival" (here, remaining out of homeless services) over time after leaving an HPRP program. Survival over an interval is calculated as S_t = (number of subjects at risk at the start of the interval − number of subjects who experienced the event during the interval) / number of subjects at risk at the start of the interval; the cumulative survival probability is the product of these interval proportions. Subjects who stop participating in the study are censored, i.e., removed from the at-risk count from that point forward. The risk is evaluated at every time interval set by the researcher (Goel et al., 2010), and in these evaluations this was done through monthly follow-up periods with participants. Vaclavik et al. (2018) used this method to determine the annual housing re-entry rate, Byrne et al. (2016) used it to "assess how the risk of an episode of homelessness changed over time following SSVF program exit" (p. 258), and Brown et al. (2017) used it on their full sample to determine "the cumulative proportion of participants who did not re-enter homeless services over time" (p. 132).

Regression Analysis – Cox Proportional Hazards Regression Model

Brown et al. (2017), Byrne et al. (2016), and Vaclavik et al. (2018) employed the Cox proportional hazards regression model, which has long been used to evaluate survival data.
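Before examining the Cox model further, the Kaplan-Meier calculation described above can be made concrete with a few lines of Python. The durations below are invented toy data (months until re-entry into homeless services, with a 0 flag marking censored cases), not figures from any of the six evaluations:

```python
# Hand-rolled Kaplan-Meier estimator. The observations are invented toy data:
# (months until re-entry into homeless services, 1 = re-entered, 0 = censored).
observations = [(3, 1), (5, 0), (6, 1), (6, 1), (8, 0), (10, 1), (12, 0)]

def kaplan_meier(obs):
    """Return [(time, cumulative survival)] via S(t) = prod(1 - d_i / n_i)."""
    event_times = sorted({t for t, event in obs if event == 1})
    curve, s = [], 1.0
    for t in event_times:
        n_at_risk = sum(1 for dur, _ in obs if dur >= t)            # still followed at t
        d_events = sum(1 for dur, e in obs if dur == t and e == 1)  # re-entries at t
        s *= 1.0 - d_events / n_at_risk                             # interval survival
        curve.append((t, s))
    return curve

for t, s in kaplan_meier(observations):
    print(f"month {t:2d}: S(t) = {s:.3f}")
```

With these toy numbers, the estimated probability of remaining out of homeless services falls from roughly 0.86 after month 3 to roughly 0.26 after month 10; censored cases leave the at-risk count without registering an event, exactly as the formula above requires. Returning to the Cox proportional hazards model: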
The Cox model analyzes patterns of association among multiple predictor variables to determine which variable combinations are associated with a higher rate of survival (Christensen, 1987). Vaclavik et al. (2018) were not able to use the multivariate Cox method because of the limited number of observations in their sample and thus used a univariate Cox model. Byrne et al. (2016) used the multivariate method to "assess the relationship among veteran, SSVF program, and community level variables and the risk of a homeless episode following SSVF program exit" (p. 258). Finally, Brown et al. (2017) used a univariate Cox model on the participants receiving HP and the participants receiving RRH, as well as a multivariate Cox model on the entire sample.

Regression Analysis – Univariate and Multivariate Logistic Models

Brown et al. (2018) conducted univariate logistic regressions as well as multivariable analyses of HP and RRH recipients separately and assigned dichotomies to each based on whether participants completed the program. The researchers' dependent variable, another dichotomy, was whether participants entered permanent housing upon leaving their program or entered non-permanent housing (1 = permanent, 0 = non-permanent). Similarly, Vaclavik et al. (2018) used univariate regressions to account for their limited sample size. They also dichotomized their data, but did so based on the specific housing types and financial services rendered (1 = received, 0 = not received). Their dependent variable, like Brown et al.'s (2018), was whether participants entered permanent housing situations upon exit from the program. Brown et al.'s (2017) program variables were the same as Brown et al.'s (2018) and Vaclavik et al.'s (2018); the researchers analyzed whether participants received HP or RRH services and dichotomized HPRP program completion as well as receipt of financial services and other forms of support. Byrne et al.
(2016) used the number of days to an episode of homelessness after exiting the SSVF program as their dependent variable and veteran-level demographics (i.e., sex, age, disabling condition, and monthly income) as their independent variables.

Regression Analysis – Hosmer-Lemeshow Goodness-of-Fit Test

Brown et al. (2018) and Vaclavik et al. (2018) used the Hosmer-Lemeshow goodness-of-fit test to ensure that their regression models fit their data adequately. The Hosmer-Lemeshow test is a chi-square test often used for assessing risk-scoring models with binary outcomes and can be used across a wide variety of sample sizes (Paul et al., 2012).

RESULTS

Four of the studies found that HP had a greater impact than RRH on increasing participants' likelihood of exiting the program to permanent housing. Specifically, Brown et al. (2018) found that 73.6 percent of participants exited the program to permanent housing. Only 9.5 percent of Brown et al.'s (2017) sample re-entered homeless services after their exit from HPRP. Two years after participants exited the program in Byrne et al.'s (2016) study, 10.9 percent of families had experienced homelessness, while 17.9 percent of single adults had. Vaclavik et al. (2018) found that 10.9 percent of permanently housed HP participants re-entered homeless services. Along this trend, Brown et al. (2017) and Byrne et al. (2016) observed that men who received prevention services were at greater risk of re-entry than women. Piña and Pirog (2018) noted that their study was limited in what it could say about the relative effectiveness of the different services, RRH versus HP, but argued that because prevention services were applied far more often than RRH, their findings indicate that prevention services are effective. Rodriguez and Eidelman (2017) observed that households who did not receive any treatment (TH or RRH) during the study were twice as likely to return to a shelter. Additionally, three of the studies, Brown et al. (2017), Byrne et al.
(2016), and Vaclavik et al. (2018), found a decreased risk of re-entry for participants who were enrolled in HPRP services for a longer period of time. Brown et al. (2017), Brown et al. (2018), and Vaclavik et al. (2018) all found that higher incomes, particularly an increase in income during HPRP participation, reduced the rate of re-entry. Interestingly, Brown et al.'s (2018) results showed a correlation between income and housing outcomes: for every $100 increase in income for HP participants, the likelihood of exiting to a permanent housing situation increased by 11 percent. This finding did not vary when other demographic variables were controlled for. Additionally, Brown et al.'s (2017) and Vaclavik et al.'s (2018) results showed that individuals who required financial assistance with security deposits were more likely to re-enter homeless services. Brown et al. (2017), Brown et al. (2018), Vaclavik et al. (2018), and Byrne et al. (2016) found that the majority of HPRP participants identified as Black or African American. Brown et al. (2018) found that individuals in their sample who identified as Black or African American were more likely to exit to permanent housing situations, in contrast to Brown et al. (2017) and Byrne et al. (2016), who found that African American participants were at a higher risk of re-entry. Some studies observed that age contributed to a participant's increased risk of re-entry: Brown et al.'s (2018), Byrne et al.'s (2016), and Rodriguez and Eidelman's (2017) results indicate that older single adults, as well as men, were at higher risk of re-entry. Brown et al. (2018) found that older age was significant in determining a participant's odds of entering permanent housing, and Brown et al. (2018) and Byrne et al. (2016) found that individuals with disabling conditions had lower odds of achieving a permanent housing situation. Brown et al. (2017), Brown et al. (2018), and Byrne et al.
(2016) found that participants with a disabling condition (physical, psychological, and/or substance abuse) faced an increased risk of re-entering homelessness.

Subtheme 1: Effect of HPRP on a Family's Likelihood of Re-entry to Homelessness/Homeless Services

Vaclavik et al. (2018) found that the primary recipients of HPRP services (the individuals who interacted with homeless service providers on behalf of the family) were overwhelmingly single females. Additionally, the researchers found that the majority of families in their sample, 69.9 percent, were African American. Rodriguez and Eidelman (2017), who only studied RRH and temporary housing, observed that most families receiving RRH were also primarily headed by women, but that they were more often White. Families who remained in the program for longer periods of time and had older adult family members were more likely to exit into permanent housing, as were families who received financial assistance. HPRP recipients who had higher incomes and longer program enrollment lengths had greater odds of exiting into permanent housing (Vaclavik et al., 2018). Specifically, Vaclavik et al. (2018) found that 97.9 percent of families exited HPRP services into permanent housing. Vaclavik et al.'s (2018) results indicated that families who received financial assistance through a security deposit had higher odds of re-entry to homeless services, and families that received legal services through HPRP were also at higher risk of re-entry. The authors also found that HP families were less likely to re-enter homeless services than those who received RRH. In particular, the researchers discovered that families with a veteran member that received HP had a greater chance of re-entry. Among families who received RRH, those with more children were at higher risk of re-entering homeless services.
Comparatively, families with younger children receiving HP were more likely to re-enter homeless services (Vaclavik et al., 2018). Piña and Pirog (2019) had similar results and claimed that HPRP participation can reduce the rate of homelessness by up to 12 percent. They also found that proximity to an HPRP provider reduced the rate of homeless students by 0.13 points, an 8 percent decrease.

Subtheme 2: Effect of HPRP on a Homeless Veteran’s Likelihood of Re-entry to Homelessness/Homeless Services

Among veteran populations, Brown et al. (2017) and Byrne et al. (2016) found that African Americans were at a higher risk of homelessness, though Brown et al. (2017) saw a greater risk among those who received RRH, while Byrne et al. (2016) noticed a greater risk among HP recipients. Additionally, both saw a higher risk factor among male veterans. Specifically, Byrne et al.’s (2016) results indicated a risk factor for males receiving both HP and RRH services, but Brown et al. (2017) only saw an increased risk factor among males receiving RRH. In Brown et al.’s (2017) and Byrne et al.’s (2016) analyses, age was a significant contributing factor to re-experiencing homelessness for single veterans. Specifically, Byrne et al.’s (2016) results revealed that those who were between 45 and 54 years of age were at the highest risk, then ages 30 to 61, while the youngest cohort (18 to 29) had the lowest risk of re-entry. Further, the authors reported that individual male veterans who received either RRH or HP had an elevated risk for homelessness compared to their female counterparts. Byrne et al.’s (2016) analysis also revealed that participants who had previously experienced homelessness or sought homeless services before receiving RRH or HP through the HPRP program or SSVF were at higher risk of re-entering a state of homelessness.
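Re-entry risk findings like these come from time-to-event analyses; per the methods appendix, several of the evaluations used Kaplan-Meier estimates and Cox proportional hazards models. As a rough illustration of the underlying technique — with invented data, not figures drawn from any of the studies — a minimal Kaplan-Meier estimator can be sketched as:

```python
# Kaplan-Meier estimate of the probability of remaining stably housed.
# Illustrative sketch only: durations are months until re-entry to
# homeless services; event = 1 means the household re-entered, 0 means
# it was censored (still housed when observation ended). All data here
# are invented for demonstration.

def kaplan_meier(durations, events):
    """Return a list of (time, estimated survival probability)."""
    obs = sorted(zip(durations, events))
    at_risk = len(obs)
    surv = 1.0
    curve = []
    i = 0
    while i < len(obs):
        t = obs[i][0]
        reentries = sum(1 for d, e in obs if d == t and e == 1)
        leaving = sum(1 for d, _ in obs if d == t)  # events + censored at t
        if reentries:
            surv *= (at_risk - reentries) / at_risk
            curve.append((t, surv))
        at_risk -= leaving
        i += leaving
    return curve

# Eight hypothetical households: months observed and re-entry indicator.
months = [3, 5, 5, 8, 10, 12, 12, 12]
reentered = [1, 1, 0, 1, 0, 1, 0, 0]
for t, s in kaplan_meier(months, reentered):
    print(f"month {t}: P(still housed) = {s:.3f}")
```

The Cox models the evaluations used go one step further, relating this survival curve to covariates such as income, age, or disabling conditions.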
Veterans who were part of families, especially those with children, had higher success rates than their single counterparts (Byrne et al., 2016). Finally, veterans’ income did not factor into the risk of re-entering homeless services. In fact, Brown et al. (2017) noted that veterans who re-entered homeless services had significantly higher incomes at the beginning of their program participation than other groups.

PUBLIC POLICY, LIMITATIONS AND CONCLUSION

This program evaluation synthesis of the HPRP indicates that HP and RRH may be effective tools to mitigate and prevent homelessness. Based on the results from the individual studies, prevention strategies are more effective than rapid re-housing strategies at reducing the rate of re-entry to homelessness or homeless services for program participants. Additionally, HPRP was the largest homeless program to ever exist in the U.S., having significant influence over emergent homeless policies (Piña & Pirog, 2019). After federal HPRP funds were exhausted, many groups expressed their desire to maintain homeless prevention programs in their communities (Cunningham et al., 2015). Given the relative success of HPRP and the public’s support, policymakers should consider allocating additional resources to further evaluate the program’s effectiveness across more subgroups. Policymakers should also collect additional data and conduct further research to determine if the creation of another federally funded program is warranted, perhaps solely for HP in lieu of both RRH and HP. An alternative recommendation would be for states to create their own grant programs, specific to homeless prevention, made available to public actors within each state. While the evaluations observed positive results with HPRP participants, they all had limitations, and the program requires further research.
Only Rodriguez and Eidelman’s (2017) evaluation was quasi-experimental; none were experimental; and none were randomized. There was consensus among all of the evaluations that the empirical research available on this program’s long-term effectiveness is limited and that more research is needed in order to adequately evaluate HPRP. The evaluations also lack generalizability because they only studied specific subgroups (i.e., only families and single adults, only students, or only veterans). The evaluations were also limited geographically, with most using data from specific localities such as Indianapolis. Only one study, Byrne et al. (2016), used national data, obtained from the VA, but even that study cannot be generalized because it only analyzes veterans and veteran families. Another study, Piña and Pirog (2019), attempted to gather national data by contacting and collecting it from each state; due to a limited response, they were only able to use data from 26 states. Overall, the existing research on the effectiveness of HPRP is promising, and additional attempts should be made to determine if the program is worth resurrecting. The program may require adjustments, but the original financial investment and success of the program should encourage policymakers to revisit this homeless policy.

REFERENCES

Allard, S. W., Tolman, R. M., & Rosen, D. (2003). Proximity to service providers and service utilization among welfare recipients: The interaction of place and race. Journal of Policy Analysis and Management, 22(4), 599–613. https://doi.org/10.1002/pam.10157

Berg, S. (2013). The HEARTH Act. Cityscape, 15(1), 317–323.

Brown, M., Klebek, L., Chodzen, G., Scartozzi, S., Cummings, C., & Raskind, A. (2018). Housing status among single adults following Homelessness Prevention and Rapid Re-housing Program participation in Indianapolis. Evaluation and Program Planning, 69, 92–98.
Brown, M., Vaclavik, D., Watson, D. P., & Wilka, E. (2017). Predictors of homeless services re-entry within a sample of adults receiving Homelessness Prevention and Rapid Re-Housing Program (HPRP) assistance. Psychological Services, 14(2), 129–140.

Byrne, T., Treglia, D., Culhane, D. P., Kuhn, J., & Kane, V. (2016). Predictors of homelessness among families and single adults after exit from homelessness prevention and rapid re-housing programs: Evidence from the Department of Veterans Affairs Supportive Services for Veteran Families program. Housing Policy Debate, 26(1), 252–275.

Christensen, E. (1987). Multivariate survival analysis using Cox’s regression model. Hepatology, 7(6), 1346–1358.

Connecticut Coalition to End Homelessness. (2013). Where are they now? Three years later, did rapid re-housing work in Connecticut? http://cceh.org/wpcontent/uploads/2015/04/NEW_RRH_works_in_CT_FINAL_2013.10.10_CCEH.pdf

Culhane, D. P., Metraux, S., Park, J. M., Schretzman, M., & Valente, J. (2007). Testing a typology of family homelessness based on patterns of public shelter utilization in four U.S. jurisdictions: Implications for policy and program planning. Housing Policy Debate, 18(1), 1–28.

Cunningham, M. K., Burt, M., Scott, M., Locke, G., Larry, B., Klerman, J., Fiore, N., & Stillman, L. (2015). Prevention programs funded by the Homelessness Prevention and Rapid Re-Housing Program. https://www.huduser.gov/portal/sites/default/files/pdf/HPRP-report.pdf

Goel, M. K., Khanna, P., & Kishore, J. (2010). Understanding survival analysis: Kaplan-Meier estimate. International Journal of Ayurveda Research, 1(4), 274–278.

Gubits, D., Shinn, M., Bell, S., Wood, M., Dastrup, S. R., Solari, C., Brown, S. R., Brown, S., Dunton, L., Lin, W., McInnis, D., Rodriguez, J., Savidge, G., & Spellman, B. E. (2017). Family Options Study: Short-term impacts of housing and services interventions for homeless families. SSRN Electronic Journal.
https://doi.org/10.2139/ssrn.3055272

Gubits, D., Shinn, M., Wood, M., Bell, S., Dastrup, S., Solari, C. D., Brown, S. R., McInnis, D., McCall, T., & Kattel, U. (2016). Family Options Study: 3-year impacts of housing and services interventions for homeless families. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3055295

Indianapolis Continuum of Care (CoC). (n.d.). Homeless Management Information System. Indy CoC. https://www.indycoc.org/programs-policies/hmis

National Alliance to End Homelessness. (2020). State of homelessness: 2020 edition. https://endhomelessness.org/homelessness-in-america/homelessness-statistics/state-ofhomelessness-2020/

O’Connell, M. J., Kasprow, W., & Rosenheck, R. A. (2008). Rates and risk factors for homelessness after successful housing in a sample of formerly homeless veterans. Psychiatric Services, 59(3), 268–275.

Paul, P., Pennell, M. L., & Lemeshow, S. (2013). Standardizing the power of the Hosmer–Lemeshow goodness of fit test in large data sets. Statistics in Medicine, 32(1), 67–80.

Piña, G., & Pirog, M. (2019). The impact of homeless prevention on residential instability: Evidence from the Homelessness Prevention and Rapid Re-Housing Program. Housing Policy Debate, 29(4), 501–521.

Rich, R. (2013, November 22). The great recession. Federal Reserve History. https://www.federalreservehistory.org/essays/great_recession_of_200709

Rodriguez, J. M., & Eidelman, T. A. (2017). Homelessness interventions in Georgia: Rapid rehousing, transitional housing, and the likelihood of returning to shelter. Housing Policy Debate, 27(6), 825–842.

Shinn, M., Weitzman, B. C., Stojanovic, D., Knickman, J. R., Jiménez, L., Duchon, L., James, S., & Krantz, D. H. (1998). Predictors of homelessness among families in New York City: From shelter request to housing stability. American Journal of Public Health, 88(11), 1651–1657.

U.S. Department of Housing and Urban Development. (n.d.a). HPRP: Homelessness Prevention and Rapid Re-Housing Program.
https://www.hudexchange.info/programs/hprp/

U.S. Department of Housing and Urban Development. (2009, May). The McKinney-Vento Homeless Assistance Act, as amended by S. 896 Homeless Emergency Assistance and Rapid Transition to Housing (HEARTH) Act of 2009. https://www.hudexchange.info/resource/1715/mckinney-ventohomeless-assistance-act-amended-by-hearth-act-of-2009/

U.S. Department of Housing and Urban Development. (2016). Homelessness Prevention and Rapid Re-Housing Program (HPRP): Year 3 & final program summary. https://files.hudexchange.info/resources/documents/HPRP-Year-3-Summary.pdf

U.S. Interagency Council on Homelessness. (2016). U.S. Interagency Council on Homelessness: Historical overview. https://www.usich.gov/resources/uploads/asset_library/USICH_History_2016.pdf

Vaclavik, D., Brown, M., Adenuga, P., Scartozzi, S., & Watson, D. P. (2018). Permanent housing placement and reentry to services among family recipients of Homelessness Prevention and Rapid Re-Housing Program (HPRP) assistance. Journal of Primary Prevention, 39(6), 591–609.

Washington, D. L., Yano, E. M., McGuire, J., Hines, V., Lee, M., & Gelberg, L. (2010). Risk factors for homelessness among women veterans. Journal of Health Care for the Poor and Underserved, 21(1), 82–91.

APPENDICES

Appendix 1: Data Sources by Evaluation

- Indianapolis Homeless Management Information System (HMIS), 2009–2012: Brown et al. (2017); Brown et al. (2018); Vaclavik et al. (2018)
- Georgia Homeless Management Information System (HMIS), 2011–2012: Rodriguez & Eidelman (2017)
- 26 state school districts, student homelessness status as defined by the Department of Education, 2005–2014: Piña & Pirog (2019)
- Veterans Affairs’ (VA) National Homeless Registry, 2011–2013: Byrne et al. (2016)
- VA electronic medical records (obtained via the VA Corporate Data Warehouse): Byrne et al. (2016)

Appendix 2: Evaluation Methods

Across the six evaluations, the methods used were descriptive analysis; regression analysis; the Kaplan-Meier method; Cox proportional hazards regression models; univariate and multivariate logistic models; and the Hosmer–Lemeshow goodness-of-fit test.

Running head: HOUSING CHOICE VOUCHER PROGRAM

“The Rent is Too Damn High”: A Synthesis of Program Evaluations of the Housing Choice Voucher Program

Introduction

In 2010, Jimmy McMillan attracted widespread attention in his long-shot bid for Governor of New York with a single pithy catchphrase: “The rent is too damn high” (Paulson, 2010, n.p.). Ten years on, McMillan’s mantra still rings true for many renters across the country. There is not a single county anywhere in the United States in which a full-time minimum-wage worker can rent a two-bedroom apartment that meets government affordability standards (Fiske, 2019). In 2019, the Center on Budget and Policy Priorities reported that over 20 million renters in the United States pay more than half of their income towards housing (Center on Budget and Policy Priorities, 2019). The federal government seeks to address this problem through a variety of programs. In 1974, President Gerald Ford signed the Housing and Community Development Act, which amended the U.S. Housing Act of 1937 to create the Section 8 voucher program (known since 1999 as the Housing Choice Voucher program [Carlson, 2011, p. 129]).
This program has the stated purpose “of aiding lower-income families in obtaining a decent place to live and of promoting economically mixed housing” (Housing and Community Development Act, 1974, p. 662). Through this program, local Public Housing Authorities (PHAs) provide vouchers to qualifying families to assist in making rent payments and, in some cases, depending on the choice of the local PHA, monthly homeownership expenses (U.S. Department of Housing and Urban Development, n.d.a.; U.S. Department of Housing and Urban Development, n.d.b.). Today, it is the largest housing assistance program in the country, providing assistance to over two million households (Sard, 2018).

To apply for a housing voucher, a family’s members must be U.S. citizens or legal immigrants, and the household income must not exceed 50% of the median income in the area in which they live. They must apply through their local PHA, which will verify their income, family size and composition, and other information before issuing a voucher. In many cases, because demand for vouchers often exceeds supply, they may be put on a waiting list (U.S. Department of Housing and Urban Development, n.d.a.). Once a family receives a voucher, they may use it to pay for any housing that the family chooses, including their current place of residency, provided that the unit meets safety standards and complies with the program’s requirements, and the landlord agrees to participate. The PHA signs an agreement with the family and the landlord, and makes the voucher payments directly to the landlord. However, the family must still pay 30% of their monthly adjusted gross income on rent and utilities, and must notify the PHA of changes in income, family size, or if they intend to move (vouchers are portable) (U.S. Department of Housing and Urban Development, n.d.a.). How effective is the Housing Choice Voucher program at improving the situations of participating low-income families?
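The payment rule just described — the family pays 30% of monthly adjusted income, with the voucher covering the remaining rent up to a cap — can be sketched as a simple calculation. The payment-standard cap and all dollar figures below are illustrative assumptions; actual PHA calculations also involve utility allowances, minimum rents, and other adjustments:

```python
# Simplified Housing Choice Voucher payment split, following the
# 30%-of-adjusted-income rule described above. This is a hedged
# sketch: real PHA calculations add utility allowances, minimum
# rents, and other adjustments. All figures are hypothetical.

def voucher_split(monthly_adjusted_income, gross_rent, payment_standard):
    """Return (tenant_payment, voucher_payment) in dollars per month."""
    tenant_base = 0.30 * monthly_adjusted_income
    # The PHA subsidizes only up to its payment standard; any rent
    # above that standard falls on the tenant as well.
    subsidy = max(0.0, min(gross_rent, payment_standard) - tenant_base)
    tenant_pays = gross_rent - subsidy
    return round(tenant_pays, 2), round(subsidy, 2)

# A family with $1,500/month adjusted income renting at $1,100,
# under a hypothetical $1,200 payment standard:
tenant, voucher = voucher_split(1500, 1100, 1200)
print(tenant, voucher)  # tenant pays 450.0, voucher covers 650.0
```

The split shifts with income: at a higher income the 30% share grows and the subsidy shrinks, which is why the evaluations below treat income changes as central to housing outcomes.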
Six evaluations that have studied the effects of this program on the lives of families include Carlson et al. (2012), Jacob et al. (2015), Lens et al. (2011), Sanbonmatsu et al. (2011), Teater (2011), and Wood et al. (2008). I synthesize information from these six evaluations, with a focus on the effects of the program on outcomes regarding neighborhood quality, education, and employment. Information on each of these evaluations may be found in Table 1 in Appendix A.

Literature Review

Neighborhood Outcomes

Sanbonmatsu et al. (2011) cites previous research that provides a theoretical framework for the importance of neighborhood choice to families’ wellbeing. Jencks and Mayer (1990) define four different means by which neighborhoods can affect people’s outcomes: epidemic models (the influence of neighbor behaviors), collective socialization (common values shared in a neighborhood), institutional models (such as access to better schools), and relative deprivation or competition models (including reduced exposure to stressors such as violence). Other research has shown that access to different neighborhoods can have positive effects on families’ outcomes. Four of the studies in this synthesis—Sanbonmatsu et al. (2011), Wood et al. (2008), Carlson et al. (2012), and Lens et al. (2011)—each cite previous research that studied the Gautreaux residential mobility program. In 1976, following a lawsuit charging discrimination in Chicago public housing practices, the U.S. Supreme Court ordered the Chicago Housing Authority to provide vouchers to African-American families that would allow them to live in areas that were less than 30 percent African-American (Sanbonmatsu, 2011). Sanbonmatsu et al. (2011) refers to Keels et al. (2005) and Rubinowitz and Rosenbaum (2000), and Wood et al.
(2008) refers to Popkin, Rosenbaum, and Meaden (1993), all of which found that the change in neighborhood had led to improved employment and educational outcomes. Additionally, Lens et al. (2011) cites Keels et al. (2005), which found that in the long term, families in the Gautreaux program that used the voucher to move to new neighborhoods ended up living in areas with lower crime rates. Regarding neighborhood safety, Teater (2011) cites a study of a separate program by Brooks et al. (2005), in which people who had used a voucher to move said that they felt safer in their new situation. Several prior studies cited by Sanbonmatsu et al. (2011) show evidence that recipients of housing vouchers tended to live in areas with lower poverty than families who lived in project housing (Newman & Schnare, 1997; Khadduri, Shroder, & Steffen, 1998; Devine et al., 2003; Olsen, 2003). Wood et al. (2008) cites similar research findings by Feins and Patterson (2005), which show that voucher recipients who chose to move to a new neighborhood, and then moved a second time, ended up in areas with lower poverty rates than the areas in which they had started. Research cited by Lens et al. (2011) demonstrates that, in 1990, voucher recipients lived in higher-poverty areas than the general public (Pendall, 2000). However, Lens et al. (2011) and Teater (2011) both cite Hartung and Henig (1997), which provides evidence that voucher recipients lived in lower-poverty neighborhoods than recipients of other forms of housing assistance.

Educational Outcomes

The studies in this synthesis refer to prior research indicating that public programs that increase family income can lead to positive educational outcomes for children. Jacob et al. (2015) cites work that showed an increase in children’s test scores due to an increase in family income from the Earned Income Tax Credit (Dahl & Lochner, 2012) and welfare-to-work experiments (Duncan, Morris, & Rodrigues, 2011).
Increased income in the form of rental assistance may therefore have a similar effect on child outcomes. Sanbonmatsu et al. (2011) cites research showing that higher-income areas usually have higher-quality schools than lower-income areas (Connell & Halpern-Felsher, 1997). Additional prior research cited by Sanbonmatsu et al. (2011) demonstrates a positive correlation between the presence of affluent neighbors and a number of important educational outcomes for children, including IQ and verbal ability scores (Brooks-Gunn et al., 1993; Duncan, Brooks-Gunn, and Klebanov, 1994; Klebanov et al., 1997), reading (Chase-Lansdale et al., 1997), and math (Entwisle, Alexander, & Olson, 1994). Moving to a higher-income neighborhood might therefore result in a better education. Additionally, Sanbonmatsu et al. (2011) refers to prior research demonstrating that children who are exposed to serious violence perform worse in school (Sharkey, 2010), which may reinforce the hypothesis that housing voucher receipt, by enabling a family to move to a less violent neighborhood, could result in improved academic performance. However, the act of moving to a new school can also have detrimental effects: Sanbonmatsu et al. (2011) also cites Beatty (2010), which found a negative correlation between school mobility and educational achievement.

Employment and Economic Outcomes

Prior research evidence on the effect of housing vouchers on employment and earnings is mixed. Wood et al. (2008) refers to work that shows a negative effect of voucher receipt on employment (Patterson et al., 2004; Shroder, 2002). According to Wood et al. (2008) and Carlson et al. (2012), Shroder (2002), in a comprehensive review of the literature, found that vouchers had no meaningful impact on recipients’ employment. Additional studies cited by Wood et al. (2008) found that vouchers led to lower family earnings (Olsen et al., 2005; Susin, 2005). Carlson et al.
(2012) also cites work by Goering et al. (2002), Shroder (2002), Goering (2003), Turney et al. (2006), and Kling et al. (2007) regarding the Moving to Opportunity demonstration (on which Sanbonmatsu et al.’s (2011) evaluation is also based) to show that vouchers did not result in significant differences in employment. Additional research cited by Carlson et al. (2012) includes work by Mills et al. (2006) on the Welfare to Work demonstration (on which Wood et al.’s (2008) evaluation is also based), which showed initial decreases in employment that disappeared after three and a half years. Additionally, Carlson et al. (2012) refers to research by Jacob and Ludwig (2006) on the Chicago Housing Authority natural experiment (on which Jacob et al.’s (2015) evaluation is based), which showed that “voucher recipients worked and earned less” (Carlson, 2011, p. 131). However, studies of the Gautreaux program that Carlson et al. (2012) cites (Rosenbaum, 1995; Mendenhall et al., 2006) show that voucher recipients who moved to the suburbs ended up with higher earnings than before they moved. Sanbonmatsu et al. (2011) also cites a study of the Gautreaux demonstration (Rosenbaum & Popkin, 1991) showing the same conclusion, as well as a more recent study (Anil et al., 2010) that found that voucher recipients in Atlanta, Georgia who used their voucher to move experienced a significant increase in employment compared with those who stayed in other public housing situations.

Data

Administrative Datasets

These six evaluations, while measuring different outcomes and using different methods, do share some commonalities in their approaches. For instance, five of the evaluations merged data from a variety of sources to create a unique quantitative dataset, and every one of those five draws at least some of its data from administrative, government-run databases. Carlson et al. (2012), Jacob et al. (2015), and Wood et al.
(2008) used data from Unemployment Insurance databases to measure economic and employment outcomes for study samples. Wood et al. (2008) and Sanbonmatsu et al. (2011) drew upon the Public Housing Information Center to supplement their datasets. And Lens et al. (2011) used data on vouchers and public housing from the U.S. Department of Housing and Urban Development’s “Picture of Subsidized Housing.” Overall, the studies drew upon over 15 different publicly administered datasets. A summary of the datasets used can be found in Table 1 in the Appendix. Datasets included the Wisconsin-administered Client Assistance for Re-employment and Economic Support database (CARES; Carlson, 2011), Temporary Assistance for Needy Families (TANF) data files (Carlson, 2011), public school and criminal records (Jacob et al., 2015), and the HUD Tenant Rental Assistance Certification System (Sanbonmatsu et al., 2011).

Survey Data

Wood et al. (2008) and Sanbonmatsu et al. (2011) created primary datasets using surveys. These involved a baseline survey at assignment (for Wood et al. (2008), N = 8,573 heads of households; for Sanbonmatsu et al. (2011), N = 4,604 households) and follow-up surveys (for Wood et al. (2008), N = 2,481; for Sanbonmatsu et al. (2011), N = 4,143) as the studies progressed. Additionally, each of these studies used surveys to estimate the effect of the intent to treat (ITT—all those assigned to the group that was offered housing vouchers) and the treatment-on-treated (TOT—all of those who actually used housing vouchers). Sanbonmatsu et al.’s (2011) follow-up surveys were long term—10 to 12 years after baseline—while Wood et al. (2008) conducted their follow-up surveys 4.5–5 years following baseline data collection. Sanbonmatsu et al.’s (2011) survey covered a broader range of topics, including neighborhood and housing quality, physical health, mental health, economic outcomes, and a variety of other measures.
Wood et al.’s (2008) survey was narrower in focus, including measures on housing mobility and neighborhood environment, income, public assistance, and other economic outcomes.

Methods

Random Assignment

Random assignment to treatment and control groups featured in the methodology of three of the evaluations: Wood et al. (2008), Jacob et al. (2015), and Sanbonmatsu et al. (2011). The evaluation that Wood et al. (2008) carried out was related to an experiment that was officially authorized and funded by the U.S. Department of Housing and Urban Development and known as the Welfare to Work Voucher Program. This program randomly assigned housing choice vouchers to families eligible for or receiving TANF. For the analysis, Wood et al. (2008) used a sample of 8,731 families from six of the sites involved in the program (Atlanta; Augusta, GA; Fresno, CA; Houston; Los Angeles; and Spokane, WA). Jacob et al. (2015) also took advantage of a government’s random assignment, although in this case it was a natural experiment. For this study, Chicago randomly assigned 8,560 voucher applicants to receive a voucher, because there were far more applicants than available vouchers. This resulted in a control group of 22,447 households. Sanbonmatsu et al. (2011) worked from an officially sponsored government experiment known as the Moving to Opportunity demonstration. In this experiment, the researchers randomly assigned eligible families to one of three experimental conditions. Those in the treatment group received vouchers that they could use only in areas with low poverty rates. Those in the “Section 8” group received normal housing choice vouchers according to the usual rules of the program. And those in the control group received no voucher, but were eligible for other forms of assistance. Rather than random assignment, Carlson et al.
(2012) matched members of the treatment group to members of the control group by census tract, so that treatment cases and control cases came from similar geographic areas.

Statistical Analysis

The evaluations used a variety of statistical methods to analyze their data. Carlson et al. (2012) used a difference-in-differences regression model to estimate the effect of housing voucher receipt on employment outcomes. They estimated the model using Generalized Least Squares (GLS) with random effects controls. Wood et al. (2008) used t-tests to ensure no significant differences between treatment and control at baseline. Then, they used OLS regression analysis to estimate the effects of voucher receipt on homelessness, housing mobility, and other wellbeing outcomes. For binary outcome variables, they used a probit regression model. Wood et al. (2008), Sanbonmatsu et al. (2011), and Jacob et al. (2015) all included two analyses in their evaluations: one for the intent to treat (every observation assigned to the treatment group) and one for the treatment on treated (TOT—only those members of the treatment group who actually got a voucher and used it to rent). Sanbonmatsu et al. (2011) used OLS to estimate the effects of ITT and TOT on a variety of wellbeing outcome measures, including “social, economic and educational prospects, risky and criminal behavior, [and] health” (p. xiv). Because there were two treatment groups, the researchers calculated each of these effects twice. This way, they estimated the effect of being offered a voucher to move to a lower-poverty neighborhood (with mobility counseling) versus actually using that voucher to move. Likewise, they estimated the effect of being offered a voucher with the traditional requirements versus actually using that voucher. Jacob et al. (2015) also analyzed the effect of the intent to treat versus the treatment on treated.
They used OLS to estimate the effect of ITT on their dependent variables of child outcomes such as education, health, and criminal involvement. However, in their evaluation, between five percent and eight percent of the control group received housing vouchers through other means. They therefore used two-stage least squares, with randomized voucher offers as an instrument, to estimate the effect of a family using a voucher that had been acquired through any source. Lens et al. (2011) is the sole quantitative study with no treatment or control groups. Rather, the researchers generated their own variable called the crime exposure rate, “which weights a neighborhood’s crime rate by the proportion of the sample’s relevant household type (voucher, LIHTC, etc.) within that neighborhood” (p. 143), in order to estimate the typical crime rates experienced by households in each group. They then used other types of low-income housing units to estimate the counterfactual neighborhoods where voucher recipients would have lived without the voucher. Using difference-of-means testing, they estimated whether voucher recipients live in safer neighborhoods than other low-income households.

Qualitative Methods

Only one evaluation—Teater (2011)—relied solely on qualitative methods. After conducting in-depth interviews with 14 voucher recipients to gather data on program participants’ perspectives and recommendations, the researchers conducted two stages of coding to gain insight into the major themes of the responses. They used NVivo software to code the data and concluded with 43 codes that allowed them to identify key themes. Wood et al. (2008) included participant interviews and qualitative analysis as part of their overall evaluation. They interviewed 141 program participants five years after random assignment to “learn how voucher users make important family decisions and how those decisions could be influenced by the receipt of housing assistance” (p. 377).
Sanbonmatsu et al. (2011) also complemented quantitative analysis with qualitative interviews, interviewing 3,273 adults (including adults from each of the study groups) and 5,105 youth between the ages of 10 and 20.

Results

Neighborhood Outcomes

Three of the studies measured outcomes related to neighborhood quality, safety, or satisfaction. Generally, these studies found that the Housing Choice Voucher program led to an improvement in neighborhood quality for voucher recipients. For example, Lens et al. (2011) found that voucher holders lived in safer neighborhoods than people who received the low-income housing tax credit, people who lived in public housing, and poor renters who did not receive any housing assistance. Specifically focusing on violent crime, they found similar results—voucher recipients lived in areas with less violent crime. Interestingly, however, the results were quite different among subgroups of the sample—the study found that black voucher holders lived in lower-crime neighborhoods as compared to other black renters, while white and Hispanic voucher holders actually lived in higher-crime neighborhoods when compared to their non-voucher-holding counterparts. Similarly to Lens et al. (2011), Wood et al. (2008) found that TANF-eligible families that used vouchers tended to live in neighborhoods with lower poverty rates and higher employment rates than TANF-eligible families that did not receive vouchers. Again, however, other findings tempered these results. For example, receipt of a voucher did not have any statistically significant effect on the level of satisfaction that families felt regarding their neighborhoods. Additionally, the interview portion of Wood et al.’s (2008) study revealed that while voucher recipients reported that they felt they had moved to better neighborhoods, they still expressed worry for the safety of their children in those neighborhoods.
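The neighborhood-safety comparisons above rest on the crime exposure rate from Lens et al. (2011): each neighborhood’s crime rate weighted by the share of a group’s households living there. A minimal sketch with invented numbers (the tract names and rates are hypothetical, not from the study):

```python
# Crime exposure rate in the spirit of Lens et al. (2011): a group's
# exposure is the average of neighborhood crime rates, weighted by the
# proportion of that group's households in each neighborhood. All
# tract names and figures below are invented for illustration.

def crime_exposure(households_by_tract, crime_rate_by_tract):
    """Weighted-average crime rate experienced by a household group."""
    total = sum(households_by_tract.values())
    return sum(
        (count / total) * crime_rate_by_tract[tract]
        for tract, count in households_by_tract.items()
    )

# Hypothetical household counts across three census tracts, with
# crime rates in offenses per 1,000 residents.
voucher_households = {"tract_a": 300, "tract_b": 100, "tract_c": 100}
unassisted_renters = {"tract_a": 100, "tract_b": 100, "tract_c": 300}
crime_rates = {"tract_a": 20.0, "tract_b": 50.0, "tract_c": 80.0}

print(crime_exposure(voucher_households, crime_rates))   # 38.0
print(crime_exposure(unassisted_renters, crime_rates))   # 62.0
```

Contrasting two groups’ exposure rates like this is the kind of difference-of-means comparison the study draws between voucher holders and other low-income renters.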
Sanbonmatsu et al.’s (2011) findings related to neighborhood poverty and quality are less mixed. This study contained two experimental groups: participants who received vouchers with no geographical limitations, and participants who received vouchers with the requirement of living in an area with a poverty rate of less than ten percent. Each group was found to live in lower-poverty neighborhoods than those who did not receive vouchers. Additionally, participants in each experimental group were statistically significantly more likely to report satisfaction with their current neighborhood than participants who did not receive a housing voucher.

Educational Outcomes

Three of the studies examined the effects of the Housing Choice Voucher program on educational outcomes. Generally, the findings do not indicate that the program has a meaningful effect on education. For example, Jacob et al. (2015) found that study participants whose family received a voucher when they were children were no more likely to graduate from high school than partici...
