The Pursuit of Quality for Social Work Practice: Three Generations and Counting

Correspondence regarding this article should be directed to Enola Proctor, Brown School, Campus Box 1196, Washington University in St. Louis, One Brookings Drive, St. Louis, MO 63130 or via e-mail to ekp@wustl.edu

Keywords: quality, evidence-based practice, implementation strategies, outcomes, training

Social work addresses some of the most complex and intractable human and social problems: poverty, mental illness, addiction, homelessness, and child abuse. Our field may be distinct among professions for its efforts to ameliorate the toughest societal problems, experienced by society’s most vulnerable, while working from under-resourced institutions and settings. Members of our profession are underpaid, and most of our agencies lack the data infrastructure required for rigorous assessment and evaluation.

Moreover, social work confronts these challenges while ethically bound to deliver high-quality services. Policy and regulatory requirements increasingly demand that social work deliver and document the effectiveness of the highest-quality interventions, and they restrict reimbursement to services documented as evidence based. Social work’s future, its very survival, depends on our ability to deliver services with a solid base of evidence and to document their effectiveness. In the words of the American Academy of Social Work and Social Welfare (AASWSW; n.d.), social work seeks to “champion social progress powered by science.” The research community needs to support practice through innovative and rigorous science that advances the evidence for interventions to address social work’s grand challenges.

My work seeks to improve the quality of social work practice by pursuing answers to three questions:

What interventions and services are most effective and thus should be delivered in social work practice?

How do we measure the impact of those interventions and services? (That is, what outcomes do our interventions achieve?)

How do we implement the highest quality interventions?

This paper describes this work, demonstrates the substantive and methodological progression across the three questions, assesses what we have learned, and forecasts a research agenda for what we still need to learn. Given Aaron Rosen’s role as my PhD mentor and our many years of collaboration, the paper also addresses the role of research mentoring in advancing our profession’s knowledge base.

What Interventions and Services Are Most Effective?

Answering the question “What services are effective?” requires rigorous testing of clearly specified interventions. The first paper I coauthored with Aaron Rosen—“Specifying the Treatment Process: The Basis for Effectiveness Research” (Rosen & Proctor, 1978)—provided a framework for evaluating intervention effectiveness. At that time, process and outcomes were jumbled and intertwined concepts. Social work interventions were rarely specified beyond theoretical orientation or level of focus: casework (or direct practice), group work, and macro practice, which included community, agency-level, and policy-focused practice. Moreover, interventions were not named, nor were their components clearly identified. We recognized that gross descriptions of interventions obstruct professional training, preclude fidelity assessment, and prevent accurate tests of effectiveness. Thus, in a series of papers, Rosen and I advocated that social work interventions be specified, clearly labeled, operationally defined, measured, and tested.

Specifying Interventions

Such specification of interventions is essential to two professional responsibilities: professional education and demonstration of the effectiveness of the field’s interventions. Without specification, interventions cannot be taught. Social work education centers on equipping students with skills to deliver interventions, programs, services, administrative practices, and policies. Teaching interventions requires an ability to name and define them, see them in action, measure their presence (or absence), assess the fidelity with which they are delivered, and give students feedback on how to strengthen or refine the associated skills.

To advance testing of the effectiveness of social work interventions, we drew distinctions between interventions and outcomes and proposed these two constructs as the foci for effectiveness research. We defined interventions as practitioner behaviors that can be volitionally manipulated (used or not, varied in intensity and timing), that are defined in detail, that can be reliably measured, and that can be linked to specific identified outcomes (Rosen & Proctor, 1978; Rosen & Proctor, 1981). This definition foreshadowed the development of treatment manuals, lists of specific evidence-based practices, and calls for monitoring intervention fidelity. Recognizing the variety of intervention types, and to advance their more precise definition and measurement, we proposed that interventions be distinguished in terms of their complexity. Interventive responses comprise discrete or single responses, such as affirmation, expression of empathy, or positive reinforcement. Interventive strategies comprise several different actions that are, together, linked to a designated outcome; motivational interviewing is an example. Most complex are interventive programs, in which a variety of intervention actions are organized and integrated as a total treatment package; collaborative care for depression and assertive community treatment are examples. To strengthen the professional knowledge base, we also called for social work effectiveness research to begin testing the optimal dose and sequencing of intervention components in relation to attainment of desired outcomes.
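This three-level distinction amounts to a nested catalog, and it can be made concrete as a small data structure. The sketch below is purely illustrative—the class names and catalog entries are my hypothetical rendering, not part of the published framework—but it shows how recording each intervention’s complexity level, target outcome, and components turns dose and sequencing into explicit, testable fields.

```python
from dataclasses import dataclass, field
from enum import Enum

class Complexity(Enum):
    RESPONSE = "interventive response"   # a discrete, single action
    STRATEGY = "interventive strategy"   # several actions linked to one outcome
    PROGRAM = "interventive program"     # an integrated total treatment package

@dataclass
class Intervention:
    name: str
    complexity: Complexity
    target_outcome: str
    components: list = field(default_factory=list)  # lower-level Interventions

# Hypothetical catalog entries mirroring the examples in the text.
empathy = Intervention("expression of empathy", Complexity.RESPONSE, "engagement")
affirmation = Intervention("affirmation", Complexity.RESPONSE, "engagement")
mi = Intervention("motivational interviewing", Complexity.STRATEGY,
                  "readiness to change", components=[empathy, affirmation])
collaborative_care = Intervention("collaborative care for depression",
                                  Complexity.PROGRAM, "depression symptom reduction",
                                  components=[mi])
```

A fidelity measure, for example, could then check for the presence of each named component rather than for the program label alone.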

Advancing Intervention Effectiveness Research

Our “specifying paper” also was motivated by the paucity of literature at that time on actual social work interventions. Our review of 5 years of published research in 13 major social work journals revealed that only 15% of published social work research addressed interventions. About a third of studies described social problems, and about half explored factors associated with those problems (Rosen, Proctor, & Staudt, 2003). Most troubling was our finding that only 3% of articles described the intervention or its components in sufficient detail for replication in either research or practice. Later, Fraser (2004) found intervention research to comprise only about one fourth of empirical studies in social work. Fortunately, our situation has improved. Intervention research is more frequent in social work publications, thanks largely to the publication policies of the Journal of the Society for Social Work and Research and Research on Social Work Practice.

Research Priorities

Social work faces important and formidable challenges as it advances research on intervention effectiveness. The practitioner who searches the literature or the various intervention lists can find more than 500 named practices whose evidence from rigorous trials passes a bar to qualify them as evidence-based practices. However, our profession still lacks any organized compendium or taxonomy of the interventions that are employed in or found to be effective for social work practice. Existing lists of evidence-based practices, although necessary, are insufficient for social work for several reasons. First, as a 2015 Institute of Medicine (IOM) report—“Psychosocial Interventions for Mental and Substance Use Disorders: A Framework for Establishing Evidence-Based Standards” (IOM, 2015)—concluded, too few evidence-based practices have been found to be appropriate for low-resource settings or acceptable to minority groups. Second, existing interventions do not adequately reflect the breadth of social work practice. We have too few evidence-based interventions that can inform effective community organization, case management, referral practice, resource development, administrative practice, or policy. Noting that there is far less literature on evidence-based practices relevant to organizational, community, and policy practice, a social work task force responding to the 2015 IOM report recommended that this gap be a target of our educational and research efforts (National Task Force on Evidence-Based Practice in Social Work, 2016). Finally, our field—along with other professions that deliver psychosocial interventions—lacks the kinds of procedure codes that can identify the specific interventions we deliver. Documenting social work activities in agency records is increasingly essential for quality assurance and third-party reimbursement.

Future Directions: Research to Advance Evidence on Interventions

Social work has critically important research needs. Our field needs to advance the evidence base on what interventions work for social work populations, practices, and settings. Responding to the 2015 IOM report, the National Task Force on Evidence-Based Practice in Social Work (2016) identified as a social work priority the development and testing of evidence-based practices relevant to organizational, community, and policy practice. As we advance our intervention effectiveness research, we must respond to the challenge of determining the key mechanisms of change (National Institute of Mental Health, 2016) and identify key modifiable components of packaged interventions (Rosen & Proctor, 1978). We need to explore the optimal dosage, ordering, or adapted bundling of intervention elements and advance robust, feasible ways to measure and increase fidelity (Jaccard, 2016). We also need to conduct research on which interventions are most appropriate, acceptable, and effective with various client groups (Videka, 2003; Zayas, 2003).

Documenting the Impact of Interventions: Specifying and Measuring Outcomes

Outcomes are key to documenting the impact of social work interventions. My 1978 “specifying” paper with Rosen emphasized that the effectiveness of social work practice could not be adequately evaluated without clear specification and measurement of various types of outcomes. In that paper, we argued that the profession cannot rely only on an assertion of effectiveness. The field must also calibrate, calculate, and communicate its impact.

The nursing profession’s highly successful campaign, based on outcomes research, positioned that field to claim that “nurses save lives.” Nurse staffing ratios were associated with in-hospital and 30-day mortality, independent of patient characteristics, hospital characteristics, or medical treatment (Person et al., 2004). In contrast, social work has often described—sometimes advertised—itself as the low-cost profession. The claim of “cheapest service” may have some strategic advantage in turf competition with other professions. But social work can do better. Our research base can and should demonstrate the value of our work by naming and quantifying the outcomes—the added value of social work interventions.

As a start to this work—a beginning step in compiling evidence about the impact of social work interventions—our team set out to identify the outcomes associated with social work practice. We felt that identifying and naming outcomes is essential for conveying what social work is about. Moreover, outcomes should serve as the focus for evaluating the effectiveness of social work interventions.

We produced two taxonomies of outcomes reflected in published evaluations of social work interventions (Proctor, Rosen, & Rhee, 2002; Rosen, Proctor, & Staudt, 2003). They included such outcomes as change in clients’ social functioning, resource procurement, problem or symptom reduction, and safety. They exemplify the importance of naming and measuring what our profession can contribute to society. Although social work’s growing body of effectiveness research typically reports outcomes of the interventions being tested, the literature has not, in the intervening 20 years, addressed the collective set of outcomes for our field.

Fortunately, the Grand Challenges for Social Work (AASWSW, n.d.) now provide a framework for communicating social work’s goals. They reflect social work’s added value: improving individual and family well-being, strengthening social fabric, and helping to create a more just society. The Grand Challenges for Social Work include ensuring healthy development for all youth, closing the health gap, stopping family violence, advancing long and productive lives, eradicating social isolation, ending homelessness, creating social responses to a changing environment, harnessing technology for social good, promoting smart decarceration, reducing extreme economic inequality, building financial capability for all, and achieving equal opportunity and justice (AASWSW, n.d.).

These important goals appropriately reflect much of what social work is about, and they have galvanized our entire field. However, the grand challenges require specific benchmarks: targets that reflect how far our professional actions can be expected to take us or, in some areas, how far we have come in meeting each challenge.

For the past decade, care delivery systems and payment reforms have required measures for tracking performance. Quality measures have become critical tools for all service providers and organizations (IOM, 2015). The IOM defines quality of care as “the degree to which … services for individuals and populations increase the likelihood of desired … outcomes and are consistent with current professional knowledge” (Lohr, 1990, p. 21). Quality measures are important at multiple levels of service delivery: the client, practitioner, organization, and policy levels. The National Quality Forum has established five criteria for quality measures: They should address (a) the most important, (b) the most scientifically valid, (c) the most feasible or least burdensome, (d) the most usable, and (e) the most harmonious set of measures (IOM, 2015). Quality measures have been advanced by accrediting groups (e.g., the Joint Commission, the National Committee for Quality Assurance), professional societies, and federal agencies, including the U.S. Department of Health and Human Services. However, quality measures are lacking for key areas of social work practice, including mental health and substance-use treatment. Of the 55 nationally endorsed measures related to mental health and substance use, only two address a psychosocial intervention. Measures used for accreditation and certification purposes often reflect structural capabilities of organizations and their resource use, not the infrastructure required to deliver high-quality services (IOM, 2015). I am not aware of any quality measure developed by our own professional societies or agreed upon across our field.

Future Directions: Research on Quality Monitoring and Measure Development

Although social work as a field lacks a strong tradition of measuring and assessing quality (McMillen et al., 2005; Megivern et al., 2007; Proctor, Powell, & McMillen, 2012), social work’s role in the quality workforce is becoming better understood (McMillen & Raffol, 2016). The small number of established and endorsed quality measures reflects both limitations in the evidence for effective interventions and challenges in obtaining the detailed information necessary to support quality measurement (IOM, 2015). According to the National Task Force on Evidence-Based Practice in Social Work (2016), developing quality measures to capture use of evidence-based interventions is essential for the survival of social work practice in many settings. The task force recommends that social work organizations develop relevant and viable quality measures and that social workers actively influence the implementation of quality measures in their practice settings.

How to Implement Evidence-Based Care

A third and more recent focus of my work addresses this question: How do we implement evidence-based care in agencies and communities? Despite our progress in developing proven interventions, most clients—whether served by social workers or other providers—do not receive evidence-based care. A growing number of studies are assessing the extent to which clients—in specific settings or communities—receive evidence-based interventions. Kohl, Schurer, and Bellamy (2009) examined quality in a core area of social work: training for parents at risk for child maltreatment. The team assessed the parenting services offered by community agencies, staffed largely by master’s-level social workers, and those services’ level of empirical support. Of 35 identified treatment programs offered to families, only 11% were “well-established empirically supported interventions,” and another 20% contained some hallmarks of empirically supported interventions (Kohl et al., 2009). This study reveals a sizable implementation gap: most of the programs delivered lacked scientific validation.

Similar quality gaps are apparent in other settings where social workers deliver services. For example, only 19.3% of school mental health professionals and 36.8% of community mental health professionals working in Virginia’s schools and community mental health centers report using any evidence-based substance-abuse prevention programs (Evans, Koch, Brady, Meszaros, & Sadler, 2013). In mental health, where social workers have long delivered the bulk of services, only 40% to 50% of people with mental disorders receive any treatment (Kessler, Chiu, Demler, Merikangas, & Walters, 2005; Merikangas et al., 2011), and of those receiving treatment, only a fraction receive what could be considered “quality” treatment (Wang, Demler, & Kessler, 2002; Wang et al., 2005). These and other studies indicate that, despite progress in developing proven interventions, most clients do not receive evidence-based care. In light of the growth of evidence-based practice, this fact is troubling evidence that testing interventions and publishing the findings are not sufficient to improve quality.

So, how do we get these interventions in place? What is needed to enable social workers to deliver, and clients to receive, high-quality care? In addition to developing and testing evidence-based interventions, what else is needed to improve the quality of social work practice? My work has focused on advancing quality of services through two paths.

Making Effective Interventions Accessible to Providers: Intervention Reviews and Taxonomies

First, we have advocated that research evidence be synthesized and made available to front-line practitioners. In a research-active field where new knowledge is constantly produced, practitioners should not be expected to rely on journal publications alone for information about effective approaches to achieve desired outcomes. Mastering a rapidly expanding professional evidence base has been characterized as a nearly unachievable challenge for practitioners (Greenfield, 2017). Reviews should critique and clarify an intervention’s effectiveness as tested in specific settings, populations, and contexts, answering the question, “What works where, and with whom?” Even more valuable are studies of comparative effectiveness—those that answer, “Which intervention approach works better, where, and when?”

Taxonomies of clearly and consistently labeled interventions will enhance their accessibility and the usefulness of research reports and systematic reviews. A prerequisite is the consistent naming of interventions; a persistent challenge is the wide variation in the names or labels given to interventive procedures and programs. Our professional activities are the basis for our societal sanction, and they must be capable of being accurately labeled and documented if we are to describe what our profession “does” to advance social welfare. Increasingly, and in short order, that documentation will reside in electronic records scrutinized by third parties for reimbursement and for assessing the value of services toward outcome attainment.

How should intervention research and reviews be organized? Currently, several websites provide lists of evidence-based practices, some with links, citations, or information about dissemination and implementation organizations that provide training and facilitation to adopters. Practitioners and administrators find such lists helpful but often note the challenge of determining which practices are most appropriate for their needs. In the words of one agency leader, “The drug companies are great at presenting [intervention information] in a very easy form to use. We don’t have people coming and saying, ‘Ah, let me tell you about the best evidence-based practice for cognitive behavioral therapy for depression’” (Proctor et al., 2007, p. 483). We have called for the field to devise decision aids for practitioners to enhance access to the best available empirical knowledge about interventions (Proctor et al., 2002; Proctor & Rosen, 2008; Rosen et al., 2003). We proposed that intervention taxonomies be organized around the outcomes pursued in social work practice, and we developed such a taxonomy based on eight domains of outcomes—those most frequently tested in social work journals. Given the field’s progress in identifying its grand challenges, the outcomes associated with each challenge could well serve as the organizing focus, with research-tested interventions listed under each. Compiling the interventions, programs, and services that are shown—through research—to help achieve one of the challenges would surely advance our field.
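To illustrate the organizing principle (not the published eight-domain taxonomy itself), a decision aid of this kind is essentially an index keyed by outcome: the practitioner starts from the outcome being pursued and retrieves the research-tested interventions linked to it. The outcome keys and entries below are hypothetical placeholders.

```python
# A minimal sketch of an outcomes-organized intervention index.
# The outcome domains and entries are hypothetical placeholders,
# not the taxonomy published in Proctor et al. (2002).
intervention_index = {
    "problem/symptom reduction": [
        ("collaborative care for depression", "supported by randomized trials"),
    ],
    "resource procurement": [
        ("case management with warm handoff referral", "emerging evidence"),
    ],
}

def interventions_for(outcome):
    """Return (intervention, evidence summary) pairs for an outcome domain."""
    return intervention_index.get(outcome, [])

# A practitioner query starts from the desired outcome, not an intervention name.
for name, evidence in interventions_for("problem/symptom reduction"):
    print(f"{name}: {evidence}")
```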

We further urged profession-wide efforts to develop social work practice guidelines from intervention taxonomies (Rosen et al., 2003). Practice guidelines are systematically compiled, critiqued, and organized statements about the effectiveness of interventions, arranged to help practitioners select and use the most effective and appropriate approaches for addressing client problems and pursuing desired outcomes.

At that time, we proposed that our published taxonomy of social work interventions could provide a beginning architecture for social work guidelines (Rosen et al., 2003). In 2000, we organized a conference for thought leaders in social work practice. This talented group wrestled with and formulated recommendations for tackling the professional, research, and training requisites of developing social work practice guidelines that would enable practitioners to access and apply the best available knowledge about interventions (Rosen et al., 2003). Fifteen years later, however, the need remains for social work to synthesize its intervention research. Psychology and psychiatry, along with most fields of medical practice, have developed practice guidelines. Although their acceptance and adherence are fraught with challenges, guidelines make evidence more accessible and enable quality monitoring. Yet guidelines still do not exist for social work.

The 2015 IOM report, “Psychosocial Interventions for Mental and Substance Use Disorders: A Framework for Establishing Evidence-Based Standards,” concluded that information on the effectiveness of psychosocial interventions is neither routinely available to service consumers, providers, and payers nor synthesized. The report called for systematic reviews to inform clinical guidelines for psychosocial interventions. It defined psychosocial interventions broadly, encompassing “interpersonal or informational activities, techniques, or strategies that target biological, behavioral, cognitive, emotional, interpersonal, social, or environmental factors with the aim of reducing symptoms and improving functioning or well-being” (IOM, 2015, p. 5). These interventions are social work’s domain; they are delivered in the very settings where social workers dominate (behavioral health, schools, criminal justice, child welfare, and immigrant services), and they encompass populations across the entire lifespan and all sociodemographic and vulnerable groups. Accordingly, the National Task Force on Evidence-Based Practice in Social Work (2016) has recommended the conduct of more systematic reviews of the evidence supporting social work interventions.

If systematic reviews are to lead to guidelines for evidence-based psychosocial interventions, social work needs to be at the table, and social work research must provide the foundation. Whether social work develops its own guidelines or helps lead the development of profession-independent guidelines as recommended by the IOM committee, guidelines need to be detailed enough to guide practice. That is, they need to be accompanied by treatment manuals and informed by research that details the effect of moderator variables and contextual factors reflecting diverse clientele, social determinants of health, and setting resource challenges. The IOM report “Clinical Practice Guidelines We Can Trust” sets criteria for guideline development processes (IOM, 2011). Moreover, social work systematic reviews of research and any associated evidence-based guidelines need to be organized around meaningful taxonomies.

Advancing the Science of Implementation

As a second path to ensuring the delivery of high-quality care, my research has focused on advancing the science of implementation. Implementation research seeks to inform how to move evidence-based interventions, programs, and policies into real-world settings so their benefits can be realized and sustained. The ultimate aim of implementation research is building a base of evidence about the most effective processes and strategies for improving service delivery. Implementation research builds on effectiveness research, seeking to discover how specific implementation strategies can move effective interventions into specific settings and thereby extend their availability, reach, and benefits to clients and communities. Accordingly, implementation strategies must address the challenges of the service system (e.g., specialty mental health, schools, criminal justice system, health settings), the practice setting (e.g., community agency, national employee assistance programs, office-based practice), and the human capital challenge of staff training and support.

In an approach that echoes themes in an early paper, “Specifying the Treatment Process: The Basis for Effectiveness Research” (Rosen & Proctor, 1978), my work once again tackled the challenge of specifying a heretofore vague process—this time, not the intervention process, but the implementation process. As a first step, our team developed a taxonomy of implementation outcomes (Proctor et al., 2011); these outcomes enable a direct test of whether a given intervention is adopted and delivered. Implementation science focuses explicitly on this distinct type of outcome, which other types of research overlook. Explicit examination of implementation outcomes is key to an important research distinction. Evaluations often yield disappointing results about an intervention, showing that the expected and desired outcomes were not attained. This might mean that the intervention was not effective. However, just as likely, it could mean that the intervention was not actually delivered, or was not delivered with fidelity. Implementation outcomes help identify the roadblocks on the way to intervention adoption and delivery.
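The reasoning in this paragraph amounts to a decision rule: before interpreting a null clinical result as intervention failure, first check the implementation outcomes. A minimal sketch, with a hypothetical fidelity threshold of my own choosing, makes the rule explicit:

```python
def interpret_null_result(delivered: bool, fidelity: float,
                          fidelity_threshold: float = 0.8) -> str:
    """Classify a disappointing evaluation result.

    Hypothetical decision rule: a null clinical effect counts as evidence
    against the intervention itself only if the intervention was actually
    delivered with adequate fidelity; otherwise the failure lies in
    implementation, not in the intervention.
    """
    if not delivered:
        return "implementation failure: the intervention never reached clients"
    if fidelity < fidelity_threshold:
        return "implementation failure: delivered without adequate fidelity"
    return "possible intervention failure: delivered as designed, yet no effect"

# Example: a program delivered at 55% fidelity should not be judged ineffective.
print(interpret_null_result(delivered=True, fidelity=0.55))
```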

Our 2011 taxonomy of implementation outcomes (Proctor et al., 2011) became the framework for two national repositories of measures for implementation research: the Seattle Implementation Research Collaborative (Lewis et al., 2015) and the National Institutes of Health Grid-Enabled Measures (GEM) database (Rabin et al., 2012). These repositories of implementation outcome measures seek to harmonize and increase the rigor of measurement in implementation science.

We also have developed taxonomies of implementation strategies (Powell et al., 2012; Powell et al., 2015; Waltz et al., 2014, 2015). Implementation strategies are interventions for system change—how organizations, communities, and providers can learn to deliver new and more effective practices (Powell et al., 2012).

A conversation with a key practice leader stimulated my interest in implementation strategies. Shortly after our school endorsed an MSW curriculum emphasizing evidence-based practices, a pioneering CEO of a major social service agency in St. Louis met with me and asked,

Enola Proctor, I get the importance of delivering evidence-based practices. My organization delivers over 20 programs and interventions, and I believe only a handful of them are really evidence based. I want to decrease our provision of ineffective care and increase our delivery of evidence-based practices. But how? What are the evidence-based ways I, as an agency director, can transform my agency so that we can deliver evidence-based practices?

That agency director was asking a question of how. He was asking for evidence-based implementation strategies. Moving effective programs and practices into routine care settings requires the skillful use of implementation strategies, defined as systematic “methods or techniques used to enhance the adoption, implementation, and sustainability of a clinical program or practice into routine service” (Proctor et al., 2013, p. 2).

This question has shaped my work for the past 15 years, as well as the research priorities of several funding agencies, including the National Institutes of Health, the Agency for Healthcare Research and Quality, the Patient-Centered Outcomes Research Institute, and the World Health Organization. Indeed, a National Institutes of Health program announcement—Dissemination and Implementation Research in Health (National Institutes of Health, 2016)—identified the discovery of effective implementation strategies as a primary purpose of implementation science. The implementation science literature cannot yet answer that important question, but we are making progress.

To identify implementation strategies, our teams first turned to the literature—a literature that we found to be scattered across a wide range of journals and disciplines. Most articles were not empirical, and they used widely differing terms to characterize implementation strategies. We conducted a structured literature review to generate a common nomenclature and a taxonomy of implementation strategies. That review yielded 63 distinct implementation strategies, which fell into six groupings: planning, educating, financing, restructuring, managing quality, and attending to policy context (Powell et al., 2012).

Our team refined that compilation, using Delphi techniques and concept mapping to develop conceptually distinct categories of implementation strategies (Powell et al., 2015; Waltz et al., 2014). The refined compilation of 73 discrete implementation strategies was then further organized into nine clusters: