
BY DAY 3 Post an explanation of how you ensure the quality, trustworthiness, and credibility of your qualitative research



Post an explanation of how you ensure the quality, trustworthiness, and credibility of your qualitative research. Provide examples of specific techniques and strategies. Use your Learning Resources as well as the article you found in your search to support your explanation. Use proper APA format, citations, and referencing.


Respond to at least one of your colleagues’ posts and search the Internet and/or the Walden Library for a different article related to trustworthiness and/or quality of qualitative research that offers other techniques or strategies. Explain how these other techniques or strategies might further ensure quality, trustworthiness, and credibility in qualitative research. Finally, evaluate the feasibility of your colleague’s strategies. Reference the article you found to support your explanation. Use proper APA format, citations, and referencing.


OVERVIEW OF VALIDITY IN QUALITATIVE RESEARCH

The concept of validity is often confusing for novice researchers, in part because there are many terms for the notion, such as “authenticity, goodness, verisimilitude, adequacy, trustworthiness, plausibility, validity, validation, and credibility” (Creswell & Miller, 2000, p. 124). In addition, some qualitative researchers reject the concept of validity, asserting that it is not compatible with qualitative research (e.g., Kvale, 1995; Lather, 1993; Wolcott, 1990) and that it is borrowed from quantitative research and therefore based on epistemological frames incongruent with qualitative values.

When qualitative research was gaining acceptance as a field in the early 1970s, the positivist research paradigm was still regarded as the gold standard, and the standards of quantitative research were used to develop qualitative research constructs. In the 1980s, Lincoln and Guba, two seminal scholars in qualitative research, placed concerns about the standards and criteria used to judge the quality of qualitative research at the forefront of debate in the field (e.g., Guba, 1981; Guba & Lincoln, 1981, 1989; Lincoln, 1995; Lincoln & Guba, 1985). They attempted to define criteria that better accorded with a naturalistic inquiry and constructivist paradigm than with a positivist one, specifically examining the differences between the two paradigms on the nature of reality, the nature of the inquirer–“subject” relationship, and the nature of “truth statements.” Guba and others set in motion what has become a complex and diffuse set of debates, not just about the terms validity and trustworthiness but, more broadly, about the place of these issues within debates about qualitative rigor, validity, and value.

Validity, in qualitative research, refers to the ways that researchers can affirm that their findings are faithful to participants’ experiences.
Put another way, it refers to the quality and rigor of a study. Discomfort with the positivist origins of the concept of validity and its emphasis on objective truth has led some methods scholars to use the term trustworthiness as an alternative, while others use the terms interchangeably or use other terms such as quality or rigor. As described above, the debate is not merely a semantic one; rather, it pushes into questions about epistemological traditions, ontological beliefs, and axiological debates. While the field has not settled on a shared understanding or term, the terms validity and trustworthiness are most commonly used and evoke the importance of ensuring credibility and rigor in qualitative research.

We do not want to get caught up in the specific word that one uses to describe the processes and approaches that qualitative researchers use to assess the rigor of qualitative studies. Whichever term you use, the concept of developing valid studies is paramount in qualitative research. Throughout this book, we use the term validity since it remains the prominent word in the field and since we believe it has been reframed and reclaimed within the qualitative tradition, although we acknowledge the problems with this term. Furthermore, we frame validity in ways that place primacy not simply on the specific concepts or procedures used to attempt to achieve it, but also on doing justice to the complexity of research participants’ experiences and thoroughly contextualizing their lives, perspectives, and experiences in ways that help to present the most complex, and therefore most valid, renderings possible. Given the growing prominence of qualitative research within and across fields, and given that it is increasingly scrutinized in relation to proposals for funding and policy mandates, issues of validity are more central than ever (Barbour, 2001).
Here we highlight two distinct approaches to issues of validity: transactional and transformational (Cho & Trent, 2006). Transactional validity includes techniques and attempts to achieve a “higher level of accuracy and consensus by means of revisiting facts, feelings, experiences, and values or beliefs collected and interpreted.” Transformational validity, by contrast, is “a more radical approach [that] challenges the very notion of validity”; it is an emancipatory process leading toward social change that “involves a deeper, self-reflective, empathetic understanding of the researcher while working with the researched” (Cho & Trent, 2006, pp. 321–322). The transformational approach to validity, and to qualitative research more broadly, is informed by a critical social theory perspective in which the validity of the research is understood and assessed by the action it generates. These approaches to validity need not be dichotomous because “transformational approaches seeking ameliorative change can and should be combined, when deemed relevant by the researcher(s) and/or participants, with more traditional trustworthiness-like criteria” (Cho & Trent, 2006, p. 333).1

Regardless of the approach used, validity in qualitative research can never be fully ensured; it is both a process and a goal. A valid study cannot be achieved merely by using specific, technical strategies (Barbour, 2001; Maxwell, 2013). However, there are methods that researchers use to help increase the rigor, and thus validity, of qualitative research studies, and we describe these throughout this chapter.

ASSESSING VALIDITY

Validity is an approach to achieving complexity through systematic ways of implementing and assessing a study’s rigor.
As discussed previously, qualitative research demonstrates a fidelity to participants’ experiences rather than to specific methods (Hammersley & Atkinson, 2007); this is equally true for the concept of qualitative validity. In contrast with quantitative researchers, “qualitative researchers use a lens not based on scores, instruments, or research designs but a lens established using the views of people who conduct, participate in, or read and review a study” (Creswell & Miller, 2000, p. 125). The different lenses that shape the validity work of qualitative researchers include the lens of the researcher, of the research participants, and of other individuals external to the study. In addition to these lenses, specific criteria for assessing validity differ for qualitative researchers depending on the qualitative paradigm to which they subscribe (Creswell & Miller, 2000).2

Validity Criteria: Credibility, Transferability, Dependability, and Confirmability

Qualitative researchers should adhere to a different set of standards than quantitative researchers when assessing validity, given the differences in values between the paradigms; these standards include credibility, transferability, dependability, and confirmability (Guba, 1981). While arguing for different qualitative standards, Guba (1981) maps these criteria onto the respective quantitative notions of internal validity, external validity, reliability, and objectivity.3 These standards may be inadequate to assess rigor in many qualitative studies (Toma, 2011), and rigor can be assessed in many ways and need not parallel quantitative standards. We say this to emphasize that qualitative researchers should develop validity approaches that align with the research questions, goals, and contexts of their studies.
Despite this, in this section we define the commonly accepted concepts for assessing rigor in qualitative research (credibility, transferability, dependability, and confirmability) because they help researchers conceptualize, engage with, and plan for various aspects of validity.

Credibility is the researcher’s ability to take into account all of the complexities that present themselves in a study and to deal with patterns that are not easily explained (Guba, 1981). This is akin to the quantitative notion of internal validity (Guba, 1981; Lincoln & Guba, 1985; Miles, Huberman, & Saldaña, 2014). Internal validity entails that “the researcher can draw meaningful inferences from instruments that measure what they intend to measure” (Toma, 2011, p. 269). In other words, in qualitative research, internal validity, or credibility, is directly related to research design and the researcher’s instruments and data. Attempts to establish credibility are achieved by structuring a study to seek and attend to complexity throughout a recursive research design process, and the notion of credibility is a good example of the concept of the inseparability of methods and findings (Emerson, Fretz, & Shaw, 1995). Credibility is an important part of critical research design. While there is not—and should not be—a checklist that can be applied for achieving validity, in Table 6.1 we present a set of questions that are helpful to consider when thinking about the credibility of a study. Qualitative researchers attempt to establish credibility by implementing the validity strategies of triangulation, member checking (what we think of and describe as participant validation), presenting thick description, discussing negative cases, having prolonged engagement in the field, using peer debriefers, and/or having an external auditor (Toma, 2011). We discuss these strategies in detail in the section that follows.
Transferability, which is juxtaposed with external validity (Guba, 1981; Lincoln & Guba, 1985) or generalizability (Toma, 2011), entails that qualitative research is bound contextually. The goal of qualitative research is not to produce true statements that can be generalized to other people or settings but rather to develop descriptive, context-relevant statements (Guba, 1981). In this regard, transferability is the way in which qualitative studies can be applicable, or transferable, to broader contexts while still maintaining their context-specific richness. Lincoln and Guba (1985) pose an important question that helps us understand the concept of transferability and the parallel notion of external validity: “How can one determine the degree to which the findings of an inquiry may have applicability in other contexts or with other respondents?” (p. 218). Because primacy is placed on fidelity to participants’ experiences in qualitative research, it is important to understand that the goal of qualitative research is not to produce findings that can be directly applied to other settings and contexts. However, qualitative research can certainly be transferable to other contexts. Methods for achieving transferability include providing detailed descriptions of the data themselves as well as of the context (also called thick description) so that readers/research audiences can make comparisons to other contexts based on as much information as possible (Guba, 1981). This allows the audiences of the research (e.g., readers, other researchers, stakeholders, participants) to assess the applicability of the findings to their own contexts.

Dependability refers to the stability of the data. It is similar to the quantitative concept of reliability (Guba, 1981; Lincoln & Guba, 1985). Qualitative research studies are considered dependable by being what Miles et al. (2014) describe as consistent and stable over time.
Dependability entails that you have a reasoned argument for how you are collecting the data, and that the data are consistent with your argument. In addition, this notion means that data are dependable in the sense that they are answering your research question(s). This entails using appropriate methods (and making an argument for why the methods you use are appropriate) to answer the core constructs and concepts of your study. The methods for achieving dependability are the triangulation and sequencing of methods and creating a well-articulated rationale for these choices to confirm that you have created the appropriate data collection plan given your research questions. As with the other validity constructs, a solid research design is key to achieving dependability.

Confirmability, which is often described as the qualitative equivalent of the quantitative concept of objectivity, takes into account the idea that qualitative researchers do not claim to be objective (Guba, 1981). Instead, qualitative researchers seek to have confirmable data and “relative neutrality and reasonable freedom from unacknowledged researcher biases—at the minimum, explicitness about the inevitable biases that exist” (Miles et al., 2014, p. 311). In other words, building on a foundational premise of qualitative research that the world is a subjective place, qualitative researchers do not seek objectivity; however, your findings should be able to be confirmed. Thus, one goal of confirmability is to acknowledge and explore the ways that our biases and prejudices map onto our interpretations of data and to mediate those to the fullest extent possible through structured reflexivity processes (such as the ones described throughout this book). Methods to achieve confirmability include implementing triangulation strategies, researcher reflexivity processes, and external audits (Guba, 1981).
As described in Chapter 3, researcher positionality and bias are important aspects of qualitative research that must be scrutinized, problematized, and complicated. Because the researcher is viewed as a primary instrument in qualitative research (Lofland, Snow, Anderson, & Lofland, 2006; Porter, 2010), researchers must challenge themselves and be challenged by others in systematic and ongoing ways throughout all stages of the research. And this must be concretized within the research design itself.

Ravitch, S. M., & Carl, N. M. (2021). Qualitative research: Bridging the conceptual, theoretical, and methodological (2nd ed.). Sage Publications.

Strategies for ensuring trustworthiness in qualitative research projects

Andrew K. Shenton, Division of Information and Communication Studies, School of Informatics, Lipman Building, Northumbria University, Newcastle upon Tyne, NE1 8ST, UK

Education for Information, 22 (2004), 63–75. IOS Press. Received 14 November 2003; accepted 6 January 2004.

Abstract. Although many critics are reluctant to accept the trustworthiness of qualitative research, frameworks for ensuring rigour in this form of work have been in existence for many years. Guba’s constructs, in particular, have won considerable favour and form the focus of this paper. Here researchers seek to satisfy four criteria. In addressing credibility, investigators attempt to demonstrate that a true picture of the phenomenon under scrutiny is being presented. To allow transferability, they provide sufficient detail of the context of the fieldwork for a reader to be able to decide whether the prevailing environment is similar to another situation with which he or she is familiar and whether the findings can justifiably be applied to the other setting. The meeting of the dependability criterion is difficult in qualitative work, although researchers should at least strive to enable a future investigator to repeat the study.
Finally, to achieve confirmability, researchers must take steps to demonstrate that findings emerge from the data and not their own predispositions. The paper concludes by suggesting that it is the responsibility of research methods teachers to ensure that this or a comparable model for ensuring trustworthiness is followed by students undertaking a qualitative inquiry.

Keywords: Qualitative methods, research

1. Introduction

The trustworthiness of qualitative research generally is often questioned by positivists, perhaps because their concepts of validity and reliability cannot be addressed in the same way in naturalistic work. Nevertheless, several writers on research methods, notably Silverman [1], have demonstrated how qualitative researchers can incorporate measures that deal with these issues, and investigators such as Pitts [2] have attempted to respond directly to the issues of validity and reliability in their own qualitative studies. Many naturalistic investigators have, however, preferred to use different terminology to distance themselves from the positivist paradigm. One such author is Guba, who proposes four criteria that he believes should be considered by qualitative researchers in pursuit of a trustworthy study [3]. By addressing similar issues, Guba’s constructs correspond to the criteria employed by the positivist investigator:

a) credibility (in preference to internal validity);
b) transferability (in preference to external validity/generalisability);
c) dependability (in preference to reliability);
d) confirmability (in preference to objectivity).
Although as recently as the mid-1990s Lincoln wrote that the whole area of qualitative inquiry was “still emerging and being defined” [4], Guba’s constructs have been accepted by many. This paper considers the criteria in detail and suggests provisions that the qualitative researcher may employ to meet them. The strategies advocated are based on the experience gained by Shenton when undertaking a qualitative PhD study devoted to the information-seeking behaviour of school-aged children [5].

2. Credibility

One of the key criteria addressed by positivist researchers is that of internal validity, in which they seek to ensure that their study measures or tests what is actually intended. According to Merriam, the qualitative investigator’s equivalent concept, i.e. credibility, deals with the question, “How congruent are the findings with reality?” [6] Lincoln and Guba argue that ensuring credibility is one of the most important factors in establishing trustworthiness [7]. The following provisions may be made by researchers to promote confidence that they have accurately recorded the phenomena under scrutiny:

a) the adoption of research methods well established both in qualitative investigation in general and in information science in particular. Yin recognises the importance of incorporating “correct operational measures for the concepts being studied” [8]. Thus, the specific procedures employed, such as the line of questioning pursued in the data gathering sessions and the methods of data analysis, should be derived, where possible, from those that have been successfully utilised in previous comparable projects. In terms of investigation of information-seeking behaviour, the work of Dervin has proved particularly influential in this regard. In their study of the information needs of Seattle’s residents, Dervin et al. initially invited participants to reflect on situations “where you needed help... where you didn’t understand something...
where you needed to decide what to do... or, where you were worried about something” [9]. Dervin’s respondents then described in detail a particular instance within one of these categories. Similar strategies have been used subsequently by Chen and Hernon [10], Poston-Anderson and Edwards [11] and Shenton [12] amongst others;

b) the development of an early familiarity with the culture of participating organisations before the first data collection dialogues take place. This may be achieved via consultation of appropriate documents and preliminary visits to the organisations themselves. Lincoln and Guba [13] and Erlandson et al. [14] are among the many who recommend “prolonged engagement” between the investigator and the participants in order both for the former to gain an adequate understanding of an organisation and to establish a relationship of trust between the parties. The danger emerges, however, that if too many demands are made on staff, gatekeepers responsible for allowing the researcher access to the organisation may be deterred from cooperating. The investigator may also react with some suspicion to the notion of prolonged engagement in view of the undesirable side effects that have been noted by Lincoln and Guba [15] and Silverman [16]. The former draw particular attention to the way in which investigators may become so immersed in the culture under scrutiny that their professional judgements are influenced;

c) random sampling of individuals to serve as informants. Although much qualitative research involves the use of purposive sampling, a random approach may negate charges of researcher bias in the selection of participants. As Preece notes, random sampling also helps to ensure that any “unknown influences” are distributed evenly within the sample [17]. Furthermore, it may be that a random method is particularly appropriate to the nature of the investigation.
The work may, for example, take the form of a “collective case study” of the type described by Stake, in that multiple voices, exhibiting characteristics of similarity, dissimilarity, redundancy and variety, are sought in order to gain greater knowledge of a wider group [18], such as a more general population, rather than simply the individual informants who are contributing data. This form of research is also recognised by Hamel, Dufour and Fortin, who dub it a “macroscopic” case study, and emphasise the importance of appropriate selection tactics if the investigator is to be confident that informants are typical of members of a broader, “selected society” [19]. According to Bouma and Atkinson, “A random sampling procedure provides the greatest assurance that those selected are a representative sample of the larger group” [20]. A significant disadvantage of the random method, however, stems from the fact that, since the researcher has no control over the choice of informants, it is possible that quiet, uncooperative or inarticulate individuals may be selected;

d) triangulation. Triangulation may involve the use of different methods, especially observation, focus groups and individual interviews, which form the major data collection strategies for much qualitative research. Whilst focus groups and individual interviews suffer from some common methodological shortcomings since both are interviews of a kind, their distinct characteristics also result in individual strengths. According to Guba [21] and Brewer and Hunter [22], the use of different methods in concert compensates for their individual limitations and exploits their respective benefits. Where possible, supporting data may be obtained from documents to provide a background to and help explain the attitudes and behaviour of those in the group under scrutiny, as well as to verify particular details that participants have supplied. Opportunities should also be seized to examine any documents referred to by informants during the actual interviews or focus groups where these can shed more light on the behaviour of the people in question.

Another form of triangulation may involve the use of a wide range of informants. This is one way of triangulating via data sources. Here individual viewpoints and experiences can be verified against others and, ultimately, a rich picture of the attitudes, needs or behaviour of those under scrutiny may be constructed based on the contributions of a range of people. Van Maanen urges the exploitation of opportunities “to check out bits of information across informants” [23]. Such corroboration may, for example, take the form of comparing the needs and information-seeking action described by one individual with those of others in a comparable position. In addition, the investigator may draw informants from both users of an information service and the professionals who deliver it. Even in a user study, where the thrust of the work is likely to lie in analysing the ideas and experiences of users themselves, data provided by those responsible for the management and delivery of the service under scrutiny may well prove invaluable in order to check that supplied by the users, to help explain their attitudes and behaviour and to enhance the contextual data relating to the fieldwork site(s). Just as triangulation via data sources can involve the use of a diversity of informants, a range of documents may also be employed as source material.
For example, documents created corporately by each participating organisation may be examined, as well as those relating to the organisation but produced externally. Further data dealing with the wider context in which the organisation is operating may be elicited from official publications. Where appropriate, site triangulation may be achieved by the participation of informants within several organisations so as to reduce the effect on the study of particular local factors peculiar to one institution. Where similar results emerge at different sites, findings may have greater credibility in the eyes of the reader. The sampling of a range of people in different organisations may be employed to provide the diversity that underpins Dervin’s concept of “circling reality”, which she defines as “the necessity of obtaining a variety of perspectives in order to get a better, more stable view of ‘reality’ based on a wide spectrum of observations from a wide base of points in time-space” [24];

e) tactics to help ensure honesty in informants when contributing data. In particular, each person who is approached should be given opportunities to refuse to participate in the project so as to ensure that the data collection sessions involve only those who are genuinely willing to take part and prepared to offer data freely. Participants should be encouraged to be frank from the outset of each session, with the researcher aiming to establish a rapport in the opening moments and indicating that there are no right answers to the questions that will be asked. Where appropriate, the independent status of the researcher should also be emphasised. Participants can, therefore, contribute ideas and talk of their experiences without fear of losing credibility in the eyes of managers of the organisation.
It should be made clear to participants that they have the right to withdraw from the study at any point, and they should not even be required to disclose an explanation to the investigator. In many instances, such an unconditional right for subjects to withdraw may be a requirement that must be accepted by the researcher when seeking approval for the work;

f) iterative questioning. In addition to the “preventative” strategies outlined above, specific ploys may be incorporated to uncover deliberate lies. These might include the use of probes to elicit detailed data and iterative questioning, in which the researcher returns to matters previously raised by an informant and extracts related data through rephrased questions. In both cases, where contradictions emerge, falsehoods can be detected and the researcher may decide to discard the suspect data. An alternative approach, and one that provides greater transparency, lies in drawing attention, within the final research report, to the discrepancies and offering possible explanations;

g) negative case analysis, as recommended by commentators such as Lincoln and Guba [25], Miles and Huberman [26] and Silverman [27]. One form of negative case analysis may see the researcher refining a hypothesis until it addresses all cases within the data. If the study includes the production of typologies, on completing the initial categories the investigator may revisit the data in order to confirm that these constructs do indeed account for all instances of the phenomenon involved, even if some of the types embrace only one instance;

h) frequent debriefing sessions between the researcher and his or her superiors, such as a project director or steering group. Through discussion, the vision of the investigator may be widened as others bring to bear their experiences and perceptions.
Such collaborative sessions can be used by the researcher to discuss alternative approaches, and others who are responsible for the work in a more supervisory capacity may draw attention to flaws in the proposed course of action. The meetings also provide a sounding board for the investigator to test his or her developing ideas and interpretations, and probing from others may help the researcher to recognise his or her own biases and preferences;

i) peer scrutiny of the research project. Opportunities for scrutiny of the project by colleagues, peers and academics should be welcomed, as should feedback offered to the researcher at any presentations (e.g. at conferences) that are made over the duration of the project. The fresh perspective that such individuals may be able to bring may allow them to challenge assumptions made by the investigator, whose closeness to the project frequently inhibits his or her ability to view it with real detachment. Questions and observations may well enable the researcher to refine his or her methods, develop a greater explanation of the research design and strengthen his or her arguments in the light of the comments made;

j) the researcher’s “reflective commentary”. In addition to the outside scrutiny discussed above, the investigator should seek to evaluate the project, again as it develops. This may be done through a reflective commentary, part of which may be devoted to the effectiveness of the techniques that have been employed. The reflective commentary may also be used to record the researcher’s initial impressions of each data collection session, patterns appearing to emerge in the data collected and theories generated.
The commentary can play a key role in what Guba and Lincoln term “progressive subjectivity”, or the monitoring of the researcher’s own developing constructions, which the writers consider critical in establishing credibility [28]. Ultimately, the section of the commentary dealing with emerging patterns and theories should inform that part of the research report that addresses the project’s results, and any discussion in the report of the effectiveness of the study may be based on the investigator’s methods analysis within the reflective commentary;

k) background, qualifications and experience of the investigator. According to Patton, the credibility of the researcher is especially important in qualitative research as it is the person who is the major instrument of data collection and analysis [29]. Alkin, Daillak and White go so far as to suggest that a scrutineer’s trust in the researcher is of equal importance to the adequacy of the procedures themselves [30]. The nature of the biographical information that should be supplied in the research report is a matter of debate. Maykut and Morehouse recommend including any personal and professional information relevant to the phenomenon under study [31], and Patton adds that arrangements by which the investigator is funded should also be addressed [32]. Any approvals given to the project by those providing access to the organisation and individual participants should also be made explicit;

l) member checks, which Guba and Lincoln consider the single most important provision that can be made to bolster a study’s credibility [33]. Checks relating to the accuracy of the data may take place “on the spot” in the course, and at the end, of the data collection dialogues. Informants may also be asked to read any transcripts of dialogues in which they have participated.
Here the emphasis should be on whether the informants consider that their words match what they actually intended, since, if a tape recorder has been used, the articulations themselves should at least have been accurately captured. Another element of member checking should involve verification of the investigator's emerging theories and inferences as these were formed during the dialogues. This strategy has been employed by Pitts [34] and is recommended by Brewer and Hunter [35] and Miles and Huberman [36]. Where appropriate, participants may be asked if they can offer reasons for particular patterns observed by the researcher. The importance of developing such a formative understanding is recognised by Van Maanen, who writes that "analysis and verification... is something one brings forth with them from the field, not something which can be attended to later, after the data are collected. When making sense of field data, one cannot simply accumulate information without regard to what each bit of information represents in terms of its possible contextual meanings" [37];

m) thick description of the phenomenon under scrutiny. Detailed description in this area can be an important provision for promoting credibility, as it helps to convey the actual situations that have been investigated and, to an extent, the contexts that surround them. Without this insight, it is difficult for the reader of the final account to determine the extent to which the overall findings "ring true".
Moreover, if the researcher employs a reporting system in which he or she defines a series of types within a typology and illustrates these types using real qualitative episodes, the inclusion of the latter enables the reader to assess how far the defined types truly embrace the actual situations;

n) examination of previous research findings to assess the degree to which the project's results are congruent with those of past studies. Silverman considers that the ability of the researcher to relate his or her findings to an existing body of knowledge is a key criterion for evaluating works of qualitative inquiry [38]. In this respect, reports of previous studies staged in the same or a similar organisation and addressing comparable issues may be invaluable sources.

3. Transferability

Merriam writes that external validity "is concerned with the extent to which the findings of one study can be applied to other situations" [39]. In positivist work, the concern often lies in demonstrating that the results of the work at hand can be applied to a wider population. Since the findings of a qualitative project are specific to a small number of particular environments and individuals, it is impossible to demonstrate that the findings and conclusions are applicable to other situations and populations. Erlandson et al. note that many naturalistic inquirers believe that, in practice, even conventional generalisability is never possible, as all observations are defined by the specific contexts in which they occur [40]. A contrasting view is offered by Stake [41] and Denscombe [42], who suggest that, although each case may be unique, it is also an example within a broader group and, as a result, the prospect of transferability should not be immediately rejected. Nevertheless, such an approach can be pursued only with caution since, as Gomm, Hammersley and Foster recognise, it appears to belittle the importance of the contextual factors which impinge on the case [43].
Bassey proposes that, if practitioners believe their situations to be similar to that described in the study, they may relate the findings to their own positions [44]. Lincoln and Guba [45] and Firestone [46] are among those who present a similar argument, and suggest that it is the responsibility of the investigator to ensure that sufficient contextual information about the fieldwork sites is provided to enable the reader to make such a transfer. They maintain that, since the researcher knows only the "sending context", he or she cannot make transferability inferences. In recent years such a stance has found favour with many qualitative researchers. After perusing the description within the research report of the context in which the work was undertaken, readers must determine how far they can be confident in transferring to other situations the results and conclusions presented. It is also important that sufficient thick description of the phenomenon under investigation is provided to allow readers to have a proper understanding of it, thereby enabling them to compare the instances of the phenomenon described in the research report with those that they have seen emerge in their situations. Authors disagree on the nature and extent of background information that should be offered but few would dispute the need for "a full description of all the contextual factors impinging on the inquiry", as recommended by Guba [47]. Nevertheless, the situation is complicated by the possibility, noted by Firestone, that factors considered by the researcher to be unimportant, and consequently unaddressed in the research report, may be critical in the eyes of a reader [48]. Many investigators stop short of the course of action advocated by Denscombe that the researcher should demonstrate how, in terms of the contextual data, the case study location(s) compare(s) with other environments [49].
This reluctance is based on the fact that the process would demand a considerable knowledge of the "receiving contexts" of other organisations, and the researcher is in no position to comment on what Merriam calls the "typicality" of the environment(s) in which the fieldwork took place [50]. The work of Cole and Gardner [51], Marchionini and Teague [52] and Pitts [53] highlights the importance of the researcher's conveying to the reader the boundaries of the study. This additional information must be considered before any attempts at transference are made. Thus information on the following issues should be given at the outset:

a) the number of organisations taking part in the study and where they are based;
b) any restrictions in the type of people who contributed data;
c) the number of participants involved in the fieldwork;
d) the data collection methods that were employed;
e) the number and length of the data collection sessions;
f) the time period over which the data was collected.

It is easy for researchers to develop a preoccupation with transferability. Ultimately, the results of a qualitative study must be understood within the context of the particular characteristics of the organisation or organisations and, perhaps, geographical area in which the fieldwork was carried out. In order to assess the extent to which findings may be true of people in other settings, similar projects employing the same methods but conducted in different environments could well be of great value. As Kuhlthau [54] and Gomm, Hammersley and Foster [55] recognise, however, it is rare for such complementary work to be undertaken. Nevertheless, the accumulation of findings from studies staged in different settings might enable a more inclusive, overall picture to be gained. A similar point is made by Gross, in relation to her work on imposed queries in school libraries.
She writes of the "multiple environments" in which the phenomenon of her interest takes place and believes her study to provide a "baseline understanding" with which the results of subsequent work should be compared [56]. As Borgman [57] and Pitts [58] have acknowledged, understanding of a phenomenon is gained gradually, through several studies, rather than one major project conducted in isolation. Even when different investigations offer results that are not entirely consistent with one another, this does not, of course, necessarily imply that one or more is untrustworthy. It may be that they simply reflect multiple realities, and, if an appreciation can be gained of the reasons behind the variations, this understanding may prove as useful to the reader as the results actually reported. Such an attitude is consistent with what Dervin considers should be key principles within information-seeking research, namely: "To posit... every contradiction, every inconsistency, every diversity not as an error or extraneous but as fodder for contextual analysis. To ask and re-ask what accounts for this difference or this similarity and to anchor possible answers in time-space conceptualizings" [59]. It should thus be questioned whether the notion of producing truly transferable results from a single study is a realistic aim, or whether it disregards the importance of context, which forms such a key factor in qualitative research.

4. Dependability

In addressing the issue of reliability, the positivist employs techniques to show that, if the work were repeated, in the same context, with the same methods and with the same participants, similar results would be obtained. However, as Fidel [60] and Marshall and Rossman [61] note, the changing nature of the phenomena scrutinised by qualitative researchers renders such provisions problematic in their work.
Florio-Ruane highlights how the investigator's observations are tied to the situation of the study, arguing that the "published descriptions are static and frozen in the 'ethnographic present'" [62]. Lincoln and Guba stress the close ties between credibility and dependability, arguing that, in practice, a demonstration of the former goes some distance in ensuring the latter [63]. This may be achieved through the use of "overlapping methods", such as the focus group and individual interview. In order to address the dependability issue more directly, the processes within the study should be reported in detail, thereby enabling a future researcher to repeat the work, if not necessarily to gain the same results. Thus, the research design may be viewed as a "prototype model". Such in-depth coverage also allows the reader to assess the extent to which proper research practices have been followed. So as to enable readers of the research report to develop a thorough understanding of the methods and their effectiveness, the text should include sections devoted to:

a) the research design and its implementation, describing what was planned and executed on a strategic level;
b) the operational detail of data gathering, addressing the minutiae of what was done in the field;
c) reflective appraisal of the project, evaluating the effectiveness of the process of inquiry undertaken.

5. Confirmability

Patton associates objectivity in science with the use of instruments that are not dependent on human skill and perception. He recognises, however, the difficulty of ensuring real objectivity, since, as even tests and questionnaires are designed by humans, the intrusion of the researcher's biases is inevitable [64]. The concept of confirmability is the qualitative investigator's comparable concern to objectivity.
Here steps must be taken to help ensure as far as possible that the work's findings are the result of the experiences and ideas of the informants, rather than the characteristics and preferences of the researcher. The role of triangulation in promoting such confirmability must again be emphasised, in this context to reduce the effect of investigator bias. Miles and Huberman consider that a key criterion for confirmability is the extent to which the researcher admits his or her own predispositions [65]. To this end, beliefs underpinning decisions made and methods adopted should be acknowledged within the research report, the reasons for favouring one approach when others could have been taken explained, and weaknesses in the techniques actually employed admitted. In terms of results, preliminary theories that ultimately were not borne out by the data should also be discussed. Much of the content in relation to these areas may be derived from the ongoing "reflective commentary". Once more, detailed methodological description enables the reader to determine how far the data and the constructs emerging from it may be accepted.

Critical to this process is the "audit trail", which allows any observer to trace the course of the research step by step via the decisions made and procedures described. The "audit trail" may be represented diagrammatically, and two such diagrams may be constructed. One may take a data-oriented approach, showing how the data eventually leading to the formation of recommendations was gathered and processed during the course of the study. This is what is typically understood by the term "audit trail". In addition, however, the manner in which the concepts inherent in the research question gave rise to the work to follow may be tracked. This more theoretical "audit trail", which should be understood in terms of the whole of the duration of the project, may be depicted in a second diagram.

6. Summary and conclusion

Over the last twenty years, much has been achieved by advocates of qualitative inquiry in demonstrating the rigour and trustworthiness of their favoured form of research. Nevertheless, criticisms of work of this kind continue to be made by positivists.

Provisions that may be Made by a Qualitative Researcher Wishing to Address Guba's Four Criteria for Trustworthiness

Credibility:
- Adoption of appropriate, well recognised research methods
- Development of early familiarity with culture of participating organisations
- Random sampling of individuals serving as informants
- Triangulation via use of different methods, different types of informants and different sites
- Tactics to help ensure honesty in informants
- Iterative questioning in data collection dialogues
- Negative case analysis
- Debriefing sessions between researcher and superiors
- Peer scrutiny of project
- Use of "reflective commentary"
- Description of background, qualifications and experience of the researcher
- Member checks of data collected and interpretations/theories formed
- Thick description of phenomenon under scrutiny
- Examination of previous research to frame findings

Transferability:
- Provision of background data to establish context of study and detailed description of phenomenon in question to allow comparisons to be made

Dependability:
- Employment of "overlapping methods"
- In-depth methodological description to allow study to be repeated

Confirmability:
- Triangulation to reduce effect of investigator bias
- Admission of researcher's beliefs and assumptions
- Recognition of shortcomings in study's methods and their potential effects
- In-depth methodological description to allow integrity of research results to be scrutinised
- Use of diagrams to demonstrate "audit trail"
This paper has examined four criteria that may be addressed by qualitative researchers wishing to present a convincing case that their work is academically sound. A range of strategies that may be adopted by investigators in response to these issues has been highlighted; these are summarised in the chart above. The challenge for those involved in teaching courses in research methods lies in ensuring that those contemplating undertaking qualitative research are not only aware of the criticisms typically made by its detractors but are also cognisant of the provisions which can be made to address matters such as credibility, transferability, dependability and confirmability. Prospective researchers can then assess the extent to which they are able to apply these generic strategies to their particular investigation.

References

[1] D. Silverman, Interpreting qualitative data: methods for analysing talk, text and interaction, 2nd ed., London: Sage, 2001.
[2] J.M. Pitts, Personal understandings and mental models of information: a qualitative study of factors associated with the information-seeking and use of adolescents, PhD Thesis, Florida State University, 1994.
[3] E.G. Guba, Criteria for assessing the trustworthiness of naturalistic inquiries, Educational Communication and Technology Journal 29 (1981), 75–91.
[4] Y.S. Lincoln, Emerging criteria for quality in qualitative and interpretive research, Qualitative Inquiry 1 (1995), 275–289.
[5] A.K. Shenton, The characteristics and development of young people's information universes, PhD Thesis, Northumbria University, 2002.
[6] S.B. Merriam, Qualitative research and case study applications in education, San Francisco: Jossey-Bass, 1998.
[7] Y.S. Lincoln and E.G. Guba, Naturalistic inquiry, Beverly Hills: Sage, 1985.
[8] R.K. Yin, Case study research: design and methods, 2nd ed., Thousand Oaks: Sage, 1994 (Applied Social Research Methods Series, Vol. 5).
[9] B. Dervin, et al., The development of strategies for dealing with the information needs of urban residents, phase one: the citizen study, Seattle: School of Communications, University of Washington, 1976.
[10] C-c. Chen and P. Hernon, Information seeking: assessing and anticipating user needs, New York: Neal Schuman, 1982 (Applications in Information Management and Technology Series).
[11] B. Poston-Anderson and S. Edwards, The role of information in helping adolescent girls with their life concerns, School Library Media Quarterly 22 (1993), 25–30.
[12] A.K. Shenton, op. cit.
[13] Y.S. Lincoln and E.G. Guba, op. cit.
[14] D.A. Erlandson, et al., Doing naturalistic inquiry: a guide to methods, London: Sage, 1993.
[15] Y.S. Lincoln and E.G. Guba, op. cit.
[16] D. Silverman, Doing qualitative research: a practical handbook, London: Sage, 2000.
[17] R. Preece, Starting research: an introduction to academic research and dissertation writing, London: Pinter, 1994.
[18] R.E. Stake, Case studies, in: Handbook of qualitative research, N.K. Denzin and Y.S. Lincoln, eds, Thousand Oaks: Sage, 1994, pp. 236–247.
[19] J. Hamel, S. Dufour and D. Fortin, Case study methods, Newbury Park: Sage, 1993 (Qualitative Research Methods Series, Vol. 32).
[20] G.D. Bouma and G.B.J. Atkinson, A handbook of social science research, 2nd ed., Oxford: Oxford University Press, 1995.
[21] E.G. Guba, op. cit.
[22] J. Brewer and A. Hunter, Multimethod research: a synthesis of styles, Newbury Park: Sage, 1989 (Sage Library of Social Research Series, Vol. 175).
[23] J. Van Maanen, The fact and fiction in organizational ethnography, in: Qualitative methodology, J. Van Maanen, ed., Beverly Hills: Sage, 1983, pp. 37–55.
[24] B. Dervin, An overview of sense-making: concepts, methods, and results to date, Paper presented at the annual meeting of the International Communications Association, Dallas, TX, May 1983. URL:
[25] Y.S. Lincoln and E.G. Guba, op. cit.
[26] M.B. Miles and A.M. Huberman, Qualitative data analysis: an expanded sourcebook, 2nd ed., California: Sage, 1994.
[27] D. Silverman, op. cit., 2000.
[28] E.G. Guba and Y.S. Lincoln, Fourth generation evaluation, Newbury Park: Sage, 1989.
[29] M.Q. Patton, Qualitative evaluation and research methods, 2nd ed., Newbury Park: Sage, 1990.
[30] M.C. Alkin, R. Daillak and P. White, Using evaluations: does evaluation make a difference? Beverly Hills: Sage, 1979 (Sage Library of Social Research Series, Vol. 76).
[31] P. Maykut and R. Morehouse, Beginning qualitative research: a philosophic and practical guide, London: Falmer Press, 1994 (The Falmer Press Teachers' Library: 6).
[32] M.Q. Patton, op. cit.
[33] E.G. Guba and Y.S. Lincoln, op. cit.
[34] J.M. Pitts, op. cit.
[35] J. Brewer and A. Hunter, op. cit.
[36] M.B. Miles and A.M. Huberman, op. cit.
[37] J. Van Maanen, op. cit.
[38] D. Silverman, op. cit., 2000.
[39] S.B. Merriam, op. cit.
[40] D.A. Erlandson, et al., op. cit.
[41] R.E. Stake, Case studies, in: Handbook of qualitative research, N.K. Denzin and Y.S. Lincoln, eds, Thousand Oaks: Sage, 1994, pp. 236–247.
[42] M. Denscombe, The good research guide for small-scale social research projects, Buckingham: Open University Press, 1998.
[43] R. Gomm, M. Hammersley and P. Foster, Case study and generalization, in: Case study method, R. Gomm, M. Hammersley and P. Foster, eds, London: Sage, 2000, pp. 98–115.
[44] M. Bassey, Pedagogic research: on the relative merits of search for generalisation and study of single events, Oxford Review of Education 7 (1981), 73–93.
[45] Y.S. Lincoln and E.G. Guba, op. cit.
[46] W.A. Firestone, Alternative arguments for generalizing from data as applied to qualitative research, Educational Researcher 22 (1993), 16–23.
[47] E.G. Guba, op. cit.
[48] W.A. Firestone, op. cit.
[49] M. Denscombe, op. cit.
[50] S.B. Merriam, op. cit.
[51] J. Cole and K. Gardner, Topic work with first-year secondary pupils, in: The effective use of reading, E. Lunzer and K. Gardner, eds, London: Heinemann Educational Books for the Schools Council, 1979, pp. 167–192.
[52] G. Marchionini and J. Teague, Elementary students' use of electronic information services: an exploratory study, Journal of Research on Computing in Education 20 (1987), 139–155.
[53] J.M. Pitts, op. cit.
[54] C.C. Kuhlthau, Investigating patterns in information seeking: concepts in context, in: Exploring the contexts of information behaviour, T.D. Wilson and D.K. Allen, eds, London: Taylor Graham, 1999, pp. 10–20.
[55] R. Gomm, M. Hammersley and P. Foster, op. cit.
[56] M.R. Gross, Imposed queries in the school library media center: a descriptive study, Doctoral Dissertation, University of California, 1998.
[57] C.L. Borgman, The user's mental model of an information retrieval system: an experiment on a prototype online catalog, International Journal of Man-Machine Studies 24 (1986), 47–64.
[58] J.M. Pitts, op. cit.
[59] B. Dervin, Given a context by any other name: methodological tools for taming the unruly beast, in: Information seeking in context, P. Vakkari, R. Savolainen and B. Dervin, eds, London: Taylor Graham, 1997, pp. 13–38.
[60] R. Fidel, Qualitative methods in information retrieval research, Library and Information Science Research 15 (1993), 219–247.
[61] C. Marshall and G.B. Rossman, Designing qualitative research, 3rd ed., Newbury Park: Sage, 1999.
[62] S. Florio-Ruane, Conversation and narrative in collaborative research, in: Stories lives tell: narrative and dialogue in education, C. Witherell and N. Noddings, eds, New York: Teachers College Press, 1991, pp. 234–256.
[63] Y.S. Lincoln and E.G. Guba, op. cit.
[64] M.Q. Patton, op. cit.
[65] M.B. Miles and A.M. Huberman, op. cit.
