

CHAPTER 3 RESEARCH DESIGN AND METHOD

INTRODUCTION

This chapter describes the study design and the methods used for sampling and data collection, with a view to determining and comparing the perspectives of learners from selected programmes on the learning environment prevailing in the School of Medicine of the University of Zambia. The study paid attention to learners' perception of learning, perception of lecturers and programme organizers, academic self-perception, the learning atmosphere, and social self-perception, which constitute the established subscales of the Dundee Ready Education Environment Measure (DREEM) questionnaire. Since the paradigmatic perspective is pivotal to any research, a brief description is presented first.
Four paradigms are commonly identified in medical and health sciences education research: positivism, post-positivism, interpretivism, and critical theory (Bunnis & Kelly 2010:361). Their ontological, epistemological, and methodological differences translate into different designs and methodologies for conducting medical education research and into the manner in which results are interpreted (Weaver & Olson 2006:459). These differences are illustrated in table 3.1 below. Bunnis and Kelly (2010:258, 358, 364) further argue that, for legitimacy, medical education research should discuss its epistemological stance, suggesting that the quality of research is defined by the integrity and transparency of the research philosophy. The epistemological stance of this study is fundamentally positivist, with a tinge of post-positivism. It is rooted in positivism in the view that reality can be discovered using a deductive approach in which ideas or concepts are reduced to variables (Polit & Beck 2010:314), as illustrated in table 3.1.

RESEARCH DESIGN

Bryman and Bell (2015:49) and Happner, Wampold, Owen, Thompson, and Wang (2015:118) define research design as the conceptual framework that guides the structure of a research study and its execution. The framework specifies the criteria for data collection and analysis, including the criteria used to evaluate the research results. Such criteria, according to Bryman and Bell (2015:49), include study validity, study reliability, and trustworthiness. Trochim, Donnelly, and Arora (2015) and Donnelly and Trochim (2007) outline the critical components of a research design as the sample, the measurement, the conditions, the method of assignment to study groups, the data collection methods, and the timing of study procedures (Happner et al 2015:118). The importance attached to certain factors associated with the research results determines the adoption of a specific research design. These factors include, but are not limited to, the objectivity of the findings, generalizability to populations beyond that from which the study participants were drawn, and the possibility of establishing cause-effect relationships (Bryman & Bell 2015:49). In medical and health sciences education, quantitative, qualitative and, more recently, mixed methods research are commonly adopted, the choice of approach being determined by the type and uniqueness of the research questions being addressed (Bearman & Dawson 2013:252-260; Clement, Schauman, Graham, Maggioni, Evans-Lacko, Bezborodovs, Morgan, Rüsch, Brown & Thornicroft 2015:11-27; Holloway & Wheeler 2013; Triola, Huwendiek, Levinson & Cook 2012:e15-e20; O'Brien, Harris, Beckman, Reed & Cook 2014:1245-1251). This study adopted a quantitative, descriptive, non-experimental design.

Quantitative design

A quantitative, non-experimental, descriptive research design was used to investigate the research problem. Quantitative research is described as a study that uses a systematic scientific method to gather numerical data which, when analysed by mathematical (statistical) procedures, yields results that can be interpreted deductively and generalized to a wider population (Bryman & Bell 2015:37-38). Bryman and Bell (2015:37-38) further state that quantitative research entails a deductive approach to unravelling the relationship between research and theory, adopts the scientific process, and views reality as external and objective. The outcomes of a quantitative study are therefore objective, generalizable, and neutral (i.e. value-free) (Bunnis & Kelly 2010:361).

 Descriptive design

Descriptive designs describe the existence and characteristics of phenomena and are useful in exploratory inquiry (Happner et al 2015:286-287). Descriptive designs have been classified into survey, variable-centred, and person-centred designs: survey designs characterise the occurrence of attributes in a population, variable-centred designs examine relationships between variables, and person-centred designs identify groups of persons with a common attribute within a population (Laursen & Hoff 2006:377).

Non-experimental design

A non-experimental study design is one in which the investigator merely observes the phenomenon in its natural setting without actively interfering (Colamesta & Pistelli 2014:249); it is often referred to as an observational study. Observational studies are cheaper to conduct than experimental studies and, in some cases, as in the problem under study, may be the only alternative where variables such as "perception" are not amenable to experimentation. The methodological quality of observational studies can be assessed using the Newcastle Ottawa Scale-Education (NOS-E), specifically designed for evaluating research in education (Colamesta & Pistelli 2014:251; Liu, Peng, Zhang, Hu, Li & Yan 2016:e2), or with the Medical Education Research Study Quality Instrument (MERSQI), tailored to the needs of evaluation in medical education research (Batt-Rawden, Chisolm, Anton & Flickinger 2013:1171-1177; Cheston, Flickinger & Chisolm 2013:893-901). The usefulness of both instruments in appraising medical education research was recently evaluated and reported to be comparable (Cook & Reed 2015:1067).

RESEARCH METHOD

A research method specifies the techniques for data collection, including the description of the study population, the sampling frame, the sampling method, the sample size, the data collection instrument, and the measures taken to ensure validity and reliability of the data. Polit and Beck (2013:8) define research methods as "the techniques researchers use to structure a study and to gather and analyse relevant information."

Study setting

The setting for this study was the School of Medicine of the University of Zambia, which is located at the Ridgeway Campus of the University in Lusaka and has offices and facilities in the University Teaching Hospital (UTH) situated adjacent to the Campus.
The School of Medicine was established in 1966 to run only the Bachelor of Medicine and Bachelor of Surgery degree. Over time, the School has transformed and now runs other programmes as well. These include the Bachelor of Pharmacy, Bachelor of Physiotherapy, Bachelor of Nursing Sciences, Bachelor of Environmental Health Sciences, and Bachelor of Biomedical Sciences degrees. In addition, a host of postgraduate programmes are offered, such as Masters and Doctor of Philosophy degrees in several disciplines of the Basic Biomedical Sciences, Nursing Sciences, the specialities of Medicine and Surgery, Health Professions Education, and Public Health.
The Bachelor of Medicine and Bachelor of Surgery degree is a seven (7) year programme comprising four (4) preclinical years, which lead to the award of a Bachelor of Science in Human Biology degree on successful completion, and three (3) clinical years culminating in the award of the Bachelor of Medicine and Bachelor of Surgery (MBChB) degree. The first two years are spent at the main campus (Great East Road Campus) of the University, where students take courses in the advanced basic sciences; for this reason, only students in year 3 to year 7 participated in the study. The curriculum is outcomes based but delivered primarily through lectures, with a significant community-based component integrated into the programme. Assessment methods include continuous assessment and end-of-year examinations using a variety of approaches, such as multiple-choice and essay-type written examinations and, in the clinical years, objective structured clinical examinations (OSCEs).
The Bachelor of Pharmacy programme lasts five (5) years and, like the MBChB programme, its curriculum is competency and lecture based. Its students likewise spend the first two years at the main campus and return to the Ridgeway Campus for the clinical years, year 3 to year 5. The Bachelor of Physiotherapy programme is also of five (5) years' duration and is similarly competency and lecture based; however, its students report to the Ridgeway Campus in the second year of the programme.
Zambia has four recognised medical schools – the University of Zambia School of Medicine, Lusaka Apex Medical University (a private university), Cavendish University School of Medicine, and Copper Belt University School of Medicine. The first three are located in Lusaka, while Copper Belt University is situated in Kitwe (see figure 3.1). More recently, Mulungushi University opened a new medical school in Kabwe in January 2016, making it the fifth medical school in Zambia.

Population

In order to answer the research question, the individuals, objects, or elements that can shed light on the issues related to the topic under investigation have to be identified. These are termed the 'research population'.
The target population has been defined by Statistics Canada as "the set of elements about which information is wanted and estimates are required" (Statistics Canada 2003); put another way, it is the population to which the results of the study may be generalised. The target population for this study comprised all undergraduate students enrolled in full-time studies in medical and health sciences programmes in universities in Zambia.
The study population refers to the population from which the sample is drawn. In this study, it comprised all undergraduate students enrolled in full-time studies at the School of Medicine of the University of Zambia at the time of the study, a number determined to be 1,330.
However, the target population was not manageable due to size, location, numbers, and other practical considerations. In such instances the accessible population becomes the practical basis for sampling (Brink 2006:1230). The accessible population in this study comprised only students studying at the Ridgeway Campus and the University Teaching Hospital at the time of the study: students in year 3 to year 7 of the Medicine/Surgery programme, year 3 to year 5 of the Pharmacy programme, and year 2 to year 5 of the Physiotherapy programme.

Sampling

The goal of quantitative research is to generalize results from a sample to the larger population from which the sample was drawn. Probability sampling allows these inferences to be made with precision and is vital to ensuring the validity of the research results (Bryman 2016:178, 181). Stratified random sampling was adopted for this study. This sampling strategy ensures that the distribution of the sample mirrors that of the population from which it was drawn and that variance is minimised, thereby improving precision by eliminating variation between strata (Bryman 2016:178-182).
The programmes in the School are Medicine/Surgery, Pharmacy, Nursing, Physiotherapy, Environmental Health, and Biomedical Sciences, of which Nursing, Medicine/Surgery, and Pharmacy enrolled the highest numbers of students. The programmes are delivered using a variety of platforms, including distance learning, online, regular, and parallel models; Nursing was not included in the study because of the heterogeneity of delivery platforms within that programme. Medicine/Surgery, Pharmacy, and Physiotherapy were purposively selected as representative programmes for the study, based on the researcher's best judgement.

Sampling frame

The sampling frame specifies the list from which the sample was drawn. In this study, two lists were used – the list of programmes running undergraduate degrees in the School of Medicine and the list of students enrolled in the programmes as indicated in table 3.2.

Inclusion criteria

The inclusion criteria for the study were:
1. The participant must be currently and actively enrolled in one of the selected undergraduate degree programmes;
2. The participant must be a full-time student in good standing;
3. The participant must give informed consent to voluntarily participate in the study; and
4. The participant must be studying at the School of Medicine, Ridgeway Campus of the University of Zambia.

Sample size

The sample size was calculated using an online sample size calculator provided by Raosoft Inc. (available at http://www.raosoft.com/samplesize.html). The calculation used a margin of error of 5%, a confidence level of 95%, the population size, and a response distribution of 50%, based on the formula:
n = Z² × P(1 − P) / e²
where n is the sample size, Z is the z-score corresponding to the chosen confidence level (1.96 for 95%), P is the response distribution, and e is the margin of error. To maximize the reliability of the data, the sample size was calculated separately for each programme included in the study; to analyse the overall School learning environment, the samples from the selected programmes were pooled. Table 3.2 shows the computed sample sizes for the individual programmes that participated in the study. The list of students enrolled in each programme was drawn up to provide the sampling frame, and each programme was stratified into classes according to the level of study. Based on the enrolment in each class, the number of participants required from the class was calculated as follows:
(Sample size for the programme ÷ Total enrolment in programme) × Class enrolment
Participants were then selected from the list by simple random sampling using an online randomization program, Research Randomizer (available at: https://www.randomizer.org/). The students whose serial numbers on the class list corresponded to the random numbers generated by the Randomizer were invited to participate in the study.
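For illustration, the sketch below reproduces the sampling arithmetic described above in Python. The enrolment figures are hypothetical placeholders (the actual figures are in table 3.2), and the finite population correction is an assumption about how calculators such as Raosoft refine the base formula; it is not taken from the study itself.

import math
import random

def base_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> float:
    """Base sample size from the formula in the text: n = Z^2 * P(1 - P) / e^2."""
    return (z ** 2) * p * (1 - p) / (e ** 2)

def finite_population_correction(n0: float, population: int) -> int:
    """Adjust the base sample size for a finite population (an assumption about
    how online calculators such as Raosoft refine the estimate)."""
    return math.ceil(n0 / (1 + (n0 - 1) / population))

def allocate_to_classes(sample_size: int, class_enrolments: dict) -> dict:
    """Proportional allocation per class:
    (sample size for the programme / total enrolment in programme) x class enrolment."""
    total = sum(class_enrolments.values())
    return {c: round(sample_size * n / total) for c, n in class_enrolments.items()}

# Hypothetical enrolments for one programme; the real figures are in table 3.2.
enrolments = {"Year 3": 120, "Year 4": 110, "Year 5": 100, "Year 6": 95, "Year 7": 90}

n0 = base_sample_size()                                         # about 384 before correction
n = finite_population_correction(n0, sum(enrolments.values()))  # programme sample size
per_class = allocate_to_classes(n, enrolments)

# Simple random selection of serial numbers within each class list,
# mirroring the Research Randomizer step.
selected = {c: sorted(random.sample(range(1, enrolments[c] + 1), k))
            for c, k in per_class.items()}
print(n, per_class)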


DATA COLLECTION

Data collection instrument

Literature on learning environment measurement was reviewed to identify the most appropriate instrument for data collection in this study; the characteristics of some of these instruments were reviewed in chapter 2. The DREEM questionnaire was selected because of its wide application in medical and health sciences education research and because a number of studies report on its reliability and validity in different cultural and socioeconomic settings.

Description of the DREEM questionnaire

The DREEM questionnaire was developed by Roff and colleagues in 1997 as a generic tool for measuring the educational environment of medical schools, using a Delphi panel of seasoned international educators (Miles, Swift & Leinster 2012:e620-e634; Roff 2005:322-325). For two decades it has been widely used for a variety of purposes relating to the assessment of the learning environments of medical and health sciences educational institutions. It has been translated into eight languages and used in over 20 countries (Miles et al 2012:e620), and it has also been modified for use in postgraduate medical education (Roff, McAleer & Skinner 2005:326-331) and agricultural education (Atapattu, Kumari, Pushpakumara & Mudalige 2015:22-30).
The DREEM consists of fifty (50) close-ended items yielding quantitative data, to which study participants respond on a five (5) point Likert-type scale ranging from strongly agree to strongly disagree. The sample questionnaire is included as annex C.
The factor structure of the DREEM consists of five (5) subscales, namely students' perception of learning (SPL) containing 12 items, students' perception of teachers – lecturers/programme organizers (SPT) containing 11 items, academic self-perception (ASP) containing eight (8) items, students' perception of the learning atmosphere (SPA) containing 12 items, and social self-perception (SSP) containing 7 items. Of the 50 items in the DREEM, nine (9) are negative statements (items 4, 8, 9, 17, 25, 35, 39, 48, and 50) while the remaining 41 items are positive. McAleer and Roff (2001:29-33) provide a guide to rating the completed copies of the questionnaire (Annex E). For the positive items, responses are rated as follows:
Strongly agree 4
Agree 3
Uncertain 2
Disagree 1
Strongly disagree 0
For the 9 negative items, responses are rated as:
Strongly agree 0
Agree 1
Uncertain 2
Disagree 3
Strongly disagree 4
Based on the above rating rubric, the maximum global score for the entire 50 items is 200. Scores of 0-50 are rated as "Very Poor," scores of 51-100 as "Plenty of Problems," 101-150 as "More Positive than Negative," and 151-200 as "Excellent." A score of exactly 100 is interpreted as an environment viewed with "considerable ambivalence" that needs to be improved.
The maximum score for the 12 items in the subscale of perception of learning is 48. Scores were interpreted as: 0-12 "Very Poor," 13-24 "Teaching is viewed negatively," 25-36 "A more positive perception," and 37-48 "Teaching highly thought of." For the subscale of perception of teachers/course organisers, the maximum score for the 11 items was 44. Scores for this subscale were interpreted as: 0-11 "Abysmal," 12-22 "Staff in need of some retraining," 23-33 "Moving in the right direction," and 34-44 "Model lecturers/course organisers." The subscale of academic self-perception had 8 items with a maximum score of 32. Interpretation was as follows: 0-8 "Feelings of total failure," 9-16 "Many negative aspects," 17-24 "Feeling more on the positive side," and 25-32 "Confident." The fourth subscale, perception of atmosphere, had 12 items and a maximum score of 48. Interpretation of scores in this subscale was as follows: 0-12 "A terrible environment," 13-24 "There are many issues which need changing," 25-36 "A more positive atmosphere," and 37-48 "A good feeling overall." Finally, the subscale of social self-perception had 7 items and a maximum score of 28. Scores were interpreted as: 0-7 "Miserable," 8-14 "Not a nice place," 15-21 "Not too bad," and 22-28 "Very good socially." Details of the interpretation of the subscales are presented in annexure E.
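To make the rating rubric concrete, a minimal scoring sketch in Python follows. It encodes only the rules stated above (the five-point rating, the nine reverse-scored negative items, and the global score bands); the mapping of items to subscales is not reproduced here and would follow the guide in annexure E.

NEGATIVE_ITEMS = {4, 8, 9, 17, 25, 35, 39, 48, 50}
LIKERT = {"strongly agree": 4, "agree": 3, "uncertain": 2,
          "disagree": 1, "strongly disagree": 0}

def score_item(item_no: int, response: str) -> int:
    """Rate one item: positive items score 4..0, the nine negative items are reversed (0..4)."""
    value = LIKERT[response.lower()]
    return 4 - value if item_no in NEGATIVE_ITEMS else value

def global_score(responses: dict) -> int:
    """Sum of the 50 item ratings; the maximum is 200."""
    return sum(score_item(item, answer) for item, answer in responses.items())

def interpret_global(total: int) -> str:
    """Band labels for the global score, as described in the text."""
    if total <= 50:
        return "Very Poor"
    if total <= 100:
        return "Plenty of Problems"
    if total <= 150:
        return "More Positive than Negative"
    return "Excellent"

# Example: interpret_global(global_score({1: "agree", 4: "disagree", ...}))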

Data collection process

Data collection was carried out in March 2016 at the Ridgeway Campus and the University Teaching Hospital, Lusaka. The data were collected by research assistants, undergraduate students who volunteered for the purpose. After being briefed at the Ridgeway Campus, the research assistants were assigned to different programmes for data collection; this facilitated administration and collation of the questionnaires by programme. Before or after a lecture, the students were addressed by their class representatives and the research assistant assigned to the programme, who explained the purpose of the study. Each participating student was given the information sheet (Annex A) and, after reading and confirming understanding of its content, signed the consent form (Annex B). The participant was then handed a copy of the DREEM questionnaire (Annex C) and a copy of the demographic questionnaire (Annex D). Permission to use the DREEM questionnaire was sought and obtained from one of the authors (Dr McAleer; see annexes F & G). Each participant was asked to respond as truthfully as possible to each item in the questionnaire unassisted, and to provide three responses to the one open-ended question included in the questionnaire. The questionnaire takes about 15 minutes to complete, but the students were allowed to complete it at their convenience. Follow-up was by personal visits to the classes by the research assistants and phone calls to the participants through their class representatives.
Completed copies of the questionnaire were returned in large envelopes to the investigator, who then rated the responses and entered the raw data into an Excel spreadsheet template developed by the investigator.

Reliability and validity

Reliability

Validity and reliability are important attributes of any research report. Reliability measures the consistency and stability of a measurement tool. In quantitative studies, test-retest reliability may be used to assess the stability of the test instrument over time (Velligan, Fredrick, Mintz, Li, Rubin, Dube, Deshpande, Trivedi, Gautam & Avasthi 2014:1047). Most often, Cronbach's alpha is computed to measure the internal consistency (inter-item correlation) of items designed to measure the same construct in a data collection tool (Hammond, O'Rourke, Kelly, Bennett & O'Flynn 2012:1; Peterson & Kim 2013:194; Tang, Cui & Babenko 2014:205; Tavakol & Dennick 2011:53; Yusoff 2012b:509638; Vaughan, Mulcahy & McLaughlin 2014:1).
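As an illustration of this internal consistency check, a minimal computation of Cronbach's alpha is sketched below, assuming the item ratings are held in a NumPy array with one row per respondent and one column per item; the file name in the commented example is hypothetical.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of item scores:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Example (hypothetical file of rated DREEM items, one row per respondent):
# scores = np.loadtxt("dreem_scores.csv", delimiter=",")
# print(round(cronbach_alpha(scores), 2))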

Validity

Validity, as used in this study, refers to the ability of the test instrument to provide data that lead to inferences and conclusions that can be considered "the best approximation to the truth" (Research Methods Knowledge Base, accessed April 18, 2016). Several factors influence the validity of a research report; these generally arise from the operationalization of the research process. Construct validity refers to the ability of the instrument to measure the construct it is intended to measure (Yussoff 2012a:314). Construct validity in quantitative research is often assessed by principal component analysis (PCA), which in effect determines the factor structure of the tool used for data collection.
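A minimal sketch of such a principal component analysis is given below, assuming the rated DREEM items are held in a pandas DataFrame with one column per item and using scikit-learn; the five-component choice simply mirrors the five subscales described earlier and is not a claim about this study's own analysis.

import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def principal_components(items: pd.DataFrame, n_components: int = 5):
    """Standardise the item scores and extract the leading principal components."""
    scaled = StandardScaler().fit_transform(items.values)
    pca = PCA(n_components=n_components).fit(scaled)
    # Proportion of variance explained by each component, and the item loadings.
    return pca.explained_variance_ratio_, pca.components_

# Example (hypothetical DataFrame of the 50 rated DREEM items):
# variance_explained, loadings = principal_components(dreem_items)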
Internal validity refers to the ability of the research to demonstrate cause-effect relationships, a factor that is very important in experimental studies. Since this study is non-experimental and descriptive, such cause-effect inferences were not its prime concern, and external and construct validity are given more emphasis.

EXTERNAL VALIDITY OF THE STUDY

External validity refers to the generalizability of research findings to populations or groups beyond those from which the sample was collected. It is an important accompaniment of any good research, whether the design is quantitative, qualitative, or mixed method, and it is subject to several threats. These threats are "explanations of what may go wrong when we try to transport results from one study to another while ignoring their differences" (Pearl & Bareinboim 2014:579). Some of the threats to the external validity of this study could arise from selection bias, heterogeneity of the populations, and instability of the test instrument. The probability sampling technique employed in the study helps to control for selection bias: to a large extent the study sample was representative, as statistical methods were used to calculate the sample size for each programme and each student had a fair chance of participating in the study. Regarding heterogeneity, the study population was comparable to the target population in the sense that both consist of undergraduate students in similar healthcare professions programmes drawn from the same country. Furthermore, the curricula of these schools are similar, having been designed and developed by teams drawn from the same pool of university faculty in Zambia. The schools share resources, including teaching staff, laboratories, and the clinical facilities provided by the University Teaching Hospital in Lusaka, and they are regulated by the same policy frameworks provided by the Health Professions Council of Zambia, the General Nursing Council, and the Higher Education Commission. The reliability of the data collection tool has already been discussed above.
Several studies report on the construct validity and internal consistency of the DREEM when used across different cultures (Hammond et al 2012:1; Vaughan et al 2014:1). Most of these studies employed confirmatory factor analysis to confirm or disprove the factor structure, and computed Cronbach's alpha to measure internal consistency. Although some concerns have been expressed about the factor structure of the DREEM (Hammond et al 2012:1; Jakobsson, Danielsen & Edgren 2011:e237), such concerns have been attributed to the use of sample sizes below the minimum recommended for such analysis (Roff & McAleer 2015:602-603; Wetzel 2012:1066), and the usefulness of the DREEM as a tool for measuring the educational climate of medical schools globally is not seriously in dispute. On this basis, the tool was adopted for this study.
