STUDENT SUPPORT SERVICES IN DISTANCE EDUCATION


CONCEPTUAL FRAMEWORK

INTRODUCTION

The aim of this chapter is to present a conceptual framework that guides the research process for this study. Miles and Huberman (1994:18) define a conceptual framework as a product that “explains, either graphically or in narrative form, the main things to be studied – the key factors, concepts or variables – and the presumed relationships among them”. Because there is no single theory of quality, and because quality is conceptualised differently by different authors, a conceptual framework was devised for this study. This framework brings together different but related concepts in order to understand the problem this study pursues and to answer the research questions. The researcher’s understanding of service quality was guided by the work of academics in the field of service quality. The base model for the conceptual framework is the SERVQUAL model (Parasuraman et al, 1985; 1988).
The first section of this chapter discusses service quality models that help researchers understand service quality. The second presents the development of the SERVQUAL model and the constructs that make up its components. Subsequent sections present models and theories that can be linked to service quality. The last section discusses the adaptation of SERVQUAL to take account of the underlying characteristics of DE support services.

SERVICE QUALITY MODELS

Kang and James (2004) identify two perspectives that are employed when assessing the quality of services in service industries: the American perspective, which follows Parasuraman et al’s (1985; 1988; 1991) model of service quality, and the European perspective, which follows Gronroos’s (1982) model. These two perspectives have influenced the development of various models used to judge and assess the quality of services in industries and educational organisations. An overview of some of these service quality models is presented here: Parasuraman et al’s (1985; 1988) SERVQUAL; Cronin and Taylor’s (1992) SERVPERF; Firdaus’s (2004) HEdPERF; Shaik et al’s (2007) DL-sQUAL; and Tan and Kek’s (2004) enhanced SERVQUAL. Some of these models were designed specifically to measure service quality in education.
The SERVQUAL model was the first instrument developed to measure service quality. It was designed and introduced by Parasuraman et al (1985; 1988). The SERVQUAL model was developed following research studies on service quality (Parasuraman et al 1985; 1988). A detailed description of this model is given in the next section as it is the base model for our conceptual framework.
The SERVQUAL model was criticised on theoretical and operational grounds. Researchers such as Buttle (1996) noted that SERVQUAL’s five dimensions (reliability, assurance, responsiveness, tangibles and empathy) are generic and cannot apply to all services. Another criticism was that the model’s expectation measurements were unnecessary, because perceptions alone were found to be sufficient to measure service quality (Cronin & Taylor 1992). As a result, Cronin and Taylor (1992) developed a performance-only instrument called SERVPERF, which measures service quality using perception measurements alone.
It should be noted that SERVPERF is an adaptation of the SERVQUAL model. Its major limitation for this study is that it measures perceptions only, without measuring expectations. Expectations have been found to have diagnostic value (Parasuraman et al, 1988; Jain & Gupta, 2004) that can help managers ascertain where quality shortfalls prevail and “what possibly can be done to close the gap” (Jain & Gupta, 2004:29).
In 2004, Firdaus Abdullah developed a model specifically to measure service quality in higher education, in order to address the lack of appropriate models. HEdPERF (“Higher Education Performance-only”) is a performance-based scale. Nonetheless, HEdPERF was not found suitable for this study because its dimensions are broad and not suited to the content of DE student support services.
Randheer (2015) modified HEdPERF to suit the context of Arab higher education culture. The modified instrument, called CUL-HEdPERF, was evaluated as a better instrument than the original HEdPERF and SERVPERF for measuring service quality in higher education in Saudi Arabia (Randheer, 2015). Furthermore, Tan and Kek (2004:18) developed a survey instrument that they claimed was “especially for use by a university”. They combined Kwan and Ng’s (1999) and Harvey’s (2002) instruments, both of which are based on university students’ views about their university experience. Tan and Kek (2004) piloted their instrument and refined it afterwards. The instrument consists of eight factors; however, none of these factors was found to be reflective of DE support services, and for that reason the instrument was not considered for this study.
Another model the researcher reviewed was the DL-sQUAL scale. The DL-sQUAL scale was developed by Shaik et al (2007), to measure distance learning service quality. This instrument is based on research that involved students from a DE institution in the south-east US. According to Shaik et al (2007), the DL-sQUAL scale demonstrates psychometric properties based on the reliability and validity test analysis. The scale measures three types of service, namely “instructional quality services”, “management and administrative services” and “communication”. Nonetheless, the content of the items of this scale seems to be representative of the types of service offered in that particular DE institution. For example, one item from instructional quality services reads, “Toll-free phone number is available to contact staff for assistance”. One item from management and administrative services reads, “I feel safe in my online financial transaction using the college website”. One item from communication reads, “It is not a hassle to get a refund for dropping or withdrawing from the course(s)”.
The items in Shaik et al’s (2007) scale therefore seem to be limited to the services of a particular DE university; as Kwan and Ng (1999) note, students’ expectations and perceptions are influenced and shaped by their cultural environments. As a result, the scale could not be considered for this study.
Having reviewed various models of service quality in higher education and distance education, the researcher selected the SERVQUAL model for this study. SERVQUAL was found to be more flexible than the other models because it can be used across all service industries, and although it was designed to measure services in industries, it can be modified to suit an educational setting. The other models, by contrast, were developed to address particular contexts, which is not surprising because service quality is context-specific.


THE DEVELOPMENT OF THE SERVQUAL MODEL

The development of the SERVQUAL model started in the 1980s and continued into the early 1990s. The model was developed by A. Parasuraman, Valarie Zeithaml and Leonard Berry, following a number of empirical studies. This section discusses the important insights of the studies that contributed to the development of the SERVQUAL model. The discussion covers service quality gaps and the Gap Model; perceived service quality; and the SERVQUAL model itself.

The Gap Model of Service Quality

The Gap Model of service quality was developed by Parasuraman et al (1985) to help service providers manage service delivery in their sectors. It preceded the SERVQUAL model. The Gap Model is a measurement and management framework that was designed after an empirical study (Parasuraman et al, 1985). During the initial stages of their study, Parasuraman et al (1985) noted that there was little literature on service quality but abundant literature on goods (product) quality. They also noted that there was hardly any tangible evidence or indicators that could be used to evaluate the quality of services; the only tangible evidence in the service area is “limited to the service provider’s physical facilities, equipment and personnel” (Parasuraman et al 1985:42). These researchers also found that quality management principles for goods were being used to understand and evaluate service quality, and they pointed out that such principles were inadequate for this purpose, because service quality is an abstract construct that cannot be measured objectively using tangible measures. They proposed that an appropriate approach to assessing the quality of services in service industries was to measure service users’ expectations and their perceptions of the service offered by service providers.
Parasuraman et al’s (1985) first empirical work on service quality began with an exploratory investigation of service quality in four different service sectors. The investigations involved focus group and in-depth interviews with service users and executives (managers) from the following service sectors: credit card, retail banking, securities brokerage, product repair and maintenance. The results of the exploratory investigation revealed the following important insights into service quality.
Firstly, service users evaluate service quality by comparing expectations (the service they expect to receive) with perceptions (the service actually received) on quality dimensions. (This result confirmed earlier findings (Lewis & Booms, 1983; Gronroos 1982) that service users compare the service they expect with the service they receive.)
Secondly, the results revealed a set of service quality discrepancies or gaps associated with service providers.
Thirdly, service users used the same determinants to evaluate quality.
From these insights Parasuraman et al (1985) developed a service quality model referred to as the Gap Model of service quality. It is an “integrated view” which shows the relationship between an organisation and a service user. The main aim of the Gap Model is to identify the gaps between service users’ expectations and their perceptions of the services offered at different stages of service delivery, and to explain the causes of these gaps, which occur as a result of quality shortfalls within organisations.
The Gap Model proposes that service users’ perception of service quality depends on these gaps. In addition, the model depicts that service users’ expectations are highly influenced by statements made by an organisation and its personnel. For example, an advertisement about a service may state that the organisation provides excellent service. However, when the service is delivered, the user’s expectations of “excellent” might be frustrated. The gap will arise when the expectations of “excellent” service are not fulfilled at the time of delivery of the service. According to Parasuraman et al (1985:44), “These gaps can be major hurdles in attempting to deliver a service”.
Figure 3.1 shows the Gap Model of service quality. Five service-quality gaps are depicted in the Gap Model: Gap 1, Gap 2, Gap 3, Gap 4 and Gap 5. These gaps arise as a result of an organisation not meeting service users’ expectations and needs. The first four gaps are called “company gaps” or internal gaps; Gap 5 is called the service users’ gap. Parasuraman et al (1985) point out that what a service user perceives in a service is a function of the magnitude and direction of the gap between expected service and perceived service. This means that service users’ perceptions are influenced by a series of gaps that prevent the delivery of services within the organisation; in other words, before the service quality gap (Gap 5) can be closed, the other gaps must also be addressed. Each gap is elaborated and explained below:
GAP 1: The gap between service users’ expectations and management perceptions of service users’ expectations
Gap 1 arises when the management of an organisation that provides service does not correctly perceive the service user’s expectations, or what the service users want. For instance, DE institutions’ administrators may think delivering a lot of study material is what students want, but the students may be more concerned with how to access lecturers and tutors to assist them.
GAP 2: Gap between management’s perception and service quality specification
Gap 2 (the standards gap) occurs when the management of the organisation that provides a service correctly perceives what the service user wants but does not set performance standards. This means the organisation cannot translate the service user’s expectations into clear quality standards; as a result, there are no quality specifications to guide its personnel. In many cases, standards are simply described as “adequate”, without defining different levels of adequacy.
GAP 3: The gap between service quality specification and service delivery
This gap arises when the specifications of services delivered are not met. This could occur due to poor management or putting service delivery in the hands of people who lack expertise, or have been poorly trained, or are incapable of or unwilling to meet the set service standard.
GAP 4: The gap between service delivery and external communication
Service users’ expectations are highly influenced by statements made by companies or organisations’ representatives and advertisements. The gap arises when expectations are not fulfilled at the time of delivery of the service. For example, a DE institution may advertise itself to be the best, yet in reality it may be delivering very poor services that fail to meet students’ expectations.
GAP 5: The gap between expected service and experienced service
Gap 5 is called the service users’ gap because it is experienced by the service user. It is also referred to as perceived service quality gap. Gap 5 is the difference between service users’ expectations of service and their perceptions of the service actually delivered. It arises when a service user’s perceptions of the experience with the service do not match the user’s expectations of the service due to a series of shortfalls within the service provider’s organisation.
Perceived service quality is conceptualised differently by researchers. Zeithaml (1987) defines perceived quality as a service user’s judgement of the excellence of a particular service. Parasuraman et al (1985), on the other hand, define perceived service quality as the difference or discrepancy between a service user’s expectations and perceptions. This discrepancy depends on the size and direction of the four gaps concerning the organisation’s delivery of the service. Perceived service quality is multi-dimensional in nature. According to Parasuraman et al (1988:15), perceived quality is a “form of attitude, related to but not equivalent to satisfaction”. Furthermore, Parasuraman et al (1985) state that:
When expected service is greater than perceived service (ES>PS), perceived quality is less than satisfactory and will tend towards totally unacceptable quality, with increased discrepancy between expected service and perceived service.
When expected service is equal to perceived service (ES=PS), perceived quality is satisfactory.
When expected service is less than perceived service (ES<PS), perceived quality is more than satisfactory and will tend towards ideal quality.
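The comparison rules above amount to classifying a service encounter by the sign of the gap score (perception minus expectation). As an illustrative sketch only (the function name and numeric scale are the editor’s own, not part of the SERVQUAL instrument):

```python
def perceived_quality(expectation: float, perception: float) -> str:
    """Classify perceived quality from one expectation/perception pair.

    Follows Parasuraman et al's (1985) comparison rules:
    ES > PS -> less than satisfactory; ES = PS -> satisfactory;
    ES < PS -> more than satisfactory (tending towards ideal quality).
    """
    gap = perception - expectation  # the gap score (P - E)
    if gap < 0:
        return "less than satisfactory"
    if gap == 0:
        return "satisfactory"
    return "more than satisfactory"
```

For example, a student who expected a 6 on a 7-point scale but experienced a 4 falls into the “less than satisfactory” category, since the gap score is negative.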


Determinants of service quality

Through the focus group interviews, Parasuraman et al (1985) found that service users judge the quality of services delivered to them by the service provider using ten determinants/dimensions, namely: tangibles, reliability, responsiveness, competence, access, courtesy, communication, credibility, security and understanding/knowing the service user. Each of the ten dimensions was found to be consistent among the focus groups. Furthermore, the authors found that the ten dimensions could be used to evaluate the quality of services in various service organisations, while emphasising that the specific evaluative criteria may vary from service to service. Table 3-1 tabulates the ten service quality dimensions and their explanations.

The SERVQUAL Model

Subsequent to their first study (Parasuraman et al, 1985), Parasuraman, Zeithaml and Berry carried out another study on service quality (Parasuraman et al, 1988), whose aim was to develop a multiple-item scale for measuring perceived service quality (Gap 5). The resulting model was called SERVQUAL. The Gap Model lacked a scale of its own, so the new model supplied a methodology for measuring service quality: the Gap Model offers theory only, whereas the SERVQUAL model offers both a theoretical and a methodological framework.
The first stage of the scale development was the generation of 97 item statements (descriptors) for the ten determinants/dimensions of service. Each item was “recast” into two statements, one to measure expectations and the other to measure perceptions. To ascertain the reliability and validity of the scale measurements, Parasuraman et al (1988) carried out extensive statistical and non-statistical tests. Several steps were followed. The first was the collection of expectation and perception data from 200 respondents who were service users in five different service sectors. After data collection, the ten dimensions went through what Parasuraman et al (1988) call “the purification process” to create a scale to measure service quality. The ten dimensions and their 97 items were subjected to stages of refinement: the initial ten determinants uncovered in Parasuraman et al (1985) were combined and reduced to five, namely tangibles, reliability, responsiveness, assurance and empathy, and the 97 items were reduced to 34, then to 22. The total scale reliability was found to be 0.9, and the scale was found to have “sound and stable psychometric properties” (Parasuraman et al, 1988:24). The validity of the SERVQUAL scale was also tested, and the instrument was found to be valid. The final version of the instrument was called SERVQUAL.
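The reliability figure of 0.9 reported above is a Cronbach’s alpha coefficient. As a minimal sketch of how such a coefficient is commonly computed from a respondents-by-items matrix of scores (the editor’s own illustration, not the authors’ procedure):

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondent rows, one score per item.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    where k is the number of items. Sample variances are used throughout.
    """
    k = len(scores[0])                                        # number of items
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])        # variance of row totals
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

When every item ranks respondents identically, the coefficient reaches its maximum of 1; values around 0.9, as Parasuraman et al (1988) report, indicate high internal consistency.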
The SERVQUAL authors propose that the SERVQUAL model can be used by service providers to better understand service users’ expectations and perceptions and to improve the level of service quality in their organisations. They also suggest that the items could be modified according to the needs of the organisation that wants to measure service quality. According to Parasuraman et al (1988), the model represents a global measurement across many service encounters.
Table 3-2 shows how some of the dimensions were combined to create new ones. The table also shows the explanations of the dimensions. The first three dimensions, namely: “tangibles”, “reliability” and “responsiveness”, were retained. The “competence”, “courtesy”, “credibility” and “security” dimensions were combined to create the “assurance” dimension. The “access”, “communication” and “understanding the customer” dimensions were also combined to create a new dimension called “empathy”.
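The consolidation described above can be stated explicitly as a mapping from the ten original determinants to the five SERVQUAL dimensions (a plain restatement of the combinations in the text; the dictionary name is the editor’s own):

```python
# How the ten original determinants collapse into the five SERVQUAL dimensions
DIMENSION_MAP = {
    "tangibles": "tangibles",
    "reliability": "reliability",
    "responsiveness": "responsiveness",
    "competence": "assurance",
    "courtesy": "assurance",
    "credibility": "assurance",
    "security": "assurance",
    "access": "empathy",
    "communication": "empathy",
    "understanding the customer": "empathy",
}
```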
