The methodology is explained using the Research Onion by Saunders, Lewis and Thornhill (2016). The onion consists of six layers: research philosophy, research approach, methodological choice, research strategy, time horizon, and data collection method (Saunders et al., 2016), see Figure 4. The research onion is used both to add credibility to the study and to make the methodology clearer for the reader as well as for the researchers (Saunders et al., 2016).
Research philosophy explains how research knowledge is developed, and it refers to the set of beliefs about the reality being investigated (Bryman, 2012). In this thesis, a positivist research philosophy was used. The positivist philosophy implies that only phenomena that can be observed can lead to the production of credible data (Saunders et al., 2016). A positivist approach often uses existing theory to create hypotheses that are then tested, and it should be conducted in a value-free manner, where the researchers influence the respondents as little as possible in order to obtain objective results (Saunders et al., 2016). This philosophy was chosen because it enables quantifiable measurements and statistical analysis (Saunders et al., 2016).
The research approach describes how the researcher moves between theory and data. The two main approaches are deductive and inductive, but a combination of the two is also possible (Saunders et al., 2016).
This thesis uses a deductive approach. The deductive approach can be described as theory testing: it starts by reviewing existing theory and knowledge within a subject, and on that basis, hypotheses or research questions are formulated. These are then tested against the researcher's own observations (Saunders et al., 2016). A deductive approach is appropriate when the researcher wants to transform general knowledge into more specific knowledge, and it can be more time-efficient when there are previous findings within the subject. The deductive approach is usually connected to the positivist philosophy and to quantitative data (Saunders et al., 2016), both of which are used in this research.
The methodological choice consists of deciding between a qualitative and a quantitative method, and between an exploratory, descriptive, or explanatory purpose (Saunders et al., 2016). In this thesis, a descriptive research design was chosen. In descriptive research, the phenomena should be well defined before starting; it is therefore often used for topics that have been studied previously (Saunders et al., 2016).
In quantitative research the amount of data is essential, and the method is used when the researcher wants to test and measure results from numerical data in order to make statistical analyses and generalizations (Saunders et al., 2016). A quantitative method was chosen because it fits well with the purpose of measuring and comparing different generations' behavior (Bryman, 2012), and because it allows the use of numerical data and statistical analysis. Further, it goes hand in hand with the decision to use a positivist philosophy and a deductive approach (Saunders et al., 2016).
The research strategy can be described as the researchers' plan for how to achieve their goals and answer the research questions (Saunders et al., 2016). There are multiple strategies that can be used in quantitative research; some of the most common are experiments, surveys, case studies, and archival and documentary research (Saunders et al., 2016).
The decision to use a survey was one of the first choices made in this research. A survey enables the researcher to collect a large amount of data, and it is often very efficient and economical (Saunders et al., 2016). The main downside of surveys is that the questions are usually closed-ended, meaning that the researcher only gets answers to the exact questions asked and cannot pose follow-up questions, which can lower validity (Saunders et al., 2016).
Another question that needs to be answered is over what period the research will be conducted. There are two options: cross-sectional studies and longitudinal studies. A cross-sectional study is a snapshot of how the respondents think about the topic at the point in time when they answer the questions, while a longitudinal study is conducted over a longer period (Saunders et al., 2016). This thesis collected data from one survey, asking the respondents about their impulsive buying behavior at a certain point in time, meaning that it was a cross-sectional study.
Data collection methods
Primary data collection
Primary data is the original information collected and used by the researcher (Easterby-Smith, Thorpe, & Jackson, 2018). In this research, the primary data have been collected through a self-administered online survey. Online surveys are beneficial since they are convenient and fast, and because collecting a large sample costs no more than collecting a small one. Another advantage is that they are anonymous and no interviewer is present, which enables the researcher to ask sensitive questions (Sue & Ritter, 2007).
One disadvantage of online surveys is that respondents can easily quit the survey without finishing it (Sue & Ritter, 2007). To prevent that, the survey was made as clear and as interesting as possible. Further, the respondents who finished the survey had the opportunity to fill in their e-mail addresses for a chance to win two movie tickets. However, the questions that the authors considered most important were still placed at the beginning, so that some data could be collected even if a respondent abandoned the survey. According to Sue and Ritter (2007), another disadvantage of online surveys is that it is impossible to draw conclusions about the whole population, since not everyone is on the internet. However, according to IIS (2018), the share of internet users among the Swedish population in Gen X and Y is 99-100 percent.
The design of the survey started with a literature review to see what questions previous scholars had used in their research. The questions that the authors considered useful in this research were inspired by studies by Rook and Fisher (1995), Dawson and Kim (2009), and Badgaiyan, Verma and Dixit (2016). These were translated into Swedish and combined with additional questions written by the authors. The survey was constructed in the software program Qualtrics. Before the survey was distributed on social media, a pre-test was made, which is further described in Section 3.6.4. After the pre-test, some minor adjustments were made.
Before the questionnaire started, the respondents were introduced to the topic and informed that their participation was voluntary, in line with ethical guidelines from Vetenskapsrådet (The Swedish Research Council) (2002). The first part of the survey consisted of a demographic section. In compliance with RFSL's (2016) guidelines on gender and trans in surveys, the options other and do not want to state were added to the gender question, along with male and female.
The questionnaire consisted of 25 questions in total, as can be seen in Table 1. Three of these questions contained a total of 26 statements, constructed in a matrix table with checkboxes on a five-point Likert-type scale. An even-numbered scale can be convenient since it forces the respondents to choose between a positive and a negative position; however, it may confuse respondents who are neutral, and therefore an odd-numbered scale was used (Sue & Ritter, 2007). The questionnaire also contained two open-ended questions, where the respondents could share opinions beyond the multiple-choice questions (Sue & Ritter, 2007). These open-ended questions aimed to elicit additional and more in-depth knowledge from the respondents. The full questionnaire can be seen in Appendix A, and a translated version in Appendix B.
In this survey, respondents born between 1960 and 1980 were categorized as Gen X, and respondents born between 1981 and 2000 as Gen Y. This categorization was based on the definition by Gurâu (2012), who stated that Gen X was born between 1961 and 1980 and Gen Y between 1980 and 2000; these boundaries were slightly adjusted. Respondents who stated that they were born after 2000 or before 1960 were excluded from the final sample.
In this research, the target population is every Swede in Gen X and Gen Y. Because there is no register with contact information for every Swede in Gen X and Y, non-probability sampling was used, more precisely convenience sampling. The self-administered online survey was spread through the authors' social media, mainly Facebook. The survey was shared both on the authors' Facebook pages and in Facebook groups. Social media was chosen since many people in the target group can be found there (IIS, 2018). Convenience samples are biased because the researcher may approach certain respondents, and the respondents who choose to participate may differ from those who do not. It is therefore not possible to fully generalize data collected from a convenience sample (Bryman, 2016).
When sharing the link to the survey, the authors also asked people to share it further with their Facebook friends, with the ambition of creating a snowball effect, which means that the researcher uses a group of people to reach others (Bryman, 2016). Just as with convenience sampling, it is impossible to fully generalize data collected through snowball sampling (Bryman, 2016), since it gives people with many social connections a higher chance of being selected. However, snowball sampling is often used because it usually provides a higher response rate (Berg, 2006).
According to SCB (Statistics Sweden) (2018), the Swedish population between 20 and 60 years of age, which closely matches the target population, represents almost 4.6 million people. Researchers often work to a 95 percent level of certainty (Saunders et al., 2016). At this level of certainty, the sample size for a population of between 1 and 10 million should be 384 respondents. However, the level of certainty must always be weighed against time and cost (Bryman, 2016), and due to the limited time, the ambition was to sample 100 people in Gen Y and 100 people in Gen X. This means that a sampling error was tolerated. With 200 respondents from a population of 4.6 million and a 95 percent confidence interval, the margin of error is 6.93 percent.
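The 6.93 percent figure can be reproduced with the standard margin-of-error formula for a proportion at maximum variance (p = 0.5); the following Python sketch is illustrative and not part of the original analysis:

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Margin of error for a proportion; z = 1.96 for a 95 percent confidence level."""
    return z * math.sqrt(p * (1 - p) / n)

# With 200 respondents, the population of ~4.6 million is large enough
# that the finite-population correction is negligible:
moe = margin_of_error(200)
print(round(moe * 100, 2))  # -> 6.93
```

Solving the same formula for n at a 5 percent margin of error gives the roughly 384 respondents mentioned above.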
Before the survey was distributed, a pre-test was executed. According to Saunders et al. (2016), a pre-test is especially important when the questions are new and untested in order to avoid misunderstandings. Furthermore, it is important in self-administered surveys, when no interviewer is present to clarify potential misunderstandings or uncertainties (Bryman & Bell, 2015).
First, the authors sat down with two people from each generation while they answered the survey. The respondents were asked to provide feedback on the questions. After this, some of the questions were reformulated or removed, and the order of the questions was adjusted to make the survey more cohesive. This is a way of testing face validity and allows the researcher to ensure validity at an early stage (Saunders et al., 2016). After the adjustments, the survey was sent to 18 people from both generations, and the responses were checked with a test-retest method, which is used to assess reliability. The number of responses was too small to draw firm conclusions; however, the results indicated that the survey was reliable and valid (Saunders et al., 2016).
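The test-retest check described above amounts to correlating the same respondents' answers from two occasions; a minimal sketch with hypothetical scores (the actual pre-test data are not reproduced here):

```python
from scipy.stats import pearsonr

# Hypothetical composite scale scores from six pre-test respondents,
# measured on two occasions
first_run = [4.0, 2.5, 3.5, 5.0, 2.0, 3.0]
second_run = [3.8, 2.7, 3.4, 4.9, 2.2, 3.1]

# A correlation close to 1 indicates stable answers over time,
# i.e. good test-retest reliability
r, p = pearsonr(first_run, second_run)
```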
Secondary data collection
The databases Google Scholar and PRIMO were used to search for secondary data. First, the authors searched for impulse buying in PRIMO, which returned almost 45 thousand results. Some of the major articles were read in order to gain an understanding of the subject and to become familiar with some of the best-known researchers in the field, such as Rook (1987), Stern (1962), and Beatty and Ferrell (1998). After the general search, more specific searches were made using phrases such as impulse buying online, generation impulse buying, apparel impulse buying online, and generation buying impulse apparel. Through these searches, the authors became familiar with other major researchers within these topics, such as Dholakia (2000), Dawson and Kim (2009), and Lissitsa and Kol (2016). These were supplemented with newer sources in order to obtain broad yet up-to-date secondary data.
The questionnaire was designed in the software program Qualtrics. After the survey closed, a report with the descriptive data was extracted from Qualtrics and analyzed in IBM SPSS Statistics 25. Among the respondents who did not finish the survey, those who had completed at least 78 percent of it were included in the report, which is why the final questions have a slightly lower answer rate than the first ones.
In total, 728 responses were collected during the time frame, between 11 and 25 March 2019. Of these, 113 belonged to Gen X and 596 to Gen Y, while 19 respondents were excluded from the final sample because they were born after 2000 or before 1960 and hence did not belong to the target group. The number of respondents was then 709.
Reliability and validity
Reliability of a scale refers to the extent to which the data yield consistent findings. Two factors should be considered when measuring reliability. First, test-retest stability measures whether the data are consistent and how they correlate with previous data (Babin & Zikmund, 2016). Second, internal consistency measures the extent to which different parts of a summated scale are consistent in what they indicate. This can be assessed by dividing the test in two and finding the correlation between the separate halves, or by using Cronbach's coefficient alpha, which measures the average of all split-half coefficients. Cronbach's alpha varies between 0 and 1, and according to Babin and Zikmund (2016), an acceptable value is 0.7 or more. This value was also used as the acceptance threshold in this analysis.
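Cronbach's alpha as described can be computed directly from the item variances and the variance of the summed scale; a sketch with hypothetical Likert data (not the study's actual responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical five-point Likert answers: rows = respondents, columns = statements
data = [[5, 4, 5], [4, 4, 4], [2, 3, 2], [3, 3, 3], [5, 5, 4]]
alpha = cronbach_alpha(data)
print(alpha >= 0.7)  # the 0.7 acceptance threshold used in this analysis
```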
Validity is the concern that the test measures what it is supposed to measure. There are multiple ways of establishing validity. Face validity means checking that the questions appear to reflect what they are intended to measure, which usually requires asking for feedback on the questions (Bryman, 2016). Concurrent validity means asking the same thing in different ways and checking whether the results differ. Construct validity concerns how well the measure captures more abstract concepts, and convergent validity compares measurements collected through two different methods (Bryman, 2016).
The hypotheses were tested using a method called Compute Variables, which adds the values of each respondent's answers together and computes an average for each respondent. Some of the questions were used in more than one hypothesis. These values were then tested with both a t-test and a chi-square test in order to accept or reject the hypotheses.
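The Compute Variables step, forming one average score per respondent from several item scores, can be sketched as follows (hypothetical data, not the study's actual responses):

```python
import numpy as np

# Hypothetical five-point Likert answers: rows = respondents, columns = statements
responses = np.array([
    [5, 4, 3],
    [2, 2, 1],
    [4, 5, 5],
])

# One composite score per respondent: the mean of that respondent's item scores,
# e.g. (5 + 4 + 3) / 3 = 4.0 for the first respondent
scale_scores = responses.mean(axis=1)
```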
When comparing two different groups, independent-samples t-tests are often used. This test examines whether there are any statistically significant differences between the groups that could reflect the larger population (Pallant, 2016). An independent t-test examines the probability that the two groups came from the same population (Pallant, 2016). A two-tailed significance value, also called the p-value, below 0.05 means that with 95 percent confidence the difference observed in the random sample also exists in the larger population (Pallant, 2016). In this thesis, a p-value of 0.05 or below was considered statistically significant.
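An independent-samples t-test of the kind described, comparing composite scores between two groups, can be run with SciPy; the scores below are hypothetical stand-ins for the generations' composite variables:

```python
from scipy import stats

# Hypothetical composite impulse-buying scores for each generation
gen_x = [2, 3, 2, 3, 2, 3, 2, 3]
gen_y = [4, 5, 4, 5, 4, 5, 4, 5]

# ttest_ind returns the t statistic and the two-tailed p-value
t_stat, p_value = stats.ttest_ind(gen_x, gen_y)
significant = p_value < 0.05  # the threshold used in this thesis
```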
A cross table is used for two reasons. The first is to explore the relationship between two independent variables and compare the observed frequencies in each category (Pallant, 2016). The second is to obtain a chi-square value: this test compares the observed frequencies within the respective categories and examines whether the differences are statistically significant and can be generalized to a larger population (Pallant, 2016).
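The cross-table and chi-square procedure can be illustrated with SciPy's chi2_contingency on a hypothetical 2x2 table of observed frequencies (the counts are invented for illustration):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical cross table: rows = generations, columns = answer categories
observed = np.array([
    [30, 70],  # Gen X
    [55, 45],  # Gen Y
])

# chi2_contingency derives the expected frequencies from the row/column totals
# and returns the chi-square statistic, its p-value, and the degrees of freedom
chi2, p_value, dof, expected = chi2_contingency(observed)
```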
When the chi-square test and the t-test indicated different results in the hypothesis testing, the decision was made to prioritize the t-test, since a parametric test generally has higher statistical power (Pallant, 2016).