

In this chapter we present the chosen method and how we have approached the field of interest in order to find the information needed to fulfill the purpose of the thesis.


Methodology does not specify the different methods that will be used in the research, but rather the meta-study of how the methods will contribute to the overall scientific enterprise (Hoover, 2005). Economic methodology concerns meta-theoretical questions: whether the claims of economists are reliable and true, how one can judge if this is the case, and whether the conclusions are to be believed or not (Pålsson, 2001). Which research method will be used, and how the results will be interpreted, is to some extent determined by the researcher's persona and cannot be disassociated from past experience, culture, history and future intentions (Walters, 1994; Koch, 1995, cited in Clark, 1998). The most fundamental level at which to describe research methods is the philosophical level, where the focus is on the most general features of the world, including mind, matter and reason (Blackburn, 1994; Clark, 1998).


According to Saunders, Lewis and Thornhill (2009), when conducting research one of four major philosophies of science can be chosen as the starting point: Positivism, Interpretivism, Pragmatism or Realism. The common research methods in financial economics are divided into paradigms based on a positivistic philosophy, first developed by John Neville Keynes in 1889 and redefined by Milton Friedman in 1953 (Frankfurter & McGoun, 1999). The positivistic philosophy is derived from logic and mathematical treatment and is based on observable objects (Saunders et al, 2009), while it is judged by the accuracy, scope and conformity of its estimates (Friedman, 1953). Positive economics makes no ethical or normative judgments and is used to provide a system of generalizations and theories that can be used to predict changes in phenomena not yet observed (Friedman, 1953). The truth regarding the observable patterns does not depend on beliefs but on facts presented in the external reality (Clark, 1998). These observations are grounded in "hard" data and define the world as linear, which decreases the possibilities for interpretation by focusing on description of the empirical result (Saunders et al, 2012). The positivistic philosophy assumes that the physical world operates in accordance with general laws and tries to explain the social reality by describing observed patterns, which leaves very little room for interpretation and focuses more on the empirical result.
In interpretivism, law and morals are perceived as one, and it is assumed that access to reality is given solely through social constructions; the focus lies on interpretations rather than descriptions. Interpretivism assumes that the law is not formed by given facts, but rather by the adjudicators. Hence, this philosophy can be seen as the opposite of positivism (Saunders et al, 2009). In pragmatism, the major focus lies on the research questions, and this philosophy is a mixed one. Hence, it is generally suitable when neither the positivist nor the interpretivist philosophy can be adopted (Saunders et al, 2009). Realism holds that cognitive biases are not errors but rather a logical and practical way of dealing with the real world. The focus lies on how people think and what their reasoning process looks like. This decision process depends on many variables of the person; for instance, people lie, make errors and misremember, and more time results in more changes. Realism aims to explain abstract subjects that take place in the real world (Saunders et al, 2009).


The study can take two general approaches to the research: it can be based on the inductive or the deductive approach (Hyde, 2000). The deductive approach, also called the "top-down" approach, has several characteristics, such as trying to explain the causal relationship between variables and using controls to allow the testing of the hypothesis (Saunders et al, 2009). Reichenbach (1958) wrote that the deductive approach is built on logical proof, drawing conclusions from premises, and Zellner (2007) argued that much economic theory is based on a deductive approach. It is a theory-testing approach where the goal is to test whether a theory holds (Hyde, 2000), and Saunders et al (2009) argue that when using the deductive approach it is important to use a highly structured method in order to test and retest the hypothesis. The concepts must be measured quantitatively (operationalization), the problems need to be defined as simply as possible to gain a better understanding (reductionism), and the findings should be applicable to other settings and surroundings (generalization). The inductive approach, called the "bottom-up" approach, starts with specific observations. From the observations a hypothesis is created, and finally a theory or conclusion is developed (Saunders et al, 2009). As opposed to the deductive approach, this is a theory-building approach based on the observations (Hyde, 2000). A combination of the deductive and the inductive approach is the abductive approach. Instead of only connecting theory to the data or the data to the theory, the abductive approach jumps back and forth between the inductive and the deductive approach (Saunders et al, 2009).


The researcher can choose between several different strategies to reach the conclusion of the research (Saunders et al, 2012). Archival research is based on both recent and historical administrative records and documents as the main source of data. This strategy allows the researcher to focus on changes over time and on day-to-day activities. The implication of only using recorded data is the risk that the data may contain incorrect information, or that information may even be missing. An advantage of this strategy is that it extends theories with a time dimension, and it is well suited to explaining processes of change and evolution (Welch, 2000). According to Welch (2000), archival research is mostly applied within the business field and when studying economic history, rather than in fields such as marketing.


The data collection techniques and the analysis procedures in the research can be quantitative, qualitative or a mix of both. For example, a quantitative method is used when the data collection technique is a questionnaire and the data analysis procedure is statistical (Saunders et al, 2009). The quantitative method has a strong academic tradition and places its trust in numbers and statistics to represent opinions and concepts (Amaratunga, Baldry, Sarshar & Newton, 2002). The strengths of this methodology are its aptitude for comparison and replication and its independence of the researcher. The qualitative method is used when the data collection technique is interviews and the data analysis consists of interpretations (Saunders et al, 2009). It concentrates on observations and words to express the reality (Amaratunga et al, 2002). If only one of the techniques is used for both the data collection and the data analysis, it is called a mono method (Saunders et al, 2009). If the techniques are combined, it is called a multiple method or triangulation (Amaratunga et al, 2002). Within this category there is either the multi-method or the mixed-method. The multi-method is when the research is based on more than one form of data collection, for example both interviews and action research. This can make the conclusions more credible and reliable. The multi-method can be either quantitative or qualitative, but when the two are combined it becomes the mixed-method, where the research is based on both quantitative and qualitative methods, used in parallel or sequentially but never merged (Saunders et al, 2009). One of the techniques is often the predominant one. Within the mixed-methods there is also an area called mixed-model research, where quantitative data can be analyzed using qualitative methods and vice versa. A reason for using a mixed-method can be complementarity, where the quantitative findings are supported by the qualitative findings (Saunders et al, 2009).



Data for regression estimation can be structured in different ways, depending on the purpose of the study. The three major techniques are, according to Saunders et al (2009), time-series, cross-sectional and panel data. In time-series data the observation units are observed over a specific period of time. It can be used when an explanation of cause and effect is wanted, because it generates the ability to track developments over time. The data in a cross-sectional analysis are collected for many units in the same pre-set time frame, which generates the ability to compare different groups or units at a single point in time and estimate differences among them (Baltagi, 2013). Hence, it does not address cause-and-effect relationships to the same extent as time-series analysis, meaning it does not capture what happens before or after the time frame. A major benefit of cross-sectional data is that many different variables can be compared at the same time, which Kreander et al (2005) utilized with success. Panel data is a combination of time-series and cross-sectional analysis, since it refers to samples of the same cross-sectional units observed at multiple points in time, which increases the power of statistical tests (Baltagi, 2013).
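To make the three data structures concrete, a minimal pure-Python sketch (with made-up fund names and NAV values) can illustrate how the same observations are sliced:

```python
# Hypothetical daily NAV observations: a time series tracks one fund over time,
# a cross-section compares many funds at one date, and a panel combines both.
nav = {
    ("FundA", "2008-01-02"): 101.2,
    ("FundA", "2008-01-03"): 100.7,
    ("FundB", "2008-01-02"): 98.4,
    ("FundB", "2008-01-03"): 99.1,
}

# Time series: one cross-sectional unit, many points in time.
fund_a_series = {d: v for (f, d), v in nav.items() if f == "FundA"}

# Cross-section: many units at a single pre-set point in time.
jan_2 = {f: v for (f, d), v in nav.items() if d == "2008-01-02"}

# Panel: the full (unit, time) grid, which is what the dict above already is.
assert len(fund_a_series) == 2 and len(jan_2) == 2
```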


A classical portfolio-level analysis can be misleading when the sample funds are not geographically distributed in the same way across the countries in the sample. In that case a matched-pair analysis works better, as it compares every fund on an individual level, in other words matching each ethical fund with a corresponding conventional fund (Leite and Cortez, 2014). Several researchers have used the matched-pair approach to overcome benchmark problems and survivorship bias (Mallin et al., 1995; Gregory, Matako & Luther, 1997; Statman, 2000; Kreander et al, 2005). Mallin et al. (1995) found that previous research had a severe shortcoming in the selection of a comparison benchmark, since no appropriate ethical benchmark was available at the time. By matching ethical funds to their conventional counterparts, this problem was solved. When matching the funds it is better to match by age than by fund size, since "…fund size has little role to play in explaining unit trust performance" (Gregory et al, 1997, p. 724) and is thus not significant when pairing funds (Kreander et al, 2005; Renneboog et al, 2008b). By using age, a factor that has been proven useful by Renneboog et al. (2008b), survivorship bias disappears, since all funds in the sample are included in all investigated years. Leite and Cortez (2014) based their research sample on age, country of origin, investment universe and the Morningstar category (size dimension and value/growth dimension). According to Leite and Cortez (2014), the first three characteristics have been used in previous studies, but the fourth, the Morningstar category, is less used.
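As an illustration of the matching logic just described, a minimal Python sketch might look as follows. All fund names, countries and inception years are hypothetical; only the criteria (same country and category, closest age, size ignored) follow the text:

```python
# Sketch of matched-pair selection: for each ethical fund, pick the conventional
# fund from the same country and Morningstar category whose inception year is
# closest. Fund size is deliberately ignored. All data are made up.
ethical = [
    {"name": "EthicalEU", "country": "SE", "category": "Large Blend", "year": 1999},
]
conventional = [
    {"name": "ConvA", "country": "SE", "category": "Large Blend", "year": 1997},
    {"name": "ConvB", "country": "SE", "category": "Large Blend", "year": 2000},
    {"name": "ConvC", "country": "DE", "category": "Large Blend", "year": 1999},
]

def match_fund(eth, candidates):
    # Restrict to the same country and Morningstar category first...
    pool = [c for c in candidates
            if c["country"] == eth["country"] and c["category"] == eth["category"]]
    # ...then take the candidate with the closest inception year (age).
    return min(pool, key=lambda c: abs(c["year"] - eth["year"]))

pairs = [(e["name"], match_fund(e, conventional)["name"]) for e in ethical]
print(pairs)  # [('EthicalEU', 'ConvB')]
```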


Econometrics incorporates knowledge from the fields of economics, statistics and mathematics and is used to test economic theories, inform policy makers and predict the future (Pinto, 2011). The main purpose of econometrics is to estimate the relation between dependent and independent parameters by gathering observable empirical data, testing hypotheses regarding the relationships between the parameters, their values and signs, and concluding on the validity of the economic theory. However, econometrics is not only about the calculations; a deep understanding of the economic theory is needed in order to understand and interpret the outcome of the model and learn more about its economic significance. Pinto (2011) suggests six steps in order to answer a question from the economic reality with an econometric model.
1. Formulate the problem – The initial questions of what we want to know
2. Collect data and information – Primary and secondary data sources
3. Choose econometric model – Cross-sectional, longitudinal or panel data
4. Empirical analysis and diagnostic testing – Parameter estimation, goodness-of-fit (R2) and tests for non-normality, autocorrelation, heteroscedasticity and non-stationarity
5. Modifications to the model – Changes in the model
6. Answer the initial question
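As an illustration of step 4, parameter estimation and goodness-of-fit for a one-factor model can be sketched with closed-form OLS in pure Python. The data points are made up for illustration only:

```python
# Minimal sketch of parameter estimation and goodness-of-fit for a one-factor
# model r = a + b*x + e, using closed-form OLS. All numbers are illustrative.
x = [0.01, -0.02, 0.015, 0.005, -0.01]    # market excess returns (made up)
r = [0.012, -0.018, 0.02, 0.003, -0.011]  # fund excess returns (made up)

n = len(x)
mx, mr = sum(x) / n, sum(r) / n
b = sum((xi - mx) * (ri - mr) for xi, ri in zip(x, r)) / \
    sum((xi - mx) ** 2 for xi in x)       # slope (beta)
a = mr - b * mx                           # intercept (alpha)

fitted = [a + b * xi for xi in x]
ss_res = sum((ri - fi) ** 2 for ri, fi in zip(r, fitted))
ss_tot = sum((ri - mr) ** 2 for ri in r)
r2 = 1 - ss_res / ss_tot                  # goodness-of-fit (R2)
assert 0 <= r2 <= 1 and b > 0
```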


Based on the previously presented information, this thesis will adopt a positivistic philosophy, since the data, the interpretations and the conclusion will be absolute and no subjective meanings will be included. The research will be conducted in an independent and objective manner where only observable objects are used. Given its emphasis on generalization and reductionism, the positivistic philosophy is the most adequate alignment for the purpose of this thesis. However, the positivistic philosophy has been criticized: the positivistic paradigm of Friedman (1953) is argued to be "value-neutral", uninfluenced by any political view, yet all theories of financial economics and shareholder value maximization are based on an omnipotent market, which is not considered in a purely positivistic methodology (Frankfurter & McGoun, 1999). Another critique of the positivistic paradigm concerns the researcher's involvement in the research process, meaning that no science can follow a strictly positivistic inquiry because of the researcher's inability to detach from his or her own preconceptions (Caldwell, 2013; Sandelowski, 1993). Samuelsson (1963) criticized Friedman's positive economic theory as apriorism, the idea that some knowledge about the real world can be derived from general principles. It is almost impossible to conduct a purely positivistic inquiry without any biased results, but we will describe our approach in as much detail as possible to keep the results as unbiased as possible. For this we will follow the econometric method proposed by Pinto (2011).
As Crowther and Lancaster (2008) and Zellner (2007) argue, when applying a positivistic philosophy to a study, a deductive approach is often most appropriate. The deductive approach is also predominantly used when conducting quantitative research where the focus is on explaining a causal relationship between variables. Another characteristic of the deductive approach is that it allows for testing and retesting the hypothesis. We will only use a quantitative method, that is, a mono method, which has the benefit of allowing us to focus on the data collection, make it highly structured, and collect large and reliable samples. Because of this philosophy and approach we will use the strategy of archival research to ensure high-quality and unbiased data. Given the hypotheses stated in this thesis, we do not focus on the cause-and-effect relationship, but rather on the differences and relationships between the variables. Since our purpose is to investigate the difference between two time trends, the analysis will be divided into two steps: first a time-series analysis, performed with a multi-factor model, and then a Wald coefficient hypothesis test between the ethical and conventional estimates from the time-series analysis. This thesis is designed as a descriptive-explanatory study, where the purpose of the research is both to describe and to explain the differences and relationships of the variables.
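A Wald test between two independently estimated coefficients can be sketched as follows. This is a simplified illustration assuming the two estimates are independent, and all numbers are purely illustrative:

```python
import math

# Hedged sketch of a Wald coefficient test on two independently estimated
# slopes (e.g. ethical vs conventional): W = (b1 - b2)^2 / (se1^2 + se2^2),
# compared against a chi-square distribution with one degree of freedom.
def wald_two_coefficients(b1, se1, b2, se2):
    w = (b1 - b2) ** 2 / (se1 ** 2 + se2 ** 2)
    p_value = math.erfc(math.sqrt(w / 2.0))  # survival function of chi2(1)
    return w, p_value

# Illustrative estimates, not results from this thesis:
w, p = wald_two_coefficients(b1=0.95, se1=0.04, b2=1.05, se2=0.05)
print(round(w, 2), round(p, 3))
```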




This study applies multiple-source, area-based secondary data. This type of secondary data is compiled by a third party from other sources before we access it. We will base our sample on age, country of origin and Morningstar category. The size of the fund will not be considered, since previous studies have shown it not to be significant when comparing funds (e.g. Gregory et al., 1997; Kreander et al, 2005; Renneboog et al, 2008b), whereas age has been shown to be significant by Renneboog et al (2008b). The ethical funds used in the research will be defined by the pre-determined criteria in Table 3.


Information about each fund's inception date, country of origin and Morningstar category will be collected both from Morningstar's international webpage and from the national Morningstar webpages of the different countries. As we will be collecting the net asset value (NAV) of each fund as compiled by the software Thomson Reuters DataStream Professional, we will not compile the list ourselves, which is why we use area-based secondary data. The benefits of secondary data are that it is quite cost-effective, readily available and, in this case, reliable. The downside of secondary data collection is that the data might not all fit the research and might be missing some vital information. This was the case for us: our sample started with 90 ethical funds, but due to inconsistencies and errors in the funds' returns and data, several funds were removed from this study, and our final sample consisted of 33 ethical mutual funds and 33 matched-pair conventional mutual funds. Adjustments for general holidays, such as Christmas and New Year, have been made in the final data sample. All data will be denoted in euros and recalculated when needed based on current exchange rates.


All calculations in this thesis will be performed in EViews Version 8.1. The return for each fund is calculated as r_{j,t} = ln(P_{j,t} / P_{j,t-1}), where r_{j,t} is the return for fund j at time t, P_{j,t} is the price of fund j's share at time t and P_{j,t-1} is the price of the fund's share at the previous time t-1 (Kreander et al, 2005). As the risk-free rate the 1-month Euribor rate will be used, and as the market rate the MSCI Europe E will be used. In order to calculate the factors in the multi-factor model we will follow the method of Banegas et al (2013) and Leite and Cortez (2014) to create modified European factors. This is a simpler and more effective approach to obtaining reliable factors that are applicable for a study spanning many different countries. The SMB factor is calculated as the return difference between the MSCI All Country (AC) Europe Small Cap index and the MSCI AC Europe Large Cap index. The HML factor is calculated as the difference in total return between the MSCI AC Europe Value index and the MSCI AC Europe Growth index. The WML factor is based on the 24 Dow Jones STOXX 600 Super Sector indices that exist throughout the sample period. The factor is calculated as the return difference between the top 8 sectors and the bottom 8 sectors, where the returns are calculated daily as 11-month returns lagged one month. Eight of the 24 sectors are chosen to approximate the factor weights used by Carhart (1997), where the top and bottom groups each consisted of 30% of the sample; in our sample they become 33.3%. The local factor is calculated as the return difference between the local country market indices, weighted according to our sample, and the European market index, the MSCI AC Europe index. Averaging the factor lowers its explanatory power, but in order to run our regressions it needs to be estimated as one factor.
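A minimal sketch of the return and factor-difference calculations described above. The prices and index returns are made-up numbers; only the formulas follow the text:

```python
import math

def log_return(p_t, p_prev):
    # r_{j,t} = ln(P_{j,t} / P_{j,t-1})
    return math.log(p_t / p_prev)

# Hypothetical NAV series for one fund:
prices = [100.0, 101.5, 100.8]
returns = [log_return(prices[t], prices[t - 1]) for t in range(1, len(prices))]

# Factors as simple return differences between the paired indices
# (index return values below are made up for illustration):
small_cap, large_cap = 0.012, 0.007  # Small Cap vs Large Cap index returns
value, growth = 0.004, 0.009         # Value vs Growth index returns
smb = small_cap - large_cap          # size factor
hml = value - growth                 # value factor
assert abs(smb - 0.005) < 1e-9 and abs(hml + 0.005) < 1e-9
```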
For the variance modeling, a univariate GARCH model will be used at first. We will then extend the model to a univariate TGARCH and a univariate EGARCH, since these models are particularly suited to estimating shocks in financial data: they capture leverage effects and do not impose a non-negativity constraint, which is an important feature in stock return data. All three models will be used in order to compensate for their different weaknesses and strengths, but no comparison between the models will be evaluated. All data will be tested with the Ljung-Box Q statistics on the residuals, and the absence of serial correlation implies that there is no need to encompass a higher order of GARCH (Giannopoulos, 1995). GARCH effects are most visible when using high-frequency data, but as discussed in the theoretical framework, financial data often suffer from a leptokurtic distribution. In order to make reliable interpretations of our results we also conducted part of the research with monthly data, which provided us with a normal distribution but with less significant results. In APPENDIX A – MONTHLY DATA we present the results from the monthly data, and in APPENDIX E – ERROR DISTRIBUTION FOR DAILY DATA we present the distribution of the error terms for the daily data. Our empirical results will be based on the daily data, since the monthly data has, in line with Nelson's (1991) explanations, too few observations to exhibit any significant heteroscedasticity. A stepwise forward approach is used to fit the ARMA terms necessary to fulfill all statistical assumptions for the models used in this thesis.
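The conditional variance recursion at the heart of a GARCH(1,1) model can be sketched as follows. The parameter values and residuals are illustrative, not estimates from our data:

```python
# GARCH(1,1) conditional variance recursion:
#   sigma2_t = omega + alpha * e_{t-1}^2 + beta * sigma2_{t-1}
# (TGARCH/EGARCH extend this with asymmetry terms for leverage effects.)
def garch_variance(residuals, omega=0.00001, alpha=0.08, beta=0.90):
    # Start the recursion at the unconditional variance omega / (1 - alpha - beta).
    sigma2 = [omega / (1.0 - alpha - beta)]
    for e in residuals[:-1]:
        sigma2.append(omega + alpha * e ** 2 + beta * sigma2[-1])
    return sigma2

# Illustrative daily residuals:
eps = [0.01, -0.02, 0.005, 0.015, -0.01]
variances = garch_variance(eps)
assert len(variances) == len(eps) and all(v > 0 for v in variances)
```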


We follow Nofsinger and Varma (2014) and define the international crisis based on The National Bureau of Economic Research (NBER, 2015), which dates the Subprime Mortgage Crisis from December 2007 to June 2009, making the length of the crisis 18 months. In order to capture the effects of crisis times, three time periods are used, denoted Before Crisis (01-2005 to 01-2008), Crisis (01-2008 to 01-2011) and After Crisis (01-2011 to 01-2015). For a graphical interpretation of the crisis times, refer to APPENDIX D – FUNDS' ABSOLUTE RETURNS.
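The division of observations into the three sub-periods can be sketched as a simple date partition. The inclusive-start, exclusive-end boundary convention is our assumption for illustration:

```python
# Partition observations into the three sub-periods used in this thesis.
# Dates as zero-padded "YYYY-MM" strings compare correctly lexicographically.
def crisis_period(date):
    if date < "2008-01":
        return "Before Crisis"   # 01-2005 to 01-2008
    if date < "2011-01":
        return "Crisis"          # 01-2008 to 01-2011
    return "After Crisis"        # 01-2011 to 01-2015

assert crisis_period("2007-12") == "Before Crisis"
assert crisis_period("2009-06") == "Crisis"
assert crisis_period("2013-05") == "After Crisis"
```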

