
Methodology

Research Approach: Inductive vs. Deductive

According to Jacobsen (2002), there are two different research approaches to data collection. The first is the deductive approach, which can be described as moving from theory to empiricism: assumptions are determined beforehand, and empirical data are then collected to see whether the results are consistent with those assumptions. The other approach, the inductive one, goes the other way around, from empiricism towards theory. That is, empirical data are collected with barely any assumptions, and theories are then formed based on the results (Jacobsen, 2002).
Jacobsen (2002) further states that, while the deductive approach is criticized for limiting relevant information, the inductive approach is more open to new information.
Since this thesis aims to investigate the accuracy of five common valuation methods, theories and models are used to consolidate the empirical data. Therefore, a deductive viewpoint is applied. However, no predetermined hypotheses are formulated and the theories themselves are not to be tested; rather, the research is conducted with an “open mind” and without stated assumptions. It can therefore be argued that some inductive elements are included, similar to the combination of deductive and inductive approaches that Jacobsen (2002) describes.

Research Type: Descriptive, Explanatory, and Exploratory

Anderson (2004) identifies three different research types. Descriptive research tries to profile situations or events and focuses on the questions what, when, where, and who. The quantitative and qualitative data used in descriptive research are then used to draw relevant conclusions. Explanatory research aims to explain a situation or problem, focusing on the why and how of a relationship between different variables. The last type, exploratory research, is a qualitative approach that tries to obtain new insights and find out what is happening.
In this paper, the authors work with a mix of descriptive and exploratory research. The major part uses descriptive research, which includes analyzing quantitative data and performing statistical tests, from which conclusions are drawn. However, the authors also apply exploratory research in the sense that they gain new insights about the role of valuation methods versus financial analysts’ target prices.

Data Collection: Quantitative Primary and Secondary Data

According to Jacobsen (2002), the qualitative method deals with words, while the quantitative method deals with numbers. The quantitative approach is of interest for this research since it provides information in the form of numbers (in this case, numerical fundamentals).
Furthermore, Jacobsen (2002) argues that there are two types of data: primary and secondary. Primary data means that the researcher uses primary information sources, where the data collection is tailored to a specific research area. Secondary data, on the other hand, draws on existing information that is adapted to the topic.
The data collection for the empirical study is based on quantitative data from both primary and secondary sources. The data consist of financial reports, analysts’ target prices, and closing prices for stocks, all of which provide information in the form of numbers and are used in statistical methods. The financial reports are gathered from primary sources, namely each company’s website archives. The analysts’ target prices are, however, gathered from a secondary source, Avanza Bank’s database. Furthermore, one could argue that the closing prices are secondary data, since they are collected through the program Avanza Online Trader in order to reach NASDAQ OMXS’ database. However, the authors argue that the closing prices are consolidated and collected for the purpose of this thesis, and that Avanza Online Trader is merely a “door-opener” to NASDAQ OMXS. Therefore, the authors classify these data as primary.

Choice of Valuation Models

The choice of valuation models in this thesis is based on discussions with professional analysts from Danske Bank, Nordea, Nordnet, and KPMG. From those conversations, we received recommendations regarding commonly used valuation models and, based on these, selected a number of models to investigate further.

Sample Choice: Choice of Stocks

This research covers twelve stocks in total, all listed on either Large Cap or Mid Cap at NASDAQ OMX Stockholm. The stocks are divided into four categories based on the industry the companies operate within: Telecom, Retail, Construction (and Building), and Oil. From now on, the stock names will be used, i.e., the abbreviations under “Stock name” in the table below. Table 1 shows the chosen stocks:
These specific companies have been chosen because their stocks are under more surveillance than smaller firms on, e.g., Small Cap. More analysts follow these companies with target prices and recommendations on a regular basis, thereby increasing transparency.

Test Period

The test period for this paper covers 2008–2011. The time frame is divided on a quarterly basis, meaning that each stock is evaluated and analyzed twelve times for each valuation model, except for the Gordon Growth model and Free Cash Flow to Equity, which are investigated on a yearly basis.

Calculations

In order to test whether the five models provide accurate estimations, all models must be measured in value per share. Both the P/E ratio and EV/EBITDA are multiples used for comparisons and do not, by themselves, provide values in the form of stock prices; they need to be converted. Therefore, the average multiple is calculated for each industry and used as a benchmark. Both models are then reversed in order to estimate the market value of the firm, with the multiple given from the start.
The geometric average growth is calculated according to the formula in Section 3.2 Growth. For practical results, see Appendix I. For the DCF approach, the two-stage FCFE model is calculated according to Formula 3 and Formula 4. The industry averages for P/E and EV/EBITDA multiples are calculated according to Appendix B(6).
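The reversed-multiple conversion and the geometric average growth rate described above can be sketched as follows. This is an illustrative sketch, not the thesis’ own code; all numbers and function names are made up:

```python
def geometric_average_growth(start_value, end_value, periods):
    """Geometric average growth rate: g = (end/start)^(1/periods) - 1."""
    return (end_value / start_value) ** (1.0 / periods) - 1.0

def price_from_pe(industry_avg_pe, earnings_per_share):
    """Reverse the P/E multiple: estimated price = industry average P/E * EPS."""
    return industry_avg_pe * earnings_per_share

def equity_value_from_ev_ebitda(industry_avg_multiple, ebitda, net_debt, shares):
    """Reverse EV/EBITDA: estimate EV from the industry average multiple,
    subtract net debt to get equity value, then divide by shares outstanding."""
    enterprise_value = industry_avg_multiple * ebitda
    return (enterprise_value - net_debt) / shares

# Illustrative numbers only:
g = geometric_average_growth(10.0, 13.3, 3)          # ~0.10 (about 10% per year)
pe_price = price_from_pe(12.0, 8.5)                  # 102.0 SEK per share
ev_price = equity_value_from_ev_ebitda(8.0, 500.0, 1200.0, 100.0)  # 28.0 SEK per share
```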
The paper uses the expressions models and multiples (= ratios) when discussing the results for P/E and EV/EBITDA. “Model” refers to the calculated target prices (i.e., calculated stock prices in SEK), while “multiple” refers to the fundamental numbers calculated in the first stage. All calculations are adjusted for dividends and splits.

Interpretation and Data Analysis

The method used for this research is divided into two parts: non-statistical and statistical. The non-statistical part is what the authors call a table analysis, which tests the empirical findings against two different intervals. This analysis model is customized for the investigation this research aims at. In addition, hit ratios are used as part of the non-statistical method. The statistical analysis method is, on the other hand, a more traditional one, using SPSS (Statistical Package for the Social Sciences) as a tool. The aim of using several methods is to provide a more complete picture, cover the whole topic, and thereby answer the previously stated research questions.

Non-Statistical Method

The non-statistical analysis is performed using a customized model with two intervals, 10% and 15% respectively. This model tests whether the empirical findings are “in line” with the financial analysts’ target prices. The intervals are based on average target prices because the authors want to determine whether any model can provide accurate estimations relative to the analysts’ target prices. When any of our calculations falls within the interval, it is considered a “hit” (see Section 4.10 Hit Ratios and Total Number of Hits). The number of hits is summarized in a table to create an overview of the final result.
Many of the models require a number of assumptions, and the more variables a model has, the more the final result can differ. The 10% and 15% intervals were therefore chosen because it is more or less impossible to end up at exactly the same value even when the initial approach is the same. Moreover, two intervals are used (10% being the main interval) to see whether there are any significant differences in the result when the interval is increased by 5 percentage points.
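The interval test can be expressed as a simple rule: a calculated price counts as a hit if it falls within ±10% (or ±15%) of the analysts’ average target price. A minimal sketch, with illustrative numbers:

```python
def is_hit(model_price, avg_target_price, interval=0.10):
    """True if the model's calculated price lies within +/- `interval`
    of the analysts' average target price."""
    lower = avg_target_price * (1.0 - interval)
    upper = avg_target_price * (1.0 + interval)
    return lower <= model_price <= upper

# Against an average target price of 100 SEK:
is_hit(95.0, 100.0, 0.10)   # True: within the 10% interval [90, 110]
is_hit(86.0, 100.0, 0.10)   # False at 10% ...
is_hit(86.0, 100.0, 0.15)   # ... but True at the wider 15% interval [85, 115]
```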
At the same time, it is important to note that the analysts’ underlying calculations behind their target prices were not available to the authors. The results will therefore be treated cautiously.


Hit Ratios and Total Number of Hits

In order to provide a clearer picture of the analysis of the empirical findings, the second part of the analysis works with hit ratios and total number of hits. The hit ratio is the percentage of hits for the whole industry in relation to the maximum possible number of hits. This describes to what degree the fundamental valuation methods generate accurate results relative to the analysts’ target prices. The total number of hits is simply the number of hits that each valuation method generates for each company. Just as described in Section 4.9, both the hit ratios and the total number of hits are presented separately for the 10% and 15% intervals.
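The hit ratio described above is a straightforward proportion; a small sketch with made-up numbers:

```python
def hit_ratio(hits, stocks_in_industry, evaluations_per_stock):
    """Hits for an industry as a fraction of the maximum possible hits
    (number of stocks times the number of evaluations per stock)."""
    max_hits = stocks_in_industry * evaluations_per_stock
    return hits / max_hits

# Example: an industry of 3 stocks, each evaluated 12 times,
# gives 36 possible hits; 9 actual hits is a hit ratio of 25%.
ratio = hit_ratio(9, 3, 12)  # 0.25
```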

Statistical Method

For the statistical part, the statistics program SPSS is used to perform multiple regressions, which determine whether a valuation method is accepted as appropriate or not.
The ANOVA table indicates whether there exists a relationship between the chosen variables and the analysts’ average target prices. The chosen alpha level is 0.10 for all statistical tests. If the significance value (p-value) is below 0.10, the null hypothesis is rejected, which provides statistical evidence of a relationship between the chosen variables and the analysts’ average target prices. Once we conclude that a relationship exists, we conduct separate tests to determine which of the parameters differ from zero.
From the coefficient table, each parameter’s significance value can be found and tested against the alpha level of 0.10. If the significance value is less than the alpha level (i.e., <0.10), the null hypothesis is rejected. For a parameter that rejects the null hypothesis, there is statistical evidence of a relationship between that parameter and the financial analysts’ average target prices.
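The two-step decision rule above (first the overall ANOVA F-test, then per-parameter tests) can be sketched as follows. The p-values here are invented for illustration; in the thesis they come from SPSS output:

```python
ALPHA = 0.10  # significance level used for all statistical tests

def reject_null(p_value, alpha=ALPHA):
    """Reject the null hypothesis (no relationship) when p < alpha."""
    return p_value < alpha

# Step 1: overall ANOVA F-test for the regression model
anova_p = 0.03                     # illustrative p-value from the ANOVA table

# Step 2: per-parameter tests from the coefficient table
param_p = {"intercept": 0.40, "model_price": 0.07}  # illustrative p-values

significant = {}
if reject_null(anova_p):
    # A relationship exists; test each parameter separately against alpha
    significant = {name: reject_null(p) for name, p in param_p.items()}
    # -> {"intercept": False, "model_price": True}
```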
 

Hypotheses

The hypotheses test whether there exists a linear relationship between the selected valuation methods and the financial analysts’ average target prices. The hypotheses are tested at a significance level of 0.10 (i.e., a 90% confidence level).

Empirical Assumptions

The foundation of this research is based on the valuation models presented in the frame of reference. However, in order to produce all empirical results, a number of assumptions have been required, for four major reasons:
Uncertainty about the right model – we do not know which valuation models the analysts have used in their valuations
Different versions of the same models – even if we have used the same valuation models as the analysts, there are different versions of how the models can be applied
Average target prices – averages do not provide the whole picture of an industry, since companies within the same industry can differ heavily, so averages can give misleading guidance
Forecasted versus trailing – in this thesis, the majority of calculations use trailing numbers rather than forecasted ones
The following assumptions and adjustments have been made:
Stable growth rate – the stable growth rate used in the calculations and analyses is the Swedish economic growth rate, estimated by Riksbanken (2011) to be 2.50%.
Risk-free rate – the annual average risk-free rate for each year has been obtained from Riksbanken.
Cost of equity – cost of equity = risk-free rate + company beta × risk premium.
Risk premium – Pinto, Henry, Robinson, and Stowe (2010) measured the Swedish risk premium at 5.8%, based on the historical equity risk premium 1900–2007. The risk premium is assumed to be the same in both the high and the stable growth period.
Return on equity (ROE) – the ROE is set to 10% in stable growth. According to Damodaran (2002), ROE should be higher than the cost of capital but not too high, normally lower than the industry average. This ROE is used in the FCFE calculations.
Beta – the beta value for each firm has been retrieved from Avanza Bank’s database. However, when a firm is assumed to move into the stable growth period, its beta is assumed to move towards 1; therefore, a beta value of 1 has been used in those calculations.
USD exchange rate – a few of the analyzed companies publish their financial reports in USD. To convert to SEK, the exchange rates used are those from the same dates as the financial reports were published. The historical exchange rates were retrieved from the Swedish Riksbanken.
Industry averages – industry averages are used for P/E and EV/EBITDA to provide a benchmark for each industry.
Geometric average growth rate – the growth rate is not adjusted for growth caused by acquisitions, i.e., it is not restricted to organic growth.
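The cost-of-equity formula above is the CAPM, and together with the stable growth rate it feeds the Gordon Growth model. A minimal sketch: the 5.8% risk premium, stable-growth beta of 1, and 2.5% growth come from the assumptions above, while the risk-free rate and dividend are invented for illustration:

```python
def cost_of_equity(risk_free_rate, beta, risk_premium):
    """CAPM: r_e = risk-free rate + beta * risk premium."""
    return risk_free_rate + beta * risk_premium

def gordon_growth_price(next_year_dividend, required_return, stable_growth):
    """Gordon Growth model: P0 = D1 / (r_e - g)."""
    return next_year_dividend / (required_return - stable_growth)

# Illustrative: 3% risk-free rate, stable-growth beta of 1.0,
# 5.8% risk premium (Pinto et al., 2010), 2.5% growth (Riksbanken)
r_e = cost_of_equity(0.03, 1.0, 0.058)        # 0.088
price = gordon_growth_price(5.0, r_e, 0.025)  # 5.0 / 0.063 ≈ 79.37 SEK
```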

Reliability

Hussey and Hussey (1997) argue that reliability is a measure of the trustworthiness of a study and its conclusions. High reliability means that if someone were to repeat the study, the result should be the same. This research is conducted using established valuation models, such as Gordon Growth, DCF, multiples, and NAV. The approaches are straightforward but can be interpreted in different ways. Moreover, some models require practitioners to make a number of assumptions, which to some extent can affect the result; small changes can have large impacts on the estimations. The study can therefore be considered to have high reliability, even though the final result can differ.
Furthermore, there are four different measuring scales: nominal, ordinal, interval, and ratio (Lundahl & Skärvad, 1996). According to Arbnor and Bjerke (2008), the scales differ in sensitivity, precision, and reliability. The nominal scale gives the least precise results, while the ratio scale gives the most accurate. It is possible to shift the whole scale (scale transformation) without making it less useful. This research is therefore based on the interval scale, which gives high precision in the measurement and high reliability.

Table of Contents
Disposition 
1 Introduction
1.1 Background
1.2 Problem Discussion
1.3 Purpose and Research Questions
1.4 Delimitation
1.5 Literature Review
2 Previous Research
3 Frame of Reference
3.1 Market Efficiency
3.2 Growth
3.3 Dividend Discount Models
3.4 Discounted Cash Flow Model
3.5 Valuation Multiples
3.6 Net Asset Valuation
4 Methodology
4.1 Research Approach: Inductive vs. Deductive
4.2 Research Type: Descriptive, Explanatory, and Exploratory
4.3 Data Collection: Quantitative Primary and Secondary Data
4.4 Choice of Valuation Models
4.5 Sample Choice: Choice of Stocks
4.6 Test Period
4.7 Calculations
4.8 Interpretation and Data Analysis
4.9 Non-Statistical Method
4.10 Hit Ratios and Total Number of Hits
4.11 Statistical Method
4.12 Hypotheses
4.13 Empirical Assumptions
4.14 Reliability
4.15 Validity
4.16 Critiques of Method
5 Empirical Tables
6 Empirical Presentation and Analysis
6.1 Empirical Presentation
6.2 Firm-specific Analysis
6.3 Industrial Analysis
6.4 Final Analysis
7 Conclusion 
8 Discussion and Recommendations 
List of references
Fundamental Stock Analysis: A study of fundamental analysis for practical use at the Swedish Stock Exchange (Bachelor’s thesis)
