This chapter describes the methodological approach of the study. First, the chosen research design and research model are justified, followed by a description of the sampling process. The chapter then presents the process of analysis and introduces the applied research instruments in more detail.
Creswell (2009) defines research designs as 'plans and the procedures for research that span the decisions from broad assumptions to detailed methods of data collection and analysis' (Creswell, 2009, p. 3). Building a research design starts with the general decision of whether to base it upon a qualitative or quantitative approach (Hoe & Hoare, 2012). Creswell (2009) extends this view by describing three types of research designs: qualitative research, quantitative research, and mixed methods research. While qualitative research helps in 'exploring and understanding the meaning individuals or groups ascribe to a social or human problem […], quantitative research is a means for testing objective theories by examining the relationship among variables' (Creswell, 2009, p. 4). He furthermore states that the two approaches should not be seen as opposites; rather, a 'study tends to be more qualitative than quantitative or vice versa' (p. 3). Mixed methods approaches, in turn, incorporate elements from both. Choosing an appropriate research design depends on the nature of the research problem or issue being addressed, but the choice is also informed by the researchers' experiences and the audience of the study (Creswell, 2009).
This study follows a qualitative approach. Its aim was to investigate both what the architecture of the cloud marketplace should look like and whether, and why, a demand for the marketplace exists in practice. Yin (2011) lists five features of qualitative research, including (1) 'Representing the views and perspectives of the people' (p. 7); (2) 'Covering the contextual conditions within which people live' (p. 7); and (3) 'Contributing insights into existing or emerging concepts that may help to explain human social behavior' (p. 7). Considering the research questions of this study, it becomes clear that these three features of qualitative research are needed to investigate the issues properly. In order to build the architecture for the marketplace, the different views and perspectives (1) of different practitioners were needed, as well as insights into existing and emerging concepts (3). In order to understand how cloud solutions are currently provided and consumed, the contextual conditions within which people live (2) had to be revealed. The latter also helps in justifying the demand for the cloud marketplace.
Furthermore, a mix of inductive and deductive approaches was used in this study. In contrast to an inductive approach, where data lead to the emergence of concepts, in a deductive approach 'the concepts […] lead to the definition of the relevant data that need to be collected' (Yin, 2011, p. 94). The initial theoretical framework of the cloud marketplace, created in the previous chapter on the basis of existing research and theories, represents the deductive part of this study. In this context, parts of existing frameworks and architectures concerning cloud computing and its relation to SOA were assembled into a new model, the initial theoretical framework of the cloud marketplace, as illustrated in Figure 2.10. This framework defined the relevant data that needed to be collected and narrowed the research to that specific model. The sources from which this model was assembled were the secondary data sources of this study. Best and Kahn (2006) state that secondary sources of data are usually of limited worth because they are passed from person to person, giving potential for errors to occur. As a consequence, in order to strengthen the validity of the created model, it was then refined, constituting the inductive part of the study. The basis for the refinements was the data collected through the applied research method, which represents the primary data source of this study. Primary sources of data are 'those items that are original to the problem under study' (Cohen, Manion, & Morrison, 2007, p. 161). Data in this case were the statements related to the three above-mentioned features of qualitative research that were drawn from the data collection. The result of this process was the final empirical framework of the cloud marketplace, which is presented at the end of this study. Figure 3.1 illustrates the applied research approach.
Regarding the qualitative research design of this study, Yin (2011) proposes four different data gathering methods: interviewing and conversing, observing, collecting, and feeling. While in conducting interviews the researcher usually has a verbal conversation with the interviewee, she or he takes a rather passive role when conducting an observation, only observing a phenomenon. Collecting refers to compiling objects related to the research topic, such as documents, artifacts, and archival records. Feeling, on the other hand, means 'covering a variety of traits within yourself that are potentially important in your research and that you should not ignore' (Yin, 2011, p. 150).
In this study, interviews were conducted in order to grasp the thoughts, perspectives, and views of the interviewees. The nature of the research questions necessitated the use of interviews as the data collection method, since the questions require feedback and opinions from external subjects. Regarding interviewing, Yin (2011) distinguishes between structured and qualitative interviews. When conducting structured interviews, the researcher uses a formal questionnaire listing every question to be asked and tries to adopt the same consistent behavior when interviewing every participant. Qualitative interviews, on the other hand, differ from structured interviews in several ways. Here, there is no formal questionnaire containing a full list of the questions to be asked; the researcher rather has a mental framework of questions serving as an informal guide for the interview, while the questions actually asked may differ between interviewees depending on the context and setting of the interview. In this study, both qualitative and structured interviews were conducted.
Four semi-structured interviews with open-ended questions were conducted in order to facilitate the emergence of constructive discussion and to gain further insights regarding the interviewees' application of cloud computing and their subjective opinion of the theoretical cloud marketplace framework at hand. According to Cohen et al. (2007), this open-ended approach allows the interviewee to go into depth if desired, tests the interviewee's knowledge, more easily clears up misunderstandings, encourages cooperation, and gives the researcher the opportunity to achieve a better assessment of the interviewee's beliefs. In addition, Patton (2001) states that this approach results in both an increased comprehensiveness of the data and a systematic data collection for each respondent. However, since the interview follows a certain guideline, there is a risk of omitting important and salient topics. Besides that, due to the interviewer's flexibility in formulating the questions, the results might differ substantially, reducing the comparability of responses (Patton, 2001). During the interviews, we therefore gave the interviewees the opportunity to elaborate on the topics they considered important while at the same time trying not to drift too far from the interview guideline.
While one of those four semi-structured interviews was conducted in person at the interviewee's office, the rest were conducted through Internet calls using the software 'Skype'. The reason for conducting three of the four semi-structured interviews via Skype was that all three interviewees were living abroad. According to James and Busher (2009), this represents one of the most common reasons why researchers conduct structured or semi-structured interviews over the Internet. However, besides the advantage of saving traveling time relative to in-person interviews, there is the risk that some of the rapport and richness of the interaction may get lost (Rowley, 2012). Nevertheless, according to a study conducted by Bertrand and Bourdeau (2010), Skype interviewing can be considered a valid research method.
In addition to the four verbal semi-structured interviews, two non-verbal structured interviews were conducted via email. The reason for using email interviewing was that the two respondents did not want to participate in a verbal interview but were willing to answer the questions via email. While structured interviews have several advantages, including increased comparability of responses, reduced interviewer effect and bias in the case of several interviewers, and facilitated organization and data analysis, they offer only little flexibility in relating the interview to particular individuals and constrain and limit the naturalness and relevance of questions and answers (Patton, 2001). Email interviews, in particular, have the advantage that they allow extended access to participants due to their asynchronous communication (Opdenakker, 2006). They therefore allow researchers to interview respondents who are not willing, or do not have the time, to conduct a synchronous interview, as was the case in this study. Another advantage is that no disturbing background noises are recorded and that interviewees can answer the questions at their own convenience (Opdenakker, 2006). Besides that, also due to their asynchronous nature, respondents can take their time answering the questions, which might result in more meaningful responses compared to a synchronous interview, which is usually restricted to a short period (Opdenakker, 2006). James and Busher (2006) stress that the opportunity for respondents to think about their responses and the ability to draft and redraft what they want to write constitutes much of the value of email interviews. Since the interviewees in this study were confronted with the initial theoretical framework of the cloud marketplace, they were able to take their time to understand the idea and to form their own opinions about the model.
On the other hand, email interviews are criticized for their perceived lack of depth and their inability to mimic the spontaneous probing of the face-to-face interview (Reid, Petocz, & Gordon, 2008). Nevertheless, in their study, Reid et al. (2008) conclude that this method is beneficial in international settings where participants may have different levels of familiarity with the language of the researchers, as was the case in this study.
According to Miles and Huberman (1994), in contrast to quantitative research, where researchers aim for larger numbers of cases to seek statistical significance, in qualitative research the sample is rather small, nested in its context, and studied in depth. Because of that, qualitative samples tend to be purposive rather than random. The sampling process usually consists of two stages. First, cases need to be defined which will probably include examples of what the researchers want to study. Second, a frame needs to be created which helps to uncover, confirm, or qualify the basic processes or constructs that undergird the study (Miles & Huberman, 1994). In this study, selective and purposeful sampling (Coyne, 1997) was conducted. In this regard, the five actors (cloud auditors, cloud consumers, cloud brokers, cloud providers, and cloud partners) of the initial framework of the cloud marketplace constituted the cases which qualified as a sample for this study. Considering the available time frame for this research, the desired sample size amounted to ten cases. In order to compare and contrast different perspectives from the same actor category, the goal was to conduct two interviews with two different actors of each category. After having identified potential actors to interview (mainly with the help of Google, private recommendations, and personal experience), we contacted them. Depending on the available contact information and location, we either called, sent an email, or visited their offices in person. In order to introduce the potential interviewees to the topic of this study, we prepared a brief information document (including our contact information) illustrating the main research idea, which we either attached to the emails or handed over in person (see Appendix A).
Eventually, we conducted six interviews, with three of the interviewees representing cloud partners, while the remaining three comprised one cloud auditor, one cloud consumer/provider, and one consumer. Table 3.1 contains more details about the conducted interviews, including the name of the interviewee, the company he or she was working for, the location of the company, a company and interviewee description, the type of the interview and the language in which it was conducted, and the length of the interview.
Appendix B contains the interview guidelines that were used for the semi-structured interviews. However, they only served as an orientation, a boundary for the conversation; the goal was to let the interviewees color it by vocalizing their respective priorities (Yin, 2011). It is very important to think carefully about the questions to ask and about the way they are phrased. Holstein and Gubrium (1995) state that an interview leads to the creation of knowledge by the interviewee along with the interviewer, rather than an extraction of experiential information, which might be the researcher's initial intention. Because of that, when creating the interview guidelines, a lot of attention was paid to the phrasing of the questions as well as to their order, so that the answers would not be biased by the way or order in which the questions were asked. For instance, the introduction of the initial theoretical cloud marketplace framework was placed at the end of the interview guideline in order not to direct the rest of the conversation towards or around the model and the associated issues. The questions were designed so that interviewees would interpret and answer the same question in the same way in every situation. Furthermore, in order to avoid short answers like 'Yes' or 'No', only open-ended questions were used, 'forcing' the interviewees not to limit their answers to a single word.
In this study, two different interview guidelines were used: one for EY, representing a cloud auditor, and one for acmeo and Olexandr, representing cloud partners. Depending on their position in the cloud ecosystem, the way these actors deal with cloud computing obviously differs. Therefore, the questions had to be adapted for each actor to match their position and relation to cloud computing, while the main structure of the interview guide remained the same for everyone. The interview guidelines consisted of four parts. While the first part was intended to get familiar with the interviewee and the company she or he works for, the second part went deeper into cloud computing by asking how the interviewee and her or his company deal with cloud computing and how they interact with other actors in the cloud computing market. The third part of the interview guide brought SOA into play and focused on the very topic of this study itself by introducing the model of the cloud marketplace and asking for the interviewees' opinion and feedback. Finally, the fourth and last part gave the interviewees the possibility to add further comments and to ask questions about the study.
Before we introduced the initial theoretical cloud marketplace framework (part 3 of the interview guideline), the interviewees did not know what the model looked like. They had only been provided with a brief introduction document (Appendix A), which they received when we contacted them for the first time. For the in-person interview, we brought a printed version of the framework; for the interviews via Skype, we shared our computer screen with the interviewee and presented the model this way.
Appendix B contains the questionnaire used for the email interviews. It contained seven questions. While questions one and two aimed at the companies' usage of cloud computing and SOA, questions three to six focused on the framework of the cloud marketplace. Question seven was intended to give the interviewees the opportunity to state additional comments. Unlike the semi-structured interviews, where we only introduced the cloud marketplace framework in the middle of the interview, for the email interviews we attached the model of the cloud marketplace to the email containing the questionnaire.
After asking for the interviewees' permission, all semi-structured interviews were recorded. For the single interview conducted in person, a simple audio recorder was used; for the three interviews conducted via Skype, the software 'Pamela for Skype' was used. Following this, all interviews were transcribed and translated into English in order to facilitate the analysis. The email interviews did not require transcription since we received the responses in written form; as they were conducted in Ukrainian and Russian, respectively, they only required translation into English. The transcribed interviews and the responses to the email interviews can be found in Appendix C.
The analysis of all interviews, whether semi-structured or structured email interviews, was conducted by coding using the software NVivo. Coding can be defined as the translation of question responses and interviewee information into specific categories for the purpose of analysis (Kerlinger, 1970). In this context, the responses were divided into six categories. This was done by both authors of this study in order to avoid biased subjective interpretations. The first category related to general problems the interviewees experienced with cloud computing, while the second consisted of the answers concerning general benefits of cloud computing. The third category concerned experienced problems that could be solved with the help of the cloud marketplace. The fourth category contained the problems which the interviewees related to the cloud marketplace, while the fifth category contained the benefits which the interviewees related to the cloud marketplace. Finally, the sixth category concerned features which the marketplace should or could contain. The numerical order of the categories was of no relevance but was chosen arbitrarily.
With regard to the first research question, aiming at the framework of the cloud marketplace, especially the data of the sixth category, concerning the features that should be included in the marketplace, were considered. However, the data from categories one (general challenges of cloud computing) and four (problems associated with the marketplace) were also analyzed in order to come up with solutions that could be implemented in the marketplace to solve those issues.
Regarding the second research question, aiming at determining whether or not there exists a demand among the actors, categories four and five, the problems and benefits associated with the marketplace, were of particular interest.
The analysis was conducted according to the interviewed actors of the marketplace. In this context, the statements of each interviewee were contrasted with those of the other interviewees within the same actor category. The analysis follows the approach of a SWOT analysis (Strengths, Weaknesses, Opportunities, and Threats) in order to provide a view from four different perspectives.
Validity and Reliability
According to Yin (2011), a 'valid study is one that has properly collected and interpreted its data, so that the conclusions accurately reflect and represent the real world […] that was studied' (Yin, 2011), and the best way of achieving satisfactory validity is to avoid as much bias as possible (Cohen et al., 2007). Cohen et al. list several possible sources of bias. These include the attitudes, opinions, and expectations of the interviewer; misperceptions on the part of the interviewer of what the interviewee is saying; and misunderstandings on the part of the interviewee of what is asked. Oppenheim (1992) further adds alterations to the sequence of questions; inconsistent coding of responses; and selective or interpreted recording of data/transcripts. As already mentioned above, several approaches were applied in this study to minimize bias. For instance, the initial model of the cloud marketplace was an assembly of elements of many different existing concepts created by previous research. By creating a model based on existing frameworks and architectures, the resulting initial model leaves only little room for bias resulting from the researchers' individual subjective opinions and views. Furthermore, when creating the interview guides and questionnaires, much attention was paid to the phrasing, comprehensibility, and sequence of the questions in order to avoid biased responses from the interviewees. In addition, in those parts of the study that required interpretations based on a person's own perspective, peer review was applied. For instance, the analysis of interviews presents an opportunity for bias; here, both researchers conducted the coding to guarantee an objective result. Finally, instead of refining the model after each interview and presenting the refined version to subsequent interviewees, only the initial version of the cloud marketplace framework was presented to the interviewees.
By doing this, we avoided the risk of different results depending on the order in which the interviews were conducted. This approach also ensured that all interviewees provided feedback on the same model, which facilitated the comparison of results. A study is reliable, in turn, when 'the process of the study is consistent, reasonably stable over time and across researchers and methods' (Miles & Huberman, 1994, p. 278). In order to ensure the reliability of this study, we formulated clear research questions and designed the study according to them. In addition, we clearly described the role of the researchers and the process of the research in order to illustrate our research approach in a transparent way. We also conducted coding checks to guarantee a common agreement and understanding among the researchers. Furthermore, we clearly demonstrated the results of the empirical research and their connection to the theory.
The Cloud Marketplace: A Capability-Based Framework for Cloud Ecosystem Governance