Research Methodology

Research methodology should normally be broken down sequentially into at least the following sections:

  • Research Philosophy
  • Research Approach
  • Research Design
  • Research Strategy
  • Reliability, Validity and Generalisability
  • Data Analysis
  • Ethical Considerations

 

Research Philosophy


Set out your research paradigm, depending on the philosophy that underpins your research. The philosophical approach provides a framework for the study within which the research questions can be answered. According to Saunders et al. (2012), research philosophy is about the development of knowledge in a particular field of study. As per the research onion by Saunders et al. (2012), there are four research philosophies, namely positivism, realism, interpretivism and pragmatism, which are elaborated below.

Each philosophy is suitable for a different sort of study, and each involves different assumptions about the world (ontology), how we know that world (epistemology) and the nature of knowledge. The following subsections sum up key details about each philosophy, and should help you decide which is most useful for your area of study.

Positivism

The positivist approach is based mainly on facts rather than impressions, reflecting the notion of an observable social reality. The research is conducted in a value-free way. Positivism also involves formulating a research strategy for data collection and using existing theories to develop hypotheses. It has been firmly claimed that it is entirely possible to adopt the key characteristics of positivism in research (Saunders et al., 2012).

If your research reflects the philosophy of positivism, then you will probably adopt the philosophical stance of the natural scientist. You will prefer collecting data about an observable reality, and search for regularities and causal relationships in your data to create law-like generalisations like those produced by scientists (Gill and Johnson, 2010).

Realism

This philosophy relates mainly to scientific enquiry. It concerns the debate on whether objects exist independently of our knowledge of their existence. The essence is that reality exists as truth, independent of the human mind. Realism is further divided into direct and critical realism. The former holds that what you see is what you get: our senses portray the world accurately. The latter argues that what we experience are sensations, images of the things in the real world, rather than the things themselves.

Realism is a branch of epistemology which, like positivism, assumes a scientific approach to the development of knowledge. This assumption underpins the collection of data and the understanding of those data. The meaning becomes clearer when the two forms of realism are contrasted (Saunders et al., 2012).

Interpretivism

Interpretivism is about understanding the differences between humans in their role as social actors. Emphasis is placed on conducting research among people, as opposed to objects. Interpretivism can be further explained through phenomenology and symbolic interactionism. The former refers to the way humans make sense of the world around them, while the latter describes the continual process of interpreting the social world around us.

Crucial to the interpretivist philosophy is that the researcher has to adopt an empathetic stance. The challenge is to enter the social world of research subjects and understand their world from their point of view (Saunders et al., 2012).

Pragmatism

Pragmatism holds that adopting a single philosophical position is somewhat unrealistic in practice. This approach avoids debates on reality or truth, focusing instead on what the researcher perceives to be of interest or value, based on a personal value system.

For pragmatists, the importance of an idea lies in its practical consequences. They recognise that there are many different ways of interpreting the world and undertaking research, that no single point of view can ever give the entire picture, and that there may be multiple realities (Saunders et al., 2012).

 

Research Approach

You will need to explain the context of your research and its limitations, as well as answer the questions How, Why, What, Where and When? It is in this section that you will explain and justify your decision regarding the use of qualitative, quantitative or mixed methods. Remember that each method is associated with a different approach to gathering data.

As an example, you first need to decide whether you are going to work along broadly positivist, scientific lines, starting with one or more defined hypotheses and testing them against reality. If so, you are likely to be collecting numerical data in reasonably large quantities (I’d say a sample size of 100 or more) and running statistical tests on this data. In other words, you'll be using a quantitative approach (related to collecting and manipulating data).

On the other hand, you might be more interested in exploring broad areas, probably to do with people's experiences of, perceptions of or emotional reactions to a subject, and looking in detail at these responses in all their richness. By looking at broad areas of interest, you are aiming to generate theories about the area you are investigating. If this is the case, you will be adopting a qualitative approach (related to the analysis of textual responses in detail).

Lastly, you might want to use a mixture of both of the abovementioned methods, i.e., mixed methods research, which is becoming increasingly popular. It's particularly useful when you want to reflect different perspectives on a subject, or put quantitative information into a robust real-world context.

 

Research Design

Research design provides an overall guidance for the collection and analysis of data. It is a blueprint that enables researchers to find answers to the research questions in a study. Research can be classified as descriptive, exploratory or explanatory in nature (Saunders et al., 2012).

Descriptive research

According to Saunders et al. (2012), a descriptive approach is conducted in an attempt to portray an accurate profile of a situation. It may be an extension of a piece of exploratory and/or explanatory research. The approach is useful when the research focuses on a detailed description of phenomena, or when it is expected to yield predictions about specific findings.

Exploratory research

Exploratory research is undertaken to explore an area where little is known, or to investigate the feasibility of undertaking a particular research study. This type of research is particularly useful when the nature of the problem is unclear. As per Saunders et al. (2012), it is characterised as flexible and adaptable to change as new information emerges or new hypotheses appear. An exploratory design is therefore most suitable for new studies, where it contributes significantly to understanding the main subject.

Explanatory (causal) research

Explanatory research, also known as causal research, is conducted to answer how and why questions. According to Saunders et al. (2012), the emphasis is on studying a situation or a problem in order to explain the relationships among variables. In general, this design uses quantitative methods to describe cause-and-effect relationships for a particular incident. However, an explanatory design also allows the use of a qualitative approach to answer how and why questions, depending on the nature of the relationship between cause and effect.

 

Research Strategy

A clear research strategy is essential for any study, which can be categorised as positivist or phenomenological. Research strategies can be used for exploratory, descriptive and explanatory research. Among the different types of research strategies, the most commonly used are experiment, survey, case study, action research, grounded theory, ethnography and archival research (Saunders et al., 2012).

In this section, you will therefore have to outline how you collected your data and explain your choice for using the methods you did, i.e., survey, interview, focus group discussion and so on.

The following important aspects must be covered:

  • Identification and definition of your target population (from a reliable and referenced source)
  • Sampling strategy, including your sampling technique(s), justification of your sample size, choice of demographic variables, and method of data collection
  • Choice of questions in terms of their contribution towards answering your research questions and relevance to how they helped test the research hypotheses that formed the basis of your research
  • Pilot testing (if you have designed a survey questionnaire) in order to confirm the face and content validity, as well as the internal consistency (reliability) of your measuring instrument

 

Note
 

It is recommended that you write the above at the start of your research, so that it can be changed if your methods are not producing the results you need. However, since dissertations are usually written in hindsight, you will have to be honest about the flaws and limitations of your research design. When writing or planning this section, it's good practice to refer back to your research questions, aims and objectives, and ask yourself whether what you are planning to do is the best way to answer those questions and achieve those objectives. It's always better to do this at an early stage, rather than to look at the data you collected and find that it doesn't throw any light on the topic you wanted to investigate.

 

Reliability, Validity and Generalisability

Reliability

According to Wiener et al. (2017), the reliability of a measure is the degree to which a measurement technique can be depended upon to secure consistent results upon repeated application. Reliability also covers the internal consistency of a measuring instrument, as measured by the degree of intercorrelation among a set of items. Where a survey questionnaire contains statements measured on a Likert scale, the Cronbach's alpha coefficient is the most appropriate measure of internal consistency (Laerd Statistics, 2018).

Although there is no distinct cut-off point for a reliability coefficient, a Cronbach's alpha value of at least 0.7 is generally accepted as evidence of reliability (Abraham and Barker, 2014). In some cases, values as low as 0.6 may be accepted (Malhotra, 2019). At the upper end, Tavakol and Dennick (2011) argued that a coefficient exceeding 0.95 might mean that some items in the measuring instrument are redundant.
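To make the coefficient concrete, Cronbach's alpha can be computed directly from its definition: it compares the sum of the individual item variances with the variance of the summed scale. The following sketch uses hypothetical Likert-scale responses (not real survey data) to show the calculation in Python:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) array of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents x 4 five-point Likert items
scores = np.array([
    [4, 4, 5, 4],
    [3, 3, 3, 4],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [3, 3, 2, 3],
])
alpha = cronbach_alpha(scores)  # well above the 0.7 threshold for this toy data
```

In practice SPSS reports this value automatically, but computing it by hand makes the reasoning visible: highly intercorrelated items inflate the variance of the summed scale relative to the item variances, driving alpha towards 1.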

Validity

Validity determines whether an instrument measures what it is supposed to measure (Wiener et al., 2017). During the piloting stage of the questionnaire, the face and content validity are normally verified by respondents' feedback.

A more scientific and objective way of validating a measuring instrument is via construct validity, which determines whether the data contain underlying constructs, i.e., exhibit dimensionality (Ahmad and Sabri, 2013). One way of verifying construct validity is by way of factor validity (Nako and Barnard, 2012), which consists of subjecting the data to factor analysis and then observing the results for Bartlett's test of sphericity. The instrument, and hence the data, is deemed to be valid if the Sig. value (or p-value) obtained does not exceed 0.05 (Field, 2016), the usual default level of significance.

 

Note
 


In SPSS, the Kaiser-Meyer-Olkin (KMO) test is performed simultaneously during factor analysis, and its result appears in the same output as that for construct validity testing. The presence of dimensionality is confirmed if the KMO measure of sampling adequacy is greater than 0.5 (Field, 2016).
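Both statistics can also be computed outside SPSS. The sketch below, a minimal Python implementation using simulated data in place of real questionnaire responses, derives Bartlett's test of sphericity from the determinant of the correlation matrix and the KMO statistic from the matrix of partial correlations; the variable names and simulated dataset are illustrative only:

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(data):
    """Bartlett's test of sphericity from the correlation-matrix determinant."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    statistic = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return statistic, chi2.sf(statistic, df)

def kmo_statistic(data):
    """Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(data, rowvar=False)
    S = np.linalg.inv(R)
    # Partial correlations obtained from the inverse correlation matrix
    A = -S / np.sqrt(np.outer(np.diag(S), np.diag(S)))
    np.fill_diagonal(A, 0.0)
    np.fill_diagonal(R, 0.0)
    r2, a2 = (R ** 2).sum(), (A ** 2).sum()
    return r2 / (r2 + a2)

# Simulated responses: 120 respondents, 4 items driven by one latent factor
rng = np.random.default_rng(42)
factor = rng.normal(size=(120, 1))
data = factor + 0.5 * rng.normal(size=(120, 4))

stat, p_value = bartlett_sphericity(data)
sampling_adequacy = kmo_statistic(data)
```

As described above, a Sig. value below 0.05 for Bartlett's test together with a KMO statistic above 0.5 would support proceeding with factor analysis.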

Generalisability

You need to discuss the extent to which your results and findings also hold true for other populations. Remember that not all studies are equally generalisable; case studies, in particular, are often less so. Bearing in mind the delimitations of your research and its external validity, you need to discuss how generalisable your results are likely to be, and why.

 

Data Analysis

Specify and justify the techniques and tests with which you intend to analyse your data. For example, if you have sampled texts, or have a lot of qualitative data, will you be using semiotic analysis, discourse analysis and so on? You may also mention which software (if any) you have used (e.g., NVivo), and why you chose it.

With regard to quantitative analysis, mention, with justification, the software you used (e.g., SPSS, MS Excel, Stata and so on). With hindsight, you will realise the importance of the measurement scales chosen in your survey questionnaire, as they are crucial for the implementation of statistical tests and advanced techniques. This is why it is important to have an idea of how you will analyse your data prior to designing your questionnaire.

In quantitative data analysis, tests and techniques should be judiciously chosen in order to maximise the accuracy and reliability of your findings. Ensure that you check all the necessary assumptions before you use any test or apply any technique. As an example, normality testing via the Shapiro-Wilk test or the Kolmogorov-Smirnov test is essential before deciding whether to use parametric or non-parametric tests. There is a wide variety of techniques that may be used, just to name a few well-known ones:

  • Descriptive analysis: charts and tables, measures of central tendency, dispersion, skewness and kurtosis, method of weighted means (including clustering), gap analysis (SERVQUAL)
  • Parametric tests: paired t-test, independent-sample t-test, one-way ANOVA
  • Non-parametric tests: Chi-Squared test of independence, Mann-Whitney U test, Kruskal-Wallis H test
  • Multivariate techniques: Correlation analysis, exploratory factor analysis (EFA), multiple regression analysis (OLS), binary or multinomial logistic regression, structural equation modelling (SEM)
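The decision flow described above (check normality first, then choose a parametric or non-parametric test) can be sketched as follows, using SciPy and simulated scores for two hypothetical groups rather than real survey data:

```python
import numpy as np
from scipy import stats

# Simulated satisfaction scores for two independent groups (illustrative only)
rng = np.random.default_rng(0)
group_a = rng.normal(loc=3.8, scale=0.6, size=60)
group_b = rng.normal(loc=3.4, scale=0.6, size=60)

# Step 1: Shapiro-Wilk normality test on each group
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (group_a, group_b))

# Step 2: parametric test if both groups look normal, otherwise non-parametric
if normal:
    result = stats.ttest_ind(group_a, group_b)     # independent-sample t-test
else:
    result = stats.mannwhitneyu(group_a, group_b)  # Mann-Whitney U test
```

The same pattern extends to more than two groups: one-way ANOVA when the normality (and homogeneity of variance) assumptions hold, and the Kruskal-Wallis H test otherwise.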
     


Ethical Considerations

According to Cooper and Schindler (2014, p.108), “the goal of ethics in research is to ensure that no one is harmed or suffers adverse consequences from the research activities.” Therefore, you need to explain how you have adhered to ethics in your research, particularly if it includes human subjects. The usual aspects covered in this section are:

  • Informed consent and right of withdrawal
  • No harm to participants
  • Anonymity and confidentiality
  • Authorisation for data collection
  • Avoidance of plagiarism and respect for intellectual property rights

 

References


Abraham, J and Barker, K (2014) “Exploring gender difference in motivation, engagement and enrolment behaviour of senior secondary physics students in New South Wales”, Research in Science Education, Vol. 45, No. 1, pp. 59-73.

Ahmad, NS and Sabri, A (2013) “Assessing the unidimensionality, reliability, validity and fitness of influential factors of 8th grades student's Mathematics achievement in Malaysia”, International Journal of Advance Research, Vol. 1, No. 2, pp. 1-7.

Cooper, DR and Schindler, PS (2014) Business Research Methods (12th edn), McGraw-Hill, New York.

Field, A (2016) Discovering Statistics Using IBM SPSS Statistics (4th edn), SAGE Publications Ltd, London.

Gill, J and Johnson, P (2010) Research Methods for Managers (4th edn), SAGE, London.

Laerd Statistics (2018) “Cronbach's Alpha (α) using SPSS Statistics” [online] Available from: https://statistics.laerd.com/spss-tutorials/testing-for-normality-using-spss-statistics.php

Malhotra, NK (2019) Marketing research: An applied orientation (7th edn), Pearson/Prentice Hall, Upper Saddle River, NJ.

Nako, Z and Barnard, A (2012) “Construct validity of competency dimensions in a leadership assessment and development centre”, African Journal of Business Management, Vol. 6, No. 34, pp. 9730-9737.

Saunders, M, Lewis, P and Thornhill, A (2012) Research Methods for Business Students, (6th edn), Pearson Education Limited, England.

Tavakol, M and Dennick, R (2011) “Making sense of Cronbach's Alpha”, International Journal of Medical Education, Vol. 2, pp. 53-55.

Wiener, BJ, Lewis, CC, Stanick, C, Powell, BJ, Dorsey, CN, Clary, AS, Boynton, MH and Halko, H (2017) “Psychometric assessment of three newly developed implementation outcome measures”, Implementation Science, Vol. 12, No. 1, pp. 1-12.