As a PhD student in Engineering Education, I recently took my qualifying exams: in essence, a 10.5-day written exam covering three questions. The first question asked me to critique the methods section of an existing journal article and propose a follow-up study. The second asked me to critique an existing assessment plan in a journal publication. The third asked me to propose a Minor and justify it using learning theories.
I thought I would post my responses on this blog so that those interested in Engineering Education research can get an idea of the kind of work we are involved with. This particular post is my response to the August 2015 qualifier questions.
Engineering Education Qualifier Exam, August 2015: Research Methods
In their paper, “A study of the relationship between learning styles and cognitive abilities”, Hames and Baker use a quantitative approach to explore the relationship between student learning styles, as determined by the Felder-Solomon Inventory of Learning Styles (FSILS), and student cognitive abilities, assessed by performance on three tasks: a matrix reasoning task, a Tower of London task, and a mental rotation task. The authors used t-tests and correlations to analyze the responses of 51 engineering students from a university in the USA. The results indicated that the global-sequential, active-reflective, and visual-verbal learning style scales were related to response times on the cognitive tasks.
In this response, I will provide an in-depth evaluation of the published research by Hames and Baker (2014) and propose a follow-up qualitative study to explore one focused research question driven by the findings of their study. In the evaluation, I will show that the publication has major drawbacks in measurement and classification, and in analysis and interpretation, owing to insufficient discussion of the validity and reliability of the instruments and inadequate discussion of the assumptions underlying the statistical tests conducted. Despite these drawbacks, I will take the results as given and propose a follow-up qualitative study. In the follow-up study, I will use Grounded Theory methodology to understand the process by which reflective learners solve cognitive tasks. This explanatory follow-up will seek to provide a model of that process and, in doing so, address the main aim of understanding what mechanisms can be used in engineering education classrooms to support reflective learners, by providing an understanding of the strategies that reflective learners employ while solving a cognitive task.
Evaluation
Evaluation Criteria
The Standards for Reporting on Empirical Social Science Research in AERA Publications (AERA, 2006) will be used as the guidelines to form criteria to evaluate the research by Hames and Baker (2014). The AERA guidelines were established to help researchers prepare manuscripts for publication by providing a framework of expectations about what a report of empirical work should address (AERA, 2006); hence they provide a suitable and comprehensive framework by which to evaluate this study. Table 1 lists the eight evaluation criteria along with their descriptions.
Table 1: Criteria for Evaluation of Quantitative Study
| # | Criterion | Appraisal |
|---|---|---|
| 1 | Problem Formulation | Clear statement of purpose and scope |
| 2 | Design and Logic | Appropriateness of the methods and procedures used |
| 3 | Sources of Evidence | Appropriately describes relevant characteristics of the units studied, including how they were selected and how data were collected using appropriate instruments |
| 4 | Measurement and Classification | How information is measured and classified; validity and reliability of instruments discussed |
| 5 | Analysis and Interpretation | Appropriate evidence is provided to warrant the outcomes and conclusions; statistical tests employed are critiqued; alternate viewpoints are considered; validity and reliability of results discussed |
| 6 | Generalization | Appropriate justifications for generalizability of results to different contexts are provided, along with rationale |
| 7 | Ethical Considerations | Appropriate ethical considerations are discussed |
| 8 | Title, abstract, and headings | Appropriate writing style and structure to help the reader follow the logic of inquiry |
In addition to the criteria based on the AERA (2006), sub-criteria developed by other authors (e.g., Borrego, Douglas, & Amelink, 2009; Creswell, 2014) will be used to better inform the critiquing process and evaluation results.
The evaluation of the research by Hames and Baker (2014) found that the research was lacking mainly in the areas of problem formulation, design and logic, measurement and classification, and analysis and interpretation. Its strengths lay in addressing ethical concerns, in the balance and structure of the report, and in recognizing limitations. These strengths and weaknesses are addressed in detail in the following paragraphs.
Assumptions Made Before Evaluation. Prior to discussing the results of the evaluation of the research study by Hames and Baker (2014), in this sub-section I shall address a few of the assumptions made while evaluating the research. The first assumption is that the general research purpose was to determine any possible correlations between learning styles, as measured by the FSILS instrument, and performance on selected cognitive tasks, in order to understand how learning styles impact cognitive processes in engineering education. This purpose is specified explicitly by the authors as the goals of their research (Hames & Baker, 2014, p. 3). Creswell (2014) identifies the research purpose and the research questions as two distinct aspects of a research design. The research purpose “sets the objectives, intent, or the major idea of a proposal or study. This idea builds on a need and is then refined into research questions” (Creswell, 2014, p. 124). The research questions specified by Hames and Baker (2014) are identified as RQ1 and RQ2 and will be referred to as such in the following evaluation:
RQ1: Are self-assessed learning styles in any way related to performance on cognitive tasks? For example would students with a strong visual learning preference actually exhibit a higher level of visual-spatial skills as measured by certain tasks; and
RQ2: How can the combination of cognitive tasks and FSILS assessment yield insight into current issues facing the engineering education community? For example, how can engineering teaching styles and curriculum be adapted in ways to appeal to different learning styles, and will this be effective in increasing student persistence and success (p.3).
The second assumption is that Hames and Baker (2014) conducted the research from a postpositivist perspective, where a theoretical perspective is the researcher’s philosophy that informs and contextualizes research methodologies (Crotty, 1998, p. 3). Such theoretical perspectives have been identified variously as a paradigm (Guba & Lincoln, 1994), an epistemology (Crotty, 1998), or a worldview (Creswell, 2014). Creswell (2014) describes worldviews as the general philosophical orientation about the world and the nature of research that a researcher brings to a study, and recommends that researchers explicitly state their worldview in publications of their research in order to inform the reader of the larger philosophical ideas they espouse. He identifies four main worldviews: postpositivism, constructivism, transformative, and pragmatism. Postpositivism is characterized by determinism, reductionism, empirical observation and measurement, and theory verification (Creswell, 2014).
Although Hames and Baker (2014) are not explicit about their worldview, the design of their study suggests a postpositivist one, evident mainly in the authors’ decision to pursue quantitative research to investigate their research purpose. The research questions are deterministic in nature: in a deterministic philosophy, causes (such as a student’s learning style) determine effects or outcomes (such as performance on cognitive tasks), and the authors reflect the need to identify and assess the causes that influence outcomes (such as how learning styles relate to cognitive processes), as found in experiments. The design is reductionist, since the authors break the problem statement into a small, discrete set of variables to test their research questions. Finally, the researchers carry out empirical observations to collect data and refine existing theories (such as theories on student learning styles). Thus, in the absence of the authors explicitly stating their theoretical perspective, it is reasonable to assume that the research by Hames and Baker (2014) is governed by a postpositivist worldview.
Evaluation Results
The criteria presented in Table 1 were used to evaluate the strengths and weaknesses of the research by Hames and Baker (2014). A summary of the results of the evaluation, along with the primary reasons influencing each result, is presented in Table 2.
Table 2: Summary of Evaluation Results
| # | Criterion | Reason | Result |
|---|---|---|---|
| 1 | Problem Formulation | Clear purpose; lacking theory; lacking rationale for the conceptual and methodological orientation of the study | Weakness |
| 2 | Design and Logic | Lacking clear logic of inquiry | Weakness |
| 3 | Sources of Evidence | Describes participant demographic details; describes limitations in the collection of participant details; no reference to sampling/means of participant selection from volunteer list | Weakness with minor strengths |
| 4 | Measurement and Classification | Lacking instrument validity and reliability for both the FSILS survey and the questions assessing cognitive abilities | Weakness |
| 5 | Analysis and Interpretation | Limitations in the analysis; validity concerns not addressed | Weakness with minor strengths |
| 6 | Generalization | Cannot be generalized; not described in detail | Weakness |
| 7 | Ethical Considerations | Addresses IRB; acknowledges funding support | Strength |
| 8 | Title, abstract, and headings | Paper is balanced; meaningful transitions guided through headers and sub-headers | Strength |
In this section, I shall elaborate on each of the criteria listed in Table 2.
Criterion 1: Problem Formulation. Problem formulation addresses why the research would be of interest to the research community, and how the investigation is linked to prior research and knowledge (AERA, 2006). Table 3 provides the sub-criteria for problem formulation, adapted from the recommendations of the AERA (2006).
Table 3: Evaluation of Problem Formulation. (+) indicates a strength in design; (-) indicates a weakness.
| Sub-Criterion | Result |
|---|---|
Clear statement of purpose | (+) |
States contribution to knowledge | (+) |
Reviews relevant scholarship | (+) |
Rationale for theoretical, methodological, or conceptual orientation is described | (-) |
Rationale for problem formulation as it relates to group studied is provided | (-) |
Hames and Baker (2014) explicitly state a research goal, which can be interpreted as a statement of purpose. They are explicit about the potential contribution of their research to knowledge, stating that “a better understanding of the relationship between learning styles and cognitive abilities will allow educators to optimize classroom experience for students” (p.2). Thus they clearly situate and justify their investigation. They successfully introduce the problem statement and identify the gaps in the literature through a review of scholarship: “few studies have correlated student learning styles with cognitive abilities” (p.2).
While there is a strong literature review that justifies the need to explore learning styles in engineering education classrooms, there is no explicit discussion of a theory framing the research. Creswell (2014) defines a theory as a discussion of how a set of variables or constructs are interrelated and how these relationships describe a particular phenomenon. Theory is important in quantitative research since it provides insight into the hypotheses being tested. Hames and Baker (2014) do not explicitly provide any hypotheses describing the relationships among variables. Even where they state expectations about the outcome, for instance that “active learners might be expected to take a more motor-oriented approach to the rotation task, resulting in different levels of accuracy and different response times from reflective learners, who might not engage the motor rotation process in the brain to the same extent” (p.5), the statements are based on prior research rather than theory. In their article on what theory is not, Sutton and Staw (1995) state that researchers often use prior work as a smoke-screen in the absence of a theory guiding the hypotheses to be tested. Sutton and Staw (1995) are explicit that data, hypotheses, diagrams, references, and lists of variables are not theory, and they insist that quantitative researchers publish papers with adequate theory building.
Kuhn (2012) presents a counter-argument to the use of theory in social science research, stating that the mission of research should be an accumulation of empirical findings rather than an ebb and flow of theoretical paradigms. This meta-analytic view of theory tends to value research publications simply because they serve as storage devices for obtained correlations, not because they elaborate a set of theoretical ideas. However, the arguments made by Hames and Baker (2014) imply an attempt to fill a gap in existing literature by contributing to theory, and they are not explicit about any choice to forgo theory or any subscription to a meta-analytic view similar to Kuhn’s (2012). In the absence of a strong theoretical framework, and lacking justification for this absence, the argument put forth by Sutton and Staw (1995) for greater theoretical emphasis in quantitative research, along with more appreciation of the empiricism of qualitative endeavors, is a valid point from which to critique the research method of Hames and Baker (2014).
The authors’ review of literature also does not justify conducting the study across all years of engineering. The authors are interested in engineering cognitive development and its relation to students’ preferred learning styles, yet significant engineering education research suggests that there is very little cognitive development in the first two years of undergraduate education, where the emphasis is on rote learning and the application of formulae, and courses tend not to promote reflective learning (Knight, 2011; Marra, Palmer, & Litzinger, 2000; Wise, Lee, Litzinger, Marra, & Palmer, 2001). It is therefore recommended that Hames and Baker (2014) explicitly state their motivation for studying engineering students’ ability to perform cognitive tasks, and explicitly justify the choice and implications of a sample comprising engineering students from different departments and of different academic standing.
Additionally, the authors are not explicit about their worldview, which might have influenced their review of literature. As noted earlier, Creswell (2014) recommends that researchers explicitly state their worldview in publications to inform the reader of the larger philosophical ideas they espouse. Although this evaluation assumes that the authors operate within a postpositivist framework, this is not stated explicitly in the paper, which is a weakness of the problem formulation.
Criterion 2: Design and Logic.
This criterion promotes a clear chain of reasoning and a specific understanding of the study design. Specifically, the design and logic of a research study comprise the choice of method, the approach to the problem formulated, the research questions, the approach to analysis and interpretation, and the format of reporting (AERA, 2006). Table 4 lists the sub-criteria for this criterion and the corresponding results of evaluation.
Table 4: Evaluation of Design and Logic. (+) indicates a strength in design; (-) indicates a weakness.
| Sub-Criterion | Result |
|---|---|
Clear logic of inquiry | (-) |
Specific and unambiguous description of the design | (-) |
The design of the study is a drawback of this research. The weakness lies in the authors’ decision to adopt quantitative methods, a choice they fail to explain and justify. Hames and Baker (2014) state their research questions as “are self-assessed learning styles in any way related to performance on cognitive tasks” and “how can the combination of cognitive tasks and FSILS assessment yield insight into current issues facing the engineering education community”. The phrases in any way and insight into current issues suggest broad, open-ended questions rather than ones testing a hypothesis. Quantitative methods are best suited to instances in which hypotheses need to be tested, while qualitative methods suit instances in which a concept needs to be explored. The use of open-ended questions warrants an exploratory method, especially for the second research question, which seeks insight into current issues facing the engineering education community.
Additionally, these research questions are broad, as opposed to the narrow and focused questions typical of a quantitative approach (Creswell, 2014). The study could have benefited from a mixed methods approach, since the research questions are not adequately answered by a quantitative method alone. Using mixed methods research, the researchers could both generalize findings to a population and develop a detailed view of the meaning of a phenomenon or concept for individuals (Creswell, 2014). This approach might have best suited this set of research questions, which sought to understand the specific correlation between learning styles and cognitive processes in individuals, as well as to explore general insights on issues in engineering education related to learning styles. Hames and Baker (2014) neither explicitly detail the hypotheses being tested nor state the rationale behind the use of a quantitative approach, and they do not adequately answer their research questions. As a result, the research fails to adequately fill the gap in the literature that they themselves identify. The decision to adopt a quantitative method is thus identified as a major weakness of this paper.
Criterion 3: Sources of Evidence. Sources of evidence refers both to the units of the research and to the data collected to address the research questions. Researchers should address sources of evidence since the role of the researcher and the relationship between researcher and participants can influence the data collection process (AERA, 2006); it is therefore important to let the reader know of the process, choices, and judgments made during the research so that they can replicate it if required. Table 5 lists the sub-criteria for this criterion and the corresponding results of evaluation.
Table 5: Evaluation of Sources of Evidence. (+) indicates a strength in design; (-) indicates a weakness.
| Sub-Criterion | Result |
|---|---|
| Units of study: relevant characteristics with relation to parent population | (+) |
| Units of study: means of selection of sites, groups, participants, etc. | (+) |
| Units of study: description of groups | (-) |
| Units of study: treatment described in detail | (-) |
| Collection of data or empirical materials: time and duration of data collection | (-) |
| Collection of data or empirical materials: report on instruments | (+) |
Hames and Baker (2014) state: “51 students from a university volunteered for the study, 18 female and 33 male. The mean age was 22.4 years” (p.5). The academic standing and major of the participants were also presented in the form of a pie chart. The authors acknowledge that the convenience sample (i.e., volunteers for the study) is not representative of the parent population, and note as a further limitation that the ethnic backgrounds of the students were not recorded. Creswell (2014) indicates that researchers usually select a sample size based on a fraction of the population, on sizes typically used in previous literature, or on the margin of error the researchers are willing to tolerate (p.159). Hames and Baker (2014) state that the sample represents 2% of the population, but provide no justification for this choice of sample size. The authors also provide no justification for the use of a single site to study the effect of learning styles on students’ cognitive abilities, nor are they explicit about the role of the researchers in the data collection process.
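To make the margin-of-error approach concrete, here is a minimal sketch (in Python, with illustrative numbers that are not taken from the paper) of the standard sample size calculation for estimating a proportion, with a finite population correction:

```python
import math

def required_sample_size(population: int, margin_of_error: float,
                         confidence_z: float = 1.96, p: float = 0.5) -> int:
    """Sample size for estimating a proportion, with finite-population correction."""
    n0 = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)                           # correct for finite population
    return math.ceil(n)

# Illustrative only: if 51 students are ~2% of the population, the population is ~2550.
print(required_sample_size(population=2550, margin_of_error=0.10))  # ~93
```

Under these illustrative assumptions, a sample of 51 would fall short of the roughly 93 respondents needed for a 10% margin of error at 95% confidence; a calculation of this kind is what the authors could have reported to justify their sample size.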
Additionally, participant consent can be inferred from the authors’ statement that “(students) volunteered for the study”. However, the description would have benefited from more detail on the student consent process, how much time was spent assessing each respondent on the cognitive tasks, how rapport was established, whether the tasks were described in detail to the participants, and so on. The authors provide a brief review of literature as a report on the instruments used. Further considerations, such as the validity and reliability of the instruments, are addressed as a weakness and discussed in detail under the next criterion.
Criterion 4: Measurement and Classification. Measurement is the process by which an observation or behavior (in this case, cognitive ability and learning style preference) is converted into a quantity (e.g., a score on the FSILS, or a score and response time for a specific task question) that is then subject to quantitative analysis (e.g., t-tests and correlations), while classification is the process of segmenting data into units of analysis (AERA, 2006). Table 6 lists the sub-criteria for this criterion and the corresponding results of evaluation.
Table 6: Evaluation of Measurement and Classification. (+) indicates a strength in design; (-) indicates a weakness.
| Sub-Criterion | Result |
|---|---|
Measure should preserve characteristics of phenomenon being studied | (-) |
Classification should be comprehensively described | (+) |
Reporting should describe data elements unambiguously | (-) |
Rationale should be provided for relevance of a measure or classification | (-) |
Measurement of behavior and classification were achieved using two instruments. The first, the FSILS, was used to classify students by preferred learning style, via a self-reported survey. Self-reported surveys carry the risk of being misinterpreted by respondents and hence answered incorrectly (Singleton Jr, Straits, & Straits, 1993). The authors provide no justification for their selection of a self-reported survey in this research.
In order to assess the quality of the measurements and classification, the validity and reliability of the survey instrument need to be explored. The authors state in their introductory section: “additional papers exist that examine the robustness and validity of the FSILS, but with not clear consensus” (p.2). The validity and reliability of an existing instrument need to be examined to decide whether meaningful inferences can be drawn from its use, and hence whether it should be used at all (Creswell, 2014). Content validity (do the items measure the content as intended), construct validity (do items measure hypothetical constructs, such as an individual’s reasoning, which is internal to the respondent), concurrent validity (do scores predict a criterion measure), and reliability (such as measures of internal consistency) are typically expected to be reported by researchers using an instrument for data collection (Creswell, 2014). However, for the FSILS instrument, the authors report no validity claims.
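As an illustration of the kind of reliability evidence that could have been reported, the following sketch computes Cronbach’s alpha, a common measure of internal consistency, on hypothetical item scores (the data and the four-item scale are invented for illustration, not taken from the FSILS):

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    item_scores = np.asarray(item_scores)
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # sample variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents x 4 items on one learning-style scale.
scores = np.array([[3, 4, 3, 4],
                   [2, 2, 3, 2],
                   [4, 4, 4, 5],
                   [1, 2, 1, 2],
                   [3, 3, 4, 3],
                   [5, 4, 5, 4]])
print(round(cronbach_alpha(scores), 2))  # values above ~0.7 are conventionally acceptable
```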
Answering a survey requires that respondents comprehend the question, retrieve the requested information from memory, formulate a response in accord with the question and information, and finally communicate a response deemed appropriate (Singleton Jr et al., 1993). A good survey question is easy to comprehend, leaving no ambiguity about what is being asked. A recommended way to establish content validity, and hence improve the understandability of the questions, is to pilot the instrument on a small sample before using it on the actual sample (Creswell, 2014; Singleton Jr et al., 1993). In the case of the FSILS instrument, the authors neither refer to piloting nor address any of these validity concerns, despite their own review of literature suggesting no clear consensus on the validity or reliability of the instrument. Also, if engineering students have changed over time, the present-day validity of the FSILS, an instrument developed in 1988, can be questioned.
The authors justify the FSILS instrument’s appropriateness by stating: “the instrument was used due to the fact that it was originally designed for engineering students – and has been cited over 3300 times according to Google scholar” (p.3). Since the number of citations of an instrument does not imply its validity or reliability, the authors inadequately justify the use of the FSILS instrument, a major weakness of the paper. Similarly, Hames and Baker (2014) provide no validity or reliability evidence for the questions assessing the cognitive tasks, and inadequate theory is cited to indicate that these tasks indeed measure the cognitive abilities of engineering students. Additionally, since one of the results of the research is based on student response times on the tasks, it would have been helpful to establish the content validity of the task questions. For instance, if a student is confused by the wording of a question, the response time is not a true indication of the cognitive process of reflecting on the question, but rather of the student trying to decipher it. In such a situation, a think-aloud exercise, with the student walking an observer through their cognitive process, would have provided better understanding.
Criterion 5: Analysis and Interpretation. This criterion examines the evidence provided to confirm that the outcomes and conclusions are warranted, and that counter-examples or viable alternatives have been appropriately considered. Table 7 lists the sub-criteria for this criterion and the corresponding results of evaluation.
Table 7: Evaluation of Analysis and Interpretation. (+) indicates a strength in design; (-) indicates a weakness; NA indicates data was unavailable.
| Sub-Criterion | Result |
|---|---|
Transparent description of procedures | (+) |
Sufficiently described analytic techniques | (-) |
Analysis and presentation of outcomes is clear | (-) |
Intended or unintended circumstances affecting interpretation of outcomes are described | (-) |
Descriptive and inferential statistics provided for each of the statistical analyses | (+) |
Clear presentation of summary/conclusion | (-) |
Considerations that may have arisen in data collection and processing (such as missing data) should be adequately discussed | (-) |
Considerations that may have arisen in data analysis (such as violation of assumption of statistical procedures) should be adequately discussed | (-) |
Detailed discussion of statistical results | (-) |
The authors provide a detailed discussion of their interpretations of the data analysis. They provide evidence for each claim and address a few alternative explanations of the relationship between learning styles and cognitive abilities. Descriptive and inferential statistics are presented in diagrams and tables. However, the authors do not adequately answer RQ2, since very little insight into current issues facing engineering education can be achieved through a strictly quantitative method: such insight requires a deeper understanding of the attitudes and beliefs of individual participants, which qualitative investigation facilitates. As mentioned earlier, this study would have benefited from a mixed methods approach in order to answer the research questions.
In reporting their statistical tests, the authors do not present information clearly or coherently. There is no explicit identification of variables (characteristics or attributes of an individual that can be measured or observed and that vary among the people or organizations studied), nor are dependent and independent variables identified for each test. Instead, the authors provide a large number of tables and graphs without guiding the reader through the results individually, resulting in a lack of methodological transparency.
The authors assume a normal distribution, but provide no statistical justification for this assumption apart from stating that a Chi-square test was used. The results would have benefited from a discussion of the outcome of the normality test, since the normality assumption is crucial for t-tests and the results may be invalid if it is violated. The independent-samples t-test is used to determine whether there is a statistically significant difference between two independent samples; it assumes normally distributed data and random samples, violations of which can produce inaccurate results. The authors do not discuss the implications of their non-random, skewed sample for these statistical tests.
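To illustrate the kind of assumption checking being recommended, here is a minimal sketch using SciPy on hypothetical response-time data (the group sizes and values are invented, not the study’s actual data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical response times (seconds) on a cognitive task for two groups.
active = rng.normal(loc=12.0, scale=3.0, size=26)
reflective = rng.normal(loc=15.0, scale=3.5, size=25)

# Check the normality assumption first (Shapiro-Wilk is common for small samples).
for name, sample in [("active", active), ("reflective", reflective)]:
    statistic, p = stats.shapiro(sample)
    print(f"{name}: Shapiro-Wilk p = {p:.3f}")  # p < .05 would cast doubt on normality

# Only then run the independent-samples t-test (Welch's, not assuming equal variances).
t, p = stats.ttest_ind(active, reflective, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
# If normality fails, a non-parametric alternative such as stats.mannwhitneyu applies.
```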
A detailed discussion of all variables would have improved the quality of this publication. The authors provide some references suggesting that males and females approach cognitive tasks differently; however, it is not made explicit that these references provided the basis for investigating gender differences. No justification is given for why other variables may have been excluded from the statistical analysis.
The authors use the t-test to compare performance between genders, but do not test for differences among respondents’ departments or other demographics such as year in the program. A control variable is a particular type of independent variable that is important in research since it may influence the dependent variable (Creswell, 2014). Hames and Baker (2014) could have controlled for demographics and provided ANOVA-based results in their analysis. The authors do not address the possibility of confounding variables either. Confounding variables are those which cannot be measured or directly observed, but which can serve to explain the relationship between the dependent and independent variables (Creswell, 2014). For instance, a student with prior knowledge of the Tower of London task could potentially score higher on it without that score being representative of their cognitive ability or learning style preference; here, prior knowledge is a confounding variable.
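As a sketch of the ANOVA-style comparison suggested above, assuming department as the grouping factor (group names, sizes, and scores are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical task scores grouped by department.
mechanical = rng.normal(loc=70, scale=10, size=15)
electrical = rng.normal(loc=74, scale=10, size=20)
civil = rng.normal(loc=68, scale=10, size=16)

# One-way ANOVA: does mean performance differ across departments?
f, p = stats.f_oneway(mechanical, electrical, civil)
print(f"F = {f:.2f}, p = {p:.4f}")  # p < .05 would suggest departments differ
```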
To help readers follow the flow of logic, it is recommended that, for the correlations conducted, the authors provide a single correlation matrix showing how the different variables correlate with one another, rather than multiple scattered tables of correlation values.
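For illustration, a single correlation matrix of the kind recommended here can be produced in a few lines with pandas (the variable names and data are hypothetical stand-ins for the study’s measures):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# Hypothetical per-student measures: two FSILS scale scores and two task response times.
df = pd.DataFrame({
    "active_reflective": rng.integers(-11, 12, size=51),
    "visual_verbal": rng.integers(-11, 12, size=51),
    "mental_rotation_rt": rng.normal(14, 4, size=51),
    "tower_of_london_rt": rng.normal(22, 6, size=51),
})

# One matrix summarizes every pairwise relationship at once.
print(df.corr(method="pearson").round(2))
```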
Validity and Reliability. The other set of drawbacks in the analysis and interpretation stems from various validity and reliability considerations. Validity refers to the approximate truth of an inference (Shadish, Cook, & Campbell, 2002). The AERA (2006) recommends that authors identify any considerations during data collection or analysis that might compromise the validity of the analyses or inferences.
For instance, threats to internal validity such as a diffusion threat may arise since the researchers did not have all individuals participate in the experiment simultaneously. Diffusion threats occur as a result of communication about the tasks between participants, which can unduly influence the outcome: if a student A who has already completed the cognitive tasks discusses them with a student B who is yet to take part, the validity of the results is adversely affected, since student B might perform better on the tasks. Hames and Baker (2014) provide no discussion of such considerations. This lack of discussion of anticipated threats to validity is identified as another key drawback of the research design.
Criterion 6: Generalization. Generalization of the results of a research investigation is intended to extend from a sample to a sampling frame, such as a population or a universe (AERA, 2006). Generalization provides the implications of the particular study for the larger population that the sample may represent. Thus, in order for inferences drawn about the larger population to be correct, researchers must address how the results might generalize to particular people, settings, and times (Creswell, 2014). Hames and Baker (2014) do address the limitation of their sample, accepting that it is not representative of the engineering population and that electrical engineering students and women are over-represented compared with the parent population. Table 8 lists the sub-criteria for this criterion, based on the AERA (2006), and the corresponding results of evaluation.
Table 8: Evaluation of Generalization. (+) indicates a strength in design; (-) indicates a weakness.
| Sub-Criterion | Result |
|---|---|
Specifics of participants, contexts, activities, data collections and manipulations provided | (-) |
Clearly stated intended scope of generalization of findings of the study | (-) |
Clearly stated logic and rationale behind generalization | (-) |
Generalizability is a drawback of this research, since the authors do not provide adequate information about the participants’ backgrounds. Hames and Baker (2014) acknowledge this lack of information as a limitation and warn readers that “the results should not be generalized to students from other countries” (p.18). The authors fail, however, to identify any context to which the results could be generalized. This lack of logic and rationale for generalization of the results is identified as a limitation of the study.
Criterion 7: Ethical Considerations. Research ethics involve the application of ethical principles to scientific research, covering three main areas: ethics of data collection and analysis, ethical treatment of participants, and ethical responsibilities to society (Singleton Jr et al., 1993). The sub-criteria identified by the AERA (2006) for this criterion describe those ethical issues directly relevant to reporting research. Table 9 lists the sub-criteria and the corresponding results of evaluation.
Table 9: Evaluation of Ethical Considerations. (+) indicates a strength in design; (-) indicates a weakness; NA indicates data was unavailable.
| Sub-Criterion | Result |
|---|---|
Ethical considerations in data collection, analysis and reporting addressed | (+) |
Honors consent agreements with human participants | (+) |
No omission, falsification, or fabrication | (+) |
Data available and maintained in a way that is reproducible by reader, if needed | (+) |
Funding support acknowledged | (+) |
Hames and Baker (2014) note approval by the institutional review board, which acknowledges the ethical treatment of research participants. The authors also acknowledge their funding sponsorship towards the end of the paper.
The researchers do not explicitly address participant debriefing or deception (lying to or withholding information from participants); however, since the principle of informed consent does not require researchers to reveal the entire study to the participants, withholding information about the hypotheses is not considered deception (Singleton Jr et al., 1993). Additionally, since no indication is provided otherwise, and there is sufficient methodological transparency, it is reasonable to assume that the authors did not falsify, fabricate, or omit any data.
Criterion 8: Title, Abstract, and Headings. A well-constructed article is important to guide the reader through the logic of inquiry. Smart (2005) describes exemplary manuscripts as those which build a reader’s trust by giving balanced attention to the three fundamental components of research manuscripts: the issue component, the technical and analytical component, and the contextual component (p. 463). The issue component relates to the background and literature review, the technical component to the data collection and analysis techniques employed, and the contextual component to the discussion and inferences drawn from the results. Table 10 summarizes the sub-criteria for this criterion and the results of the evaluation.
Table 10: Evaluation of Title, Abstract, and Headings. (+) indicates a strength in design; (-) indicates a weakness.
| Sub-Criterion | Result |
|---|---|
Title is indicative of research summary | (+) |
Abstract provides a concise summary | (+) |
Headings and sub-headings make clear the logic or inquiry | (+) |
The paper by Hames and Baker (2014) devotes adequate attention to each of the three components, with sufficient discussion in each section. The paper is well balanced, with clear, well-constructed headings and sub-headings that guide the reader through the logic of inquiry. This is a strength of the paper.
Evaluation conclusion
In summary, this evaluation critiqued the research method employed by Hames and Baker (2014) using criteria based on guidelines for publications recommended by the AERA (2006).
This evaluation found that the research method had certain major drawbacks. The decision to adopt a quantitative approach was not adequately justified by the authors, nor was it adequate to answer the two research questions identified. Theory was inadequate, and the authors did not justify most of their research choices. The FSILS survey and the cognitive task questions, employed to assess the students’ learning styles and cognitive abilities respectively, were not pilot tested, and the discussion of instrument validity and reliability based on prior literature was inadequate, given that the literature reviewed showed no consensus on establishing the instruments’ validity. For the statistical tests conducted, the authors use parametric tests (such as the t-test, which assumes normality) without addressing the underlying assumptions or justifying the choice of these tests. The authors caution that the study should not be generalized to international contexts, but do not identify contexts to which the results can be generalized. Despite these major limitations, the study was strong in the balance and structure of the research report, and can be assumed to have been conducted and reported ethically.
Proposed Follow-up Study
For the proposed follow-up, I shall assume that the results of the paper by Hames and Baker (2014) are valid and reliable, and justify a follow-up study to further examine some of those results. In their paper, the authors conclude that performance on a cognitive task relates to the learning styles of the respondents as measured by the FSILS instrument. Based on this result, the authors recommend that suitable steps be taken in engineering classrooms to accommodate different learning styles.
In the context of reflective learners, Hames and Baker (2014) present results showing that response times on the cognitive tasks were higher for reflective learners than for active learners. The authors conclude that the higher response times indicate that these learners take more time to process the information. Understanding how a reflective learner approaches a cognitive problem will give deeper insight into the pedagogical approaches implemented in classrooms to cater to reflective learners. The follow-up research outlined below will thus contribute to one focused aspect of understanding the relationship between learning style preference and student performance on cognitive tasks.
Introduction
The paper by Hames and Baker (2014) reports a statistical relationship between reflective learners’ performance on cognitive tasks and learning style (i.e., a positive correlation): the more reflective a learner is according to the FSILS instrument, the more time they take to respond to a cognitive task. To accommodate reflective learners in the classroom, the authors suggest that classes provide enough time for reflective students to process information at a slower pace. However, for a deeper understanding of why reflective learners take more time to respond on a cognitive task, and how they utilize this time, a follow-up study could be carried out to understand what strategies reflective learners employ during the process of solving a cognitive problem.
In this section I will justify a follow-up study based on Grounded Theory methodology, with the research purpose of developing an understanding of the process by which reflective learners solve cognitive tasks, grounded in the perceptions of the learners. I will first provide the research questions that will guide this explanatory follow-up study, and discuss my own worldview as the primary researcher who will conduct it. I will then explain the choice of Grounded Theory methodology to investigate the research questions, and the role of prior knowledge and theory in sensitizing the primary researcher before conducting the research. Finally, I shall outline the data collection and data analysis processes, and discuss the potential limitations of the proposed study.
Research Question
This research study seeks to develop an understanding of the process by which reflective learners solve cognitive tasks, grounded in the perceptions of the learners. In particular, the research seeks to answer the following sub-questions:
- What strategies employed by reflective learners result in their successfully solving cognitive problems?
- What causal mechanisms are responsible for how reflective students solve cognitive problems?
The research will ultimately develop a theoretical model to explain the process by which reflective learners solve cognitive tasks. The theoretical model will be grounded in data and will emerge from the researcher’s interactions with the reflective learners.
Why Grounded Theory
The selection of any research design should be directly informed by the research question being asked (Borrego et al., 2009). In this case, the intent of the research is to develop an understanding of the process by which reflective learners solve cognitive tasks, grounded in the perceptions of the learners. Grounded Theory is used to learn about the causal relationships involved in a process (Charmaz, 2006). A causal relationship explains the whys and hows of a process, while a process itself consists of unfolding temporal sequences that may have identifiable markers, with clear beginnings and endings and benchmarks in between (Charmaz, 2006). In this research, the process by which reflective learners solve cognitive tasks will be studied. Grounded Theory is a suitable approach for gaining an interpretive understanding of this process, since it allows the researcher to access the participants’ perceptions and views and to construct an understanding of the process grounded in those perceptions (Charmaz, 2006).
Since this research is primarily driven by the findings of Hames and Baker (2014), the follow-up may be understood as sequential explanatory mixed methods research, which Creswell (2014) identifies as research in which a more in-depth understanding of quantitative results is sought. Charmaz (2006) states that grounded theory can contribute to mixed methods research driven by quantitative results by providing an interpretive understanding of experience along with explanations. Since the follow-up seeks to understand the process by which reflective learners solve cognitive tasks, and is based on findings from an earlier quantitative study whose results call for further explanation, the choice of the Grounded Theory method is justified.
The findings of this research will lead to the development of a substantive theory, that is, a theory developed for a specific empirical area of inquiry and specific to a group or set of people. In this case, the theory will be contextualized to reflective learners at the institution at which Hames and Baker (2014) conducted their research.
Researcher Positionality and Sensitization.
The intended primary researcher subscribes to a pragmatic worldview. A pragmatic worldview, as described by Creswell (2014), is one in which the researcher does not subscribe to any particular method, whether quantitative or qualitative; rather, the researcher takes a “what works” approach in an attempt to best analyze the problem (p.11).
Researchers who adopt Grounded Theory hold different theoretical perspectives depending on their views on the objectivity of the researcher and the use of theory. Classical “Glaserian” Grounded Theory, for instance, focused on objectivity and prescribed researcher neutrality, emphasizing that the theory forms from the data and is “grounded in the views of the participants” (Glaser, 1978). However, absolute empathetic neutrality is not possible for a researcher. Lempert (2007) offers the explanation of research as a negotiation between the researcher and the research participants, describing it as a “practice of give and take”: an unplanned process of continuous negotiation by all participants in the research process, including the researcher. The research method outlined in this section subscribes to this definition of the researcher’s role.
With regard to the use of theory, classical Grounded Theory researchers advocate delaying a comprehensive use of literature until after the entire story has emerged from the empirical data (Glaser, 1978; Glaser & Strauss, 1967). A more pragmatic approach is explained by Lempert (2007) in terms of sensitizing concepts: literature reviews alert a researcher to gaps in existing literature, which can aid in theorizing and lead to a more nuanced story; this reference to literature does not, however, define the research. In the proposed research, the researcher will refer to literature as a tool to aid the constant comparative process of Grounded Theory in building a substantive theory. In order not to be limited by prior knowledge while seeking new points of view, the researcher(s) will extensively “bracket” (identify and suspend prior knowledge so as to continue without preconceived notions) all ideas formed as a result of prior knowledge (Backman & Kyngäs, 1999).
In order to maintain methodological transparency, the memoing procedures of Grounded Theory, such as theoretical memoing (tracking the ideas arising in the constant comparative analysis that makes up a grounded theory) and procedural memoing (a record of how the researchers made decisions), will be implemented in this research to inform the reader of the rationale behind the methodological choices made, and to guide them through the logic of inquiry.
Data Collection
The sample for this research will be drawn from the identified sub-set of reflective students among the participants in the research conducted by Hames and Baker (2014), and the study will be carried out at the same site. IRB approval will be sought by justifying the need for an explanatory follow-up to the previous research. Students will be emailed and their consent sought for the interview process. A protocol for the semi-structured interview will be submitted to the IRB for approval.
The sampling process (selection of participants) can be identified as purposeful random sampling: once the reflective learners are purposefully identified and their consent obtained, a set of five will be randomly selected to participate in the first round of interviews.
Data will be collected through an interview process in which students describe their thought process while solving various cognitive tasks. The interviews will be recorded and transcribed for future reference in the constant comparative process. Additionally, students will be asked to write down the mental steps they took while solving each cognitive problem.
The interview is selected for data collection as it is a popular choice in Grounded Theory research when the perspective of the participants is sought (Leedy & Ormrod, 2005), which is what this investigation seeks. Charmaz (2006) states that the interview process is a “good fit” for Grounded Theory since, like GT itself, intensive interviews are “open-ended yet directed, shaped yet emergent, paced yet unrestricted”. Charmaz (2006) also recommends complementing the interviews with an additional method, such as asking participants to document their thoughts.
The participants will first be given a question posing a cognitive task. Each participant will be asked to briefly document the steps they intend to take to solve the problem, after which they will be allowed to complete the task. After completion of the task, the interviewer will ask a few questions related to the process of task completion, following a semi-structured interview protocol.
An outline of an example protocol for a semi-structured interview is presented in Table 11.
Table 11: Semi-structured interview protocol

- Introductions (~2 minutes): ask the participant to introduce themselves (where they are from, engineering background, year of study in the academic program); introduce the researcher to the participant.
- Show the participant the first question (~1 minute).
- Ask the participant to briefly write a “plan of action” for solving it (~2 minutes).
- Allow the participant to solve the question (~2 minutes).
- Ask the participant questions specific to solving the cognitive task (~10 minutes), such as: What was your initial reaction when you saw the question? What was your approach to solving it? What part of the question did you find interesting or challenging (e.g., that it was visual, verbal, etc.)?
- Repeat for the next task.
At this stage of the study, solving a cognitive task will be generally defined as the ability to reach a solution for a task such as mental rotation (MR), matrix reasoning (MatR), or Tower of London (TOL). Each interview will last about 45 minutes and focus on two tasks selected randomly from a pool of cognitive tasks. Since the students are familiar with the cognitive tasks used in the study by Hames and Baker (2014), the researcher in this follow-up will design a new set of tasks to assess cognitive abilities, based on a review of cognitive tasks in the existing literature.
Based on the interview transcripts and the memos taken by the researcher during the interviews, categories will be formed (this process is explained in the data analysis section of this response). Once preliminary categories are formed, the next round of interviews will be conducted with five new participants from the pool, and their views used to add to and refine the existing categories. This iterative process will be repeated until saturation of the categories is observed; Charmaz (2006) defines saturation as the point at which no new categories or ideas are revealed by newly collected data. This will be followed by theoretical sampling (starting with the data, constructing tentative ideas, and then examining these ideas through further empirical inquiry).
Grounded Theory presents an inductive model of theory development, wherein theory emerges from field study insights, such as the views of the participants interviewed (Creswell, 2014; Yin, 2009). A minimum of 25 interviews, followed by a second theoretical sample, is suggested in the literature to help Grounded Theory research remain grounded in participant views (Creswell, 2014; Tashakkori & Teddlie, 1998). The number of reflective learners identified in the paper by Hames and Baker (2014) is twenty-five, although in some exemplar GT publications saturation was observed with far fewer participants. Figure 1 shows the iterative steps in a grounded theory process; the bi-directional arrows indicate the characteristic of GT as an iterative process of constant comparison.
Figure 1: The iterative steps in a grounded theory process to develop theory.
Data Analysis
Following the interviews, the coding of the transcripts will follow the steps described by GT experts (Charmaz, 2006; Glaser, 1978; Glaser & Strauss, 1967). Coding will start with open coding (fracturing and labeling the data), continue with axial coding (making connections between categories and subcategories), and conclude with selective coding (an integrative process of selecting core categories and organizing the relationships among all emerged categories). Theoretical coding (a sophisticated level of coding that follows the codes selected during focused coding) will also be conducted. The categories developed from the codes will be used to build a model to understand the process.
The theoretical model developed as a result of this study will take the form of a paradigm model, indicating the relationships around the central phenomenon being studied (performance on cognitive tasks): a set of causal conditions (to emerge from the data) shapes the phenomenon (completion of a cognitive task), with the causal mechanism established through analysis of the participants’ perceptions and grounded in their views; meanwhile, the context (reflective learners identified through self-reported scores on the FSILS) and intervening conditions (to emerge from the data) influence the strategies the learners implement in solving the task (also to emerge from the data), bringing about a set of outcomes (completion of the task). A paradigm model is thus a theoretical model suggesting that when causal conditions influence the phenomenon, the context and intervening conditions affect the strategies used to bring about certain consequences or outcomes (Charmaz, 2006). Figure 2 shows an example outline of one such model for this particular research.
Figure 2: An outline of a Grounded Theory model
Quality of Research
Anticipated Limitations and Strategies to Overcome Them
Robson (2002) elaborates on the deficiencies of humans as analysts, identifying uneven reliability, the large influence of first impressions, and limitations on processing capacity, among others. In order to deal with these real-world challenges, this research will incorporate measures to assess credibility, dependability, and trustworthiness at each step of the process of data interpretation.
Dependability is the qualitative equivalent of reliability in quantitative research and refers to the consistency of findings. In addition to the extensive memoing mentioned above, the researcher will establish intercoder agreement by having multiple researchers code the same transcripts and cross-verify the codes generated, so as to achieve dependability.
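Intercoder agreement is commonly quantified with a statistic such as Cohen’s kappa; a minimal sketch using scikit-learn’s implementation on hypothetical code assignments (the code labels and segments are invented for illustration):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical: two researchers assign one of three codes to ten transcript segments.
coder_a = ["plan", "visualize", "plan", "verify", "plan",
           "visualize", "verify", "plan", "plan", "verify"]
coder_b = ["plan", "visualize", "plan", "plan", "plan",
           "visualize", "verify", "plan", "visualize", "verify"]

# Kappa corrects raw agreement for the agreement expected by chance.
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")  # values above ~0.6 are often read as substantial agreement
```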
Trustworthiness of qualitative research is the characteristic that leads to readers’ ability to trust the findings, information, and procedures described by the researcher (Leydens, Moskal, & Pavelich, 2004). Leydens et al. (2004) recommend clarifying researcher bias through regular memoing, employing member checking, purposeful sampling, and dense description to help establish the quality of a study. Creswell (2014) defines validity in qualitative research as the accuracy of research findings, and reliability as the consistency of the approach across researchers and projects. While conducting the research, the researcher(s) will consciously partake in activities to enhance the accuracy of the findings and to establish consistency across researchers, through the extensive memoing detailed earlier and through regular communication among the researchers involved, in the case of multiple investigators.
Low participation may be a challenge when recruiting students for the interviews. It is hoped, however, that since the students previously self-volunteered to partake in the research, they will still be willing to do so. One way to increase the response rate could be to email suitable candidates and inform them of the changes their input and views could bring to pedagogy in engineering classrooms.
The purposive sampling implemented to select “reflective students” is based on the results of a survey using the FSILS, and the assumption that these students are indeed reflective learners forms the basis of this investigation. One way to work around this challenge will be to explicitly inform readers of the self-reported learning-style preferences of the interviewed participants. Methodological transparency will be key here, helping readers contextualize the generated theory to other circumstances. The challenges inherent in interviewing as a means of data collection will also apply to this research design: Patton (2005) notes that the quality of information obtained during an interview is largely dependent on the interviewer (p. 341).
Additionally, it will take time before this research can be implemented, owing to the processes of obtaining funding, IRB approval, and student consent. The time-dependence of students' learning-style preferences is another challenge that will need to be addressed. To handle the issues that may arise from relying on the earlier self-reported survey results, the researcher(s) will sensitize themselves to the literature on possible changes in learning-style preference over time. That review of literature is, however, beyond the scope of this particular outline.
Discussion/Conclusion
In this response I have critiqued the research method described by Hames and Baker (2014), using the AERA (2006) guidelines as criteria to evaluate the publication. The evaluation identified various strengths and weaknesses in the research design. The paper was particularly weak in data measurement and classification, and in analysis and interpretation, owing to insufficient discussion of the validity and reliability of the instrument and inadequate discussion of the assumptions underlying the statistical tests; these were discussed in detail. Assuming the validity and reliability of the reported results and inferences, I then presented an outline for a follow-up research study using Grounded Theory to develop an understanding of the process of solving cognitive tasks by reflective learners, grounded in the perceptions of the learners. I discussed the rationale behind adopting a GT methodology and stated my theoretical perspective to help the reader better understand the choices relating to the method. I outlined the steps, including data collection and data analysis, to arrive at a Grounded Theory model explaining the process of solving cognitive tasks by reflective learners. Finally, I discussed the limitations of the study along with strategies to improve the quality of the proposed research.
References
AERA. (2006). Standards for reporting on empirical social science research in AERA publications. Educational Researcher, 35(6), 33-40.
Backman, K., & Kyngäs, H. A. (1999). Challenges of the grounded theory approach to a novice researcher. Nursing & Health Sciences, 1(3), 147-153.
Borrego, M., Douglas, E. P., & Amelink, C. T. (2009). Quantitative, qualitative, and mixed research methods in engineering education. Journal of Engineering Education, 98(1), 53-66.
Charmaz, K. (2006). Constructing grounded theory. Thousand Oaks, Calif; London: Sage.
Creswell, J. W. (2014). Research design: qualitative, quantitative, and mixed method approaches (4th ed.). Thousand Oaks, Calif: Sage Publications.
Crotty, M. (1998). The foundations of social research: Meaning and perspective in the research process. London: Sage.
Glaser, B. G. (1978). Theoretical sensitivity: Advances in the methodology of grounded theory. Mill Valley, Calif: Sociology Press.
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine.
Guba, E. G., & Lincoln, Y. S. (1994). Competing paradigms in qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 163-194). Thousand Oaks, Calif: Sage.
Hames, E., & Baker, M. (2014). A study of the relationship between learning styles and cognitive abilities in engineering students. European Journal of Engineering Education, 40(2), 167-185.
Knight, D. B. (2011). Educating broad thinkers: A quantitative analysis of curricular and pedagogical techniques used to promote interdisciplinary skills. Paper presented at the American Society for Engineering Education Annual Conference.
Kuhn, T. S. (2012). The structure of scientific revolutions. Chicago: University of Chicago Press.
Leedy, P. D., & Ormrod, J. E. (2005). Practical research: Planning and design (8th ed.). Upper Saddle River, NJ: Prentice Hall.
Lempert, L. B. (2007). Asking Questions of the Data. In A. Bryant & K. Charmaz (Eds.), The SAGE Handbook of Grounded Theory. Thousand Oaks; London: SAGE Publications, Limited.
Leydens, J. A., Moskal, B. M., & Pavelich, M. (2004). Qualitative methods used in the assessment of engineering education. Journal of Engineering Education, 93(1), 65-72.
Marra, R. M., Palmer, B., & Litzinger, T. A. (2000). The effects of a first-year engineering design course on student intellectual development as measured by the Perry scheme. Journal of Engineering Education, 89(1), 39-46.
Patton, M. Q. (2005). Qualitative research. Wiley Online Library.
Robson, C. (2002). Real world research: A resource for social scientists and practitioner-researchers (2nd ed.). Oxford: Blackwell.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Wadsworth Cengage Learning.
Singleton, R. A., Jr., Straits, B. C., & Straits, M. M. (1993). Approaches to social research. Oxford University Press.
Smart, J. C. (2005). Attributes of exemplary research manuscripts employing quantitative analyses. Research in Higher Education, 46(4), 461-477.
Sutton, R. I., & Staw, B. M. (1995). What theory is not. Administrative Science Quarterly, 40(3), 371-384.
Tashakkori, A., & Teddlie, C. (1998). Mixed methodology: Combining qualitative and quantitative approaches (Vol. 46). Thousand Oaks, Calif: Sage.
Wise, J., Lee, S., Litzinger, T., Marra, R., & Palmer, B. (2001). Measuring cognitive growth in engineering undergraduates: A longitudinal study. Paper presented at the ASEE Annual Conference, Albuquerque, NM.
Yin, R. K. (2009). Case study research: Design and methods (Vol. 5). Los Angeles, Calif: Sage Publications.