Qualifying Exam Response 1: Research Methods

As a PhD student in Engineering Education, I recently had to take my qualifying exams. In essence, this is a 10.5-day written exam on three questions. The first question asked me to critique the methods section of an existing journal article and propose a follow-up study. The second question asked me to critique an existing assessment plan in a journal publication. The third question asked me to propose a Minor and justify it using learning theories.

I thought I would post my responses on this blog so that those interested in Engineering Education-related studies get an idea of the kind of work we are involved with. This particular post is my response to the Qualifier Questions of August 2015.

Engineering Education Qualifier Exam, August 2015:  Research Methods

In their paper “A study of the relationship between learning styles and cognitive abilities”, Hames and Baker use a quantitative approach to explore the relationship between student learning styles, as determined by the Felder-Solomon Inventory of Learning Styles (FSILS), and student cognitive abilities, assessed by performance on three tasks: a matrix reasoning task, a Tower of London task, and a mental rotation task. The authors used t-tests and correlations to analyze the responses of 51 engineering students from a university in the USA. The results indicated that the global-sequential, active-reflective, and visual-verbal learning style scales were related to response time on the cognitive tasks.

In this response, I will provide an in-depth evaluation of the published research by Hames and Baker (2014) and propose a follow-up qualitative study to explore one focused research question driven by the findings of that study. In the evaluation, I will show that the publication has major drawbacks in measurement and classification, and in analysis and interpretation, owing to insufficient discussion of instrument validity and reliability and inadequate discussion of the assumptions underlying the statistical tests. Despite these drawbacks, I will take the results to be accurate and propose a follow-up qualitative study. In the follow-up, I will use a Grounded Theory methodology to understand the process by which reflective learners solve cognitive tasks. This explanatory follow-up will seek to provide a model of that process and, in doing so, address the main aim of understanding what mechanisms can be put to use in engineering classrooms to support reflective learners, by providing an understanding of the strategies they employ while solving a cognitive task.

Evaluation

Evaluation Criteria

The Standards for Reporting on Empirical Social Science Research in AERA Publications (AERA, 2006) will be used as the guidelines to form criteria to evaluate the research by Hames and Baker (2014). The AERA guidelines were established to help researchers in the preparation of manuscripts for publication by providing a framework of expectations about what a report of empirical work should address (AERA, 2006); hence it provides a suitable and comprehensive framework by which to evaluate this research study.  Table 1 provides a list of the eight evaluation criteria along with their description.

Table 1: Criteria for Evaluation of Quantitative Study

Criterion: Appraisal
1. Problem Formulation: Clear statement of purpose and scope.
2. Design and Logic: Appropriateness of the methods and procedures used.
3. Sources of Evidence: Appropriately describes relevant characteristics of the units studied, including how they were selected and how data were collected using appropriate instruments.
4. Measurement and Classification: How information is measured and classified; validity and reliability of the instruments discussed.
5. Analysis and Interpretation: Appropriate evidence is provided to warrant the outcomes and conclusions; the statistical tests employed are critiqued; alternate viewpoints are considered; validity and reliability of the results discussed.
6. Generalization: Appropriate justification and rationale for the generalizability of results to different contexts are provided.
7. Ethical Considerations: Appropriate ethical considerations are discussed.
8. Title, Abstract, and Headings: Appropriate writing style and structure to help the reader follow the logic of inquiry.

In addition to the criteria based on the AERA (2006), sub-criteria developed by other authors (e.g., Borrego, Douglas, & Amelink, 2009; Creswell, 2014) will be used to better inform the critiquing process and the evaluation results.

The evaluation of the research by Hames and Baker (2014) found that it was lacking mainly in the areas of problem formulation, design and logic, and measurement and classification. The strengths of the research lay in addressing ethical considerations, in the balance and structure of the report, and in acknowledging limitations. These strengths and weaknesses are addressed in detail in the following paragraphs.

Assumptions Made Before Evaluation. Prior to discussing the results of the evaluation of the research by Hames and Baker (2014), I shall address, in this sub-section, a few of the assumptions made while evaluating the research. The first assumption was that the general research purpose was to determine possible correlations between learning styles, as measured by the FSILS instrument, and performance on selected cognitive tasks, in order to understand how learning styles relate to the cognitive processes found in engineering education. This is specified explicitly by the authors as the goal of their research (Hames & Baker, 2014, p. 3). Creswell (2014) identifies the research purpose and the research questions as two distinct aspects of a research design. The research purpose “sets the objectives, intent, or the major idea of a proposal or study. This idea builds on a need and is then refined into research questions” (Creswell, 2014, p. 124). The research questions specified by Hames and Baker (2014) are identified as RQ1 and RQ2 and will be referred to as such in the following evaluation:

RQ1: Are self-assessed learning styles in any way related to performance on cognitive tasks? For example, would students with a strong visual learning preference actually exhibit a higher level of visual-spatial skills as measured by certain tasks; and

RQ2: How can the combination of cognitive tasks and FSILS assessment yield insight into current issues facing the engineering education community? For example, how can engineering teaching styles and curriculum be adapted in ways to appeal to different learning styles, and will this be effective in increasing student persistence and success (p.3).

The second assumption is that Hames and Baker (2014) conducted the research from a postpositivist perspective, where a theoretical perspective is the researcher’s philosophy that informs and contextualizes the research methodologies (Crotty, 1998, p. 3). These theoretical perspectives have been identified by various authors as a paradigm (Guba & Lincoln, 1994), an epistemology (Crotty, 1998), or a worldview (Creswell, 2014). Creswell (2014) describes worldviews as the general philosophical orientation about the world and the nature of research that a researcher brings to a study, and recommends that researchers explicitly state their worldview in publications of their research in order to inform the reader of the larger philosophical ideas they espouse. He identifies four main worldviews: postpositivism, constructivism, transformative, and pragmatism. Postpositivism is characterized by determinism, reductionism, empirical observation and measurement, and theory verification (Creswell, 2014).

Although Hames and Baker (2014) are not explicit about their worldview, the design of their study hints at a postpositivist worldview, evident mainly in the authors’ decision to pursue quantitative research to investigate their research purpose. The research questions are deterministic in nature: in a deterministic philosophy, causes (such as a student’s learning style) determine effects or outcomes (such as performance on cognitive tasks), and the authors reflect the need to identify and assess the causes that influence outcomes (such as how learning styles are related to cognitive processes), as is typical of experiments. The design is reductionist, since the authors break the problem statement into a small, discrete set of research questions to test. Finally, the researchers carry out empirical observations to collect data with which to refine existing theories (such as theories on student learning styles). Thus, in the absence of an explicit statement of the authors’ theoretical perspective, it is reasonable to assume that the research by Hames and Baker (2014) is governed by a postpositivist worldview.

Evaluation Results

The criteria presented in Table 1 were used to evaluate the strengths and weaknesses of the research by Hames and Baker (2014). A summary of the results of the evaluation, along with the primary reasons influencing each result, is presented in Table 2.

Table 2: Summary of Evaluation Results

Criterion (Result): Reasons
1. Problem Formulation (Weakness): Clear purpose, but lacking theory and lacking a rationale for the conceptual and methodological orientation of the study.
2. Design and Logic (Weakness): Lacking a clear logic of inquiry.
3. Sources of Evidence (Weakness with minor strengths): Describes participant demographic details and acknowledges limitations in their collection; no reference to sampling or the means of participant selection from the volunteer list.
4. Measurement and Classification (Weakness): Lacking instrument validity and reliability evidence for both the FSILS survey and the questions assessing cognitive abilities.
5. Analysis and Interpretation (Weakness with minor strengths): Limitations in the analysis; validity concerns not addressed.
6. Generalization (Weakness): Results cannot be generalized; generalization not described in detail.
7. Ethical Considerations (Strength): Addresses IRB approval; acknowledges funding support.
8. Title, Abstract, and Headings (Strength): Paper is balanced; meaningful transitions guided through headers and sub-headers.

 

In this section, I shall elaborate on each of the criteria listed in Table 2.

Criterion 1: Problem Formulation. Problem formulation addresses why the research would be of interest to the research community and how the investigation is linked to prior research and knowledge (AERA, 2006). Table 3 provides the sub-criteria for problem formulation, adapted from the recommendations of the AERA (2006).

Table 3: Summary of Evaluation Results (+) indicates a strength in design, (-) indicates a weakness

Sub-Criterion Result
Clear statement of purpose (+)
States contribution to knowledge (+)
Reviews relevant scholarship (+)
Rationale for theoretical, methodological, or conceptual orientation is described (-)
Rationale for problem formulation as it relates to group studied is provided (-)

 

Hames and Baker (2014) explicitly state a research goal, which can be interpreted as a statement of purpose. They are explicit about the potential contribution of their research to knowledge, stating that “a better understanding of the relationship between learning styles and cognitive abilities will allow educators to optimize classroom experience for students” (p. 2). Thus they are able to clearly situate and justify their investigation. They successfully introduce the problem statement and identify the gaps in the literature through a review of scholarship: “few studies have correlated student learning styles with cognitive abilities” (p. 2).

While there is a strong literature review that justifies the need to explore learning styles in engineering classrooms, there is no explicit discussion of a theory used to frame the research. Creswell (2014) defines theory as a discussion of how a set of variables or constructs are interrelated and how these relationships describe a particular phenomenon. Theory is important in quantitative research because it provides insight into the hypotheses being tested. Hames and Baker (2014) do not explicitly provide any hypotheses describing the expected relationships among variables. Even where they state an expectation about the outcome, such as “active learners might be expected to take a more motor-oriented approach to the rotation task, resulting in different levels of accuracy and different response times from reflective learners, who might not engage the motor rotation process in the brain to the same extent” (p. 5), the statement is based on prior research rather than on theory. In their article on what theory is not, Sutton and Staw (1995) observe that researchers often use prior work as a smokescreen in the absence of a theory guiding the hypotheses to be tested. Sutton and Staw (1995) are explicit that data, hypotheses, diagrams, references, and lists of variables are not theory, and they insist that quantitative researchers publish papers with adequate theory building.

Kuhn (2012) presents a counter-argument to this view of theory in social science research, stating that the mission of research should be the accumulation of empirical findings rather than an ebb and flow of theoretical paradigms. This meta-analytic view of theory tends to value research publications because they serve as storage devices for obtained correlations, not because they elaborate a set of theoretical ideas. However, the arguments made by Hames and Baker (2014) imply an attempt to “fill a gap” in the existing literature by contributing to theory, and they are not explicit about choosing not to use theory or about subscribing to a meta-analytic view similar to Kuhn’s (2012). In the absence of a strong theoretical framework and of a justification for that absence, the argument put forth by Sutton and Staw (1995) for greater theoretical emphasis in quantitative research, along with more appreciation of the empiricism of qualitative endeavors, is a valid basis for critiquing the research method of Hames and Baker (2014).

The review of literature by the authors does not justify conducting the study across all years of an engineering program. The authors are interested in engineering students’ cognitive development and its relation to preferred learning style, yet a significant body of engineering education research suggests that there is very little cognitive development in the first two years of undergraduate education, where the emphasis is on rote learning and the application of formulae, and courses tend not to promote reflective learning (Knight, 2011; Marra, Palmer, & Litzinger, 2000; Wise, Lee, Litzinger, Marra, & Palmer, 2001). It is recommended that Hames and Baker (2014) explicitly state their motivation for studying engineering students’ ability to perform cognitive tasks, and explicitly justify the choice and implications of a sample comprising engineering students from different departments and of different academic standing.

Additionally, the authors are not explicit about their worldview, which might have influenced their review of literature. Creswell (2014) recommends that researchers explicitly state their worldview in publications of their research in order to inform the reader of the larger philosophical ideas they espouse. Although this evaluation assumes that the authors adopt a postpositivist framework, this has not been stated explicitly in the paper, which is a weakness of the problem formulation.

Criterion 2: Design and Logic. This criterion promotes a clear chain of reasoning and a specific understanding of the study design. Specifically, the design and logic of a research study comprise the choice of method, the approach to the problem formulated, the research questions, the approach to analysis and interpretation, and the format of reporting (AERA, 2006). Table 4 lists the sub-criteria for this criterion and the corresponding results of the evaluation.

Table 4: Summary of Evaluation Results (+) indicates a strength in design, (-) indicates a weakness

Sub-Criterion Result
Clear logic of inquiry (-)
Specific and unambiguous description of the design (-)

 

The design of the study is a drawback of this research. The weakness lies in the authors’ decision to adopt quantitative methods, a choice they fail to explain and justify. Hames and Baker (2014) state “are self-assessed learning styles in any way related to performance on cognitive tasks” and “how can the combination of cognitive tasks and FSILS assessment yield insight into current issues facing the engineering education community” as their research questions. The phrases in any way and insight into current issues suggest broad, open-ended questions rather than questions testing a hypothesis. Quantitative methods are best suited to instances in which hypotheses need to be tested, while qualitative methods are suited to instances in which a concept needs to be explored. The use of open-ended questions warrants an exploratory method, especially for the second research question, which seeks insight into current issues facing the engineering education community.

Additionally, the research questions seem broad, as opposed to the narrow and focused questions typical of a quantitative approach (Creswell, 2014). The study could have benefited from a mixed methods approach, since the research questions are not adequately answered by a quantitative method alone. Using mixed methods research, the researchers could both generalize the findings to a population and develop a detailed view of the meaning of a phenomenon or concept for individuals (Creswell, 2014). This approach might have best suited this set of research questions, which sought both to understand the specific correlation between learning styles and cognitive processes in individuals and to explore general insights on issues in engineering education related to learning styles. Hames and Baker (2014) do not explicitly detail the hypotheses being tested or state the rationale behind the use of a quantitative approach, nor do they adequately answer their research questions. As a result, the research fails to adequately address the gap in the literature that Hames and Baker (2014) identify. The decision to adopt a purely quantitative method is therefore identified as a major weakness of this paper.

Criterion 3: Sources of Evidence. Sources of evidence refers both to the units of the research and to the data collected to address the research questions. Researchers should address sources of evidence because the role of the researcher and the relationship between researcher and participants can influence the data collection process (AERA, 2006), and because it is important to inform the reader of the processes, choices, and judgments made during the research so that it can be replicated if required. Table 5 lists the sub-criteria for this criterion and the corresponding results of the evaluation.

Table 5: Summary of Evaluation Results (+) indicates a strength in design, (-) indicates a weakness

Sub-Criterion Result
Units of study

Relevant characteristics with relation to parent population

Means of selection of sites, groups, participants, etc.

Description of groups

Treatment described in detail

(+)(+)
(-)
(-)
Collection of data or empirical materials
Time and duration of data collection
Report on instruments
(-)
(+)

 

Hames and Baker (2014) state: “51 students from a university volunteered for the study, 18 female and 33 male. The mean age was 22.4 years” (p. 5). The academic standing and major of the participants were also presented to the reader in the form of pie charts. The authors acknowledge that the convenience sample (i.e., volunteers for the study) is not representative of the parent population, and note as a further limitation that the ethnic backgrounds of the students were not recorded. Creswell (2014) indicates that researchers usually select a sample size based on a fraction of the population, on sizes typically used in previous literature, or on the margin of error they are willing to tolerate (p. 159). Hames and Baker (2014) state that the sample represents 2% of the population, but provide no justification for this choice of sample size. The authors also provide no justification for the use of a single site to study the effect of learning styles on students’ cognitive abilities, and they are not explicit about the role of the researchers in the data collection process.

Additionally, participant consent can be inferred from the authors’ statement that “(students) volunteered for the study”. However, the description would have benefited from more detail on the student consent process, how much time was spent assessing each respondent on the cognitive tasks, how rapport was established, whether the tasks were described in detail to the participants, and so on. The authors provide a brief review of literature as a report on the instruments used. Further considerations, such as the validity and reliability of the instruments, are identified as a weakness and discussed in detail under the next criterion.

Criterion 4: Measurement and Classification. Measurement is the process by which an observation or behavior (in this case, cognitive ability and learning style preference) is converted into a quantity (e.g., score on the FSILS, and score and response time for specific questions on a task) which is then subject to quantitative analysis (e.g., t-tests and correlations), while classification is the process of segmenting data into units of analysis (AERA, 2006). Table 6 lists the sub-criteria for this criterion and the corresponding results of the evaluation.

 

Table 6: Summary of Evaluation Results (+) indicates a strength in design, (-) indicates a weakness

Sub-Criterion Result
Measure should preserve characteristics of phenomenon being studied (-)
Classification should be comprehensively described (+)
Reporting should describe data elements unambiguously (-)
Rationale should be provided for relevance of a measure or classification (-)

 

Measurement of behavior and classification was achieved using two instruments. The first instrument, the FSILS, was used to classify students on the basis of their preferred learning styles. A self-report survey was used for this process. Self-report surveys carry the risk of being misinterpreted by respondents and hence answered inaccurately (Singleton Jr, Straits, & Straits, 1993). The authors provide no justification for their selection of a self-report survey as part of this research.

In order to assess the quality of the measurement and classification, the validity and reliability of the survey instrument need to be explored. The authors state in their introductory section: “additional papers exist that examine the robustness and validity of the FSILS, but with not clear consensus” (p. 2). The validity and reliability of an existing instrument need to be examined to decide whether meaningful inferences can be drawn from its use, and hence whether it should be used at all (Creswell, 2014). Content validity (do the items measure the content as intended), construct validity (do items measure hypothetical constructs, such as an individual’s reasoning, which is internal to the respondent), concurrent validity (do scores predict a criterion measure), and reliability (such as measures of internal consistency) are typically expected to be reported by researchers using an instrument for data collection (Creswell, 2014). However, for the FSILS instrument, the authors report no validity or reliability evidence.
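To illustrate the kind of reliability evidence that could have been reported, the sketch below computes Cronbach's alpha, a common measure of internal consistency, for a hypothetical respondents-by-items matrix of item scores on one FSILS scale. The matrix and its values are invented for illustration only and are not the authors' data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items on the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents answering 4 items on one learning-style scale
scores = np.array([
    [1, 1, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
    [0, 0, 0, 0],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```

Reporting a value such as this, alongside evidence for content and construct validity, would have allowed readers to judge whether meaningful inferences could be drawn from the instrument.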

Answering a survey requires that respondents comprehend the question, retrieve the requested information from memory, formulate a response in accord with the question and information, and finally communicate a response deemed appropriate (Singleton Jr et al., 1993). A good survey question is one that is easy to comprehend, so that the respondent understands what is being asked. One recommendation for establishing content validity, and hence improving the understandability of the questions, is to pilot the instrument on a small sample before using it to measure the actual sample (Creswell, 2014; Singleton Jr et al., 1993). In the case of the FSILS instrument, the authors neither refer to piloting nor address any of these validity concerns, despite their own review of literature suggesting no clear consensus on the validity or reliability of the instrument. Also, if there is a concern that engineering students have changed over time, the present-day validity of the FSILS, an instrument developed in 1988, can be questioned.

The authors justify the appropriateness of the FSILS instrument by stating: “the instrument was used due to the fact that it was originally designed for engineering students – and has been cited over 3300 times according to Google scholar” (p. 3). Since the number of citations of an instrument does not imply its validity or reliability, the authors inadequately justify the use of the FSILS instrument, which is a major weakness of the paper. Similarly, Hames and Baker (2014) provide no validity or reliability evidence for the questions assessing the cognitive tasks, and inadequate theory is cited to indicate that these tasks indeed measure the cognitive abilities of engineering students. Additionally, since one of the results of the research is based on student response time on the tasks, it would have been helpful to establish the content validity of the task questions. For instance, if a student is confused by the wording of a question, the response time would not be a true indication of the cognitive process of reflecting on the question, but rather an indication of the student trying to decipher it. In such a situation, a think-aloud exercise, with the student walking an observer through their cognitive process, would have provided a better understanding.

Criterion 5: Analysis and Interpretation. This criterion examines the evidence provided to confirm that the outcomes and conclusions are warranted, and that counter-examples or viable alternatives have been appropriately considered. Table 7 lists the sub-criteria for this criterion and the corresponding results of the evaluation.

Table 7: Summary of Evaluation Results (+) indicates a strength in design, (-) indicates a weakness, and NA indicates data was unavailable.

Sub-Criterion Result
Transparent description of procedures (+)
Sufficiently described analytic techniques (-)
Analysis and presentation of outcomes is clear (-)
Intended or unintended circumstances affecting interpretation of outcomes are described (-)
Descriptive and inferential statistics provided for each of the statistical analyses (+)
Clear presentation of summary/conclusion (-)
Considerations that may have arisen in data collection and processing (such as missing data) should be adequately discussed (-)
Considerations that may have arisen in data analysis (such as violation of assumption of statistical procedures) should be adequately discussed (-)
Detailed discussion of statistical results (-)

 

The authors provide a detailed discussion of their interpretations of the data analysis. They provide evidence for each claim and address a few alternative explanations of the relationship between learning styles and cognitive abilities. Descriptive and inferential statistics are presented in diagrams and in tabular form. However, the authors do not adequately answer RQ2, since very little insight into current issues facing engineering education can be achieved through a strictly quantitative method. Insights and perspectives warrant a deeper understanding of the attitudes and beliefs of individual participants, which is facilitated by qualitative investigation. As mentioned earlier, this study would have benefited from a mixed methods approach in order to answer the research questions.

In reporting the statistical tests, the authors do not present the information clearly or coherently. There is no explicit identification of the variables (a characteristic or attribute of an individual that can be measured or observed and that varies among the people or organizations studied), nor are the dependent and independent variables identified for each test. Instead, the authors provide a large number of tables and graphs without guiding the reader through the results, resulting in a lack of methodological transparency.

The authors assume a normal distribution but provide no statistical justification for this assumption, apart from stating that a chi-square test was used. The results would have benefited from a discussion of the outcome of the normality test, since the normality assumption is central to the t-test and the results may be invalid if it is violated. The independent-samples t-test is used to determine whether there is a statistically significant difference between the means of two independent samples; it assumes normally distributed data and random samples, violations of which can produce inaccurate results. The authors do not discuss the implications of a non-random or skewed sample for these statistical tests.
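To make this concrete, the sketch below shows one way such assumption checks could be reported: a Shapiro-Wilk test for normality on each group's response times, with a non-parametric Mann-Whitney U test as a fallback when normality is doubtful. The arrays of response times and the group labels are invented for illustration and are not the data from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical response times (seconds) for two learning-style groups
active = np.array([12.1, 9.8, 14.3, 11.0, 10.5, 13.2, 9.9, 12.7])
reflective = np.array([15.4, 17.2, 14.9, 18.1, 16.3, 15.8, 17.9, 16.6])

# Check the normality assumption in each group before choosing a test
normal = all(stats.shapiro(group)[1] > 0.05 for group in (active, reflective))

if normal:
    # Assumption plausible: independent-samples t-test on the group means
    result = stats.ttest_ind(active, reflective)
else:
    # Assumption doubtful: fall back to the non-parametric Mann-Whitney U test
    result = stats.mannwhitneyu(active, reflective)

print(result)
```

Reporting the normality check, and the choice of test it led to, is the kind of discussion the paper lacks.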

A detailed discussion of all variables would have improved the quality of this publication. The authors provide some references suggesting that males and females approach cognitive tasks differently; however, it is not made explicit that these references provided the basis for investigating gender differences. No justification is given for why other variables may have been excluded from the statistical analysis.

The authors use the t-test to compare performance between genders, but not to test differences across respondents’ departments or other demographics such as year in the program. A control variable is an additional independent variable that is measured because it may also influence the dependent variable (Creswell, 2014). Hames and Baker (2014) could have controlled for demographics and reported ANOVA-based results as part of the analysis. The authors do not address the possibility of confounding variables either. Confounding variables are those that cannot be measured or directly observed but can serve to explain the relationship between the dependent and independent variables (Creswell, 2014). For instance, a student with prior experience of the Tower of London task could potentially score higher on it without the score being representative of their cognitive ability or learning style preference; in this case, prior knowledge is a confounding variable.
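As an illustration of the suggested analysis across more than two groups, the sketch below runs a one-way ANOVA comparing response times across respondents' departments. The department names and values are hypothetical stand-ins, shown only to indicate the form such an analysis could take.

```python
from scipy import stats

# Hypothetical response times (seconds) grouped by department
mechanical = [11.2, 13.5, 12.1, 14.0, 12.8]
electrical = [15.1, 14.2, 16.3, 15.7, 14.9]
civil = [12.9, 13.1, 11.8, 13.6, 12.4]

# One-way ANOVA: do mean response times differ across departments?
f_stat, p_value = stats.f_oneway(mechanical, electrical, civil)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```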

To help readers follow the flow of logic, it is recommended that, for the correlations conducted, the authors provide a single correlation matrix showing how the different variables correlate with one another, rather than multiple scattered tables of correlation values.
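A single correlation matrix of the kind recommended here could be produced as in the sketch below. The column names (two FSILS scale scores and two task response times) and the randomly generated values are hypothetical placeholders for the variables reported in the paper.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 51  # same number of respondents as in the study

# Hypothetical variables: two FSILS scale scores and two task response times
data = pd.DataFrame({
    "active_reflective": rng.integers(-11, 12, n),
    "visual_verbal": rng.integers(-11, 12, n),
    "rotation_rt": rng.normal(14, 3, n),
    "tol_rt": rng.normal(20, 5, n),
})

# One matrix showing every pairwise correlation at a glance
print(data.corr().round(2))
```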

Validity and Reliability. The other set of drawbacks in the analysis and interpretation stems from various validity and reliability considerations. Validity refers to the approximate truth of an inference (Shadish, Cook, & Campbell, 2002). The AERA (2006) recommends that authors identify any considerations during data collection or data analysis that might compromise the validity of the analyses or inferences.

For instance, a threat to internal validity such as a diffusion threat may arise because the researchers did not have all individuals participate in the experiment simultaneously. Diffusion occurs when participants of different groups communicate about the tasks in a way that unduly influences the outcome. If a student A who has already completed the cognitive tasks discusses them with a student B who has yet to take them, the validity of the results is adversely affected, since student B might perform better on the tasks. Hames and Baker (2014) provide no discussion of such considerations that might affect validity. This lack of discussion of anticipated threats to validity is identified as another key drawback of this research design.

Criterion 6: Generalization. Generalization of the results of a research investigation is intended from a sample to the sampling frame, such as a population or a universe (AERA, 2006). Generalization provides the implications of a particular study for the larger population of which the sample is a representation. Thus, in order for inferences drawn about the larger population to be correct, researchers need to address how the results might generalize to certain people, settings, and times (Creswell, 2014). In the case of Hames and Baker (2014), the researchers acknowledge the limitation of the sample and accept that it is not representative of the engineering population, and that the sample over-represents electrical engineering students and women compared to the parent population. Table 8 lists the sub-criteria for this criterion, based on the AERA (2006), and the corresponding results of the evaluation.

Table 8: Summary of Evaluation Results (+) indicates a strength in design, (-) indicates a weakness

Sub-Criterion Result
Specifics of participants, contexts, activities, data collections and manipulations provided (-)
Clearly stated intended scope of generalization of findings of the study (-)
Clearly stated logic and rationale behind generalization (-)

 

Generalizability is a drawback of this research since the authors do not provide adequate information about the participants’ backgrounds. Hames and Baker (2014) acknowledge this lack of information as a limitation and warn readers that “the results should not be generalized to students from other countries” (p. 18). The authors, however, fail to identify any contexts to which the results could be generalized. This lack of logic and rationale for the generalization of the paper’s results is identified as a limitation of the study.

Criterion 7: Ethical Considerations. The sub-criteria identified under the criterion on ethical considerations by the AERA (2006) describe those ethical issues directly relevant to reporting research. Research ethics involve the application of ethical principles to scientific research and cover three main areas: the ethics of data collection and analysis, the ethical treatment of participants, and ethical responsibilities to society (Singleton Jr et al., 1993). Table 9 lists the sub-criteria for this criterion and the corresponding results of the evaluation.

Table 9: Summary of Evaluation Results (+) indicates a strength in design, (-) indicates a weakness. NA indicates data was unavailable.

Sub-Criterion Result
Ethical considerations in data collection, analysis and reporting addressed (+)
Honors consent agreements with human participants (+)
No omission, falsification, or fabrication (+)
Data available and maintained in a way that is reproducible by reader, if needed (+)
Funding support acknowledged (+)

 

Hames and Baker (2014) report the approval of the institutional review board, which speaks to the ethical treatment of the research participants. The authors also acknowledge their funding sponsorship towards the end of the paper.

The researchers do not explicitly address participant debriefing or deception (lying to or withholding information from participants). However, since the principle of informed consent does not require researchers to reveal the entire study to participants, withholding information about the hypotheses is not considered deception (Singleton Jr et al., 1993). Additionally, since no indication is provided otherwise and the reporting is reasonably transparent, it is reasonable to assume that the authors did not falsify, fabricate, or omit any data.

Criterion 8: Title, Abstract, and Headings. A well-constructed article is important to guide the reader through the logic of inquiry. Smart (2005) describes exemplary manuscripts as those which build a reader’s trust by giving balanced attention to the three fundamental components of a research manuscript: the issue component, the technical and analytical component, and the contextual component (p. 463). The issue component relates to the background and literature review, the technical component to the data collection and analysis techniques employed, and the contextual component to the discussion and inferences drawn from the results. Table 10 provides a summary of the sub-criteria for this criterion and the results of the evaluation.

Table 10: Summary of Evaluation Results (+) indicates a strength in design, (-) indicates a weakness

Sub-Criterion Result
Title is indicative of research summary (+)
Abstract provides a concise summary (+)
Headings and sub-headings make clear the logic of inquiry (+)

 

In the paper by Hames and Baker (2014), the authors devote adequate attention to each of the three components, with sufficient discussion provided in each section. The paper is well balanced, with clear, well-constructed headings and sub-headings that guide the reader through the logic of inquiry. This is a strength of the paper.

Evaluation conclusion

In summary, this evaluation critiqued the research method employed by Hames and Baker (2014) using criteria based on guidelines for publications recommended by the AERA (2006).

This evaluation found that the research method had several major drawbacks. The decision to adopt a quantitative approach was not adequately justified by the authors, nor was it adequate to answer the two research questions identified. The theoretical grounding was inadequate, and the authors did not justify most of their research choices. The FSILS survey and the cognitive task questions, used to assess the students’ learning styles and cognitive abilities respectively, were not pilot tested, and the discussion of their validity and reliability based on prior literature was inadequate given the lack of consensus in that literature. For the statistical analysis, the authors use parametric tests (such as the t-test, which assumes normality) without adequately addressing the underlying assumptions or justifying the choice of these tests. The authors conclude that the study should not be generalized to international contexts, but do not discuss the contexts to which the results can be generalized. Despite these major limitations, the study had strengths in the balance and structure of the research report, and can be assumed to have been conducted and reported ethically.

Proposed Follow-up Study

For the proposed follow-up, I shall assume that the results of the paper by Hames and Baker (2014) are valid and reliable, and justify a follow-up study to further examine some of those results. In their paper, the authors conclude that performance on the cognitive tasks was related to the respondents’ learning styles as measured by the FSILS instrument. Based on this result, the authors recommend that suitable steps be taken in engineering classrooms to accommodate different learning styles.

In the context of reflective learners, Hames and Baker (2014) present results showing that response times on the cognitive tasks were higher for reflective learners than for active learners. The authors conclude that the higher response time of reflective learners indicates that these learners take more time to process the information. Understanding how a reflective learner approaches a cognitive problem will give deeper insight into the pedagogical approaches that can be implemented in the classroom to cater to reflective learners. The follow-up research outlined below will thus contribute to one focused aspect of understanding the relationship between learning style preference and student performance on cognitive tasks.

Introduction

The paper by Hames and Baker (2014) reports a statistical relationship between reflective learners’ performance on the cognitive tasks and their learning style (i.e., a positive correlation): the more reflective a learner is according to the FSILS instrument, the more time they take to respond to a cognitive task. To accommodate reflective learners in the classroom, the authors suggest that classes provide enough time for reflective students to process information at a slower pace. However, for a deeper understanding of why reflective learners take more time on a cognitive task and how they use this time, a follow-up study could be carried out to understand what strategies reflective learners employ while solving a cognitive problem.

In this section I will justify a Grounded Theory based follow-up study with the research purpose: to develop an understanding of the process by which reflective learners solve cognitive tasks, grounded in the perceptions of the learners. I will first provide the research questions that will guide the explanatory follow-up study, and discuss my own worldview as the primary researcher who will conduct it. I will then explain the choice of Grounded Theory methodology to investigate the research questions and the role of prior knowledge and theory in sensitizing the primary researcher before conducting the research. Finally, I shall outline the data collection and data analysis processes and discuss the potential limitations of the proposed study.

Research Question

This research study seeks to develop an understanding of the process by which reflective learners solve cognitive tasks, grounded in the perceptions of the learners. In particular, the research seeks to answer the following sub-questions:

  1. What strategies employed by reflective learners result in their successful solving of cognitive problems?
  2. What causal mechanisms are responsible for how reflective students solve cognitive problems?

The research will ultimately develop a theoretical model to explain the process by which reflective learners solve cognitive tasks. The theoretical model will be grounded in data and will emerge from the researchers’ interactions with the reflective learners.

Why Grounded Theory

The selection of any research design should be directly informed by the research question being asked (Borrego et al., 2009). In this case the intent of the research is to develop an understanding of the process by which reflective learners solve cognitive tasks, grounded in the perceptions of the learners. Grounded Theory is used to learn about the causal relationships involved in a process (Charmaz, 2006). A causal relationship explains the whys and hows of a process, while a process itself consists of unfolding temporal sequences that may have identifiable markers, with clear beginnings and endings and benchmarks in between (Charmaz, 2006). In this research, the process by which reflective learners solve cognitive tasks will be studied. Grounded Theory is a suitable approach for gaining an interpretive understanding of this process, since it allows the researcher to access the research participants’ perceptions and views and to construct an understanding of the process grounded in those perceptions (Charmaz, 2006).

Since this research will primarily be driven by the findings of Hames and Baker (2014), the follow-up may be understood as part of a sequential explanatory mixed methods design. Creswell (2014) identifies sequential explanatory mixed methods research as designs in which a more in-depth understanding of quantitative results is sought. Charmaz (2006) states that Grounded Theory can contribute to mixed methods studies driven by quantitative results by providing an interpretive understanding of experience along with explanations. Since the follow-up seeks to understand the process by which reflective learners solve cognitive tasks, and is based on findings from an earlier quantitative study whose results call for further explanation, the choice of a Grounded Theory method is justified.

The findings of this research will lead to the development of a substantive theory, that is, a theory developed for a specific empirical area of inquiry and specific to a particular group or setting. In this case, the theory will be contextualized to reflective learners at the institution at which Hames and Baker (2014) conducted their research.

 

 

Researcher Positionality and Sensitization.

The intended primary researcher subscribes to a pragmatic worldview. A pragmatic worldview, as described by Creswell (2014), is one in which the researcher does not subscribe to any particular method, whether quantitative or qualitative; rather, the researcher uses a “what works” approach to conduct the research in an attempt to best analyze the problem (p. 11).

Researchers who adopt Grounded Theory hold different theoretical perspectives depending on their views on the objectivity of the researcher and the use of theory. For instance, classical “Glaserian” Grounded Theory focuses on objectivity and prescribes researcher neutrality, emphasizing that the theory forms from the data and is “grounded in the views of the participants” (Glaser, 1978). However, absolute empathetic neutrality is not possible for a researcher. Lempert (2007) instead characterizes research as a negotiation between the researcher and the research participants, describing it as a “practice of give and take”: an unplanned process of continuous negotiation by all participants in the research process, including the researcher. The research method outlined in this section subscribes to this view of the researcher’s role.

With regard to the use of theory, classical Grounded Theory researchers advocate delaying comprehensive use of the literature until after the entire story has emerged from the empirical data (Glaser, 1978; Glaser & Strauss, 1967). A more pragmatic approach to the use of literature and theory is explained by Lempert (2007) in terms of sensitizing concepts: literature reviews alert a researcher to gaps in the existing literature, which can aid theorizing and lead to a more nuanced story, although this reference to literature does not define the research. For the proposed research, the researcher will refer to literature as a tool to aid the constant comparative process of Grounded Theory in building a substantive theory. In order not to be limited by prior knowledge while seeking new points of view, the researcher(s) will extensively “bracket” (identify and suspend prior knowledge so as to continue without preconceived notions) ideas that form as a result of prior knowledge (Backman & Kyngäs, 1999).

In order to maintain methodological transparency, memoing procedures used in Grounded Theory, such as theoretical memoing (tracking the ideas that arise during constant comparative analysis) and procedural memoing (recording how the researchers made decisions), will be implemented in this research to inform the reader of the rationale behind the methodological choices made and to guide them through the logic of inquiry.

Data Collection

The sample for this research will be drawn from the sub-set of participants identified as reflective learners in the research conducted by Hames and Baker (2014), and the study will be carried out at the same site. IRB approval will be sought by justifying the need for an explanatory follow-up to the previous research. Students will be emailed and their consent sought for the interview process, and a protocol for the semi-structured interview will be submitted to the IRB for approval.

The sampling process (the selection of participants) can be identified as purposeful random sampling. Once the reflective learners are purposefully identified and their consent obtained, a set of 5 will be randomly selected to participate in the first round of interviews.
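A minimal sketch of this selection step, assuming a hypothetical list of consenting reflective learners identified by placeholder IDs, might look like the following.

```python
import random

# Hypothetical pool of reflective learners who have given consent (IDs are placeholders)
consented = ["P01", "P03", "P07", "P09", "P12", "P15", "P18", "P21", "P24"]

random.seed(42)  # fixed seed so the selection is reproducible and auditable
first_round = random.sample(consented, k=5)  # random draw within the purposeful pool
print(first_round)
```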

Data collection will be through an interview process in which the students describe their thought process while solving various cognitive tasks. The interviews will be recorded and transcribed for future reference in the constant comparative process. Additionally, students will be asked to write down the mental steps they took while solving each cognitive problem.

The interview process is selected for data collection because it is a common choice in Grounded Theory research when the perspective of the participants is sought (Leedy & Ormrod, 2005), which is what this investigation seeks. Charmaz (2006) states that the interview process is a “good fit” for Grounded Theory since, like GT, intensive interviews are “open-ended yet directed, shaped yet emergent, paced yet unrestricted”. Charmaz (2006) also recommends the use of an additional method to complement the interviewing process, such as requesting participants to document their thoughts.

The participants will first be given a question posing a cognitive task. Each participant will then be asked to briefly document the steps they intend to take to solve the problem, after which they will be allowed to complete the task. After completion of the task, the interviewer will ask a few questions related to the process of task completion, following a semi-structured interview protocol.

An outline of an example protocol for a semi-structured interview is presented in Table 11.

 

 

Table 11: Elaborating on the interview protocol

Semi-structured interview protocol

Introductions (~2 minutes)
- Ask the participant to introduce themselves: where they are from, engineering background, year of study in the academic program
- Introduce the researcher to the participant

Show the participant the first question (~1 minute)

Ask the participant to briefly write a “plan of action” for solving it (~2 minutes)

Allow the participant to solve the question (~2 minutes)

Ask the participant questions specific to solving the cognitive task (~10 minutes), such as:
- What was your initial reaction when you saw the question?
- What was your approach to solving the question?
- What part of the question did you find interesting or challenging (e.g., it was visual, verbal, etc.)?

Repeat for the next task

 

At this stage in the study, solving a cognitive task will be generally defined as the ability to reach a solution for a cognitive task such as mental rotation (MR), matrix reasoning (MatR), or Tower of London (TOL). Each interview will last about 45 minutes and focus on 2 tasks selected randomly from a pool of cognitive tasks. Since the students are already familiar with the cognitive tasks used in the study by Hames and Baker (2014), the researcher in this follow-up will design a new set of tasks to assess cognitive abilities, based on a review of cognitive tasks existing in the literature.

Based on the interview transcripts and the memos taken by the researcher during the interview process, categories will be formed (this process is explained in the Data Analysis section of this response). Once preliminary categories are formed, the next round of interviews will be conducted with 5 new participants from the pool, and their views will be used to add to and refine the existing categories. This iterative process will be repeated until saturation of the categories is observed; Charmaz (2006) defines saturation as the point at which no new categories or ideas are revealed by newly collected data. This will be followed by theoretical sampling (starting with data, constructing tentative ideas, and then examining these ideas through further empirical inquiry).

Grounded Theory presents an inductive model of theory development wherein theory emerges from field study insights, such as the views of the participants interviewed (Creswell, 2014; Yin, 2009). A minimum of 25 interviews followed by a second theoretical sample is suggested in the literature to help Grounded Theory research remain grounded in participant views (Creswell, 2014; Tashakkori & Teddlie, 1998). The number of reflective learners identified in the paper by Hames and Baker (2014) is twenty-five; however, in some exemplar GT research publications, saturation was observed with a much smaller number of participants. Figure 1 shows the iterative steps in a grounded theory process. The bi-directional arrows indicate the iterative, constant-comparison character of GT.

Figure 1: The iterative steps in a grounded theory process to develop theory.


Data Analysis

For the coding of the interview transcripts, the steps described by GT experts (Charmaz, 2006; Glaser, 1978; Glaser & Strauss, 1967) will be followed. Coding will start with open coding (fracturing and labeling the data), continue with axial coding (making connections between categories and subcategories), and conclude with selective coding (an integrative process of selecting core categories and organizing the relationships among all emerged categories). Theoretical coding (a more sophisticated level of coding that follows the codes selected during focused coding) will also be conducted. The categories developed from the codes will be used to develop a model explaining the process.

The theoretical model developed as a result of this study will follow a paradigm model. A paradigm model is a theoretical model suggesting that when causal conditions influence the phenomenon, the context and intervening conditions affect the strategies that are used to bring about certain consequences or outcomes (Charmaz, 2006). In this study, a set of causal conditions (which will emerge from the data) shapes the central phenomenon being studied (performance on cognitive tasks), while the context (reflective learners identified through self-reported scores on the FSILS) and intervening conditions (which will emerge from the data) influence the strategies the learners employ in solving the task, bringing about a set of outcomes (completion of the task). The causal mechanism will be established through analysis of the participants’ perceptions and grounded in their views. Figure 2 shows an outline of one such model for this particular research.

Figure 2: An outline of a Grounded Theory model


Quality of Research

Anticipated Limitations and Strategies to Overcome Them

Robson (2002) elaborates on the deficiencies of the human as analyst, citing uneven reliability, the large influence of first impressions, and limitations on processing capacity among them. In order to deal with these real-world challenges, this research will incorporate measures to assess credibility, dependability, and trustworthiness at each step of the data interpretation process.

Dependability is the qualitative equivalent of reliability in quantitative research and refers to the consistency of findings. In addition to the extensive memoing mentioned above, the researcher will implement intercoder agreement checks, in which multiple researchers code the same transcripts and cross-verify the codes generated, in order to support dependability.
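One common way to quantify such intercoder agreement is Cohen's kappa. The sketch below computes it for two coders' hypothetical code assignments on the same ten transcript segments; the code labels are invented for illustration and do not come from the proposed codebook.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical codes assigned by two researchers to the same ten transcript segments
coder_a = ["plan", "visualize", "plan", "recall", "visualize",
           "plan", "recall", "plan", "visualize", "plan"]
coder_b = ["plan", "visualize", "recall", "recall", "visualize",
           "plan", "recall", "plan", "plan", "plan"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")  # values near 1 indicate strong agreement
```

Reporting such an agreement statistic, alongside the memos, would give readers a concrete basis for judging the dependability of the coding.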

Trustworthiness of qualitative research is the characteristic that leads to readers’ ability to trust the findings, information, and procedures described by the researcher (Leydens, Moskal, & Pavelich, 2004). Leydens et al. (2004) recommend clarifying researcher bias through regular memoing, employing member checking, purposeful sampling, and dense description to help establish the quality of the research. Creswell (2014) defines validity in qualitative research as the accuracy of the research findings, and reliability as the consistency of the approach across researchers and projects. While conducting the research, the researcher(s) will consciously engage in activities to enhance the accuracy of the findings and to establish consistency across researchers, through extensive memoing as detailed earlier and through regular communication between the researchers involved, in the case of multiple investigators.

Low participation might be a challenge when recruiting students for the interview process. It is hoped, however, that since the students previously self-volunteered to partake in the research, they will still be willing to do so. One way to potentially increase the response rate would be to email suitable candidates and inform them of the changes their input and views could bring to pedagogy in engineering classrooms.

The purposive sampling implemented to select “reflective students” is based on the results of a survey using the FSILS. The assumption that these students are indeed reflective learners forms the basis of this investigation. One way to work around this challenge will be to explicitly inform readers of the self-reported learning style preferences of the interviewed participants. Methodological transparency will be key in this regard, helping readers contextualize the generated theory to other circumstances. This design will also carry the challenges inherent in employing interviewing as the means of data collection. Patton (2005) notes that the quality of information obtained during an interview is largely dependent on the interviewer (p. 341).
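Purely as an illustration of the purposive sampling step (the encoding of FSILS results below is a simplifying assumption, not the instrument's actual scoring procedure), suppose each respondent's active–reflective result were stored as a signed score where positive values indicate a reflective preference; candidate interviewees could then be short-listed as follows.

    # Hypothetical example of purposive sampling from FSILS survey results.
    # Positive act_ref values are assumed to indicate a reflective preference;
    # the threshold of 5 is arbitrary and for illustration only.
    survey_results = [
        {"id": "S01", "act_ref": 7},
        {"id": "S02", "act_ref": -3},
        {"id": "S03", "act_ref": 5},
        {"id": "S04", "act_ref": 1},
    ]

    reflective_candidates = [s["id"] for s in survey_results if s["act_ref"] >= 5]
    print(reflective_candidates)  # ['S01', 'S03']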

Additionally, it will take time before this research can be implemented, owing to the time-dependent processes of obtaining funding, IRB approval, and student consent. The time dependence of students' learning style preferences is another challenge that will need to be addressed. In order to address the challenges that may arise from self-reported survey results, the researcher(s) will sensitize themselves to the literature on possible changes in learning style preference over time. This review of literature is, however, beyond the scope of this particular outline.

Discussion/Conclusion

In this response I have critiqued the research method described by Hames and Baker (2014), using the AERA (2006) guidelines as criteria to evaluate the publication. The evaluation identified various strengths and weaknesses in the research design. The paper was particularly weak in data measurement and classification, and in analysis and interpretation, owing to insufficient discussion of the validity and reliability of the instrument and inadequate discussion of assumptions before conducting statistical tests; these were discussed in detail. Assuming the validity and reliability of the reported results and inferences, I then presented an outline for a follow-up research study using Grounded Theory to develop an understanding of the process by which reflective learners solve cognitive tasks, grounded in the perceptions of the learners. I discussed the rationale behind adopting a GT methodology and provided my theoretical perspective to help the reader better understand the choices relating to the method. I outlined the steps, including data collection and data analysis, for arriving at a Grounded Theory model to explain the process of solving cognitive tasks by reflective learners. Finally, I discussed the limitations of the proposed study along with strategies to improve its quality.

References

AERA. (2006). Standards for reporting on empirical social science research in AERA publications. Educational Researcher, 35, 33-40.

Backman, K., & Kyngäs, H. A. (1999). Challenges of the grounded theory approach to a novice researcher. Nursing & Health Sciences, 1(3), 147-153.

Borrego, M., Douglas, E. P., & Amelink, C. T. (2009). Quantitative, Qualitative, and Mixed Research Methods in Engineering Education. Journal of Engineering Education, 98(1), 53-66.

Charmaz, K. (2006). Constructing grounded theory. Thousand Oaks, Calif; London: Sage.

Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods approaches (4th ed.). Thousand Oaks, Calif: Sage Publications.

Crotty, M. (1998). The foundations of social research: Meaning and perspective in the research process: Sage.

Glaser, B. G. (1978). Theoretical sensitivity: Advances in the methodology of grounded theory. Mill Valley, Calif: Sociology Press.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine Publishing Co.

Guba, E. G., & Lincoln, Y. S. (1994). Competing paradigms in qualitative research. In Handbook of qualitative research (pp. 163-194).

Hames, E., & Baker, M. (2014). A study of the relationship between learning styles and cognitive abilities in engineering students. European Journal of Engineering Education, 40(2), 167-185.

Knight, D. B. (2011). Educating broad thinkers: A quantitative analysis of curricular and pedagogical techniques used to promote interdisciplinary skills. Paper presented at the American Society for Engineering Education.

Kuhn, T. S. (2012). The structure of scientific revolutions: University of Chicago press.

Leedy, P. D., & Ormrod, J. E. (2005). Practical research: Planning and design (8th ed.).

Lempert, L. B. (2007). Asking Questions of the Data. In A. Bryant & K. Charmaz (Eds.), The SAGE Handbook of Grounded Theory. Thousand Oaks; London: SAGE Publications, Limited.

Leydens, J. A., Moskal, B. M., & Pavelich, M. (2004). Qualitative methods used in the assessment of engineering education. Journal of Engineering Education, 93(1), 65-72.

Marra, R. M., Palmer, B., & Litzinger, T. A. (2000). The effects of a first-year engineering design course on student intellectual development as measured by the Perry scheme. Journal of Engineering Education, 89(1), 39-46.

Patton, M. Q. (2005). Qualitative research: Wiley Online Library.

Robson, C. (2002). Real world research: A resource for social scientists and practitioner-researchers (2nd ed.). Oxford: Blackwell.

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference: Wadsworth Cengage learning.

Singleton Jr, R. A., Straits, B. C., & Straits, M. M. (1993). Approaches to social research: Oxford University Press.

Smart, J. C. (2005). Attributes of exemplary research manuscripts employing quantitative analyses. Research in Higher Education, 46(4), 461-477.

Sutton, R. I., & Staw, B. M. (1995). What theory is not. Administrative science quarterly, 371-384.

Tashakkori, A., & Teddlie, C. (1998). Mixed methodology: Combining qualitative and quantitative approaches (Vol. 46). Thousand Oaks, Calif: Sage.

Wise, J., Lee, S., Litzinger, T., Marra, R., & Palmer, B. (2001). Measuring cognitive growth in engineering undergraduates: A longitudinal study. Paper presented at the ASEE Annual Conference, Albuquerque, NM.

Yin, R. K. (2009). Case study research: design and methods (Vol. 5.). Los Angeles, Calif: Sage Publications.


Posted in Musings

As My First Semester Ends, My Second Begins

Last month, I finished my first semester on the tenure-track.  It was a very busy semester, but a happy and productive one as well.  I taught courses in General Psychology and Personality Psychology, submitted 3 manuscripts for publication, submitted and had 6 posters accepted to international conferences, advised students, served on committees, volunteered in the residential colleges, and attended as many professional development workshops as I could find.  I also made incredible friends and found fantastic mentors.  I’m tired, but I’m incredibly proud of all that I accomplished with the help of supportive colleagues.


Now I’m gearing up for my second semester.  I learned a lot from the mistakes of my first semester.  Yes, I’m perfectly happy to admit my mistakes.  This was a new role for me, and I was bound to find that some teaching and research strategies that had worked in the past were not suited for this environment.  But I’m excited about implementing changes that will allow me to continue doing what works very well and to improve upon those areas that were less successful.


Just like my students, I am learning.  It’s fun to learn, and I look forward to transforming myself into a seasoned, tenured professor.


Posted in Academia, New Semester, publishing, research, Teaching

Excited to Connect to this Caravan

Hello, fellow “Caravanistas”!

I am excited about embarking on this active co-learning Connected Courses adventure, and  am looking forward to ‘meeting’ and exploring with all of you.

Posted in connected courses

Go Racers!

Just a quick note to report that I am settling in well to my new home in Murray, KY.  The last two weeks have been a whirlwind of paperwork, but it appears that I’m now a resident of “The Friendliest Small Town in America”.

Friendliest Small Town in America

I’m also settling well into the university.  I have an office and a lab, a faculty ID, and wonderful coworkers.


I’m staying busy writing syllabi, tests, assignments, and lectures.  I’m also working on completing required and supplemental trainings and preparing papers and manuscript submissions as I anticipate the demands on the tenure track.

I’ve explored my beautiful campus and town.  Have shopped for necessities.  Have learned how to cook on an electric stove (I previously had a gas range, so things like “preheating” are new to me).  I’ve found the nearest craft and thrift stores.  And I’ve learned where to find the best pizza in town.

Now, it seems, the only thing left to do is to keep moving forward and to bask in the glory of being fortunate enough to have found my dream job.

Go Racers!

Posted in Academia, Job Market, New Semester, summer

Being a Caring Professor to Improve Graduate Engagement

This week is another busy week.  I’ve just submitted my students’ final grades, and I’m preparing to have family in town for my hooding on Friday.  Whew!

Just a quick post today to share an article from the Chronicle that shows how caring professors impact their students long after college.  In fact, having caring professors can improve engagement in one’s job after graduation.

College professors can positively impact their students’ outcomes by demonstrating some key characteristics:

College graduates had double the odds of being engaged at work and three times the odds of thriving in Gallup’s five elements of well-being if they had had “emotional support”—professors who “made me excited about learning,” “cared about me as a person,” or “encouraged my hopes and dreams.”

Yet many don’t:

The bad news, in Mr. Busteed’s view, based on Gallup’s findings, is that colleges have failed on most of those measures. For example, while 63 percent of respondents said they had encountered professors who got them fired up about a subject, only 32 percent said they had worked on a long-term project, 27 percent had had professors who cared about them, and 22 percent had found mentors who encouraged them.

So how do you, blogosphere, encourage, excite, and support your students?  And how do you encourage others to do the same?

Posted in inspiration, Students, Teaching

Using “my voice” in class

Maybe you didn’t notice, but I used my voice frequently in class. No, I didn’t talk about the specific needs of a returning combat veteran. But at times, I shared in the psychological struggle, the similar feelings associated with being “an outsider” to an established and close-knit group, and the courage needed to stand up for my beliefs.

Silence

I stayed silent for the first two classes and then found my courage to voice a different opinion. I noticed, seriously, classmates snickering when I would speak. It didn’t matter. I knew my perspective was different and something they didn’t want to hear, but something they needed to hear in order to be more effective when serving others.

Pedagogy

My success and connection to the course is a direct function of the pedagogy employed – the student-centered, group-based learning community model. My courage to speak in class was derived from my belongingness, which began with the friendly folks in my learning community and spread to the class. The professor could have suggested to each student, “hey, go out of your way to befriend Shane, since he’s new to the cohort group.” But, I didn’t want to feel like a charity case. Instead, the course was structured in such a way that my success was a function of my teamwork, which required me to know my team.

It’s on me too

It’s easy to expect or feel entitled to extra benefits, or even to complain when you don’t get them, but I learned early on that you are responsible and accountable for the success of a team. Thus, I held a 3-hour initial meeting with my learning community team in order to talk about their “why” and our shared and differing philosophies on people and education. It wasn’t the instructor’s responsibility to direct individual behaviors, but to set the goals and structure with suggestions for processes (e.g., work together collaboratively on each project). I knew I was responsible for my learning and my group’s learning, and I took that role seriously.

Learning to “fail”

In class, we discussed the idea of “letting people fail to promote learning and growth,” but it’s not that simple. Failing and learning are moderated by two essential factors: mindset and social support. In Tagg, mastery versus performance mindsets are discussed as learning orientations that are process-focused (mastery) or outcome-focused (performance). If an individual with a performance mindset fails, s/he internalizes the failure; it does not promote learning and it reduces self-efficacy. Additionally, if this individual, or even a mastery-oriented person, does not feel supported by a teacher, classmates, or even friends outside of the classroom, the failing could be catastrophic.  As we learned from visiting the architecture students, those students could fail and learn because they felt connected and competent.

Competence and Connection 

In my learning from this course, I am even more confident that every organizational decision should include student competence and connection as the primary outcomes, not just as learning outcomes but as effective measures of organizational outcomes. For every decision, we must ask: what specific resources does this student (or group) need to facilitate competence and community? And we shouldn’t employ an ideal of equality, but rather one of equity. If a former soldier needs significant resources for a student club or a resource center to attract other veterans and build relationships, we should provide them at a different level than for the student who needs a club to sing a cappella. Objectively unequal allocation of resources is always a challenge to individuals’ perceptions of fairness, especially among those receiving less, so our challenge is to alter environments disproportionately in favor of the groups needing more, so that they can achieve the same level of competence and connection as others.

What’s easy and profitable vs. hard and meaningful

In my numerous rants throughout the course, I think I embodied the perspective of a student veteran with the vision to learn from history and improve it.  Rather than be a usual suspect by working for Northrop, Lockheed, etc. on a high-paying salary, I wanted to re-write history so we don’t head to war again, rather than follow the big contracts to facilitate the continuation of war. Similarly, I am a year away from my Ph.D. in organizational psychology and the big jobs are waiting to fill those 6-figure “organizational consultant” roles. Sorry, but that won’t be me. My commitment is to people – finding a way to give psychology to the masses in order to improve the lives of students worldwide. I’m not a historian, but rather a futurian(??). I’m prepared to re-write history by practicing with the future in mind, but I need to know my classmates – those student affairs professionals and educators – are willing to join the mission.


Posted in Student Environment

Winding Down

Life has been hectic lately.  I was in the very fortunate position of traveling the country for job interviews and conferences while also putting the finishing touches on my dissertation.  While all of my data collection was completed on campus at Virginia Tech during the summer and fall semesters and the winter holiday, my writing had a less straightforward path.  The results section was written mostly in a combination of conference hotel lobbies and coffee shops.  The discussion was completed in a total of 14 different states: in airports, in hotel lobbies, on various campuses, and even in the air flying across the country.  There were many sleepless nights during which I wasn’t sure that it was all going to come together.

But it did.

I successfully defended my dissertation on March 21st, and I submitted a final draft of the document to the graduate school soon after, thus completing all of the requirements for my PhD.

So, with that one day, my whole life changed.  I’m Dr. {Psychobabble}.

There are still plenty of loose ends that need to be tied up here: IRB protocols to extend for data analysis, grant reports to file, documents to be turned into manuscript publications, and a semester of teaching to bring to a close.  Next, there will be many preparations for my new faculty life.  It’s exciting to begin the process of writing syllabi and finding textbooks again, and this time it gets to be supplemented by house-hunting and finding new friends.

It’s an exciting time, blogosphere.


Are you finding equal excitement in the end of the semester and the sunshine that springtime brings?


Posted in Academia, dissertation, Grad School, Job Market

It’s not our first rodeo…

Veterans, Civilians and Cadets

I heard the Student Government President talking about cadet-civilian relations. In fact, he suggested an executive cabinet position in order to improve those relationships. I don’t know much about that, but I do know I don’t fit in that dyad. We aren’t civilians and sure aren’t cadet members. I hate being grouped as a cadet.

I was sitting in Torgersen and talked to a student who asked why I wasn’t sitting with my friends, as she pointed to the cadets in the front row. I wish I had a soapbox to explain the difference. I’m not 18, young, naive, and protected. In fact, I am the exact opposite. I am a 31-year-old protector with some wisdom and a plethora of experience that most people could only imagine.

I was reading an article from the Atlantic about other fellow veterans, and a few words really stuck with me. “Universities have long been a place where young people develop a purpose in life. But for older students with wartime experience, those lessons have already been learned.” I couldn’t agree more, which is probably why I feel so disconnected. I am at a different developmental level and place in life.

The first rodeo

Homer composed the Odyssey around the eighth century BC. Who knew the message would foreshadow the experiences of so many veterans who are returning not home but rather to college campuses?

“If we did a better job of listening, history wouldn’t have to repeat itself.” This philosophy dictates my life. In fact, it was my sole reason for becoming a history major at Virginia Tech. How many times has such an influx of people like me required the university to alter protocols and procedures to meet their needs? I recognize that it probably hasn’t happened within any one person’s memory, but what about the institutional memory?

According to the VT website, World Wars I and II, Korea and Vietnam, and now the post-9/11 wars have produced nearly identical situations for students returning home. This isn’t a fad or a trend; it’s a recurring pattern. And so, I hope to document my experience and the experiences of others to meet the needs of the next wave of veterans. I want to support the infrastructure and ideas that lay the groundwork for this generation and many to come.



Posted in Student Environment

Grateful for Virginia Tech, but longing to belong

I recognize that I am lucky. After I returned home and to Virginia Tech, a few of my buddies received calls from the University of Phoenix and other for-profit colleges. According to data from ncpa.org, degree completion for veterans is only 28% at for-profit colleges, compared to 56% at public institutions. I feel terrible that they don’t have the support I have. Here, we have a center and resources for dealing with all of the post-9/11 GI Bill issues, so that I can pay tuition and support my family.

I don’t understand how ungrateful all of these students are for this gift of education.  I’m in this amazing class and everyone is texting and facebooking. Sure, the environment could be more inclusive, but it works for now.

To be honest, I am struggling in my classes. I cannot connect to students who sit next to me in class, or eat near me at lunch. I’m alone and confused about what might happen next…


Posted in Student Environment

“To this day Project” and helping Humankind remember to be both. . . .

Ever notice how synchronicity and connections occur when we’re open to them?  I was lucky to have such a moment last semester.  After a seminar conversation last fall on inclusive pedagogy and diversity, an undergraduate came in to grab some late evening study time in the GEDI seminar room as I was packing up.  I heard what sounded like a spoken word performance emanating from his laptop.  He looked up and asked if he was bothering me, and I said not at all.  I asked him if that was a recording of him doing spoken word.  He said no, but wished he could make a powerful impact on people like spoken word poet and activist Shane Koyczan does.  He struck up a conversation with me about Shane Koyczan and hit replay on the YouTube video.  I asked him to send me the link and he did.  So, thank you, Nathan Chung, for crossing paths with me on that Wednesday evening last fall, for sharing, and for making a powerful impact by doing so.  If you haven’t watched this before, Koyczan brings the pain of being marginalized and misunderstood to all of our attention in his “To This Day Project.”

One aspect of our focus on inclusive pedagogy is to provide a welcoming learning environment and to find ways to model inclusive engagement in the learning communities we create with our students.  Most of the time we hope this happens of its own accord, but I’d suggest to you that we need to be actively involved in the process.  Our pedagogical praxis should focus on how we can encourage learners to choose to be their ‘best selves’ for their own learning and in their interactions with peer colleagues.  Learning can be uncomfortable at moments, and learning that changes our world view, that has us examining new ideas, new data, new discoveries that shift our understanding of the world around us: those are powerful moments, and sometimes they are powerfully unsettling moments, at least initially.  Those are the times when, out of fear and insecurity and discomfort around change, our ‘lizard brain’ may kick in, and we attempt to make ourselves feel better/bigger/stronger by picking on someone who appears vulnerable.  You may have witnessed, or even experienced, such behavior inside and outside of higher ed.  Bullying doesn’t just occur in K-12; we have bullying and emotional hazing going on in our university classrooms as well.  Should we think that ‘victims’ oughta just toughen up, we may want to remember that affective and intellectual connection go hand-in-hand.  This is not about rigidly prescriptive politically correct behavior.  It is about being human and kind.

We are all weird, as Seth Godin declares.  Indeed!  Small amygdalas should not and shall not rule.  Celebrate weirdness: yours and others’.