AI in Teaching: Reflections

Artificial Intelligence (AI) brings many opportunities to the way we live and work, and higher education is one of the areas that could benefit hugely from it. In my other posts, I covered three main aspects in which teaching might make great use of AI: understanding student needs, predicting student success, and grading – please see the corresponding posts for details. However, no matter why and how we use AI, there are some challenges we should be aware of: bias, privacy and security, and ethics.

Algorithmic fairness is a topic that has captured attention over the last decade. It primarily concerns how algorithms behave across different groups of people. For example, an algorithm that predicts lower grades for women than for men is not considered fair; it is biased against women. Sometimes these biases come from the data used to train the algorithm: if the data collected from former students happened to show such a difference (this does not mean the difference is real – it depends purely on the sample we have), the algorithm is likely to learn it, and the bias is reproduced. Another bias has been observed with virtual assistants, whose behavior can change based on the names of the people they interact with: some discriminate against Middle Eastern names. Algorithmic fairness should be thoroughly analyzed, and any such issues resolved, before deploying an AI tool.
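One common way to make the fairness concern above concrete is to compare how often a model predicts a positive outcome for each group – the "demographic parity" idea. The sketch below is a minimal illustration, not a production fairness audit: the function name, the pass/fail encoding, and the data are all hypothetical, and it assumes exactly two groups.

```python
# Minimal sketch of a demographic parity check for a grade-prediction
# model. All names and data here are hypothetical, for illustration only.

def demographic_parity_gap(predictions, groups, positive_label=1):
    """Absolute difference in positive-prediction rates between the
    two groups present in `groups` (assumes exactly two groups)."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(1 for p in preds if p == positive_label) / len(preds)
    a, b = rates.values()
    return abs(a - b)

# Hypothetical predictions: 1 = "predicted to pass", 0 = "predicted to fail"
preds  = [1, 1, 0, 1, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_gap(preds, groups))  # → 0.25
```

Here group A is predicted to pass 75% of the time and group B only 50% of the time, so the gap is 0.25; a large gap on data like former students' records would be the kind of learned bias the paragraph above describes, and a signal to investigate before deployment.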

Another challenge is privacy and security. Any algorithm requires data to operate, and that data comes from students: grades, answers to questions, conversations with virtual assistants, or even eye-tracking and attention data. While education is customized for each student, a vast amount of data is collected in the process, which raises privacy and security concerns. To be more precise, over-personalization of the content delivered by AI agents may make a student feel like she is being tracked all the time, which can create privacy concerns. Furthermore, every channel used to collect data is a potential vulnerability, making a breach of students' devices more likely.

The last challenge I would like to cover is ethical use. First of all, "robots replacing humans" scenarios should not be considered or applied in any way. Although experts say that AI algorithms will create more jobs than they destroy, some organizations might still use AI as a pretext to lay people off. That is completely the wrong way to think about AI; instead, the whole organization should be transformed to operate with increased productivity. Last but not least, social considerations such as inclusion and diversity should also be built into AI agents. In other words, AI agents should be designed to be not only fair but also inclusive.

In short, I think AI is a huge opportunity for higher education. However, like every component of a system, it carries risks. While benefiting from this opportunity, we should also be aware of these challenges and act accordingly in order to protect both young generations and our institutions.