Education today is very different from what it was ten years ago; it is no longer just chalk and chalkboard. Nowadays, technology plays a major role in educating the new generation. Some of the cutting-edge techniques that support these new technologies include gamification learning, item response theory, and the detection of disengagement behavior.
Gamification Learning
Gamification learning is an educational approach that teaches students through a game-like environment. Assignments resemble quests in typical video games, and achievements are rewarded through a coin/star-based system. The people at KnowRe understand this shift in the style of education and now offer a unique method of learning: an adaptive, personalized curriculum delivered through gamification. KnowRe's product offers students their own personalized curriculum that includes:
- Step-by-step instructions for arriving at the solution
- Hundreds of concept math videos using cartoons
- Practice problems
- Application problems
- Personalized review targeting individual student’s weaknesses (KnowRe, n.d.).
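As a small illustration of the coin/star reward idea described above, the sketch below maps an assignment score to a star award. The thresholds and star counts are hypothetical examples, not KnowRe's actual scoring rules.

```python
# Illustrative coin/star reward rule for a gamified assignment.
# Thresholds are invented for this sketch, not taken from KnowRe.

def stars_for_assignment(correct: int, total: int) -> int:
    """Map a student's score on an assignment to a 0-3 star reward."""
    if total == 0:
        return 0
    ratio = correct / total
    if ratio >= 0.9:
        return 3
    if ratio >= 0.7:
        return 2
    if ratio >= 0.5:
        return 1
    return 0
```

A rule this simple is enough to drive achievement badges and progress bars, while keeping the grading logic transparent to teachers.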
Another advantage to this product is that it offers a teacher dashboard which gives teachers real-time assessment of each student in the class. This allows teachers to know what their students are struggling with and allows them to give real-time support to help reinforce what their students need. The dashboard also includes:
- Color-coded classroom and student progress data to monitor the growth of each student
- Student bio page
- Adjustable score setting
- Easy class setup and student invitation features
- Assignment feature with ability to track student and class progress (KnowRe, n.d.).
There are many challenges and limitations to gamification learning because it is still a relatively new field. The major challenge is creating a “game” that all students will enjoy playing. Gamification learning can only promote students’ desire to learn and make the process enjoyable. At the end of the day, students’ personal tastes will ultimately determine the value of this learning method.
Item Response (IR) Theory
IR models were used in the 1970s and 1980s as a technique for analyzing responses collected as part of specialized standardized testing or surveys (Fox, 2010, p. 1). With increased access to technology in ordinary educational settings, companies such as Knewton are leveraging these models to provide more accurate measures of student proficiency and crowdsourcing answer checking.
The most basic Item Response Model (IRM), the Rasch Model, defines the probability of a respondent correctly answering a question as a function of the respondent’s ability as well as the difficulty level of the question:

P(X_ik = 1) = e^(θ_i − b_k) / (1 + e^(θ_i − b_k))

where θ_i represents the latent ability of the ith respondent and b_k represents the latent difficulty of the kth question. Given this probability and a set of question/response observations x_ik, one can maximize the resulting likelihood function to estimate the parameters (The Knewton Tech, 2013):

L(θ, b) = ∏_{i,k} P_ik^(x_ik) (1 − P_ik)^(1 − x_ik)
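A minimal sketch of this estimation, assuming item difficulties are already known: compute the Rasch probability and fit one respondent's ability θ by gradient ascent on the log-likelihood (whose gradient is simply Σ(x_k − P_k)). The learning rate and step count are illustrative choices.

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """P(correct) under the Rasch model: logistic in (theta - b)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def fit_ability(responses, difficulties, lr=0.1, steps=500):
    """Maximum-likelihood estimate of one respondent's latent ability,
    given binary responses to items with known difficulties."""
    theta = 0.0
    for _ in range(steps):
        # d(log L)/d(theta) = sum over items of (x_k - P_k)
        grad = sum(x - rasch_prob(theta, b)
                   for x, b in zip(responses, difficulties))
        theta += lr * grad
    return theta
```

For example, a student who answers an easy item (b = −1) and a medium item (b = 0) correctly but misses a hard item (b = 2) is assigned an ability estimate between the medium and hard difficulties. In production, parameters for all respondents and items are estimated jointly from the full response matrix.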
Additional parameters can be added to capture features such as guessing, question quality, complex sampling designs, and nested or hierarchical structures (Fox, 2010, p. 2). The combined model from this approach provides an ongoing and arguably more accurate measurement of student aptitude than traditional testing techniques – particularly because a group’s response data can be used to predict a respondent’s ability on questions on which he or she has not even been tested. This predictive ability allows an educator to home in on and intervene with struggling students.
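One standard way these extra parameters enter the model is the three-parameter logistic (3PL) extension, which adds a discrimination parameter (question quality) and a pseudo-guessing floor to the Rasch form sketched earlier:

```python
import math

def three_pl_prob(theta: float, a: float, b: float, c: float) -> float:
    """3PL item response model: a = discrimination (question quality),
    b = difficulty, c = pseudo-guessing floor (chance of a lucky guess).
    Reduces to the Rasch model when a = 1 and c = 0."""
    logistic = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return c + (1.0 - c) * logistic
```

Even a very weak student never drops below probability c on such an item, which is what makes the guessing parameter useful for multiple-choice questions.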
For companies such as Knewton and Coursera, performing these tasks in a real-time education setting requires these probability distribution models to reside in an online graph structure that allows incoming data to update the network at mass scale (The Knewton Tech, 2013). As the amount of data increases, these requirements become more demanding. An additional challenge that will become important with increased volume is dealing with an increasingly sparse response space.
The COO at Knewton does recognize limits on where IR models can be helpful: “learning experiences must be at least partially online and there must be generally agreed upon ‘correct’ and ‘incorrect’ answers” (Liu, 2014). Thus, multi-dimensional topic areas such as language learning will require traditional assessment and support structures. It remains to be seen how the two approaches will integrate in traditional learning environments.
Other challenges in this space involve deriving inferences out of these models given their complexity and the fact that this information resides in a hierarchical network of probability. Fox points out that this type of inferential statistical modeling becomes more “challenging given that inferences have to be made at different levels (schools, regions, etc.)” (Fox, 2010, p. 3).
Detection of Disengagement Behavior
One of the major challenges facing intelligent tutors and adaptive learning systems is student disengagement. Students’ exploitation of the system or lack of motivation lowers student performance, negating the potential benefits of intelligent tutors. Three types of disengaged behavior are gaming the system, off-task behavior, and careless errors (Koedinger, et al. 2013). Detection of gaming behavior receives particular focus in the research community.
Gaming behavior involves attempting to systematically exploit the intelligent tutoring environment in order to advance through the learning process as quickly as possible. The two primary methods students use to game the system are overusing the tutor’s hint and feedback functionality and systematically guessing (Walonoski and Heffernan 2006). The log histories of engaged students tend to contain fewer incorrect answers and help requests. When engaged students do use the hint functionality, they tend to examine the hints carefully, making fewer errors. Students who exhibit disengaged behavior, however, tend to skim the hints, resulting in a greater frequency of mistakes (Baker, et al. 2012).
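A rough rule-of-thumb detector built on the two signals just described – heavy hint use and rapid guessing – might look like the following. The thresholds and log format are invented for illustration, not taken from the cited studies, which use learned classifiers rather than fixed rules.

```python
# Hypothetical heuristic detector for "gaming the system" from tutor logs.
# Thresholds are illustrative only.

def looks_like_gaming(events, rate_cutoff=0.5, fast_guess_secs=3.0):
    """events: list of (action, seconds_spent) tuples, where action is
    'hint' or 'answer'. Flags sessions dominated by hint requests or by
    answers submitted too quickly to reflect real thought."""
    if not events:
        return False
    hints = sum(1 for action, _ in events if action == "hint")
    fast_answers = sum(1 for action, t in events
                       if action == "answer" and t < fast_guess_secs)
    hint_rate = hints / len(events)
    guess_rate = fast_answers / len(events)
    return hint_rate > rate_cutoff or guess_rate > rate_cutoff
```

In practice the research systems replace these hand-set cutoffs with classifiers trained on labeled log data, but the underlying features are the same.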
A variety of classification techniques have been tested for automatically detecting gaming behavior, including logistic regression, Bayesian methods, PRISM, decision trees, neural networks, and locally weighted learning. The preferred machine learning method, however, is the J48 decision tree, due to its clean confusion matrices and its generation of reasonable rules given prior knowledge of learning behavior. The primary trade-off of this algorithm is a relatively lower accuracy rate compared to the alternative methods (Walonoski and Heffernan 2006).
The J48 algorithm is used to create univariate decision trees in WEKA, taking a divisive approach that splits on information gain (Bhargava, et al. 2013). WEKA is a convenient tool for engagement modeling since the data is generally obtained from student log history or field observations (Koedinger, et al. 2013).
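The splitting criterion at the heart of J48 can be sketched in a few lines: the gain of a candidate feature is the parent node's entropy minus the weighted entropy of the child nodes it would produce. This is a simplified sketch for a single categorical feature; J48 (WEKA's C4.5 implementation) additionally normalizes by split information (gain ratio) and performs pruning.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """Entropy reduction from splitting `labels` on the values of `feature`."""
    parent = entropy(labels)
    weighted = 0.0
    for value in set(feature):
        subset = [lab for f, lab in zip(feature, labels) if f == value]
        weighted += len(subset) / len(labels) * entropy(subset)
    return parent - weighted
```

A feature that perfectly separates the classes achieves the maximum gain (the full parent entropy), while an uninformative feature scores zero; the tree greedily splits on the highest-scoring feature at each node.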
Automated detection of gaming behavior has been integrated into intelligent tutors to help regain student engagement or to act as a post-tutoring reporting device (Walonoski and Heffernan 2006). These detection agents have also been used to discourage gaming behavior and counteract its negative effects by giving disengaged students additional exercises covering the bypassed material. This technique has reduced gaming behavior by 50% (Koedinger, et al. 2013).
Future Hope
These three cutting-edge techniques are just a few of the many used in education today. With ever-advancing technology, we can be sure there will be many more techniques to come. Some of these new techniques may be combinations of existing ones; others may be entirely new concepts. We will just have to wait and see what new technology comes along to spark new ideas!
References
Baker, R.S.J.d., et al. “Sensor-free automated detection of affect in a Cognitive Tutor for Algebra.” In Proceedings of the 5th International Conference on Educational Data Mining. Chania, 2012. 126-133.
Fox, J.-P. (2010). Bayesian Item Response Modeling : Theory and Applications. New York: Springer.
Koedinger, Kenneth R., Emma Brunskill, Ryan S.J.d. Baker, Elizabeth A. McLaughlin, and John Stamper. “New Potentials for Data-Driven Intelligent Tutoring System Development and Optimization.” PACT Center. 2013. http://pact.cs.cmu.edu/pubs/New%20potentials%20for%20ITS-source.pdf (accessed April 9, 2014).
KnowRe. (n.d.). The KnowRe Story. Retrieved April 3, 2014, from http://about.knowre.com/about/who-we-are/
KnowRe. (n.d.). What Is KnowRe? Retrieved April 3, 2014, from http://about.knowre.com/
Liu, D. (2014, March 7). Knewton replies. (L. Horrison, Interviewer) Retrieved April 4, 2014, from http://www.eltjam.com/knewton-replies/
Bhargava, Neeraj, Girja Sharma, Ritu Bhargava, and Manish Mathuria. “Decision Tree Analysis on J48 Algorithm for Data Mining.” International Journal of Advanced Research in Computer Science and Software Engineering, 2013: 1114-1119.
The Knewton Tech. (2013, October 13). N choose K. Retrieved from Knewton Tech Blog: http://feeds.feedburner.com/knewtontechblog
Walonoski, Jason A., and Neil T. Heffernan. “Detection and Analysis of Off-Task Gaming Behavior in Intelligent Tutoring Systems.” Proceedings of the Eighth International Conference on Intelligent Tutoring Systems. Berlin: Springer-Verlag, 2006. 382-391.