
C2C Digital Magazine (Spring / Summer 2019)


 


Book Review: Exploring Intelligent Tutoring Systems by UX, Knowledge Structures, Domain Expertise, and Technologies

By Shalin Hai-Jew, Kansas State University

Intelligent Tutoring Systems: Structure, Applications and Challenges
By Robert Kenneth Atkinson, Editor
New York: Nova Science Publishers
2016




Intelligent tutoring systems (ITSes), while they come in many forms, are computerized systems that respond to learner misunderstandings and errors and adjust to them. The adjustments may include hints, customized learning sequences, available help, encouragement, and other adaptive supports. In many cases, the interventions arrive just-in-time, when learners are perceived to need the help; some anticipate learner needs. These systems are mostly automated, disintermediating human tutors and human experts for greater convenience and portability in teaching and learning, and for cost savings (because human labor is expensive). The customizations are based mostly on learners’ behaviors and actions during runtime, although some may be built in part from pre-learning learner profiles (their learning needs and prior knowledge, for example).
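To make the adaptive-support idea concrete, here is a minimal, hypothetical sketch of the escalation logic such a system might apply: after repeated errors on a skill, the tutor steps up from encouragement, to a hint, to a worked example. The skill name, thresholds, and messages are invented for illustration, not drawn from any system reviewed here.

```python
# Hypothetical escalating-support rule: react to consecutive errors on a
# skill with increasingly strong interventions. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class LearnerState:
    # consecutive-error count per skill, built up during runtime
    errors: dict = field(default_factory=dict)

def record_attempt(state: LearnerState, skill: str, correct: bool) -> str | None:
    """Update the learner model and return an adaptive support, if any."""
    if correct:
        state.errors[skill] = 0
        return None
    state.errors[skill] = state.errors.get(skill, 0) + 1
    n = state.errors[skill]
    if n == 1:
        return "encouragement: 'Close -- try again.'"
    if n == 2:
        return f"hint: review the key step for '{skill}'"
    return f"worked example: full solution for '{skill}'"

state = LearnerState()
for outcome in (False, False, False):
    print(record_attempt(state, "two-digit subtraction", outcome))
```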

A core idea seems to be that to close the novice-expert gap, learners need information and supportive practice. The tutorial systems may make the difference by working in the learner’s zone of proximal development (per Vygotsky’s ZPD work). More specifically to intelligent tutoring, researchers have built their ideas on later work by Benjamin Bloom:

Bloom (1984) found that students who received ‘one-to-one tutoring,’ with human tutors, performed around the 98th percentile, two standard deviations above traditionally-trained students. Intelligent tutoring systems (ITS) follow the premise that one-to-one tutoring by a computer can lead to the same impressive learning gains when appropriate student-learning profiles guide the provision of adaptive feedback to learners. (Poitras, Lajoie, Jarrell, Doleck, & Naismith, 2016, p. 18)

Some of these systems communicate through tutor personages (as characters); others are built into a system as help, and some seem invisible to learners. They are built into simulations, online learning systems, serious learning games, dedicated learning software, and immersive virtual worlds, among others.

According to Robert Kenneth Atkinson, editor of Intelligent Tutoring Systems: Structure, Applications and Challenges (2016), ITSes have been around for the past 40 years (p. vii).

Knowledge Foundations for Intelligent Tutoring Systems

Joice Barbosa Mendes, Isabela Neves Drummond, and Alexandre Carlos Brandão Ramos’s “A Case Based Reasoning Framework to Structure a Knowledge Base in Intelligent Tutoring Systems” (Ch. 1) describes a mentoring system created for new helicopter pilots. The learning tool was built using Microsoft Flight Simulator software, which apparently enables pre-settings for particular flight scenarios. A screenshot of their model shows variables including the following: “Flight type, Rookie Pilot, Pilot with aircraft familiarity, Pilot phisical (sic) conditions, Pilot psicological (sic) conditions, Passengers, Fight (sic) time, Tank level, Available flight time, Performed flight time, Adequate maintenance, Hydraulic system check, Electric system check, Colective (sic), Wind direction, Speed,” and others (p. 13). A number of parameters are apparently available in dropdowns next to these variables, but many are vague. “Comercial” (sic) flight time is listed. Many of the options start with “Yes,” with “No” and perhaps “Maybe” as the other likely options. A “phisical” (sic) condition of a pilot is “Good.” Parameters that vague seem too roomy to be helpful. It is unclear what the settings are behind the interface. What does it mean if a parameter for a variable is set one way or another? [The typos in the text and the visuals do a disservice to the authoring team, the editor, and to readers.]

The co-authors describe a design approach built around case-based reasoning, which they describe as consisting of “knowledge representation, similarity measures, adaptation, and learning. The knowledge representation describes cases that have been tried. The similarity measure defines the calculations (that) will be performed to measure the similarity between two different cases. Adaptation defines mechanisms to adapt retrieved cases according to the case. Learning ensures that the system is constantly evolving, continually updating their knowledge base (C.G. von Wangenheim and Wangenheim, 2003)” (Mendes, Drummond, & Ramos, 2016, p. 3).
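As an illustration of that retrieve-and-reuse cycle, the following sketch (with invented flight-scenario attributes and weights, not the authors’ actual framework) represents cases as attribute dictionaries, scores similarity as a weighted attribute match, and retrieves the closest stored case:

```python
# Minimal case-based reasoning sketch: represent cases, measure similarity,
# retrieve the best match. Adaptation/learning steps would follow in a
# full system. All attributes and weights are hypothetical.

def similarity(case_a: dict, case_b: dict, weights: dict) -> float:
    """Weighted fraction of matching attribute values (a common CBR measure)."""
    total = sum(weights.values())
    score = sum(w for attr, w in weights.items() if case_a.get(attr) == case_b.get(attr))
    return score / total

case_base = [
    {"pilot": "rookie", "wind": "strong", "tank": "low",  "lesson": "fuel-management drill"},
    {"pilot": "rookie", "wind": "calm",   "tank": "full", "lesson": "basic hover practice"},
]
weights = {"pilot": 2.0, "wind": 1.0, "tank": 1.0}

new_case = {"pilot": "rookie", "wind": "strong", "tank": "low"}
best = max(case_base, key=lambda c: similarity(new_case, c, weights))
print(best["lesson"])  # retrieved lesson for the most similar prior case
```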

Ultimately, the researchers propose a framework for how to structure a domain-specific knowledge base to support an ITS. This is conceptualized as involving four elements: a “domain model, user model, strategy and interface design model” (p. 7).

This chapter seems to be working towards generalizing from a local experience with pilot simulator software to a broader approach of using case-based reasoning to create an intelligent tutoring system. At best, the work seems to be in its early phases and does not read as fully actualized in thought or deed. What are the core elements of a case? What can be learned with each new case? Or from a case of one? What functions does a simulation system need to ensure that the case-based reasoning is applied appropriately and works?

High-Impact Decision Making: Diagnostic Health Assessment for Novice Physicians

What are some common challenges novice physicians may face in diagnosing diseases in humans, given the ambiguity of health symptoms, the many available sources of information, and the healthcare ecosystem? Eric G. Poitras, Suzanne P. Lajoie, Amanda Jarrell, Tenzin Doleck, and Laura Naismith’s “Intelligent Tutoring Systems in the Medical Domain: Fostering Self-Regulatory Skills in Problem-Solving” (Ch. 2) explores two tutoring platforms involving medical diagnosis in the medical science domain: BioWorld and MedU (both of which seem to persist in some version into the present).

ITS technologies enable novices to “engage in complex reasoning processes and receive supportive feedback in the context of problem solving” (Poitras, Lajoie, Jarrell, Doleck, & Naismith, 2016, p. 18). Medical doctors cannot commit too early to a diagnosis (in “premature closure”). They need to test their theories. They cannot rule out an improbable underlying health problem without sufficient information. They need to learn continuously. And to top it all, a person’s health and well-being is on the line.

People’s cognitive limits and cognitive biases are often invisible to the individual, who may assume he or she is engaging the world fully as it is, not through his or her own perceptual and brain-processing limits. To perform well, learners need to engage in proper help-seeking. In high-pressure and high-risk environments, they need complementary systems to help them engage effectively. These systems help learners avoid “dysregulation,” which refers to “instances when learners fail to adaptively regulate their own thought processes in order to make necessary corrections while solving problems” (Azevedo & Feyzi-Behnagh, 2010, as cited in Poitras, Lajoie, Jarrell, Doleck, & Naismith, 2016, p. 20). Properly trained metacognition may lower incidences of dysregulation:

Cognitive and metacognitive tools can help physicians while they diagnose virtual patients by helping them regulate their efforts to solve problems and avoid common pitfalls and impasses. Cognitive errors can be examined, detected, corrected, and ultimately avoided by embedding learner models in the system that are trained to recognize and react to these situations in a suitable manner. (Poitras, Lajoie, Jarrell, Doleck, & Naismith, 2016, p. 22)

The information environment related to medicine is described as dynamic, with changing senses of plausible diagnoses. Learners may request lab tests of different types and acquire information through multiple channels. They rule in and rule out particular possibilities based on the symptoms and other available information. Throughout, they may fall into traps of decision making and diagnostic error. Most typically, these include “faulty synthesis,” or “incorrect processing of the available information,” and “faulty data gathering” (Graber et al., 2005, as cited in Poitras, Lajoie, Jarrell, Doleck, & Naismith, 2016, p. 20). These systems simulate complex and high-risk decision-making contexts, requiring iterative work and troubleshooting.

Intelligent tutoring systems in such spaces are informed by observations of prior learning sequences and log data.

Learner interactions are captured and stored in a database, thereby providing a wealth of information for researchers to discover useful knowledge and hidden patterns that are predictive of learning gains or difficulties. The challenge in designing adaptive systems for medical domains is in creating algorithms that can intelligently process the massive amount of data that are logged by these systems (i.e., as physicals (sic) order lab tests, search libraries, request consults, identify symptoms, annotate evidence, and receive feedback). (Poitras, Lajoie, Jarrell, Doleck, & Naismith, 2016, p. 18)

In these systems, learner actions are observed, and supportive “hints and feedback” are provided along the way (Poitras, Lajoie, Jarrell, Doleck, & Naismith, 2016, p. 23). In BioWorld, Evidence Tables enable objective analytics in decision making, and the system elicits confidence measures and short case write-ups from the learners. Students receive a custom report at the end with formative feedback and an expert’s solution steps (p. 24).
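A hypothetical sketch of what an evidence table might look like as a data structure follows; the fields and the confidence-weighted rollup are illustrative assumptions, not BioWorld’s actual schema:

```python
# Hypothetical evidence-table structure: each piece of collected evidence
# is linked to a candidate diagnosis, marked as supporting or refuting,
# alongside the learner's elicited confidence. Illustrative only.
from dataclasses import dataclass

@dataclass
class EvidenceEntry:
    source: str        # e.g., "lab test", "patient interview"
    finding: str
    diagnosis: str
    supports: bool     # rule-in vs. rule-out
    confidence: float  # learner's self-reported confidence, 0..1

table = [
    EvidenceEntry("lab test", "elevated TSH", "hypothyroidism", True, 0.7),
    EvidenceEntry("interview", "no weight change", "hypothyroidism", False, 0.4),
]

# Net (confidence-weighted) support per diagnosis -- one simple analytic a
# feedback report could compare against an expert's solution path.
net = {}
for e in table:
    net[e.diagnosis] = net.get(e.diagnosis, 0.0) + (e.confidence if e.supports else -e.confidence)
print(net)
```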

Another program features virtual patients and a series of screens presenting new information in sequence. There are interviews with the patients. As new information comes in, the learners can reorder their diagnoses from most likely to least likely. There are real-world scenarios such as “a discussion of patient hospitalization, monitoring patient outcomes, and descriptions of patient management plans” (p. 25). In some cases, the learners (as doctors) must take certain actions right away because of the high levels of risk to the virtual patient. Finally, the authors conduct a side-by-side comparison of the two studied ITSes in terms of learner proficiency measures (p. 28).

Core Modules for Procedural ITSes

One of the more transferable and generalizable works is Diego Riofrio-Luzcando and Jaime Ramirez’s “Predictive Student Action Model for Procedural Training in 3D Virtual Environments” (Ch. 3). These co-authors, from the Universidad Politécnica de Madrid, in Madrid, Spain, propose a model as an “extended automaton, in which each of its states represents the effect of a student’s action (or a failed student’s action)” (Riofrio-Luzcando & Ramirez, 2016, p. 37). Understanding the various learning paths will enable the design of interventions to prevent the learner from taking wrong actions, through hints, guidance, (in)validation of student actions, and whatever is “pedagogically appropriate” (Riofrio-Luzcando & Ramirez, 2016, p. 39).

Educational data mining (EDM) methods—including prediction, clustering, outlier detection, relationship mining, social network analysis, process mining, text mining, “distillation of data for human judgment,” discovery with models, and knowledge tracing—are promising ways to harness online learning data for the building of intelligent tutoring systems (Romero & Ventura, 2013, as cited in Riofrio-Luzcando & Ramirez, 2016, p. 42). The online learning data may be from computer-based educational systems, learning management systems, adaptive and intelligent hypermedia systems, and others. The idea is that the data may be explored to extract meaningful learning insights—about variable relationships, sequences of learning actions, interrelationships, intercommunications, and others.

In their work, an ITS architecture (“inspired on (sic) MAEVIF architecture (de Antonio et al., 2005; Imbert et al., 2007)”) would comprise six modules: “Communication Module, Student Module, Expert Module, World Module, Tutorial Module, (and) Student Predictor Module” (Riofrio-Luzcando & Ramirez, 2016, p. 45). The authors describe the types of data to be captured in each module and the system functions and dynamics. If learner help is often “reactive,” based on learner errors, there are also moves towards “proactive” help, which anticipates early misconceptions and apparently serves to head these off. The system design is based on rules to direct learners to the correct learning flows. Then, too, there are “repairing actions” to amend learner errors and to “redirect to one state in the correct flow” (Riofrio-Luzcando & Ramirez, 2016, p. 54).
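The following sketch illustrates (in invented form, not the authors’ implementation) how a Tutorial Module might combine those three behaviors: validating actions against a correct flow, offering proactive hints when an error is predicted, and issuing a repairing action that redirects the learner into the correct flow:

```python
# Hypothetical tutorial-module decision logic over a procedural flow.
# The procedure steps and messages are invented for illustration.

CORRECT_FLOW = ["check hydraulics", "check electrics", "start engine", "lift off"]

def tutor_step(done: list[str], proposed: str, predicted_error: bool) -> str:
    expected = CORRECT_FLOW[len(done)]
    if predicted_error:
        # proactive help: intervene before the anticipated mistake
        return f"proactive hint: next step is usually '{expected}'"
    if proposed == expected:
        return "validated: proceed"
    # repairing action: redirect to a state in the correct flow
    return f"repair: undo '{proposed}' and return to '{expected}'"

print(tutor_step([], "start engine", predicted_error=False))            # repair
print(tutor_step(["check hydraulics"], "check electrics", predicted_error=False))  # validated
```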

To these ends, the authors describe the building of the “automaton” as algorithmic steps to understand the learning state of each learner and to act according to their needs (Riofrio-Luzcando & Ramirez, 2016, p. 54). They describe “the process for obtaining the next most frequent student action” (a brief sketch follows the quoted steps below):

  1. Update the model with the new action performed by the student.
  2. Find the cluster which the student belongs to.
  3. Get the automaton of the cluster.
  4. Obtain the next state after applying the new action, considering the current state of the student.
  5. Calculate the next most frequent action. (Riofrio-Luzcando & Ramirez, 2016, p. 56)
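Under the assumption that each cluster’s “automaton” can be stored as per-state transition counts, the five steps reduce to a few dictionary operations. Everything below (state names, actions, the single cluster) is invented for illustration:

```python
# Sketch of the five quoted steps over a counts-based automaton:
# counts[state][action] -> [next_state, observed frequency].
from collections import defaultdict

class ClusterAutomaton:
    def __init__(self):
        self.trans = defaultdict(dict)   # state -> action -> [next_state, freq]

    def update(self, state, action, next_state):          # step 1
        entry = self.trans[state].setdefault(action, [next_state, 0])
        entry[1] += 1

    def next_state(self, state, action):                  # step 4
        return self.trans[state][action][0]

    def most_frequent_action(self, state):                # step 5
        actions = self.trans.get(state, {})
        return max(actions, key=lambda a: actions[a][1]) if actions else None

clusters = {"careful": ClusterAutomaton()}                # steps 2-3: cluster lookup
auto = clusters["careful"]
auto.update("s0", "check hydraulics", "s1")
auto.update("s0", "start engine", "s_err")
auto.update("s0", "check hydraulics", "s1")

print(auto.next_state("s0", "check hydraulics"))   # "s1"
print(auto.most_frequent_action("s0"))             # predicted likely next action
```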

This generic model could work in a variety of learning disciplines and domains, with the subject contents and content experts informing the work. The processes are helpfully illustrated with diagrams.

Automated Feedback for Shiphandling Simulations

One of the more thorough and engaging works is Alan D. Koenig, John J. Lee, Elizabeth O. Bratt, and Stanley Peters’ “Intelligent Automated Assessment and Tutoring: A Pairing of an Intelligent Tutoring System with an Automated Assessment Engine for U.S. Navy Shiphandling Training” (Ch. 4). The ITS in this work, the Conning Officers Virtual Environment – Intelligent Tutoring System, involves a virtual multi-instrumented guided missile destroyer ship’s bridge for the training of surface warfare officers of the U.S. Navy in “shiphandling, seamanship and navigation practices” (Koenig, Lee, Bratt, & Peters, 2016, p. 66). The learning space setup involves three monitors and a head-mounted display.

In this simulation-based training environment, learners practice piloting a virtual ship in various scenarios. The system was built by capturing an expert instructor’s subjective evaluations of student actions (and spoken coaching) and integrating that feedback into the automated tutoring process to remediate learner performance. Moving to an automated system enables less live human supervision of the practice, with attendant cost savings. The authors describe core components of intelligent tutoring systems (ITS):

To support its intelligent reasoning about the student’s performance and how to deliver instruction, an ITS typically has separate interacting components for the domain model, the learner model, the tutoring model, and the user-interface model. (Sottilare et al., 2014, as cited in Koenig, Lee, Bratt, & Peters, 2016, pp. 64 - 65)

A designed system—from a team intimately tied to a complex design—requires deep knowledge of the target areas of learning (such as domain ontologies and knowledge bases), a thorough understanding of the learners and their learning sequences, clear pedagogical goals, expert feedback to learners, and other data. To this end, the co-researchers share some of the artifacts from their project planning, including a performance scoring table with defined “skill areas and tasks” (p. 70). From this information, an “automated assessment engine” was created as a software module: “The student performance information consists of the second-by-second, raw data describing the ‘state-of-the-world’ within the simulator as the student engages with it. Specifically, this includes environmental data (i.e., telemetry describing the state of elements in the simulated environment not under the control of the student), as well as data describing each student action—which often will have a resulting effect on the environment” (Koenig, Lee, Bratt, & Peters, 2016, p. 79). Underlying the simulation are Bayesian network analyses [“directed-acyclic graphical models for representing probabilistic dependency relationships between variables” (Jensen & Nielsen, 2007, as cited in Koenig, Lee, Bratt, & Peters, 2016, p. 79)] to understand human learning. For a learner to perform a particular way, he or she has to have a certain level of knowledge and understanding, and the inferences possible from Bayesian analysis can enable a sense of what is going on inside the learner. This work shows the precision, logic, and meticulous attention to detail needed to create complex ITSes for simulations.
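A tiny worked example of that inference, with invented probabilities, uses a two-node network (Knowledge → Performance) and Bayes’ rule to update the estimate of hidden mastery from one observed action:

```python
# Hypothetical two-node Bayesian inference: infer P(knows skill) from one
# observed shiphandling action. All probabilities are invented.

p_know = 0.5                     # prior: P(knows skill)
p_correct_given_know = 0.9       # performs correctly if skill is mastered
p_correct_given_not = 0.2        # lucky/guessed correct action otherwise

def posterior_knows(observed_correct: bool) -> float:
    like_k = p_correct_given_know if observed_correct else 1 - p_correct_given_know
    like_n = p_correct_given_not if observed_correct else 1 - p_correct_given_not
    return (like_k * p_know) / (like_k * p_know + like_n * (1 - p_know))

print(posterior_knows(True))   # ~0.82: a correct action raises the mastery estimate
print(posterior_knows(False))  # ~0.11: an error lowers it
```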

Further, these co-authors (with members from the University of California, Los Angeles, and Stanford University) tested their designed system with a control group and an experimental group, compared the groups’ performance with statistical t-tests, and found no significant difference between those who received human oversight and those who received ITS feedback. Even beyond closing the loop by testing their system, they offer rich insights about what goes into a solid ITS design and development.
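For readers unfamiliar with the method, a sketch of such a comparison appears below; the scores are fabricated stand-ins for the study’s actual data:

```python
# Independent-samples t-test comparing two training conditions, using
# fabricated performance scores for illustration.
from scipy import stats

human_oversight = [78, 82, 75, 88, 80, 77, 85]
its_feedback = [80, 79, 76, 86, 81, 78, 84]

t, p = stats.ttest_ind(human_oversight, its_feedback)
print(f"t = {t:.3f}, p = {p:.3f}")  # a large p-value => no significant group difference
```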

Measuring Affect to Predict Performance

Gustavo A. Lujan-Moreno, Robert K. Atkinson, and George Runger’s “EEG-Based User Performance Prediction Using Random Forest in a Dynamic Learning Environment” (Ch. 5) explores physiological signals indicating learner affect to see whether those might be suggestive of learning performance (particularly error vs. non-error states). For this study, the team used Guitar Hero as the dynamic learning environment for six test subjects (three males, three females, all between 18 and 28 years of age). They connected the research subjects to an Emotiv EEG (electroencephalogram) headset to capture 14 channels of raw data indicating five affective states: “short term excitement, long term excitement, engagement, meditation and frustration” (Lujan-Moreno, Atkinson, & Runger, 2016, p. 112).

Given the focus on anticipating user failure and intervening before the anticipated mistake, could real-time (or near-real-time) learner affect provide indicators to intelligent tutoring systems about when an intervention might be needed? About the research, the authors write:

Our objective is to predict whether a specific segment contains an error or not based on information regarding the affective state or the EEG raw signal. Several approaches are considered. The first approach uses a classifier constructed from summaries (mean, mode, max-imum (sic), minimum and standard deviation) of the Emotiv affective variables. The second approach is based on the same summaries but adding the rate of change of the affective states. In the third approach we used EEG raw data where the signal was transformed from time to frequency domain using the bands traditionally used in the literature (delta, theta, alpha and beta). In all the approaches, the predictors are computed by second and then the summary of statistics is computed by song segment. (Lujan-Moreno, Atkinson, & Runger, 2016, p. 112)

The EEG signal data were run through a random forest ensemble method to create decision trees and identify possible precursor relationships leading to non-error and error states, to see if affect was a predictor. (This seems to involve more of a whole-person approach, engaging the human mind and body in conscious and subconscious ways.)
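A sketch of that pipeline, on synthetic data rather than the study’s EEG recordings, summarizes a single affective channel per segment and trains a random forest to separate error from non-error segments:

```python
# Random forest over per-segment affective summaries, on synthetic data.
# Feature names follow the chapter's description; the data are invented,
# built so that low minimum meditation co-occurs with errors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_segments = 200
X = np.column_stack([
    rng.normal(0.5, 0.1, n_segments),   # meditation mean
    rng.normal(0.3, 0.1, n_segments),   # meditation minimum
    rng.normal(0.7, 0.1, n_segments),   # meditation maximum
    rng.normal(0.1, 0.03, n_segments),  # meditation std dev
])
y = (X[:, 1] + rng.normal(0, 0.05, n_segments) < 0.3).astype(int)  # 1 = error segment

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
for name, imp in zip(["mean", "min", "max", "std"], clf.feature_importances_):
    print(f"meditation {name}: importance {imp:.2f}")  # "min" should dominate here
```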

What did these researchers find? They write of the importance of “meditation” for learning performance:

When classifying errors by segment using the affective states the most important feature was the minimum meditation, which is defined as the minimum value of the variable meditation on a given segment. This implies that meditation and especially its derived feature, minimum meditation, plays an important role in classifying a segment with an error. We have to recall that meditation is experienced as calm and clearness of mind. Also the meditation mode and meditation maximum were among the most important features. In addition, it was no surprise that the ‘gray area’ approach worked better since it is logical to think that the algorithm would perform better discriminating zero errors from many errors than to notice small differences in errors. This implies that meditation could be either integrated into the design of a ITS or it could be used as a form of feedback for the user to be aware of his own affective state. (Lujan-Moreno, Atkinson, & Runger, 2016, p. 124)

The rate of change between affect states was also informative for predicting errors: “The most important features overall were: meditation mean, engagement maximum 5 seconds backwards, engagement maximum 3 seconds forward, meditation minimum 5 seconds forward and engagement mean. This is an interesting result since it could indicate that the change of rate of the user engagement (or distraction) before, during and after an error is made could have an impact of (sic) the performance of the user” (Lujan-Moreno, Atkinson, & Runger, 2016, pp. 124 - 125). In the discussion section, the authors elaborate on their research results:

A final comment regarding the approaches when classifying errors is that we don’t think that the algorithm is able to tell exactly when an error occurs. Rather, we suspect that the measure of an affective state (or power in a given frequency) is related to the probability of an error occurring. In other words, when a subject is in high/low meditation, frustration, engagement, etc., the probability of an error occurring is also high/low. (Lujan-Moreno, Atkinson, & Runger, 2016, p. 125)

Usability Design for ITS Adoption

If technology systems are so critical to learning, are there ways to ensure that these systems serve learner needs and are used effectively? Rehman Chughtai, Shasha Zhang, and Scotty D. Craig’s “Opportunities from Usability Design for Improving Intelligent Tutoring Systems” (Ch. 6) explores the importance of usability design, user experience (UX) design, and user interface design as applied to ITSes. The design of an ITS is important for user adoption and proper system usage. While research has gone into other aspects of ITS technologies, there has been a gap in terms of “usability testing, usability evaluations, or user interface design process during development” (Chughtai, Zhang, & Craig, 2016, p. 130).




Figure 1. Intelligent Tutoring Systems as a “Common Ground” Between AI and Education (Chughtai, Zhang, & Craig, 2016, p. 130)


The co-authors introduce some basic approaches in human-computer interface design and then explore some examples of interfaces in ITS tools. Some of the critiques are simplistic (e.g., “the bottom row of buttons is not visible to the user”) (p. 145). Perhaps starting with a blank slate and designing a theorized ITS interface would be more directly helpful. Still, the authors make an important point about the need to consider the user experience along with the user interface.

ITS for Ill-Defined Domains

The conventional wisdom suggests that intelligent tutoring systems work especially well for learning sequences that are highly structured, since the automated tutorial aspects address correctable learning approaches. A different approach uses ITSes for ill-structured, less-defined learning sequences, even those involving creative work like writing. (A writing assignment may be completed equally successfully in a wide number of ways and will often diverge rather than converge in execution, so there is no sense of “equifinality” to such learning assignments.) Matthew E. Jacovina and Danielle S. McNamara’s “Intelligent Tutoring Systems for Literacy: Existing Technologies and Continuing Challenges” (Ch. 7) provides a glimpse at the state of the art of ITSes for reading comprehension and writing, both core foundational capabilities.

First, Jacovina and McNamara assess four systems: one for vocabulary training, two for reading comprehension training, and one for assessing reading skills and conveying recommendations to teachers (Jacovina & McNamara, 2016, p. 157). These include Dynamic Support of Contextual Vocabulary Acquisition for Reading (DSCoVAR) for 4th through 8th grade students’ vocabulary; Intelligent Tutoring of the Structure Strategy (ITSS), also for the 4th through 8th grade range (with an animated tutor); Interactive Strategy Tutor for Active Reading and Thinking – 2 (iSTART-2) for middle school through college learners; and Assessment-to-Instruction (A2i) for kindergarten through 3rd grade. Some of the adaptivity seems light, such as adjusting the levels of follow-on readings if a learner is assessed as performing poorly.
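A hypothetical sketch of that kind of light adaptivity follows; the reading levels and score thresholds are invented:

```python
# Illustrative difficulty-stepping rule for follow-on readings, based on
# recent comprehension scores. Levels and thresholds are hypothetical.

LEVELS = ["grade 4", "grade 5", "grade 6", "grade 7", "grade 8"]

def next_reading_level(current: int, recent_scores: list[float]) -> int:
    avg = sum(recent_scores) / len(recent_scores)
    if avg < 0.6 and current > 0:
        return current - 1   # assessing poorly: easier follow-on text
    if avg > 0.85 and current < len(LEVELS) - 1:
        return current + 1   # mastering the level: harder text
    return current

level = next_reading_level(2, [0.5, 0.55, 0.6])
print(LEVELS[level])  # "grade 5" -- stepped down one level
```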

The work gets more interesting with the ITS-enhanced writing tools for formal composition. The co-researchers observe: “Like reading, writing is a complex process that requires the combination of a number of skills, including the ability to engage in critical thinking, knowledge about conventions of writing, and flexibility to apply these skills in a variety of domains (Framework for Success in Postsecondary Writing, 2011)” (as cited in Jacovina & McNamara, 2016, p. 162). The assessed systems include the following: the Educational Testing Service’s Criterion system for 4th grade through college; Writing Pal for high school students; Scaffolded Writing and Rewriting in the Discipline (SWoRD) for high school and college students; and Research Writing Tutor (RWT) for graduate students engaged in research writing. These are lightly explored, with only a few paragraphs describing each. Understanding how each was developed and how they function would be more powerful.

Automatic Standalone ITS or High-Touch Teacher-Supported?

If ITS technologies are set up to disintermediate the human-in-the-middle of learning, when might such a system retain the teacher for teacher-managed (vs. student-managed) systems? Kausalai (Kay) Wijekumar, Bonnie J.F. Meyer, Karen R. Harris, Steve Graham, and Andrea Beerwinkle’s “Comparing Learning Outcomes and Implementation Factors from Student-Managed vs. Teacher-Managed Intelligent Tutoring Systems” (Ch. 8) compares a learner-centered reading system [Intelligent Tutoring System for the Text Structure Strategy (ITSS)] and a teacher-managed writing one [We-Write].

ITSS is a web-based ITS to help learners in Grades 4 – 8 read and understand expository texts. It was based on the self-regulated strategy development (SRSD) model (Harris, Graham, & Santangelo, 2013, as cited in Wijekumar, Meyer, Harris, Graham, & Beerwinkle, 2016, p. 186). It was developed from “observations of expert teachers delivering text structure based instruction to elementary and middle grade students” (Wijekumar, Meyer, Harris, Graham, & Beerwinkle, 2016, p. 179). The software requires learners to go through various learning activities, such as identifying signaling words, classifying various grammatical text structures, identifying important ideas in writing, encoding memory by identifying main ideas, showing reading comprehension, “writing a recall using the strategic organization of the text as a guide,” “differentiating between the top-level-structure and details,” transferring knowledge from one text to another, and “nesting text structures to understand complex real-life texts” (Wijekumar, Meyer, Harris, Graham, & Beerwinkle, 2016, pp. 179 - 180). The Gray Silent Reading Test (GSRT) was used to test the ITS for learning effect size, with the finding that learners using this resource outperformed the control group.
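For illustration, an effect-size comparison of this kind can be computed as Cohen’s d over the two groups’ scores; the numbers below are fabricated, not the study’s GSRT data:

```python
# Cohen's d between a treatment group and a control group, with
# fabricated scores for illustration.
import statistics as st

itss = [34, 38, 36, 41, 39, 37, 40]
control = [31, 33, 35, 32, 34, 30, 33]

def cohens_d(a: list[float], b: list[float]) -> float:
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * st.variance(a) + (nb - 1) * st.variance(b)) / (na + nb - 2)
    return (st.mean(a) - st.mean(b)) / pooled_var ** 0.5

print(f"d = {cohens_d(itss, control):.2f}")  # a positive d favors the treatment group
```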

The authors then studied We-Write, a teacher-managed ITS to “improve persuasive writing skills of fourth and fifth grade children” (p. 186). Here, the researchers purposefully designed the teacher into the loop, to ensure that “the teacher would deliver portions of the instruction to the students and that these teacher-lessons would be carefully choreographed with web-based intelligent tutor lessons” and that teachers must review student performance after the web-based lessons for the children (Wijekumar, Meyer, Harris, Graham, & Beerwinkle, 2016, p. 189). This ITS focuses on “the writing processes, cognitive skills, metacognitive skills (e.g., planning, writing, revising), and efficacy skills” (Wijekumar, Meyer, Harris, Graham, & Beerwinkle, 2016, p. 187). The learning experience with We-Write involves a cycle of interactions between the We-Write teacher and the computer-based lessons (p. 191) for the learners. The participating teachers seem to have had mixed experiences with their assigned roles:

One of the most common challenges was the teacher’s expectations of the computer software. Most teachers were used to leaving students to work on the computer and periodically monitoring student work. All the teachers reported that We-Write was the first software tool where they had to complete some activities prior to using the software and teachers had to unlock the next lesson for the students after reviewing the student performance. This posed some challenges for the teachers and some felt the system relied on the teacher too much. Others reported satisfaction about their ability to monitor and control progress. All teachers acknowledged that the students worked much harder on the activities when the teacher was actively engaged. (Wijekumar, Meyer, Harris, Graham, & Beerwinkle, 2016, p. 192)

This human integration dimension may help ITSes align with the so-called “high-tech, high-touch” modalities of instructor-led teaching and learning common in higher education.
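As a thought experiment (not We-Write’s actual mechanism), teacher-managed gating can be sketched as a lesson sequence that unlocks the next web-based lesson only after the teacher reviews the prior one:

```python
# Hypothetical teacher-in-the-loop gating: the next lesson stays locked
# until the teacher has reviewed the student's prior performance.

class LessonSequence:
    def __init__(self, lessons: list[str]):
        self.lessons = lessons
        self.reviewed = set()   # lessons the teacher has signed off on

    def teacher_review(self, lesson: str):
        self.reviewed.add(lesson)

    def next_unlocked(self) -> str | None:
        for i, lesson in enumerate(self.lessons):
            if all(p in self.reviewed for p in self.lessons[:i]):
                if lesson not in self.reviewed:
                    return lesson
        return None

seq = LessonSequence(["plan essay", "draft essay", "revise essay"])
print(seq.next_unlocked())        # "plan essay" -- available immediately
seq.teacher_review("plan essay")  # teacher review unlocks the next lesson
print(seq.next_unlocked())        # "draft essay"
```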

Conclusion

Dr. Robert Kenneth Atkinson’s Intelligent Tutoring Systems: Structure, Applications and Challenges (2016) offers insightful research works around the building and usage of ITSes. These works show some serious players in the space, well-funded and well-connected individuals who are using cutting-edge analytical techniques (Bayesian logic and networks) and artificial intelligence to advance the work. There seems to be a wide gap between those in the know and the users, who may or may not understand the complexities behind such systems. In terms of ITSes as a field, there seem to be plenty of open research questions and design challenges:

  • What are some workable core designs for the development of ITS broadly? In particular disciplines and domains? In case-based contexts?
  • What types of tutorial supports are most applicable in a particular context?
  • When are tutorial supports detrimental to learning?
  • How much awareness should learners have of what is going on behind the scenes with intelligent tutorial systems?
  • For intelligent tutoring systems technologies that are hand-coded instead of directly dynamic-data-informed, when should these be updated, and how?
  • When should ITS, even dynamic ones informed by incoming data, be updated, and how?

It does seem like ITSes are still finding their place in the learning space. They do not seem to have been widely adopted in higher education, except perhaps in particular pockets of some domains (those that use simulations as part of the learning). Perhaps some massive open online courses (MOOCs) also harness such technologies to meet the needs of large populations of learners. With advances in statistical analysis, data mining, AI, natural language processing, case-based logics, and knowledge of human learning, ITSes may become much more effective.




About the Author




Shalin Hai-Jew works as an instructional designer at Kansas State University. She may be reached at shalin@k-state.edu.