Book Review: Strategies for Harnessing Educational Technologies
By Shalin Hai-Jew, Kansas State University
Educational Technologies: Challenges, Applications and Learning Outcomes
By Lijia Lin and Robert Atkinson, Editors
Nova Publishers
New York
2016
Two core theories inform Lijia Lin and Robert Atkinson’s edited book, Educational Technologies: Challenges, Applications and Learning Outcomes: (1) cognitive load theory, and (2) the cognitive theory of multimedia learning.
In essence, the first theory suggests that the human learner has particular affordances and constraints (especially in information processing), and that it is advisable to design instruction around those empirically observed strengths and weaknesses. The second, generally, suggests that educational technologies can be harnessed to work with, rather than against, those strengths and weaknesses.
Educational Technologies…brings together accomplished researchers from various parts of the world (Spain, Australia, Singapore, Germany, U.S., and China, among others) to share their research insights. This short collection offers a review of some of the latest research, and some transferable and practical insights.
Harnessing Visuals to Align with Human Capabilities
Humans have powerful visual processing capabilities, and images, both static and dynamic, are often employed to communicate information. Yet, many who design instruction are not aware of the underlying principles of visualization design or of how learners’ cognitive mechanisms process visual data. Lijia Lin and Amy Leh’s “Using Visualizations to Enhance Learning: Theories and Techniques” (Ch. 1) cites the “cognitive-affective theory of learning with media” (Moreno, 2009). Achieving a learning objective entails a certain “intrinsic” cognitive load, while poor instructional design may add “extraneous” cognitive load. “Germane” load involves the mental effort to achieve the learning (Lin & Leh, 2016, “Using Visualizations…,” p. 2).
With the popularization of video and simulations, animated visualizations may capture learner attention and enhance learning in some contexts but not all. The authors write: “Animations may be so transient that learners need to mentally hold the previously presented information while at the same time processing the current information presented in the animations. This temporal holding may impose extraneous cognitive load upon learners, resulting in disadvantaged learning” (Lowe, 1999, as cited in Lin & Leh, 2016, “Using Visualizations…,” p. 4). Visual dynamism may encourage passive consumption instead of actual mental engagement, which is another concern. Instructional supports for learners may be necessary when using dynamic visualizations—like videos and simulations—so learners know where to focus their attention.
Some may assume that the more realism in a visualization, the more effective the learning. In fact, the impact of realism is mixed. A certain amount of realism in imagery may create a sense of real-world context, but excessive realism may be too complex and overwhelming, especially for learners new to the topic. In some learning contexts, simple schematics may be preferable to fully detailed visual depictions. The appropriate level of realism depends on the learning context: the learning content and the learners.
Pedagogical Agentry
What about animated pedagogical agents? The various characters in online learning contexts may enhance learner engagement, learning, and retention, if they are designed properly. It is important to assess their actual effectiveness with learners in live learning contexts. One insight: Pedagogical agents are not as effective if learners receive simple and corrective feedback in the learning context. In other words, effective learning feedback may moderate the positive impact of pedagogical agents and make some aspects of their presence redundant.
Learner Self-talk
Learner self-explanations can be an important contributor to learners’ awareness of their own learning and their progress. Learners construct knowledge “by generating inferences, integrating different sources of information, and monitoring and repairing misconceptions” (Roy and Chi, 2005, as cited in Lin & Leh, 2016, “Using Visualizations…,” p. 6).
This learner self-talk information may also be used by researchers to understand how learners are engaging with the learning contents. There has been research on how advanced learners self-talk through new learning as compared with less advanced learners, often focused on particular domains. Self-explanations have been elicited for research through talk-aloud and written protocols, but they have not been widely used in online learning systems.
Lin and Leh (2016) suggest that independent learners using computer-based learning systems benefit from system prompts eliciting self-explanations of the studied phenomena. Learners should also have opportunities to self-evaluate their own responses.
Finally, there is the issue of effective feedback for learning and effective transfer. Should the feedback be adaptive or non-adaptive, delivered in an auditory or textual way, immediate or delayed? What feedback should be provided when in the learning sequence? (Lin & Leh, 2016, “Using Visualizations…,” p. 8)
Seven Plus or Minus Two
Slava Kalyuga’s “Managing Effects of Transient Information in Multimedia Learning” (Ch. 2) suggests that the “human cognitive architecture” is pressured when handling transient information from videos and simulations because of the need to recall what came prior while also managing what is being communicated in real time. As the author notes, working memory may contain “seven plus or minus two” items at a time, while long-term memory serves as a knowledge base over time. When learners are dealing with novel information, the cognitive demands are even higher.
The author writes in the abstract:
“A common example of such challenges involves multimedia techniques that transform permanent information into a fleeting, transient form by replacing written explanatory text with spoken text and static pictures with dynamic visualizations such as animations. Due to well-established processing limitations of human cognitive system, such transient forms of information may create fundamental challenges to learning faced in multimedia environments. From an educational technology design perspective, it is important to a) identify when transiency overloads learner working memory and b) develop and apply techniques that can minimize transiency effects. Cognitive load theory is the instructional theory that deals with negative effect of working memory limitations in learning, and its principles could be applied to ensure that learning outcomes are maximised” (Kalyuga, 2016, “Managing Effects…,” p. 17).
This work makes the case that it is important to give learners controls over the speed of a simulation and other types of dynamism, so they may ultimately control transient information.
The author summarizes some of the findings related to transient information and its consumption. While conventional thinking holds that information should be distributed over the visual and auditory channels (for example, building multimedia with visuals but delivering textual information through an auditory channel instead of a written one, so as not to overload the visual channel), there is also research showing a “reverse modality effect” in other cases, where on-screen text was more effective for learning than the same information presented aurally (Kalyuga, 2016, “Managing Effects…,” pp. 20 - 21). The length of the verbal information also matters: longer textual information is better conveyed in written form than in auditory form in a transient multimedia context (Kalyuga, 2016, “Managing Effects…,” p. 21).
So what are some tips for dealing with transient information? Simultaneous informational redundancy is generally not advisable, but there may be situations when verbal redundancy helps, such as in the reading of a text-heavy slideshow. It may help to provide learner breaks between more intense learning segments. Animations may be better sequenced into smaller units instead of delivered continuously; the idea is to allow time and space for processing the information mentally. Multimodal learning should be designed to the learners’ level of knowledge and experience, and learners should have control over the learning sequence. (Kalyuga, 2016, “Managing Effects…,” p. 27)
Touch Screens
Swiping, pinching, twirling, tapping, pointing, tracing… Of late, mobile devices have been imbued with touch screen inputs. Designing in a tactual way requires some fresh thinking. Shirley Agostinho, Paul Ginns, Sharon Tindall-Ford, Myrto-Foteini Mavilidi, and Fred Paas’ “’Touch the Screen’: Linking Touch-Based Educational Technology with Learning—A Synthesis of Current Research” (Ch. 3) explores the potential of touch screens to enhance learning. The underlying theory is a biological-evolutionary one: hand gestures and movements are a biologically primary skill with lower demands on cognitive resources, leaving more cognition to apply to the learning (Agostinho, Ginns, Tindall-Ford, Mavilidi, & Paas, 2016, p. 34). The physical motions may tap into muscle-memory learning as well.
Their work opens with a literature review of the topic. The team first used an algorithm to search Scopus holdings for related articles, and then whittled the articles down based on a few criteria: the article had to use empirical research; the gesturing had to be done on the touch screen; the research had to report some observable and measurable learning; and the study could not be applied to a “special needs” or “rehabilitation” context (Agostinho, Ginns, Tindall-Ford, Mavilidi, & Paas, 2016, pp. 41 - 42). These four requirements resulted in a reduced set of only nine articles (which might suggest challenges with the quality of the extant research, gaps in researcher interest in the topic, or something else altogether). [While the standards for acceptance make sense, many of the rejected articles likely contain insights into the issue. So even if a work is not included in the final meta-analysis, it would still be important to read through others’ work.]
Their core summary finding from this targeted meta-analysis?
“The critique from these nine studies found that finger-gesture use on a touch screen has the potential to support learning when it is closely aligned with what is being learnt” (Agostinho, Ginns, Tindall-Ford, Mavilidi, & Paas, 2016, p. 33).
The authors build a table to summarize these nine works. The column headers in this table are Study, Context of touch screen use / type of finger gesture, Learners / sample, Research methodology, and Theoretical framework (Agostinho, Ginns, Tindall-Ford, Mavilidi, & Paas, 2016, pp. 43 - 44). In this summary view, it is clear that even articles that pass muster for this chapter are not complete (at least to the requirements of this team). The authors engage in a close reading of the selected works and summarize these in their chapter. One summarized work described “tactile vision,” the idea that hand movements may help guide eye movements. Another summarized work identified a transfer gap:
“The study found that touch screen devices increased young learners’ engagement and interactivity, shown by increased use of gestures when learning from and interacting with touch screen technology compared to other 2D mediums e.g., video. The importance of this study is that for very young children there seems to be a transfer deficit, as learning from 2D medium does not transfer to solving the same problem in a 3D context. This suggests when using 2D touch screens, there is a need to scaffold the learning of young children to facilitate transfer to real-world, three dimensional problems” (Agostinho, Ginns, Tindall-Ford, Mavilidi, & Paas, 2016, p. 52).
In addition to the insights about tactual design, this work really does show the importance of solid research work: starting with deep knowledge of a field, wielding relevant theories well, hypothesizing with clarity and relevance before collecting data, conducting research with high professional standards, capturing empirical data, applying logical analysis of the data, and sharing findings with honesty, comprehensiveness, and transparency.
Designed Computer-Enabled Feedback
Laura M. Schaeffer, Lauren E. Margulieux, Dar-Wei Chen, and Richard Catrambone’s “Feedback via Educational Technology” (Ch. 4) frames the challenge as designing proper feedback for learners without creating make-work for human instructors (who have limited time and energy). Some aspects of educational feedback include its contents, its modality and characteristics, its level of specificity, and the design of social presence for feedback. The general thinking is that feedback should be personalized to the learners. This particular work does not address “programmed instruction” or “intelligent tutoring systems.”
The authors begin with a humble and basic point—that for many, using computer screens adds to cognitive load and may contribute to fatigue and stress (Schaeffer, Margulieux, Chen, & Catrambone, 2016, p. 61). Designing automated feedback has to be balanced against potential downsides of using computational systems. Feedback may involve three general categories: knowledge of results, knowledge of correct response, and elaborated feedback (Shute, 2008, as cited in Schaeffer, Margulieux, Chen, & Catrambone, 2016, pp. 61 – 62). Overall, elaborated feedback is preferable to the prior two in terms of learning efficacy, but it requires more attentive design (Schaeffer, Margulieux, Chen, & Catrambone, 2016, pp. 61 - 62). Virtual learning agents may be used to convey encouraging affective feedback, but they can be distracting during difficult learning sequences (Schaeffer, Margulieux, Chen, & Catrambone, 2016, p. 63).
The research literature shows both upsides and downsides to feedback. Learners with relatively low retention and low transfer tend to rely too heavily on feedback instead of forming accurate self-assessments of their own learning; another challenge is that feedback may lead learners to become overconfident in their skills (for example, learners sometimes mistake in-the-moment retrieval performance, bolstered by feedback, for long-term learning) (Schaeffer, Margulieux, Chen, & Catrambone, 2016, p. 64). When given the chance, learners often choose feedback that benefits the near term rather than actual long-term learning. In other words, it’s complicated.
Classroom Orchestration Systems
One of the more distinctive technologies appears in Kurt VanLehn, Salman Cheema, Jon Wetzel, and Daniel Pead’s “Some Less Obvious Features of Classroom Orchestration Systems” (Ch. 5). Their work addresses an orchestration system for classroom learning, which enables learners using mobile devices to access a virtual learning poster (for either individual or small group work) and a faculty member to view all the learners’ work on screen. The various projects may also be projected on a large-screen monitor for whole-class viewing of and engagement with the respective projects.
This work describes the creation of the Formative Assessment with Computational Technology (FACT) system “to support formative assessment and collaborative learning. Students use tablets to edit electronic posters bearing movable cards. Teachers use a dashboard that both increases their awareness of their students’ state and helps them move fluidly and easily among activities. The FACT system has undergone extensive iterative development, with 28 classroom trials so far” (VanLehn, Cheema, Wetzel, & Pead, 2016, p. 73). A value-added feature that the authors conceptualized is an “alerting” feature that may indicate that a small group has run aground and may need instructor intervention and support. (A system that suggests courses of action for an instructor may enhance instructor skills in some contexts but de-skill them in others.)
VanLehn et al. observe that properly evaluating an orchestration system may be challenging. They write:
“It will be hard to tell if FACT or any other orchestration system is a success. One cannot evaluate an orchestration system by simple measures such as learning gains from a pre-test to a post-test. Teacher’s opinions are important, but should probably not be the only measure. Naked-eye classroom observation is clearly needed, but has its limitations—for instance, observers cannot easily see the students’ written work. Video recording of individuals or small groups is feasible, but only captures a small part of the whole classes (sic) activity” (VanLehn, Cheema, Wetzel, & Pead, 2016, pp. 91 - 92).
Anna Arici, Sasha Barab and Ryen Borden’s “Gaming Up the Practice of Teacher Education: Quest2Teach” (Ch. 6) describes an immersive virtual role-playing simulation in which pre-service teachers engage with simulated teaching contexts through their virtual avatars. In Quest2Teach, learners develop important professional competencies, in particular: (1) suspending judgment (and realizing when one does not have complete information to make a judgment), (2) asset-based thinking (and assessing “the strengths of a person or situation, using a person’s strengths to reach positive outcomes and looking for strengths even if they are not immediately visible”), (3) locus of control (identifying “what is within their own ability to control and persevering in the face of challenges”), and (4) interpersonal awareness (understanding others’ viewpoints), while engaging with a virtual mentor teacher, virtual students, and other virtual characters in the environments (Arici, Barab, & Borden, 2016, pp. 95, 103-104).
Learners select how they may style their avatar and how they may deal with conundrums such as a mentor teacher who has a different theoretical framework and approach than the preservice teacher learner. The decisions that the learners make in the virtual world have both immediate and long-term effects on their in-world professional achievements. The point is that learners can “fail safely” in a virtual environment while experiencing authentic contexts (Arici, Barab, & Borden, 2016, p. 96). This game has been used with over 3,500 students. The activating theory behind the game space is “transformational play,” which is described as follows:
“Many of the strengths of game-based learning can be summarized in the theory of transformational play: a 3-fold theory that positions the person with intentionality, the content with legitimacy, and the context with consequentiality (Arici, 2009; Barab, Gresalfi, and Ingram-Goble, 2010). In these games, learners become virtual protagonists who use the knowledge, skills, and concepts of the educational content to first make sense of a situation and then make choices that actually transform the play space and the player—they are able to see how that space changed because of their own efforts” (Arici, Barab, & Borden, 2016, p. 97).
Knowing how to implement virtual world games for effective learning is an important lesson from a number of studies, including this one (Arici, Barab, & Borden, 2016, p. 97). To this end, the game comes with a curriculum guide and a Teacher Toolkit management system. In this chapter, the authors share feedback from the various pre-service teacher-learners about this learning experience (with the critical lessons of how to resolve differences and how to engage learners effectively). One important finding is that learners with more coursework experience tend to learn better from this game: “Players in their first semester of teacher preparation lacked the context and experience to make sense of the game and its complexities. They enjoyed the game, and learned the core concepts of professionalism as demonstrated on applied tests, but the larger experience, relevance, and impact was lacking. In contrast, when student teachers played the same game curriculum with three semesters more experience in the field, they found it engaging and relevant, with many students even asking for a transcript so they could review and practice the language for saying things professionally” (Arici, Barab, & Borden, 2016, p. 111).
Interpreting Eye Movements
Irene T. Skuballa, Jasmin Leber, Holger Schmidt, Gottfried Zimmermann, and Alexander Renkl’s “Using Online Eye-Movement Analyses in an Adaptive Learning Environment” (Ch. 7) addresses some capabilities and limitations of eye movement analysis in an adaptive learning environment. Adaptive learning systems are designed to make changes to the learning environment and learning sequences based on information about the learners: their actions, the timing of their responses, their profiles, and other factors. Eye movements, including fixations on stationary objects and “smooth pursuit” of moving objects, are thought to potentially convey information about attentional focus and cognitive processes in a non-invasive way (Skuballa, Leber, Schmidt, Zimmermann, & Renkl, 2016, p. 119). Results seem mixed on whether pupil dilation may be seen as an indicator of “working memory load” (Skuballa, Leber, Schmidt, Zimmermann, & Renkl, 2016, p. 119). Expertise, though, has been linked to eye movements (how and where an expert glances, in sequence).
This authoring team closely measured how much time it generally takes to process information “to be able to answer a rapid assessment task correctly” (Skuballa, Leber, Schmidt, Zimmermann, & Renkl, 2016, pp. 133 – 134). They found that learners generally focus longer on areas with text than on those with visuals. The authors also found the following: “Using eye-movement thresholds to decide whether a rapid assessment task needs to be presented resulted in higher diagnostic sensitivity and reduced learning times while not impairing learning outcomes. However, we did not detect some knowledge gaps by using the implemented thresholds for eye movements. Hence, the adaptive mechanisms based on learners’ eye-movements still need to be improved” (Skuballa, Leber, Schmidt, Zimmermann, & Renkl, 2016, p. 137). They conclude that triangulation of multiple data sources, including eye tracking, may lead to advancements in harnessing computational adaptivity in automated learning systems.
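The threshold mechanism described in the quotation above can be pictured as a simple gaze-based rule. The sketch below is a hypothetical illustration only: the function names, the fixation data format, and the 2500 ms threshold are assumptions for clarity, not the authors’ actual implementation or parameters.

```python
# Hypothetical sketch of threshold-based adaptivity: present a rapid
# assessment task only when the learner's gaze data suggest the content
# may not yet have been processed. All names and the 2500 ms threshold
# are illustrative assumptions, not the chapter's actual values.

def total_dwell_ms(fixations, area_of_interest):
    """Sum fixation durations (ms) that fall inside one area of interest.

    `fixations` is a list of (aoi_label, duration_ms) tuples.
    """
    return sum(duration for (aoi, duration) in fixations
               if aoi == area_of_interest)

def needs_rapid_assessment(fixations, area_of_interest, threshold_ms=2500):
    """Skip the rapid assessment task if accumulated dwell time on the
    relevant area of interest meets the threshold; present it otherwise."""
    return total_dwell_ms(fixations, area_of_interest) < threshold_ms
```

Used this way, a learner who dwelt 2700 ms on the text region would be spared the assessment task, while one who only glanced at a diagram for 800 ms would still be tested on it, which mirrors the trade-off the authors report: fewer interruptions, but some knowledge gaps can slip through.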
TPACK: Technological Pedagogical Content Knowledge
Lih Ing Goh and Joyce Hwee Ling Koh’s “The Use of a Corpus-Based Tool to Support Teachers’ Assessment Design: An Examination through the Lens of TPACK” (Ch. 8) is a case study that uses TPACK in a novel way. TPACK stands for Technological Pedagogical Content Knowledge, a framework created in 2006 by Drs. Punya Mishra and Matthew J. Koehler. According to the TPACK concept, teaching effectively with technology requires consideration of three interacting and overlapping domains: technological knowledge, pedagogical knowledge, and content knowledge (Goh & Koh, 2016, p. 146). The researchers analyzed the use of a corpus-based support tool (software that counts various language-based features of a text) by Chinese language instructors designing a Chinese comprehension test (while expressing themselves through think-aloud protocols and being recorded), and identified which parts of TPACK were most salient in their design work based on their comments. TPACK has historically been applied to pedagogical design but not specifically to the creation of a language learning assessment.
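To make the idea of a corpus-based support tool concrete, here is a minimal sketch of the kind of counts such a tool reports (character count, word count, and vocabulary coverage against a reference word list). The function name and the coverage definition are assumptions for illustration, not the tool described in the chapter.

```python
def text_stats(words, known_vocab):
    """Compute simple corpus-style counts for a tokenized text:
    character count, word count, and the share of tokens covered
    by a reference vocabulary (a rough proxy for "text coverage").

    Hypothetical illustration; not the chapter's actual tool.
    """
    char_count = sum(len(w) for w in words)
    word_count = len(words)
    coverage = sum(1 for w in words if w in known_vocab) / word_count
    return {"characters": char_count,
            "words": word_count,
            "coverage": coverage}
```

An instructor-facing tool would layer difficulty ratings and curriculum word lists on top of counts like these, but the underlying tallies are this simple.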
Figure 2: Technological Pedagogical Content Knowledge (TPACK)
The researchers explain their work in this case study:
“To understand teachers’ TPACK, video recordings were made of two Chinese Language teachers as they designed a reading comprehension test with and without using a corpus-based language support tool. Reading comprehension tests based on the multiple-choice format was chosen as a study context because it is increasingly being used to test language competencies in international assessments (Sonnleitner, 2008). To analyze the data, the different categories of TPACK that emerged from the decision moves made by teachers were identified through content analysis of the video recordings. The qualitative themes that emerged from coding were corroborated with chi-square analyses to identify how the corpus-based language support tool influenced teachers’ decision-making during assessment” (Goh & Koh, 2016, p. 144).
The Flash-based corpus tool captures the level of difficulty of language, character counts, word counts, and text coverage (Goh & Koh, 2016, p. 149). The coding of the instructors’ comments is shown in relation to TPACK concepts, with particular concepts being more predominant (Goh & Koh, 2016, p. 153). Based on a chi-square goodness-of-fit test, some statistically significant categories were identified (Goh & Koh, 2016, p. 156). Unsurprisingly, the two instructors who participated in this research took different approaches to the task, and they used different aspects of the TPACK framework more often. The two parts of TPACK most practically relevant in their assessment designs were “pedagogical content knowledge” and “technical content knowledge” (Goh & Koh, 2016, p. 143). In other words, the overlap between knowledge of learning strategies and the content area, and the overlap between the content area and technological knowledge, are critical. That said, this is an early work about the potential effects of ICT considerations in the creation of language learning assessments in a real-world context.
Communities of Practice
Hirah A. Mir and Holly L. Meredith’s “Using Facebook to Enhance Learning and Interactions in Undergraduate Courses” (Ch. 9) describes the harnessing of social networking sites to enhance learning-focused socializing among learners and the building of learning communities. Here, this team used Facebook groups to “deliver course content, post announcements, share resources, support discussions and critical thinking exercises, post and like comments, and provide links to materials” (Mir & Meredith, 2016, p. 165). The authors note that studies comparing Facebook to traditional LMSes have found it beneficial for community building and class discussions (Mir & Meredith, 2016, p. 166). Faculty may be seen as more approachable when they engage in social media. “High self-disclosure profile teachers were judged to be more trustworthy and caring” (Mazer, et al., 2009, as cited in Mir & Meredith, 2016, p. 166). Language barriers may be lessened. Outside experts may become parts of communities of practice. Using Facebook for learning is not without its challenges, such as potential intrusions in participants’ private social networks and security breaches.
Erdem Demiroz and Rita Barger’s “Course Redesign and Infusion of Educational Technology into College Algebra” (Ch. 10) wrangles with a perennial challenge of improving learning and retention for a common course: remedial college algebra. The core question in this work is what instructional and learning technologies (ILT) may be brought to bear to ensure that learners are ready for college-level pre-calculus (or other less math-heavy learning paths). The authors begin their chapter as follows: “College algebra is renowned for undesirable pedagogical outcomes such as high dropout rates, low academic achievement and low retention rates” (Demiroz & Barger, 2016, p. 175). According to various published sources, dropout rates from this foundational course run some 50 – 90% across institutions of higher education in the U.S. The challenges stem from the high variance in students’ math preparation and the fast-diminishing STEM pipeline, particularly among American high schoolers. These courses tend to be high-enrollment and also high weed-out ones, and enrollments are expected to remain high in the near future.
What are some methods from the research literature that might improve such courses? The authors note benefits to using calculators to “externalize information processing” (Demiroz & Barger, 2016, p. 181). They support the “supplemental instruction” model developed by Dr. Deanna Martin in 1973 at the University of Missouri-Kansas City, which builds in collaboration and mutually supportive, peer-led review sessions outside of class. The authors also propose the use of online tutorials that enable learners to repeat learning sequences to enhance comprehension.
Learning for Lifelong Learning and Life
Angela Barrus, Jared Chapman, Robert Bodily, and Peter J. Rich’s “Using Educational Technologies to Scaffold High School and College Students’ Skill and Will to Plan, Practice and Produce” (Ch. 11) takes yet another tack to improve learners’ lives. They argue that learners benefit not only from enhancing their study skills (for lifelong learning) but also from enhancing life skills for satisfactory employment and successful personal lives. Life skills are complex and multi-dimensional. To be successful, people need to set goals, plan effectively, and follow through on their plans. They have to face discouragements and still persist with self-confidence, will, hope, actionable plans, and actions. To these ends, the authors propose employing the “3P learner-centered design framework” (“planning, practicing, producing”) and using technologies “to operationalize learners’ skill and will strategies in a variety of digital learning domains and environments” (Barrus, Chapman, Bodily, & Rich, 2016, p. 208). Several case studies were used as exemplars: Utah Valley University Project Delphinium, Brigham Young University Student Engagement Project, and Arizona State University Adaptive Learning Project.
A Customized E-Portfolio System
One of the more ambitious chapters is Cecile M. Foshee, Neil Mehta, S. Beth Bierer, and Elaine F. Dannefer’s “A Model for Integrating Technology into an Assessment System: Building an E-Portfolio to Support Learning” (Ch. 12). This describes the methodical, custom-built e-portfolio system at the Cleveland Clinic Lerner College of Medicine (CCLCM) at Case Western Reserve University. Over a 12-year period, a team at this medical school built an e-portfolio system informed by an “assessment for learning” framework. A main focus is competencies as “observable sets of complex abilities (i.e., behaviors and skills) that integrate knowledge, skills, values, and attitudes” (Albanese, Mejicano, Mullan, Kokotailo, & Gruppen, 2008; Frank, et al., 2010, as cited in Foshee, Mehta, Bierer, & Dannefer, 2016, p. 224). In this conceptualization, students not only create the contents of their e-portfolios but write self-reflections to explore their learning. What were some of the competencies? In the domain of “Research and Scholarship,” there was “Demonstrates knowledge and skills required to interpret, critically evaluate, and conduct research.” In “Professionalism,” “Demonstrates commitment to high standards of ethical, respectful, compassionate, reliable, and responsible behaviors in all settings and recognizes and addresses lapses in behavior.” In “Patient Care,” the competencies read: “Demonstrates proficiency in clinical skills and clinical reasoning; engages in patient-centered care that is appropriate, compassionate and collaborative in promoting health and treating disease” (Foshee, Mehta, Bierer, & Dannefer, 2016, p. 226). The objects in the e-portfolios were arrived at through curricular processes. All staff involved in the teaching and learning, the promotion committee, and learners were integrated into this process so that the e-portfolio technology could meet their respective needs (Foshee, Mehta, Bierer, & Dannefer, 2016, p. 234).
These and other aspects of the e-portfolio system were collaboratively arrived at by the professionals at this school, through disciplined reasoning. To complement the formative assessments, there was allowance for “unique-evidence, generated from individualized experiences or other extra-curricular activities” (Foshee, Mehta, Bierer, & Dannefer, 2016, p. 234). The technology system enabled analysis of macro-level information on the dashboard to identify trends. For security, the team decided on a web-facing tool but enforced rigorous password protections for the learner contents.
Technology and Language Learning
Language learning is a highly popular pursuit. Justin Shewell and Shane Dixon’s “Technology-Supported Language Learning” (Ch. 13) suggests that educational technologies may benefit this domain, even as language instructors may not have sufficient technology training. As older technologies, they point to Cuisenaire rods and a “colored sound chart” (Shewell & Dixon, 2016, p. 254).
Figure 3: Dr. Caleb Gattegno’s Original “Sound-Color Chart for English” (from Wikipedia)
The authors find inspiration in some of the technologies employed in massive open online courses (MOOCs), gamified learning (enhancing online learning with game features, such as points-earning), and flipped classrooms (recording lectures for delivery online and using face-to-face class time for group work, community building, and applied work) (Shewell & Dixon, 2016). This work does not really provide management insights but does offer some encouragement to explore technologies for pedagogical enhancements.
The final chapter is Antonio Sarasa Cabezuelo’s “Data Visualization Tools for Teaching” (Ch. 14). This work suggests that much of big data is represented in data visualizations—from data tables to maps to graphics—and because these contain so much summary information and meaning, these forms should be better understood. Data visualizations need to be accessible for all potential users, including those with perceptual challenges.
Educational Technologies…offers a solid collection with some transferable approaches. This book is highly readable, with important principles for newcomers to instructional design but also sufficient complexity for advanced practitioners. This work shows that theories should not be mindlessly applied. Research on learning effects should be sufficiently nuanced to catch moderator effects. Educational research should be practical and applied. Educational technologies should be wielded in an informed way.
The technologies described—e-portfolio systems, data visualizations, sociality and communities of practice, pedagogical agentry—are fairly widely accessible and may be applied to a variety of learning contexts. Quite a number of other relevant technology topics were not engaged here: augmented reality, online labs, AI, LMS data portals, and so on.
Acknowledgments: Thanks to Nova Science Publishing for making the book available for review. The book was provided as a watermarked e-book.
About the Author
Shalin Hai-Jew works as an instructional designer at Kansas State University. Her email is shalin@k-state.edu.