A Fresh Look at the Faculty Evaluation Process
By Linda Merillat, Jane Carpenter, Bobbe Mansfield, and Debbie Isaacson, Washburn University, School of Nursing
Introduction
In the ever-evolving landscape of higher education, the quality of teaching and the learning experience is paramount to program success. It is within this context that faculty annual evaluations emerge as a critical mechanism for assessing and improving the performance of educators in universities and colleges. These evaluations should offer a comprehensive view of faculty members' personal strengths and areas in need of development that impact the quality of the learning experience for students, and they should serve as a cornerstone of academic excellence and a driving force behind continuous program improvement. This article provides an overview of how a medium-sized Midwestern university stepped back, took a fresh look at the faculty annual evaluation process, and transformed it from an archaic exercise into one that encourages and supports faculty in their professional development.

Problem
The problems with the faculty annual evaluation process within the School of Nursing (SON) were long-standing. The process was based on a form that was developed over twenty years ago. The evaluation consisted of two parts. First, each year the faculty were required to complete a narrative activities report (Figure 1, Figure 2). Then, they were asked to evaluate themselves using a Likert scale-type form (Figure 3, Figure 4). When done, a meeting was scheduled with the Associate Dean. During the meeting, evaluation scores were often changed or altered by the evaluator. The reasons for these changes seemed subjective, since no rubric or details were provided about what was expected at each level of performance. At the end of the process, faculty were provided with an overall score of 1 to 4, with 1 being Unsatisfactory and 4 being Exemplary. Faculty often felt demeaned by a process that had no clear rules, and the experience felt capricious and punitive.
From a management perspective, there was no way to track faculty performance over time. The written form provided no means for collecting metadata for longitudinal analysis. During this time, the SON was tasked with collecting data for a Center of Excellence application, and faculty performance data was found to be lacking.
Alt text for Figures 1 & 2 - Outdated Dean's Annual Evaluation of Faculty MS Word doc - provided by Linda Merillat.
Alt text for Figures 3 & 4 - Outdated Faculty Annual Report Template MS Word doc - provided by Linda Merillat.

Solution
The solution was to develop a comprehensive, rubric-based form that focused on continual improvement rather than a summative score. The criteria were primarily based on requirements established in the SON’s promotion and tenure guidelines. A draft of the proposed rubric was presented to the deans and program directors. Over several meetings, the language of the rubrics was refined. The current criteria and associated rubrics are provided in an appendix (Appendix- Faculty Annual Evaluation Rubric).
As the project unfolded, there were discussions about the best way for faculty to complete the forms. Using a PDF form filler was suggested. Ultimately, it was decided to implement the form as a series of tabs in Excel. The benefits of using Excel were that faculty could download the form at the beginning of the academic year and update it throughout the year; and then, after meeting with their Associate Dean, the data from the form could be imported into the SON’s database to track metadata and longitudinal trends.
The first page of the Excel workbook was a summary page (Figure 5 - Summary Page of New Faculty Annual Evaluation Form). On the summary page, faculty entered their name. The form listed all the categories under evaluation organized by a widely accepted model of Teaching, Scholarship, Service, and Other. Steps in the overall process with due dates were provided. Hyperlinks were provided to make it easier to navigate the form.
Each tab in the form followed a consistent format (Figure 6 - Sample tab in New Faculty Annual Evaluation form). The rubric for the criteria was developed and clearly provided. Under the rubric were radio buttons for selection. In the process, faculty assessed themselves for each criterion. Next, the faculty members provided support for their selection with relevant examples of their performance. During their meeting with the Associate Dean, changes were collaboratively made to these determinations.
Alt text for Figures 5, 6, 7, 8, 9 - New SON Faculty Evaluation Form Template - 2023-2024 MS Excel spreadsheet - provided by Linda Merillat.

Benefits
The introduction of this new faculty annual evaluation form and process has provided the SON with several benefits.
First, it shifted the atmosphere of faculty annual evaluations from punitive and subjective to a more flexible, formative format focused on continual improvement. Faculty no longer felt that they were being assigned an arbitrary score.
Next, faculty annual evaluations became more consistent. The evaluations were administered by two different associate deans. The new form with the embedded rubrics ensured that each of the criteria was given the same treatment.
Faculty members also saw benefits. The form was under the control of each faculty member which made it easier for faculty to track and document accomplishments throughout the academic year. The raw data from each form was incorporated into the SON’s database which allowed detailed performance reports to be provided to each faculty member in support of their tenure and promotion applications.
From a managerial perspective, the incorporation of the raw data into the SON’s database allowed reports to be provided that showed faculty performance and trends over time (Figure 7) and a scholarship summary that was used for other departmental required reporting (Figure 8).
Lessons Learned
The general construction of the form has remained stable, but several improvements to the form and the process were made. Most importantly, it was critical to provide a clear delineation of what accomplishments should appear on which tab and to communicate these expectations to faculty. Examples of the types of activities to be included within each criterion area were added. To facilitate ease of use for faculty, the headings of each tab were color-coded to designate the major areas of teaching, scholarship, service, and an ‘other’ category. An unexpected issue arose when faculty were required to sign the form. The process was primarily digital, and a formal signature added an unnecessary layer of complexity. Instead, faculty members type in their initials electronically during the review meeting with their Associate Dean.
The process of incorporating the raw data into the SON’s database was problematic at times. Faculty had to be advised to limit lines to 250 characters so that their comments didn’t get truncated during the import process. Special attention also had to be given when the final spreadsheets were submitted for import into the database, requiring a manual review for correctness and accuracy. To minimize these potential problems, faculty were given several tips for completing the form (a small validation sketch follows the list):
- Don't insert new rows.
- Keep all your accomplishments in one row.
- Don't use spaces to force a new line.
- When editing in Excel (not in the browser), type ALT+ENTER to force a new line.
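A quick automated check can catch over-long cells before a workbook is submitted for import. The following is a minimal sketch only, assuming Python with the openpyxl library; the file and sheet names are illustrative and not those of the actual SON workbook.

```python
# Minimal sketch (assumes openpyxl is installed; file and sheet names are illustrative).
# Flags any text cell longer than 250 characters so comments are not truncated on import.
from openpyxl import load_workbook

MAX_LEN = 250  # limit suggested to faculty so text survives the database import

def find_long_cells(path, sheet_name):
    """Return (cell coordinate, length) for every text cell over MAX_LEN characters."""
    wb = load_workbook(path, data_only=True)  # data_only=True reads cached formula results
    ws = wb[sheet_name]
    problems = []
    for row in ws.iter_rows():
        for cell in row:
            if isinstance(cell.value, str) and len(cell.value) > MAX_LEN:
                problems.append((cell.coordinate, len(cell.value)))
    return problems

# Example use with hypothetical names:
# for coord, length in find_long_cells("FacultyEval_2023-2024.xlsx", "Teaching"):
#     print(f"{coord}: {length} characters (over {MAX_LEN})")
```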
Technical Details for Creating the Form
This section of the article discusses some of the technical details used to create the form.
Developing the Rubric
The rubric development was a crucial aspect of the creation of the form. The rubric was developed and reviewed over several months. The general process was:
- Criterion areas were decided based on the SON’s Promotion and Tenure Guidelines.
- A rubric for each criterion was drafted. A 4-point scale was adopted (Exemplary, Professional, Improvement Required, Unsatisfactory Performance).
- Most criteria included a Not Applicable option. This allowed for flexibility in how the form was used based on tenure-track vs. non-tenure-track faculty and undergraduate vs. graduate faculty.
- The criteria were grouped into the general areas of teaching, scholarship, service, and other.
- The rubric was reviewed several times before it was finalized.
Creating a Faculty Annual Evaluation Form in Excel
The form uses some advanced features of Excel, and the general process for creating the form is outlined below.
The form used radio buttons. To use radio buttons in Excel, the Developer tab was enabled. The Developer tab is not displayed by default, but it was added to the ribbon.
1. On the File tab, go to Options: Customize Ribbon.
2. Under Customize the Ribbon and under Main Tabs, select the Developer check box.
Next:
1. Opened a blank Excel sheet.
2. Created a summary sheet.
3. Created tabs for Goal Status, New Goals, and Review Comments.
4. For each criteria area:
   a. Added the rubric at the top.
   b. Created a set of radio buttons under each criterion in the rubric.
   c. Added lines for Support for Evaluation.
5. Added tabs for additional requirements or other
a. Sending current CV to SON Coordinator
b. Providing APA references for scholarship
c. Providing details for Grants
d. Completing assigned Office 365 Planner tasks
e. Any other faculty accomplishments
6. Added hyperlinks to ease navigation
a. At the top of each tab, added a Return to Summary link
b. On the Summary page, added hyperlinks from the criteria category to each spreadsheet tab.
7. Added a hidden spreadsheet to associate button values with descriptions (Figure 9). Added logic and formulas to display descriptions on the Summary sheet corresponding to the button value selected for each criteria area.
8. Added a hidden spreadsheet to facilitate drop-down lists for faculty name and academic year (a small sketch of this hidden-sheet approach follows the list).
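Most of this construction was done by hand in Excel, and the radio buttons in particular are Developer-tab form controls that cannot be scripted this way. Still, the hidden lookup and list sheets and the description formula in steps 7 and 8 can be sketched programmatically. The snippet below is an illustration only, assuming Python with openpyxl; the sheet names, cell addresses, and linked-cell convention are assumptions rather than the workbook's actual layout.

```python
# Illustrative sketch only (assumes Python + openpyxl); sheet names, cell addresses,
# and the linked-cell convention are assumptions, not the SON workbook's real layout.
from openpyxl import Workbook
from openpyxl.worksheet.datavalidation import DataValidation

wb = Workbook()
summary = wb.active
summary.title = "Summary"
teaching = wb.create_sheet("Teaching")   # one tab per criteria area
teaching["B30"] = 4                      # cell a radio-button group would be linked to (1-5)

# Step 7: hidden sheet associating button values with rating descriptions.
lookup = wb.create_sheet("Lookup")
ratings = ["Unsatisfactory Performance", "Improvement Required",
           "Professional Performance", "Exemplary Performance", "Not Applicable"]
for value, label in enumerate(ratings, start=1):
    lookup.cell(row=value, column=1, value=value)
    lookup.cell(row=value, column=2, value=label)
lookup.sheet_state = "hidden"

# Step 8: hidden sheet backing the drop-down lists for faculty name and academic year.
lists = wb.create_sheet("Lists")
lists["A1"], lists["A2"] = "Faculty Member A", "Faculty Member B"   # placeholder names
lists["B1"], lists["B2"] = "2023-2024", "2024-2025"
lists.sheet_state = "hidden"

name_dv = DataValidation(type="list", formula1="Lists!$A$1:$A$2", allow_blank=True)
summary.add_data_validation(name_dv)
name_dv.add(summary["B2"])               # assumed faculty-name cell on the Summary page

# Formula on the Summary page that echoes the description for the selected button value.
summary["C5"] = "Teaching: Course Design"
summary["D5"] = '=IF(Teaching!B30="","",VLOOKUP(Teaching!B30,Lookup!$A$1:$B$5,2,FALSE))'

wb.save("faculty_eval_sketch.xlsx")
```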
Integrating the Form with Our Database
This form was designed to integrate with an Access database.
1. The database was designed (Figure 10 - Database Design in Access).
2. Values were determined.
3. Several hidden sheets were created; each corresponded to one of the tables in the database design:
   a. Report Summary
   b. Faculty Evaluation Import
   c. Faculty Accomplishment Import
   d. Goal Status Import
   e. New Goals Import
4. Formulas were used to transpose data from each sheet into the hidden sheets for easy import into the database (a minimal import sketch follows this list).
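As a rough illustration of that import step, the sketch below reads the hidden import sheets with pandas and appends them to database tables. The actual target was the SON's Access database; SQLite is used here only as a stand-in, and the sheet, table, and file names are assumptions.

```python
# Rough illustration of the import step. The SON database is Microsoft Access; SQLite is
# used here only as a stand-in target, and the sheet/table/file names are assumptions.
import sqlite3
import pandas as pd

IMPORT_SHEETS = {
    "Faculty Evaluation Import": "FacultyEvaluation",
    "Faculty Accomplishment Import": "FacultyAccomplishment",
    "Goal Status Import": "GoalStatus",
    "New Goals Import": "NewGoal",
}

def import_workbook(xlsx_path, db_path):
    """Append each hidden import sheet to its corresponding database table."""
    conn = sqlite3.connect(db_path)
    try:
        for sheet, table in IMPORT_SHEETS.items():
            # The transposition formulas lay each hidden sheet out one record per row.
            frame = pd.read_excel(xlsx_path, sheet_name=sheet)
            frame = frame.dropna(how="all")      # skip rows whose formulas produced no data
            frame.to_sql(table, conn, if_exists="append", index=False)
    finally:
        conn.close()

# Example (hypothetical file names):
# import_workbook("FacultyEval_2023-2024.xlsx", "son_faculty_evaluations.db")
```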
Establishing an Annual Process for Faculty to Follow
The annual process used by each faculty member evolved. The final process for faculty was documented in the form:
- Enter your name – use the drop-down.
- For each criteria area, click on the link or the tab at the bottom of the sheet.
- Review the rubric.
- Provide support for your evaluation of the criteria.
- Select a self-evaluation score. (Don't forget the CV and other tabs at the end.)
- Click on the link to return to this Summary page. Continue for each criteria area.
- Update the status of previous goals.
- Establish new goals for the upcoming academic year.
- Send CV to SON Coordinator.
- Send the completed form to your immediate supervisor - DUE BY MAY 31
- Your supervisor will review and add comments.
- Meet with your supervisor. During your review meeting with your supervisor, updates can be made.
- Add final response comment, if desired.
- Send the completed form to the database administrator - DUE BY AUGUST 15
- A copy of the final summary report will be put in your official faculty folder.
- A complete final report will be sent to you by the database administrator for your records.
Faculty Annual Evaluation Form Template
The Faculty Annual Evaluation template is available for download. Download the form before viewing it; the radio buttons do not work in the Office 365 web view.
Appendix- Faculty Annual Evaluation Rubric
The Faculty Annual Evaluation Rubric is designed to promote formative professional growth. The rubric below is flexible. There may be some criteria that do not apply to all faculty positions or an individual faculty member’s scholarship plan. The rubric is integrated into the SON Faculty Annual Evaluation Form. In the SON Faculty Annual Evaluation form, each faculty member self-evaluates their performance for each criterion and documents support for the selected evaluation rating. If a criterion does not apply, the faculty member selects ‘Not Applicable.’
* Select only those areas that reflect your scholarship plan – it is not expected that all areas will be addressed during each evaluation period.

Each criterion below is rated on a four-point scale: Exemplary Performance, Professional Performance, Improvement Required, or Unsatisfactory Performance.
Teaching

Academic Program Planning/Curriculum Development
- Exemplary Performance: Takes a leadership role in ongoing program planning and curriculum development; ensures courses meet objectives for the overall program plan; and develops student learning activities to achieve course outcomes.
- Professional Performance: Actively participates in ongoing program planning and curriculum development; ensures courses meet objectives for the overall program plan; and develops student learning activities to achieve course outcomes.
- Improvement Required: Participates minimally in ongoing program planning and curriculum development; courses need improvement to meet the objectives for the overall program plan; or student learning activities are not linked clearly with course outcomes.
- Unsatisfactory Performance: Does not participate in ongoing program planning and curriculum development and course/program outcomes are not achieved.

Content Expertise (Requires peer review to confirm – if no peer review was done, select “Improvement Required”)
- Exemplary Performance: Demonstrates personal scholarship/expert knowledge in content areas by incorporating key topics, supporting instruction with empirical evidence, presenting the latest advances, connecting relevant information to expected outcomes/roles, and grounding learning activities in real-world experience; this level of performance is confirmed by peer review evaluation.
- Professional Performance: Demonstrates knowledge in content areas by incorporating key topics, supporting instruction with empirical evidence, presenting the latest advances, connecting relevant information to expected outcomes/roles, and grounding learning activities in real-world experience; this level of performance is confirmed by peer review evaluation ratings.
- Improvement Required: Demonstrates knowledge in content areas that is deficient in one of the following areas: focus on key topics, empirical evidence support, latest advances related to topics, relevance to expected outcomes/roles, or grounding of learning activities in real-world experience; this level of performance is supported by peer review evaluation ratings or no peer review was done.
- Unsatisfactory Performance: Demonstrates knowledge in content areas that is deficient in multiple areas and this level of performance is confirmed by peer evaluation.

Course Delivery
- Exemplary Performance: Initiates active engagement with students throughout each course; provides constructive, relevant, and frequent feedback to students within published time frames; maintains a positive learning environment.
- Professional Performance: Engages and maintains availability with students throughout each course; provides constructive and relevant feedback to students within published time frames; maintains a positive learning environment.
- Improvement Required: Complaints from two or more students in a course concerning instructor availability and/or engagement; feedback is not considered to be constructive, relevant, or timely; or the learning environment is not considered to be positive.
- Unsatisfactory Performance: Multiple complaints from students concerning instructor responsiveness/engagement; or significant concerns about the content or timeliness in receiving feedback.

Course Design
- Exemplary Performance: Looks for opportunities to make continual course improvements; and finds/uses resources to make changes and incorporate innovative learning strategies. Maintains an average score of 4.0 or above across all Course categories: Course Design, Clinical Course Design, and Overall Course Experience.
- Professional Performance: Recognizes the need for continual course improvements; and uses resources to make changes or to pilot new learning strategies. Maintains an average score of 3.5 to 3.9 across all Course categories: Course Design, Clinical Course Design, and Overall Course Experience.
- Improvement Required: Demonstrates limited awareness of the need for course improvements or does not make effective changes to improve learning outcomes. Maintains an average score of less than 3.5 in some courses for the Course categories: Course Design, Clinical Course Design, and Overall Course Experience.
- Unsatisfactory Performance: No evidence of continual course improvement and its opportunities.

Course Management
- Exemplary Performance: D2L course shells are consistently well-managed: ready for students on the first day of the course; updated and correct syllabus uploaded; assignment dates accurate; old or obsolete materials are deleted.
- Professional Performance: D2L course shells are generally well-managed with only an occasional error in one or more of the following areas: ready for students on the first day of the course; updated and correct syllabus uploaded; assignment dates accurate; old or obsolete materials are deleted.
- Improvement Required: D2L course shells are managed with several deficiencies noted in one or more of the following areas: ready for students on the first day of the course; updated and correct syllabus uploaded; assignment dates accurate; old or obsolete materials are deleted.
- Unsatisfactory Performance: Numerous deficiencies were noted related to D2L course shell management.

Classroom Management
- Exemplary Performance: Lectures/learning activities are current and well-organized for face-to-face and/or online courses.
- Professional Performance: Lectures/learning activities are prepared with minor issues reported in face-to-face and/or online courses.
- Improvement Required: Lectures/learning activities are prepared with several issues reported in face-to-face and/or online courses.
- Unsatisfactory Performance: Poorly prepared for lectures/learning activities in face-to-face and/or online courses.

Clinical/Practicum Management
- Exemplary Performance: Coordination of student, clinical faculty/adjunct, agency, and/or preceptor needs is anticipated and is reflected in well-developed clinical placement planning.
- Professional Performance: Coordination of student, clinical faculty/adjunct, agency, and/or preceptor needs is reflected in clinical placement planning.
- Improvement Required: Student, clinical faculty/adjunct, agency, and/or preceptor needs are not well-considered in clinical placement planning.
- Unsatisfactory Performance: Poor coordination of student, clinical faculty/adjunct, agency, and/or preceptor needs related to clinical placement planning.

Instructor Experience
- Exemplary Performance: Maintains an average score of 4.0 or above across all courses for Instructor categories: Instructor Role, Clinical Instructor Role, Instructor Feedback, and Overall Instructor Experience.
- Professional Performance: Maintains an average score of 3.5-3.9 across all courses for Instructor categories: Instructor Role, Clinical Instructor Role, Instructor Feedback, and Overall Instructor Experience.
- Improvement Required: Receives an average score of less than 3.5 in some courses for Instructor categories: Instructor Role, Clinical Instructor Role, Instructor Feedback, and Overall Instructor Experience.
- Unsatisfactory Performance: Receives average scores of less than 3.5 across all courses for Instructor categories: Instructor Role, Clinical Instructor Role, Instructor Feedback, and Overall Instructor Experience.

Advising
- Exemplary Performance: Develops, revises, and/or monitors an appropriate educational plan for advisees; and engages with each advisee at least once each semester to provide information and support for attainment of educational goals.
- Professional Performance: Develops, revises, and/or monitors an appropriate education plan for advisees; and engages with advisees as needed to provide information and support for attainment of educational goals.
- Improvement Required: Several issues with the process of developing, revising, and/or monitoring educational plans for advisees; or does not actively engage with advisees outside of the course instructor role.
- Unsatisfactory Performance: Student complaints received regarding the quality or accuracy of advising; or is not responsive when students request advising assistance.

Pre-licensure Advising
- Exemplary Performance: Coordinates advising sessions for designated level; and/or volunteers for 2 or more pre-nursing advising sessions.
- Professional Performance: Assists with advising sessions for designated level; and/or volunteers for 1 pre-nursing advising session.
- Improvement Required: Does not assist with advising sessions for the designated level.
- Unsatisfactory Performance: Student complaints received regarding the quality or accuracy of advising; or is not responsive when students request advising assistance.

Professional Development for Teaching
- Exemplary Performance: Participated in at least 3 SON, university, or outside professional development activities during the evaluation period.
- Professional Performance: Participated in 2 SON, university, or outside professional development activities during the evaluation period.
- Improvement Required: Participated in 1 SON, university, or outside professional development activity during the evaluation period.
- Unsatisfactory Performance: Did not participate in any SON, university, or outside professional development activities during the evaluation period.
Scholarship

*Scholarship of Discovery or Scientific Inquiry
- Exemplary Performance: 2 or more publications/presentations were completed during the evaluation period, and further scholarship is actively being planned.
- Professional Performance: One publication/presentation was completed during the evaluation period, and a scholarship agenda was established.
- Improvement Required: A research agenda was established but no publications or presentations were completed during this evaluation period.
- Unsatisfactory Performance: No research agenda has been established for this evaluation period.

*Scholarship of Practice (Examples)
- Exemplary Performance: Demonstrates two or more scholarship of practice activities completed during this evaluation period.
- Professional Performance: Demonstrates one scholarship of practice activity completed during this evaluation period.
- Improvement Required: No scholarship of practice activities were completed during this evaluation period, but at least one activity is planned for the next evaluation period.
- Unsatisfactory Performance: No scholarship of practice activities planned or completed.

*Scholarship of Teaching (Examples)
- Exemplary Performance: Demonstrates two or more scholarship of teaching activities completed during this evaluation period.
- Professional Performance: Demonstrates one scholarship of teaching activity completed during this evaluation period.
- Improvement Required: No scholarship of teaching activities were completed during this evaluation period, but at least one activity is planned for the next evaluation period.
- Unsatisfactory Performance: No scholarship of teaching activities planned or completed.

*Scholarship of Community Engagement (Examples)
- Exemplary Performance: Demonstrates two or more scholarship of community engagement activities completed during this evaluation period.
- Professional Performance: Demonstrates one scholarship of community engagement activity completed during this evaluation period.
- Improvement Required: No scholarship of community engagement activities were completed during this evaluation period, but at least one activity is planned for the next evaluation period.
- Unsatisfactory Performance: No community engagement activities were planned or completed.

Service

Academic Engagement
- Exemplary Performance: Attends all required academic events: Convocation, Light the Lamp, Graduation, SON Recognition Ceremony; and participates in two or more other university or SON events.
- Professional Performance: Attends all required academic events: Convocation, Light the Lamp, Graduation, SON Recognition Ceremony; and participates in one other university or SON event.
- Improvement Required: Has one or more unexcused absences from required academic events; or does not participate in other university or SON events.
- Unsatisfactory Performance: Frequently absent from required academic events.

Service to the School of Nursing (Examples)
- Exemplary Performance: Provides exceptional service through active participation on at least 2 SON committees, with a leadership position held in at least one; and voluntarily engages in at least 2 additional activities to promote the SON and student success.
- Professional Performance: Provides service through active participation on at least 2 SON committees; and engages in one additional activity to promote the SON and student success.
- Improvement Required: Participation in assigned committee activities is not consistent, or there is no engagement in other activities to promote the SON and student success.
- Unsatisfactory Performance: No service commitments to the School of Nursing.

Service to the University (Examples)
- Exemplary Performance: Provides exceptional service through active participation on at least 2 university committees, with a leadership position held on at least one; and voluntarily engages in at least 2 additional activities to promote the university and student success.
- Professional Performance: Provides service through active participation on at least one university committee; and engages in one additional activity to promote the university and student success.
- Improvement Required: Participation in assigned committee activities is not consistent, or there is no engagement in other activities to promote the university and student success.
- Unsatisfactory Performance: No service commitments to the University.

Service to the Profession and/or Community (Examples)
- Exemplary Performance: Provides exceptional service in 2 or more settings that demonstrate engagement through active membership, appointment, consultation, or other voluntary activities that benefit the profession or the community; holds a leadership position in one or more of these activities.
- Professional Performance: Provides exceptional service in 2 or more settings that demonstrate engagement through active membership, appointment, consultation, or other voluntary activities that benefit the profession or the community.
- Improvement Required: A plan for providing service to the profession and the community is established.
- Unsatisfactory Performance: No service plan or demonstrated engagement with the profession and/or the community.

Criteria Example Activities
Scholarship of Practice Example Activities
- securing external, competitive funding to support innovations in practice;
- manuscript submission in peer-reviewed venues to influence practice;
- dissemination of policy papers through peer-reviewed venues;
- program development for special populations;
- evaluation and dissemination of patient outcomes data.
Scholarship of Teaching Example Activities
- redesign or development of educational systems;
- development and implementation of evidence-based educational strategies to promote clinical decision-making;
- development of innovative teaching methods and strategies;
- use and evaluation of instructional technology;
- innovation in inter-professional education.
Scholarship of Community Engagement Example Activities
- development of training manuals;
- creation of policy briefs;
- professional and community presentations;
- creation of instructional videos;
- securing competitive funding to support community innovations;
- development of online curricula with an academic practice partner
- providing community-based professional lectures;
- co-managing a health agency that provides direct care services;
- conducting workshops or seminars;
- contributing to community service publications
Service to School of Nursing Example Activities
- serving on School of Nursing committees;
- mentoring of students outside of regular advising duties;
- career counseling and formal recruitment activities;
- assuming delegated administrative responsibilities;
- providing leadership in the development of special projects or grants that will benefit the School of Nursing;
- making financial contributions to the School;
- conducting institutional studies for the School;
- providing leadership in faculty, student, or School organizations or functions;
- supervision of independent study courses;
- supervision of honors projects;
- serving as a member of graduate project committees.
Service to University Example Activities
- serving on University committees;
- providing leadership in the development of special projects or grants that will benefit the University;
- making financial contributions to the University;
- conducting institutional studies for the University;
- supervision of Washburn transformational experience projects;
- teaching WU-101
Service to Profession Example Activities
- membership, leadership, or offices held on committees of professional associations and organizations at the local, state, regional, national, or international levels;
- appointments to editorial boards of refereed journals and offering continuing education programs for professional peers;
- serving on a peer review abstract committee for podium or poster presentations
Service to Community Example Activities
- appointments to professional or civic boards and providing professional consultation services to community groups, government, business, or industry
- offering educational programs to the community that derive from professional knowledge
About the Authors
Dr. Linda Merillat
Linda Merillat’s experience and skills represent a union between technology, education, and interaction design. During her career, she has played many different roles: programmer, systems analyst, business analyst, interaction designer, program manager, project manager, consultant, trainer, educator, instructional designer, researcher, author, and entrepreneur. The common thread running throughout has always been the challenge of how to successfully use and integrate the latest technology into an organization. She currently holds a faculty position in the School of Nursing at Washburn University with the role of Instructional Designer.

Dr. Jane Carpenter
Jane Carpenter is the Dean of the School of Nursing. She has been a faculty member at Washburn University for over 27 years.
Dr. Bobbe Mansfield
Bobbe Mansfield's professional career was inspired by registered nurses working in independent roles and has grown as opportunities for advanced practice nurses were formalized. From community-based care to family medicine and education, her trajectory has been shaped by new developments in healthcare that have opened doors for her and other advanced practice providers. Her most exciting nursing job is the one that she is doing right now.
Dr. Debbie Isaacson
Debbie Isaacson earned a BSN from Fort Hays State University, a M.S.Ed. in School Nursing/Health Education from the University of Kansas, and a DNP in Systems Leadership from Rush University, Chicago. Her nursing experience includes pediatrics, acute care, school nursing, and camp nursing. She is currently the PI for an HRSA NEPQR grant that focuses on preparing RNs as primary care nurses. Her DNP project focused on undergraduate nursing curriculum transformation.
Rethinking Assessment in Light of Generative AI
By David Swisher & Annie Els, Indiana Wesleyan University
The rise of generative AI has undoubtedly shaken the foundations of traditional assessment. Gone are the days of relying solely on essays and multiple-choice tests, as students now have access to powerful tools capable of churning out seemingly flawless content in mere seconds. This begs a critical question: Are we on the precipice of a dystopian future where assessments are rendered meaningless by AI-powered plagiarism? Or can we navigate this new landscape and harness generative AI’s potential to create a more insightful and equitable approach to measuring learning?
This article delves into the complex interplay between generative AI and assessment, exploring the ethical concerns surrounding cheating and plagiarism, the potential for a utopian future where AI enhances learning, and the crucial need to understand how generative AI “thinks” in order to devise effective assessment strategies. We will then dissect the limitations of generative AI, revealing its blind spots and vulnerabilities that educators can leverage to ensure true understanding prevails. Finally, we will equip you with practical tips and tricks to address generative AI-powered plagiarism and design assessments that foster critical thinking and creative problem-solving skills – the very competencies that AI cannot easily replicate. By understanding its challenges and opportunities, we can not only safeguard the integrity of assessment, but also harness generative AI’s potential to create a more meaningful and personalized learning experience for all.
So how easily can students plagiarize with ChatGPT? Let’s take a quick look at what we are facing in classrooms around the globe…

The Cheating & Plagiarism Dilemma
So as you can see in that video, it is extremely easy for a student to use generative AI to answer the questions in their assigned work, without actually doing any of the work themselves…and worse, because the AI is generating an entirely new answer each time it responds, there’s a fairly high likelihood that the student’s copy-and-pasted answer will not be detected as plagiarism by detection software.
So what are we to make of this?
Many blame the technology that makes it possible. But we think it’s time to rethink how we do assessment.
Often we like to think of plagiarism and cheating in binary terms, presuming that it’s clearly either plagiarism or it’s not. In reality, plagiarism and cheating are more like a continuum.
There are levels of computer-generated assistance that we would admit are perfectly OK, and then there are other instances and uses that would clearly cross the line. Often it depends on the nature of the assignment, the objectives of the class, and/or even the grade level of the student.
So take a look at these examples here. Where would you draw the line?
That first one at the top is what was demonstrated in Annie’s video example. And the bottom-most one is what we like to think our students have been doing. But what if it’s somewhere in the middle? What if the student consulted the Internet or used an AI for ideas, but then wrote and submitted their own work? What if they wrote the main ideas, but asked an AI to create the first draft? What if they wrote and planned the main ideas, got assistance from an AI in the thinking process, modified and refined the output themselves (so that it’s mostly their work, with assistance), and then manually edited their final submission?
So here are a few questions to ask with that continuum in mind:
- Which of these would you consider “cheating”?
- Which of these would you use as an adult? Or maybe it’s more accurate to say, “Which of these do we already use?” For example: How many of us use & recommend spell-check? Or Grammarly to improve our writing? Or reminders & prompts?
- Which of these is relevant to our students’ future? That’s the most important question!
Most would agree that using AI for ideation assistance, brainstorming, etc., is probably okay, but at what point does it cross the line?
One of the major themes that Matt Miller highlights in his AI for Educators book is that most of our fear-based reactions to generative AI right now are based on what he calls “today glasses”: we are looking at the issues and ramifications purely as they impact and concern us TODAY.
So, looking through TODAY glasses, we sense panic, dread, and fear…
- Some schools, districts, and teachers have decided to just ban ChatGPT altogether.
- Some instructors have opted to revert to paper and pencil.
- Some have ditched written work altogether.
But that’s NOT the world that our students will be working in!
And worse, each of these stopgap measures only treats the symptom, while simultaneously adding new complications. It might even feel like the end of education as we know it.

Banning ChatGPT won’t stop cheating. What about Bard? QuillBot? Claude? Magai? Or any of the dozen or more other generative AIs that are already out there using natural language? And even if you adopt a policy which names all of the ones that are currently available today, new ones will emerge within the next few months!
Switching to paper and pencil creates new complications:
- It can create more barriers to demonstrating understanding than it removes (including significant accessibility concerns).
- Resourceful students can still find ways to access the tools you’re trying to avoid.
- Paper and pencil slow down the feedback loop.
- It’s not very authentic to their future.
But again, this is looking at the issue through “TODAY” glasses. This is not the world our students will be operating in.
The reality before us is that the world our students will be operating in is one where AI is rampant and collaboration with AI and output from AI is normative.
I (David) study and research technology innovation, and some of my background and current research involves technological innovation throughout church history. And on a global and historical scale, looking at technology through the ages, I would fairly confidently put the rise of generative AI on par with the wheel and the printing press in terms of its paradigm-shifting impact. Some leaders are even referring to it as the “4th revolution” (after the Industrial Revolution).
Here is what that means for our graduates:
- Their employers will expect them to be competent with AI and have complete familiarity with working collaboratively with it.
- Their social media influencers will be actively using and promoting the use of generative AI.
- Their peers and colleagues will be using it extensively.
- Their transportation options and commerce interactions will be dramatically shaped by AI.
- The intellectual property policies they will operate under will be shifting to accommodate & redefine creativity with generative AI in mind.
None of this is far-fetched or unrealistic because it is already happening (everything is rapidly moving in that direction).
Dystopian or Utopian Future?
AI and generative AI are not the dystopian sci-fi artificial life-form that movies have propagated: the worst-case scenario that unfolds if we relinquish all governance and human input. We can rest assured that such scenarios are, at best, centuries away!

What I (David) find fascinating is that in all of the dystopian sci-fi literature, the “worst-case scenario” is always what gets featured when an AI goes rogue. And whenever there’s a news report about AI that didn’t work exactly right, it is always a really bad outcome they focus on.
Why is that? Fear sells.
But AIs behaving badly is usually what happens when humans are not actively engaged in the process. And with every technological innovation, there have always been naysayers who predict how that new tech will be the end of “whatever” as we know it, and yet it never seems to actually pan out like that.
We have actually been using AI quite well for decades, and most of us have been using it extensively for years without being aware of it. Here are just a few common scenarios:
- When auto-correct on mobile devices suggests alternate words or automatically “fixes” spellings in a word processing document
- When PowerPoint Designer suggests various layout options after you’ve added images and text
- When you’re driving and your Maps app suggests an alternate route or notifies you that there’s road construction or an accident ahead
- When you’re driving in the morning and launch your Maps app and your mobile device recognizes the road you’re on and the time of day, so automatically pulls up the route for your school or workplace
- When your fast food or restaurant’s app detects that you’re a quarter mile from their location and automatically puts your order in the oven…or notifies them you’re almost there
- When Amazon recommends products based on your order history or on what other people who’ve purchased the same item typically buy
- When the photocopier you place materials on detects the orientation and placement, so loads up all the available options (or makes cropping recommendations)
- When an online application automatically recognizes that you’re doing a typical operation and asks if you want it to complete it for you
- When tools like Grammarly suggest alternate wordings or improvements to style, tone, or grammar
- When your mobile device or computer notices a pattern (such as when you’re typically answering work emails) and asks if you’d like to make that your norm (such as your “available hours”)
All of these are examples of AI in everyday use that we’ve relied on for years…for so long that we don’t even think about the fact that it is AI.
So, what is generative AI? Or more specifically, what is ChatGPT? Let’s break the name down a bit to better understand it.
It’s using a familiar Chat interface, so that’s where the first part of the name comes from. Many of us are used to using chat tools on websites for customer support, solving issues, & identifying and triaging problems before they get handed over to a support rep. ChatGPT uses this familiar interface: “ ‘Chat’ refers to the conversational aspect of this AI model. It's designed to simulate a chat or conversation with a human user, thus the name ‘ChatGPT.’ ”
According to ChatGPT, here is what each of those terms means:
- Generative: In the context of AI, “generative” refers to the capability of a model to create new outputs from learned data. For instance, a generative AI model can create a new piece of text, image, or music that has never been seen or heard before, based on its understanding from the training data.
- Pre-trained: Pre-training refers to the process of training an AI model on a large benchmark dataset before it’s fine-tuned for a specific task. Pre-training allows a model to learn from a large amount of data, helping it understand general patterns and structures in that data.
- Transformer: Transformer models are a specific type of AI model used predominantly in the field of natural language processing (NLP). They are designed to handle sequential data, such as text, in a way that pays attention to the contextual relevance of each data point (like a word in a sentence). The term “transformer” comes from the model's ability to “transform” input data (like a sentence) into an output (like a translation of the sentence) while maintaining the complex dependencies between data points.
I should also mention that although ChatGPT is by far the best-known, there are dozens of other generative AIs, such as Otter.ai (for voice & captioning); Grammarly GO (for writing improvement); Gamma (for presentation graphics); Jasper, Bard, Claude, Magai, Quillbot, & more (for writing); DALL-E, MidJourney, & StableDiffusion (for visuals); HeyGen for video translation; and Eleven Labs (for voice cloning). These are just the ones I am most familiar with, and I use all but 2 of them. However, almost every week I learn about a new one I had not heard of previously. In the last couple of months of 2023, the race to beat OpenAI’s ChatGPT led to the introduction of several more potential game changers, including Microsoft’s Copilot, Amazon’s Olympus, and Google DeepMind’s Gemini.
How Generative AI "Thinks"
So how does generative AI “think”? Well, it works entirely by predictive analytics using a large dataset. According to Stephen Wolfram, “…what ChatGPT is always fundamentally trying to do is produce a ‘reasonable continuation’ of whatever text it’s got so far, where by ‘reasonable’ we mean ‘what one might expect someone to write after seeing what people have written on billions of webpages'." ¹
Believe it or not, it is literally adding just one word at a time!
So if we’ve got the phrase, “The sky is [blank]," then it’s going to be scanning the massive dataset of its Large Language Model to predict the next most likely word in our context. If the data it’s using shows stats like you see in the blue box here, then the word it appends here will undoubtedly be “blue.”
However, it is also scanning the related context of our input window and the LLM's context, and it is adjudicating its data through the lens of variable “tokens” (a toy sketch follows this list), so…
- If the context is a weather forecast, it will likely forego the obvious larger number from our GENERAL context and instead look at instances from the LLM when weather forecasting is the context, and in that case, it will likely generate “stormy” or perhaps “hazy” (depending on other tokens in the context window).
- If the context is painting or poetry, then it’s likely going to forego the obvious larger numbers from the GENERAL and WEATHER context and will look at its LLM for instances where poetry and artistic beauty are the context, and so in that case, it will likely generate “turquoise” or “tranquil.”
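As a toy illustration of that idea, the sketch below picks the next word after “The sky is…” from context-dependent frequencies. The probabilities are invented for the example; a real model scores every token in its vocabulary with a neural network over the full context window rather than a lookup table.

```python
# Toy illustration of "reasonable continuation": invented relative frequencies for the
# next word after "The sky is ...", keyed by context. Not real LLM statistics.
import random

NEXT_WORD = {
    "general":  {"blue": 0.62, "clear": 0.18, "dark": 0.12, "falling": 0.08},
    "forecast": {"stormy": 0.40, "hazy": 0.30, "overcast": 0.20, "clear": 0.10},
    "poetry":   {"turquoise": 0.35, "tranquil": 0.30, "endless": 0.20, "aflame": 0.15},
}

def next_word(context="general"):
    """Sample one continuation, weighted by its (context-dependent) probability."""
    dist = NEXT_WORD[context]
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

for ctx in ("general", "forecast", "poetry"):
    print(f"[{ctx}] The sky is {next_word(ctx)}")
```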
It is important to keep in mind that generative AI is only as reliable as its training data. For ChatGPT, for most of the first nine months it was available, the free version had a cutoff of September 2021. That meant that it did not have access to any information on events that occurred after that date.
For other AIs, it depends on the data it was trained on. Here is an example:
For one of my daughter’s school assignments, she was asked to write a creative story in the style of Laura Numeroff's book “If You Give a Mouse a Cookie.”
Well, my daughter plays flute, so she decided to go with “If you Give a Flamingo a Flute.” She wanted to illustrate it, so after verifying that the teacher didn’t have any originality requirements for the illustration, I excitedly told her how we could use a generative AI tool called DALL-E to create imaginative images like that.
So, first we told it we wanted to see, “a pink cartoon-like flamingo playing a metal flute.” Here are a few of the images it generated for us:
Notice that NONE of these images look like a flute! Two of them look more like clarinets, the one on the left looks more like a trumpet (and the bird’s doing a song & dance number with it), and that 3rd one? I have no idea what THAT’s supposed to be!

So what is going on here? Well, this is a good example of the challenge of limited examples in the dataset. If the data it’s trained on doesn’t have sufficient examples, it can’t generate artwork that mimics it. I’m guessing that whoever compiled the training data tagged most of the training examples as “instruments,” with only some specific examples of certain types of instruments. And the AI knows that a “flute” is a type of instrument, so it tried to render those. But without sufficient examples specifically of a flute in its training data, it couldn’t generate what it doesn’t know.
This is also why we often see examples of AI-driven bias regarding gender, ethnicity, and other variables. The output as a whole reflects the data it trained on, whether that was comprehensive and robust or fairly limited...or even lacking diversity.

What Generative AI Can't Do
Another big limitation of AI is that it doesn't know the subject or understand the data it’s spitting out (remember, it’s just using predictive analytics to determine what words come next in context). Because of this…
- A generative AI can’t THINK. It doesn't know the subject or understand the data it’s spitting out.
- Likewise, it has no EMPATHY. It has no idea why these words (or visual examples) it generates are significant or occur more frequently in the dataset, and it has no context or connection to explain that.
- Furthermore, it can’t do EMOTION. It may find examples of where humans have used emotional terms and phrases to communicate, but it won’t understand what that means or how it connects.
- Lastly, it has no UNDERSTANDING. It knows the terms and phrases that appeared most frequently in the dataset, and it scanned the context to be able to generate that, but it has no comprehension of what it actually means.
It’s simply making a prediction with each word choice, essentially "Other people who've answered this question (in the data it was trained on) have answered it ____, so the next most logical word is going to be ____."
I ran into a humorous example of this when I was testing how well ChatGPT could respond to questions that are typical of a credentialing interview.
- The questions requiring objective, unbiased knowledge? It nailed it.
- The questions that required some subjective contextual awareness coupled with knowledge? It did very well once I gave it a role and context to play.
- But then when I got to behavioral and situational questions, it answered as if it were a human describing instances and examples of where it had done that particular behavior or reflective practice. Huh? How?
It was finding multiple examples of where humans had done that and answered along those lines, so it followed suit. But it had no context for what that meant, and that became VERY evident when I asked it follow-up questions. It quickly reminded me that it’s an AI and has no personal history or experience to draw upon.
As we’ve seen and likely already experienced at our home institutions, generative AI makes it easy to cheat. But this isn’t the first time that technology has radically shifted the knowledge economy.
- Calculator - When calculators first came out, teachers worried that these machines would make students worse at math because they couldn’t do calculations by hand. As it turned out, calculators became a vital part of every secondary student’s school supplies because they saved students from having to do routine calculations, allowing teachers to focus on helping them grasp more complex concepts…and students progress far more quickly.
- Search Engines – When students first got access to computers with search engines, teachers worried that the ability to look up all of the answers would be catastrophic to learning. What actually happened is that we began teaching information literacy and began focusing on the application of those basic facts rather than on simply recalling them from memory.
- And let’s not forget Socrates…one of the forefathers of Western education. Socrates was all about rhetoric and effective communication, and he believed face-to-face communication was the only way one person could transmit knowledge to another. Consequently, he was adamantly opposed to writing, and believed that reducing human communication to writing would be a travesty from which civilization would not recover. We all know how that panned out.
So, What CAN We Do to Address Plagiarism via AI?
As we see it, there are a few basic options before us. We can:
1. Ignore it
2. Fight it / ban it
OR, we can
3. Embrace it by incorporating it into our teaching, and adjust our assessments to mitigate the use of generative AI to cheat.

In January 2023 (shortly after the holiday break when ChatGPT debuted), our Office of Academic Innovation at Indiana Wesleyan University formed a cross-disciplinary Professional Learning Community (PLC) comprised of IWU’s College of Adult & Professional Studies faculty, Learning eXperience Designers, and Faculty Enrichment staff for an 8-week exploration of the challenge of generative AI. Out of that PLC, they developed a report which reached 3 essential conclusions:
- We assume students already use Generative AI tools like Chat GPT to complete their coursework and that they will also proactively inform and help classmates to discover and use generative AI technology.
- We believe that it is impractical, and ultimately more harmful than helpful, to ban generative AI, like Chat GPT, in the learning environment. Instead, we hope to model and promote the practical and ethical use of these ground-breaking learning automation tools and draw attention to both opportunities and limitations.
- We support student use of generative AI technology, like Chat GPT, in the learning environment in most, but not all, cases when that use is properly documented and credited. However, we also emphasize the importance of originality and critical thinking in academic work and expect students to use AI responsibly and without plagiarizing. As of April 4, 2023, Turnitin will automatically detect AI-generated content in student submissions.
These are the same conclusions we would recommend.
This is where we want to challenge you to rethink assessment.

Consider the Learner
First, we want you to consider your learner:
- What do you believe about how students learn?
- How will that belief affect how you deliver content?
- Will you give students time and opportunities to practice?
- Is any of it meaningful for the students?
CAST developed a framework called Universal Design for Learning (UDL) as an approach to curriculum design that minimizes barriers and maximizes learning for all students. Neuroscience tells us that our brains have three broad networks: one for recognition, the WHAT of learning; one for skills & strategies, the HOW of learning; and one for caring and prioritizing, the WHY of learning.
So often, we worry about the “What” of learning (the content) and think that as long as we tell students what they are supposed to know and expect them to repeat it back to us via a quiz or paper, then they must understand it. UDL says, “Hang on a minute. Consider HOW you are delivering that content. Does it need a lecture, video, podcast, game, story, and so forth? And make sure to be clear with the students about WHY they need this information.”
Truly internalized learning comes through rehearsal…repetition…practice (not recall). This is why multimedia learning experts who do research on the effectiveness of various techniques make a distinction between retention and transfer of learning. Retention is simply being able to recall the information on a quiz or other assessment soon after it was presented. But transfer is when the student takes the insights and applies them to other contexts; THAT is when meaningful learning occurs! Retention is easily measured by summative assessments, but it is short-lived, and now we know that generative AI can easily help students by providing content to answer those kinds of assessments. But transfer learning involves the formative process.
Then use the What, How, and Why networks of the brain to consider the content.
It is all too tempting to just grab the pre-made textbook assessments, but chances are that the answers to those are already readily available on Course Hero and other online “study” sites (and thus likely available to a generative AI to include, too). And since we likely had to write a paper to show mastery of a subject, we often assume that’s the best form of assessment.
But there’s a process we use regularly as instructional designers called Backward Design. It is the process of beginning the course with the end in mind: First, identify the desired results of the class. Then determine what you consider acceptable evidence of having learned that. Then (and only then) do we plan the learning experiences and instruction. When we jump straight to quiz questions or written paper or presentation assessments, we miss out on so many opportunities to make learning meaningful.
So, we’ve talked about how to consider our learners and our content. Now, let’s consider the delivery.

Consider the Delivery
We believe that to avoid cheating with generative AI, good teaching practice includes meaningful experiences, formative assessments, and personalization.
- Meaningful experiences: Use multiple modes of instruction (audio, video, models, games) to represent the information in a variety of ways, even if it’s repetitive. This helps to reinforce the content and build neuropathways.
- Formative assessments: It's better to measure progress consistently rather than relying solely on big final projects or tests. While final projects and tests can be evaluated, they shouldn't be the main focus. Students can easily use generative AI to cheat on summative assessments, which may not even measure what you want them to. Consider a scenario where a student already knows the class content on the first day. Do we make that student attend the class for several weeks?
- Personalization: We aim to enhance the value of our content and make it more personalized by changing its delivery method. Moreover, we want to encourage students to apply the content to real-life scenarios and current situations. Additionally, we should provide students with the liberty to personalize how they showcase their understanding of the material using tools they are already familiar with.
Consider How the Brain Learns
Briefly, in order for the brain to learn, it forms pathways connecting parts of the brain; this is called a neural pathway. Initially, that pathway is really weak, but with practice over time that neural pathway is reinforced, becoming clearer and stronger. The learner moves from novice to master as she repeats the process of learning. That practice could simply be repeating the process the same way over and over again. However, repeating the pathway using multiple modes (modes being words, images, or objects) powerfully strengthens the neural pathway, as Cope states: “powerful learning involves these shifts in mode between one mode to another.” ²
This is a positive aspect of generative AI that educators should learn how to work with (rather than resist).
- Consider how AI can help you generate the WHAT of learning. We can use generative AI to develop lecture notes, rubrics, videos, presentation outlines, or even lesson plans. Let AI suggest ways to represent your content.
- Consider how AI can help you develop the HOW of learning. Ask it: “How can I teach my students about xyz?” You can use generative AI to develop case studies, branching scenarios, discussion questions, and quiz questions (see the sketch after this list).
- Consider how AI can help you motivate your students regarding the WHY of learning. Teach your students how to be prompt engineers, so that they can create new knowledge faster and more effectively in subject areas that interest and engage them.
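To make the HOW item above concrete, here is a minimal sketch of how an instructor might draft a case study and discussion questions with a generative AI. It assumes the OpenAI Python client (openai 1.x), an illustrative model name, and a made-up course topic; treat the output as a first draft to revise, not a finished artifact:

from openai import OpenAI

# Assumes the OPENAI_API_KEY environment variable is set.
client = OpenAI()

# Hypothetical prompt for the HOW of learning: ask for a draft case study and
# discussion questions that the instructor will revise before using.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice; use whatever your institution supports
    messages=[
        {"role": "system",
         "content": "You are an experienced instructor who designs case-based learning."},
        {"role": "user",
         "content": "Draft a short branching case study for an introductory statistics "
                    "course on interpreting misleading graphs, plus five discussion "
                    "questions that probe the WHY behind each decision point."},
    ],
)

print(response.choices[0].message.content)

The point is the iteration: you would keep prompting, pushing back, and refining the draft, much as we describe doing with ChatGPT later in this article.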
Certainly, you can use summative assessments, but with how readily knowledge is available, we have to shift our perspective to measuring the process of learning, not just the summation of learning.
We do want to clarify that we’re not advocating that you have the AI do all the work for you. We are both very strong advocates regarding accountability and credit for the work that’s done (we are educators, after all, and content creators). But generative AI makes for a VERY helpful collaboration partner. And one of the things our students need most as they move into an era that’s dominated by AI is discussion of ethical implications and modeling of good practices in the use of generative AI.
So, to give you a peek behind the curtain, we will share with you a bit about our own process as we developed this presentation.
Once we had a good idea of what we wanted to cover, we had a conversation with ChatGPT about the subject area and asked it to give us some specific examples. We iteratively dialogued with it over the course of several sessions, asking for applications, strategies, and specific examples. It generated quite a few ideas we had not considered. We also identified some ideas and applications of our own and asked it for feedback, insights, and suggestions on how to approach those in light of the topic. It was the same kind of conversation we might have in the teacher’s lounge with other faculty, or by phone talking with another educator, or while sitting down with a mentor, department head, or principal to discuss challenges and ideas for improving instruction.
Practical Tips & Tricks
So, what are some practical ways to apply all of this?
Think about what you are asking your students to demonstrate and then go a step further in personalizing the assignment. Generative AI can write a persuasive essay for our students, but there are several steps to writing a persuasive essay: smaller skills that students need to learn in order to write an effective one. As the instructor, break down those skills and walk the students through each skill. Even if students use generative AI to complete a specific skill, at the very least they are learning that smaller skills go into completing the whole essay.
Have students write personal memoirs, personal reflections, and journals about their experiences:
- Reflective responses to writing or assignment submissions: Why did you choose this topic or subject? What did this mean for you?
- Using multiple drafts, with revisions between them.
- After delivering content, require an “Exit ticket” from your students. Ask students to write a short paragraph about what stood out to them from class that day.
- Create interactive videos or books that prompt responses from students to keep them engaged.
- Adjust rubrics to measure at the developing level rather than mastery.
Personal reflection – Another way to rethink assessments so that generative AI is not very helpful is to add personal reflection elements…and to weight the grading rubric accordingly. Personal reflection adds a new dimension that makes it very difficult for an AI to generate plausible answers, but it also forces the student to consider the impact and application of the lesson concepts in their own life. When we do this, we not only help personalize learning and improve retention and transfer, but we make it challenging for a generative AI to produce a viable response (and fairly obvious when one is used).
Allow a revise & resubmit option – Because generative AIs create a brand-new answer each time through prediction, they have an inherent weakness: they cannot tell you (or re-create) what they did before. Like the seasoned cook who no longer needs to follow the recipe precisely but knows what needs to be included and adjusts by taste, they cannot give you the recipe or re-create the dish exactly, because next time it will be entirely different. So leverage this aspect in revising your assessment approaches. The more you use iterative processes (such as outline, first draft, second draft, final, etc.) or allow – or require – a revise and resubmit option, the harder it will be for a generative AI to assist, and the greater the likelihood that a student will have to do that work themselves.
Local context – Add personal, local, or specific nuances or limiters in assignments. For example, instead of just writing an executive summary for a hypothetical business, ask adult learners to do it for their specific workplace context. Or, instead of the wide-ranging scenario the textbook provides, assign them a specific community or business in a specific state with a specific type of challenge to base their response on. This will force them to do some research to apply the lesson concepts to that local context. Furthermore, local contexts like this will not be in the training data, so not only does this make it challenging to use a generative AI, it also gives your assessments a much more authentic “real-world” context.
Writing code – Instead of just asking the student to write code (which even ChatGPT can do amazingly well), have the student explain the programming choices they made and what each section or variable is having the computer do. Instead of just having them write code that does “X,” create a challenge or two that requires solution-finding, and have the assessment be about demonstrating how they addressed the challenge. If they used a generative AI to help them write the code, it will be very evident in this kind of scenario, because they will struggle to explain the decisions or choices made to solve the challenges; they didn’t make them.
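As a hypothetical illustration (not drawn from any particular course), here is what such a submission might look like, assuming a Python assignment with a made-up data-cleaning challenge. The student's comments explaining each choice, rather than the code itself, become the evidence of learning:

def dedupe_preserve_order(responses):
    """Return survey responses with duplicates removed, keeping first-seen order."""
    # Student explanation: I chose a set for membership checks instead of scanning a
    # list, because the survey file can have thousands of rows and a set lookup is fast.
    seen = set()
    unique = []
    for response in responses:
        # Student explanation: I normalize case and whitespace so "Yes" and " yes "
        # count as the same answer; this was my solution to the "messy data" challenge.
        key = response.strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(response)
    return unique

print(dedupe_preserve_order(["Yes", "yes ", "No", "Maybe", "no"]))  # -> ['Yes', 'No', 'Maybe']

A student who had an AI generate the function but cannot defend those annotated choices in a short follow-up conversation will reveal that quickly.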
Conclusion
The emergence of generative AI has presented educators with a formidable challenge, yet it also offers a unique opportunity to redefine the very purpose of assessment. By moving beyond the fear of plagiarism and embracing the transformative potential of AI, we can create assessments that go beyond rote memorization and truly measure the diverse ways in which knowledge and skills are acquired and applied.
This requires a shift in mindset. Instead of viewing generative AI as an enemy to be vanquished, we must see it as a tool to be understood and used strategically. This article's practical tips and strategies are just the first steps in this journey. As we continue to learn and adapt alongside generative AI, we can build assessments that safeguard against plagiarism and foster the deeper learning, critical thinking, and creativity that will define the future of education. Generative AI is not a harbinger of doom but a catalyst for reinvention. By harnessing its power and focusing on the irreplaceable strengths of human learning, we can create a future where generative AI augments, rather than replaces, the art of teaching and learning.
Therefore, let us not succumb to fear but embrace the challenge. Let us be the architects of a future where generative AI empowers learning, not hinders it, and where assessments become not just tools for measurement, but stepping stones on a lifelong journey of exploration and discovery.
About the Authors
Dr. Linda Merillat
Linda Merillat’s experience and skills represent a union between technology, education, and interaction design. During her career, she has played many different roles: programmer, systems analyst, business analyst, interaction designer, program manager, project manager, consultant, trainer, educator, instructional designer, researcher, author, and entrepreneur. The common thread running throughout has always been the challenge of how to successfully use and integrate the latest technology into an organization. She currently holds a faculty position in the School of Nursing at Washburn University with the role of Instructional Designer.
Dr. Jane Carpenter
Jane Carpenter is the Dean of the School of Nursing. She has been a faculty member at Washburn University for over 27 years.
Annie Els
Annie Els, M.Ed., is a Learning eXperience Designer at Indiana Wesleyan University. She serves as Lead ID for the innovative Self-Paced General Education courses. Annie loves to explore new tools of the trade to create excellent learning experiences for students and teach other instructional designers about her findings. With a background as an elementary school teacher, her work in gamification in Higher Education comes naturally. Annie’s fascination with the neuroscience of learning drives her to create engaging and motivating learning experiences. She was part of the team that won SIDLIT’s 2022 Outstanding Online Course award. Additionally, she teamed up with Mike Jones and David Swisher to publish “Modalities and Experiences: Unlocking the Gamified Metaversity” in C2C's January 2023 Digital Magazine.
Annie has an M.Ed. from Indiana Wesleyan University and a B.A. from Azusa Pacific University. She also holds a Professional Educator License in the State of Indiana and is passionate about diversity, equity, inclusion, & belonging as well as Universal Design for Learning. Prior to her work at IWU, she spent 16 years designing curriculum in ministry and elementary education settings.
Dr. David J. Swisher
Dr. David J. Swisher is a Senior Learning eXperience Designer in the Office of Academic Innovation at Indiana Wesleyan University, and he is a former chair of Colleague 2 Colleague who coordinated the successful transition to virtual for the 2020 SIDLIT conference. He currently serves as the Lead Instructional Designer for the Cybersecurity and Ministry Leadership programs and is actively engaged in leadership on best practices with OER, copyright, VR/metaverse, and assessment. He was one of the campus’ earliest leaders to proactively engage in working with generative AI, recommending the formation of an exploratory PLC in Jan. 2023. Swisher regularly uses about half a dozen different generative AIs himself, is active in several Facebook groups focused on innovative technology (including VR/metaverse and AI applications), and was one of three featured webinar presenters for the Fall faculty professional development emphasis on “Generative AI and Adult Education,” presenting a session on “Academic Integrity in an Era of Generative AI” and hosting the dialogue.
Swisher has a D.Min. in Semiotics & Future Studies from George Fox University and an M.S. in Instructional Design & Technology from Emporia State University. Prior to his work at IWU, he was the Director of Learning Management Technologies at Tabor College, and before that served as the classroom technology coordinator and LMS instance manager at Kansas State Polytechnic.
OneRoom: A Partnership with RISE, HP/Poly, and Zoom
An Interview with Courtnie Mullen, OneRoom
Imagine being a student in a rural school district and wanting to take a dual credit course in chemistry or a class in astronomy, but those classes are not available. Enter OneRoom, a partnership with Rural Illinois Shared Education (RISE), HP/Poly, and Zoom that provides synchronous virtual classrooms for instructors and students who may be many miles apart.
By Dennis Peirce, West-Central Independent Living Solutions (WILS)
“Last year, two schools 40 miles apart shared a dual credit course in English," said Courtnie Mullen, Vice-President and Director of Classroom Development at OneRoom. "The students became friends. They would meet at Denny’s at the town in between for breakfast and attend each other’s homecoming.” According to the OneRoom website:
“OneRoom brings students closer together by connecting them together with audio and video solutions from Poly. Based on practical understanding of instruction, our classroom solutions have been developed by educators for educators. Since 2016, we have been providing interactive and highly collaborative video conferencing solutions to encourage teacher and course sharing for schools. Our solutions are designed to resolve the problems with rurality, teacher shortages, and the inability of funding to keep pace with rising costs across the public education landscape that has put pressure on school districts to reduce and eliminate classes.”
Mullen started her career as a Family and Consumer Sciences teacher at a high school in rural Illinois. She got involved with RISE when her husband received a job transfer and teaching positions were not available in November when they moved. “First, I taught a couple courses, then I assisted with managing courses," said Mullen. "The retirement of the tech coordinator gave me room to grow into my current position.”
Mullen pointed out how the teacher shortage is affecting schools across the nation. One subject with a lot of openings is math. “There are so many applications for OneRoom," Mullen said. "Alternative schools could collaborate with a local school district for classroom instruction. Virtual field trips allow schools across the world to enhance the content being taught.”
Mullen explained that the creator of RISE started it because he grew up in rural Illinois and felt like he was behind students from Chicago:
“RISE [was created] so students could have dual credit or advanced classes such as classes with authors or colleges. There’s so much opportunity when the technology makes a class so similar to an in-person class. A student who may stay in a rural town gets to experience a museum or a class from another country and it gets them to think more about what’s out there. Advanced students can get the content they need. Opening up access and the opportunity to work with schools across the state equalizes educational opportunity for teachers and students.” - Courtnie Mullen, Vice-President at RISE partner OneRoom.
Mullen also discussed how OneRoom and RISE are growing beyond the borders of Illinois. This spring, RISE will provide dual credit courses to the Finger Lakes region of New York. “RISE also works with the Cherokee Nation in Oklahoma," she said. "The classes focus on their language and are shared across the reservation.” Mullen added that RISE is also interested in partnering with organizations in Missouri and Kansas. “It’s cool to see the growth and everyone relying on each other," said Mullen. "Everyone has talents they bring to the table and it takes them working together to be successful. Educators do it to give students every possible opportunity.”
Courtnie Mullen will be the presenter for #C2CLive in February, so be sure to register with the C2C Community to learn more about how OneRoom and RISE may benefit your community.
About the Authors
Courtnie Mullen
Courtnie Mullen is the Vice President & Director of Classroom Development at OneRoom Inc. With her prior experience as a teacher, Courtnie leads the RISE [Rural Instruction & Shared Education] network. She specializes in working with districts in the RISE network to create and implement their own distance learning programs while partnering with other RISE districts to share educational resources. Courtnie currently resides in Illinois and enjoys spending time outside with her husband and daughter.
Dennis Peirce, M.S.
Dennis Peirce is an I.T. Systems Coordinator for West-Central Independent Living Solutions, a consumer-driven, non-residential, 501(c)(3) nonprofit resource center that serves people with disabilities and their families at all stages of life. He provides technical support and manages projects such as software research and implementation while also designing educational resources for WILS staff, consumers, and attendants. Prior to this, he served 18 years at the University of Central Missouri as systems coordinator, field technician, and help desk specialist. He earned his master’s degree in Educational Technology from UCM and has served as an adjunct instructor. Dennis currently serves on C2C’s Communications & Community committee and as co-editor of the C2C Digital Magazine, and he is a former chair of the SIDLIT Steering Committee. He has attended SIDLIT since 2015 and has been on the steering committee for the past four years.