Large Language Models (LLMs) that generate text, such as OpenAI’s ChatGPT, can support authentic assessment by facilitating opportunities for professional roleplaying. ChatGPT responds to user prompts in a conversational structure, and it builds on previous content as the dialogue continues. In an asynchronous online environment, this immediate feedback can simulate a live conversation. Therefore, ChatGPT can replicate synchronous roleplaying experiences, which learners can then analyze, revise, and repurpose into alternative forms of media.

Roleplaying allows learners to participate in and experiment with real-life scenarios (Bonwell & Eison, 1991). ChatGPT can present these scenarios as a self-contained debate, replicating what Bonwell and Eison (1991) describe as “one of the more innovative and personally risky methods of role playing,” in which an instructor debates both sides of an argument simultaneously (p. 48). ChatGPT can also expand on each side of a debate scenario to show why individuals might advocate for an alternative perspective. Learners can request varying or contradictory answers to an issue, which facilitates more “innovative” considerations of these scenarios; however, ChatGPT “does not have the ability to make connections, or to critically think about a topic, or [to] reflect” (Brown, 2023). By critically reflecting on these scenarios themselves, learners can consider real-world applications beyond the generated text.

As the generated conversations continue, learners can also prompt revisions to modify syntactical structure or to strengthen an argument. These revision opportunities encourage a growth mindset (Dweck, 2010). As learners prompt the LLM to modify its responses, they are intimately involved in the revision process and able to hone their individual skills. Instructors can ask learners to share these conversations through screenshots, screen captures, or links produced by the ShareGPT extension for Chrome browsers, which gives instructors the opportunity to comment on every step of the revision process. Simultaneously, this process allows learners to experiment with prompt engineering, which is emerging as a marketable skill (Mok, 2023). Learners can practice their prompt generation in real time, after which instructors can provide personalized feedback as a means of authentic assessment.

Authentic assessments ask learners to produce work that replicates the performance of essential tasks within a discipline (Wiggins, 1989). These assessments “teach students and teachers alike the kind of work that most matters”; they “are enabling and forward-looking, not just reflective of prior teaching” (Wiggins, 1990). LLMs are trained to predict text based on their source material; thus, they can only generate words that reflect that prior material. Instructors, however, can leverage generative text to allow learners to customize assessments toward individualized interests and long-term goals.

Link to Example artifact(s)

After engaging with ChatGPT, learners have the opportunity to:

  • Identify topics conducive to roleplaying professional conversations
  • Explain how prompts change a generated conversation
  • Apply evidence to support or refute an argument
  • Analyze the strengths and weaknesses of a generated argument
  • Evaluate the effectiveness of a generated text
  • Compose a unique prompt to elicit a more effective response

Note: The copyright implications of inputting intellectual property into LLMs remain unclear. Rather than asking learners to input their own writing into the software, consider prioritizing the software as a working tool that facilitates exercises rather than one that produces written work.

Examples feature ChatGPT, but strategies would apply to any tool that incorporates generative artificial intelligence.

Example 1

ChatGPT is asked to “Provide a script role-playing a student/teacher interaction. The student is taking an introductory writing course at an American university. The teacher is describing the capabilities of Large Language Models that generate text.”

  • In this first example, the LLM generates two sides of a script, writing roles for both the student and teacher.
  • Learners could use a similar technique in any discipline where roleplaying could simulate professional conversations or skills; a scripted sketch of the same prompt appears after this list.
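
The examples in this entry use the ChatGPT web interface. For instructors who prefer to reproduce or archive an exchange like Example 1 programmatically, a minimal sketch using OpenAI’s Python SDK might look like the following; the SDK usage and model name are assumptions for illustration and are not part of the original example.

    # Minimal sketch (assumption): sending the Example 1 prompt through OpenAI's
    # Python SDK instead of the ChatGPT web interface. Requires the `openai`
    # package (v1.x) and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = (
        "Provide a script role-playing a student/teacher interaction. "
        "The student is taking an introductory writing course at an American university. "
        "The teacher is describing the capabilities of Large Language Models that generate text."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name; substitute any available chat model
        messages=[{"role": "user", "content": prompt}],
    )

    # The generated role-play script, which learners could then analyze or revise.
    print(response.choices[0].message.content)

A scripted version also makes it straightforward to save each prompt and response, which can serve the same sharing and feedback purposes described above.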

Example 2

ChatGPT is asked to revise the scenario to consider an alternative perspective; this time, “The teacher is against the use of LLMs in college writing courses.”

  • Learners could be asked to explain the changes, to compare and contrast the LLM’s attitudes toward any aspect of the subject, or to analyze the strengths and weaknesses of each position.

Example 3

ChatGPT is asked to write “the prompt for a 3-5 page essay for an introductory composition course, which asks the student to explore whether LLMs should be used in a composition classroom.”

  • Learners could be asked to analyze the prompt and suggest potential opportunities for revision.

Example 4

ChatGPT is asked to revise the prompt for concision.

  • This last example models revision strategies, which could be customized to include additional modifications.
  • Learners could be asked to analyze the changes in syntax; instructors could then comment on these revision requests to encourage a growth mindset and to help learners refine their own prompt engineering skills.

Link to scholarly reference(s)

Atlas, S., & Pelletier, K. (2023, March 22). Prompt engineering for AI-enhanced teaching and learning. EDUCAUSE Member QuickTalk.

Bonwell, C. C., & Eison, J. A. (1991). Active learning: Creating excitement in the classroom. Washington, DC: The George Washington University.

Brown, H. M. (2023). ChatGPT – Instances of faculty now recognizing student use – Inquiry. EDUCAUSE Instructional Design Community Group.

Center for Innovative Teaching and Learning, Northern Illinois University. (n.d.). Role playing.

Dweck, C. (2010). Even geniuses work hard. Educational Leadership, 66(1), 16-20.

Faculty Center for Teaching and Learning. University of Central Florida. (n.d.). Artificial Intelligence tools.

FeedbackFruits. (2023). Transform writing assignments in the age of AI: 5 best strategies.

FitzGibbon, J., & Riedel, N. (2023, March 22). Embracing ChatGPT and AI with best authentic assessment strategies. FeedbackFruits Webinar.

Mok, A. (2023, March 1). “Prompt engineering” is one of the hottest jobs in generative AI. Here’s how it works. Business Insider.

OpenAI. (n.d.). Introducing ChatGPT.

Wiggins, G. (1989). A true test: Toward more authentic and equitable assessment. Phi Delta Kappan, 70(9), 703‒713.

Wiggins, G. (1990). The case for authentic assessment. Practical Assessment, Research, and Evaluation, 2(2).

Citation

McNulty, R., Greenwood, E., & Fazzalari, R. (2023). Use Large Language Models to Simulate Professional Roleplaying as Opportunity for Authentic Assessment. In deNoyelles, A., Bauer, S., & Wyatt, S. (Eds.), Teaching Online Pedagogical Repository. Orlando, FL: University of Central Florida Center for Distributed Learning.