Designing Conflict-Based Communicative Tasks in TCFL with ChatGPT: A Process Analysis
This article analyzes the use of ChatGPT to design conflict-based communicative tasks for a university-level Chinese oral expression course, examining teacher-AI interaction patterns and pedagogical impacts.
1. Introduction
The integration of Artificial Intelligence (AI), particularly generative models like ChatGPT, into language pedagogy represents a significant shift. This paper investigates a specific application: using ChatGPT to assist in designing conflict-based communicative tasks for a university-level Oral Expression course in Teaching Chinese as a Foreign Language (TCFL). The research adopts a descriptive approach to analyze the teacher-AI interaction during the curriculum development process and assess its impact on the final teaching program.
2. Research Context & Methodology
The study is situated within the practical development of a TCFL oral expression syllabus, where the instructor sought to create tasks that stimulate genuine interaction.
2.1 Context: Course & Task Development
The core challenge was designing tasks that move beyond scripted dialogue to foster spontaneous, meaningful oral interaction. The pedagogical choice was to base tasks on conflict scenarios (e.g., disagreements, negotiations, problem-solving), which inherently require learners to employ persuasive language, manage turns, and express opinions—key components of oral interaction competence.
2.2 Methodology: Descriptive Research & Corpus
The research follows a descriptive methodology (Olivier de Sardan, 2008; Catroux, 2018). The primary corpus consists of the log of interactions between the teacher-researcher and ChatGPT during the task-design phase. This log is analyzed to identify salient features of the interaction and trace how AI suggestions were integrated, modified, or rejected in the final curriculum.
Research Questions:
How is ChatGPT utilized in the process of designing conflict-based communicative tasks?
To what extent does its use influence the final teaching program?
3. Theoretical Framework
3.1 Communicative Tasks & Conflict Theory
A communicative task is defined as an activity where meaning is primary, there is a communicative goal, and success is evaluated in terms of outcome. Integrating conflict theory provides a robust framework for task design. Conflict scenarios create an "information gap" and a "reason to communicate," driving learners to use language strategically to achieve a goal (e.g., resolve a dispute, win an argument, find a compromise), thereby developing pragmatic and interactive competence.
3.2 Task Design Criteria
The design of these tasks considers several criteria: authenticity of the conflict scenario, cognitive and linguistic demand appropriate for the learner level, clear roles and goals for participants, and a defined outcome to evaluate task success. ChatGPT was leveraged to brainstorm, refine, and evaluate scenarios against these criteria.
4. Interaction Analysis with ChatGPT
4.1 Process & Manifestation of Use
The interaction was iterative and dialogic. The teacher initiated the process with specific prompts (e.g., "Generate a conflict scenario for intermediate Chinese learners about planning a group trip"). ChatGPT responded with narrative outlines, potential dialogue starters, and role descriptions. The teacher then refined prompts based on responses, asking for variations, simplifications, or cultural adjustments. The AI acted as a collaborative brainstorming partner and a rapid prototype generator.
4.2 Impact on Final Teaching Program
The analysis suggests ChatGPT's impact was multifaceted: 1) Efficiency: Accelerated the ideation and drafting phase. 2) Diversity: Increased the variety and creativity of conflict scenarios proposed. 3) Scaffolding: Provided a starting point that the expert teacher could critically evaluate and adapt. The final program reflected a synthesis of AI-generated ideas and expert pedagogical judgment, rather than a direct adoption of AI output.
Conceptual Impact Model:
Input (Teacher Prompt) → AI Processing (Scenario Generation) → Human Evaluation & Adaptation → Integrated Output (Final Task). The critical filter of teacher expertise ensured pedagogical soundness and cultural appropriateness.
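The pipeline above can be sketched in code to make the human-in-the-loop filter explicit. This is a minimal illustrative sketch, not an implementation from the study: `ai_generate`, `teacher_evaluate`, and the `Task` dataclass are hypothetical stand-ins for a model call and the teacher's expert judgment.

```python
from dataclasses import dataclass

@dataclass
class Task:
    scenario: str
    approved: bool

def ai_generate(prompt: str) -> str:
    # Stand-in for a ChatGPT call; here it simply echoes a draft scenario.
    return f"Draft scenario for: {prompt}"

def teacher_evaluate(draft: str) -> Task:
    # The critical human filter: pedagogical soundness and cultural
    # appropriateness. A real check applies a rubric; this sketch only
    # accepts non-empty drafts.
    return Task(scenario=draft, approved=bool(draft.strip()))

def design_task(prompt: str) -> Task:
    # Input (prompt) -> AI processing -> human evaluation -> integrated output.
    return teacher_evaluate(ai_generate(prompt))

task = design_task("group trip disagreement, intermediate level")
```

The point of the structure is that AI output never reaches the final program without passing through `teacher_evaluate`.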
5. Core Analyst Insight: A Four-Step Deconstruction
5.1 Core Insight
This paper isn't about AI replacing teachers; it's about AI augmenting the creative and cognitive load of expert curriculum design. The real story is the emergence of a human-in-the-loop, prompt-engineering-driven pedagogy. The value isn't in ChatGPT's raw output, but in the teacher's ability to craft prompts that steer it toward pedagogically valid constructs—like conflict-based tasks—and then critically curate the results. This mirrors findings in creative industries where AI tools like DALL-E or GPT-3 are most powerful when guided by a strong human creative director (Ammanabrolu et al., 2021, on narrative generation).
5.2 Logical Flow
The paper's logic is sound but reveals a tension: it champions a descriptive approach to show "what happened," yet the underlying promise is prescriptive—implying this is a replicable model. The flow moves from context (AI in education) to a specific problem (task design), then details the method (analyzing chat logs), and finally assesses impact. However, it stops short of providing a formalized framework for the prompt-engineering process itself, which is the most transferable knowledge product.
5.3 Strengths & Flaws
Strengths: The focus on a high-value, cognitively demanding teaching task (design, not just content delivery) is astute. The choice of conflict-based tasks is excellent, as it tests the AI's ability to handle nuance and human dynamics. The descriptive methodology is appropriate for this early-stage exploration.
Flaws: The analysis is inherently post-hoc and subjective, based on a single teacher's interaction log. There's no control group (design without AI) or measurable learning outcome data to substantiate the claim of positive "impact." The discussion of "impacts" remains speculative regarding actual student learning gains. It risks conflating design-process efficiency with pedagogical effectiveness.
5.4 Actionable Insights
For educators and institutions: 1) Invest in Prompt Literacy: Training for teachers should shift from "how to use AI" to "how to craft pedagogical prompts." 2) Develop Evaluation Rubrics: Create shared criteria for evaluating AI-generated educational content, focusing on pedagogical principles, not just linguistic correctness. 3) Pilot with a Clear Hypothesis: Don't just describe the process; design A/B tests comparing AI-assisted and traditional design methods on both efficiency metrics and, crucially, subsequent student engagement/performance. 4) Document the Prompt Chain: The real intellectual property is the sequence of prompts that yielded the best results. This should be systematically archived and shared.
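Point 4 (documenting the prompt chain) can be operationalized with very little tooling. The sketch below, with assumed file format and field names, appends each prompt/response pair plus a teacher note to a JSON-lines archive so that productive prompt sequences can be retrieved and shared.

```python
import json
from datetime import datetime, timezone

def log_exchange(path: str, prompt: str, response: str, note: str = "") -> None:
    # Append one teacher-AI exchange to a JSON-lines archive.
    # Record fields ("time", "prompt", "response", "teacher_note")
    # are illustrative assumptions, not a standard schema.
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "teacher_note": note,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

One file per task-design session keeps the full refinement history inspectable after the fact.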
6. Technical Details & Analytical Framework
6.1 Interaction Modeling & Prompt Engineering
The human-AI collaboration can be modeled as a series of iterative cycles. A key technical aspect is the evolution of the prompt. The initial prompt $P_0$ (e.g., "a conflict scenario") is refined based on output $O_n$ and pedagogical goals $G$. This can be conceptualized as: $P_{n+1} = f(P_n, O_n, G, C)$, where $C$ represents constraints (language level, cultural context). The function $f$ is the teacher's prompt-engineering skill. The quality of the final task $T_{final}$ is a function of the initial AI output and the number and quality of refinement iterations: $T_{final} \approx \sum_{i=1}^{n} (\alpha \cdot O_i + \beta \cdot H_i)$, where $\alpha$ is the AI weight, $\beta$ is the human expert weight, and $H_i$ is the human input at iteration $i$.
6.2 Analysis Framework: A Non-Code Case Example
Scenario: Designing a task for B1-level learners on "negotiating a work schedule." Analytical Framework Applied:
1. Prompt Deconstruction: Teacher's prompt: "Generate a dialogue where two colleagues disagree on weekend shift schedules. Include expressions of preference, suggestion, and mild disagreement. Use B1-level vocabulary." This prompt specifies context, conflict, linguistic functions, and level.
2. Output Evaluation Matrix: The AI's output is evaluated against:
- Pedagogical Fit: Are the target language functions present?
- Linguistic Appropriateness: Is the vocabulary/syntax aligned with B1?
- Scenario Authenticity: Is the conflict believable?
- Task Potential: Can this be turned into a role-play with clear goals?
3. Iteration Tracking: The teacher notes that the AI's first draft used overly formal disagreement phrases. The next prompt refines: "...Use more common colloquial phrases for disagreement like '我觉得可能不太行' (I think that might not work) instead of '我坚决反对' (I firmly oppose)." This demonstrates the framework in action.
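The output-evaluation matrix above can be expressed as a small scoring function. The four criteria mirror the framework; the keyword and length checks are crude illustrative proxies for a teacher's judgment, not real linguistic analysis.

```python
# Each criterion maps to a boolean check on the draft text (lowercased).
# The specific checks are assumptions for this B1 "work schedule" example.
CRITERIA = {
    "pedagogical_fit": lambda t: "disagree" in t or "suggest" in t,
    "linguistic_level": lambda t: len(t.split()) < 200,  # rough proxy for B1 brevity
    "authenticity": lambda t: "schedule" in t,
    "task_potential": lambda t: "colleague" in t,
}

def evaluate_output(text: str) -> dict:
    # Apply every criterion and return a named score sheet.
    return {name: check(text.lower()) for name, check in CRITERIA.items()}

draft = "Two colleagues disagree about the weekend shift schedule."
scores = evaluate_output(draft)
```

A draft failing any criterion feeds directly into the next refinement prompt, as in the iteration-tracking step above.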
7. Future Applications & Research Directions
The trajectory points beyond task design. Future applications include: 1) Dynamic Difficulty Adjustment: AI could generate multiple versions of a conflict scenario in real-time based on learner performance. 2) Personalized Conflict Scenarios: Using learner interests (sourced from surveys or previous interactions) to seed scenario generation. 3) AI as a Role-play Simulator: Learners practicing negotiation with an AI character, which adapts its strategy based on the learner's language proficiency and persuasiveness, a concept adjacent to work on AI for interactive storytelling (Riedl & Bulitko, 2012).
Critical Research Directions: Longitudinal studies measuring learning outcomes; development of standardized "pedagogical prompt libraries"; exploration of multimodal task design (integrating AI-generated images/videos into scenarios); and serious investigation of ethical issues—ensuring AI does not reinforce stereotypes in its generated conflict narratives.
8. References
Catroux, M. (2018). Introduction à la recherche en didactique des langues. Éditions Maison des Langues.
Olivier de Sardan, J.-P. (2008). La rigueur du qualitatif. Les contraintes empiriques de l'interprétation socio-anthropologique. Academia-Bruylant.
Ammanabrolu, P., et al. (2021). How to Motivate Your Dragon: Teaching Goal-Driven Agents to Speak and Act in Fantasy Worlds. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics.
Riedl, M. O., & Bulitko, V. (2012). Interactive Narrative: An Intelligent Systems Approach. AI Magazine, 34(1), 67-77.
OpenAI. (2022). ChatGPT: Optimizing Language Models for Dialogue. Retrieved from https://openai.com/blog/chatgpt
Ellis, R. (2003). Task-based Language Learning and Teaching. Oxford University Press.