Intelligent Tutoring Systems for Ill-Defined Domains: Assessment and Feedback in Ill-Defined Domains
Workshop at ITS 2008



Intelligent tutoring systems have achieved reproducible successes and wide acceptance in well-defined domains such as physics, chemistry, and mathematics. Many of the most commonly taught educational tasks, however, are not well-defined but ill-defined. These include domains such as law, design, history, and medical diagnosis. As interest in ill-defined domains has expanded within and beyond the ITS community, researchers have devoted increasing attention to them and are developing approaches adapted to the special challenges of teaching and learning they pose. The prior workshops in Taiwan (ITS 2006) and Marina del Rey, California (AIED 2007) demonstrated the high level of interest in ill-defined domains and the quality of ITS work addressing them.

Developing ITSs for ill-defined domains may require a fundamental rethinking of the predominant ITS approaches. Well-defined domains, by definition, allow for a clear distinction between right and wrong answers. This assumption underlies most, if not all, existing ITSs. One of the central advantages of classical ITSs over human tutors is the potential for on-line feedback and assessment. Rather than waiting until a task is completed, or even long after, students receive guidance as they work, enabling them to focus on useful paths and to detect immediately the step that started them down a wrong path.

Ill-defined domains typically lack clear distinctions between "right" and "wrong" answers. Instead, there are often competing reasonable answers. Frequently, there is no way to classify a step as necessarily incorrect, or to claim that it will lead the user irrevocably astray as compared to any other. This makes the process of assessing students' progress and giving them reasonable advice difficult, if not impossible, by classical means.

This workshop will provide a forum for presenting work on all aspects of designing ITSs for ill-defined domains. In order to build on the successes of the prior workshops in Taiwan and Marina del Rey, this workshop will focus particularly on how to provide feedback in ITSs designed for ill-defined domains and how to assess such systems.

We invite work at all stages of development, including particularly innovative approaches in their early phases. Full research papers (up to 5,000 words) and demonstrations (up to 2,500 words, describing an application or other work to be demonstrated live at the workshop) are welcome for submission.


The workshop has been scheduled for Monday the 23rd of June from 9am to 4pm. Paper presentations are 20 minutes each, with 10 minutes for questions and discussion. An advance copy of the proceedings may be downloaded here. The frontmatter may be obtained separately here.

9:00 - 9:15  Introduction: Kevin D. Ashley (pdf)
9:15 - 10:15  Opening Session

  1. Two Approaches for Providing Adaptive Support for Discussion in an Ill-Defined Domain Erin Walker, Amy Ogan, Vincent Aleven, Chris Jones (pdf)

  2. Interactive Narrative and Intelligent Tutoring for Ethics Domain Rania Hodhod and Daniel Kudenko (pdf)

10:15 - 10:35  Coffee Break
10:35 - 12:15  Pre-lunch Session

  1. A Selection Strategy to Improve Cloze Question Quality Juan Pino, Michael Heilman, and Maxine Eskenazi (pdf)

  2. Generating and Evaluating Object-Oriented Designs for Instructors and Novice Students Sally Moritz and Glenn Blank (pdf)

  3. General Discussion

12:15 - 1:30  Lunch
1:30 - 2:30  Post-lunch Session

  1. A Sequential Pattern Mining Algorithm for Extracting Partial Problem Spaces from Logged User Interactions Philippe Fournier-Viger, Roger Nkambou and Engelbert Mephu Nguifo (pdf)

  2. What Do Argument Diagrams Tell Us About Students' Aptitude Or Experience? A Statistical Analysis In An Ill-Defined Domain Collin Lynch, Niels Pinkwart, Kevin Ashley and Vincent Aleven (pdf)

2:30 - 2:50  Tea Break
2:50 - 4:00  Closing Session

  1. Using Expert Decision Maps to Promote Reflection and Self-Assessment in Medical Case-Based Instruction Geneviève Gauthier, Laura Naismith, Susanne P. Lajoie, and Jeffrey Wiseman (pdf)

  2. Closing Discussion


Paper topics of special interest are:

  1. Assessment: Development of student and tutor assessment strategies for ill-defined domains. These may include, for example, studies of related-problem transfer and qualitative assessments.

  2. Feedback: Identification of feedback and guidance strategies for ill-defined domains. These may include, for example, Socratic (question-based) methods or related-problem transfer.

Other topics of interest include:

  1. Model Development: Production of formal or informal domain models and their use in guidance.

  2. Teaching Strategies: Development of teaching strategies for such domains, and the interaction of those strategies with the students.

  3. Search and Inference Strategies: Definition of suitable search strategies and the communication of those strategies to the students.

  4. Exploratory Systems: Development of intelligent tutoring systems for open-ended domains. These may include, for example, user-driven exploration models and constructivist approaches.

  5. Collaboration: The use of peer collaboration within ill-defined domains for guidance or other purposes.

  6. Representation: Free-form text is often the most appropriate representation for problems and answers in ill-defined domains; AIED work in this area needs tools and techniques for accommodating text.

The topics can be approached from different perspectives: theoretical, systems engineering, application oriented, case study, system evaluation, etc.


Important Dates:
  • Submissions Due: April 25th 2008

  • Author Notification: May 16th 2008

  • Final Camera-Ready Copy: May 23rd 2008

All submissions should be formatted using the LNCS formatting guidelines for the main conference (see here and here) and should not exceed 5,000 words. Submissions should be sent to Collin Lynch.


Organizing Committee:
  • Vincent Aleven, Carnegie Mellon University, USA
  • Kevin Ashley, University of Pittsburgh, USA
  • Collin Lynch, University of Pittsburgh, USA
  • Niels Pinkwart, Clausthal University of Technology, Germany


Program Committee:
  • Vincent Aleven, Carnegie Mellon University, USA
  • Jerry Andriessen, University of Utrecht, The Netherlands
  • Kevin Ashley, University of Pittsburgh, USA
  • Paul Brna, University of Glasgow, UK
  • Jill Burstein, Educational Testing Service, USA
  • Rebecca Crowley, University of Pittsburgh, USA
  • Andreas Harrer, University of Duisburg-Essen, Germany
  • H. Chad Lane, Institute for Creative Technologies, USC, USA
  • Susanne Lajoie, McGill University, Canada
  • Collin Lynch, University of Pittsburgh, USA
  • Bruce McLaren, German Research Center for Artificial Intelligence, Germany
  • Antoinette Muntjewerff, University of Amsterdam, The Netherlands
  • Katsumi Nitta, Tokyo Institute of Technology, Japan
  • Niels Pinkwart, Clausthal University of Technology, Germany
  • Beverly Woolf, University of Massachusetts, USA

Register for the 20th International Conference on Intelligent Tutoring Systems.