Note: In the following sections, WPI students, who are co-authors, have their names in italics.


Here is my DBLP page, and my Google Scholar page is here. This page lists all my publications, including many data mining papers. I also maintain a separate website highlighting the 20+ randomized controlled experiments I have conducted; it is available here.


Most Cited Papers (and Why)

I am most well-known for ASSISTments. The following paper provides a general overview of ASSISTments:  

The efficacy of ASSISTments, when paired with good content, has been evaluated by academics on many occasions and found to raise student learning rates compared to traditional paper-and-pencil methods. While other studies have also shown ASSISTments to be effective, the most rigorous study to date was conducted by SRI International (I am not an author):


I am also known for running the only platform for open science in education: I host ASSISTments as a shared scientific instrument.
ASSISTments has a long history, having started back in 2005.
  • CP11 Razzaq, L., Feng, M., Nuzzo-Jones, G., Heffernan, N.T., Koedinger, K. R., Junker, B., Ritter, S., Knight, A., Aniszczyk, C., Choksey, S., Livak, T., Mercado, E., Turner, T.E., Upalekar, R., Walonoski, J.A., Macasek, M.A. & Rasmussen, K.P. (2005). The Assistment project: Blending assessment and assisting. In C.K. Looi, G. McCalla, B. Bredeweg, & J. Breuker (Eds.), Proceedings of the 12th Artificial Intelligence in Education, Amsterdam: IOS Press. pp. 555-562.

I am known for showing that ASSISTments is a good assessor of student knowledge: it can predict state test scores better than traditional methods because it uses measures such as how many hints and attempts a student needs.
ASSISTments has a demonstrated capacity to adapt to users as discussed in the following paper:
  • CP25 Razzaq, L. & Heffernan, N. (2009). To Tutor or Not to Tutor: That is the Question. In Dimitrova, Mizoguchi, du Boulay & Graesser (Eds.) Proceedings of the 2009 Artificial Intelligence in Education Conference. IOS Press. pp. 457-464. Honorable Mention for Best Paper First Authored by a Student.

If you are interested in the methodological issues related to estimating treatment effects, you might want to start with the following papers. We released a set of 22 experiments to test which interventions increased student learning the most. An active interest of mine is detecting heterogeneous treatment effects to learn which kids should be given which type of feedback.


I am known for my work using Bayes Nets in Student Modeling.

I am known for my role in helping create the Cognitive Tutor Authoring Tools (CTAT) before deciding to build my own improved version, which became ASSISTments.

I am known for work on detecting “gaming” and why students do it.

Journal Articles


D1  Heffernan, N. T. (2001). Intelligent tutoring systems have forgotten the tutor: Adding a cognitive model of human tutors. Dissertation, Computer Science Department, School of Computer Science, Carnegie Mellon University. Technical Report CMU-CS-01-127. (Pieces published as J1, CP8, CP5, CP4, and CP3)


Book Chapters  


Conference Papers

Strictly Reviewed Conferences (Acceptance rates in the 30% range or below)

Note: Unlike most other disciplines, where journal papers are more prestigious than conference papers, in Computer Science conference publications are often more difficult to get accepted and more prestigious than most journal publications. These conference proceedings are stringently peer-reviewed, with at least three reviewers, and acceptance rates usually in the 30% to 39% range. (The Educational Data Mining conference in 2010 was unusual in accepting 42% of papers, but that is non-standard.) I have started labeling acceptance rates on new papers to make this easier to see.

Short Papers

The Educational Data Mining Conference created a new category of paper, called a short paper, in addition to the 12-page “Full Papers” and the two-page “Poster” category. In 2011, the acceptance rate for short papers was 46%, considerably higher than the 33% acceptance rate for “Full Papers.”

Published 3-4 Page Papers (aka “Posters”) in Prestigious Conferences (Acceptance rates of 50-60%)

Workshop Paper


Unknown Acceptance Rate

  • U9 McGuire, P., Logue, M., Mason, C., Tu, S., Heffernan, C., Heffernan, N., Ostrow, K. & Li, Y. (2016, accepted). To See or Not To See: Putting Image-Based Feedback in Question. Interactive lecture at the International Society for Technology in Education Conference. Denver, CO. 
  • U8 Williams, J. J., Krause, M., Paritosh, P., Whitehill, J., Reich, J., Kim, J., Mitros, P., Heffernan, N., & Keegan, B. C. (2015). Connecting Collaborative & Crowd Work with Online Education. Proceedings of the 18th ACM Conference Companion on Computer Supported Cooperative Work & Social Computing (pp. 313-318). 
  • U7 Williams, J. J., Li, N., Kim, J., Whitehill, J., Maldonado, S., Pechenizkiy, M., Chu, L., & Heffernan, N. (2014). MOOClets: A Framework for Improving Online Education through Experimental Comparison and Personalization of Modules (Working Paper No. 2523265). Retrieved from the Social Science Research Network:
  • U6 Williams, J. J., Maldonado, S., Williams, B. A., Rutherford-Quach, S., & Heffernan, N. (2015). How can digital online educational resources be used to bridge experimental research and practical applications? Embedding In Vivo Experiments in “MOOClets”. Paper presented at the Spring 2015 Conference of the Society for Research on Educational Effectiveness, Washington, D.C.
  • U4 Kelly, K., Heffernan, N., Heffernan, C., Goldman, S., Pellegrino, J., & Soffer-Goldstein, D. (2014). Improving student learning in math through web-based homework review. In Liljedahl, P., Nicol, C., Oesterle, S., & Allan, D. (Eds.). (2014). Proceedings of the Joint Meeting of PME 38 and PME-NA 36 (Vol. 3). Vancouver, Canada: PME. pp. 417-424.
  • U3 Pellegrino, J., Goldman, S., Soffer-Goldstein, D., Stoelinga, T., Heffernan, N., & Heffernan, C. (2014). Technology Enabled Assessment: Adapting to the Needs of Students and Teachers. American Educational Research Association (AERA 2014) Conference.
  • U2 Soffer-Goldstein, D., Das, V., Pellegrino, J., Goldman, S., Heffernan, N., Heffernan, C., & Dietz, K. (2014). Improving Long-term Retention of Mathematical Knowledge through Automatic Reassessment and Relearning. American Educational Research Association (AERA 2014) Conference. Division C - Learning and Instruction / Section 1c: Mathematics. (Peer reviewed, but acceptance rate unknown.) Nominated for best poster of the session.
  • U1 Heffernan, N., Heffernan, C., Dietz, K., Soffer, D., Pellegrino, J. W., Goldman, S. R. & Dailey, M. (2012). Cognitively-Based Instructional Design Principles: A Technology for Testing their Applicability via Within-Classroom Randomized Experiments. AERA 2012.