Our mission is to advance the field of computing education through innovative research, collaboration, and dissemination of knowledge. Here, you will find a wealth of resources, including our latest research findings, educational tools, and information about ongoing projects. We are dedicated to exploring effective teaching methodologies, understanding student learning processes, and developing technologies that enhance the educational experience. Whether you are a researcher, educator, student, or simply interested in the field of computing education, we invite you to explore our work, engage with our community, and contribute to the collective effort of shaping the future of computing education.
This website collects our computing education research. You will find the papers, artifacts, datasets, and scripts we publicly share for replication and extension. If you have questions, please email us at ealomar@stevens.edu or mwmvse@rit.edu.
Feel free to explore our other research projects as well.
Published at the International Conference on Software Engineering (ICSE'2023)
To increase awareness of potential coding issues that violate coding standards, this paper reflects on our experience teaching the use of static analysis and evaluates its effectiveness in helping students improve software quality. We discuss the results of a classroom experiment, conducted over three academic semesters, involving 65 submissions whose code review activity covered 690 rules using PMD.
In particular, we addressed the following research questions:
RQ1. What problems are typically perceived by students as true positives versus false positives?
RQ2. What category of problems typically takes longer to be fixed?
RQ3. What is the perceived usefulness of PMD?
The dataset utilized in this study is available for download here.
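To give a flavor of the coding-standard violations a static analyzer like PMD reports in student code, here is a minimal sketch. The rule names in the comments (SimplifyBooleanReturns, UnusedLocalVariable) are real PMD rules, but the class and method below are hypothetical examples written for illustration, not code from the study.

```java
// Hypothetical example of code before and after addressing two PMD warnings.
public class GradeChecker {

    // Before (flags PMD's SimplifyBooleanReturns):
    //   if (score >= 60) { return true; } else { return false; }
    // After: return the boolean expression directly.
    public static boolean isPassing(int score) {
        return score >= 60;
    }

    // Before (flags PMD's UnusedLocalVariable):
    //   int unused = 0;  // declared but never read
    // After: the dead variable is simply removed.
    public static int curve(int score) {
        return Math.min(score + 5, 100);
    }
}
```

Fixes like these preserve behavior while removing the reported violations, which is the kind of review-and-repair activity the student submissions carried out.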
Published at the Proceedings of the 55th ACM Technical Symposium on Computer Science Education (SIGCSE'2024)
Refactoring is the practice of improving software quality without altering its external behavior. Developers intuitively refactor their code for multiple purposes, such as improving program comprehension, reducing code complexity, dealing with technical debt, and removing code smells. However, no prior studies have exposed students to the process of antipattern detection and refactoring correction or provided them with a toolset to practice it. To increase awareness of refactoring concepts, this paper reflects on our experience teaching refactoring and how it helps students become more aware of bad programming practices and the importance of correcting them via refactoring. We discuss the results of a classroom experiment that involved carrying out various refactoring activities to remove antipatterns using JDeodorant, an Eclipse plugin that supports antipattern detection and refactoring. The results of a qualitative analysis with 171 students show that students tend to appreciate the idea of learning refactoring and are satisfied with various aspects of the JDeodorant plugin's operation. Through this experiment, refactoring can become a vital part of the computing curriculum. We envision our findings enabling educators to support students with refactoring tools tuned towards safer and more trustworthy refactoring.
The dataset utilized in this study is available for download here.
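As an illustration of behavior-preserving refactoring of the kind JDeodorant suggests, the sketch below applies an Extract Method refactoring to a hypothetical Long Method antipattern. The class and method names are invented for this example and do not come from the study materials.

```java
// Hypothetical Extract Method refactoring: the total computation was
// originally inlined inside print(), mixing calculation with formatting
// (a Long Method smell). Extracting it changes no external behavior.
public class InvoicePrinter {

    // Extracted helper: the summation logic now has a single, named home.
    public static double total(double[] prices) {
        double sum = 0.0;
        for (double p : prices) {
            sum += p;
        }
        return sum;
    }

    // After refactoring, print() delegates to total() and only formats.
    public static String print(double[] prices) {
        return "Total: " + total(prices);
    }
}
```

The refactored version produces exactly the same output as the original inlined code, which is the defining property of a refactoring.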
Published in the Proceedings of the 36th Conference on Software Engineering Education and Training (CSEE&T'2024)
Large Language Models (LLMs), like ChatGPT, have gained widespread popularity and usage in various software engineering tasks, including programming, testing, code review, and program comprehension. However, their effectiveness in improving software quality in the classroom remains uncertain. In this paper, we share our experience teaching the use of ChatGPT to cultivate a bug-fixing culture and leverage LLMs to improve software quality in educational settings. We discuss the results of an experiment involving 102 submissions whose code review activity covered 1,230 rules. Our quantitative and qualitative analysis reveals that a subset of PMD quality issues influences students' acceptance or rejection of the reported issues, and that design-related categories take longer to resolve. While students acknowledge the potential of using ChatGPT during code review, some skepticism persists. Through this experiment, code review can become a vital part of the computing curriculum. We envision our findings enabling educators to support students with code review strategies that raise awareness of LLMs and promote software quality in education.
The dataset utilized in this study is available for download here.
Published in ACM Transactions on Computing Education (TOCE'2025)
Large Language Models (LLMs), such as ChatGPT, have become widely popular for various software engineering tasks, including programming, testing, code review, and program comprehension. However, their impact on improving software quality in educational settings remains uncertain. This paper explores our experience teaching the use of Programming Mistake Detector (PMD) to foster a culture of bug fixing and leverage LLMs to improve software quality in the classroom. We discuss the results of an experiment involving 155 submissions whose code review activity covered 1,658 rules. Our quantitative and qualitative analysis reveals that a subset of PMD quality issues influences students' acceptance or rejection of the reported issues, and that design-related categories take longer to resolve. Although students acknowledge the potential of using ChatGPT during code review, some skepticism persists. Further, constructing prompts for ChatGPT with clarity, complexity, and context nurtures vital learning outcomes, such as enhanced critical thinking; among the 1,658 issues analyzed, 93% of students indicated that ChatGPT did not identify any additional issues beyond those detected by PMD. Conversations between students and ChatGPT fall into five categories, including ChatGPT's use of affirmation phrases such as 'certainly' regarding bug-fixing decisions and apology phrases such as 'apologize' when resolving challenges. Through this experiment, we demonstrate that code review can become an integral part of the computing curriculum. We envision our findings enabling educators to support students with effective code review strategies, increasing awareness of LLMs, and promoting software quality in education.
The dataset utilized in this study is available for download here.