Towards Automated Testing for Simple Programming Exercises

Abstract

Encoding new programming exercises for first-year students on automated feedback and grading platforms can require substantial effort. Such exercises are usually simple but still require defining several test cases to ensure the functional correctness of the submitted solutions. This paper describes our initial effort to leverage automated test case generation for simple programming exercises. We rely on grey-box fuzzing and random combinations of method calls to test the students’ solutions and compare their execution to the results produced by a reference implementation. We implemented our approach in a prototype, called SimPyTest, openly available on GitHub. We discuss its usage and possible future extensions.
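The core idea, comparing a student submission against an instructor-provided reference implementation on automatically generated inputs, can be illustrated with a minimal differential-testing sketch. The code below is a hypothetical illustration, not SimPyTest’s actual API: it uses plain random input generation where the paper additionally relies on grey-box fuzzing and random sequences of method calls, and the exercise, function names, and parameters are all assumptions.

import random

# Hypothetical reference implementation provided by the instructor
# (the exercise and all names here are illustrative, not from SimPyTest).
def reference_solution(numbers):
    """Return the largest element of a non-empty list."""
    return max(numbers)

# A (possibly buggy) student submission under test.
def student_solution(numbers):
    largest = 0  # Bug: wrong result when every element is negative.
    for n in numbers:
        if n > largest:
            largest = n
    return largest

def differential_test(student, reference, trials=1000, seed=42):
    """Run both implementations on random inputs and report the first mismatch."""
    rng = random.Random(seed)
    for _ in range(trials):
        # Generate a random non-empty list of integers as the test input.
        data = [rng.randint(-100, 100) for _ in range(rng.randint(1, 20))]
        # Each implementation gets its own copy, so in-place mutations
        # by one cannot influence the other.
        expected = reference(list(data))
        actual = student(list(data))
        if actual != expected:
            return f"Mismatch on {data}: expected {expected}, got {actual}"
    return "No mismatch found"

print(differential_test(student_solution, reference_solution))

With the seeded buggy submission above, the random inputs eventually include an all-negative list, and the mismatch against the reference is reported as feedback; a coverage-guided grey-box fuzzer would steer input generation further, toward branches of the student code not yet exercised.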

Publication
Proceedings of the 4th International Workshop on Education through Advanced Software Engineering and Artificial Intelligence (EASEAI ’22)
Xavier Devroey
Assistant Professor

My research interests include search-based and model-based software testing, test suite augmentation, DevOps, and variability-intensive systems engineering.
