Encouraging Grading: Per Aspera Ad A-Stars
Niemelä, Pia; Hukkanen, Jenni; Nurminen, Mikko; Huhtamäki, Jukka (2024)
This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:tuni-202405316536
Description
Peer reviewed
Abstract
The surge in computer science student enrollment in the Data Structures and Algorithms course necessitates flexible teaching strategies that accommodate both struggling and proficient learners. This study examines the shift from manual grading to auto-graded and peer-reviewed assessments, investigating student preferences and their impact on growth and improvement. Drawing on data from the Plussa LMS and GitLab, the auto-graders allow iterative submissions and quick feedback. Initially met with skepticism, peer review gained acceptance, offering valuable exercise for reviewers and alternative solutions for reviewees. Auto-grading became the favored approach due to its swift feedback, which facilitates iterative improvement. Furthermore, students expressed a preference for a substantial number of submission attempts, with 50 being the most frequently suggested count. Manual grading, while valued for its personal feedback, was considered impractical given the course scale. Auto-graders such as unit tests, integration tests, and perftests were well received, with perftests and visualizations aligning with the learning goal of writing efficient code. In conclusion, the methods used, such as auto-grading and peer review, cater to diverse proficiency levels. These approaches encourage ongoing refinement, deepen engagement with challenging subjects, and foster a growth mindset.
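The abstract mentions unit tests, integration tests, and perftests as auto-grader types. As a rough illustration only, a minimal sketch of a unit-test check combined with a perftest-style time budget is shown below; the sort_list function, input sizes, and time limit are hypothetical examples, not taken from the course's actual Plussa or GitLab graders.

    import random
    import time
    import unittest

    def sort_list(values):
        # Placeholder standing in for a hypothetical student submission under test.
        return sorted(values)

    class SortGrader(unittest.TestCase):
        def test_correctness(self):
            # Unit-test-style check: the result must be the correctly sorted list.
            data = [5, 3, 1, 4, 2]
            self.assertEqual(sort_list(data), [1, 2, 3, 4, 5])

        def test_performance(self):
            # Perftest-style check: a large input must finish within a loose time budget.
            data = random.sample(range(1_000_000), 100_000)
            start = time.perf_counter()
            sort_list(data)
            self.assertLess(time.perf_counter() - start, 2.0)

    if __name__ == "__main__":
        unittest.main()

In an auto-graded workflow of the kind described, such checks would run on every submission, which is what makes iterative resubmission with quick feedback practical at scale.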
Collections
- TUNICRIS-julkaisut [20683]