Too busy for blogging

The Fall semester was once again a hectic run through the courses, without many breaks: three courses, two full-time and two part-time teachers helping, and nearly 900 students. No wonder…

Regarding my latest posts and this situation, I did manage to solve Advent of Code up to part 1 of day 10. Part 2 is still a work in progress, and I do have some ideas about it. But I don’t know if or when I will return to it, if ever.

Much of the teaching work in the Fall was not conducted in prepared teaching sessions, since for some reason many students nowadays do not seem to want or need to come to lectures or exercise sessions. We teachers sat through them all, though! Well, we did cancel some sessions at the very end since nobody was coming, but anyway.

We even had to arrange three classroom sessions for the Fall, since last year (2024, with fully online teaching) students demanded at least some face-to-face teaching for first-year students. They even went to the Dean of the Faculty to make sure this would happen.

Almost nobody came. Fewer than 20 students (out of ~900) from those three courses attended the classroom sessions, if I remember correctly. Not a very effective use of teaching resources, IMO. Hopefully those who came found them valuable!

Well, we have video lectures and other support (a web-based FAQ tool in Moodle, email), and a major part of the teaching happens via these. Maybe we should be happy that the course materials and the support we do provide are enough for those students who decide to continue in the course. Some students enrolled in my DSA (Data Structures and Algorithms) course do not continue: they never provide the URL to their project repository, or they quit after the first one or two programming assignments.

In the DSA FAQ tool, I published 47 new FAQs in addition to the 33 FAQs from previous course implementations. Many of the issues addressed in the FAQs could have been avoided by carefully reading the course materials, watching the lectures, and following the instructions in the programming assignments. Anyway, maybe the FAQs also contained explanations and points that provided additional help to the students.

I also moved the DSA course exams to a controlled exam environment, because AI slop and cheating are on the rise. There are some small hiccups I still need to fix, but overall it seems to work.

To make preparing exams a bit more efficient, I implemented several console-based tools (in Swift) that generate random exam questions, some with image attachments. I have tools to generate questions about the binary search algorithm and about hash tables with linear probing, as well as tools that generate questions testing whether students understand how nodes are added to binary search trees and how breadth-first and depth-first search traverse graphs.
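The actual tools are in Swift, but the idea behind one of the generators can be sketched in a few lines. The sketch below is my own illustration (the class name, question wording, and parameters are hypothetical, not the real tool's): seed a random number generator, build a sorted array, and record which indices binary search probes for a target, so both the question and its model answer come out of the same run.

```java
import java.util.Arrays;
import java.util.Random;

// Hypothetical sketch of an exam question generator: a seeded RNG builds
// a sorted array and a target, then we trace which indices binary search
// probes. Same seed -> same question, so a per-student seed gives each
// student their own unique variant.
public class BinarySearchQuestionGen {

    public static String generate(long seed) {
        Random rng = new Random(seed);
        int n = 8 + rng.nextInt(5);            // array length 8..12
        int[] data = new int[n];
        int value = rng.nextInt(10);
        for (int i = 0; i < n; i++) {          // strictly increasing values
            value += 1 + rng.nextInt(9);
            data[i] = value;
        }
        int target = data[rng.nextInt(n)];     // pick a target that exists

        // Trace the probe sequence of a standard binary search.
        StringBuilder probes = new StringBuilder();
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            if (probes.length() > 0) probes.append(", ");
            probes.append(mid);
            if (data[mid] == target) break;
            if (data[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return "Given the sorted array " + Arrays.toString(data)
             + ", which indices does binary search probe when searching for "
             + target + "? Answer: " + probes;
    }

    public static void main(String[] args) {
        System.out.println(generate(42L));
    }
}
```

The same pattern extends to the other question types: the generator owns both the random instance and the algorithm being asked about, so the reference answer is always consistent with the generated data.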

Whenever I like, I can then generate new questions, making sure every student gets their own unique set of questions in their exam.

Today I refactored part of my code analysis tool. The tool, and the refactored feature, helps me check whether the 300-something students in DSA have used solutions in their code that must be avoided or are forbidden (e.g. calling Java’s hashCode() when they should have implemented their own, or calculating the hash from the wrong things), and to make sure they have used elements that must or should be used (e.g. a function calling itself when recursive code is required). Teachers could do this manually, but since we do not have teachers in the numbers required for manual work, automation comes to help.
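To make the two kinds of checks concrete, here is a deliberately naive sketch of them. The real tool parses the code properly; a regex scan like this would miss plenty of cases (nested braces, comments, overloads) and is only meant to illustrate the "forbidden call" and "required construct" ideas. All names here are my own, not the tool's.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Naive illustration of the two check categories; the production tool
// uses a real parser instead of regular expressions.
public class NaiveCodeChecks {

    // Forbidden: calling the built-in hashCode() anywhere in the submission
    // instead of implementing one's own hash function.
    public static boolean callsBuiltInHashCode(String source) {
        return Pattern.compile("\\.hashCode\\s*\\(").matcher(source).find();
    }

    // Required: a method whose body mentions its own name as a call,
    // i.e. a very rough recursion detector (single-level bodies only).
    public static boolean hasRecursiveMethod(String source) {
        Matcher m = Pattern
            .compile("\\b(\\w+)\\s*\\([^)]*\\)\\s*\\{([^}]*)\\}")
            .matcher(source);
        while (m.find()) {
            String name = m.group(1);
            String body = m.group(2);
            if (body.matches("(?s).*\\b" + name + "\\s*\\(.*")) {
                return true;
            }
        }
        return false;
    }
}
```

A parser-based version gets this right by walking the AST: the call expression's resolved target tells you whether `hashCode` is the inherited one, and whether a call inside a method body refers to the enclosing method.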

The analysis code used ANTLR to parse the students’ Java code, searching for things that are not OK and checking for things that must be present. While it worked nicely in my Swift app, ANTLR was very slow in debug builds. It was somewhat faster when the app was built in the release configuration, but still slow enough to be annoying.

So instead, the new implementation just runs a Java console app in a subprocess, using the Swift Subprocess library. I implemented the same code analysis as a Java console app, using com.github.javaparser for the source code parsing, and gave that app to the students.
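My driver is a Swift app using the Subprocess library, but the run-a-console-app-and-capture-stdout pattern is the same in any language. Here is the analogous idea in Java with the standard ProcessBuilder; the analyzer invocation in the comment is hypothetical, and this is a sketch of the pattern rather than my actual Swift code.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Sketch: launch a child process, capture its stdout as a string, and
// fail loudly on a non-zero exit code. This mirrors what the Swift app
// does with the Subprocess library.
public class RunAnalyzer {

    public static String runAndCapture(String... command) {
        try {
            ProcessBuilder pb = new ProcessBuilder(command);
            pb.redirectErrorStream(false);      // keep stderr out of the JSON
            Process p = pb.start();
            String out;
            try (InputStream in = p.getInputStream()) {
                out = new String(in.readAllBytes(), StandardCharsets.UTF_8);
            }
            int exit = p.waitFor();
            if (exit != 0) {
                throw new IOException("child exited with code " + exit);
            }
            return out;
        } catch (IOException | InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Hypothetical invocation of the student-facing analyzer:
        // String json = runAndCapture("java", "-jar", "analyzer.jar", "json", "src/");
        System.out.println(runAndCapture("echo", "hello"));
    }
}
```

Reading stdout to completion before calling waitFor() matters: if the child fills the pipe buffer while the parent is blocked waiting, both can deadlock.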

Using the tool, students can check before the deadlines that they have not done anything forbidden, avoiding a failing grade after the last deadline: they can use the tool to find and fix any obviously wrong solutions during the course. Preferably before the deadlines, since they will face sanctions for using solutions that are not accepted.

I needed to modify the Java app a bit so that my Swift app can launch it and use the output to record results in the Swift app’s database:

  • Modified the command line parameters to include an output format, either “text” or “json”. The default is text, so students see more understandable output from the tool. When my Swift app explicitly passes the “json” parameter to the Java app, the app produces JSON-formatted output that my Swift tool reads from the subprocess’ stdout.
  • Made sure the Java app’s stdout contains only the JSON describing the issues found in the code when the “json” format is selected, so the Swift app can rely on the received output being valid JSON.
  • Fixed some issues in the analysis output that led to invalid JSON, such as unescaped quotes in the output, or line endings like \r which messed up the output not just in JSON but also in text format.
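The escaping fix in the last point boils down to never emitting raw analysis text inside a JSON string. A minimal sketch of what that looks like is below; the field names ("file", "line", "message") are illustrative, not the tool's actual schema.

```java
// Sketch of safe JSON string emission: analysis messages can contain
// quotes, backslashes, or \r line endings, all of which previously
// produced invalid JSON when written out verbatim.
public class JsonOutput {

    static String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
                case '"':  sb.append("\\\""); break;
                case '\\': sb.append("\\\\"); break;
                case '\r': break;                    // drop stray carriage returns
                case '\n': sb.append("\\n"); break;
                case '\t': sb.append("\\t"); break;
                default:
                    if (c < 0x20) sb.append(String.format("\\u%04x", (int) c));
                    else sb.append(c);
            }
        }
        return sb.toString();
    }

    // Emit one issue as a JSON object; in json mode, only lines like this
    // go to stdout, so the consumer can parse the whole stream as JSON.
    static String issueJson(String file, int line, String message) {
        return "{\"file\":\"" + escape(file) + "\",\"line\":" + line
             + ",\"message\":\"" + escape(message) + "\"}";
    }
}
```

In practice a JSON library handles all of this for you; hand-rolled escaping like the above is only worth showing because it makes the failure mode (an unescaped `"` or a bare `\r` in the middle of a string) obvious.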

This change also makes sure that the Java app the students use behaves exactly like my tool. No surprises in what the tool reports, or does not report, to students compared to the results the teacher sees.

The Java app executed in the subprocess is definitely faster than the ANTLR implementation in Swift. Additionally, I managed to strip out tens of lines of code: the ANTLR-generated Java parser is removed, and the dependency on the ANTLR library is gone from the Swift tool project.