Teaching Portfolio
CS-1P Programming - Glasgow University - Quintin Cutts
Written Examinations
We run two written examinations: one worth 10% of the course, held midway through in January, and a second major terminal, or degree, exam held in June, worth 70% of the course. Here are the two exams for this year: January and June. Note that both exams have short-answer questions, covering around half the marks, and then one long problem-solving question for the remainder. There are endless arguments about whether these kinds of exams really are a fair assessment of students' learning, or whether they are assessing in any way what we think they are assessing. These are my reflections:

Written problem-solving questions

As part of a balanced diet of assessment formats, I believe that a written problem-solving assessment has some merit. We are very clear with the students that we don't expect a perfectly coded solution; indeed, coding is not the main point in these kinds of questions. Rather, we are attempting to see whether the student has some basic problem-solving skills, has learned some basic program structures, and can put in place the gross structure of a solution to an unseen problem. Also in its favour, of course, is the relatively low cost of preventing plagiarism of any kind. I suspect it does set the hurdle rather high, however. In order to solve problems from scratch in a short time period, the student must have a reasonably deep understanding of the programming concepts involved, and they must have done enough programming for some simple programming patterns to have become embedded. Our questions follow a familiar format, following on from the assignments during the year, but it is an open question how many such problems a student must have solved before the patterns are in place, so that the problem solving in the exam is not a "from scratch" exercise. Our laboratory exams assess the other aspects of learning to program - the use of a programming environment to enter, edit, and debug code.

Large problems

Particularly in the June exam, the main problem-solving question forms a large part of the exam - usually 60%. We do aim to break it down into a series of more manageable chunks worth between 5% and 25% each, and so guide the student through the process. For example, a question will often require the development of a few useful auxiliary procedures or functions first, which can then be incorporated into the main problem. Nonetheless, the single large question is daunting for almost any student. My concern is that a student may have plenty of programming knowledge but just not be able to display it, because they cannot get started, either in problem solving in general or on this problem in particular. Where does one draw the line? Colleagues would say that we must have students who can solve problems from scratch. But this is surely only one part of the whole process, and if we are unable to reward students for skills and knowledge in other areas, are we really being fair? For example, a student may know the language well and be able to code and debug effectively. But if they just cannot see how to start the process off, these skills are useless. In past years, I made use of a question format where a plan was provided for solving a problem, in quite some detail, and the student simply had to code the plan. I was dissuaded from continuing with this format, on the argument that it would take a student too long to get into the mindset of the person providing the plan - the exam setter. I am unconvinced by this argument.
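To illustrate the kind of breakdown described above - a few auxiliary functions developed first and then combined in the main problem - here is a minimal sketch. The course language is not stated in this section, and the problem, function names, and marks split are purely hypothetical; Python is used only for illustration.

def count_vowels(word):
    """Auxiliary part (a): return the number of vowels in word."""
    return sum(1 for ch in word.lower() if ch in "aeiou")

def is_palindrome(word):
    """Auxiliary part (b): return True if word reads the same backwards."""
    w = word.lower()
    return w == w[::-1]

def summarise(words):
    """Main part: combine the auxiliary functions to report, for each word,
    its vowel count and whether it is a palindrome."""
    return [(w, count_vowels(w), is_palindrome(w)) for w in words]

if __name__ == "__main__":
    print(summarise(["level", "banana", "exam"]))
    # [('level', 2, True), ('banana', 3, False), ('exam', 2, False)]

In an exam of this shape, parts (a) and (b) would each carry a small number of marks, with the larger share reserved for seeing that the student can assemble them into the main solution.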
Matching practice to assessment

From a student perspective, the course is very much built around the assignments. The regular fortnightly submissions are the metronome of the course, and the feedback students gain on each assignment is their best guide to how they are progressing. But of course it is likely that the programming process they took to reach their assignment solution is radically different from the process required to solve a problem in an exam. For example, in the assignment:
These all conspire to encourage an incremental approach to programming and, if the program works successfully at the end with good feedback from the tutor, a belief that the skill is being successfully mastered. Whilst we tell the students to expect a very different environment in the exam, encourage them to practise past papers, and give them a taste of the format in the January class test, I'm concerned that this is not sufficient to really drive the message home.

Doing vs. understanding

During the exam-setting process this year, I recognised that all the questions in the exam papers ask the students to do things. We expect that they are demonstrating their understanding through the completion of tasks. But I would like to incorporate some element of reflection, both during the course and in the final exams, in order to develop and deepen the students' understanding - for example, some requirement for a student to articulate why he or she did something a particular way.