[lpi-discuss] Multiple answers
alan.mckinnon at gmail.com
Sat Jun 14 04:52:46 EDT 2008
On Friday 13 June 2008, Tobias Crefeld wrote:
> <alan.mckinnon at gmail.com> wrote:
> > On Friday 13 June 2008, Grant Sewell wrote:
> > > Cisco used to have the same scheme on their Academy courses. They
> > > recently (3 years?) changed their system to accommodate the
> > > flexibility. Essentially their system now recognises that if a
> > > question has 3 marks then there should be an opportunity for the
> > > examinee to get 0, 1, 2 or 3 marks for it.
> > It also encourages guessing and thereby completely screws up the
> > statistical analysis of the answers.
> The encouragement of guessing is a disadvantage of multiple choice
> tests in general, no matter how many answers you offer.
> The only advantage is the reduced effort for automated scoring.
Guessing does introduce an X factor, but one that can be (trivially)
corrected for:
Problem: Offer a multiple choice exam with 100 questions, 4 choices
each, one correct answer each. Answering randomly gives a statistical
average of a 25% score.
Solution: Move the baseline. A score of 25% gives you a rating of 0,
your *actual* marks are whatever you scored in excess of 25%.
That's the highly simplified version of course, but it illustrates the
idea.
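A tiny sketch of that rescaling, assuming the 100-question, 4-choice setup above (the function name and the linear rescaling to a 0..1 range are my own illustration, not any exam body's actual formula):

```python
def adjusted_score(raw_correct: int, n_questions: int = 100,
                   n_choices: int = 4) -> float:
    """Move the baseline: the expected score from pure guessing maps
    to 0.0, a perfect score maps to 1.0."""
    baseline = n_questions / n_choices   # random guessing averages 25 of 100
    if raw_correct <= baseline:
        return 0.0                       # at or below chance: rated zero
    return (raw_correct - baseline) / (n_questions - baseline)
```

So a raw 25% scores 0.0, and only marks in excess of the guessing baseline count.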
> > Multi-answer questions test the candidate's ability and knowledge
> > on a topic that has more than one element. The whole point of the
> > question is to identify if the candidate does in fact know all the
> > elements.
> I think it depends on the type of answer and it depends on how close
> to practical problems the test should operate:
> If you ask how to solve a problem, it should be enough if you give one
> correct solution. Knowing more solutions is nice but has no
> practical benefit, so you get just one point, and none if
> one or more wrong answers are marked.
This is probably not viable for a written exam, so these kinds of
question are simply not asked.
In contrast, Red Hat's exam can use this method as they only check for
the end result. (I'm an active RHCI so I have some insight into this)
As a simple example, suppose the question is "Set up and configure a mail
server to receive mail for example.com". The marking scripts may
connect to port 25, check for the expected response after "RCPT
TO:", and do very little else. The candidate is entirely free to use
whatever MTA they choose. If the candidate has superior l33t c0ding
skillz, thinks they can write and build an MTA from scratch in the
three hours allocated, and does so *and* it works, they will get full
marks. This is of course highly unlikely, but it would be perfectly
acceptable if it were ever accomplished.
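A minimal sketch of what such a result-only marking check might look like (the grader hostname and addresses are made up, and the real Red Hat marking scripts are not public, so this only illustrates the idea of testing the end result over port 25):

```python
import smtplib

def rcpt_accepted(code: int) -> bool:
    """SMTP success replies are in the 2xx range (e.g. 250 OK)."""
    return 200 <= code < 300

def accepts_mail_for(host: str, domain: str) -> bool:
    """Connect to port 25 and check whether the server accepts
    RCPT TO: for an address in the given domain."""
    with smtplib.SMTP(host, 25, timeout=10) as smtp:
        smtp.helo("grader.example.net")        # hypothetical grader host
        smtp.mail("grader@example.net")        # hypothetical sender
        code, _reply = smtp.rcpt(f"postmaster@{domain}")
        return rcpt_accepted(code)
```

Any MTA that answers with a 2xx reply to the RCPT TO passes; the script never cares which one is installed.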
LPI cannot use this marking method for practical reasons, so we avoid
the problem entirely by simply not asking questions with several
equally correct answers.
> On the other hand, if you ask e.g. for the effects of a command, every
> correctly marked item should count, as it proves a slightly increased
> level of knowledge, and every wrongly marked answer should bring one
> minus point, down to a limit of zero points per question.
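The scoring rule quoted above (one point per correct item, minus one per wrongly marked item, floored at zero per question) would amount to something like this sketch:

```python
def question_score(marked: set[str], correct: set[str]) -> int:
    """Plus one per correctly marked item, minus one per wrongly
    marked item, never below zero for the question."""
    plus = len(marked & correct)     # items marked that are correct
    minus = len(marked - correct)    # items marked that are wrong
    return max(0, plus - minus)
```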
In practice, questions like this are horrendously difficult to
formulate. Once upon a time I ran several quite large item labs and a
few attendees tried to come up with questions like this just for fun.
What actually happened is that there was always some ambiguity in the
wording. Fixing that always introduced an explosion of edge cases that
had to be accounted for. Trying to deal with that always ended up with
a stupid, simplistic question where the very wording of the question
gave away its own answer.
It was quite an eye-opener to everyone, and quite hilarious afterwards.
We all got significant insight into the highly technical nature of exam
questions. In the end we concentrated on what was recommended to us in
the beginning anyway - questions deal with one known fact, which has
one unambiguous answer. If the answer has two separate components then
you still treat the answer as one complete whole (with two parts).
> The third possibility might be a security question, e.g. you have to
> list all points you have to take care of if you set up a secure host.
> As there is no half-secure host, only a complete list should get full
> marks.
This is a possibility that does work in practice, and for more things
than just security. The only trick is to make sure that all the points
that have to be taken into account belong to the same published
Objective. You shouldn't ask a question that requires a candidate to
answer that port 143 must be open on the firewall and dovecot must be
running for the user to read their mail over IMAP, for example - that
spans two completely different exam Objectives.
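The all-or-nothing "complete list" scoring comes down to a simple set comparison; a sketch of the idea, not any actual LPI marking code:

```python
def complete_list_score(marked: set[str], required: set[str]) -> int:
    """Full marks only for the exact complete list; anything missing
    or extra scores zero - there is no half-secure host."""
    return 1 if marked == required else 0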
> This is just my personal point of view. No need to take it too
> seriously. ;)
Nice discussion though :-)
In the past 4 years I have learnt that education and testing are much
further removed from real life than I first thought. The classroom is
always an artificial environment, so is the exam room. This is true for
any exam room, and for any type of exam you could ever dream up. So an
exam you have passed never proves anything in the scientific sense, but
it does give a fairly reliable indicator that you have a fair amount of
knowledge in the area tested.
Nothing would make me happier than to have a thorough Linux exam method
that tests sysadmins to the same thoroughness as airline pilots and
brain surgeons. Have the candidate demonstrate over and over again that
they can perform every required action consistently, and that they know
how to do it and why it's done this way. But that's just ludicrous -
few sysadmins on the planet could afford to pay the fees :-)
alan dot mckinnon at gmail dot com