Pascal Bercher

  1. 4 votes
    2 comments  ·  Ideas

    Pascal Bercher commented  · 

    (PART TWO)

    -----meta information-----
    Due to the merge of my suggestion into this one by PollEv, only the first part remained visible. So I now post the rest as a second comment so that it is complete again.
    -----meta information-----

    Continuation of the last part:
    So clearly this adapted semantics is **much** more important than those
    (strange) numbers you report.

    Anyway, this was just a "fix" of your "How often was an item chosen"
    semantics. While I can see that this (fixed) version might be
    interesting in some cases, I still argue that the **main** semantics we
    (University lecturers/professors) are normally interested in is how
    many students got the *complete* question right.

    "Standard semantics"
    --------------------

    The standard semantics (I call it that, as I claim that in a University
    context this *is* the standard) should report how many participants
    got the correct result. By correct, I of course mean that the
    respective question is answered correctly, which means that (a) all
    correct answers are selected and (b) no wrong answers are selected.

    Remember what the participants selected:
    participant 1: (a) and (b) (a is right, b is wrong)
    participant 2: (a) and (c) (this is *exactly* the right pattern)

    So the statistics should look something like:
    correct solution: 50%
    incorrect solution: 50%

    You could even think about incorporating additional information, e.g.
    number of participants who selected at least one wrong solution or one
    right one. In our example:
    correct solution: 50%
    incorrect solution: 50%
    >=1 correct item: 100%
    >=1 wrong item: 50%

    The latter two are just a (very interesting) bonus, though. The
    important number here is how many got the entire question right.
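    To make this "standard semantics" concrete, here is a minimal Python sketch of what I mean (the function name and data layout are mine, purely illustrative; this is of course not PollEv's actual code):

    ```python
    # Proposed "standard semantics": the share of participants whose
    # selection matches the set of correct answers EXACTLY, plus the
    # two bonus statistics mentioned above.

    def standard_stats(responses, correct):
        """responses: list of sets of chosen options; correct: set of right options."""
        n = len(responses)
        fully_correct = sum(1 for r in responses if r == correct)
        at_least_one_right = sum(1 for r in responses if r & correct)
        at_least_one_wrong = sum(1 for r in responses if r - correct)
        return {
            "correct": 100 * fully_correct / n,
            "incorrect": 100 * (n - fully_correct) / n,
            ">=1 correct item": 100 * at_least_one_right / n,
            ">=1 wrong item": 100 * at_least_one_wrong / n,
        }

    # The example above: (a) and (c) are right; participant 1 chose
    # (a) and (b), participant 2 chose (a) and (c).
    print(standard_stats([{"a", "b"}, {"a", "c"}], {"a", "c"}))
    # → {'correct': 50.0, 'incorrect': 50.0, '>=1 correct item': 100.0, '>=1 wrong item': 50.0}
    ```

    This reproduces exactly the numbers I list above: 50% correct, 50% incorrect, 100% with at least one correct item, 50% with at least one wrong item.
    
    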

    Pascal Bercher commented  · 

    (PART ONE)

    -----meta information-----
    I think that my suggestion was already described here:
    https://polleverywhere.uservoice.com/forums/151441-ideas/suggestions/39139012-percentage-breakdown-of-combination-of-answer-choi

    However, I had already written my (very detailed) description before finding it, as I had already sent it via mail. I could have added it as a comment to the other one, but the size of comments is bounded, and my text was way too long. Thus the new entry. Maybe you can merge or link them (a feature being requested multiple times should be expected anyway).
    -----meta information-----

    There are two additional semantics for the statistics of MC questions
    that one is usually interested in.

    Actually, I claim that both of my proposed semantics make 100 times
    more sense than the one used now. Of course it is pure speculation
    which semantics is relevant more often; I can only say that I cannot
    use your MC questions due to your strange semantics and will thus never
    be able to use your system in the future. I would really wonder if I am
    the only one. I am pretty sure that many people do not use your system
    for that very reason.

    So, my suggestion is to either change the semantics of your reported
    statistics (which I recommend; yours is just extremely strange!), or
    simply implement all of them and let the user choose which one they
    want. (Though per default I would definitely choose one of mine.)

    Here is the explanation of your current (strange, as I argue)
    statistics semantics; later on I explain the 'standard semantics' that
    I believe University lecturers/professors *all* need.

    Current semantics:
    ------------------

    Assume I have the following MC question:

    Which answer(s) is (are) correct?
    (a) This one!
    (b) This is close, but it's not right.
    (c) If you look carefully you will discover that this one is.
    (d) RARELY THE LOUDEST IS THE ONE WHO IS RIGHT. This is not it.
    (e) None of the above is right.

    Here, (a) and (c) are correct (and of course marked as such).

    Now, assume two people participated, in the order reported below:

    participant 1: (a) and (b)
    -->
    The results show:
    (a) 50 %
    (b) 50 %
    (c) 0 %
    (d) 0 %
    (e) 0 %

    So, clearly the semantics you implement is "How often, relative to all
    selections made, was each element selected?"
    So, if that participant had only selected (a), it would have been
    100%; if he had also added (e), each would have been 33%.

    participant 2: (a) and (c)
    -->
    The results show:
    (a) 50 %
    (b) 25 %
    (c) 25 %
    (d) 0 %
    (e) 0 %

    So, clearly, the semantics as explained above is still applied, because
    answer (a) makes up 50% of all selections (2 of 4), whereas (b) and (c)
    each make up only 25% (1 of 4).

    I argue that this is just an extremely strange statistic that nobody
    could possibly be interested in! (Unless each participant can only
    choose one element among n, because then the semantics applied here
    actually coincides with the 'standard' semantics I propose you should
    support instead.)
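    To make the comparison precise, here is a small Python sketch of the semantics I believe you currently implement, as I reconstruct it from the numbers above (helper name and data layout are mine, purely illustrative):

    ```python
    from collections import Counter

    # Current (reconstructed) semantics: each option's share of ALL
    # selections made, pooled across every participant.

    def current_stats(responses, options):
        """responses: list of sets of chosen options; options: all option labels."""
        counts = Counter(x for r in responses for x in r)
        total = sum(counts.values())
        return {o: 100 * counts[o] / total for o in options}

    # After participant 1 only, who chose (a) and (b):
    print(current_stats([{"a", "b"}], "abcde"))
    # → {'a': 50.0, 'b': 50.0, 'c': 0.0, 'd': 0.0, 'e': 0.0}

    # After participant 2, who chose (a) and (c), also responded:
    print(current_stats([{"a", "b"}, {"a", "c"}], "abcde"))
    # → {'a': 50.0, 'b': 25.0, 'c': 25.0, 'd': 0.0, 'e': 0.0}
    ```

    This reproduces both result tables above, which is why I believe this is the formula in use.
    
    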

    There are two main reasons why these numbers are extremely strange (and
    thus useless, as I argue):

    (1) We have no chance of finding out how many people got it right!

    Remember that your tool is not used to choose some product, but to
    answer questions in an academic context. So if 2 of 5 options are
    correct, then **clearly** we want to know how many people knew this!
    Every other selection pattern is wrong, so in the example above only 1
    of 2 participants was right, the other one was not. This information is
    completely lost, i.e., even theoretically we cannot re-infer these
    values from the reported numbers.

    (2) It does not account for individual participants. Thus, a single
    participant can distort the results.

    For illustration, assume that participant 2 had chosen the first four
    options. Then we would have gotten:
    (a) 33 %
    (b) 33 %
    (c) 17 %
    (d) 17 %
    (e) 0 %

    That is of course perfectly correct with regard to the semantics
    applied, as you simply report how often a certain element was chosen
    compared to all choices made. I just argue that this semantics does not
    make any sense at all.^^ Instead, *if* you are interested in this
    semantics of "how often was an element chosen", then it should at least
    be relative to the number of participants, shouldn't it?

    So, according to this (much more useful, I argue) semantics, it should show:
    (a) 100 %
    (b) 100 %
    (c) 50 %
    (d) 50 %
    (e) 0 %

    Because that shows by how many participants the respective item was
    chosen. Of course you can compute anything, but I simply cannot believe
    that anybody could possibly be interested in the numbers you report;
    they are just weird... What could a possible use case for them be?

    So at least this slightly adapted "how often was an item chosen"
    statistics makes some sense. (Because now we can see how many people
    got the correct results! E.g., the correct answer (a) was found by all,
    whereas the wrong answer (b) was also selected by all, etc.)
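    This adapted "per participant" version is also only a few lines; a minimal Python sketch (again with my own illustrative helper name, not your actual code):

    ```python
    # Adapted semantics: for each option, the percentage of PARTICIPANTS
    # who selected it, instead of its share of all pooled selections.

    def per_participant_stats(responses, options):
        """responses: list of sets of chosen options; options: all option labels."""
        n = len(responses)
        return {o: 100 * sum(1 for r in responses if o in r) / n for o in options}

    # The hypothetical above: participant 1 chose (a) and (b),
    # participant 2 chose the first four options.
    print(per_participant_stats([{"a", "b"}, {"a", "b", "c", "d"}], "abcde"))
    # → {'a': 100.0, 'b': 100.0, 'c': 50.0, 'd': 50.0, 'e': 0.0}
    ```

    Note that the percentages no longer need to sum to 100, because each participant can select several items; that is exactly what makes them interpretable per item.
    
    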

    -- BREAK SINCE COMMENT IS TOO LONG --

    Pascal Bercher supported this idea  · 
