The new rating system

Hmm, I've just noticed that there is a new rating feature on PM.
To those who implemented it --- I assume it was Mr. P. Jurczyk --- bravo! I think many of us will appreciate it as a way to further improve our entries.

// Steve


I don't understand it.
Is a 1 a high score and 5 low? Or is 1 low and 5 high?

The latter is correct. The rating values range from 1 to 5; the higher the rating, the better.

Pawel

Aaron,

That's great. Thanks.

I assume that the left-hand one of the two new little boxes at the top of the entry should link to some description of how the rating in the meter was arrived at. But in fact, both little boxes link to the description of the ratings system.

Roger

Looking at the boxes again, it seems that one of the boxes gives you an idea of the "credibility" of the "current" owner of the entry... am I right?

Again, thanks to Aaron and everyone who did this work!

The color seems to. I had assumed that if you clicked on that box, you'd get some kind of numeric summary of the rating. Perhaps that's an unreasonable expectation :)

Hi,
I would like to explain a bit how the system works, what all the bars mean, etc.
First of all, as some of you have noticed, each encyclopedia entry has two bars: the first gives the rating of the current owner of the object, and the second shows the average rating of the entry (you can rate objects by logging in and filling in the survey available below each encyclopedia entry). If you mouse over each of the bars, you will get a verbal explanation of the rating.
In general, there are five different 'levels' of user and entry 'quality'. Each level is represented by a slightly different image (the higher the bar in the image, and the closer to green, the better the entry or user). A blue image with a question mark means that the given entry has not been rated so far, or that the user is new and has not been included in the most recent HITS calculations.
After you rate an entry, you are taken to a summary page which gives the average ratings of that object. As you can see, each question is answered on a scale of 1-5. Again, the higher the value, the better. If you mouse over the questions, you should be able to see the full question text.
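
As an illustration of how an average on the 1-5 scale could be bucketed into one of the five display levels, here is a minimal Python sketch, falling back to the question-mark state for unrated entries. The level names and the equal-width bins are just for the example:

def bar_level(ratings):
    """Map a list of 1-5 ratings to one of five display levels.

    Returns 'unrated' (the blue question-mark image) when there are no
    ratings yet.  The five equal-width bins over [1, 5] are only an
    illustrative choice of thresholds.
    """
    if not ratings:
        return "unrated"
    avg = sum(ratings) / len(ratings)                # average on the 1-5 scale
    levels = ["very low", "low", "medium", "high", "very high"]
    return levels[min(int((avg - 1) / 0.8), 4)]      # 1-1.8 -> lowest, ..., 4.2-5 -> highest

print(bar_level([]))         # 'unrated'
print(bar_level([5, 4, 5]))  # 'very high'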
About the user ratings - this is a more complicated process, and as you may read after clicking on the bars, it involves running the HITS algorithm on the PM database. We will include the details of this in the references section a bit later.
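
For readers who have not seen HITS before, the following is a generic sketch of the hub/authority iteration, assuming the input graph is given as an adjacency matrix. It only illustrates the algorithm itself; how the graph is actually built from the PM database, and any weighting applied to it, is not covered here:

import numpy as np

def hits(adj, iterations=50):
    """Generic HITS iteration.  adj[i][j] == 1 means node i points to node j."""
    A = np.asarray(adj, dtype=float)
    hubs = np.ones(A.shape[0])
    auths = np.ones(A.shape[0])
    for _ in range(iterations):
        auths = A.T @ hubs               # good authorities are pointed to by good hubs
        hubs = A @ auths                 # good hubs point to good authorities
        auths /= np.linalg.norm(auths)
        hubs /= np.linalg.norm(hubs)
    return hubs, auths

# Toy graph: node 0 points to 1 and 2, node 1 points to 2.
hubs, auths = hits([[0, 1, 1],
                    [0, 0, 1],
                    [0, 0, 0]])
print(auths)  # node 2 receives the highest authority score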

Thanks,
Pawel

I rated one entry just now and I noticed that the second box changed from a question mark to a colored box, the color representing the numerical average of the rating... neat!

Such a qualification seems dichotomous to me when it is applied to some PM mathematicians. Sincerely, I think it is preferable that my criticism of Pawel's method be harsh, rather than that we be unfair in rating Djao's ``Owner confidence rating'', to name just one case. The sentence:
"They should be used only as a rough guideline of quality and reputation"
really sounds reckless to me.
perucho

I too am confused about it. It does make sense that Wkbj79 has a high rating, PrimeFan a medium rating and Mathnerd a low rating. But why does akrowne have a medium rating and not high? Why does CompositeFan have a low rating and not medium, or at least medium low?
~knodel

I understand your concerns. I agree that the algorithm is not perfect and that it punishes some great users. Sometimes it is very hard to design an algorithm which works well for all cases - you should know that. However, please be a bit patient. The user expertise algorithm takes into account many factors, which include the ratings you are providing for entries.
The algorithm was run only once - when we had almost no ratings in the system. As the number of rated entries (and their accuracy) increases, you should see an improvement.

Pawel

Thanks for your reply, Pawel, I appreciate it. Indeed my criticism is in good faith, and I see that with any program (not yours specifically, and I esteem your work because I'm sure you are working in good faith too!) it is very difficult for PM entries to be refereed in this way. In fact you are a relatively new PM member, but every "old" member knows who the top mathematicians on PM are, without prejudice to the fact that all PM people must keep in mind to contribute to the best prestige of PM (me included!). I think that establishing a fair rating for the entries is really complex. Look for example at the ``corrections'' (in my opinion, along with the ``Forums'', the best mechanism so far) and you will realize that a significant number of competent mathematicians have a lot of ``corrections received''. Is this paradoxical? Of course not! It happens that corrections in PM are complex and dynamic. For an analogous reason papers, books, and PM entries too, are refereed by specialists. Nevertheless, I think you might get the help of the Administrators in order to combine your program with the 'real' appreciation and so obtain fairer ratings. I do not want to discourage you, by any means, and I hope that you may complete this work, which I know is very important to you.
Sincerely,
perucho
PS. Hope you may understand my short English.

Hi Chi, you said:

-- I have a suggestion: if a user has not voted on an entry, only the number of votes on the entry should be revealed and not the rating itself. The rating will be revealed to the user only after the voting has taken place. This will reduce potential bias (at least to some degree) to the voting scheme. --

Although I see your point, I would not apply this option, for the simple reason that we do want outsiders to be able to see the rating of an entry, to understand the degree of confidence they should place in what they are reading. We shouldn't assume that everyone visiting PlanetMath has an account and is willing to spend enough time to read the entry thoroughly and vote.

A

Hi Alvaro,

I see your point too, and I agree that this is one way to attract more readers to join PM. However, if we were to reveal the ratings to everyone outright, we would not be able to take the ratings too seriously, at least from a statistical point of view. An entry with a very high/low rating just means that some people have given it on average a high/low rating, whether these ratings are biased or not (perhaps because they have seen the rating prior to voting). It does not necessarily reflect the intrinsic worth of the entry itself.

Well, I still think a PM content committee is a good thing to have, sooner or later.

I think that not allowing users to see the ratings of an entry is not the best idea. My reasons are:

1. If a user is browsing the encyclopedia (and not hunting for entries to rate), that user should be able to see what others think about the entry.

2. The person who created the entry should know what other people think of the entry. Ideally, if an entry receives a low rating, a correction should also be filed, but if this does not occur, then the user can glean that information and try to improve the entry.

I would like to propose a compromise to this issue of whether we see ratings or not. It would be nice if we had a "browse mode" in which users can see ratings but cannot rate entries, and a "rate mode" in which the ratings are hidden and users can rate entries. (I have in mind something akin to the initial mass rating we did in which the author's user name is also hidden during rating. This is a good idea as knowing the author of an entry is another possible source of bias.) I am uncertain how much work it would be to implement this idea, but I thought that I would at least throw the idea out and see what others think.
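
To make the compromise concrete, here is a small hypothetical Python sketch of the visibility rule I have in mind; the mode names, the function, and the entry fields are mine, not an existing PM feature. In "browse" mode the votes and average are visible but voting is disabled, while in "rate" mode the rating and the author's name are hidden and voting is enabled:

def entry_view(entry, mode):
    """Hypothetical 'browse' vs. 'rate' presentation of an entry.

    entry is a dict with 'title', 'author', and 'ratings' (a list of 1-5 votes).
    """
    view = {"title": entry["title"], "can_rate": False}
    if mode == "browse":
        # Ratings and author visible, voting disabled.
        view["author"] = entry["author"]
        view["votes"] = len(entry["ratings"])
        view["average"] = (sum(entry["ratings"]) / len(entry["ratings"])
                           if entry["ratings"] else None)
    elif mode == "rate":
        # Rating and author hidden to reduce bias; voting enabled.
        view["can_rate"] = True
    return view

entry = {"title": "group", "author": "someone", "ratings": [4, 5, 3]}
print(entry_view(entry, "browse"))  # shows votes and average, can_rate False
print(entry_view(entry, "rate"))    # hides them, can_rate True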

While we are talking about ratings, would it be possible to set it up so that, in a list of entries, we can see their ratings? This is already set up in the search engine, but not in other lists. (For example, the list that comes up by clicking on "your objects" in the menu on the upper left.) I was trying to figure out which of my entries had been rated, and going through each of my entries individually to do this was a huge hassle.

Now back to something Chi said:

> Well, I still think a PM content committee is a good thing to have, sooner or later.

I agree, and I think sooner would be better than later.

Warren

This sounds like a reasonable compromise. And I also like your idea of putting the ratings next to the entries when browsing, and this includes browsing one's own entries. Overall, I think the rating system is a great improvement over what we had before... my suggestion is really just food for thought :)

Yes... I still need to do an announcement (just woke up having slept all evening and not feeling well), but I think you have the general idea.

I'd consider this a "trial period" -- we'll see how it works out, and can remove it if we don't like it.

It still could be hooked into more things, such as search results ranking, and the top users display.

apk

I have a suggestion: if a user has not voted on an entry, only the number of votes on the entry should be revealed and not the rating itself. The rating will be revealed to the user only after the voting has taken place. This will reduce potential bias (at least to some degree) to the voting scheme.
