Date: 23 Apr 2008 18:12:27
From: samsloan
Subject: Chess Ratings
The Rating of Chess Players, Past and Present by Arpad Elo

Arpad Elo was one of the giants of the World of Chess. Born in 1903,
he got off to a slow start, learning the rules of chess by reading the
Encyclopedia Britannica in his high school library. By 1935, he was
Wisconsin State Champion, a title he won eight times.

In 1939, Arpad Elo was one of the seven founding members of the United
States Chess Federation. His signature is on the original USCF
corporate charter, dated December 27, 1939.

In the late-1930s, the first chess rating system was developed by the
Correspondence Chess League of America. In the early 1940s, Chess
Review magazine developed a system for its postal chess program.
Kenneth Harkness was the managing editor of Chess Review and, after he
left in 1948, he spent two years developing a rating system for over
the board play. The original Harkness System forms the starting point
for all chess rating systems used in the world today. Under the
original Harkness System, every player who got an even score of 6-6 in
the US Open was assigned a rating of 2000. The ratings of other
players were calculated from that starting point. Players above 2100
were experts and players above 2300 were masters.

Harkness unveiled his system in 1950. The first National Chess Rating
List was published in the December 1950 issue of Chess Review
magazine, page 354. It covered 2306 players and 582 tournaments over a
30-year period ending on July 31, 1950.

Almost from the start, there were problems. In the early lists,
everybody's rating went down as new improving players took rating
points away from the older established players. By 1956, the standards
had to be reduced. The requirement for expert was dropped to 2000 and
the requirement for master was dropped to 2200, where it remains
today.

In 1957, Kenneth Harkness published his book, The Blue Book
Encyclopedia of Chess, recently reprinted, which explained his rating
system in detail.

In the late 1950s, a new crisis arose with the rise of Bobby Fischer.
Under the Harkness System, if a player lost a game to a much lower
rated player, he could lose as much as 70 or 80 rating points in that
one game. Also, there were long intervals between rating lists, at
least six months and sometimes as long as one year. A player's rating was
only updated when the new list came out. Thus, if a player was rated
1700, he was considered still to be rated 1700 until the next list
came out and the gains and losses of the ratings of his opponents were
calculated on that basis.

During this period, the rating of Bobby Fischer rose from 1700 to 2400
in only two years. Every time Fischer, rated 1700, beat an expert
rated 2000, the higher rated player lost about 80 rating points in
that one game. This caused a lot of players to become upset at losing
all these rating points, especially after it became apparent that
Fischer was really a grandmaster, and not a Class B player.

In 1959, USCF President Jerry Spann appointed a committee headed by
Arpad Elo to study the rating system and make recommendations to
change the system to avoid the decimation that Fischer had wreaked on
the ratings of so many players.

In August 1960, Elo submitted his report at the USCF Delegates'
meeting in St. Louis. He had developed a formula that emulated the
results of the existing Harkness System. Among the changes were a
reduction in what is now known as the K-Factor. Under the Harkness
System, if a player lost a game to an opponent with the same rating,
he could lose 50 points. Under the Elo System, he would only lose 16
points. Also, the most that a player could lose in a single game was
only 30 points, no matter how low rated the opponent was. Finally,
each tournament was rated in succession, with the rating after one
tournament applied to the next, so that the rating of a player would
go up or down gradually, not in big jumps.
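The arithmetic behind these numbers can be sketched with the now-standard Elo update rule. This is only an illustration: the K = 32 used here is an assumption chosen to match the 16-point loss described above, not necessarily the exact constant the USCF adopted in 1960.

```python
def expected_score(r_a, r_b):
    """Probability that a player rated r_a scores against one rated r_b."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(rating, opponent, score, k=32):
    """New rating after one game; score is 1 for a win, 0.5 for a draw, 0 for a loss."""
    return rating + k * (score - expected_score(rating, opponent))

# Losing to an equally rated opponent costs K/2 = 16 points, as the text says:
print(update(2000, 2000, 0))   # 1984.0

# Against a much lower-rated opponent, the plain formula's loss approaches
# but never exceeds K points (the 1960 rules additionally capped it at 30),
# so the 70-80 point swings of the Harkness System disappear:
print(round(update(2000, 1400, 0), 1))
```

Note how the formula itself bounds the damage: the per-game loss is K times the expected score, so no single upset can cost more than about K points.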

I was the delegate from Virginia at the USCF meeting in St. Louis in
1960 that approved adoption of the new Elo System. I was the only
delegate who voted against it. I voted against it because I did not
like the fact that under the new Elo System, the rating of a player
would go up more slowly. Being a kid myself, I wanted my rating to
go up quickly.

By the 1970s, Professor Elo was regularly attending the annual
meetings of FIDE, the World Chess Federation. By then, the USCF was
running its own rating system and Elo was no longer involved except as
an advisor. However, Elo had developed a similar system for rating
International Players at the grandmaster level. Elo's personal rating
list of the top 65 players in the world was published in the October
1969 issue of Chess Life magazine and was similarly published in other
magazines around the world. At the 1970 FIDE Congress in Siegen,
West Germany, Professor Elo got FIDE to agree to adopt his Elo Rating
System as the official FIDE System. It was an easy sell, because Elo
himself would calculate the ratings on his own adding machine at his
home. Thus, FIDE did not have to spend any money on it.

There was a big difference with the USCF System, in that under the
FIDE System as calculated by Professor Elo, only the ratings of the
top level elite players were calculated. Under the USCF system, all
tournament players had ratings. Under the FIDE System, a male player
had to be a master, meaning that he had to be able to hold a rating of
2205 or better. Otherwise, he was not listed.

Thus, on Elo's annual lists, only a few hundred players had ratings.
The 1969 list only had 375 players listed. This included all the
grandmasters and international masters in the world who had played in
at least two tournaments in the past two years.

Elo was involved in a number of controversies, principally with his
nemesis Bill Goichberg. Goichberg was hired in 1964 by the USCF to be
its first full-time rating-statistician. Although Elo had developed
the theory of how the system should work, it was up to Goichberg to
put it into practice.

In 1967, when the USCF moved from New York City to Newburgh, New York,
Goichberg did not move with it and instead became a tournament
organizer. In the mid-1970s Goichberg started organizing FIDE Rated
Tournaments for the specific purpose of helping American players get
FIDE Ratings. Since almost all top level tournaments were being held
in Europe, it was nearly impossible for an American to get a FIDE
rating without traveling to Europe, because in order to get a FIDE
Rating one generally needed to play nine games against players who
already had FIDE ratings.

As FIDE Ratings became more popular, the number of rated players
increased. By the July 1, 1983 list, 3600 men and 720 women had FIDE
Ratings. Because there were far fewer top women players, women's
ratings were as low as 1805, whereas men had a minimum rating of 2205.

By that time, Professor Elo was no longer doing the ratings at home
alone. One reason for this was that so many players had ratings that
one man could not do all the work.

Another reason was the disputes between Arpad Elo and William
Goichberg, organizer of many FIDE Rated tournaments. At a time when
fewer than 600 players in the world had FIDE Ratings, Bill Goichberg
started an aggressive program to qualify US players for FIDE Ratings.
Typically, his tournaments were ten-player round robins with four
players who already had FIDE ratings, the minimum number necessary to
qualify a player for a partial FIDE Rating. However, it happened by
pure chance
that Bill Goichberg, normally a 2350 player, had the best tournament
of his life and scored a 2530 tournament performance. In another
event, Michael Valvo, a strong player who had been inactive, came out
of retirement and produced a performance of 2440. Those who knew Valvo
knew that this was a typical result for him, but Arpad Elo had never
heard of Valvo and thought that this result was suspicious.

The result was that Goichberg submitted tournament results showing
that he had earned a 2530 FIDE rating and Michael Valvo had earned a
2440 rating. Professor Elo had never heard of Valvo, but he knew
Goichberg well, due to the many disputes and disagreements between
Goichberg and Elo in 1964-67 when Goichberg was the rating
statistician working in the New York office and Elo in Wisconsin was
overseeing his work.

Arpad Elo did not believe any of this. He thought that this was all a
fix. Therefore, Elo refused to rate these events and to give Goichberg
his 2530 rating and Valvo his 2440 rating. Goichberg complained,
pointing out that if some unknown Russian or unknown Yugoslav had
produced these results, Elo would have awarded these ratings without
question, since Elo knew that there were many players in Russia and
Eastern Europe who were very strong and had not been allowed to
compete internationally.

The showdown came at a FIDE meeting in 1977 in Israel. USCF FIDE
Delegate, Executive Director and Lieut. Col. Edmund B. Edmondson
(1920-1982) protested to FIDE that Elo was biased, refusing to give
ratings that had been earned by American players who also had USCF
ratings, while readily giving ratings to Soviet or Hungarian players.
At that meeting, Elo was instructed to follow the rules, but when the
July 1977 rating list came out, Elo had done the same things again.
FIDE President Max Euwe then arranged a meeting in Milwaukee,
Wisconsin between Edmondson, Elo and Euwe. At that meeting, the three
went over every disputed rating. Elo finally said that he would comply
with FIDE rules and ultimately he did.

At the 1978 World Chess Olympiad in Buenos Aires, Argentina, news of
what had happened reached the General Assembly of FIDE. There was a
big controversy involving Valvo's rating and the fact that these
changes had been made. Many players objected to this behind-the-scenes
deal.

Goichberg was right, of course. His tournaments were not fixed.
Goichberg's performance, while unusual, was within the expected normal
range of tournament results. Valvo clearly was legitimately a 2440
player, even though Elo had never heard of him.

It was while these events were going on that Elo published his book.

In the last paragraph of the introduction to his book, Elo makes the
following statement:

“The general structures of the USCF and FIDE rating systems have
pretty well matured, and no significant changes are expected in the
immediate future.”

This is, or should be, one of those “rolling on the floor” type
statements. It should cause us to be regaled with laughter, because
since then there have been many changes to both rating systems as they
have diverged further and further from the original and from each
other.

However, you should pick yourself up off the floor and read the next
paragraph of Elo's book, which says:

“Both systems are treated in this book as they stand on January 1,
1978, but as with everything subject to legislative control, trimming
and adjusting may occur from time to time. The basic principles,
however, are scientific principles and enjoy a rather greater
durability.”

Professor Elo recognized that his rating system was subject to
legislative control, meaning that the politicians had ultimate control
over it. In fact, the politicians both in the USA and in FIDE have
found themselves unable to resist the temptation to tinker with the
system, making little changes here and there. That indeed is the
reason why it is necessary to reprint this book, which has been out of
print for 30 years, so that the general chess-playing public can see
what the Elo System originally was.

I served one year on the Executive Board of the United States Chess
Federation and during that one year another board member proposed a
change to the USCF Rating System that was unspeakably ridiculous and
preposterous. Nevertheless, I was the only one who voted against it and
it passed.

The current USCF Rating system is under the nominal control of the
Rating Committee while the FIDE System is being run by Toti Abundo of
the Philippines. Right now, the USCF Rating Committee consists of
entirely good, qualified people, but in past years the Ratings
Committee has been infiltrated in some cases with political hacks or
those who did not know, did not want to learn, and were incapable of
understanding the rating system.

The most important thing to understand is that chess strength develops
differently from strength in other activities. Most chess players who
eventually reach master have learned to play chess by the age of ten.
During their first eight years of tournament play, they improve
rapidly. By age 21 they are probably within 50 rating points of their
ultimate peak strength. By age 30, they have reached their peak. They
then level off, staying at about the same strength for 30 years or
suffering decline. Then, in their 60s or 70s they suffer a greater
decline. Still even in old age, they will in most cases be only 100
points lower than their peak strength.

This general curve has been tracked over hundreds of players at all
levels and has been found to apply with considerable accuracy.

Knowing this, it should be possible to track the ratings of several
hundred players of all ages and strengths and then, by watching how
their ratings and results go up and down, to tweak the rating system
so that, over the long term, the same rating equals the same strength.

In other words, if the rating system is running perfectly, then a
rating of 1850 in 1966 should represent the same chess strength as a
rating of 1850 in 2006, and so on.

Actually, it does. All things considered, a rating of 1850 in 1966
really does represent about the same chess strength as a rating of
1850 in 2006.

However, my opinion is that this is largely the result of dumb luck.
No matter what the rating system is and no matter how the K-Factor is
modified, the results in the end will come out about the same.

Still, it bothers me that from July 1998 until July 2000 my rating
dropped from 2104 (about where it had been for 30 years) down to 1921,
a drop of 183 points in two years. I thought that my rating would pop
right back up, but that has not happened. On the other hand, during
that exact same time period, many other players complained of a
similar ratings drop.

Now I would like to know whether this drop of 183 rating points is
the result of senility, because I am suffering the ravages of old age
(I am 63), or whether changes in the rating system caused it to
happen.

The rating system does not run by itself. There are constant factors
affecting it. One is natural deflation. Let us say a new player
enters as a scholastic player rated 800. Over the next ten years, his
rating increases to 1800. He has gained 1000 points. However, under
the original system the sum total of all ratings did not change, so
other players lost those 1000 points. Multiply that over the 600,000
players who now have USCF ratings, and it is easy to see why the
overall rating pool has lost points.
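This drain can be made concrete with a small simulation, using hypothetical ratings and the standard Elo formula with an assumed K of 32: an underrated newcomer keeps beating an established player, and every point he gains comes straight out of the veteran's rating while the total pool stays fixed.

```python
def expected(r_a, r_b):
    """Standard Elo expected score for the player rated r_a."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def play(winner, loser, k=32):
    """Rate one decisive game; the transfer is symmetric, so points are conserved."""
    delta = k * (1 - expected(winner, loser))
    return winner + delta, loser - delta

newcomer, veteran = 800.0, 1500.0    # hypothetical scholastic entrant vs. veteran
pool_before = newcomer + veteran
for _ in range(20):                  # the underrated newcomer keeps winning
    newcomer, veteran = play(newcomer, veteran)

# The newcomer has climbed past the veteran, but the pool total is unchanged:
# his gain is exactly the veteran's loss, even though the veteran is no weaker.
print(round(newcomer), round(veteran), round(newcomer + veteran - pool_before, 6))
```

The bonus-point injections described next are one way to put those drained points back into the pool.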

In order to counter this problem, various rating statisticians have
injected “bonus points” into the system, trying to award enough points
to counteract the points taken out by the rapidly improving players.

This has become somewhat similar to the way that the Federal Reserve
Board tweaks interest rates and the money supply depending on changes
in economic factors such as “new housing starts,” for example.

In short, this is the way the USCF Rating System should work. It does
not yet work that way as far as I know, however.

Sam Sloan
April 23, 2008




 
Date: 09 May 2008 23:33:08
From: Wlodzimierz Holsztynski (Wlod)
Subject: Re: Chess Ratings
On Apr 23, 6:12 pm, samsloan <[email protected] > wrote:
> The Rating of Chess Players, Past and Present by Arpad Elo
>
> Arpad Elo was one of the giants of the World of Chess.

Silly! :-)

Wlod


 
Date: 09 May 2008 09:25:24
From:
Subject: Re: Chess Ratings
On May 9, 12:13 pm, samsloan <[email protected]> wrote:
> Elo's book, The Rating of Chess Players, Past and Present, is
> reprinted today.
>
> http://www.amazon.com/dp/0923891277
>
> Professor Elo's book, long out of print and almost impossible to obtain,
> has just been reprinted, with a foreword by Sam Sloan.

Is this book already in the public domain? It first appeared in
1978, only 30 years ago. How long do copyrights last for such books?


 
Date: 09 May 2008 09:13:17
From: samsloan
Subject: Re: Chess Ratings
Elo's book, The Rating of Chess Players, Past and Present, is
reprinted today.

http://www.amazon.com/dp/0923891277

Professor Elo's book, long out of print and almost impossible to obtain,
has just been reprinted, with a foreword by Sam Sloan.



 
Date: 24 Apr 2008 05:30:12
From: samsloan
Subject: Re: Chess Ratings
The Rating of Chess Players, Past & Present by Arpad Elo

Introduction

Arpad Elo was one of the giants of the World of Chess. Born in 1903,
he got off to a slow start, learning the rules of chess by reading the
Encyclopedia Britannica in his high school library. By 1935, he was
Wisconsin State Champion, a title he won eight times.

In 1939, Arpad Elo was one of the seven founding members of the United
States Chess Federation. His signature is on the original USCF
corporate charter, dated December 27, 1939.

In the late-1930s, the first chess rating system was developed by the
Correspondence Chess League of America. In the early 1940s, Chess
Review magazine developed a system for its postal chess program.
Kenneth Harkness was the managing editor of Chess Review and, after he
left in 1948, he spent two years developing a rating system for over
the board play.

The original Harkness System forms the starting point for all chess
rating systems used in the world today. Under the original Harkness
System, every player who got an even score of 6-6 in the US Open of
was assigned a rating of 2000. The ratings of other players were
calculated from that starting point. Players above 2100 were experts
and players above 2300 were masters.

Harkness unveiled his system in 1950. The first National Chess Rating
List was published in the December 1950 issue of Chess Review
magazine, page 354. The first list covered 2306 players and 582
tournaments covering a 30 year period ending on July 31, 1950.

Almost from the start, there were problems. In the early lists,
everybody's rating went down as new improving players took rating
points away from the older established players. By 1956, the standards
had to be reduced. The requirement for expert was dropped to 2000 and
the requirement for master was dropped to 2200. It remains there
today.

In 1957, Kenneth Harkness published his book, The Blue Book
Encyclopedia of Chess, which has recently been reprinted, which
explained his rating system in detail.

In the late 1950s, a new crisis arose with the rise of Bobby Fischer.
Under the Harkness System, if a player lost a game to a much lower
rated player, he could lose as much as 70 or 80 rating points in that
one game. Also, there were long intervals between rating lists, at
least six months sometimes as long as one year. A player's rating was
only updated when the new list came out. Thus, if a player was rated
1700, he was considered still to be rated 1700 until the next list
came out and the gains and losses of the ratings of his opponents were
calculated on that basis.

During this period, the rating of Bobby Fischer rose from 1700 to 2400
in only two years. Every time Fischer, rated 1700, beat an expert
rated 2000, the higher rated player lost about 80 rating points in
that one game. This caused a lot of players to become upset at losing
all these rating points, especially after it became apparent that
Fischer was really a grandmaster, and not a Class B player. There were
bitter complaints about this.

In 1959, USCF President Jerry Spann appointed a committee headed by
Arpad Elo to study the rating system and make recommendations to
change the system to avoid the decimation that Fischer had wrecked on
the ratings of so many players.

In August, 1960, Elo submitted his report at the USCF Delegate's
meeting in St. Louis. He had developed a formula that emulated the
results of the existing Harkness System. Among the changes were a
reduction on what is now known as the K-Factor. Under the Harkness
System, if a player lost a game to an opponent with the same rating,
he could lose 50 points. Under the Elo System, he would only lose 16
points. Also, the most that a player could lose in a single game was
only 30 points, no matter how low rated the opponent was. Finally,
each tournament was rated in succession, with the rating after one
tournament applied to the next, so that the rating of a player would
go up or down gradually, not in big jumps.

I was the delegate from Virginia at the USCF meeting in St. Louis in
1960 that approved adoption of the new Elo System. I was the only
delegate who voted against it. I voted against it because I did not
like the fact that under the new Elo System, the rating of a player
would go up more slowly. Being a kid myself, I wanted for my rating to
go up quickly.

By the 1970s, Professor Elo was regularly attending the annual
meetings of FIDE, the World Chess Federation. By then, the USCF was
running its own rating system and Elo was no longer involved except as
an advisor. However, Elo had developed a similar system for rating
International Players at the grandmaster level. Elo's personal rating
list of the top 65 players in the world was published in the October
1969 issue of Chess Life magazine and was similarly published in other
magazines around the world. Here is his initial list.

At the 1970 FIDE Congress in in Siegen, West Germany, Professor Elo
got FIDE to agree to adopt his Elo Rating System as the official FIDE
System. It was an easy sell, because Elo himself would calculate the
ratings on his own adding machine at his home. Thus, FIDE did not have
to spend any money on it.

There was a big difference from the USCF System: under the FIDE
System as calculated by Professor Elo, only the ratings of the top
level elite players were calculated, whereas under the USCF system
all tournament players had ratings. Under the FIDE System, a male
player had to be a master, meaning that he had to be able to hold a
rating of 2205 or better. Otherwise, he was not listed.

Thus, on Elo's annual lists, only a few hundred players had ratings.
The 1969 list only had 375 players listed. This included all the
grandmasters and international masters in the world who had played in
at least two tournaments in the past two years.

Elo was involved in a number of controversies, principally with his
nemesis Bill Goichberg. Goichberg was hired in 1964 by the USCF to be
its first full-time rating-statistician. Although Elo had developed
the theory of how the system should work, it was up to Goichberg to
put it into practice.

In 1967, when the USCF moved from New York City to Newburgh, New York,
Goichberg did not move with it and instead became a big tournament
organizer. In the mid-1970s Goichberg started organizing FIDE Rated
Tournaments for the specific purpose of helping American players get
FIDE Ratings. Since almost all top level tournaments were being held
in Europe, it was nearly impossible for an American to get a FIDE
rating without traveling to Europe, because in order to get a FIDE
Rating one generally needed to play nine games against players who
already had FIDE ratings.

As FIDE Ratings became more popular, the number of rated players
increased. By the July 1, 1983 FIDE list, 3600 men and 720 women had
FIDE Ratings. Because there were far fewer top women players, women's
ratings were as low as 1805 whereas men had to have a minimum rating
of 2205.

By that time, Professor Elo was no longer doing the ratings at home
alone. One reason for this was that so many players had ratings that
one man could not do all the work.

Another reason was a dispute between Arpad Elo and William
Goichberg, organizer of many FIDE Rated tournaments. At a time when
less than 600 players in the world had FIDE ratings, Bill Goichberg
started an aggressive program to qualify US players for FIDE ratings.
Typically, his tournaments were ten player round robins with four
players who already had FIDE ratings, the minimum number necessary to
qualify a player for a partial FIDE Rating. However, it happened by
pure chance that Bill Goichberg, normally a 2350 player, had the best
tournament of his life and scored a 2530 tournament performance. In
another event, Michael Valvo, a strong player who had been inactive,
came out of retirement and produced a performance of 2440. Those who
knew Valvo knew that this was a typical result for him, but Arpad Elo
had never heard of Valvo and thought that this result was suspicious.

The result was that Goichberg submitted tournament results showing
that he had earned a 2530 FIDE rating and Michael Valvo had earned a
2440 rating. Professor Elo had never heard of Valvo, but he knew
Goichberg well, due to the many disputes and disagreements between
Goichberg and Elo in 1964-67 when Goichberg was the rating
statistician working in the New York office and Elo in Wisconsin was
overseeing his work.

Arpad Elo did not believe any of this. He thought that this was all a
fix. Therefore, Elo refused to rate these events and to give Goichberg
his 2530 rating and Valvo his 2440 rating. Goichberg complained,
pointing out that if some unknown Russian or unknown Yugoslav had
produced these results, Elo would have awarded these ratings without
question, since Elo knew that there were many players in Russia and
Eastern Europe who were very strong and had not been allowed to
compete internationally.

The showdown came at a FIDE meeting in 1977 in Israel. USCF FIDE
Delegate, Executive Director and Lieut. Col. Edmund B. Edmondson
(1920-1982) protested to FIDE that Elo was biased, refusing to give
ratings that had been earned by American players who also had USCF
ratings, while readily giving ratings to Soviet or Hungarian players.
At that meeting, Elo was instructed to follow the rules, but when the
July 1977 rating list came out, Elo had done the same things again.
FIDE President Max Euwe then arranged a meeting in Milwaukee,
Wisconsin between Edmondson, Elo and Euwe. At that meeting, the three
went over every disputed rating. Elo finally said that he would comply
with FIDE rules and ultimately he did.

At the 1978 World Chess Olympiad in Buenos Aires, Argentina, news of
what had happened reached the General Assembly of FIDE. There was a
big controversy involving Valvo's rating and the fact that these
changes had been made. Many players objected to this behind-the-scenes
deal.

Goichberg was right, of course. His tournaments were not fixed.
Goichberg's performance, while unusual, was within the expected normal
range of tournament results for a player of his strength. Valvo
clearly was legitimately a 2440 player, even though Elo had never
heard of him.

It was while these events were going on that Arpad Elo published his
book.

In the last paragraph of the introduction to his book, Elo makes the
following statement:

“The general structures of the USCF and FIDE rating systems have
pretty well matured, and no significant changes are expected in the
immediate future.”

This is, or should be, one of those “rolling on the floor laughing”
type statements. It should cause us to be regaled with laughter,
because since then there have been many changes to both rating
systems, as they have diverged further and further from the original
and from each other.

However, you should pick yourself up off the floor and read the next
sentence of Elo's book, which says:

“Both systems are treated in this book as they stand on January 1,
1978, but as with everything subject to legislative control, trimming
and adjusting may occur from time to time. The basic principles,
however, are scientific principles and enjoy a rather greater
durability.”

Professor Elo recognized that his rating system was subject to
legislative control, meaning that the politicians had ultimate control
over it. In fact, the politicians both in the USA and in FIDE have
found themselves unable to resist the temptation to tinker with the
system, making little changes here and there. That indeed is the
reason why it is necessary to reprint this book, which has been out of
print for 30 years, so that the general chess playing public can see
what the Elo System originally was.

I served one year on the Executive Board of the United States Chess
Federation and during that one year another board member proposed a
change to the USCF Rating System that was unspeakably ridiculous and
preposterous. Nevertheless, I was the only one who voted against it
and it passed.

The current USCF Rating system is under the nominal control of the
Rating Committee, while the FIDE System is being run by Toti Abundo of
the Philippines. Right now, the USCF Rating Committee consists
entirely of good, qualified people, but in past years the Ratings
Committee was at times infiltrated by political hacks and by those who
did not know, did not want to learn, and were incapable of
understanding the rating system.

The most important thing to understand is that chess strength runs
differently from that of other activities. Most chess players who
eventually reach master have learned to play chess by the age of ten.
During their first eight years of tournament play, they improve
rapidly. By age 21 they are probably within 50 rating points of their
ultimate peak strength. By age 30, they have reached their peak. They
then level off, staying at about the same strength for 30 years or
suffering a modest decline. Then, in their 60s or 70s they suffer a
greater decline. Still, even in old age, they will in most cases be
only 100 points lower than their peak strength.

This general curve has been tracked over hundreds of players at all
levels and has been found to apply with considerable accuracy.

Knowing this, it should be possible to track the ratings of several
hundred players of all ages and strengths and then by watching how
their ratings and results go up and down, to tweak the rating system
so that, over the long term, the same rating equals the same strength.

In other words, if the rating system is running perfectly, then a
rating of 1850 in 1966 should represent the same chess strength as a
rating of 1850 in 2006 and so on.

Actually, it does. All things considered, a rating of 1850 in 1966
really does represent about the same chess strength as a rating of
1850 in 2006.

However, my opinion is that this is largely the result of dumb luck.
No matter what the rating system is and no matter how the K-Factor is
modified, the results in the end will still come out about the same.

Still, it bothers me that from July 1998 until July 2000 my rating
dropped from 2104 (about where it had been for the previous 30 years)
down to 1921, a drop of 183 points in just two years. I thought that
my rating would pop right back up, but that has not happened. On the
other hand, during that exact same time period, many other players
complained of a similar ratings drop.

I would like to know whether this drop of 183 rating points is the
result of senility, because I am suffering from the ravages of old age
(I am 63), or whether changes in the rating system caused it.

I believe that the decline in my rating is due to a general drop in
the rating system, that many people complained about at that time. If
it was due to old age there would have been a slow but steady drop. In
my case, there was a sudden drop, followed by a leveling off. Right
now, my rating is 1923, actually two points higher than it was in
2000. So, senility could not be the cause. This point brings up an
interesting reason why the elderly should consider playing rated
tournament chess.

The rating system does not run by itself. There are constant factors
affecting it. One is natural deflation. Say a new player, age 10,
enters as a scholastic player with a rating of 800. Over the next ten
years, his rating increases to 1800, a gain of 1000 points. However,
under the original system, the sum total of all ratings does not
change, so other players lost that 1000 points. Multiply that across
the 600,000 players who now have USCF ratings, and it is easy to see
why the overall rating pool has lost points.
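The deflation mechanism shows up clearly in a toy simulation. The names and numbers below are made up, and the zero-sum update is the textbook Elo rule rather than the exact USCF formula:

```python
def expected(r_a: float, r_b: float) -> float:
    """Expected score for player A against player B."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def play(ratings: dict, a: str, b: str, score_a: float, k: float = 32.0) -> None:
    """Rate one game in place; B loses exactly what A gains (zero-sum)."""
    delta = k * (score_a - expected(ratings[a], ratings[b]))
    ratings[a] += delta
    ratings[b] -= delta

pool = {"junior": 800.0, "veteran1": 1800.0, "veteran2": 1800.0}
total_before = sum(pool.values())

# An improving junior keeps beating the established players...
for _ in range(20):
    play(pool, "junior", "veteran1", 1.0)
    play(pool, "junior", "veteran2", 1.0)

# ...and every point the junior gained came out of the veterans' ratings:
# the pool's total is unchanged, but the veterans are now rated lower.
```

If the improving player later leaves the pool, the points he carried away never come back; summed over hundreds of thousands of players, this steadily drains the system.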

In order to counter this problem, various rating statisticians have
injected “bonus points” into the system, trying to award enough points
to counteract the points taken out by rapidly improving players.
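One crude way to implement the idea is sketched below. The actual USCF bonus formulas have varied over the years and are more involved; the threshold and the one-for-one bonus rate here are invented purely for illustration:

```python
def apply_bonus(old_rating: float, new_rating: float,
                threshold: float = 32.0) -> float:
    """If a player's gain in one event exceeds `threshold`, add the excess
    again as bonus points created from outside the pool, so a fast
    improver's rise is not funded entirely by his opponents' losses."""
    gain = new_rating - old_rating
    bonus = max(0.0, gain - threshold)
    return new_rating + bonus

# A 60-point gain exceeds the 32-point threshold by 28, so 28 bonus
# points are injected: apply_bonus(1500.0, 1560.0) -> 1588.0
# A modest 10-point gain earns no bonus:
#   apply_bonus(1500.0, 1510.0) -> 1510.0
```

The design point is that the bonus comes from nowhere: opponents still lose only the normal zero-sum amount, so the injected points offset the deflation that improving players would otherwise cause.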

This is somewhat similar to the way the Federal Reserve Board tweaks
interest rates and the money supply depending on changes in economic
factors such as “new housing starts.”

In short, this is the way the USCF Rating System should work. It does
not work that way yet as far as I know, however.

One way to gauge the decline in the rating system is to look at the
results of the US Open. The same type of player tends to play in the
US Open year after year and there are few scholastic players, who
otherwise have a big, unpredictable impact on the overall system.

When the USCF Rating System was first started in 1950, every player
who got an even score of 6-6 in the US Open was assigned a rating of
2000.

In the 2007 US Open, the average or median player who completed all
the games and got an even score was Donna Alarie, who ended the
tournament with a rating of 1754.

From this, we see that the rating system has declined 246 points in
the last 57 years from 1950 until 2007. However, almost all of that
decline occurred during the first few years after the rating system
started, when the forces affecting the rating system were not yet
understood. My first US Open was the 1959 US Open in Omaha and the
average player in that event was about 1750 as well. This indicates
that over-all, the rating statisticians are doing an admirable job of
keeping the rating system steady.

On the other hand, when the politicians get their hands on it, the
rating system tends to spike upwards or downwards. Examples are the
1980-1983 period when “fiddle points” were introduced and some players
experienced huge jumps in their ratings, and the 1998-2000 period when
almost everybody (not only me) suffered a serious ratings drop.

Players take their ratings seriously. In my own case, my rating held
steady, never deviating much more than 50 points from a 2050-2100
base, over the 33-year period from 1965 until 1998, and then it
dropped 183 points from 1998 until 2000. I am anxious to know whether
this decline occurred due to my old age, to a general decline in the
rating system, to the availability of chess databases, or to other
factors. On this latter point, many of my games are now published on
ChessBase. My opponents nowadays often look up my games and study them
before they play me. This may explain why my no-longer-secret opening
tricks no longer work as well.

Elo Type systems have been developed for other sports such as ping
pong and tennis. Bill Goichberg has devised an Elo-Type System to play
the race horses. However, the Elo System does not work as well with
other sports. The main reason seems to be that chess players, once
they reach their peak, will stay at the same level for as long as 30
or 40 years. Thus, there is the rule, “Once Rated – Always Rated.”
Players in physical sports only stay at their top level for a few
years. Professional Football players, for example, have an average
expectancy of only six years where they can play professionally.

Another question is why some people play chess better than others.
What special talents and abilities make a strong chess player?
Although many have asked that question, nobody has found a conclusive
answer. The only things that can be said are that almost all top level
chess players have powerful memories and almost all masters played
competitive chess before they were ten years old. It is similar to
learning a foreign language. Once you pass a certain age, you can
never learn to speak a new language without at least a trace of a
foreign accent.

Arpad Elo died on November 5, 1992 in Brookfield, Wisconsin. The chess
rating system in general, and the Elo System in particular, have
greatly increased the popularity of tournament chess. More than 600,000 chess
players now have USCF ratings. USCF membership is required for anyone
to have a rating. Before the chess rating system was introduced in
1950, the USCF had fewer than a thousand members. Nowadays, the USCF
has 86,000 members.

Most players nowadays enter tournaments to gain rating points, not to
win money. Occasionally, some organizers try to hold a non-rated
tournament. The result is invariably that far fewer players show up.



Sam Sloan
April 24, 2008


 
Date: 23 Apr 2008 22:21:47
From: Ray Gordon, creator of the "pivot"
Subject: Re: Chess Ratings
The Elo system is not perfect, and requires further adjustment in any
individual form of competition. For example, in basketball, K has to be
scaled so that the ratings can generate pointspreads, and one must use
"replays" or re-rating the past month or so, with caps, to account for teams
who suddenly improve or decline, as the system is too slow to catch it.

The problem with Elo and chess is that chess knowledge is not fixed, so a
zero-sum system makes no sense. The "expanding rating universe" you see
with the methods used is actually more accurate.


--
Ray Gordon





  
Date: 23 Apr 2008 23:18:36
From: Kenneth Sloan
Subject: Re: Chess Ratings
Ray Gordon, creator of the "pivot" wrote:
> The Elo system is not perfect, and requires further adjustment in any
> individual form of competition. For example, in basketball, K has to be
> scaled so that the ratings can generate pointspreads, and one must use
> "replays" or re-rating the past month or so, with caps, to account for teams
> who suddenly improve or decline, as the system is too slow to catch it.
>
> The problem with Elo and chess is that chess knowledge is not fixed, so a
> zero-sum system makes no sense. The "expanding rating universe" you see
> with the methods used is actually more accurate.
>
>

The Elo system is not zero-sum.

--
Kenneth Sloan [email protected]
Computer and Information Sciences +1-205-932-2213
University of Alabama at Birmingham FAX +1-205-934-5473
Birmingham, AL 35294-1170 http://KennethRSloan.com/