Online game data XML access

Game of universal domination. New dice available free upon request.
User avatar
dustin
Lux Creator
Posts: 10998
Joined: Thu May 15, 2003 2:01 am
Location: Cascadia
Contact:

Online game data XML access

Post by dustin » Wed Apr 02, 2008 1:43 am

Sillysoft now offers public access to XML data of all the Lux games played online. You can use this to build your own ranking system, or tools based on ours.

There is a dynamic API to get recent games as they come in. It will only return games after 300,000 right now.

It will return a maximum of 5,000 games at a time.

Mode 1:
XML for the last N games played. Minimum of 10 and maximum of 5000. Example URL:
http://sillysoft.net/lux/xml/gameHistor ... tGames=100

Mode 2:
XML for games since given game ID (above 600,000). Example URL:
http://sillysoft.net/lux/xml/gameHistor ... ame=605000

Mode 3:
XML for games between a certain ID range (above 600,000). Example URL:
http://sillysoft.net/lux/xml/gameHistor ... ame=600200

Cache the data you get back please. Don't make lots of requests for large sets.

If you want all the card/continent options for every game, add &includeOptions to the URL. It all comes as one text string.

The seed given is always according to the current standing, not the historical standing from the game date.
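
Since the feed is plain XML over HTTP, a fetch-and-parse sketch is short. The example URLs above are truncated, so the endpoint path, the parameter name, and the XML element/attribute names used below are guesses for illustration only, not the documented API:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Assumed endpoint path -- the URLs in the post above are truncated,
# so this path and the parameter name below are guesses.
BASE = "http://sillysoft.net/lux/xml/gameHistory.php"

def fetch_last_games(n):
    """Mode 1: fetch XML for the last n games (10 <= n <= 5000)."""
    url = BASE + "?lastGames=%d" % n          # parameter name assumed
    with urllib.request.urlopen(url) as resp:
        return resp.read()

# The element and attribute names here are also hypothetical; parse
# whatever structure the feed actually returns the same way.
sample = b"""<games>
  <game id="605001"><player name="alice" place="1"/>
                    <player name="bob" place="2"/></game>
</games>"""

root = ET.fromstring(sample)
parsed = [(g.get("id"), [p.get("name") for p in g.findall("player")])
          for g in root.findall("game")]
print(parsed)   # [('605001', ['alice', 'bob'])]
```

Per the request above, cache what you download instead of re-fetching large ranges.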

Happy hacking. 8)
Last edited by dustin on Wed Mar 29, 2017 6:04 pm, edited 3 times in total.

User avatar
n00less cluebie
Lux Cantor
Posts: 8377
Joined: Sun Jan 06, 2008 8:55 am
Location: At the Official Clown Reference Librarian Desk--'All the answers you weren't looking for.'
Contact:

Re: Game data XML access

Post by n00less cluebie » Wed Apr 02, 2008 5:14 am

What a wonderful idea!

Now we can all implement our own Ranking systems, and we can see what works and what doesn't.

Thanks!

User avatar
AquaRegia
Lux Ambassador
Posts: 3721
Joined: Sat Jan 01, 2005 6:20 am
Location: Lounging once more at the mods' retirement villa
Contact:

Post by AquaRegia » Wed Apr 02, 2008 7:47 am

∞ AquaRegia trusts that kitty on cracknip, paranoiarodeo, and all the other dissatisfied geniuses will get to work IMMEDIATELY ∞

User avatar
nimrod7
Clown Prince
Posts: 9685
Joined: Thu Apr 12, 2007 8:51 pm
Location: Under the big top
Contact:

Post by nimrod7 » Wed Apr 02, 2008 7:48 am

L M A O @ geniuses

User avatar
Dominator
The Man
Posts: 1291
Joined: Sat Mar 25, 2006 5:00 pm

Post by Dominator » Wed Apr 02, 2008 10:04 am

AquaRegia wrote:∞ AquaRegia trusts that kitty on cracknip, paranoiarodeo, and all the other dissatisfied geniuses will get to work IMMEDIATELY ∞
some have already started before this was posted....

User avatar
The Wontrob
Ninja Doughboy
Posts: 2792
Joined: Wed Oct 03, 2007 9:56 pm
Location: The Pan-Holy Church, frollicking

Post by The Wontrob » Wed Apr 02, 2008 1:30 pm

Good idea Dustin, but I will leave it to those more intelligent than I.

User avatar
Big Will E Style
RAW Dogger
Posts: 2943
Joined: Tue Oct 24, 2006 1:28 am
Location: Los Angeles, California

Post by Big Will E Style » Wed Apr 02, 2008 2:05 pm

intelligent?? or dorky?? :wink:

User avatar
Dominator
The Man
Posts: 1291
Joined: Sat Mar 25, 2006 5:00 pm

Post by Dominator » Wed Apr 02, 2008 6:41 pm

Is there any way to use data from a player's ranking page such as wins, most common maps, awards, etc.?

User avatar
Kain Mercenary
Luxer
Posts: 201
Joined: Mon May 23, 2005 4:21 pm
Location: OMEGA HQ

Post by Kain Mercenary » Wed Apr 02, 2008 7:31 pm

Dominator wrote:Is there any way to use data from a player's ranking page such as wins, most common maps, awards, etc.?
You can get wins and most common maps from the backlog of game information that Dustin provided. Alternatively, you can scrape the site, but I don't know how Dustin feels about that...

User avatar
Dominator
The Man
Posts: 1291
Joined: Sat Mar 25, 2006 5:00 pm

Post by Dominator » Wed Apr 02, 2008 7:43 pm

How about something like weekly rankings for systems such as Kain's?

Would you have to separate all the games based on resets??

User avatar
The Wontrob
Ninja Doughboy
Posts: 2792
Joined: Wed Oct 03, 2007 9:56 pm
Location: The Pan-Holy Church, frollicking

Post by The Wontrob » Thu Apr 03, 2008 4:54 pm

The Wontrob wrote:Good idea Dustin, but I will leave it to those more intelligent than I.
Big Willie wrote:intelligent?? or dorky?? :wink:
I hope dorky... but I fear they may be related.

User avatar
GregM
Luxer
Posts: 252
Joined: Wed Jun 01, 2005 4:33 pm

Post by GregM » Mon Apr 07, 2008 5:34 am

Nice idea, Dustin!

I've grabbed the games from the week that ended a few hours ago. I have an outline of a basic ranking program running that produces results vaguely similar to the official rankings.

Among other things, this will allow stuff like computing separate rating lists for different maps.

User avatar
Bertrand
Reaper Creator
Posts: 568
Joined: Mon Nov 28, 2005 4:35 pm
Location: Montreal

Post by Bertrand » Mon Apr 07, 2008 8:24 am

I've put up the results of my zero-sum alternative scoring system here : http://sillysoft.net/wiki/?Alternative% ... g%20System

The page is also accessible from the wiki front page http://sillysoft.net/wiki/ (last link on the page).

I'll keep it up to date for a few weeks.
GregM wrote:I have an outline of a basic ranking program running that produces results vaguely similar to the official rankings.
Cool! Greg, it would be fun if you could post your results on the Alternative Scoring System page.
Last edited by Bertrand on Mon Apr 07, 2008 11:13 am, edited 1 time in total.

User avatar
Scad
Lux Elder
Posts: 2521
Joined: Sun Aug 13, 2006 6:53 am
Location: Walking through the woods on a snowy evening

Post by Scad » Mon Apr 07, 2008 8:53 am

Bertrand, a problem I see is that with this less differentiated system, there's a greater possibility of ties. Raw usually doesn't produce ties because the differences are pretty significant, but with much smaller numbers the ties will likely go up. How would your system solve this?

Also, I wonder if there are some interesting patterns regarding cumulative weekly scores over the long term... something like seeding, maybe? Or perhaps just an average score-per-week stat.

User avatar
Bertrand
Reaper Creator
Posts: 568
Joined: Mon Nov 28, 2005 4:35 pm
Location: Montreal

Post by Bertrand » Mon Apr 07, 2008 9:23 am

Scad wrote:Bertrand, a problem I see is that with this less differentiated system, there's a greater possibility of ties. Raw usually doesn't because the difference is pretty significant, but with much smaller numbers the ties will likely go up. How would your system solve this?
My system is just a quick hack, an experiment to see if zero-sum makes sense. It would need further tweaking to make it a "production ready" system. It does have its flaws. Someone could win simply by stopping after a lucky streak, for example.

Dustin's current system has the advantage of being "slightly positive", rewarding the frequent player and preventing ties. It's a good setup, better than my system I think.
Scad wrote:Also, I wonder if there are some interesting patterns regarding cumulative weekly scores over the long term... something like seeding, maybe? Or perhaps just an avg score/ per week stat
Good idea, I've put up a long term link here: http://sillysoft.net/wiki/?Long%20term
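
For readers following along, the zero-sum property Bertrand describes can be illustrated in toy form. This is not his actual formula from the wiki page, and the 10-point stake is an arbitrary assumed value; the point is only that whatever the winner gains, the losers lose in equal total, so the sum of all scores never moves:

```python
# Toy zero-sum settlement -- NOT Bertrand's actual formula.
# The 10-point stake is an arbitrary assumed value.
def settle_game(scores, winner, losers, stake=10):
    scores[winner] += stake                 # winner gains the stake...
    for p in losers:
        scores[p] -= stake / len(losers)    # ...losers pay it in equal shares

scores = {"alice": 0, "bob": 0, "carol": 0}
settle_game(scores, "alice", ["bob", "carol"])
print(scores)   # {'alice': 10, 'bob': -5.0, 'carol': -5.0} -- still sums to 0
```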

User avatar
Dominator
The Man
Posts: 1291
Joined: Sat Mar 25, 2006 5:00 pm

Post by Dominator » Mon Apr 07, 2008 9:32 am

Another problem is alias use. Just like dustin's first system, this rewards players with a low seed. If I played under an alias I could rack up many more points with a low risk of losing.

ALSO, someone pointed out that applying new equations to old data could be somewhat irrelevant. Players change the way they play or who they play with based on the ranking system in place. For example, when dustin's RAW system was in place everyone played more games because there was less risk in losing.

Another example is if you re-calculated last years NFL season. Imagine touchdowns worth 8 and field goals worth 4. The decisions that the coaches made during the season would not matter now.

Using the XML data can show that your equation works in theory, however everything can change when it is actually put into practice. Even the tiniest loophole could be taken advantage of... Alias use for example.

User avatar
Bertrand
Reaper Creator
Posts: 568
Joined: Mon Nov 28, 2005 4:35 pm
Location: Montreal

Post by Bertrand » Mon Apr 07, 2008 9:51 am

Dominator wrote: ALSO, someone pointed out that applying new equations to old data could be somewhat irrelevant. Players change the way they play or who they play with based on the ranking system in place.
Very true, I think it's Para who pointed this out. I agree that the results do not mean much.

But having the game data on-line is still infinitely better than what we had before. It enables us to find the obvious problems in our new systems. Dustin's last week "experimental disaster" could have been avoided by running the new formula against the historical database.

User avatar
GregM
Luxer
Posts: 252
Joined: Wed Jun 01, 2005 4:33 pm

Post by GregM » Tue Apr 08, 2008 3:39 pm

Dominator wrote:ALSO, someone pointed out that applying new equations to old data could be somewhat irrelevant. Players change the way they play or who they play with based on the ranking system in place. For example, when dustin's RAW system was in place everyone played more games because there was less risk in losing.

Another example is if you re-calculated last years NFL season. Imagine touchdowns worth 8 and field goals worth 4. The decisions that the coaches made during the season would not matter now.
If we ignore the above (because there's really no way to compensate for it) how about this as an objective method of evaluating ranking systems: the ranking of players at the beginning of a game should predict the outcome of that game as often as possible.

Perhaps the following measurement would be good: if A is ranked above B, A should place higher than B in a game. The best ranking system is the one with the most successes in making this kind of prediction.

Random rankings give, of course, 50% accuracy. I ran a little test on games #500000-600000 and dustin's raw system seems to have a success rate of 59% -- that is, someone with higher raw will do better than someone with lower raw 59% of the time. If you look at seeds instead of raw (using raw to compare unseeded players) the success rate goes up to 61%. Not bad, given the unpredictability of Lux. But surely it can be improved.
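
GregM's measurement is easy to sketch: for every pair of players in a game, count a success when the higher-rated player finishes in a better (lower) place, skipping pairs with equal ratings. The ratings and results below are made up for illustration; the real test would run over the XML game history:

```python
from itertools import combinations

def pairwise_accuracy(games, rating):
    """Fraction of player pairings where the higher-rated player placed better."""
    hits = total = 0
    for results in games:                    # results: list of (player, place)
        for (a, pa), (b, pb) in combinations(results, 2):
            if rating[a] == rating[b]:
                continue                     # no prediction possible
            predicted = a if rating[a] > rating[b] else b
            actual = a if pa < pb else b     # lower place number is better
            hits += predicted == actual
            total += 1
    return hits / total

# Made-up ratings and game outcomes for illustration only.
ratings = {"alice": 1400, "bob": 1100, "carol": 900}
games = [[("alice", 1), ("bob", 2), ("carol", 3)],
         [("bob", 1), ("alice", 2), ("carol", 3)]]
print(pairwise_accuracy(games, ratings))    # 5 of 6 pairings predicted correctly
```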

User avatar
kitty on catnip
Lux Elder
Posts: 2209
Joined: Tue Jun 06, 2006 12:34 pm
Location: BACK IN THE FORUMS...
Contact:

Post by kitty on catnip » Tue Apr 08, 2008 4:07 pm

wow Greg, no one has ever posted data like that before. That is very interesting!

So, if you compare calculations based on skill, the player whose skill calculation is above a lesser one should hypothetically be winning better than 61% of the time... I completely agree here. I think 70-75% would be a much better goal to strive for. Any higher than that is shooting too high, for even the best players only win 1 out of 3 games, or slightly more...

Any ideas, you mathematical geniuses? I can only offer theories; I cannot make a specific mathematical equation that would ever work properly...

User avatar
Kain Mercenary
Luxer
Posts: 201
Joined: Mon May 23, 2005 4:21 pm
Location: OMEGA HQ

Post by Kain Mercenary » Tue Apr 08, 2008 5:58 pm

GregM wrote:
Dominator wrote:ALSO, someone pointed out that applying new equations to old data could be somewhat irrelevant. Players change the way they play or who they play with based on the ranking system in place. For example, when dustin's RAW system was in place everyone played more games because there was less risk in losing.

Another example is if you re-calculated last years NFL season. Imagine touchdowns worth 8 and field goals worth 4. The decisions that the coaches made during the season would not matter now.
If we ignore the above (because there's really no way to compensate for it) how about this as an objective method of evaluating ranking systems: the ranking of players at the beginning of a game should predict the outcome of that game as often as possible.

Perhaps the following measurement would be good: if A is ranked above B, A should place higher than B in a game. The best ranking system is the one with the most successes in making this kind of prediction.

Random rankings give, of course, 50% accuracy. I ran a little test on games #500000-600000 and dustin's raw system seems to have a success rate of 59% -- that is, someone with higher raw will do better than someone with lower raw 59% of the time. If you look at seeds instead of raw (using raw to compare unseeded players) the success rate goes up to 61%. Not bad, given the unpredictability of Lux. But surely it can be improved.
The problem with your idea is that most people play to the system. Therefore, the scores don't show skill at the game, they show an ability to play within the system. In other words, if someone were to create a new scoring method that more accurately predicts who will win a game under the current system, that method will only maintain that accuracy if we continue to play under the current system. Adopting that new scoring method would, in all likelihood, change the behavior of players within games. It would also have a significant effect on the games players choose to play in as well as the opponents they would play against.

Basically, this boils down to what Dominator said (someone said). Applying new formulas to old game data is dangerous. The only way to accurately measure the results of a scoring system is to put it into play and make a judgment based on the effects.

User avatar
GregM
Luxer
Posts: 252
Joined: Wed Jun 01, 2005 4:33 pm

Post by GregM » Tue Apr 08, 2008 8:28 pm

Kain Mercenary wrote:Applying new formulas to old game data is dangerous.
Agreed, but it's the best we've got short of a live test, which is expensive in time, and using past data should at least give some sense of what to expect.

User avatar
Kain Mercenary
Luxer
Posts: 201
Joined: Mon May 23, 2005 4:21 pm
Location: OMEGA HQ

Post by Kain Mercenary » Tue Apr 08, 2008 8:51 pm

GregM wrote:
Kain Mercenary wrote:Applying new formulas to old game data is dangerous.
Agreed, but it's the best we've got short of a live test, which is expensive in time, and using past data should at least give some sense of what to expect.
My point was that applying a new formula to old data only predicts how someone would play under the old system and not necessarily the new one. Therefore, your percentage system is somewhat irrelevant in predicting behavior under a new system.

Perhaps running concurrent, officially supported (RAW: 1200, Bertrand: 42), systems is the best way to figure out what works and what doesn't. If enough people like an alternate system and its scoring algorithm, it may lead to that being the 'official' system.

I think the use of a percentage meter is an ineffective way of showing how well a system will perform when in actual use.

EDIT: Woah, and I'm currently in first under Bertrand's system! Perhaps it doesn't work so well. :-P

User avatar
GregM
Luxer
Posts: 252
Joined: Wed Jun 01, 2005 4:33 pm

Post by GregM » Tue Apr 08, 2008 9:02 pm

I've implemented Bertrand's system and run this test on it; on the past week it predicted 61% of pairings, the same as predicting based on seed and raw. Over games #500000-600000 it gets 63%, a bit better than seed/raw's 61%.

However, I left out this condition, which drops predictive power to 59%:
Bertrand wrote:But if the winner is already a "positive" player, and the average of the other players is negative, then it was an easy win, and I simply eliminate the base score from the calculation and only keep the skills score.
----
Kain Mercenary wrote:I think the use of a percentage meter is an ineffective way of showing how well a system will perform when in actual use.
Clearly, but I can't think of a better way to get an initial estimate of a system's effectiveness. Any ideas?

OK, an example of how this analysis is flawed: eliminating skill points from Bertrand's system still gives a predictive accuracy of 61%, but it's very clear that this system could be exploited by cooperating to play a lot of games and having one person win, since their winnings don't diminish as their opponents get worse.

User avatar
Bertrand
Reaper Creator
Posts: 568
Joined: Mon Nov 28, 2005 4:35 pm
Location: Montreal

Post by Bertrand » Tue Apr 08, 2008 9:06 pm

GregM wrote:If we ignore the above (because there's really no way to compensate for it) how about this as an objective method of evaluating ranking systems: the ranking of players at the beginning of a game should predict the outcome of that game as often as possible.

Perhaps the following measurement would be good: if A is ranked above B, A should place higher than B in a game. The best ranking system is the one with the most successes in making this kind of prediction.

Random rankings give, of course, 50% accuracy. I ran a little test on games #500000-600000 and dustin's raw system seems to have a success rate of 59% -- that is, someone with higher raw will do better than someone with lower raw 59% of the time. If you look at seeds instead of raw (using raw to compare unseeded players) the success rate goes up to 61%. Not bad, given the unpredictability of Lux. But surely it can be improved.
That's a brilliant insight. It's a very interesting way of comparing different scoring systems. So I couldn't resist trying it with my system.

Did you filter out the matches where RAW was close to being equal? Since those matches can not be predicted, they represent "noise" that has to be removed from the final result.

My system does pretty well: using the long-term statistics (3 weeks worth of games), and filtering out the matches where the score difference was less than 20, the successful prediction percentage was 74%. This is significantly higher than chance, so it proves that my system actually means something.

User avatar
Kain Mercenary
Luxer
Posts: 201
Joined: Mon May 23, 2005 4:21 pm
Location: OMEGA HQ

Post by Kain Mercenary » Tue Apr 08, 2008 9:12 pm

Bertrand wrote:
GregM wrote:If we ignore the above (because there's really no way to compensate for it) how about this as an objective method of evaluating ranking systems: the ranking of players at the beginning of a game should predict the outcome of that game as often as possible.

Perhaps the following measurement would be good: if A is ranked above B, A should place higher than B in a game. The best ranking system is the one with the most successes in making this kind of prediction.

Random rankings give, of course, 50% accuracy. I ran a little test on games #500000-600000 and dustin's raw system seems to have a success rate of 59% -- that is, someone with higher raw will do better than someone with lower raw 59% of the time. If you look at seeds instead of raw (using raw to compare unseeded players) the success rate goes up to 61%. Not bad, given the unpredictability of Lux. But surely it can be improved.
That's a brilliant insight. It's a very interesting way of comparing different scoring systems. So I couldn't resist trying it with my system.

Did you filter out the matches where RAW was close to being equal? Since those matches can not be predicted, they represent "noise" that has to be removed from the final result.

My system does pretty well: using the long-term statistics (3 weeks worth of games), and filtering out the matches where the score difference was less than 20, the successful prediction percentage was 74%. This is significantly higher than chance, so it proves that my system actually means something.
How much of a spread is 20 when compared to the range (your high and low scores)? If it's too wide, you would end up testing only extremes.

User avatar
Bertrand
Reaper Creator
Posts: 568
Joined: Mon Nov 28, 2005 4:35 pm
Location: Montreal

Post by Bertrand » Tue Apr 08, 2008 9:18 pm

Kain Mercenary wrote: How much of a spread is 20 when compared to the range (your high and low scores)? If it's too wide, you would end up testing only extremes.
A spread of 20 represents 2 average wins in my system. It's an arbitrary value that intuitively looks good. In the current RAW system, I guess the equivalent would be something like 200 or 300 RAW points.

User avatar
GregM
Luxer
Posts: 252
Joined: Wed Jun 01, 2005 4:33 pm

Post by GregM » Tue Apr 08, 2008 10:11 pm

Bertrand wrote:Did you filter out the matches where RAW was close to being equal? Since those matches can not be predicted, they represent "noise" that has to be removed from the final result.
Good point; a bigger ranking difference represents a stronger belief that A is better than B and so the evaluation system should take that into account. It would be interesting to plot the relationship between ranking difference and win probability for ranking systems under investigation.

Edit:
Here's some output from my program, testing your system: probability of successful prediction of the outcome of a pairing versus absolute value of rating difference. As hoped, predictions get much more accurate for larger differences.

Code: Select all

(1, 4)     -   8576 / 16435 = 52.1%
(4, 9)     -  10777 / 19681 = 54.7%
(9, 16)    -  10982 / 18448 = 59.5%
(16, 25)   -   8443 / 12551 = 67.2%
(25, 36)   -   6229 /  8381 = 74.3%
(36, 49)   -   3787 /  4648 = 81.4%
(49, 64)   -   1335 /  1528 = 87.3%
(64, 81)   -    286 /   315 = 90.7%
(81, 100)  -     42 /    45 = 93.3%
For comparison, a similar chart for raw, disregarding seeds:

Code: Select all

(0, 50)       -  10297 / 19410 = 53.0%
(50, 100)     -   8518 / 14841 = 57.3%
(100, 150)    -   6723 / 11216 = 59.9%
(150, 200)    -   5339 /  8657 = 61.6%
(200, 300)    -   7502 / 11624 = 64.5%
(300, 400)    -   4392 /  6481 = 67.7%
(400, 500)    -   2243 /  3161 = 70.9%
(500, 700)    -   1474 /  1876 = 78.5%
(700, 1000)   -    204 /   247 = 82.5%
(1000, 1500)  -      6 /     7 = 85.7%
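
Tables like these can be produced by bucketing each predicted pairing by absolute rating difference and tallying hits against attempts per bucket. A sketch with made-up ratings and bucket edges (not GregM's actual code):

```python
from collections import defaultdict
from itertools import combinations

def bucket_accuracy(games, rating, edges):
    """Per-bucket [hits, attempts] for pairings grouped by |rating difference|."""
    buckets = defaultdict(lambda: [0, 0])
    for results in games:                       # results: list of (player, place)
        for (a, pa), (b, pb) in combinations(results, 2):
            diff = abs(rating[a] - rating[b])
            for lo, hi in edges:
                if lo <= diff < hi:             # equal ratings fall in no bucket
                    # success when the higher-rated player placed better
                    buckets[(lo, hi)][0] += (rating[a] > rating[b]) == (pa < pb)
                    buckets[(lo, hi)][1] += 1
                    break
    return dict(buckets)

# Illustrative data only -- not real Lux ratings or games.
rating = {"alice": 30, "bob": 10, "carol": 8}
games = [[("alice", 1), ("bob", 2), ("carol", 3)]]
print(bucket_accuracy(games, rating, [(1, 4), (4, 9), (9, 100)]))
```

Dividing hits by attempts in each bucket gives the percentage column in the tables above.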
Last edited by GregM on Wed Apr 09, 2008 2:07 am, edited 1 time in total.

User avatar
Dominator
The Man
Posts: 1291
Joined: Sat Mar 25, 2006 5:00 pm

Post by Dominator » Tue Apr 08, 2008 11:01 pm

GregM wrote:
Bertrand wrote:Did you filter out the matches where RAW was close to being equal? Since those matches can not be predicted, they represent "noise" that has to be removed from the final result.
Good point; a bigger ranking difference represents a stronger belief that A is better than B and so the evaluation system should take that into account. It would be interesting to plot the relationship between ranking difference and win probability for ranking systems under investigation.
how about bots? are they included?

User avatar
GregM
Luxer
Posts: 252
Joined: Wed Jun 01, 2005 4:33 pm

Post by GregM » Wed Apr 09, 2008 2:08 am

Dominator wrote:how about bots? are they included?
Why not?

User avatar
Dominator
The Man
Posts: 1291
Joined: Sat Mar 25, 2006 5:00 pm

Post by Dominator » Wed Apr 09, 2008 10:20 am

because players generally play bots first... I'm just saying that when bots are in the game, humans behave differently than if it were a full house... I know I do.
