Hacker News

Reading more about this match and Magnus in general, I learned of a measure termed "Nettlesomeness", which has been used to measure which players do the most to cause their opponents to make mistakes. Magnus, with his highly creative style of play and unexpected moves, not surprisingly ranks highest on this measure.

He seems to have this remarkable gift of making moves that aren't just strong; they get inside his opponent's head and cause them to overthink or break down. I'm interested in the technical details behind this metric. Has anyone heard of it before?

Regardless, congrats Magnus. You are truly a generational talent, and I'm excited to see what your win will do for the game.

http://marginalrevolution.com/marginalrevolution/2013/11/net...



Thanks for the comment, the concept of nettlesomeness intrigues me.

"...Carlsen is demonstrating one of his most feared qualities, namely his “nettlesomeness,” to use a term coined for this purpose by Ken Regan. Using computer analysis, you can measure which players do the most to cause their opponents to make mistakes."

I was surprised to see that this isn't just some subjective measure but can be measured using computer analysis. In chess this can be a great tool against one's opponents, but in collaborative endeavors it can be a detriment to team productivity. I wonder whether the same analysis could be used to pinpoint nettlesome members of a team, e.g. members of an open source project whose contributions sidetrack collaborators and cause them to make more trackable errors...


I think this is actually based on subjective measures. An annotated game[0] will have ?!, ?, and ?? added by human commentators to indicate varying levels of mistakes. A computer analysis can make use of these subjective move evaluations within an annotated game to easily measure which players cause their opponents to make mistakes at a significantly greater frequency than their normal rate of mistakes.

[0] http://en.wikipedia.org/wiki/Chess_annotation_symbols#Move_e...
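For instance, the counting could be as crude as tallying those annotation marks per side. A toy sketch, assuming plain movetext with no comments or variations (real annotated PGN would need a proper parser):

```python
import re

# Count human-annotated mistake marks (?!, ?, ??) per side from a
# PGN-style movetext string. Assumes plain movetext only; comments,
# variations, and results would need real PGN parsing.
def count_mistakes(movetext):
    counts = {"White": {"?!": 0, "?": 0, "??": 0},
              "Black": {"?!": 0, "?": 0, "??": 0}}
    ply = 0
    for token in movetext.split():
        if re.fullmatch(r"\d+\.(\.\.)?", token):   # move numbers: "15." or "15..."
            continue
        side = "White" if ply % 2 == 0 else "Black"
        for mark in ("??", "?!", "?"):             # check longest marks first
            if token.endswith(mark):
                counts[side][mark] += 1
                break
        ply += 1
    return counts

game = ("1. e4 e5 2. Bc4 Nf6 3. d4 exd4 4. Nf3 Bb4+ 5. Bd2 Bxd2+ "
        "6. Qxd2 O-O 7. e5 d5 8. Qxd4? Nc6 9. Qd2 dxc4 10. Qxd8 Rxd8")
print(count_mistakes(game))   # White gets one "?" (8. Qxd4?)
```

Comparing a player's mistake counts when facing one opponent against their counts in all other games would give the "greater than their normal rate" signal described above.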


The computer program Fritz actually has algorithms that imitate a human's analysis. It's one of my favorite tools for improving my game, since it searches databases of master-level games to find games that progressed the same way, and also gives "human-like" annotations. Here's an example PGN[1] from one of my games (the NAGs[2] have been converted to Unicode for the sake of clarity):

    {C24: Bishop's Opening: 2...Nf6} 
    1. e4 e5 2. Bc4 Nf6 3. d4 exd4 4. Nf3 Bb4+ 5.Bd2 Bxd2+ 6. Qxd2 O-O 
    7. e5 {White threatens to win material: e5xf6} 
        (7. O-O Nxe4 8. Qxd4 Nd6 9. Bd5 c6 10. Bb3 Nf5 11. Qf4 d5 
        12. Nbd2 Qd6 13. Qxd6 Nxd6 14. c4 dxc4 15. Nxc4 Nxc4 16. Bxc4 Bf5 
        17. Rfe1 Nd7 18. Nd4 Bg6 19. Re7 Rad8 20. Rd1 Rfe8 21. Nxc6 bxc6 
        {Jensen,B (1817)-Gade,J (1535) Helsingor 2009 1-0 (58)}) 
    7... d5N  {Black threatens to win material: d5xc4}
        (7... Re8 8. O-O d5 9. exd6 Qxd6 10. Na3 c5 11. Nb5 Qb6 
        12. Qf4 Re7 13. Rae1 Be6 14. Bxe6 fxe6 15. Nd6 Qc7 16. Qg3 Nc6 
        17. Ng5 e5 18. Nf5 Ree8 19. Nf7 Qxf7 20. Nh6+ 
        {1-0 (20) Le Dref,M (1770)-Bremond,E (1600) Angers 2006}) 
    8. Qxd4? 
        (8. ⌓Bb3 {and White is still in the game} Ne4 9. Qxd4=) 
    8... Nc6∓ 9. Qd2 dxc4 
        (9... Bg4!? 10. Be2 Bxf3 11. Bxf3 Nxe5 −+) 
    10. Qxd8⩱ Rxd8 11. exf6 {White king safety dropped.} Re8+ 
    12. Kd1 
        (12. Kf1 gxf6 13. h4 Be6∓)
    12... Bg4 13. Nbd2 Rad8 {Black has a king attack}
        (13... Re6!?∓)
    14. Re1⩱ Ne5 15. c3?? 
        (15. ⌓Re3⩱ {was necessary})
    15... Bxf3+ −+ 16. gxf3 
        (16. Kc2 {doesn't do any good} Bg4 17. f3 Bf5+ 18. Ne4 Ng4 −+) 
    16... Rxd2+! {Deflection: e1} 17. Kxd2 
        (17. Kc1 Rxf2 18. Re4 Nd3+ 19. Kd1 Nxb2+ 20. Ke1 Nd3+ 21. Kd1 Rxe4 22.
        fxe4 Rf1+ 23. Kd2 Rxa1 24. a3 Rxa3 25. Ke3 gxf6 26. h4 Ne5 27. Kd2 Ra2+ 
        28. Ke1 b5 29. h5 b4 30. Kf1 b3 31. Ke1 b2 32. Kf2 b1=Q+ 33. Kg3 Qg1+ 
        34. Kh4 Qh2#)
    17... Nxf3+ 18. Kc2 Nxe1+ 19. Kd2 
        (19. Kd1 {the last chance for counterplay} Nf3 20. Rc1 Re1+ 21. Kc2 Re2+ 
        22. Kb1 Nd2+ 23. Ka1 Rxf2 24. Rd1 Rxh2 25. Rg1 −+) 
    19... Nf3+ 20. Kc2 Re2+ 
        (20... Re2+ 21. Kb1 Re1+ 22. Kc2 Rxa1 −+) 0-1
[1] Portable Game Notation http://en.wikipedia.org/wiki/Portable_Game_Notation

[2] Numeric Annotation Glyphs http://en.wikipedia.org/wiki/Numeric_Annotation_Glyphs


Since computers play chess (better than humans) by looking at candidate moves and ranking them as advantageous or disadvantageous, I would think it would be relatively straightforward to use a computer to decide whether a move is a mistake.


You can't definitively say that a computer's move is best; the branching factor of chess means that searching the game tree all the way to checkmate is nearly impossible. This leads to the horizon effect:

https://en.wikipedia.org/wiki/Horizon_effect

but advances such as Deep Blue's search extensions address it: branches of the game tree whose rankings are too close to call are searched more deeply, until one can be confidently regarded as the best.

To summarize, chess engines can't give definitive answers on how good a move is because, like humans, they can't search the whole game tree.


This applies in the earlier stages of the game. In the endgame, when there are fewer pieces, computers can find the best move. There are many instances where, say, you reach an endgame with some nine pieces, and the computer will find a mate in 18 moves.


Apparently Carlsen (and presumably other Grandmasters) is also able to look ahead 15-20 moves.

http://content.time.com/time/world/article/0,8599,1948809,00...


One of the reasons computers are so much stronger than humans, especially in the endgame, is because they have incredibly large pre-calculated tablebases[1] of 3-, 4-, 5- and 6-piece positions. So, if a computer calculates ahead 15 moves and reaches a 6-piece ending, it already knows the exact evaluation of this position. And in some of these cases, the number of moves from that position to the final position (forced draw or checkmate) can be 10-100 moves [2]. But it is already calculated; the computer doesn't have to go any further. So in effect, a computer calculating 15 or 20 moves ahead in a late-stage position can actually be calculating 50 or 100 moves ahead.

This is why the second part of Carlsen's quote is so important:

"Sometimes 15 to 20 moves ahead. But the trick is evaluating the position at the end of those calculations."

A human sees a late-game position and has to actually calculate it all the way to its conclusion, or at least to a key position where they are confident of the evaluation [3]. A computer doesn't have to, and believe me, some of those 6-piece endgame positions are really weird. Move a piece just one innocuous-looking square and it can alter the outcome from a win to a draw. There have in fact been many endgame positions throughout chess history that were considered solved, only for the evaluation to change after a brute-force calculation.

I'm not particularly good at chess [4], but in slow games I will routinely calculate 10 moves ahead in some sharp positions. Now, I'm not comparing my calculating ability to a GM, which is obviously better in every way, but the fact that a GM calculates 15-20 moves ahead is not why they are good. It is that at every step of that calculation they are evaluating the position incredibly accurately and are deciding the right moves to calculate.

In fact, due to the horizon effect, the reason computers got a lot better than humans wasn't due to raw calculation speed, it was due to massive improvements in their evaluation ability (like endgame tablebases) allowing them to put that massive calculation advantage to good use.

1: https://en.wikipedia.org/wiki/Endgame_tablebase

2: All of which are already in the tablebase by definition, because you can't add to the number of pieces on the chess board

3: For example, if I can calculate a position to where I have a rook and a king and you only have a king, I don't have to think any further, I know this is a win for me.

4: ~1300 rating
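The "already calculated" trick above comes from retrograde analysis: label the terminal positions first and work backwards, so that during play the engine only ever needs a lookup. A minimal sketch on an abstract game graph (the positions and moves here are invented placeholders, nothing like real chess):

```python
# Toy retrograde analysis, the idea behind endgame tablebases.
# moves: {position: [successor positions]}, where the side to move
# alternates with each edge; terminal_losses: positions where the side
# to move has already lost (e.g. is checkmated).
def build_tablebase(moves, terminal_losses):
    table = {p: ("loss", 0) for p in terminal_losses}
    changed = True
    while changed:                      # iterate until labels stabilize
        changed = False
        for pos, succs in moves.items():
            if pos in table:
                continue
            # Win if any move leads to a position lost for the opponent.
            losing = [table[s][1] for s in succs
                      if s in table and table[s][0] == "loss"]
            if losing:
                table[pos] = ("win", min(losing) + 1)
                changed = True
            # Loss if every move leads to a position won for the opponent.
            elif all(s in table and table[s][0] == "win" for s in succs):
                table[pos] = ("loss", max(table[s][1] for s in succs) + 1)
                changed = True
    return table

# Tiny abstract game: from "a" the mover can go to "b" or "c"; "mate" is
# checkmate for whoever must move there.
moves = {"a": ["b", "c"], "b": ["mate"], "c": ["d"], "d": ["mate"]}
tb = build_tablebase(moves, terminal_losses={"mate"})
print(tb["a"])   # ('win', 3): only the line through "c" wins
```

Real tablebases do essentially this over every legal placement of a given piece set, which is why a 6-piece lookup can stand in for 100 further moves of search.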


It would only be able to tell whether the move was a mistake against a computer. A move that would be bad against a computer opponent might throw a human opponent off balance through surprise, or take the game into a type of position that is unfamiliar and disorienting to them.


I suspect that's precisely why the computer can measure it. This is the scenario:

1. Player A makes a move the computer considers suboptimal.

2. Disoriented player B responds with another move the computer considers an important mistake.

3. Player A capitalizes with moves the computer thinks improve his position, even relative to the original baseline.

4. The computer concludes that A is nettlesome.

So it's measuring the delta of what it considers optimal with what actually happens against real humans.
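The steps above amount to comparing opponents' engine-measured error rate against a given player with their baseline error rate. A sketch with invented centipawn numbers (real work like Regan's uses far more careful statistics):

```python
# For each opponent move we have the engine's evaluation of the best
# available move and of the move actually played (centipawns, from the
# opponent's point of view); the difference is that move's "error".
def average_error(move_pairs):
    return sum(best - played for best, played in move_pairs) / len(move_pairs)

# Hypothetical data: (engine-best eval, eval of move played) for an
# opponent's moves in games against player A, and that opponent's
# baseline against the rest of the field.
errors_vs_A = [(10, -40), (0, -10), (25, 25), (5, -120)]
baseline    = [(10, 0), (0, -5), (25, 20), (5, -10)]

nettlesomeness = average_error(errors_vs_A) - average_error(baseline)
print(nettlesomeness)   # positive: opponents err more against A than usual
```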


From the James T. Kirk school of chess.


Right - a human would have to curate which moves were mistakes. Though I think that eventually computers could take over in this category as well.


Impressive that it's scientific.

I think it also measures how much people are trained to learn all the historical games versus learn deep principles and truths of chess. The great ones should be able to thrive in chaos.

This is true in other fields. Some traders get crushed in non-standard markets. Others thrive.


I believe Regan just measures how badly a given player's opponents tend to err (as determined by computer evaluation of moves). So it's just a post hoc statistical measure, not an actual evaluation of how tricky that player's moves are.


But funnily enough, that's all you'd need for erikig's idea.

A program runs through your git/svn history and tracks the user, date, commit, and any reversions.

By analysing reversions against the commits around them, I'm fairly confident it would be easy enough to tune the analysis to be accurate.

For each reversion you encounter, you check the five next and five previous commits. If any users stand out as repeatedly appearing in the tree just before a reversion...

Actually, it may be a little more complicated than that...
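The idea might look something like this rough sketch, run on an invented in-memory commit log (a real tool would parse `git log` output; the field names here are made up):

```python
from collections import Counter

# For each reversion, look at who authored the commits in a small window
# before it, and count how often each author's work precedes a revert.
# A crude proxy for "nettlesome" contributors; data below is invented.
def nettlesome_authors(log, window=5):
    # log: oldest-to-newest list of {"author": ..., "revert": bool}
    scores = Counter()
    for i, commit in enumerate(log):
        if commit["revert"]:
            for prior in log[max(0, i - window):i]:
                scores[prior["author"]] += 1
    return scores

log = [
    {"author": "alice", "revert": False},
    {"author": "bob",   "revert": False},
    {"author": "alice", "revert": False},
    {"author": "carol", "revert": True},   # revert lands after alice/bob work
    {"author": "bob",   "revert": False},
]
print(nettlesome_authors(log, window=3))
```

As the commenter suspects, this naive count would also flag prolific authors and innocent bystanders, so a real version would have to normalize by each author's commit volume.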


This reminds me of the story of Deep Blue's buggy randomness causing Kasparov to freak out: http://www.wired.com/playbook/2012/09/deep-blue-computer-bug...


There was a checkers-playing computer called Chinook that had something similar happen at one point (iirc one of its databases was corrupted).

Definitely worth listening to the story: http://relprime.com/chinook/


Not a bug, it was a feature.


I follow Regan's chess work pretty closely and I don't know where Cowen got this term. The link provided shows intrinsic ratings, not "nettlesomeness". As shown by Regan and others, Carlsen simply plays more accurately than anyone else. Chess is probably a draw with best play, so one only wins when one's opponent makes mistakes.

Both Regan and Guid & Bratko have tried weighting the accuracy of moves by the complexity of positions faced. Carlsen is middle-of-the-pack in terms of reaching complex positions, so this doesn't seem to be the mysterious "nettlesomeness" either.


I've tried a few techniques to try getting in a person's head while playing, but I wouldn't have any idea how to measure it. Some techniques are on the board, but some aren't.

One thing I've tried is just staring at the player. I've also tried just barely glancing at the board and making the move on my turn while actually doing my studying on their turn.

The funniest thing that someone tried against me was playing a guitar while also playing chess. This was in a local tournament in college. Halfway into the game, he stopped and exclaimed "What am I doing??" and tried to play seriously from that point forward. Luckily for me, he had waited too long.


If Magnus is making strong yet unexpected moves relative to his opponents, doesn't that mean he simply has a higher degree of mastery of the game?


Well, "unexpectedness" isn't just a failure of anticipation by weaker players. A non-optimal move could be made deliberately in order to be unexpected, to play mind games with the opponent. Unexpectedness could be a factor in what makes a move strong, not just an after-the-fact observation.


Chess is a game of perfect information. Chess players at this level do not intentionally play suboptimal moves hoping to trick their opponents. They try to find the best move given the position on the board.

In some cases two moves may be roughly objectively equal, one leading to more complex positions than the other. In these cases players do sometimes choose according to the situation on the clock, their style, or their mood. For instance, if I have more time left than my opponent, I may choose the more complicated continuation, and vice versa. This sort of practical savvy is important, but not nearly as important as many casual players assume.


True, I was thinking more of the latter example. Several equally optimal options are at hand; most people would pick the one they are most familiar with. A more devious player might pick the option that the opponent is least likely to have considered, taking advantage of their unpreparedness.


I refer you to the infamous Lasker-Capablanca game where Lasker played a dull drawish game to throw Capa off his guard.


More to the point, the players have minutes to hours, not days, to study the position. It can be a very good strategy to play a surprising but perhaps less than optimal move to break the opponents' concentration and hopefully get them into time trouble. That's why correspondence and tournament chess don't really intersect apart from studying the same game.


I believe there was a Russian grandmaster in the 60s who is credited with introducing psychology into chess, because he would deliberately play moves that would stir up uncomfortable memories of defeat for his opponents (based on study of their past tournaments) and thus distract them from the game at hand. Can't remember his name though.



