Reading more about this match and Magnus in general, I learned of a measure termed "nettlesomeness", which has been used to gauge which players do the most to cause their opponents to make mistakes. Magnus, with his highly creative style of play and unexpected moves, unsurprisingly ranks highest on this measure.
He seems to have a remarkable gift for making moves that aren't just strong; they get inside his opponent's head and cause them to overthink or break down. I'm interested in the technical details behind this metric. Has anyone heard of it before?
Regardless, congrats Magnus. You are truly a generational talent, and I'm excited to see what your win will do for the game.
Thanks for the comment, the concept of nettlesomeness intrigues me.
"...Carlsen is demonstrating one of his most feared qualities, namely his “nettlesomeness,” to use a term coined for this purpose by Ken Regan. Using computer analysis, you can measure which players do the most to cause their opponents to make mistakes."
I was surprised to see that this isn't just some subjective measure but can be measured using computer analysis. In chess this can be a great tool against one's opponents, but in collaborative endeavors it can be a detriment to team productivity. I wonder whether the same analysis could be used to pinpoint nettlesome members of a team, e.g. members of an open source project whose contributions sidetrack collaborators and cause them to make more trackable errors...
I think this is actually based on subjective measures. An annotated game[0] will have ?!, ?, and ?? added by human commentators to indicate varying levels of mistakes. A computer analysis can make use of these subjective move evaluations within an annotated game to easily measure which players cause their opponents to make mistakes at a significantly greater frequency than their normal rate of mistakes.
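As a sketch, counting those annotator marks could look something like this. The move list, the weights, and the game below are all made up for illustration; real annotated PGN would need a proper parser:

```python
# Toy sketch (assumed data format): count annotated mistakes per player
# from a move list where human annotators appended NAG-style suffixes:
# "?!" (dubious), "?" (mistake), "??" (blunder).

MISTAKE_MARKS = ("??", "?!", "?")  # check "??"/"?!" before bare "?"

def mistake_counts(moves):
    """moves: list of SAN strings for one game, White's moves first.
    Returns (white_score, black_score), weighting ?? > ? > ?!."""
    weights = {"??": 3, "?": 2, "?!": 1}
    totals = [0, 0]  # index 0 = White, 1 = Black
    for i, san in enumerate(moves):
        for mark in MISTAKE_MARKS:
            if san.endswith(mark):
                totals[i % 2] += weights[mark]
                break
    return tuple(totals)

game = ["e4", "e5", "Nf3", "Nc6", "Bb5", "a6?!", "Ba4", "b5?", "Bb3", "Na5??"]
print(mistake_counts(game))  # Black's annotated errors outweigh White's
```

Comparing a player's opponents' scores against those opponents' usual rates over many games would give the frequency comparison described above.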
The computer program Fritz actually has algorithms that imitate a human's analysis. It's one of my favorite tools for improving my game, since it searches databases of master-level games to find games that progressed the same way, and also gives "human-like" annotations. Here's an example PGN[1] from one of my games (NAG[2] has been converted to Unicode for the sake of clarity).
Since computers play chess (better than humans) by looking at moves and ranking them as advantageous or disadvantageous, I would think it would be relatively straightforward to use a computer to decide whether a move is a mistake or not.
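A common way to operationalize that is "centipawn loss": compare the engine's score for the move actually played against its best alternative. Here's a minimal sketch; the `evals` dict is a stand-in for a real engine's output, and the thresholds are just one plausible convention, not a standard:

```python
# Classify a played move by how much worse it is than the engine's best
# alternative. Scores are in centipawns from the mover's point of view;
# `evals` is a hypothetical stand-in for a real engine's analysis.

def classify(evals, played):
    """evals: {move: centipawn score}; played: the move actually chosen."""
    best = max(evals.values())
    loss = best - evals[played]          # centipawn loss of the played move
    if loss >= 200:
        return "?? (blunder)"
    if loss >= 100:
        return "? (mistake)"
    if loss >= 50:
        return "?! (dubious)"
    return "ok"

evals = {"Nf3": 30, "d4": 25, "f3": -90, "g4": -250}
print(classify(evals, "d4"))  # only 5 cp worse than best -> "ok"
print(classify(evals, "g4"))  # 280 cp worse than best -> blunder
```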
You can't definitively say that a computer's move is best: the branching factor of chess means that searching the game tree until finding a checkmate is practically impossible. This leads to the horizon effect.
But advances like the singular extensions used in Deep Blue help: branches of the game tree with statistically similar rankings are searched further until one can be definitively regarded as the best one.
To summarize, chess engines can't give definitive answers on how good a move is because, like humans, they can't search the whole game tree.
This applies in the earlier stages of the game. In the endgame, when there are fewer pieces, computers can find the best move. There are many instances where, say, you reach an endgame with some nine pieces and the computer finds a mate in 18 moves.
One of the reasons computers are so much stronger than humans, especially in the endgame, is that they have incredibly large pre-calculated tablebases[1] of 3-, 4-, 5- and 6-piece positions. So if a computer calculates 15 moves ahead and reaches a 6-piece ending, it already knows the exact evaluation of that position. And in some of these cases, the number of moves from that position to the final position (forced draw or checkmate) can be 10-100 moves [2]. But it is already calculated; the computer doesn't have to go any further. So in effect, a computer calculating 15 or 20 moves ahead in a late-stage position can actually be calculating 50 or 100 moves ahead.
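To illustrate the tablebase cutoff with something self-contained, here's the same idea applied to a Nim-style subtraction game instead of chess (a toy stand-in, not how chess tablebases are actually built): small positions are solved once up front, and the search stops the moment it reaches a stored position instead of recursing further.

```python
# Toy illustration of the tablebase idea: take 1-3 stones per turn;
# taking the last stone wins. Positions with at most TB_MAX stones are
# precomputed once ("the tablebase"); search stops on a tablebase hit.

TB_MAX = 10

def build_tablebase():
    tb = {0: False}                      # 0 stones left: side to move has lost
    for n in range(1, TB_MAX + 1):
        # a position is winning if some move leads to a losing position
        tb[n] = any(not tb[n - k] for k in (1, 2, 3) if k <= n)
    return tb

TB = build_tablebase()

def winning(n, counter):
    """True if the side to move wins with perfect play from n stones."""
    counter[0] += 1                      # count visited nodes
    if n in TB:
        return TB[n]                     # tablebase hit: no deeper search
    return any(not winning(n - k, counter) for k in (1, 2, 3))

nodes = [0]
print(winning(21, nodes), "nodes searched:", nodes[0])
```

The search only has to reach a stored position, not the end of the game, which is exactly the "calculating 50 or 100 moves ahead in effect" point above.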
This is why the second part of Carlsen's quote is so important:
"Sometimes 15 to 20 moves ahead. But the trick is evaluating the position at the end of those calculations."
A human sees a late-game position and has to actually calculate it all the way to its conclusion, or at least to a key position where they are confident of the evaluation [3]. A computer doesn't have to, and believe me, some of those 6-piece endgame positions are really weird. You move a piece just one innocuous-looking square and it can alter the outcome from a win to a draw. There have in fact been many endgame positions throughout chess history that were considered solved, only for the evaluation to change after a brute-force calculation.
I'm not particularly good at chess [4], but in slow games I will routinely calculate 10 moves ahead in some sharp positions. Now, I'm not comparing my calculating ability to a GM's, which is obviously better in every way, but the fact that a GM calculates 15-20 moves ahead is not why they are good. It is that at every step of that calculation they are evaluating the position incredibly accurately and are deciding the right moves to calculate.
In fact, due to the horizon effect, the reason computers got a lot better than humans wasn't due to raw calculation speed, it was due to massive improvements in their evaluation ability (like endgame tablebases) allowing them to put that massive calculation advantage to good use.
2: All of which are already in the tablebase by definition, because you can't add to the number of pieces on the chess board
3: For example, if I can calculate a position to where I have a rook and a king and you only have a king, I don't have to think any further, I know this is a win for me.
It would only be able to tell whether the move was a mistake against computer play. A move that would be bad against a computer opponent might throw a human opponent off balance through surprise, or take the game into a type of position that is unfamiliar and disorienting to them.
I think it also measures how much people are trained to learn all the historical games versus learn deep principles and truths of chess. The great ones should be able to thrive in chaos.
This is true in other fields. Some traders get crushed in non-standard markets. Others thrive.
I believe Regan just measures how badly a given player's opponents tend to err (as determined by computer evaluation of moves). So it's just a post hoc statistical measure, not an actual evaluation of how tricky that player's moves are.
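A toy version of that post hoc statistic might look like this: for each game, take the average engine-measured error committed by a player's opponent, then average across opponents. The numbers below are entirely made up for illustration:

```python
# "Nettlesomeness" as a post hoc statistic: the mean centipawn loss
# committed by a player's opponents, as judged by engine analysis.
# All figures below are fabricated sample data.

from statistics import mean

games = [  # (player, opponent's avg centipawn loss in that game)
    ("Carlsen", 38), ("Carlsen", 45), ("Carlsen", 52),
    ("Anand", 25), ("Anand", 31),
]

def nettlesomeness(games):
    by_player = {}
    for player, opp_loss in games:
        by_player.setdefault(player, []).append(opp_loss)
    return {p: mean(v) for p, v in by_player.items()}

scores = nettlesomeness(games)
print(scores)  # higher = opponents err more against this player
```

Note that this says nothing about *why* opponents err more; it only measures that they do, which is the post hoc point.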
But funnily enough, that's about all you'd need for erikig's program.
A program runs through your git/svn history and tracks the user, date, commit, and any reverts.
By analysing reverts against commits and everything in between, I'm confident it would be fairly easy to tune the analysis to be accurate.
For each revert you encounter, you check the five commits before and after it. If any users stand out as repeatedly appearing in the preceding tree for additions/reversions...
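A rough sketch of that idea, run over a toy in-memory commit log rather than a real git/svn history (the authors, messages, and window size below are all made up; a real version would walk `git log` output):

```python
# For each revert in a commit log, inspect the window of nearby commits
# and tally which authors keep showing up around reverts.

from collections import Counter

log = [  # (author, message) in chronological order; fabricated data
    ("alice", "add parser"),
    ("bob",   "tweak parser edge case"),
    ("carol", "add tests"),
    ("alice", "Revert \"tweak parser edge case\""),
    ("bob",   "retry parser tweak"),
    ("dave",  "Revert \"retry parser tweak\""),
    ("carol", "docs"),
    ("erin",  "refactor io"),
]

WINDOW = 2  # commits before/after each revert to inspect (small toy log)

def nettlesome_authors(log):
    tally = Counter()
    for i, (author, msg) in enumerate(log):
        if msg.startswith("Revert"):
            lo, hi = max(0, i - WINDOW), min(len(log), i + WINDOW + 1)
            for j in range(lo, hi):
                if j != i:                      # skip the revert itself
                    tally[log[j][0]] += 1
    return tally

print(nettlesome_authors(log).most_common(1))  # bob stands out
```

As with the chess statistic, this only flags correlation: an author near many reverts might be causing churn, or might just be the one cleaning it up.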
I follow Regan's chess work pretty closely and I don't know where Cowen got this term. The link provided shows intrinsic ratings, not "nettlesomeness". As shown by Regan and others, Carlsen simply plays more accurately than anyone else. Chess is probably drawn, so one only wins when his opponent makes mistakes.
Both Regan and Guid & Bratko have tried weighting the accuracy of moves by the complexity of positions faced. Carlsen is middle-of-the-pack in terms of reaching complex positions, so this doesn't seem to be the mysterious "nettlesomeness" either.
I've tried a few techniques to try getting in a person's head while playing, but I wouldn't have any idea how to measure it. Some techniques are on the board, but some aren't.
One thing I've tried is just staring at the player. I've also tried just barely glancing at the board and making the move on my turn while actually doing my studying on their turn.
The funniest thing someone tried against me was playing a guitar while also playing chess. This was in a local tournament in college. Halfway into the game, he stopped and exclaimed "What am I doing??" and tried to play seriously from that point forward. Luckily for me, he had waited too long.
Well, "unexpectedness" isn't just a failure to anticipate by weaker players. A non-optimal move could be deliberately made in order to be unexpected, to play mind games with the opponent. Unexpectedness could be a factor in what makes a move strong, not just an after-the-fact observation.
Chess is a game of perfect information. Chess players at this level do not intentionally play suboptimal moves hoping to trick their opponents. They try to find the best move given the position on the board.
In some cases two moves may be roughly objectively equal, one leading to more complex positions than the other. In these cases players do sometimes choose according to the situation on the clock, their style, or their mood. For instance, if I have more time left than my opponent, I may choose the more complicated continuation, and vice versa. This sort of practical savvy is important, but not nearly as important as many casual players assume.
True, I was thinking more of the latter example. Several equally optimal options are at hand; most people would pick the one they are most familiar with. A more devious player might pick the option the opponent is least likely to have considered, taking advantage of their unpreparedness.
More to the point, the players have minutes to hours, not days, to study the position. It can be a very good strategy to play a surprising but perhaps less-than-optimal move to break the opponent's concentration and hopefully get them into time trouble. That's why correspondence and tournament chess don't really intersect apart from studying the same games.
I believe there was a Russian grandmaster in the 60s who is credited with introducing psychology into chess because he would deliberately play moves that would stir up uncomfortable memories of defeat for his opponents (based on study of their past tournaments) and thus distract them from the game at hand. Can't remember his name though.
http://marginalrevolution.com/marginalrevolution/2013/11/net...