1 For a game with a B+4.5 result, taking komi (including dynkomi) into account, board_official_score() would return -4.5, while most other scores will give 4.5 (i.e. black's area is larger than white's area plus komi by 4.5). This score (multiplied by 2 to yield an integer called "result" in some places) is clamped into a value in [0,1] so that unlikely playouts do not affect the average too much. The values stored in the tree nodes are still absolute (>0.5 means black is favored), but the printed values have been adjusted by tree_node_get_value() such that >0.5 means the current player (corresponding to nodes just below the root) is favored. The dynkomi value shown is absolute as well: positive means that black is winning.
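A minimal sketch of the clamping and perspective flip described above; the function names, the hard win/loss clamp, and the struct-free interface are illustrative assumptions, not Pachi's actual code:

```c
#include <assert.h>
#include <math.h>

/* Hypothetical: clamp a signed score (black's perspective, komi already
 * included) into a [0,1] playout value, so an unlikely huge score counts
 * no more than a narrow win. */
static double score_to_value(double score)
{
	return score > 0 ? 1.0 : 0.0;
}

/* Hypothetical counterpart of the tree_node_get_value() adjustment:
 * stored values are absolute (>0.5 favors black); printed values are
 * flipped so that >0.5 favors the player to move. */
static double value_from_perspective(double value, int black_to_move)
{
	return black_to_move ? value : 1.0 - value;
}
```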
3 When the tree reaches its maximum size, no nodes are added until the move is decided and garbage collection is performed; therefore, if too little memory is available and the important nodes (the ones corresponding to tricky moves not understood by the playout engine) are not yet in the tree when memory fills up, long thinking times won't help. Pruning is currently not done during a move; it preserves all nodes within a shallow depth (since the node count grows exponentially with depth) and, at larger depths, only nodes with high playout counts. All children of a node are pruned at once, putting the parent node back into the unexpanded state.
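A sketch of this pruning policy, under the assumption of a simple recursive tree; the struct layout and names are illustrative, not Pachi's:

```c
#include <stdlib.h>

/* Hypothetical node: children == NULL means unexpanded. */
struct node {
	int playouts;
	int nchildren;
	struct node **children;
};

static void free_subtree(struct node *n)
{
	if (n->children) {
		for (int i = 0; i < n->nchildren; i++)
			free_subtree(n->children[i]);
		free(n->children);
	}
	free(n);
}

/* Keep everything up to max_depth; below that, keep only subtrees whose
 * root has at least min_playouts. A node's children are always dropped
 * all at once, returning the node to the unexpanded state. */
static void prune(struct node *n, int depth, int max_depth, int min_playouts)
{
	if (!n->children)
		return;
	if (depth >= max_depth && n->playouts < min_playouts) {
		for (int i = 0; i < n->nchildren; i++)
			free_subtree(n->children[i]);
		free(n->children);
		n->children = NULL;
		n->nchildren = 0;
		return;
	}
	for (int i = 0; i < n->nchildren; i++)
		prune(n->children[i], depth + 1, max_depth, min_playouts);
}
```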
7 When the dynkomi is adjusted, the existing values in the tree are not recomputed, so if large changes in dynkomi occur during one move, the average values and scores will be inaccurate, and more importantly, nodes expanded early will have biased values and thus be explored too (in)frequently. Resign decisions and some timing decisions will be affected as well, although dynkomi adjustment itself is not, since the statistics it uses are reset in komi_by_value(). Perhaps we should include a larger linear section (5 points is probably reasonable) in scale_value(), and avoid adjusting the dynkomi during a move (as opposed to adjustments between moves) unless the average is near the boundary of this linear section; even then a correction may be introduced. However, other thresholds depending on the average value, such as the resign threshold, should then be adjusted accordingly. Alternatively, we could adjust the value up or down according to the current dynkomi.
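One way the widened linear section could look; the signature and the width parameter are assumptions for illustration, not the actual scale_value():

```c
/* Hypothetical scale_value() with a wider linear section: scores within
 * +-linear_width points map linearly into [0,1], anything beyond
 * saturates. Moderate dynkomi drift during a move then shifts values
 * smoothly instead of flipping them between 0 and 1. */
static double scale_value(double score, double linear_width)
{
	if (score <= -linear_width)
		return 0.0;
	if (score >= linear_width)
		return 1.0;
	return 0.5 + score / (2 * linear_width);
}
```

With linear_width = 5, a one-point dynkomi shift moves a mid-range value by only 0.1, which is what makes small in-move corrections tolerable.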
9 Memory use could be optimized. The depth and playout-count thresholds used when pruning should be adjusted more robustly according to the maximum size of the temporary tree; currently it is possible for the maximum temp tree size to be exceeded before all nodes within the playout-count threshold are included (take a look at the pruning statistics; the unit is bytes, each node is currently 88 bytes on a 64-bit system, and "temp tree overflow" will be reported). Finally, it seems inefficient to expand all the (often 300+) children of a node at once whenever its playout count reaches expand_p (and memory is available): due to the relatively high first-play urgency when explore_p is high, each of the next few playouts through the parent node will visit a different one of these children, but many of the children remain untouched (intuitively, we don't need to consider faraway moves with no prior information in the first few playouts), and even the touched ones only need a small amount of information. Of course, allowing pruning during a move may well help with limited memory and long thinking times, but thread synchronization must be done carefully, and the aforementioned robustness issues of the pruning process would become more important. Also note the assumptions made in the program, e.g. uct_search_result() assumes that pass is always the first child.
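The incremental-expansion idea could be sketched like this; the batch size, struct, and function name are hypothetical, and real code would have to order the candidate moves by prior first:

```c
#include <stdlib.h>

/* Hypothetical node: ntotal legal moves, but only nallocated child
 * slots materialized so far. */
struct node {
	int nallocated;
	int ntotal;
	struct node **children;	/* array of ntotal slots */
};

enum { EXPAND_BATCH = 8 };

/* Instead of allocating all ~300 children when the node reaches
 * expand_p playouts, allocate at most EXPAND_BATCH more on demand, so
 * untouched faraway moves cost no memory until a descent actually
 * needs them. Returns how many children were added. */
static int expand_some(struct node *n)
{
	int want = n->ntotal - n->nallocated;
	if (want > EXPAND_BATCH)
		want = EXPAND_BATCH;
	for (int i = 0; i < want; i++)
		n->children[n->nallocated++] = calloc(1, sizeof(struct node));
	return want;
}
```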
11 Although expand_p is now automatically adjusted according to the amount of remaining memory, its base value should also allow adjustment.
13 As a node can have many children, it might be inefficient to compute and sort all urgency values in every descent. Do some profiling to see whether any optimization is needed here. (Currently most of the time seems to be spent in playouts.)
15 Currently, pruning is necessary during tree_promote_node() under fast_alloc, because we only have a limited amount of space to copy the surviving nodes into. This wastes a bit of time in copying (although without fast_alloc, freeing the numerous dead nodes individually would be even slower, and in any case marking the surviving nodes would incur almost as many cache misses as copying) and when the pruned nodes are expanded again. To solve this problem, we could add a version number to each node, incremented only for surviving nodes; this would make free nodes easier to find. However, threading issues remain to be considered.
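The version-number idea amounts to epoch-stamped garbage collection; a single-threaded sketch with hypothetical names (the threading concerns mentioned above are not addressed here):

```c
/* Hypothetical: a global epoch, bumped at each tree promotion. A node
 * slot whose stamp is stale is free for reuse; survivors are restamped
 * instead of being copied into a temp tree. */
static unsigned current_epoch;

struct node {
	unsigned epoch;
	/* ... rest of the node ... */
};

static int node_is_free(const struct node *n)
{
	return n->epoch != current_epoch;
}

static void mark_survivor(struct node *n)
{
	n->epoch = current_epoch;
}

static void promote(void)
{
	/* All nodes not subsequently restamped become free at once. */
	current_epoch++;
}
```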
17 Time control does not seem to work properly (pachi plays very fast and says that the game is internally lost by time) when the same pachi instance plays both black and white with pondering enabled. Perhaps this configuration is simply not supported.
19 Revise board_estimated_moves_left() to use the ownermap information from UCT.
21 Generate passes a bit earlier by giving it the appropriate reward or prior value.
23 UCT computes the upper confidence interval by assuming that the outcome of each playout is i.i.d., but actually the outcome distribution can change due to the discovery of a new follow-up move, a change in dynkomi, etc. If we assume that the distribution changes with probability proportional to 1/(n0+n), where n0 is a constant (maybe 10 or 1000) and n is the playout count at that node, a good (I don't know whether it is optimal) estimate of the mean of the distribution at playout n would weight the outcome of playout i by (n0+i)^beta, where beta is a constant of maybe 2 or 4. In case the distribution does not change, this weighting only increases the variance by a constant factor of about (beta+1)^2/(2*beta+1), so this isn't a big problem. To implement this, just store the weighted partial sum along with the playout count; the total weight is uniquely determined by the playout count. Merging of other statistics needs some care; perhaps we can just regard the priors as the outcomes of some initial playouts, which should no longer matter much once a significant number of real playouts are available.
25 The above change will allow promising follow-up moves to affect the value of their parent nodes after a smaller number of playouts. However, when these follow-up moves are first discovered, their playout count may still be too small to affect the parent value much, so the parent node may not receive enough playouts to allow the follow-up move to be played more. Perhaps we should instead use the expectation of the min/max value of the children as the value of the parent, and use the upper confidence level as the urgency; this computation is straightforward if we approximate all distributions as e.g. Laplacian (or any distribution with an easy-to-compute cdf, preferably supported on [0,1]) and then compute the cdf of the minimum. As the pdf of a min/max random variable is often highly asymmetric, it may be a good idea to record the 5% and 95% quantiles rather than the means, and regard each distribution as uniform between neighboring quantiles. Be careful when pruning turns a non-leaf node back into a leaf node, though.
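A numerical sketch of the min-of-children computation, under the simplifying assumption (stated above) that each child's value distribution is uniform between two recorded quantiles; all names and the crude midpoint integration are illustrative:

```c
/* cdf of a uniform distribution on [lo, hi]. */
static double uniform_cdf(double x, double lo, double hi)
{
	if (x <= lo) return 0.0;
	if (x >= hi) return 1.0;
	return (x - lo) / (hi - lo);
}

/* cdf of the minimum of k independent children:
 * F_min(x) = 1 - prod_i (1 - F_i(x)). */
static double min_cdf(double x, const double lo[], const double hi[], int k)
{
	double surv = 1.0;
	for (int i = 0; i < k; i++)
		surv *= 1.0 - uniform_cdf(x, lo[i], hi[i]);
	return 1.0 - surv;
}

/* For values confined to [0,1], E[min] = integral over [0,1] of
 * (1 - F_min(x)) dx; approximated here by midpoint summation. */
static double min_mean(const double lo[], const double hi[], int k)
{
	double e = 0.0, dx = 1e-3;
	for (double x = 0; x < 1; x += dx)
		e += (1.0 - min_cdf(x + dx / 2, lo, hi, k)) * dx;
	return e;
}
```

For example, the minimum of two children each uniform on [0.2, 0.8] has mean 0.4, visibly below either child's mean of 0.5, which is exactly the asymmetry that recording only per-child means would hide.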