#..-...#...-..#
.+...=...=...+.
..+...-.-...B..
-..+...-...QIS-
....+.....DITA.
.=...=...WO.EX.
..-...-.JEON-..
#..-...TOADY..#
..-...-R-KA.-..
.=...=.AW=D..=.
....+..IO.S....
-..+...TO..+..-
..+...ZO-...+..
.GAMBLER.=...+.
HUIA..AS...-..#
??????? 0 0

Running the static evaluator on this board shows:

Unseen tiles are: aaccdeeeeeeeeeffgghiiiiilllmnnnnnoopprrrrstttuuuvvy??
place hamedly at 15G across (as???????) => ashamedly.  SCORE 107
284938 moves were considered for this play
Time taken was 448.6u 2.8s 9:10.85 81.9% 0+0k 16+1io 0pf+0w

If you set the confidence level to 100%, this would be the best possible reply by our opponent to our hypothesized move (placed on the board above). You can see that this would probably be the best reply to *any* hypothesized move on our part, so as such it is not very useful. Once we start using more realistic probabilities, we expect to get results like 'the opponent is likely to reply to move XXX with a score as large as 39' and 'the opponent is likely to reply to move YYY with a score as large as 20'.

Actually, we're more likely to get this sort of benefit closer to the end of the game, but I still want to see what the evaluator does in a position like this.

The long time the evaluation took is disappointing. Even when this unoptimised code is sped up, it's not going to allow many evaluations during a game (unless the memofn trick (aka dynamic programming) speeds it up considerably). It may be that we need to turn the algorithm on its head once again: instead of generating words to fit the board, we would run through all the words in the dictionary, see where each one could be placed, and then check against the remaining tiles to see whether that placement would have been possible.

However, even as it stands, this should be great for tasks such as the game analysis that we do on the cga mailing list, where the choice is usually between only two suggested moves.
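The "check against the remaining tiles" step of that flipped algorithm can be sketched roughly as follows. This is only an illustration, not the actual code: I'm assuming Python, assuming blanks in the unseen pool are written as '?', and the names are mine.

```python
from collections import Counter

def playable(word, tiles):
    """Return True if `word` can be formed from the multiset of
    letters in `tiles`, spending '?' blanks on any shortfall."""
    pool = Counter(tiles.replace('?', ''))
    blanks = tiles.count('?')
    need = Counter(word)
    # Letters the pool can't cover must be absorbed by blanks.
    shortfall = sum(max(0, n - pool[c]) for c, n in need.items())
    return shortfall <= blanks

# The unseen-tile string from the evaluator output above:
unseen = "aaccdeeeeeeeeeffgghiiiiilllmnnnnnoopprrrrstttuuuvvy??"
```

In a real placement check the letters already on the board at the candidate square would first be subtracted from `need`, so only the tiles the opponent must hold are tested against the pool.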