Given a sequence of nonnegative integers a1, a2, ..., an, Alice and Bob play a game with three rules:

1) Alice goes first, and Bob goes next (they alternate turns)

2) A single turn consists of choosing some positive integer ai from the sequence and replacing it with ai - d, where d is some divisor of ai.

3) When a player has no valid moves remaining, they lose.

For example, suppose (a1, a2, a3) = (1, 5, 4). On the first move Alice can make the list (1, 5, 2) by subtracting 2 from the third number (since 2 divides 4). Then Bob can get (1, 0, 2) by subtracting 5 from the middle number (since 5 divides 5), and so on.
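Since a move changes exactly one number and never affects the others, this is a sum of independent impartial games, so the Sprague–Grundy theorem applies: Alice wins iff the XOR of the per-number Grundy values is nonzero. A minimal sketch of deciding a single position (function names are mine):

```python
def grundy_table(limit):
    """g[a] = Grundy number of a single value a under 'subtract a divisor' moves."""
    g = [0] * (limit + 1)              # g[0] = 0: no moves from 0
    for a in range(1, limit + 1):
        reachable = set()
        d = 1
        while d * d <= a:              # enumerate divisors in O(sqrt(a))
            if a % d == 0:
                reachable.add(g[a - d])
                reachable.add(g[a - a // d])
            d += 1
        mex = 0                        # minimum excludant of the reachable set
        while mex in reachable:
            mex += 1
        g[a] = mex
    return g

def first_player_wins(values):
    """Sprague-Grundy: the first player wins iff the XOR of Grundy values is nonzero."""
    g = grundy_table(max(values))
    x = 0
    for a in values:
        x ^= g[a]
    return x != 0
```

On the example above, the Grundy values of 1, 5, 4 are 1, 1, 3, whose XOR is 3, so `first_player_wins([1, 5, 4])` reports that Alice wins.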

Let f(n, m) be the probability that Alice has a winning strategy in a random game with inputs (a1, ..., an), where each ai is drawn independently and uniformly from {0, 1, ..., m}. Given m, how can I write a program to compute the limit of f(n, m) as n approaches infinity?
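One line of attack (a sketch under my own assumptions, not a verified solution): the XOR of n i.i.d. Grundy values g(ai) is a random walk on the group of bitmasks under XOR. Since ai = 0 occurs with positive probability and g(0) = 0, the walk has no periodicity obstruction, and its distribution should converge to the uniform distribution on the subgroup H generated by {g(0), ..., g(m)}. If that is right, lim f(n, m) = 1 - 1/|H| = 1 - 2^(-r), where r is the GF(2) rank of the Grundy values:

```python
def grundy_table(limit):
    """g[a] = Grundy number of a under 'subtract a divisor' moves."""
    g = [0] * (limit + 1)
    for a in range(1, limit + 1):
        reachable = set()
        d = 1
        while d * d <= a:
            if a % d == 0:
                reachable.update((g[a - d], g[a - a // d]))
            d += 1
        mex = 0
        while mex in reachable:
            mex += 1
        g[a] = mex
    return g

def limit_win_probability(m):
    """Conjectured limit of f(n, m): 1 - 2^(-rank), where rank is the
    dimension of the GF(2) span of the Grundy values g(0), ..., g(m)."""
    basis = {}                          # leading bit -> basis vector
    for v in set(grundy_table(m)):
        while v:                        # standard XOR linear-basis insertion
            bit = v.bit_length() - 1
            if bit not in basis:
                basis[bit] = v
                break
            v ^= basis[bit]
    return 1.0 - 2.0 ** (-len(basis))
```

Sanity checks: `limit_win_probability(0)` is 0.0, matching f(n, 0) = 0 exactly (all numbers are 0, so Alice has no move); for m = 5 the Grundy values {0, 1, 2, 1, 3, 1} span a rank-2 space, giving 0.75. I would still validate the conjectured formula against a Monte Carlo estimate of f(n, m) for moderate n before trusting it.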
