New Google competitive programming model seems to have a potential rating between Expert and CM

Revision en1, by ratherSimple1, 2023-12-06 18:57:47

Quoting directly from the technical report on Google's new AlphaCode 2 model:

"We found AlphaCode 2 solved 43% of these competition problems, a close to 2× improvement over the prior record-setting AlphaCode system, which solved 25%. Mapping this to competition rankings, we estimate that AlphaCode 2 sits at the 85th percentile on average – i.e. it performs better than 85% of entrants, ranking just between the ‘Expert’ and ‘Candidate Master’ categories on Codeforces"

What do you think this means for the future? If they figure out how to improve it the way they did their AlphaGo/AlphaZero models, then we might have something magical on our hands pretty soon.

Tags: artificial intelligence, google
