
Recent actions
 On huawei → Huawei Honorcup Marathon 2, 29 hours ago Created or updated the text
 On huawei → Huawei Honorcup Marathon 2, 39 hours ago +38 Where are the hoodies?
 On huawei → Huawei Honorcup Marathon 2, 5 days ago 0 Where did the wrong pieces come from? It's an optical illusion!
 On huawei → Huawei Honorcup Marathon 2, 5 days ago 0 Not me yet :(
 On huawei → Huawei Honorcup Marathon 2, 5 days ago +17 Did Huawei contact anyone regarding hoodies or internships?
 On huawei → Huawei Honorcup Marathon 2, 7 days ago +20 Since the competition is finished I can share some outstanding results from C3:
 On huawei → Huawei Honorcup Marathon 2, 7 days ago 0 Thanks for sharing your approach, which, judging by the standings, worked quite well :) Here are some ideas from my final approach. Edge compatibility: a simple score, the sum of absolute differences between the normalized pixel values (in YCbCr color space for RGB images) along the touching border row/column of two pieces. Assembly algorithm: greedy, considering unused blocks for positions adjacent to already-placed ones that keep the grid within MxM dimensions. For each position I compute the best and second-best scores, then pick the position with the largest gap between those two scores and place the highest-scoring block there. This simple approach was good enough to achieve the following scores: 29.5k on A3, 29.3k on B3, and 26.7k on C3.
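The edge-compatibility score and the best-vs-second-best placement rule described in that comment can be sketched roughly as follows. This is a minimal sketch, not the poster's actual code: `edge_cost` and `pick_most_confident` are hypothetical names, and the YCbCr conversion is assumed to happen upstream.

```python
import numpy as np

def edge_cost(left_piece, right_piece):
    """Sum of absolute differences between the touching border columns
    of two pieces (H x W x C float arrays, assumed already converted
    to a normalized YCbCr-like space). Lower cost = better match."""
    return float(np.abs(left_piece[:, -1] - right_piece[:, 0]).sum())

def pick_most_confident(candidate_costs):
    """Given, for each open position, a sorted list of the costs of the
    unused pieces there, return the index of the position where the gap
    between the best and second-best cost is largest — i.e. the
    placement the greedy is most confident about."""
    gaps = [costs[1] - costs[0] for costs in candidate_costs]
    return int(np.argmax(gaps))
```

The confidence rule matters because filling the most-constrained position first delays ambiguous decisions until more of the grid is fixed.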
 On huawei → Huawei Honorcup Marathon 2, 8 days ago +37 MikeMirzayanov, will it be possible to submit in the future? It seems like good practice to try to get at least 27k on C.
 On huawei → Huawei Honorcup Marathon 2, 8 days ago 0 Hello, I combined four algorithms: I run each one and take the best result as the answer. The first is a greedy solution: I brute-force the choice of the central fragment, build a result around each candidate, and keep the best. For each candidate center I maintain a priority queue of the best fragment suggestions, repeatedly take the best one, and merge it into the current result. The second and third solutions are BFS-based algorithms with only small differences between them. The main idea is that if the cell you want to fill is adjacent to more than one already-placed fragment, the probability of placing a wrong fragment decreases. So, as in the previous method, brute-force the central fragment and run a BFS from the center, greedily placing a fragment in each cell; in most cases the current "best" fragment is then matched against two already-placed fragments, which reduces the error probability. The fourth algorithm I used only for the 16x16 subtask, because the previous ones work reasonably only on tasks A and B: I brute-force the best 4x4 subfragments formed by the BFS algorithm, then choose the best of these and merge it into the result. This gave a stable ~41 points per image versus ~28 points for the BFS/greedy algorithms. Finally, I tried to improve results by deleting fragments whose distance was more than 10% and rebuilding the image, re-placing only the removed cells; if this led to a better solution I kept it, otherwise I rolled it back.
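The BFS-from-center idea above (fix a central fragment, grow outward so that most cells are constrained by two already-placed neighbours) could be sketched like this. All names are hypothetical and `cost(a, b, direction)` is an assumed edge-compatibility function, not the commenter's implementation.

```python
from collections import deque

def bfs_assemble(center, pieces, m, cost):
    """Sketch of BFS placement: fix `center` in the middle of an
    m x m grid, then visit cells in BFS order from the center,
    greedily filling each cell with the unused piece whose total
    edge cost against all already-placed neighbours is lowest."""
    grid = [[None] * m for _ in range(m)]
    mid = m // 2
    grid[mid][mid] = center
    unused = set(pieces) - {center}
    q = deque([(mid, mid)])
    seen = {(mid, mid)}
    dirs = ((-1, 0, 'up'), (1, 0, 'down'), (0, -1, 'left'), (0, 1, 'right'))
    while q:
        r, c = q.popleft()
        if grid[r][c] is None:
            # Score each unused piece against every placed neighbour;
            # cells touching two placed pieces are constrained harder,
            # which is what lowers the error probability.
            def total(p):
                s = 0.0
                for dr, dc, d in dirs:
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < m and 0 <= nc < m and grid[nr][nc] is not None:
                        s += cost(p, grid[nr][nc], d)
                return s
            best = min(unused, key=total)
            grid[r][c] = best
            unused.discard(best)
        for dr, dc, _ in dirs:
            nr, nc = r + dr, c + dc
            if 0 <= nr < m and 0 <= nc < m and (nr, nc) not in seen:
                seen.add((nr, nc))
                q.append((nr, nc))
    return grid
```

Running this once per candidate center and keeping the best-scoring grid matches the brute-force-the-center step described above.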
 On huawei → Huawei Honorcup Marathon 2, 8 days ago 0 Uh, nope, not at all. I just found something to read/write PNG; after that it's just an array of numbers to me, and I wrote a few helper functions to decorate the output for debugging.
 On huawei → Huawei Honorcup Marathon 2, 8 days ago +10 Did you have previous experience working with images? Your visualization looks pretty advanced.
 On huawei → Huawei Honorcup Marathon 2, 8 days ago +3 Generally speaking, no. The genetic algorithm shows good results for tasks A and B, but on problem C it gives only about 20,000-25,000 points. I believe the main problem is that the best-buddies metric underlying this paper does not work well enough for the third sample. I tried to introduce a pseudo-buddies metric, but it provided only minor improvements. I suppose many participants went the same way and hit a dead end.
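The best-buddies relation mentioned above can be stated in a few lines: two pieces are best buddies for a given join if each is the other's single best match in that direction. This is a generic sketch with an assumed precomputed cost matrix (diagonal set to infinity so a piece never matches itself), not the commenter's implementation.

```python
import numpy as np

def are_best_buddies(i, j, cost_right):
    """True if pieces i and j are 'best buddies' for a left-right join:
    i's cheapest right neighbour is j AND j's cheapest left neighbour
    is i. `cost_right[a, b]` is the assumed cost of placing piece b
    immediately to the right of piece a (n x n matrix, inf diagonal)."""
    best_right_of_i = int(np.argmin(cost_right[i]))
    best_left_of_j = int(np.argmin(cost_right[:, j]))
    return best_right_of_i == j and best_left_of_j == i
```

Mutual-best pairs like this are high-precision anchors; the difficulty reported on problem C is consistent with few such pairs existing when borders are noisy.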
 On huawei → Huawei Honorcup Marathon 2, 8 days ago +8 At least that one didn't have several identical fully black (or white) pieces like some others, where you can't do anything. Abusing the colors in the output, I get: green links are where my greedy is confident about the match; white pixels are (1,1,1) (almost black) and black ones are (0,0,0) (really black). So this one is about matching noise. I didn't do anything specific and I get garbage, because I use only the outer two columns/rows of each piece. But you might do a bit better than random if you match some statistics on the noise? (There seem to be more and less dense (1,1,1) vs (0,0,0) areas, for instance.)
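Matching statistics on the noise, as suggested there, might start with something as simple as comparing the density of exactly-(1,1,1) pixels along piece borders. A hypothetical sketch (`border_ones_fraction` is an invented name, not anyone's actual code):

```python
import numpy as np

def border_ones_fraction(piece, side):
    """Fraction of exactly-(1,1,1) pixels in the border row/column on
    the given side of a piece (H x W x 3 uint8 array). For near-black
    noise pieces, the local density of (1,1,1) vs (0,0,0) pixels may
    still carry a weak matching signal even when raw pixel differences
    look like pure noise."""
    edge = {'top': piece[0], 'bottom': piece[-1],
            'left': piece[:, 0], 'right': piece[:, -1]}[side]
    return float(np.mean(np.all(edge == 1, axis=-1)))
```

Pieces could then be compared by the absolute difference of these fractions on facing edges, which is only a statistical hint rather than a pixel-exact match.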
 On huawei → Huawei Honorcup Marathon 2, 8 days ago 0 This paper contained some interesting ideas.
 On huawei → Huawei Honorcup Marathon 2, 8 days ago 0 Check system test done
 On huawei → Huawei Honorcup Marathon 2, 8 days ago 0 Why?
 On huawei → Huawei Honorcup Marathon 2, 8 days ago 0 Results are already out!
 On huawei → Huawei Honorcup Marathon 1, 9 days ago Created or updated the text