**Hello, Codeforces!**

Welcome to the ICPC Communication Routing Challenge powered by Huawei!

This challenge edition is very special, as it will be happening in conjunction with the ICPC World Finals! For those who are not participating in the finals — ICPC and Codeforces are offering a unique chance to compete during a **5-day Challenge Marathon** and win amazing prizes from Huawei!

**ICPC Challenge Marathon (open to public, unrated): October 9-14, 2021, 00:00 UTC:**

This time we’re delighted to provide you with a future-oriented topic – routing algorithms for next-generation communication. During this Challenge we will provide you with a super-large inter-satellite optical network to properly plan message paths, to reduce communication latency and improve resource utilization. However, finding the optimal path in a network with limited resources is an NP-hard problem. Considering the physical constraints of optics and circuits, our algorithm faces greater technical challenges:

- How can we model and abstract device constraints and design algorithms in the most appropriate way?
- Is there an algorithm that takes almost the same time to calculate routes as the network scale increases?
- Is there any way to obtain the theoretical optimal solution in a short period for ultra-large networks?

We hope that these satellite communications challenge questions can help you understand the problems that Huawei's software algorithm researchers face every day. All the best!!!

#### Prizes

For **3-hours onsite ICPC World Finals Challenge** Huawei will provide prizes to the winners in 2 groups of participants:

**Group 1: TOP 30 ICPC World Finalists**, who participate individually:

- 1st-10th place: HUAWEI MATE 40 Pro
- 11th-20th place: HUAWEI MatePad Pro
- 21st-30th place: HUAWEI WATCH 3 Pro

**Group 2: TOP 9 ICPC World Finals Coaches and Co-Coaches**, who participate individually:

- 1st-3rd place: HUAWEI MATE 40 Pro
- 4th-6th place: HUAWEI MatePad Pro
- 7th-9th place: HUAWEI WATCH 3 Pro

For **5 days online Challenge Marathon**, Huawei will provide prizes to TOP 30 individual participants:

| Place | Prize |
| --- | --- |
| 1st place* | HUAWEI MateBook X Pro + 5000 USD |
| 2nd — 4th place | HUAWEI MateBook X Pro |
| 5th — 10th place | HUAWEI MATE 40 Pro |
| 11th — 16th place | HUAWEI MatePad Pro |
| 17th — 22nd place | HUAWEI WATCH 3 Pro Classic |
| 23rd — 30th place | HUAWEI FreeBuds Studio |

* The 1st place winner will get an additional bonus of 5000 USD. If the bonus cannot be transferred to the winner for any reason, it may be replaced by a prize of the same value.

If the allocated Huawei Challenge prize cannot be delivered to your region for any reason, it may be replaced by another prize of the same value (if there are no legal restrictions), at the discretion of the sponsor.

By participating in this Challenge, you agree to the Challenge Rules and Conditions of Participation.

Good luck to all participants!

*UPD*: The contest duration has been extended to 5 days. The actual finish time is October 14, 2021, 00:00 UTC

dang, tourist is about to get so much stuff now...

I think now he can open his own hardware store =)

This guy is making us feel depressed.

In the actual contest register page, there's an error: it says maraphon instead of marathon

Hope Huawei will provide another fantastic contest like this

huawei sopports?!

i never heard a chinese company sopports any contests.

no they did this before

why only 1 problem?

is it difficult?

there's no actual answer, just competing to reach a better solution

No offense, but you misspelled "support" :(

im sorry im not so good at english. i have to work harder. :)

Stop trolling

They were the European sponsor for SWERC 2018

what is the difference between normal cf competitions and this one?

real funny, uses others as free labor forces and the top payment is HUAWEI MateBook X Pro + 5000 USD? No offense, but that is a smart move.

You do know that top solutions on such types of contests are completely unusable in their resulting form for business applications, right?

Really? My bad.

that's true but they provide a basic structure that can be modified for a real-world problem.

GANG :>

Is the problem for onsite event and for marathon different?

It seems so. There will be additional restrictions in the marathon.

qpzc

this is my 1st time posting a comment.

I dont want it to lower my contribution.

:)

how can i delete the comment??

not everything is possible here.

But you can edit it.

but the previous versions will be also visible.

I didn't know that.

Any Huawei prize for US contestants would pretty much be useless anyways due to US and Google Play ban of Huawei

why?

US propaganda that China will track and steal all your info. pfft like the US doesn't smh. Anyways, also, it was during the time that Huawei was doing really really well and the US was just jealous and wanted US companies to do better. Though I guess any country would want their own companies to do better than foreign ones

The United States is afraid of Huawei, I could've died laughing.

Then why does the United States need to find such a false reason to sanction Huawei?

What is ICPC username? Is it the email address of my account on https://icpc.global/?

I guess so. Previous Huawei marathons were also connected to icpc global.

Thanks!

rated only for naturals

ICPC Username ???

2708 people did not read the Challenge Rules and Conditions of Participation

My second challenge in codeforces. Best of luck to all!

Is it true that the 3-hour contest and the 4-day contest have the same task, so some contestants could cheat?

No, the problems are not the same

(posting this here because i don't think clarifications work in marathons)

Does GFL limit the number of flows that can pass through a group or the total flow rate that can pass through a group?

edit: it's the number of flows

How come there are only 10 people in current standing?

Can we have a checker log or something? Like it is truly hard to know where we are wrong.

Adding to this, could we also have time / memory usage if possible? The submit limits make it irritating to binary search on constants to maximize score without TLEing.

Right at the top of the task

Time / memory usage of a submission, not the problem constraints. As in is my submission running in 300ms or 1950ms. I know I can use a timer and make my soln stop at like 1.9s, but knowing the running times makes some stuff easier.

Oh, that. At least you can see the time, just go to the submissions tab of your profile page. Unfortunately it's only the worst time, no per-testcase resolution. Now that you mention it, I think I've seen this feature before at the veeroute contest.

That's actually exactly what I needed, thanks!

Though I expect that's unintentional and it's not supposed to be visible on the profile lmao.

> The contest duration has been extended to 5 days. The actual finish time is 14 Oct 2021, 00:00 UTC

This is a backstab move. Don't you think people have some plans?

I apologize for that, but at some stage, there was a discrepancy in the duration of the contest. Even the post here talked about a 5-day marathon. I think it is better to unexpectedly extend for a part of the audience than unexpectedly shorten it by a day. The situation is unpleasant, I agree. Next time we will take a closer look at this. Sorry.

The thing is that a major part of the audience thought that the contest is going to end at midnight so the "unexpectedly shorten" argument is not very valid.

`The contest duration has been extended to 5 days. The actual finish time is October 14, 2021, 00:00 UTC`

What?! I planned my time with a duration of 4 days. It's not fair!

What was the reason to extend the duration 13 hours before the end (according to all announcements (except video) duration equal to 4 days)? I understand that it is beneficial for people that can allocate more free time. But for me (and I guess some other people), it either completely ruins plans or a day of anxiety (watching how other people outrun you).

I would like to participate during all 5 days, but only if the contest duration was given correctly from the beginning.

The video announcement always mentioned a "5 days marathon" (at 1:32). So was it just a bad communication (pun intended) between codeforces and Huawei? Anyways I agree with previous posters: I'm not very pleased about this late change, I made sure to allocate 4 days for the contest.

What did you guys do during the contest? Comment down about your approaches, please.

At least, I have mine here: https://youtu.be/k4SAv2cJ4ws

8th place, 34089

Path cost should increase the longer the path is and the more "prohibiting" it is -- the rarer the resources it demands. Half-depleting one edge's capacity can be assumed similar to depleting a quarter of the capacity of two edges, and the same applies to node and group limits, so the cost can be a sum of `flow/edge_capacity`, `1/node_capacity`, and `1/group_capacity`. Floating-point operations are too slow, so we replace them with integer division by some scaling constant and precompute the division. Dijkstra is slow, so I replaced it with 1-1024 BFS, which can be shown to have a much lower constant than 1024 for such weights.

The solution for 33941 points orders paths by *flow demand plus a constant* multiplied by *cost assuming uniform constant flow demand*: that way we don't have to search the graph for each path separately and can reuse an older search for the source or target node. Then for each path in that order an optimal routing is found (if the path was already implemented, it's removed before searching) and applied, and the process is repeated from the sorting step until TL.

[Additional heuristics worth the last couple hundred points: for small test cases, check paths that were *accidentally* solved by this search; if any have a better cost and no greater flow demand, apply one of them instead. If the reversed-direction path is more likely to accidentally solve others, swap the start and end nodes. For very small test cases, search the graph to find the cost of the next path in sorted order; if it's better -- apply it instead. Sometimes restart the iteration to reroute currently applied paths before finishing the pass over all sorted paths.]

The solution for 34089, instead of sorting paths once per iteration by approximate cost, selects the actually cheapest path. Find the true cost for each path in separate graph searches and store them in a PQ. Extract the cheapest path, search for a routing for it, place it back with an updated cost if the one it had assigned is no longer correct, otherwise apply it (if possible); repeat until the PQ is exhausted. Iterate through currently applied paths, reroute each, and keep the old routing if no route was found (this can happen because graph search with edge constraints sometimes misses routes that exist). Iterate through paths that we couldn't add; if a route is found, place them in the PQ without applying yet; repeat from PQ exhaustion until TL.

To further improve the score, place paths with cost < 1e5 not in the PQ but in an ordinary queue corresponding to this cost -- that's the second "layer" where we replace a PQ with an array of queues. I've no idea why, but this process works fast; now the bracketed heuristics from the previous solution are obsolete, and this approach solves all test cases but the largest, for which we still use the approximate-ordering solution.

I think it makes some sense to use powers on edge costs, like (1/group_capacity)^0.6 in my code (the best constants may differ between implementations and need trials). We can precompute the cost for each possible group or node capacity so it will not increase execution time.

To get rid of floats I did something like: `int cost[i] = pow(1.0/i, 0.6) * (1 << 23);`
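That precomputation could be sketched like this (a minimal illustration, assuming the `1 << 23` scaling constant and the 0.6 exponent from the comments above; `precompute_costs` is a hypothetical name, not the contestant's code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical sketch: evaluate pow(1.0/c, exponent) once per possible
// capacity value and scale it to an integer, so the search loop itself
// never touches floating point.
std::vector<long long> precompute_costs(int max_capacity, double exponent) {
    const long long SCALE = 1LL << 23;  // scaling constant from the comment above
    std::vector<long long> cost(max_capacity + 1, 0);
    for (int c = 1; c <= max_capacity; ++c)
        cost[c] = (long long)(std::pow(1.0 / c, exponent) * SCALE);
    return cost;
}
```

A lookup in this table then replaces every `pow` call during the search.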

Interesting. I tried to interpolate between ^0 giving simple edge distance and ^inf being sensitive to the rarest resource, like L0 and Lmax norms, but 1 worked best.

Thank you for the writeup maxplus.

I don't understand what is meant by the 1-1024 BFS in this sentence: "Dijkstra is slow, so I replaced it with 1-1024 BFS which can be shown to have a much lower constant than 1024 for such weights." Do you execute BFS 1024 times? And if so, what is the reason for this?

The *cost assuming uniform constant flow demand* is also not entirely clear to me, could you give an example perhaps?

You're welcome. The former is called 1-K BFS, you can read about it here, and by the latter I mean that instead of the actual $$$FlowRate$$$ some constant (just $$$2$$$) is substituted, the same for every flow.

17th, 33740 points

**Init:** compute the distance from each node to each other node (ignoring constraints).

**Path building:** randomly try 1 of those: a path always tries to get closer to the target using the precomputed distances. If no such edge exists, a certain number of detours can be made. Up to 1 invalid edge can be chosen (exceeding the capacity limit). Repeat 10 times, then give up and try the next flow.

**Overall algo:** sort flows by expected distance * pow(flowRate, 0.3). Set maxDetours to 0 and route each flow as above. Increase maxDetours by 1 and try again as long as there is time remaining.
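The flow-ordering step above could be sketched like this (illustrative names; `expectedDist` and `flowRate` stand in for the writeup's precomputed values and are not the contestant's actual code):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sketch: order flow indices by expectedDist * pow(flowRate, 0.3), so short,
// low-rate flows are routed first. The 0.3 exponent is the writeup's constant.
std::vector<int> order_flows(const std::vector<double>& expectedDist,
                             const std::vector<double>& flowRate) {
    std::vector<int> idx(expectedDist.size());
    for (int i = 0; i < (int)idx.size(); ++i) idx[i] = i;
    std::sort(idx.begin(), idx.end(), [&](int a, int b) {
        return expectedDist[a] * std::pow(flowRate[a], 0.3)
             < expectedDist[b] * std::pow(flowRate[b], 0.3);
    });
    return idx;
}
```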

26th place, 33594 points.

I used the following algorithm.

This solution got 33594 points. But I came up with the latest optimizations, with the removal of paths, only in the last hours of the challenge, and I had little time to test them well. I believe this solution can score more points.

23rd place, 33662

I did a “smart” initial assignment and random optimisations afterward.

The distance everywhere onward is just the number of edges in the path (obviously, ignoring the given distances). Using custom edge weights significantly slowed down the search, as I needed to switch from BFS to Dijkstra, and I did not come up with any cool 0-K edge weights.

I handle blocked edge pairs in the most straightforward way — I keep track of the parent edge for vertices in the BFS and do not go to edges which are blocked by the parent.
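The parent-edge trick might look like the following sketch (illustrative names; `blocked` holds pairs of edge ids that cannot be used consecutively). As the one-parent-per-vertex shortcut implies, this is an approximation that can miss some valid paths:

```cpp
#include <queue>
#include <set>
#include <utility>
#include <vector>

struct Edge { int to, id; };

// Hypothetical sketch of the straightforward blocked-pair handling: plain BFS
// that remembers one parent edge id per vertex and refuses to traverse an
// edge forming a blocked pair with that parent edge.
std::vector<int> bfs_with_blocked_pairs(
        int n, const std::vector<std::vector<Edge>>& adj,
        const std::set<std::pair<int,int>>& blocked, int src) {
    std::vector<int> dist(n, -1), parent_edge(n, -1);
    std::queue<int> q;
    dist[src] = 0; q.push(src);
    while (!q.empty()) {
        int v = q.front(); q.pop();
        for (const Edge& e : adj[v]) {
            if (parent_edge[v] != -1 &&
                blocked.count({parent_edge[v], e.id})) continue;  // blocked pair
            if (dist[e.to] == -1) {
                dist[e.to] = dist[v] + 1;
                parent_edge[e.to] = e.id;  // only ONE parent kept: approximation
                q.push(e.to);
            }
        }
    }
    return dist;
}
```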

For the first part, I greedily add queries to the answer. I took queries with the shortest distance (in case of equality — smaller flow). Other metrics combining distance, flow, number of blocked edges, group sizes etc. performed worse.

I sorted the edges of each node in order of increasing capacity. It forced the BFS to find paths that take smaller edges when possible, which seems to be beneficial.

Without randomisation, proper selection of the shortest query (rather than approximation) gives significant improvement. To do it fast, I initially precomputed distances between all the node pairs (without any restrictions on the capacity) and initialized query lengths with these values as a lower bound. Then I iteratively took the shortest candidate from a set, and if it had a precomputed path (and it was still passable) just added this query to the answer. Otherwise, I recomputed the path and returned it to the set. This way I processed all the queries in about a second and got something like 33200.
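The lazy re-evaluation loop described above could be sketched generically like this (hypothetical names; `recompute` stands in for re-running the constrained path search, and costs are assumed to only grow as capacities deplete, so a cached value is always a valid lower bound):

```cpp
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Sketch of lazy selection: each query sits in a min-heap under a lower-bound
// cost; on extraction we recompute its true current cost and only accept it
// if the bound was still tight, otherwise we reinsert it with the new bound.
std::vector<int> lazy_greedy_order(
        const std::vector<int>& lower_bound,
        const std::function<int(int)>& recompute) {
    using P = std::pair<int,int>;  // (cost estimate, query id)
    std::priority_queue<P, std::vector<P>, std::greater<P>> pq;
    for (int i = 0; i < (int)lower_bound.size(); ++i)
        pq.push({lower_bound[i], i});
    std::vector<int> order;
    while (!pq.empty()) {
        auto [est, id] = pq.top(); pq.pop();
        int actual = recompute(id);       // e.g. re-run the constrained BFS
        if (actual <= est) order.push_back(id);  // bound was tight: accept now
        else pq.push({actual, id});              // cost grew: try again later
    }
    return order;
}
```

The key property is that most extractions accept immediately, so the expensive recomputation happens far less often than a full re-sort each iteration would.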

Well, this is it. The rest is some tuning of constants, random seeds, and other meaningless stuff (I’ve got +20 points by adding a bunch of pragmas lol)

Since there are only 2 or 3 large tests and they are closed from us (unlike in HashCode for example) it was very hard to understand what are the weak parts of the current solution. I feel like I did dozens of optimisations that I believed should strongly impact the result, but they did not do anything. And after some cleaning, my best solution looks really straightforward which makes me feel sad as I could have implemented it right away.

Bro, how long does your 33200 solution run? And about edge deletion: can the edge deletion strategy really improve the score by nearly 400 points?

My solution is basically the same as yours, but I only got 29000+. (and I wrote 1k+ lines of code)

First I tried Floyd-Warshall, but it was too slow, so I switched to Dijkstra to calculate the minimum number of edges. Next I went for a balance between the number of edges and the distance, and sorted the flows by rate.

I wrote a simple undo operation and also tried randomization. I put the blocked flows in front, sorted from largest to smallest, and sorted the last 4/5 in ascending order (I think some large flows are more likely to be blocked, but if too many large flows are put in front, it reduces the overall answer).

I feel that this is not too different from your algorithm, but the score is not good, so I also wrote a lot of optimization.

I preprocessed some of the feasible paths for all flows and roughly calculated the importance of each edge. If the flow through an edge is much larger than its capacity, or if some flow has to go through it, then it becomes very important. The importance of the edges a flow passes through should be as small as possible.

Due to the limitations of edges and groups, I calculated the average flow rate that each node and group should pass through. The coefficient, importance, and difference between the flow and the average flow are multiplied to measure which edges a flow should pass through first. This avoids the possibility that a group with a large capacity only passes through some flows with a small rate.

I don't know why my score is so low, if someone can tell me, thanks a lot. (My English is not very good, some sentences are translated)

Thank you for the interesting contest. I am a little disappointed that I didn't get the prize, but the task is fascinating as always. The addition of an extra day was very upsetting, because in my time zone it happened just before night and because of this I could not plan my time normally. People who got this news in the morning obviously had an advantage.

All the same, marathon problems are my favorite type of competition, because they allow even those inferior in level to compete on an equal footing with participants who have constant competition practice.

Dude, your scores are three times of mine :)

Will the submissions be open to public? It would be interesting to check the top scoring solutions

1st place, 34332

My solution is divided into three parts.

I did a rather smart initial assignment. In this part, the flow rate of each flow is used only to check whether an edge can be passed, and the distance is just the number of edges in the path. My BFS starts from both the start and the end simultaneously, which is much faster than starting from only the start. To handle the blocked edges inside a node, I just considered the first edge that comes to a node, although it may cause some paths to go unfound. The algorithm is to greedily add the current shortest path (using a priority queue to maintain it), which can get about 33280 in 250ms.
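A bidirectional BFS on an unweighted graph, as mentioned above, can be sketched like this (a generic illustration, not the author's code): grow frontiers from both endpoints, always expanding the smaller one, and stop at the first level where the two searches meet.

```cpp
#include <vector>

// Generic bidirectional BFS sketch on an unweighted graph. Each side runs a
// correct BFS of its own; at each level we take the best meeting found, which
// yields the exact shortest distance, or -1 if s and t are disconnected.
int bidir_bfs(int n, const std::vector<std::vector<int>>& adj, int s, int t) {
    if (s == t) return 0;
    std::vector<int> ds(n, -1), dt(n, -1);
    ds[s] = 0; dt[t] = 0;
    std::vector<int> fs{s}, ft{t};
    while (!fs.empty() && !ft.empty()) {
        bool from_s = fs.size() <= ft.size();  // expand the smaller frontier
        auto& front = from_s ? fs : ft;
        auto& dist  = from_s ? ds : dt;
        auto& other = from_s ? dt : ds;
        std::vector<int> next;
        int best = -1;
        for (int v : front)
            for (int to : adj[v]) {
                if (other[to] != -1) {         // frontiers meet at `to`
                    int cand = dist[v] + 1 + other[to];
                    if (best == -1 || cand < best) best = cand;
                }
                if (dist[to] == -1) {
                    dist[to] = dist[v] + 1;
                    next.push_back(to);
                }
            }
        if (best != -1) return best;
        front.swap(next);
    }
    return -1;  // unreachable
}
```

On large sparse networks each side explores roughly the square root of what a one-sided BFS would, which is where the speedup comes from.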

I looked over all the unsatisfied flows and checked with BFS whether there exists a path containing at most one illegal edge. If no edge is illegal, I just add the path. If only one edge is illegal, I delete a flow that goes through that edge, then add the path just found and try to re-add the flow just deleted. Repeatedly running this algorithm until 500ms leads to 33750.

Do the following algorithm repeatedly until the end:

This can raise the score from 33750 to 34332.

There are several other optimizations in my solution:

These will certainly improve the result, but I can't evaluate their effect. So that's all of my solution. The third part seems not very effective, yet it really improves a lot; I still don't get why it is that helpful. Actually, I tried lots of optimizations during the contest, but most of them were difficult to implement and turned out to be useless after I had spent much time on them. That was very painful. :(

simpler is better)) good job!

Can you please share your code? I really want to see how you implemented the first part to run in 250ms.