### A. Replacing Elements

The minimum value that an element can take is the sum of the two smallest elements of the initial array. The constraints allow finding such a pair in $$$O(n^2)$$$ by brute force. One can also sort the array in $$$O(n \log n)$$$ and take `a[0] + a[1]`. However, the optimal way to select the $$$k$$$ smallest elements is, of course, a heap with a size limit of $$$k$$$ elements, which does the job in $$$O(n \log k)$$$. We then change every element to this value if the change reduces it, and compare against the limit.
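
As a sketch of this approach in Python (the helper names `k_smallest` and `can_fit` and the limit `d` are my own; here $$$k=2$$$):

```python
import heapq

def k_smallest(a, k):
    """Return the k smallest elements of a in O(n log k) using a size-limited max-heap."""
    heap = []  # max-heap via negation, capped at k elements
    for x in a:
        heapq.heappush(heap, -x)
        if len(heap) > k:
            heapq.heappop(heap)  # drop the largest of the k+1 candidates
    return sorted(-v for v in heap)

def can_fit(a, d):
    """Every element must either already fit the limit d, or be replaceable
    by the sum of the two smallest elements of the original array."""
    s = sum(k_smallest(a, 2))
    return all(x <= d or s <= d for x in a)
```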

### B. String LCM

We have to print the LCM anyway, so we can just generate it by looping over the two strings simultaneously with a pair of indices. If the LCM exists, then its length is the lcm of the lengths of the two strings, hence it is relatively short. If we encounter a pair of different symbols while iterating, then the LCM does not exist and we print -1.
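
A minimal sketch of this check (building the candidate by repetition, which is equivalent to walking both strings with a pair of indices; the function name is mine):

```python
from math import gcd

def string_lcm(s, t):
    """Shortest string divisible by both s and t, or None if it does not exist."""
    n = len(s) * len(t) // gcd(len(s), len(t))  # lcm of the two lengths
    cand = s * (n // len(s))                    # repeat s up to that length
    return cand if cand == t * (n // len(t)) else None
```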

### C. No More Inversions

If you swap $$$(i, j)$$$ before the maximum element and $$$(j, i)$$$ after it, then the number of inversions does not change. Moreover, it follows that the number of inversions does not change for any set of numbers placed like $$$w,x,\dots,y,z,y,\dots,x,w$$$. Therefore, we can take the permutation $$$1,2,\dots,a_n-2,a_n-1$$$, $$$k,k-1,\dots,a_n+1,a_n$$$. Finally, we can't change the first part because it is currently sorted, and any change would lead to more than $$$0$$$ inversions there.
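
A sketch of the construction, assuming (as in the statement) that $$$k \le n \le 2k-1$$$, so that the given sequence ends at $$$a_n = 2k - n$$$:

```python
def best_permutation(n, k):
    """Permutation 1, 2, ..., a_n - 1, then k, k - 1, ..., a_n for a single test case."""
    last = 2 * k - n  # a_n, the value the sequence descends to
    return list(range(1, last)) + list(range(k, last - 1, -1))
```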

### D. Program

The number of distinct coordinates is the maximum coordinate minus the minimum coordinate plus one (example: there are $$$3-(-2)+1=6$$$ integers in the range $$$[-2,3]$$$). If we exclude all commands from $$$[l, r]$$$, then what remains is a prefix $$$[0,l)$$$ and a suffix $$$(r,n)$$$. It now makes perfect sense to compute the max and min over all prefixes and all suffixes, along with the current coordinate after every prefix. From here, with the help of some drawings, we can restore the min and max as follows:

```
min = min(prefix_min[left - 1], prefix_current[left - 1] + suffix_current[right] - suffix_max[right])
max = max(prefix_max[left - 1], prefix_current[left - 1] + suffix_current[right] - suffix_min[right])
```

Here index $$$0$$$ corresponds to the empty prefix/suffix.
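
One concrete way to materialize these arrays in Python (a sketch; the suffix bookkeeping below tracks the min/max displacement of the suffix walk directly, which is a slightly different but equivalent form of the `suffix_current - suffix_max` expression above, and the names are mine):

```python
def distinct_coords(s, queries):
    """For each 1-based query (l, r): number of distinct coordinates visited
    when commands s[l-1..r-1] are skipped (the start coordinate 0 counts)."""
    n = len(s)
    # prefix data: position, running min, and running max after the first i commands
    pcur = [0] * (n + 1); pmin = [0] * (n + 1); pmax = [0] * (n + 1)
    for i, c in enumerate(s, 1):
        pcur[i] = pcur[i - 1] + (1 if c == '+' else -1)
        pmin[i] = min(pmin[i - 1], pcur[i])
        pmax[i] = max(pmax[i - 1], pcur[i])
    # suffix data: min/max displacement of the walk s[i-1..n-1] started from 0
    smin = [0] * (n + 2); smax = [0] * (n + 2)
    for i in range(n, 0, -1):
        d = 1 if s[i - 1] == '+' else -1
        smin[i] = min(0, d + smin[i + 1])
        smax[i] = max(0, d + smax[i + 1])
    ans = []
    for l, r in queries:
        lo = min(pmin[l - 1], pcur[l - 1] + smin[r + 1])
        hi = max(pmax[l - 1], pcur[l - 1] + smax[r + 1])
        ans.append(hi - lo + 1)
    return ans
```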

### E. Minimum Path

Every edge can turn out to be a min-edge, a max-edge, or a regular edge. The weight of a min-edge is twice the weight of the regular edge, and the weight of a max-edge is $$$0$$$. Each path may use at most one edge as its min-edge and at most one edge as its max-edge (a total of four options, by the product rule). It follows that we can run Dijkstra on a graph with $$$4n$$$ vertices, but for the answer we only consider paths that use both a min-edge and a max-edge (or neither, which can happen for paths of length 1).
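
A sketch of this Dijkstra over $$$4n$$$ states (bit 0 of the state marks that a min-edge was taken, bit 1 that a max-edge was; vertices are 1-based, and all names are mine):

```python
import heapq

def min_paths(n, edges):
    """Shortest 'strange' distance from vertex 1 to vertices 2..n, where a path
    costs (sum of weights) - (max weight) + (min weight)."""
    adj = [[] for _ in range(n + 1)]
    for u, v, w in edges:
        adj[u].append((v, w)); adj[v].append((u, w))
    INF = float('inf')
    dist = [[INF] * 4 for _ in range(n + 1)]
    dist[1][0] = 0
    pq = [(0, 1, 0)]
    while pq:
        d, u, st = heapq.heappop(pq)
        if d > dist[u][st]:
            continue
        for v, w in adj[u]:
            # regular edge, doubled min-edge (if not used yet), zeroed max-edge
            for nst, nw in ((st, w),
                            (st | 1, 2 * w if not st & 1 else None),
                            (st | 2, 0 if not st & 2 else None)):
                if nw is not None and d + nw < dist[v][nst]:
                    dist[v][nst] = d + nw
                    heapq.heappush(pq, (d + nw, v, nst))
    # answer: both roles used (state 3), or neither (state 0, covers length-1 paths)
    return [min(dist[v][3], dist[v][0]) for v in range(2, n + 1)]
```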

### F. Strange Set

It is an interesting problem that can be reduced to maximum flow. UPD: I changed my opinion about Dinic. First, it is not all that hard, and second, I actually had it in my library, I just forgot where it was. Added the solution to GitHub.

### G. Tiles

Hello $$$998'244'353$$$ my old friend. It is unlikely that you will implement NTT in the half hour that remains after solving the first six problems, so the problem is basically impossible unless you have NTT in your library, and hence not a good fit for a Div. 2 round. On the other hand, if you do have it, then the problem is rather straightforward, which makes it not a good fit for a Div. 1 round either. Therefore we see it in an Educational Round, popularizing the idea of NTT among lower-rated participants and reminding higher-rated ones that they should add it to their library.

Overall, I think that this educational round was very good for the purposes of educating people about standard techniques, and problems were used wisely.

Love the way you are contributing to the community. Recently found your YouTube channel; indeed you are working hard on your content. Much appreciated, man.

Can you provide the link to the YouTube channel?

Here you go: https://www.youtube.com/channel/UCMkdawJIiCQR1iyE0ZvlaqA

btw, there is also a convenient short url https://youtube.com/c/NikitaSkybytskyi

I was trying to solve problem D and was confused when I saw many people use data structures like segment trees, which I don't know yet. But your explanation of problem D was simple.

Thanks, I understood it now!

You needn't write NTT for problem G, because my cubic solution runs in 3962 ms (!)

You can try to hack it, because I couldn't :)

Here is the solution 104334607.

Wow, you must be very lucky; I was confident that this wouldn't pass. UPD: I can't hack it either.

Even better: with pragmas this solution runs in 2714 ms, reasonably safe from TL.

Wish I had written the code :(

In E, example 3, I can't find any path with score 7 for the path from 1 to 5. The best I could do is 8 (1-3-5 or 1-7-5). Same for the path from 1 to 7: what's the path with score 3?

Edit: Got it, you can revisit the same edges.

Good question! The path $$$1\to7\to4\to7$$$ has cost $$$(4+1+1)-4+1=3$$$, and the path $$$1\to7\to4\to7\to5$$$ has cost $$$(4+1+1+5)-5+1=7$$$. Note that Dijkstra can restore paths if necessary, not only distances.

I disagree that the algorithms for F and G require a library; both have relatively short implementations if you know them well and are able to think of the solutions in the hypothetical time frame you specified (unlike me). But I like your analysis, and I agree it was a good edu round.

tbh, I think that the longer/better I know certain things, the more likely I am to not pay enough attention and mistype something, introducing strange bugs that are not easy to debug under time pressure.

Hi, what are your thoughts about problem C? What should I do, given that I was unable to solve it even after the contest? Before this round, I had actually practised all problems A-C from earlier Edu rounds (78-84) so as to prevent myself from getting stuck on these three problems.

In fact, I misread problem C twice today, and so did SecondThread (in the same way), so sometimes things just don't go your way. I was so confused with C that I even switched to D and solved it first instead, just to refresh my thoughts. I think that it is a very weird and rare permutation problem and you won't see anything like that again in the near future.

In your specific case, I feel like solving B faster than you did was more important than solving C. The rule of thumb in such contests is that if the constraints are low then you are most likely expected to simply implement whatever is requested, and you don't have to waste time thinking.

Thanks for your comments. Well, yeah, I know I solved B very slowly this time XD. But I hope to increase my solve count in upcoming rounds.

I didn't pay much attention to the constraints of problem B. My bad :)

It's okay, as long as you learn something new with every contest

I would like to clarify one thought of mine. See, I love doing these contests because conceiving a new idea or making a crucial observation elates my heart. At the same time, I want to improve my skills, otherwise how shall I pit myself against others XDD. I am absolutely okay with learning new things in a contest, but not okay with being unable to improve my ideas :(. Is it because I am solving only problems A-C in practice and not going for harder ones? The problem I face with harder ones is that I am more often than not unable to grasp the concept of the solution, and I also feel that if I can't solve easier problems, how am I going to solve harder ones? I am not doing CP to increase my rating; I do it so that I improve gradually, and if I improve, the rating shall increase automatically.

I understand Dijkstra's algorithm but am still not able to understand problem E. Could you help me by expanding a little more on E?

Each edge $$$e$$$ can be used as a max-edge (with weight $$$0$$$), as a regular edge (with weight $$$w_e$$$), or as a min-edge (with weight $$$2w_e$$$). However, only one min-edge and only one max-edge can be used per path.

We create four versions of each vertex. The first version means that neither min-edge nor max-edge was used, the second means that min-edge was used but not max-edge, the third means that max-edge was used but not the min-edge, and the final fourth version means that both min-edge and max-edge were already used.

We then add regular edges from each version to the same version, max-edge (of weight 0) from versions 1 and 2 to versions 3 and 4 respectively, and a min-edge (of double weight) from versions 1 and 3 to versions 2 and 4 respectively.

Then Dijkstra will find the shortest paths from the first version of the source vertex to each version of every other vertex. Finally, it is prohibited to use a max-edge without using a min-edge, so for the answer we ignore the third version of every vertex.
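
For concreteness, here is the wiring just described, written out as an explicit edge list (a sketch with my own naming; "version $$$q+1$$$" of vertex $$$v$$$ is node $$$v + qn$$$):

```python
def build_layered_graph(n, edges):
    """Explicit 4-layer construction: layer 0 = neither special edge used,
    layer 1 = min-edge used, layer 2 = max-edge used, layer 3 = both used."""
    g = []  # directed weighted edges of the layered graph
    for u, v, w in edges:
        for a, b in ((u, v), (v, u)):  # the original edge is undirected
            for q in range(4):
                g.append((a + q * n, b + q * n, w))       # use as a regular edge
            for q in (0, 2):                              # min-edge: pay double
                g.append((a + q * n, b + (q | 1) * n, 2 * w))
            for q in (0, 1):                              # max-edge: free
                g.append((a + q * n, b + (q | 2) * n, 0))
    return g
```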

I didn't understand what you did there. Shouldn't we just check for `shortest_distances[vertex + 3 * vertex_count]`, a.k.a. the fourth version? Why do we have `shortest_distances[vertex]`?

Also:

1) How is that reflected in your code?

2) Isn't it compulsory to use the 4th version? (According to the question, the path cost is all weights minus the max weight plus the min weight.) So technically we must use only the fourth version of all nodes, right?

Thanks for the help and support ! Appreciate it :)

For paths of length 1, there can't be both a min-edge and a max-edge, so I include `shortest_distances[vertex]` into the `min` not to miss these cases. I also ignored `shortest_distances[vertex + 1 * vertex_count]` (the second version), because it is never better than `shortest_distances[vertex + 3 * vertex_count]`, but this detail is not necessary to get AC.

I kind of wanna understand why the 2nd version is NOT prohibited?

It is sort of prohibited, but it is also suboptimal: in the second version, you used a min-edge, so you paid twice for it, but you didn't use a max-edge, so you didn't save anything. As a result, you overpaid compared to the regular path price, which is, in turn, more expensive than the fourth version. So including it won't change the minimum.

Okay thank you, Btw I hope u'll review my pull request in the repo.

Comments in my github code are not intended to be complete explanations, but I changed 'select' to 'extract' in my library because you are technically right about the complexity.

And for paths of length more than 1, `shortest_distances[vertex + vertex_count]` is never better than `shortest_distances[vertex + 3 * vertex_count]`, so it does not harm to include it.