I have a question: does anyone know the name of this proof technique? A similar approach is used very often in proofs of correctness of algorithms. I'll start from the simplest case, which isn't very representative.

All the proofs below are how I would prove the correctness of solutions to these tasks.

**Proof**

The following is a proof that resembles a basic proof by contradiction.

Suppose we have a correct optimal covering. If any segment right next to the end of a piece of tape is covered at least twice, then we can remove the overlapping part and shorten the overlapping pieces by at least one. This contradicts the assumption that the covering was optimal; thus an optimal covering has no overlaps.

The proof above looks similar to what I'm asking about, but this case is not representative.

Now, again, suppose we have a correct optimal covering. If any segment right next to the end of a piece of tape covers a non-broken part, then we can remove this unnecessary part. This contradicts the assumption that the covering was optimal; thus every piece of tape in an optimal covering starts and ends on broken segments.

Here comes the proof technique I'm asking about:

Suppose we have a correct optimal covering with total tape length $$$L$$$ and $$$c$$$ pieces. Find the longest tape-covered segment between consecutive broken parts; say its length is $$$a$$$. If there is any exposed part of length $$$b < a$$$, then we can join (connect) the two tape ends around the exposed segment of length $$$b$$$, and now we have $$$c-1$$$ pieces with total length $$$L + b$$$. After that, cut the tape across the covered segment of length $$$a$$$ and remove it. Now we are back to $$$c$$$ pieces, with total length $$$L + b - a$$$. But because $$$b < a$$$ we know that $$$L > L + b - a$$$, contradicting optimality. Therefore no exposed segment may be shorter than the longest covered one, which means we should expose the largest segments.

In other words, the proof above is a sequence of transformations of an optimal answer showing that the optimal strategy is greedy — take the largest segments, expose them, and cover everything else. It is essentially a proof that the maximum sum you can achieve by taking $$$k-1$$$ numbers from a list is obtained by taking the largest $$$k-1$$$ of them. To prove that, I would use the same technique I'm talking about. But it's the simplest case and not very representative, because it looks like a basic proof by contradiction.
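As a sanity check of the greedy this argument justifies, here is a small sketch in Python (the function name and the exact input format — broken positions as sorted integer coordinates — are my own assumptions, not from any official statement):

```python
def min_tape_length(broken, k):
    """Minimum total tape length needed to cover all broken positions
    with at most k pieces. By the exchange argument above, an optimal
    covering exposes exactly the k-1 largest gaps between consecutive
    broken positions, so cover the whole span and subtract those gaps."""
    gaps = sorted((b2 - b1 - 1 for b1, b2 in zip(broken, broken[1:])),
                  reverse=True)
    span = broken[-1] - broken[0] + 1
    return span - sum(gaps[:k - 1])
```

For example, `min_tape_length([1, 5, 6, 7, 100], 3)` returns 5: the two largest gaps (sizes 92 and 3) are exposed, and the pieces cover [1], [5, 7], and [100].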

Here is a slightly more representative example of this proof technique.

**Proof**

Suppose we have a correct optimal answer: arrays $$$b_i$$$. If there is any $$$j$$$ for which $$$b_{i,j} < b_{i,j+1}$$$ holds for at least two different $$$i$$$, call two of them $$$u$$$ and $$$v$$$. Then for each $$$z$$$ starting from $$$j+1$$$ we can increase $$$b_{u,z}$$$ and decrease $$$b_{v,z}$$$ by the same amount $$$b_{v,j+1}-b_{v,j}$$$. After this modification $$$b_v$$$ is still non-decreasing, because now $$$b_{v,j+1} = b_{v,j}$$$ and all numbers after that were decreased by the same amount. Also, because a $$$<$$$ turned into an $$$=$$$, the number of distinct values in $$$b_v$$$ decreased. Similarly, $$$b_u$$$ is still non-decreasing, and the number of distinct values in $$$b_u$$$ is unchanged. Every affected $$$a_j$$$ is unchanged, because we increased one term by the same amount we decreased another. For this $$$j$$$ we reduced the number of indices $$$i$$$ with $$$b_{i,j} < b_{i,j+1}$$$, and $$$b$$$ still satisfies all the constraints. We can repeat this modification until every $$$j$$$ has at most one $$$i$$$ with the property $$$b_{i,j} < b_{i,j+1}$$$.

Let's say $$$x_j$$$ is the $$$i$$$ such that $$$b_{i,j} < b_{i,j+1}$$$, and $$$x_j = -1$$$ if there is none. Now, with the same technique, we can show that we can modify the arrays $$$b$$$ into another correct optimal answer in which the array $$$x$$$, ignoring the positions equal to $$$-1$$$, becomes non-decreasing.

Suppose there are indices $$$u < v$$$ with $$$x_u > x_v$$$ and $$$x_u, x_v \neq -1$$$. Then we can decrease/increase particular elements so that $$$x_u$$$ and $$$x_v$$$ swap their values, while the answer stays correct and optimal. And because we can swap any inversion, we can sort $$$x$$$ into non-decreasing order.

End of proof. Using the fact above, we can show that a greedy algorithm is optimal.

This proof also looks like some fact proved by contradiction, but this time we don't rule out other kinds of solutions as in the previous task. We just reduce all optimal answers to some useful form. Then we look for an answer of that known form, which happens to be unique up to the choice of the first number in the first array when the answer has two or more arrays.

A much more representative example:

**Proof**

Suppose we have a correct optimal answer. Let's make a log of our actions: an array of pairs $$$(p_i, m_i)$$$, where $$$p_i$$$ is the product we bought at price $$$m_i$$$. For example, the log (1, 2), (1, 1) means we bought product 1 at price 2 and then bought one more of product 1 at price 1. Let's sort the log of all actions of our optimal answer in non-decreasing order of the discount associated with the product we buy.

An even more representative example:

**Proof**

Suppose we have a correct optimal distribution. Without loss of generality $$$S_1/k_1 \geq S_2/k_2$$$ (otherwise swap the indices of the services). Sort all servers in non-decreasing order of capacity. If any server used by service 1 has lower capacity than some server used by service 2, then we can swap the services: because $$$S_1/k_1 \geq S_2/k_2$$$, if a server can hold service 1, it also has enough capacity to run service 2; and the server which was running service 2 can run service 1 because it has more capacity by assumption. After swapping the services we still have a correct optimal distribution. While such a pair remains, we can keep swapping services.

This means we can transform any optimal distribution into one where no server with service 2 has bigger capacity than any server with service 1. In other words, when the servers are listed in non-decreasing order of capacity, all servers with service 1 come after all servers with service 2.

Next, if there is any unused server with bigger capacity, we can move a service onto it. So we can transform any correct optimal distribution into another distribution in which all services run on the servers with the biggest capacities, all servers with service 1 come after all servers with service 2, and there are no unused servers in between.

End of proof. This proof again shows us a useful form of the answer, and then we just look for an answer in this form. But this time we have to go through the search space: iterate over $$$k_1$$$ and find an appropriate $$$k_2$$$.
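That search can be sketched as follows. The input format is my own assumption (a list of server capacities and the two totals $$$S_1$$$, $$$S_2$$$; a server of capacity $$$c$$$ can host one of the $$$k_i$$$ shards of service $$$i$$$ iff $$$c \geq S_i/k_i$$$), and the quadratic scan is for clarity only, not an efficient solution:

```python
def distribute(caps, s1, s2):
    """By the argument above, it suffices to look at answers where, with
    servers sorted by capacity descending, one service occupies a prefix
    of ka servers and the other service the next kb servers. Try both
    orders (order 0: service 1 takes the prefix; order 1: service 2 does).
    Returns (order, ka, kb), or None if no valid distribution exists."""
    caps = sorted(caps, reverse=True)
    n = len(caps)
    for order, (sa, sb) in enumerate(((s1, s2), (s2, s1))):
        for ka in range(1, n):
            # the smallest server of the prefix must hold its share:
            # caps[ka - 1] >= sa / ka, written without division
            if caps[ka - 1] * ka < sa:
                continue
            for kb in range(1, n - ka + 1):
                if caps[ka + kb - 1] * kb >= sb:
                    return order, ka, kb
    return None
```

For example, `distribute([3, 5, 2, 9, 8, 7], 8, 16)` returns `(1, 2, 2)`: service 2 takes the top two servers (9 and 8, each at least $$$16/2 = 8$$$) and service 1 takes the next two (7 and 5, each at least $$$8/2 = 4$$$).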

The last example, and the most representative one.

1249D2 - Too Many Segments (hard version)

**Proof**

Suppose we have an optimal answer (a decision about which segments to remove and which to keep). Among the initial set of segments, pick the point $$$x$$$ with the smallest coordinate that is covered more than $$$k$$$ times. Order all segments covering this point by $$$r$$$; say there are $$$c$$$ of them. Then we had to remove at least $$$c - k$$$ segments for this point. If any segment $$$u$$$ among the $$$c - k$$$ segments with the highest $$$r$$$ is not removed, then some other segment $$$v$$$ covering this point is removed instead. Keep segment $$$v$$$ and remove segment $$$u$$$ instead. Over the intersection of segments $$$u$$$ and $$$v$$$, the number of segments covering each point doesn't change, because one segment was added and one was removed. Also, because $$$r_u \geq r_v$$$, the number of segments over $$$(r_v, r_u)$$$ decreases. And since we picked $$$x$$$ as the point with the smallest coordinate covered more than $$$k$$$ times, and $$$l_u, l_v \leq x$$$, no point with coordinate less than $$$x$$$ can become covered more than $$$k$$$ times after the swap. In other words, after this change we get another correct optimal answer. We can repeat this until all $$$c - k$$$ segments with the highest $$$r$$$ are removed.

Therefore, we can transform any optimal answer (a decision about segments) into another answer (another decision) with those $$$c - k$$$ segments removed. If any optimal answer exists, then an answer of our form also exists. Thus, we can safely remove those $$$c - k$$$ segments in any case. Then forget about them and solve the new task without those segments (find the new point with the smallest coordinate covered more than $$$k$$$ times, and so on).

In this example we don't have a contradiction. It also doesn't show what form the answer has. But we are yet again transforming an arbitrary optimal answer into the result of our algorithm, step by step. There is no direct contradiction in any step; instead, in the end we show that after all the changes we get an answer that is not worse than the optimal one, so if we do the same, we find an optimal answer.
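The procedure this argument justifies can be sketched like this (a brute-force sweep over integer points for clarity; the function name, the `(l, r)` pair format, and the 0-based result indices are my own assumptions, and this is not the optimized solution to 1249D2):

```python
def segments_to_remove(segments, k):
    """The greedy justified above: sweep points left to right; at the
    first point still covered more than k times, remove the covering
    segments with the largest right endpoints. Segments are (l, r)
    pairs with inclusive integer endpoints; returns sorted 0-based
    indices of the removed segments."""
    removed = set()
    lo = min(l for l, _ in segments)
    hi = max(r for _, r in segments)
    for x in range(lo, hi + 1):
        covering = [i for i, (l, r) in enumerate(segments)
                    if i not in removed and l <= x <= r]
        if len(covering) > k:
            # keep the k segments with the smallest r, drop the rest
            covering.sort(key=lambda i: segments[i][1], reverse=True)
            removed.update(covering[:len(covering) - k])
    return sorted(removed)
```

For example, with `segments = [(1, 3), (2, 4), (3, 5)]` and `k = 1`, point 2 is the first one covered twice, so the segment reaching farthest right, `(2, 4)`, is removed; then point 3 is covered twice and `(3, 5)` is removed, leaving only `(1, 3)`.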

So, the usual steps of this technique are the following:

- say: "Suppose we have an optimal answer"
- compare our answer with the optimal answer
- make a single change that makes the optimal answer less different from our answer
- say: "we can repeat these steps until the optimal answer becomes ours"

This way we prove that a greedy strategy is optimal, or that among all optimal answers there has to be one of some particular form.

Does anyone know the correct name of this technique? Does it even have a name?

Exchange argument. A pretty standard way to prove the optimality of greedy strategies.

I think your post could be used as a very nice tutorial on exchange arguments for beginners.