Codeforces (c) Copyright 2010-2019 Mike Mirzayanov

How to solve I and M?

I: There is an optimal solution in which all the added numbers appear in nonincreasing order. Use this to formulate an O(n^{2}k) dynamic program, then optimize it with the divide-and-conquer trick or the convex hull trick.

M: I'm unsure about the solution, but apparently it can be proven that you spend all your money on at most two people. Then some sort of ternary search should work.

For M, my team's solution was to map the soldiers to (x, y) points (budget*potency/cost, budget*health/cost) and then the goal becomes to maximize x * y. So then you build the convex hull of those points.

That way, it's only ever optimal to combine soldiers that are adjacent along the convex hull. The reasoning is that if you tried to merge more than 2 soldiers, you'd effectively be picking a point on a chord drawn between two points of the hull (and of course that chord lies inside the hull), and the product x * y at any point of that chord is never greater than the best x * y along the hull edges it spans.

So it's not really a proof but there's some intuition behind why you only combine 2 soldiers (also other solutions don't seem to use convex hull at all).

I'm having trouble convincing myself of this. Do you have any proof or good intuition on why it's true?

Some intuition: let's say there are two undecided positions i and j. They divide the array into three parts A, B and C, like this:

-----A------|i|------B-----|j|-----C-----

Let's say there is some value O_{i}, the greatest optimal value that can be chosen for position i. The smaller O_{i} is, the more elements can pair with it in part A; the bigger O_{i} is, the more elements can pair with it in parts B and C.

Let O_{j} be the same thing for position j. The only difference is that now the smaller O_{j} is, the more elements can pair with it in part B.

So suppose O_{j} = O_{i}. Increasing O_{j} is obviously worse than not increasing it, because the greater O_{i} was, the more pairs would be made with part B, and O_{i} is optimal by definition. Even worse, the number of pairs made within B is greater if O_{j} is smaller. Thus it's always optimal to make O_{j} ≤ O_{i}, and you can extend this to any number of undecided elements.

M: Transform (c, h, p) to (h * C / c, p * C / c). The answer is a combination x_1 * v_1 + x_2 * v_2 + ... (where v_i is (h_i, p_i)); you take that vector and take x * y. But that combination has another constraint: x_1 + x_2 + ... + x_n = 1 with each x_i in [0, 1], so it's a convex combination (https://en.wikipedia.org/wiki/Convex_combination). This means you can take the convex hull, solve the problem for its vertices and edges, and that gives the greatest answer.

Is it possible to solve M with c++? It seems like there is a precision problem. My python code using (exactly) the same logic got AC.

We have reference solutions in C++ (and we also have some other AC in C++ from other contestants independently). I'm not sure why yours is failing.

Thanks. BTW, do you by any chance know why my solution to E got Idleness limit exceeded on test 11? Thanks for your help.

http://codeforces.com/gym/101982/submission/45508248 here is the code.

nvm solved. my n and m are the other way round...

How to solve D?

dp[number of digits filled in][this number is k modulo m]; dp[n][k] can transition to dp[n+1][2*k] and dp[n+1][2*k+1]. dp[n][k] = (number of ways to reach this state from [0][0], sum of ones over those ways). The answer is dp[bits][0].second.

ty

Similar solution to tfg's: dp[position of bit (from high to low)][number modulo n][how many ones I've put]. The base case is pos == -1: if the modulo is 0, return ones (the third dimension), otherwise return 0. The transition is: f(p, k, o) = f(p-1, (2^p + k) % n, o + 1) + f(p-1, k, o)

I am not sure whether I got the question right. I tried two different approaches, one similar to osdajigu_'s. Can't seem to figure out what's wrong. Thanks. Edit: found the issue, I was taking the wrong modulo number :/

Code... //@cartmancodes

```cpp
#include <bits/stdc++.h>

#define MAX 1015
#define MAXN 130
#define ll long long
#define mp make_pair

const ll mod = 1e9 + 9;

using namespace std;

pair<ll, ll> dp[MAXN][MAX];
ll vis[MAXN][MAX];
ll n, k;

ll fexpo(ll a, ll b) {
    ll ans = 1;
    a = a % k;
    while (b) {
        if (b % 2) ans = (ans * a) % k;
        a = (a * a) % k;
        b /= 2;
    }
    return ans % k;
}

pair<ll, ll> solve(ll idx, ll rem) {
    if (idx == -1) {
        if (rem == 0) return mp(1, 0);
        return mp(0, 0);
    }
    if (vis[idx][rem]) return dp[idx][rem];
    ll temp = fexpo(2, idx);
    // put 1
    ll cnt1 = solve(idx - 1, (temp + rem) % k).first;
    ll sum1 = solve(idx - 1, (temp + rem) % k).second % mod;
    // put 0
    ll cnt0 = solve(idx - 1, rem % k).first;
    ll sum0 = solve(idx - 1, rem % k).second % mod;
    ll cnt = (cnt1 + cnt0) % mod;
    ll sum = (sum1 + sum0 + cnt1) % mod;
    vis[idx][rem] = 1;
    return dp[idx][rem] = mp(cnt, sum);
}

int main() {
    cin >> k >> n;
    cout << solve(n - 1, 0).second;
    return 0;
}
```

haha I had the same mistake, had modulo 1e9+7 instead of 1e9+9.

Did exactly the same :p

How to solve K?

It can be solved with DP. Let's focus on the minimum expected value; the maximum is similar. Let f(mask) be the minimum expected value given the current mask (1 for available numbers, 0 otherwise). The transition would be something like: for each total d, compute which subsets of numbers can be removed so that they sum to d, and add that move to moves[d]. There are a few cases to handle; when you can't make any move, that's the base case. You also have to compute the probability of rolling d with 2 dice: just do a double nested loop counting occurrences of i + j, then divide each count (for d from 2 to 12) by 36. Here is the code

How to solve E and B?

E: You have to find the shortest cycle around the cell containing 'B'. For each cell, consider 2 nodes in the graph, P and Q. Also, consider a line from 'B' to an edge of the grid. For a particular cell, you are on node Q if you have crossed that line an odd number of times in your path; otherwise, you are on node P. Now calculate the shortest distance from P to Q for any cell.

Problem E can also be solved using max-flow (min-cut, actually). Build the following flow network: for every position of the matrix create two nodes (call them entry node and exit node) and add an edge between them with capacity equal to the cost of blocking that position if it has a letter ('a', 'b', ...), or INF if it is the position of 'B' or a dot ('.'). The source is the entry node of 'B', and the sink is an extra node connected from the exit nodes of every position on the border with capacity INF. Then for every position, add an edge from its exit node to the entry node of every adjacent position with capacity INF. Run a max-flow algorithm on this graph and that is the answer. What we are actually computing here is the minimum cost to separate the sink from the source (min-cut), which is equivalent to max-flow. The problem is similar to this one http://coj.uci.cu/24h/problem.xhtml?pid=2505 in case you want to practice this approach. Good luck

Nice, thank you

For problem E, why do standard max-flow algorithms (Dinitz and Push-Relabel) run very fast even though the graphs involved have about $$$2nm = 1800$$$ nodes and a similar order of magnitude of edges, while these algorithms take cubic time in the worst case? Is it simply that the test data does not have the worst case, or is there a tighter analysis you can do on grid graphs like these to prove better upper bounds?