




Tutorial of 2019 Winter training #1


Codeforces (c) Copyright 2010-2020 Mike Mirzayanov



Auto comment: topic has been updated by GlebsHP (previous revision, new revision, compare).

In problem C I used a map to store previous queries in order to pass the 100th test case. It worked, lol

And the main program? Are you using the is_sorted() function from the STL?

upd: sorry, I'm a bit naive

No, actually I precomputed the answer as described in the problem explanation. But my algorithm runs in O(k*m): because it checks every column, it's pretty easy to repeat queries just to slow it down. So I used a map to store previous queries. Here is a link: 24986527.

You are so clever.

I saw your solution; the key here is the "pref" dynamic programming, not the map. There is no need to store the previous queries: I can pass all the tests without a map, twice as fast as your solution. 25166547

What I meant was that I didn't need to do exactly what the editorial said and it still worked. Later I did what the editorial explained.

cheers

:) But they are similar!

nice....haha...

It seems the queries really need to be distinct.

I saw your comment and did the same thing :)

There may be a mistake in prob E's solution.

`ans[i] += opt[i].height;`

should be `ans[i] += r[i].height;`

`opt.back()`

should be `opt.top()`

Is it right?

Yes, fixed now. Thank you!

For problem E, can't we use a modified longest increasing subsequence in O(n log n)?

I did something similar to LIS, but in the end it's not so far from the solution proposed above.

Shouldn't the dynamic programming recursion be

$$$ans(i) = \max_{j < i,\ b_j > a_i} ans(j) + h_i$$$

instead of what you wrote?

Sorry, I misunderstood your notation.

http://codeforces.com/blog/entry/50712

I think there is a mistake in 777C. It's supposed to be "such that the table is not decreasing in COLUMN j" right? Thanks for the editorial!

In problem C, it should be a[i, j] <= a[i + 1, j] instead of a[i, j] < a[i + 1, j].

When does CF Round #403 begin?

I see that there is a binary search tag for problem C. Did anyone solve this problem using binary search? Please enlighten me on how to use binary search for this problem.

If you store, for each column, the list of indices whose element is greater than the next one, then for each query you can binary search within that range in every column; if for at least one column no such index exists, the answer is "Yes". But there seems to be some optimization that I have missed. Plus, this approach is way slower than the one in the editorial. This might help

Please let me know if I'm wrong.

I am unable to understand the tutorial for problem C. Could anybody please explain it in a simpler way? I am new to DP. Thank you.

Here are two submissions for problem D.

26348627

26348633

These two submissions contain exactly the same code, yet I didn't use the same compiler. The first one got TLE, and the second one got Accepted. I would be thankful if anybody could explain to me why.

Problem E (777E - Hanoi Factory):

Could someone please explain the second approach described in the tutorial? I'm failing to understand how/why it should work with a stack. If someone could provide a link to a complete solution which uses the approach, it could possibly help as well.

I was able to understand the approach behind this submission: 33027274. It is also stack-based, although I'm not sure it uses the same idea (because I'm still not sure I understand the tutorial's idea).

Can someone provide me with a DP solution for B?

I used hashing instead of DP and my solution works in O(n). 67647151

Long live the Communist Party of Vietnam >u<

Editorial for problem C should be:

* $$$up(i,j) = up(i+1,j)$$$ if $$$a[i][j] \leq a[i+1][j]$$$

* $$$up(i,j) = i$$$, otherwise.

I think this matches your explanation better.

I think I got E in O(n log n) time.

Sort rings by outer radius first, then inner radius. Sorting should take O(n log n).

From the end of the array (biggest outer radius), do the below for each ring:

Knowing that each individual ring can be added to and removed from the stack at most once, this part of the program takes O(n) time.

The whole algorithm thus takes O(n log n + n) = O(n log n) time.

Code: https://codeforces.com/problemset/submission/777/71466945

Feel free to try and prove me wrong.

Problem E can be solved using a Fenwick tree and coordinate compression! First compress the values of a and b. Sort the rings by b, then by a, both in decreasing order. The Fenwick tree stores the maximum tower height achieved so far, indexed by the top ring's inner radius a. Iterate over the rings in that order: query the maximum value in the Fenwick tree over indices up to b[i] - 1, call it qmax; then update the Fenwick tree at index a[i] with h[i] + qmax, and the maximum height so far with h[i] + qmax.

I think that the editorial's proof of D is unnecessarily involved. We can just prove by induction that at the $$$m$$$-th step, the length of string $$$s_t$$$, $$$t = n - m + 1$$$, is the maximum our algorithm can produce. Can anyone prove me wrong?