New problems will be added as soon as the analysis is ready.


Tutorial of Codeforces Round #363 (Div. 1)

Tutorial of Codeforces Round #363 (Div. 2)

Codeforces (c) Copyright 2010-2017 Mike Mirzayanov



wrong site buddy. this is not facebook!

oh what?? wait...let me check...oh yeah...

The first test data of B is too weak......

Wow! There is so much to learn from these editorials. I made so many mistakes while solving the problems. Lots to learn yet. :)

Could problem B be solved using the mode? The basic idea: count the mode of V[] and G[], then simulate the explosion using those modes as the coordinates. I'm curious because when I validated it that way I found some corner cases that are really tedious to fix.

Does C++ really have built-in tools to work with dates? As far as I know, it can only deal with timestamps (which might only store 32 bits, so the timestamps of 10^12 given in the problem were too much for it to handle). http://www.cplusplus.com/reference/ctime/tm/

It looks like the g++ used on Codeforces is 32-bit, so time_t is a 32-bit integer which cannot store 10^12 :( Why not support a 64-bit compiler Q_Q?
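As a sketch of a workaround (not from any comment here): the calendar arithmetic can be done by hand on 64-bit integers, for example with the well-known civil-from-days algorithm, which avoids time_t altogether. A timestamp of 10^12 seconds is just 10^12 / 86400 days, which fits easily in a long long.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Convert a count of days since 1970-01-01 into (year, month, day).
// Works on 64-bit inputs, unlike a 32-bit time_t.
// (Howard Hinnant's civil_from_days algorithm.)
tuple<long long, int, int> civilFromDays(long long z) {
    z += 719468;  // shift the epoch to 0000-03-01 so leap days land at year end
    long long era = (z >= 0 ? z : z - 146096) / 146097;
    long long doe = z - era * 146097;                                // day of 400-year era
    long long yoe = (doe - doe / 1460 + doe / 36524 - doe / 146096) / 365;
    long long y   = yoe + era * 400;
    long long doy = doe - (365 * yoe + yoe / 4 - yoe / 100);         // March-based day of year
    long long mp  = (5 * doy + 2) / 153;                             // March-based month
    long long d   = doy - (153 * mp + 2) / 5 + 1;
    long long m   = mp + (mp < 10 ? 3 : -9);
    return {y + (m <= 2), (int)m, (int)d};
}
```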


My code was accepted, but for B, isn't the answer on the test `2 2` / `.*` / `*.` YES 1 1?

In fact, I misread. I misread. I misread.

???????

You can place the bomb on a '.', so (1, 1) is also a correct answer.

For problem C I didn't realize that we can look at this process backwards, so instead I came up with a slightly different solution.

Let's fix a number i and find the probability of it being in the final set after 10^100 iterations. Let f(steps, mask) be the probability that after steps steps all the numbers described by mask are located before element i in the list and the rest of the numbers are after it (here we ignore k and just assume that we're maintaining a list of all elements sorted by the time of their latest lookup). This describes a Markov chain, and we're interested in finding its stable state (i.e. values of f() such that f(step, mask) = f(step + 1, mask) = f(mask)).

Now, if we write down the equations for f(mask) where mask > 0, we can see that f(mask) depends only on f(mask') where mask' ≤ mask, and there are at most n different values mask'. Instead of writing down the equation for f(0) we can use another equation, f(0) + f(1) + ... = 1 (the system has n variables and n + 1 equations, so we can just discard the equation for f(0)).

In order to solve this system, let's assume that f(0) = X, and for every f(mask) compute the probability in the form f(mask) = A·X + B (i.e. store it as a pair (A, B)); this can be done by dp because f(mask) depends only on f(mask') for mask' ≤ mask. Now we can use the last equation (f(0) + f(1) + ... = 1) to find X (it becomes an equation of the form A·X + B = 1), which allows us to find f(mask) for every mask. At the end, we're interested in f(mask_1) + f(mask_2) + ... over all mask_i which have fewer than k bits equal to 1.

I followed a very similar thought process during the contest :). However, noting that f(0) = p_i simplifies things :). Btw, aren't the B coefficients always 0?

Shit. I knew it should be less complicated :)

I still don't understand :D

The main thing I can't get is why the probability has a limit as steps goes to infinity.

Love this kind of graph from problem D :D (I call it a flower graph because the components look like flowers to me; I never knew the name "functional graph").

Problems with the same kind of graph:

Joining Couples from South America ICPC 2012 : https://icpcarchive.ecs.baylor.edu/index.php?option=onlinejudge&page=show_problem&problem=4151

BFFs from Google Code Jam 2016 1A: https://code.google.com/codejam/contest/4304486/dashboard

I believe they are also known as Pseudoforests, if you are interested.

I always get hit by TLE when dealing with bitmask dp questions. Can somebody please help and point out where I wasted all the computing power? Thanks in advance.

This time's Div2-E

A practice of 8C I have done a while ago, also a bitmask dp question

If I understood your logic correctly, you are not using the memo table to avoid recalculation; you are just storing values there but still computing the same state more than once.

I just added the line `if(dp[prev|(1<<i)]!=0.0) continue;`. Now I see what you mean: I think what I was doing was essentially a dfs that visits the same state several times, instead of storing the value and reusing it later. Is that it?

I wonder if there is a way to generate bitmasks efficiently, or at least in an "orthodox" way. Right now my only idea is to use a queue to store the possible next states of the bitmasks, somewhat similar to bfs, but that sounds clunky and weird.

UPD: Surprise surprise, it ended up working, thanks for pointing out my problem! :D

http://codeforces.com/contest/699/submission/19267737

Great! :)

I don't think there is any special way. You just have to keep in mind that moving to the next state usually means setting a new bit (using bitwise OR with some power of 2). You can use some defines to create an interface; I think that's the best way to hide the bitwise operations in the code:
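For illustration, such an interface could look like the helpers below (the names are my own suggestion, not from any submission here). Note also that simply iterating masks in increasing numeric order already visits every state before all of its supersets, so no bfs-style queue is needed.

```cpp
#include <bits/stdc++.h>

// readable wrappers around the raw bitwise operations
inline int  withItem(int mask, int i) { return mask | (1 << i); }    // set bit i
inline bool hasItem(int mask, int i)  { return (mask >> i) & 1; }    // test bit i
inline int  countItems(int mask)      { return __builtin_popcount(mask); }

// Iterating masks in increasing order processes each state before any of
// its supersets, which is exactly the order a bitmask dp needs:
//
//   for (int mask = 0; mask < (1 << n); mask++)
//       for (int i = 0; i < n; i++)
//           if (!hasItem(mask, i))
//               /* relax dp[withItem(mask, i)] using dp[mask] */;
```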

And another question regarding Div2-F: I get the approach and tried to write code for it, yet my sorting function fails to do its job. I suspect floating-point problems, but nothing seems wrong with the angles even when checking them with setprecision(20).

The code with an echo test

I have to applaud on the great illustration of the test cases though, it helped me a lot in understanding the problem and the possible approach to solve the question. =D

UPD: OMG, my brain must have been rotten during the contest... I made a dumb mistake in the sorting function.

Can someone please explain the dp we have to use in Div2-E (LRU)?

DISCLAIMER: I AM ALSO NEW TO BITMASK DP, SO WHAT I AM EXPLAINING COULD POSSIBLY BE WRONG. So, I believe you have already understood why the problem becomes "choosing K items to be added at the end of the cache"; now we need dp to find the chance of picking each combination.

In this bitmask dp, we store the state of each item: 1 means it has been picked and 0 the opposite; then we combine the flags together. (That means, for n = 3, if items 2 and 3 have been picked, the flag could be 110 or 011, depending on the implementation; I prefer using the rightmost bit for the first item.)

Similar to regular dp, bitmask dp stores previously processed values for later use, to avoid recomputation. To walk from state to state, we do a bitwise operation that sets some flag (i.e. picks up a new item). Remember to generate the states efficiently and don't make the mistake I made above.

The answer for each item is the sum of dp[mask] over masks with k flags active in which that item's bit is also set. At the end of the editorial it mentions taking care of 0.0 probabilities (test case 13) and of the case where fewer than k items have non-zero probability (test case 17). Handle these cases and you are done.

http://codeforces.com/contest/699/submission/19267737

My AC solution. Again, I am new to bitmask dp, so my implementation might be considered unorthodox.

I know what bitmask dp is. What is the recurrence for this dp? dp[011] = what, in terms of the other dp values?

Let me try to explain: dp[011] = dp[001]*P[1]/(P[2]+P[1]) + dp[010]*P[0]/(P[2]+P[0]), where P[0] is the probability of the first video, P[1] the probability of the second video, and P[2] the probability of the last one. Note that the most significant bit is responsible for the last video (it shows whether it is in the cache or not), and the least significant bit is similarly responsible for the first video. If you still have any questions, please feel free to ask.

I have a confusion: isn't dp[011] also dependent on dp[101]? You have the 3rd and 1st videos in the cache, and then the 3rd video is removed by the 2nd video if the 2nd one is requested? I'm sorry, I know I probably don't understand the state transitions clearly. Could you clear that up for me?

We are looking at the problem from the other end. Instead of finding out which videos are in the cache at the end, we start filling an initially empty cache with new videos until it's full.

This way there are no removals, since we stop processing once the cache is full.

Thanks for your explanation. So, are we finding all possible sets S of size k from the n elements (removing 0-probability elements if present inside a set)? Then for each element v of the n, we find the probability as count(s in S such that v is in s) / (total possible sets)?

So, to get all the sets of size k we use bitmasks? What information about the set is the dp storing? Is it storing the probability of that set occurring?

Div1 C can also be solved by inclusion-exclusion. We brute force the last occurrence of a[i] and then choose a mask of the numbers which will appear to its right. This leads to a geometric progression which can be trivially summed. We use inclusion-exclusion to remove the overcounting. See my implementation for details.

Can you give a detailed explanation please?

For each a[i], we repeat this algorithm:

We brute force its last occurrence; to its right there can then be at most K-1 distinct other numbers. So consider this quantity for a particular mask of numbers allowed to appear on the right (the number of set bits must be <= K-1): x^j, where x = the sum of the a[i] values in the mask and j = the number of positions available to the right of a[i]. This means any of the numbers in the mask can appear to its right.

We have to sum this over all j, which gives 1/(1-x).

If you consider only the masks with exactly K-1 set bits, you will cover all cases, but you will overcount (a lot). To fix that, you subtract the answers for the masks with K-2 set bits, then add back those with K-3, etc. Note that we can't just naively keep subtracting/adding the answer for each mask: we have to count exactly how many times each case is overcounted.

Consider a general mask p: it has already been counted in every mask which is its superset. You can find the number of its supersets by an initial brute force or some combinatorics.
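Written out explicitly (with b denoting the number of set bits of p, chosen among the n-1 values other than a[i]; this notation is mine, not the commenter's), the two ingredients above are:

```latex
% summing x^j over all suffix lengths j (geometric series)
\sum_{j=0}^{\infty} x^{j} = \frac{1}{1-x}, \qquad 0 \le x < 1.

% how many masks with exactly s set bits contain a given mask p with b set bits
\#\{\, q \supseteq p \;:\; \operatorname{popcount}(q) = s \,\} = \binom{n-1-b}{\,s-b\,}.
```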

My solution was exactly the same.

Can you explain this part in your code?


I cannot understand the reason for multiplying by C[n-1-ln][l].

For Div2 problem B, whichever cell we plant the bomb in, it can destroy at most n+m-1 cells (the current row plus the current column, minus the bomb cell, which is counted twice). How about checking if cur > n+m-1 before looping over the cells to find cur = cnt? Since the bomb can never destroy more than n+m-1 cells, this cuts the computation needed for a test case where n = 1000, m = 1000, and the whole field is full of walls.

Correct. After counting all walls: if cur is 0, we can print YES 1 1 and return; if cur > n+m-1, we can immediately print NO and return.
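A sketch of the standard counting solution with both early exits (the helper name and signature are mine, not from any submission in this thread):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns 1-based (row, col) for a bomb cell that destroys every wall,
// or (-1, -1) if no single cell works.
pair<int, int> findBomb(const vector<string>& g) {
    int n = g.size(), m = g[0].size(), total = 0;
    vector<int> row(n, 0), col(m, 0);
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            if (g[i][j] == '*') { row[i]++; col[j]++; total++; }
    if (total == 0) return {1, 1};           // no walls: any cell works
    if (total > n + m - 1) return {-1, -1};  // one bomb can never reach that many
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            // subtract 1 if the bomb cell itself is a wall (counted in both row and col)
            if (row[i] + col[j] - (g[i][j] == '*') == total)
                return {i + 1, j + 1};
    return {-1, -1};
}
```

Note how the second early exit is exactly the cur > n+m-1 check suggested above.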

For div2/C, I think there's a more straightforward greedy solution.

If possible, Vasya will not rest. So we define `pre = a[0]`, `now = a[1]`, `ans = 0`, and iterate from `i = 1`:

- if `pre == 0`: Vasya will rest, `ans++`;
- if `pre == now && now != 3`: Vasya will rest on day `i`; don't forget to set `a[i]` to `0` after increasing `ans`;
- if `now == 3`: change `a[i]` to `1` if `pre == 2`, or to `2` if `pre == 1`;
- if `pre == 3`: it doesn't matter, Vasya definitely will not rest on day `i`.

Same solution here. For more explanation, here are the cases to consider, by observation.

When a[i] == 0 it's pretty clear that Vasya can do nothing but rest.

When a[i] == 1 or a[i] == 2, the only concern is that Vasya can't do the same task on two consecutive days, so the greedy observation is that it's always better to do the available task on the first day, rest the next day, then work, then rest, and so forth.

To deal with a[i] == 3: if the segment is not of the form 1(a bunch of 3s)2 or vice versa, you can just turn each 3 into anything you want without any rest. Next, look at the sequence 1212212: no matter whether Vasya chooses to rest on the 4th or the 5th day, the answer remains the same (in fact Vasya rests at most 1 day to resolve a pair of consecutive identical tasks of any form). So for 1(a bunch of 3s)2 and 1(a bunch of 3s)0, and the same sequences starting with 2, the greedy choice is to change every 3 to the opposite of the task Vasya did the previous day, iteratively.

Hope this is clear enough. Here's my submission by the way, 19243726
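For comparison, the same answer can be obtained with a tiny dp over "what Vasya did yesterday", which sidesteps the case analysis entirely. This is a sketch (function name is mine); a[i] encodes 0 = nothing available, 1 = contest, 2 = gym, 3 = both, as in the statement.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimum number of rest days. dp state: what Vasya did the previous day
// (0 = rested, 1 = contest, 2 = gym).
int minRestDays(const vector<int>& a) {
    const int INF = 1e9;
    array<int, 3> dp = {0, INF, INF};  // before day 1, treat him as having "rested"
    for (int x : a) {
        array<int, 3> ndp;
        int best = min({dp[0], dp[1], dp[2]});
        ndp[0] = best + 1;                           // rest today
        ndp[1] = (x & 1) ? min(dp[0], dp[2]) : INF;  // contest, but not two days in a row
        ndp[2] = (x & 2) ? min(dp[0], dp[1]) : INF;  // gym, but not two days in a row
        dp = ndp;
    }
    return min({dp[0], dp[1], dp[2]});
}
```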

My Greedy Solution for problem C /Div 2 19276210

Problem C div2 also has a greedy solution; here is my submission: 19270426.

I met a problem with Div2 E (Div1 C). When I calculate the value of dp[mask], it seems that I need to use dp[mask] itself to calculate dp[mask]. For example, suppose there are 4 videos and the size of the cache is 3: dp[1100] means the probability of the first and second videos being in the cache. If we choose the first video again, the mask will still be 1100. It seems to be a "self reference". How can I deal with this problem? My English is not good; sorry if my question is hard to understand.

Since we are considering the state after a googol of queries, we only want to consider the last k distinct chosen items. Choosing an item again can be treated as skipping a query, which has an insignificant effect since we are considering a googol of queries.

Thanks haleyk100198 for the answer, but personally I think that's not the case.

My solution for this problem is to solve an equation. For example, in order to calculate dp[1100] from my previous post, I would solve this equation:

dp[1100] = p1*dp[1100] + p2*dp[1100] + p1*dp[0100] + p2*dp[1000]

(p1 means the probability that the first video is chosen, p2 means the probability that the second video is chosen.)

So the answer is dp[1100] = p1/(1-p1-p2)*dp[0100] + p2/(1-p1-p2)*dp[1000].

But actually that's incorrect. The correct answer should be dp[1100] = p1/(1-p2)*dp[0100] + p2/(1-p1)*dp[1000]. I can't understand why my solution is wrong.

I understand your doubt, but the problem is about the last K distinct items chosen.

Consider that we are trying to fill the cache with 3 items using infinitely many moves. For state [1100], choosing items 1 or 2 again doesn't mean we have to calculate the values of [1110] and [1101] recursively just because there is a (p1+p2) chance of reaching state [1100] again. Instead, since there are infinite moves, you should treat choosing options 1 and 2 as invalid, because we are not going to move on to the next state until we choose something that hasn't been chosen yet. This is why we can completely ignore p1 and p2 in our calculation.

I don't think you can ignore choices this way; you can do it only if p1+p2 is 0.

Well, I don't have a rigorous proof for ignoring the choices, but since we keep returning to the same state until we choose something different, and we have infinite moves, all the recursive calculations can be treated as if p1 and p2 were not there.

UPD: Consider the sum of the geometric series to infinity: the chance of reaching [1110] from [1100] equals p3*dp[1100] + (p1+p2)*dp[1100]*p3 + ... = p3*dp[1100]*(1-(p1+p2)^inf)/(1-(p1+p2)) = p3*dp[1100]/(1-p1-p2). The same holds for any option and any other state, so p1 and p2 can be "ignored" here... maybe that's just not the word I should use. (Well, unless p1+p2 = 1, but we had already handled the case p3 = 0.)

UPD2: Rethought this... yeah, it was my bad to use the word "ignore" here; it's kind of misleading.

Your equation doesn't represent what you think it does. Given your equation, what your dp[mask] actually represents is: what is the probability that the cache is exactly at state mask at the end of the process?

We can then see that, because the cache ends up with K bits set, the solution for all masks with fewer than K bits set is dp[mask] = 0, and then we don't have enough equations to deduce dp[mask] for the masks with K bits.

The dp we actually want is "what is the probability that, at some point in time, the cache equals mask?", which has a different, non-self-referential equation, already explained.

(The probability that you move from mask1 to mask2 by picking j is P(mask1, j) = pj + (sum of p's in mask1) * P(mask1, j); solving, we get P(mask1, j) = pj / (1 - sum of p's in mask1).)

Let's say that mask has t set bits — (V1, V2, ..., Vt). Now let's fix the next query, denote its index with F. Now we need to add dp[mask|(1<<F)]*p[F] to the answer. We will keep a variable S (comes from Sum).

So if F is not among those V1,V2,...,Vt, then let's add dp[mask|(1<<F)]*p[F] to S. If F is V1, we need to add dp[mask]*p[V1] to the answer. If F is V2, we need to add dp[mask]*p[V2] to the answer. And so on, if F is Vt, we need to add dp[mask]*p[Vt] to the answer. Or in total, if F is among V1,V2,...,Vt, we add dp[mask]*(p[V1]+p[V2]+...+p[Vt]) to the answer.

To sum up, we have dp[mask]=S+dp[mask]*(p[V1]+p[V2]+...+p[Vt]). Well everything is simple now because we know S, we know p[V1]+p[V2]+...+p[Vt] and we need to find dp[mask].

S=dp[mask]-dp[mask]*(p[V1]+p[V2]+...+p[Vt])

S=dp[mask]*(1-(p[V1]+p[V2]+...+p[Vt]))

dp[mask]=S/(1-(p[V1]+p[V2]+...+p[Vt]))

Don't forget to consider the case when p[V1]+p[V2]+...+p[Vt]=1 because you will have division by zero :)
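Putting the whole derivation together, the dp fits in a few lines when written in the forward "fill an initially empty cache" form discussed in this thread. This is a sketch, not any particular submission; the function name and the 1e-9 cutoff for zero probabilities are my choices.

```cpp
#include <bits/stdc++.h>
using namespace std;

// For each video, the probability that it is in the cache after ~10^100 queries.
// dp[mask] = probability that the cache passes through exactly the set `mask`
// while being filled; p[i] / (1 - inside) is the geometric-series-summed chance
// that video i is the next NEW video requested (repeat requests change nothing).
vector<double> lruProbabilities(const vector<double>& p, int k) {
    int n = p.size();
    int nonzero = count_if(p.begin(), p.end(), [](double x) { return x > 1e-9; });
    k = min(k, nonzero);  // at most `nonzero` videos can ever enter the cache
    vector<double> dp(1 << n, 0.0), ans(n, 0.0);
    dp[0] = 1.0;
    for (int mask = 0; mask < (1 << n); mask++) {
        if (dp[mask] == 0.0) continue;
        if (__builtin_popcount(mask) == k) {  // cache full: a final state
            for (int i = 0; i < n; i++)
                if (mask >> i & 1) ans[i] += dp[mask];
            continue;
        }
        double inside = 0.0;  // total probability already in the cache
        for (int i = 0; i < n; i++)
            if (mask >> i & 1) inside += p[i];
        for (int i = 0; i < n; i++)
            if (!(mask >> i & 1) && p[i] > 1e-9)
                dp[mask | (1 << i)] += dp[mask] * p[i] / (1.0 - inside);
    }
    return ans;
}
```

Skipping videos with p[i] near 0 handles both the zero-probability case and the division-by-zero case mentioned above, since whenever the cache is not yet full there is still a video with positive probability outside it.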

I have a question about my solution to div2 E. See tests #65 and #36: n is the same, and #36 contains many 0 probabilities, but my solution does about the same number of operations for both; however, #36 costs much more time than #65. It seems that the time cost of mul/div operations on doubles depends on the values. So strange.

http://codeforces.com/contest/699/submission/19276677

UPDATE: I changed `dp[l | (1 << i)] += dp[l] * p[i] / sum;` to `if (sum > ERROR) { dp[l | (1 << i)] += dp[l] * p[i] / sum; }` and the time is OK now. It seems that dividing by 0 on doubles doesn't cause a runtime error, but it does cost more time.

Problem D: I don't understand, what if there are no cycles in the given graph?

That is impossible: a graph with n vertices and n edges must have at least one cycle.

Got it, thanks :)

In Div2-E, why is f(mask) divided by (1 - sum of probabilities of all videos whose bit is set in mask)? I mean, shouldn't it just be the sum of p[i]*f(mask^(1<<i))?

Check out my comment above, the formula can be proved by sum of GS.

I don't understand your comment. We need the i-th video to be one of the last k distinct videos, so it shouldn't matter what videos were requested in the earlier requests; we can just multiply by 1 for those. Why are you summing over the number of times videos are requested? f(mask) = probability that the videos whose bits are set in mask are in the cache, so f(011) should equal p[1]*f(010) + p[2]*f(001), but why is it equal to (p[1]*f(010) + p[2]*f(001))/p[3]?

Choosing videos that were requested before DOES matter.

Again, I am going to try to explain the idea with "filling the cache". This is what we are considering:

[blah blah blah 123212] + [k distinct items] (a total length of one googol)

You can agree that our problem now becomes finding the chance of each such latter sequence appearing, regardless of its length, and summing the probabilities into the corresponding flags, right? Great.

So, since we are counting the probabilities regardless of the sequence's length, choosing an item repeatedly effectively increases the length of the sequence by one, and it should be counted as a different sequence. This is why we have to account for repeated choices, and the probability of moving from state [mask] to [mask|(1<<i)] equals p(i)*[mask] + p(sum of chosen items)*p(i)*[mask] + ..., which can be represented as the sum of a geometric sequence to infinity.

Thanks for the great explanation for the problem.

However, I have difficulty understanding it, because I think the formula below can hurt the final precision. Could you please correct me if I'm thinking wrong?

For example, if we consider 2 chosen items, I can find the probability of choosing them twice sequentially (p1*p2) inside the square of their sum. However, that term is 2*p1*p2, which is more than what we would get if we calculated correctly.

This round was the first I officially participated in. After trying to hack someone's code, I have two questions.

1) I saw that someone used lower_bound in a way that can trigger a runtime error. I copied the code and constructed data that causes a runtime error in Visual Studio, then tried to hack with that data, but the hack failed: the solution produced normal output. Why did this happen? Can't we hack code that has a runtime error? 2) When I tried to hack a Div2-A O(n^2) solution, I couldn't because of the limit on the test file size. Can't we hack TLE code that needs big data?

Anyway, it was a good experience, and special thanks to the hacker who hacked me :). It helped me become "BLUE" on my first try.

1) Yes, you can, but you should understand that a runtime error is usually not guaranteed: it's actually UB, which may happen to work and return the right answer. 2) You may use a generator, i.e. a program that prints the test to stdout.

2) For large tests you can send generators.

Example generator I hacked with:

Wow, I didn't know that. Thanks for the great tips!

Now that's a nice editorial (or, more accurately, "analysis"). Good job and thanks!

Hey, guys! Could you please share some interesting and small test cases for div2 F ? I got WA and I can't seem to find my bug..:(

I felt the standard solution for B, where you sum walls per column and row, was too tricky with corner cases, so I made another solution. The idea is to add the walls one by one while filtering the reliable coordinates (x, y) to put the bomb at.

We define a queue of coordinates (x, y). This queue contains all the possibilities for the bomb at a particular moment. But there are cases where this queue may be too big: for example, at the beginning (with no wall processed) every place could be used, so the size of the queue would be N*M. So, to simplify the solution, we'll say x = -1 or y = -1 (two out-of-range values) if any place could be used as the x or y of that stored possibility. We start with only one value in the queue, (-1, -1), since at the beginning any place is reliable, and then for each wall W we check each bomb possibility against it:

- if X = -1, we add (W.x, Y) to the new queue, as the possibility must be modified: with this wall, the X value is now fixed, not variable;
- if Y = -1, we add (X, W.y) to the new queue;
- if X = W.x or Y = W.y, we add the same possibility to the new queue, as the bomb possibility isn't changed by this wall.

We destroy the old queue and replace it with the new one before processing the next wall. In the end we get a queue with all the possible coordinates for the bomb.

The total complexity is O(N*M*J), where J is the time to process the queue for one wall. My intuition is that J is not very big, since the possibilities filter out very fast; if you want, you can analyze its complexity. The nice thing about this algorithm is that it is general enough to avoid analyzing the tricky cases :) Code: 19309915

F? :(

After so many days F is still not out. (and the problem seems interesting O_O)

Well, I am confused about LRU. I don't understand the exact meaning of dp[mask]. My confusion is: after a long time (10^100 steps), the probability of every mask (or status) converges to a fixed number. However, the limiting probability of a status with fewer than k bits must be 0 (unless n < k): for example, for a final status with m (m < k) bits, every new choice must be among those m videos, so the probability is at most (m/n)^infinity, which is obviously 0. So if we use dp[mask] to calculate dp[mask|(1<<i)]..., it will be 0 forever. I am sure I misunderstood the exact meaning of dp[mask]. Can anyone explain it in detail?

got it

Hello! How did you manage to understand this problem? I couldn't.

A simple explanation of the LRU problem, please? I don't get the whole "we consider the process backwards" idea. Would somebody be kind enough to explain it, please? :D

Could anyone please help me to solve div2.F? my_submission

I couldn't figure out why my simple solution doesn't pass for TC#7.

Here's my approach,

Draw lines from each teleport spot to the other monsters. The line parameters are normalized by the absolute gcd of its parameters (L = Pk × Pn).

So a line contains one or more monsters (points), and these monsters are sorted by distance from the teleport spot.

Search over all permutations of k (by dfs) and, for every line from the current teleport spot, take at most the one monster at its front (if the front monster is reachable from a teleport spot visited before, ignore it).

The final result is the maximum number of reachable monsters over all permutations of k.

In Div 2D (or Div 1B), is the graph directed or undirected? Or does it not matter which way we assume it to be? Because if it is undirected, there can be multiple edges between a pair of vertices. How do we solve the problem then?

Both interpretations are fine. Choose the one you understand better. If it's undirected, two edges between a pair of vertices form a cycle of size 2.

When will we see the tutorial for problem F (Div 1)? It seems like a very interesting problem. I know how to deal with the case where Pi = 0 for all i, but my program may sometimes be wrong once some gap has been filled; it seems a little complicated. Can anyone help?

Why is problem D's page not showing its description, and why is its tutorial also unavailable?

I don't understand why 21387180 is failing :( Can anybody help?