Everule's blog

By Everule, history, 7 months ago, In English

Lucas' theorem deals with analyzing the structure of $$$\binom{n}{r} \mod p$$$ for prime $$$p$$$: for instance, $$$\binom{n}{r}$$$ is odd if and only if $$$r$$$ is a submask of $$$n$$$. If that doesn't catch your attention, this blog probably isn't for you...
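
As a quick sanity check of that claim (the $$$p = 2$$$ case), here is a small brute-force sketch comparing Pascal's triangle mod $$$2$$$ with the submask test:

#include <bits/stdc++.h>
using namespace std;

int main(){
    const int N = 64;
    // Pascal's triangle mod 2
    vector<vector<int>> c(N, vector<int>(N, 0));
    for (int n = 0; n < N; n++){
        c[n][0] = 1;
        for (int r = 1; r <= n; r++) c[n][r] = (c[n-1][r-1] + c[n-1][r]) % 2;
    }
    // binom(n, r) is odd exactly when r is a submask of n, i.e. (n & r) == r
    for (int n = 0; n < N; n++)
        for (int r = 0; r <= n; r++)
            assert((c[n][r] == 1) == ((n & r) == r));
    cout << "ok\n";
}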

Case 1: $$$n = p$$$

Combinatorial interpretation
Algebraic interpretation

Case 2: $$$n = p^t$$$

Combinatorial Interpretation
Algebraic Interpretation

Case 3: $$$n = \sum_t d_t \times p^t$$$

Combinatorial Interpretation
Algebraic Interpretation
Example problem to use the internal operation instead of the final result


By Everule, history, 14 months ago, In English

This (possibly a series of) blog(s) should serve as an introduction to template metaprogramming in C++, and how you can use it to write more reusable and efficient snippets of code. You need to know nothing beforehand, and I believe it's actually much simpler than most people think it is, looking at all the features it has. I will also mention which version of C++ you need for each mentioned feature, if it is recent enough.

Consider a rudimentary function like std::max. How do you even write std::max? Well, you could do


int max(int a, int b){ if(a > b) return a; else return b; }

Well you might use it on floats too. Why not add


float max(float a, float b){ if(a > b) return a; else return b; }

You very quickly realise this is a horrible idea. There has to be some way to say: I want to do this, but for any type.

Here is where template comes in. template is where you write the list of type parameters of the function. For example, you could consider the first function to be max with a type parameter of int, and the second function to be max with a type parameter of float.

So we could write


template<typename T> T max(T a, T b){ if(a > b) return a; else return b; }

Now writing max<int> tells the compiler that you want the function max with a type parameter of T = int. It is important to mention that, to the compiler, max<int> and max<float> are two completely different functions, just like in the previous code. The only difference is that these functions are autogenerated.
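
For instance, here is a tiny, self-contained usage sketch (it repeats the definition above so it compiles on its own):

#include <cassert>

template<typename T> T max(T a, T b){ if(a > b) return a; else return b; }

int main(){
    assert(max<int>(2, 5) == 5);     // explicitly instantiate max with T = int
    assert(max(1.5, 2.5) == 2.5);    // here T is deduced as double from the arguments
    // max(1, 2.5);                  // would not compile: T cannot be both int and double
}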

Now let us try writing some more utility functions.

A lot of the time, we want to work with a list of elements, rather than just 2 elements like std::max does. For example, find the first occurrence of element $$$x$$$ in some list $$$L$$$. C++ opted for generality, and it represents a list as a pair of iterators, the start and end iterator. They work like this — you can dereference the start iterator, which is equivalent to getting the first element of the list. You can also increment the start iterator, which is equivalent to going to the next element of the list. If the start iterator becomes equal to the end iterator, it means that the list is over; dereferencing an end iterator is usually undefined behavior, and you should avoid it. So std::find works like this (there are a lot of optimisations hidden behind it).

//ForwardIterator is the type of the iterator, and T is the type of the value we check equality with
template<typename ForwardIterator, typename T> 
ForwardIterator find_first(ForwardIterator start, ForwardIterator end, T value){
    while(start != end){                          //Checks if the iterator hasn't reached the end
        if(*start == value) return start;         //"*start" dereferences an iterator, and returns the value
        ++start;                                  //++start increments an iterator to the next value
    }
    return start;
}

The underlying idea is: you increment start while you haven't found a match, and once you do, you return start. By convention, if there was no match, you return end, which is what start equals at the last line.

So we can use it like this


std::vector<int> a;
std::vector<int>::iterator it = find_first(a.begin(), a.end(), 0);

It is not very convenient to write std::vector<int>::iterator. The compiler already knows the return type of find_first because of the parameters, so it doesn't make sense for us to rewrite it. This is where the auto keyword comes in useful.

You can use

std::vector<int> a;
auto it = find_first(a.begin(), a.end(), 0);

Where auto is deduced from the return type of find_first.

Let's say now you wanted find_first to find the first element larger than value. All of a sudden, the equality check doesn't seem as general as you would like. Maybe instead of checking equality, we should let the user decide the if condition. But you can't pass code into a function. This is where lambdas come in. A lambda is a function that acts as a variable. That is a bit abstract, so let's try an example.


auto add_values = [](int a, int b /* parameters */) -> int /* return type */ {
    return a + b;
};
auto mult_values = [](int a, int b) -> long long {
    return 1ll * a * b;
};
assert(add_values(2, 3) == 5);
assert(mult_values(2, 3) == 6ll);

add_values and mult_values are lambdas, which can be used as a function using the () operator. Internally, a lambda is just a local struct, with the () operator overloaded.


struct __add_values_impl{
    int operator()(int a, int b){ return a + b; }
};
struct __mult_values_impl{
    long long operator()(int a, int b){ return 1ll * a * b; }
};
__add_values_impl add_values;
__mult_values_impl mult_values;
assert(add_values(2, 3) == 5);
assert(mult_values(2, 3) == 6ll);

That's why it's important to notice that each lambda has a distinct type, and you cannot assign one lambda to another. But they can be passed around in functions, which is what makes them particularly useful.


template<typename ForwardIterator, typename F>
ForwardIterator find_first(ForwardIterator start, ForwardIterator end, F condition){
    while(start != end){
        if(condition(*start)) return start;   //condition(*start) calls the function condition with the value at the iterator
        ++start;
    }
    return start;
}

Now you can put just about anything in condition. For example, you could do

vector<int> a;
auto it = find_first(a.begin(), a.end(), [](int val) -> bool {
    return val > 10; // You don't need to assign a lambda to a variable, you can just write it directly as a parameter
});

and it will store an iterator to the first element that is larger than 10.

Let's consider now: what if we didn't want 10 specifically, but some variable like x instead? This is the second thing lambdas can do. They can capture variables in their surroundings.

vector<int> a;
int x;
auto it = find_first(a.begin(), a.end(), [x /* capturing x */ ](int val) -> bool {
    return val > x;
});

The [] is where you list the captured variables. [x] represents capturing x by value. This means x will be copied inside the lambda. You can also capture x by reference, by writing [&x] instead. If you capture something by reference, then editing it will change the value of your variable. Keep that in mind.

To prevent needing to capture too many variables, lambdas also come with capture defaults. If you use [&], it will capture all used variables by reference, and if you use [=], it will capture all used variables by value. As an example, you can look at this

int x = 1, y = 2;
auto func = [x, &y](int z) -> int {
    return x + y + z;
};
assert(func(2) == 5);
x = 3;
assert(func(2) == 5);
y = 3;
assert(func(2) == 6);

Internally, you can think of it as working like this

struct __func_impl{
   int x;
   int& y;
   __func_impl(int _x, int& _y) : x(_x), y(_y) {} // This is the syntax for initialising x = _x and y = _y.
   int operator()(int z){
       return x + y + z;
   }
};
int x = 1, y = 2;
__func_impl func(x, y);
assert(func(2) == 5);
x = 3;
assert(func(2) == 5);
y = 3;
assert(func(2) == 6);

You've learnt the basics of template metaprogramming! You can start writing most of the basic ideas you have now, using the template, auto and typename keywords. This is one of the most powerful features of C++, and that's why it's part $$$1$$$ out of $$$\infty$$$.


By Everule, history, 14 months ago, In English

A. Monsters (easy version)

Notice that this is equivalent to maximizing the damage done by the operation of type $$$2$$$. Let us consider the case when there are 2 monsters with health $$$h$$$ and $$$h+k$$$, and no monsters with health in $$$(h, h+k)$$$. Then reducing the monster with health $$$h + k$$$ to $$$h + 1$$$ cannot make the final outcome worse. You can prove it by noticing that the operation of type $$$2$$$ can only happen $$$h$$$ times if the health of the monster is above $$$h + 1$$$. Applying this repeatedly, we get monsters whose healths are such that there exists $$$H$$$ where there is a monster with health $$$h$$$ for every $$$1 \le h \le H$$$. Then one move of operation $$$2$$$ can kill the remaining monsters. So you only need to simulate the above process, and find out how many moves of type $$$1$$$ were required to do it.
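
A minimal sketch of this simulation, assuming a single test case and that the input is just $$$n$$$ followed by the healths (variable names are mine):

#include <bits/stdc++.h>
using namespace std;

// Greedy from the argument above: sort the healths and cap each monster at
// (previous capped value + 1); the type 1 operations spent are the difference.
int main(){
    int n; cin >> n;
    vector<long long> h(n);
    for (auto &x : h) cin >> x;
    sort(h.begin(), h.end());
    long long ops = 0, cur = 0;          // cur = largest capped health so far
    for (long long x : h){
        long long keep = min(x, cur + 1);
        ops += x - keep;                 // type 1 hits used on this monster
        cur = keep;
    }
    cout << ops << '\n';
}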

B. Letter Exchange

Let us consider a graph with $$$3$$$ nodes $$$w, i, n$$$. For a word of type $$$wii$$$, add an edge from $$$i$$$ to $$$n$$$. For a word of type $$$www$$$, add an edge from $$$w$$$ to $$$i$$$ and an edge from $$$w$$$ to $$$n$$$. This represents the swaps needed to sort it. Now instead of swapping letters between words, we swap letters between edges. Edges $$$a \to b$$$ and $$$b \to a$$$ can be swapped to make $$$a \to a$$$ and $$$b \to b$$$. Notice self edges are redundant, so we can drop them. For edges $$$a \to b$$$ and $$$b \to c$$$, we get $$$a \to c$$$ and $$$b \to b$$$. The rest of the possibilities are symmetric versions of these. Therefore, to do the swaps in the minimum number of moves, we should maximize the number of times we use the first operation, which removes 2 edges at once, or equivalently minimize the number of times we do the operation of type 2.

We claim that the minimum number of times we need to use the operation of type 2 is the absolute value of the cyclic flow in the graph. The cyclic flow is defined as follows. Let the flow from $$$a \to b$$$ be the difference between the number of edges from $$$a \to b$$$ and from $$$b \to a$$$. Then the absolute value of this is equal for all $$$a, b$$$; notice this follows from the fact that the indegree and outdegree of each node must be equal. Call this value the cyclic flow.

The operation of type $$$1$$$ doesn't change the cyclic flow, and the operation of type 2 changes the cyclic flow by 1 in either direction (notice the signed cyclic flow is additive). Therefore we must use the operation of type $$$2$$$ at least that many times. This is also constructible, since doing operations of type 1 while possible, and then using operations of type 2 to get rid of the cycles, is a valid strategy.

Therefore you just need to simulate doing it.
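
For reference, a sketch of just the counting part of this argument; it assumes the input is $$$m$$$ followed by the $$$m$$$ words, and it only prints the minimum number of exchanges, not the exchanges themselves that the problem asks for:

#include <bits/stdc++.h>
using namespace std;

// cnt[a][b] counts surplus letters of type a in words that are missing a letter of type b,
// i.e. the edges a -> b from the argument above.
int main(){
    int m; cin >> m;
    auto id = [](char ch){ return ch == 'w' ? 0 : ch == 'i' ? 1 : 2; };
    long long cnt[3][3] = {};
    for (int k = 0; k < m; k++){
        string s; cin >> s;                                      // 3 letters from {w, i, n}
        int c[3] = {0, 0, 0};
        for (char ch : s) c[id(ch)]++;
        vector<int> extra, missing;
        for (int t = 0; t < 3; t++){
            for (int j = 1; j < c[t]; j++) extra.push_back(t);   // duplicate copies
            if (c[t] == 0) missing.push_back(t);
        }
        for (size_t j = 0; j < extra.size(); j++) cnt[extra[j]][missing[j]]++;
    }
    long long ans = 0;
    // opposite edges a -> b and b -> a are fixed by a single exchange
    for (int a = 0; a < 3; a++) for (int b = a + 1; b < 3; b++){
        long long p = min(cnt[a][b], cnt[b][a]);
        ans += p; cnt[a][b] -= p; cnt[b][a] -= p;
    }
    // what remains forms directed triangles (the cyclic flow); each triangle needs 2 exchanges
    long long rest = 0;
    for (int a = 0; a < 3; a++) for (int b = 0; b < 3; b++) rest += cnt[a][b];
    ans += rest / 3 * 2;
    cout << ans << '\n';
}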

C. Monsters (hard version)

Consider doing the process in reverse. From the easy version, we deduce that there exists $$$H > 0$$$ s.t. there is some monster with health $$$h$$$ for all $$$1 \le h \le H$$$. This effectively forms a matching between $$$[1, H]$$$ and some subset of monsters, such that the health of the matched monster is larger, and the sum of differences is minimised. Additionally, the health of all monsters is less than $$$H$$$. We only need to keep track of the monster mapped to health $$$h$$$ at each point in time.

Let us prove that the assignment with larger $$$H$$$ is strictly optimal. Let us consider two solutions, one with $$$H = x$$$ and one with $$$H = x + 1$$$. If it is possible to not use the largest element for $$$H$$$, we will not use it; we can show this by a direct exchange argument, by replacing the matching. If there exists a solution with $$$H = x + 1$$$, it is possible to remove any element greater than $$$x$$$ and still have a matching for $$$H = x$$$. Then we could simply have the maximum be at $$$x + 1$$$, and produce a strictly better solution. Therefore $$$H$$$ is always the maximum possible. It is also then obvious that if there exists a monster with $$$h > H$$$, the current solution is not optimal.

Consider computing the solution for the entire array and then removing monsters one by one. If the removed monster wasn't already matched, the solution is still optimal. Now let us consider removing the monster that was matched with $$$h$$$. Then we can retain the value of $$$H$$$ if there exists a replacement for it. This is true because if there is no replacement, there are exactly $$$H - h$$$ monsters with more health than $$$h$$$, since all of them are in the matching, but we need $$$H - h + 1$$$. So checking whether the value of $$$H$$$ stays the same amounts to checking whether there exists a replacement.

Then we claim that picking the smallest replacement, and not changing the rest of the matching, produces an optimal solution.

Lemma : If there exists a solution, there exists a sorted solution with respect to $$$[1, H]$$$. We can prove this by an exchange argument on every inversion. Therefore the following solution is optimal among the set of sorted matchings: go through $$$[1, H]$$$ and pick the smallest available element at each step.

We will now show that with the above two algorithms, the sets of chosen elements in both matchings are equal, and therefore they have equal cost.

Let us consider the case where $$$x$$$ is matched to a lower cost than in the sorted solution. Then any replacement for this will work for the optimal one too, and since a smaller replacement means being at least as good as the optimal, it will do as well as the optimal.

If it is matched to a higher cost than the optimal, let us consider $$$k_1$$$ and $$$k_2$$$; there must be no excess elements between $$$k_1$$$ and $$$k_2$$$, otherwise you could replace $$$x$$$ by such an element, and the solution would not be optimal.

If there is no replacement, then we can simply replace the monster matched to $$$h$$$ with the monster matched to $$$H$$$, and decrement $$$H$$$.

Therefore we can deduce that the set of matched elements is equal, and therefore the solutions are equivalent, which solves the problem.

Solution — Courtesy of jeroenodb

D. Wooden Spoon

We can find the Wooden Spoon uniquely by this process. Create a binary tree where the leaves are the players. We consider that the maximum wins instead of the minimum, for algebraic convenience. The winner of a node is the maximum of its children. Start at the root, and go to the loser of the game. Do this until you reach a leaf node. Now we need some way to find the probability of each person being the final leaf of this process. Let us consider the dynamic programming relation $$$dp[d][s]$$$, the probability that the person with skill $$$s$$$ is the node we reach after $$$d$$$ iterations of the above process.

We also claim the following invariant: the probability of any person $$$x < s$$$ being in the subtree of $$$s$$$ is the same for every such $$$x$$$.

Let us now consider finding the probability of the transition to $$$dp[d+1][x]$$$ from $$$dp[d][s]$$$. Let $$$sz(d)$$$ be the size of the subtree after doing $$$d$$$ steps of the process, or equivalently $$$2^{n - d}$$$. Then the number of ways to split the subtree of $$$s$$$ into two sets is $$$\binom{s - 1}{sz(d+1)}\binom{s - 1}{sz(d+1) - 1}$$$. The number of ways to split so that the loser of one subtree is $$$x$$$ is $$$\binom{x - 1}{sz(d+1) - 1}\binom{s - sz(d+1) - 1}{sz(d+1) - 1}$$$. This transition also satisfies our previous invariant, therefore it's a valid dynamic programming solution. Notice that each factor in the transition probability depends only on $$$x$$$ or only on $$$s$$$. So we can use prefix sums with some multiplication to calculate it efficiently.

F. Minimums or Medians

We basically need to find the set of constructible strings. For this we should first analyze how the median and minimum operations interact. Let us imagine 2 pointers at the 2 medians of the array at all times. If we do a minimum operation, both these pointers move forward to the next element. If we do a median operation, they delete their current elements and move outwards. This also implies that the larger element deleted by any median operation is monotonically increasing.

With this analysis we can make our first claim: for any constructible string, the smallest remaining number, if it exists, will always be odd. This is trivial to see via an invariant: at each point the array will satisfy $$$a[i] \equiv i \mod 2$$$.

Now let's go for a harder claim. For any constructible string with smallest remaining number being $$$2t + 1$$$, it is possible to construct it with exactly $$$t$$$ minimum operations.

If the median operation acts on some pair $$$(x, y)$$$, then there must be no elements currently between $$$x$$$ and $$$y$$$. Therefore we notice that either $$$\max(x, y) < 2t + 1$$$, or $$$\min(x, y) > 2t + 1$$$. Since the larger deleted element is monotonically increasing, we also know that only some prefix of the median operations satisfies $$$\max(x, y) < 2t + 1$$$. We replace all median operations with $$$\max(x, y) < 2t + 1$$$ by minimum operations. Notice that at the first median operation, the number of deleted elements to the left of $$$2t + 1$$$ is equal. Therefore this claim is true.

Now it's quite natural to fix $$$t$$$, the number of minimum operations we did. Then all median operations act on numbers larger than $$$2t + 1$$$. Let us now look at what the possible median operation actions look like. Notice that the right midpoint doesn't just increase every move, it increases by exactly one. Therefore we can claim the largest deletable element is $$$n + k$$$.

Let's go for the next claim. Let $$$b$$$ be the number of elements in $$$[1, n]$$$ deleted with median operations. The set of deleted elements is $$$[n-b+1, n]$$$, and additionally, $$$[n+1, n+b]$$$ must also be deleted. This requires the same observation of medians not "skipping" numbers, except now we look at the gap between $$$n$$$ and $$$n+1$$$. The right endpoint is always larger than $$$n+1$$$, and the left endpoint here is less than $$$n$$$. Therefore at each point we can only delete the largest element less than $$$n$$$ and the smallest element larger than $$$n+1$$$.

Now let's additionally fix the balance $$$b$$$. Now consider the numbers $$$[n+b+1, n+k]$$$.

We claim that you can delete any subset of these numbers, as long as the length of every deleted segment is even. We first delete $$$[n-b+1, n+b]$$$ trivially. Then we do a minimum operation and have the endpoints at $$$(n+b+1, n+b+2)$$$. From here we follow a simple algorithm: go to the midpoint of the next even-length deleted segment, and do operations until that segment is deleted. It is quite easy to see that this will delete the necessary segments, and it is possible since the higher end of the median can reach $$$n + k$$$.

The next step only involves equation bashing, but the high level idea is as follows. If you have $$$x$$$ elements and you want $$$2s$$$ elements deleted, then the number of ways to do it is $$$\binom{x - s}{s}$$$. When you add the combinatorial terms over all pairs $$$t, b$$$ such that the answer is $$$\binom{x}{y}$$$, you might notice that the possible $$$y$$$ for each $$$x$$$ forms a contiguous range. Also, as you increase $$$x$$$, this range changes by at most 1 in either direction. Since $$$\sum \binom{x + i}{y}$$$ is easier to compute than $$$\sum \binom{x}{i}$$$, we can kind of transpose the sum.

I think there is an easier interpretation of this if you loop over $$$b$$$ before $$$t$$$. If anyone has such an interpretation, feel free to share it in the comments.

If anyone wants to write down their solutions to $$$E$$$, I would be happy to add it to the blog.


By Everule, history, 15 months ago, In English

It seems many people struggle to understand what a proof even looks like for greedy problems. The issue is the polarized forms of proof in CP and academic math, which may seem overly rigorous and difficult to do during a contest. However, there is a spectrum of what a proof even is. It is important to understand that you cannot truly write and verify a proof entirely within yourself, or within an entity (the entirety of Codeforces, for example). It is necessary to have judgement on what level of rigor is sufficient.

If you want to tell yourself that you just don't have a math background, and therefore can't prove things and will perpetually use that as an excuse, this blog is not for you.

Academic proofs usually tend to be as rigorous as possible, and are carefully verified by other experts in the field, to be objectively certain of their correctness. Clearly that is not the level of rigor you need while solving a Codeforces problem. You only need to prove it to yourself.

For this, there are certain heuristics you can use to solve these problems with proofs almost as fast as solving them the way you do now, but with more certainty of correctness, which saves you the time spent implementing incorrect solutions or getting WAs and not knowing whether your implementation or your idea is bugged. Not only do these strategies help you prove your solutions, they also make it much faster to find approaches that could reasonably work, making you much faster at solving, and you're not at the mercy of the tests while practicing.

On top of that, on problems with multi-step solutions, finding the solution is exponential in the number of steps, but by proving each step you can optimize it to linear. Most of my practice has been done without writing a single line of code, because I'm sufficiently experienced in knowing whether my solution is correct or not.

A lot of professional mathematicians believe there are 3 big segments of ability in any particular mathematical domain.

  • Level 1 : You either cannot make sensible claims, or cannot back your claims up to anyone else with anything more than heuristic arguments.
  • Level 2 : You're able to make claims and prove them, but you don't see the relation between your abstract writing and the intuition for why it works.
  • Level 3 : You don't need to even write down the proof anymore, and at this point you can tell if you can prove something or not.

IT IS NOT POSSIBLE TO REACH LEVEL 3 WITHOUT EXPERIENCE IN LEVEL 2

For example, you might see a problem like this: you have a set $$$S$$$ of $$$n$$$ elements. Find a subset $$$T \subseteq S$$$ of size $$$k$$$, or $$$|T| = k$$$, such that the sum of elements in $$$T$$$, or $$$\sum_{t \in T} t$$$, is maximised.

It is quite obvious to simply pick the $$$k$$$ largest elements of $$$S$$$. You might think a formal proof would require you to do the following. Let $$$T$$$ be your solution, and let $$$T*$$$ be the optimal solution. If $$$T = T*$$$, our solution is optimal. Let $$$T \not= T*$$$. Then let us consider the symmetric difference of $$$T$$$ and $$$T*$$$. Notice that all elements in $$$T$$$ but not in $$$T*$$$ are larger than those in $$$T*$$$ but not in $$$T$$$. Therefore $$$sum(T) \ge sum(T*)$$$, and $$$T$$$ is optimal.

But this is not really a proof you need in competitive programming. This is necessary for math, because everything relies on the basic axioms, and all arguments should derive from them. However, in most cases you don't need to prove a self-evident claim like this. It's useful, but not necessary, to understand the formal reasoning behind a proof of something really simple, so that you can apply it to cases where your intuition is not as strong as it is for picking the $$$k$$$ largest. I can go on a tangent about how intuition is derived from the primal instincts of man, and is optimized for real life scenarios. For an interesting read on that, you can try this.

But effectively, what I'm trying to say is that this claim is not obvious to you because it's easy to prove. It's because the claim has applied to humans for long enough that it's "common sense" to our brain. Really, the easiest way to improve in competitive programming is the art of developing intuition for something you can't quite intuit anything about. It's also why I enjoy easier problems that I struggle on. It usually means I had to develop intuition in something I didn't really have good intuition for. I personally believe that is something that helps in all areas of thinking, but you might disagree.

I hope that we're in agreement on what a proof should do.

An easy way to sanity check your proof is this: think of every step you took while proving. If you wanted to justify to an opponent that you were correct, what would the opponent try to argue against? If you think you couldn't justify your reasoning over theirs to a jury beyond reasonable doubt, your proof needs to be more specific.

I will now show how to write effective proofs, and why they're still formal, despite not having the illusory abstract variables describing everything.

I've decided to make this part interactive. You can send me any problem whose solution you might have guessed, or any constructive solution you did not understand how to derive, and I will reply to it (if I can solve it). If I think it's sufficiently educational, I will add it to the blog.

Let's try this AtCoder problem. You have a string made of the characters $$$A$$$ and $$$B$$$. You can change some pair of adjacent characters to "AB". Find out if you can make the given string a palindrome.

Solution

This form of reasoning is applicable to solving problems, and not just proving your solutions.

Let's start with this unnecessarily hated problem as an example: click.

Create an $$$n$$$ by $$$n$$$ matrix $$$M$$$, where every number from $$$1$$$ to $$$n^2$$$ appears exactly once, such that the number of different values of $$$|x - y|$$$ over adjacent elements $$$x, y$$$ in $$$M$$$ is maximized.

Solution


By Everule, history, 18 months ago, In English

Hello Competitors!

I'm looking to build a simulated exchange with intrinsic value based on Codeforces rating, taking inspiration from the problems and solutions of real-world exchanges.

If you're interested in financial markets and you want to gain a better understanding of the kind of work they do there, this is an easy project to work on to get a ground-up understanding of markets and trading.

You're not required to have any previous financial background. You need to have an interest in learning and enjoy finding and critiquing different solutions to interesting mathematical problems. I recommend applying irrespective of your experience level, I believe there is a lot to learn in the development of a trading system.

I'm looking for 10-15 people to fill 3 types of roles (closed).

Stock Exchange Developer

You will develop the back end orderbooks to keep track of all orders and fill them as fast and efficiently as possible. You will learn about the actual implementation of how shorting, options and margins work. You will have to keep track of collateral, market liquidity and slippage, and instrument volatility to make informed decisions about margin calls and open option positions and automatically close those speculative positions.

Algorithm developer and liquidity provider

You will help create liquidity in the market, allowing trades from users to close almost instantaneously. You will be split into two hedge funds, and you will have to come up with strategies to invest your own and others' money. These strategies include risk neutralization, arbitrage, and correlation analysis, while trying to remain market-neutral at all times.

Front end developer

You will develop a front end that displays stock prices over a timeframe, allows users to make trades, and lets them look at the orderbook and the positions of other professional traders.

The role has no fixed hours, you can work at any time you would like to, and if you feel you would like to work after a month or two, you can still apply.

Your compensation will be between 200-2000 usd based on your contribution and experience level, paid in Toncoin.

Send an email with your resume to [email protected] or join the discord server to apply https://discord.gg/8wMhRchDNX

How the system works

This is a tentative description, subject to change based on the results of our exchange and algorithms team.

Every account with more than 10 contests is allowed to make trades for any securities.

Every account with more than 10 contests will have their shares available on the public market. Those with 20 or more contests can be shorted. Those with 30 or more contests will have derivative instruments such as options and futures.

Every share will provide a dividend based on f(rating, perf, competed?, contest_type) after every contest. We will test different possibilities of $$$f$$$ before finalizing it.

You will start out with money as a function of how many contests you've done and how high your rating is. To prevent inflation and permanent stock holding, we will slightly dilute everyone's holding after every contest and redistribute them, and change interest rates.

You will not be trading the securities with real money. However we will give awards to the best performing traders in the low, medium and high capital strategies.

We will use SEC like analysis to analyze pump and dump, insider trading and betting on alts. We will use the strongest statistical methods available to find those who cheat here.

Edit. Thank you all for applying. We've taken in 15 paid employees. However you may still want to consider applying as an unpaid intern.


By Everule, history, 18 months ago, In English

Those of us who have been doing CP for a long time have come to love the representation of numbers as an infinite prime power vector, i.e.

$$$n = \prod p_i^{e_i} = [e_1, e_2, \ldots ] = v_n$$$

where $$$n$$$ is a number and $$$p_i$$$ is the $$$i$$$-th prime number. This has many useful properties for problems, such as mapping

$$$n \times m \sim v_n + v_m$$$
$$$n / m \sim v_n - v_m$$$
$$$n \mid m \sim v_n \le v_m$$$
$$$\gcd(n, m) \sim \min(v_n, v_m)$$$
$$$lcm(n, m) \sim \max(v_n, v_m)$$$

All the above operations are done elementwise. However, in some cases the problem authors may forbid you from factorising the numbers, perhaps by increasing $$$n$$$ to around $$$10^{36}$$$, where there isn't any simple factorisation algorithm that runs in reasonable time. However, using effective prime factorisation, we can use these properties freely on a set of numbers. If this set contained all integers, then it would require us to truly prime factorise the integers. However, we only need to use these operations on a subset of integers, making our task a lot simpler. We will make a finite array $$$q$$$ of size $$$t$$$ with the following properties.

$$$(1)$$$ Any element $$$x \in S$$$ is representable by $$$q$$$, i.e. there exists $$$[f_1, f_2, \ldots f_t] = v'_n$$$ such that $$$x = \prod q_i^{f_i}$$$

$$$(2)$$$ For all the above operations, replacing $$$v_n$$$ with $$$v'_n$$$ should produce the same result.

Let us show that if $$$q$$$ satisfies $$$(1)$$$ and $$$\gcd(q_i, q_j) = 1$$$ for all $$$i < j$$$, then $$$q$$$ satisfies $$$(2)$$$.

Proofs

Let us now consider how we can find $$$q$$$ given $$$S$$$. Let us proceed element wise, and assume we have a valid solution, and we want to add another element.

Now let us add the next element $$$x \in S$$$. Go through $$$q$$$ until we spot an element that has a common factor with $$$x$$$. Let that element be $$$q_i$$$.

Let $$$P(q)$$$ be the set of elements produced by some product of elements in $$$q$$$. Notice that if $$$q$$$ has 2 elements that divide each other $$$a \mid b$$$, then $$$a, b$$$ is replaceable by $$$a, \frac{b}{a}$$$, and $$$P$$$ will not lose any elements. Let us do this recursively until neither divides the other, or an element becomes 1. Let us call this procedure reduce_pair.

(It maybe useful to look at the example implementation in parallel from here)

We run the procedure reduce_pair on the elements $$$q_i, x$$$, if $$$x$$$ becomes 1, we're done. If $$$q_i$$$ becomes 1, we remove it from the set and continue.

We then run an iterative algorithm on a list $$$L$$$, initialised with $$$[q_i, x]$$$. Let the elements before $$$L[ptr]$$$ all be pairwise coprime. Initialise $$$ptr$$$ with 1. Run reduce_pair on all elements before $$$ptr$$$ with it. Then add their gcd to the end of the list if it's not one, and divide both elements by it. This will make all elements up to $$$ptr + 1$$$ coprime. There can only be as many elements in this list as there are true primes in $$$q_i, x$$$. Therefore we can deduce that the size of the list will be at most $$$O(\log{\log{\max(S)}})$$$.

Since this algorithm will be run $$$O(\log{x})$$$ times, the total algorithm will run in $$$O(n^2\log{A} + n\log{A} \times (\log{\log{A}})^2)$$$, where $$$A = \max(S)$$$, and we will get our array $$$q$$$.
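
To make the invariant behind reduce_pair concrete, here is a much simpler (and slower) pairwise-refinement sketch, not the incremental list-based algorithm described above: repeatedly split any non-coprime pair $$$(a, b)$$$ into $$$(a/g, g, b/g)$$$ with $$$g = \gcd(a, b)$$$. Every element of $$$S$$$ stays representable over the current array, and the product of the array strictly decreases, so it terminates with a pairwise coprime $$$q$$$.

#include <bits/stdc++.h>
using namespace std;
using ll = long long;   // for numbers around 10^36 you would plug in a big-integer type instead

// Returns a pairwise coprime array q such that every element of the input is a
// product of powers of elements of q, without ever factorising anything.
vector<ll> coprime_base(vector<ll> q){
    bool changed = true;
    while (changed){
        changed = false;
        for (size_t i = 0; i < q.size() && !changed; i++)
            for (size_t j = i + 1; j < q.size() && !changed; j++){
                ll g = gcd(q[i], q[j]);
                if (g > 1){
                    q[i] /= g; q[j] /= g; q.push_back(g);        // split the pair by its gcd
                    changed = true;
                }
            }
        q.erase(remove(q.begin(), q.end(), 1LL), q.end());       // drop the 1s
    }
    return q;
}

int main(){
    for (ll x : coprime_base({12, 18, 35, 49})) cout << x << ' ';   // one coprime base: 5 2 3 7
    cout << '\n';
}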

Example implementation : 159590539


By Everule, history, 23 months ago, In English

Kirchhoff's tree theorem was recently used in https://atcoder.jp/contests/abc253/tasks/abc253_h, and at first glance it almost feels like black magic, but I will show that this is not the case.

So prerequisites... Advanced contestants may skip these

Prerequisites

Problem statement

Given an undirected graph $$$G = (V, E)$$$ with no multi edges or self loops, find the number of spanning trees in $$$G$$$.

Bijection between spanning trees and Arborescences

Let us consider all arborescences of $$$G$$$ rooted at $$$r$$$. We can undirect an arborescence to get a spanning tree of $$$G$$$. We can direct all edges of a spanning tree away from $$$r$$$ to get the arborescence back. Therefore they are equal in number.

Representing an arborescence as a functional graph transpose

An arborescence is the transpose of a functional graph with $$$f : [1, r-1] \cup [r+1, n] \to [1, n]$$$, where $$$f$$$ corresponds to the parent function in the arborescence. Not every functional graph forms an arborescence; it may have cycles. Notice any cycle must be a directed cycle, as every node has exactly one incoming edge.

Modeling counting spanning arborescences as counting acyclic functional graphs

Let $$$G$$$ be an undirected graph. We would like to count the number of arborescences rooted at $$$r$$$ in $$$G$$$. Let us root the arborescence at $$$1$$$ for convenience without loss of generality. Let's look at all functions $$$f : [2, n] \to [1, n]$$$ where $$$f(x) \in G_x$$$, where $$$G_x$$$ is the set of nodes connected to $$$x$$$ by an edge. Note any arborescence in $$$G$$$ must correspond to some $$$f$$$ satisfying these constraints, and every arborescence that has an edge not in $$$G$$$ cannot satisfy the given constraints. So we can model the problem as counting the number of functional graphs induced by $$$f$$$ without cycles. Let the set of valid functions be $$$F$$$.

Counting acyclic functional graphs with $$$PIE$$$

Let $$$C_F$$$ be the set of possible cycles in functional graphs induced by functions $$$f \in F$$$. Let $$$c \subseteq C_F$$$ be some subset of cycles in $$$C_F$$$. Let $$$F(c)$$$ be the number of functions in $$$F$$$ whose graphs contain all the cycles in $$$c$$$. Then the number of acyclic functional graphs in $$$F$$$ is

$$$\sum\limits_{c \subseteq C_F} (-1)^{|c|}F(c)$$$

Computing $$$F(c)$$$

Notice that the set $$$c$$$ must consist only of disjoint cycles. Then $$$F(c) = \prod |G_x| = \prod deg_x$$$ over all $$$x \in V$$$ that are not contained in any cycle in $$$c$$$, since $$$f$$$ is already fixed for the elements in the cycles, and the ones not in any cycle could be anything.

How to use determinants to compute this

Since $$$c$$$ is a set of disjoint cycles, and so is a permutation, we can try counting over all permutations. Let us consider some set of cycles $$$c \subseteq C_F$$$ that do not contain $$$1$$$. If $$$c$$$ is not disjoint, we can ignore it. Otherwise, let there be a permutation $$$P$$$ of $$$[2, n]$$$ with the set of disjoint cycles in $$$c$$$, and for those not in $$$c$$$, we can have a self loop. This permutation should be weighted by

$$$(-1)^{|c|} \prod\limits_{P[x] = x} deg_x = (-1)^{n-1}\text{sgn}(P)\prod\limits_{P[x] = x} -deg_x$$$

Notice this is true, because $$$|c|$$$ is the number of cycles in $$$P$$$ of size more than $$$1$$$, and then we multiply $$$-1$$$ over all elements in cycle of size $$$1$$$.

Notice that the set of cycles in $$$P$$$ is only valid if there exists an edge $$$(i, P_i)$$$ for each $$$i$$$ with $$$i \not= P_i$$$. Then we should make a matrix $$$M$$$ with $$$M_{i, P[i]} = 1$$$ if there exists such an edge and $$$M_{i, P[i]} = 0$$$ otherwise. Any permutation with a cycle not in $$$C_F$$$ will contribute $$$0$$$ to the determinant. We let $$$M_{i, i} = -|G_i| = -deg_i$$$, so that the permutation is weighted by the product of $$$-deg_x$$$ over all self loops. We should remove the row/column corresponding to $$$1$$$, as there is no edge from $$$1$$$, and no cycle in $$$C_F$$$ contains it either. Then it's not hard to see that the determinant here computes the above sum. We should multiply by $$$(-1)^{n-1}$$$ or take the absolute value of the determinant. Alternatively, you can multiply each entry by $$$-1$$$ to get $$$M_{ij} = -1$$$ and $$$M_{u, u} = deg_u$$$, and then just take the determinant. As an exercise, you can show that you can find the number of arborescences rooted at $$$r$$$ for each $$$r$$$ in a directed graph using a similar method.
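
As a reference point, here is a small sketch of the undirected, unweighted case of this computation (the determinant of the Laplacian with one row and column removed), using Gaussian elimination modulo an arbitrary prime of my choosing; it only yields the count modulo that prime:

#include <bits/stdc++.h>
using namespace std;
using ll = long long;
const ll MOD = 998244353;   // an arbitrary prime; the count is obtained modulo it

ll pw(ll b, ll e){ ll r = 1; b %= MOD; for (; e; e >>= 1, b = b * b % MOD) if (e & 1) r = r * b % MOD; return r; }

// Number of spanning trees of a simple undirected graph on vertices 0..n-1.
ll count_spanning_trees(int n, const vector<pair<int,int>>& edges){
    // Laplacian: L[u][u] = deg(u), L[u][v] = -1 for every edge (u, v)
    vector<vector<ll>> L(n, vector<ll>(n, 0));
    for (auto [u, v] : edges){
        L[u][u]++; L[v][v]++;
        L[u][v] = (L[u][v] - 1 + MOD) % MOD;
        L[v][u] = (L[v][u] - 1 + MOD) % MOD;
    }
    // delete the row and column of vertex 0, then take the determinant of what is left
    int m = n - 1;
    vector<vector<ll>> A(m, vector<ll>(m));
    for (int i = 0; i < m; i++) for (int j = 0; j < m; j++) A[i][j] = L[i + 1][j + 1];
    ll det = 1;
    for (int col = 0; col < m; col++){
        int piv = col;
        while (piv < m && A[piv][col] == 0) piv++;
        if (piv == m) return 0;                                     // determinant is 0 mod MOD
        if (piv != col){ swap(A[piv], A[col]); det = MOD - det; }   // a row swap flips the sign
        det = det * A[col][col] % MOD;
        ll inv = pw(A[col][col], MOD - 2);
        for (int row = col + 1; row < m; row++){
            ll f = A[row][col] * inv % MOD;
            for (int j = col; j < m; j++) A[row][j] = ((A[row][j] - f * A[col][j]) % MOD + MOD) % MOD;
        }
    }
    return det;
}

int main(){
    // the triangle on 3 vertices has exactly 3 spanning trees
    cout << count_spanning_trees(3, {{0, 1}, {1, 2}, {2, 0}}) << '\n';
}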

Spanning arborescence of directed graphs

If you have understood everything up to here, you should notice that $$$deg_i$$$ should really be the indegree, and that whether you mark directed edges $$$(u, v)$$$ at $$$M_{u, v}$$$ or $$$M_{v, u}$$$ doesn't really matter.

Expected spanning trees

Let's assume you're given some set of edges, and each edge has some probability of being in our graph. We can compute the expected number of spanning trees. Notice that the old determinant is just the sum over all spanning trees of the product of their edge weights; every spanning tree has a weight of 1 when all edge weights are 1. But if we weight every spanning tree by the product of the probabilities of its edges, we will get the probability of this spanning tree appearing in our graph. Then summing this over all spanning trees gives us the expected number of spanning trees. For every edge $$$(u, v, p)$$$ you should set $$$M_{v, u} = M_{u, v} = p$$$, and $$$deg_u$$$ should be $$$sum(M_{u})$$$.

Counting spanning forests with multiple roots.

Let $$$R$$$ be the set of roots. You can remove the row and column corresponding to every node in $$$R$$$. This essentially tells the matrix to not count parent pointers for any node in $$$R$$$, as required.

Counting minimum spanning trees

Iterate over edge weights in increasing order. The edges of the current weight form some connected components. You should multiply the answer by the number of spanning trees of each such component. Then merge each connected component into one node, and repeat for the next smallest edge weight. This will amortize to $$$O(V^3)$$$, since the total size of components with more than one node is less than $$$2V$$$.


By Everule, history, 23 months ago, In English

Imagine you have a set of elements $$$S$$$. $$$x, y \in S$$$ are equivalent if you can get from one to the other by performing some operations from $$$G$$$. You must count the number of non-equivalent elements in the set. It turns out this is a lot easier if the set of operations has some special properties.

Let $$$G$$$ be the set of operations. We define $$$gx$$$ for $$$g \in G$$$ and $$$x \in S$$$ as the element obtained by applying the operation $$$g$$$ to the element $$$x$$$.

Let's define our set of operations $$$G$$$ to be a "group action" if the following is satisfied.

  • Every operation is invertible. For $$$x, y \in S$$$ and $$$g \in G$$$, if $$$gx = y$$$, then $$$x = g^{-1}y$$$.
  • Given 2 operations, the operation of them one after other is another operation in our set. For $$$g_1, g_2 \in G$$$, there exists $$$g_3 \in G$$$ such that $$$g_3 = g_2g_1$$$

A more intuitive way to define this would be that the graph of operations is an undirected union of cliques, where an edge $$$(x, y)$$$ denotes that $$$gx = y$$$ for some $$$g$$$ (notice invertibility guarantees this is a symmetric statement). If 2 elements are equivalent, it is possible to get from one of them to the other in one operation. Cyclic shifts of an array are one example of such a set of operations, as we will see later. For a deeper understanding I recommend reading about "groups" in https://web.evanchen.cc/napkin.html

Now we need the number of connected components in this graph, which is the same as the number of non equivalent elements.

Each clique is just a set of some equivalent copies, say $$$k$$$ copies. If we weigh each node by $$$1/k$$$, then each clique will contribute exactly one to our sum. So we get the sum

$$$\sum_{x \in S} \frac{1}{|\{gx\ :\ g \in G\}|}$$$

This is just the sum of $$$1 / k$$$ over all elements. Summing over elements is a bit easier, but still, finding the size of the set seems hard.

Let us look at the number of solutions of $$$gx = y$$$. Let $$$g_{xy}$$$ be one of the solutions. Then $$$g_{xy}^{-1}gx = x$$$. So the number of solutions to $$$gx = y$$$ is the same as the number of solutions to $$$gx = x$$$. Invertibility implies that there is a bijection between the solutions of $$$gx = x$$$ and $$$gx = y$$$ using $$$g_{xy}$$$.

That means for each $$$x, y \in S$$$ where $$$x, y$$$ are equivalent, there is an equal number of solutions to $$$gx = y$$$. Let's imagine the size of the clique is $$$k$$$ and the number of solutions to $$$gx = y$$$ is $$$s$$$ for each $$$y$$$. Then $$$ks = |G|$$$, because each $$$g$$$ is a solution to some equation. So instead of using $$$1/k$$$ we use $$$s/|G|$$$ instead.

Ok we're getting somewhere. Using this in the above equation, we get

$$$\frac{1}{|G|}\sum_{x \in S} |\{gx = x\ :\ g \in G\}|$$$

Notice that this is just the number of pairs $$$(g, x)$$$ such that $$$gx = x$$$. We can choose to iterate over $$$g$$$ instead.

$$$\frac{1}{|G|}\sum_{g \in G} |\{gx = x\ :\ x \in S\}|$$$

And we've finally reached the long-awaited Burnside's lemma. In many cases iterating over $$$g$$$ is easier, like in the following problem.

Count the number of arrays $$$A$$$ of size $$$n$$$ with entries in $$$[1, m]$$$, where 2 arrays are considered equal, if they differ only by a cyclic shift. https://cses.fi/problemset/task/2209

Let's first take a step back, and reprove Burnside's lemma in this explicit case. For a given cyclic array, let $$$k$$$ be the smallest $$$k$$$ such that $$$a_i = a_{i+k}$$$ (indices taken cyclically). Then this array is the concatenation of $$$a[1 : k]$$$, $$$n/k$$$ times. So there are $$$k$$$ equivalent arrays in $$$S$$$, and $$$n/k$$$ cyclic shifts that map the array to itself, as you would expect from the proof above. So each array should be counted $$$\frac{1}{k} = \frac{n/k}{n}$$$ times, and $$$n/k$$$ is the number of cyclic shifts of this array such that the array equals itself. From here you can verify that Burnside's lemma counts each element the correct number of times.

$$$G$$$ is the set of cyclic shifts of an array of size $$$n$$$, and $$$S$$$ is the set of $$$m^n$$$ arrays of size $$$n$$$ with entries in $$$[1, m]$$$. The number of arrays whose $$$k$$$-th cyclic shift equals the array itself is $$$m^{\gcd(k, n)}$$$. Therefore using Burnside's lemma we get the answer

$$$\frac{1}{|G|} \sum_k m^{\gcd(k, n)} = \frac{1}{n} \sum_k m^{\gcd(k, n)}$$$
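
A direct translation of this formula into code; the modulus $$$10^9+7$$$ and the modular division by $$$n$$$ via Fermat's little theorem are assumptions about the judge on my part:

#include <bits/stdc++.h>
using namespace std;
using ll = long long;
const ll MOD = 1'000'000'007;   // assumed prime modulus

ll pw(ll b, ll e){ ll r = 1; b %= MOD; for (; e; e >>= 1, b = b * b % MOD) if (e & 1) r = r * b % MOD; return r; }

int main(){
    ll n, m; cin >> n >> m;
    ll total = 0;
    for (ll k = 1; k <= n; k++)
        total = (total + pw(m, gcd(k, n))) % MOD;    // arrays fixed by the k-th cyclic shift
    // divide by |G| = n using the modular inverse (valid since MOD is prime)
    cout << total * pw(n % MOD, MOD - 2) % MOD << '\n';
}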


By Everule, history, 2 years ago, In English

I think that linear algebra is an unnecessarily intimidating tool for solving a problem I believe is far easier, and it needlessly intimidates those without formal knowledge of linear algebra, which is a significant part of Codeforces. I know I didn't have that knowledge. Therefore I want to show a step-by-step problem solving process that leads to the concept of the linear basis at the end, with no formal knowledge required.

We start with the elementary problem of the linear basis. Given an array $$$A = [a_1, a_2, \ldots a_n]$$$, where $$$a_i \le 2^d$$$, choose some subset of elements in $$$A$$$ and XOR them. Find the number of distinct elements you could make.

We start by using a simple dp. Let $$$X_i$$$ be the set of elements you could make with the first $$$i$$$ elements of $$$A$$$. Then it's easy to see that $$$X_{i+1}$$$ contains $$$x$$$ and $$$x \bigoplus a_{i+1}$$$ for every $$$x \in X_i$$$. This is an $$$O(2^dn)$$$ algorithm, which is too inefficient.

We need to put some constraints on what $$$X_i$$$ looks like to make further progress. Let us assume $$$x, y$$$ are two elements in $$$X_i$$$. Then $$$x \bigoplus y$$$ must also be in $$$X_i$$$. This is because I can just take the elements I used to make $$$x$$$ and the elements I used to make $$$y$$$. The elements used in both simply cancel out. This means that the new set of elements used to make $$$x \bigoplus y$$$ is also a subset of $$$[a_1, a_2, \ldots, a_i]$$$, and it is therefore in $$$X_i$$$.

This provides us with an equally powerful converse.

Let $$$x$$$ be some element in $$$X_i$$$, and $$$y$$$ some element not in $$$X_i$$$. Then $$$x \bigoplus y$$$ cannot be in $$$X_i$$$, because if I could make $$$x$$$ and $$$x \bigoplus y$$$, I could make $$$x \bigoplus x \bigoplus y = y$$$.

Another way of thinking about this is, we want to attack $$$y$$$ with some set of elements in $$$[a_1, a_2, \ldots a_i]$$$, and kill it by getting it to $$$0$$$.

If I could kill $$$x$$$ and $$$y$$$, I can kill $$$x \bigoplus y$$$ by killing $$$x$$$ first, then $$$y$$$.

If I can kill $$$x$$$ but not $$$y$$$, neither can I kill $$$x \bigoplus y$$$, because I could already get $$$y$$$ to $$$x \bigoplus y$$$, but I couldn't kill that either.

I recommend spending a minute internalizing this.

Now let us try adding a new element $$$a_{i+1}$$$ to $$$X_i$$$ again. I check if I can make $$$a_{i+1}$$$. If I can, I can simply ignore it. If I cannot, then $$$x \bigoplus a_{i+1}$$$ for each $$$x \in X_i$$$ is a new element too. So the size of $$$X$$$ will double. Obviously the set can double only $$$d$$$ times. This gives an algorithm in $$$O(n + d2^d)$$$ or $$$O(n + 2^d)$$$ depending on the implementation.
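
A direct sketch of this $$$O(n + d2^d)$$$ idea, storing $$$X$$$ explicitly as a bitmap (only viable while $$$2^d$$$ fits in memory; I assume $$$0 \le a_i < 2^d$$$ and an input of $$$n$$$, $$$d$$$ and then the array):

#include <bits/stdc++.h>
using namespace std;

int main(){
    int n, d; cin >> n >> d;
    vector<int> A(n);
    for (auto &a : A) cin >> a;

    vector<char> inX(1 << d, 0);
    inX[0] = 1;                              // the empty subset gives 0
    for (int a : A){
        if (inX[a]) continue;                // a is already representable: X does not change
        // otherwise X doubles: add x ^ a for every x currently in X (happens at most d times)
        vector<int> old;
        for (int x = 0; x < (1 << d); x++) if (inX[x]) old.push_back(x);
        for (int x : old) inX[x ^ a] = 1;
    }
    cout << count(inX.begin(), inX.end(), (char)1) << '\n';
}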

If $$$2^d$$$ is large, this algorithm doesn't work either, as we cannot explicitly store $$$X$$$. This means we will need to compress $$$X$$$ in some way. To do that, notice that there are at most $$$d$$$ elements that matter, because the ones that I ignored may as well not have been there. So just storing the elements $$$[a_{i_1}, a_{i_2}, \ldots, a_{i_k}]$$$ that doubled the size of $$$X$$$ is enough. $$$X$$$ is just the set of the XORs of every subset of the compressed array.

While we successfully compressed $$$X$$$, we lost our ability to quickly check if $$$a_{i+1}$$$ is in $$$X$$$. We need to get that back.

We cannot guarantee much about the structure of $$$[a_{i_1}, a_{i_2}, \ldots a_{i_k}]$$$, so it is hard to see an algorithm that does this quickly. However, it is important to notice that it's not the elements themselves that are important, but rather the set generated by the compressed elements. So any edit made to the array is okay, as long as it doesn't change the produced set.

We now show that the sets $$$[a_{i_1}, a_{i_2}, \ldots a_{i_k}]$$$ and $$$[a_{i_1} \bigoplus a_{i_2}, a_{i_2}, \ldots a_{i_k}]$$$ produce the same $$$X$$$. Or more generally, I can take any element and XOR it with some other element, and it will produce the same set $$$X$$$. This is easy to prove, because if I was using $$$a_{i_1}$$$ in the first set, I use the first and second elements in the second set. If I was using both $$$a_{i_1}$$$ and $$$a_{i_2}$$$, then I just need the first element. It is equally good.

While this is enough to solve the given problem, it doesn't do the idea enough justice. In fact, any invertible XOR transformation from $$$[a_1, a_2, \ldots, a_k]$$$ to $$$[b_1, b_2, \ldots, b_k]$$$ will produce the same set.

A XOR transformation is one where $$$b_i$$$ is the XOR of $$$a_j$$$ for all $$$j$$$ in some subset of $$$1,2,3,\ldots, k$$$. An invertible XOR transformation, is one where there exists some XOR transformation from $$$b$$$ back to $$$a$$$.

Let $$$X_a$$$ be the set of elements that are produced by $$$[a_1, a_2, \ldots, a_k]$$$ and similarly define $$$X_b$$$. Then because $$$b_i$$$ belongs to $$$X_a$$$ by definition, every element of $$$X_b$$$ also belongs to $$$X_a$$$, implying $$$X_b \subseteq X_a$$$. But as the operation is invertible, we can similarly argue that $$$X_a \subseteq X_b$$$. This is only possible if $$$X_a = X_b$$$.

Any set of the smallest size that produces $$$X$$$ is an XOR basis, and we can choose any basis of $$$X$$$ we want.

Let us now ask ourselves: what do I want my basis to look like? If some element had a "unique" bit, it would be easy to check if the element should be added to make $$$y$$$: I just need to see if the unique bit is set in $$$y$$$. In fact, we can do this. Let us look at the bits in decreasing order.

If no element has this bit set, we can ignore this bit. If there is already just one element with this bit set, it's good already. If there are more than one, we just pick any one of them and XOR it with all the other elements that have this bit set, to make it the only element with this bit set.

Now, since we know whether this element is included by checking the unique bit, we know when to XOR $$$y$$$ with this element. Now we need to check if the new $$$y$$$ can be made from the other elements of the basis. The other elements form a smaller basis, and we can repeat the process; it's trivial to check in a basis with only $$$0$$$. Therefore, inductively, we have found an efficient, $$$O(poly(d))$$$ way to check if $$$y$$$ can be made from the basis.

I will leave optimizing this to $$$O(d)$$$ as an exercise, as my goal was to explain the conceptual underlying forces that allow the algorithm to work.
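
For reference, here is a common implementation of the XOR basis along these lines (close to the $$$O(d)$$$ per element version the exercise hints at); basis[b] stores the kept element whose highest set bit is $$$b$$$, or $$$0$$$ if there is none:

#include <bits/stdc++.h>
using namespace std;
using u64 = unsigned long long;

const int D = 60;          // assume all values are below 2^60
u64 basis[D];              // basis[b]: element with highest set bit b, or 0

// Try to add x to the basis. Returns false if x was already representable
// (the set X does not change), true if the basis grew (X doubles in size).
bool insert_element(u64 x){
    for (int b = D - 1; b >= 0; b--){
        if (!((x >> b) & 1)) continue;
        if (!basis[b]){ basis[b] = x; return true; }
        x ^= basis[b];     // remove bit b from x using the element responsible for it
    }
    return false;          // x was reduced to 0, so it is in the span of the basis
}

int main(){
    int n; cin >> n;
    int rank = 0;
    for (int i = 0; i < n; i++){
        u64 a; cin >> a;
        if (insert_element(a)) rank++;
    }
    cout << (1ULL << rank) << '\n';   // number of distinct subset XORs
}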


By Everule, history, 2 years ago, In English

This is a blog starting from the very basics of number theory, in a way that flows fluidly from one concept to another, and it is based on developing an intuitive feeling for the basics of elementary number theory. This is not a blog to simply gloss over. I consider it more of a guided exploration into the world of discovering things in number theory, and I don't expect anyone to immediately understand all the insights in this blog. But if you put an honest effort into discovering how I find these insights, you will find much use for my blog.

If you do not know some notation or some elementary theorem I use you should refer to this.

Elementary definitions

Greatest common divisor, Additive structure of residues mod n, and Bezout's Theorem

Multiplicative structure of residues mod n and Fermat's little theorem

Chinese remainder theorem and linear equations modulo n

Fundamental Theorem of arithmetic

Extended Chinese remainder theorem

Multiplicative functions and Mobius inversion

Primitive roots and modular logarithm

Probabilistic primality test


By Everule, history, 3 years ago, In English

Disclaimer : I just want to highlight some simple techniques used to solve Ad-Hoc tasks.

In my opinion, these are all simple techniques, but a lot of these are used even in much harder problems. I hope these will be helpful to some people.

1: Eliminate obvious cases, and see if you can simplify the problem.

Solution

2: Ignore unnecessary information, and use it to draw the problem in new ways.

Solution

3: Making obvious lower and upper bounds, and proving they are constructible.

Solution

4: Finding invariants

Solution

5 : Define something that cannot change much.

Solution
Question that uses many of these
Solution


By Everule, history, 4 years ago, In English

Some newbie messaged me asking for help with this article: https://www.geeksforgeeks.org/queries-for-number-of-distinct-elements-in-a-subarray-set-2/.

Can someone explain why the time complexity is O(log n) and not O(n)?

Please read it once before downvoting. It is an advanced data structure that is not well known.


By Everule, history, 4 years ago, In English

There is a Codeforces Round X coming up. Is this a new thing on CF, or have similar rounds already taken place on CF?

Does anyone know what it is about?


By Everule, history, 4 years ago, In English

Inspired by comments on this post, I am trying to recreate the system.

Please fill out this form and I will allocate groups as requested to the best of my ability. I believe groups of $$$3$$$ are best for practising. If you are not content with your group, I will try to accommodate you in another group.

To prevent spam, please submit a compilation error to 29B in the problemset. Alternatively, if you don't want a compilation error in your records, change your name to "Buddy" "System".

I hope I am able to help people have fun while practicing and feel more motivated.

Edit : There are a full 111 participants, and it will be impossible for me to message all of them. I can only message $$$7$$$ people every hour.

Join this Discord Server. There are different channels for each rank and each time in GMT. This will allow you to quickly find the right people for yourself, rather than me making all the groups.


By Everule, history, 4 years ago, In English

I'm looking for other people to practice with. I think doing virtual contests alone is very boring. It would also be better because we will be able to explain how we got to the solution, and how to prove it. I think I would make a good team-mate for people around 2100 rating, who are able to almost solve D2E/D1C in contest. I also think it is important to understand why an algorithm is correct, so if you believe similarly, I think it will work out better. I also encourage others, who may be better off finding other team-mates, to find people in the comments of this post.


By Everule, history, 4 years ago, In English

I found a website https://recommender.codedrills.io where you can see your strong and weak topics. I'm curious to know what topics others find easy. Here's mine.


By Everule, history, 4 years ago, In English

I'm trying to solve atcoder ABC167F. https://atcoder.jp/contests/abc167/tasks/abc167_f .

I started by questioning what conditions decide whether a concatenation is valid. We start by defining the sum of a bracket sequence, which is count('(') - count(')'). I then found the sum of every string, and the smallest sum over prefixes of the string. They are denoted by $$$sum$$$ and $$$mnprefix$$$ in my code. If my current string is valid, which is equivalent to $$$mnprefix\ge 0$$$, and has a sum of $$$x$$$, then a concatenation is valid if and only if $$$x+mnprefix\ge 0$$$, and our new sequence has a sum of $$$x+sum$$$. Let's say we have two strings $$$a$$$ and $$$b$$$. Let $$$a.sum$$$ denote the sum of $$$a$$$; the rest of the notation is deducible.

Let's say $$$a$$$ comes before $$$b$$$. Then the two conditions that need to be valid are $$$x\ge a.mnprefix$$$ and $$$x\ge b.mnprefix-a.sum$$$. So we want $$$\max(a.mnprefix, b.mnprefix-a.sum)\le \max(b.mnprefix, a.mnprefix - b.sum)$$$ for $$$a$$$ to be chosen before $$$b$$$. However, my proof has one flaw, which is that it doesn't account for transitivity. I don't know how to prove that another string $$$c$$$ will not affect my result, and how do I know my comparator function is transitive, when each string doesn't have an exact value, and the comparator values differ depending on the string being compared with?

If my solution is correct, can someone tell me whether I have made an error in my implementation?

My code

Edit : nvm, I was using abs(mnprefix) in my calculation but not in my code.


By Everule, 4 years ago, In English

https://atcoder.jp/contests/abc160/tasks/abc160_f //Question link

https://atcoder.jp/contests/abc160/submissions/11324766 //My TLE submission

I need help understanding how to reroot the tree in O(n). I think I have understood how to reroot the tree once, but I cannot understand how to do it for every node. For example:

dp.resize(n,mp(-1,0)); //dp[u]=(the actual answer, size of subtree) 
cout<<solve(0,0).first<<"\n";
dp[0]=mp(-1,0);
dp[edge[0][0]]=mp(-1,0);
cout<<solve(edge[0][0],edge[0][0]).first<<"\n";

The code above seems to correctly calculate the answer for the root node and its first edge. But how do I reroot the tree in such a way that it goes to all edges and calculates the answer in O(n)?
