polosatic's blog

By polosatic, history, 6 weeks ago, translation, In English

Physics has many convenient statements, like conservation laws, which can help to "prove" interesting math facts. If you know any such "proofs", feel free to share them. I know a few myself:

Tetrahedron normals

Statement
Solution

Pythagorean theorem

Statement
Solution

Torricelli point

Statement
Solution


By polosatic, history, 4 months ago, In English

(Unfortunately, Codeforces does not support emoji, so it is here)


By polosatic, history, 12 months ago, translation, In English

Hello, Codeforces. I wrote a program that uses this theorem to interpolate a function as a polynomial. First I will explain how it works, and then show some examples of its use.

Wikipedia does not provide one formula I need, so I will show it:

Formula

The code is quite long, but simple to use.

How to use program

First, you need to set the values of the constants. N is the number of variables, and MAX_DEG is the maximal exponent of a variable over all monomials. In main you need to fill 2 arrays: names contains the names of all variables, and max_exp[i] is the maximal exponent of the i-th variable, or an upper bound on it.

Define d = (max_exp[0] + 1) * (max_exp[1] + 1) * ... * (max_exp[N - 1] + 1). MAX_PRODUCT should be greater than d. Then you need to write a function f(array<ll, N>) which returns ll or ld. In my code it returns an integer, but its declared type is ld to avoid ll overflow.
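For instance, a setup for a function of two variables might look like this (the concrete values below are only an illustration of the description above, not taken from the actual code):

```cpp
#include <bits/stdc++.h>
using namespace std;
#define ll long long
#define ld long double

const int N = 2;                 // number of variables
const int MAX_DEG = 3;           // maximal exponent over all monomials

array<string, N> names = {"a", "b"};
array<int, N> max_exp = {3, 3};  // maximal exponent of each variable

// here d = (3 + 1) * (3 + 1) = 16, so MAX_PRODUCT must be set above 16

ld f(array<ll, N> v)             // the function to interpolate
{
    auto [a, b] = v;
    return (ld)a * a * b + 3 * a * b;  // integer-valued, but typed ld
}
```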

Code

Stress-testing

If you uncomment the last 2 lines in main, the program will check the polynomial it got on random tests. The test generation should depend on the running time of f, because f can run too long on big tests.

Approximations

The function from the example and similar functions (with n loops) are polynomials with rational coefficients (if this were not true, the function would not return an integer). So, if APPROXIMATION = true, all coefficients are approximated by rational numbers with absolute error < EPS using the functions normalize and approximate (they use the same algorithm). This algorithm works in O(numerator + denominator), which seems slow, but if the polynomial has a small number of monomials, it does not take much time.
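As an illustration, a rationalization with the stated O(numerator + denominator) behaviour can be done by trying denominators in increasing order (this is only a sketch; the actual normalize and approximate may differ):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Try denominators q = 1, 2, ...: the first q with some p/q within EPS
// of x is returned, so the loop length is on the order of the answer's
// denominator.
pair<long long, long long> approximate(long double x, long double EPS)
{
    for (long long q = 1;; q++)
    {
        long long p = llroundl(x * q);
        if (fabsl(x - (long double)p / q) < EPS)
            return {p, q};
    }
}
```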

The stress-testing function check considers a value correct if its absolute or relative error is < EPS_CHECK.

How and how long does it work

We consider monomials as arrays of exponents and hash these arrays. The array PW contains powers of the points (from POINTS) which we use for interpolation. If you want to use your own points for interpolation, modify POINTS. If you use fractional numbers, replace #define ll long long with #define ll long double. The array DIV is used for fast calculation of the denominators in the formula.

convert(h) — get the indices (in the array POINTS) of the coordinates of the point corresponding to the monomial with hash h. convert_points(h) — get the coordinates of the point corresponding to the monomial with hash h.
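The hashing is presumably a mixed-radix encoding of the exponent array; a minimal sketch (with illustrative N and max_exp, and convert decoding a hash back into indices):

```cpp
#include <bits/stdc++.h>
using namespace std;

// An exponent array is read as a mixed-radix number with base
// (max_exp[i] + 1) at position i; convert inverts the encoding.
const int N = 2;
const int max_exp[N] = {3, 4};

int hash_monomial(const array<int, N>& exp)
{
    int h = 0;
    for (int i = 0; i < N; i++) h = h * (max_exp[i] + 1) + exp[i];
    return h;
}

array<int, N> convert(int h)
{
    array<int, N> exp{};
    for (int i = N - 1; i >= 0; i--)
    {
        exp[i] = h % (max_exp[i] + 1);
        h /= (max_exp[i] + 1);
    }
    return exp;
}
```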

Then we precompute the values of f at all our points and write them to F_CACHE. After that, we run a bfs on monomials. During a transition from one monomial to another we decrease the exponent of one variable by 1. When a monomial is taken from the set in the bfs, we find its coefficient using gen. If it is not zero, we need to modify our polynomial for every monomial we have not yet considered in the bfs ("monomial" and "point" are interchangeable here, because we can get a point from a monomial using convert_points(h), where h is the hash of the monomial).

We need to modify the polynomial to satisfy one of the theorem's conditions: there are no monomials greater than our monomial (greater meaning that all exponents are greater or equal). For every point we have not yet considered in the bfs (they will be in the set remaining_points) we add the value of the monomial at this point to SUM[hash_of_point]. Later we decrease f(point) by SUM[hash_of_point] to artificially remove the greater monomials.

Time complexity

  1. The longest part of the precalc, computing F_CACHE, takes O(d * O(f)) time.
  2. Each of the d runs of gen iterates over O(d) points, and the denominator calculation takes O(N) time per point.
  3. For every monomial with a non-zero coefficient, we calculate its value at O(d) points, in O(N) per point.

In total we get O(d * O(f) + d^2 * N + d * O(res)), where O(res) is the time needed to evaluate the polynomial we got as a result.

Trying to optimize

It seems that the recursion takes the most time. We could unroll it into one loop using a stack, but that is boring, so I decided to unroll it another way. For every monomial with a non-zero coefficient, let's iterate over all monomials with hash less than or equal to ours. For every such monomial we check whether it is less than our monomial (all corresponding exponents are less or equal). If it is, we add to the coefficient the value of the fraction at this point (monomial).

// Instead of ld k = gen();
ld k = 0;
for (int h = 0; h <= v; h++)
{
    array<int, N> cur = convert(h);
    bool ok = 1;
    for (int i = 0; i < N; i++) if (cur[i] > cur_exp[i]) ok = 0;
    if (ok)
    {
        ll div = 1;
        for (int i = 0; i < N; i++) div *= DIV[i][cur[i]][cur_exp[i]];
        k += (ld)(F_CACHE[h] - SUM[h]) / div;
    }
}

Is it faster than gen? The new implementation iterates over all pairs of hashes, so it also works in O(d^2 * N). Let's estimate the constant. The number of these pairs is d * (d + 1) / 2, so we get a constant of 1 / 2. Now let's calculate the constant for the number of points considered by gen. This number can be computed with the following function:

ld f(array<ll, N> v)
{
	auto [a, b, c, d, e, f, g] = v;
	ld res = 0;
	for (int i=0;i<a;i++)
		for (int j=0;j<b;j++)
			for (int u=0;u<c;u++)
				for (int x=0;x<d;x++)
					for (int y=0;y<e;y++)
						for (int z=0;z<f;z++)
							for (int k=0;k<g;k++)
								res += (i + 1) * (j + 1) * (u + 1) * (x + 1) * (y + 1) * (z + 1) * (k + 1);
	return res;
}

The coefficient of a^2 * b^2 * c^2 * d^2 * e^2 * f^2 * g^2 is our constant. To find it, I used my program. It is 1 / 128. In general, it is 1 / 2^N for N variables. This means the optimization can be efficient if N is small.
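The 1 / 2^N constant can also be seen directly: each of the N nested loops contributes roughly a factor of one half, since

```latex
\sum_{i=0}^{a-1}(i+1) = \frac{a(a+1)}{2} \approx \frac{a^2}{2}
```

so the nested sum grows like a^2 * b^2 * ... / 2^N, while the pairwise loop above touches about d^2 / 2 pairs.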

Conclusion

Maybe this program will help someone find a formula for some function. It can also expand brackets, which is useful if you calculate geometry problems in complex numbers. If you know other ways to use it, I will be happy if you share them.

With N = 1 this program is just Lagrange interpolation, which can be done faster than O(d^2). Maybe someone will find a way to speed it up for N > 1.
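For reference, the N = 1 case in its classic O(d^2) form (a standalone sketch, not the program's actual code):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Classic Lagrange interpolation: evaluate the unique polynomial of degree
// < d passing through (xs[i], ys[i]) at the point x, in O(d^2).
long double lagrange_at(const vector<long double>& xs,
                        const vector<long double>& ys, long double x)
{
    int d = xs.size();
    long double res = 0;
    for (int i = 0; i < d; i++)
    {
        long double term = ys[i];
        for (int j = 0; j < d; j++)
            if (j != i) term *= (x - xs[j]) / (xs[i] - xs[j]);
        res += term;
    }
    return res;
}
```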


By polosatic, history, 14 months ago, translation, In English

Hello, Codeforces. I submitted two very similar solutions to 1771F - Hossam and Range Minimum Query.

TL: 193157567 AC: 193157512

You can see that they differ in only one line involving map. I have no idea why the first solution gets TL.


By polosatic, history, 15 months ago, translation, In English

Hello, Codeforces. Recently I invented a problem. Consider the set of points (x, y) with integer coordinates, 0 <= x < a and 0 <= y < b. We want to know the number of acute triangles with vertices at these points.

Trying to interpolate

We can write a function f(a, b) which finds the answer and works in O((ab) ^ 3). I conjectured that it is a polynomial in two variables of degree <= 6 and tried to interpolate it using this theorem. But my code finds non-zero coefficients of monomials with degree > 6, so the hypothesis was not confirmed. I also failed with right triangles.

code (for right triangles)

This code finds the coefficient of a ^ (C1 - 1) * b ^ (C2 - 1).

What I would like to know:

  • can this problem be solved faster than O(stupid(a, b))?
  • can this problem be solved in O(1)?
  • maybe someone knows problems with hard-to-find formulas which this method could help to find?

UPD: Found a formula for the number of right triangles with b = 2: f(a, 2) = 2 * a ^ 2 - 4 for a > 1.
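As a sanity check, an O((ab) ^ 3) brute force for right triangles can be sketched like this (my own version, not the spoiler code); for b = 2 it matches the formula above on small a:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Try all triples of grid points and test for a right angle at some
// vertex via dot products; collinear triples are skipped.
long long count_right(int a, int b)
{
    vector<pair<int, int>> pts;
    for (int x = 0; x < a; x++)
        for (int y = 0; y < b; y++)
            pts.push_back({x, y});
    int n = pts.size();
    long long cnt = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            for (int k = j + 1; k < n; k++)
            {
                auto [x1, y1] = pts[i];
                auto [x2, y2] = pts[j];
                auto [x3, y3] = pts[k];
                long long cross = (long long)(x2 - x1) * (y3 - y1) - (long long)(y2 - y1) * (x3 - x1);
                if (cross == 0) continue; // degenerate triple
                long long d1 = (long long)(x2 - x1) * (x3 - x1) + (long long)(y2 - y1) * (y3 - y1);
                long long d2 = (long long)(x1 - x2) * (x3 - x2) + (long long)(y1 - y2) * (y3 - y2);
                long long d3 = (long long)(x1 - x3) * (x2 - x3) + (long long)(y1 - y3) * (y2 - y3);
                if (d1 == 0 || d2 == 0 || d3 == 0) cnt++; // right angle found
            }
    return cnt;
}
```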

UPD2: Many thanks to bronze_coder for finding an O(1) solution for constant b: OEIS A189814.

The interpolation should use values ai > b ^ 2.

UPD3: Finally wrote an O((ab) ^ 2) solution.

Code

Now I can interpolate using bigger values of a and b.

But it is quite strange to me that the number of right triangles is proportional to (ab) ^ 2 instead of (ab) ^ 3. Now I will try to understand why the formula doesn't work with ai <= b ^ 2.

UPD4: Code that finds a formula for f(a, b) for a > b ^ 2 and works in O(b ^ 6):

Spoiler

I used OEIS to find a regularity between the coefficients of P, but I didn't find anything :( except x2 = b * (b - 1), but that was obvious.

UPD5:

Finally found a formula and solution in O(min(a, b) ^ 6)

If a < b, let's swap them. Now we need to solve the problem in O(b ^ 6). If a <= (b - 1) ^ 2 + 1, we can run the O((ab) ^ 3) = O(b ^ 6) solution. So it remains to solve the case a > (b - 1) ^ 2 + 1.

Definitions

Consider all right triangles and divide them into 3 types:

  1. triangles without any vertices on the line x = a - 1

  2. triangles with some vertices on the line x = a - 1 and with two sides which are parallel to the x-axis or y-axis

  3. other triangles

Let C(a) denote the number of triangles of the third type (some function of a).

The number of triangles of the second type is (a - 1) * b * (b - 1) / 2 * 4:

Proof

By definition, every vertex (x, y) of a first-type triangle satisfies 0 <= x < a - 1 and 0 <= y < b, so the number of such triangles is f(a - 1, b), by the definition of f.

So f(a, b) = f(a - 1, b) + (a - 1) * b * (b - 1) / 2 * 4 + C(a).

Let's prove that C(a) has the same value for every a > (b - 1) ^ 2 + 1.

Proof

C(a) is a constant. Let c be this constant.

So f(a, b) = f(a - 1, b) + (a - 1) * b * (b - 1) / 2 * 4 + c. After some transforms, we get a formula for f(a, b) in terms of f((b - 1) ^ 2 + 1, b).

Transforms
Implementation
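For what it is worth, the transforms presumably amount to telescoping the recurrence (this is my own reconstruction, not necessarily the exact form in the spoilers). Writing a_0 = (b - 1)^2 + 1, for a > a_0:

```latex
f(a, b) = f(a_0, b) + 2b(b-1)\sum_{k=a_0}^{a-1} k + (a - a_0)\,c
        = f(a_0, b) + b(b-1)(a - a_0)(a + a_0 - 1) + (a - a_0)\,c
```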

Unfortunately, the value of c differs for different values of b, and I could not find a regularity between them. Interpolation and OEIS did not help me. There are some things I should do:

  1. Express c through b (after all, c depends only on b)
  2. Solve the problem for a <= (b - 1) ^ 2 faster
  3. Solve the problem for acute triangles.

It will be hard.

UPD6: We can speed up the solution to O(b^5) using this idea
