bicsi's blog

By bicsi, history, 4 years ago, In English

In a recent contest in Romania, there was a problem that was formulated as follows:

Given a convex polygon $$$P$$$ with vertices $$$(x_i, y_i)$$$ in the 2D plane and a number $$$R$$$, construct a polygon $$$P'$$$ with vertices $$$(x'_i, y'_i)$$$, such that:

  • the Euclidean distance between $$$(x_i, y_i)$$$ and $$$(x'_i, y'_i)$$$ is at most $$$R$$$
  • the area of polygon $$$P'$$$ is as large as possible

It is guaranteed that any two vertices of the polygon are at distance at least $$$2R$$$ apart.

The problem was stated a bit awkwardly, as it required the area not to be the largest possible, but merely as large as that of the "model solution". In case you are interested, the task (in Romanian) can be found here. Keep in mind that the task at hand is a bit ill-posed and has some notable problems with floating-point precision. However, that is not my main concern here.


It feels to me that the problem might actually be convex; however, I don't have the mathematical skills to (dis)prove it.

The problem can be formulated in terms of constrained optimization as follows:

max $$$x'_1 y'_2 + x'_2 y'_3 + \dots + x'_n y'_1 - x'_2 y'_1 - x'_3 y'_2 - \dots - x'_1 y'_n$$$

s.t. $$$(x'_i - x_i)^2 + (y'_i - y_i)^2 \leq R^2$$$ for all $$$i = 1, \dots, n$$$

(here $$$x'_i, y'_i$$$ are the actual variables being optimized; excuse the notation)
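For concreteness, the objective is exactly the shoelace expression for twice the signed area of $$$P'$$$, so maximizing it maximizes the area. A minimal helper (my own sketch; the function name is illustrative):

```cpp
#include <vector>

// Twice the signed (shoelace) area of the polygon (x_1, y_1), ..., (x_n, y_n);
// positive when the vertices are in counter-clockwise order.
double objective(const std::vector<double>& x, const std::vector<double>& y) {
    int n = (int)x.size();
    double s = 0;
    for (int i = 0; i < n; ++i) {
        int j = (i + 1) % n;  // cyclic successor of vertex i
        s += x[i] * y[j] - x[j] * y[i];
    }
    return s;
}
```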

The constrained space is convex. The objective function seems concave, but I'm not sure how to prove it. For three points, the Hessian of the objective function looks like this.
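The original image is not reproduced here; as my own reconstruction, ordering the variables as $$$(x'_1, y'_1, x'_2, y'_2, x'_3, y'_3)$$$, a direct computation gives the constant matrix

$$$\begin{pmatrix} 0 & 0 & 0 & 1 & 0 & -1 \\ 0 & 0 & -1 & 0 & 1 & 0 \\ 0 & -1 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 & 0 & 0 \\ -1 & 0 & 1 & 0 & 0 & 0 \end{pmatrix}$$$

(each $$$\pm 1$$$ entry couples $$$x'_i$$$ with $$$y'_{i \pm 1}$$$).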


The model solution is to iteratively fix all points apart from one and optimize for that single point (repeating until convergence). It can be proven rather easily that the best placement of point $$$i$$$ lies on the intersection between the circle of radius $$$R$$$ centered at $$$(x_i, y_i)$$$ and the line through $$$(x_i, y_i)$$$ perpendicular to the segment from $$$(x_{i - 1}, y_{i - 1})$$$ to $$$(x_{i + 1}, y_{i + 1})$$$ (taking the intersection point on the side that increases the area). Check out the sketch below.
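A minimal sketch of this iteration (my own reconstruction, not the official code; the convergence threshold and iteration cap are arbitrary choices, and the polygon is assumed counter-clockwise):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// One pass: for each vertex in turn, fix all the others and move vertex i
// to its optimum inside the disk of radius R around the original vertex
// c[i]. With the others fixed, the area is linear in (x'_i, y'_i), so the
// optimum lies on the boundary circle, along the area gradient, which is
// perpendicular to the chord joining the two current neighbours.
void optimizePass(const std::vector<Pt>& c, std::vector<Pt>& p, double R) {
    int n = (int)p.size();
    for (int i = 0; i < n; ++i) {
        const Pt& a = p[(i + n - 1) % n];
        const Pt& b = p[(i + 1) % n];
        // Gradient of the signed (counter-clockwise) area w.r.t. p[i]:
        // the chord a -> b rotated 90 degrees clockwise.
        double gx = b.y - a.y, gy = a.x - b.x;
        double len = std::hypot(gx, gy);
        if (len < 1e-12) continue;  // degenerate chord: leave the point
        p[i] = {c[i].x + R * gx / len, c[i].y + R * gy / len};
    }
}

// Repeat passes until no point moves more than a small threshold.
void solve(const std::vector<Pt>& c, std::vector<Pt>& p, double R) {
    for (int it = 0; it < 100000; ++it) {
        std::vector<Pt> prev = p;
        optimizePass(c, p, R);
        double moved = 0;
        for (size_t i = 0; i < p.size(); ++i)
            moved = std::max(moved,
                std::hypot(p[i].x - prev[i].x, p[i].y - prev[i].y));
        if (moved < 1e-9) break;
    }
}
```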


I have some questions about this:

  • Is the constrained optimization convex?
  • Either way, does the algorithm above produce the global maximum?
  • What are these types of "iterative optimization" algorithms called in the literature? Are there known bounds for convergence (in the convex case)?

»
4 years ago, # |

What are these types of "iterative optimization" algorithms called in the literature? Are there known bounds for convergence (in the convex case)?

Isn't it called hill climbing?

I don't have answers to the other questions, but I can propose a different way to think about it. You are given $$$N$$$ disjoint unit circles whose centers form a convex polygon. Choose one point from each circle so that this set is stable: it must not be possible to move a single point to increase the area of the polygon. In other words, for every three consecutive points, the middle point should be as far as possible from the segment connecting the other two. Now we want to prove that there is only one stable set of points. (Though this is not enough to claim that your solution converges to this stable set.)
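In symbols (my own restatement, writing $$$C_i$$$ for the $$$i$$$-th circle's center and $$$P'_j$$$ for the chosen points), stability means that for every $$$i$$$,

$$$P'_i = \arg\max_{|Q - C_i| \le 1} \operatorname{dist}\left(Q, \overline{P'_{i-1} P'_{i+1}}\right),$$$

with the distance taken on the side away from the polygon.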

  • »
    »
    4 years ago, # ^ |
    Rev. 2

    It is some form of hill climbing. I was wondering whether this particular method of ‘optimization by marginalization’ has a more specific name.

    The alternate formulation seems appropriate. This means that some ‘fixed point analysis’ could perhaps be applied here. If we consider a function $$$f$$$ that simultaneously maps all $$$n$$$ points to their extremal positions, then maybe it can be (dis)proved that $$$f$$$ has a unique fixed point ($$$f(x) = x$$$). Every local maximum should be a fixed point of $$$f$$$, so this would solve the optimality issue.

    Alternatively, one could just try to prove that the (signed) area function is concave in its arguments, but even if it is, I might need more mathematical knowledge to prove it.

  • »
    »
    4 years ago, # ^ |

    If you check that the stable set of points is unique, it follows that the algorithm will converge to this stable set, because by compactness some subsequence of the sequence produced by the algorithm converges. If a subsequence converges, then by continuity its limit is necessarily a fixed point, and since the fixed point is unique, the limit will be "the stable set". This happens for all convergent subsequences; thus the sequence itself is convergent.

    One still needs to show that there is a unique stable set of points.

    • »
      »
      »
      4 years ago, # ^ |

      I don't understand this. Are you making any silent assumptions about the algorithm? From the fact alone that each iteration improves the score, we can't conclude that it will converge to (or reach) a local maximum.

      • »
        »
        »
        »
        4 years ago, # ^ |
        Rev. 2

        Yes, I guess I was assuming that the sequence given by the algorithm, $$$x_{n+1} = f(x_n)$$$, converges to a point $$$p$$$, and moreover that $$$f$$$ is continuous. If this is the case, then $$$p = f(p)$$$ and we have reached the only stable point. Moreover, since iterating $$$f$$$ increases the area, any point $$$q$$$ where the maximum area is attained must satisfy $$$q = f(q)$$$, and therefore, by the uniqueness of the stable point, $$$p = q$$$.

        My previous message suggested that the sequence is automatically convergent. This is not the case, but it seems very plausible given the way it is produced by the algorithm.

»
4 years ago, # |

I think the area function might be convex (not concave). Please tell me if this is correct:

Let's consider the area function $$$A(x_1, y_1, x_2, y_2, \dots, x_n, y_n) = A(x, y) = x_1 y_2 + x_2 y_3 + \dots + x_n y_1 - y_1 x_2 - y_2 x_3 - \dots - y_n x_1$$$. The quadratic form $$$(u_1, v_1, \dots, u_n, v_n) \cdot \operatorname{Hessian}(A) \cdot (u_1, v_1, \dots, u_n, v_n)^T$$$ turns out to be $$$u_1 v_2 + u_2 v_3 + \dots + u_n v_1 - v_1 u_2 - v_2 u_3 - \dots - v_n u_1$$$, which is just $$$A(u, v)$$$ (times a factor of $$$2$$$, since $$$A$$$ is a homogeneous quadratic).

So, for polygons of positive area, the Hessian of the area function is positive semidefinite, so the function is convex (?).

However, we are then left with maximizing a convex function over a convex constraint set, which is not very helpful.
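As a numerical sanity check of the quadratic-form identity (my own sketch, not part of the original comment): build $$$\operatorname{Hessian}(A)$$$ by finite differences and compare its quadratic form against $$$2 A(u, v)$$$ in a random direction.

```cpp
#include <cstdio>
#include <cstdlib>
#include <vector>

// The area function A from above, with the 2n variables packed as
// z = (x_1, y_1, x_2, y_2, ..., x_n, y_n).
double A(const std::vector<double>& z) {
    int n = (int)z.size() / 2;
    double s = 0;
    for (int i = 0; i < n; ++i) {
        int j = (i + 1) % n;
        s += z[2 * i] * z[2 * j + 1] - z[2 * i + 1] * z[2 * j];
    }
    return s;
}

int main() {
    int n = 4, m = 2 * n;
    double h = 1e-3;
    // Hessian by central finite differences; since A is quadratic, the
    // result is constant (independent of the base point) up to rounding.
    std::vector<std::vector<double>> H(m, std::vector<double>(m));
    for (int j = 0; j < m; ++j)
        for (int k = 0; k < m; ++k) {
            std::vector<double> z(m, 0.0);
            z[j] += h; z[k] += h; double fpp = A(z);
            z[k] -= 2 * h;        double fpm = A(z);
            z[j] -= 2 * h;        double fmm = A(z);
            z[k] += 2 * h;        double fmp = A(z);
            H[j][k] = (fpp - fpm + fmm - fmp) / (4 * h * h);
        }
    // Random direction w = (u_1, v_1, ..., u_n, v_n).
    std::vector<double> w(m);
    for (int j = 0; j < m; ++j) w[j] = rand() / (double)RAND_MAX - 0.5;
    double quad = 0;
    for (int j = 0; j < m; ++j)
        for (int k = 0; k < m; ++k) quad += w[j] * H[j][k] * w[k];
    // For a homogeneous quadratic, w^T H w = 2 A(w): note the factor 2.
    std::printf("w^T H w = %.6f   2*A(w) = %.6f\n", quad, 2 * A(w));
    return 0;
}
```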

  • »
    »
    4 years ago, # ^ |

    However, because the area function is convex, the convex optimization problem of minimizing (**not maximizing**) the area under the same constraints has a unique local minimum. Moreover, there is a one-to-one correspondence between local minima of the minimum-area problem and local maxima of the maximum-area problem (just reflect each point of a solution through the center of its disk). Therefore, the maximization problem must have a unique local maximum.

    Is this proof correct?

    • »
      »
      »
      4 years ago, # ^ |

      To ensure uniqueness of the minimum, one usually checks that the function is strictly convex and that the domain on which it is defined is convex. This is actually the case, as the bilinear form $$$A$$$ is uniformly positive definite on the domain given by the disks. However, I don't see the correspondence between local minima and local maxima of the area problem.