Let there be a sequence of integers $$$x_1, x_2, \ldots, x_n$$$. Then, the harmonic mean is defined as $$$H = \dfrac{n}{\frac{1}{x_1} + \frac{1}{x_2} + \cdots + \frac{1}{x_n}}$$$.

Given an array of positive and negative integers, what is the most efficient way to find the subarray with the maximum harmonic mean? Thanks!
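For concreteness, the definition can be evaluated with exact rational arithmetic; the helper below is my own sketch (not from the post) and returns `None` when the sum of reciprocals is zero, in which case the harmonic mean is undefined:

```python
from fractions import Fraction

def harmonic_mean(xs):
    """Exact harmonic mean n / (1/x_1 + ... + 1/x_n), or None when
    the reciprocal sum is zero and the mean is undefined."""
    denom = sum(Fraction(1, x) for x in xs)
    if denom == 0:
        return None
    return Fraction(len(xs)) / denom

print(harmonic_mean([2, 3, 6]))  # -> 3
print(harmonic_mean([1, -1]))    # -> None (the reciprocals cancel)
```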



Codeforces (c) Copyright 2010-2020 Mike Mirzayanov


What is the source of this problem? Are there any other constraints?

This question came as a random thought as I was trying to create more questions for my school's judge. It is purely a matter of interest ^^

Wouldn't it just be the maximum number in the array?

OK. That explains my weird feeling reading the problem. With no further constraints, computing the harmonic mean of some subarrays can involve division by zero because of negative inputs, for example with `1 -1`. Even if the input guarantees that this cannot happen, or it is clarified that such subarrays don't count, the correct output can be large when the sum in the denominator only approximately cancels out: the correct output for `8 -9 -9 10` is 1440. If there is a particularly efficient way to search for such near-cancellations, it is at least not obvious to me.

If instead all input is positive, then (unless you want the problem to be 'print the maximum of the input' in disguise) some constraint on the size of the subarrays to be considered seems appropriate.
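Both claims are easy to check by brute force over all subarrays with exact rationals; a quadratic sketch (the function name is mine):

```python
from fractions import Fraction

def max_harmonic_mean_bruteforce(a):
    """Try every subarray a[l..r]; subarrays whose reciprocal sum is
    zero have no harmonic mean and are skipped."""
    best = None
    for l in range(len(a)):
        s = Fraction(0)
        for r in range(l, len(a)):
            s += Fraction(1, a[r])
            if s != 0:
                hm = Fraction(r - l + 1) / s
                if best is None or hm > best:
                    best = hm
    return best

print(max_harmonic_mean_bruteforce([8, -9, -9, 10]))  # -> 1440
print(max_harmonic_mean_bruteforce([1, 2, 3]))        # -> 3, the maximum element
```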

Thank you! I honestly had not realised at first and you pointed out many insightful observations. That resolves the problem :)

There is a stupid systematic way in $$$O(N \log^3 N)$$$. Assume without loss of generality that there is at least one positive element.

Let $$$a_i = \frac 1{x_i}$$$ and $$$p_i$$$ be $$$\sum_{j = 1}^i a_j$$$ (prefix sum array of $$$a_i$$$).

We wish to pick two endpoints $$$l$$$ and $$$r$$$ such that $$$\frac{r - l + 1}{\sum_{i=l}^r \frac 1{x_i}}$$$ is maximized.

Let $$$l_0 = l - 1$$$, so the quantity above becomes $$$\frac{r - l_0}{p_r - p_{l_0}}$$$.

We wish to maximize this value. We note for later that $$$p_r - p_{l_0}$$$ must be positive. We binary search on $$$k$$$ and count how many pairs $$$0 \leq l_0 < r \leq N$$$ satisfy $$$\frac{r - l_0}{p_r - p_{l_0}} \geq k$$$, i.e. $$$r - p_r k \geq l_0 - p_{l_0} k$$$.

Let $$$b_i = i - p_ik$$$, then we want to check how many pairs $$$0 \leq l_0 < r \leq N$$$ satisfy $$$b_r \geq b_{l_0}$$$ AND $$$p_r > p_{l_0}$$$. This is a 3D partial order problem which can be solved using CDQ in $$$O(N \log^2 N)$$$. (Tutorial)

For each iteration of the binary search, if no pairs $$$(l_0, r)$$$ are found, then $$$k$$$ is too high; otherwise, the answer is at least that value of $$$k$$$.
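The search can be sketched end to end. In the sketch below (names mine), the quadratic feasibility scan stands in for the $$$O(N \log^2 N)$$$ CDQ counting step, and plain doubles are used even though precision is a real concern:

```python
def max_harmonic_mean(a, iters=100):
    """Binary search on the answer k, assuming at least one positive
    element.  feasible(k) asks whether some pair l0 < r has
    b_r >= b_{l0} and p_r > p_{l0}, where b_i = i - p_i * k."""
    n = len(a)
    p = [0.0] * (n + 1)                 # prefix sums of reciprocals
    for i, x in enumerate(a, 1):
        p[i] = p[i - 1] + 1.0 / x

    def feasible(k):
        # quadratic stand-in for the CDQ divide-and-conquer count
        b = [i - p[i] * k for i in range(n + 1)]
        return any(b[r] >= b[l0] and p[r] > p[l0]
                   for l0 in range(n + 1)
                   for r in range(l0 + 1, n + 1))

    lo, hi = 0.0, 1e9                   # assumes the answer fits in [lo, hi)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if feasible(mid):
            lo = mid                    # some subarray reaches mean >= mid
        else:
            hi = mid                    # mid is too high
    return lo

print(round(max_harmonic_mean([8, -9, -9, 10])))  # -> 1440
```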

Floating point accuracy is likely a big issue. The case where all elements are negative can be handled in a similar way.

Binary searching over $$$k$$$ probably requires some care, especially when you seem to claim $$$O(\log n)$$$ iterations as opposed to something like $$$O(\log \frac{1}{\varepsilon})$$$ iterations. The former is certainly achievable, for example by modifying your CDQ queries to provide a random sample and using that $$$(l_0, r)$$$ pair's highest possible value of $$$k$$$ as your next pivot, but isn't entirely obvious.

Floating point accuracy is a very big issue. Unless the bounds are very small, I would prefer to do all arithmetic for this problem in exact arithmetic rather than risk running headfirst toward the limits of what floating-point types the compiler might provide. Obviously this makes arithmetic operations rather expensive, but this seems to be a necessary price to pay. I was already able to craft a testcase that will run afoul of the precision available in a double with $$$n = 8$$$ and $$$| a_i | \leq 1000$$$:

When `a = [1, 1, -979, -846, -729, 761, 884, 885]`, $$$p_8 = 2 + \frac{1}{6656823096297660} > p_2 = 2$$$, but when computed in doubles in the obvious way on my machine, $$$p_8 = p_2$$$.

Come to think of it, for a fixed $$$l_0$$$, the largest harmonic mean is achieved by minimizing the slope $$$\frac{p_r - p_{l_0}}{r - l_0}$$$ among points $$$r > l_0$$$ with $$$p_r > p_{l_0}$$$, which is a convex-hull query, and can be answered in $$$O((\log n)^2)$$$ arithmetic operations per query by maintaining a segment tree of dynamic (lower) convex hulls and adding points $$$(x, p_x)$$$ in decreasing order of $$$p_x$$$.