MMA_199's blog

By MMA_199, history, 3 years ago, In English

Hi Codeforces

One of my Codeforces friends and I were solving a binary search problem (Multiplication Table), and then we thought of the problem stated below.

Suppose we have a graph in the form of an adjacency list, where each list is sorted in non-decreasing order (positive numbers only, with no repetitions). We are given q queries; each query names some of the adjacency lists. Imagine the elements of the named lists merged and arranged in non-decreasing order: we have to find the Kth element of this merged arrangement.

Each list is sorted. Each query consists of 2 lines:

The 1st line of the query contains the list numbers that will be merged, and the 2nd line contains an integer K (we want the element in the Kth position after the lists are merged and sorted).

Consider the example given below.

Example

  1. {2,4,7,9}

  2. {6,11,13,18}

  3. {1,3,5,8,14}

  4. {12,26}

  5. {10,24}

// Here 2 queries are given.

Q=2

3 4 5

5

1 2 3

11

Answer for 1st query- 10 (the merged list is {1,3,5,8,10,12,14,24,26}, so the 5th element is 10)

Answer for 2nd query- 13

It's easy to implement the brute-force approach, but I was curious whether there is a better one, considering the number of queries can be large (up to 10^4).

I would like to know your approach for this problem.

Thanks and have a good day:)


»
3 years ago, # |

If the table is N x M, then brute force will take O(QNM), because for each query you loop through the whole table. If you binary search on the answer for each query, this can be reduced to O(QN log(maxval) log(M)): for each candidate answer produced by the binary search, loop through the rows and binary search within each row to count how many elements do not exceed it.

These are the two easiest ways I could think of. Though, there probably is an easier and faster way.

  • »
    »
    3 years ago, # ^ |

    This is actually alright, if I understood it correctly.

    For each query over $$$k$$$ lists, the algorithm takes $$$\mathcal{O}(k\log (M)\log(max))$$$.

    Since the total input size over all $$$Q$$$ queries is $$$\sum k$$$, the algorithm runs in $$$\mathcal{O}\left(\sum k \log(M)\log(max)\right)$$$ overall, which should be efficient enough in most cases.

»
3 years ago, # |

The best thing I came up with is to solve each query in $$$O(m\times \log^2 {n})$$$ where $$$m$$$ is the number of lists to merge. Suppose you want to check whether the answer is $$$X$$$: loop over all the chosen lists and, since they are sorted, use lower_bound to count the elements less than $$$X$$$; finally, compare this count to $$$K$$$ and narrow the binary search range accordingly.