randomx_123's blog

By randomx_123, history, 3 years ago, In English

So I gave the on-campus Google hiring test yesterday, and when I saw the 2 problems I had to solve in an hour, I was really excited since they looked doable. However, things took a turn when I actually started to think about optimizations. Anyway, here goes:

Problem 1

We have an array A of N elements. We will be asked Q queries. Each query is a single integer X, and we have to tell whether there exists an index i in the array such that the bitwise AND of A[i] and X equals 0. If such an index exists, print YES; otherwise, print NO.

Constraints: 1 <= N <= 1e5, 1 <= Q <= 1e5, 1 <= A[i] <= 1e5

Problem 2

We have a binary string S of length N, and we have to find the number of substrings (that is, contiguous ranges i to j) in which the frequency of 1 is strictly greater than the frequency of 0.

Constraints: 1 <= N <= 1e6

I have spent a lot of time on them but could not come up with an optimal approach for either. I realize that the 2nd problem can be solved in O(NlogN) in a fashion similar to how we solve LIS in NlogN time, but other than that, I am clueless about both problems. Any help will be appreciated, thanks!

» 3 years ago, # | +76

A few hints for problem 2 in your post:

Hint 1
Hint Answer
Hint 2
Hint Answer
Hint 3
Hint Answer
• » » 3 years ago, # ^ | +3

I used a segment tree to find the number of prefixes with a sum strictly less than the current one.

• » » » 3 years ago, # ^ | 0

      Yeah, that's the way we do LIS in NlogN. But is there any simpler method?

• » » » » 3 years ago, # ^ | 0

I simply made an array of size 2*N: the first N indices store the frequencies of negative prefix sums and the next N store the positive ones.

Then at each index I called query(0, N+cur-1), where cur is the current prefix sum, and then added +1 at the current position.

I could not think of anything else during the challenge.
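For reference, a minimal Fenwick-tree sketch of that offset idea (not the commenter's actual code; the function name and the tiny "110" test are made up for illustration):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Fenwick tree indexed by prefix sums shifted by +n, so negative sums fit too.
struct Fenwick {
    vector<long long> t;
    Fenwick(int n) : t(n + 1, 0) {}
    void update(int i, long long v) {        // point update at position i
        for (++i; i < (int)t.size(); i += i & -i) t[i] += v;
    }
    long long query(int i) {                 // sum of positions [0, i]
        long long s = 0;
        for (++i; i > 0; i -= i & -i) s += t[i];
        return s;
    }
};

long long countGoodSubstrings(const string& s) {
    int n = s.size();
    Fenwick bit(2 * n + 1);                  // sums range over [-n, n] -> [0, 2n] after the +n offset
    long long ans = 0, pref = 0;
    bit.update(n, 1);                        // the empty prefix has sum 0
    for (char c : s) {
        pref += (c == '1' ? 1 : -1);         // treat '0' as -1
        ans += bit.query((int)pref + n - 1); // earlier prefixes with a strictly smaller sum
        bit.update((int)pref + n, 1);
    }
    return ans;
}

int main() { cout << countGoodSubstrings("110") << "\n"; }  // prints 4: "1", "1", "11", "110"
```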

• » » » » » 3 years ago, # ^ | 0

Can you please share your code if you don't mind? I need to know how exactly you did it. Sorry, sir, but I am a noob here.

• » » » » 3 years ago, # ^ | +11

        Could you import ordered set on that compiler?

• » » » » 3 years ago, # ^ | +8

I think socho has explained this part.

You have 2 indices l and r, and you have to find the number of ordered pairs (l, r) such that l < r and a[l] < a[r].

This can be reduced to the well-known problem of counting inversions:

Total ways = (n choose 2) - ways such that (l < r and a[l] >= a[r]).

You can count the latter using merge sort.
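A rough merge-sort sketch of that subtract-from-the-total idea (the names and the tiny prefix-sum example are mine, just to illustrate the counting step):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Counts pairs (l, r) with l < r and a[l] >= a[r] while merge-sorting a[lo, hi).
long long countBadPairs(vector<long long>& a, int lo, int hi) {
    if (hi - lo <= 1) return 0;
    int mid = (lo + hi) / 2;
    long long cnt = countBadPairs(a, lo, mid) + countBadPairs(a, mid, hi);
    vector<long long> merged;
    merged.reserve(hi - lo);
    int i = lo, j = mid;
    while (i < mid && j < hi) {
        if (a[i] >= a[j]) {
            cnt += mid - i;                  // every remaining left element is >= a[j]
            merged.push_back(a[j++]);
        } else {
            merged.push_back(a[i++]);
        }
    }
    while (i < mid) merged.push_back(a[i++]);
    while (j < hi) merged.push_back(a[j++]);
    copy(merged.begin(), merged.end(), a.begin() + lo);
    return cnt;
}

int main() {
    // Prefix sums of "110" with 0 treated as -1: {0, 1, 2, 1}
    vector<long long> p = {0, 1, 2, 1};
    long long m = p.size();
    long long bad = countBadPairs(p, 0, (int)p.size());
    cout << m * (m - 1) / 2 - bad << "\n";   // pairs with p[l] < p[r]; prints 4
}
```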

• » » » » 3 years ago, # ^ | +3

You can count inversions using a policy-based data structure (ordered set) in O(n log n).

• » » » » 3 years ago, # ^ | +3
1. Simplest and shortest of all: use pbds (policy-based data structures) to find how many smaller values there are.
2. Compress the prefix array, since negative values can occur, and use a BIT (binary indexed tree) to find how many smaller values there are.
3. Use merge sort. During the merge step we already know how many values in the right half are smaller than the current value in the left half; use that to count how many smaller values there are.

You can find detailed explanations here.

• » » » » 3 years ago, # ^ | +14

Yes, there is. Each next prefix sum increases or decreases by exactly 1, so you can take the answer for the previous element and add/subtract the frequency of a specific value. This can be done in O(n).
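A minimal sketch of that O(n) patching idea (variable names and the example are mine; it assumes the usual 0 -> -1 prefix-sum reduction from the hints above):

```cpp
#include <bits/stdc++.h>
using namespace std;

// O(n): the prefix sum changes by +-1 at each step, so the number of earlier
// prefixes strictly below the current one can be patched in O(1) per step.
long long countGoodSubstringsLinear(const string& s) {
    int n = s.size();
    vector<long long> cnt(2 * n + 1, 0);     // cnt[v + n] = how many prefixes had sum v
    long long pref = 0, below = 0, ans = 0;
    cnt[n] = 1;                              // empty prefix, sum 0
    for (char c : s) {
        if (c == '1') {
            below += cnt[pref + n];          // sums equal to the old pref fall strictly below pref+1
            ++pref;
        } else {
            --pref;
            below -= cnt[pref + n];          // sums equal to the new pref are no longer strictly below it
        }
        ans += below;                        // good substrings ending at this position
        ++cnt[pref + n];
    }
    return ans;
}

int main() { cout << countGoodSubstringsLinear("110") << "\n"; }  // prints 4
```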

• » » 3 years ago, # ^ | +5

I did the exact same thing, but since the prefix sums are not increasing, how do we find the number of positive-sum subarrays in less than O(n^2)? Edit: thanks for taking the time to explain, really appreciate it.

• » » 3 years ago, # ^ | 0

Will it overflow when pre[i] goes beyond 1e9? How do we deal with that in the Fenwick tree?

    Spoiler
• » » » 3 years ago, # ^ | 0
      Will it overflow?
• » » » » 3 years ago, # ^ | 0

Sir, in my implementation I have used a prefix sum. So if all elements are 1, then the sum of 1e6 numbers is nothing but the sum of the first 1e6 natural numbers, and it will overflow. Am I doing my implementation correctly? What am I missing here, sir?

• » » » » » 3 years ago, # ^ | 0

The sum of all elements in your modified (replace 0 with -1) array is at most $$$N$$$. You don't need a prefix sum over your prefix-sum array, which is what you seem to be doing.

• » » 3 years ago, # ^ | 0

Continuing after Hint 1: I think we do not have to count inversions. We could just use the two-pointer method to calculate the number of subarrays with sum greater than or equal to X.

» 3 years ago, # | +62

Problem 1: a[i] & x == 0 iff a[i] is a submask of ~x. Rephrasing the problem: you're given a mask and you want to know whether any of its submasks is present in the array. Use sum-over-submasks DP to pre-compute the answer for all ~x values at once and then answer each query in $$$\mathcal{O}(1)$$$ by accessing this pre-computed array.
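A short sketch of that sum-over-submasks precomputation (the sample array and queries are made up; it assumes A[i] and the relevant bits of X fit in 17 bits, since A[i] <= 1e5 < 2^17):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    const int B = 17, FULL = (1 << B) - 1;   // A[i] <= 1e5 < 2^17
    vector<int> a = {5, 12, 7};              // sample array, not from the original test
    vector<char> good(1 << B, 0);
    for (int v : a) good[v] = 1;             // good[m] starts as "m itself is in the array"
    // SOS DP: after this, good[m] == 1 iff some submask of m appears in the array.
    for (int b = 0; b < B; ++b)
        for (int m = 0; m <= FULL; ++m)
            if (m >> b & 1) good[m] |= good[m ^ (1 << b)];
    // Each query X: does some a[i] satisfy a[i] & X == 0, i.e. is a submask of ~X?
    for (int x : {5, 3, 31})
        cout << x << ": " << (good[~x & FULL] ? "YES" : "NO") << "\n";  // NO, YES, NO
}
```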

• » » 3 years ago, # ^ | +8

Another approach would be to build a dp where dp(i) denotes the smallest element in the set (of A) such that i & dp(i) = 0, or -1 if no such element exists.
Updates can be done as dp[i] = dp[i ^ 2^j], where j is the highest set bit in i. The base case is that we have to find dp(i) separately for all i which are powers of 2, which can be done easily.

• » » 3 years ago, # ^ | +19
    Implementation
» 3 years ago, # | -54

Both the problems are well known :)

» 3 years ago, # | -12

Problem 1 can be solved using a trie.

• » » 3 years ago, # ^ | 0

Can you explain more? Split by every bit? And then answer each query in O(32)?

• » » 3 years ago, # ^ | +30

Not really. You could have solved this using a trie if the operation were bitwise XOR instead of bitwise AND, because in the case of AND, when your current bit is 0, you can go either way in the trie (it doesn't really matter), so you might end up traversing the whole trie for each query.

The key observation to solve this problem is that A[i] <= 1e5; then do something like sum-over-subsets DP to preprocess the answer for every possible query.

• » » 3 years ago, # ^ | 0

    In reasonable time ~ n•log(n)? Doubt it. It’s not Xor.

» 3 years ago, # | +1

Off topic:

I have seen another set in which the first question was a significantly easier stack question and the second question was the same. So during selection, is some kind of normalization done? Or is it the case that people from the same college get the same set? (Which would be kind of pointless.)

• » » 3 years ago, # ^ | 0

I don't know whether any normalization is done or not, but everybody gets a different set.

» 3 years ago, # | +8

Problem 1 is the same as this one: https://codeforces.com/contest/165/problem/E. The only difference is that the array itself forms the queries.

» 3 years ago, # | -18

The first one is a basic trie question: you need to find the prefix of your choice. If the number has a 1, look for a 0; else if it's a 0, then look for a 1 or a 0.

The second one is also a standard question, inversion count: we can make a prefix-sum array of the string by converting the 0's to -1 and keeping 1 as 1. Then it can be solved using merge sort, a BIT, an AVL tree, an ordered set, and I don't know if there are more ways.

• » » 3 years ago, # ^ | 0

Won't the time complexity for the first one be O(n^2) in the worst case in your solution?

• » » 3 years ago, # ^ | +9

Wouldn't looking for a 1 or 0 in the trie increase the time complexity? Because now, instead of checking a single path in the trie, in the worst case (X is all zeroes) you would have to traverse the whole trie, which takes O(n log(A)) per query.

» 3 years ago, # | 0

I have a very easy approach for problem 1 which no one has mentioned.

We'll create a vector of sets in which v[i] contains the set of possible elements if we consider only the rightmost i bits of every element.

Now for X, let j be the index of the most significant bit of X (from the right). We will query v[j] for the complement of X, and if it exists the answer is yes, otherwise no.

Proof: if X&Y==0, then after removing those bits which are not present in X, Y will become the complement of X. If Y<X then we'll pad Y with zeros.

I think this approach will work; feel free to point out any mistakes you find.

» 3 years ago, # | 0

I thought of a funny solution to problem 2 and it ended up being O(N) after some optimization (which I think is the complexity you were aiming for).

I'll define 0 to be -1 instead, so it makes the calculations easier. Make an array V of size 2*n+1, where we store the number of sequences that end at the index we're iterating over and have a certain sum. Initially we assume the point in that array that represents sum 0 to be n. Let's say we have filled this array from the start of the string up to i-1. Consider these two cases:

If S[i] = 1: it's like the "zero point" of the sequences from j to i-1 "went down", because we will be adding 1 to all other suffixes that come before it. In addition, we update V[newzeropoint+1].

If S[i] = 0: like the previous case, it's like the "zero point" went up, and in addition to that V[newzeropoint+1]++.

At any given index of the initial string, the number of substrings that end at that point is the sum of all V[i] such that i > zeropoint. We could do that in O(log n) with a Fenwick tree (aka BIT), but we just need to maintain a variable storing this sum and update it as we move the zero point up and down, for an O(n) solution.

» 3 years ago, # | +3

Approach for the 2nd question:

1. Replace each 0 with -1.

2. Calculate the prefix sum of the newly formed array (call this array pref).

3. Now we have to find the number of sub-arrays whose sum is > 0.

4. Let's say we have to find the sub-arrays ending at index j.

5. Then sub-array [i, j] has a positive sum iff pref[j] - pref[i-1] > 0, i.e. pref[j] > pref[i-1].

6. It means we have to find the number of elements pref[i] which are smaller than pref[j] such that i < j.

7. For this (in Python) we use SortedList, a special type of container which keeps the list sorted, and by binary searching we can get the number of elements < pref[j].

8. As SortedList is a special type of container it won't be allowed in the online rounds, hence here is the link to its source code: https://ideone.com/RyMNKu

9. We traverse from left to right, keep pushing elements into the SortedList, and add the answer for each index to our final answer.

Link to the code I have solved using SortedList: https://codeforces.com/contest/1536/submission/122169458

I have no idea how to solve this in C++, as set/multiset returns an iterator; how do I convert it to an index?
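One possible C++ counterpart of the SortedList idea, assuming the GCC policy-based ordered set is available: its order_of_key gives the rank (index) directly, which is exactly what set/multiset cannot do. The pair trick below is only there to allow duplicate prefix sums; the function name and the "110" test are illustrative:

```cpp
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace std;
using namespace __gnu_pbds;

// Ordered set of (prefix sum, position) pairs; order_of_key(k) = number of stored keys < k.
typedef tree<pair<long long, int>, null_type, less<pair<long long, int>>,
             rb_tree_tag, tree_order_statistics_node_update> ordered_set;

long long countGoodSubstrings(const string& s) {
    ordered_set seen;
    long long pref = 0, ans = 0;
    seen.insert({0, -1});                            // empty prefix
    for (int i = 0; i < (int)s.size(); ++i) {
        pref += (s[i] == '1' ? 1 : -1);
        ans += seen.order_of_key({pref, INT_MIN});   // earlier prefixes with a strictly smaller sum
        seen.insert({pref, i});
    }
    return ans;
}

int main() { cout << countGoodSubstrings("110") << "\n"; }  // prints 4
```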

» 3 years ago, # | +3

problem 2

Brute force (c++)
Pbds (c++)
Fenwick tree | BIT (c++)
» 3 years ago, # | 0

Does anybody have an idea how to solve Problem 1? (Considering the trie solution is not optimal in the worst case.)

The best I could think of is maintaining a set for each i from 1 to 32: if a number has a set bit at position i, we add it to that set. Now to answer query X, we check the set bit positions and take the intersection of all those sets. But again, taking intersections might give TLE. Can this be improved, or is there any other way to solve this?

• » » 3 years ago, # ^ | +3

There is a concept known as SOS DP, which is helpful for the first question.

• » » » 3 years ago, # ^ | 0

But won't it be of exponential complexity? It is given that Q, X <= 7x10^3. Am I missing something obvious?

• » » » » 3 years ago, # ^ | +3

a_i <= 10^5, which is something like 20 bits, so the complexity is going to be O(20 * 2^20 + N) for precomputation and O(Q) for answering the queries.

» 3 years ago, # | 0

I too got the same set.

I couldn't do the 2nd one, but I did the 1st one. For the first one I created an array of powers of 2 of length 20, and for every element of this array I checked whether it is in the keys array or not. Then for each query I traversed the powers-of-two array and checked if that element exists in the keys array and its AND with X is 0; if so, yes, otherwise no.

PS: I had only done the 1st one partially... sorry for the mistake.

» 3 years ago, # | -6

I think there exists an O(n) solution for the 2nd question: we convert all 0s to -1 and want to find all l < r such that pref[r] > pref[l].

So the optimization basically boils down to: given an array arr, can you count all l < r such that arr[l] < arr[r] in O(n), if abs(arr[i + 1] - arr[i]) == 1? This seems possible if we store just two arrays: what the current prefix is and what we have not updated yet.

» 3 years ago, # | +3

For the first, read about SOS DP

» 19 months ago, # | -10

Here and here are the solutions for the first one:

» 10 months ago, # | -50

The first problem can be done with a trie data structure.

» 10 months ago, # | 0

We can do the second problem by using lazy propagation on an array. In the second problem we have to form an array which will initially have a[i] = (count of prefixes having cnt(1)-cnt(0) >= i). At each index we have to find the value of a[(count(1)-count(0) + 1 up to i)]. After finding this value, we have to decrement a[min(count(1)-count(0))] up to a[(count(1)-count(0) up to i)], and since the difference between counts changes by 1 we can apply lazy propagation. Note: count(1)-count(0) can be negative, and we can handle it with a sufficient positive offset. Time and space: O(n).

The first problem can be done with graphs. We take the values a[i] as nodes and add a directed edge between two nodes such that the number of zeroes in the binary representation of the parent is 1 more than in the child node, and the set of positions having 0 in the child node is a subset of those in the parent node. Then we create a flag array for the occurrence of each node value, and using BFS we assign the flag of a child to be true if it is already true or if any one of its parents is true. Complexity O(logn).

Let me know if there is any doubt.

» 10 months ago, # | -8

Since I found a coincidence with this blog, I thought of sharing it. Recently I qualified for Round 2 of Google Girl Hackathon 2023, and to qualify for the further rounds I had to appear for a Google Online Challenge on 25th June '23. There I got two questions, and the first question of my GOC was exactly the second problem of this blog. I solved it using the merge sort (NlogN) approach and it passed all the test cases. Waiting for the results now :)