How to find the leftmost 1-bit in C++?

In other words, I want to find the biggest k such that 2^k <= N for a given number N.

For example, if N = 10 then k = 3.

I want an O(1) time and memory solution, and one without built-in functions that may not compile on ALL C++ compilers.

Thank you in advance.

Try taking the base-2 logarithm :-)

You are wrong. Log(2,n) is the minimal x such that 2^x >= n. We can easily see that the maximal x such that 2^x <= n is Log(2,n) - 1 if n is not a power of two, and Log(2,n) otherwise. You can see the code to understand better: http://pastie.org/8617128

Since log() takes only doubles, its range and precision are heavily limited.

There is an

`Integer.highestOneBit(int i)`

method in Java that returns an int with only the leftmost set bit of i kept. As you can see from its implementation, its running time actually depends on the size of int. The same way you can implement functions like

`numberOfLeadingZeros`

or `bitCount`

. Look at the source of `java.lang.Integer`

for more details.

What does ">>>" mean?

http://docs.oracle.com/javase/tutorial/java/nutsandbolts/op3.html

Interesting; does C++ have an unsigned bit shift?

`((int)x) >> 5`

is a signed shift;

`((unsigned)x) >> 5`

is an unsigned shift.

C. O.

It is a Christmas tree.

I have been where you are once! The best I could think of is O(log(number of bits)); for int that's O(log 32): simply binary-search for that bit.

But in practice this tends to be slower than the linear search, since the constant factor is quite big! Otherwise, if you are interested in low-level solutions, there are special instructions that can get you what you want.

See here

I don't think you can do it in

O(1): the computer does not recognize the "numbering" of bits; that's just the way we imagine it. Why do you need it? Or better: do you really need this? Wouldn't it be better to have a way with a small constant?

But what if we find a way to mask out all bits except the leftmost? Then we could use a lookup array of 32 elements to return the number of that bit, and it would be O(1).

Though I'm not sure I know how to perform such masking.

That depends on 32, the integer size. Once again, dependent on the word size. That's what the thread author wants, and that's what I'm claiming is impossible.

Since the Intel 80386, the bsr instruction does it.

If N = 0 then the result is undefined. You can read more about gcc builtins here.

To all commenters:

No offence, but wasn't I clear when I said that I want it:

1- O(1) time and memory

2- without built-in functions that may not compile on all C++ compilers

And the log function is too slow.

Anyway, thank you all for trying.

To be absolutely clear: if you use an ordinary binary search, it will work in O(1), since our integer size is constant. Just use it!

It is, however, still unclear whether you want a practical or a theoretical solution.

If you need this in practice, the stackoverflow answer above perhaps covers it entirely with its multiple links. One can say the currently popular word sizes are constants, so many of the presented solutions can be considered as taking O(1) time and memory. If you insist on not using the processor instruction, something along the lines of these two answers will perhaps be the fastest. Also note that if your number is guaranteed to be a power of two, you can find the logarithm in O(1) time and O(bits) memory with a precalculated array.

If you are interested in the theoretical question (that is, finding the most significant bit in O(1) when the size of the word tends to infinity, but word operations still take O(1)), then the requirement to use C++ looks strange. Perhaps it is then better to state explicitly which operations are available (for example, only arithmetic operations, bitwise logical operations and bitwise shifts, but not bit-scan-reverse or any other modern processor instructions). At a glance, it looks to me like there is no O(1) solution, but I have no proof of that so far. And anyway, the existence of such a solution would not guarantee it to be feasible on modern hardware.

That's a good question indeed; actually I want both (practical and theoretical) =D.

After reading all the comments and googling a lot, I agree with you that there is no theoretical O(1) time and memory solution (or at least none is known yet).

But there are still two good practical choices. The first is using built-in functions; actually I didn't want to use them because I'm not familiar with all C++ compilers, so I wanted standard methods that compile successfully on all of them. The second is using the idea in the link you gave me.

Since there are two good choices, I wrote simple C++ code to test whether the built-in functions or the idea in the previous link is faster, and it turned out that the built-in functions are faster.

So I decided to do the following during contests:

1- I will try to use the built-in functions.

2- If I am not familiar with the C++ compiler used in the contest and I get a compile error when using the built-in functions, I will use the second choice.

Thank you all very much for helping me, and sorry if I came across as rude.

If you want a theoretical answer: everything that relies on an int having 32 bits is O(1), because 32 is O(1). If the range of integers your type can represent is unbounded, it cannot be O(1). So it's easy to determine whether you have an O(1) algorithm or not.

You can calculate this function in advance for all integers from 0 to 2^16 and save the values in an array. Then, for any integer x, you can answer with at most two table lookups. It is not O(1) memory, but in most cases you can afford to use an additional 64 kilobytes.

I think you are trying to find the rightmost bit.

Awesome!

It's even a little bit faster than the built-in functions, but I can't understand how it works. Since it's the first time I've seen the "union" construct, I tried to read tutorials about "union", but I still don't understand how your code works!

It converts the number to

`double`

and then reads the bits of the number's representation in IEEE 754. In fact, the exponent is read.

I just want to clarify one point: is it always the case that b[1] represents the 32 bits containing the sign bit and the 11-bit exponent?

The C++ standard doesn't specify the binary representation of integral or floating-point types, nor type sizes. That means

`b[1]`

is not necessarily 32 bits, and it doesn't necessarily share any bits with `a`

. Also, according to the C++ standard, writing to one member of a union and reading from another yields an unspecified value. However, in practice all compilers allow type punning through unions, and this code will work on most platforms that use the little-endian format and support IEEE 754. One can cover a wider range of platforms by using constants from std::numeric_limits and by replacing `int b[2]`

with `int64_t b`

to fix the endianness and int-size issues.

Thanks for your quick reply! By the way, I found this helpful too: http://graphics.stanford.edu/~seander/bithacks.html#IntegerLogObvious

Speed test: http://ideone.com/KdjzTB

my output in the CF custom test:

NOBODY CARES

You're wrong. I do.

Don't feed.

As mentioned already, for word size w, the naive approach takes O(w) time. Binary search combined with bit shifting and masking takes O(log w). It was shown in the "fusion tree" paper by Fredman and Willard that you can in fact do it in O(1) time, if you allow yourself to use multiplication. This is described in my lecture here: https://www.youtube.com/watch?v=3_o0-zPRQqw&list=PL2SOU6wwxB0uP4rJgf5ayhHWgw7akUWSf&index=2

__lg( a ^ (a & (a - 1)) )