### Ygor_Ribeiro's blog

By Ygor_Ribeiro, history, 7 weeks ago

I have the same question, which can be read at this link: https://cs.stackexchange.com/questions/111227/still-not-understanding-why-the-knapsack-problem-does-not-have-a-polynomial-time

Could someone explain to me why we don't apply the same logic to the value of n?

I would also like to understand the relationship between complexity in terms of the values, which for the knapsack problem is O(nW), and complexity in terms of the input size, which in this case is O(n log W). Many forums discuss this, and I still don't understand how the complexity in the input size turns into the complexity in the values.

Another question is about sorting methods: the complexity in terms of the values is O(n^2), but the input size would be O(n log MAX). Since the largest value has no bearing on the value-based complexity, is that why nothing changes?

Comments (6)
 » > Could someone explain to me why we don't apply the same logic to the value of n?

The input to the knapsack problem is:

- $N$, $W$
- $V_1, V_2, \ldots, V_N$ (the values)
- $W_1, W_2, \ldots, W_N$ (the weights)

Let's say that all $V_i, W_i \le MAX$. Then the input size is $O(\log{N} + \log{W} + 2N\log{MAX}) = O(\log{W} + 2N\log{MAX})$.

Now, you can double $W$, but the input size is still going to remain essentially the same, as $O(\log{2W} + 2N\log{MAX})$ is the same as $O(\log{W} + 2N\log{MAX})$. But if you double $N$, sure, $\log{N}$ doesn't change much, but the $2N\log{MAX}$ term is going to double, and since that's usually the larger term, you are effectively doubling the input size.

This is the crux of the difference between $N$ and $W$: we can make $W$ huge without changing the input size by much, but we can't make $N$ huge without making the input size huge as well. In practice, the "main parameter", which is usually $N$, tends to be followed by $N$ integers, $N$ bits, or the like. It's that part which comes later that makes $N$ the main parameter of the problem, because the input size is linearly proportional to it. So we tend to just focus on $N$ and not go into the nitty-gritty of what exactly the input size is and what the time complexity is as a function of it. We use $N$ as a proxy for the input size, since the two tend to be strongly correlated. But that won't be the case if the input doesn't consist of $N$ items.

Another relevant example is primality testing. The only input is $N$, and forget $O(N)$: there is a trivial $O(\sqrt{N})$ algorithm to check whether $N$ is prime. It's obviously polynomial in $N$, so it should be in class $P$, right? Then what's all the fuss about AKS? The issue is that the input size isn't linear in $N$; it's actually $\log{N}$. So for an algorithm here to count as "polynomial time", its running time has to be a polynomial function of $\log{N}$, not just of $N$. That's what the AKS algorithm achieves.
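To make this concrete, here is a small Python sketch (the instance and helper names are mine, not from the post): the standard $O(NW)$ knapsack DP plus a rough bit-count of the encoded input, showing that doubling $W$ doubles the DP table while adding only one bit to the input, and the trial-division primality test that is polynomial in $N$ but exponential in the input size $\log{N}$.

```python
import math

def knapsack_max_value(weights, values, W):
    """Classic O(N * W) dynamic program: dp[c] = best value within capacity c."""
    dp = [0] * (W + 1)
    for wt, val in zip(weights, values):
        # iterate capacities downward so each item is used at most once
        for c in range(W, wt - 1, -1):
            dp[c] = max(dp[c], dp[c - wt] + val)
    return dp[W]

def input_size_bits(weights, values, W):
    """Approximate encoded input size: bits of W plus bits of every V_i and W_i."""
    return W.bit_length() + sum(x.bit_length() for x in weights + values)

def is_prime_trial(n):
    """Trial division: O(sqrt(N)) arithmetic ops, i.e. exponential in log N."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

weights, values = [3, 4, 5], [30, 50, 60]
print(knapsack_max_value(weights, values, 8))   # -> 90 (take the weight-3 and weight-5 items)
# Doubling W from 8 to 16 doubles the DP work, yet the input grows by one bit:
print(input_size_bits(weights, values, 8))
print(input_size_bits(weights, values, 16))
print(is_prime_trial(97))                       # -> True
```

Note how `input_size_bits` barely moves when $W$ doubles, which is exactly why $O(NW)$ is only pseudo-polynomial.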
•  » » I think I understand the knapsack part, but about the sorting part: the input size is $O(n \log{MAX})$ anyway, right? Since this larger value makes no difference to the time complexity, which is $O(N^2)$, why don't we consider that increasing this value by 1 bit would also increase the complexity?

I also wanted to know whether I can follow a step-by-step procedure in these scenarios: I calculate the time complexity normally and also calculate the input size, and if some variable from the input size appears (with a log) in the complexity, then can I consider that increasing those parts by 1 bit doubles the time?
•  » » » 7 weeks ago, # ^ | ← Rev. 4 →   I assume you're considering something like insertion sort, which we say has a time complexity of $O(N^2)$.

You are right that the input size is actually not $N$, but rather $N \log{MAX}$. Note, however, that for sorting algorithms, what we count as time complexity is the number of comparisons and the number of assignments, and we assume that each operation of the form "if (A[i] < A[j])" takes a constant amount of time. But this also depends on the size of the elements of $A$. If you take the elements to be around $10^{200}$, then of course comparing two elements takes some time; in particular, it's proportional to $O(\log{MAX})$. So the time complexity is more like $O(N^2 \log{MAX})$. That still isn't quadratic in the input size (which would be $O(N^2 \log^2{MAX})$), but note that it is not more than quadratic in the input size. In practice, we ignore these $\log{MAX}$ terms on both sides (i.e., we drop them from the input size and also assume such operations take constant time).

Regarding your second question: the correct method is to find the time complexity, say a function $T(a, b, c)$ where $a, b, c$ are some parameters. You then find the input size, say a function $S(a, b, c)$, and represent $T$ as a function of $S$; that is, $T(a, b, c) = O(f(S(a, b, c)))$. Then we can say that this algorithm's time complexity is $f(s)$, where $s$ is the input size.

For example, if the input contains an $N \times N$ grid, then $S(N) = N^2$. So even if the running time is $T(N) = O(N^2)$, this is a linear algorithm, because it is linear in the input size: $T(N) = f(S(N))$, where $f(s) = s$.
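As a sketch of what is actually being counted (assuming insertion sort, as above; the test array is made up), here is a version that counts comparisons explicitly. The comparison count is $O(N^2)$ no matter how large the elements are; the hidden $O(\log{MAX})$ cost lives inside each individual comparison.

```python
def insertion_sort_count(a):
    """Insertion sort; returns (sorted copy, number of element comparisons)."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # each "a[j] > key" test is one comparison
            if a[j] > key:
                a[j + 1] = a[j]       # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, comparisons

# The count depends only on N (and the order of the input), not on how many
# bits the elements have; swap in 200-digit integers and the count is the same,
# but each comparison then really costs O(log MAX) machine operations.
sorted_a, c = insertion_sort_count([5, 2, 4, 6, 1, 3])
print(sorted_a)   # -> [1, 2, 3, 4, 5, 6]
print(c)
```

A reverse-sorted input of length $N$ forces the worst case of $N(N-1)/2$ comparisons, which is where the $O(N^2)$ comes from.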
•  » » » » Now I understand the sorting-algorithm case very well. Regarding the second answer, can you show me what these formulas would look like for the knapsack problem? I know the input size $s$ is $n \log{W}$; what function do I apply to it to get $O(nW)$?
•  » » » » » There isn't an easily describable function $f$ that takes $n \log{W}$ to $nW$. But notice that if you take $n = 2$, the input size is $s = 2\log{W}$, so $W = 2^{s/2}$ and the running time is $nW = 2 \cdot 2^{s/2}$; the required function $f(s) = 2^{s/2 + 1}$ is exponential in the input size in this instance. And for every constant $c$ you can find $n$ and $W$ such that $f(s) = s^c = (n \log{W})^c$ is smaller than $nW$ [just take $W = n^{c+1}$ with $n$ large enough]. So we know for sure that this algorithm's running time can't be bounded by a polynomial in the input size: it is "larger" than polynomial time.
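The $n = 2$ case is easy to check numerically. A throwaway sketch (the names `runtime` and `input_size` are just stand-ins for $nW$ and $n \log_2{W}$, not anything from the post):

```python
import math

def runtime(n, W):
    # operation count of the O(nW) knapsack DP
    return n * W

def input_size(n, W):
    # s = n * log2(W): roughly the number of bits needed to write the input
    return n * math.log2(W)

# With n fixed at 2, the input size s grows linearly in log W,
# while the running time nW = 2 * 2^(s/2) grows exponentially in s.
for k in (10, 20, 30):
    W = 2 ** k
    print(input_size(2, W), runtime(2, W))
```

Each step adds a constant amount to $s$ but multiplies the running time by $2^{10}$, which is the "exponential in the input size" behaviour described above.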