The time complexity of an algorithm is defined as the amount of time it takes to run, expressed as a function of the length of the input.
If I have a simple for-loop function in C that runs for a given input n, then:
the length of the input is log n (the number of bits required to represent n), and
since the input length is log n and the loop runs n times, the code runs exponentially many times in its input length (2^(log n) = n).
C code:
int forfunction(unsigned int n) {
    unsigned int i = 0;
    for (; i < n; i++) {
        // do something ordinary
    }
    return 0;
}
This for loop is just an example.
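To make the claim concrete, here is a small sketch (the bit_length helper and the specific numbers are mine, purely illustrative): it counts the bits needed to store n and then runs the same kind of loop, so an input of b bits can cause up to 2^b - 1 iterations.

#include <stdio.h>

/* Illustrative helper: number of bits needed to represent n. */
static unsigned int bit_length(unsigned int n) {
    unsigned int bits = 0;
    while (n > 0) {
        bits++;
        n >>= 1;
    }
    return bits;
}

int main(void) {
    unsigned int n = 1000000;      /* needs about 20 bits */
    unsigned long long iterations = 0;

    for (unsigned int i = 0; i < n; i++) {
        iterations++;              /* stand-in for "something ordinary" */
    }

    /* The input length is ~log2(n) bits, yet the loop ran n = 2^(input length) times. */
    printf("bits in n: %u, iterations: %llu\n", bit_length(n), iterations);
    return 0;
}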
But we will never hear anyone say that such a for-loop program is exponential in its input (the bits required to store n). Why is that? The only difference I see is that this is a program, while time complexity is defined for an algorithm. But even so, why is this not taken into account when we want to do a rough time-complexity analysis of a program?
EDIT: Further clarification: I find it reasonable to claim it is exponential in its input (I might be wrong =) ). If so, and a simple for loop is exponential, then what about other hard problems? Clearly this for loop is not a worry for anyone today. Why not? Why does it not have (many) real-world implications compared to other hard problems (EXP, NP-hard, etc.)? Note: I am using "hard" the way it is used to define NP-hard problems.
If the input really is just the single number n, then yes, that statement would be correct. But normally your loop does something useful, like process some data of length n, doing perhaps log n work per iteration (I'm just guessing what you meant...); with the loop running n times, that's O(n log n), which is not exponential in n. Even if it's O(n²), that's still not exponential. Exponential running time would be O(something ^ (something containing n)), and is actually quite rare [citation needed] without recursion, a stack, or similar.

Doubling n only makes the input size one bit bigger, so n is exponential, not linear, in the size of the input; that's the confusion the OP is having.
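The distinction in the answer can be sketched like this (the function names are mine, purely illustrative): when the input is actual data of n elements, a single pass over it is linear in the input size; when the input is only the number n itself, the identical loop shape runs roughly 2^(number of input bits) times.

#include <stddef.h>

/* Input is an array of n elements: the input length grows with n,
   so one pass over it is linear in the input size. */
long sum_array(const int *data, size_t n) {
    long total = 0;
    for (size_t i = 0; i < n; i++) {
        total += data[i];
    }
    return total;
}

/* Input is only the number n itself (about log2(n) bits), yet the loop
   body executes n times, i.e. roughly 2^(number of input bits) times. */
unsigned long count_to(unsigned long n) {
    unsigned long steps = 0;
    for (unsigned long i = 0; i < n; i++) {
        steps++;
    }
    return steps;
}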