There is nothing inherently wrong with either approach; however, the first one is overwhelmingly more common. The reason is that complexity analysis takes place in an asymptotic setting with respect to the size of the input.
Computers are machines that perform meaningless operations on meaningless symbols; it is your job to find meaning in the computation. Therefore,
determining what the size is depends on some agreed-upon definition of what constitutes a reasonable symbolic representation of the input. Formally, you would have to define how the machine that executes your algorithm works in order to talk about complexity at all, but in this context it is sufficient to assume that any representation using a finite set of symbols is acceptable.
There are many ways - all in principle equally valid - to represent the natural numbers. For instance, you might say that $0$ is represented by the string x, $1$ by xx, $2$ by xxx, and so on. In this case the size of the input would be proportional to the numerical value.
However, most of us would agree that representing numbers in some base $b$ - which is certainly acceptable, as it requires only $b$ distinct symbols - is more reasonable. In this case the size of the input is proportional to the number of digits required to write the number, which is itself proportional to the logarithm of its numerical value.
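The gap between the two representations can be made concrete with a short sketch. The function names below (`unary_size`, `base_b_size`) are illustrative, not standard; the point is that the unary size grows linearly with the value while the base-$b$ size grows only logarithmically.

```python
def unary_size(n):
    # Unary scheme from above: 0 -> "x", 1 -> "xx", ...,
    # so the representation of n uses n + 1 symbols.
    return n + 1

def base_b_size(n, b=2):
    # Number of digits of n in base b, i.e. floor(log_b(n)) + 1 for n > 0.
    if n == 0:
        return 1
    digits = 0
    while n > 0:
        n //= b
        digits += 1
    return digits

# For n = 1_000_000 the unary representation needs about a million
# symbols, while the binary one needs only 20 digits.
print(unary_size(1_000_000))      # 1000001
print(base_b_size(1_000_000, 2))  # 20
```

This exponential gap is why an algorithm that is "polynomial in the value of $n$" may still be exponential in the size of its input under the usual base-$b$ convention.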
Taking numbers to have size proportional to their logarithm is also more consistent with the way we treat other objects: if the input of an algorithm is an array, its size is the number of cells (and not the number of different arrays of that length), and similarly for graphs, matrices and other mathematical objects.