I have a small piece of code like this:
printf("%.2d", 5);
I thought it would display just: 5 But it displayed: 05. Can you explain to me how this works?
This is known as the precision modifier. It is written as .number and has slightly different meanings for the different conversion specifiers.
For floating point numbers (e.g. %f), it controls the number of digits printed after the decimal point:
printf( "%.3f", 1.2 );
will print:
1.200
If the number provided has more precision than is given, it will round. For example:
printf( "%.3f", 1.2348 );
will print:
1.235
For %g and %G, it controls the number of significant figures displayed. This affects not just the digits after the decimal point but the whole number.
printf( "%.3f\n%.3g\n%.3f\n%.3g\n", 100.2, 100.2, 3.1415926, 3.1415926 );
will print:
100.200 // %.3f, putting 3 decimal places always
100 // %.3g, putting 3 significant figures
3.142 // %.3f, putting 3 decimal places again
3.14 // %.3g, putting 3 significant figures
For integers, on the other hand, the precision controls the minimum number of digits printed:
printf( "%.3d", 10 );
will print:
010
Finally, for strings, the precision controls the maximum length of the string displayed:
printf( "%.5s\n", "abcdefg" );
will print:
abcde
Source: Printf Format Strings
The 2 in "%.2d" means you want at least 2 digits printed, padding with leading zeros if necessary. Similarly, if you wanted at least 3 digits, you could use "%.3d". If you want no padding with 0's at all, use "%d" on its own.
Man pages are your best friend.