This is an example from the Python 3.8.0 interpreter (the results are similar in 3.7.5):
>>> import sys
>>> sys.getsizeof(int)
416
>>> sys.getsizeof(float)
416
>>> sys.getsizeof(list)
416
>>> sys.getsizeof(tuple)
416
>>> sys.getsizeof(dict)
416
>>> sys.getsizeof(bool)
416
getsizeof() returns the number of bytes a Python object consumes, including the garbage collector overhead (see the documentation). Why do these basic Python classes all consume the same amount of memory?
If we take a look at instances of these classes
>>> import sys
>>> sys.getsizeof(int())
24
>>> sys.getsizeof(float())
24
The default argument is 0, and both instances use the same amount of memory for it. However, if I pass a non-default argument
>>> sys.getsizeof(int(1))
28
>>> sys.getsizeof(float(1))
24
and this is where it gets strange. Why does the instance's memory usage increase for int but not for float?
getsizeof(some_type) gives you the size of the type object itself, not of instances of that type. Try getsizeof(1); getsizeof(1.0); getsizeof([])
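A quick sketch of both effects follows. The exact numbers depend on the CPython version and build, so the comments only describe the general pattern rather than fixed values:

```python
import sys

# getsizeof(int) measures the int type object (a PyTypeObject at the
# C level), not an int value. The built-in static types all use that
# same C struct, which is why they report the same size.
print(sys.getsizeof(int), sys.getsizeof(float))

# CPython ints are arbitrary-precision: the value is stored as a
# variable number of fixed-width "digits", so the instance size
# grows with the magnitude of the number.
for n in (0, 1, 2**30, 2**60):
    print(n, sys.getsizeof(n))

# CPython floats wrap a fixed-size C double, so every float instance
# has the same footprint regardless of its value.
for f in (0.0, 1.0, 1e300):
    print(f, sys.getsizeof(f))
```

This is why int(1) can be larger than int() while float(1) and float() are identical: the int needs at least one digit to hold a nonzero value, whereas the float's storage is fixed.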