Unfortunately, lambda x: x just creates some function that, viewed from the outside, we know nothing about. Of course, at that point we could theoretically realize that it's just an identity function, making its computation redundant. But even then, we simply store this function in a variable and are done with it for now.
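Concretely, the situation looks like this (a minimal sketch; the variable name is made up for illustration):

```python
# Store the (unknown-to-us) identity function under a name...
func = lambda x: x

# ...and later call that name. The interpreter has to perform the call;
# it doesn't know the function just returns its argument unchanged.
result = func("hello")
print(result)  # → hello
```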
Then later, we call the name, executing the underlying function. Because it's a function, and we know nothing about it, we can't tell what it does, so we simply have to execute it. An optimizer could in theory recognize that it's an identity function and skip the call, returning the value directly, but this would be difficult to do in Python. The Peephole optimizer already removes some bytecode instructions when it spots an opportunity, but this case would be hard:
A call of a name usually compiles to a LOAD_FAST (or LOAD_GLOBAL, depending on scope), followed by loads for the arguments, and then a CALL_FUNCTION. This follows directly from the syntax of something(args). So a theoretical optimizer would have to skip the first load and the CALL_FUNCTION. But to even consider this, it would have to know that the name loaded first refers to an identity function.
Now, the way the Peephole optimizer works, it doesn't operate on dynamic variable content. Even if we had some kind of flag we could attach to the function so we could quickly check whether it's an identity function, the optimizer still wouldn't be able to read it, because it doesn't look at the underlying data. It only operates on bytecode operations, reducing things like a LOAD_GLOBAL of True to a LOAD_CONST True.
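A concrete example of what this compile-time optimization can do is constant folding, which works purely on static expressions (a sketch; in recent CPython this folding happens in the AST optimizer rather than the classic peephole pass):

```python
import dis

def f():
    return 2 * 3  # the compiler folds this to the constant 6

# No multiplication instruction appears in the bytecode;
# it just loads the precomputed constant 6.
dis.dis(f)
print(6 in f.__code__.co_consts)  # → True
```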
And to be honest, introducing such flags for identity functions would be rather odd. Identity functions are rare to begin with; and if we were to optimize for them, we might as well inline all lambdas and eliminate the function-call overhead completely. But that's simply not what the Peephole optimizer, or any (?) optimizer for an interpreted language, does. The runtime overhead would probably be too big, hurting overall performance for the sake of a micro-optimization.
Because more often than not, this level of optimization simply isn't worth it. Such a function call is rarely the bottleneck in your application, and if it is, you'll have to optimize it differently anyway.
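To put the "micro" in micro-optimization in perspective, here is a rough timing sketch comparing the identity call with using the value directly (the absolute numbers depend entirely on your machine and Python version; only the relative difference matters):

```python
import timeit

identity = lambda x: x

# One million iterations each: assigning the result of an identity call
# vs. assigning the constant directly (i.e. the call manually "inlined").
with_call = timeit.timeit(
    "x = identity(42)", globals={"identity": identity}, number=1_000_000
)
direct = timeit.timeit("x = 42", number=1_000_000)

print(f"with call: {with_call:.3f}s")
print(f"direct:    {direct:.3f}s")
```

The per-call overhead is real but tiny, which is exactly why a bytecode-level optimization for this case would rarely pay off.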