The running time of a dynamic programming algorithm is always O(P), where P is the number of subproblems.
Is the above proposition right, or is there a counterexample?
Your proposition is too simple, unfortunately. If P is the number of subproblems, two things can go wrong with the O(P) bound. Depending on the situation, you may only need to solve a small fraction of the subproblems to deduce DP(N), so the runtime can be far below O(P). Or you may need to solve all of them, but combining earlier answers can cost more than O(1) per subproblem; if each combination step takes O(P) time, the runtime is really O(P^2).
An example of the first case is the repeated squaring method. To compute 59^1000000 mod 123981238, you could compute 59^1, 59^2, 59^3, ..., one at a time, until you eventually reach 59^1000000, which takes O(n) multiplications for exponent n. With repeated squaring you only touch the O(log n) subproblems 59^1, 59^2, 59^4, 59^8, ..., and get a much faster algorithm.
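Here is a minimal sketch in Python of that idea; the function name pow_mod is just illustrative (Python's built-in pow(base, exp, mod) already does this):

```python
def pow_mod(base, exp, mod):
    """Compute base**exp % mod with repeated squaring: O(log exp) multiplications."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                   # low bit set: fold this power of base in
            result = result * base % mod
        base = base * base % mod      # square: base, base^2, base^4, base^8, ...
        exp >>= 1
    return result

print(pow_mod(59, 1000000, 123981238))  # agrees with pow(59, 1000000, 123981238)
```

Only about 20 of the million "subproblems" 59^k are ever computed, which is why the O(P) intuition fails here.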
An example of the second case is the question "how to improve this code?". To summarize that question: given an integer N, we want the shortest list of squares summing to N. For example, for N = 1000 the answer is 30^2 + 10^2, and for N = 3 it is 1^2 + 1^2 + 1^2. If you consider 1, ..., N-1 to be the subproblems, then the recurrence is DP(i) = 1 + min{ DP(i - s^2) : s^2 <= i }, so producing the answer for each i requires iterating over the O(sqrt(N)) squares below it. There are only O(N) subproblems, but combining them costs O(sqrt(N)) at each step, so the overall runtime is O(N sqrt(N)), not O(N).
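A short sketch of that DP in Python (the name min_squares is mine, not from the linked question, and this is the straightforward recurrence rather than that question's code):

```python
def min_squares(n):
    """dp[i] = fewest perfect squares summing to i; O(n * sqrt(n)) overall."""
    dp = [0] * (n + 1)
    for i in range(1, n + 1):
        best = i                     # worst case: i copies of 1^2
        s = 1
        while s * s <= i:            # O(sqrt(i)) candidate squares per subproblem
            best = min(best, dp[i - s * s] + 1)
            s += 1
        dp[i] = best
    return dp[n]

print(min_squares(1000))  # 2, e.g. 30^2 + 10^2
print(min_squares(3))     # 3, i.e. 1^2 + 1^2 + 1^2
```

The inner while loop is exactly the O(sqrt(N))-per-subproblem combination cost that pushes the total above O(P).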