
This Python code works and produces the correct output:

def fib(x):
    v = 1
    u = 0
    for x in xrange(1,x+1):
        t = u + v
        u = v
        v = t
    return v

But when I write the same code in C++ it gives me a different and impossible result.

int fib(int x)
{
    int v = 1;
    int u = 0;
    int t;
    for (int i = 1; i != x + 1; i++)
    {
        t = u + v;
        u = v;
        v = t;
    }
    return v;
}

I'm still learning C++. Thanks!

Edit: for x = 50, Python outputs 20365011074, but C++ outputs -1408458269.

2 Comments

  • Can you give an example of where they differ? Commented Jan 11, 2012 at 23:24
  • See Danial Fisher's answer below. In C++, types have specific sizes; you're overflowing a signed 32-bit int. Commented Jan 11, 2012 at 23:30

1 Answer


For what input? Python has arbitrary-precision integers (limited only by memory), while C++'s int is usually a four-byte signed integer, so you'll likely have overflow.
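A minimal fix is to keep the question's loop but do the arithmetic in a 64-bit unsigned type (this sketch assumes a platform with `<cstdint>`; with the question's indexing, fib(50) is 20365011074, which fits comfortably in 64 bits):

```cpp
#include <cstdint>

// The question's loop, unchanged except for a 64-bit unsigned type,
// so the result for x = 50 (20365011074 with the question's indexing)
// no longer overflows.
std::uint64_t fib(int x)
{
    std::uint64_t v = 1;
    std::uint64_t u = 0;
    for (int i = 1; i <= x; i++)  // i <= x is safer than i != x + 1
    {
        std::uint64_t t = u + v;  // next Fibonacci number
        u = v;
        v = t;
    }
    return v;
}
```

With this change, `fib(50)` returns 20365011074, matching the Python output.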

The largest Fibonacci number representable with a signed 32-bit integer type is fib(46) = 1836311903.
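You can verify that cutoff yourself by walking up the sequence in 64-bit arithmetic and stopping before the value exceeds INT32_MAX (the function name here is ours, and standard indexing fib(1) = fib(2) = 1 is assumed):

```cpp
#include <cstdint>

// Report the index of the largest Fibonacci number that still fits
// in a signed 32-bit int, doing the arithmetic in 64 bits so the
// comparison itself cannot overflow. Standard indexing: fib(1) = fib(2) = 1.
int largest_fib_index_in_int32()
{
    std::int64_t u = 1, v = 1;  // fib(1), fib(2)
    int n = 2;                  // v currently holds fib(n)
    while (u + v <= INT32_MAX) {
        std::int64_t t = u + v;
        u = v;
        v = t;
        ++n;
    }
    return n;  // fib(n) <= INT32_MAX < fib(n + 1)
}
```

This returns 46, confirming fib(46) = 1836311903 is the last one that fits.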


5 Comments

Thank you. I used unsigned int rather than int and it works perfectly.
C++'s unsigned long long can go higher, up to fib(93). Above that you need a big-integer library.
Not very far: the last you can get with 32 bits is fib(47); with 64 bits, it's fib(93).
@Nekew you can also use unsigned long long for even bigger numbers (I think somewhere around 18446744073709551615 if your compiler makes it 8 bytes). Also, please click the checkmark beside this answer if it answered your question.
I don't need to go larger than fib(50). Thank you for your help!
