7

I want to convert an integer to a binary string and then store each bit of that string in an element of an integer array of a given size. I am sure that the input integer's binary representation won't exceed the size of the specified array. How can I do this in C++?

8
  • Why would you want to do that? Ints are already natively an "array of bits", you can access each bit. Commented Dec 31, 2012 at 17:09
  • 1
    A "Binary string"? As in characters of 1s and 0s? What a strange task... Commented Dec 31, 2012 at 17:09
  • @Mat: reread the question, he wants to convert an integer into an array of int, where each integer in the array holds a bit from the original integer. Commented Dec 31, 2012 at 17:10
  • 1
    @MooingDuck: I understand. That's like a 32x or 64x storage increase. Doesn't change my question. Commented Dec 31, 2012 at 17:12
  • 1
    LSB first or last in the array? Commented Dec 31, 2012 at 17:33

8 Answers

12

Pseudo code:

int value = ????  // assuming a 32 bit int
int i;

for (i = 0; i < 32; ++i) {
    array[i] = (value >> i) & 1;
}

5 Comments

Why not array[i] = (theValue >> i) & 1 - I'm sure the compiler does the same thing, but seeing that "there isn't going to be a branch in there" makes me happier.
<sarcasm>The question is tagged as C++ and so you must use templates, otherwise it's C. </sarcasm>
works well, however the order of bits is reversed, so instead of array[i] I suggest using index array[31 - i]
32 may be changed to sizeof(int) * 8
@jinzhenhui my answer (8 years ago) was, as indicated, intended to be pseudo code. There are several variations one could apply when writing this for real, including your suggestion -- but I would not literally use sizeof(int). Instead, I would use sizeof(value), which is currently the same thing. The benefit of specifying the variable rather than its type is that if the variable's type changes in the future, you only need to change its declaration; everything else that refers to the variable keeps working without modification.
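
Putting the pseudo code and the comments' suggestions together, a runnable sketch might look like this (the sample value is an arbitrary illustration; bits are stored LSB first):

#include <iostream>

int main() {
    unsigned value = 0xCAFE;                  // sample input, chosen only for illustration
    const unsigned bits = sizeof(value) * 8;  // size taken from the variable, per the comments
    int array[sizeof(value) * 8];

    // LSB first, as in the pseudo code above
    for (unsigned i = 0; i < bits; ++i)
        array[i] = (value >> i) & 1;

    for (unsigned i = 0; i < bits; ++i)
        std::cout << array[i] << ' ';
    std::cout << '\n';
}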
7

You could use C++'s bitset library, as follows.

#include <iostream>
#include <bitset>

int main()
{
  int N; // input number in base 10
  std::cin >> N;
  int O[32]; // the output array
  std::bitset<32> A = N; // A will hold the binary representation of N
  for (int i = 0, j = 31; i < 32; i++, j--)
  {
     // assign the bits one by one, most significant bit first
     O[i] = A[j];
  }
  return 0;
}

A couple of points to note here. First, the 32 in the bitset declaration tells the compiler that you want 32 bits to represent your number, so even if your number takes fewer bits, the bitset variable will still have 32 bits, possibly with many leading zeroes. Second, bitset is a really flexible way of handling binary: you can give it a string or a number as input, and you can use it like an array or convert it back to a string. It's a really handy library. You can print the bitset variable A with cout << A; and see how it works.
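
To illustrate that flexibility, a small sketch (the values are arbitrary examples):

#include <bitset>
#include <iostream>
#include <string>

int main()
{
  std::bitset<32> from_number(13);                    // built from a number: ...00001101
  std::bitset<32> from_string(std::string("1101"));   // the same value, built from a string of 1s and 0s
  std::cout << from_number << '\n';                   // prints all 32 bits, leading zeroes included
  std::cout << from_number[0] << '\n';                // index 0 is the least significant bit: prints 1
}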

2 Comments

Well that's a good idea. +1 even though it competes with mine. But why do you only support 21 digits? Why not 32?
Okay let's make it 32 then.
6
#include <climits>   // CHAR_BIT
#include <iostream>
#include <iterator>  // std::begin, std::end

template<class output_iterator>
void convert_number_to_array_of_digits(const unsigned number, 
         output_iterator first, output_iterator last) 
{
    const unsigned number_bits = CHAR_BIT*sizeof(int);
    //extract bits one at a time
    for(unsigned i=0; i<number_bits && first!=last; ++i) {
        const unsigned shift_amount = number_bits-i-1;
        const unsigned this_bit = (number>>shift_amount)&1;
        *first = this_bit;
        ++first;
    }
    //pad the rest with zeros
    while(first != last) {
        *first = 0;
        ++first;
    }
}

int main() {
    int number = 413523152;
    int array[32];
    convert_number_to_array_of_digits(number, std::begin(array), std::end(array));
    for(int i=0; i<32; ++i)
        std::cout << array[i] << ' ';
}

Proof of compilation here

2 Comments

Don't you mean (number >> i) & 1?
@James: Thanks. First I posted code, then I posted code that compiles, and now it compiles and executes and seems to be working.
2

You can do it like this:

int result[32] = {0};   // output array, zero-initialized; bits are stored LSB first
int index = 0;

while (input != 0) {
    if (input & 1)
        result[index] = 1;
    else
        result[index] = 0;
    input >>= 1;   // dividing by two
    index++;
}

1 Comment

I don't think that's quite right... (1) you never appear to change index, (2) even then it's still wrong.
1

As Mat mentioned above, an int is already a bit-vector (using bitwise operations, you can check each bit). So, you can simply try something like this:

#include <climits>   // CHAR_BIT

// Note: the bits come out LSB first
int x = 0xdeadbeef; // Your integer?
int arr[sizeof(int) * CHAR_BIT];
for (unsigned i = 0; i < sizeof(int) * CHAR_BIT; ++i) {
  arr[i] = (x & (1u << i)) ? 1 : 0; // Take the i-th bit
}

1 Comment

Made the correction - thanks ;) (always forget about CHAR_BIT)
1

Decimal to Binary: Size independent

Two ways: both store the binary representation in a dynamically allocated array, bits, ordered MSB to LSB.

First Method:

#include <limits.h> // include for CHAR_BIT
#include <stdlib.h> // include for calloc

int* binary(int dec){
  int* bits = (int*)calloc(sizeof(int) * CHAR_BIT, sizeof(int)); // one element per bit of an int
  if(bits == NULL) return NULL;
  int i = 0;

  // conversion
  int left = sizeof(int) * CHAR_BIT - 1; 
  for(i = 0; left >= 0; left--, i++){
    bits[i] = !!(dec & ( 1u << left ));      
  }

  return bits;
}

Second Method:

#include <limits.h> // include for CHAR_BIT
#include <stdlib.h> // include for calloc

int* binary(unsigned int num)
{
   unsigned int mask = 1u << ((sizeof(int) * CHAR_BIT) - 1);
                      // mask has only the most significant bit set
   int* bits = (int*)calloc(sizeof(int) * CHAR_BIT, sizeof(int));
   if(bits == NULL) return NULL;
   int i = 0;

   //conversion 
   while(mask > 0){
     if((num & mask) == 0 )
         bits[i] = 0;
     else
         bits[i] = 1;
     mask = mask >> 1 ;  // Right Shift 
     i++;
   }

   return bits;
}
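
In both methods the returned buffer comes from calloc, so the caller is responsible for freeing it. A usage sketch (assuming one of the binary functions above is in scope):

#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

int main(void)
{
   int* bits = binary(37);                  // 37 -> 000...00100101, MSB first
   if (bits == NULL) return 1;

   for (unsigned i = 0; i < sizeof(int) * CHAR_BIT; i++)
       printf("%d", bits[i]);
   printf("\n");

   free(bits);                              // caller owns the calloc'd buffer
   return 0;
}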

Comments

0

I know it doesn't add as many zeros as you might wish for positive numbers, but for negative binary numbers it works pretty well. I just wanted to post a solution for once :)

int BinToDec(int Value, int Padding = 8)
{
    int Bin = 0;

    // pack the low Padding bits of Value into the decimal digits of Bin, LSB in the ones place
    for (int I = 1, Pos = 1; I < (Padding + 1); ++I, Pos *= 10)
    {
        Bin += ((Value >> (I - 1)) & 1) * Pos;
    }
    return Bin;
}
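
To make the behaviour concrete, a brief usage sketch (the inputs are arbitrary examples; the negative case assumes the usual two's-complement arithmetic right shift):

#include <iostream>

int main()
{
    std::cout << BinToDec(5)  << '\n';   // 5 is 00000101, so this prints 101
    std::cout << BinToDec(-1) << '\n';   // all eight low bits set: prints 11111111
}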

Comments

0

This is what I use: it lets you specify the number of bits in the final vector and pads any unused high-order bits with 0s. Note that the result is LSB-first (the least significant bit ends up at index 0).

#include <vector>

std::vector<int> to_binary(int num_to_convert_to_binary, int num_bits_in_out_vec)
{
    std::vector<int> r;

    // build the binary vector of minimum size, LSB at .begin() and MSB toward .end()
    while (num_to_convert_to_binary > 0)
    {
        if (num_to_convert_to_binary % 2 == 0)
            r.push_back(0);
        else
            r.push_back(1);
        num_to_convert_to_binary = num_to_convert_to_binary / 2;
    }

    // pad the unused high-order bits with zeros
    while (static_cast<int>(r.size()) < num_bits_in_out_vec)
        r.push_back(0);

    return r;
}
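
Since the vector comes back LSB-first, printing it most-significant-bit-first means walking it in reverse. A short usage sketch (values chosen for illustration):

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> bits = to_binary(6, 8);   // 6 -> {0, 1, 1, 0, 0, 0, 0, 0}, LSB first

    for (auto it = bits.rbegin(); it != bits.rend(); ++it)
        std::cout << *it;                      // prints 00000110
    std::cout << '\n';
}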

Comments
