
Update: For future reference, Amazon has now updated the documentation since this question was asked. As per @Loren Segal's comment below:

We've corrected the docs in the latest preview release to document this parameter properly. Sorry about the mixup!


I'm trying out the developer preview of the AWS SDK for Node.js and want to upload a zipped tarball to S3 using putObject.

According to the documentation, the Body parameter should be...

Body - (Base64 Encoded Data)

...therefore, I'm trying out the following code...

var AWS = require('aws-sdk'),
    fs = require('fs');

// For dev purposes only
AWS.config.update({ accessKeyId: 'key', secretAccessKey: 'secret' });

// Read in the file, convert it to base64, store to S3
fs.readFile('myarchive.tgz', function (err, data) {
  if (err) { throw err; }

  var base64data = new Buffer(data, 'binary').toString('base64');

  var s3 = new AWS.S3();
  s3.client.putObject({
    Bucket: 'mybucketname',
    Key: 'myarchive.tgz',
    Body: base64data
  }).done(function (resp) {
    console.log('Successfully uploaded package.');
  });

});

Whilst I can then see the file in S3, if I download it and attempt to decompress it I get an error that the file is corrupted. Therefore it seems that my method for 'base64 encoded data' is off.

Can someone please help me to upload a binary file using putObject?

4 Answers


You don't need to convert the buffer to a base64 string. Just set Body to data and it will work.

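For reference, here is a minimal sketch of the question's own code with only the Body parameter changed (same placeholder bucket, key, and credentials as in the question):

var AWS = require('aws-sdk'),
    fs = require('fs');

// For dev purposes only
AWS.config.update({ accessKeyId: 'key', secretAccessKey: 'secret' });

// Read the file and upload the raw Buffer directly
fs.readFile('myarchive.tgz', function (err, data) {
  if (err) { throw err; }

  var s3 = new AWS.S3();
  s3.client.putObject({
    Bucket: 'mybucketname',
    Key: 'myarchive.tgz',
    Body: data // the Buffer as-is, no base64 step
  }).done(function (resp) {
    console.log('Successfully uploaded package.');
  });
});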

4 Comments

That seems to be it! Not sure of the reasoning for the 'base64' mention in the docs, then.
We've corrected the docs in the latest preview release to document this parameter properly. Sorry about the mixup!
Is it possible to pass a stream instead of data? For instance, I want to send a file that is 50 MB. Could I pass a readable stream and have S3.client pipe it internally to S3?
Yes, that's possible. See here: stackoverflow.com/questions/15817746/…

Here is a way to send a file using streams, which might be necessary for large files and will generally reduce memory overhead:

var AWS = require('aws-sdk'),
    fs = require('fs');

// For dev purposes only
AWS.config.update({ accessKeyId: 'key', secretAccessKey: 'secret' });

// Stream the file straight to S3 without buffering it all in memory
var fileStream = fs.createReadStream('myarchive.tgz');
fileStream.on('error', function (err) {
  throw err; // the error handler is only invoked with an actual error
});
fileStream.on('open', function () {
  var s3 = new AWS.S3();
  s3.putObject({
    Bucket: 'mybucketname',
    Key: 'myarchive.tgz',
    Body: fileStream
  }, function (err) {
    if (err) { throw err; }
  });
});

8 Comments

This does not work with mp4 files. The uploaded file is not identical to the original.
I'm not sure what you mean when you say "the uploaded file is not identical to the original". I've used code similar to this to upload binary MP4 files to AWS. It almost sounds like you might be trying to run this code in a browser instead of Node.js? Can you be more specific?
I'm unable to upload MP4 files using this method, as well. The file shows up in the S3 bucket, but it's corrupt and unplayable.
Hmmm, I'm not sure why you guys are having problems. I use code like this to upload MP4 files which I can then download and play again. I don't know that it makes any difference, but I am not "streaming" the files when I play them: I fully download them first, though I don't think that would matter. I use an Ubuntu host to send files to S3. You can see the code from which I created this example here: github.com/CaptEmulation/soapbubble-cloud/blob/… A key difference is that this code checks whether the file exists first.
It appears you are correct, and this looks like an implementation issue within the Amazon S3 lib: S3 would need to pause the stream periodically to keep the FileStream from continuing to read data from disk into memory. You might be able to create an adapter stream implementation which tracks total bytes read but not yet sent and pauses itself; I'm not seeing any easy arguments to add that would resolve this with a glue implementation. (A rough sketch of that idea follows below.)
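For what it's worth, here is a minimal sketch of the kind of pass-through "adapter" described in the last comment. It is purely illustrative (the variable names are made up): piping the file through a Transform gives you a single place to track how many bytes have been read from disk, and pipe() applies backpressure whenever the destination stream signals that it is falling behind.

var stream = require('stream');

// Counts bytes as they flow from the file toward the uploader
var bytesSeen = 0;
var counter = new stream.Transform({
  transform: function (chunk, encoding, callback) {
    bytesSeen += chunk.length;   // total bytes read from disk so far
    callback(null, chunk);       // forward the chunk unchanged
  }
});

// Hypothetical usage: pipe the file through the counter before uploading
// var body = fs.createReadStream('myarchive.tgz').pipe(counter);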

I was able to upload my binary file this way.

var AWS = require('aws-sdk'),
    fs = require('fs');

var s3 = new AWS.S3();

// s3bucket and s3key are placeholders for your bucket name and object key
var fileStream = fs.createReadStream("F:/directory/fileName.ext");
var putParams = {
    Bucket: s3bucket,
    Key: s3key,
    Body: fileStream
};
s3.putObject(putParams, function(putErr, putData){
    if(putErr){
        console.error(putErr);
    } else {
        console.log(putData);
    }
});



I'm pretty sure you need to convert the Buffer now (wrap it in a stream). Otherwise you will get:

Type 'Buffer' is not assignable to type 'StreamingBlobTypes | undefined'.

With StreamingBlobTypes defined here: https://github.com/awslabs/smithy-typescript/blob/21ee16a06dddf813374ba88728c68d53c3674ae7/packages/types/src/streaming-payload/streaming-blob-common-types.ts#L22

Here is an example with the conversion, and with the length calculated from the Buffer. Note that you need to add the ContentLength parameter:

import { Readable } from "stream";

  saveCreativeImage(name: string, image: Buffer): Promise<string> {
    const options: PutObjectRequest = {
      ACL: 'bucket-owner-full-control',
      Bucket: EnvConfig.S3_CREATIVES_BUCKET_NAME,
      Key: name,
      Body: Readable.from(image),
      ContentType: 'image/png',
      ContentLength: image.length
    };
    // ... then pass `options` to putObject as usual (see the sketch below)
  }

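For completeness, here is a fuller sketch assuming the v3 client package @aws-sdk/client-s3; the bucket name, client setup, and standalone function form are illustrative stand-ins for the answer's EnvConfig and class method:

import { Readable } from "stream";
import { S3Client, PutObjectCommand, PutObjectRequest } from "@aws-sdk/client-s3";

const s3 = new S3Client({});

async function saveCreativeImage(name: string, image: Buffer): Promise<string> {
  const options: PutObjectRequest = {
    ACL: 'bucket-owner-full-control',
    Bucket: 'my-bucket',            // EnvConfig.S3_CREATIVES_BUCKET_NAME in the answer
    Key: name,
    Body: Readable.from(image),     // wrap the Buffer in a Readable
    ContentType: 'image/png',
    ContentLength: image.length     // the stream's length must be supplied explicitly
  };
  await s3.send(new PutObjectCommand(options));
  return name;
}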