
I want to upload my local CSV file to my AWS S3 bucket.

I tried:

s3 = boto3.resource('s3')

s3.meta.client.upload_file('verified_path_of_my_CSV', 'bucket_name', 'My_csv')

but I get a 0-byte CSV in my bucket.

I also tried:

s3_client = boto3.resource('s3')

bucket = s3.Bucket('bucket_name')
s3.Object(bucket, 'My_csv').put(Body=open(csv_path, 'rb'))

but I get this error:

Traceback (most recent call last):
  File "test-text.py", line 109, in <module>
    main()
  File "test-text.py", line 88, in main
    upload_csv(bucket, 'name_of_my_csv', 'My_path')
  File "test-text.py", line 56, in upload_csv
    s3.Object(bucket, 'My_csv').put(Body=open(csv_path, 'rb'))
 ...
  File "/home/toto/.local/lib/python3.7/site-packages/botocore/handlers.py", line 219, in validate_bucket_name
    if VALID_BUCKET.search(bucket) is None:
TypeError: expected string or bytes-like object

How can I upload my CSV to S3?

Thanks in advance


Solved by:

Actually, when I created my CSV I wrote this:

with open('job_file.csv', 'w') as csvfile:
    filewriter = csv.writer(csvfile, delimiter=',')
    text = process_text_analysis(bucket, file, True)
    filewriter.writerow(["doc1", text])
    upload_csv(bucket, key_s3, 'my_local_csv_path')

But that executes upload_csv() before the CSV is finished: the with block has not yet flushed and closed the file. The correct way is to move the upload call out of the block:

with open('job_file.csv', 'w') as csvfile:
    filewriter = csv.writer(csvfile, delimiter=',')
    text = process_text_analysis(bucket, file, True)
    filewriter.writerow(["doc1", text])

upload_csv(bucket, key_s3, 'my_local_csv_path')
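The 0-byte upload can be reproduced without S3 at all: while the with block is still open, a freshly written row usually sits in Python's write buffer rather than on disk, so anything reading the file at that point sees it empty. A minimal sketch (path and file names are made up, no AWS calls involved):

```python
import csv
import os
import tempfile

# Hypothetical local path; stands in for 'my_local_csv_path' in the question
path = os.path.join(tempfile.mkdtemp(), "job_file.csv")

with open(path, "w", newline="") as csvfile:
    filewriter = csv.writer(csvfile, delimiter=",")
    filewriter.writerow(["doc1", "some text"])
    # The row is typically still in Python's write buffer here,
    # so an upload at this point would send an empty (0-byte) file
    size_inside = os.path.getsize(path)

# After the with block the file is flushed and closed: the bytes are on disk
size_after = os.path.getsize(path)
```

Calling csvfile.flush() inside the block would also work, but letting the with block close the file before uploading is the simpler fix.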

Stupid mistake, I know :), but it happens...

Hope this edit can help avoid this type of question in the future ;)

Thanks to all of you for trying to solve my issue.

1 Answer


This is using s3 as a client:

import os
import boto3

aws_session = boto3.Session()
s3_client = aws_session.client('s3')
local_path = os.path.join(root, file)  # root/file: directory and name of the local CSV
bucket_name = "abc"
s3_client.upload_file(local_path, bucket_name, s3_file)  # s3_file: the object key in the bucket

In your case:

s3_client.upload_file('verified_path_of_my_CSV', bucket_name, "my_csv.csv")

Using s3 as a resource:

import boto3
import mimetypes

aws_session = boto3.Session()
s3_resource = aws_session.resource("s3")

def s3_upload(s3_bucket, s3_key, local_file_path):
    """
    Upload a file to an S3 bucket.
    :param s3_bucket: Bucket name
    :param s3_key: Key of the file in the S3 bucket
    :param local_file_path: Location of the file to upload
    """
    with open(local_file_path, "rb") as fp:
        file_data = fp.read()
    # Guess the content type from the file name, with a safe fallback
    mime_type, _ = mimetypes.guess_type(local_file_path)
    if mime_type is None:
        mime_type = "application/octet-stream"
    s3_params = {
        "Key": s3_key,
        "Body": file_data,
        "ContentType": mime_type,
    }
    s3_resource.Bucket(s3_bucket).put_object(**s3_params)
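The ContentType fallback above relies on the standard library's mimetypes module: known extensions map to a MIME type, and anything unrecognized returns None. A quick illustration (the file names are hypothetical):

```python
import mimetypes

# Known extension: the stdlib maps .csv to a CSV MIME type
mime_type, _ = mimetypes.guess_type("report.csv")  # "text/csv"

# Unknown extension: guess_type returns None, so fall back as the answer does
unknown, _ = mimetypes.guess_type("data.unknownext")
if unknown is None:
    unknown = "application/octet-stream"
```

Setting ContentType this way means a browser opening the object's URL can render it correctly instead of forcing a download.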

4 Comments

Sharing code is ok, but please add a description and explanation in order to help other users.
This is a client-level API as opposed to the resource-level API the OP is using in the question. As long as they are willing to move from resource to client, this is an appropriate answer.
Sure @AmitBaranes .
Thank you, your first solution fixed my issue @aviboy2006, but Amit Baranes is right: if you can add some explanation, that would be cool :)
