
I am trying to write a Python DataFrame to a CSV file in S3. I have added the required layers to the Lambda function (s3fs & fsspec) and given it the requisite permissions to write to S3, but now I am getting the below error:

"errorMessage": "module 's3fs' has no attribute 'S3FileSystem'"

Below are the relevant lines of my code:

s3 = boto3.client('s3')
df.to_csv('s3://buckets/<my-bukcet-name>/mydata.csv')

Any pointers on what could be the reason for this?

Regards, Dbeings

1 Answer


Instead of including those layers, I would recommend using the Amazon-provided AWS Data Wrangler layer.

Then you would use AWS Data Wrangler to write your dataframe directly to S3, like the following:

import awswrangler as wr

wr.s3.to_csv(df, 's3://<my-bukcet-name>/mydata.csv', index=False)
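
For context, here is a minimal sketch of how this might look inside the Lambda handler; the sample DataFrame and the return value are placeholders, not from the question:

import awswrangler as wr
import pandas as pd

def lambda_handler(event, context):
    # Placeholder DataFrame; build yours however the function does today.
    df = pd.DataFrame({"id": [1, 2, 3], "value": ["a", "b", "c"]})

    # With the Data Wrangler layer attached, awswrangler handles the S3
    # filesystem access itself, so the s3fs/fsspec layers are not needed.
    wr.s3.to_csv(df, 's3://<my-bukcet-name>/mydata.csv', index=False)

    return {"statusCode": 200}

The execution role still needs permission to write to the target bucket, which the question says is already in place.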

3 Comments

Thanks a lot, using Wrangler for writing fixed it! The below code worked: wr.s3.to_csv(df, 's3://<my-bukcet-name>/mydata.csv', index=False)
Please mark the answer as accepted if it fixed your problem. Thanks.
This library is a must-have for anyone extracting data and dumping it somewhere in AWS.
