I have a question about a crawler I'm developing with Node.js/Puppeteer. The old crawler worked like this:
- Crawl pages
- Store the output file locally with the `fs` module
Since I'm going to add a UI on the server, I've planned the following scenario: upload the output to S3 instead of storing it locally, then show the result in the UI.
- Crawl pages
- Write the output files to the server's disk with the `fs` module
- Read the output file back and upload it to the S3 bucket (see the sketch below)
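This is roughly what I have in mind for that flow. It's just a sketch: I'm assuming the AWS SDK v3 here, and the bucket name, key, region, and path are all placeholders.

```js
const fs = require('fs');
const { S3Client } = require('@aws-sdk/client-s3');
const { Upload } = require('@aws-sdk/lib-storage');

const s3 = new S3Client({ region: 'us-east-1' }); // placeholder region

async function uploadFromDisk(localPath) {
  // The crawl result was already written to localPath with fs.
  // Read it back as a stream and push it to S3.
  const upload = new Upload({
    client: s3,
    params: {
      Bucket: 'my-crawler-bucket',          // placeholder
      Key: 'results/output.json',           // placeholder
      Body: fs.createReadStream(localPath), // re-read from disk
    },
  });
  await upload.done();
}
```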
That is the scenario I already know how to build. What I'd like to know is whether the following is possible instead:
- Crawl pages
- Upload the crawled data held in memory (as a buffer or stream) directly to the S3 bucket, without ever writing it to the local filesystem (roughly the sketch below)
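In other words, something like this. Again, this is only a sketch of what I'm hoping for: I'm assuming the AWS SDK v3, `page` is a Puppeteer page, and the bucket/key/region are placeholders.

```js
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({ region: 'us-east-1' }); // placeholder region

async function crawlAndUpload(page) {
  // Keep the crawl output in memory as a Buffer instead of writing a file
  const html = await page.content();
  const body = Buffer.from(html);

  // Upload the in-memory Buffer straight to S3 -- no fs involved
  await s3.send(new PutObjectCommand({
    Bucket: 'my-crawler-bucket', // placeholder
    Key: 'results/page.html',    // placeholder
    Body: body,
  }));
}
```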
If you've implemented a scenario like this, I'd love a pointer or guide. I'd really appreciate any comments or replies :)