I have a CLI script that I use to push files into an S3 bucket. For larger files, I split the file into parts and upload the parts in parallel. (I'm pasting the code structure here; I tried to make a minimal example, but even that is 60 lines long.)
def _upload_part(argFile, argBucket, max_attempts_limit, **core_chunk):
    # bunch of stuff
    pool = Pool(processes=parallel_processes)
    for i in range(chunk_amount):
        # bunch of stuff
        pool.apply_async(_upload_for_multipart, [keyname, offset, mp, part_num, bytes])
    pool.close()
    pool.join()

def _upload_for_multipart(keyname, offset, mp, part_num, bytes):
    # code to upload each part
    # log the status of each part to log files

def _get_logger(pdir, ldir, lname, level, fmt):
    logs_dir = os.path.join(pdir, ldir)
    os.makedirs(logs_dir)
    logging.basicConfig(
        filename=os.path.join(logs_dir, lname),
        level=level,
        format=fmt,
    )
    return logging.getLogger(lname)

# under main
if __name__ == "__main__":
    logneeds = dict(pdir=exec_dir, ldir='logs', lname='s3_cli.log', level='INFO',
                    fmt='%(asctime)s %(levelname)s: %(message)s')
    logger = _get_logger(**logneeds)
The above code structure works on OS X and Linux but fails on Windows: it says that the name `logger` is not defined inside the `_upload_for_multipart` function. Is there a difference in how global variables are handled between Windows and Unix-based operating systems?
Edit: Added working example here