One approach is to use a Git hook, e.g. a post-receive hook, to copy the nested stack templates to S3.
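For the hook option, a post-receive hook is just an executable script in the repository's hooks directory on the server. Below is a minimal Python sketch; the bucket name, the scratch checkout directory, and the templates/*.yaml layout are assumptions for illustration, not anything required by AWS:

#!/usr/bin/env python
# post-receive: export the pushed tree and copy the nested stack templates to S3.
import glob
import os
import subprocess

import boto3

BUCKET = 'my-nested-stacks-bucket'   # assumed bucket name, replace with yours
EXPORT_DIR = '/tmp/deploy-checkout'  # scratch directory for the exported tree

def main():
    os.makedirs(EXPORT_DIR, exist_ok=True)
    # The hook runs inside a bare repository, so check the latest commit
    # out into a scratch work tree first.
    subprocess.check_call(['git', '--work-tree=' + EXPORT_DIR, 'checkout', '-f'])

    s3 = boto3.client('s3')
    for path in glob.glob(os.path.join(EXPORT_DIR, 'templates', '*.yaml')):
        s3.upload_file(path, BUCKET, os.path.basename(path))

if __name__ == '__main__':
    main()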
Another is to add a stage to the pipeline that invokes a Lambda function. You can follow this article to configure that step. When you set the "input artifacts" field, CodePipeline passes the location of the artifacts zip file as part of the invocation event. The Lambda function then extracts the zip file and uploads your stack templates to your bucket.
Below is sample Python code that downloads and extracts the artifacts to /tmp:
import zipfile

import boto3
from botocore.exceptions import ClientError

def lambda_handler(event, context):
    s3 = boto3.resource('s3')
    codepipeline = boto3.client('codepipeline')

    # CodePipeline passes the S3 location of the input artifact in the event.
    artifacts_location = event["CodePipeline.job"]["data"]["inputArtifacts"][0]["location"]["s3Location"]
    jobId = event["CodePipeline.job"]["id"]

    try:
        print("Downloading artifacts")
        s3.Bucket(artifacts_location["bucketName"]).download_file(
            artifacts_location["objectKey"], '/tmp/artifacts.zip')
        with zipfile.ZipFile('/tmp/artifacts.zip', 'r') as zip_ref:
            zip_ref.extractall('/tmp')
    except ClientError as e:
        print("Cannot process the artifacts: {}".format(str(e)))
        # Tell CodePipeline the job failed so the stage is marked as failed.
        codepipeline.put_job_failure_result(
            jobId=jobId,
            failureDetails={"type": 'JobFailed', "message": str(e)}
        )
        return

    # Perform the steps to copy your files from the /tmp folder.

    codepipeline.put_job_success_result(jobId=jobId)
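For that last step, here is a minimal sketch of the upload, assuming the extracted nested stack templates are *.yaml files at the root of /tmp and that a hypothetical destination bucket name is passed in; the file pattern and the "templates/" prefix are illustrative only, so adjust them to your artifact layout:

import glob
import os

def upload_templates(s3, bucket_name):
    # Copy every extracted template from /tmp to the destination bucket.
    for path in glob.glob('/tmp/*.yaml'):
        key = 'templates/' + os.path.basename(path)
        s3.Bucket(bucket_name).upload_file(path, key)

You would call upload_templates(s3, your_bucket_name) just before put_job_success_result, reusing the s3 resource created in the handler.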