I have a simple web form that sends the contents of a div (an SVG document) to the following Python script. The script takes the SVG data and prints it to the browser. The client then saves it as a file:
#!/usr/bin/python
import cgi, cgitb, urllib
form = cgi.FieldStorage()
try:
    data = urllib.unquote(form.getvalue('output-data'))
    fn = urllib.unquote(form.getvalue('output-fn'))
except AttributeError:
    data = None
    fn = None
print "Content-Type: application/x-gzip"
print "Content-Disposition: attachment; filename=" + str(fn)
print "Content-Description: File to download\n"
print data
Here is the form:
<form id="form-figure-export-svg" action="assets/src/exportSvg.py" method="POST" style="display:none;" enctype="multipart/form-data">
    <input type="text" id="form-figure-svg-data" name="output-data" style="">
    <input type="text" id="form-figure-svg-fn" name="output-fn" style="">
</form>
The problem is that if the form contains a lot of data (around 450-500 KB; actually 524288 bytes, see the update below), then the data that the Python script prints is truncated, which corrupts the SVG output. Chrome and Safari truncate at different points; the latest version of Firefox appears to work correctly.
I verified that the data in the form is correct (e.g., console.log($('#form-figure-svg-data').val())) before it is submitted to the Python script.
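A server-side counterpart of that check (just a debugging sketch, not part of the script above; the dump path is only an example) would be to write what the script actually receives to a file and compare its size with the length reported by console.log:

#!/usr/bin/python
# Debugging sketch: dump the received field to a file so its size can be
# compared with the length logged on the client. The path is an example.
import cgi, urllib
form = cgi.FieldStorage()
received = urllib.unquote(form.getvalue('output-data') or '')
dump = open('/tmp/output-data.dump', 'w')
dump.write(received)
dump.close()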
Is there some parameter I need to add to the Python script, to ensure that the complete contents of data are printed?
Other things I have tried:
- Replacing the Content-Type directive with the MIME type image/svg+xml
- Adding the Content-Transfer-Encoding directive with a setting of binary
- Using the Python StringIO and gzip libraries to manually compress the SVG data before printing (see the sketch after this list)
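For reference, the StringIO/gzip variant was roughly along these lines (a sketch of the approach rather than the exact code; it replaces the print calls at the end of the script, and sys.stdout.write is used so that print does not append extra newlines to the binary output):

#!/usr/bin/python
import sys, cgi, urllib, StringIO, gzip

form = cgi.FieldStorage()
data = urllib.unquote(form.getvalue('output-data') or '')
fn = urllib.unquote(form.getvalue('output-fn') or 'output.svg')

# Compress the SVG into an in-memory buffer.
buf = StringIO.StringIO()
gz = gzip.GzipFile(fileobj=buf, mode='wb')
gz.write(data)
gz.close()

# Emit the headers and the gzipped bytes without print's implicit newlines.
sys.stdout.write("Content-Type: application/x-gzip\r\n")
sys.stdout.write("Content-Disposition: attachment; filename=" + fn + ".gz\r\n")
sys.stdout.write("\r\n")
sys.stdout.write(buf.getvalue())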
Update
On inspecting the output from WebKit browsers (Chrome and Safari), this might be due to a limit on the maximum length of an input field (524288 bytes). I'm investigating whether there is a way to remove this limit, or to use some other means of sending my data through the form without an artificial limit.
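A related check (again only a sketch; the log path is arbitrary) is to compare the request's CONTENT_LENGTH with the length of the parsed field. If CONTENT_LENGTH is already capped at roughly 524288 plus the other field and the multipart overhead, the truncation is happening in the browser before the data ever reaches the script:

#!/usr/bin/python
# Debugging sketch: if CONTENT_LENGTH is already short, the browser is
# truncating the field before submitting the form. The log path is an example.
import os, cgi
form = cgi.FieldStorage()
data = form.getvalue('output-data') or ''
log = open('/tmp/exportsvg-debug.log', 'a')
log.write('CONTENT_LENGTH=%s, output-data=%d bytes\n'
          % (os.environ.get('CONTENT_LENGTH'), len(data)))
log.close()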