145

I am trying to execute a curl command within a python script.

If I do it in the terminal, it looks like this:

curl -X POST -d  '{"nw_src": "10.0.0.1/32", "nw_dst": "10.0.0.2/32", "nw_proto": "ICMP", "actions": "ALLOW", "priority": "10"}' http://localhost:8080/firewall/rules/0000000000000001

I've seen recommendations to use pycurl, but I couldn't figure out how to apply it to mine.
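
For reference, a rough pycurl equivalent of that command (an untested sketch, assuming pycurl is installed) would look something like:

import pycurl
from io import BytesIO

buf = BytesIO()
c = pycurl.Curl()
c.setopt(c.URL, 'http://localhost:8080/firewall/rules/0000000000000001')
# POSTFIELDS implies a POST request, mirroring curl -X POST -d '...'
c.setopt(c.POSTFIELDS, '{"nw_src": "10.0.0.1/32", "nw_dst": "10.0.0.2/32", "nw_proto": "ICMP", "actions": "ALLOW", "priority": "10"}')
c.setopt(c.WRITEDATA, buf)  # collect the response body
c.perform()
c.close()
print(buf.getvalue().decode('utf-8'))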

I tried using:

subprocess.call([
    'curl',
    '-X',
    'POST',
    '-d',
    flow_x,
    'http://localhost:8080/firewall/rules/0000000000000001'
])

and it works, but is there a better way?

2 Comments

  • You don't have to use cURL to POST something to a server. requests can do so quite easily (as can urllib, with a bit more effort). Commented Sep 23, 2014 at 16:50
  • Check this to know more about executing shell commands in Python: stackoverflow.com/questions/89228/… Commented Sep 23, 2014 at 16:51

9 Answers

285

Don't!

I know, that's the "answer" nobody wants. But if something's worth doing, it's worth doing right, right?

This seeming like a good idea probably stems from a fairly wide misconception that shell commands such as curl are anything other than programs themselves.

So what you're asking is "how do I run this other program, from within my program, just to make a measly little web request?". That's crazy; there's got to be a better way, right?

Uxio's answer works, sure. But it hardly looks very Pythonic, does it? That's a lot of work just for one little request. Python's supposed to be about flying! Anyone writing that is probably wishing they just call'd curl!


it works, but is there a better way?

Yes, there is a better way!

Requests: HTTP for Humans

Things shouldn’t be this way. Not in Python.

Let's GET this page:

import requests
res = requests.get('https://stackoverflow.com/questions/26000336')

That's it, really! You then have the raw res.text, or res.json() output, the res.headers, etc.

You can see the Requests docs for details of setting all the options, since I imagine OP has moved on by now, and you - the reader now - likely need different ones.

But, for example, it's as simple as:

url     = 'http://example.tld'
payload = { 'key' : 'val' }
headers = {}
res = requests.post(url, data=payload, headers=headers)
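
For the OP's original command specifically, a sketch (assuming the firewall endpoint accepts a JSON body, as the curl call implies) could be:

import requests

rule = {"nw_src": "10.0.0.1/32", "nw_dst": "10.0.0.2/32",
        "nw_proto": "ICMP", "actions": "ALLOW", "priority": "10"}
# json= serialises the dict and sets the Content-Type header for you
res = requests.post('http://localhost:8080/firewall/rules/0000000000000001', json=rule)
print(res.status_code, res.text)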

You can even use a nice Python dict to supply the query string in a GET request with params={}.

Simple and elegant. Keep calm, and fly on.


20 Comments

I am using python 2.4.3. can't use requests. ImportError: No module named requests.
@Gary pip install requests
@marciokoko Most definitely! :) requests is "just" HTTP under the hood. Use it as you would curl, but fly faster with Python.
+1. To the best of my knowledge, anything that can be done with cURL can also be done via python requests. Might as well use that one instead.
this worked perfectly for what I came looking for. Thanks @OJFord
70

Use this tool (hosted here for free) to convert your curl command to equivalent Python requests code:

Example: This,

curl 'https://www.example.com/' -H 'Connection: keep-alive' -H 'Cache-Control: max-age=0' -H 'Origin: https://www.example.com' -H 'Accept-Encoding: gzip, deflate, br' -H 'Cookie: SESSID=ABCDEF' --data-binary 'Pathfinder' --compressed

Gets converted neatly to:

import requests

cookies = {
    'SESSID': 'ABCDEF',
}

headers = {
    'Connection': 'keep-alive',
    'Cache-Control': 'max-age=0',
    'Origin': 'https://www.example.com',
    'Accept-Encoding': 'gzip, deflate, br',
}

data = 'Pathfinder'

response = requests.post('https://www.example.com/', headers=headers, cookies=cookies, data=data)
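
These are all standard attributes of the requests Response object, so a quick sanity check of the converted call is just:

print(response.status_code)
print(response.headers.get('Content-Type'))
print(response.text)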

1 Comment

While @OJFord has enlightened us why not to use curl within python, Nitin has depicted the simplest way to implement the same using "requests". I highly recommend this answer.
46

You could use urllib2 (the Python 2 standard library), as @roippi suggested:

import urllib2
data = '{"nw_src": "10.0.0.1/32", "nw_dst": "10.0.0.2/32", "nw_proto": "ICMP", "actions": "ALLOW", "priority": "10"}'
url = 'http://localhost:8080/firewall/rules/0000000000000001'
req = urllib2.Request(url, data, {'Content-Type': 'application/json'})
f = urllib2.urlopen(req)
for x in f:
    print(x)
f.close()
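
On Python 3, where urllib2 no longer exists and the body must be bytes (see the TypeError in the comments below), a roughly equivalent sketch uses urllib.request:

import urllib.request

data = b'{"nw_src": "10.0.0.1/32", "nw_dst": "10.0.0.2/32", "nw_proto": "ICMP", "actions": "ALLOW", "priority": "10"}'
url = 'http://localhost:8080/firewall/rules/0000000000000001'
req = urllib.request.Request(url, data, {'Content-Type': 'application/json'})
with urllib.request.urlopen(req) as f:
    print(f.read().decode('utf-8'))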

3 Comments

is urllib2 more time efficient compared to subprocess?
It depends on the subprocess, but spawning subprocesses to call commands when the language has core libraries that do the same thing is definitely not the right way to do it.
TypeError: POST data should be bytes, an iterable of bytes, or a file object. It cannot be of type str.
40

If you are not tweaking the curl command too much, you can also call it directly:

import shlex
import subprocess

cmd = '''curl -X POST -d  '{"nw_src": "10.0.0.1/32", "nw_dst": "10.0.0.2/32", "nw_proto": "ICMP", "actions": "ALLOW", "priority": "10"}' http://localhost:8080/firewall/rules/0000000000000001'''
args = shlex.split(cmd)
process = subprocess.Popen(args, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
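
If you need the exit status or the response body afterwards, both are available once communicate() returns:

# returncode is populated after communicate() finishes
if process.returncode == 0:
    print(stdout.decode('utf-8'))
else:
    print('curl failed:', stderr.decode('utf-8'))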

3 Comments

Thanks, I implemented it using subprocess.call()
The suggestion from @Ollie Ford helped me a lot: I installed Requests under W10, started Python from the command line, and composed my request URL. Now I have to figure out how to set up (and view the contents of) a stream in a .py file. Suggestions welcome!
"subprocess" is not defined
14

Try with subprocess

CurlUrl="curl 'https://www.example.com/' -H 'Connection: keep-alive' -H 'Cache- 
          Control: max-age=0' -H 'Origin: https://www.example.com' -H 'Accept-Encoding: 
          gzip, deflate, br' -H 'Cookie: SESSID=ABCDEF' --data-binary 'Pathfinder' -- 
          compressed"

Use getstatusoutput to store the results (subprocess.getstatusoutput is Python 3; on Python 2 the same function lives in the commands module):

status, output = subprocess.getstatusoutput(CurlUrl)


7

You can use the code snippet below:

import shlex
import subprocess
import json

def call_curl(curl):
    args = shlex.split(curl)
    process = subprocess.Popen(args, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = process.communicate()
    return json.loads(stdout.decode('utf-8'))


if __name__ == '__main__':
    curl = '''curl -X POST -d '{"nw_src": "10.0.0.1/32", "nw_dst": "10.0.0.2/32", "nw_proto": "ICMP", "actions": "ALLOW", "priority": "10"}' http://localhost:8080/firewall/rules/0000000000000001'''
    output = call_curl(curl)
    print(output)


3

Rephrasing one of the answers in this post: instead of using cmd.split(), try:

import shlex

args = shlex.split(cmd)

Then feed args to subprocess.Popen.

Check this doc for more info: https://docs.python.org/2/library/subprocess.html#popen-constructor


1

Inside the subprocess module there is one more option, called run. Use it:

from subprocess import run
run(['curl', '-X', 'POST', '-d',
     '{"nw_src": "10.0.0.1/32", "nw_dst": "10.0.0.2/32", "nw_proto": "ICMP", "actions": "ALLOW", "priority": "10"}',
     'http://localhost:8080/firewall/rules/0000000000000001'])
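
If you also want the response body back in Python, run can capture it (a sketch; capture_output and text require Python 3.7+, and -sS just silences curl's progress output while keeping errors):

from subprocess import run

result = run(
    ['curl', '-sS', '-X', 'POST', '-d',
     '{"nw_src": "10.0.0.1/32", "nw_dst": "10.0.0.2/32", "nw_proto": "ICMP", "actions": "ALLOW", "priority": "10"}',
     'http://localhost:8080/firewall/rules/0000000000000001'],
    capture_output=True, text=True, check=True)
print(result.stdout)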


0

With Python 3, the built-in HTTP protocol client is a viable alternative to cURL. Using the example provided:

>>> import http.client, urllib.parse
>>> params = urllib.parse.urlencode({"nw_src": "10.0.0.1/32", "nw_dst": "10.0.0.2/32", "nw_proto": "ICMP", "actions": "ALLOW", "priority": "10"})
>>> headers = {"Content-type": "application/x-www-form-urlencoded", "Accept": "text/plain"}
>>> conn = http.client.HTTPConnection("localhost:8080")
>>> conn.request("POST", "/firewall/rules/0000000000000001", params, headers)
>>> response = conn.getresponse()
>>> print(response.status, response.reason)
302 Found
>>> data = response.read()
>>> conn.close()
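
Note that the OP's curl command posts a JSON body rather than a form-encoded one; a variant of the same http.client call for that case could look like this (sketch):

import http.client, json

body = json.dumps({"nw_src": "10.0.0.1/32", "nw_dst": "10.0.0.2/32",
                   "nw_proto": "ICMP", "actions": "ALLOW", "priority": "10"})
conn = http.client.HTTPConnection("localhost:8080")
conn.request("POST", "/firewall/rules/0000000000000001", body,
             {"Content-Type": "application/json"})
response = conn.getresponse()
print(response.status, response.reason)
print(response.read().decode('utf-8'))
conn.close()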

