I'm trying to scrape this webpage using Python: https://fftoolbox.scoutfantasysports.com/football/rankings/PrintVersion.php
I've been using the requests package. I can "solve" the issue by setting verify=False, but I've read that that's insecure. In other threads, people said to point the requests.get() function to the filepath of the relevant certificate. I exported the certificate from my browser and then tried that, but with no luck. This
requests.get('https://fftoolbox.scoutfantasysports.com/football/rankings/PrintVersion.php', verify='C:/Users/ericb/Desktop/fftoolboxscoutfantasysportscom.crt')
still gives the SSL error:
SSLError: HTTPSConnectionPool(host='fftoolbox.scoutfantasysports.com', port=443): Max retries exceeded with url: /football/rankings/PrintVersion.php (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)",),))
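Since the export came from my browser, I'm not even sure the file is in the right format. As a sanity check (just a sketch), I'm planning to peek at the file, since PEM certificates are Base64 text starting with a BEGIN line, while DER exports are binary:

# Sketch: check whether the exported file is PEM (text) or DER (binary).
with open('C:/Users/ericb/Desktop/fftoolboxscoutfantasysportscom.crt', 'rb') as f:
    print(f.read(32))  # PEM starts with b'-----BEGIN CERTIFICATE-----'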
And this
requests.get('https://fftoolbox.scoutfantasysports.com/football/rankings/PrintVersion.php', cert='C:/Users/ericb/Desktop/fftoolboxscoutfantasysportscom.crt')
yields
Error: [('PEM routines', 'PEM_read_bio', 'no start line'), ('SSL routines', 'SSL_CTX_use_PrivateKey_file', 'PEM lib')]
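From the requests docs, I gather that cert= is for supplying a client-side certificate and private key, which would explain the complaint about a PEM private key, so verify= is probably the parameter I actually want. My current guess (only a sketch; fftoolbox_chain.pem is a placeholder filename) is that verify= needs a PEM bundle containing the full chain (the site certificate plus its intermediate and root CA certs), not just the leaf certificate I exported:

import requests

URL = 'https://fftoolbox.scoutfantasysports.com/football/rankings/PrintVersion.php'

# Sketch: verify= pointing at a PEM bundle holding the whole chain, exported
# from the browser in Base64/PEM format. The filename is a placeholder.
resp = requests.get(URL, verify='C:/Users/ericb/Desktop/fftoolbox_chain.pem')
print(resp.status_code)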
I've done a decent amount of web scraping before, but I've never had to deal with certificates until now. How can I get around this? I should also note that I'd like to put my final Python script and any files it uses into a public GitHub repo, but I don't want to do anything that would jeopardize my security, like uploading keys or something.
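For what it's worth, this is roughly how I'd want the published script to look, with the certificate file stored relative to the repo (the certs/fftoolbox_chain.pem path is a placeholder). My understanding is that a server's certificate is public information, so committing one should be safe, unlike a private key, but I'd like confirmation:

import os
import requests

URL = 'https://fftoolbox.scoutfantasysports.com/football/rankings/PrintVersion.php'

# Hypothetical repo layout: the PEM bundle lives in a certs/ folder next to
# this script, so the path resolves for anyone who clones the repo.
HERE = os.path.dirname(os.path.abspath(__file__))
resp = requests.get(URL, verify=os.path.join(HERE, 'certs', 'fftoolbox_chain.pem'))
resp.raise_for_status()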