
I have been searching loads of forums on how to use a proxy in Python with the Selenium library to prevent a "max number" timeout while web scraping.

I found the script below in many forums, but it just doesn't seem to work for me at all. Could anyone please help me and give me some advice on how to implement a proxy in Chrome through Python with Selenium?

Thanks a lot!

SCRIPT:

from selenium.webdriver.chrome.options import Options
from selenium import webdriver

chromedriver = directory....  # path to the chromedriver executable
PROXY = "177.202.59.58:8080"

# Route all Chrome traffic through the proxy.
chrome_options = Options()
chrome_options.add_argument('--proxy-server=%s' % PROXY)

chrome = webdriver.Chrome(chromedriver, options=chrome_options)
chrome.get("https://whatismyipaddress.com")
  • Update the question with the "max number" timeout error. Commented Dec 19, 2021 at 17:03

1 Answer


There's nothing wrong with your code. That proxy is just not available/not working anymore. Try to find another proxy with better uptime. Keep in mind that public proxies have noticeable latency, so the page will load pretty slowly.
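Not part of the original answer, but as a rough sketch: you could check that a proxy actually responds before handing it to Chrome, so you don't waste a Selenium session on a dead one. This assumes the requests package is installed; the test URL https://httpbin.org/ip and the 10-second timeout are arbitrary choices.

import requests

PROXY = "177.202.59.58:8080"  # same proxy as in the question

def proxy_is_alive(proxy, timeout=10):
    # Try to fetch a lightweight page through the proxy; any network
    # error or timeout is treated as "proxy not usable".
    proxies = {"http": "http://" + proxy, "https": "http://" + proxy}
    try:
        return requests.get("https://httpbin.org/ip",
                            proxies=proxies, timeout=timeout).ok
    except requests.RequestException:
        return False

if proxy_is_alive(PROXY):
    print("Proxy responds; pass it to Chrome.")
else:
    print("Proxy is down or too slow; pick another one.")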



1 Comment

Sorry for my late reply, many thanks for your help!
