I am performing web scraping via Python / Selenium / headless Chrome driver, which involves executing a loop:
import json
from bs4 import BeautifulSoup
# (driver is an already-initialised headless Chrome webdriver)

# perform loop
CustId = 2000
while CustId <= 3000:
    # Part 1: Customer REST call:
    urlg = f'https://mywebsite.com/customerRest/show/?id={CustId}'
    driver.get(urlg)
    soup = BeautifulSoup(driver.page_source, "lxml")
    dict_from_json = json.loads(soup.find("body").text)
    # logic for web scraping is here......
    CustId = CustId + 1

# close driver at end of everything
driver.close()
However, sometimes the page might not exist for a certain customer ID. I have no control over this, and the code stops with a "page not found" 404 error. How do I ignore this and just move on with the loop?
I'm guessing I need a try...except?
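Something like this is what I have in mind, but I'm not sure it's the right approach (`parse_customer` is just a name I made up, and I'm guessing at the exception type — the 404 page body wouldn't be valid JSON, so `json.loads` would fail there):

```python
import json

def parse_customer(page_text):
    # Hypothetical helper: try to parse the page body as JSON and
    # return None when it isn't valid JSON (e.g. a 404 error page).
    try:
        return json.loads(page_text)
    except ValueError:  # json.JSONDecodeError is a subclass of ValueError
        return None

# Intended use inside the loop (driver / BeautifulSoup parts omitted):
#     data = parse_customer(soup.find("body").text)
#     if data is None:
#         CustId = CustId + 1
#         continue
```

Is that the general idea, or should the try...except wrap the `driver.get()` call instead?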