
In my project, I am downloading all the reports by clicking each link, written as a date. Below is an image of the table. [Image: table to be scraped]

I have to extract a report for each date listed in the table column "Payment Date". Each date is a link to a report, so I am clicking the dates one by one to download each report.

import random
import time

for dt in driver.find_elements_by_xpath('//*[@id="tr-undefined"]/td[1]/span'):
    dt.click()
    time.sleep(random.randint(5, 10))

So, the process is: when I click one date, it downloads the report for that date; then I click the next date to get its report. So I made a for loop to iterate through all the links and download a report for every date.

But it is giving me a StaleElementReferenceException. After clicking the first date, it is not able to click the next date. I get the error and the code stops.

How can I solve this?

  • Two options to try: 1) instead of using time.sleep to hard-code a delay, use Selenium's wait-until-clickable or wait-until-visible (for example); 2) see if the data can be requested directly through an API. Commented Jul 12, 2021 at 8:19
  • Now, I'd like to know: when you click on the first date, is there any redirection? Because if there is, we need to automate that part as well. Commented Jul 12, 2021 at 9:01
  • @cruisepandey When I click on the first date, it will download a report and will stay on the same page. Commented Jul 12, 2021 at 11:04
  • To solve that problem, please take help from here: stackoverflow.com/questions/18225997/… Commented Jul 12, 2021 at 14:57

2 Answers


You're getting a stale element exception because the DOM is updating elements in your selection on each click.

An example: on click, a class "clicked" is appended to an element's class list. Since the list you selected contains elements that have changed (the 1st element has a new class), it throws an error.

A quick and dirty solution is to re-perform your query after each iteration. This is especially helpful if the list of values grows or shrinks with clicks.

# Create an anonymous function to re-use
# This function can contain any selector
get_elements = lambda: driver.find_elements_by_xpath('//*[@id="tr-undefined"]/td[1]/span')

i = 0
while True:
    elements = get_elements()

    # Exit when you're finished iterating
    if not elements or i >= len(elements):
        break

    # The freshly located element is safe to click
    elements[i].click()

    # sleep
    time.sleep(random.randint(5, 10))

    # Update your counter
    i += 1
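The re-query pattern can be illustrated without a browser. The stub classes below are hypothetical stand-ins for Selenium's WebElement behavior, showing why handles fetched before a re-render go stale while freshly located ones stay clickable:

```python
class StaleElementReferenceException(Exception):
    pass

class FakeElement:
    """Stand-in for a WebElement; goes stale when the page re-renders."""
    def __init__(self, page, label):
        self.page = page
        self.label = label
        self.stale = False

    def click(self):
        if self.stale:
            raise StaleElementReferenceException(self.label)
        self.page.clicks.append(self.label)
        self.page.rerender()  # clicking re-renders the table

class FakePage:
    """Stand-in for the driver: each re-render invalidates old handles."""
    def __init__(self, labels):
        self.labels = labels
        self.clicks = []
        self.elements = []
        self.rerender()

    def rerender(self):
        for el in self.elements:
            el.stale = True
        self.elements = [FakeElement(self, l) for l in self.labels]

    def find_elements(self):
        return list(self.elements)

page = FakePage(['2021-07-01', '2021-07-02', '2021-07-03'])

# Re-querying on every iteration always yields fresh, clickable handles
i = 0
while True:
    elements = page.find_elements()
    if i >= len(elements):
        break
    elements[i].click()
    i += 1

assert page.clicks == ['2021-07-01', '2021-07-02', '2021-07-03']
```

Caching `find_elements()` once before the loop would raise the stale-element error on the second click, which is exactly the failure described in the question.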



The simplest way to solve it is to get a specific link each time before clicking on it.

links = driver.find_elements_by_xpath('//*[@id="tr-undefined"]/td[1]/span')
for i in range(len(links)):
    # Re-locate the i-th link each time; note XPath indices are 1-based
    element = driver.find_element_by_xpath(f'(//*[@id="tr-undefined"]/td[1]/span)[{i + 1}]')
    element.click()
    time.sleep(random.randint(5, 10))
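One detail worth noting: XPath positions are 1-based while Python's loop index is 0-based, and the index must actually be interpolated into the locator string (e.g. with an f-string) rather than written literally as `i+1`. The generated locators can be checked without a browser; the helper name here is illustrative:

```python
def indexed_locator(i):
    # Offset the 0-based Python index to a 1-based XPath position
    return f'(//*[@id="tr-undefined"]/td[1]/span)[{i + 1}]'

locators = [indexed_locator(i) for i in range(3)]
print(locators[0])  # → (//*[@id="tr-undefined"]/td[1]/span)[1]
```

Passing the literal string `'...[i+1]'` to `find_elements_by_xpath` matches nothing (or the wrong node), and calling `.click()` on the returned list fails, which is what the comment below describes.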

1 Comment

I used this solution, but it returns a list of elements, which is not clickable.
