import requests
from bs4 import BeautifulSoup

url = "https://technet.microsoft.com/enus/library/hh135098(v=exchg.150).aspx"
r = requests.get(url)
soup = BeautifulSoup(r.content, 'lxml')
table = soup.find_all('table', attrs={"responsive": "true"})[0]
for rows in table.find_all('tr')[1:2]:
    item = []
    for val in rows.find_all('td'):
        item.append(val.text.strip())

I am attempting to loop this over 4 different tables on the same website, but I can't figure out how to write the loop. I have done some research but can't work out what to do.

The 4 tables are at indexes 0, 1, 2, and 6. I have tried slicing the data to include them, but nothing seems to work.

1 Answer
You can find all tables matching your filtering criteria, use enumerate() to get the indexes, and "filter out" tables at undesired indexes:

desired_indexes = {0, 1, 2, 6}
tables = soup.find_all('table', attrs={"responsive": "true"})
for index, table in enumerate(tables):
    if index not in desired_indexes:
        continue

    # do something with table

In general, though, relying on the occurrence index of an element on a page does not sound like a reliable technique to locate an element on a page.
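One more robust alternative is to locate each table relative to a stable landmark, such as the section heading that precedes it. The sketch below assumes hypothetical markup in which each table follows an <h2> heading (the heading text and HTML here are made up for illustration, and it uses the stdlib "html.parser" rather than lxml so it runs without extra dependencies):

```python
from bs4 import BeautifulSoup

# Hypothetical page structure: each table sits under an <h2> heading.
html = """
<h2>Mailbox permissions</h2>
<table responsive="true"><tr><td>a</td></tr></table>
<h2>Changelog</h2>
<table responsive="true"><tr><td>b</td></tr></table>
"""
soup = BeautifulSoup(html, "html.parser")

# Select tables by the heading text instead of by positional index.
wanted_headings = {"Mailbox permissions"}
tables = []
for heading in soup.find_all("h2"):
    if heading.get_text(strip=True) in wanted_headings:
        # find_next() returns the first <table> appearing after this heading
        table = heading.find_next("table")
        if table is not None:
            tables.append(table)

print(len(tables))  # 1
```

If the page layout changes and a new table is inserted, index-based selection silently picks up the wrong tables, whereas heading-based selection either still finds the right ones or fails loudly.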


1 Comment

Thankfully these indexes do not change, and these scripts only run once a month; they are low-priority and just have to work. This worked perfectly, thank you.
