Until now I have been using a for loop to get all the elements on a page under a certain path with this script:
from time import sleep  # sleep() is used below

num = 1  # assuming num starts at 1 (XPath li[] indices are 1-based)
for username in range(range_for_like):
    # build the XPath for the num-th username link in the list
    link_username_like = "//article/div[2]/div[2]/ul/div/li[" + str(num) + "]/div/div[1]/div/div[1]/a[contains(@class, 'FPmhX notranslate zsYNt ')]"
    user = browser.find_element_by_xpath(link_username_like).get_attribute("title")
    num += 1
    sleep(0.3)
But sometimes my CPU usage goes above 100%, which is not ideal.
My solution was to find all the elements in one call using find_elements_by_xpath, but in doing so I can't figure out how to get all the "title" attributes.
I know that the path changes for every title (//article/div[2]/div[2]/ul/div/li[" + str(num) + "]/div/div[1]/div/div[1]/a), which is why I kept increasing the num variable, but how can I use this technique without a for loop?
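A minimal sketch of what I have in mind, assuming the li[...] index is the only part of the path that changes from row to row (the variable names here are just placeholders):

# dropping the li index should make one XPath match every username link at once
all_links_xpath = ("//article/div[2]/div[2]/ul/div/li"
                   "/div/div[1]/div/div[1]/a[contains(@class, 'FPmhX notranslate zsYNt ')]")
links = browser.find_elements_by_xpath(all_links_xpath)  # returns a list of WebElements
usernames = [link.get_attribute("title") for link in links]  # read the title of each one

The element search would then run only once; each get_attribute call is still a separate WebDriver round trip, but there is no per-element XPath lookup or sleep anymore.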
What's the most efficient way in terms of performance to get all the attributes? I don't mind even if it takes two minutes or more.
I also tried

titleuser = browser.find_elements_by_xpath('//@title').get_attribute("title")

but it doesn't work.
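I suspect this fails because find_elements_by_xpath returns a list (so get_attribute can't be called on it directly) and because '//@title' selects attribute nodes rather than elements. If the per-element calls turn out to be the bottleneck, another sketch I was considering is collecting every title in a single JavaScript call; the a.FPmhX selector is only an assumption taken from the class used in the XPath above:

# one round trip: let the browser itself collect all the title attributes
usernames = browser.execute_script(
    "return Array.from(document.querySelectorAll('a.FPmhX'))"
    ".map(a => a.getAttribute('title'));"
)

execute_script returns the JavaScript array as a Python list, so no per-element WebDriver calls are needed.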