I am getting this error: AttributeError: 'Web_scraping' object has no attribute '_Web_scraping__headless'

import time
import os
from scraping.browser_manager import constants as const
from selenium import webdriver 
from selenium.webdriver.common.by import By

from scraping.browser_manager.automate_browser import Browser_bot

class Web_scraping():
    
    def __init__(self):

        # self.scraper instance
        self.scraper = Browser_bot(headless=self.__headless)

    def accept_cookies(self):
        cookies = self.find_element(By.CSS_SELECTOR,
           '.sui-TcfFirstLayer-buttons > button:nth-child(2)'
        )
        time.sleep(5)
        cookies.click()

and this is run.py:

from scraping.scraper import Web_scraping

try:

    with Web_scraping() as bot:
         bot.accept_cookies()

1 Answer

First of all, it should be:

class WebScraping:

instead of:

class Web_scraping():

(The parentheses are redundant, and PEP 8's naming conventions recommend CapWords, i.e. CamelCase, names for classes.)
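
As for the AttributeError itself: __init__ reads self.__headless without ever assigning it. Python name-mangles double-underscore attributes inside a class body, so __headless becomes _Web_scraping__headless, which is exactly the name in your traceback, and since nothing ever sets it, the lookup fails. Here is a minimal sketch of one way past it that keeps your Browser_bot wrapper (the headless parameter, its default, and the empty __exit__ are assumptions on my part, since I don't know what Browser_bot exposes):

from scraping.browser_manager.automate_browser import Browser_bot


class WebScraping:

    def __init__(self, headless=False):
        # Assign the attribute before reading it; inside this class it is
        # mangled to _WebScraping__headless.
        self.__headless = headless
        self.scraper = Browser_bot(headless=self.__headless)

    # run.py uses the class as a context manager ("with ... as bot:"),
    # so __enter__/__exit__ are needed too, otherwise you get a
    # different error there.
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Shut the browser down here with whatever cleanup method
        # Browser_bot provides; left empty because I don't know its API.
        pass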

Anyway, I wouldn't use the scraping module at all (I don't see why you want it, to be honest) and would do something like this:

from selenium import webdriver
from selenium.webdriver.firefox.options import Options as FirefoxOptions
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


class WebScraping:
    def __init__(self):
        options = FirefoxOptions()
        # options.add_argument("--headless")
        self.scraper = webdriver.Firefox(options=options)
        self.scraper.get("https://www.google.com/")

    def accept_cookies(self):
        cookies = WebDriverWait(self.scraper, 30).until(
            EC.element_to_be_clickable((By.XPATH, "//div[@class='QS5gu sy4vM']"))
        )
        cookies.click()


x = WebScraping()
x.accept_cookies()

(Note that I used an explicit wait instead of the cruder time.sleep().)


3 Comments

Change that dynamic XPath; it looks fragile (see the sketch after these comments).
It is just a random example; he wasn't even trying to scrape that page!
I am calling the modules scraping.scraper because that is where I keep the automation class for the browser.
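
On the first comment, here is a sketch of a less brittle locator for the same accept_cookies method (it reuses the imports from the snippet in the answer; the "Accept all" text is an assumption and depends on the page's language and version):

    def accept_cookies(self):
        # Keyed to the button's visible text rather than Google's
        # generated class names, which change between builds.
        WebDriverWait(self.scraper, 30).until(
            EC.element_to_be_clickable(
                (By.XPATH, "//button[contains(., 'Accept all')]")
            )
        ).click()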
