
I'm looking for some help making my tests parallelizable. I have a C# setup that uses a combination of NUnit and Selenium to run tests in sequence, either locally on my machine or on the CI server.

I've looked into parallelizing my tests before but have never made the jump, and running in sequence was fine until now.

At the moment, when I add the NUnit [Parallelizable] attribute, I get an 'OpenQA.Selenium.WebDriverException : invalid session id' error. Based on the reading I've done, I need to make each new driver I create unique. However, I'm uncertain how to do this, or even where to start. Is this even possible within my current setup?

My tests currently perform limited smoke testing and remove the repetitive regression testing against multiple browsers; however, I foresee a need to vastly expand my test coverage.

I will probably look at BrowserStack or Sauce Labs in the long term, but that obviously requires funding, which I need to get signed off, so for now I will be looking to get this running locally.

Here is a look at the basic setup of my code.

Test files:

First .cs test file:

[TestFixture]
[Parallelizable]
public class FeatureTests1 : TestBase
{
    [Test]
    [TestCaseSource(typeof(TestBase), "TestData")]
    public void test1(string BrowserName, string Environment, string System)
    {
        Setup(BrowserName, Environment, System);

        //Run test steps....
    }

    [Test]
    [TestCaseSource(typeof(TestBase), "TestData")]
    public void test2(string BrowserName, string Environment, string System)
    {
        Setup(BrowserName, Environment, System);

        //Run test steps....
    }
}

Second .cs test file:

[TestFixture]
[Parallelizable]
public class FeatureTests2 : TestBase
{
    [Test]
    [TestCaseSource(typeof(TestBase), "TestData")]
    public void test1(string BrowserName, string Environment, string System)
    {
        Setup(BrowserName, Environment, System);

        //Run test steps....
    }

    [Test]
    [TestCaseSource(typeof(TestBase), "TestData")]
    public void test2(string BrowserName, string Environment, string System)
    {
        Setup(BrowserName, Environment, System);

        //Run test steps....
    }
}

TestBase.cs, where the setup for each test lives:

public class TestBase
{
    public static IWebDriver driver;

    public void Setup(string BrowserName, string Environment, string System)
    {
        Driver.Initialize(BrowserName);
        //do additional setup before test run...
    }

    [TearDown]
    public void CleanUp()
    {
        Driver.Close();
    }

    public static IEnumerable TestData
    {
        get
        {
            string[] browsers = Config.theBrowserList.Split(',');
            string[] Environments = Config.theEnvironmentList.Split(',');
            string[] Systems = Config.theSystemList.Split(',');
            foreach (string browser in browsers)
            {
                foreach (string Environment in Environments)
                {
                    foreach (string System in Systems)
                    {
                        yield return new TestCaseData(browser, Environment, System);
                    }
                }
            }
        }
    }
}

The TestData enumerable reads from a file called config.resx, which contains the following name/value pairs:

  • theBrowserList: Chrome,Edge,Firefox
  • theEnvironmentList: QA
  • theSystemList: WE

This is where I create my driver, in Driver.cs:

public class Driver
{
    public static IWebDriver Instance { get; set; }

    public static void Initialize(string browser)
    {
        string appDirectory = Directory.GetParent(AppDomain.CurrentDomain.BaseDirectory).Parent.Parent.Parent.FullName;
        string driverFolder = $"{appDirectory}/Framework.Platform/bin/debug";
        if (browser == "Chrome")
        {
            ChromeOptions chromeOpts = new ChromeOptions();
            chromeOpts.AddUserProfilePreference("safebrowsing.enabled", true);
            chromeOpts.AddArgument("start-maximized");
            chromeOpts.AddArgument("log-level=3");
            Instance = new ChromeDriver(driverFolder, chromeOpts);
        }
        else if (browser == "IE")
        {
            var options = new InternetExplorerOptions { EnsureCleanSession = true };
            options.AddAdditionalCapability("IgnoreZoomLevel", true);
            Instance = new InternetExplorerDriver(driverFolder, options);
            Instance.Manage().Window.Maximize();
        }
        else if (browser == "Edge")
        {
            EdgeOptions edgeOpts = new EdgeOptions();
            Instance = new EdgeDriver(driverFolder, edgeOpts);
            Instance.Manage().Window.Maximize();
            Instance.Manage().Cookies.DeleteAllCookies();
        }
        else if (browser == "Firefox")
        {
            FirefoxOptions firefoxOpts = new FirefoxOptions();
            Instance = new FirefoxDriver(driverFolder, firefoxOpts);
            Instance.Manage().Window.Maximize();
        }
        else { Assert.Fail($"Browser driver '{browser}' is not currently supported by the Initialize method"); }
    }

    public static void Close(string browser = "other")
    {
        if (browser == "IE")
        {
            Process[] ies = Process.GetProcessesByName("iexplore");
            foreach (Process ie in ies)
            {
                ie.Kill();
            }
        }
        else
        {
            Instance.Quit();
        }
    }
}

4 Answers

4

All your tests use the same driver, which is defined in TestBase as static. The two fixtures will run in parallel and will both affect the state of that one driver. If you want two tests to run in parallel, they cannot share mutable state; only constant or readonly values are safe to share.

The first thing to do would be to make the driver an instance member, so that each of the derived fixtures works with its own driver. If that doesn't solve the problem entirely, it will at least take you to the next step toward a solution.
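As a minimal sketch of that change, each fixture instance could own its own driver, created in the test's setup and disposed in [TearDown]. The names mirror the question's code; Chrome is hard-coded here purely for brevity, and the browser-switch logic from Driver.cs could move in unchanged:

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

public class TestBase
{
    // Instance member: each fixture object gets its own driver,
    // so parallel fixtures no longer share a WebDriver session.
    protected IWebDriver driver;

    public void Setup(string BrowserName, string Environment, string System)
    {
        // Sketch: create the driver per instance instead of assigning
        // to the static Driver.Instance (Chrome only, for brevity).
        driver = new ChromeDriver();
        //do additional setup before test run...
    }

    [TearDown]
    public void CleanUp()
    {
        // Quit() ends the session and shuts down the driver process.
        driver?.Quit();
    }
}
```

Because the field is no longer static, two fixtures running on different NUnit worker threads each hold a separate session id, which is exactly what the 'invalid session id' error was complaining about.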


2 Comments

Thank you for your answer. I understand that my different test threads are clashing; however, I'm uncertain how I should proceed with my next steps. I pulled the code together from various tutorials and other kind internet strangers, so it's very much been a learning exercise as well as work. Is there any reading you could point me toward, please?
I wish there were something. It would have to be authored by a web developer who also understands how NUnit works, and I'm the latter, not the former. I can tell you why your parallelism doesn't work (see above), but I don't have a full picture of what somebody in your shoes may want to do. If removing static from the driver doesn't help, please update your question and we'll try again.
1

Do not use static; that should help resolve your issue:

public IWebDriver Instance { get; set; }


using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

namespace Nunit_ParalelizeTest
{
    public class Base
    {
        protected IWebDriver _driver;

        [SetUp]
        public void Setup()
        {
            _driver = new ChromeDriver();
            _driver.Manage().Window.Maximize();
        }

        [TearDown]
        public void TearDown()
        {
            // Quit() ends the session; a separate Close() call first is unnecessary.
            _driver.Quit();
        }
    }
}
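Note that making the driver non-static only removes the shared state; for NUnit to actually run fixtures concurrently you still need [Parallelizable], typically at assembly level. A minimal sketch using standard NUnit attributes (the worker count of 4 is an arbitrary example, not from the answer above):

```csharp
using NUnit.Framework;

// Assembly-level attributes (e.g. in AssemblyInfo.cs or any .cs file):
// run test fixtures in parallel, on up to 4 worker threads.
[assembly: Parallelizable(ParallelScope.Fixtures)]
[assembly: LevelOfParallelism(4)]
```

With these in place, the per-fixture [Parallelizable] attributes in the question's test files become redundant but harmless.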


0

I see there is no [SetUp] attribute on the Setup method in TestBase. The invalid session error is caused by trying to close a window that isn't there. Also try replacing driver.Close() with driver.Quit().

1 Comment

There's no reason to put an attribute on a method just because its name is SetUp. It's called directly by the test code.
0

You should create the driver separately in each test; otherwise, NUnit opens only one driver for all instances. Hope this makes sense to you.
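If you want to keep the static Driver helper from the question rather than give every fixture its own field, one common pattern (a sketch, not taken from any answer here) is to back Instance with a ThreadLocal&lt;IWebDriver&gt;. NUnit runs parallel tests on separate worker threads, so each thread then sees its own driver:

```csharp
using System.Threading;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

public static class Driver
{
    // One driver per worker thread; parallel tests no longer collide
    // on a single shared session.
    private static readonly ThreadLocal<IWebDriver> _driver =
        new ThreadLocal<IWebDriver>();

    public static IWebDriver Instance
    {
        get { return _driver.Value; }
        set { _driver.Value = value; }
    }

    public static void Initialize(string browser)
    {
        // The browser-specific construction from the question goes here;
        // Chrome shown only for brevity.
        Instance = new ChromeDriver();
    }

    public static void Close()
    {
        Instance?.Quit();
        _driver.Value = null;
    }
}
```

Each test still calls Driver.Initialize(...) in its setup exactly as before, but the Instance it reads back is now private to its own thread, so the rest of the framework code can stay untouched.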

