Since you scrape a lot, you'll probably need a reliable proxy provider. Most providers offer their own API for authentication and for routing requests through their proxy pool.
I've got this piece of code (Java) from Luminati.io:
package example;

import org.apache.http.HttpHost;
import org.apache.http.client.fluent.*;

public class Example {
    public static void main(String[] args) throws Exception {
        HttpHost proxy = new HttpHost("zproxy.luminati.io", 22225);
        String res = Executor.newInstance()
                .auth(proxy, "lum-customer-CUSTOMER-zone-YOURZONE", "YOURPASS")
                .execute(Request.Get("http://www.telize.com/geoip").viaProxy(proxy))
                .returnContent().asString();
        System.out.println(res);
    }
}
There are more complex examples in their documentation.
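If you don't want to pull in Apache HttpClient, the same authenticated-proxy request can be sketched with only the JDK's `java.net` classes. This is a minimal sketch, not provider-specific code; the gateway host, port, and credentials below are just the placeholders from the Luminati snippet above:

```java
import java.io.IOException;
import java.net.Authenticator;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.PasswordAuthentication;
import java.net.Proxy;
import java.net.URL;

public class PlainJdkProxy {
    // Build a java.net.Proxy pointing at the provider's HTTP gateway.
    static Proxy gateway(String host, int port) {
        return new Proxy(Proxy.Type.HTTP, new InetSocketAddress(host, port));
    }

    public static void main(String[] args) {
        // Proxy credentials -- same placeholders as the Luminati example.
        Authenticator.setDefault(new Authenticator() {
            @Override
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication(
                        "lum-customer-CUSTOMER-zone-YOURZONE",
                        "YOURPASS".toCharArray());
            }
        });

        Proxy proxy = gateway("zproxy.luminati.io", 22225);
        try {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://www.telize.com/geoip").openConnection(proxy);
            conn.setConnectTimeout(5000);
            System.out.println("HTTP " + conn.getResponseCode());
        } catch (IOException e) {
            // Expected while the gateway/credentials are still placeholders.
            System.out.println("request failed: " + e.getMessage());
        }
    }
}
```

One caveat: newer JDKs restrict Basic authentication for proxies in some configurations, so if authentication silently fails, check the `jdk.http.auth.*.disabledSchemes` system properties.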
Non-professional scraping
If you just want to plug in an arbitrary proxy IP for a quick test, you can configure it through a Firefox profile:
FirefoxProfile profile = new FirefoxProfile();
String host = "149.215.113.110";
int port = 9150;
profile.setPreference("network.proxy.type", 1); // 1 = manual proxy configuration
profile.setPreference("network.proxy.http", host);
profile.setPreference("network.proxy.http_port", port);
WebDriver driver = new FirefoxDriver(profile);
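Random public proxies die frequently, so before wiring one into the profile it helps to check that the port even accepts TCP connections. A minimal sketch using only `java.net` (the host and port are the same ones plugged in above; the `isReachable` helper is my own, not a Selenium API):

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class ProxyReachable {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (Exception e) {
            // Unresolvable host, refused connection, or timeout all count as dead.
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isReachable("149.215.113.110", 9150, 2000));
    }
}
```

This only proves the port is open, not that the proxy actually forwards traffic, but it filters out the majority of dead entries before you start a browser session.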