In my Java application I need to store a big table on the hard disk, as I want it to be persistent.
My first try looked like this (i and j can climb to 300,000 and more, so I would end up with an array of 300,000² double entries, which crashes my system):
stmt.executeUpdate("DROP TABLE IF EXISTS calculations;");
stmt.executeUpdate("CREATE TABLE calculations (factorA, factorB, result);");
double temp = 0;
for (i = 0; i < datasource.size(); i++) {
for (int j = 0; j < datasource.size(); j++) {
if (i != j) {
temp = calc(datasource.get(i),datasource.get(j));
stmt.execute("INSERT INTO calculations (factorA, factorB, result) VALUES ('"+i+"','"+j+"','"+temp+"')");
}
}
}
Now, this performs extremely slowly, probably because every SQL command is built as a string and executed individually.
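From what I've read, a PreparedStatement with placeholders should at least avoid building and re-parsing the SQL string for every row. This is what I mean, as an untested sketch (assuming conn is the java.sql.Connection that stmt came from):

PreparedStatement ps = conn.prepareStatement(
        "INSERT INTO calculations (factorA, factorB, result) VALUES (?, ?, ?)");
for (int i = 0; i < datasource.size(); i++) {
    for (int j = 0; j < datasource.size(); j++) {
        if (i != j) {
            ps.setInt(1, i);
            ps.setInt(2, j);
            ps.setDouble(3, calc(datasource.get(i), datasource.get(j)));
            ps.executeUpdate(); // still one round trip per row
        }
    }
}
ps.close();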
My new guess is that it's probably better to first calculate the results for, say, 10,000 values of i and THEN store them in the database as one unit.
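Concretely, I'm thinking of JDBC batching with auto-commit switched off, roughly like this (untested sketch; again conn is my Connection, and the batch size of 10,000 is just a guess):

conn.setAutoCommit(false);
PreparedStatement ps = conn.prepareStatement(
        "INSERT INTO calculations (factorA, factorB, result) VALUES (?, ?, ?)");
int pending = 0;
for (int i = 0; i < datasource.size(); i++) {
    for (int j = 0; j < datasource.size(); j++) {
        if (i != j) {
            ps.setInt(1, i);
            ps.setInt(2, j);
            ps.setDouble(3, calc(datasource.get(i), datasource.get(j)));
            ps.addBatch();
            if (++pending == 10000) { // flush every 10,000 rows as one transaction
                ps.executeBatch();
                conn.commit();
                pending = 0;
            }
        }
    }
}
ps.executeBatch(); // flush the remainder
conn.commit();
ps.close();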
But before I try to implement that, does anybody have a better idea? Using a database is not mandatory; I just want something that gives easy access and is quick to implement.
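For context, one non-database idea I had is a flat binary file where the byte offset encodes (i, j), so a lookup is a single seek. A rough sketch (file name made up; note that the full 300,000 × 300,000 matrix of doubles would be about 720 GB on disk, so this only helps if n stays much smaller):

// needs java.io.*
long n = datasource.size();
try (DataOutputStream out = new DataOutputStream(
        new BufferedOutputStream(new FileOutputStream("calculations.bin")))) {
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            // write every cell, including i == j, so the offset (i * n + j) * 8 stays valid
            out.writeDouble(i == j ? 0.0 : calc(datasource.get(i), datasource.get(j)));
        }
    }
}

// later, random access to the result for i = 42, j = 7:
try (RandomAccessFile raf = new RandomAccessFile("calculations.bin", "r")) {
    raf.seek((42L * n + 7) * 8);
    double result = raf.readDouble();
}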