I have two large (1000+ object) ArrayLists that I need to compare and manipulate. I essentially need to take a value from ArrayList A, look for a matching object in ArrayList B, then manipulate the object from B. I need to do this for every object in A, and I need to do it frequently in the application. Order is not known and the sizes will differ.
(pseudocode)
ArrayList<myObject> A
ArrayList<myObject> B
For each entity in A, I could loop through every single item in B looking for the one that matches. That just seems so inefficient.
(pseudocode)
for (each object in A){loop through all of B and find it}
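To make the cost concrete, here is a minimal sketch of that brute-force scan, using a hypothetical MyObject with a String key field myValue and a mutable counter standing in for whatever manipulation you do:

```java
import java.util.ArrayList;
import java.util.List;

public class BruteForceScan {
    // Hypothetical stand-in for myObject: a lookup key plus a mutable field.
    static class MyObject {
        final String myValue;
        int touched = 0;
        MyObject(String v) { myValue = v; }
    }

    // For each object in A, scan all of B for a matching key: O(|A| * |B|).
    static void process(List<MyObject> a, List<MyObject> b) {
        for (MyObject objA : a) {
            for (MyObject objB : b) {
                if (objA.myValue.equals(objB.myValue)) {
                    objB.touched++;  // "manipulate the object from B"
                    break;           // stop after the first match
                }
            }
        }
    }

    public static void main(String[] args) {
        List<MyObject> a = new ArrayList<>(List.of(new MyObject("x"), new MyObject("y")));
        List<MyObject> b = new ArrayList<>(List.of(new MyObject("y"), new MyObject("z")));
        process(a, b);
        System.out.println(b.get(0).touched);  // only the matching "y" object was touched
    }
}
```

With 1000 elements on each side, the inner loop runs up to a million times per pass.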
Would it be worth converting B to a HashMap (using the specific value I am comparing as the key, and the object as the value), then searching B that way, then converting that temporary HashMap back to an ArrayList when I'm done processing?
(pseudocode)
convert B to HashMap<myObject.myValue,myObject> C
for (each object in A){look up the value in C}
convert C back to an ArrayList
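The idea above might look something like this. One thing the sketch suggests (assuming the same hypothetical MyObject with a myValue key): the conversion back may be unnecessary, because the map only stores references to the objects B already holds, so mutating them through the map mutates B's contents too.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class IndexedLookup {
    // Hypothetical stand-in for myObject: a lookup key plus a mutable field.
    static class MyObject {
        final String myValue;
        int touched = 0;
        MyObject(String v) { myValue = v; }
    }

    static void process(List<MyObject> a, List<MyObject> b) {
        // Build the index once: O(|B|). Assumes myValue is unique within B;
        // duplicate keys would silently overwrite earlier entries.
        Map<String, MyObject> c = new HashMap<>();
        for (MyObject objB : b) {
            c.put(objB.myValue, objB);
        }
        // One amortized O(1) lookup per element of A: O(|A|) total.
        for (MyObject objA : a) {
            MyObject match = c.get(objA.myValue);
            if (match != null) {
                match.touched++;  // mutates the very object that B holds
            }
        }
        // No conversion back is needed: B already reflects the manipulation.
    }
}
```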
Is this a good idea? Or is this premature/unnecessary optimization? Thank you.
(Background: Data comes to me from the service as an ArrayList, and the frontend needs an ArrayList for the view layer. I'm trying to make this middle-tier processing more efficient, but the entry and exit objects must be an ArrayList, or some other List.)
Use a LinkedHashMap instead of an ArrayList. This will allow you to preserve the order (which is apparently important to you; otherwise you would just use a HashMap to begin with) and still have amortized O(1) lookup. The nested loop — foreach a in A: foreach b in B — will do about 500,000 comparisons on average if len(A) == len(B) == 1000. If you need to follow the pointers in each case and everything is uncached, this will take 50-100 ms. If you have better cache behavior, or don't need to follow the pointers, it will take less. How important are tens of milliseconds wherever this is called? Only you can answer that.
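A sketch of that suggestion, again assuming a hypothetical MyObject keyed by a String field myValue — LinkedHashMap iterates in insertion order, so B's original order survives the round trip back to an ArrayList:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class OrderedIndex {
    // Hypothetical stand-in for myObject.
    static class MyObject {
        final String myValue;
        int touched = 0;
        MyObject(String v) { myValue = v; }
    }

    static List<MyObject> process(List<MyObject> a, List<MyObject> b) {
        // LinkedHashMap preserves insertion order, so B's order is kept.
        Map<String, MyObject> c = new LinkedHashMap<>();
        for (MyObject objB : b) {
            c.put(objB.myValue, objB);
        }
        for (MyObject objA : a) {
            MyObject match = c.get(objA.myValue);  // amortized O(1)
            if (match != null) {
                match.touched++;
            }
        }
        // Hand an ArrayList back to the view layer, in B's original order.
        return new ArrayList<>(c.values());
    }
}
```

If the map can live across calls (the question says this happens frequently), building it once and reusing it amortizes the O(|B|) construction cost away entirely.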