final class Combination {

    public static void findCombinations(String[][] sets) {
        int combinations = 1;
        for(int i = 0; i < sets.length; combinations *= sets[i].length, i++);
        for(int i = 0; i < combinations; i++) {
            int j = 1;
            for(String[] set : sets) {
                System.out.print(set[(i/j)%set.length] + " ");
                j *= set.length;
            }
            System.out.println();
        }
    }

    public static void main(String[] args) {
        findCombinations(new String[][]{{"a","b","c"}, {"d","e","i"}, {"f","g","h"}});
    }
}

My output looks like this:

   a d f 
   b d f 
   c d f 
   a e f 
   b e f 
   c e f 
   a i f 
   b i f 
   c i f 
   a d g 
   b d g 
   c d g 
   a e g 
   b e g 
   c e g 
   a i g 
   b i g 
   c i g 
   a d h 
   b d h 
   c d h 
   a e h 
   b e h 
   c e h 
   a i h 
   b i h 
   c i h 

I want to know the time complexity of my solution, and whether there are any ways to improve it.

  • Many people I know would deem it better style to move combinations *= sets[i].length into the body of the loop, rather than having it in the head. Commented Apr 22, 2018 at 14:39
  • @ReazMurshed The complexity is definitely not O(n^2) for any reasonable definition of n. It's exponential. Commented Apr 22, 2018 at 14:47
  • I know it's been over a year, but you might want to check out Guava's implementation : guava.dev/releases/22.0/api/docs/com/google/common/collect/… Commented Jul 15, 2019 at 13:20
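The first comment's style suggestion (moving the multiplication out of the for-header and into a loop body) can be sketched as follows; the class and method names here are illustrative, not from the original post:

```java
// Counting the combinations with the multiplication in the loop body,
// as the first comment suggests, instead of in the for-statement header.
final class CountCombinations {

    static int countCombinations(String[][] sets) {
        int combinations = 1;
        for (String[] set : sets) {
            combinations *= set.length;  // one factor per input array
        }
        return combinations;
    }

    public static void main(String[] args) {
        System.out.println(countCombinations(
                new String[][]{{"a","b","c"}, {"d","e","i"}, {"f","g","h"}}));  // 27
    }
}
```

The enhanced for loop also removes the need for the index variable entirely.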

1 Answer


It's O(|first| * |second| * |third| * ...), and you can't improve on that bound; it's Theta, not only O.

The output alone is that big (27 = 3 * 3 * 3 in your example), and you need to produce every result, so you can't do better than the size of the output. That establishes the Omega bound.

The O part is pretty straightforward, since every sub-operation your code performs is in Theta(1), so we only need to count the loop iterations. Your inner-most loop generates the results, one print per result. So your algorithm is optimal: one iteration per correct result, no useless tuples generated and thrown away, and no non-constant operations in between. Since the number of results alone is in the mentioned complexity, your code is too.


For the precise bound we need to include the size of every input array, as seen above. But if you would rather have a single size variable, say n, you can bound the other sizes by the size of the biggest array:

n = max(|first|, |second|, |third|, ...)

and then you get

Theta(n^x)

where x is the number of arrays you pass in. So in your example it would be Theta(n^3).
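The same Theta(|first| * |second| * ...) enumeration can also be written recursively, which some find easier to read than the mixed-radix index arithmetic in the question. This is a sketch, not the answerer's code; the names cartesianProduct and collect are invented for illustration, and note that this ordering varies the last set fastest, unlike the question's output:

```java
import java.util.ArrayList;
import java.util.List;

// Recursive enumeration of the cartesian product: one recursive call per
// partial prefix, one list entry per complete combination.
final class CartesianSketch {

    static List<String> cartesianProduct(String[][] sets) {
        List<String> results = new ArrayList<>();
        collect(sets, 0, "", results);
        return results;
    }

    // Fix one element of sets[depth], then recurse on the remaining arrays.
    private static void collect(String[][] sets, int depth, String prefix, List<String> results) {
        if (depth == sets.length) {
            results.add(prefix.trim());
            return;
        }
        for (String element : sets[depth]) {
            collect(sets, depth + 1, prefix + element + " ", results);
        }
    }

    public static void main(String[] args) {
        List<String> all = cartesianProduct(new String[][]{{"a","b"}, {"c","d"}});
        System.out.println(all.size());  // 2 * 2 = 4 results
        System.out.println(all);
    }
}
```

The work done is still one Theta(1) step per emitted combination (plus one short path of recursive calls), so the overall bound is unchanged.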


4 Comments

I understand the logic of your answer, which is correct, but I benchmarked both this solution and another one of mine against this: guava.dev/releases/22.0/api/docs/com/google/common/collect/… The execution time of the Guava version is almost constant regardless of the length of the array(s).
I do not see a conflict here. Big-O analysis does not talk about actual wall-clock time; constant factors are completely omitted. One implementation could get worse than another only after inputs of unrealistic size, for example. Big-O does not care about that at all.
I know execution time differs, as complexity notation is defined in terms of the number of operations, but that is why I made the comment about Guava's implementation. I am still trying to understand their implementation, as I am not very experienced with Java, but given the constant running time when I used their function, it looks like the number of operations does not depend on the length of each input array, only on the number of outer arrays.
It only appears to be constant because their implementation is very fast for all inputs you tested. Also, their list is immutable, so they do not actually create new real lists but only mini wrappers, which speeds everything up again. If you want an in-depth answer, I would suggest you ask a new question with both implementations and a benchmark using a benchmark framework. Try big lists, say 100 lists with 100k entries each; then you will see a performance hit.
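The lazy-view idea discussed in these comments can be sketched without Guava. This is an illustrative class (not Guava's actual code): nothing is enumerated up front, and each element is decoded on demand with the same index arithmetic as the question's loop body, so building the view costs O(#sets) no matter how many combinations it represents.

```java
import java.util.AbstractList;
import java.util.List;

// A lazy cartesian-product view: size and elements are computed on demand,
// so constructing the list never enumerates the combinations.
final class LazyProduct extends AbstractList<String> {

    private final String[][] sets;
    private final int size;

    LazyProduct(String[][] sets) {
        this.sets = sets;
        int n = 1;
        for (String[] set : sets) {
            n *= set.length;  // total number of combinations
        }
        this.size = n;
    }

    @Override
    public String get(int index) {
        // Same mixed-radix decoding as the question's code, done per element.
        StringBuilder sb = new StringBuilder();
        int j = 1;
        for (String[] set : sets) {
            sb.append(set[(index / j) % set.length]).append(' ');
            j *= set.length;
        }
        return sb.toString().trim();
    }

    @Override
    public int size() {
        return size;
    }

    public static void main(String[] args) {
        List<String> view = new LazyProduct(new String[][]{{"a","b","c"}, {"d","e","i"}, {"f","g","h"}});
        System.out.println(view.size());  // 27, without building any combination
        System.out.println(view.get(0));  // "a d f"
    }
}
```

This is why a benchmark that only constructs the view looks constant-time: the Theta(|first| * |second| * ...) cost is only paid by whoever iterates over all elements.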
