Sadly, the worst case scenario is exponential in time, no matter what algorithm you use.
To see why, consider the following set of numbers: $A:=\{2^k \mid 0 \le k\le n-1\}$.
There are $n$ numbers in this set, each taking at most $n$ bits to represent. Hence the input is of size $m:=n^2$.
Now, notice that for any two subsets $S_1,S_2\subseteq A$, if $S_1\neq S_2$ then there must be some $i$ such that (without loss of generality) $2^i\in S_1$ but $2^i\notin S_2$.
We can think of the numbers in $A$ as binary numbers, and then $2^i$ will be the binary number with $1$ at the $i$'th spot and $0$ everywhere else. You can easily show that you cannot construct $2^i$ as a sum of numbers from $A\setminus \{2^i\}$ (try to prove this!), and in particular this means that for any subset of $A\setminus\{2^i\}$, the $i$'th bit in the summation will always be $0$ (again, show this!).
Since $2^i\notin S_2$, then $S_2\subseteq A\setminus\{2^i\}$ and therefore $s_2:=\sum_{x\in S_2} x$ will have $0$ at the $i$'th spot in its binary representation.
But $ s_1:=\sum_{x\in S_1} x = 2^i + \sum_{x\in S_1\setminus \{2^i\}} x$.
Denote $s_1':=\sum_{x\in S_1\setminus \{2^i\}} x$ and since $S_1\setminus\{2^i\}\subseteq A\setminus\{2^i\}$ then the $i$'th bit at $s_1'$ must be $0$.
Thus, the $i$'th bit at $s_1$ has to be $1$, since it is a sum of $2^i$ with another number that has $0$ at the $i$'th bit.
But this means that $s_1\neq s_2$, since $s_2$ had $0$ at the $i$'th bit!
Thus the summation map $S\mapsto\sum_{x\in S}x$ must be a one-to-one (injective) function, and hence the number of different summations has to be at least the number of subsets of $A$ (and it's not hard to show it actually equals that), which is $|P(A)|=2^{|A|}=2^n=2^{\sqrt{m}}$.
Thus the output in this case consists of at least $2^{\sqrt{m}}$ values, while the input was of size (in bits) $m$. One can even sharpen this into a bound on the output size in bits: since all values in the output are distinct natural numbers, at least half of them require at least $n-1$ bits to write down, so the output has to be of size $\Omega(\sqrt{m}\cdot 2^{\sqrt{m}})$ bits.
Notice that this result shows that any algorithm will have to "brute-force" in the specific case of this $A$.
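As a concrete sanity check, here is a small brute-force enumeration (a sketch in Python; the variable names are my own) confirming that for such a set of powers of two, every one of the $2^n$ subsets yields a distinct sum:

```python
from itertools import combinations

# A = {2^k : 0 <= k <= n-1} for a small n. The claim is that the map
# "subset -> sum" is injective, so there are exactly 2^n distinct sums.
n = 4
A = [2**k for k in range(n)]

sums = set()
for r in range(len(A) + 1):
    for subset in combinations(A, r):
        sums.add(sum(subset))

print(len(sums))  # prints 16, i.e. 2^4: the sums are exactly 0, 1, ..., 15
```

This is just the observation from the proof in action: each subset picks out a distinct binary representation, so the subset sums of $\{1,2,4,8\}$ are exactly $0,\dots,15$.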
That being said, one can construct a dynamic programming solution for this problem when we know that the size of the output (i.e., the number of different summations) is small. Say the output has $k$ elements; then one can construct a dynamic programming solution running in $O(n \cdot k)$ time:
- For any $1\le i\le n$, define $sum_i$ to be the sorted array of all possible summations of the first $i$ items in $A$ (so $sum_1 = [x_1]$).
- To compute $sum_{i+1}$, do the following:
  - Create a copy of $sum_i$ called $sum'_i$, and add $x_{i+1}$ (the $(i+1)$'th element in $A$) to each of the values in $sum'_i$.
  - Now merge $sum_i$ and $sum'_i$ together, while removing any duplicates (this step takes an additional $O(|sum_i|+|sum'_i|)= O(k)$ time, if we use the merging algorithm from MergeSort).
  - If we also insert $x_{i+1}$ itself into the resulting array, it will be exactly $sum_{i+1}$ (prove this!).
- This step is performed $n$ times, for a total of $O(n\cdot k)$.
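The dynamic program above can be sketched as follows (a Python illustration with my own naming; one small variant: seeding the array with the empty sum $0$ makes the "insert $x_{i+1}$ itself" step automatic, since $0+x_{i+1}$ already appears in the shifted copy):

```python
def subset_sums(A):
    """Return the sorted array of all distinct subset sums of A
    (including the empty sum 0), via the merge-based dynamic program:
    O(n * k) overall, where k bounds the size of the sum arrays."""
    sums = [0]  # sorted subset sums of the first 0 items
    for x in A:
        shifted = [s + x for s in sums]  # sum'_i: every previous sum plus x
        # Merge the two sorted arrays while dropping duplicates,
        # exactly as in the merge step of MergeSort: O(|sums| + |shifted|).
        merged, i, j = [], 0, 0
        while i < len(sums) or j < len(shifted):
            if j == len(shifted) or (i < len(sums) and sums[i] <= shifted[j]):
                v = sums[i]; i += 1
            else:
                v = shifted[j]; j += 1
            if not merged or merged[-1] != v:
                merged.append(v)
        sums = merged
    return sums

print(subset_sums([3, 5, 5]))  # [0, 3, 5, 8, 10, 13]
```

Note how duplicates are collapsed during the merge, so the array never grows beyond $k+1$ entries (the $+1$ accounting for the empty sum), which is what keeps the whole run at $O(n\cdot k)$.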