@Malt's answer definitely highlights the problems with your code: that it doesn't 0-pad the int hex values, and that you mask each int to take only the last 8 bits using `a & 0xff`. Your original question implies you are only after the last byte in each int, but it really isn't clear.
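To illustrate the padding half of that (a minimal example of my own; `Integer.toHexString` is a common way to end up with unpadded digits):

    int b = 10;
    System.out.println(Integer.toHexString(b));   // "a"  : no 0-padding
    System.out.println(String.format("%02x", b)); // "0a" : padded to two hex digits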
You say you get results every second from your remote object. On a slow machine with large arrays, it could take a significant number of milliseconds to convert a long int[] to a hex string using your (or rather Malt's corrected version of your) method.
A much faster method would be to get each 4-bit nibble from each int using bit shifting, and to get the appropriate hex character from a static hex lookup array (note this does base-16 encoding; you would get shorter strings from something like base-64 encoding):
    public class AltConverter {
        // Lookup table: nibble value 0-15 maps directly to its hex character
        final protected static char[] encoding = "0123456789ABCDEF".toCharArray();

        public String convertToString(int[] arr) {
            // Each int is 4 bytes, and each byte becomes 2 hex characters
            char[] encodedChars = new char[arr.length * 4 * 2];
            for (int i = 0; i < arr.length; i++) {
                int v = arr[i];
                int idx = i * 4 * 2;
                for (int j = 0; j < 8; j++) {
                    // Shift the j-th nibble (most significant first) down to
                    // the low 4 bits and look up its hex character
                    encodedChars[idx + j] = encoding[(v >>> ((7 - j) * 4)) & 0x0F];
                }
            }
            return new String(encodedChars);
        }
    }
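A quick check of the output (my own example values, not from the question):

    AltConverter c = new AltConverter();
    System.out.println(c.convertToString(new int[] { 1, 255, -1 }));
    // prints "00000001000000FFFFFFFFFF": 8 hex digits per int, 0-padded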
Testing this vs your original method using Caliper (microbenchmark results here) shows this is around 11x faster† (caveat: on my machine), even for a single element array. EDIT: For anyone interested in running this and comparing the results, there is a gist here with the source code.
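For comparison, the kind of code benchmarked as SimpleConverter below (a sketch only, assuming the 0-padded String.format approach from Malt's answer; the real SimpleConverter is in the linked gist):

    public class SimpleConverter {
        // Assumed shape of the original approach: format each int as
        // 8 zero-padded hex digits and concatenate
        public String convertToString(int[] arr) {
            StringBuilder sb = new StringBuilder(arr.length * 8);
            for (int a : arr) {
                sb.append(String.format("%08X", a));
            }
            return sb.toString();
        }
    }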
The original microbenchmark used Caliper, as I happened to be trying it out at the time. I have since rewritten it to use JMH. While doing so, I found that the results I linked to and copied here originally used an array that was only ever filled with 0 for each int element. This caused the JVM to optimise the AltConverter code for arrays with length > 1, yielding artificial 10x to 11x improvements in AltConverter vs SimpleConverter. JMH and Caliper produce very similar results for both the flawed and the corrected benchmark. (Updated benchmark project for Maven/Eclipse here.)
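For reference, a JMH benchmark of this shape reproduces the corrected setup (a minimal sketch of my own, not the code from the linked project; the class name, parameter sizes, and seed are illustrative, and the important point is filling the array with random values rather than zeros):

    import java.util.Random;
    import org.openjdk.jmh.annotations.*;

    @State(Scope.Thread)
    public class ConverterBenchmark {
        @Param({ "1", "100", "1000" })
        int n;

        int[] arr;
        AltConverter converter = new AltConverter();

        @Setup
        public void setup() {
            // Random data, not zeros: a constant array lets the JVM
            // over-optimise the conversion (the flaw described above)
            Random r = new Random(42);
            arr = new int[n];
            for (int i = 0; i < n; i++) {
                arr[i] = r.nextInt();
            }
        }

        @Benchmark
        public String alt() {
            // Returning the result stops dead-code elimination
            return converter.convertToString(arr);
        }
    }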
This is around 2x to 4x faster depending on array length (on my machine™). The mean run times are:
Average run times in nanoseconds (original method: SimpleConverter; new method: AltConverter):
| N          | Alt / ns    | Error / ns | Simple / ns | Error / ns | Speed up |
| ---------: | ----------: | ---------: | ----------: | ---------: | -------: |
|          1 |          30 |          1 |          61 |          2 |     2.0x |
|        100 |         852 |         19 |       3,724 |         99 |     4.4x |
|      1,000 |       7,517 |        200 |      36,484 |        879 |     4.9x |
|     10,000 |      82,641 |      1,416 |     360,670 |      5,728 |     4.4x |
|    100,000 |   1,014,612 |    241,089 |   4,006,940 |     91,870 |     3.9x |
|  1,000,000 |   9,929,510 |    174,006 |  41,077,214 |  1,181,322 |     4.1x |
| 10,000,000 | 182,698,229 | 16,571,654 | 432,730,259 | 13,310,797 |     2.4x |
† Disclaimer: micro-benchmarking is dangerous to rely on as an indication of performance in a real-world app, but Caliper is a good benchmarking framework, and JMH is imho better. A performance difference of ~~10x~~ 4x, with a very small standard deviation (and, in Caliper, a good t-test result), is enough to indicate a good performance increase even inside a more complex application.
One final note on `int val1 = a & 0xff;`: you throw away three bytes of every int.
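A quick illustration of that data loss (variable names as in your snippet, the value is mine):

    int a = 0x12345678;
    int val1 = a & 0xff; // val1 == 0x78: the top three bytes 0x12, 0x34, 0x56 are gone
    System.out.println(Integer.toHexString(val1)); // prints "78"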