A simple fix to your code to do this might look like the following:
const duplicates = (data) => data
  .filter((obj, index, array) =>
    array.find((o, i) =>
      o.latitude === obj.latitude &&
      o.longitude === obj.longitude &&
      i !== index
    )
  )
We simply need to test for mismatched indices inside the find callback.
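Without the i !== index check, find would always match each element against itself, so every element would be reported as a duplicate. A quick sanity check, using data shaped like yours:

const sample = [
  {name: 'x', latitude: '45.9', longitude: '50.2'},
  {name: 'y', latitude: '45.9', longitude: '50.2'},
  {name: 'z', latitude: '40.5', longitude: '85.7'}
]

console.log(duplicates(sample)) //=> the records for 'x' and 'y'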
But I think there is much to be gained by separating out the filtering/dup-checking logic from the code that tests whether two elements are equal. The breakdown is more logical and we get a potentially reusable function from it.
So I might write it like this:
const keepDupsBy = (eq) => (xs) => xs .filter (
  (x, i) => xs .find ((y, j) => i !== j && eq (x, y))
)

const dupLocations = keepDupsBy ((a, b) =>
  a .latitude === b .latitude &&
  a .longitude === b .longitude
)
const data = [
  {name: 'x', latitude: '45.9', longitude: '50.2'},
  {name: 'y', latitude: '45.9', longitude: '50.2'},
  {name: 'z', latitude: '40.5', longitude: '85.7'}
]

console .log (dupLocations (data))
This keeps every element of the original array that has a duplicate elsewhere, returning them in their relative order from the original array. That is the same order as the version above, but different from the interesting approach in Peter Seliger's answer, which groups the matching values together, ordered by the first element of each group.
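With the sample data above, for instance, the result keeps x and y in their original relative order:

[
  {name: 'x', latitude: '45.9', longitude: '50.2'},
  {name: 'y', latitude: '45.9', longitude: '50.2'}
]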
Note too the performance difference if you expect to use this on large lists. Your original and all the answers except Peter's run in O(n^2) time; Peter's runs in O(n). For larger lists, the difference could be substantial. The tradeoff goes the other way for memory: Peter's uses O(n) additional memory, while all the others here use only constant, O(1), additional memory. None of this is likely to matter unless you're working with tens of thousands of elements or more, but it's often worth considering.
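If the quadratic behavior ever became a problem, the usual technique is a single pass that counts elements by a composite key. The following is just a rough sketch of that idea, not Peter's actual code, and keepDupsByKey / dupLocationsFast are names I'm making up here:

const keepDupsByKey = (keyFn) => (xs) => {
  // one pass to count how many elements share each key
  const counts = new Map ()
  for (const x of xs) {
    const k = keyFn (x)
    counts .set (k, (counts .get (k) || 0) + 1)
  }
  // keep only the elements whose key occurs more than once,
  // preserving their original relative order
  return xs .filter ((x) => counts .get (keyFn (x)) > 1)
}

const dupLocationsFast = keepDupsByKey (
  ({latitude, longitude}) => `${latitude}|${longitude}`
)

console .log (dupLocationsFast (data))

For the sample data this yields the same result as dupLocations, but in O(n) time with O(n) additional memory.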