I'd move the helper functions into sort's scope. Names like any and merge are very generic, and don't need to be cluttering up the global scope.
I also question the need for some of the functions, especially dropHead. It's just a synonym for Array.prototype.shift, so why not call shift directly? All you get is a function with side effects: since shift modifies the receiver, dropHead modifies the passed-in array, just behind an extra level of, well, obfuscation almost.
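To make the point concrete, here's a hypothetical reconstruction of what a dropHead helper amounts to (the name and body are my guess at the original, not quoted from it). It mutates its argument exactly like shift would, so the wrapper buys nothing:

```javascript
// Hypothetical reconstruction of the original helper: just shift in disguise.
const dropHead = xs => xs.shift();

const items = [1, 2, 3];
dropHead(items);
console.log(items); // [2, 3] — the caller's array was mutated
```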
The naming of isSingleton isn't great either. "Singleton" usually has a precise meaning in OO languages: a class with only one instance. In mathematics it refers to a single-element set, which is what's meant here, but in a programming context I'd expect isSingleton to check whether the object itself is a singleton instance, not whether an array holds exactly one element.
In the end, I'd just say array.length === 1 where needed. And any can be replaced with array.length > 0, or even just array.length, since zero is falsy. Is it as declarative? Maybe not, but in my opinion it's more readable JS than inventing new expressions.
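For illustration, here are hypothetical stand-ins for the two helpers next to the plain length checks that replace them (the helper bodies are my assumption of what the originals do):

```javascript
// Hypothetical stand-ins for the original helpers:
const any = xs => xs.length > 0;
const isSingleton = xs => xs.length === 1;

// Inlined at the call site, the checks need no new vocabulary:
const xs = [42];
console.log(xs.length > 0);   // true  — replaces any(xs)
console.log(xs.length === 1); // true  — replaces isSingleton(xs)
```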
Conversely, I see no reason to skimp on the naming of variables and properties. Why fst and snd instead of simply first and second? Why xs instead of just array or items or list? If the goal is to be declarative, I see little point to use overly terse naming.
Your main sort function has a subtle gotcha: if you return early because the array is a "singleton" or empty, you return the very same array reference that was passed in. If actual sorting goes on, you return a new array object. So the function has two distinct behaviors depending on input, which is dangerous: callers can't rely on getting a fresh array. I'd recommend that the early return use xs.slice(0) (or just xs.slice()) to ensure a copy is always returned.
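A simplified stand-in (not your implementation) shows why this bites: whether the caller gets the same object back depends on the input's length.

```javascript
// Simplified stand-in for a sort with an early return, not the original code:
const sortWithEarlyReturn = xs =>
  xs.length < 2 ? xs                          // returns the same reference!
                : [...xs].sort((a, b) => a - b);

const single = [1];
console.log(sortWithEarlyReturn(single) === single); // true  — caller gets its own array back
const pair = [2, 1];
console.log(sortWithEarlyReturn(pair) === pair);     // false — a fresh array
// Fix: return xs.slice() in the early branch so both paths return a copy.
```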
There are a few ES6 features you can use, although support is spotty. One is to use array destructuring instead of object destructuring to assign the return value(s) from split. It obviates the need for fst/snd, and by returning an array you ensure ordering, rather than having to explicitly assign fst and snd in order, or even having to know those property names. E.g.
const [head, tail] = split(array); // where split() returns a 2-element array
Returning an array also means you can simply run it through map instead: split(array).map(sort).
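Sketching that out (with a throwaway comparator sort standing in for the recursive one):

```javascript
// A split that returns a 2-element array composes directly with destructuring and map.
const split = array => {
  const middle = array.length >> 1;
  return [array.slice(0, middle), array.slice(middle)];
};

// Destructuring keeps the ordering explicit, no fst/snd property names needed:
const [left, right] = split([3, 1, 4, 1, 5, 9]);
console.log(left, right); // [3, 1, 4] [1, 5, 9]

// And both halves can be sorted in one expression:
const halves = split([3, 1, 4, 1, 5, 9]).map(xs => [...xs].sort((a, b) => a - b));
console.log(halves); // [[1, 3, 4], [1, 5, 9]]
```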
Also, you don't need the two extra while loops to yield the elements of xs and ys in your merge function; you can simply delegate the yielding to the arrays themselves:
yield* xs;
yield* ys;
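A minimal illustration of the delegation (a toy concat generator, not your merge):

```javascript
// yield* delegates to any iterable, so no per-element while loop is needed.
function* concat(xs, ys) {
  yield* xs; // yields each element of xs in order
  yield* ys;
}
console.log([...concat([1, 2], [3, 4])]); // [1, 2, 3, 4]
```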
In all, I'd write something like:
const sort = array => {
  const split = array => {
    const middle = array.length >> 1; // should be a safe use of bitwise trickery
    return [array.slice(0, middle), array.slice(middle)];
  };
  const merge = function* (a, b) {
    while (a.length && b.length) {
      yield a[0] <= b[0] ? a.shift() : b.shift();
    }
    yield* a;
    yield* b;
  };
  if (array.length < 2) {
    return array.slice(0); // always return a copy
  }
  return [...merge(...split(array).map(sort))];
};
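As a quick sanity check, this version behaves as a pure function (the definition is repeated here so the snippet runs on its own):

```javascript
const sort = array => {
  const split = array => {
    const middle = array.length >> 1;
    return [array.slice(0, middle), array.slice(middle)];
  };
  const merge = function* (a, b) {
    while (a.length && b.length) {
      yield a[0] <= b[0] ? a.shift() : b.shift();
    }
    yield* a;
    yield* b;
  };
  if (array.length < 2) {
    return array.slice(0); // always return a copy
  }
  return [...merge(...split(array).map(sort))];
};

const input = [5, 3, 8, 3, 1];
const result = sort(input);
console.log(result);           // [1, 3, 3, 5, 8]
console.log(result === input); // false: never the same reference
console.log(input);            // [5, 3, 8, 3, 1]: the caller's array is untouched
```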
I've skipped the head function as it'd only be used twice, in the same line even. The remaining helper functions are nicely "symmetrical": split and merge.
Edit: Just realized something: your sorting isn't stable. In fact, it's precisely anti-stable: equal elements swap places on every pass. A quick test:
var test = [1, "1", 2, "2", "3", 3]; // already sorted
test = sort(test) // => ["1", 1, "2", 2, 3, "3"]
test = sort(test) // => [1, "1", 2, "2", "3", 3]
test = sort(test) // => ["1", 1, "2", 2, 3, "3"]
// ...
Long story short: The merge branching should use <= rather than < when deciding which array to shift from. I've corrected the code in this answer.
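The effect is easiest to see with tagged duplicates (my own test data, not yours). With <=, a tie is taken from the left array, i.e. from earlier in the original ordering, which is exactly what stability requires:

```javascript
function* merge(a, b) {
  while (a.length && b.length) {
    // <= lets the left (earlier) element win ties; < would take from the right first
    yield a[0].key <= b[0].key ? a.shift() : b.shift();
  }
  yield* a;
  yield* b;
}

const left = [{ key: 1, tag: "left" }];
const right = [{ key: 1, tag: "right" }];
console.log([...merge(left, right)].map(x => x.tag)); // ["left", "right"]
```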
Addendum: It might be more declarative and pattern-matching'y to use a switch statement to determine whether to return early:
const sort = array => {
  const split = array => {
    const middle = array.length >> 1;
    return [array.slice(0, middle), array.slice(middle)];
  };
  const merge = function* (a, b) {
    while (a.length && b.length) {
      yield a[0] <= b[0] ? a.shift() : b.shift();
    }
    yield* a;
    yield* b;
  };
  switch (array.length) {
    case 0:
    case 1:
      return array.slice();
    default:
      return [...merge(...split(array).map(sort))];
  }
};