
Here's a code example:

declare function test_ok<T>(arr: T[]): T;
test_ok([1, 2, "hello"]); // OK
test_ok([[1], [2], ["hello"]]); // OK

// note the nested array signature
declare function test_err<T>(arr: T[][]): T;
test_err([[1], [2], ["hello"]]); // ERR type "string" is not assignable to type "number"
test_err<string | number>([[1], [2], ["hello"]]); // OK if generic is specified

It seems that in the general case, TypeScript is able to infer the best common type (a basic union) when given a heterogeneous array. However, if you try to scope the generic any further than a simple array (such as the nested array above), it gives up. I've also found this in other cases (e.g. an array of functions where the generic ranges over the return type of the functions, rather than over the entire functions), as sketched below. Is this some sort of performance optimization?
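
For reference, the functions case mentioned above looks roughly like this (a sketch: test_fns is a made-up name, but the failure is the same one described above):

declare function test_fns<T>(fns: Array<() => T>): T;
test_fns([() => 1, () => "hello"]); // ERR, same as test_err above
test_fns<string | number>([() => 1, () => "hello"]); // OK if the generic is specified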

  • This also happens for any property of depth two or more, not just arrays: declare function x<T>(a: { x: {y: T}, u: {v: T}}): void; x({x: {y: 1}, u: {v: ""}}); Commented Sep 16, 2019 at 1:03
  • TypeScript does best at inferring a type given a value of that type, as opposed to inferring a type given a value of some function of that type. So I'd try declare function foo<T extends any[][]>(arr: T): T[number][number]; instead... does that match your use case? Still not sure if there's an official answer for why the inference "gives up" at two layers deep. Could be intentional, since sometimes you want things like this to fail. Commented Sep 16, 2019 at 1:06
  • Looks like this is probably intentional; see the related comment on microsoft/TypeScript#31617. Commented Sep 16, 2019 at 1:13
  • @jcalz Thanks for both the workaround and the link. I'll approve it if you put it into an answer. Commented Sep 16, 2019 at 1:16

1 Answer


I'm going to say this is more like intentional error-catching and not a performance optimization. When trying to infer T given a set of values that are supposed to be of type T, you can always succeed by widening T to fit all values, but then it's impossible to catch a legitimate mistake, where one of the values was entered incorrectly. This is a judgment call by the designers of the language, I think, and likely any heuristic would produce some false positives and some false negatives.
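
To make that concrete, here is a sketch (sum_rows is a made-up name) of the kind of mistake the current behavior catches, using the same nested-array shape as the question:

declare function sum_rows<T>(rows: T[][]): T[];
// If ["3"] was meant to be [3], refusing to widen T flags the typo:
sum_rows([[1], [2], ["3"]]); // ERR type "string" is not assignable to type "number"
// Had T been silently widened to string | number, this likely mistake would compile without complaint.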

The GitHub issue microsoft/TypeScript#31617 is a similar report, where a user expects string | number to be inferred from two arguments, one of type string and the other of type number. The response from one of the language maintainers is:

This is an intentional trade-off so that generics can catch errors where you expect multiple objects to be of the same type, which is usually more common.


So, what can be done? Obviously you can manually specify T as string | number. Otherwise, if you want your function to be permissive and never produce an error, you can make T the type of the given argument itself. The compiler is much more consistent about inferring a type given a value of that type than it is when given a value of some function of that type. So my workaround here, for arrays, would be this:

declare function test_fixed<T extends any[][]>(arr: T): T[number][number];
const ret = test_fixed([[1], [2], ["hello"]]); // string | number

In this case, T is constrained to be a doubly-nested array, and the return type, T[number][number], is the element type of the innermost array (if you take T and look up a value at a number index, you get another array type; if you look up a value of that at a number index, you get the innermost element type: T[number][number]).
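
Spelled out with type aliases (assuming T is inferred as roughly (number[] | string[])[] for the call above), the lookup goes:

type Rows = (number[] | string[])[]; // what T is inferred as for test_fixed([[1], [2], ["hello"]])
type Row = Rows[number];             // number[] | string[]
type Cell = Rows[number][number];    // string | number -- the return type above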

Okay, hope that helps. Good luck!

