It really depends on your business logic, your infrastructure, and the table definition itself.
If you are storing data in a temporary table, it lives in tempdb. So the question is whether we can afford to store that amount of data in tempdb without affecting general performance.
What is the amount of data? If you are just storing one million BIGINT values, we might be OK. But what if we are storing one million rows with many nvarchar(max) values?
How big is our tempdb, and is it on a RAM disk?
How often is this temporary table going to be populated? Once per day, or hundreds of times every minute?
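To get a feel for how much headroom tempdb actually has before deciding to stage a large data set there, you can look at the standard `sys.dm_db_file_space_usage` DMV. This is just a quick sanity-check query, not a full capacity analysis; the thresholds you consider "safe" are up to you:

```sql
-- Current tempdb space usage (pages are 8 KB each).
SELECT
    SUM(user_object_reserved_page_count)     * 8 / 1024.0 AS user_objects_mb,     -- temp tables, table variables
    SUM(internal_object_reserved_page_count) * 8 / 1024.0 AS internal_objects_mb, -- sorts, spools, hash work
    SUM(unallocated_extent_page_count)       * 8 / 1024.0 AS free_space_mb
FROM tempdb.sys.dm_db_file_space_usage;
```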
You need to think about the questions above and implement a solution. Then, after a few days or weeks, you may find out that it was not a good one and change it.
Without knowing the details of your production environment, I can only advise that you optimize your query using indexes. You are filtering by IsLocal = 1 - this looks like a good match for a filtered index (even if most of the rows have this value, we will still eliminate some of them on read).
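As a sketch (the table name and key column below are placeholders; only the IsLocal column comes from your query), a filtered index could look like this:

```sql
-- Filtered index: only rows with IsLocal = 1 are kept in the index,
-- so it stays smaller than a full-table index.
CREATE NONCLUSTERED INDEX IX_MyTable_IsLocal
ON dbo.MyTable (SomeKeyColumn)   -- the column(s) you join/sort/seek on
WHERE IsLocal = 1;               -- matches the query's WHERE predicate
```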
Also, if you are reading only a few of the columns from the table, you can try to make the index covering for your query by adding INCLUDE columns, as in the sketch below. Having an index with the columns we need plus the filtering predicate can speed up the query a lot. But you have to test this as well - creating the perfect index is not an easy task every time.
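For example, if the query only selects a couple of columns (Col1 and Col2 here are placeholders for whatever your query actually reads), the filtered index above can be made covering:

```sql
-- Filtered, covering index: the WHERE clause keeps it small,
-- and the INCLUDE list lets the query be answered from the index alone,
-- avoiding key lookups into the base table.
CREATE NONCLUSTERED INDEX IX_MyTable_IsLocal_Covering
ON dbo.MyTable (SomeKeyColumn)
INCLUDE (Col1, Col2)
WHERE IsLocal = 1;
```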