There are some ambiguities in your question that were either deliberately placed to help you internalize the relationship between input size and runtime complexity, or simply caused by miscommunication.
As best I can interpret the scenario:
Your algorithm's complexity, O(m), is linear with respect to m.
Since we assume that generating the data is independent of the input, i.e. O(1), your time complexity depends only on the n you specify to generate the entries.
So yes, you can say that the algorithm runs in O(n log n) time, since it doesn't do anything with the input of size m.
In response to your updated question:
It's still hard to follow because some key terms are used to refer to different things, but in general I think this is what you are getting at:
- You have a data set as input that has size O(n log n) for some specific n.
- This data set is used as input only; it's either pre-generated or produced by some black box that runs in O(1) time regardless of the n it is given. (We aren't interested in the black box for this question.)
- This data set is then fed to the algorithm that we are actually interested in analyzing.
- The algorithm has time-complexity O(m), for an input of size m.
- Since your input has size O(n log n) with respect to n, your O(m) linear-time algorithm by extension has time complexity O(n log n) with respect to n.
To see the difference: suppose your algorithm weren't linear but quadratic, O(m^2). Substituting m = O(n log n) then gives O((n log n)^2) = O(n^2 log^2 n) with respect to n.
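To make that substitution concrete, here is a minimal Python sketch (the function names and the n*log2(n)-sized data set are assumptions invented for this example, not part of your question): one pass over a data set of roughly n log n entries takes on the order of n log n steps, while nested passes square that.

```python
import math

def generate_dataset(n):
    # Stand-in for the pre-generation / black-box step; its cost is outside
    # the analysis, we only care that its output has ~n * log2(n) entries.
    size = int(n * math.log2(n)) if n > 1 else 1
    return list(range(size))

def linear_algorithm(data):
    # O(m) for an input of size m: a single pass over the data.
    total = 0
    for x in data:
        total += x
    return total

def quadratic_algorithm(data):
    # O(m^2) for an input of size m: nested passes over the data.
    count = 0
    for _ in data:
        for _ in data:
            count += 1
    return count

data = generate_dataset(128)   # m = len(data) is on the order of n log n
linear_algorithm(data)         # ~n log n steps      -> O(n log n)
quadratic_algorithm(data)      # ~(n log n)^2 steps  -> O(n^2 log^2 n)
```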
n in the big O notation indicates the size of the input, so if your algorithm is linear in the size of the input, it is O(n). However, this is often relaxed (a common example is graphs, where we use O(E+V): not n as the size of the input, but some properties of that input).
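For instance, breadth-first search is usually stated as O(V + E) because it visits each vertex once and scans each edge a constant number of times. A rough Python sketch, assuming the graph is given as an adjacency-list dict (that representation is my assumption for the example):

```python
from collections import deque

def bfs(adjacency, start):
    # adjacency: dict mapping each vertex to a list of its neighbours.
    # Each vertex is enqueued/dequeued at most once and each edge list is
    # scanned once, giving O(V + E) rather than a bound on a single "n".
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adjacency[v]:
            if w not in visited:
                visited.add(w)
                queue.append(w)
    return order

# Usage: bfs({0: [1, 2], 1: [2], 2: [0]}, 0) -> [0, 1, 2]
```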