
I have an array of strings:

string[] stringArray = {"aaa", "bbb", "ccc", "aaa", "ccc", "ddd"};

I would like to get all indexes of this array whose strings contain any of the substrings in another array:

string[] searchArray = {"a","b"};

The result I would like to get is:

index = {0,1,3};

A solution for just one entry of the searchArray would be:

List<int> index = stringArray.Select((s, i) => new { i, s })
            .Where(t => t.s.Contains(searchArray[1]))
            .Select(t => t.i)
            .ToList();

A solution for all entries would be:

List<int> index = new List<int>();
foreach (string str in searchArray)
    index.AddRange(stringArray.Select((s, i) => new { i, s })
                              .Where(t => t.s.Contains(str))
                              .Select(t => t.i)
                              .ToList());
index.Sort();

But out of curiosity: is there a solution using just a single LINQ statement?

1 Answer


Yup, you just need Any to see if "any" of the target strings are contained in the array element:

List<int> index = stringArray
    .Select((Value, Index) => new { Value, Index })
    .Where(pair => searchArray.Any(target => pair.Value.Contains(target)))
    .Select(pair => pair.Index)
    .ToList();
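
For reference, here is a minimal, self-contained console sketch (the Program/Main wrapper and the Console.WriteLine call are just for demonstration) that runs this query on the arrays from the question and prints 0,1,3:

using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        string[] stringArray = { "aaa", "bbb", "ccc", "aaa", "ccc", "ddd" };
        string[] searchArray = { "a", "b" };

        // Pair each element with its index, keep pairs whose value contains
        // any of the search terms, then project back to just the index.
        List<int> index = stringArray
            .Select((Value, Index) => new { Value, Index })
            .Where(pair => searchArray.Any(target => pair.Value.Contains(target)))
            .Select(pair => pair.Index)
            .ToList();

        Console.WriteLine(string.Join(",", index)); // prints: 0,1,3
    }
}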

Comments

Would this code work? Yes. Would I do it this way? No.
@Eser: Without saying what you don't like about it, that isn't terribly useful. What's the problem? What's your better solution?
It is an O(n*n) algorithm (and if we include value.Contains(target), O(n**3)). It's too late here for now, but I'll think of one...
@Eser: Given that each element of one array needs to be checked against each element of another, and you can't use hashing to make it O(n+m) because it's a contains condition, not equality, I don't see how you can avoid it being O(n * m) checks. I wouldn't claim that it's not how you'd do it without having at least some reasonably clear idea how you would do it. I'd also point out that there's no indication that the OP will actually have large amounts of data. Even if there were a more efficient approach, it would almost certainly be more complex. Premature optimization and all that...
I'm not sure about it either, but I would try something like lucene.apache.org/core/4_1_0/analyzers-stempel/org/egothor/… with preprocessing in linear time. (BTW: you updated your comment and invalidated this :( )
