I have documents that look like this:
{
    "_id": ObjectId("5444fc67931f8b040eeca671"),
    "meta": {
        "SessionID": "45",
        "configVersion": "1",
        "DeviceID": "55",
        "parentObjectID": "55",
        "nodeClass": "79",
        "dnProperty": "16"
    },
    "cfg": {
        "Name": "test"
    }
}
The names and the data are just for testing at the moment, but I have a total of 25 million documents in the DB. I'm using find() to fetch specific documents, and in this find() I use four fields: dnProperty, nodeClass, DeviceID and configVersion. None of them are unique.
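Roughly what such a lookup looks like (just a sketch; the collection name "configs" and the values are placeholders, and whether the fields need a "meta." prefix depends on the real schema):

    // illustrative query on the four non-unique fields
    db.configs.find({
        "dnProperty": "16",
        "nodeClass": "79",
        "DeviceID": "55",
        "configVersion": "1"
    })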
At the moment I have the index set up as simply as:
ensureIndex({ "nodeClass": 1, "DeviceID": 1, "configVersion": 1, "dnProperty": 1 })
In other words, I have an index on the four fields. I still have huge problems with searches that don't match any document at all. In my test data all the values are random numbers from 1 to 100, so if I do a find() where one of the values is > 100 (i.e. nothing can match), the search takes anywhere from 30-180 seconds and uses all of my 8 GB of RAM; with no RAM left the whole computer becomes very, very slow.
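For example, something like this (again, "configs" is a placeholder collection name, and 150 is outside the 1-100 range so nothing can ever match) still takes minutes; adding .explain() should show whether the index is actually used:

    // a query that can never match, since all test values are between 1 and 100
    db.configs.find({
        "nodeClass": "150",
        "DeviceID": "55",
        "configVersion": "1",
        "dnProperty": "16"
    }).explain()   // reports which index, if any, the query used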
What would be better indexes? Am I using indexes correctly? Do I simply need more RAM, since MongoDB will put "all" of the DB into its working memory? Would you recommend another DB (other than Mongo) that handles this better?
Sorry for the multiple questions; I hope they are short enough that you can give me an answer.