312

Is there a simple way to do this?

5
  • 61
    The accepted answer was arguably the best method back in 2012, but now db.cloneCollection() is often a better solution. There are a couple of more recent answers here that refer to this, so if you came here from Google (like I did) take a look at all the answers! Commented Jan 31, 2015 at 16:12
  • 4
Make sure to read the other answers as well, though, to make sure one fits your needs, not just @kelvin's situation Commented May 4, 2015 at 3:38
  • 1
@Naman what is the use case for copying a collection? I mean, do you need a command, or is a manual process OK? For the manual process, just install Studio 3T, connect both databases, right-click on the collection you want to copy, click the "Copy Collection" option, then go to the second database, right-click on the "Collections" directory and click the "Paste Collection" option. Commented Sep 23, 2020 at 13:51
  • 1
@turivishal that's definitely one way, but the command line tools are much more reliable and come with immediate support for features released with upgrades. I have raised the bounty to reward an existing answer by the way. :) Commented Sep 23, 2020 at 15:10
  • 6
db.cloneCollection() is deprecated now, but there are the newer $out and $merge stages in the aggregation pipeline. mongodb.com/docs/manual/release-notes/4.4/#removed-commands Commented Oct 12, 2022 at 12:49

28 Answers

397
+400

The best way is to do a mongodump then mongorestore. You can select the collection via:

mongodump -d some_database -c some_collection

[Optionally, zip the dump (zip some_database.zip some_database/* -r) and scp it elsewhere]

Then restore it:

mongorestore -d some_other_db -c some_or_other_collection dump/some_database/some_collection.bson

Existing data in some_or_other_collection will be preserved. That way you can "append" a collection from one database to another.

Prior to version 2.4.3, you will also need to add back your indexes after you copy over your data. Starting with 2.4.3, this process is automatic, and you can disable it with --noIndexRestore.


8 Comments

It seems that mongodump doesn't work if you have a password-protected mongo instance (and you should!)
It works on password-protected DBs, you just need to pass the auth in the params
This is much faster than find/forEach/insert, in my case 2 minutes vs 2 hours
Pass in the username for the database with --username but not --password to get a prompt for the password. It is best not to put the password on your command line (it could end up saved in .bash_history or similar)
Minor: I found the file in subfolder named by some_database so this works for me: mongorestore -d some_other_db -c some_or_other_collection dump/some_database/some_collection.bson
270
+200

At the moment there is no command in MongoDB that would do this. Please note the JIRA ticket with related feature request.

You could do something like:

db.<collection_name>.find().forEach(function(d){ db.getSiblingDB('<new_database>')['<collection_name>'].insert(d); });

Please note that the two databases need to share the same mongod for this to work.

Besides this, you can do a mongodump of a collection from one database and then mongorestore the collection to the other database.

11 Comments

Note that if you copy in the JS shell the BSON documents are decoded to JSON during the process so some documents may incur type changes. mongodump/mongorestore are generally the better approach.
Agreed. That was more just a fun suggestion for toying around with the shell. Plus, it would not bring over the indexes. If I was doing this, I would do the mongodump/mongorestore every time.
Thanks. Please note that you have a typo in the code, not closing the getSiblingDB function. Here's the corrected code: db.<collection_name>.find().forEach(function(d){ db.getSiblingDB('<new_database>')['<collection_name>'].insert(d); });
this worked well for resetting a test mongodb from a golden copy between test runs. rather than hard coding the collection names you can do a for loop over all the collection names you want to copy with db.getCollection(name).find().forEach and supply a function that has db.getSiblingDB("otherdb").getCollection(name).insert(d).
is this efficient for huge collections?
133

Actually, there is a command to move a collection from one database to another. It's just not called "move" or "copy".

To copy a collection, you can clone it on the same database, then move the cloned collection.

To clone:

> use db1
switched to db db1

> db.source_collection.find().forEach(
      function(x){
          db.collection_copy.insert(x)
      }
  );

To move:

> use admin
switched to db admin

> db.runCommand(
      {
          renameCollection: 'db1.source_collection',
          to              : 'db2.target_collection'
      }
  );

The other answers are better for copying the collection, but this is especially useful if you're looking to move it.

5 Comments

Thx works great! Just needs a closing apostrophe in 'db1.source_collection'
Instead of "use admin" followed by "db.runCommand(..." You can do just one command, "db.adminCommand(..."
This does not work for sharded collections, which you cannot rename.
Copying the collection document-by-document will take ages!
For moving collection to another db, see details here: mongodb.com/docs/manual/reference/command/renameCollection
30

I would use the connect function from the mongo CLI (see the mongo docs). It lets you open one or more connections. If you want to copy the customer collection from test to test2 on the same server, first start the mongo shell:

use test
var db2 = connect('localhost:27017/test2')

do a normal find and copy the first 20 records to test2.

db.customer.find().limit(20).forEach(function(p) { db2.customer.insert(p); });

or filter by some criteria

db.customer.find({"active": 1}).forEach(function(p) { db2.customer.insert(p); });

just change localhost to an IP or hostname to connect to a remote server. I use this to copy test data to a test database for testing.

1 Comment

As I commented on Jason's suggestion, be aware that if you copy in the JS shell the BSON documents are decoded to JSON during the process so some documents may incur type changes. There are similar considerations to Limitations of eval and this is going to be a slower process for copying significant amounts of data between databases (particularly on the same server). So mongodump/mongorestore FTW :).
23

If between two remote mongod instances, use

{ cloneCollection: "<collection>", from: "<hostname>", query: { <query> }, copyIndexes: <true|false> } 

See http://docs.mongodb.org/manual/reference/command/cloneCollection/

6 Comments

The copyIndexes option field actually is not respected. The indexes are always copied. See SERVER-11418
Wrap that in db.runCommand() i.e. db.runCommand({ cloneCollection: "<collection>", from: "<hostname>", query: { <query> } })
How can this be used for incremental updates from one remote mongo to another?
I have user data being added to one mongo instance throughout the day. At day end I need to transfer the newly added rows to another mongo instance. How can this be achieved?
cloneCollection has been removed in MongoDB version 4.4 - so it is not available anymore in current releases.
20

I'd usually do:

use sourcedatabase;
var docs=db.sourcetable.find();
use targetdatabase;
docs.forEach(function(doc) { db.targettable.insert(doc); });

2 Comments

This code inserts documents one-by-one, it will be very slow! And the entire collection needs to fit into your RAM.
Booommmmm..... It works.... the code inserts documents one-by-one but it's very easy. Thank you
14

For huge collections, you can use Bulk.insert():

var bulk = db.getSiblingDB(dbName)[targetCollectionName].initializeUnorderedBulkOp();
db.getCollection(sourceCollectionName).find().forEach(function (d) {
    bulk.insert(d);
});
bulk.execute();

This will save a lot of time. In my case, copying a collection with 1219 documents: iterating vs Bulk (67 secs vs 3 secs)
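If the collection does not fit in RAM, the same idea can be chunked. A minimal sketch, assuming nothing MongoDB-specific: `sourceDocs` and `insertMany` are illustrative stand-ins for a real cursor and a collection's bulk-insert call.

```javascript
// Chunked copying: flush a batch every `batchSize` documents so the
// whole collection never has to sit in memory at once. `sourceDocs`
// is any iterable of documents and `insertMany` any batch-insert
// function -- with a real driver these would be a cursor and e.g.
// (b) => coll.insertMany(b).
function copyInBatches(sourceDocs, insertMany, batchSize = 1000) {
  let batch = [];
  let copied = 0;
  for (const doc of sourceDocs) {
    batch.push(doc);
    if (batch.length >= batchSize) {
      insertMany(batch);
      copied += batch.length;
      batch = [];
    }
  }
  if (batch.length > 0) { // flush the final partial batch
    insertMany(batch);
    copied += batch.length;
  }
  return copied;
}
```

With a driver, something like `copyInBatches(srcColl.find(), b => dstColl.insertMany(b), 100000)` keeps memory bounded to one batch rather than the whole collection.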

4 Comments

this is way better, more efficient, hammers the db less, and works for any size of dataset.
If you are doing this with more than 300k records, you may need to add a .limit(300000) after the find and before the forEach. Otherwise the system may lock up. I usually limit bulk changes to about 100k for safety, wrapping the entire thing in a for loop based on count and limit.
Should we insert(One) or prefer bulk insertMany?
The entire collection needs to fit into your RAM, this could be a limitation.
14

Unbelievable how many up-votes are given for agonizingly slow one-by-one copy of data.

As given in other answers the fastest solution should be mongodump / mongorestore. There is no need to save the dump to your local disk, you can pipe the dump directly into mongorestore:

mongodump --db=some_database --collection=some_collection --archive=- | mongorestore --nsFrom="some_database.some_collection" --nsTo="some_or_other_database.some_or_other_collection" --archive=-

In case you run a sharded cluster, the new collection is not sharded by default. All data is written initially to your primary shard. This may cause problems with disk space and put additional load on your cluster for balancing. It is better to pre-split your collection like this before you import the data:

sh.shardCollection("some_or_other_database.some_or_other_collection", { <shard_key>: 1 });
db.getSiblingDB("config").getCollection("chunks").aggregate([
   { $match: { ns: "some_database.some_collection"} },
   { $sort: { min: 1 } },
   { $skip: 1 }
], { allowDiskUse: true }).forEach(function (chunk) {
   sh.splitAt("some_or_other_database.some_or_other_collection", chunk.min)
})

Comments

12

You can use the aggregation framework to resolve your issue:

db.oldCollection.aggregate([{$out : "newCollection"}])

It should be noted that indexes from oldCollection will not be copied to newCollection.

2 Comments

It should also be noted that any existing newCollection is dropped before the new data is inserted.
Since Mongo 4.4 you can output to another database with { $out: { db: "<output-db>", coll: "<output-collection>" } }
9

There are different ways to do the collection copy. Note the copy can happen within the same database, to a different database, or across sharded databases and mongod instances. Some of the tools can be efficient for copying large collections.

Aggregation with $merge: Writes the results of the aggregation pipeline to a specified collection. Note that the copy can happen across databases, even for sharded collections. Creates the collection if it does not exist, and merges into it if it does. New in version 4.2. Example: db.test.aggregate([ { $merge: { into: { db: "newdb", coll: "newcoll" } } } ])

Aggregation with $out: Writes the results of the aggregation pipeline to a specified collection. Note that the copy can happen within the same database only. Creates a new collection or replaces an existing one. Example: db.test.aggregate([ { $out: "newcoll" } ])

mongoexport and mongoimport: These are command-line tools. mongoexport produces a JSON or CSV export of collection data; the output of the export is then used as the source for the destination collection via mongoimport.

mongodump and mongorestore: These are command-line tools. The mongodump utility creates a binary export of the contents of a database or a collection. The mongorestore program loads data from a binary database dump created by mongodump into the destination.

db.cloneCollection(): Copies a collection from a remote mongod instance to the current mongod instance. Deprecated since version 4.2.

db.collection.copyTo(): Copies all documents from a collection into a new collection (within the same database). Deprecated since version 3.0; starting in version 4.2, this command is no longer valid.

NOTE: Unless stated otherwise, the above commands are run from the mongo shell.

Reference: The MongoDB Manual.

You can also use a favorite programming language (e.g., Java) or environment (e.g., NodeJS) using appropriate driver software to write a program to perform the copy - this might involve using find and insert operations or another method. This find-insert can be performed from the mongo shell too.

You can also do the collection copy using GUI programs like MongoDB Compass.

2 Comments

$merge needs "into": db.test.aggregate([ { $merge: { into: { db: "newdb", coll: "newcoll" } } } ]) mongodb.com/docs/v4.4/reference/operator/aggregation/merge
Yes, you are correct, into is required @AntonioRomeroOca
6

Using pymongo (both databases need to be on the same mongod in this example), I did the following:


# db = original database
# db2 = database to be copied to

cursor = db["<collection to copy from>"].find()
for data in cursor:
    db2["<new collection>"].insert_one(data)  # insert() was removed in PyMongo 4; use insert_one

3 Comments

this would take a lot of time if the data size is huge. Alternatively, you can use bulk inserts (insert_many)
Yes, this was just a quick and dirty way I found to work for me, my database wasn't too big, but not small either and didn't take too long, but yes you are correct.
Hello @vbhakta, unfortunately the cursor returns an empty array for me. What I did: cursor = db['my-node-js'].collectioName.find(). And as you can understand, my-node-js was the database name. What I got when I executed print(cursor.toArray()) was '[ ]' and print(cursor.count()) printed 0.
5

I know this question has been answered, however I personally would not follow @JasonMcCay's answer, because cursors stream and this could cause an infinite cursor loop if the collection is still being used. Instead I would use snapshot():

http://www.mongodb.org/display/DOCS/How+to+do+Snapshotted+Queries+in+the+Mongo+Database

@bens answer is also a good one and works well for hot backups of collections not only that but mongorestore does not need to share the same mongod.

Comments

5

This might be just a special case, but for a collection of 100k documents with two random string fields (15-20 chars long), using a dumb mapReduce is almost twice as fast as find-insert/copyTo:

db.coll.mapReduce(function() { emit(this._id, this); }, function(k,vs) { return vs[0]; }, { out : "coll2" })

Comments

3

If RAM is not an issue, using insertMany is way faster than a forEach loop.

var db1 = connect('<ip_1>:<port_1>/<db_name_1>')
var db2 = connect('<ip_2>:<port_2>/<db_name_2>')

var _list = db1.getCollection('collection_to_copy_from').find({})
db2.collection_to_copy_to.insertMany(_list.toArray())

Comments

3

Many right answers here. I would go for mongodump and mongorestore in a piped fashion for a large collection:

mongodump --db fromDB --gzip --archive | mongorestore --drop --gzip --archive --nsFrom "fromDB.collectionName" --nsTo "toDB.collectionName"

although if I want to do a quick copy, this is slow but it works:

use fromDB 
db.collectionName.find().forEach(function(x){
   db.getSiblingDB('toDB')['collectionName'].insert(x);
});

1 Comment

I tried mongorestore --uri mongodb+srv://iser:[email protected] --nsFrom "weblog.contractors" --nsTo "weblog.contractors_temp" but it just tries to overwrite the whole weblog database. Before the mongorestore I did: mongodump --uri mongo+srv://asd:[email protected]/weblog. Please be careful.
2

This won't solve your problem but the mongodb shell has a copyTo method that copies a collection into another one in the same database:

db.mycoll.copyTo('my_other_collection');

It also translates from BSON to JSON, so mongodump/mongorestore are the best way to go, as others have said.

4 Comments

Excellent. Sadly the Mongo shell reference doesn't seem to mention this method.
Yes, I know, but the MongoDB shell is awesome, if you type db.collname.[TAB] you'll see all available methods on collection object. this tip works for all other objects.
The problem is the lack of help for those commands! It is useful to be able to see the code, though by omitting the parens to a method call.
Sadly, this command has now been deprecated since version 3.0.
1

In case some Heroku users stumble here and, like me, want to copy some data from a staging database to the production database or vice versa, here's how you do it very conveniently (N.B. I hope there are no typos in there, I can't check it at the moment, I'll try to confirm the validity of the code ASAP):

to_app="The name of the app you want to migrate data to"
from_app="The name of the app you want to migrate data from"
collection="the collection you want to copy"
mongohq_url=`heroku config:get --app "$to_app" MONGOHQ_URL`
parts=(`echo $mongohq_url | sed "s_mongodb://heroku:__" | sed "s_[@/]_ _g"`)
to_token=${parts[0]}; to_url=${parts[1]}; to_db=${parts[2]}
mongohq_url=`heroku config:get --app "$from_app" MONGOHQ_URL`
parts=(`echo $mongohq_url | sed "s_mongodb://heroku:__" | sed "s_[@/]_ _g"`)
from_token=${parts[0]}; from_url=${parts[1]}; from_db=${parts[2]}
mongodump -h "$from_url" -u heroku -d "$from_db" -p"$from_token" -c "$collection" -o col_dump
mongorestore -h "$to_url" -u heroku -d "$to_db" -p"$to_token" -c "$collection" col_dump/"$from_db"/"$collection".bson
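The sed pipeline above just splits a mongodb:// connection URI into token, host, and database name. As an illustrative aside (not part of the original script), the same split can be sketched with the standard WHATWG URL class:

```javascript
// Split a mongodb:// connection URI into the pieces the script above
// extracts with sed. The example URI in the test is made up.
function parseMongoUri(uri) {
  const u = new URL(uri);
  return {
    user: u.username,
    token: u.password,
    host: u.hostname + (u.port ? ":" + u.port : ""),
    db: u.pathname.replace(/^\//, ""), // drop the leading slash
  };
}
```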

Comments

1

You can always use Robomongo. As of v0.8.3 there is a tool that can do this by right-clicking on the collection and selecting "Copy Collection to Database"

For details, see http://blog.robomongo.org/whats-new-in-robomongo-0-8-3/

This feature was removed in 0.8.5 due to its buggy nature so you will have to use 0.8.3 or 0.8.4 if you want to try it out.

2 Comments

This feature of Robomongo is still unstable. It's a 50/50 chance to make it work.
This seems to have been removed from 0.8.5
1

Use "Studio 3T for MongoDB", which has Export and Import tools (right-click on a database, a collection, or a specific collection). Download link: https://studio3t.com/download/

Comments

1

The simplest way to import data from an existing MongoDB Atlas cluster DB is using the mongodump & mongorestore commands.

To create the dump from existing DB you can use:

mongodump --uri="<connection-uri>"

There are other options for the connection which can be looked up here: https://www.mongodb.com/docs/database-tools/mongodump/

After the dump is successfully created in a dump/ directory, you can import that data into your other db like so:

mongorestore --uri="<connection-uri-of-other-db>" <dump-file-location>

Similarly for mongorestore, there are other connection options that can be looked up along with commands to restore specific collections: https://www.mongodb.com/docs/database-tools/mongorestore/

The dump file location will be inside the dump directory. There may be a subdirectory with the same name as the DB you dumped. For example, if you dumped the test DB, the dump file location would be /dump/test

Comments

1

Starting in version 4.2, MongoDB removes the deprecated copydb command and clone command.

As an alternative, users can use mongodump and mongorestore (with the mongorestore options --nsFrom and --nsTo).

For example, to copy the test collection from source database to the target database, you can:

  1. Use mongodump to dump the test collection from the source database to an archive test.agz:
mongodump --gzip --archive=/backup/path/to/test.agz --db=source --collection=test
  2. Use mongorestore with --nsFrom and --nsTo to restore (with database name change) from the archive:
mongorestore --gzip --archive=/backup/path/to/test.agz --nsFrom='source.test' --nsTo='target.test'

NOTE: Provide authentication, if necessary, with the --uri parameter if MongoDB is running on an external instance, or a combination of the --username and --password parameters if MongoDB is running on a local instance.

Reference: mongodump — MongoDB Database Tools

Comments

1

I found it easy to export collection data with MongoDB Compass, which is free, and then import it (ADD DATA button) via MongoDB Compass into another db's collection

1 Comment

Yes, this is indeed the fastest way for small collections if you have only 1 or 2 to move.
0

In my case, I had to use a subset of attributes from the old collection in my new collection. So I ended up choosing those attributes while calling insert on the new collection.

db.<sourceColl>.find().forEach(function(doc) { 
    db.<newColl>.insert({
        "new_field1":doc.field1,
        "new_field2":doc.field2,
        ....
    })
});
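The same subset-of-fields copy can be sketched as a generic transform step. Illustrative names only: `docs` stands in for the source cursor and `insert` for the target collection's insert call.

```javascript
// Copy documents one by one, reshaping each with a caller-supplied
// transform -- the generic version of the field-picking loop above.
function copyWithTransform(docs, insert, transform) {
  let copied = 0;
  for (const doc of docs) {
    insert(transform(doc));
    copied += 1;
  }
  return copied;
}
```

With a driver this would be called as, e.g., `copyWithTransform(src.find(), d => dst.insert(d), doc => ({new_field1: doc.field1}))`.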

Comments

0

To copy a collection (myCollection1) from one database to another in MongoDB,

Server1:
myHost1.com 
myDbUser1
myDbPasword1
myDb1
myCollection1

outputfile:
myfile.json 

Server2:
myHost2.com 
myDbUser2
myDbPasword2
myDb2
myCollection2 

you can do this:

mongoexport  --host myHost1.com --db myDb1 -u myDbUser1  -p myDbPasword1 --collection myCollection1   --out  myfile.json 

then:

mongoimport  --host myHost2.com --db myDb2 -u myDbUser2  -p myDbPasword2 --collection myCollection2   --file myfile.json 

Another case , using CSV file:

Server1:
myHost1.com 
myDbUser1
myDbPasword1
myDb1
myCollection1
fields.txt
    fieldName1
    fieldName2

outputfile:
myfile.csv

Server2:
myHost2.com 
myDbUser2
myDbPasword2
myDb2
myCollection2

you can do this:

mongoexport  --host myHost1.com --db myDb1 -u myDbUser1  -p myDbPasword1 --collection myCollection1   --out  myfile.csv --type=csv

add column types in the CSV file header (name1.decimal(), name2.string(), ...) and then:

mongoimport  --host myHost2.com --db myDb2 -u myDbUser2  -p myDbPasword2 --collection myCollection2   --file myfile.csv --type csv --headerline --columnsHaveTypes

Comments

0

You could use the $out aggregation function to copy everything to another table.

https://www.mongodb.com/docs/manual/reference/operator/aggregation/out/

Comments

0

I came here looking for a way to transfer a collection, so thanks to Anuj Gupta for the second part of his solution, based on db.runCommand({renameCollection.... Similar kudos to Alexander Makarov's aggregate solution, and particularly to C0_42's comment.

For the rest, I see many solutions based on one-by-one read-write, which are what I would call, from the privileged standpoint of late 2023, "Winter Code". All those extra CPU cycles sure keep us warm... This includes the forEach solutions, whether explicit or disguised behind a cursor or initializeUnorderedBulkOp.

As for foreign tools, I don't really think that dump / restore or export / import are the ideal solutions. These work around the issue, avoiding mongosh, which is not ideal. I extend this assessment to recommendations such as Studio3T. Although I am a Python developer, resorting to pymongo seems to me like yet another workaround for those who refuse to learn mongosh, which is 99.7% good old JavaScript. Furthermore, the pymongo solution posted here also reads and inserts one by one.

On this same note, I found an online article here that on late May 2023 still suggests the extinct collection method copyTo, together with a bunch of scattered mentions (like the forEach or the aggregate) that are never developed into the theme of the title: "How to Copy a DB and a Collection?". Quite useless and worse: it's part of an online course, so useless and irresponsible. Watch out for responses here recommending both this method and db.copyDatabase. If your pet is a T-Rex, I guess your Mongo still has these.

My solution is similar to what I saw from Uday Krishna's post. I just made it as a function:

function transfer(collection, sourceDb, targetDb) {
    db.getSiblingDB(targetDb).getCollection(collection).insertMany(
        db.getSiblingDB(sourceDb).getCollection(collection).find().toArray()
    )
}

Notice how this solution does not even force you to be using any of the DBs involved. The trick here is to do the .toArray() at the end, so that the result can be used as argument to insertMany.

Comments

0

Switch to the source database

use sourceDB;

Fetch all documents from the source collection

let documents = db.sourceCollection.find().toArray();

Switch to the target database

use targetDB;

Insert the documents into the target collection

db.targetCollection.insertMany(documents);

Comments

-2

This can be done using Mongo's db.copyDatabase method:

db.copyDatabase(fromdb, todb, fromhost, username, password)

Reference: http://docs.mongodb.org/manual/reference/method/db.copyDatabase/

1 Comment

OP wanted to copy a collection -- not the whole db.
