
The application I am building requires a single SQLite in-memory database that separate routines and threads will need to access. I am having difficulty achieving this.

I understand that:

sqlite3.connect('file:my_db?mode=memory&cache=shared', uri=True)

should create a memory database that can be modified and accessed by separate connections.

Here is my test, which returns the error: "sqlite3.OperationalError: no such table: my_table"

The code below is saved as "test_create.py":

import sqlite3

def create_a_table():
    db = sqlite3.connect('file:my_db?mode=memory&cache=shared', uri=True)
    cursor = db.cursor()

    cursor.execute('''
        CREATE TABLE my_table(id INTEGER PRIMARY KEY, some_data TEXT)
    ''')

    db.commit()
    db.close()

The above module is then imported into the code below, in a separate file:

import sqlite3
import test_create

test_create.create_a_table()
db = sqlite3.connect('file:my_db')
cursor = db.cursor()

# add a row of data
cursor.execute('''INSERT INTO my_table(some_data) VALUES(?)''', ("a bit of data",))
db.commit()

The above code works fine if written in a single file. Can anyone advise how I can keep the code in separate files, which will hopefully allow me to make multiple separate connections?

Note: I don't want to save the database to disk. Thanks.

Edit: If you want to use threading, make sure you enable the following option: check_same_thread=False

e.g.

db = sqlite3.connect('file:my_db?mode=memory&cache=shared', check_same_thread=False, uri=True)
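To illustrate, here is a minimal sketch of using one such connection from several threads. The database name thread_demo_db and the worker function are hypothetical; since the sqlite3 module does not serialise access to a single connection object for you, writes are guarded with a lock:

```python
import sqlite3
import threading

# One shared-cache, in-memory database connection. check_same_thread=False
# allows the same connection object to be used from multiple threads.
db = sqlite3.connect(
    'file:thread_demo_db?mode=memory&cache=shared',
    uri=True,
    check_same_thread=False,
)
db.execute('CREATE TABLE my_table(id INTEGER PRIMARY KEY, some_data TEXT)')

lock = threading.Lock()

def worker(n):
    # Serialise writes on the shared connection.
    with lock:
        db.execute('INSERT INTO my_table(some_data) VALUES(?)', (f'row {n}',))
        db.commit()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

row_count = len(list(db.execute('SELECT * FROM my_table')))
print(row_count)  # 4
```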

1 Answer

You opened a named, in-memory database connection with a shared cache. Yes, you can share the cache on that database, but only if you use the exact same name. This means you need to use the full URI scheme in every connection!

If you connect with db = sqlite3.connect('file:my_db?mode=memory&cache=shared', uri=True), any additional connection within the same process can see the same table, provided the original connection is still open. The table is 'private': in-memory only, and not available to other processes or to connections that use a different name. When the last connection to the database closes, the table is gone.

So you also need to keep the connection open in the other module for this to work!

For example, if you change the module to use a global connection object:

import sqlite3

db = None

def create_a_table():
    global db
    if db is None:
        # Keep this connection open for the lifetime of the process,
        # so the in-memory database is not discarded.
        db = sqlite3.connect('file:my_db?mode=memory&cache=shared', uri=True)

    with db:  # commits the transaction on success
        cursor = db.cursor()

        cursor.execute('''
            CREATE TABLE my_table(id INTEGER PRIMARY KEY, some_data TEXT)
        ''')

and then use that module, the table is there:

>>> import test_create
>>> test_create.create_a_table()
>>> import sqlite3
>>> db = sqlite3.connect('file:my_db?mode=memory&cache=shared', uri=True)
>>> with db:
...     cursor = db.cursor()
...     cursor.execute('''INSERT INTO my_table(some_data) VALUES(?)''', ("a bit of data",))
...
<sqlite3.Cursor object at 0x100d36650>
>>> list(db.cursor().execute('select * from my_table'))
[(1, 'a bit of data')]

Another way to achieve this is to open a database connection in the main code before calling the function; that creates the first connection to the in-memory database, so opening and closing additional connections won't cause the changes to be lost.
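That alternative can be sketched as follows; the names anchor, helper, and anchor_demo_db are illustrative, not from the original code. The main code holds an "anchor" connection open, so helper connections may come and go freely:

```python
import sqlite3

# Main code: hold an anchor connection so the named in-memory
# database outlives any helper connections.
anchor = sqlite3.connect('file:anchor_demo_db?mode=memory&cache=shared', uri=True)

# A helper elsewhere opens its own connection, does its work, and closes.
helper = sqlite3.connect('file:anchor_demo_db?mode=memory&cache=shared', uri=True)
helper.execute('CREATE TABLE my_table(id INTEGER PRIMARY KEY, some_data TEXT)')
helper.commit()
helper.close()  # the database persists: anchor is still open

# A later connection with the exact same URI still sees the table.
db = sqlite3.connect('file:anchor_demo_db?mode=memory&cache=shared', uri=True)
db.execute('INSERT INTO my_table(some_data) VALUES(?)', ("a bit of data",))
rows = list(db.execute('SELECT * FROM my_table'))
print(rows)  # [(1, 'a bit of data')]
```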

From the documentation:

When an in-memory database is named in this way, it will only share its cache with another connection that uses exactly the same name.

If you didn't mean for the database to be just in-memory, and you wanted the table to be committed to disk (to be there next time you open the connection), drop the mode=memory component.
