113

I am inserting JSON data into a MySQL database

I am parsing the JSON and then inserting it into a MySQL database using the Python connector.

Through trial, I can see the error is associated with this piece of code:

for steps in result['routes'][0]['legs'][0]['steps']:
    query = ('SELECT leg_no FROM leg_data WHERE travel_mode = %s AND Orig_lat = %s AND Orig_lng = %s AND Dest_lat = %s AND Dest_lng = %s AND time_stamp = %s')
    if steps['travel_mode'] == "pub_tran":
        travel_mode = steps['travel_mode']
        Orig_lat = steps['var_1']['dep']['lat']
        Orig_lng = steps['var_1']['dep']['lng']
        Dest_lat = steps['var_1']['arr']['lat']
        Dest_lng = steps['var_1']['arr']['lng']
        time_stamp = leg['_sent_time_stamp'] 
    if steps['travel_mode'] =="a_pied":
        query = ('SELECT leg_no FROM leg_data WHERE travel_mode = %s AND Orig_lat = %s AND Orig_lng = %s AND Dest_lat = %s AND Dest_lng = %s AND time_stamp = %s')
        travel_mode = steps['travel_mode']
        Orig_lat = steps['var_2']['lat']
        Orig_lng = steps['var_2']['lng']
        Dest_lat = steps['var_2']['lat']
        Dest_lng = steps['var_2']['lng']
        time_stamp = leg['_sent_time_stamp']
    cursor.execute(query,(travel_mode, Orig_lat, Orig_lng, Dest_lat, Dest_lng, time_stamp))
    leg_no = cursor.fetchone()[0]
    print(leg_no)

I have already inserted the higher-level details and am now searching the database to associate this lower-level information with its parent. The only way to find this unique value is to search via the origin and destination coordinates together with the time_stamp. I believe the logic is sound, and by printing leg_no immediately after this section I can see values which, at first inspection, appear to be correct.

However, when added to the rest of the code, it causes subsequent sections where more data is inserted using the cursor to fail with this error -

    raise errors.InternalError("Unread result found.")
mysql.connector.errors.InternalError: Unread result found.

The issue seems similar to MySQL Unread Result with Python

Is the query too complex and needs splitting or is there another issue?

If the query is indeed too complex, can anyone advise how best to split this?

EDIT As per @Gord's help, I've tried to dump any unread results

cursor.execute(query,(leg_travel_mode, leg_Orig_lat, leg_Orig_lng, leg_Dest_lat, leg_Dest_lng))
leg_no = cursor.fetchone()[0]
try:
    cursor.fetchall()
except mysql.connector.errors.InterfaceError as ie:
    if ie.msg == 'No result set to fetch from.':
        pass
    else:
        raise
cursor.execute(query,(leg_travel_mode, leg_Orig_lat, leg_Orig_lng, leg_Dest_lat, leg_Dest_lng, time_stamp))

But, I still get

raise errors.InternalError("Unread result found.")
mysql.connector.errors.InternalError: Unread result found.
[Finished in 3.3s with exit code 1]

scratches head

EDIT 2 - When I print ie.msg, I get -

No result set to fetch from
  • Are you looping over a result set, and using the result to query the database again? Are you using the same cursor for that? It's probably good to use a buffered cursor for the first one then. I don't have much more to add to Gord's answer. Commented Apr 22, 2015 at 13:36
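A minimal sketch of what that comment suggests, assuming cnx is the question's open connection; step_data is a hypothetical child table used only for illustration:

# Buffered cursor for the SELECT that drives the loop; a second cursor
# for the statements issued inside the loop.
select_cur = cnx.cursor(buffered=True)   # whole result set is fetched up front
insert_cur = cnx.cursor()

select_cur.execute("SELECT leg_no, travel_mode FROM leg_data")
for leg_no, travel_mode in select_cur:
    insert_cur.execute(
        "INSERT INTO step_data (leg_no, travel_mode) VALUES (%s, %s)",  # hypothetical table
        (leg_no, travel_mode))
cnx.commit()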

10 Answers

332

All that was required was for buffered to be set to true!

cursor = cnx.cursor(buffered=True)

The reason is that without a buffered cursor, the results are "lazily" loaded, meaning that fetchone() actually only fetches one row from the full result set of the query. When you use the same cursor again, it will complain that you still have n-1 results (where n is the number of rows in the result set) waiting to be fetched. However, when you use a buffered cursor, the connector fetches ALL rows behind the scenes and you just take one from the connector, so the MySQL server won't complain.
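A minimal sketch of the fix in context, assuming a local MySQL server and the question's leg_data table; the connection parameters are placeholders:

import mysql.connector

cnx = mysql.connector.connect(host='127.0.0.1', user='root',
                              password='whatever', database='mydb')

# buffered=True makes the connector pull the whole result set immediately,
# so taking only the first row leaves no unread rows on the connection.
cursor = cnx.cursor(buffered=True)

cursor.execute("SELECT leg_no FROM leg_data WHERE travel_mode = %s", ("pub_tran",))
leg_no = cursor.fetchone()[0]                     # take just the first row

cursor.execute("SELECT COUNT(*) FROM leg_data")   # re-using the cursor is now fine
print(leg_no, cursor.fetchone()[0])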


5 Comments

Bonus reason to use buffered=True: it makes .rowcount return the total row count from the start (see dev.mysql.com/doc/connector-python/en/… and the sketch after these comments).
ilaif's comment explains it well. But it is worth mentioning that this solution only bypasses the error; you may want to make sure that fetchone is appropriate given a full result set, expected or unexpected, to avoid any potential issues.
Please make this a better answer so people do not have to read the comments.
This stupid / frustrating exception has been bugging me for two days. THANK YOU for the fix & explanation!
This is not compatible with prepared=True.
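A quick sketch of the .rowcount point from the first comment, assuming an open connection cnxn and the pytest table created in the next answer:

crsr = cnxn.cursor()                       # unbuffered
crsr.execute("SELECT firstname FROM pytest")
print(crsr.rowcount)                       # -1: rows have not been fetched yet
crsr.fetchall()
print(crsr.rowcount)                       # now reflects the rows actually fetched

crsr_buf = cnxn.cursor(buffered=True)      # buffered
crsr_buf.execute("SELECT firstname FROM pytest")
print(crsr_buf.rowcount)                   # total row count available immediately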
35

I was able to recreate your issue. MySQL Connector/Python apparently doesn't like it if you retrieve multiple rows and don't fetch them all before closing the cursor or using it to retrieve some other stuff. For example

import mysql.connector
cnxn = mysql.connector.connect(
    host='127.0.0.1',
        user='root',
        password='whatever',
        database='mydb')
crsr = cnxn.cursor()
crsr.execute("DROP TABLE IF EXISTS pytest")
crsr.execute("""
CREATE TABLE pytest (
    id INT(11) NOT NULL AUTO_INCREMENT,
    firstname VARCHAR(20),
    PRIMARY KEY (id)
    )
""")
crsr.execute("INSERT INTO pytest (firstname) VALUES ('Gord')")
crsr.execute("INSERT INTO pytest (firstname) VALUES ('Anne')")
cnxn.commit()
crsr.execute("SELECT firstname FROM pytest")
fname = crsr.fetchone()[0]
print(fname)
crsr.execute("SELECT firstname FROM pytest")  # InternalError: Unread result found.

If you only expect (or care about) one row then you can put a LIMIT on your query

crsr.execute("SELECT firstname FROM pytest LIMIT 0, 1")
fname = crsr.fetchone()[0]
print(fname)
crsr.execute("SELECT firstname FROM pytest")  # OK now

or you can use fetchall() to get rid of any unread results after you have finished working with the rows you retrieved.

crsr.execute("SELECT firstname FROM pytest")
fname = crsr.fetchone()[0]
print(fname)
try:
    crsr.fetchall()  # fetch (and discard) remaining rows
except mysql.connector.errors.InterfaceError as ie:
    if ie.msg == 'No result set to fetch from.':
        # no problem, we were just at the end of the result set
        pass
    else:
        raise
crsr.execute("SELECT firstname FROM pytest")  # OK now

3 Comments

Thanks @Gord for your time! Interesting. So, I care about more than one row and so a limit on the query won't work. I'm trying the fetchall method. However, I get exactly the same error as before :s. I've edited my question to reflect this. Any ideas? Thanks again, Gerry
"fetchall() to get rid of any unread results" was very helpful.
I think this answer makes more sense and solves the issue rather than bypassing it, compared to the selected answer.
20

cursor.reset() is really what you want.

fetchall() is not good because you may end up moving unnecessary data from the database to your client.
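A minimal sketch of this approach, assuming the pytest table and placeholder connection settings from the answer above:

import mysql.connector

cnx = mysql.connector.connect(host='127.0.0.1', user='root',
                              password='whatever', database='mydb')
cursor = cnx.cursor()

cursor.execute("SELECT firstname FROM pytest")
first = cursor.fetchone()[0]                    # take only the first row

cursor.reset()                                  # discard any unread result so the cursor can be reused

cursor.execute("SELECT COUNT(*) FROM pytest")   # no "Unread result found." now
print(first, cursor.fetchone()[0])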

2 Comments

For mysql_connector_python-8.0.26, at least, cursor.reset() works, whereas cursor.close() does not. Strange that I could find no documentation for this. Using a buffered connection is not the best solution for everyone, so building this into your cursor iteration is the best way: try: for record in cursor: do stuff; except: cursor.reset(); raise (see the sketch after these comments).
This doesn't work. I still get the exception.
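A cleaned-up sketch of the iteration pattern described in the first comment, assuming the same cursor and pytest table as in the sketch above:

cursor.execute("SELECT firstname FROM pytest")
try:
    for (firstname,) in cursor:
        print(firstname)        # per-row work goes here
except Exception:
    cursor.reset()              # free any unread rows before propagating the error
    raise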
2

If you only want one result from a query, and want to reuse the same connection for other queries afterwards, limit your SQL SELECT to a single row by adding "LIMIT 1" at the end of the query.

e.g. "SELECT field FROM table WHERE x = 1 LIMIT 1;"

This method is faster than using "buffered=True".

Comments

2

The problem is about buffering: maybe you disconnected from the previous MySQL connection and now it cannot perform the next statement. There are two ways to give a buffer to the cursor. First, enable it only for a particular cursor:

import mysql.connector

cnx = mysql.connector.connect()

# Only this particular cursor will buffer results
cursor = cnx.cursor(buffered=True)

Alternatively, you can enable buffering for every cursor created from a connection:

import mysql.connector

# All cursors created from cnx2 will be buffered by default
cnx2 = mysql.connector.connect(buffered=True)

cursor = cnx2.cursor()

If you were disconnected from MySQL, the latter should work for you. Enjoy coding!

Comments

2

Set the consume_results argument on the connect() method to True.

cnx = mysql.connector.connect(
    host="localhost",
    user="user",
    password="password",
    database="database",
    consume_results=True
)

Now instead of throwing an exception, it basically does fetchall().
Unfortunately this is still slow if you have a lot of unread rows.

1 Comment

This solved it for me :) None of the other suggestions worked.
0

There is also a possibility that your connection to MySQL Workbench has been disconnected. Establish the connection again. This solved the problem for me.

Comments

0

Call cursor.reset() and then create the tables and load the entries.

1 Comment

Your answer could be improved with additional supporting information. Please edit to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers in the help center.
0

I had the same issue. My goal was to run a SELECT on a large number of rows, fetch them one by one, and do an INSERT into another table for each row returned by the SELECT.

The only solution I found was to open 2 connections (so 2 cursors).

Hope it helps.
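A minimal sketch of this two-connection approach, assuming the pytest table from the earlier answer and a hypothetical destination table pytest_copy; the connection parameters are placeholders:

import mysql.connector

conn_params = dict(host='127.0.0.1', user='root',
                   password='whatever', database='mydb')

read_cnx = mysql.connector.connect(**conn_params)
write_cnx = mysql.connector.connect(**conn_params)

read_cur = read_cnx.cursor()
write_cur = write_cnx.cursor()

read_cur.execute("SELECT id, firstname FROM pytest")          # the large SELECT
for row_id, firstname in read_cur:                            # rows come one by one
    write_cur.execute(
        "INSERT INTO pytest_copy (src_id, firstname) VALUES (%s, %s)",  # hypothetical table
        (row_id, firstname))

write_cnx.commit()
read_cnx.close()
write_cnx.close()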

Comments

-1

Would creating the cursor within the for loop, executing it, and then closing it again inside the loop help? Like:

for steps in result['routes'][0]['legs'][0]['steps']:
    cursor = cnx.cursor()
    ....
    leg_no = cursor.fetchone()[0]
    cursor.close()
    print(leg_no)

2 Comments

I tested closing the cursor with unread rows, and it did not prevent the exception on the following query.
Did you test it? Now close() throws that exception.
