
I have been trying to get BLOB data from Oracle into a text file using Python, and I couldn't find an answer in any of the other questions I looked at.

Below is my code:

sql_string = """select
    event_id
    ,blob_length
    ,blob_field
from table"""

cur.execute(sql_string)
path = "P:/Folders/"

for row in cur:
    filename = path + "notes_" + str(row[0]) + "_" + str(row[1]) + ".txt"
    f = codecs.open(filename, encoding='utf-8', mode='wb+')
    f.write(row[2])
    f.close()

I get the error below:

TypeError: utf_8_encode() argument 1 must be str, not cx_Oracle.LOB

I have tried a few other approaches, but the ones I've seen only handle strings, not BLOBs.

  • You have to read the LOB data into a str. Try row[2].read() Commented Aug 16, 2018 at 0:41

4 Answers


You have to use the cx_Oracle.LOB.read() method to get the content of the LOB object:

f.write(row[2].read())



I implemented what @blhsing suggested and it worked perfectly:

    for row in cur:
        filename = path + "notes_" + str(row[0]) + "_" + str(row[1]) + ".txt"
        data = row[2].read()   # read the full LOB contents as bytes once
        with open(filename, "wb") as f:
            f.write(data)



If your LOBs are small enough to fit in memory, you'll get better performance by fetching BLOBs as LONG_BINARY (and CLOBs as LONG_STRING):

def OutputTypeHandler(cursor, name, defaultType, size, precision, scale):
    if defaultType == cx_Oracle.CLOB:
        return cursor.var(cx_Oracle.LONG_STRING, arraysize = cursor.arraysize)
    if defaultType == cx_Oracle.BLOB:
        return cursor.var(cx_Oracle.LONG_BINARY, arraysize = cursor.arraysize)

See the cx_Oracle example at https://github.com/oracle/python-cx_Oracle/blob/master/samples/ReturnLobsAsStrings.py
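For reference, a handler like this is attached to the connection before creating cursors. This is only a sketch that assumes a live Oracle database; `user`, `password`, `dsn`, and `sql_string` are placeholders you would supply yourself:

```python
import cx_Oracle

connection = cx_Oracle.connect(user, password, dsn)
connection.outputtypehandler = OutputTypeHandler  # applies to every cursor on this connection

cur = connection.cursor()
cur.execute(sql_string)
for row in cur:
    # row[2] arrives as plain bytes (or str for a CLOB); no .read() call needed
    with open(path + "notes_" + str(row[0]) + "_" + str(row[1]) + ".txt", "wb") as f:
        f.write(row[2])
```

Setting the handler on the connection rather than on a single cursor means every query on that connection benefits from the faster fetch.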

1 Comment

Thanks Christopher. I was just reading the comments in that sample, and they say this method requires contiguous memory, so it is not usable for very large LOBs. I am already facing a problem with large BLOBs: I was trying to write them to text files so I could uncompress each complete BLOB, which I currently cannot do with dataframes or a text file. stackoverflow.com/questions/51922686/… Any help/suggestions would be great!

If you export the table as TSV (tab-separated values) from DataGrip or a similar tool, you can convert the hex dump of the blob into Python bytes this way:

import binascii

hexContent = blobContent[1:-1]         # strip the surrounding quote characters
blob = binascii.unhexlify(hexContent)  # convert the hex text back to raw bytes

Then you can save it to a file (remember to open with 'wb' to write binary data), or work with it as a normal blob in other ways.
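As a quick sanity check of the conversion above, here is a self-contained example using a made-up TSV cell value (the surrounding-quotes convention is an assumption; adjust the slicing to whatever your export tool actually emits):

```python
import binascii

# hypothetical TSV cell: the hex dump of the bytes b"Hello", wrapped in quotes
blobContent = '"48656c6c6f"'

hexContent = blobContent[1:-1]          # strip the surrounding quotes
blob = binascii.unhexlify(hexContent)   # convert hex text back to raw bytes

print(blob)  # b'Hello'

# write it out in binary mode, as you would for the real blob
with open("restored.bin", "wb") as f:
    f.write(blob)
```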

