
I am loading data into my Postgres warehouse with df.to_sql(). I used the following answer to handle the same error, but now all nulls are inserted as nan in my tables. Below is how I implemented the answer, along with some things I have tried that do not work.

Looking for any suggestions here; I am running into this with text columns.

# strip NUL (\x00) bytes from text columns
df = df.apply(
    lambda col: col.astype(str).str.replace("\x00", "")
    if col.dtype == "object"
    else col,
    axis=0,
)

# added these before .to_sql()
# .replace({"": None})
# .fillna(np.nan)
# .replace({np.nan: None})

df.to_sql(name, con, schema, if_exists="append")
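Side note on why the commented-out attempts above have no effect: by the time they run, astype(str) has already turned each missing value into the literal string "nan", so there is no np.nan left for .replace or .fillna to find. A minimal reproduction (the sample series here is made up):

```python
import numpy as np
import pandas as pd

# made-up sample: one string containing a NUL byte, one missing value
s = pd.Series(["x\x00y", np.nan], dtype="object")

# astype(str) stringifies the missing value into the literal "nan"
out = s.astype(str).str.replace("\x00", "", regex=False)

# out.iloc[1] is now the 3-character string "nan", not a missing value,
# so a later .replace({np.nan: None}) or .fillna(...) finds nothing.
```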
  • Add some dummy sample data and the desired result; also, what is the structure of the target table "name" in PostgreSQL? Commented Oct 14, 2022 at 18:56
  • @PepeNO Not sure why this worked, but I slapped a .replace('nan', np.nan) before the to_sql and it worked. Commented Oct 18, 2022 at 1:37
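An alternative to the .replace('nan', np.nan) patch is to avoid stringifying nulls in the first place: pandas .str methods skip missing values, so dropping astype(str) keeps them as NaN, and a final .replace({np.nan: None}) lets to_sql write real NULLs. A sketch, assuming the object columns genuinely hold strings (the DataFrame and column names here are invented):

```python
import numpy as np
import pandas as pd

# made-up sample frame: a text column with a NUL byte and a null, plus ints
df = pd.DataFrame({"a": ["x\x00y", np.nan, "z"], "b": [1, 2, 3]})

# .str.replace leaves missing values as NaN instead of the string "nan"
cleaned = df.apply(
    lambda col: col.str.replace("\x00", "", regex=False)
    if col.dtype == "object"
    else col
)

# convert the surviving NaNs to None so to_sql inserts SQL NULL
cleaned = cleaned.replace({np.nan: None})
```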
