
I am trying to parse dirty input into Postgres tables. I have a problem with a 'date' field occasionally containing non-dates such as '00000000' or '20100100'. Postgres refuses to accept these, and rightly so.

Is there a way to have postgres recognize invalid dates (or only valid dates, if that works better), so I can substitute a sensible default?

(I've considered building a table listing the dates I'm willing to accept, and use that in a sub-select, but that seems awfully inelegant.)

Cheers,

Jurgen

  • a date field cannot contain invalid dates. Do you perhaps mean a varchar column that contains date values? Commented Jul 18, 2011 at 8:07
  • It is, indeed, a varchar column which I try to interpret as a date. Commented Jul 18, 2011 at 9:35

2 Answers


http://www.tek-tips.com/viewthread.cfm?qid=1280050&page=9

A more generic approach than the above:

create function safe_cast(text, anyelement)
returns anyelement
language plpgsql as $$
begin
    -- $0 holds the function's polymorphic return value;
    -- assigning the text argument to it performs the cast
    $0 := $1;
    return $0;
exception when others then
    -- the cast failed; fall back to the supplied default
    return $2;
end; $$;

Used like this:

select safe_cast('Jan 10, 2009', '2011-01-01'::timestamp)
select safe_cast('Jan 10, 2009', null::timestamp)
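Applied to the original problem, a sketch of substituting a sensible default while importing from the dirty varchar column might look like this (the table and column names here are made up for illustration):

    -- hypothetical staging table where raw_date is varchar and may
    -- contain junk such as '00000000' or '20100100'
    insert into clean_table (event_date)
    select safe_cast(raw_date, date '2000-01-01')
    from staging_table;

Rows with unparseable dates get the default instead of aborting the whole insert.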

Credited to the friendly dudes at the #postgresql irc channel. :)


2 Comments

That seems to be exactly what I need. Thanks!
Solid answer; would love to see a lightweight solution implemented in PostgreSQL core. As mentioned in this comment, the extra subtransactions the proposed answer creates might cause performance issues in some cases. stackoverflow.com/questions/25374707/…

You could write a PL/pgSQL function with an exception-handling block.
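For the date case specifically, a minimal non-polymorphic sketch of that idea (the function name is made up) could be:

    create function safe_date(p text, fallback date)
    returns date
    language plpgsql as $$
    begin
        return p::date;      -- succeeds for any value Postgres accepts as a date
    exception when others then
        return fallback;     -- invalid input falls back to the default
    end; $$;

This avoids building a lookup table of acceptable dates: the cast itself decides validity.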

