
I'm using the below code to invoke an HTTP call whenever an insert or update happens on TEST_TABLE via an Oracle DB trigger:

create or replace TRIGGER TEST_TABLE_TRIGGER
AFTER INSERT OR UPDATE OF VALUE ON TEST_TABLE
FOR EACH ROW
DECLARE

  req utl_http.req;
  res utl_http.resp;
  url varchar2(100) := 'http://{serverIP}:8086/testMethod';

BEGIN
  -- need to pass current row to the http method
  req := utl_http.begin_request(url, 'GET', 'HTTP/1.1');
  utl_http.set_header(req, 'content-type', 'application/json');
  res := utl_http.get_response(req);
  utl_http.end_response(res);

END;

How can I pass the newly added/updated row as a parameter to the HTTP request? The HTTP request being invoked is a Java RESTful web service in which I will process the newly added/updated row.

1 Answer


Columns from the new or updated row can be referenced as :new.column_name in the trigger. You'll have to build the JSON payload yourself and send it in the request body: use a POST request and utl_http.write_text, since a GET request carries no body.
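As a sketch of that idea (the ID and VALUE column names are assumptions; substitute your table's actual columns), the trigger could build a small JSON document from the :new values and POST it:

    create or replace TRIGGER TEST_TABLE_TRIGGER
    AFTER INSERT OR UPDATE OF VALUE ON TEST_TABLE
    FOR EACH ROW
    DECLARE
      req     utl_http.req;
      res     utl_http.resp;
      url     varchar2(100) := 'http://{serverIP}:8086/testMethod';
      payload varchar2(4000);
    BEGIN
      -- build the JSON body from the new row (ID and VALUE are assumed columns)
      payload := '{"id":' || :new.id || ',"value":"' || :new.value || '"}';

      req := utl_http.begin_request(url, 'POST', 'HTTP/1.1');
      utl_http.set_header(req, 'Content-Type', 'application/json');
      utl_http.set_header(req, 'Content-Length', length(payload));
      utl_http.write_text(req, payload);

      res := utl_http.get_response(req);
      utl_http.end_response(res);
    END;

For anything beyond trivial values, prefer a proper JSON builder (e.g. the json_object() SQL function on 12.2+) over string concatenation, so quotes and special characters are escaped correctly.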

What happens if your REST service is down? If your trigger throws an error, then as written the transaction will fail and the insert/update will be rolled back. Is that the desired outcome?

Also be aware that even if it works, the transaction won't complete until the response from the REST call is received, so this arrangement could introduce a lot of latency into your application (whatever is updating the table). You might want to look at the "pragma autonomous_transaction" directive if you don't want that dependency or latency.
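A hedged sketch of that separation (the procedure name and payload handling are illustrative): move the HTTP call into a procedure declared with pragma autonomous_transaction and swallow any HTTP errors there, so a failed REST call can't roll back the caller's DML. Note the trigger still waits synchronously for the response; removing the latency as well would require something like a job or message queue.

    create or replace PROCEDURE notify_rest_service(p_payload varchar2) IS
      pragma autonomous_transaction;
      req utl_http.req;
      res utl_http.resp;
    BEGIN
      req := utl_http.begin_request('http://{serverIP}:8086/testMethod', 'POST', 'HTTP/1.1');
      utl_http.set_header(req, 'Content-Type', 'application/json');
      utl_http.set_header(req, 'Content-Length', length(p_payload));
      utl_http.write_text(req, p_payload);
      res := utl_http.get_response(req);
      utl_http.end_response(res);
      commit;  -- an autonomous transaction must end with commit or rollback
    EXCEPTION
      when others then
        rollback;  -- swallow the error so the caller's insert/update is not rolled back
    END;

The trigger body then reduces to building the payload and calling notify_rest_service(payload).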


8 Comments

But "pragma autonomous_transaction" introduces its own problems. If the inserting/updating transaction subsequently fails, the service has already been notified of success. At least I presume it's a successful notification.
Correct. Also, either way, what happens if a large number of rows are modified at the same time? Either the application will lock up waiting for the trigger to fire once for each row, or the REST interface could get overwhelmed by too many simultaneous requests. In most cases the remote application would pull the data on a timer, or use the REST API as a message queue to trigger a pull, rather than have the database push the actual data in real time. That way you don't introduce unwanted latency, service dependency, or risk processing uncommitted transactions.
A message queue could be called by autonomous transactions, and the trigger or the REST API could throttle the number of messages processed to limit data pulls to a reasonable number.
@pmdba we don't want the insert/update to be rolled back if the REST service is down or if the trigger throws an error. Do you mean "pragma autonomous_transaction" will take care of this, or should I modify the trigger to handle it? Also, the DB transaction shouldn't wait for the REST API response. The table we are using here will have at most 100 inserts/updates per day.
@VGH The problem with pushing data from Oracle to your REST interface as part of the transaction (in the trigger) is that if the REST call fails, so does the transaction. You can separate the REST call into an autonomous transaction, but then you have the risk of the trigger sending data to the REST that fails to commit to the database for some other reason. You would be better off to have your Java app make a JDBC connection to the database and poll the table periodically for updates, or use the REST interface to trigger such a poll through a message instead of sending the actual data.
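The polling alternative described above could be sketched like this (the LAST_MODIFIED column is an assumption you would have to add; the Java app keeps the high-water mark from its previous poll and binds it into the query):

    -- assumed new column, maintained by a much simpler trigger
    alter table TEST_TABLE add (last_modified timestamp default systimestamp);

    create or replace TRIGGER TEST_TABLE_TRIGGER
    BEFORE INSERT OR UPDATE OF VALUE ON TEST_TABLE
    FOR EACH ROW
    BEGIN
      :new.last_modified := systimestamp;
    END;

    -- the Java app runs this periodically over JDBC, binding the
    -- timestamp recorded at its previous poll
    select *
      from TEST_TABLE
     where last_modified > :last_poll_time
     order by last_modified;

Because the query only sees committed rows, this avoids the uncommitted-transaction and rollback problems entirely, and at ~100 changes per day a poll every few minutes would be cheap.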
