Below is the function which I am running on two different tables that contain the same column names.
-- Function: test(character varying)
-- DROP FUNCTION test(character varying);

CREATE OR REPLACE FUNCTION test(table_name character varying)
  RETURNS SETOF void AS
$BODY$
DECLARE
    recordcount integer;
    j integer;
    hstoredata hstore;
BEGIN
    recordcount := getTableName(table_name);

    FOR j IN 1..recordcount LOOP
        RAISE NOTICE 'RECORD NUMBER IS: %', j;
        EXECUTE format('SELECT hstore(t) FROM datas.%I t WHERE id = $1', table_name) USING j INTO hstoredata;
        RAISE NOTICE 'hstoredata: %', hstoredata;
    END LOOP;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100
ROWS 1000;
When the above function is run on a table containing 1000 rows, the time taken is around 536 ms.
When the above function is run on a table containing 10000 rows, the time taken is around 27994 ms.
Extrapolating from the 1000-row timing, 10000 rows should logically take around 5360 ms, but the actual execution time is much higher.
In order to reduce the execution time, please suggest what changes should be made.
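A plausible explanation for the non-linear growth, assuming there is no index on id, is that each EXECUTE in the loop performs a sequential scan, so the cost of every single lookup also grows with the table size. A minimal sketch of how to check and address that; datas.mytable is a placeholder table name, not from the original post:

-- Check the plan of a single lookup; a Seq Scan here means every loop
-- iteration reads the whole table (table name is a placeholder).
EXPLAIN SELECT hstore(t) FROM datas.mytable t WHERE id = 1;

-- If so, an index on id turns each lookup into a cheap index scan.
CREATE INDEX mytable_id_idx ON datas.mytable (id);
ANALYZE datas.mytable;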
Comments:

What does getTableName() do? Additionally: assigning the result of a function called getTableName() to a recordcount seems quite strange (why do you assign a "name" to a "count"?). The loop looks strange as well. Maybe if you told us what your real problem is, we could improve the function.

Is there an index on the id column? And apparently you are dealing with all rows anyway, so why the LOOP? You could simply retrieve all rows together with the ID and get the same result.
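As the last comment suggests, if the goal is simply to see every row as an hstore, the per-row loop can be dropped and all rows fetched in a single dynamic query. A hedged sketch along those lines; the function name test_setbased and the RETURNS SETOF hstore signature are my assumptions, not part of the original post:

CREATE OR REPLACE FUNCTION test_setbased(table_name character varying)
  RETURNS SETOF hstore AS
$BODY$
BEGIN
    -- One dynamic statement instead of one statement per id:
    -- the table is scanned once and every row comes back as an hstore.
    RETURN QUERY EXECUTE format('SELECT hstore(t) FROM datas.%I t ORDER BY id', table_name);
END;
$BODY$
LANGUAGE plpgsql STABLE;

-- Usage (table name is a placeholder):
-- SELECT * FROM test_setbased('mytable');

This avoids re-planning and re-scanning the table for every id, which is where the bulk of the time in the looped version is likely spent.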