315

I have a database schema named: nyummy and a table named cimory:

create table nyummy.cimory (
  id numeric(10,0) not null,
  name character varying(60) not null,
  city character varying(50) not null,
  CONSTRAINT cimory_pkey PRIMARY KEY (id)
);

I want to export the cimory table's data as an insert SQL script file. However, I only want to export the records where the city equals 'tokyo' (assume the city data is all lowercase).

How to do it?

It doesn't matter whether the solution uses freeware GUI tools or the command line (although a GUI tool solution is better). I tried pgAdmin III, but I can't find an option to do this.

2
  • PostgreSQL cannot select across databases. At least, older versions cannot, and neither can Greenplum; I don't know about 9.x. Commented Feb 17, 2017 at 13:55
  • I realize this is old, but I just wanted to mention that it is possible to select across databases using dblink, which has been available since at least v8.3. It makes use of foreign servers and foreign data wrappers to connect to "remote" databases. This works whether those databases exist on the same instance or entirely different hosts. I've used it fairly extensively to create materialized views into other databases to facilitate certain reporting and such and it works great. Commented Mar 5, 2020 at 14:46

15 Answers

437

Create a table with the set you want to export and then use the command line utility pg_dump to export to a file:

create table export_table as
select id, name, city
from nyummy.cimory
where city = 'tokyo';
$ pg_dump --table=export_table --data-only --column-inserts my_database > data.sql

--column-inserts will dump as insert commands with column names.

--data-only dumps only the data, not the schema.

As commented below, creating a view instead of a table avoids having to recreate the table whenever a new export is necessary.


12 Comments

Alright, so far your solution works. One thing missing: I needed to add "-U user_name". I also almost succeeded with the ToraSQL tool; it just has an error with date-time data in the resulting script. If no one gives a GUI tool solution within 2 days, your answer will be accepted.
Just to share with others: you can also use the free GUI tool SQL Workbench/J (with the PostgreSQL JDBC4 driver) to do the same thing.
This would be much better with create view export_view..., since the view would remain up-to-date with changes to the base table. The docs say --table=table: Dump only tables (or **views**... so I had some hope this would work, but dumping a view sadly yields no data. :P
@poshest It works for me in 9.5. What exactly did you try?
This solution works. With it, the table name in the SQL file is "export_table", but the original table name is nyummy.cimory. Say I have a production DB server and a UAT DB server, both with the nyummy schema and cimory table, and I need to dump particular rows from the production cimory table into the UAT cimory table. With this approach I have to create an "export_table", change the table name in the SQL file, and then copy the data to the UAT table. So, is there any way to dump particular rows from a table directly?
246

To export only data use COPY:

COPY (SELECT * FROM nyummy.cimory WHERE city = 'tokyo') TO '/path/to/file.csv';

You can export a whole table, only selected columns, or the result of a query as demonstrated. No need to create a table explicitly.

You get a file with one table row per line of plain text (not INSERT commands); it's smaller and faster than INSERT commands.

To import the same to another Postgres table of matching structure anywhere (columns in same order, data types compatible!):

COPY other_tbl FROM '/path/to/file.csv';

COPY writes and reads files local to the server, unlike client programs like pg_dump or psql which read and write files local to the client. If both run on the same machine, it doesn't matter much, but it does for remote connections.

There is also the \copy command of psql:

Performs a frontend (client) copy. This is an operation that runs an SQL COPY command, but instead of the server reading or writing the specified file, psql reads or writes the file and routes the data between the server and the local file system. This means that file accessibility and privileges are those of the local user, not the server, and no SQL superuser privileges are required.

Same syntax as above. Just replace COPY with \copy.
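For scripting around these exports, it helps to know COPY's default text format: one row per line, fields separated by tabs, NULL written as \N. A minimal Python sketch of a line parser (backslash escapes inside data values are deliberately ignored here):

```python
# Split one line of COPY's default text format into field values.
# Simplified sketch: handles tab separators and \N for NULL, but not
# the backslash escapes COPY uses for tabs/newlines inside values.
def parse_copy_line(line):
    return [None if field == r"\N" else field
            for field in line.rstrip("\n").split("\t")]

print(parse_copy_line("1\tAkira\ttokyo\n"))  # ['1', 'Akira', 'tokyo']
print(parse_copy_line("2\t\\N\ttokyo\n"))    # ['2', None, 'tokyo']
```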

8 Comments

The OP specifically asks for the data as an insert SQL script file. I guess he means INSERT commands, don't you think?
@Clodoaldo: You may be right, in which case your answer would be a better fit. One could also copy the CREATE script in pgAdmin separately (as the OP mentions GUIs).
STDIN and STDOUT can be used in place of file path, useful for small data exports.
Without the --column-inserts flag, pg_dump uses a COPY from STDIN for each of the tables in the SQL code it generates.
Take care that the order of the columns you SELECT matches the order of the columns in the destination database. If it doesn't, this could fail, or worse, succeed but insert bad data.
71

This is an easy and fast way to export a table to a script with pgAdmin manually without extra installations:

  1. Right click on target table and select "Backup".
  2. Select a file path to store the backup. As Format choose "Plain".
  3. Open the tab "Dump Options #2" at the bottom and check "Use Column Inserts".
  4. Click the Backup-button.
  5. If you open the resulting file in a text editor (e.g. Notepad++), you get a script to create the whole table, from which you can simply copy the generated INSERT statements.

This method also works with the technique of making an export_table as demonstrated in @Clodoaldo Neto's answer.


5 Comments

When I do this, there is no "Backup" option. This is pgAdmin III v1.18.1 connecting to Greenplum 4.3.4.1 (based on PostgreSQL 8.2.15).
I installed pgAdmin III v1.18.1 and the "Backup" option was there. I connected to PostgreSQL 9.5, so the problem is most probably between pgAdmin and Greenplum.
Works as intended in pgAdmin4
This works great in pgAdmin 8.12. Thank you for posting this.
This is great. much better than other options.
14

For my use-case I was able to simply pipe to grep.

pg_dump -U user_name --data-only --column-inserts -t nyummy.cimory | grep "tokyo" > tokyo.sql
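As the comments note, a bare grep can match 'tokyo' in other fields too. A hedged refinement, sketched in Python, is to anchor the match to the city column's position in the --column-inserts output (the column list here is assumed from the question):

```python
import re

# Keep only INSERT lines whose final (city) column is 'tokyo', rather
# than any line containing the string anywhere.
pattern = re.compile(
    r"^INSERT INTO \S+ \(id, name, city\) VALUES \(.*, 'tokyo'\);$")

def filter_dump(lines):
    return [line for line in lines if pattern.match(line)]

dump = [
    "INSERT INTO nyummy.cimory (id, name, city) VALUES (1, 'tokyo tower', 'osaka');",
    "INSERT INTO nyummy.cimory (id, name, city) VALUES (2, 'Akira', 'tokyo');",
]
print(filter_dump(dump))  # keeps only the second line
```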

3 Comments

Be careful: 'tokyo' could also appear in another field.
@BuyutJokoRivai since it's a dump of a single table, in most cases it should be fine
With a big table, though, you dump all the rows just to grep them, which is the pitfall of this solution. Querying the result into a table and dumping that, as in stackoverflow.com/a/12816187/248616, is more suitable.
11

SQL Workbench has such a feature.

After running a query, right click on the query results and choose "Copy Data As SQL > SQL Insert"

2 Comments

It works great. When you choose 'postgres' as the 'driver', you'll probably have to download the JDBC driver yourself: jdbc.postgresql.org/download.html (it's a .jar file, a Java binary) and add it as the 'driver' of the PostgreSQL connection. The connection string (or URL, as in the interface) should look like this: jdbc:postgresql://127.0.0.1:5432/db_name
DBVisualizer has a similar and excellent feature that can copy to a file or straight to the clipboard.
10

I tried to write a procedure to do that, based on @PhilHibbs' code, in a different way. Please have a look and test it.

 CREATE OR REPLACE FUNCTION dump(IN p_schema text, IN p_table text, IN p_where text)
   RETURNS setof text AS
 $BODY$
 DECLARE
     dumpquery_0 text;
     dumpquery_1 text;
     selquery text;
     selvalue text;
     valrec record;
     colrec record;
 BEGIN

     -- ------ --
     -- GLOBAL --
     --   build base INSERT
     --   build SELECT array[ ... ]
     dumpquery_0 := 'INSERT INTO ' ||  quote_ident(p_schema) || '.' || quote_ident(p_table) || '(';
     selquery    := 'SELECT array[';

     <<label0>>
     FOR colrec IN SELECT table_schema, table_name, column_name, data_type
                   FROM information_schema.columns
                   WHERE table_name = p_table and table_schema = p_schema
                   ORDER BY ordinal_position
     LOOP
         dumpquery_0 := dumpquery_0 || quote_ident(colrec.column_name) || ',';
         selquery    := selquery    || 'CAST(' || quote_ident(colrec.column_name) || ' AS TEXT),';
     END LOOP label0;

     dumpquery_0 := substring(dumpquery_0 ,1,length(dumpquery_0)-1) || ')';
     dumpquery_0 := dumpquery_0 || ' VALUES (';
     selquery    := substring(selquery    ,1,length(selquery)-1)    || '] AS MYARRAY';
     selquery    := selquery    || ' FROM ' ||quote_ident(p_schema)||'.'||quote_ident(p_table);
     selquery    := selquery    || ' WHERE '||p_where;
     -- GLOBAL --
     -- ------ --

     -- ----------- --
     -- SELECT LOOP --
     --   execute SELECT built and loop on each row
     <<label1>>
     FOR valrec IN  EXECUTE  selquery
     LOOP
         dumpquery_1 := '';
         IF not found THEN
             EXIT ;
         END IF;

         -- ----------- --
         -- LOOP ARRAY (EACH FIELDS) --
         <<label2>>
         FOREACH selvalue in ARRAY valrec.MYARRAY
         LOOP
             IF selvalue IS NULL
             THEN selvalue := 'NULL';
             ELSE selvalue := quote_literal(selvalue);
             END IF;
             dumpquery_1 := dumpquery_1 || selvalue || ',';
         END LOOP label2;
         dumpquery_1 := substring(dumpquery_1 ,1,length(dumpquery_1)-1) || ');';
         -- LOOP ARRAY (EACH FIELD) --
         -- ----------- --

         -- debug: RETURN NEXT dumpquery_0 || dumpquery_1 || ' --' || selquery;
         -- debug: RETURN NEXT selquery;
         RETURN NEXT dumpquery_0 || dumpquery_1;

     END LOOP label1 ;
     -- SELECT LOOP --
     -- ----------- --

 RETURN ;
 END
 $BODY$
   LANGUAGE plpgsql VOLATILE;

And then :

-- for a range
SELECT dump('public', 'my_table','my_id between 123456 and 123459'); 
-- for the entire table
SELECT dump('public', 'my_table','true');

Tested on my Postgres 9.1, with a table of mixed field datatypes (text, double, int, timestamp without time zone, etc.); that's why the CAST to TEXT is needed.

My test ran correctly on about 9M rows, finishing in just under 18 minutes.
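The function's core moves are: cast every value to text, quote non-NULL values the way quote_literal does (wrap in single quotes, double any embedded quotes), and emit NULL unquoted. The same logic can be sketched in Python for rows already fetched from the database (table and column names below are just the question's example):

```python
# Mimic quote_literal: NULL stays bare, everything else is quoted as
# text with embedded single quotes doubled.
def quote_literal(value):
    if value is None:
        return "NULL"
    return "'" + str(value).replace("'", "''") + "'"

def rows_to_inserts(schema, table, columns, rows):
    collist = ", ".join(columns)
    for row in rows:
        values = ", ".join(quote_literal(v) for v in row)
        yield f"INSERT INTO {schema}.{table} ({collist}) VALUES ({values});"

stmts = list(rows_to_inserts("nyummy", "cimory", ["id", "name", "city"],
                             [(1, "O'Hara", "tokyo"), (2, None, "tokyo")]))
print(stmts[0])
# INSERT INTO nyummy.cimory (id, name, city) VALUES ('1', 'O''Hara', 'tokyo');
```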

PS: I found an equivalent for MySQL on the web.

Comments

6

You can create a view of the table with the specific records and then dump that as an SQL file:

CREATE VIEW foo AS
SELECT id, name, city FROM nyummy.cimory WHERE city = 'tokyo';

4 Comments

I tried it in pgAdmin III, but for View object, there is no option for dumping.
Try navicat. I'm using it and it has export sql script option
@Giorgi: is there a freeware version?
It is not possible using Postgres 9.1
4

Here are the BASH functions that I'm using to do this easily from the CLI:

function psql_oneoff() {
    local query=$1

    PGPASSWORD=$POSTGRES_PASSWORD "psql" \
        -d "$PG_DB_NAME" \
        -h "${PG_HOST:-localhost}" \
        -U "${POSTGRES_USER:-$USER}" \
        -p "$PG_PORT" \
        -c "$query"
}

function dump_sql_query_as_insert() {
    local query=$1
    local temp_table_name="_tmp_dump_as_inserts"
    psql_oneoff "DROP TABLE IF EXISTS $temp_table_name"
    full_query="CREATE TABLE $temp_table_name AS $query"
    psql_oneoff "$full_query"
    PGPASSWORD=$POSTGRES_PASSWORD pg_dump \
        --host "${PG_HOST:-localhost}" \
        --username "${POSTGRES_USER:-$USER}" \
        --port "$PG_PORT" \
        --column-inserts \
        --data-only \
        --table="$temp_table_name" \
        "$PG_DB_NAME"
    psql_oneoff "DROP TABLE $temp_table_name"
}

# Can be used like this:
dump_sql_query_as_insert "SELECT * FROM important_table WHERE clever_condition = true"

Just for context, I mainly use this for creating essential seed data in testing and local development environments.

Comments

3

I needed a way to generate insert statements without creating a temp table (in production). I saw some useful pg_dump arguments above, but still ended up devising a way to

generate the insert statements and dump them into a file.

The statement below did the trick, and I feel it will be useful for people who end up here looking for a similar answer.

$ /usr/local/bin/pg_dump -h the_db_host_address -d the_db_name --table=schema_name.table_name --data-only --column-inserts -U postgres -p the_port_number -v -f /Users/the_user/folder_name/insrt_stmts_file_name.sql

You will then be prompted for the password; in the case above I provided the postgres password, but any other user with read access should work fine.

I ran just the insert statements from the file insrt_stmts_file_name.sql in the DBeaver client.

But the same can be run from the command prompt as well using the following script:

/usr/local/bin/psql -h the_db_host_address -d the_db_name -v -U postgres -f /Users/the_user/folder_name/insrt_stmts_file_name.sql

Reference for the pg_dump / psql flags: -h = host, -d = db name, -v = verbose (it outputs progress as it runs), -U = db user name, -f = file path.

Comments

2

I just knocked up a quick procedure to do this. It only works for a single row, so I create a temporary view that just selects the row I want, and then replace the pg_temp.temp_view with the actual table that I want to insert into.

CREATE OR REPLACE FUNCTION dv_util.gen_insert_statement(IN p_schema text, IN p_table text)
  RETURNS text AS
$BODY$
DECLARE
    selquery text; 
    valquery text; 
    selvalue text; 
    colvalue text; 
    colrec record;
BEGIN

    selquery := 'INSERT INTO ' ||  quote_ident(p_schema) || '.' || quote_ident(p_table);

    selquery := selquery || '(';

    valquery := ' VALUES (';
    FOR colrec IN SELECT table_schema, table_name, column_name, data_type
                  FROM information_schema.columns 
                  WHERE table_name = p_table and table_schema = p_schema 
                  ORDER BY ordinal_position 
    LOOP
      selquery := selquery || quote_ident(colrec.column_name) || ',';

      selvalue := 
        'SELECT CASE WHEN ' || quote_ident(colrec.column_name) || ' IS NULL' || 
                   ' THEN ''NULL''' || 
                   ' ELSE '''' || quote_literal('|| quote_ident(colrec.column_name) || ')::text || ''''' || 
                   ' END' || 
        ' FROM '||quote_ident(p_schema)||'.'||quote_ident(p_table);
      EXECUTE selvalue INTO colvalue;
      valquery := valquery || colvalue || ',';
    END LOOP;
    -- Replace the last , with a )
    selquery := substring(selquery,1,length(selquery)-1) || ')';
    valquery := substring(valquery,1,length(valquery)-1) || ')';

    selquery := selquery || valquery;

RETURN selquery;
END
$BODY$
  LANGUAGE plpgsql VOLATILE;

Invoked thus:

SELECT distinct dv_util.gen_insert_statement('pg_temp_' || sess_id::text,'my_data') 
from pg_stat_activity 
where procpid = pg_backend_pid()

I haven't tested this against injection attacks; please let me know if the quote_literal call isn't sufficient for that.

Also it only works for columns that can be simply cast to ::text and back again.

Also, this is for Greenplum, but I can't think of a reason why it wouldn't work on Postgres; correct me if I'm wrong.

Comments

2

You can create an insert script using two methods:

  1. Using a select query.
  2. From the GUI, by backing up the table.

In both cases you can apply the where condition directly and create the insert script for particular rows only.

Method 1:

SELECT 'INSERT INTO nyummy.cimory (id, name, city) VALUES (''' || id || ''', ''' || name || ''', ''' || city || ''' );' FROM nyummy.cimory where your_condition ;

The above example only has character varying for all columns. If you want a numeric value, leave out the quoting; I'm adding that here.

For example, suppose the table has another column named "pin", which is an integer type in that scenario:

SELECT 'INSERT INTO nyummy.cimory (pin) VALUES ( ' || pin || ' );'

FROM nyummy.cimory where your_condition ;

You can download the full select result as CSV using the download option available in your PostgreSQL client. Sometimes, when copying and pasting the script, it contains double quotes; just replace the double quotes with nothing using a text editor.
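The concatenation approach above can also be sketched outside SQL. Here is a hedged Python version that quotes text values but leaves numbers bare, using the question's columns plus the hypothetical integer column "pin":

```python
# Render one value for an INSERT: numbers unquoted, NULL bare, text
# quoted with embedded single quotes doubled.
def sql_value(v):
    if v is None:
        return "NULL"
    if isinstance(v, (int, float)):
        return str(v)
    return "'" + str(v).replace("'", "''") + "'"

row = {"id": 1, "name": "Akira", "city": "tokyo", "pin": 100}
stmt = ("INSERT INTO nyummy.cimory (" + ", ".join(row) + ") VALUES ("
        + ", ".join(sql_value(v) for v in row.values()) + ");")
print(stmt)
# INSERT INTO nyummy.cimory (id, name, city, pin) VALUES (1, 'Akira', 'tokyo', 100);
```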

Method 2:

Right-click the table in pgAdmin, then click Backup. In the backup popup, choose the location and file name; in the data/objects tab, toggle "Only data" and unselect the other options; and toggle the "Use INSERT commands" option. This generates an individual insert script for each row in the table.

Comments

0

Just to add a simple but manual method:

  1. Using pgAdmin 4, after querying the data, download the data as CSV.

  2. Open the CSV in any notepad and copy and paste the data into an online CSV-to-SQL converter, for example: https://www.convertcsv.com/csv-to-sql.htm. You can set the name of the target table; the output is SQL insert scripts.

  3. Copy the INSERT scripts back into a pgAdmin 4 query window on the target db.

Comments

0

I updated it a bit as well, to export multiple rows.

    CREATE OR REPLACE FUNCTION insert_select(IN p_schema text, IN p_table text, IN p_where text)
    RETURNS setof text AS
    $BODY$
    DECLARE
        tablefilter text;
        rowrec record;
        dumpquery text;
        dumpquery_0 text;
        dumpquery_1 text;
        updateclause text;
        selquery text;
        selvalue text;
        valrec record;
        colrec record;
        updaterec record;
        total_rows INT;
        row_number INT := 0;
        row_char_end text;
        primary_key text;
    BEGIN
        -- ------ --
        -- GLOBAL --
        --   build base INSERT
        --   build SELECT array[ ... ]

        tablefilter := format('SELECT * FROM %I.%I %s', p_schema, p_table, p_where);  -- %s: p_where is a raw clause, not an identifier

        FOR rowrec IN EXECUTE tablefilter
        LOOP
            dumpquery_0 := 'INSERT INTO ' ||  quote_ident(p_schema) || '.' || quote_ident(p_table);
            selquery    := 'SELECT array[';

            <<label0>>
            FOR colrec IN SELECT table_schema, table_name, column_name, data_type
                            FROM information_schema.columns
                            WHERE table_name = p_table and table_schema = p_schema
                            ORDER BY ordinal_position
            LOOP
                -- raise notice 'tipo%', colrec.data_type;
                IF colrec.data_type = 'text' OR colrec.data_type = 'varchar' OR colrec.data_type = 'character varying' THEN
                    selquery  := selquery    || 'quote_literal(' || colrec.column_name || ')';

                ELSIF colrec.data_type = 'date' THEN
                    selquery  := selquery    || 'quote_literal(to_char(' || colrec.column_name || ', ''YYYY-MM-DD''))';

                ELSIF starts_with(colrec.data_type, 'timestamp') THEN
                    selquery  := selquery    || 'quote_literal(to_char(' || colrec.column_name || ', ''YYYY-MM-DD HH24:MI:SS.MS''))';            
                ELSE 
                    selquery  := selquery    || 'CAST(' || colrec.column_name || ' AS TEXT)';
                END IF;
                selquery  := selquery || ',';

            END LOOP label0;

            dumpquery_0 :=  dumpquery_0 || ' VALUES (';
            selquery    := substring(selquery, 1 , length(selquery)-1)    || '] AS MYARRAY';
            selquery    := selquery    || ' FROM ' ||quote_ident(p_schema)||'.'||quote_ident(p_table);
            selquery    := selquery    || ' WHERE id = '|| rowrec.id;

            
            -- GLOBAL --
            -- ------ --
            --Get total rows of all primary keys--

            SELECT count(*)
            INTO total_rows
            FROM information_schema.columns
            WHERE table_name = quote_ident(p_table)
            AND table_schema = quote_ident(p_schema)
            AND column_name NOT IN (
                SELECT a.attname 
                FROM pg_index i
                JOIN pg_attribute a ON a.attrelid = i.indrelid 
                                    AND a.attnum = ANY(i.indkey)
                WHERE i.indrelid = quote_ident(p_table)::regclass
                AND i.indisprimary
            );

            --GET PRIMARY KEY COLUMNS
            SELECT c.conname 
            INTO primary_key
            FROM pg_constraint c 
            WHERE c.conrelid::regclass::text = quote_ident(p_table) and c.contype = 'p';
        
            updateclause := ' ON CONFLICT ON CONSTRAINT ' || primary_key || ' DO UPDATE SET ';  -- no parentheses around the constraint name
            -- --------- --
            -- Get Unique constraints --
            <<label3>>
            FOR updaterec IN SELECT column_name
                            FROM information_schema.columns
                            WHERE table_name = quote_ident(p_table)
                            AND table_schema = quote_ident(p_schema)
                            AND column_name NOT IN (
                                SELECT a.attname 
                                FROM pg_index i
                                JOIN pg_attribute a ON a.attrelid = i.indrelid 
                                                    AND a.attnum = ANY(i.indkey)
                                WHERE i.indrelid = quote_ident(p_table)::regclass
                                AND i.indisprimary
                            )
            LOOP
                updateclause := updateclause || updaterec.column_name || ' = EXCLUDED.' || updaterec.column_name;

                IF row_number = total_rows THEN
                    updateclause := updateclause || ';';
                ELSE
                    updateclause := updateclause || ', ';
                END IF;
                row_number := row_number + 1;

            END LOOP label3;


            row_number := 0;
            EXECUTE 'SELECT COUNT(*) FROM ' ||quote_ident(p_schema)||'.'||quote_ident(p_table) || ' WHERE '||p_where
            INTO total_rows;
            -- ----------- --
            -- SELECT LOOP --
            --   execute SELECT built and loop on each row
            <<label1>>
            FOR valrec IN  EXECUTE  selquery
            LOOP
                dumpquery_1 := '';
                IF not found THEN
                    EXIT ;
                END IF;
                row_number := row_number + 1;

                IF row_number = total_rows THEN
                row_char_end := ';';
                ELSE
                row_char_end := ',';
                END IF;

                -- ----------- --
                -- LOOP ARRAY (EACH FIELDS) --
                <<label2>>
                FOREACH selvalue in ARRAY valrec.MYARRAY
                LOOP
                    IF selvalue IS NULL
                    THEN selvalue := 'NULL';
                    END IF;
                    dumpquery_1 := dumpquery_1 || selvalue || ',';
                END LOOP label2;
            
                dumpquery_1 := substring(dumpquery_1 ,1,length(dumpquery_1)-1) || ')';
                -- LOOP ARRAY (EACH FIELD) --
                -- ----------- --

                -- debug: RETURN NEXT dumpquery_0 || dumpquery_1 || ' --' || selquery;
                -- debug: RETURN NEXT selquery;
                dumpquery := dumpquery_0 || dumpquery_1 || updateclause;
                dumpquery_0 := '(';
                RETURN NEXT dumpquery;
            END LOOP label1 ;
            -- SELECT LOOP --
            -- ----------- --
        END LOOP;

    RETURN ;
    END
    $BODY$
    LANGUAGE plpgsql VOLATILE;

I know it is crap; I will refactor it soon.

1 Comment

Your answer could be improved with additional supporting information. Please edit to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers in the help center.
0

Improving Vi Shein's answer so it does not generate an INSERT INTO (c1, c2, ...) prefix for each row:

CREATE OR REPLACE FUNCTION dump(IN p_schema text, IN p_table text, IN p_where text)
   RETURNS setof text AS
 $BODY$
 DECLARE
     dumpquery text;
     dumpquery_0 text;
     dumpquery_1 text;
     selquery text;
     selvalue text;
     valrec record;
     colrec record;
     total_rows INT;
     row_number INT := 0;
     row_char_end text;
 BEGIN

     -- ------ --
     -- GLOBAL --
     --   build base INSERT
     --   build SELECT array[ ... ]
     dumpquery_0 := 'INSERT INTO ' ||  quote_ident(p_schema) || '.' || quote_ident(p_table) || '(';
     selquery    := 'SELECT array[';

     <<label0>>
     FOR colrec IN SELECT table_schema, table_name, column_name, data_type, is_generated
                   FROM information_schema.columns
                   WHERE table_name = p_table and table_schema = p_schema
                   ORDER BY ordinal_position
     LOOP
         IF colrec.is_generated = 'ALWAYS' THEN
             CONTINUE; -- skip generated columns
         END IF;
         dumpquery_0 := dumpquery_0 || quote_ident(colrec.column_name) || ',';
         raise notice 'tipo%', colrec.data_type;
         IF colrec.data_type = 'text' OR colrec.data_type = 'varchar' OR colrec.data_type = 'character varying' THEN
            selquery  := selquery    || 'quote_literal(regexp_replace(' || colrec.column_name || ', E''[\\n\\r]+'', '' '', ''g''))';

         ELSIF colrec.data_type = 'date' THEN
            selquery  := selquery    || 'quote_literal(to_char(' || colrec.column_name || ', ''YYYY-MM-DD''))';

        ELSIF starts_with(colrec.data_type, 'timestamp') THEN
            selquery  := selquery    || 'quote_literal(to_char(' || colrec.column_name || ', ''YYYY-MM-DD HH24:MI:SS''))';

         ELSE
            selquery  := selquery    || 'CAST(' || colrec.column_name || ' AS TEXT)';

         END IF;
         selquery  := selquery || ',';

     END LOOP label0;

     dumpquery_0 := substring(dumpquery_0 ,1,length(dumpquery_0)-1) || ')';
     dumpquery_0 := dumpquery_0 || ' VALUES (';
     selquery    := substring(selquery    ,1,length(selquery)-1)    || '] AS MYARRAY';
     selquery    := selquery    || ' FROM ' ||quote_ident(p_schema)||'.'||quote_ident(p_table);
     selquery    := selquery    || ' WHERE '||p_where;
     -- GLOBAL --
     -- ------ --

     EXECUTE 'SELECT COUNT(*) FROM ' ||quote_ident(p_schema)||'.'||quote_ident(p_table) || ' WHERE '||p_where
     INTO total_rows;

     -- ----------- --
     -- SELECT LOOP --
     --   execute SELECT built and loop on each row
     <<label1>>
     FOR valrec IN  EXECUTE  selquery
     LOOP
         dumpquery_1 := '';
         IF not found THEN
             EXIT ;
         END IF;
         row_number := row_number + 1;

         IF row_number = total_rows THEN
            row_char_end := ';';
         ELSE
            row_char_end := ',';
         END IF;

         -- ----------- --
         -- LOOP ARRAY (EACH FIELDS) --
         <<label2>>
         FOREACH selvalue in ARRAY valrec.MYARRAY
         LOOP
             IF selvalue IS NULL
             THEN selvalue := 'NULL';
             END IF;
             dumpquery_1 := dumpquery_1 || selvalue || ',';
         END LOOP label2;
         dumpquery_1 := substring(dumpquery_1 ,1,length(dumpquery_1)-1) || ')' || row_char_end;
         -- LOOP ARRAY (EACH FIELD) --
         -- ----------- --

         -- debug: RETURN NEXT dumpquery_0 || dumpquery_1 || ' --' || selquery;
         -- debug: RETURN NEXT selquery;
         dumpquery := dumpquery_0 || dumpquery_1;
         dumpquery_0 := '(';

         RETURN NEXT dumpquery;

     END LOOP label1 ;
     -- SELECT LOOP --
     -- ----------- --

 RETURN ;
 END
 $BODY$
   LANGUAGE plpgsql VOLATILE;

Comments

-4

Have you tried, in pgAdmin, executing the query with the "Execute query, write result to file" option?

That only exports the data. Otherwise, try something like:

pg_dump -t view_name DB_name > db.sql

The -t option is used to dump only tables (or views, or sequences) matching the given name; see the reference.

1 Comment

This will only export a create view statement
