
Postgres version 9.1.9

The following query produces a different plan when run in two different databases.

explain (analyze,buffers) SELECT group_.groupid     AS groupId,
        group_.name        AS groupName,
        group_.type_       AS groupType,
        group_.friendlyurl AS groupFriendlyURL
 FROM   group_
        inner join groups_orgs
                ON ( groups_orgs.groupid = group_.groupid )
        inner join users_orgs
                ON ( users_orgs.organizationid = groups_orgs.organizationid )
 WHERE  ( group_.livegroupid = 0 )
        AND ( users_orgs.userid = '27091470' )
        AND ( group_.companyid = '20002' )
        AND ( group_.classnameid = 10001
               OR group_.classnameid = 10003 )
        AND ( group_.name != 'Control Panel' )
        AND ( group_.type_ != 4 )
;

Plan from the production database:

                                                                                      QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Merge Join  (cost=6.36..16.60 rows=1 width=37) (actual time=0.133..95.323 rows=3 loops=1)
   Merge Cond: (group_.groupid = groups_orgs.groupid)
   Buffers: shared hit=30829
   ->  Index Scan using group__pkey on group_  (cost=0.00..87997.62 rows=17244 width=37) (actual time=0.030..85.166 rows=13906 loops=1)
         Filter: (((name)::text <> 'Control Panel'::text) AND (type_ <> 4) AND (livegroupid = 0) AND (companyid = 20002::bigint) AND ((classnameid = 10001) OR (classnameid = 10003)))
         Buffers: shared hit=30824
   ->  Sort  (cost=6.36..6.37 rows=3 width=8) (actual time=0.076..0.079 rows=3 loops=1)
         Sort Key: groups_orgs.groupid
         Sort Method: quicksort  Memory: 25kB
         Buffers: shared hit=5
         ->  Merge Join  (cost=1.05..6.34 rows=3 width=8) (actual time=0.045..0.054 rows=3 loops=1)
               Merge Cond: (users_orgs.organizationid = groups_orgs.organizationid)
               Buffers: shared hit=5
               ->  Index Scan using users_orgs_pkey on users_orgs  (cost=0.00..10.47 rows=2 width=8) (actual time=0.012..0.014 rows=2 loops=1)
                     Index Cond: (userid = 27091470::bigint)
                     Buffers: shared hit=4
               ->  Sort  (cost=1.05..1.06 rows=3 width=16) (actual time=0.028..0.030 rows=3 loops=1)
                     Sort Key: groups_orgs.organizationid
                     Sort Method: quicksort  Memory: 25kB
                     Buffers: shared hit=1
                     ->  Seq Scan on groups_orgs  (cost=0.00..1.03 rows=3 width=16) (actual time=0.003..0.005 rows=3 loops=1)
                           Buffers: shared hit=1

Plan from a database created by exporting/importing data from production:

                                                                                      QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.00..18.19 rows=1 width=36) (actual time=0.053..0.104 rows=3 loops=1)
   Buffers: shared hit=18
   ->  Nested Loop  (cost=0.00..9.77 rows=1 width=8) (actual time=0.036..0.065 rows=3 loops=1)
         Join Filter: (groups_orgs.organizationid = users_orgs.organizationid)
         Buffers: shared hit=6
         ->  Seq Scan on groups_orgs  (cost=0.00..1.03 rows=3 width=16) (actual time=0.007..0.010 rows=3 loops=1)
               Buffers: shared hit=1
         ->  Materialize  (cost=0.00..8.66 rows=2 width=8) (actual time=0.008..0.012 rows=3 loops=3)
               Buffers: shared hit=5
               ->  Index Scan using ix_fb646ca6 on users_orgs  (cost=0.00..8.65 rows=2 width=8) (actual time=0.016..0.021 rows=3 loops=1)
                     Index Cond: (userid = 27091470::bigint)
                     Buffers: shared hit=5
   ->  Index Scan using group__pkey on group_  (cost=0.00..8.41 rows=1 width=36) (actual time=0.008..0.010 rows=1 loops=3)
         Index Cond: (groupid = groups_orgs.groupid)
         Filter: (((name)::text <> 'Control Panel'::text) AND (type_ <> 4) AND (livegroupid = 0) AND (companyid = 20002::bigint) AND ((classnameid = 10001) OR (classnameid = 10003)))
         Buffers: shared hit=12

The production query takes around 100 ms, while on the other DB it takes 0.1 ms. The difference seems to be the slow index scan on the group_ table (Index Scan using group__pkey on group_). Can anyone explain the difference in execution time? Tables in production are regularly vacuumed and analyzed. The production DB is busier than the other DB.
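
To narrow this down, it may help to compare planner configuration and table/index bloat between the two servers; the 30 000+ buffer hits for only ~14 000 rows on group__pkey in the production plan hint at bloat. Below is a rough diagnostic sketch (the setting list is just the usual suspects, not exhaustive) that can be run on both databases and diffed:

-- Planner-relevant settings that commonly differ between installations.
SELECT name, setting
FROM   pg_settings
WHERE  name IN ('work_mem', 'shared_buffers', 'random_page_cost',
                'effective_cache_size', 'cpu_index_tuple_cost');

-- Dead-tuple counts and vacuum history for the tables in the query.
SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM   pg_stat_user_tables
WHERE  relname IN ('group_', 'groups_orgs', 'users_orgs');

-- Size of the index that is scanned slowly in the production plan.
SELECT pg_size_pretty(pg_relation_size('group__pkey'::regclass));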

Thanks, Sameer

  • The tuning could be different: {work_mem, random_page_cost, shared_buffers, cpu_index_tuple_cost, ...}. Please look them up and add them to the question. There could also be a concurrency difference (competition for buffers, disk pages, or memory). Commented Mar 29, 2014 at 12:47
  • Identical configs in both DBs. This question is related to stackoverflow.com/questions/21546608/…. The only puzzle is why the same query has a different plan and executes faster in the other DB. Commented Mar 29, 2014 at 22:31
  • Could be (lack of) clustering, or (too much) concurrency. Anything goes. Commented Mar 29, 2014 at 23:18
  • Surprisingly, the solution was to recreate statistics on the users_orgs table (see the sketch after these comments). Stats were auto-calculated just two weeks back, so the root cause is still a mystery. Commented Apr 25, 2014 at 18:42
  • No, that's not "surprising". If you change the data you need to update statistics. The fact that you mention "two weeks back" seems to indicate you turned off autovacuum (or you didn't update the stats after the import). Don't turn that off; you will create more problems than it could possibly solve. Commented Aug 1, 2014 at 12:17
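
Following up on the resolution mentioned in the comments above, the fix reportedly boiled down to refreshing planner statistics on users_orgs. A minimal sketch of that, plus a query to confirm when statistics were last gathered (both standard PostgreSQL, nothing specific to this schema beyond the table name):

-- Refresh planner statistics for the table named in the comment above.
ANALYZE VERBOSE users_orgs;

-- Confirm when statistics were last gathered, manually or by autoanalyze.
SELECT relname, last_analyze, last_autoanalyze
FROM   pg_stat_user_tables
WHERE  relname = 'users_orgs';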

1 Answer


I was facing the same issue: in the "backup" database a query was blazing fast, while the same query run in the production database was horribly slow. It turned out I only needed to perform Routine Database Maintenance Tasks more often to solve the problem:

The PostgreSQL query planner relies on statistical information about the contents of tables in order to generate good plans for queries. These statistics are gathered by the ANALYZE command, which can be invoked by itself or as an optional step in VACUUM. It is important to have reasonably accurate statistics, otherwise poor choices of plans may degrade database performance.

So if you're experiencing a similar issue try running the following command on your "slow" database:

$ vacuumdb --host localhost --port 5432 --username "MyUser" -d "MyDatabase" --analyze --verbose
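
If re-analyzing the whole database is too heavy, a narrower variant (my own assumption, limited to the three tables from the question) is to vacuum and analyze them individually from psql; the parenthesized VACUUM option syntax is available from PostgreSQL 9.0 on:

-- Vacuum and analyze only the tables involved in the slow plan.
VACUUM (ANALYZE, VERBOSE) group_;
VACUUM (ANALYZE, VERBOSE) groups_orgs;
VACUUM (ANALYZE, VERBOSE) users_orgs;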

1 Comment

Thanks a ton! Our autovacuum task apparently wasn't enough to improve the extremely slow query plans.
