
My table is:

price: numeric, time: timestamp, type: integer

I want to group by type and, for each group, find the max price and the earliest (by time) price.

From a computational perspective, it's a simple linear / reduce-like operation, but how can it be done in Postgres? Is there some existing function for that? Do I have to create my own aggregate? Should I encode the two fields into one, like $time-$price, and just find the minimum of it?
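
To make the $time-$price encoding idea concrete, this is roughly what it could look like (just a sketch, assuming the table is named t): render the timestamp in a fixed-width, lexicographically sortable form, glue the price onto it, take min() of the text, and split the price back out.

-- the fixed-width timestamp text sorts the same as the timestamp itself,
-- so min() picks the row with the earliest time; split_part() recovers its price
select type,
       max(price) as max_price,
       split_part(
         min(to_char(time, 'YYYY-MM-DD HH24:MI:SS.US') || '|' || price),
         '|', 2
       )::numeric as earliest_price
from t
group by type;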

1 Answer

Hmmm. Postgres doesn't have a first() aggregation function, but you can use arrays instead:

select type,
       max(price) as max_price,
       -- the parentheses around array_agg(...) are required before subscripting
       (array_agg(price order by time asc))[1] as earliest_price
from t
group by type;

EDIT:

There are other approaches, such as:

-- distinct on (type) keeps one row per type, the first in the order by,
-- i.e. the row with the earliest time; the window max() supplies the group maximum
select distinct on (type)
       type,
       max(price) over (partition by type) as max_price,
       price as earliest_price
from t
order by type, time asc;

3 Comments

But then the first() aggregate is quite easy to implement: wiki.postgresql.org/wiki/First/last_(aggregate)
But this requires O(n) memory, right? If a type contains a lot of rows, that could be a lot of memory, whereas computationally it can be done in O(1) memory without storing all the rows. Is that possible in Postgres? (A sketch of such an aggregate follows after these comments.)
@piotrek . . . Google BigQuery is much nicer. It supports a LIMIT clause in array_agg() to avoid that problem.
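
For completeness, here is a rough sketch of the kind of hand-rolled aggregate the comments are getting at, but with O(1) state per group: instead of the generic first()/last() from the linked wiki page, the transition function keeps only the earliest time seen so far together with its price. All of the names (earliest_state, earliest_step, earliest_final, earliest_price) are made up for illustration.

-- state: the earliest time seen so far in the group, and the price that came with it
create type earliest_state as (t timestamp, p numeric);

create function earliest_step(s earliest_state, t timestamp, p numeric)
returns earliest_state
language sql immutable as $$
  select case
           when s.t is null or t < s.t then row(t, p)::earliest_state
           else s
         end
$$;

create function earliest_final(s earliest_state)
returns numeric
language sql immutable as $$
  select s.p
$$;

create aggregate earliest_price(timestamp, numeric) (
  sfunc     = earliest_step,
  stype     = earliest_state,
  finalfunc = earliest_final
);

-- usage: a single pass, no per-group array is accumulated
select type,
       max(price)                  as max_price,
       earliest_price(time, price) as earliest_price
from t
group by type;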
