
I have a JSON string of combinations (shown below) and I need to:

  1. Sort the combinations by the aggregate price of each one.
  2. Return only the 40% of them with the highest aggregate price.

Here is my sample data:
$combinations = '[
    [                          //1st combination
        {"id":1,"price":11900},
        {"id":2,"price":499},
        {"id":3,"price":2099}
    ],
    [                          //2nd combination
        {"id":1,"price":11900},
        {"id":2,"price":499},
        {"id":4,"price":999}
    ],
    [                          //3rd combination
        {"id":1,"price":11900},
        {"id":2,"price":499},
        {"id":5,"price":899}
    ],
    [                          //4th combination
        {"id":1,"price":11900},
        {"id":2,"price":499},
        {"id":6,"price":2999}
    ]
]';
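
For the sample data above, the aggregate prices work out to 14498, 13398, 13298 and 15398, so taking 40% of the 4 combinations (1.6, rounded up to 2) I would expect to get back the 4th and 1st combinations. (The // annotations are only there for readability; JSON itself has no comments, so they would have to be stripped before decoding.)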
2 Comments

usort is your friend. Other than that, you'll have to show some code of what you've already tried, e.g. calculating the averages, etc.
Thanks @medilies for formatting the JSON. OP, for future reference, please try to provide your sample data in a format that's _immediately usable_. If we have to tinker and format the samples, it considerably raises the threshold of bothering to answer...

2 Answers

<?php

$json = json_decode('[
    [{"id":1,"price":11900},{"id":2,"price":499},{"id":3,"price":2099}],
    [{"id":1,"price":11900},{"id":2,"price":499},{"id":4,"price":999}],
    [{"id":1,"price":11900},{"id":2,"price":499},{"id":5,"price":899}],
    [{"id":1,"price":11900},{"id":2,"price":499},{"id":6,"price":2999}]
    ]');

// var_dump($json);

// ($a, $b) for ASC sorting
// ($b, $a) for DESC sorting
usort($json, function ($b, $a) {

    $a_prices = 0;
    foreach($a as $aa)
        $a_prices += $aa->price;

    $b_prices = 0;
    foreach($b as $bb)
        $b_prices += $bb->price;

    return $a_prices - $b_prices;
});

// Find where 40% stops
// It is up to you to choose between round(), ceil() or floor()
$breakpoint = round(sizeof($json) * 40 / 100);

$sorted_chunk = array_slice($json, 0, $breakpoint);
var_dump($sorted_chunk);

Comments

That will work. You could also simply return array_sum(array_column($a, 'price')) <=> array_sum(array_column($b, 'price')) for the comparison function (a short sketch of this follows the comments below). If it's a large dataset, good practice would be to compute the sums up front and use the "cache" in the comparison, rather than computing it all over again for each comparison. Even if our computations here are reasonably light-weight.
This is a learning moment for me, thank you so much @MarkusAO. BTW it is my first time using usort.
@medilies one might also use array_column to cache up the sums ahead of the sort: $price_sums = array_map(fn($row) => array_sum(array_column($row, 'price')), $json); would give you the sums for all the rows as a comparison basis.
Alright, I've made my comments above a notch clearer with a complementary answer. @Pau I suggest you ask a new question starting from the point where your data is sorted (please include the ready sample data).
@Pau you already had a bunch of code written for you. Please make some effort yourself, too.
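
As a minimal sketch of the shorter comparison suggested in the first comment above (assuming the same $json variable as in the accepted answer, and keeping the descending order):

// Variant of the accepted answer's usort, as suggested in the comments:
// sum each row's prices via array_column()/array_sum() and compare the
// totals with the spaceship operator. Listing $b before $a sorts DESC.
usort($json, function ($a, $b) {
    return array_sum(array_column($b, 'price'))
        <=> array_sum(array_column($a, 'price'));
});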

While the answer by @medilies is simple and correct, here's a more economical way to sort the data. If we're working with a large dataset, a straight-up usort can turn out rather expensive — as the comparison values have to be re-calculated for each $a vs. $b comparison. We can instead calculate the sums up-front and use the "cached" values for our comparison.

// Here's the data; decoded as an array:

$json = json_decode('[
    [{"id":1,"price":11900},{"id":2,"price":499},{"id":3,"price":2099}],
    [{"id":1,"price":11900},{"id":2,"price":499},{"id":4,"price":999}],
    [{"id":1,"price":11900},{"id":2,"price":499},{"id":5,"price":899}],
    [{"id":1,"price":11900},{"id":2,"price":499},{"id":6,"price":2999}]
    ]', true);
    
// Calculate the sums for all prices per row up-front.
// Map array into sums: Get the sum for each row's "price" columns

$sums = array_map(fn($v) => array_sum(array_column($v, 'price')), $json);

// Use $sums in our key-based sorter for the comparison values:

uksort($json, function($b, $a) use ($sums) {
    return $sums[$a] <=> $sums[$b];
});

// See the sums, get the sorted data:

var_dump($sums, $json);

Here we use uksort instead of usort, since we only need to know the keys of the array members being sorted. Our "comparison cache" or the $sums array, with keys matching the target array, is passed with use() into the sorting function. Inside the function, we simply compare matching values in $sums[$a] and $sums[$b], not repeating the sum calculations. Demo: https://3v4l.org/sNluJ#v8.1.3

In this case it'd take a large dataset to make a significant difference. If there were more expensive iterations (say, multiple "heavy" function calls) required to arrive at the value to compare, the "up-front and only once" evaluation would save a lot of unnecessary computing cycles.
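
If, for instance, each row's comparison value came out of some heavy computation rather than a simple sum, the same pattern applies (a rough sketch; some_expensive_scoring() is a made-up placeholder, not a real function):

// Run the expensive computation once per row, then sort on the cached results.
$scores = array_map(fn($row) => some_expensive_scoring($row), $json);

uksort($json, fn($b, $a) => $scores[$a] <=> $scores[$b]);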

For returning the final top-40% slice the OP wants, refer to the accepted answer; a minimal sketch of that step, applied to the data sorted above, follows.
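
Assuming the same round()-based breakpoint as the accepted answer (ceil() or floor() would do just as well), that last step would look roughly like this:

// Keep the 40% of combinations with the highest sums (they sort first).
// With the sample data, count($json) * 40 / 100 = 1.6, which round() turns into 2.
$breakpoint = (int) round(count($json) * 40 / 100);
$top = array_slice($json, 0, $breakpoint);

var_dump($top); // the 4th and 1st combinations from the sample data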

