PostgreSQL versions older than 9.6 execute each query in a single backend, which is a process with a single thread. A query therefore cannot use more than one CPU. Such versions are also limited in the I/O concurrency they can achieve within a single query, really only issuing concurrent I/O themselves for bitmap heap scans and otherwise relying on the OS and disk subsystem for concurrency.
PostgreSQL 9.6 and newer support parallel query. At the time of writing (the PostgreSQL 12 release) parallel query is only used for read-only queries. Parallel query support enables considerably more parallelism for some kinds of query.
PostgreSQL is good at concurrent workloads made up of many smaller queries, and it's easy to saturate your system that way. It just isn't as good at bringing all of a system's resources to bear on one or two really big queries, though this is improving as parallel query support is added for more plan types.
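If you're not sure whether your query qualifies, you can ask the planner directly. A minimal check, assuming a placeholder table named big_table (max_parallel_workers_per_gather is a real setting in 9.6+):

```sql
-- Allow up to 8 parallel workers per Gather node for this session.
SET max_parallel_workers_per_gather = 8;

-- Look for "Gather" and "Parallel Seq Scan" nodes in the plan output.
-- If they're absent, the planner either can't parallelize this query
-- or didn't consider it worthwhile.
EXPLAIN SELECT count(*) FROM big_table;
```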
If you're on an older PostgreSQL without parallel query, or your query doesn't benefit from parallel query support yet:
What you can do is split the job up into chunks and hand them out to workers. You've alluded to this with:
Can i modify query to get postgre to calculate GetStatistic in paralel
for different rows simultaneously, using all avaliable CPUs?
There are a variety of tools, like dblink, PL/Proxy, PgBouncer and PgPool-II, that are designed to help with this kind of job. Alternatively, you can just do it yourself, starting (say) 8 workers that each connect to the database and run UPDATE ... WHERE id BETWEEN ? AND ? statements with non-overlapping ID ranges, as sketched below. A more sophisticated option is to have a queue controller hand out ranges of, say, 1000 IDs to workers, which UPDATE that range and then ask for a new one.
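A minimal sketch of what each worker might run, assuming a placeholder table items(id int primary key, stats numeric); the table and column names are invented, GetStatistic is your function from the question:

```sql
-- Each worker opens its own connection and repeatedly runs this with
-- non-overlapping bounds handed out by the controller.
PREPARE update_chunk(int, int) AS
  UPDATE items
  SET    stats = GetStatistic(id)
  WHERE  id BETWEEN $1 AND $2;

EXECUTE update_chunk(1, 1000);       -- worker 1's first chunk
-- EXECUTE update_chunk(1001, 2000);    worker 2, on its own connection
```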
Note that 64 CPUs doesn't mean that 64 concurrent workers is ideal; your disk I/O is a factor too when it comes to writes. You can reduce your I/O costs a bit by setting your UPDATE transactions to use a commit_delay and, if it's safe for your business requirements for this data, synchronous_commit = 'off'; the load from syncs should then drop significantly. Nonetheless, best throughput will likely be achieved well below 64 concurrent workers.
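Both settings can be applied per worker connection; this sketch shows them as session-level SETs (the values are illustrative and need tuning):

```sql
-- Run once per worker connection, before the UPDATE loop.

-- Option 1: keep synchronous commits but batch WAL flushes across
-- concurrent committers. commit_delay is in microseconds; changing it
-- requires elevated privileges.
SET commit_delay = 10000;
SET commit_siblings = 5;   -- only delay when other transactions are active

-- Option 2 (if losing the last few commits on a crash is acceptable):
-- skip the WAL flush at commit entirely. This can lose recently
-- committed transactions on a crash but cannot corrupt the database.
SET synchronous_commit = 'off';
```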
It's highly likely that your GetStatistic function can be made a lot faster by converting it to an inlineable SQL function or view, rather than the loop-heavy procedural PL/pgSQL function it presumably is at the moment. It would help if you showed this function.
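As a sketch of the kind of rewrite meant here (GetStatistic isn't shown in the question, so the schema and logic below are invented): suppose GetStatistic(id) loops over a detail table measurements(item_id, value) summing values row by row. Calling that once per row of items hides the work from the planner; expressing the same work as one set-based statement lets the planner optimize the whole thing at once:

```sql
-- Replaces per-row calls to a hypothetical loop-based GetStatistic(id)
-- with a single aggregate-and-join the planner can plan as one unit.
UPDATE items r
SET    stats = s.total
FROM  (SELECT item_id, sum(value) AS total
       FROM   measurements
       GROUP  BY item_id) s
WHERE  s.item_id = r.id;
```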