1.1. Quickstart Guide #

If your site administrator has not set things up in the default way, you might have some more work to do. For example, if the database server machine is a remote machine, you will need to set the PGHOST environment variable to the name of the database server machine. The environment variable PGPORT might also have to be set. The bottom line is this: if you try to start an application program and it complains that it cannot connect to the database, you should consult your site administrator or, if that is you, the documentation to make sure that your environment is properly set up. If you did not understand the preceding paragraph then read the next section.
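For example, assuming a hypothetical database server named db.example.com that listens on the non-default port 5433, you could point your client applications at it like this:

export PGHOST=db.example.com   # hypothetical remote database server
export PGPORT=5433             # non-default port, adjust as needed
psql -l                        # the client should now reach that server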

Postgres Pro Shardman is composed of several software components:

  • PostgreSQL 17 DBMS with a set of patches.

  • Postgres Pro Shardman extension.

  • Management tools and services, including a built-in shard manager that provides high availability.

  • BiHA extension.

Postgres Pro Shardman stores its configuration in an etcd cluster. You can either use an existing etcd cluster or deploy a new one, for example, a simple one-node cluster.

The shardmand daemon monitors the cluster configuration and manages the BiHA clusters that guarantee high availability of all shards. The common Postgres Pro Shardman (shardmand) configuration is stored in the etcd cluster.

Currently, Postgres Pro Shardman packages are available for the following operating systems:

  • Ubuntu 20.04/22.04

  • Debian 10/11/12

  • Red Hat Enterprise Linux 7/8/9

  • Red OS 7.3/8

  • Alt 9/10/10SP

  • Astra Linux 1.7/1.8/s390

1.1.1. Choosing the Packages to Install #

The table below lists all the available Postgres Pro Shardman packages.

Table 1.1. Postgres Pro Shardman Packages

Package

Description

postgrespro-sdm-17

Top-level package that installs and configures Postgres Pro Shardman for server and client systems. Do not use this package for upgrades or migrations.

Important

Installing the postgrespro-sdm-17 package can delete existing installations of Postgres Pro Shardman and PostgreSQL-based products. Similarly, this Postgres Pro Shardman installation may be automatically removed if you later install another PostgreSQL-based product.

postgrespro-sdm-17-debuginfo

Top-level package that installs debug symbols for other packages.

postgrespro-sdm-17-client

Standard client applications, such as psql or pg_dump.

postgrespro-sdm-17-client-debuginfo

Debug package.

etcd-sdm

etcd server.

shardman-services-sdm-17

shardmand daemon to manage Postgres Pro Shardman services.

shardman-tools-sdm-17

Command-line tools, such as shardmanctl, for managing the Postgres Pro Shardman cluster and the shardmand daemon.

postgrespro-sdm-17-libs

Shared libraries required to deploy client applications, including libpq; runtime libraries for ECPG processor.

postgrespro-sdm-17-libs-debuginfo

Debug package.

postgrespro-sdm-17-server

Postgres Pro Shardman server and PL/pgSQL server-side programming language.

postgrespro-sdm-17-server-debuginfo

Debug package.

postgrespro-sdm-17-contrib

Additional extensions and programs deployable on database servers.

postgrespro-sdm-17-contrib-debuginfo

Debug package.

postgrespro-sdm-17-devel

Header files and libraries for developing client applications and server extensions.

On Debian-based systems, this package is called postgrespro-sdm-17-dev.

postgrespro-sdm-17-devel-debuginfo

Debug package for header files.

postgrespro-sdm-17-plperl

Server-side programming language based on Perl (see Chapter 45).

postgrespro-sdm-17-plperl-debuginfo

Debug package.

postgrespro-sdm-17-plpython3

Server-side programming language based on Python 3 (see Chapter 46).

postgrespro-sdm-17-plpython3-debuginfo

Debug package.

postgrespro-sdm-17-pltcl

Server-side programming language based on Tcl (see Chapter 44).

postgrespro-sdm-17-pltcl-debuginfo

Debug package.

postgrespro-sdm-17-docs

Documentation (English).

postgrespro-sdm-17-docs-ru

Documentation (Russian).

postgrespro-sdm-17-jit

This package provides support for Just-in-Time (JIT) compilation.

This package is available only for the x86_64 architecture, and only for the supported Debian and Ubuntu systems, Astra Linux 1.7/1.8, and the supported ALT systems.

To learn more about enabling and using JIT, see Chapter 29.

postgrespro-sdm-17-jit-debuginfo

Debug package.

pg-probackup-sdm-17

pg_probackup utility.

pg-probackup-sdm-17-debuginfo

Debug package.

pgpro-controldata

pgpro_controldata application to display control information of a PostgreSQL/Postgres Pro Shardman database cluster and compatibility information for a cluster and/or server.

pgpro-pwr-sdm-17

pgpro_pwr extension that enables you to generate workload reports, which help to discover the most resource-intensive activities in your database.


In addition, Postgres Pro Shardman includes third-party libraries and utilities required by the Postgres Pro Shardman server and the packages listed above. These packages are only provided for the distributions that do not include the required versions of these libraries.

Table 1.2. Third-party libraries and utilities for Postgres Pro Shardman

Library/utility

Description

liblz4

A library for fast lossless data compression.

perl-IO-Tty, perl-IPC-Run

Two Perl libraries used by the TAP test framework that is available in the postgrespro-sdm-17-devel package for extension developers and engineers. Provided for Red OS 7.3/8.


In addition, separate packages provide several external modules that have been pre-built for compatibility with Postgres Pro Shardman:

Table 1.3. Third-party Packages Built for Postgres Pro Shardman

Package

Description

odbc-postgrespro-sdm-17

An Open Database Connectivity (ODBC) driver for accessing database management systems (DBMS). The odbc-postgresql package can be used instead, but in some cases it may conflict with outdated vanilla PostgreSQL libraries that lack functionality available in newer libraries.

orafce-sdm-17

This package implements in Postgres Pro Shardman some functions of the Oracle database that are otherwise missing (or behave differently).

pgbouncer

The pgbouncer connection pooler for Postgres Pro Shardman.

pgvector-sdm-17

pgvector extension that provides vector similarity search for Postgres Pro Shardman.

pldebugger-sdm-17

A set of shared libraries that implement an API for debugging PL/pgSQL functions in Postgres Pro Shardman.

plv8-sdm-17

The PLV8 shared library that provides a Postgres Pro Shardman procedural language powered by the V8 JavaScript engine.

This package is only available for the supported Debian and Ubuntu systems, ALT 10/11, and Red OS 7.3.

zstd

Command-line utility for the libzstd library.


Additionally, Postgres Pro Shardman provides separate packages with debug information for some operating systems:

  • On Debian-based systems, see the postgrespro-sdm-17-dbg package.

  • On ALT Linux systems, all packages containing binary files have the corresponding -debuginfo packages.

1.1.2. Cluster Configuration #

The minimal configuration consists of one etcd node and two Postgres Pro Shardman nodes. Here, let's consider a wider configuration with a two-node etcd cluster and a two-node Postgres Pro Shardman cluster.

Let’s suppose that we have the following node names and IP addresses:

192.0.1.1   etcd1 - first etcd node
192.0.1.2   etcd2 - second etcd node
192.0.1.20  sdm01 - first Shardman node
192.0.1.21  sdm02 - second Shardman node
192.0.1.100 ntp   - local time sync server (optional)

Each node has 4 GB of RAM, a 20 GB HDD, 2 CPUs, and Ubuntu 22.04 installed.

1.1.3. Preparation #

1.1.3.1. Add Host Names to /etc/hosts #

This step must be performed on all nodes.

sudo /bin/sh -c 'cat << EOF >> /etc/hosts
192.0.1.1 etcd1
192.0.1.2 etcd2
192.0.1.20 sdm01
192.0.1.21 sdm02
EOF'
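You can then verify that the names resolve on every node, for example:

getent hosts etcd1 etcd2 sdm01 sdm02
ping -c 1 sdm02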

1.1.3.2. Time Synchronization #

This step must be performed on all nodes.

Deploy and start chrony daemon on all hosts.

sudo apt install -y chrony

By default, chrony gets the time from the servers available on the internet or from a local time server. You can check the available time servers as follows:

chronyc sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^? 192.0.1.100                   1   6     7     1    -98us[  -98us] +/-   11ms
^* time.cloudflare.com           3   6     7     1   +139us[ +163us] +/-   11ms
^+ tms04.deltatelesystems.ru     1   6     7     1   -381us[ -357us] +/-   17ms

It is desirable to synchronize time with your own server or a local server dedicated to the cluster; in this example, it is the ntp server (192.0.1.100). To do this, make changes similar to the following to the chrony configuration:

sudo tee "/etc/chrony/chrony.conf" > /dev/null << 'EOF'
server 192.0.1.100 iburst
keyfile /etc/chrony.keys
driftfile /var/lib/chrony/chrony.drift
log tracking measurements statistics
logdir /var/log/chrony
EOF
sudo systemctl restart chrony

Check that chrony is connected to the appropriate server.

chronyc sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^? 192.0.1.100                   8   6    17    37    +14us[  +70us] +/-  161us
chronyc tracking
Reference ID    : 0A80000C (ntp.local)
Stratum         : 9
Ref time (UTC)  : Wed Nov 15 11:58:52 2023
System time     : 0.000000004 seconds slow of NTP time
Last offset     : -0.000056968 seconds
RMS offset      : 0.000056968 seconds
Frequency       : 10.252 ppm fast
Residual freq   : -2.401 ppm
Skew            : 364.419 ppm
Root delay      : 0.000455358 seconds
Root dispersion : 0.010503666 seconds
Update interval : 2.1 seconds
Leap status     : Normal

1.1.4. Deploy an etcd Cluster #

These steps must be applied to all etcd nodes.

Install the following packages:

sudo apt install -y vim curl

To connect the Postgres Pro Shardman repository:

  • Run the following, substituting your username and password:

    curl -fsSL -u "<user>:<password>" https://repo.postgrespro.ru/sdm/sdm-17/keys/pgpro-repo-add.sh > pgpro-repo-add.sh
    chmod +x pgpro-repo-add.sh
    

  • Specify your repository username and password in the pgpro-repo-add.sh script:

    ex -s -c "%s/REPOUSER=/REPOUSER=<user>/g" -c "wq" "pgpro-repo-add.sh"
    ex -s -c "%s/PASSWORD=/PASSWORD=<password>/g" -c "wq" "pgpro-repo-add.sh"
    

  • Run the script with sudo:

    sudo ./pgpro-repo-add.sh
    

Install the etcd-sdm package:

sudo apt install -y etcd-sdm

Edit the file that lists the etcd environment variables, inserting values specific to your host:

sudo vim /etc/default/etcd-sdm
## the next two lines must contain your host data
ETCD_NAME=<hostname>
ETCD_ADVERTISE_CLIENT_URLS=http://<host ip address>:2379
ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
ETCD_MAX_SNAPSHOTS=5
ETCD_MAX_WALS=5
ETCD_AUTO_COMPACTION_MODE=periodic
ETCD_AUTO_COMPACTION_RETENTION=5m
ETCD_QUOTA_BACKEND_BYTES=6442450944
ETCD_DATA_DIR=/var/lib/etcd-sdm/sdm-17
# the following parameters are only required for a multi-node etcd cluster
# enter your IPs
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://<host ip address>:2380
ETCD_INITIAL_CLUSTER=etcd1=http://etcd1:2380,etcd2=http://etcd2:2380
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster-1
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2380

This file will be loaded at etcd start.
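For example, on the first etcd node of the configuration above (etcd1, 192.0.1.1), the filled-in file could look as follows; this is a sketch, so adjust names and addresses to your environment:

ETCD_NAME=etcd1
ETCD_ADVERTISE_CLIENT_URLS=http://192.0.1.1:2379
ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
ETCD_MAX_SNAPSHOTS=5
ETCD_MAX_WALS=5
ETCD_AUTO_COMPACTION_MODE=periodic
ETCD_AUTO_COMPACTION_RETENTION=5m
ETCD_QUOTA_BACKEND_BYTES=6442450944
ETCD_DATA_DIR=/var/lib/etcd-sdm/sdm-17
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://192.0.1.1:2380
ETCD_INITIAL_CLUSTER=etcd1=http://etcd1:2380,etcd2=http://etcd2:2380
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster-1
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2380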

Clear the etcd data directory:

sudo rm -rf /var/lib/etcd-sdm/sdm-17/*

Restart the etcd-sdm service:

sudo systemctl restart etcd-sdm

For your user, add /opt/pgpro/sdm-17/bin to the PATH environment variable:

echo "export PATH=$PATH:/opt/pgpro/sdm-17/bin" >> ~/.bashrc
source ~/.bashrc
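To confirm that the binaries are now found on your PATH, you can run, for example:

which etcdctl
etcdctl version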

Check that etcd is properly configured:

etcdctl --endpoints=http://192.0.1.1:2379 endpoint status -w table
+------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------------------------------+
|        ENDPOINT        |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX |             ERRORS             |
+------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------------------------------+
| http://192.0.1.1:2379  | 9324a99282752a09 |   3.5.9 |  2.1 GB |      true |      false |        14 |   91459207 |           91459207 |  memberID:10602785869456026121 |
|                        |                  |         |         |           |            |           |            |                    |                 alarm:NOSPACE  |
+------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------------------------------+

The etcd cluster is properly configured and ready to serve requests.

To prevent bloat when etcd is intensively used, add a defragmentation command to cron:

sudo sh -c '{ crontab -l; echo "@hourly /opt/pgpro/sdm-17/bin/etcdctl defrag"; } | crontab -'
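Because the command above was run with sudo, the job is added to root's crontab; you can verify it, for example, with:

sudo crontab -l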

1.1.5. Deploy Postgres Pro Shardman Nodes #

Let’s add a Postgres Pro Shardman repository on each Postgres Pro Shardman node:

  • Install the following packages:

    sudo apt install -y vim curl jq
    

  • Run the following, substituting your username and password:

    curl -fsSL -u "<user>:<password>" https://repo.postgrespro.ru/sdm/sdm-17/keys/pgpro-repo-add.sh > pgpro-repo-add.sh
    chmod +x pgpro-repo-add.sh
    

  • Specify your repository username and password in the pgpro-repo-add.sh script:

    ex -s -c "%s/REPOUSER=/REPOUSER=<user>/g" -c "wq" "pgpro-repo-add.sh"
    ex -s -c "%s/PASSWORD=/PASSWORD=<password>/g" -c "wq" "pgpro-repo-add.sh"
    

  • Run the script with sudo:

    sudo ./pgpro-repo-add.sh
    

The next step is to install the packages (on each node):

sudo apt update
sudo apt install -y postgrespro-sdm-17-server postgrespro-sdm-17-client postgrespro-sdm-17-contrib postgrespro-sdm-17-libs pg-probackup-sdm-17  shardman-services shardman-tools

Suppose we have chosen the default cluster name cluster0. The next step is to put the Postgres Pro Shardman environment variables into the /etc/shardman directory (on each node):

sudo sh -c 'cat << EOF > /etc/shardman/shardmand-cluster0.env
SDM_CLUSTER_NAME=cluster0
SDM_LOG_LEVEL=info
# enter your etcd host(s) address
SDM_STORE_ENDPOINTS=http://etcd:2379
EOF'

The file and directory are created with sudo, but shardmanctl is later run without sudo and therefore cannot access the file with the environment variables. To work around this, either export the variables in your environment or grant your user access rights to the file and the directory, as sketched below.
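A possible sketch of both options (replace <user> with your actual user name):

# Option 1: export the variables in your shell environment
export SDM_CLUSTER_NAME=cluster0
export SDM_STORE_ENDPOINTS=http://etcd:2379

# Option 2: grant your user access to the directory and the file
sudo chown -R <user> /etc/shardman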

For your user, add /opt/pgpro/sdm-17/bin to the PATH environment variable and export the SDM_STORE_ENDPOINTS variable:

echo "export PATH=$PATH:/opt/pgpro/sdm-17/bin" >> ~/.bashrc
# enter your etcd host(s) address
echo "export SDM_STORE_ENDPOINTS=http://etcd:2379" >> ~/.bashrc
source ~/.bashrc

Let’s generate a sample configuration with the Postgres Pro Shardman utilities (only on one Postgres Pro Shardman node).

  shardmanctl config generate > spec.json

In this step, you can make changes to the cluster specification (configuration), for example, change the password, the PostgreSQL shared_buffers parameter, and so on.
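Before editing, you may want to inspect the generated specification to see which fields it contains, for example:

jq 'keys' spec.json    # list the top-level fields of the generated spec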

For a simple cluster, set Repfactor to 0. For more details, see Shardman configuration file.

jq '
  .Repfactor = 0
' "spec.json" > tmp.json && mv tmp.json "spec.json"

1.1.6. Initialize the Postgres Pro Shardman Cluster #

Now for the final steps. First, let's initialize the cluster configuration in etcd (on any one Postgres Pro Shardman node):

  shardmanctl init -f spec.json

The expected output is:

2023-04-18T12:30:03.043Z    DEBUG   cmd/common.go:100   Waiting for metadata lock...
2023-04-18T12:30:03.048Z    DEBUG   cluster/cluster.go:365  DataDir is not specified, setting to default /var/lib/pgpro/sdm-17/data
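Optionally, you can confirm that the configuration has been written to etcd by listing the stored keys; the exact key layout is internal and may differ between versions, so adjust the endpoint to one of your etcd nodes:

etcdctl --endpoints=http://etcd1:2379 get "" --prefix --keys-only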

Enable and start the shardmand service (on each Postgres Pro Shardman node):

  sudo systemctl enable --now shardmand@cluster0
  sudo systemctl status shardmand@cluster0

  ● shardmand@cluster0.service - deployment daemon for shardman
       Loaded: loaded (/lib/systemd/system/shardmand@.service; enabled; vendor preset: enabled)
       Active: active (running) since Tue 2023-04-18 12:28:18 UTC; 2min 13s ago
         Docs: https://github.com/postgrespro/shardman
     Main PID: 618 (shardmand)
        Tasks: 10 (limit: 4571)
       Memory: 32.0M
          CPU: 422ms
       CGroup: /system.slice/system-shardmand.slice/shardmand@cluster0.service
               └─618 /opt/pgpro/sdm-17/bin/shardmand --cluster-name cluster0 --system-bus --user postgres
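If the service does not reach the active (running) state, the systemd journal is the first place to look, for example:

sudo journalctl -u shardmand@cluster0 -n 50 --no-pager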

1.1.7. Add Nodes to the Postgres Pro Shardman Cluster #

The following command must be executed only on one Postgres Pro Shardman node.

In this step, we assume that all previous steps were executed successfully: the etcd cluster is working properly, the time on all hosts is synchronized, and the shardmand daemon is running on sdm01 and sdm02. The final step is executed with the shardmanctl command as follows:

  shardmanctl nodes add -n sdm01,sdm02 \
             --cluster-name cluster0 \
             --log-level debug

The expected output should be:

2023-04-18T12:43:11.300Z    DEBUG   cmd/common.go:100   Waiting for metadata lock...
2023-04-18T12:43:11.306Z    INFO    cluster/store.go:277    Checking if shardmand on all nodes have applied current cluster configuration
✓ Waiting for shardmand on node sdm01 to apply current configuration: success 0.000s
✓ Waiting for shardmand on node sdm02 to apply current configuration: success 0.000s
2023-04-18T12:43:11.307Z    INFO    add/case.go:112 Initting Stolon instances...
2023-04-18T12:43:11.312Z    INFO    add/case.go:170 Waiting for Stolon daemons to start... make sure shardmand daemons are running on the nodes
✓ Waiting for Stolon daemons of rg clover-1-sdm01: success 31.012s
✓ Waiting for Stolon daemons of rg clover-1-sdm02: success 0.012s
2023-04-18T12:43:42.336Z    INFO    add/case.go:187 Adding repgroups...
✓ waiting rg 1 config apply: done 7.014s
2023-04-18T12:43:49.444Z    DEBUG   broadcaster/worker.go:33    start broadcaster worker for repgroup id=1
2023-04-18T12:43:49.453Z    DEBUG   broadcaster/worker.go:51    repgroup 1 connect established
2023-04-18T12:43:49.453Z    DEBUG   commands/addrepgroup.go:575 waiting for extension lock...
2023-04-18T12:43:49.453Z    DEBUG   commands/addrepgroup.go:137 Loading schema into replication group rg 1
...
2023-04-18T12:44:25.665Z    DEBUG   rebalance/service.go:528    wait all tasks finish
2023-04-18T12:44:25.666Z    DEBUG   broadcaster/worker.go:75    finish broadcaster worker for repgroup id=1
2023-04-18T12:44:25.666Z    DEBUG   broadcaster/worker.go:75    finish broadcaster worker for repgroup id=2
2023-04-18T12:44:25.666Z    INFO    add/case.go:221 Successfully added nodes sdm01, sdm02 to the cluster

The Successfully added nodes sdm01, sdm02 to the cluster message means that everything is fine and nodes sdm01 and sdm02 are working properly.

1.1.8. Check the Postgres Pro Shardman Cluster Status #

Let's check the status of the cluster nodes.

  shardmanctl status
┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                               == STORE STATUS ==                                               │
├────────────┬─────────────────────────────────────────────────┬────────────────────────┬────────────────────────┤
│   STATUS   │                     MESSAGE                     │    REPLICATION GROUP   │          NODE          │
├────────────┼─────────────────────────────────────────────────┼────────────────────────┼────────────────────────┤
│   Warning  │ Store has only one member, consider deploying   │                        │                        │
│            │                   store cluster                 │                        │                        │
└────────────┴─────────────────────────────────────────────────┴────────────────────────┴────────────────────────┘
┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                             == METADATA STATUS ==                                              │
├────────────┬─────────────────────────────────────────────────┬────────────────────────┬────────────────────────┤
│   STATUS   │                     MESSAGE                     │    REPLICATION GROUP   │          NODE          │
├────────────┼─────────────────────────────────────────────────┼────────────────────────┼────────────────────────┤
│     OK     │                 Metadata is OK                  │                        │                        │
└────────────┴─────────────────────────────────────────────────┴────────────────────────┴────────────────────────┘
┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                             == SHARDMAND STATUS ==                                             │
├────────────┬─────────────────────────────────────────────────┬────────────────────────┬────────────────────────┤
│   STATUS   │                     MESSAGE                     │    REPLICATION GROUP   │          NODE          │
├────────────┼─────────────────────────────────────────────────┼────────────────────────┼────────────────────────┤
│     OK     │      shardmand status on node sdm01 is OK       │                        │          sdm01         │
├────────────┼─────────────────────────────────────────────────┼────────────────────────┼────────────────────────┤
│     OK     │      shardmand status on node sdm02 is OK       │                        │          sdm02         │
└────────────┴─────────────────────────────────────────────────┴────────────────────────┴────────────────────────┘
┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                         == REPLICATION GROUP STATUS ==                                         │
├────────────┬─────────────────────────────────────────────────┬────────────────────────┬────────────────────────┤
│   STATUS   │                     MESSAGE                     │    REPLICATION GROUP   │          NODE          │
├────────────┼─────────────────────────────────────────────────┼────────────────────────┼────────────────────────┤
│     OK     │  Replication group shard-1 is OK                │         shard-1        │                        │
├────────────┼─────────────────────────────────────────────────┼────────────────────────┼────────────────────────┤
│     OK     │  Replication group shard-2 is OK                │         shard-2        │                        │
└────────────┴─────────────────────────────────────────────────┴────────────────────────┴────────────────────────┘
┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                               == MASTER STATUS ==                                              │
├────────────┬─────────────────────────────────────────────────┬────────────────────────┬────────────────────────┤
│   STATUS   │                     MESSAGE                     │    REPLICATION GROUP   │          NODE          │
├────────────┼─────────────────────────────────────────────────┼────────────────────────┼────────────────────────┤
│     OK     │  Replication group shard-1 master is running on │         shard-1        │       sdm01:5432       │
│            │                    sdm01:5432                   │                        │                        │
├────────────┼─────────────────────────────────────────────────┼────────────────────────┼────────────────────────┤
│     OK     │  Replication group shard-2 master is running on │         shard-2        │       sdm02:5432       │
│            │                    sdm02:5432                   │                        │                        │
└────────────┴─────────────────────────────────────────────────┴────────────────────────┴────────────────────────┘
┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                            == DICTIONARY STATUS ==                                             │
├────────────┬─────────────────────────────────────────────────┬────────────────────────┬────────────────────────┤
│   STATUS   │                     MESSAGE                     │    REPLICATION GROUP   │          NODE          │
├────────────┼─────────────────────────────────────────────────┼────────────────────────┼────────────────────────┤
│     OK     │   Replication group shard-2 dictionary is OK    │         shard-2        │                        │
├────────────┼─────────────────────────────────────────────────┼────────────────────────┼────────────────────────┤
│     OK     │   Replication group shard-1 dictionary is OK    │         shard-1        │                        │
└────────────┴─────────────────────────────────────────────────┴────────────────────────┴────────────────────────┘
┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                              == KEEPER STATUS ==                                               │
├────────────┬─────────────────────────────────────────────────┬────────────────────────┬────────────────────────┤
│   STATUS   │                     MESSAGE                     │    REPLICATION GROUP   │          NODE          │
├────────────┼─────────────────────────────────────────────────┼────────────────────────┼────────────────────────┤
│     OK     │              Keeper keeper_1 is OK              │         shard-1        │       sdm01:5432       │
├────────────┼─────────────────────────────────────────────────┼────────────────────────┼────────────────────────┤
│     OK     │              Keeper keeper_1 is OK              │         shard-2        │       sdm02:5432       │
└────────────┴─────────────────────────────────────────────────┴────────────────────────┴────────────────────────┘

1.1.9. Connect to the Postgres Pro Shardman Cluster #

To connect to the cluster, first get the cluster connection string on any cluster node (sdm01 or sdm02):

  shardmanctl getconnstr

Output example:

  dbname=postgres host=sdm01,sdm02 password=!!!CHANGE_ME!!! port=5432,5432 user=postgres

And then let’s try to connect:

  psql -d 'dbname=postgres host=sdm01,sdm02 password=!!!CHANGE_ME!!! port=5432,5432 user=postgres'

Output example:

  psql (17.5)
  Type "help" for help.

  postgres=#
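As a convenience, you can also pass the output of getconnstr directly to psql, for example:

  psql -d "$(shardmanctl getconnstr)"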

1.1.10. Example: Deploy a Multi-Node etcd Cluster #

The process is described for the following servers:

192.0.1.1 etcd1
192.0.1.2 etcd2
192.0.1.3 etcd3

Install the needed packages on each server:

sudo apt install -y vim curl

To connect the repository, on each server, run:

sudo curl -fsSL https://repo.postgrespro.ru/sdm/sdm-17/keys/pgpro-repo-add.sh | bash

Install the etcd-sdm package on each server:

sudo apt install -y etcd-sdm

For each server, edit the file that lists environment variables, replacing placeholders in angle brackets with specific values:

sudo vim /etc/default/etcd-sdm
ETCD_NAME=<hostname>
ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2380
ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
ETCD_ADVERTISE_CLIENT_URLS=http://<host ip address>:2379
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://<host ip address>:2380
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster-1
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_MAX_SNAPSHOTS=5
ETCD_MAX_WALS=5
ETCD_AUTO_COMPACTION_MODE=periodic
ETCD_AUTO_COMPACTION_RETENTION=5m
ETCD_QUOTA_BACKEND_BYTES=6442450944
ETCD_DATA_DIR=/var/lib/etcd-sdm/sdm-17
ETCD_INITIAL_CLUSTER=etcd1=http://<ip etcd1>:2380,etcd2=http://<ip etcd2>:2380,etcd3=http://<ip etcd3>:2380

This file, with its own settings on each server, will be loaded at etcd start.

Clear the etcd data directory:

sudo rm -rf /var/lib/etcd-sdm/sdm-17/*

Restart the etcd-sdm service on each server:

sudo systemctl restart etcd-sdm

For your user, add /opt/pgpro/sdm-17/bin to the PATH environment variable:

echo "export PATH=$PATH:/opt/pgpro/sdm-17/bin" >> ~/.bashrc
source ~/.bashrc

Check that etcd is properly configured:

etcdctl member list -w table
+------------------+---------+-------+------------------------------+----------------------------+------------+
|        ID        | STATUS  | NAME  |         PEER ADDRS           |        CLIENT ADDRS        | IS LEARNER |
+------------------+---------+-------+------------------------------+----------------------------+------------+
|  318be6342e6d9ac | started | etcd1 | http://192.0.1.1:2380        | http://192.0.1.1:2379      |      false |
| 9e49480544aedb89 | started | etcd2 | http://192.0.1.2:2380        | http://192.0.1.2:2379      |      false |
+------------------+---------+-------+------------------------------+----------------------------+------------+
etcdctl --endpoints=http://192.0.1.1:2379,http://192.0.1.2:2379,http://192.0.1.3:2379 endpoint status -w table
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| http://192.0.1.1:2379      |  318be6342e6d9ac |   3.5.9 |  5.7 MB |      true |      false |        13 |     425686 |             425686 |        |
| http://192.0.1.2:2379      | 9e49480544aedb89 |   3.5.9 |  5.7 MB |     false |      false |        13 |     425686 |             425686 |        |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

The etcd cluster is properly configured and ready to serve requests.

To prevent bloat when etcd is intensively used, add a defragmentation command to cron:

sudo sh -c '{ crontab -l; echo "@hourly /opt/pgpro/sdm-17/bin/etcdctl defrag"; } | crontab -'

The final endpoints string of the etcd cluster:

http://<ip etcd1>:2379,http://<ip etcd2>:2379,http://<ip etcd3>:2379

It should be specified in the /etc/shardman configuration file (the SDM_STORE_ENDPOINTS variable) and as the --store-endpoints parameter of shardmanctl.
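For example, with the addresses above and the default cluster name cluster0, the shardmand environment file entry and a shardmanctl invocation could look like this (a sketch; substitute your actual IP addresses):

# in /etc/shardman/shardmand-cluster0.env
SDM_STORE_ENDPOINTS=http://<ip etcd1>:2379,http://<ip etcd2>:2379,http://<ip etcd3>:2379

# or on the shardmanctl command line
shardmanctl --store-endpoints http://<ip etcd1>:2379,http://<ip etcd2>:2379,http://<ip etcd3>:2379 status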