stty=$(stty -g)
stty raw -echo
zstd -cf |
  LC_stty=$stty \
  LC_TERM=$TERM \
  LC_COLS=${COLUMNS:-$(tput cols)} \
  LC_LINES=${LINES:-$(tput lines)} ssh -o IPQoS=lowdelay host '
    export TERM=$LC_TERM
    zstd -df |
      socat - '\''SYSTEM:"stty cols \"$LC_COLS\" rows \"$LC_LINES\"; bash -il",pty,ctty,setsid,stderr'\'' 2>&1 |
      zstd -cf' |
  zstd -df
stty "$stty"
Where we use socat to run bash in a pseudo-terminal rather than using ssh -tt, as we need the compression to happen outside of the pseudo-terminal so that the post-processing done by the tty line discipline doesn't interfere. That however means window size changes are no longer propagated. We restore some of the lost functionality by passing $TERM and the stty settings along, and by setting IPQoS to the value you'd get for an interactive session.
But again, you'll run into the problem of data being compressed in (large) chunks.
AFAICT, the zstd utility cannot be told to emit compressed data for each input it gets of varying size. The file format doesn't look like it can accommodate chunks of different sizes either, so if you wanted to use the zstd algorithm, you'd likely need to write your own format and your own compressor.
Also, compressing the input of an interactive session hardly makes sense. That's just characters sent upon key presses, so you're usually sending one byte at a time, which cannot be compressed (and in my tests, I see ssh sending packets with a 36-byte payload for each keystroke, along with dozens of other 36-byte chaff packets meant to obfuscate keystrokes; all random-looking, incompressible data).
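If you want to see that for yourself, a capture along these lines while typing in another ssh session shows the small fixed-size packets (the interface name is an assumption, adjust to yours):

# data-bearing ssh segments as they go out; eth0 is an assumption
sudo tcpdump -ni eth0 -q 'tcp port 22 and tcp[tcpflags] & tcp-push != 0'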
For an interactive zstd that emits compressed data as soon as possible, you could adapt that example from the Compress::Zstd perl module to make it use sysread instead of read so it reads what's currently available in the pipe, disable buffering, and flush the compressor for every input:
#! /usr/bin/perl
use Compress::Zstd qw(ZSTD_MAX_CLEVEL);
use Compress::Zstd::Compressor qw(ZSTD_CSTREAM_IN_SIZE);
use Compress::Zstd::Decompressor qw(ZSTD_DSTREAM_IN_SIZE);

# -d selects decompression; -<n> selects the compression level
my ($decompress) = grep { $_ eq '-d' } @ARGV;
my ($level) = map { s/^-//; $_ } grep { /^-\d+$/ } @ARGV;
$level = 3 if !$level || $level < 1 || $level > ZSTD_MAX_CLEVEL;

$| = 1; # disable stdout buffering

if ($decompress) {
  my $decompressor = Compress::Zstd::Decompressor->new;
  while (sysread(STDIN, my $buffer, ZSTD_DSTREAM_IN_SIZE)) {
    print $decompressor->decompress($buffer);
  }
} else {
  my $compressor = Compress::Zstd::Compressor->new($level);
  # sysread returns as soon as some input is available; flushing the
  # compressor after each read emits the compressed data straight away
  while (sysread(STDIN, my $buffer, ZSTD_CSTREAM_IN_SIZE)) {
    print $compressor->compress($buffer) . $compressor->flush;
  }
  print $compressor->end;
}
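A quick way to sanity-check the script (assuming you saved it as zstdi somewhere in $PATH and made it executable; the test string and level are arbitrary):

chmod +x zstdi
printf hello | zstdi -19 | zstdi -d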
And then invoke ssh as:
stty=$(stty -g)
stty raw -echo
LC_stty=$stty \
  LC_TERM=$TERM \
  LC_COLS=${COLUMNS:-$(tput cols)} \
  LC_LINES=${LINES:-$(tput lines)} ssh -o IPQoS=lowdelay host '
    export TERM=$LC_TERM
    socat - '\''SYSTEM:"stty cols \"$LC_COLS\" rows \"$LC_LINES\"; bash -il",pty,ctty,setsid,stderr'\'' 2>&1 |
      zstdi' |
  zstdi -d
stty "$stty"
(where zstdi is that perl script; here we skip compressing the input). That seems to be usable, but I doubt you'll save many (if any) CPU cycles compared to just using ssh -C and its zlib compression.
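For reference, that zlib baseline needs no wrapper at all, it's just the client's built-in option:

# ordinary interactive session with ssh's zlib compression
ssh -C host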
Compressing the output of a user-interactive session will likely get you some benefit in terms of bandwidth usage. For instance, if you run the same ls -l command twice on the same directory, the second run should cause significantly less traffic. But for interactive shell sessions, if throughput (not to be confused with latency) is so slow that you feel you would get an improvement from compression, then we're talking a few KiB/s of throughput (unless you're superhuman and can read text faster than that), and switching from zlib to zstd would not make much of a difference in terms of CPU usage: you'd just go from something like 0.001% to 0.0008% (completely made-up numbers).
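You can get a rough feel for that first point locally; a sketch (directory and file names arbitrary) showing that a duplicated listing compresses to barely more than a single copy, as the repeat is found in the compressor's history:

ls -l /usr/bin > listing
wc -c < listing                        # size of one raw copy
zstd -c < listing | wc -c              # one copy, compressed
cat listing listing | zstd -c | wc -c  # two copies: barely larger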
zstd would be useful when working with streams, for example if you want to send a large amount of data for it to be processed (on the fly or not) by a remote command and get the (also large) output back, and network bandwidth is the bottleneck.
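A sketch of what that could look like, where host, the file names, and remote-command are all placeholders:

zstd -cf < big-input |
  ssh host '
    zstd -df |
      remote-command |
      zstd -cf' |
  zstd -df > big-output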