At least in my version of Bash (5.2.21), it's possible to unset a range of array elements with a brace-expansion interval like this:
unset -v array[{1..5}]
Unfortunately, brace-expansion intervals do not support variables, so a dynamic approach requires eval:
eval 'unset -v a\[{'"$start"'..'"$end"'}\]'
Of course, this makes the approach rather inefficient, and it's probably better to iterate over the array, unsetting the elements of the range one by one, or to clear them first and un-sparse the array afterwards, as shown by Stéphane Chazelas above.
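For illustration, a minimal sketch of that loop/compaction variant (assuming start and end hold valid indices of the indexed array a):
# Unset the range element by element (quoted to avoid accidental globbing)
for (( i = start; i <= end; i++ )); do
    unset -v 'a[i]'
done
# Optionally un-sparse / re-index afterwards
a=("${a[@]}")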
Efficient method to un-sparse an array:
When multiple elements are to be removed from an array, an efficient approach is to overwrite each removed element in place with the next non-empty one and to rebuild the array from the modified residual part afterwards.
In this example, input elements are trimmed and only non-blank lines are preserved in the array. Blank elements are overwritten during the iteration, up to the last occurring non-empty line.
After the loop, the array is rebuilt from the modified non-empty leading part; this avoids re-iterating over the original array's sparse elements, the inefficient subsequent re-splitting on IFS via an unquoted array assignment, or the even less efficient use of eval/unset:
## local IFS_="$IFS"; IFS=$'\0' # IFS ends up empty (bash can't store NUL), so -d "$IFS" below means a NUL delimiter, preserving newlines within elements
readarray -t -d "$IFS" -u 0 a
local -i i=0
for e in "${a[@]}"; do
    # Trim leading whitespace
    e="${e#"${e%%[^ $'\t\n\r']*}"}"
    # Keep non-blank elements only: trim trailing whitespace, write to the next free slot
    [[ -n $e ]] && a[$i]="${e%"${e##*[^ $'\t\n\r']}"}" && ((i++))
done
# Rebuild and shrink array to last modified index
a=("${a[@]:0:$i}")
## IFS="$IFS_"
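For reference, a minimal way to exercise the snippet: wrap it in a function (the local declarations require one anyway) and feed it NUL-delimited data on stdin. trim_a and the sample strings are made-up names for this demo:
trim_a() {
    local e; local -i i=0; local -a a
    readarray -t -d '' -u 0 a      # -d '' = NUL delimiter (what the empty IFS amounts to)
    for e in "${a[@]}"; do
        e="${e#"${e%%[^ $'\t\n\r']*}"}"
        [[ -n $e ]] && a[$i]="${e%"${e##*[^ $'\t\n\r']}"}" && ((i++))
    done
    a=("${a[@]:0:$i}")
    declare -p a                   # show the result
}
printf '%s\0' '  foo ' '   ' $'\tbar baz\n' | trim_a
# -> declare -a a=([0]="foo" [1]="bar baz")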
In contrast, un-sparsing via eval/unset appears to be much slower:
local -i i=0 j=0
readarray -t -d "$IFS" -u 0 a
while (( j < ${#a[@]} )); do
    e="${a[$j]#"${a[$j]%%[^ $'\t\n\r']*}"}"
    [[ -n $e ]] && a[$i]="${e%"${e##*[^ $'\t\n\r']}"}" && ((i++))
    (( ++j ))
done
# Alternative: un-sparse by re-splitting on IFS (unquoted expansion)
## a=(${a[@]}) # -> prone to unexpected filename globbing!
# Unset the resulting abundant trailing elements in a single call
eval 'unset -v a\[{'"$i"'..'"${#a[@]}"'}\]'
It may be simpler and more straightforward to just build a new array during the iteration, even when it is not used directly, and to rebuild the original from it at the end:
local -a res
readarray -t -d "$IFS" -u 0 a
for e in "${a[@]}"; do
    e="${e#"${e%%[^ $'\t\n\r']*}"}"
    [[ -n $e ]] && res+=("${e%"${e##*[^ $'\t\n\r']}"}")
done
a=("${res[@]}")
Examples 1 and 3 should be roughly on par, with 1 having a slight advantage, since it replaces elements in the original array and un-sparses by re-creating it from its non-empty leading part. If the original array should be preserved, example 3 is the way to go.
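To sanity-check that, a rough timing harness along these lines could be used (illustrative only; trim_a is the made-up wrapper from the demo above, trim_res wraps example 3 the same way, and absolute numbers depend on the bash build and hardware):
# Wrapper around example 3, mirroring trim_a above
trim_res() {
    local e; local -a a res
    readarray -t -d '' -u 0 a
    for e in "${a[@]}"; do
        e="${e#"${e%%[^ $'\t\n\r']*}"}"
        [[ -n $e ]] && res+=("${e%"${e##*[^ $'\t\n\r']}"}")
    done
    a=("${res[@]}")
    declare -p a
}
# Generate n non-blank records, each followed by a blank one, NUL-delimited
gen() { local -i k; for (( k = 0; k < $1; k++ )); do printf '%s\0' "  item $k " ''; done; }
time gen 5000 | trim_a   > /dev/null   # discard the declare -p output
time gen 5000 | trim_res > /dev/null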
Another method I found to be even a bit faster than a for/while loop is to (ab)use a callback function for 'on-the-fly' creation of a trimmed and un-sparsed array via readarray:
# Build trimmed array callback function for readarray
function fillArray() {
    local e
    if [[ $2 ]]; then
        e="${2#"${2%%[^ $'\t\n\r']*}"}"
        [[ -n $e ]] && a+=("${e%"${e##*[^ $'\t\n\r']}"}")
    fi
}
declare -a a
readarray -t -d "$IFS" -c 1 -C fillArray -u 0
It'll actually build two arrays:
MAPFILE (the ignored default, holding the unchanged input)
a (the trimmed and un-sparsed result)
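A quick way to see both results (NUL-delimited input as throughout; the sample strings are made up):
printf '%s\0' '  foo ' '   ' $'\tbar\n' |
    { a=(); readarray -t -d '' -c 1 -C fillArray -u 0; declare -p MAPFILE a; }
# -> MAPFILE keeps the raw input; a ends up as ([0]="foo" [1]="bar")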
High-performance un-sparse:
The fastest option here is to keep blank/sparse elements out of the array right from the start.
Since readarray usually performs better than while read -r; do ... for large sets, let's use the former. Here is an example of how to build such an array from stdin by applying a callback function to each input element, storing the positions of the blank elements:
# Callback for readarray: store the indices of blank elements
findSparse() { [[ -z $2 ]] && sparse+=("$1"); }
declare -a sparse
# Read stdin which may contain blank lines
readarray -t -d "$IFS" -c 1 -C findSparse -u 0 a
# Build the element references and invoke unset only once
if [[ $sparse ]]; then
    for i in "${!sparse[@]}"; do sparse[$i]="a[${sparse[$i]}]"; done
    unset -v "${sparse[@]}"
fi
This way, inefficient re-iteration and re-building to un-sparse huge arrays can be avoided, which may yield a significant performance benefit, especially when the operation is repeated many times.
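A quick end-to-end demo of this (made-up sample data, NUL-delimited as above); note that unset leaves holes at the collected indices rather than re-indexing:
printf '%s\0' 'foo' '' 'bar' '' '' 'baz' | {
    sparse=(); a=()
    readarray -t -d '' -c 1 -C findSparse -u 0 a
    declare -p sparse            # -> indices of the blank elements: 1 3 4
    for i in "${!sparse[@]}"; do sparse[$i]="a[${sparse[$i]}]"; done
    unset -v "${sparse[@]}"
    declare -p a                 # -> declare -a a=([0]="foo" [2]="bar" [5]="baz")
}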
High-performance un-sparse v.2:
If many elements (10^3 or a higher order of magnitude) have to be processed, it may be more efficient to use an external tool like sed or awk, which can produce the (un-sparsed) input for the array by means of an appropriate regex.
Depending on the available hardware resources, the break-even point where either the loop or the external tool performs better may be a lot higher; in my case it is around 600 for both the iteration and the element count, with a greater number of iterations having a negative impact at the same number of elements from there on.
Trim with awk from stdin, with IFS=$'\0' (leaving IFS effectively empty, so -d "$IFS" again means a NUL delimiter) to preserve newlines (untested; gensub requires gawk):
readarray -t -d "$IFS" a < <(awk -v RS="[ \n\r\t]*\0" '{
    # print non-blank records NUL-terminated, to match the NUL-delimited readarray
    if ("" != $0) printf "%s\0", gensub(/^[ \t\n\r]*/, "", "g", $0)
}')
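A GNU sed variant of the same idea (equally untested; -z for NUL-delimited records is GNU-specific):
# Trim leading/trailing whitespace and drop blank records, NUL-delimited in and out
readarray -t -d "$IFS" a < <(sed -zE 's/^[[:space:]]+//; s/[[:space:]]+$//; /^$/d')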
Caveat: the only problem with this specific (null-splitting) example is the huge performance drop if the null-delimited input is 'emulated', e.g. by the use of ...< <(printf '%s\0' "${src[@]}").
Regular piped or simple here-string input (newline-delimited only, with the code adapted accordingly: ...<<<"${src[@]}") should be no problem, though.
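For completeness, the emulated-input variant mentioned in the caveat might look like this (src is a made-up sample array; it works, but expect the performance drop described above):
src=( '  foo ' '   ' $'\tbar baz\n' )
readarray -t -d '' a < <(
    printf '%s\0' "${src[@]}" |
        awk -v RS="[ \n\r\t]*\0" '{ if ("" != $0) printf "%s\0", gensub(/^[ \t\n\r]*/, "", "g", $0) }'
)
declare -p a    # -> declare -a a=([0]="foo" [1]="bar baz")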