Well, you can do this: create an associative array, iterate over the lines while keeping a counter of the current line, then iterate over the fields of each line and fill the associative array with the row,column indexes as requested.
i=0
declare -A matrix
while IFS=, read -r -a line; do            # split each line on commas into the array "line"
    for ((j = 0; j < ${#line[@]}; ++j)); do
        matrix[$i,$j]=${line[$j]}          # store field j of row i under the "i,j" key
    done
    ((i++))
done < itrs.csv
After that, declare -p matrix would output:
declare -A matrix=([1,5]="G" [1,4]="G" [1,7]="C" [1,6]="C" [1,1]="7" [1,0]="BANAMEX" [1,3]="1:23:45" [1,2]="1:18:10" [0,4]="2019-11-05" [0,5]="2019-11-06" [0,6]="2019-11-07" [0,7]="2019-11-08" [0,0]="Loads\\PostDate" [0,1]="schedule" [0,2]="seta" [0,3]="eeta" [2,6]="G" [2,7]="C" [2,4]="G" [2,5]="G" [2,2]="0:21:00" [2,3]="1:01:00" [2,0]="EMEA" [2,1]="5" )
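As a quick usage sketch (assuming the itrs.csv contents reflected in the output above, i.e. 3 rows of 8 fields), you can then address cells by their row,column key:

# print a single cell: row 1, column 0
echo "${matrix[1,0]}"                  # -> BANAMEX

# print the whole matrix row by row (i holds the number of rows after the loop)
for ((r = 0; r < i; ++r)); do
    for ((c = 0; c < 8; ++c)); do
        printf '%s ' "${matrix[$r,$c]}"
    done
    printf '\n'
done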
- See the BashFAQ entry How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?
- Don't use eval. eval is evil. Don't do eval arr=($(..)) unless you know what you are doing; in your case, using eval makes little to no sense.
- The error comes from awk. awk is invoked as awk [options] script [file]; you could run awk -F, '{print $0}' itrs.csv, but it would make no sense. In your command, itrs.csv is parsed by awk as the script itself, and since it is not a valid awk script, the tool throws an error (see the awk sketch after this list).
- To read, for example, only the first line, split on commas, into an array in bash, you can do IFS=, line=($(head -n1 itrs.csv)). The -F, affects how awk parses the file, not how bash creates arrays; for that, use IFS (see the read sketch after this list).
- Also see Why is using a shell loop to process text considered bad practice? for some of the problems with your approach.
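For completeness, a minimal awk sketch of the argument order described above (the chosen field, $1, is just an illustration):

# correct order: options, then the awk program, then the input file
awk -F, '{ print $1 }' itrs.csv    # prints the first comma-separated field of every line

# roughly what the question did: the file name lands where awk expects a program,
# so awk tries to compile the CSV contents as code and reports a syntax error
# awk -F, itrs.csv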
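And a small sketch of the IFS idea for reading just the first line (the header array name is only for illustration); unlike the line=($(head -n1 ...)) one-liner, the read variant does not leave IFS changed in the shell and is not subject to globbing:

# read only the first line of the file into an array, splitting on commas
IFS=, read -r -a header < itrs.csv
echo "${header[3]}"    # -> eeta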