I want to grep the second and third columns from that output result:

1       db1     ADM_DAT   300     yes     95.09
2       db2     SYSAUX    400     yes     94.52

and convert them into an array, for example:

outputres=("db1    ADM_DAT" "db2    SYSAUX")

and after that be able to read those values in a loop, for example:

for i in "${outputres[@]}"; do read -r a b <<< "$i"; unix_command $(cat file|grep $a|awk '{print $1}') $a $b;done

where file contains:

10.1.1.1 db1
10.1.1.2 db2

Final expectation:

unix_command 10.1.1.1 db1 ADM_DAT   
unix_command 10.1.1.2 db2 SYSAUX

This is only a theoretical example; I am not sure whether it works.

3 Comments
  • try awk '{print $2,$3}' file Commented Feb 13, 2015 at 12:45
  • @AvinashRaj, good, but they will not be in an array. Outputting only the result is the easy part; my idea is to put them in an array, not just to show them. Commented Feb 13, 2015 at 12:48
  • Then: mapfile -t outputres < <(awk '{print $2,$3}' file). Commented Feb 13, 2015 at 13:18
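
For reference, the mapfile suggestion from the last comment could be combined with the loop from the question roughly like this. This is only a sketch: previous_command is a placeholder for whatever prints the table shown above, and file is the "IP name" file from the question.

# Read "name tablespace" pairs (columns 2 and 3) straight into an array.
mapfile -t outputres < <(previous_command | awk '{print $2, $3}')

for i in "${outputres[@]}"; do
    read -r a b <<< "$i"
    # Look up the IP for $a in file and pass everything to the command.
    unix_command "$(awk -v db="$a" '$2 == db {print $1}' file)" "$a" "$b"
done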

1 Answer

I would use a simple bash while read loop and keep appending elements to the array with the += syntax:

outputres=()
while read -r _ a b _; do
  outputres+=("$a $b")
done < file

Doing so, with your input file, I got:

$ echo "${outputres[@]}"     #print all elements
db1 ADM_DAT db2 SYSAUX
$ echo "${outputres[0]}"     #print first one
db1 ADM_DAT
$ echo "${outputres[1]}"     #print second one
db2 SYSAUX

Since you want to use both values separately, it may be better to use an associative array:

$ declare -A array=()
$ while read -r _ a b _; do array[$a]=$b; done < file

And then you can loop through the values with:

$ for key in "${!array[@]}"; do echo "array[$key] = ${array[$key]}"; done
array[db2] = SYSAUX
array[db1] = ADM_DAT

See a basic example of how to use these arrays:

#!/bin/bash
declare -A array=([key1]='value1' [key2]='value2')

for key in "${!array[@]}"; do
    echo "array[$key] = ${array[$key]}"
done

echo "${array[key1]}"
echo "${array[key2]}"
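
Tying this back to the question, the associative array could drive the final command roughly like this. Just a sketch: columns_file is assumed to hold the listing with the tablespace columns, ip_file the "IP name" file from the question, and cmd_command is still a stand-in.

#!/bin/bash
# Build a name -> tablespace map from the columns listing.
declare -A array=()
while read -r _ a b _; do array[$a]=$b; done < columns_file

# For every db name, look up its IP in ip_file and call the command.
for db in "${!array[@]}"; do
    ip=$(awk -v d="$db" '$2 == d {print $1}' ip_file)
    cmd_command "$ip" "$db" "${array[$db]}"
done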

So maybe this can solve your problem: loop through the file with the columns, fetch the 2nd and 3rd fields and use them twice: first $a to look up the matching line in file (done with awk below), and then both as parameters to cmd_command:

while read -r _ a b _
do
    echo "cmd_command $(awk -v patt="$a" '$0~patt {print $1}' file) $a, $b"
done < columns_file

For a sample file named file:

$ cat file
hello this is db1
and this is another db2

I got this output (note I am just echoing, and the columns file is named a in this run):

$ while read -r _ a b _; do echo "cmd_command $(awk -v patt="$a" '$0~patt {print $1}' file) $a, $b"; done < a
cmd_command hello db1, ADM_DAT
cmd_command and db2, SYSAUX

17 Comments

did the IFS= read work on the previous output? Because I don't save that to a file; I want the output which I get automatically to turn into that outputres array
then instead of while read; do ... done < file just pipe it: cat file | things | while read ... done
I don't get that information from a file; it is just a simple print from a previous command which gives me that output
@gniourf_gniourf you are right. I added IFS= after checking and I didn't test it. Updated, thanks!
@KalinBorisov that edit was quite helpful. See my last edit with a full answer. It is at the bottom
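
One caveat about the piping suggestion above: in bash, a while loop on the receiving end of a pipe runs in a subshell, so an array filled there is empty once the loop finishes. Reading from a process substitution keeps the loop in the current shell. A small sketch, with previous_command again standing in for whatever produces the output:

# cat file | ... | while read ...; done would lose the array (subshell).
outputres=()
while read -r _ a b _; do
    outputres+=("$a $b")
done < <(previous_command)    # process substitution: loop runs in the current shell

echo "${outputres[@]}"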