69

I have a staging server on DigitalOcean.

I want to build & deploy my Node app to it.

name: Build & Deploy
on:
  push:
    tags:
      - 'v1.*.0'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Create SSH key
        run: |
          mkdir -p ~/.ssh/
          echo "$DO_GITHUB_PRIVATE_KEY" > ../github_do.key
          sudo chmod 600 ../github_do.key
          ssh-keyscan -H ${{secrets.DEPLOY_SERVER}} > ~/.ssh/known_hosts
        shell: bash
        env:
          DO_GITHUB_PRIVATE_KEY: ${{secrets.DO_GITHUB_PRIVATE_KEY}}
      - uses: actions/setup-node@v1
        with:
          node-version: 12.x
      - name: Install Packages
        run: yarn install --frozen-lockfile
      - name: Build artifacts
        env:
          DEPLOY_SSH_KEY_PATH: ${{ github.workspace }}/../github_do.key
        run: |
          yarn shipit production fast-deploy

What I've done is generate a new SSH private & public key pair.

The private key I've saved inside the DO_GITHUB_PRIVATE_KEY GitHub secret.

The public key I've added to authorized_keys on my staging server.
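
In shell terms, the setup was roughly this (host, user, and file names here are illustrative placeholders, not from the original question):

# generate a dedicated deploy key pair
ssh-keygen -t rsa -b 4096 -f github_do.key
# the private key (github_do.key) goes into the DO_GITHUB_PRIVATE_KEY repo secret
# the public key goes into ~/.ssh/authorized_keys on the staging server:
ssh-copy-id -i github_do.key.pub deploy@staging.example.com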

When the action is triggered, it fails on:

@ v***.256.0
Create release path "/home/***/***/releases/2020-03-0***-v***.256.0"
Running "mkdir -p /home/***/***/releases/2020-03-0***-v***.256.0" on host "***".
@***-err ***@***: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
'fast-deploy:updateRemote' errored after ***.32 s
Error: Command failed: ssh -i /home/runner/work/***/***/../github_do.key ***@*** "mkdir -p /home/***/***/releases/2020-03-0***-v***.256.0"

5 Answers

86

I've solved it! Apparently the keys were protected with a passphrase 🤯.
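
You can check whether an existing key is passphrase-protected, and strip the passphrase, like this (a quick sketch; OLD_PASSPHRASE is a placeholder):

# prints the public key; fails if the private key needs a passphrase
ssh-keygen -y -P "" -f ~/.ssh/id_rsa > /dev/null
# remove the passphrase in place
ssh-keygen -p -P "OLD_PASSPHRASE" -N "" -f ~/.ssh/id_rsa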

This is the whole process:

  1. Generate new keys

ssh-keygen -t rsa -b 4096 -C "[email protected]" -q -N ""

  2. Update your remote server's authorized_keys

    ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected]

  3. Enter the remote server & run

ssh-keyscan remote-server.com

  4. Copy the output to a GitHub secret (let's call it SSH_KNOWN_HOSTS)
  5. Copy the private key to a GitHub secret (let's call it SSH_PRIVATE_KEY)

In your workflow.yml file:

#workflow.yaml
...
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Create SSH key
        run: |
          mkdir -p ~/.ssh/
          # write the key outside the workspace so the build can't pick it up
          echo "$SSH_PRIVATE_KEY" > ../private.key
          sudo chmod 600 ../private.key
          # pre-verified host keys, so ssh won't prompt on first connect
          echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
        shell: bash
        env:
          SSH_PRIVATE_KEY: ${{secrets.SSH_PRIVATE_KEY}}
          SSH_KNOWN_HOSTS: ${{secrets.SSH_KNOWN_HOSTS}}
          SSH_KEY_PATH: ${{ github.workspace }}/../private.key

Then you can use ssh with ssh -i $SSH_KEY_PATH user@host
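
For example, a later step could use it like this (a sketch; user and host are placeholders, and since step-level env doesn't carry over between steps, the path is declared again):

      - name: Deploy
        env:
          SSH_KEY_PATH: ${{ github.workspace }}/../private.key
        run: |
          ssh -i "$SSH_KEY_PATH" deploy@staging.example.com "echo connected"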

Hope this will save someone a few hours :]

Edit

Answer to the comments (how to update GitHub secrets)

To add GitHub secrets you have two options:

  1. Via the GitHub UI: https://github.com/{user}/{repo}/settings/secrets/
  2. Via the GitHub API: I'm using the github-secret-dotenv lib to sync my secrets with my local .env file (before the action triggers)
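
For illustration (not part of the original answer), the GitHub CLI can do the same thing: gh secret set reads the value from stdin, so a key file can be loaded directly (repo and paths are placeholders):

gh secret set SSH_PRIVATE_KEY --repo user/repo < ~/.ssh/id_rsa
gh secret set SSH_KNOWN_HOSTS --repo user/repo < known_hosts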

11 Comments

Though I like this approach, it seems to imply having the private key in the project. I'd suggest adding the gpg trick to avoid putting the plain key into the project, as described in help.github.com/en/actions/configuring-and-managing-workflows/…
It is not in the source code; it's saved inside GitHub secrets.
Oh I see. Can you give some details about how you write it there and read it back too? I ran into an "invalid format" error when trying to use it.
I've added this to my answer.
@lewislbr You might need to add uses: actions/checkout@v1 to the top of the list of steps.
23

The answers of felixmosh and Cas are good, but this feels like a better implementation. Rather than uploading your known_hosts file to the GitHub-hosted runners, just populate the ~/.ssh/known_hosts file at runtime. It's more flexible, as it can handle issues like the host's IP changing. I tested it and it worked for me.

- name: Write SSH keys
  run: |
    # create an empty file with 600 permissions (and any missing parent dirs)
    install -m 600 -D /dev/null ~/.ssh/id_rsa
    echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/id_rsa
    ssh-keyscan -H host.example.com > ~/.ssh/known_hosts

Or, even better,

- name: Write SSH keys
  run: |
    install -m 600 -D /dev/null ~/.ssh/id_rsa
    echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/id_rsa
    host='host.example.com'
    # resolve every IP of the host, join them comma-separated, and append the hostname
    hosts="$(dig +short "$host" | grep -v '\.$' | sed -z 's|\n|,|g')$host"
    ssh-keyscan -H "$hosts" > ~/.ssh/known_hosts

This ensures that all IPs of the host are recorded in known_hosts.

Just replace host.example.com and you are good to go.

5 Comments

I see a problem: generating known_hosts on the fly bypasses the security of having verified the host beforehand, so you would be unaware if the host was not the same box. Perhaps using HostKeyAlias in the SSH config would be a solution: serverfault.com/a/895661
@Cas, I read the serverfault.com thread. I'm not sure why the author used both Host and HostKeyAlias when his suggestion is to use HostKeyAlias. Will something like echo "HostKeyAlias host.example.com" > .ssh/config work, or do I need to use both Host and HostKeyAlias?
Both would be required, since the Host directive opens a new section of the ssh config, with HostKeyAlias as an option for that hostname. However, looking at this again, perhaps CheckHostIP no is more straightforward.
Then should I suggest Host myserver.example.com with HostKeyAlias myserver.example.com in an edit? (See the sketch after these comments.)
Generating known_hosts on the runner may introduce a security issue, since you are not verifying the host that the domain points to; a malicious GitHub Action could redirect the target to another server, by changing the hosts file for example...
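
Putting the discussion in these comments together, a minimal ~/.ssh/config sketch on the runner might look like this (hostname is a placeholder; CheckHostIP no skips the IP check so only the host key for the named host is verified):

Host myserver.example.com
    HostKeyAlias myserver.example.com
    CheckHostIP no
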
21

felixmosh's answer was useful, but I managed to simplify it further by using id_rsa, which ssh picks up automatically, and by substituting the secrets directly, without needing intermediary env vars:

  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v2
        with:
          name: build
          path: build
      - name: Create SSH key
        run: |
          install -m 600 -D /dev/null ~/.ssh/id_rsa
          echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/id_rsa
          echo "${{ secrets.SSH_KNOWN_HOSTS }}" > ~/.ssh/known_hosts
      - name: Deploy with rsync
        run: rsync -rav --delete build/ user@host:/

Comments

3

After several hours of troubleshooting, I finally resolved the issue of automating code deployment using GitHub Actions. Below is the script you can use; just replace the environment variables with your own.

  - run: |
      # apk implies an Alpine-based image; on stock ubuntu-latest use apt-get instead
      which ssh-agent || (sudo apk update && sudo apk add openssh-client)
      which rsync || (sudo apk update && sudo apk add rsync)
      mkdir -p ~/.ssh
      chmod 700 ~/.ssh
      touch ~/.ssh/private.key
      touch ~/.ssh/known_hosts
      chmod 600 ~/.ssh/private.key
      # strip any Windows line endings from the key before writing it
      echo -e "${{ vars.DEV_SSH_PRIVATE_KEY }}" | tr -d '\r' > ~/.ssh/private.key
      # append keyscan output into known hosts
      ssh-keyscan ${{ vars.DEV_PUBLIC_IP_ADDRESS }} >> ~/.ssh/known_hosts
      chmod 644 ~/.ssh/known_hosts
  - run: |
      eval $(ssh-agent -s)
      ssh-add ~/.ssh/private.key
      echo "Deploy to dev environment"
      rsync --rsync-path=/usr/bin/rsync --delete -avuz --exclude=".*" ./docker-compose/docker-compose-prod.yml ${{ vars.REMOTE_USER }}@${{ vars.DEV_PUBLIC_IP_ADDRESS }}:$BASE_DEV_SERVER_PATH/docker-compose-prod.yml
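
Note that eval $(ssh-agent -s) only affects the shell of that one run block. If you need the agent in later steps, one common pattern (an assumption, not part of the original answer) is to export its variables via $GITHUB_ENV:

  - run: |
      eval "$(ssh-agent -s)"
      ssh-add ~/.ssh/private.key
      # make the agent visible to subsequent steps in this job
      echo "SSH_AUTH_SOCK=$SSH_AUTH_SOCK" >> "$GITHUB_ENV"
      echo "SSH_AGENT_PID=$SSH_AGENT_PID" >> "$GITHUB_ENV"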

1 Comment

For me, it worked even without the known_hosts part. Thanks for your reply, it did help me.
3

This is the solution I arrived at:

name: SSH Deployment

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v4

      - name: Configure SSH
        # pin to the webfactory/ssh-agent release you use
        uses: webfactory/ssh-agent@v0.9.0
        with:
          ssh-private-key: ${{ secrets.DEPLOY_KEY }}

      - name: Git Pull
        run: |
          cd ${{ github.workspace }}
          ssh-keyscan -t rsa ${{ secrets.SERVER_IP }} >> ~/.ssh/known_hosts
          ssh ${{ secrets.SSH_USER }}@${{ secrets.SERVER_IP }} "cd ${{ secrets.REPO_PATH }} && git pull origin main"
          echo "Successfully pulled repo."

      - name: Check Nginx configuration
        run: |
          echo "Checking nginx config"
          ssh ${{ secrets.SSH_USER }}@${{ secrets.SERVER_IP }} "sudo /usr/sbin/nginx -t 2>&1"
          echo "Nginx configuration check completed."

      - name: Restart Nginx service
        if: success()
        run: |
          ssh ${{ secrets.SSH_USER }}@${{ secrets.SERVER_IP }} "sudo /bin/systemctl restart nginx"
          echo "Nginx service restarted successfully"

If anyone has any notes or comments about security and hardening, they would be greatly appreciated. I'm sharing my solution as I thought it might help someone; some of the answers here seem overly complicated.
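
On hardening, one low-effort improvement (a suggestion, not from the original answer): limit the remote user's passwordless sudo to exactly the two commands the workflow runs, via a sudoers drop-in (the user name "deploy" is a placeholder):

# /etc/sudoers.d/deploy -- edit with: visudo -f /etc/sudoers.d/deploy
deploy ALL=(root) NOPASSWD: /usr/sbin/nginx -t, /bin/systemctl restart nginx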

Comments
