I made changes to my EC2 instance's sshd_config file and now I can't access my instance using SSH. How do I resolve this?

Last updated: 2020-11-18

I changed my Amazon Elastic Compute Cloud (Amazon EC2) instance's sshd_config file, and now I can't access my instance using SSH. How can I troubleshoot and resolve this?

Short description

Changing an instance's sshd_config file can prevent the SSH daemon (sshd) from starting correctly. If sshd isn't listening on port 22, SSH connections fail with a connection refused error.

To confirm that a connection refused error is preventing access, try connecting to the instance through SSH with verbose messaging turned on:

$ ssh -i "myawskey.pem" ec2-user@ec2-11-22-33-44.eu-west-1.compute.amazonaws.com -vvv

The preceding example uses myawskey.pem as the private key file, ec2-user as the user name, and ec2-11-22-33-44.eu-west-1.compute.amazonaws.com as the instance's public DNS name. Substitute your own key file, user name, and public DNS name, and make sure that you use the Region where your instance is located.

The following example output shows the connection refused error message:

OpenSSH_7.9p1, LibreSSL 2.7.3
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 48: Applying options for *
debug1: Connecting to ec2-11-22-33-44.eu-west-1.compute.amazonaws.com port 22.
ssh: connect to host ec2-11-22-33-44.eu-west-1.compute.amazonaws.com port 22: Connection refused

To resolve this issue:

1.    Create a recovery instance and mount the impaired instance's root volume to the recovery instance.

2.    Correct or copy the sshd_config file.

3.    Reattach the volume to the original instance and test the connection.

Resolution

Note: If you're using a Nitro-based instance, device names differ from the examples given in the following steps. For example, instead of /dev/xvda or /dev/sda1, device names on a Nitro-based instance are in the form /dev/nvme[0-26]n1. For more information, see Device naming on Linux instances.
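
For example, on a Nitro-based instance, the lsblk output lists NVMe device names. The following is sample output only; your device names, sizes, and partitions might differ:

$ lsblk
NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1       259:0    0   8G  0 disk
└─nvme0n1p1   259:1    0   8G  0 part /
nvme1n1       259:2    0   8G  0 disk
└─nvme1n1p1   259:3    0   8G  0 part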

Create a recovery instance and mount the impaired instance's root volume to the recovery instance

1.    Launch a new EC2 instance in your virtual private cloud (VPC). Use the same Amazon Machine Image (AMI) in the same Availability Zone as the impaired instance. The new instance becomes your recovery instance.

2.    Stop the impaired instance.

Note: If you use an instance store-backed instance or have instance store volumes containing data, the data is lost when you stop the instance. For more information, see Determining the root device type of your instance.
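
You can stop the instance from the Amazon EC2 console, or with an AWS CLI command similar to the following. The instance ID is a placeholder; substitute your impaired instance's ID:

$ aws ec2 stop-instances --instance-ids i-1234567890abcdef0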

3.    Detach the Amazon Elastic Block Store (Amazon EBS) root volume (/dev/xvda or /dev/sda1) from your impaired instance.
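
You can detach the volume from the console, or with the AWS CLI. The volume ID below is a placeholder; substitute the ID of the impaired instance's root volume:

$ aws ec2 detach-volume --volume-id vol-1234567890abcdef0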

4.    Attach the EBS volume as a secondary device (/dev/sdf) to the recovery instance.
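
Similarly, you can attach the volume with the AWS CLI. The volume ID and instance ID below are placeholders; substitute the detached volume's ID and your recovery instance's ID:

$ aws ec2 attach-volume --volume-id vol-1234567890abcdef0 --instance-id i-0abcdef1234567890 --device /dev/sdf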

5.    Connect to your recovery instance using SSH.

6.    Run the lsblk command to view devices:

$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
xvdf    202:80   0   8G  0 disk
└─xvdf1 202:81   0   8G  0 part

7.    Create a mount point directory (/mnt/recovery) for the new volume that you attached to the recovery instance in step 4:

$ sudo mkdir /mnt/recovery

8.    Mount the volume at the directory you created in step 7:

$ sudo mount -t xfs -o nouuid /dev/xvdf1 /mnt/recovery/
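
The preceding command assumes an XFS root file system, which is the default for Amazon Linux 2 AMIs. If the root volume uses ext4 instead, omit the file system type and the nouuid option:

$ sudo mount /dev/xvdf1 /mnt/recovery/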

9.    Run the lsblk command again to verify that the volume is mounted at the directory:

$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
xvdf    202:80   0   8G  0 disk
└─xvdf1 202:81   0   8G  0 part /mnt/recovery

Correct or copy the sshd_config file

You can investigate the sshd_config file on your impaired instance's volume and roll back your changes. Use the SSH verbose messaging output to guide you to the location of the error in the file.

$ sudo vi /mnt/recovery/etc/ssh/sshd_config
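
After you edit the file, you can optionally check it for syntax errors by running sshd in test mode on the recovery instance against the mounted copy (this assumes sshd is installed at /usr/sbin/sshd, its usual location). If the command prints nothing, no syntax errors were found:

$ sudo /usr/sbin/sshd -t -f /mnt/recovery/etc/ssh/sshd_config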

Alternatively, copy the sshd_config file from the recovery instance to the impaired instance's volume using the following command. This command replaces the contents of the sshd_config file on your original instance with the recovery instance's version.

$ sudo cp /etc/ssh/sshd_config /mnt/recovery/etc/ssh/sshd_config
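
To review the differences between the two files, either before you copy or to confirm your edits, you can compare them with diff:

$ sudo diff /etc/ssh/sshd_config /mnt/recovery/etc/ssh/sshd_config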

Reattach the volume to the original instance and test the connection

1.    Run the umount command to unmount the volume:

$ sudo umount /mnt/recovery/

2.    Detach the secondary volume from the recovery instance and then attach the volume to the original instance as /dev/xvda (root volume).
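
With the AWS CLI, the detach and attach steps look similar to the following. The IDs are placeholders; substitute your volume ID and the original instance's ID. If the volume was originally attached as /dev/sda1, use that device name instead of /dev/xvda:

$ aws ec2 detach-volume --volume-id vol-1234567890abcdef0
$ aws ec2 attach-volume --volume-id vol-1234567890abcdef0 --instance-id i-1234567890abcdef0 --device /dev/xvda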

3.    Start the instance.
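
You can start the instance from the console, or with the AWS CLI. Substitute your original instance's ID:

$ aws ec2 start-instances --instance-ids i-1234567890abcdef0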

4.    Connect to the instance using SSH to verify that you can reach the instance.
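
For example, reuse the connection command from the beginning of this article with your own key file, user name, and host name. Note that the instance's public DNS name and public IP address can change after a stop and start unless the instance uses an Elastic IP address:

$ ssh -i "myawskey.pem" ec2-user@ec2-11-22-33-44.eu-west-1.compute.amazonaws.com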

