I made changes to my EC2 instance's sshd_config file. Now I can't access my instance using SSH. How do I resolve this?
Last updated: 2021-08-16
I changed my Amazon Elastic Compute Cloud (Amazon EC2) instance's sshd_config file, and now I can't access my instance using SSH. How can I troubleshoot and resolve this?
Changing an instance's sshd_config file might cause a connection refused error when connecting through SSH.
To confirm that you can't access the instance due to a connection refused error, access the instance through SSH with verbose messaging on:
$ ssh -i "myawskey.pem" ec2-user@ec2-11-22-33-44.eu-west-1.compute.amazonaws.com -vvv
The preceding example uses myawskey.pem for the private key file, ec2-user for the user name, and ec2-11-22-33-44.eu-west-1.compute.amazonaws.com for the instance's public DNS name. Substitute your own key file, user name, and public DNS name. Make sure that the DNS name uses the Region where your instance is located.
The following example output shows the connection refused error message:
OpenSSH_7.9p1, LibreSSL 2.7.3
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 48: Applying options for *
debug1: Connecting to ec2-11-22-33-44.eu-west-1.compute.amazonaws.com port 22.
ssh: connect to host ec2-11-22-33-44.eu-west-1.compute.amazonaws.com port 22: Connection refused
Note: If you're using a Nitro-based instance, then device names differ from the examples given in the following steps. For example, instead of /dev/xvda or /dev/sda1, the device name on a Nitro-based instance takes the form /dev/nvme0n1. For more information, see Device names on Linux instances.
Method 1: Use the EC2 Serial Console
If you enabled EC2 Serial Console for Linux, then you can use it to troubleshoot supported Nitro-based instance types. The serial console helps you troubleshoot boot issues, network configuration, and SSH configuration issues. The serial console connects to your instance without the need for a working network connection. You can access the serial console using the Amazon EC2 console or the AWS Command Line Interface (AWS CLI).
Before using the serial console, grant access to the console at the account level. Then create AWS Identity and Access Management (IAM) policies granting access to your IAM users. Also, every instance using the serial console must include at least one password-based user. If your instance is unreachable and you haven’t configured access to the serial console, follow the instructions in the following section, Method 2: Use a rescue instance. For information on configuring the EC2 Serial Console for Linux, see Configure access to the EC2 Serial Console.
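The account-level steps above can be sketched with the AWS CLI. The Region and the user name below are placeholders; substitute your own values.

```shell
# Check whether EC2 Serial Console access is enabled at the account level:
aws ec2 get-serial-console-access-status --region eu-west-1

# Enable account-level access if it isn't already:
aws ec2 enable-serial-console-access --region eu-west-1

# On the instance itself (while it is still reachable), set a password for at
# least one OS user, because the serial console requires password-based login:
sudo passwd ec2-user
```

The first two calls require IAM permissions for the ec2:GetSerialConsoleAccessStatus and ec2:EnableSerialConsoleAccess actions.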
Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.
Method 2: Use a rescue instance
Warning: Before stopping and starting your instance, be sure you understand the following:
- If your instance is instance store-backed or has instance store volumes containing data, then the data is lost when you stop the instance. For more information, see Determine the root device type of your instance.
- If your instance is part of an Amazon EC2 Auto Scaling group, then stopping the instance might terminate the instance. Instances launched with Amazon EMR, AWS CloudFormation, or AWS Elastic Beanstalk might be part of an AWS Auto Scaling group. Instance termination in this scenario depends on the instance scale-in protection settings for your Auto Scaling group. If your instance is part of an Auto Scaling group, then temporarily remove the instance from the Auto Scaling group before starting the resolution steps.
- Stopping and starting the instance changes the public IP address of your instance. It's a best practice to use an Elastic IP address instead of a public IP address when routing external traffic to your instance. If you're using Route 53, then you might have to update the Route 53 DNS records when the public IP changes.
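If your instance is in an Auto Scaling group, one way to temporarily remove it is to detach it with the AWS CLI, as sketched below. The instance ID and group name are placeholders; substitute your own values.

```shell
# Detach the instance from its Auto Scaling group so that stopping it doesn't
# trigger replacement; decrement desired capacity so the group doesn't
# immediately launch a substitute instance:
aws autoscaling detach-instances \
    --instance-ids i-0123456789abcdef0 \
    --auto-scaling-group-name my-asg \
    --should-decrement-desired-capacity
```

After the repair, you can reattach the instance with the aws autoscaling attach-instances command.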
1. Launch a new EC2 instance in your virtual private cloud (VPC). Use the same Amazon Machine Image (AMI) in the same Availability Zone as the impaired instance. The new instance becomes your rescue instance.
Note: If you use an instance store-backed instance or have instance store volumes containing data, then the data is lost when you stop the instance. For more information, see Determine the root device type of your instance.
2. Stop the impaired instance. You can't detach the root EBS volume while the instance is running.
3. Detach the Amazon Elastic Block Store (Amazon EBS) root volume (/dev/xvda or /dev/sda1) from your impaired instance.
4. Attach the EBS volume as a secondary device (/dev/sdf) to the rescue instance.
5. Connect to the rescue instance using SSH.
6. Run the lsblk command to view devices:
$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
xvdf    202:80   0   8G  0 disk
└─xvdf1 202:81   0   8G  0 part
7. Create a mount point directory (/mnt/rescue) for the volume that you attached to the rescue instance in step 4:
$ sudo mkdir /mnt/rescue
8. Mount the volume at the directory you created in step 7:
$ sudo mount -t xfs -o nouuid /dev/xvdf1 /mnt/rescue/
To mount ext3 and ext4 file systems, run the following command:
$ sudo mount /dev/xvdf1 /mnt/rescue
Note: The syntax of the preceding mount command might vary. For more information, run the man mount command.
9. Run the lsblk command again to verify that the volume is mounted on the directory:
$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
xvdf    202:80   0   8G  0 disk
└─xvdf1 202:81   0   8G  0 part /mnt/rescue
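As an alternative to scanning lsblk output, findmnt can confirm a mount directly: it exits with status 0 only when the path is a mount point, and prints the source device and file system type. In the rescue procedure you would check /mnt/rescue; the sketch below checks / so that it runs on any Linux system.

```shell
# findmnt exits 0 and prints SOURCE, FSTYPE, and TARGET when the path is mounted;
# it exits nonzero when it is not. Replace / with /mnt/rescue on the rescue instance.
findmnt -n -o SOURCE,FSTYPE,TARGET /
```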
Correct or copy the sshd_config file
You can investigate the sshd_config file on your impaired instance and roll back your changes. Use the SSH verbose messaging output to locate the error in the file.
$ sudo vi /mnt/rescue/etc/ssh/sshd_config
Or, copy the sshd_config file from the rescue instance to your impaired instance using the following command. This command replaces the contents of the sshd_config file on your original instance.
$ sudo cp /etc/ssh/sshd_config /mnt/rescue/etc/ssh/sshd_config
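Before unmounting, it can help to grep the mounted sshd_config for the directives that most often break SSH access (a changed Port, a restrictive AllowUsers list, or disabled authentication methods). The sketch below stages a sample file under /tmp so the commands can run anywhere; on the rescue instance you would point CONFIG at /mnt/rescue/etc/ssh/sshd_config instead.

```shell
# Stage a sample sshd_config for demonstration (on the rescue instance, skip
# this block and set CONFIG=/mnt/rescue/etc/ssh/sshd_config):
mkdir -p /tmp/rescue-demo/etc/ssh
cat > /tmp/rescue-demo/etc/ssh/sshd_config <<'EOF'
Port 2222
PermitRootLogin no
PasswordAuthentication no
EOF
CONFIG=/tmp/rescue-demo/etc/ssh/sshd_config

# Show the directives that commonly cause lockouts, with line numbers:
grep -nE '^(Port|ListenAddress|PermitRootLogin|PasswordAuthentication|AllowUsers|DenyUsers)' "$CONFIG"
```

In this sample, the nondefault Port 2222 would explain a connection refused error on port 22.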
Reattach the volume to the original instance and test the connection
Note: Complete the following steps if you used Method 2: Use a rescue instance.
1. Run the umount command to unmount the volume:
$ sudo umount /mnt/rescue/
2. Detach the secondary volume from the rescue instance, and then attach the volume to the original instance as /dev/xvda (root volume).
3. Start the original instance.
4. Connect to the instance using SSH to verify that you can reach the instance.