
Managing Access to Ephemeral Infrastructure At Scale

Managing a static fleet of servers in StrongDM is dead simple. You create the server in the StrongDM console, place the public key file on the box, and it’s done! This works well for small deployments, but as your fleet grows, the burden of manual tasks grows with it.

With automated scaling in our cloud environment (e.g., AWS Auto Scaling Groups), we need a way for our StrongDM inventory to change in real time along with the underlying servers. The solution: automation, automation, automation!

The DevOps mindset is key: we want to automate our cloud infrastructure so it operates without manual intervention. We can write scripts that hook into instance boot and shutdown events and automatically adjust our StrongDM inventory accordingly.

The examples in this post are written for AWS, but all major cloud providers offer similar APIs for instance information, lifecycle hooks, and metadata-like tags.

Automation: The Hooks

For server access, there are two lifecycle events that we care about: server boot and server shutdown. We are going to write scripts that hook into these events and execute StrongDM CLI commands to perform the necessary actions.

We’ll need API keys to talk to StrongDM and to our cloud provider, in this case AWS.

StrongDM and AWS API Authentication

StrongDM provides admin tokens that grant access to sdm admin CLI calls.

On the AWS side, each server is given a programmatic (non-console) IAM user with one policy attached. The policy could also be attached to an instance role instead of embedding credentials in the script!

The policy contains one statement: allow ec2:DescribeTags. This API call is required to build the server’s name in StrongDM.

"Version": "2012-10-17",
"Statement": [
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "ec2:DescribeTags",
"Resource": "*"

Naming Convention

Server names will be dynamically built using the server’s AWS EC2 tags, with the following schema:

{app}-{env}-{process}-{instance ID without its "i-" prefix}

A server with the following tags (and instance ID i-00deadbeef):

app: testapp
env: stage
process: wweb

would result in the following name in the StrongDM inventory:

$ sdm status
SERVER                             STATUS            PORT      TYPE
testapp-stage-wweb-00deadbeef      connected         60456     ssh
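The assembly can be sketched in a few lines of shell, using the tag values implied by the example above (a minimal sketch, not the production script):

```shell
#!/bin/bash
# Illustrative values: in production these come from EC2 tags and metadata.
APP="testapp"            # value of the "app" tag
ENV="stage"              # value of the "env" tag
PROCESS="-wweb-"         # value of the "process" tag, wrapped in dashes
INSTANCE_ID="i-00deadbeef"
# Trim the "i-" prefix from the instance ID
INSTANCE_ID_TRIMMED="$(echo $INSTANCE_ID | cut -d '-' -f 2)"
SDM_SERVER_NAME="${APP}-${ENV}${PROCESS}${INSTANCE_ID_TRIMMED}"
echo "$SDM_SERVER_NAME"   # prints testapp-stage-wweb-00deadbeef
```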



We've built a custom base AMI based on the Amazon Linux operating system. Our provisioning workflow includes a per-boot cloud-init script that executes whenever the server is powered on, including restarts. This script uses the StrongDM command line tool to automatically gather information about the server, register it with StrongDM, and install the on-demand SSH keys.

In /var/lib/cloud/scripts/per-boot/00_register_with_strongdm.sh

#!/bin/bash
set -e
# Gather instance metadata from the EC2 instance metadata service
INSTANCE_ID="$(curl --silent http://169.254.169.254/latest/meta-data/instance-id)"
INSTANCE_ID_TRIMMED="$(echo $INSTANCE_ID | cut -d '-' -f 2)"
LOCAL_IP="$(curl --silent http://169.254.169.254/latest/meta-data/local-ipv4)"
# Build the server name from EC2 tags: {app}-{env}-{process}-{trimmed instance ID}
APP="$(aws ec2 describe-tags --filters Name=resource-id,Values=$INSTANCE_ID Name=key,Values=app --query 'Tags[0].Value' --output text)"
ENV="$(aws ec2 describe-tags --filters Name=resource-id,Values=$INSTANCE_ID Name=key,Values=env --query 'Tags[0].Value' --output text)"
PROCESS="-$(aws ec2 describe-tags --filters Name=resource-id,Values=$INSTANCE_ID Name=key,Values=process --query 'Tags[0].Value' --output text)-"
SDM_SERVER_NAME="${APP}-${ENV}${PROCESS}${INSTANCE_ID_TRIMMED}"
# Install the StrongDM CLI
curl --silent -o sdm.zip -L https://app.strongdm.com/releases/cli/linux
unzip sdm.zip
mv sdm /usr/local/bin/sdm
rm sdm.zip
# Prepare the SSH user's authorized_keys file
mkdir -p "/home/sshuser/.ssh/"
touch "/home/sshuser/.ssh/authorized_keys"
chmod 0700 "/home/sshuser/.ssh/"
chmod 0600 "/home/sshuser/.ssh/authorized_keys"
chown -R "sshuser:sshuser" "/home/sshuser/"
# Register the server with StrongDM and install the returned public key
/usr/local/bin/sdm login
PUBLIC_KEY=$(/usr/local/bin/sdm admin servers add -p "$SDM_SERVER_NAME" "sshuser@$LOCAL_IP")
echo "$PUBLIC_KEY" >> /home/sshuser/.ssh/authorized_keys
# Touch a "lockfile" so the server can be deregistered when it's shut down
# (see shared/roles/strongdm_target/templates/remove_from_strongdm.sh.j2:10)
touch /var/lock/strongdm-registered


To remove servers from inventory, we’ve included another script in our base AMI. It is a runlevel 0 script that removes the server from the StrongDM inventory when the system is halted (i.e., AWS ASG instance termination, console termination, or restarts).

In /etc/init.d/remove_from_strongdm

#!/bin/bash
# chkconfig: 0123456 99 01
# description: Deregister server from StrongDM at shutdown

# LOCKFILE is created by the cloud-init registration script
LOCKFILE=/var/lock/strongdm-registered

# set up the logger
exec 1> >(logger -s -t $(basename $0)) 2>&1

start() {
    touch ${LOCKFILE}
}

stop() {
    # Only deregister servers that registered themselves at boot
    [ -f ${LOCKFILE} ] || exit 0
    # Rebuild the server name the same way the registration script did
    INSTANCE_ID="$(curl --silent http://169.254.169.254/latest/meta-data/instance-id)"
    INSTANCE_ID_TRIMMED="$(echo $INSTANCE_ID | cut -d '-' -f 2)"
    APP="$(aws ec2 describe-tags --filters Name=resource-id,Values=$INSTANCE_ID Name=key,Values=app --query 'Tags[0].Value' --output text)"
    ENV="$(aws ec2 describe-tags --filters Name=resource-id,Values=$INSTANCE_ID Name=key,Values=env --query 'Tags[0].Value' --output text)"
    PROCESS="-$(aws ec2 describe-tags --filters Name=resource-id,Values=$INSTANCE_ID Name=key,Values=process --query 'Tags[0].Value' --output text)-"
    SDM_SERVER_NAME="${APP}-${ENV}${PROCESS}${INSTANCE_ID_TRIMMED}"
    /usr/local/bin/sdm admin servers delete "$SDM_SERVER_NAME"
    # Remove our lock file
    rm -f ${LOCKFILE}
}

case "$1" in
    start) start;;
    stop) stop;;
    *)
        echo $"Usage: $0 {start|stop}"
        exit 1
        ;;
esac
exit 0

User Access Model: StrongDM Okta Sync

At Betterment, all of our access management lives in a single place: our identity provider. We wire all of our authentication and access control up to Okta. With this in mind, we wanted a way for Okta attributes (in our case, group memberships) to propagate to server access in StrongDM. When we reached out to the engineers assisting us with the implementation, they came up with an elegant solution.

They wrote a stopgap tool for us: a small Go program, run on a schedule, that looks up an engineer’s groups in Okta and grants access to a set of servers based on a regular expression. The matching is controlled by a YAML file that lets us easily map Okta groups to StrongDM servers.

For example, if an engineer is added to the strongdm/testapp Okta group, they gain access to all servers whose names start with testapp. Since our servers follow a distinct naming pattern, we were able to write well-defined regular expressions to match applications to their respective servers.
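The matching step itself is just a regular expression applied to the server inventory. Here is a rough sketch of the idea in shell; the server names and pattern are illustrative, not the tool's actual implementation:

```shell
#!/bin/bash
# Illustrative only: filter an inventory of server names by a group's pattern.
PATTERN='testapp.*'
SERVERS='testapp-stage-wweb-00deadbeef
testapp-prod-wweb-11cafef00d
otherapp-stage-wweb-22baadf00d'
# Servers matching the pattern would be granted to members of strongdm/testapp
MATCHED="$(echo "$SERVERS" | grep -E "^${PATTERN}$")"
echo "$MATCHED"
```

Only the two testapp servers survive the filter; otherapp is left untouched.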

Here’s a small excerpt from the matchers.yml configuration file:

- name: strongdm/testapp
  matchers:
    - testapp.*

In our case, the YAML file was stored in source control, and a Jenkins job ran the sync every five minutes.
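That schedule might look like a crontab entry along these lines (the binary name and config path here are hypothetical, standing in for whatever the Jenkins job actually invokes):

```shell
# Hypothetical cron entry: run the Okta-to-StrongDM sync every five minutes
*/5 * * * * /usr/local/bin/okta-sdm-sync --config /etc/okta-sdm/matchers.yml
```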

Simple Steps to Manage Access to Ephemeral Servers

Voilà! With a few little scripts shimmed into our instance boot and shutdown lifecycle hooks, we can rely on our dynamic infrastructure to successfully register and deregister itself from our StrongDM inventory. Small steps, like writing a few bash scripts and plugging them into an AMI, can save your operations team valuable time when working with a large-scale deployment and allow your application engineers to SSH into dynamic infrastructure with ease.

You can try StrongDM out for yourself with a free, 14-day trial.

To learn more about how StrongDM helps companies manage permissions, check out our Managing Permissions Use Case.

