Saturday, 20 May 2017

Email-to-SMS gateway with Amazon SES, SNS and Lambda (for notifications and alerts)

The purpose of this was to set up a service that sends an SMS containing only the subject of a notification email to a few specific, subscribed users - in this case those responsible for major incidents, receiving alerts from New Relic and SolarWinds.

I'm sure there are existing services that can do this, but we already used AWS, and it would keep the billing simple.  Also, we had limited control of the format of the email, so only needed the subject line to be sent by SMS.  It seemed clear that it wouldn't be hard to do in AWS - these notes show how to do it.

As this is not something I'm likely to do a lot, I'll use the AWS console - and as the console is far from static, the details here may change.

First - sort out your SMS limit

By default, AWS has a limit of $1 / month spend on SMS - this is very nice of them, as you don't want to accidentally spend a lot, but if you want to start using the service, contact support (top right in the console) and get that increased immediately.  You can still specify a "soft limit" after they've increased your limit.

Set up the SNS topic and subscribe your SMS 

  • Go to the Simple Notification Service, Topics, Create new topic.
    Enter a topic name and a display name (the display name will appear in SMS messages)
  • Make a note of the topic ARN. 
  • Create subscription, Protocol SMS, Endpoint is your phone number (with international prefix, e.g. +44 in the UK) 
  • You will probably now receive an SMS confirming that you're subscribed.
  • You can also add an email subscription at the same time, so that you can still check everything works if your SMS limit is reached (a scripted alternative to these steps is sketched below).
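If you prefer to script these steps, a minimal sketch along the following lines should work with the AWSPowerShell module (the topic name, display name, phone number and email address are placeholders, not values from this setup):

# Sketch: create the topic, set its display name and subscribe an SMS and an email endpoint.
# Assumes the AWSPowerShell module is loaded and credentials/region are already configured.
$topicArn = New-SNSTopic -Name "major-incident-alerts"

# The display name is what appears in the SMS messages
Set-SNSTopicAttribute -TopicArn $topicArn -AttributeName DisplayName -AttributeValue "ALERTS"

# Subscribe a phone number (international format) and, optionally, an email address
Connect-SNSNotification -TopicArn $topicArn -Protocol sms -Endpoint "+447700900000"
Connect-SNSNotification -TopicArn $topicArn -Protocol email -Endpoint "oncall@example.com"

$topicArn   # make a note of this - the Lambda function will need it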

Set up email reception

In my case, we already had a hosted zone set up in Route 53.  If you don't have one, it may well be a good idea to set one up, but I'm not covering that here.  If you have a hosted zone (e.g. a subdomain of your normal domain), proceed from here; otherwise you can set up the MX record for a subdomain yourself elsewhere.


  • Go to Route 53, click on hosted zones, and select the zone for which you want to receive the email.
  • Check if you already have an MX record set up (you can change the drop-down from Any type to MX to show it).
  • If not, Create Record Set.
  • Leave the name blank (unless you want to receive mail for a subdomain, e.g. if the main domain already has an MX record).
  • Change the Type to MX.
  • Under Value, enter something like the following (a scripted alternative is sketched after this list):
10 inbound-smtp.eu-west-1.amazonaws.com
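If you'd rather script the record, a rough sketch using the AWSPowerShell module follows - the hosted zone ID and domain are placeholders, and the inbound SMTP endpoint must match the region you receive mail in:

# Sketch: add (or update) an MX record pointing the zone at the SES inbound SMTP endpoint.
$rr = New-Object Amazon.Route53.Model.ResourceRecord
$rr.Value = "10 inbound-smtp.eu-west-1.amazonaws.com"

$rrs = New-Object Amazon.Route53.Model.ResourceRecordSet
$rrs.Name = "aws.example.com."          # the (sub)domain that will receive mail
$rrs.Type = "MX"
$rrs.TTL  = 300
$rrs.ResourceRecords.Add($rr)

$change = New-Object Amazon.Route53.Model.Change
$change.Action = "UPSERT"
$change.ResourceRecordSet = $rrs

Edit-R53ResourceRecordSet -HostedZoneId "Z1EXAMPLE" -ChangeBatch_Change $change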

Processing incoming email

Now you need to set up a lambda function to handle incoming email.  You can have incoming email published directly to the SNS topic (and so straight to SMS), but that will send everything including all the headers, which is almost certainly not what you want.

We will use a lambda function to process the email, sending notification of the subject to the SNS.

Before creating a Lambda function, first create an IAM role that you'll use.  This is because the Lambda function  needs permissions to use SNS.

Create an IAM role

  • Go to IAM, click on roles 
  • Create new role 
  • Select role type, AWS Service role, AWS Lambda 
  • Skip the attach policies (i.e. click next step) 
  • Give the role a name (e.g. ses_to_sns) 
  • Create role 
  • Then find the role and edit it.  Under Permissions, go to Inline Policies and create one. 
  • Custom Policy, give it a name, and add this in (this policy is much more open than you need, but it should work)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "lambda:InvokeFunction"
            ],
            "Resource": "arn:aws:lambda:*"
        },
        {
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*"
            ]
        },
        {
            "Action": [
                "sns:*"
            ],
            "Resource": "*",
            "Effect": "Allow"
        },
        {
            "Action": [
                "ses:*"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}

Create the lambda function 

  • Go to Lambda 
  • Create a Lambda function 
  • Blank function.  Skip configure triggers (we'll do this from SES), click Next 
  • Give it a name, and copy this code into the function code (note: you have to use the correct TopicArn from the topic you created earlier):
var AWS = require('aws-sdk');
AWS.config.region = 'eu-west-1';

exports.handler = function(event, context, callback) {
    console.log('forward subject matcher');

    // The SES receipt rule passes the parsed message details in the event
    var sesNotification = event.Records[0].ses;
    console.log("SES Notification:\n", JSON.stringify(sesNotification, null, 2));

    var sns = new AWS.SNS();

    // Publish only the subject line to the SNS topic (use your own topic ARN here)
    sns.publish({
        Message: sesNotification.mail.commonHeaders.subject,
        TopicArn: 'arn:aws:sns:xxxxxwhatever'
    }, function(err, data) {
        if (err) {
            console.log(err.stack);
            callback(err);
            return;
        }

        console.log('push sent');
        console.log(sesNotification.mail.commonHeaders.subject);
        callback(null, 'Function Finished!');
    });
};

Set up the email receiving rule set

  • Go to SES
  • Create rule
  • Under recipient, specify the recipient you want to use in the subdomain you set up the MX record for (e.g. notifications@aws.example.com), and add, next
  • Under "Add action", select Lambda and select the name of the lambda function you created above.
  • Do NOT select an SNS topic (that's already done in the Lambda function itself)
  • Give the rule a name and go through the rest of the process.

Test it all

That should be it - send an email to the address you configured - you should receive an SMS containing the display name of the SNS topic and the subject line of the email.
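If no SMS arrives, it helps to test the SNS-to-SMS leg on its own before suspecting the email side.  A quick sketch using the AWSPowerShell module (the topic ARN below is a placeholder for your own):

# Publish a test message straight to the topic - if this doesn't arrive as an SMS,
# the problem is the subscription or the SMS spend limit, not SES or the Lambda function.
Publish-SNSMessage -TopicArn "arn:aws:sns:eu-west-1:123456789012:major-incident-alerts" `
                   -Message "Test alert - please ignore"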

Using it for alerts and notification

You will want to change some settings for the SMS messages you are sending.  To do this, go to the Simple Notification Service in the AWS console (SNS), and select Text Messaging (SMS) and Manage text messaging preferences.

As this is for alert messages, you will want to change the message type to transactional (this is more expensive but higher priority), and the default sender ID (which shows up in the SMS).  You can also choose to have reports saved to S3.  Also set a reasonable account spend limit (not too low, you probably don't want it to silently fail, which is what will happen if you hit your limit).

Now all you need to do is set up your alerts to include this email, and subscribe anybody who needs to receive the alerts to the SNS topic.

And hope nothing ever goes wrong that you need to be alerted about.

Moving on - linux etc

It's now been more than 18 months since I last posted anything here.  The danger is, of course, that the amount I've done in the meantime means I can never catch up.  This is just an attempt to move past that, and hopefully allow me to get some more down that I want to record.

A very brief history since my last post

My client has a large number of development projects, but when I started it had only one SIT and one UAT environment. These were traditionally hosted, and very expensive. My initial task was to try and build a second SIT environment in AWS.

The core portal consisted of a number of Linux servers, with mostly Java applications deployed on JBoss. It took some time to appreciate the size and complexity of the system. In addition, there were a number of services running on Windows servers, but fortunately I did not, initially, have any involvement in these, and they were also not initially included in our environments. I also didn't have more than the minimum involvement in the Oracle servers behind all of this.

It was also immediately clear that the actual requirement would be for multiple dev and test environments. This means the build process needed to be automated.

The production environment would almost certainly not be migrated to public cloud in the medium term, at least. That meant that too much reliance on cloud-native services would prevent any automation from applying to production. Where there was no conflict, native AWS services could be used, and hopefully we could use more in future.

I evaluated a number of solutions before deciding on Ansible as the tool of choice. 

In time, our small team took over responsibility for live support of the production web portal, and later we got to rebuild and migrate the live system to a new data centre. By this time we could do the complete build, configuration and deployment of almost 80 servers in 40 distinct roles using Ansible.

The key technologies I've now been using over the past while therefore include:
  • Linux
  • AWS
  • Ansible
  • Java applications
  • Apache
  • VMware
  • Windows Server (also SQL and SSIS)

Hopefully more coming soon.

Thursday, 11 June 2015

Moving SQL server to AWS

Why?

Just a bit of background first - why move a SQL server to AWS, rather than using the Amazon RDS service?
If it is possible for you, I'd recommend using the Amazon RDS service, which comes in MS SQL flavour as well, rather than managing your own SQL server in AWS.  However, this will not always be possible.  My client made extensive use of SQL Server features, including replication to an Enterprise SQL server used for BI, with both reporting services (SSRS) and analysis services (SSAS).  As neither replication nor reporting and analysis services are currently supported in RDS, we had to host our own SQL server.

What was being moved

A number of applications were hosted on the same server as the SQL Server 2008 server.  These included web services providing the data connectivity for a number of web sites, as well as the back office ERP system, responsible for all processing of orders, stock management, sales, customer support, etc.
In addition, there was extensive integration with partner systems as well as with an internally hosted Sage instance, including automated supplier payments.  Most of this was done using SQL Server Integration Services (SSIS).
Most of the applications were .net web applications and were being moved to Elastic Beanstalk.  The SSIS services would be hosted on a separate SQL server in AWS - this would require some measure of manual configuration of several of the supporting services and extensions after the initial build and configuration.
Here I concentrate on building a SQL server.

Part 1: Building a SQL server in AWS

Amazon provides machine images with SQL Server Standard where licensing is included in the running costs.  You could alternatively bring your own licence (under certain conditions), but this did not provide the flexibility we were looking for.
I am not a specialist DBA, but I believe that the solution I put together provides a very reasonable SQL server installation, taking care of many of the standard configuration tasks that infrastructure engineers are probably not exposed to on a regular basis.  Parts of the solution may be unnecessary, and some are constrained by limitations of older versions of Windows Server.  An attempt was made to make it work with the Amazon AMI images of all versions of SQL Server from 2008 SP3 to 2014, but images change, and the template and scripts changed with time, so nothing is guaranteed (use at your own risk - i.e. test the build!).  You will find errors, and some things are unnecessary, but generally most of what's in there is, or was, required for an image somewhere.

The parts that are used

A cloudformation template is used to build the server.  While the template includes the majority of the configuration, there are a few additional scripts.  There is a PowerShell script, New-SQLServer.ps1 that is used to kick off the build, and five files are downloaded by the template and run - they could have been included, but downloading them from S3 seemed a better solution.

New-SQLServer.ps1

Please note, as I share this script, it includes a large number of default settings - you specify all parameters when calling the script, but many of them may as well have your defaults set (I had defaults for all except ComputerName).
Essentially, the script just does some checking on the parameters, finds the current AMI image for SQL server, and then calls New-CFNStack.
Just a few notes on security - the "DomainAdmin" should NOT be a domain admin, but a domain user with just sufficient rights to create computers in the required domain or even just the specific OU.
The user running the script needs to have AWS credentials loaded before running the script, with admin rights on AWS (you can lock this down to some extent, but it does require a lot), or assume a role that has the required rights.
Your Windows user has to have the rights to add the computer to the AD group.  This is not important unless you have Read Only Domain Controllers in AWS that have been set to cache credentials for only one group of computers, in which case you want the new server to be in that group.
Also note that the credentials specified (including domain user and password), will be visible to anybody with access to view CloudFormation stacks.
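To give a feel for the shape of the script, the core of it boils down to something like this sketch - the image name pattern, stack name, bucket and parameter names are illustrative only and won't match the real template as-is:

# Sketch: find the current Amazon-owned SQL Server AMI and launch the CloudFormation stack.
$ami = Get-EC2Image -Owner amazon -Filter @{ Name = "name"; Values = "Windows_Server-2012-R2*SQL*2014*Standard*" } |
       Sort-Object CreationDate -Descending | Select-Object -First 1

New-CFNStack -StackName "sql-myserver" `
             -TemplateURL "https://s3-eu-west-1.amazonaws.com/mybucket/SQLServer.template" `
             -Capability CAPABILITY_IAM `
             -Parameter @(
                 @{ ParameterKey = "AmiId";        ParameterValue = $ami.ImageId },
                 @{ ParameterKey = "ComputerName"; ParameterValue = "SQL01" }
             )
# -Capability CAPABILITY_IAM is needed because the template creates an IAM policy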

The template

The template is where most of the work happens.  Rather than having all of the scripts included in the template (which is how it started), or run by the New-SQLServer script, some are downloaded from S3 to the local file system and executed from there.  I'll get to those files, but it means you have to have an S3 bucket set up with the correct scripts in there.  In addition, backups are stored in a different S3 bucket.
In addition to setting up the server, the template creates a policy that allows access to the two S3 buckets required, creates D: (data) and E: (backup) drives, and assigns the specified security group.  It's possible to create a security group during the creation, but it made more sense for me to have a single security group assigned to all SQL servers.
Note, the template won't work as it is - it contains mappings to specific subnets (you'll presumably want backend subnets) that will have to exist and match your environment.  Also note that (through pure laziness), I have hard coded the locale and time settings.
Essentially, the steps the template goes through on the server instance are the following:
1) Fix some basic issues (e.g. with firewalls), download the files required later, set the system locale and time zone, then restart.
2) Rename the computer, then restart.
3) Join the domain, then restart.
4) Change the SQL collation, and add the Agent user to a few local groups to avoid issues later.
5) Do the SQL configuration.  This runs under SQLPS, as importing the SQLPS module isn't supported on older platforms.  Essentially, this script does the following:
- Set mixed mode authentication
- Calculate and set max server memory for SQL
- Turn on backup compression where supported
- Change the SQL Server name to match the computer name
- Set default file locations to the D: drive
- Change the service account for SQL server
- Enable TCP (not enabled by default on express)
- Change service account for SQL Server agent, enable and start agent
- Add tempdb files to any ephemeral drives you have
- Configure DB Mail and enable for SQL Server Agent
- Set up Ola Hallengren's fantastic maintenance solution - you'll have to have it downloaded, and set some basic parameters, like saving to E:\SQLBackups, how long to keep backups on the server, etc.
- Schedule all the maintenance tasks - the schedule I use is as recommended.
- Create a scheduled task to upload backups to S3 on a daily basis.
6) Update Windows (strange that you have to go through so many contortions to get PowerShell to do an update).  As images are regularly updated, this is normally a small update, but it is also one of the reasons you have to test!
7) Ensure remote management is running
8) Tell Amazon you're done.....

Backups

I mentioned the scheduled job above.  The script triggered by the schedule looks for backups in the locations where the maintenance solution puts them, then uploads them to the defined S3 bucket.  It has three targets: Daily, Weekly and Monthly.  Set up lifecycle configuration for these three targets.  I have mine set up to keep daily backups for 15 days, weekly backups for 32 days, and monthly backups for 32 days before moving them to Glacier.
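The upload script itself is essentially a loop over the backup folders and a call to Write-S3Object per file.  A minimal sketch, assuming a bucket name and folder layout that you would obviously change (and ignoring transaction log backups for brevity):

# Sketch: push local backup files into Daily/Weekly/Monthly prefixes in the backup bucket.
$bucket  = "my-sql-backups"                       # assumed bucket name
$targets = @{
    "E:\SQLBackups\Daily"   = "Daily"
    "E:\SQLBackups\Weekly"  = "Weekly"
    "E:\SQLBackups\Monthly" = "Monthly"
}

foreach ($folder in $targets.Keys) {
    Get-ChildItem -Path $folder -Filter *.bak -Recurse -ErrorAction SilentlyContinue |
        ForEach-Object {
            $key = "$($targets[$folder])/$($env:COMPUTERNAME)/$($_.Name)"
            Write-S3Object -BucketName $bucket -File $_.FullName -Key $key
        }
}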

Files needed

The template downloads five files from a specified S3 bucket.
- SQL-Configmail.ps1 includes 5 additional parameters, like the SMTP server and credentials.  These could have been added as parameters, but I decided it's as easy to customise the file and fetch it from S3 instead.
- MaintenanceSolution.sql - download the newest version of Ola Hallengren's solution and customise it if required.  Remember to change the backup location to E:\SQLBackups.
- SQL-ScheduleMaintenance.sql - a reasonable starting point for scheduling backup and maintenance jobs.
- Backup-Upload.ps1 - again, customise this file. 
- UploadBackups.xml - The template uses schtasks as a workaround for older versions of Windows Server, and this is the XML file that it wants as input.  Ideally create your own task manually (or with PowerShell), then run schtasks /Query /XML /TN taskname to export it.

What's missing?

If this was all there was to configuring SQL server, there'd be lots of DBAs out of work.  I hope this provides the basics - for me this provided the basic configuration that was then (with more scripts) used to mirror the live database to, before failing over to this server.  I built a second server to act as mirror and a third (Express) to act as witness.  A fourth server hosted all SSIS jobs and acted as distributor.
All credentials were copied from the correct servers and pretty much everything else was scripted as well.
As a bonus, it was possible to automate a new test server by building it, downloading the most recent backup and anonymising the database.  It's also possible to script a rolling server upgrade (using mirroring), although halfway through you need to pause and update connection strings in all the applications.

I would appreciate any feedback.

Thursday, 12 March 2015

PowerShell SMTP server for Elastic Beanstalk

Applications developed in .net commonly send email by creating mime files (.eml files) and dropping them into a specified folder.  The actual sending often depends on Microsoft's IIS 6 SMTP server.

SMTP server replacement in Elastic Beanstalk

When such applications get migrated to Elastic Beanstalk (EB), every instance of the application runs on its own instance (server).  Any functionality that writes locally writes to the local storage of that instance, which can be replaced at any time.  Each instance also has to have its own SMTP server installation, and IIS 6 can't be easily scripted.

An alternative method is therefore required to pick up the generated .eml files and email them.  Ideally, the applications should be changed, but as this is not always practical, I created a solution that schedules a task to regularly run a PowerShell script.  The script checks for files in a location (the dropmail folder), and processes them.

The simplest way to make changes to Elastic Beanstalk instances is through “.config” files in the .ebextensions folder.  A file “schedulemail.config” is created in the .ebextensions folder.

The solution is simple, but it illustrates a number of things, including the use of ebextensions with Windows Elastic Beanstalk applications, scheduling tasks with PowerShell, and sending .eml files with PowerShell (more later).

The .config files you create in .ebextensions are YAML files.  The key things to know about these files are:
  • indentation is critical (think Python)
  • you can't use tabs, only spaces
  • Elastic Beanstalk deployments simply fail if there's anything wrong.
I have found that the most common reason for a simple, silent, failure of an EB deployment is a tab in a .config file.

This config file creates three files:
c:/software/sendFile.ps1
c:/software/archiveEmail.ps1
c:/software/scheduleMail.ps1

It then runs the last of these as a command.  This creates a scheduled task that runs sendFile.ps1 every minute and archiveEmail.ps1 every 10 minutes.
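scheduleMail.ps1 itself only needs a few lines.  A sketch of the idea, using schtasks so that it also works on older Windows Server versions (the task names are my own choice here; the file paths are the ones above):

# Sketch: register the two scheduled tasks that drive the mail pickup.
# Runs as SYSTEM so it works on a freshly deployed Elastic Beanstalk instance.
schtasks /Create /F /TN "SendMail" `
         /TR "powershell.exe -ExecutionPolicy Bypass -File c:\software\sendFile.ps1" `
         /SC MINUTE /MO 1 /RU SYSTEM

schtasks /Create /F /TN "ArchiveMail" `
         /TR "powershell.exe -ExecutionPolicy Bypass -File c:\software\archiveEmail.ps1" `
         /SC MINUTE /MO 10 /RU SYSTEM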

Emails are archived to an S3 bucket, where a lifecycle rule can be set up to delete anything older than 2 weeks, but that can be modified as required.  Of course, whether or not it is necessary to archive the email files would depend entirely on your application.

The actual sendFile.ps1 in this file sends email using Mandrill, but on the way there I also used two other methods for sending .eml files.  Any of them can very easily be substituted.

The config file can be downloaded from my github repository and modified.  You will have to add your own key, and if you want to receive notification in the event of failure, add your own email.

Saturday, 7 March 2015

Sending .eml files using PowerShell

In this post, I provide three different scripts for sending a mime file (e.g. .eml file as is created by .net applications) via email.

I had to learn a few tricks along the way, and thought it worth documenting it here.

The scripts form the basis of a replacement for Microsoft's IIS 6 SMTP server.

Using Amazon SES

First ensure SES is properly configured for your domain. This involves the usual proving you own the domain etc., as well as requesting a removal of the limit that allows you initially to only test it.

Once SES is set up, you set up SNS to monitor the sending. The simplest way is to simply email SNS notifications to a dedicated mailbox where you can then search for specific email addresses on demand to see if emails were delivered, bounced or rejected.

Of course, you can set up something much more clever with SNS given time and inclination.

The work-horse of this method is Send-SESRawEmail. For me the problem was that Send-SESRawEmail takes a MemoryStream as input, and I struggled to find the documentation on how to do this. I'm not sure how efficient it is, but this script takes a file and converts it to a MemoryStream before passing it in.
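For reference, the conversion boils down to something like this sketch (I'm assuming the AWSPowerShell Send-SESRawEmail cmdlet with its RawMessage_Data parameter; the file path is a placeholder):

# Sketch: read an .eml file into a MemoryStream and hand it to SES.
$bytes  = [System.IO.File]::ReadAllBytes("C:\dropmail\message.eml")
$stream = New-Object System.IO.MemoryStream
$stream.Write($bytes, 0, $bytes.Length)
$stream.Position = 0                      # rewind before passing it in

Send-SESRawEmail -RawMessage_Data $stream

$stream.Dispose()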

Using SMTP

It is possible to send an email directly to an SMTP server.  I’ve not tried encrypted SMTP yet.  This obviously requires an SMTP server to be set up correctly, but you may well have this for other purposes.

Essentially, what you do is simply connect to the SMTP port and talk to it.  For this you create a TcpClient Object and a StreamWriter object and write the handshake, then the content to port 25.  In spite of both the sender and receiver being included in the .eml file, you still need to provide a sender and recipient in the initial handshake.

The script here will send an email file and takes 6 parameters (including the filename).  Download and use Get-Help to get a bit more information, or simply have a look at the script.  There's a lot more information on scripting network connections in PowerShell in this post by Lee Holmes.
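A stripped-down sketch of the approach - the server, sender and recipient are placeholders, and a real script would also check each response code and dot-stuff any message lines that start with a full stop:

# Sketch: speak just enough SMTP to hand over a pre-built .eml file on port 25.
$client = New-Object System.Net.Sockets.TcpClient("smtp.example.com", 25)
$stream = $client.GetStream()
$reader = New-Object System.IO.StreamReader($stream)
$writer = New-Object System.IO.StreamWriter($stream)
$writer.NewLine   = "`r`n"
$writer.AutoFlush = $true

$reader.ReadLine()                                                        # 220 greeting
$writer.WriteLine("HELO myhost.example.com");           $reader.ReadLine()
$writer.WriteLine("MAIL FROM:<sender@example.com>");    $reader.ReadLine()   # still required,
$writer.WriteLine("RCPT TO:<recipient@example.com>");   $reader.ReadLine()   # even though the .eml has both
$writer.WriteLine("DATA");                              $reader.ReadLine()   # 354 go ahead
Get-Content "C:\dropmail\message.eml" | ForEach-Object { $writer.WriteLine($_) }
$writer.WriteLine(".");                                 $reader.ReadLine()   # 250 accepted
$writer.WriteLine("QUIT")

$writer.Close(); $reader.Close(); $client.Close()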

Using Mandrill

We've been using Mandrill for some time to send emails.  You can send to Mandrill through SMTP, but in this solution I call the Mandrill RESTful API directly.  Once I eventually worked out how to do it, this was the simplest solution, and it keeps with what we already use. The SendFile-Mandrill.ps1 script requires just the filename and your Mandrill API key.
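A sketch of the API call - treat the endpoint URL and field names as assumptions to verify against Mandrill's documentation; the key and file path are placeholders:

# Sketch: post a raw MIME message to what I believe is Mandrill's send-raw endpoint.
$apiKey = "YOUR_MANDRILL_API_KEY"                 # placeholder
$raw    = Get-Content "C:\dropmail\message.eml" -Raw

$body = @{
    key         = $apiKey
    raw_message = $raw
} | ConvertTo-Json

Invoke-RestMethod -Uri "https://mandrillapp.com/api/1.0/messages/send-raw.json" `
                  -Method Post -Body $body -ContentType "application/json"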

Wednesday, 18 February 2015

Connecting your AWS infrastructure to your internal network

Introduction

In some cases you will want to keep your infrastructure in AWS completely separate from your internal network.  I needed to integrate it with the rest of our infrastructure, including connecting Windows servers to our domain.

There are three obvious ways to connect the networks:

If you have an MPLS WAN, you probably want to extend this to your AWS VPC (Virtual Private Cloud).  For this you would use DirectConnect.

Another option is to use the Amazon Virtual Private Gateway (VPG) to set up a VPN connection to your edge device.

I opted for the third option, which is to create a server to provide the VPN connections.  The main drivers behind this decision were that we are planning on changing our edge device soon, we already use OpenSWAN to connect some of our networks, and we need to connect three different networks to AWS. I'll share a template and a script that will connect to a suitably configured OpenSWAN installation on your network, or connect to another instance in a different VPC (created with the same template).

Once the network connection is in place, add DHCP options to your VPC and you're ready to add Windows servers to your domain.  I added a Read Only Domain Controller to cache DNS and authentication.

Creating the VPN

This article walks you through connecting two VPCs using OpenSWAN.  You can read that for a lot of the background.  I have created a template that you can get here that will set up an OpenSWAN instance for you, using parameters you provide.  You can use this to connect to a VPN endpoint in your internal network, or, of course, to another instance in another VPC.

In my previous post, I shared a template to create a NAT instance.  There I used "UserData" to simply script the whole installation.  In this template I follow a different approach (although it would have been just as easy to do it the same way).  Using cfn-init and "Metadata" allows a more involved configuration, e.g. where reboots are required, such as adding a Windows server to a domain, but here, on linux, it's just another way of doing the same thing.

The template shows how to install the openswan Linux package, create a number of configuration files (assembled from the parameters provided), set the services to run (to make them persistent in case of a reboot) and run all the commands required to start the services.

The only resource created other than the single ec2 instance is a security group that allows access from private IP addresses (both internal and in AWS) only.

An alternative is to use "sources" instead of files.  This allows you to have zipped (or tarred) files downloaded from an S3 bucket and extracted to a location you specify.  This is ideal if you want to set up multiple VPN connections.

This snippet, inserted before the "files" section, will retrieve and unzip files into the /etc/ipsec.d folder

"sources" : {
  "/etc/ipsec.d" : { "Fn::Join" : ["", ["https://s3-eu-west-1.amazonaws.com/",
    { "Ref" : "SourceBucket" },
    "/",
    { "Ref" : "SourceFilesKey" }
    ]]} 

},

In my github account I have a "work in progress" template that uses this.

Although I'm using this blog mostly to record, for my own purposes, things I pick up, I also find it a lot of work - none of the scripts or templates I use here end up having much in common with what I actually use in anger, all as part of trying to make them work with fewer things already in place.

Scripting the creation

As in my previous post, I use a PowerShell script to actually create the VPN using the CloudFormation template above.

This should be simple, shouldn't it?  Actually, if you have a look at the script, you'll notice that there are 60 lines to guess the parameters if you haven't provided them, 30 lines of "help", and really only a little bit at the end that actually creates the VPN, assigns the external IP and deletes the previous VPN instance if there is one.

The idea of the script is to replace an existing instance, either to update settings or because something is not working as expected, with minimal disruption of the tunnel.

Preparing for extending your AD

Once you have your VPN up and running and you have created routes on your internal network to get to your VPC, you can start putting private servers in your private subnets.  This includes extending your Active Directory into your Virtual Private Cloud, should you have a reason to do so.

Once you know that instances in your VPC can happily access your on site resources, you can potentially add Windows instances to your domain.

Before it's possible to add an instance to your domain, you need to make one more change.  In the AWS Console, go to the VPC Management Console and DHCP Option Sets.  Create a new option set with a domain name and domain name servers set to your internal DNS servers (Domain Controllers).

Once this is saved, go to "Your VPCs", select your VPC, under Actions, Edit DHCP Option set and select the newly created DHCP Option set.

You probably want to create one or two read only domain controllers (RODCs) in your VPC.  I've found that an RODC runs quite happily on a micro instance.  Once this is running, you need to create a new DHCP option set with your RODC(s) set as the first DNS server(s) (leave your internal ones further down the list).
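This can also be scripted.  A sketch with the AWSPowerShell module (the domain name, IP addresses and VPC ID are placeholders - the RODC comes first, the internal DCs after it):

# Sketch: create the DHCP option set and associate it with the VPC.
$options = New-EC2DhcpOption -DhcpConfiguration @(
    @{ Key = "domain-name";         Values = "corp.example.com" },
    @{ Key = "domain-name-servers"; Values = @("10.0.1.10", "192.168.1.10", "192.168.1.11") }
)

Register-EC2DhcpOption -DhcpOptionsId $options.DhcpOptionsId -VpcId "vpc-12345678"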

You're ready to start building domain joined Windows servers in AWS now.

Thursday, 25 December 2014

AWS Cloudformation - creating a NAT instance

Introduction

Having got the basic networking in place with public and private subnets, there is one more networking ingredient that is required.  If you want to be able to use many of the AWS services in your private subnets, instances in these subnets need internet access.

In this post, I'll create a NAT instance to give internet access to private subnets.

To build the NAT instance I'll use Cloudformation, and then I'll create a Powershell script to create or replace the NAT instance.

NAT instance

You can create an EC2 NAT instance by using one of the community NAT AMIs - I'm sure that's OK, but the one I happened to pick didn't work, and using this approach is a lot more flexible, and just as easy.

Amazon provide a very good article on how to set up high availability NAT.  What I'm doing here is quite a bit simpler - it will not give the high availability of the Amazon solution, but will allow a fairly rapid replacement of a malfunctioning NAT instance (probably 10 minutes rather than 10 seconds). I also needed to NAT an incoming port, forwarding it to another instance, meaning I could only have one working instance.

Cloudformation

Cloudformation is a fantastic service.  There is no cost associated with Cloudformation - you pay for what you create.  In this case I'm creating an EC2 instance and will pay the hourly rate as soon as it starts.  With Cloudformation you use a template to create a Cloudformation "stack".

The stack I'll create here contains an EC2 instance doing NAT and a security group.

The template I use is quite simple, and is easy to deconstruct and see how templates work.  If you want to, feel free to download the template from here, modify it, and use Cloudformation to build it.  If you are familiar with linux and bash, you will see how "UserData" in the instance properties can be used to do virtually any configuration of a linux instance.  A note here - the template is in JSON format, which can be very easy to get wrong.  I use Notepad++ in a Windows environment, and add the JSON Viewer plug-in.  This allows you to select your whole template and quickly verify that the JSON is OK.

The instance you create with this template may be included in the free tier, costing nothing.  However, it's a pretty useless instance on its own - without modification you won't be able to access it except from other instances in your VPC.  To allow external access, make a second copy of the line:

{ "IpProtocol" : "tcp", "FromPort" : {"Ref" : "ForwardPort" }, "ToPort" : {"Ref" : "ForwardPort" }, "CidrIp" : "0.0.0.0/0" } ,

replacing  "ForwardPort" with "SSHPort", and preferably locking down the CidrIp to your own IP address.

Once the stack has been created, I need to make it work.  In my previous post I mentioned the two routing tables.  Once I have the NAT instance in place, I create a routing table with the NAT instance as default route, and I associate this routing table with the Private subnets.
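For reference, the routing table step can be scripted as well.  A sketch with the AWSPowerShell module (the VPC, instance and subnet IDs are placeholders):

# Sketch: create a private route table, point its default route at the NAT instance,
# and associate it with a private subnet.
$rt = New-EC2RouteTable -VpcId "vpc-12345678"

New-EC2Route -RouteTableId $rt.RouteTableId `
             -DestinationCidrBlock "0.0.0.0/0" `
             -InstanceId "i-0abc123def456"

Register-EC2RouteTable -RouteTableId $rt.RouteTableId -SubnetId "subnet-aaaa1111"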

PowerShell

I use a Powershell script to create or replace the NAT instance.  This script can be called by a monitoring server, or manually.

PowerShell may seem an odd choice - while the PowerShell module for AWS is very good, I'm pretty sure its development is behind the normal AWS CLI (Command Line Interface), which can be used from any platform.  The script would have been very easy to create in bash, python or whatever.  There are two reasons I'm using PowerShell - the first is that I quite like PowerShell, but the real reason is that I'm currently building a Windows environment.  In a future post I hope to share my script for creating a full mirrored SQL server environment.  For this you really do need PowerShell, so I'd rather give my clients a consistent tool set.

The script can be used to create an initial NAT instance, as well as to replace it, either after updating the template, to change the parameters, or if the instance stops working as expected.

To use the script, ensure you have initialised your AWS settings in Powershell as explained in a previous post.  Change to the directory where the script is saved. Upload the template to an S3 bucket (if you have a local copy, you can just use the following PowerShell commands):

New-S3Bucket <uniquebucketname>
Write-S3Object -File NATInstance.template -BucketName <uniquebucketname> -Key NatInstance.template

After this, you can use the template you have uploaded with the script.  You can get the url of the file from the AWS console, or use https://s3-eu-west-1.amazonaws.com/uniquebucketname/NATInstance.template (or similar, depending on your region).

To create an initial NAT instance that serves purely to allow outgoing internet access, make sure at least one subnet has "auto-assign Public IP" enabled, and you have created a second routing table (you don't need to have anything in the routing table). Then simply run:
./Replace-NAT.ps1 -NatTemplateURL https://.....
This should create a NAT instance.  For private subnets to start using it, associate your private subnets with this routing table.

If you need to use incoming NAT on a specific port, you can create and assign an EIP to this instance (that allows you to retain the same IP address for future use).

You can replace the instance at any time, whether because of issues, or because you want to make changes (e.g. to implement incoming NAT you can simply add the -ForwardHost and -ForwardPort parameters).  The script will, by default, replace the route and delete the previous stack (after confirming with you, or without confirmation if you specify -Force).

For more information on the script (help is very limited, but at least it will list available parameters if you don't feel like editing the script), run
Get-Help ./Replace-Nat.ps1

That's all for this post - next post I'm planning on showing an alternative way to customise a linux instance.