Node.js Web App Deployed to AWS Fargate w/ Auto-Scaling

TL;DR: I present a detailed how-to for deploying a (hello world) Node.js web application (in container-image form) onto AWS Fargate with auto-scaling. It could serve as the starting point for your own project, onto which you add subsequent layers, or bits and pieces of this how-to could help solve a particular problem you’re facing.

Motivation and Background

It is not enough to be able to write software.  One must also be able to deploy.  I’m reminded of the Steve Jobs quote, “real artists ship.”  Even if you wrote the next killer social media website, it means nothing unless you can get it out the door, hosted, and in a stable (and scalable!) production environment.  This post is an extracted walk-through of how I used the new AWS service Fargate to host a side project.

What is Fargate?  It’s a generalized container orchestration service.  “Generalized” here means that AWS has taken care of the underlying infrastructure usually associated with the creation of a ‘cluster’ (in the Kubernetes sense) of computing resources.  Bring your own container (the portable form of your application) and, through configuration in the AWS console, the application can be deployed into an auto-scaling cluster, with integrations for Application Load Balancing, Certificate Management (ACM) for HTTPS, and DNS (Route 53).  And what’s really nice is that the container can be given an IAM role to call other authorized AWS services.

Here’s the user story for this article, to help bridge the developer and product owner / business gap:

As an application/DevOps engineer, I want to deploy my containerized application to an orchestration service (AWS Fargate), so that I can avoid the headaches and complexity of provisioning low-level services (networking, virtual machines, Kubernetes) and also gain auto-scalability for my production/other environments.

– an application/DevOps engineer

The Big Picture

From the Node.js source all the way to a live app, here’s how the pieces fit together in one picture. (The draw.io file is included in my GitHub repo.)

Fig. 1: Node.js app, Image Repository, Fargate, & ALB

Node.js Web App

A very basic ‘hello world’ app can be pulled from my GitHub repo:

git clone \
https://github.com/yamor/nodejs_hello_world_dockered.git && \
cd nodejs_hello_world_dockered && \
npm install

# Give it a go and run
npm start
# ... then access at localhost:3000
Fig. 2: Node.js application up and running

It’s a very basic application:

  • Built from npx express-generator
  • Changed the routes/index.js ‘title’ variable to ‘nodejs_hello_world_dockered’
  • Added a Dockerfile, which we’ll walk through now…

Dockerfile

$ cat Dockerfile 
FROM node:12.18.2-alpine3.9
WORKDIR /usr/app
COPY . .
RUN npm install --quiet
RUN npm install pm2 -g
EXPOSE 3000
CMD ["pm2-runtime", "start", "./bin/www", "--name", "nodejs_hello_world_dockered"]

Some explanation:

  • The COPY command copies all the Node.js source into the container
  • pm2 is installed for process management and reload capabilities; it adds a nice production layer on top of the core Node.js code, though it isn’t necessary for small development efforts.  Importantly, the container uses pm2-runtime, which runs in the foreground and is needed to keep the container alive.
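Since the COPY . . step pulls in everything from the build context, it’s worth adding a .dockerignore alongside the Dockerfile (a suggested addition, not in the repo) so a locally built node_modules or Git metadata doesn’t end up in the image:

```
node_modules
npm-debug.log
.git
```

With those excluded, the RUN npm install step rebuilds dependencies cleanly inside the Alpine-based image.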

Docker Commands

Assumption: docker is installed and running.

$ docker -v
Docker version 19.03.6-ce, build 369ce74

Build and run with Docker, then curl to test.

# this command builds the image that is ultimately 
# deployed to fargate
docker build -t nodejs_hello_world_dockered . 

docker run -d -p 3000:3000 nodejs_hello_world_dockered

$ curl localhost:3000
<!DOCTYPE html><html><head><title>nodejs_hello_world_dockered</title><link rel="stylesheet" href="/stylesheets/style.css"></head><body><h1>nodejs_hello_world_dockered</h1><p>Welcome to nodejs_hello_world_dockered</p></body></html>

When done, kill the running container but keep the image.

# kills all running containers
docker container kill $(docker ps -q)

# you should see our nodejs_hello_world_dockered
docker images

Push the Image to a Container Registry

Tip: use an EC2 instance or a DevOps pipeline within AWS (not your local machine) for image building and pushing; uploads from a slow or residential network can take a long time.  Take proximity into account when planning large data movements.  This tip arguably belongs before the Docker section above, but its rationale may not become apparent until you try to push an image to a registry and find that it’s way too slow.

Assumption: the AWS CLI is installed and has an account with appropriate authorizations.

$ aws --version
aws-cli/1.16.30 ...

Assumption: you have an ECR repository created.

Now to push: just two commands (preceded by an AWS ECR login), one to tag the image and one to upload it.  Notice the tag contains the repository’s address.

aws ecr get-login --no-include-email --region us-east-1 \
| /bin/bash

docker tag nodejs_hello_world_dockered:latest \
1234567890.dkr.ecr.us-east-1.amazonaws.com/fargate_demo:latest

docker push \
1234567890.dkr.ecr.us-east-1.amazonaws.com/fargate_demo:latest
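The account ID (1234567890) and region above are of course specific to my setup; a tiny shell sketch makes the URI’s structure explicit so you can swap in your own values:

```shell
# assemble the ECR image URI from its parts
# (values from this article; substitute your own)
ACCOUNT_ID=1234567890
REGION=us-east-1
REPO=fargate_demo
IMAGE_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:latest"

echo "${IMAGE_URI}"
```

The docker tag and docker push commands then take ${IMAGE_URI} directly.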

AWS & Fargate

Congratulations, at this point the application is in a nice and portable (container) format and residing in an AWS ECR repository.  The Fargate configuration will consist of the following:

  • Task: defines the container configuration
  • Cluster: regional grouping of computing resources
  • Service: a scheduler which maintains the running Task(s) within the Cluster…
    • Auto-scaling will be configured at this level of the stack and will scale up the number of Tasks as configured

The remaining AWS service is a Load Balancer which is separate from Fargate. It will be described later as it exposes the application to the greater web.

Task Definition

Access the AWS Console > (ECS) Elastic Container Service > (left side menu) Task Definitions > click ‘Create new Task Definition’. On the next screen click ‘Fargate’ and then ‘Next Step’.

Fig. 3: Fargate launch types

On the next screen, fill in the following:

  • Name: I have called it ‘fargate-demo-task-definition’
  • Task Role: this can be left as ‘none’, but I can’t stress enough how versatile this is.  If your Node.js app needs to make calls to DynamoDB, Simple Email Service, or any other AWS service, you can enable that here.  Using the node package aws-sdk will automagically query a resource URI at runtime to gain credentials, thus granting your app the authorizations of the role specified.  This is very cool.
  • Task Execution IAM Role: leave as the default ‘ecsTaskExecutionRole’, see the image below for the succinct AWS explanation
  • Task Size: this provides a lot of room for tuning, but for this simple Node.js app I’ve plugged in 0.5 GB of memory and 0.25 vCPU.
  • Add Container:
    • Container Name: I have called it ‘fargate-demo-container-image’
    • Image: use the image URI from the end of the ‘Push the Image to a Container Registry’ section, which was of the form ‘1234567890.dkr.ecr.us-east-1.amazonaws.com/fargate_demo:latest’
    • Memory Limits: AWS recommends 300MiB to start for web apps.
    • Port Mappings: 3000, for the container port exposing the Node.js application.
    • …then click ‘Add’.
  • Tags: always try to tag your AWS resources.
  • …then click ‘Create’.
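For reference, the console steps above can be approximated as a task-definition JSON and registered with aws ecs register-task-definition --cli-input-json file://taskdef.json.  This is a sketch: the account ID is the placeholder from earlier, and the role ARN assumes the default ecsTaskExecutionRole.  (Fargate requires the awsvpc network mode, and the cpu/memory strings “256”/“512” correspond to 0.25 vCPU / 0.5 GB.)

```json
{
  "family": "fargate-demo-task-definition",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::1234567890:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "fargate-demo-container-image",
      "image": "1234567890.dkr.ecr.us-east-1.amazonaws.com/fargate_demo:latest",
      "memoryReservation": 300,
      "portMappings": [{ "containerPort": 3000, "protocol": "tcp" }]
    }
  ]
}
```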

Cluster

Access AWS ECS and click ‘Create Cluster’.

Fig. 4: Cluster creation

There are a lot of different configurations for computing resources, networking, and scaling, but we’ll stick with the simple case and select ‘Networking Only’.

Fig. 5: Cluster templates

On the next screen, give it a name such as ‘fargate-demo-cluster’.  Leave ‘Create VPC’ unchecked as we can use the default one, though if you’re deploying an intensive app you may want a dedicated VPC.  Add any tags.  (I highly recommend adding tags so you can quickly search for and find the resources associated with your projects.)

ALB – Application Load Balancer

Access the ALB service and click ‘Create’: EC2 > (left side menu) Load Balancers > ‘Create’ > (Application Load Balancer / HTTP / HTTPS) ‘Create’.

On the next configuration screen, make the following changes:

  • Name: I have called it ‘fargate-demo-ALB’
  • Listeners: for now we’ll keep HTTP port 80, though this target group will be deleted eventually.  (The ALB creation wizard requires at least one target group.)
    • (Not included in this article, but once the entire system is up it’s easy to add a second listener for HTTPS port 443 while also including a certificate from ACM.)
  • Availability Zones: choose the intended VPC and select multiple subnets, which will eventually contain the targets for this ALB

Click ‘Next: Configure Security Groups’, though an intermediary page will warn about the absence of a ‘secure listener’.  We’ll click through this for now, but as mentioned above a 443 listener can be added in the future (but not part of this article).

On the next page, we’ll ‘Create New Security Group’ and call it ‘fargate-demo-security-group’.  Leave the default TCP port of 80, and notice that it’s open to any IP source (0.0.0.0/0, ::/0).  Then click ‘Next: Configure Routing’.

On this next page, give the target group a name (fargate-demo-target-group).  In the screengrab below, it’s important to understand that the ALB will regularly check for the application providing an HTTP status code 200 at the specified path.  The Node.js app was created to offer a basic response on the root path so the following configuration is fine.

Fig. 6: ALB health checks

Click ‘Next: Register Targets’, but we’ll skip that page and click ‘Next: Review’ then ‘Create’!

Service

The Fargate Service will provide an instance of the Task Definition to be run in the Cluster.  Navigate to AWS Console > ECS > (left side menu) Clusters > then click on the Cluster we created, “fargate-demo-cluster”.  At the bottom of the screen will be a tab for ‘Services’; click the button ‘Create’.

Fig. 7: Service creation

On the next page fill in the following info:

  • Launch type: Fargate
  • Task Definition: pull down the menu and you will see our previously configured ‘fargate-demo-task-definition’.  As you register more revisions of this task definition, the revision numbers will increase.
  • Cluster: pull down the menu and find the ‘fargate-demo-cluster’ created previously.
  • Service Name: I have entered “fargate-demo-service”
  • Number of Tasks: enter ‘1’ for this demo.  You may wish to increase this depending on your application.
  • Tags: always be tagging!
  • … click ‘Next Step’.
Fig. 8: Service configuration details

On the next page, edit the following:

  • Cluster VPC + Subnets: it’s important to select your target VPC here, which will probably be your default.  It needs to be the same VPC in which the ALB was created earlier in this article; also select the same subnets.
  • Security Groups: click ‘Edit’ and add a Custom TCP rule with port 3000, then delete the HTTP rule with port 80 (as this won’t be used).  Port 3000 corresponds to the port exposed by the container.
    • (See Figure 9 below.)
    • … click “Save”
  • Load Balancer Type: select the radio button for “Application Load Balancer”, which will then display a pulldown where we can select the “fargate-demo-ALB” we had created earlier.
  • Container to Load Balance: pull down this menu to select the “fargate-demo-container-image”, then click “Add to Load Balancer”; this will change the wizard’s form.
    • (See Figure 10 below.)
  • In the updated form, modify the following:
    • Production Listener Port: change to 80:HTTP, this is the listener originally created during ALB creation.
    • Path Pattern & Execution Order: set to ‘/’ and ‘1’ respectively, this will enable the ALB to forward base path requests to the application.
    • Health check path: also set to ‘/’, to ensure the Fargate Service doesn’t incorrectly infer that your app needs to be restarted.
  • … click “Next Step”
Fig. 9: Creating the Security Group
Fig. 10: Container for load balancing

Now the Set Auto Scaling screen is presented.  This can be bypassed by selecting “Do not adjust” in the first option, but I’ve described a minimal scaling configuration below:

  • Minimum, Desired & Maximum number of tasks: I have set these to ‘1’, ‘1’ and ‘3’ respectively.  Self-explanatory; configure as your app requires.
  • IAM Role: select ‘Create new Role’
  • Automatic Task Scaling Policy
    • Policy Name: I have named it ‘fargate-demo-auto-scaling-policy’
    • ECS Service Metric & Target Value: there are three options here; I’ve had the best experience sticking with ‘ECSServiceAverageCPUUtilization’ set to 75%
    • (See image below.)
  • … click “Next Step”
  • Review the final configuration and click “Create Service”
Fig. 11: Number of tasks for scaling
Fig. 12: Scaling policy
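The Service wizard above (minus the auto-scaling policy, which is configured separately via Application Auto Scaling) can likewise be approximated as JSON for aws ecs create-service --cli-input-json file://service.json.  The subnet, security-group, and target-group identifiers below are placeholders for your own:

```json
{
  "cluster": "fargate-demo-cluster",
  "serviceName": "fargate-demo-service",
  "taskDefinition": "fargate-demo-task-definition",
  "desiredCount": 1,
  "launchType": "FARGATE",
  "networkConfiguration": {
    "awsvpcConfiguration": {
      "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],
      "securityGroups": ["sg-cccc3333"],
      "assignPublicIp": "ENABLED"
    }
  },
  "loadBalancers": [
    {
      "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:1234567890:targetgroup/fargate-demo-target-group/0123456789abcdef",
      "containerName": "fargate-demo-container-image",
      "containerPort": 3000
    }
  ]
}
```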

On the page for viewing the Service, after a few minutes the Task will be listed as RUNNING!

Fig. 13: A task with status RUNNING

Go back to the AWS Console > EC2 > Load Balancers.  In the “fargate-demo-ALB”, grab the DNS Name.

Fig. 14: Grab the DNS name from the ALB

Plug it into your browser and voila, it’s the same hello world app from before we even containerized it.

Fig. 15: ALB through to Fargate and the application running

Final Thoughts and Next Steps

Note that this is only HTTP, so your browser will warn that it’s insecure.  It’s easy to add a second ALB listener on port 443 and at the same time bring in a certificate from ACM.  Then point your Route 53 record to the ALB (via alias) and you’ll have your app securely offered over HTTPS!

US Senate Commerce Hearing: Section 230

On October 28th, the US Senate Commerce Committee held a hearing to discuss Section 230. (Here’s a descriptive page on 230 by the Electronic Frontier Foundation.) In short, online entities that host content provided by other providers (e.g. users or publishers) are shielded from certain laws which might otherwise hold the online entity legally responsible. This is the very basic gist of Section 230 when it was included in a bill signed into law in 1996.

Unfortunately, the title of the hearing was “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?”, and it was held only six days before the US Presidential election. In the context of President Donald Trump’s numerous labeled tweets on Twitter, and Twitter also blocking the sharing of a New York Post article about Joe Biden’s son, the hearing had very political overtones.

I’m not writing this post to delve and squabble over the partisan aspects of the hearing. In fact I’m glad it brought ‘230’ to the public’s attention and made headlines. It’s very pertinent legislation signed almost a quarter century ago which continues to shape the behavior, products, and policy of the internet giants and the products to which we’re addicted. As expert witnesses (voluntarily, not by subpoena) the committee hearing included Mr. Jack Dorsey (Twitter), Mr Sundar Pichai (Alphabet / Google), and Mr. Mark Zuckerberg (Facebook).

The actual webcast is 4h12m long; below are some notables from the hearing’s website. Each committee member was allotted seven minutes for questions to the witnesses, so you can jump around as desired. But I found it really worthwhile to listen to the hearing itself, for the sake of removing the news/media filters before it reaches your ears:

  • (The webcast displays the title page through 28:35, skip it.)
  • Mr. Dorsey’s PDF testimony; specifically section III, titled “Empowering Algorithmic Choice.” Twitter has arduously honed its algorithms to float to your feed the tweets you would most like to read (i.e. maximizing eye-ball time). Mr. Dorsey’s remarks here acquiesce to industry experts’ recommendations that might help temper the echo chamber.
  • Political slant: conservatives tend to want these companies to be more hands-off on content, while liberals would like to see more moderation for specific causes:
    • 30:55 (Senator Roger Wicker – R) “This liability shield has been pivotal in protecting online platforms from endless and potentially ruinous lawsuits. But it has also given these internet platforms the ability to control, stifle, and even censor content in whatever manner meets their respective standards. The time has come for that free pass to end.”
    • 42:50 (Senator Maria Cantwell – D) “I hope today, we will get a report from the witnesses on exactly what they have been doing to clamp down on election interference.”
  • Mr. Pichai’s opening remarks at 52:40. Google is clearly the secondary invite to this hearing, and listening to Mr. Pichai’s sidestepping of the direct aim of the meeting by describing how “the internet has been a powerful force for good”, or how Google helps mothers answer “how can I get my baby to sleep through the night” is politically savvy.
  • Twitter’s terms of service insight at 1:11:00 through 1:14:00, Mr. Dorsey explains how radical (jihad, annihilation of Zion) tweets by foreign leaders are considered “sabre rattling” and thus not tagged.
  • Misinformation, at 1:24:05 Mr. Dorsey goes one level more in detail on what Twitter’s misinformation policy includes: “manipulated media, public health (specifically Covid), and civic integrity, election interference and voter suppression.” Senator Cory Gardner (R) notes that this misinformation policy would not tag Holocaust denial tweets.
  • Senator Amy Klobuchar (D) at 1:31:00 through 1:33:00, questioning Mr. Zuckerberg on the political ads on Facebook including aspects of volume, revenue, profits, and automatic placement versus (apparently) scant review (by algorithm or by human).
  • Senator Ted Cruz (R) starts at 1:54:20 with pointed remarks of “the three witnesses we have before this committee today collectively pose, I believe, the single greatest threat to free speech in America and the greatest threat we have to free and fair elections”, and continues with very sharp questioning of Mr. Dorsey. Lots of great sound-bites and headline worthy quotes from this segment. Battle of the beards!

Centrists for 2020

My home, fireplace, and American flag.

I just voted.

When I was hanging the American flag in our living room this summer, I had a hard time deciding how to center it as we have an irregular fireplace. Do I align it by the brick, mantle, sconces, ceiling beams?… There were many choices I could use to achieve the balance for my display, even though it was at maximum a matter of 6″ horizontal distance.

(At this point I’m sure you are anticipating the political metaphor.)

The US political scene is very polarized. Binary. Blue or red. Populism (on both ends of the spectrum) devolves into volume without enough substance. The level of partisan politics in Congress is very high, as evidenced by the diminishing amount of cross-aisle legislation being passed.

But there’s the center. A whole bunch of us moderates that just want bipartisan efforts in making the Executive + Legislative branches productive to run the country. Professionalism and cooperation among politicians! Maybe one day we’ll have three, four, or even five major presidential candidates that enter the race knowing they will never enjoy the effects of majority politics. But for this election there are two major candidates.

My vote was for the person who I believe has the best chance of bridging the aisle, and the morality + character to steward four inspiring years of Executive branch leadership.

Being a centrist is not a no-wo/man’s land. In fact it’s more important than ever. “The center must hold” (Tony Blair).

OpenSSL for CSRs

In the age of Cloud-anything, there’s a managed service for everything, and that includes all aspects of PKI (public key infrastructure). PKI ensures that people get the lock icon (indicating a secure connection) when visiting your site. If the hosted site lives completely within a single Cloud environment or some other PaaS, you can take advantage of such managed services. But in IT/software consulting there will probably be some divide in the governance of PKI material, and you may be required to submit a CSR (Certificate Signing Request). Here are my notes and OpenSSL commands from how I’ve managed this in a few recent projects.

Warning: I’ve tried to keep things generalized, but guaranteed there will be differences in the specifics of your situation.

But before I begin, let’s make sure this page is for you. Here’s our User Story:

As an extremely diligent and cybersecurity minded infrastructure engineer/ninja, I want to submit a Certificate Signing Request (CSR) to my client (who owns the domain big-client.com), so that the client can return a certificate to me with which I can configure my AWS ELB (or other TLS termination) to offer secure https on the software I have been contracted to build and host for said client.

– an Extremely Diligent and Cybersecurity Minded Infrastructure Engineer/Ninja

First, let’s start with all the files we’ll handle…

template.cnf     // template for the input into openSSL
big-client.cnf   // same file as above, but completed

# output files
big-client.key
big-client.csr   // <--provide this to domain owner / client

# Certificates returned from domain owner
# (roughly named here: root, intermediate, 'leaf')
trustedRoot.crt
intermediate.crt
big-client_ai.crt

First, we use template.cnf to create big-client.cnf. The info below will need to exactly match what your client requires for their PKI processes. The commonName is the most important part: it’s the domain you are going to protect. If your client supports Subject Alternative Names (SAN), you can add them as separate lines in the ‘alt_names’ section.

$ cat template.cnf

[ req ]
default_bits       = 4096
distinguished_name = req_distinguished_name
req_extensions     = req_ext
default_md         = sha256
prompt             = no
[ req_distinguished_name ]
countryName           = US
stateOrProvinceName   = 
localityName          = 
organizationName      = 
commonName            = 
[ req_ext ]
subjectAltName = @alt_names
[alt_names]
DNS.1   = 
DNS.2   =
DNS.3   =

$ cat big-client.cnf

[ req ]
default_bits       = 4096
distinguished_name = req_distinguished_name
req_extensions     = req_ext
default_md         = sha256
prompt             = no
[ req_distinguished_name ]
countryName           = US
stateOrProvinceName   = Massachusetts
localityName          = Amherst
organizationName      = Big Client Inc.
commonName            = big-client.ai
[ req_ext ]
subjectAltName = @alt_names
[alt_names]
DNS.1   = api.big-client.ai
DNS.2   =
DNS.3   =

Great, from this big-client.cnf let’s build the CSR!

$ openssl req -new -newkey rsa:4096 -config big-client.cnf -reqexts req_ext -keyout big-client.key -out big-client.csr

# (we're creating a 'newkey' here, but with openSSL you can specify an existing key)

# part-way through this invocation, you'll be asked for a passphrase, you could leave this blank.  If you enter a non-empty passphrase, you need to remember it!

# outputs...
big-client.key
big-client.csr

It’s a busy Monday, your fingers are flying on the CLI and you’ve lost track of all these CSRs for multiple clients. Here’s a command to inspect the content of a CSR:

$ openssl req -text -noout -verify -in big-client.csr

Provide the big-client.csr to the client. (You should not provide the big-client.key.) The cybersecurity ninjas at your Big Client should then perform some black-box awesomeness to turn this .csr into a certificate, and return to you a root, an intermediate, and a ‘leaf’ certificate (to form the chain of trust). Before you configure these certificates (and big-client.key) into your infrastructure, check that the ‘leaf’ certificate matches the commonName and alt_names:

$ openssl x509 -in big-client-leaf.crt -text -noout
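One extra sanity check I like: confirm that a certificate actually matches its private key by comparing moduli. Sketched here with a throwaway self-signed pair so it’s reproducible; with real material you’d run the last two commands against big-client.key and the returned leaf certificate.

```shell
# generate a throwaway key + self-signed cert (stand-ins for the real pair)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo.test" \
  -keyout demo.key -out demo.crt -days 1 2>/dev/null

# these two digests must be identical, else the cert was not issued for this key
openssl x509 -noout -modulus -in demo.crt | openssl md5
openssl rsa  -noout -modulus -in demo.key | openssl md5
```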

OpenSSL is a very, very deep space. Just a reminder: this is a very narrow example of commands. Be sure to dig deeper into your use-case and requirements, and then any other needed flags of the OpenSSL tool, to make sure you offer secure web services to your own and your clients’ customers.

Kodak(!?) Early Stock Movements

Firstly, yes this is a blast from the past for the “Eastman Kodak Company” NYSE:KODK (which filed for bankruptcy in 2012), probably most famous for its photographic film. (Kodak was an early innovator in the digital camera space, but it didn’t become the blockbuster/pivot they had hoped for…)

Kodak was awarded a $765 million loan from the US Government (for producing drug ingredients). That figure is nearly 8x KODK’s market capitalization on July 24, about $92 million. It’s a huge loan for KODK. To emphasize, it’s a very very material development for KODK. But the news and information was haphazardly released, and there was a lot of early trading which got ahead of a big upswing in the stock price.

This Fox Business article does a great job of the more detailed timeline, but the rough notes are:

  • Monday July 27: Tweets and local articles are alluding to an “initiative” between KODK and the US Government.
  • Tuesday July 28: the WSJ published an article, and KODK announced later in the afternoon

What follows is, in my opinion, an illustration of everything that is wrong with this situation…

KODK price and volume July 23 – 30, 2020, labels A,B,C

Label A: Monday July 27, the day before the formal announcement, with tweets & local articles alluding to the “initiative”: KODK gains 24% and the trading volume is 17x that of the trailing two days! That’s a lot of activity based on not much news. It’s a gray area (since the information was published online it might not be insider trading), but it just looks clumsy on the part of both KODK and the US Government. At some point, too much information was shared, and too early. For those who did buy on Monday…

B: The news of the loan is distributed during the course of Tuesday July 28. This is a loan. Not some new product, revenue, or M&A/transaction. For a company struggling to reinvent itself after a 2012 bankruptcy, the loan announcement attracts enough street money to triple the stock price. Zombie market much?

C: Wednesday July 29, and the news has been out for a full day. This is my speculation, but now the common or retail investor wants in on the action and is chasing the gains, such as from the convenience of a smart phone running Robinhood. Money pours in and KODK is up another 4x!

In the span of three business days, KODK was up 1580%! I understand it’s the nature of the game for people to chase the gains on Wednesday, but the early news bits on Monday just look so unprofessional on the part of anyone who was involved in formalizing the loan. Especially in these extraordinary times, the public markets deserve better!

Context and Perspective for Mindful Professionalism

Almost ten years in IT/software consulting have taught me how to stay mindful and enthusiastic, and how to maintain perspective. I was in my backyard and happened upon an analogy with some morning glories I’m trying to grow.

“How did I get here? Why am I here?”

Not: “What am I doing here? How did I get myself stuck here, all by myself…”
Rather: “Let me learn all I can about the space I’m in. I’m sure there’s more to learn and I’m sure someone else would find my insights & experience valuable.”

“We’ve come a long, long way together…”

Not: “Look at everything I have to keep track of, all the loose ends I left behind, and the crap I’ll probably have to deal with again at some point.”
Rather: “I made it to where I am not just because of my successes, but also from the failures and having learned from them. I can extract confidence from either type of outcome, and this is my inner portfolio.”

“Where I’m going, I might not need a road…”

Not: “It’s all just one soul-sucking climb that never ends.”
Rather: “The future me may have different goals, let me chew off what I think I can do today, this week, this month, and this quarter. I know people will recognize and respect that I’m trying to build towards a larger goal even if I’m not 100% sure where or what it is.”

(Organic insecticidal soap really helps keep the bugs off.)

Thank you for viewing my morning glories.

WGET

WGET is an indispensable tool for working with the web. Below are a few examples extracted from my CLI cheat-sheet, with explanations of the syntax.

WGET & CURL: equivalent examples

wget -O index.html www.exampledomain.com

# The -O (upper case) is optional; if omitted, wget usually saves the file as the site's index.html.
curl -L www.exampledomain.com > index.html

# The -L follows all redirects before returning data.
# Curl normally prints to the CLI, but '>' redirects output into the named file.  Beware: '>' overwrites; sometimes '>>' (append) is what you want.
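The overwrite-versus-append distinction is easy to demonstrate locally:

```shell
# '>' truncates the target file on every use; '>>' appends to it
echo "first"  >  demo.txt
echo "second" >  demo.txt    # overwrites: demo.txt now holds only "second"
echo "third"  >> demo.txt    # appends: demo.txt holds "second" then "third"
cat demo.txt
```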

Debugging

Sometimes it’s really useful to be able to grab a website in its entirety.

wget -pHk www.exampledomain.com

-p  fetches all accompanying assets (images, css, js) needed to view the site
-H  spans hosts, so assets served from other domains are also fetched
-k  after downloading, converts all asset links to local/relative ones

Extra

du -sh "$PWD"

After a wget -pHk (in an empty directory), use du -sh "$PWD" (note the capitalized shell variable) to see the size of your website. I’ve found this a good statistic to track for UX/mobile purposes, though there’s a lot to consider (whether it’s CSS, JS, or other), and a large website doesn’t necessarily mean a slow one.
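The idea can be sketched without touching the network; demo_site here stands in for a directory produced by wget -pHk:

```shell
# create a stand-in for a wget-fetched site and measure its on-disk footprint
mkdir -p demo_site
printf '<html><body>hello</body></html>\n' > demo_site/index.html

du -sh demo_site
```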

Hello World!

After reading and benefiting from so many people’s cool blogs, I decided I wanted to have fun too.

I’m a Software Consultant, love talking about technology, and have wide interests outside of work (because I’m human).

To start my blog, I first started building a tech stack to host it. I was armpit deep in standing up a Jekyll app, connected to a GitHub repo, using an EC2 instance to build, then deploying to S3, then CloudFront, then buying a cert in ACM to front it all with Route 53… blah.

I just want to blog, man. I know I have hated on WordPress before. Credit card swiped and I have a personal account for a year, and my own domain!

Hello World!

From London to Applebees

Photo by Dan Gold on Unsplash

This post requires a little bit of context. At the end of 2019 my wife and I were preparing to move back to the USA after more than two years of living and working in central London. It was a very fun and educational time with lots of travel, but there were cultural things I missed back in the USA. I wrote the following missive and shared it with a thousand-strong social media group of American ex-pats. If you have ever lived or worked in Europe, I hope you’ll recognize the humorous contrast, or otherwise enjoy how much fun a trip to Applebees actually is…

I’m an American who has been living abroad in London for the past 10 months.  There are some things I really miss.  But this is why I love to travel: because while you’re learning about your destination, you’re also learning about where you came from.

I’m looking forward to being home.  I still am enjoying my time abroad, but there are things I will certainly cherish when I get home.

I want to share a story of how I want my first Friday to go when I get back Stateside.  I want to do something so ordinary, and just enjoy it because the following simply isn’t possible in Europe.

I want to go to Applebees.

I’ll drive my car into the dedicated parking lot, and find a spot that is astronomically wide.

As I walk through the front door, I am enveloped by the aroma of hamburger.  I will be greeted by a high-school aged host/ess with a huge amount of cheerfulness and acne.  Directly behind said hostess will be an employee in training awkwardly trying not to be awkward.

My friends and I shall be seated in a booth with seat backs that are six feet tall, and I will sit down and sink a full five inches into the cushioning.

The menus will already be at the table, filed neatly behind the salt and pepper shakers.  But before I can reach for the menu, a bus-person brings a 40 oz plastic cup of iced water, and leaves me with a giant straw, and then drops four spare straws on the table just-in-case.

The menus are opened, and they contain more pictures than words.  I don’t need to try to visualize what is offered by the menu, the pictures do that for me.  Every word on the menu is in English.

The waiter/waitress arrives and as I order my burger, I am delightfully informed that I can substitute sweet potato fries, and I will accept the offer.  Guacamole will only be $1 extra, and I will take that as well.

The waiter rushes away with our order.  Suddenly five other wait staff members emerge from the kitchen, all clapping in unison.  Hooting and hollering they proceed to the table next to my booth, and surround one of the guests.  A baritone and multi-tune rendition of “The Happy Birthday Song” is sung.  Not “Happy Birthday” because that’s still under copyright and Applebees’ lawyers have wisely sidestepped that liability.  And as an American, I greatly appreciate that legal distinction and make comments about it to the rest of the people at my table.

Beer arrives within 67 seconds, served in a super-chilled mug that causes some of the water content to freeze on the surface.  Someone at the table will assuredly remark that the freezing temperature of alcohol is “actually wicked lower than when water freezes.”  (This Applebees is located in Massachusetts.)

My burger arrives, and I know it’s mine and with the correct temperature because stuck in the top of the bun is a color coded toothpick.  Only now is the full design of the booth appreciated.  After initially sliding and sinking into said booth, the level of the table is perfect so that one can place forearms against the edge of the table, hold a burger, gently lean in, and form a perfect triangle with torso, arms & table so that the burger hovers over the plate and catches all over-spilling condiments and toppings.

The waiter will perform two perfectly timed flybys of the table to ask how things are going.  (Both times, my mouth will be full and mid-chew, but a quick glance and a sigh will convey the message.)  The ketchup bottle is empty but a new one appears within 9.4 seconds.  Usain Bolt ran 100 meters in a lethargic 9.58 seconds.

Plates are cleared as soon as we finish, and my forearms can now stretch out across the table as I lean back.  I sink even further into the booth’s cushioning, achieving post-meal Stage 2 depth.

“Would you like some dessert?”  It’s the most delightful upsell attempt, but the answer is always ‘no’ and the next step in the protocol will be “I’ll get your check right away.”  Note, to get the check I did not have to do any of the following (as one might do in Europe): sit idly for 2 hours, chase down the waiter at the other end of the restaurant, or stay past the closing time of the restaurant.

Credit cards are swiped, and I’m back on my feet no more than 36 minutes after I first sank into the booth for the meal.  The question: where besides the USA can you do a sit-down meal in half an hour?  The correct answer is not France.

The food portions were just a little bit more than I needed, and as a result it’s a relaxed and careful walking pace as we make our way out the front door.  An utmost attempt is made to avoid eye-contact with the dozen people waiting for a table, who are murderously envious of our condition.

Outside, it’s sunny, and hot.  Probably really hot and humid.  The car is two first-downs away from the restaurant front door, while walking it’s just enough time to situate your sunglasses on the bridge of your nose and run your hand through your hair.  We slide into the car, which is an oven, but one minute later we’re bathed in Air Conditioning powered by a large V6 engine.

I pop the car into ‘D’ for Drive, glide out of the parking lot, and make a right onto Main Street and half a mile later merge onto the highway.  Then it’s 70 mph all the way home.

I love America.