Painting with Acrylics

TLDR: I’m not good at painting, but it’s a cheap hobby, and a relaxing, fun one. I would highly recommend it.

I’m in front of my laptop at least 50 hours a week. My partial escape is yard work, because it’s tangible and I think it helps my hands and mind reset. (And after a couple hours of being in the yard I’m usually itching and ready to get back to my digital explorations.) That’s the balance.

Painting has been a different ‘tangible’ extracurricular activity. I gave it a go for 2022.

As I said above, I’m definitely not good, but it’s fun starting a journey with the “where can this go” outlook and attitude. I am one of those kinds of people who can recognize creativity in a particular lawn mowing pattern. But painting just blows that creativity door wide open. As I approach 40 I think this is helping my mind stay pliable and young.

Below I’ll share a little of my experience, some insights I’ve accumulated, and some of what I produced over the last year.


Do it for you, and not someone else (like the Gram). Really, just let the act of painting be the ends, not the means to a social media post. Throw away your first ‘n’ paintings to relieve any pressure!

Don’t be afraid to copy techniques and processes (e.g. how to add elements and what the layering order should be); there are so many YouTube videos to follow. Defer the creativity aspect just a little. My goal is to eventually compose my own paintings, but I realized I (still) am missing a lot of skill+technique, and I decided not to sweat that deficit.

If you’re mildly proficient in Photoshop/GIMP, continue to think in layers.

And make the time for it! 25 minutes on a Friday night is all you need. Manage your expectations and keep it simple.


  • Buy the larger tubes of acrylic paint; they seem to be better quality. Not those ~30-packs of assorted colors, which feel watered down and cheap. Less is more.
  • I started with only 4 tubes of color, to get practice in mixing and finding in-between colors or shades of colors.
  • Don’t buy a crazy number of brushes, just a core 3.
  • Buy a bunch of canvases together, and commit to using them. A 10-pack for $12.99!
    • I’m using mostly 5″x7″ or when motivated 8″x10″.
  • Paper plates are easy for mixing and cleanup.
  • Always have a paper towel handy.


A quick sampling from the past year. I threw away about a dozen canvases. I wish I had taken pictures for the purposes of this blog post, but I did not. Anyways…

One of my earliest. White and black only!

  • Layering and order
    • Sky
    • Foreground plains
    • (allow to dry)
    • Horizon trees
    • Two big trees and branches

Experimenting with more elements:

  • Mountain reflection: adding just a little brown (roughly the inverted shape of the mountains) when blending the water.
  • Glaciers and water ripples: I used a scrap piece of 1″ cardboard to ‘pull’ the features onto the canvas.

I stumbled upon a photo I had taken 12+ years ago, and tried to capture it on canvas. I like how the sea is two different colors, and demarcated by the rock.

The Newport/Pell Bridge (Rhode Island), I found this angle and composition on a postcard. This was a smaller canvas (5″x7″). The canvas surface comes through a lot more, and the bridge cabling gained a nice rough texture.

The iconic angle of the Quechee Gorge. TBH I don’t like the quality or execution of any of the elements on their own. But the order of layers was a fun execution.

Only three colors: white, brown, black. And my ‘go to’ elements: sky, glaciated mountains, and tree. But with this one I let go just a little, and used longer strokes anytime I touched the canvas.

My ‘tree’ element readily converted into a boat. The layering order made the reflection possible: gradient water, rough inverted shape of the boat, some blurring of the inverted shape, then scrap piece of cardboard to left-right slide ripples.

I happened upon an abstract technique: loosely crunch a ball of aluminum foil, then directly drip the paint onto the canvas before ‘dabbing’ for sky and ground. Then my trusty scrap of 1″ cardboard to stamp the rain.

Side Project: Wordle Solver

TLDR: a side-project Wordle Solver, and the GitHub repository (with files/lines specifically linked throughout the rest of this post).

A New Side Project

“Side projects are good and fun.  So is Wordle.”

I always try to have a side-project in the mix.  In software development it’s quite important to stay pliable (a la Tom Brady) and adaptable, and to stay current with the latest software languages, frameworks, and hosting paradigms (not necessarily Cloud just for the sake of ‘Cloud’).  It’s also important from a Product aspect.  With a side project you (as an engineer/technologist) have total control over the direction of the implementation.  The act of organizing/prioritizing what you want to implement can vastly help in your professional life, where there is not as much control over the direction of Product (but on your own you will have recognized pitfalls, best practices, or tools).  And the value goes 4x when you collaborate on a side-project with 1+ other people.

Wordle Trie Search

“Using less electricity is good.”

One day my former roommate from college (Alex, a very bright computer scientist) sent me a text with a link to his Github repository.  He had a very advanced start on a Wordle guess validation algorithm, implemented using a recursively traversable trie structure containing ~all/most of the English words in the 5 character space.  (What’s nice about search trees is that search operations are much more efficient than a naive/linear approach.)
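As a rough illustration of the idea (this is my own sketch, not Alex’s actual code), a trie for five-letter words can be built from nodes that each map a letter to a child node:

```javascript
// Illustrative trie sketch (my own, not the repo's implementation):
// each node maps a letter to a child node; a flag marks word ends.
function makeNode() {
  return { children: {}, isWord: false };
}

function insert(root, word) {
  var node = root;
  for (var i = 0; i < word.length; i++) {
    var ch = word.charAt(i);
    if (!node.children[ch]) node.children[ch] = makeNode();
    node = node.children[ch];
  }
  node.isWord = true;
}

function contains(root, word) {
  var node = root;
  for (var i = 0; i < word.length; i++) {
    node = node.children[word.charAt(i)];
    if (!node) return false;
  }
  return node.isWord;
}

var root = makeNode();
["crane", "crate", "slate"].forEach(function (w) { insert(root, w); });
```

A lookup then costs at most five character hops, and an impossible prefix prunes an entire subtree at once instead of requiring a scan of the whole dictionary.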

From the Wikipedia article Trie
Credit: Booyabazooka (based on a PNG image by Deco), modifications by Superm401. Public domain, via Wikimedia Commons.

Immediately I’m mentally committed.  This Wordle thing had taken off, I had played it a couple times and I loved the idea of being able to work with Alex again and build something in the Wordle arena. 

Updating the Algorithm

“He will win who knows when to fight and when not to fight.”— Sun Tzu

Alex had the algorithm at an 80% complete state.  However, we recognized it was not using all the information from a guess which had a correct letter in the wrong location.  This code change/commit fixed the algorithm and precluded unnecessary traversals of the trie.
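Concretely, the rule being enforced is: a letter marked ‘correct but wrong location’ must appear somewhere in the candidate word, yet not at the guessed index. A minimal sketch of that predicate (my own illustration, not the actual commit):

```javascript
// Sketch of the 'correct letter, wrong location' rule (illustrative,
// not the repo's code). A candidate survives only if it contains the
// letter somewhere AND does not have it at the guessed position.
function satisfiesWrongLocation(candidate, letter, guessedIndex) {
  return candidate.indexOf(letter) !== -1 &&
         candidate.charAt(guessedIndex) !== letter;
}
```

In a trie traversal, the second half of the condition is the part that lets whole subtrees be skipped early.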

For the Internet

“Real artists ship.”— Steve Jobs

(No I’m not claiming to be an artist.  Just a technologist.)

I didn’t start this project, so I went looking for how I could bring extra value (i.e. enter a space for implementation that wasn’t being served yet).

Software is useless unless you have a channel to distribute it.  That’s why the Internet is so valuable.  Professionally I was already very familiar with the Java Spring framework, so I committed myself to creating a REST API to expose the underlying algorithm.

I created a Spring sub-project within the same repo, and referenced the algorithm and supporting files using symlinks which actually worked with the build!  I thought this was a neat way to include Alex’s code.  (I don’t know if I would recommend this approach professionally, it’s a little hacky.)

Automated Testing

They test it.  Exactly.

A nice addition by our third collaborator, Tyler: GitHub workflows driving some unit tests.  This helped identify if anything was broken by a feature/bugfix branch.  Bonus: the unit tests in the ‘sub-project’ Spring application could be run consecutively after the root-level tests.

For the Internet, take two

Wrapping the algorithm in Spring was not the correct idea.  I had not thought through how I wanted to host the application.  An executable jar could have been compiled, but it would have needed a virtual host or container to run on.  So instead I spent a weekend wrapping the algorithm a second time, using the AWS Lambda Handler so it could run serverless.  (This could cut low-traffic hosting from $20/month down to about $2.)  Some AWS CloudFormation automation (from an AWSDocs repo) also helped with the iteration and deployments, though I manually integrated an API Gateway with the Lambda.

Front End

“If there’s a ‘trick’ to it, the UI is broken.”— Douglas Anderson

A little Bootstrap v4 CSS can go a long way, visually.  I’m not a front-end developer, but being able to sling together some bare HTML, Bootstrap, forms, and jQuery ajax makes a lot of webapp creation possible.

I also included a fun animated background from a codepen project.

The HTML page is completely static; I uploaded it to AWS S3 and aliased a Route53 record for a domain I own.

Lastly, and it’s not quite a security thing (rather exclusivity): I modified the application to include an Access-Control-Allow-Origin header on every response from the API/algorithm.  This instructs browsers to stop other websites from using my API, though anyone could still curl against it if desired.  And since the repository is public, someone could deploy their own Lambda!
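For illustration, stamping that header onto every Lambda response can be as simple as the following (a sketch only; “” is a placeholder origin, not my actual domain):

```javascript
// CORS sketch: the header only tells *browsers* which origin may call
// the API; curl and other clients ignore it entirely. The origin below
// is a placeholder, not the real site.
function withCors(response) {
  var headers = Object.assign({}, response.headers);
  headers["Access-Control-Allow-Origin"] = "";
  return Object.assign({}, response, { headers: headers });
}
```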

Thanks, and Wordle on!

Beijing 2022: An Adequate Downhill Course for 90mph?

The Beijing Winter Olympics start on February 4, 2022.  Beijing will be the first city to have hosted a summer and then a winter Olympics.  As a skier this has me quite curious about how the alpine events will be accommodated, particularly the downhill discipline.  The Beijing (and surrounding area’s) climate and geography is in fact conducive to hosting a winter Olympics.  And the Chinese government is making sure that the infrastructure, logistics, support, and manufactured snow can be arranged, but I’m still left intrigued as to the design and suitability of the downhill race course.

Why the Downhill?…

In alpine ski racing there is a spectrum of disciplines (in ascending order of speed): slalom, giant slalom, super-G, and downhill. Slalom is the most technical/twisty; downhill sits at the other end with the fewest turns (meaning the highest speeds) and generally just follows the contours of the mountain.

Downhill is…

  • 90+ mph from gravity and waxed/slick skis pointed nearly straight downhill
  • Racers descend around 900 vertical meters (2900 feet) in about 2 minutes
  • Continuously oscillating squats from -1x body-weight (airtime) to 5x, alternating leg to leg in the turns
  • 60 meter jumps
  • Two two-meter long, double-sided, razor sharp blades (skis) are bolted to the athletes’ feet
  • Protection: skin tight suit for aerodynamics (i.e. not protection), a helmet, and lots of faith in the run-off areas and safety netting

I posit that there is no greater test of a human’s courage, planning, nerves, execution, stamina, and power.  And with life-threatening consequences for even small mistakes any given course had better be world-class, tested, and accommodating with the latest in safety considerations and features.

The Downhill Establishment

There are downhill venues which are both historic and regular stops on the World Cup calendar: they have an indefinite hold on their respective winter weekends when the world’s elite racers arrive and are delighted to race on courses they grew up dreaming about. Some examples:

  • Kitzbuhel, Austria’s famous Streif course (since 1937)
  • Wengen, Switzerland’s Lauberhorn downhill (since the 1930’s)
  • Beaver Creek’s (Colorado) Birds of Prey downhill (since 1997, relatively new but now regular)
  • (Here’s a beautiful histogram on Wikipedia of Men’s downhill races.  Sadly I could not find an analogous page for Women’s races.)

These courses are time tested and continually tweaked for safety and technology improvements.  The Olympics run counter to this regularity, most of the time necessitating a new venue purpose-built for that particular winter Olympics.  Question: can a world class downhill be built safely for a one-off use?

Design, Build, Test…

It’s the classic project delivery cycle.  And absolutely a necessity if the world’s best (i.e. fastest) ski racers will be flying 90 miles per hour down a brand new Olympic race course.  It had better be fully vetted and adequately tested…

The Salt Lake City (USA) 2002 Olympics presented similar circumstances to Beijing 2022.  Snowbasin (Utah) was identified for the downhill and super-G courses.  New trails were cut and a new ski lift was installed to bring personnel to the top of the course.  A national level (NorAm) competition was held, then a World Cup event was hosted, but only two days of training runs were completed before the rest of that World Cup race had to be cancelled.  Even with only training runs, world class skiers were still racing the course and looking for speed in any angle, approach, nook, and cranny.  (And that included the legendary Hermann Maier, the skier hurtling through the safety netting in a linked video earlier in this post!)  That’s the crux.  If a World Cup racer can find 0.4 more mph in a particular turn than national level competitors (e.g. 90.1 vs. 89.7 mph), how much more dangerous are the turns and jumps given that speed delta?  Has the course been engineered correctly for the elite level?

Are the run-off areas wide enough? Is the safety netting tall enough? Is the geometry of the landing areas at risk of not being long enough?

Is a downhill course fully vetted and adequately tested if it has only hosted national level competition?

Yanqing Alpine Ski Center

The Yanqing Alpine Ski Center is located approximately 50 miles north of the city of Beijing, and the Beijing 2022 organization has thoroughly showcased its focus and accomplishments on the supporting infrastructure (e.g. transportation) and logistics of this new world class venue.

The region and mountains selected for the downhill and super-G courses actually do not receive enough natural snowfall. While counterintuitively this can be good (ski racers and organizers appreciate icy/rock-hard courses which don’t change during the course of a race day), it can simply feel wrong and counter to the spirit of the winter Olympic games.  Ultimately it was a decision by the International Olympic Committee, accepting Beijing’s proposal, that a successful games could still be hosted given the lack of natural snow.  At elite levels of competition it’s common to rely on artificial snow: my home state of Vermont hosts early season women’s races at Killington, where organizers are always eager to announce when ‘positive snow control’ is achieved.  But Vermont is a place associated with large quantities of snow.  Yanqing doesn’t receive much, and officials needed to start making snow on November 15 ahead of the February competition.

Bernhard Russi (an Olympic downhill course designer) lauded the Yanqing mountain (and the possibility of the ultimate course) back in 2019. A member of the design team for Yanqing’s downhill is Tom Johnson (US Team alpine technical advisor), who acknowledged the limitations on access and testing.

Formal testing?  The Chinese Winter Games (national level) hosted a downhill on the course in January 2020 (similar to the pattern of testing for Salt Lake City 2002).  But COVID and the pandemic precluded the arrival and racing by world class athletes for subsequent events.

  • A Men’s downhill and super-G were cancelled, originally scheduled for February 15, 2020.
  • A Women’s downhill and super-G were also cancelled, originally scheduled for February 27, 2021.


Nestled somewhere in the summits and topography of the new Yanqing Alpine Ski Center is a high speed downhill course. The Men’s downhill is scheduled for February 6, 2022 and the Women’s for February 15, 2022. At the very least I hope to have imparted an appreciation for the risk, racing peril, and logistics involved in the downhill event.  If you find yourself cheering on your respective compatriots during the downhill or super-G, please pause to consider that these are amazing athletes, made up of the right stuff, risking life and pushing the limit in extraordinary ways.

10 Malicious Requests Against My Web Application

During a recent coding experiment/competition I had a (very rough) NodeJS app which I needed to deploy and host. Horror of horrors, I manually installed it onto a bare EC2 instance and pointed an Elastic IP at it. Using pm2 (process manager) I was up and running very quickly, and writing request logs locally.

PORT=8080 pm2 start bin/www --time --output ~/log.txt

What’s nice about running on IaaS (vs. PaaS) is there’s a lot more control and insights. Specifically the log.txt named above. I could see the legitimate requests and traffic hitting my app from my colleague coders, but there were a lot of other requests causing my application to return 404 Not Found. I was curious and started duckduckgo‘ing and discovered a lot of them were attempted web exploits hoping my server was vulnerable.

Below are ten malicious requests narrated with some of my cursory research.

I don’t claim deep expertise in any of these attacks or technologies. (Please note my non-authoritative tone where I’ve written “I believe”.) Cybersecurity is a very deep field, and if I was architecting a truly critical system there are many tools or appliances which can recognize and block such threats or malicious requests instead of my naively exposed EC2 instance. While it was entertaining to do the research below, I could have spent days looking deeper and learning about the history of each vulnerability or exploit.

Bonus: I have this list hosted in a public github repository, and I would welcome any pull requests to help correct, inform, or expand on anything below.

1) PHP and MySQL

2021-04-25T10:19:17: GET /mysql/index.php?lang=en 404 0.940 ms - 1103

PHP is a very common language in the web development community, and there are many sites describing how it can integrate with MySQL. ‘index’ here with the .php extension implies some code process, not simply fetching a static resource (such as an HTML file). Since this is under the mysql resource, it appears to be a broad sniff to see if a console to the MySQL db has been left open.

2) Mirai malware, bashdoor and arbitrary code execution

2021-04-25T10:21:27: GET /shell?cd+/tmp;rm+-rf+*;wget+;chmod+777+Mozi.a;/tmp/Mozi.a+jaws 404 0.964 ms - 1103

Immediately one can recognize the shell resource: this is a flavor of a bashdoor attack, attempting to insert and invoke arbitrary code at the command line level. It first tries to clear out everything in the ‘tmp’ directory (cd /tmp; rm -rf *) before fetching (wget) a remotely hosted file (‘Mozi.a’, part of the Mirai botnet) and then tries to invoke it.

3) AWS Metadata (not malicious)

2021-04-25T11:17:53: GET 404 1.033 ms - 1103

Not an attack, rather something particular to AWS EC2 instance metadata. I believe it’s the AWS SDK (within my NodeJS application) locally looking for the metadata containing the AWS credentials (since my web app was integrated with DynamoDB). Noteworthy: the IP is special to every EC2 instance.

4) “The Moon” against Linksys Devices

2021-04-25T11:49:04: POST /HNAP1/ 404 0.837 ms - 1103

Home Network Administration Protocol (HNAP) is a Cisco proprietary protocol for managing network devices, going back to 2007. There was a worm, “The Moon”, back in 2014, which used the HNAP1 protocol to identify specific Linksys routers (firmware etc.), and then sent a second request to invoke an exploit at the CGI/script level which downloads the worm’s script.

5) Sniffing for Environment Variables

2021-04-25T14:57:06: GET /.env 404 0.919 ms - 1103

The .env file is not specific to one framework or language, but actually closer to industry convention. I think this request is hoping that the server is simply hosting a directory and that an .env might be exposed possibly revealing things like API keys or credential keys/tokens.

6) “Hey, look at my ads!!!”

2021-04-25T17:00:00: POST

I tried the URL, and it was a ‘Not Found’, so maybe it was shut down or abandoned. Maybe someone is hoping to get more traffic to a site laden with ads. More nuisance than malice.

7) WiFi Cameras Leaking admin Passwords

2021-04-25T18:04:09: GET /config/getuser?index=0 404 0.940 ms - 1103

Specific D-Link Wi-Fi cameras had a vulnerability where the remote administrator password could be directly queried without authentication! Hoorah for the National Vulnerability Database (NIST); the page for this vulnerability in particular was fun to read through, clicking the links deeper into the vulnerability and who/how it was uncovered.

8) PHP Unit Test Framework Weakening Prod

2021-04-25T20:12:45: POST /vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php 404 1.083 ms - 1103

This is a vulnerability in specific versions of PHPUnit, where arbitrary PHP code could be executed! (A good example of why modules specific to testing should be disabled or omitted in production deployments.) Here’s a very detailed story (by a PHP expert) on how this impacted a retail website. The first link is to a vulnerability catalog sponsored by USA’s DHS and CISA; the actual site is maintained by the MITRE Corp.

9) JSON Deserialization Vulnerability

2021-04-25T20:12:45: POST /api/jsonws/invoke 404 0.656 ms - 1103

Liferay is a digital portal/platform product, which had a JSON deserialization and remote code execution vulnerability (CVE-2020-7961) in March of 2020, documented by Code White. Bonus: here’s a scanner (on GitHub) someone created for this vulnerability.

10) Apache Solr Exposing Files

2021-04-25T20:12:45: GET /solr/admin/info/system?wt=json 404 0.989 ms - 1103

Ranked as the #7 Web Service Exploit of 2020, even though Apache published an issue back in 2013! The above request is a scan looking for specific versions of Apache Solr (search platform), where a particular parameter is exposed and can lead to arbitrary file reading. Apparently this is combined with some other vulnerabilities to eventually get to remote code execution, detailed in CVE-2013-6397.

NodeJS, DynamoDB, and Promises

NodeJS, Express, and AWS DynamoDB: if you’re mixing these three in your stack then I’ve written this post with you in mind.  It’s also sprinkled with some Promises explanations, because there are too many posts that explain Promises without a tangible use case.  Caveat: I don’t write in ES6 form; that’s just my habit, but I think it’s readable anyway. The examples below are all ‘getting’ data (no PUT or POST), but I believe it’s still helpful for setting up a project. All code can be found in my github repo.

Level Set

Each example in this post will be a different resource in an express+EJS app, but they’ll all look similar to the following (route and EJS code, respectively):

//  routes/index.js

router.get('/', function(req, res, next) {
  res.render('index', { description: 'Index', jay_son: {x:3} } );
});

(Apologies for the non-styled code. You can view the routes/index.js file in my repo.)

<!-- views/index.ejs -->
   <h1><%= description %></h1>
   <code><%= JSON.stringify(jay_son,null,2) %></code>


{ "x":3 }

(If any of this looks foreign or confusing, you’ll probably need to backup and study NodeJS, Express and EJS.)

Basic Promise

First up, a very simple Promise that doesn’t really do anything.  But this is how it gets used:

async function simplePromise() {
  return Promise.resolve({resolved:"Promise"});
}

router.get('/simple_promise', function(req, res, next) {
  simplePromise().then( function(obj) {
    res.render('index',
      { description: 'Simple Promise', jay_son: obj });
  });
});


Simple Promise
{ "resolved":"Promise" }

The simplePromise function returns the Promise, and the value has already been computed (because it’s static data).  The route accesses the value under the then() function and we pass obj to the view for rendering.

Get Single DynamoDB Item

For the rest of this post I’m using two DynamoDB tables.  This is not a data model I would take to a client; it’s inefficient. Rather, this data is meant for illustrative purposes only, for when JSON storage is a suitable use of your tech stack:

d_bike_manufacturers //partition key 'name'
  { "name":"Fuji", "best_bike":"Roubaix3.0" }
  { "name":"Specialized", "best_bike":"StumpjumperM2" }
d_bike_models       //partition key 'model_name'
  { "model_name":"Roubaix3.0", "years":[ 2010, 2012 ] }

Back to the NodeJS code:

const AWS = require("aws-sdk")
AWS.config.update({ region: "us-east-1" })
const dynamoDB = new AWS.DynamoDB.DocumentClient();

// '/manufacturer'
router.get('/manufacturer', function(req, res, next) {
  var params = { Key: { "name": req.query.name_str }, 
                 TableName: "d_bike_manufacturers" };
  dynamoDB.get(params).promise().then( function(obj) {
    res.render('index',
      { description: 'Single DynamoDB Item', jay_son: obj });
  });
});


Single DynamoDB Item
{ "Item": { "name":"Fuji", "best_bike":"Roubaix3.0" } }

Very important to note here is how the return data is wrapped within a structure under the key ‘Item’.

Consecutive DynamoDB GETs

If the ultimate database item is not immediately query-able (for some reason), then consecutive DynamoDB calls can be made.  Notice how the Promises are nested.  From the perspective of performance, this is not desirable and can introduce a lot of latency if this pattern is further repeated.  (Some basic testing I did showed P95 times of around 100ms, server side.)  The point of this demonstration is that the first Promise needs to resolve before the second can be constructed. (Best to avoid such consecutive queries, but we’re illustrating here.)

// '/manufacturers_best_bike'
router.get('/manufacturers_best_bike', function(req, res, next) {
  var params = { Key: { "name": req.query.name_str }, 
                 TableName: "d_bike_manufacturers" };
  dynamoDB.get(params).promise().then( function(obj) {
    if (Object.keys(obj).length==0) {
      res.render('index',
        { description: 'Consecutive DynamoDB Items', 
          jay_son: {err: "not found"} });
      return;
    }
    var bikeName = obj.Item.best_bike;
    var params2 = { Key: { "model_name": bikeName }, 
                    TableName: "d_bike_models" };
    dynamoDB.get(params2).promise().then( function(obj2) {
      res.render('index',
        { description: 'Consecutive DynamoDB Items', 
          jay_son: obj2 });
    });
  });
});


Consecutive DynamoDB Items
{ "Item": { "model_name":"Roubaix3.0", "years":[ 2010, 2012 ] } }


Promise.all() for Concurrency

Promises aren’t strictly run in parallel; rather, creating a Promise starts the respective work.  Using the all() method simply waits for all of them to resolve and can make your code look a lot cleaner.  In the resource below, we’re querying DynamoDB twice using information from the supplied query parameters.

// '/manufacturer_and_bike'
router.get('/manufacturer_and_bike', function(req, res, next) {
  // Two query params
  var params = { Key: { "name": req.query.name_str }, 
                 TableName: "d_bike_manufacturers" };
  var params2 = { Key: { "model_name": req.query.bike_str }, 
                  TableName: "d_bike_models" };
  var promises_arr = [ dynamoDB.get(params).promise(),
                       dynamoDB.get(params2).promise() ];
  Promise.all(promises_arr).then( function(obj) {
    res.render('index',
      { description: 'Promise.all() for Concurrency', 
        jay_son: obj });
  });
});


Promise.all() for Concurrency
[ { "Item": { "name":"Fuji", "best_bike":"Roubaix3.0" } },
  { "Item": { "model_name":"Roubaix3.0", "years":[ 2010, 2012 ] } } ]

Note how the items are returned in an array, and in the order of the array of Promises.  One gotcha with Promise.all() is that it’s ‘all’ or nothing, and you’ll need to ‘catch’ if any one of the promises fail.
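A small sketch of that gotcha (illustrative, not code from the repo): wrap the combined promise with a .catch so one failed read doesn’t leave the request dying with an unhandled rejection.

```javascript
// Promise.all() is all-or-nothing: it rejects as soon as any member
// rejects, so without a .catch the route would fail with an unhandled
// rejection. (Kept in non-ES6 style to match the rest of the post.)
function getBoth(p1, p2) {
  return Promise.all([p1, p2])
    .then(function (results) { return { jay_son: results }; })
    .catch(function (err) { return { jay_son: { err: err.message } }; });
}
```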

DynamoDB BatchGet

Using the AWS SDK to perform more of the data operations on the Cloud side is always a good idea.  Instead of iterating over many keys, include all the keys in a single request.  Note the form of the params object allowing us to query multiple tables.

// '/manufacturers' (batch get)
router.get('/manufacturers', function(req, res, next) {
  var manufTableName = "d_bike_manufacturers";
  var manufacturers_array = req.query.names_str.split(",");
  var bikeTableName = "d_bike_models";
  var bikes_array = req.query.bikes_str.split(",");
  var params = { RequestItems: {} };
  params.RequestItems[manufTableName] = {Keys: []};
  for (manufacturer_name of manufacturers_array) {
    params.RequestItems[manufTableName].Keys.push(
      { name: manufacturer_name } );
  }
  params.RequestItems[bikeTableName] = {Keys: []};
  for (bike_name of bikes_array) {
    params.RequestItems[bikeTableName].Keys.push(
      { model_name: bike_name } );
  }
  dynamoDB.batchGet(params).promise().then( function(obj) {
    res.render('index',
      { description: 'DynamoDB BatchGet', jay_son: obj });
  });
});

Output from console.log(JSON.stringify(params,null,2));

{ "RequestItems": {
    "d_bike_manufacturers": {
      "Keys":[ { "name":"Fuji" }, { "name":"Specialized" } ] },
    "d_bike_models": {
      "Keys":[ { "model_name":"Roubaix3.0" }, 
               { "model_name":"StumpjumperM2" } ] } } }


DynamoDB BatchGet
{
  "Responses": {
    "d_bike_models": [
      { "model_name": "StumpjumperM2", "years": [ … ] },
      { "model_name": "Roubaix3.0", "years": [ 2010, 2012 ] }
    ],
    "d_bike_manufacturers": [
      { "name": "Specialized", "best_bike": "StumpjumperM2" },
      { "name": "Fuji", "best_bike": "Roubaix3.0" }
    ]
  },
  "UnprocessedKeys": {}
}


A little JSON can go a long way, especially when building an MVP or dealing with unstructured data. IMHO NodeJS + DynamoDB is a very powerful pairing to facilitate this data in your AWS environment.

Happy coding.

Diligent (not Dry) January

I really enjoy beer. I also enjoy this Ben Franklin quote (though apparently he never said it):

“God made beer because he loves us and wants us to be happy.”

– not Ben Franklin, but found on t-shirts on college campuses across the country

When the end of December is approaching, New Year’s resolutions become the default water cooler topic. (Sorry, it’s a pandemic. Rather, it’s the conversation piece which fills the first two minutes of Zoom calls.) And on the topic of alcohol consumption, people sometimes commit to a dry January. Truly, kudos to everyone who commits and completes. Even if it’s not to completion, partial participation is certainly beneficial. But to me it feels like a half baked idea. It’s not something that will have a lasting effect on my health or habits beyond the month of January. I’ll pass.

Instead, as 2020 was winding to a close I envisioned a ‘structured’ January for my alcohol consumption, my physical exercise, and myself. At a high level, I wanted to…

  • Enjoy an occasional beer through January
  • Have some form of accountability with respect to an exercise regimen
  • Include metrics and statistics, because everything is more fun when numbers are attached

So I sat down with a beer (paradoxically) in hand, and I started a Google Doc to define my program. The result:

  • At a minimum, require myself to run one mile per day, on average. (Or I could run 3 miles every third day, etc.)
  • One mile run would equate to one beer. (If I ran, on average, 2 miles a day, I could still enjoy a beer every day in January.)
  • 50 burpees would equate to one mile run. This to allow for foul weather as I could do burpees in my basement. This also gave me an upper body aspect to the regimen.
  • Balances would carry over to all subsequent days
  • Caveat: I’m not a fitness professional! Consult your own personal trainer!
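The bookkeeping above boils down to one daily formula (my own sketch of the spreadsheet logic):

```javascript
// Daily 'bank' update per the rules above: +1 per mile, +1 per 50
// burpees, -1 per beer, and the automatic -1.00 daily deduction.
function updateBank(bank, miles, burpees, beers) {
  return bank + miles + burpees / 50 - beers - 1.0;
}
```

For example, starting at zero, a 2.26-mile run on January 1 leaves a balance of 1.26.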

And this is how it looked after January was done:

| Date   | Miles | Burpees | Beers | Bank Account | Commentary |
|--------|-------|---------|-------|--------------|------------|
| 1-Jan  | 2.26  |         |       | 1.26         | 1.00 is subtracted every day |
| 3-Jan  | 2.00  | 30      | 1     | 0.86         | Miles, burpees, and a beer. A good day. |
| 7-Jan  |       |         | 1     | -0.24        | I didn’t specify that I couldn’t go negative… |
| 12-Jan |       | 70      |       | 1.02         | Hitting a groove with burpees, feeling great… |
| 13-Jan | 2.53  |         |       | 2.55         | … but I’m not CrossFit crazy. |
| 18-Jan | 3.00  |         |       | 2.19         | Wait, do I enjoy running? |
| 21-Jan | 2.00  | 30      | 1     | 2.29         | Sam Adams Winter Lager, great. |
| 25-Jan |       | 65      |       | 1.45         | This was a rainy week, hard to fit in runs. |
| 31-Jan | 2.00  |         | 1     | 0.31         | Finish strong. |
| Totals | 26.83 | 724     | 10    |              | Numbers are fun. Bring it, February. |

Takeaways? I think the biggest one is that a beer is so much more enjoyable when it’s earned, especially with sweat. I think it’s an all too easy, and bad, habit to keep a 6-pack chilled in the fridge for the occasional or random drink. The next takeaway was the convenience of converting miles and burpees. On bad weather days this allowed me to keep pace, and if I did this program again I might add a third or fourth conversion: 200 jump ropes, etc.

Here’s a link (view only) in case anyone wants to steal/copy my Google Doc and use it for their own ‘diligent’ month, and not just for January. Interested in any of the other eleven months of the year? DM me and we can set up a group program for March. Maybe it’s the start of a billion dollar app idea, which we can eventually SPAC.

Now please excuse me, I have a chilled Long Trail VT IPA in the fridge waiting for me. And, no, I did not do any running to securitize this beer.

Subnet CIDR Coverage Calculator

Oh no… not another CIDR calculator; there really are a bunch.

Allow me to advocate for my app CIDRizer, which focuses on CIDR space coverage. If you have a CIDR block (account, VPC, etc.) and your engineers take a little slice here and there (of varying mask sizes, and probably not consecutive), how can you actually see what’s left?

Example (CIDRizer input):

# Overall Account/VPC
# Jim's tests
# dev4a
# Ruth's team

… can be completely covered/represented as follows (CIDRizer output):



Simplest Coverage CIDR Blocking (SCCB)


Algorithm & Code

I have made the repository public, with an ISC license. This unit test file details what the core algorithm is doing (and can be run via `npm test`); the rest of the project’s code simply wraps it as a NodeJS web app.
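In spirit, the core computation goes something like this (a simplified, IPv4-only sketch with hypothetical function names, not the repository’s actual code): convert each CIDR to an integer range, walk the allocations in order, and greedily cover every gap with the largest aligned blocks.

```javascript
// Hypothetical sketch of CIDR coverage: given a parent block and its
// allocated sub-blocks, list the remaining free space as CIDRs.

const ipToInt = (ip) =>
  ip.split('.').reduce((n, octet) => n * 256 + Number(octet), 0);

const intToIp = (n) =>
  [24, 16, 8, 0].map((s) => (n >>> s) & 255).join('.');

// A CIDR "a.b.c.d/m" as a half-open integer range [start, end).
function toRange(cidr) {
  const [ip, mask] = cidr.split('/');
  const size = 2 ** (32 - Number(mask));
  const start = Math.floor(ipToInt(ip) / size) * size; // snap to alignment
  return [start, start + size];
}

// Greedily cover [start, end) with the largest aligned CIDR blocks.
function rangeToCidrs(start, end) {
  const cidrs = [];
  while (start < end) {
    let size = start === 0 ? 2 ** 32 : (start & -start) >>> 0; // alignment
    while (size > end - start) size /= 2;                      // fit the gap
    cidrs.push(`${intToIp(start)}/${32 - Math.log2(size)}`);
    start += size;
  }
  return cidrs;
}

// Free space = parent minus sorted, non-overlapping allocations.
function freeSpace(parent, allocated) {
  const [pStart, pEnd] = toRange(parent);
  const taken = allocated.map(toRange).sort((a, b) => a[0] - b[0]);
  const free = [];
  let cursor = pStart;
  for (const [s, e] of taken) {
    if (s > cursor) free.push(...rangeToCidrs(cursor, s));
    cursor = Math.max(cursor, e);
  }
  if (cursor < pEnd) free.push(...rangeToCidrs(cursor, pEnd));
  return free;
}

console.log(freeSpace('10.0.0.0/24', ['10.0.0.0/26', '10.0.0.128/26']));
// → ['10.0.0.64/26', '10.0.0.192/26']
```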

Run / Deploy

Run it locally with the usual `npm start`. There’s also a `serverless.yml`, which is what I used to deploy it to my domain.
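For reference, a Serverless Framework config for a Node web app generally looks like the following (a generic sketch; the service name, runtime, and handler are placeholders, not necessarily what the repo uses):

```yaml
service: cidrizer          # placeholder service name

provider:
  name: aws
  runtime: nodejs18.x      # any supported Node runtime

functions:
  app:
    handler: handler.app   # e.g. the Express app wrapped via serverless-http
    events:
      - httpApi: '*'       # route all HTTP traffic to the app
```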

That’s it. My contribution to the CIDR calculator space.

Britishized Slogans of American Companies

In a previous post, I wrote (humorously, hopefully) on the contrast between American and European restaurant culture. Recently I was digging through my Google Drive and found a list I had written of corporate slogans and their respective translations into British parlance. The humor is all in the contrast, IMHO, and in how the British are quite particular and politic with any public messaging. My favorite British word: ‘sorted’. I highly recommend giving it a try in your next professional conversation.


Nike

Original: Just do it

British: Say ‘sorted’ sooner


Apple

Original: Think different

British: Mind other potentially revealing perspectives


KFC

Original: It’s finger lickin’ good

British: So tasteful your fingers will be improperly dirty

Coca-Cola

Original: Open happiness

British: After opening, feel fresher


McDonald’s

Original: I’m loving it

British: Difficult to detest

Dunkin’ Donuts

Original: America runs on Dunkin

British: Enjoy an American size portion of caffeine, and with a donut


AutoZone

Original: Get in the zone

British: Be immersed and focused with your automobile

Aflac (supplemental health insurance)

Original: Ask about it at work

British: You already have NHS

Quest Diagnostics

Original: The patient comes first

British: First the Queen, then wherever the patient is in queue. Please mind the queue.

Home Depot

Original: More saving. More doing.

British: Be efficient with both your money and labor

Tractor Supply Company

Original: For life out here

British: For the Cotswolds or even further…

Wells Fargo

Original: Together we’ll go far

British: Banking so proper you won’t want to try anywhere else.

… how about the other way while we’re here …

Americanized slogans of British companies:

BT (British Telecom)

Original: It’s good to talk

American: Never miss an important post, DM, or stream. Ever.

Tesco (super and express markets)

Original: Every little helps

American: Everything you need, quickly

British Airways

Original: The world’s favourite airline

American: The best airline in the universe

Marks & Spencer (grocer and department store)

Original: The customer is always and completely right

American: We promise you won’t go wrong


Vodafone

Original: Make the most now

American: Grab life and do your thing


Jaguar

Original: Own a Jaguar at a price of a car

American: Get in a Jag now, we’ll figure out the financing (subject to terms and conditions, assuming an 84.7-month lease, a medium-good credit score or verbally stated income stream(s), and variable-interest financing adjusting every 39 days, not to exceed 200 basis points of movement nor to fall below the prevailing Greek 10-year bond rate, whichever is higher; subject to cancellation and not available in the states of Idaho, Florida, or Reno, Nevada due to ongoing litigation)

Advent of Code 2020: Day 18 Order of Operations (Arithmetic)

As I wrote in a previous blog post, I participated in the Advent of Code: 25 programming problems to help improve my skills. I maintained really good momentum through day 20 before holiday activities forced me to pause. (Bonus: I have 5 really solid problems to play with for the next month.)

One of the best aspects of such a side activity is discussing the problems with some of my colleagues and seeing the different approaches. It was important that we treated the discussion as an open and safe space. Of course there are wrong answers; the underlying problem needs to be solved. But there are many different ways to approach and implement the solution, including non-optimal ones. It was not meant to be a code golf challenge: rather, by experimenting with unfamiliar aspects of our programming languages and solving abstract problems, we deepened our expertise.

I particularly enjoyed the Day 18 problem. Perhaps you remember PEMDAS (the order of arithmetic operations) from junior high. Day 18 jumbled the notion of PEMDAS and asked us to evaluate expressions as if operations were applied “strictly left to right” or with “addition before multiplication.” (The horror…)

My solution involved mapping the ‘depth’ of an expression with respect to parentheses, then building a method to do the custom expression evaluation. Wrapping the two in iteration, I could then slice, divide, and conquer the expression to obtain the final result. Here’s an example of an expression, where the updated depth map is followed by the sliced expression selected for evaluation and finally replaced; this keeps drilling down until the entire expression has been evaluated:

### addition before multiplication ###

line_value: 17275
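For illustration, here’s a minimal sketch of the two jumbled precedence rules (not my actual depth-map code; this version reduces innermost parentheses first):

```javascript
// Flat expression (no parentheses), strictly left to right: "1 + 2 * 3" → 9.
function evalLeftToRight(expr) {
  const tokens = expr.trim().split(/\s+/);
  let value = Number(tokens[0]);
  for (let i = 1; i < tokens.length; i += 2) {
    const rhs = Number(tokens[i + 1]);
    value = tokens[i] === '+' ? value + rhs : value * rhs;
  }
  return value;
}

// Flat expression, addition before multiplication:
// collapse every '+' first, then multiply what's left.
function evalAdditionFirst(expr) {
  return expr
    .split('*')
    .map((part) => evalLeftToRight(part))
    .reduce((a, b) => a * b, 1);
}

// Reduce innermost parentheses until none remain, then evaluate flat.
function evaluate(expr, evalFlat) {
  const inner = /\(([^()]+)\)/;
  while (inner.test(expr)) {
    expr = expr.replace(inner, (_, flat) => evalFlat(flat));
  }
  return evalFlat(expr);
}

console.log(evaluate('1 + (2 * 3) + (4 * (5 + 6))', evalLeftToRight)); // → 51
console.log(evaluate('2 * 3 + (4 * 5)', evalAdditionFirst));           // → 46
```

Those two expressions (and expected values) are from the published Day 18 examples.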

How did my colleagues approach this?

  • David – used a clever regex to find the ‘deepest’ pair of parentheses, then evaluated the contained expression, repeating as long as a ‘(’ character remained in the expression.
  • Josh – employed a similar regex recognition, but wrapped his solution in a very tidy `map()` for an almost minified look. By using less memory, Josh is minimizing his electricity consumption and saving the environment!

Advent of Code 2020: A Challenge Ahead of the Holidays

Continually honing your skills is part of being a true professional. Credit to my colleague David Lozzi for bringing Advent of Code 2020 to my attention; I immediately knew I had to partake.

Daily coding challenges are posted while the site interweaves stats, socialization, and leaderboards. My personal goals:

  • Complete all 25 challenges
  • Try a new language for at least one challenge

For example, the Day 3 challenge: ‘ski’ down a grid thousands of lines long and count the number of trees encountered for a specified slope…
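In sketch form, the tree counting looks something like this (a minimal version run against the published Day 3 sample grid; not necessarily my repo’s exact code):

```javascript
// Day 3 sketch: 'ski' down a repeating grid, counting '#' trees.
// The grid tiles infinitely to the right, hence the modulo on the column.
function countTrees(grid, right, down) {
  let trees = 0;
  for (let row = 0, col = 0; row < grid.length; row += down, col += right) {
    if (grid[row][col % grid[row].length] === '#') trees++;
  }
  return trees;
}

// The official Advent of Code 2020 Day 3 sample grid:
const grid = [
  '..##.......',
  '#...#...#..',
  '.#....#..#.',
  '..#.#...#.#',
  '.#...##..#.',
  '..#.##.....',
  '.#.#.#....#',
  '.#........#',
  '#.##...#...',
  '#...##....#',
  '.#..#...#.#',
];
console.log(countTrees(grid, 3, 1)); // → 7, the sample answer
```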

Please check out my solutions on GitHub (all JS so far, including my solution to the Day 3 challenge) or follow along. After only three days I’ve already found it incredibly revealing to see how my fellow Slalom colleagues have solved the same problem with different languages, approaches, and personal coding styles.

Happy Coding!

Merry Christmas!

Happy New Year!