Migrating Java CI to CD onto AWS: Part 1: The Plan

Phil Rogers
4 min read · Feb 1, 2021

This is Part 1 in a series. Further posts will follow soon and be linked here when available.

The website in question — old enough to drink in America now

For the last 21 years, I’ve run a small website for Hendon Football Club, which has been invaluable to my career as a Software Engineer. Any time I’ve wanted to learn something new, I’ve got a ready-made project that I can use to apply it in practical terms.

While doing my degree I took the time to migrate from a static website to a dynamic one powered by Java Servlets and a MySQL database. When the Spring Framework became important at work, I migrated it to Spring MVC. I’ve switched from Eclipse compilation through Maven to Gradle, and have applied all kinds of design and testing patterns over time to familiarise myself with them. I’ve moved from shared hosting through a dedicated physical server (kindly donated by a sponsor and dramatically over-specced for my needs!) into cloud hosting.

However, one thing I’ve never really focused on is the deployment process, and to be blunt, it hurts.

Code coverage is automated… but perhaps not where it needs to be yet!

I currently have to manually deploy to my server, restart services and apply any database changes. Moreover, although I have a basic CI pipeline set up with GitHub Actions, the only two things I learn from it are whether the build passes and what my code coverage looks like. Versioning is currently a distant dream and deployments are irregular, so if something breaks it can take quite a while to track down the issue (was it introduced today, last week or last month?), fix it, build and redeploy.
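
For anyone who hasn’t used GitHub Actions, the kind of workflow I mean looks roughly like the sketch below. It’s illustrative only: the file name, JDK version and exact Gradle tasks are assumptions rather than a copy of my real workflow, and it presumes the JaCoCo Gradle plugin is already applied in the build.

    # .github/workflows/build.yml -- illustrative sketch, not my exact workflow
    name: CI build

    on:
      push:
        branches: [ master ]
      pull_request:

    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          # Check out the source and set up a JDK for the Gradle build
          - uses: actions/checkout@v2
          - uses: actions/setup-java@v1
            with:
              java-version: 11
          # Compile, run the tests and produce a JaCoCo coverage report
          - run: ./gradlew build jacocoTestReport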

So here’s the aim.

I want to set things up so that when I merge my code and/or database changes, it all automatically gets versioned, built and deployed without my intervention. I want to know when it breaks, but not be bothered when it’s going well. I also want to achieve this as cheaply as realistically possible and keep maintenance to a minimum. Obviously I need to pay for my AWS services, but where I can I want to stay within the free tiers of any other tools.

Currently the website is packaged as a Spring Boot 2 runnable jar, deployed onto an AWS EC2 instance. I want to keep this the same, as I’ve got t3a instances paid up through to August 2023. Dockerising it and running it in containers would be nice, but I’m not ready to write off the money I’ve already spent on those instances. I’ve got it installed as a service, so I can start and stop it easily from the command line or from scripts.

I also have a MariaDB database. I considered using Amazon’s RDS service for this, but as I’m doing this on a budget and only have a single schema with very few database users, I didn’t fancy paying twice as much for my instances. It’s currently co-located with the website, though; that should change, with the data getting its own dedicated instance.

Uptime monitoring. Guess when my LetsEncrypt certificate expired. 🤦

My source code is stored in a private repo on GitHub. In the past I ran Jenkins and SonarQube in containers on my laptop to act as a CI platform, but this required spinning them up to get any benefit and I was too lazy to keep them upgraded, so I switched to trying out GitHub Actions last year, and although I have trouble understanding the setup, they’ve worked reliably.

For monitoring, at present I use Uptime Robot (a single monitor that emails me if the homepage is unavailable), and I’ll similarly receive an email if my GitHub Actions fail. I don’t feel the need to do much more than that; an SMS for my uptime monitor would be nice, but it’s not necessary.

So my aims:

  • Automatically deploy new code to my EC2 instances on merge to master (roughly sketched after this list).
  • Automatically run any necessary SQL scripts against the DB on merge to master.
  • Get automated versioning working on builds — this can just be a build number — and automatically tag the source.
  • Maintain code coverage metrics.
  • Minimise any downtime of services.
  • Have alerts set up that let me know when things go wrong.
  • Ideally build on what I already have in Github.
  • Sort out how I handle secrets if possible — there’s a bunch where they shouldn’t be!
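
To make the first few aims concrete, here’s a very rough sketch of the sort of deploy workflow I have in mind. Everything in it is a placeholder: the secret names (EC2_HOST, EC2_SSH_KEY), the remote user, paths and service name are invented for illustration, and “version is just the run number” is one option I’ll dig into properly in Part 2.

    # .github/workflows/deploy.yml -- rough sketch of the end goal, not a working pipeline
    name: Build and deploy

    on:
      push:
        branches: [ master ]

    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - uses: actions/setup-java@v1
            with:
              java-version: 11

          # Use the workflow run number as a simple build version and tag the source with it
          - run: ./gradlew build -Pversion=1.0.${{ github.run_number }}
          - run: |
              git tag "build-${{ github.run_number }}"
              git push origin "build-${{ github.run_number }}"

          # Copy the runnable jar to the EC2 instance and restart the service
          # (EC2_HOST and EC2_SSH_KEY are hypothetical repository secrets)
          - run: |
              echo "${{ secrets.EC2_SSH_KEY }}" > key.pem && chmod 600 key.pem
              scp -i key.pem -o StrictHostKeyChecking=no build/libs/*.jar ec2-user@${{ secrets.EC2_HOST }}:/opt/website/
              ssh -i key.pem -o StrictHostKeyChecking=no ec2-user@${{ secrets.EC2_HOST }} "sudo systemctl restart website"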

Things I’m not worried about now (things for the future!):

  • Running my services in containers
  • Improved monitoring
  • Blue/Green or Canary deployments

Step 1 is going to be to create a test repo I can mess around with that doesn’t affect my existing commit history; I’ll get that up and public soon along with Part 2: Automating Versioning.

