It's cool to see how big organizations run their deployments, but it feels like there aren't enough resources on how to set up a deployment system for a startup that's just getting started.
The setup I currently use is custom bash scripts that set up EC2 instances. Each instance keeps a clone of the git repo(s) and runs a script that pulls updates from the production/staging branch, compiles a new build, replaces the binaries & frontend assets, restarts the service, and sends a Slack message with the list of changes just deployed.
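Roughly, the per-instance script boils down to something like this (heavily simplified; the paths, branch and service names here are just placeholders):

  #!/usr/bin/env bash
  # Simplified sketch of the per-instance deploy script.
  set -euo pipefail

  cd /opt/myapp
  git fetch origin
  git checkout production && git pull --ff-only origin production

  # Build on the box and swap in the new binaries/assets.
  make build
  cp -r build/bin/. /usr/local/myapp/bin/
  cp -r build/public/. /var/www/myapp/

  sudo systemctl restart myapp

  # Post what just went out to Slack (webhook URL comes from the environment;
  # jq keeps the JSON escaping correct).
  MSG="Deployed $(git log -1 --pretty=format:'%h %s')"
  curl -s -X POST -H 'Content-type: application/json' \
    --data "$(jq -n --arg text "$MSG" '{text: $text}')" \
    "$SLACK_WEBHOOK_URL"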
It works well enough for a startup with 2 engineers. However, I'd like to know what could be better. What could save me time over maintaining my own deployment system in the AWS world, without investing days into K8s?
You don't have to do a big-bang, Google-style migration. You can just invest in some continuous improvement over the next few years:
Iteration 0: What you have now.
Iteration 1: A build server builds your artifact, and your EC2 instances download the artifact from the build server.
Iteration 2: The build server builds the artifact, builds a container image, and pushes it to ECR. Your EC2 instances now pull the image into Docker and start it (there's a rough sketch of the instance side after this list).
Iteration 3: You use ECS for basic container orchestration. Your build server instructs ECS to pull the image and run it, with blue-green deployments hooked into your load balancer (second sketch below).
Iteration 4: You set up K8s and your build server instructs it to deploy.
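To make iterations 1-2 concrete, the instance side shrinks to roughly this once a build server produces the image and pushes it to ECR (account ID, region, repo and container names below are made up):

  #!/usr/bin/env bash
  # Iteration 2 sketch: the instance no longer builds anything. It pulls the
  # image the build server pushed to ECR and restarts the container.
  set -euo pipefail

  REGION=us-east-1
  REPO=123456789012.dkr.ecr.${REGION}.amazonaws.com/myapp
  TAG=${1:-latest}

  aws ecr get-login-password --region "$REGION" \
    | docker login --username AWS --password-stdin "$REPO"

  docker pull "${REPO}:${TAG}"
  docker stop myapp 2>/dev/null || true
  docker rm myapp 2>/dev/null || true
  docker run -d --name myapp -p 80:8080 --restart unless-stopped "${REPO}:${TAG}"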
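And for iteration 3, the deploy step on the build server becomes little more than asking ECS to roll the service (cluster and service names are placeholders; the blue-green behaviour comes from the ECS deployment configuration and the load balancer target groups, not from this script):

  #!/usr/bin/env bash
  # Iteration 3 sketch: the build server triggers an ECS rollout of the image
  # it just pushed. Assumes the task definition already references the new tag
  # (or that you register a new revision first).
  set -euo pipefail

  aws ecs update-service \
    --cluster myapp-cluster \
    --service myapp-service \
    --force-new-deployment

  # Block until the rollout has settled.
  aws ecs wait services-stable \
    --cluster myapp-cluster \
    --services myapp-service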
I followed a similar trajectory, and I'm at iteration 3 right now, on the verge of moving to K8s.
It's your call on how long the timespan is here, and commercial pressures will drive it. It could be 6 months, it could be 3 years.
For me, it feels a bit "wrong" to be building on each production server.
Firstly, production servers are usually "hardened" and only have what they need to run installed, reducing the attack surface as much as possible.
Secondly, for proprietary code, I don't want it on production servers.
But most importantly, I want a single, consistent set of build artifacts that can be deployed across the server/container fleet.
You can do this with CI/CD tools such as Azure DevOps (my personal favourite), GitHub Actions, CircleCI, Jenkins and AppVeyor.
The way it works is you set up an automated build pipeline, so when you push new code, it's built once, centrally, and the build output is made available as "build artifacts". In another pipeline stage, you can then push out the artifacts to your servers using various means (rsync, FTP, build agent, whatever), or publish them somewhere (S3, Docker Registry, whatever) where your servers can pull them from. You can have more advanced workflows, but that's the basic version.
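Stripped of the tool-specific YAML, those two pipeline stages amount to something like this (bucket, file and service names are invented; the version variable would be carried between stages by the CI tool):

  #!/usr/bin/env bash
  # Stage 1 - runs once on the build agent: build centrally, publish the artifact.
  set -euo pipefail
  VERSION=$(git rev-parse --short HEAD)
  make build
  tar czf "myapp-${VERSION}.tar.gz" -C build .
  aws s3 cp "myapp-${VERSION}.tar.gz" "s3://myapp-artifacts/"

  # Stage 2 - runs on (or is pushed out to) each server: every box gets the
  # exact same build.
  aws s3 cp "s3://myapp-artifacts/myapp-${VERSION}.tar.gz" /tmp/myapp.tar.gz
  sudo tar xzf /tmp/myapp.tar.gz -C /usr/local/myapp
  sudo systemctl restart myapp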
Automate compilation on a build server, run your tests there, and if everything is OK, push the resulting artifacts to your servers. This way you can guarantee that the code is tested and that every running version came from the same build environment.
If you make your application stateless and package it in a container, there are many managed services that can run it for you. For example, in AWS there are Fargate and EKS.
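A rough sketch of the Fargate route with the AWS CLI, assuming the image is already in ECR and the cluster, subnets and security group already exist (all names/IDs below are placeholders):

  # Register a Fargate-compatible task definition for the container.
  aws ecs register-task-definition \
    --family myapp \
    --requires-compatibilities FARGATE \
    --network-mode awsvpc \
    --cpu 256 --memory 512 \
    --execution-role-arn arn:aws:iam::123456789012:role/ecsTaskExecutionRole \
    --container-definitions '[{"name":"myapp",
      "image":"123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest",
      "portMappings":[{"containerPort":8080}]}]'

  # Run it as a managed service; no EC2 instances to look after.
  aws ecs create-service \
    --cluster myapp-cluster \
    --service-name myapp \
    --task-definition myapp \
    --desired-count 2 \
    --launch-type FARGATE \
    --network-configuration 'awsvpcConfiguration={subnets=[subnet-0abc],securityGroups=[sg-0abc],assignPublicIp=ENABLED}'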