
Should I infrastructure-as-code or should I not? The goddamn difficulty of approaching a cloud infra modernization project… so much to potentially do, I don't even know where to start or what is or isn't relevant in the end… gaah

Comments
  • 1
I'd start with the question: what does Infrastructure as Code mean to you?

    Modernization project sounds like there is existing infra...

IaC is easier when the existing infra has been consolidated, so you can separate between common ground (aka base layout) and specializations (e.g. php-nginx, haproxy, elasticsearch, nfs-san-storage)...

If we're talking about IaC in the form of Ansible / Puppet / ...
  • 2
If you don't write Infrastructure as Code, you will probably end up wasting hours of manual effort, leading to errors and unpredictability.

But it is going to be a big, long effort the first time around if it is not already done. You will have to see what makes sense to begin the effort with, and then decide whether to continue or stop altogether depending on how it turns out.
  • 0
@IntrusionCM yeah so… ugh. It's a clusterfuck. We have an existing prod infra in AWS, set up 5-6 years ago and pretty much static since. Part of it has been done via CloudFormation templates, most of it not. Apparently there has been some Terraform going on as well… the real problem is that no one who was involved in setting the infra up is around anymore, and there's virtually zero documentation on it.

What we really need to do is set up another environment (our staging account is empty atm) that replicates the prod infra, so we can start modernizing it in a low-risk manner. I thought of using CDK, but at the moment I'm uncertain whether it's really the right tool for the job.

The end game here is to take the infra from basically lift-n-shifted-from-on-prem to cloud native, with CI/CD pipelines and the relevant shablam that we can leverage to get the most out of AWS - and to ease our jobs as dev(ops) when it comes to dealing with the infra.
  • 1
    @100110111

    Sounds like a fun party pooper.

Disclaimer: I don't do Cloud stuff.

Templates in AWS are in JSON, I think.

    Terraform would be platform agnostic, which might be wasted money if the setup is rather static / doesn't change often.

    I think I'd start by gathering all JSON templates and aggregating the configuration.

Next I'd try - depending on size - to autogenerate preliminary documentation, trying to find the associations between the machines and getting a bird's-eye view of how the stuff is scrambled together.

For me, most of the time it's: use Python to parse the templates, get the JSON, parse the configuration folder(s)… generate a DTO containing IP addresses / host names and configurations. Then searching becomes very easy, as I have one text blob per resource.

The rest is putting together a diagram and making a to-do list for regenerating the setup like ya said.
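
    A minimal sketch of that parse-and-index approach (all names here are illustrative; it assumes the templates are plain CloudFormation JSON files sitting in one folder):

```python
import json
from pathlib import Path

def index_templates(folder):
    """Flatten every resource in every CloudFormation JSON template
    into one searchable text blob per resource."""
    blobs = {}
    for path in Path(folder).glob("*.json"):
        template = json.loads(path.read_text())
        for logical_id, resource in template.get("Resources", {}).items():
            # One text blob per resource: type + full properties dump
            blobs[f"{path.name}/{logical_id}"] = json.dumps(resource, indent=2)
    return blobs

def search(blobs, needle):
    """Grep-style search across all resource blobs."""
    return [key for key, text in blobs.items() if needle in text]
```

    With that in place, something like `search(blobs, "10.0.1.")` lists every resource whose definition mentions that address range, which is usually enough to start sketching the diagram.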
  • 1
@IntrusionCM you can use both JSON and YAML in CloudFormation. Shouldn't really matter though. Probably.
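
    For what it's worth, the two formats express exactly the same template. A made-up minimal example (the bucket name is illustrative) in JSON:

```json
{
  "Resources": {
    "LogsBucket": { "Type": "AWS::S3::Bucket" }
  }
}
```

    and the equivalent YAML:

```yaml
Resources:
  LogsBucket:
    Type: AWS::S3::Bucket
```

    So a JSON-only parser covers the JSON templates, and the YAML ones can be converted or parsed with a YAML library first.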
  • 1
    As a DevOps guy…

- Do your instances support auto scaling?
- Do you have a consistent way of building NEW EC2 instances without copying an old instance as an image?
- Do you have the resources around EC2 - queues, load balancers, auto scaling groups - automated?

    Those questions guide my decisions on what to put into Infra as code. Also, if you haven’t checked out Terraform, it’s worth the review.

    Don’t try to boil the ocean here. Just do one thing first.
  • 0
@devphobe answers to your questions: no, no and no. There's basically zero taking advantage of the cloud other than all the stuff just being there…
  • 0
My two cents - check out HashiCorp Packer and find a way to build a new EC2 image (AMI) from scratch. It'll go a very long way. Then you can spin up new instances from that image.
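
    A minimal Packer sketch of that idea (the region, instance type, AMI filter and the provisioner step are all placeholders, not details from this thread):

```hcl
packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = ">= 1.0"
    }
  }
}

source "amazon-ebs" "base" {
  region        = "eu-west-1"                  # placeholder region
  instance_type = "t3.micro"
  ami_name      = "base-image-{{timestamp}}"
  source_ami_filter {
    filters = {
      name                = "amzn2-ami-hvm-*-x86_64-gp2"
      virtualization-type = "hvm"
    }
    owners      = ["amazon"]
    most_recent = true
  }
  ssh_username = "ec2-user"
}

build {
  sources = ["source.amazon-ebs.base"]

  # Everything the old hand-built image contained goes here, as scripts
  provisioner "shell" {
    inline = ["sudo yum update -y"]
  }
}
```

    The point is that the image recipe lives in version control instead of in a snapshot nobody understands.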
  • 0
    My two cents:

    Do it in steps.

    First step is actually not automation, it's noting down requirements, consolidation, and standardization.

    That starts with keeping track of dependencies & versions, and making sure everyone uses identical setups. Docker can help, but also has a "cost" (complexity, initial time investment).

It can also simply be a set of bash scripts that takes a clean laptop or webserver to a predictable state. There's a risk: a new OS release might break things.

    Of course, this is not IaC at all, but a migration to IaC STARTS with a reproducible, clearly documented setup procedure.

    Step 2 is unattended, reliable CI/CD.

    I think both Github Actions & Gitlab CI offer great solutions. The difference with many other SaaS CIs? In my opinion, your config should live in your codebase. You need to be able to define testing dependencies per branch.
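
    As a sketch of config-living-in-the-codebase: a GitHub Actions workflow is just a file at `.github/workflows/ci.yml` inside the repo, versioned per branch like any other code (the test command is a generic placeholder):

```yaml
name: ci
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/test.sh   # placeholder: whatever the project's test entry point is
```

    Because the file travels with the branch, a branch that changes its dependencies can change its pipeline in the same commit.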
  • 0
    Step 3 is full containerization & k8s. Again, I think having configs within your codebase is essential.

I would define "true IaC" as: a developer can make a new branch from the main codebase, and specify that the branch requires Redis purely by editing config file(s). The local environment, testing pipeline, staging servers and production will all adapt by running a container with the correct Redis version on a specific port.
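
    For example, with Docker Compose that edit could be as small as this (the version pin and port are illustrative):

```yaml
services:
  redis:
    image: redis:7.2        # illustrative pin; every environment runs this same version
    ports:
      - "6379:6379"
```

    Commit that file on the branch, and every environment that honors the compose config picks up the same Redis.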

Going "all the way" is only necessary once the company grows beyond a certain point, but steps 1-2 are always needed.
  • 0
    @bittersweet good points, good points there.