
i don’t know what docker is and at this point i’m too afraid to ask.

i read an explainer on it every few months. i’ve used it a couple times. still don’t know what it does.

“it’s a container!” okay.. so is my tupperware. it has beans in it. what’s your point.

Comments
  • 7
    Do you understand what a VM is? A container is like that, but lightweight.

    Smartypants incoming
  • 2
    it's a virtualisation technology. but instead of emulating a whole computer, it virtualises at the operating-system level (containers share the host's kernel). which makes it much more lightweight and scalable. and easier to set up.
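
    here's a rough sketch of that, assuming Docker is running locally and the Docker SDK for Python ("pip install docker") is available; the alpine image is just an example:

    ```python
    # A container doesn't boot its own kernel the way a VM does; it reuses the host's.
    import docker

    client = docker.from_env()  # talks to the local Docker daemon

    # "uname -r" inside the container prints the *host's* kernel version,
    # because only the OS userland is virtualised, not the whole machine.
    kernel = client.containers.run("alpine:3.19", "uname -r", remove=True)
    print(kernel.decode().strip())
    ```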
  • 0
    https://platform.sh/blog/...

    Read this. Bookmark it. Share it.

    (Not the author, but I like the author a lot).
  • 0
    In a world where we have ChatGPT, you really shouldn't be a developer if you need something this basic explained to you on a forum..
  • 2
    Remember those "next->next->next" install screens? And those progress bars that show like a million files being spread all over your system? That is an inefficient and unsafe way to get and distribute new software!

    First, let's solve the "files all over" problem. Let's create some sort of super-file that organizes all the files your software artifact needs inside a single ZIP/RAR/tarball/"blob"/file object. This super-file must be efficiently stored, distributed, downloaded and loaded in atomic operations, so every time you need to run the software, all the directories and files it needs are there, and when the software is closed, it leaves no files behind to mess up your system. Let's call this thing a "Docker image file".
    Not unlike those fancy tupperwares with divisions that make sure that your dhal, spinach and rice do not get mixed up, and can be heated together even in a crappy company microwave.
    There is more to it, but that concept is already good enough to begin with.
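
    To make it concrete, here is a minimal sketch, assuming Docker is running locally and the Docker SDK for Python is installed; the image name and command are only illustrative:

    ```python
    import docker

    client = docker.from_env()

    # Pull the image: one blob containing every file the program needs.
    client.images.pull("alpine", tag="3.19")

    # Run it. With remove=True the container's filesystem is thrown away when
    # the process exits, so nothing is left scattered over the host.
    out = client.containers.run("alpine:3.19", "ls /", remove=True)
    print(out.decode())
    ```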
  • 2
    Now let's add some fancy features.
    If we can package our files so nicely, could we include some not-so-easy-to-set-up things?
    Like, my software would really like it if the OS had no limit on the number of open files, had a favourable niceness setting when managing memory, and used the most unobtrusive garbage collector possible (my software is kind of a memory hog).
    Easy to set up if you're using a specific version of the RHEL OS with some config files in arcane locations. But it could really piss in the soup of any other applications the user/host system might have.
    So let's pack all the OS's files together with ours, inside the Docker image file! Then the OS will be just the way we like it when our application loads!
    Isn't that inefficient? A bit, but the Docker daemon shares the host's kernel and reuses common image layers between containers, which mitigates the overhead enough to be worth the effort. As a bonus, applications get isolated by minimizing shared system files. Safety!
    Let's call a running, isolated Docker Image a "Docker Container".
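
    A hedged sketch of that isolation, again using the Docker SDK for Python; the limits and image are made-up examples, and the host's own settings stay untouched:

    ```python
    import docker
    from docker.types import Ulimit

    client = docker.from_env()

    out = client.containers.run(
        "alpine:3.19",
        "sh -c 'ulimit -n'",   # show the open-files limit *inside* the box
        remove=True,
        mem_limit="256m",      # memory cap, enforced via cgroups
        ulimits=[Ulimit(name="nofile", soft=65535, hard=65535)],
    )
    print(out.decode().strip())  # the host keeps its own limits
    ```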
  • 2
    Finally, let's bake in some CI/CD, DevOps and Git/versioning best practices.
    Let's use a "script file" that sets up our Docker image (the "Dockerfile"), making it very obvious which files are part of our software package and where they go. Every command that must run, every environment variable and every system setting also gets set up in this file. Repeatability!
    Let's also borrow some Git goodness when creating new image versions - let's pack "layers" of changes into each image, storing only the differentials. So an image file is like a bunch of "commits" on top of a "main branch" version.
    Finally, let's use a very Git-like repository structure (a registry) to distribute releases of our Docker images. It makes deploying to exotic runtimes a walk in the park.
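
    Purely illustrative sketch of those three ideas (a tiny made-up Dockerfile, a build, the layer history, and a hypothetical registry push), driven through the Docker SDK for Python:

    ```python
    import pathlib
    import docker

    # A made-up Dockerfile: each instruction becomes one layer ("commit").
    pathlib.Path("Dockerfile").write_text(
        "FROM python:3.12-slim\n"          # the "main branch" our layers sit on
        "WORKDIR /app\n"
        "COPY app.py .\n"
        "ENV GREETING=hello\n"
        "CMD [\"python\", \"app.py\"]\n"
    )
    pathlib.Path("app.py").write_text("import os; print(os.environ['GREETING'])\n")

    client = docker.from_env()
    image, _logs = client.images.build(path=".", tag="example.registry.local/demo:1.0")

    # The stack of differential layers this image is made of.
    for layer in image.history():
        print(layer.get("CreatedBy", ""))

    # Releasing would then be a push to a registry (hypothetical name, needs credentials):
    # client.images.push("example.registry.local/demo", tag="1.0")
    ```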

    I hope this explanation is simple enough to catch you up on the conversation about containerization, but not so smarty-pants (@retoor) as to make things more confusing, nor condescending. It's worth knowing about this thing.
  • 0
    cgroups + PID/namespace control. A fairly contained process, turned into a more abstract black box that you can then run seamlessly in most environments.
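
    A small sketch of those cgroup knobs through the Docker SDK for Python; the specific limits and image are arbitrary examples:

    ```python
    import docker

    client = docker.from_env()

    out = client.containers.run(
        "alpine:3.19",
        "sh -c 'echo contained'",
        remove=True,
        pids_limit=50,          # cap on processes inside (pids cgroup)
        mem_limit="128m",       # memory cgroup limit
        nano_cpus=500_000_000,  # roughly half a CPU
    )
    print(out.decode().strip())
    ```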

    But if you just need to run the program in your own environment, it's overkill.

    But it's good to learn. It's not a fad; it is (or is going to be) the new standard for how software solutions get released.
  • 0
    @myss i yearn for human connection
  • 3
    ever heard of "it works on my machine" as an excuse? we sorta kinda use containers to stop that shit from being a thing.

    Can be quite the complex environment though. Pretty interesting technology.
  • 2
    All right, so… you know when you’re downloading a program and it offers you multiple ways of installing it based on the computer you’re on?

    And how different operating systems have different underlying code that will interact differently with whatever program you install?

    Containers let you create a handy little environment with a specific operating system and specific versions of programs, which other people can also run on their machines regardless of how their own systems are set up.

    So if you have some teammates on Linux, some on Mac, and some on Windows, you can all run the code in the same container and it’ll behave the same way for all of you, whereas the experience could vary wildly without the containerization.
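
    For example (a sketch assuming Docker and its Python SDK are installed; the pinned image is just an illustration), everyone on the team runs against the exact same interpreter and userland no matter what their host OS is:

    ```python
    import docker

    client = docker.from_env()

    # Same pinned image on every machine, so "works on my machine" becomes
    # "works in the container" for everyone.
    out = client.containers.run(
        "python:3.12.4-slim",
        ["python", "-c", "import sys; print(sys.version)"],
        remove=True,
    )
    print(out.decode().strip())
    ```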
  • 0
    @AmyShackles i love you i love you