
How do other developers handle local development of websites that are large in size? Currently, the code and site need to be set up for each client by several developers. We use GitHub and Bitbucket repos for the code.

The biggest issue is downloading the site files and setting them up locally for each developer. We used Docker for some projects but ran into permission problems, and storage space became an issue.

Comments
  • 0
    How big are these things that they can't be containerized?

    I'm currently working with about 20 apps, all containerized, and it takes up about 70 GB (source, databases, images).
  • 0
    One of the websites is 15 GB and another is roughly 25 GB. The issue is having something that works for both in-house devs and remote devs.

    The issues the team is experiencing are:

    1. Running out of disk space for the sites

    2. Ensuring their local environment is set up correctly and works

    3. Docker Desktop isn't supported on Linux, and the solution needs to work for both Mac and Linux environments

    I have considered:
    - JetBrains Space Environments
    - Docker
    - Managed cloud hosted server

    The solution should be reasonable for a small business with 10 developers. If it's more than £500/month, it's unlikely to be viable.

    I had issues where Docker Desktop wouldn't show containers after doing a docker-compose up -d, and file permission errors between the host and Docker became a problem.

    We don't want a new dev joining the team to have to spend hours or days setting things up; we're looking to improve that.

    Ben
  • 4
    @laceytech 15... and 25 gigabytes?

    Gigabytes?

    Are you kidding?

    To the rest... hire a competent admin / DevOps person.

    Permission issues with Docker, plus the size of the project, suggest there is a lot of knowledge missing...

    The gigabytes make me really wonder what you packed into the repository. 4K PSD files?
  • 1
    @laceytech something is very wrong with the way your data is set up.
    There is no way you have gigabytes of *code* in the repo.
    The most likely scenario is that you have large resources, probably media in different resolutions.
    The better way to set this up is to wrap the server/code/libs in a Docker container and load the resources from a central web server, shared by all developers, as required. To speed up development, map the code/libraries from a local host directory into the Docker container (sketch below).
    There is another option: your git repos have large non-text files in them. Solve that, and the repo size should come down.
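
    A minimal docker-compose sketch of that layout, assuming a PHP/Apache stack; the image name, paths, and asset host are placeholders, not taken from this thread:

    ```yaml
    # Hypothetical sketch: code comes from a host bind mount, heavy media is
    # fetched from one central server instead of living on every dev machine.
    services:
      app:
        image: php:8.2-apache                # placeholder app image
        volumes:
          - ./src:/var/www/html              # code/libs mapped from the host for fast edits
        environment:
          # point the app's media/asset base URL at the shared central server
          ASSET_BASE_URL: "https://assets.example.internal/media/"
    ```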
  • 0
    For the record, these are sites we didn't build, and the sizes are due to client resources like images, etc.
  • 0
    @magicMirror how would we resolve the permissions issue from host to guest Docker containers?
  • 0
    @laceytech
    Are you using Windows to develop? If yes, move the repo into WSL and map the WSL directory into your Windows environment.
    If not Windows, then what exactly is the problem?
  • 1
    So, running on Linux: I added my user on the host to the docker group and restarted, but when I fired up the environment I could not write files from the host into the Docker container. I had to run sudo chown -R ben:ben /path/ to allow PhpStorm to write the files. My only other thought was to install sshfs and mount an SSH volume from the host into the Docker container to get read/write access.
  • 1
    @laceytech that's insane. I won't cover everything, because others have already given you some solid advice.

    I'd like to clear up the Docker Desktop mix-up, though. There's no Docker Desktop for Linux because Docker runs natively on Linux. Docker Desktop is just a GUI that interacts with the Docker VM that runs on Windows and Mac. TBH, the Docker CLI is much easier to understand than all the hidden bullshit in the GUI.

    Also, if you're using Docker Desktop for Windows or Mac for commercial products, you *may* be in violation of its licensing.

    I think your team may be missing a DevOps engineer if these are the problems you're running into.
  • 1
    @laceytech
    That's actually a valid permission problem. The Docker container runs its internal user as root (uid 0), or as uid 1000 if you followed the best-practice tutorials. But your external dev user is neither uid 0 nor uid 1000, and that causes the problem. Find the external uid (user ben on the host) and set your Docker container to run under that uid, using a combination of a RUN useradd [params] step and the USER directive.

    This should not be part of the production image - but there is no actual problem if the custom uid makes it to production. The real issue will be other devs, on their own dev machines, with their own uids. So: use a custom Dockerfile per dev, or pass the uid as a build arg to docker build (sketch below).
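
    A minimal Dockerfile sketch of that approach; the base image, user name, and default ids are assumptions:

    ```dockerfile
    # Hypothetical sketch: make the container user match the host dev's uid.
    # Build with: docker build --build-arg HOST_UID=$(id -u) --build-arg HOST_GID=$(id -g) .
    FROM php:8.2-apache                      # placeholder base image

    ARG HOST_UID=1000
    ARG HOST_GID=1000

    # Create a non-root user with the host's uid/gid so bind-mounted files stay writable
    RUN groupadd -g ${HOST_GID} devgroup \
     && useradd -m -u ${HOST_UID} -g devgroup dev

    USER dev
    ```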
  • 1
    @magicMirror Yeah...

    But AFAIK, on Windows, Docker and file permissions are nearly impossible to get right.

    Unless you use network mounts...

    Which are cumbersome in their own right.

    That's one of the reasons I can only recommend providing hardware in-house, with remote access for devs.

    Otherwise it will always be a "myriad of ever-growing problems with no end in sight".

    Regarding UID / GID:

    Pick one and stick with it.

    Many service images use e.g. 999 / 999 as the UID and GID.

    Make the environment user / remote user the same for all devs, with the same UID / GID and non-root Dockerfiles, and a lot of problems will vanish (sketch below).
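
    If you pin one UID/GID instead of doing per-dev builds, the override can live in compose; a rough sketch (the service name and ids are illustrative):

    ```yaml
    # Hypothetical sketch: run everyone's container under the same fixed uid/gid,
    # matched by the dev accounts on each machine, instead of per-dev Dockerfiles.
    services:
      app:
        build: .
        user: "999:999"          # same uid:gid on every dev machine and remote box
        volumes:
          - ./src:/var/www/html  # bind mount stays writable because the uids match
    ```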
  • 0
    @IntrusionCM
    On Windows you can use the WSL mount point to solve the permission problem. Not easy - but workable.

    My personal way to solve this particular problem is a remote Linux desktop that has the whole Docker setup on it. It also has the git repo mapped into the running container. So: code on the laptop, push to the remote, then pull on the other side and restart the container (sketch below).
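
    That loop as a rough shell sketch; the host name, path, and service name are made up:

    ```bash
    # Hypothetical sketch of the laptop -> remote-desktop loop described above.
    git push origin feature-branch           # code on the laptop, push to the remote repo

    # Then on the remote Linux desktop that runs Docker:
    ssh dev@remote-desktop.example \
      'cd ~/sites/client-a && git pull && docker compose restart app'
    ```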
  • 1
    @magicMirror damn! Thank you for the easy-to-understand explanation. Hopefully I can get it working under Linux with fewer issues. On Windows I didn't have to change the permissions or the user running inside the Docker containers - not sure why?
  • 0
    @laceytech Windows does not use the Linux ACLs at all. By default, all files are read/write/executable when mapped onto a Linux filesystem, unless the Linux network file driver does something (Samba and CIFS, for example).
    So - no problem!

    You just need to figure out hot loading the code in the container, and remote debugging!
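
    For a PHP stack, that usually means a bind mount for hot loading plus Xdebug pointed back at the IDE; a rough sketch, assuming the image has the Xdebug 3 extension installed:

    ```yaml
    # Hypothetical sketch: hot loading via a bind mount, remote debugging via Xdebug 3.
    services:
      app:
        build: .
        volumes:
          - ./src:/var/www/html              # edits on the host are live in the container
        environment:
          XDEBUG_MODE: debug
          XDEBUG_CONFIG: "client_host=host.docker.internal client_port=9003"
        extra_hosts:
          - "host.docker.internal:host-gateway"   # needed on native Linux Docker
    ```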