You know who sucks at developing APIs?
Facebook.
I mean, how do such highly paid guys with such great ideas manage to come up with APIs THAT shitty?
Let's have a look. They took MVC and invented Flux. It was so complicated that a wave of overhyped articles appeared claiming "Flux is just X", "Flux is just Y", and exactly when Redux took the stage, Flux was forgotten. Nobody uses it anymore.
They took declarative cursors and created Relay, but again, Apollo GraphQL came along and Relay just went away. When I tried just to get started with Relay, it seemed so complicated that I simply closed the tab. I mean, I get the idea, it's simple yet brilliant, but the API...
Immutable.js. Shitload of fuck. Explain WHY I should mess with shit like getIn(path: Iterable<string | number>): any and class List<T> { push(value: T): this }? ClojureScript offers Om, the React wrapper that works about three times faster! How is that even possible? Clojure's immutable data structures! They're even open-sourced as a standalone library, Mori, and the API is great! Just use it! Why reinvent the wheel?
It seems like when I just need to develop a simple React app, I have to configure webpack (a huge fuckload of work by itself) to get hot reload, modern ES and JSX working, then add redux, redux-saga, redux-thunk, react-redux and Immutable.js. And if I just want my simple component to communicate with the state, I need to define a component, a container, fucking mapStateToProps and mapDispatchToProps, and that's all just for a "hello world" to pop out. And make sure you didn't forget to type this.handler = this.handler.bind(this) for every handler function. Or use the fucked-up closure hack for event handlers that requires just a bit more webpack tweaking. We haven't even started communicating with the server! Fuck!
I bet there is some savage-ass overengineer sitting there at Facebook who of course knows everything about how a good API should look, and who also has a huge-ass ego and is simply allowed to ban everything he doesn't like. And he just bans everything with a good, simple API because it "isn't flexible enough".
"React is heavier than preact because we offer isomorphic, multiple rendering targets", oh, how hard I want to slap your face, you fuckface. You know what I offered your mom, and she agreed?
They even created create-react-app, but state management is still up to you. And react-boilerplate is just too complicated.
When I need a web app, I type "lein new re-frame", then "lein dev", and boom, a live-reload server starts. No config. Every action is just a (dispatch) away and works from any component. State subscription? (subscribe). Isolated side effects? (reg-fx). Organize files however you want. File size? Around 30k, maybe 60 if you use some Clojure libs.
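For anyone who hasn't seen it, the whole setup really is that short; a sketch assuming Leiningen and the re-frame template are installed (project name invented):

```bash
lein new re-frame myapp   # scaffold a fresh re-frame project
cd myapp
lein dev                  # compile, watch and serve with live reload
```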
If you don't care about massive market support, just use Hyperapp. It's way simpler.
Dear developers, PLEASE don't forget about the API. Take it seriously, it's very important. You may even design the API first and only then implement the actual logic. That's even better.
And Facebook, sincerely,
Fuck you.
-
> dockerized gitea stops working 502,
> other gitea with same config works just fine
> is the same config the issue? maybe the network names can't be the same?
> no
> any logs from the reverse proxy?
> no
> does it return anything at all on that port?
> no
> any logs inside the container?
> no
> maybe it logs to the wrong file?
> no others exist
> try to force custom log levels
> ignored
> try to kill the running pid
> it instantly restarts
> try to run a new instance with specifying the new config
> ignores config
> check if there's anything even listening
> nothing is listening on that port, but it is listening in the other, working gitea container
> try to destroy the container and force a fresh container
> still the same issue
> maybe the recent docker update broke it? try to make a new one and move only the necessary files
> mkdir gitea2
> all files seem necessary
> guess I'll try to move the same folder here
> it works
> they are exactly the same files as in gitea1, just with a different folder name
-
cw: I need a server to put my node backend
me: sure, I'll run a docker container for you
cw: nice, I've never worked with docker but I learn quickly, I'm already reading the Dockerfile docs
me: no wait, you don't need to learn anything, you'll be inside the container, so you only need an ssh connection and that's it
cw: this Dockerfile stuff is really complicated, it'll take me a while, but it's ok you don't have to worry, I like learning new things
me: you won't need that, just imagine it's a cloud server with Ubuntu installed, you only have to use it, I'll put node, git and ssh there for you
cw: ok got it, I'll have to learn the commands to run the docker, I'm on windows but I can use PowerShell and stuff I'll figure it out
me: ...
cw: ssh is a linux command right? does it have a push or publish option? how do you upload files there
me: ...you can use an FTP client, but you'll need ssh to run the node server
cw: ok, I'm almost done with the Dockerfile, I only need to add git and nodejs, I'm starting to understand this thing...
me thinking: yeah keep doing that, you're such a crack, such a quick learner...
This son of a bitch is either a retard or is doing it on purpose and laughing at me the whole time, making my life miserable. I'm about to go insane with this dude. I'm proud of how well I've been able to control myself, BUT ONE OF THESE DAYS I'LL LOSE MY COOL AND FORCE THIS MOTHERFUCKER TO DRINK A BIG POT OF BOILING, SALTY AND STINKING VOMIT WITH A SIDE OF STEAMING DIARRHEAL GREEN DOG SHIT WITH WHITE CHOCOLATE CHIPS WHILE I PUT MY OLD CRT MONITOR TO GOOD USE BY BEATING HIS FUCKING HEAD WITH IT!!!
-
!!oracle
I'm trying to install a Minecraft modpack to play with a friend, and I'm super psyched about it. According to the modpack instructions, the first step is to download the Java 8 JRE. Not sure if I actually need it or not, but it can download while I'm doing everything else, so I dutifully go to the download page and find the appropriate version. The download link does point to the file, but redirects to a login page instead. Apparently I need an Oracle account to download anything on their site. Stupid.
So I make an account. It requires my life story, or at least my full name, address and phone number. Stupid. So my name is now "fuck off" and I live in Hell, Michigan. My email is also "gofuckyourself" because I'm feeling spiteful. Also, for some reason every character takes about 3/4 of a second to type, so it's very slow going. Passwords also cannot contain spaces, which makes me think they're doing some stupid "security" shenanigans like custom reversible encryption with some 5th-grade math. Or they're just stupid. Whatever, I make the stupid account.
Afterwards, I try to log in, but apparently my browser-saved credentials are wrong? I try a few more times, try enabling all of the javascripts, etc. No beans. Okay, maybe I can't use it until I verify the email? That actually makes some sense. Fine, I go check the throwaway inbox. No verification email. It's been like five minutes, but it's Oracle, so they probably just failed at it like everything else, so I try to have them resend the email. I find the resend link and try it. Every time I enter my email address, though, it either gives me a validation error or a server error. I try a few more times and give up. I try to log in again; no dice. Giving up, I go do something else for a while.
On a whim later, I check for the verification email again. Apparently it just takes bloody forever, but it did show up. Except instead of the first name "Fuck" I entered, I'm now "Andrew", apparently. Okay... whatever. I click the verify button anyway, and to my surprise it actually works and says that I'm now allowed to use my account. Yay!
So, I go back to the login page (from the download link) and enter my credentials. A new error appears! I cannot use redirects, apparently, and "must type in the page address I want to visit manually." Huh? Okay, I go to the page directly, and see the same bloody error, because of course I do, because Oracle fucking sucks. So I close the page, go back to the download list, click the link, wait for the login page redirect (which is so totally not allowed, apparently, except it works and manual navigation does not; yay, backwards!), and try to log in.
Instead of being presented with an error because of the redirect, it lets me (try to) log in. But despite using prefilled creds (and also copy/pasting), it tells me they're invalid. I open a new tab container, clear the cache (just to be thorough), and repeat the above steps. This time it redirects me to a single sign-on server page (their concept of OAuth) and presents me with a system error telling me to contact "the Administrator." -.- Any second attempts, refreshes, etc. just display the same error.
Further attempts to log in from the download page fail with the same invalid credentials error as before.
Fucking Oracle and their reverse Midas touch.
-
I showed a friend of mine a project I made in two days with Docker and Symfony. It's a rather simple app, but it involves my usual setup: Nginx with gzip/caching/security headers/SSL + a Redis caching DB + PHP-FPM for Symfony. I also used PHP 7.4 for the lolz.
He complained that he didn't like using Docker and would rather install dependencies with composer install and then run it with a Laravel command. He insisted that he wanted a non-Docker installation manual.
I advised him to first install Nginx and generate some self-signed certificates, then copy all the config files and replace any environment-injected values (I use a self-made shell script for this) with the environment values from the docker-compose files.
Then I told him to download PHP-FPM with PHP 7.4 alpha, install and configure all the extensions needed, download and set up a local Redis database, and at last re-implement a .env file, since I removed those to replace them with the container environment.
He sent back an angry emoji (in a funny way).
God bless containerized applications: so easy to spin up entire stacks (either custom or vendor images like redis/mysql) and throw them away after having played with them. No need to clutter up your own PC with runtime environments.
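Which is the whole point: the throwaway stack is basically one file. A minimal sketch of what I mean (image tags and ports assumed, not my actual config):

```bash
cat > docker-compose.yml <<'EOF'
services:
  nginx:
    image: nginx:alpine
    ports:
      - "443:443"
  php:
    image: php:7.4-fpm-alpine
  redis:
    image: redis:alpine
EOF
docker-compose up -d   # play with it...
docker-compose down -v # ...then throw it all away, volumes included
```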
I wonder if he relents :p
-
I just gave robocopy another try, in order to get my WanBLowS D: drive and my file server synchronized again, in preparation for moving that file server VM to an LXC container... bad choice. I should've used rsync in WSL.
Hey, you Not-so-Robust File Copier for WanBLowS, how many attempts at fucking up my file server's dotfiles does it take before I've configured you right, with every fucking option you have specified? How about you actually behave somewhat decently like rsync, where -avz works 99% of the time, in local, remote, and any scenario you can think of that isn't super obscure?! HOW DIFFICULT CAN IT BE, REDMOND CERTIFIED ENGANEERS?!!
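For reference, the one-liner I mean, assuming the D: drive is mounted at /mnt/d in WSL and the file server answers on ssh (paths invented):

```bash
rsync -avz --delete /mnt/d/ user@fileserver:/srv/share/
# -a keeps permissions, timestamps and symlinks, -z compresses in transit,
# --delete makes the remote side an exact mirror of the local one
```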
Drown in a pond of bleach, Microshit certified MOTHERFUCKERS!!!!
Well, at least this time it didn't fuck up my .ssh directory, so I can still authenticate to the VM... so I guess at least that's a win. Even that you can't take for granted anymore with this piece of garbage!!!!
-
Me: We need to allow the team in the newly acquired subsidiary to access our docker image repositories.
Sec Guy: Why?
Me: So they can run the very expensive AI models that we have packaged into container images.
Sec Guy: There is a ban on sharing cloud resources with the acquired companies.
Me: So how are we supposed to share artifacts?!?
Sec Guy: Can't you just email them the docker files?
Me: Those images contain expensively trained AI models. You can't rebuild them from the Dockerfiles.
Sec Guy: Can't you email the images themselves?
Me: Those are a few gigabytes each. They won't fit in an email, and won't even fit within the Google Drive / OneDrive / Dropbox single-file size limit.
Sec Guy: Can't you store them in object storage like S3/GCS/Azure Storage?
Me: Sure
Proceed to do that.
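Concretely, that meant something like this (image and bucket names invented):

```bash
# export the image, trained model baked in, as a single artifact
docker save acme/ai-model:latest | gzip > ai-model.tar.gz
# push it to the bucket the subsidiary is supposed to read from
gsutil cp ai-model.tar.gz gs://acme-shared-models/
# on their side, one command brings it back:
#   docker load < ai-model.tar.gz
```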
Can't give access to the storage for shit.
Call the sec guy
Me: I need to share this cloud storage directory.
Sec Guy (with apparent amnesia): Why?
Me: I just told you! So they can access our AI docker images!
Sec Guy: There is a ban on sharing cloud resources with the acquired companies.
Me: Goes insane.
Is there a law or something that says you must attempt several alternative methods before the sec people realize that they are the problem?!?! I mean, frankly, one can get an executable artifact by fucking email and run it, but can't pull it from a private docker registry? Why the fuck would they call it "security"?
-
You mother fucking piece of shit.
Whoever taught you programming should be removed from history.
And whatever form of intelligence you claim to possess, let me assure you: breathing is the limit of it.
--
Some of the projects I'm working on are really the epitome of "YOLO, let's turn the poopomat machine on in diarrhea mode".
The worst part: I cannot really give examples.
These last few days, I've seen everything.
(bash scripting, docker, services like nginx /haproxy/...)
Eval as a template generator in bash...
Declaring a whole environment in a Dockerfile that should never be used as-is, since it's only necessary for building... but not checking whether an env file is provided, so the whole thing can blow up spectacularly.
A nearly 1k-line bash calculator for system limits, reading out all kinds of stuff from /proc and /sys, seemingly partially stolen from the NGINX Docker image.
Declaring and starting their own DNS server to bypass the Docker DNS service inside a docker container.
Mkfifo fun, creating several stdouts and stderrs for seemingly no reason...
Actively not using bash, and instead creating shell-only functions to emulate bash...
I could go on.
But really. I'm getting too old for this shit.
-
I'm getting really tired of those dumbass programmers who do not understand shit and then come to me when production breaks. (I am also a programmer, not really a DevOps engineer, but I'm the least worst at DevOps stuff, so it's my job...)
We're programming some kind of document management tool. Today we had a release, and one of the new features lets you download all of your documents as a zip file, which is generated asynchronously. When it's done, the user gets a mail with the download link to the zip file.
The feature basically works, but today it broke our production service while somebody was running a test of it.
Turns out all the documents are loaded into memory to be zipped. So if you have 2 gigs of documents, a container with memory restrictions in that area will crash.
I asked the programmer who reported this «ops problem» to me why he didn't just shit the files into a temp folder in order to zip them there.
He told me that he had wanted to do so, but did not know how to mock this for a unit test, and therefore went with the in-memory «solution», which was easier for him to mock.
For fuck's sake, unit tests and mocks are fucking tools, not ends in themselves! I don't give a fuck about your pointless mocking code when the application crashes!
When I've got to deal with such dumbasses, I'd prefer to mock those motherfuckers with a leaky bucket of liquid shit, which basically accomplishes the same task from my perspective: dripping shit all over the place and making everything suck as fuck.
-
I was cleaning up dangling images in docker, and I accidentally removed the production database container as well.
It's not a big issue: I can just bring the container back up and everything should be fine. But after I brought the container up and connected to the database, I found out there was no data inside. I thought I'd fucked up, and sent a msg in the slack channel that I'd nuked the db.
Later my friend asked me which compose file I was using, and that's when I realized I'd used the wrong config to bring up the db. Used the correct config to bring up the database again and everything went back to normal.
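Lesson learned about cleanup while I'm at it: the prune commands differ wildly in scope (a sketch):

```bash
docker image prune    # removes dangling images only; what I actually wanted
docker system prune   # also removes all stopped containers, networks and build cache
```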
It's Friday evening, and if I had really dropped the db it would have been a fucking bad weekend....
-
Spent two days struggling with environment variables not being available in a python script running in a docker container. Tried providing the environment variables directly: not working. Tried providing an env file: not working.
.
.
.
Turns out I was passing the variables to the wrong container, fml
-
Warning: long rambling story, cause sleep deprivation.
I never really bothered with ssh outside of using PuTTY to remote into my servers and RPis from my desktop to run updates, install something, or whatever else.
But today I was on a call with my cousin, bored cause she was just rambling, so I opened VSCode to clean my install of unnecessary extensions I'd installed and haven't used more than once or twice.
I saw Remote - SSH, and as I was bored listening to a teenager complain about high school just like I used to (lol), I scrolled through the page, then the documentation, just casually skimming the text and responding when she asked me something.
I set up an ssh key on an RPi I'd thrown Manjaro ARM onto, following the instructions on their tips and tricks page.
I then moved the key to my desktop using WinSCP (cause lazy),
leading to a minor hiccup of rsa not being an accepted key type (thanks, 'your favorite search engine', for the help).
Finally, I was able to connect using the private key.
At this point my cousin went to bed, cause she has school tomorrow. But I was still doing stuff with ssh: I created a new ssh connection in VSCode, but had to go to the documentation to figure out how to make it use my fancy new key file. Not hard; took 30 seconds of looking to get it working.
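For anyone retracing this, the whole dance boiled down to roughly the following, done from a unix-y shell (hostname, IP and file names made up):

```bash
ssh-keygen -t ed25519 -f ~/.ssh/pi_key        # newer key type; sidesteps the rsa keytype rejection
ssh-copy-id -i ~/.ssh/pi_key.pub pi@192.168.1.50
cat >> ~/.ssh/config <<'EOF'
Host manjaro-pi
    HostName 192.168.1.50
    User pi
    IdentityFile ~/.ssh/pi_key
EOF
ssh manjaro-pi   # Remote - SSH reads hosts from ~/.ssh/config too
```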
Now that I was in, I moved to my development folder, created a folder for PiHole, created a compose yml, and created a pihole-data folder.
I opened the yml and pasted in a compose file from Docker Hub.
At this point I thought 'I can't just run this from the terminal, can I?'. And obviously it worked, cause there's literally no reason it wouldn't; I'm just stupid to think it might not.
So I created folders and files on a remote system, launched a docker container, and then checked for package updates on a linux machine. All from VSCode on a Windows machine.
I know this is simple for some people; I know some people are like 'where's the interesting part?'. But ehhh, I thought it was cool to get it set up. I now really regret not getting into ssh sooner, and I'm definitely going to uninstall VSCode on all my smaller graphical VMs in favor of doing this. And it will definitely help with my headless VMs.
I also have to thank my cousin; I might not have done this if I wasn't stuck at my computer on a messenger call with her lol.
I'm gonna go to bed now, but I feel accomplished for the first time in a while, even if it's for something as simple as setting up an ssh key for the first time.
-
I've been scratching my head for 2 days because a rather large Dockerfile doesn't work as expected.
CMD execution just leads to "File not found".
Thanks, that's as useless as one-ply toilet paper...
Whoever wrote the Dockerfile (not me…) should get an Oscar...
Even with diarrhea, after eating day-old extra-hot China takeout from dubious sources, I couldn't produce such a dumpster fire of bullshit.
The worst part: the author thought layering would help, except it doesn't really, as it's one giant file with roughly 14 layers, if I count correctly.
I just found out the problem...
The author thought it would be great to add the source files of the node project that should be built as a volume in docker... which would work, I guess...
Except that the author is a clueless chimp who seemingly also thought that folder organization means just pouring everything into one folder...
Yeah. That fucker just shoved everything into one folder.
Yeeeeeesssssssss.
It looks like this:
source/
  docker-compose.mounts.yml
  docker-compose.services.yml
  docker-compose.yml
  Dockerfile-development
  Dockerfile-production
  Dockerfile
  several bash scripts
  several TS / JS / config files
  ...
If you read the above... yes.
He went so far as to copy the large Dockerfile three times to add development- and production-specific overrides.
I can only repeat what I've said many times before: if you don't like doing stuff, ask for fucking help, you moron.
-.-
*gooozfraba*
Anyways...
He directly mounts this source directory as a volume.
And then executes a shell script from this directory...
And before that, the same shit was copied into the volume's path by the large gooozfraba Dockerfile.
Yeeeaaah.
We copy stuff into the container at build time, then on start we mount the whole folder over it and overwrite the copied stuff.
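Boiled down, the anti-pattern looks like this (paths invented):

```bash
# Dockerfile:     COPY source/ /app/    <- bakes scripts into the image at build time
# compose file:   volumes:
#                   - ./source:/app     <- then mounts the host folder over /app at runtime
# so everything COPYed during the build is shadowed by whatever is on the host:
docker-compose up   # /app/start.sh is now the host's copy, or missing entirely
```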
*rolls eyes* Which is completely obvious in this pit latrine of YML fuckery called a Dockerfile, of course.
As soon as I moved the start script outside the folder, so it no longer runs inside the folder that's mounted as a volume, everything works.
Yeah... maybe one should separate deployment from source files, and runtime-related stuff from build stuff.
*rolls eyes*
I really hate Docker sometimes. This is stuff that breaks easily, for reasons you cannot see unless you really grind your teeth and start manually tracing and debugging whatever the frigging fuck the maniac called author produced.
-
I'm currently between jobs and have a few rants about my previous job (naturally). In retrospect, it's somewhat therapeutic to rant about the sheer brainfuckery that has taken place. Enjoy!
First, let me set the scene: a legacy B2B web app made with the LEMP stack and Sencha Ext.js 3 + 4 (don't ask) and a lot of madness. Let's call that app "Alpha".
Alpha is a self-made CMS built for typical ERP stuff. Yes, a self-made CMS: entities are containers, containers have types and fields and values. Like so many legacy PHP apps, it does not have a dedicated FE: the HTML is rendered on the server and then spewed out to the browser.
Easy, right? Coding like it's 1999! But there was a twist: because everything is basically a container, the HTML templates are saved in the DB. Along with the necessary JS and the CSS. And the translation variables. Why? Because fuck you, that's why. Who needs a git history anyway.
For some reason, Alpha was kinda slow.
There was also an editor that allowed you to modify templates (web, mail, pdf) on the fly in prod. Because templates contain repeating data (header/footer), one template could contain additional templates. Much confusion. You could change templates via migration (slow, boring) or just ctrl-c/ctrl-v that sucker (fast, much excitement).
Did I mention Alpha was slow?
On with the rant: e-mails! How do they work? No one knows. How to send mails asynchronously in PHP? Witchcraft is the only possible answer to that riddle. Here is your enterprise™ solution:
1. create the mail
2. insert the mail into the DB
3. WAIT UP TO 59 SECONDS FOR A FUCKING CRON TO SEND THE MAIL
Why? "Because that way, we can resend mails in case the network is down :)"
Same procedure for the SOAP API (db queue + cron). You read that right: all requests to various other systems are processed once a minute.
Alpha slow.
Alpha was only one of several systems. Imagine a bunch of monolithic PHP apps, interconnected via SOAP, REST and GraphQL like a goddamn intergalactic orgy. Imagine having to debug that cluster fuck.
Let's say there is a bad request. These things happen. No biggie. Remember the db queue? Let's try to send the bad request a second time! And a third time! Still no luck? How odd. Let's create a specific file in a specific directory: a LOCK file. Now "the db-queue is on hold and no request gets processed :)"
Golly gee, thanks Alpha.
Anyhow, did you know that MySQL has a join limit of 61 tables?
-
I was refactoring a 4000-line JavaScript file, turning whatever I could into components. It was a pretty old project, and I had ended up being one of the few wizards who actually knew how most of it worked.
At one point, I noticed that logging into a user account redirected us to the dev-test server, even when working in a local Docker container.
I never quite understood the issue, but the bug mysteriously disappeared.
Until it came back two months later, then disappeared again.
At one point, the whole dev team on the project (four developers, me included, plus one project manager) was investigating the issue, and we never figured out how or why it was happening.
Just Symfony routes shenanigans, I guess.
-
I freaking hate slow IDEs, especially ones made in Java.
I used to use an IDE/text editor called Geany, and it was great: you could work in almost every language with it, and it worked great. It was fast and efficient, a no-nonsense editor. That was when I was a kid; then I got into uni and got a job, so I had to start using big-boy """""enterprise""""" IDEs like Eclipse.
Eclipse, NetBeans, and IntelliJ (basically every Java-based IDE except BlueJ) are exactly what is wrong with IDEs. They are clunky editors that frankly would be better off gone. They are slow and eat RAM like crazy (like most Java software). You just CANNOT have Eclipse open for extended periods of time, because it WILL take up too many resources and get slow as heck. Android Studio (based on IntelliJ) is a nightmare to work with. It just does not want to cooperate with you (I will agree they have improved a lot, though).
I cannot believe I am saying this, but even the Electron-based editors like Atom and code-oss are better than them. They are very easily extendable, something that Java software was supposed to be, but is not. They have tons of plugins. And even if the one you want isn't there, you can build it without having to spend a lifetime making the plugin! They look good. I never thought that going from IDEs with """""enterprise""""" UIs to something modern like code-oss would feel this great. It's ridiculous: I don't want to create a darn project for every single file I want to edit, I just want syntax highlighting for the single .sh file I'm editing right now. A project is just a way to logically define what is one "unit", a "container for multiple files". You know what else is that? A simple directory.
Also, I don't want 9 billion .xml files for the IDE to store its crap. Just make a .vscode-like folder to hide your shit.
-
How to run PHP in a container:
1. Begin a Dockerfile for an existing php cron app (when all you know is php, everything looks like a php app)
2. Set the FROM... apt-get update... do a composer install
3. Build the image
4. Discover I need git
5. Add git to the apt-get install step
6. Build the image
7. Launch the php script
8. Fatal: use of undefined constant SOL_UDP
9. Open the source code of the third party. There's no mention of where that constant comes from.
10. Spend many minutes online trying to find what's missing.
11. Find the PHP sockets page about that option. Dig into the documentation to find out it's missing from the installed PHP.
12. Find out I need to add a step to install the sockets extension in my Dockerfile (the one-liner is sketched after this list)
13. Build the image again
14. Execute it; finally it works
15. Remember why I hate php
(for brevity I've omitted the even more complex part of having to set up zlib)
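For the record, the missing piece in step 12 turned out to be a single line, assuming an official php base image:

```bash
# in the Dockerfile:
RUN docker-php-ext-install sockets   # builds and enables ext-sockets, which defines SOL_UDP
```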
How to install Node.js in a container image:
FROM node:8
ADD package.json .
RUN npm install
-
After a couple of days reading about CI/CD, I find it really complicated for my use case: a server for Jenkins, a server for Rancher, a server for everything, and a lot of configuration to make it all work. So I gave up and decided to build my own CI tool, which turned out to be easier than all of that. I wrote a simple CLI tool in Go that watches the master branch of every directory specified in a .yml file, and every time a new commit lands, it pulls the repo and rebuilds the container. Pretty easy, without having to deal with all the bullshit.
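The core of it, sketched in shell rather than Go (repo path and image name invented):

```bash
while sleep 60; do
  git -C "$REPO" fetch origin master
  if [ "$(git -C "$REPO" rev-parse HEAD)" != "$(git -C "$REPO" rev-parse origin/master)" ]; then
    git -C "$REPO" merge --ff-only origin/master
    docker build -t myapp "$REPO"   # rebuild the container image on every new commit
  fi
done
```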
-
Deploying into linux containers (lxc) as of 2013, before docker was even the hype.
(The experience was a bit problematic though, as it was in a highly virtualized environment whose backup would really badly kill the whole container every now and then: you could still ssh to the machine, but with every access to the file system you'd lose your shell, and only "echo 1 > /proc/sys/kernel/sysrq" would help to restart the box.)
-
My group set up a Linux dev server. We got hacked by Chinese hackers. We set it up again, but even more secure, so that only people inside the uni could access it. We got hacked again. Turns out one of the modules in a container was using an outdated CentOS version. P.S. The malicious file on the server was called kk.love.
-
Duck! This sloppy, whiny winnfsd.
Yay! Let's use state-of-the-art Docker with a VirtualBox VM on Windows 10.
Don't get me wrong.
The Docker containers in this VM do a great job performance-wise.
But the moment a Docker container uses a folder mounted via the Windows network filesystem, all hell breaks loose.
Building a vendor folder using a composer Docker image with 84 packages takes about 15 seconds when the cache has been warmed up.
The same Docker command pointing at a folder mounted from the Windows filesystem, with a warmed-up cache, takes about 10 minutes!@&&@""+&
And what is the duckin' reason for this delay?
Because every transfer of a teeny tiny file has to establish a connection to the fat-ass Windows OS and pass through its glorious "security" layer.
DUCK it!
For real.
I'm currently working on a shell script that builds the whole vendor folder on a volume inside the Docker VM.
After completion, the shell script compresses the folder into one file.
That one file is then transferred over this goddamned network filesystem.
Finally, the script unpacks the compressed vendor folder into its destination folder.
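The script's idea, condensed (volume and path names invented):

```bash
# 1) build vendor/ on a named volume that lives inside the Docker VM (fast disk)
docker run --rm -v vendor-cache:/app -w /app composer composer install
# 2) compress it into a single file and move only that across the slow mount
docker run --rm -v vendor-cache:/app -v "$PWD:/out" alpine \
    tar czf /out/vendor.tgz -C /app vendor
# 3) unpack on the host side: one big transfer instead of thousands of tiny ones
tar xzf vendor.tgz
```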
*sigh*
What year is it?!??
-
I'm trying to stand up a docker container to read a storage queue with dotnet and invoke ffmpeg to convert some videos. For a whole day I fought with this wrapper (FFMpegCore), which kept throwing file-not-found errors on the ffmpeg binary itself. Locally (on Windows) it worked fine.
I spent a ton of time trying to install the Debian package, trying to add it to the path manually, trying to use the wrapper just to generate the arguments I wanted (I'm not an ffmpeg pro, so the fluent API the wrapper has is super useful) and then running ffmpeg manually. Nothing worked. Finally, I realized it wasn't even getting to the part where I ran it manually: just using the fluent API to build the arguments was invoking ffmpeg, and throwing.
I took away the wrapper completely, started ffmpeg manually, and it works...
Ay carumba. Things just can't be easy.
-
Mount an Azure file share in an App Service container? Sounds handy. Nice clicky-draggy wizard to set it up: pick your file share, type a path to mount it to, hit save.
And does it work?
Does it buggery.
And is there a helpful error message so you can see what you've done wrong?
In a pig's arse is there a helpful fucking error message.
"Application error", and a link to some "diagnostic resources" that displays the exact same error message, including the same link, so a link to itself, in an infinite recursive loop of rank, inhuman stupidity.
Let me see what's in the logs. Absolutely fuck all. No, wait! There's the html markup for the fucking useless error message I'm looking at in the browser. So the UI is telling me to fuck off, and the logs are recording that I have been told to fuck off.
But this is Azure. So there isn't just one place to look at the logs, there are many places to look at the logs. And they are all geologically slow and most of them don't work.
It's probably a firewall issue. I'll have a look later on if I can be arsed, but frankly I'd rather be performing cunnilingus on a lion.1 -
Fuck you, Redis.
Goes into a docker container, calls bgrewriteaof, gets success, checks info: no pending writes, last write successful.
Tries to scp the file to a remote host; fails: Unexpected AOF
Decides to shut down the local redis to be able to move the file, in case redis is blocking it.
Calls redis-cli shutdown (you'd expect it to just shut redis down, right?)
It fucking deleted all my data. Now I see the docs:
"Flush the Append Only File if AOF is enabled."
Why the fuck? Fuck you redis, fuck you.
-
2 hours of debugging a web page. We had a simple border: 1px; margin: 4px; on the container, but on the right side the border was outside the screen space.
Like, gooohhh, I hate this. Is it some element that's making the container grow or something? It doesn't happen on all screen sizes..? So confused. No scrollbar, so where's the overflow: hidden? ARGH, I'm no pro at this! And this is a spaghetti css file; just work, stupid css!
The reason why, which took toooo long to find 😓, was that the monitor's physical bezel was overlapping the right side of the screen, oh, by such a subtle amount. Enough to not be noticeable normally (it looks completely usual), but enough to hide that stupidly stupid border!
I so hate/love it when there's such a simple solution. But now I'm gonna be stuck checking the screen bezel when instead it's my css that is off...
-
I remember playing games like Wolfenstein 3D, Supaplex, Sokoban etc. on our family 486, which as I remember had around a 100 MHz processor (120 MHz in TURBO mode).
Yeah, and I created a few levels in Wolfenstein; there was a simple editor.
From the programming side, I coded my first website, using only HTML and inline CSS, in the early 2000s. However, the internet was a thing for rich people back then (in my country), so my brother downloaded a whole website with docs and the basics of html/css/js for me at college. My first website was coded on a 300 MHz Pentium 2 (or 3?) with no internet connection, took me about two months to complete, and was a total mess. But it was graphically satisfying, with nice gifs that took tens of seconds to download. The main container had a 600px width and looked pretty good on my 800x600 resolution.
I still remember messing with the BOM signature, because Notepad could not save a file without a BOM, leaving all the utf8 chars a mess after saving.
Good old times.
-
I’m so sorry if this is the place for questions. I’m terrified of stack overflow and have been searching for a week for a solution and can’t find one. This is for React.js people.
I was tasked to create a webpage with react. The limitation is, they did not wanna adopt the node.js dependency. I said ok, I’ll figure it out. You can inject react, material UI, and babel with script tags in HTML, then put ur lil components in it. I did that and it works beautifully.
However, now I have to write tests for this. I think it’s actually impossible without a way to render React, so I have to use the browser, or node, right? I convinced my boss to allow me to use a node.js container just for testing, which I thought would make my life easier.
I don’t know how to render this thing with node. It’s just an HTML file that pulls react via script tags, and idk how to serve html with node. Additionally, none of the React testing libraries seem to support testing a system that wasn’t designed to be served with node, at least not easily. My gut tells me that the complication with how things are imported contributes at least a little to this (dependencies pulled via script tags in the HTML file and made available to react through global const variables).
I could be wrong about any of this — im fairly new. But how tf do I go about testing these react components? For reference, if you go to Reacts docs, there’s a section called “add react to a page in one minute” that’s pretty much what I did.20 -
For me, that would be Proxmox. I know people like it, but for no apparent reason it decided to nuke half the ZFS datasets in one of my pools, with no logic behind it whatsoever. All disks were tested; all came out good. Within the same pool there were datasets that were lost and some that remained.
I really don't get it. Looking at Proxmox's source code, it's more or less the command line tools, and then there's the web interface (e.g. https://github.com/proxmox/...). Oh, and they have the audacity to use their own file extension. Why not, I guess?
Anyway, half my data was gone. I couldn't tell how or why or what the fuck even happened there. But Proxmox runs Debian underneath, and I'd been rather pissed about Proxmox's idea of "don't touch the host system aaa" for a while at that point. So I figured, fuck it, I'll just take pure Debian then and write my own slightly better garbage on top of that. And as such the distribution project was born. I've been working on it for a little over a year now. And I've never had such issues again.
I somewhat get the idea of "don't touch the host" now, but still not quite. Yes, the more you do in the containers, the better. And the less you do on the host in terms of reconfiguration, the longer it will stay alive. That goes for any system: more reconfiguration usually means less stability and makes the host harder to replace. But sometimes you just have to work from the host. Like, say, migrating a container between hosts, which my code can do. You can't do that from a container, at all. There are good reasons to work with the host. Proxmox isn't telling you that. Do they expect their users to be idiots? Only enterprise sysadmins, amirite?
So yeah, that project, while I do take inspiration from it in mine, I don't like it. It's enterprise; it has the ZFS and the Ceph and the LXC and the VMs, woohoo! Not like anyone could implement that on a base Debian system. But hey, they have the configuration database (pmxcfs), the distributed configuration database that's a couple MB large and capped there, woah!
Ok, sure, it isn't Microsoft or IBM or Oracle or whatever, and those are definitely worse. But those are usually vendor lock-ins... I avoid those on that premise alone :)
-
Does anyone know how containers (?) like zip or exe files work? I mean, when you open one with a text editor you can't see much?!
-
I had the funniest thing today... So our company has some servers off somewhere in a VPN, as well as one server in our own office.
So, for simplicity, S1 is my own laptop, S2 is our office server, S3 is one VPN server, and S4 another.
I want to get a file from S2 to S4. S1 can ssh into S2 and S3, S2 can't ssh into any server, S3 can ssh into S2 and S4, and S4 can't ssh into any server.
So to get a file from S2 to S4, I took the path
S1 pull from S2 -> S1 push to S3 -> S3 push to S4
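In commands, the relay looked roughly like this (host aliases from my ssh config, paths invented):

```bash
scp S2:/srv/data/backup.tar.gz .              # S1 pulls from the office server
scp backup.tar.gz S3:/tmp/                    # S1 pushes into the VPN
ssh S3 scp /tmp/backup.tar.gz S4:/srv/data/   # S3 hands it over to S4
```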
Part of it was that preexisting keys meant it was easier to send it from S1 to S4 via S3 than to get my pubkey from S1 onto S4, but also, S2 not being on the VPN meant I couldn't go straight from S2 to S3 or S4, so I had to route through S1, which I could add to the VPN (I'd sshed into S2 from home and thus couldn't put it on the VPN, not to mention permissions, whereas I could easily put S1 onto it).
Twas certainly a fun time :P
Plus, port forwarding from a Docker container on S2, through S2's port, to S1's port via ssh was fun to get set up.
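That bit was just ssh's -L in the end; roughly (ports made up):

```bash
ssh -L 8080:localhost:8080 me@S2   # the container port on S2 shows up on S1's localhost:8080
```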
Time to document this process :)
-
Can't believe I'm about to say this, but:
Systemd-container is a rather cool SysD extension.
It allows me (root on most servers) to switch to a customer account in a completely new session, setting up all the .profile and .bashrc stuff, so I can do things like control their rootless docker, and no longer have to add my SSH key to their authorized_keys file and then re-login under their user.
Nice.
-
I finally took the time to look at creating symbolic links for the node_modules folder and the package and package-lock json files, all from boilerplate code for my frontend projects.
I saved over 15 GB of disk space;
my SSD is 40 GB, so that's fair.
Now I have all my node modules for frontend projects in a separate container directory, and the node modules for backend projects in another.
I still haven't figured out how to trim down my package.json file before pushing, because there'll be some unused libraries in there.
For now, any direct changes I make to the package and package-lock json files will be reflected in the symlinked directory, and then reflected across all projects that share it.
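The layout, roughly (paths are mine, adjust to taste):

```bash
mkdir -p ~/containers/frontend
mv node_modules package.json package-lock.json ~/containers/frontend/
ln -s ~/containers/frontend/node_modules node_modules
ln -s ~/containers/frontend/package.json package.json
ln -s ~/containers/frontend/package-lock.json package-lock.json
```

(For the trimming problem, a tool like depcheck supposedly lists unused dependencies, but I haven't tried it yet.)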
I have to be careful here.
-
I've got a Linux server running with 2 NVMe disks in a RAID 1 configuration using mdadm. But if I want to create a big file, say 4 GB, the whole system starts to hang.
I found out it's because of the journaling process, which gives the CPU a long IOWAIT.
My problem is, I want to unzip a huge file, but this results in an immense server hang. Is there any way to do this without the server hang?
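One thing I haven't tried yet, suggestions welcome: deprioritizing the unzip's I/O, something like this (file names made up):

```bash
ionice -c 3 nice -n 19 unzip -q huge.zip -d /data/extracted/
# -c 3 is the idle I/O class; it only helps if the scheduler honors it (CFQ/BFQ), so no promises
```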
I unzip it inside a docker container, if that is of any help.
-
(Warning, wall of text)
Settle an argument for me. Say you have a system that deals with proprietary .foo files. And there are multiple versions of foo files. And your system has to identify which version of foo you are using and validate the data accordingly.
Now, the project I was on had a FooValidator class that would take a foo file, validate the data, and either throw an error or send the data on its merry way through the rest of the system. A coworker of mine argued that this was terrible practice because all of the foo container classes should just contain a validate method. I argued that it was a design choice, not bad practice, just different practice. But I have also read that having a FooValidator, rather than being a design choice, is the right way to do OOP. Opinions?
-
So after deploying an update for my app's API this morning, I very quickly had to move hosts, as OpenShift for some reason does not install express on its 8.x container; heck, it's not even in the package-lock file, and no help from Google. On the plus side, the new one's faster, even though it's further away, and I can only imagine a replica set DB is better than a single instance.
-
Not sure if this is somehow standard, but we have a new dev process we need to use to deploy a Docker image to an OpenShift container.
(Is a container one node/VM, or could it be many?)
In the Jenkins build, it seems the files are copied into the Docker image.
But they aren't copied to the container/OpenShift deployment image. There's something mentioned about a config map, but I'm not sure how that's related to file copying...
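From what I've pieced together so far, a ConfigMap seems to be how you inject files into the pod instead of baking them into the image; something like this, if I'm reading the docs right (names made up):

```bash
oc create configmap app-config --from-file=settings.json
oc set volume deployment/my-app --add --type=configmap \
    --configmap-name=app-config --mount-path=/app/config
```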
-
Have you ever been stuck between user permissions, docker, Jenkins and Linux?
Running a docker container with a particular user that doesn't have permission to a folder inside the container. User permissions should be the same on the host and inside the container; when I did that, the container user does have permission on certain files on the host.
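The thing I want to verify tomorrow is pinning the container user to the host UID/GID explicitly; a sketch (image and paths made up):

```bash
docker run --user "$(id -u):$(id -g)" -v "$PWD/data:/data" myimage
# the container process now runs with the same numeric IDs as the host user,
# so files created under /data stay accessible on both sides
```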
Successfully wasted my day fucking around with this.
I have to start on it again tomorrow; wish me luck.
-
#Suphle Rant 6: Deptrac, phparkitect
This entry isn't necessarily a rant but a tale of victory. I'm no longer as sad as I used to be. I don't work as hard as I used to, so there are fewer challenges to frustrate my life. On top of that, I'm not bitter about the pace of progress. I'm in a state of contentment regarding Suphle's release.
An opportunity to gain publicity presented itself last month when the CFP for a PHP event was announced. I submitted and reviewed a post introducing Suphle to the community. In the post, I assured readers that I won't be changing anything soon, i.e. the APIs are cast in stone. Then PHP 7.4 officially "went out of circulation". It hit me that even though the code supports PHP 8 on paper, it's kind of a red herring that the decorators don't use PHP 8 attributes. So I doubled down, suspending documentation.
The container won't support union and intersection types, cuz I dislike the ambiguity. Enums can't be hydrated. So I refactored the implementation and usages of decorators from interfaces to native attributes. I tried automating typing for all class properties, but Psalm uses docblocks instead of native typing. So I disabled it and am doing it by hand whenever something takes me to an unfixed class (difficulty: 1). But the good news is, we are as PHP 8 compliant as anybody can ask for!
I decided to ride that wave and implement other things that have been bothering me:
1) 2 commands for automating project setup for collaborators and user facing developers (CHECK)
2) transferring some operations from runtime to compile/build TIME (CHECK)
3) re-attempt implementing container scopes
I tried automating Deptrac usage, i.e. adding the newly created module to the list of regulated architectural layers, but their config is in YAML, so I moved to phparkitect, which uses PHP to set the rules. I still can't find a library for programmatically updating PHP files/classes, but this is more dynamic for me than YAML. I set out to implement their library; turns out the entire logic is dumped into the command class, so I can neither control it without the CLI nor automate tests against it. I took the command apart, connected it to Suphle and ran it. Guess what: it detects class parents as violations of the rule. Wtflyingfuck?!
As if that's not bad enough, RoadRunner's (that old biatch!) server setup doesn't fail if an initialization script fails. If the initialization script is moved into the application code itself, the server setup crumbles and takes your initialization stuff down with it. I pinged the maintainer, rustacian (god bless his soul), who informed me point blank that what I'm trying to do is not possible. Fuck it. I had to write a wrapper command for sequentially starting the server (or not starting it, if the initialization operations don't all succeed).
A legitimate case for reinventing the wheel! I restored my deleted decorators that did dependency sanitation for me at runtime. The remaining piece of the puzzle was a recursive file iterator to feed the decorators. I checked my file system reader for clues on how to implement one and boom! The one I'd written for two other features was compatible. All I had to do was refactor the decorators into dependency rules and give them fancy interfaces for customising and filtering which classes each rule should actually evaluate. In a night's work (if you discount how long writing the original sanitization decorators and directory iterator took), I had coupled the Deptrac/phparkitect library of my dreams. This is one of those few times I feel like a supreme deity.
Hope I can eat better and get some sleep. This meme is me after getting bounced by those three library rejections