Search - "apt-cache"
-
I'm a DevOps engineer. It's my job to understand why this type of shit is broken, and when I finally figure it out, I get so mad at bullshit players like AWS.
It's simple. Install Python3 from apt.
`apt-get update && apt-get install -y python3-dev`
I've done this thousands of times, and it just works.
Docker? Yup.
AWS AMI? Yup.
Automation? Nope.
WTF? Let's waste 2.5 hours this morning figuring out why.
In docker: `apt-cache policy python3-dev` shows us:
python3-dev:
  http://archive.ubuntu.com/ubuntu focal/main amd64 Packages
But on the AWS instance, we see we're reading from "http://us-east-1.ec2.archive.ubuntu.com/... focal/main" instead!
Ah, but why does it fail? AWS is just using a mirror, right? Not quite.
When the automation script runs, it's beating AWS to the apt mirror update! My instance, running on AWS, is trying to access the same archive.ubuntu.com that the Docker container used, and "python3-dev" was not a candidate for installation! WTF, Amazon? Shouldn't that just work, even if I'm not using your mirror?
So I try again, and again, and again. It works, on average, 1 out of every 5 times. I'm assuming this means we're seeing some strange shit configuration between EC2 racks, where some are configured to redirect archive.ubuntu.com to the EC2 mirror and others are configured to block it. I haven't dug that far into the issue yet, because by the time I can SSH into the machine after automation, the apt list has already received its blessed update from EC2.
Now I have to build a graceful delay into my automation while I wait for AWS to mangle, I mean "fix up" my apt sources list to their whim.
After completely blowing my allotted time on this task, I just shipped a "sleep" statement in my code. I feel so dirty. I'm going to go brew some more coffee to be okay with my life. Then figure out a proper wait statement.
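For posterity, the proper wait statement: on Ubuntu EC2 instances it's cloud-init that rewrites /etc/apt/sources.list to the regional mirror on first boot. Assuming that's what the automation is racing against, a sketch like this beats a blind sleep:

```
#!/usr/bin/env bash
# Sketch: block until cloud-init has finished first-boot setup
# (including rewriting the apt sources) instead of sleeping blindly.
set -euo pipefail

cloud-init status --wait   # returns once cloud-init reports done

# /etc/apt/sources.list should now be in its final, mirror-blessed state.
apt-get update && apt-get install -y python3-dev
```
-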
!rant
Me: sudo apt-get update
PC: Noope. There is a problem with a package.
Me: Ugh... ok, I'll fix it. *20 minutes later* Fixed. sudo apt-get update
PC: Noope, the package cache file is corrupted.
Me: GO FUCK YOURSELF LINUX OMFG.
Oh, I fixed it.
I LOVE YOU LINUX.
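In case anyone hits the same "package cache file is corrupted" error, the usual fix (a sketch, not gospel) is to throw away the cached lists and let apt rebuild them:

```
# Sketch: nuke apt's cached package lists and rebuild them from scratch.
sudo rm -rf /var/lib/apt/lists/*
sudo apt-get clean    # also drop any cached .deb archives
sudo apt-get update   # re-download fresh package lists
```
-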
>Installs NodeJS (from default Debian repo)
>Tries to install yarn
>Yarn tries to uninstall nodejs
Weird
>apt-cache depends yarn
yarn
Recommends: nodejs
Conflicts: nodejs
10/10, gave me a good chuckle. Time to add the NodeJS repo.
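For anyone following along, adding the upstream repo looks roughly like this (assuming NodeSource's setup script is still the route; pick whatever release you need):

```
# Sketch: add the NodeSource repo and install a nodejs that yarn won't fight with.
curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
sudo apt-get install -y nodejs
```
-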
Want to make someone's life a misery? Here's how.
Don't base your tech stack on any prior knowledge or what's relevant to the problem.
Instead design it around all the latest trends and badges you want to put on your resume because they're frequent key words on job postings.
Once your data goes in, you'll never get it out again. At best you'll be teased with little crumbs of data but never the whole.
I know, here's a genius idea: instead of putting data into a normal database and fronting it with a cache, let's put it all into the cache. And by the way, it's a volatile cache.
Here's an idea. For something as simple as a single log, let's make it use a queue that goes into a queue that goes into another queue that goes into another queue, all of which are black boxes. No rhyme or reason, queues are all the rage.
Have you tried: Lets use a new fangled tangle, trust me it's safe, INSERT BIG NAME HERE uses it.
Finally it all gets flushed down into this subterranean cunt of a sewerage system and good luck getting it all out again. It's like hell except it's all shitty instead of all fiery.
All I want is to export one table, a simple log table with a few GB to CSV or heck whatever generic format it supports, that's it.
So I run the export-table-to-file command and off it goes, only for timeouts to start piling up less than a minute later until it aborts. WTF. So then I set the most obvious timeout setting in the client, no change, then another timeout setting on the client, no change, then I try to put it in the client configuration file, no change, then I set the timeout on the export query, no change, then finally I bump the timeouts in the server config, no change. Then I find someone has downloaded it from both tucows and apt, but they're using the tucows version, so its real config is in /dev/database.xml (don't even ask). I increase that from seconds to a minute; it's still timing out after a minute.
In the end I have to make my own and this involves working out how to parse non-standard binary formatted data structures. It's the umpteenth time I have had to do this.
These aren't some no-name solutions, and it really terrifies me. All this is doing is taking some access logs, storing them in one place, then indexing by timestamp. These things are all meant to be blazing fast, but grep is often faster. How the hell is such a trivial thing turned into a series of one nightmare after another? Things that should take a few minutes take days of screwing around. I don't have access logs any more because I can't access them anymore.
The terror of this isn't that it's so awful, it's that all the little kiddies doing all this jazz for the first time with these shit-wipe, buzzword-driven approaches have no fucking clue it's not meant to be this difficult. I'm replacing entire enterprise systems of tens of thousands to millions of lines with a few hundred lines of code that's faster, more reliable, and better in virtually every measurable way, time and time again.
This is constant. It's not one offender, it's not one project, it's not one company, it's not one developer; it's the industry standard. It's all over open source software and all over dev shops. Everything is becoming exponentially more bloated and difficult than it needs to be. I'm seeing people spin up a hundred cloud instances for things that would be happy at home after a few minutes to a week of optimisation effort. Queries that are O(N*N) and would only take a few minutes to turn into O(log N), but instead people rent a fucking off-huge SQL cluster that not only costs gobs of money but takes a ton of time to maintain and configure, which isn't going to be done right either.
I think most people are bullshitting when they say they have impostor syndrome, but when the trend in technology is to make every fucking little trivial thing a thousand times more complex than it has to be, I can see how they'd feel that way. There's so bloody much you have to do these days that you shouldn't have to do, so you either can't get anything done right or the smallest thing takes an age.
I have no idea why some people put up with some of these appliances. If you bought a dish washer that made washing dishes even harder than it was before you'd return it to the store.
Every time I see the terms enterprise, fast, big data, scalable, cloud or anything of the like I bang my head on the table. One of these days I'm going to lose my fucking tits.
-
I think I made someone angry, then sad, then depressed.
I usually shrink a VM before archiving it, to keep a backup snapshot as a template. So Workflow: prepare, test, shrink, backup -> template, document.
Shrinking means... Resetting root user to /etc/skel, deleting history, deleting caches, deleting logs, zeroing out free HD space, shutdown.
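(For the curious, a minimal sketch of what such a shrink script does; paths assumed, adjust to taste:)

```
#!/usr/bin/env bash
# Sketch of the VM "shrink" before templating: reset root, drop caches,
# logs and history, zero free space so the image compresses, power off.
set -u

cp -a /etc/skel/. /root/                               # reset root to /etc/skel defaults
rm -rf /var/cache/apt/archives/* /var/lib/apt/lists/*  # delete apt caches
find /var/log -type f -exec truncate -s 0 {} \;        # empty all log files
cat /dev/zero > /zero.fill 2>/dev/null; sync           # fill free space with zeros
rm -f /zero.fill                                       # ...then release it
rm -f /root/.bash_history                              # delete shell history
shutdown -h now
```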
A coworker wanted to prep a VM for Docker (stuff he's experienced with, not me) so we could roll out the template en masse for migration, after I converted his steps into Ansible or into the template.
I gave him SSH access, explained the usual stuff and explained in detail the shrinking part (which is a script that must be explicitly called and has a confirmation dialog).
Weeeeellll. Then I had a lil meeting, then the postman came, then someone called.
I had... Around 30 private messages afterwards...
- it took him ~ 15 minutes to figure out that the APT cache was removed, so searching won't work
- setting up APT lists by copy pasta is hard as root when sudo is missing....
- seems like he only uses aliases; since root was reset to the default skel, none of the aliases from his "private home" were there
- Well... VIM was missing, as I hate VIM (personal preferences xD)... Which made him cry.
- He somehow achieved to get docker working as "it should" (read: working like he expects it, but that's not my beer).
While reading all this -sometimes very whiney- crap, I went to the fridge and got a beer.
The last part was golden.
He explicitly called the shrink script.
And guess what, after a reboot... History was gone.
And the last message said:
Why did the script delete the history? How should I write the documentation? I dunno what I did!
*sigh* I expected the worst, got the worst, and a good laugh in the end.
Guess I'll be babysitting someone tomorrow who's clearly unable to think for himself and/or listen....
Yay... 4h plus phone calls. *cries internally*
-
tl;dr - install Pop!_OS and try it out if you haven't yet, it's pretty damn good!
Heavy Micro$haft user here; I have tried using Ubuntu a bunch of times in the past and fucking regretted it every time. Ran into issues with stupid shit like the apt cache growing until the drive was full, or the system Python getting borked.
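(For what it's worth, if the apt cache ever eats a drive again, flushing it is a one-liner or two; a sketch:)

```
# Sketch: check how big apt's cache of downloaded .debs is, then flush it.
du -sh /var/cache/apt/archives
sudo apt-get clean      # delete all cached package files
sudo apt-get autoclean  # or: only delete ones no longer downloadable
```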
To be fair, I’m 120% certain my dumb-assery is what caused the problems. I’m definitely not trying to blame the OS. But my experience was shitty, even if it was at my own hands lol.
Started playing around with Pop!_OS from the System76 team, and I'm seriously in freakin' love with this OS. It's clean, it's performant, and it feels way less buggy, or just more stable somehow. I know it's based on Ubuntu, but I've had a great time so far using it. I've got ansible, docker, aws toolkit, aws cli, sam-cli, vscode, dynamodb-local, serverless, npm, and brew installed, and I'm working on steam now.
Everything has been a breeze, and again, the system feels really fast and snappy. It feels a lot like a Mac on the smoothness scale, but snappy like a Windows box with beefy hardware specs.
I’m still just in the testing phase on a VM, but I’m seriously thinking about blowing away my windows install for Pop!_os.
(I’ll try arch someday when I’m up for some hardcore masochism)8 -
I just set up an apt cache on my MacBook. Docker no longer takes 10 years to `apt-get install` when I'm at the coffee shop. This coffee shop is going to lose so much money now that my work is done faster.
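In case anyone wants to replicate it, one way (assuming Docker for Mac and the sameersbn/apt-cacher-ng image) is to run apt-cacher-ng locally and point the containers' apt at it:

```
# Sketch: run an apt-cacher-ng proxy on the laptop, cache persisted in a volume.
docker run -d --name apt-cacher -p 3142:3142 \
    -v apt-cacher-data:/var/cache/apt-cacher-ng \
    sameersbn/apt-cacher-ng

# Then, inside any container or Dockerfile, route apt through the proxy.
# host.docker.internal resolves to the host on Docker for Mac.
echo 'Acquire::http::Proxy "http://host.docker.internal:3142";' \
    > /etc/apt/apt.conf.d/01proxy
apt-get update   # first run fills the cache; repeat runs are served locally
```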