Search - "cron"
-
*Now that's what I call a Hacker*
MOTHER OF ALL AUTOMATIONS
This seems like a long post, but you will definitely +1 it after reading this.
xxx: OK, so, our build engineer has left for another company. The dude was literally living inside the terminal. You know, that type of guy who loves Vim, creates diagrams in Dot and writes wiki-posts in Markdown... If something - anything - requires more than 90 seconds of his time, he writes a script to automate it.
xxx: So we're sitting here, looking through his, uhm, "legacy"
xxx: You're gonna love this
xxx: smack-my-bitch-up.sh - sends a text message "late at work" to his wife (apparently). Automatically picks reasons from an array of strings, randomly. Runs inside a cron-job. The job fires if there are active SSH-sessions on the server after 9pm with his login.
xxx: kumar-asshole.sh - scans the inbox for emails from "Kumar" (a DBA at our client's). Looks for keywords like "help", "trouble", "sorry" etc. If keywords are found - the script SSHes into the client's server and rolls back the staging database to the latest backup. Then sends a reply "no worries mate, be careful next time".
xxx: hangover.sh - another cron-job that is set to specific dates. Sends automated emails like "not feeling well/gonna work from home" etc. Adds a random "reason" from another predefined array of strings. Fires if there are no interactive sessions on the server at 8:45am.
xxx: (and the oscar goes to) fuckingcoffee.sh - this one waits exactly 17 seconds (!), then opens an SSH session to our coffee machine (we had no frikin idea the coffee machine is on the network, runs linux and has SSHD up and running) and sends some weird gibberish to it. Looks binary. Turns out this thing starts brewing a mid-sized half-caf latte and waits another 24 (!) seconds before pouring it into a cup. The timing is exactly how long it takes to walk to the machine from the dude's desk.
xxx: holy sh*t I'm keeping those
Credit: http://bit.ly/1jcTuTT
The bash scripts weren't bogus; you can find them at this GitHub URL:
https://github.com/narkoz/...
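Out of curiosity, the trigger logic of that first one is only a few lines of bash. Here's a sketch reconstructed purely from the description (the username, phone number and SMS gateway are all made up; the real scripts live in the repo above):

#!/bin/bash
# late-at-work.sh - text the wife if there's still an active SSH session after 9pm
REASONS=("still fighting the build" "prod is on fire, sorry" "migrating the cluster")
if who | grep -q '^builddude '; then
  REASON=${REASONS[$((RANDOM % ${#REASONS[@]}))]}
  curl -s https://sms-gateway.example.com/send \
    --data-urlencode "to=+15550001111" \
    --data-urlencode "msg=Late at work. ${REASON}"
fi
# crontab entry, e.g.: */15 21-23 * * * /opt/scripts/late-at-work.sh
-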
An entirely typical exchange at work:
PM: How long would it take to build an application that collates Gubblefluffs and exports them as a PDF?
ME: Hard to say. What’s a Gubblefluff?
PM: Nothing complex. It's basically an object with some stuff in.
ME: Erm, okay. So I'll define a Gubblefluff object plus methods to add, edit and delete, then for each Gubblefluff have it write a line to a PDF.
PM: It will need to email that PDF to somebody.
ME: Okay, cool. “Gubblefluffs-by-email” should take about a day.
6 hours later…
ME: I’ve done Gubblefluffs-to-pdf, I’m not clear on what’s in a Gubblefluff but I’ve made it flexible so it can take almost anything.
PM: No, a Gubblefluff can ONLY be one of 4 Snigglefingers plus a timestamp and some JSON.
ME: What? Right. Okay. What’s a Snigglefinger?
PM: (sighs) A Snigglefinger is the collection of relevant Babelsets.
ME: Babelsets?
PM: Yeah, a user can have any number of Babelsets but they must correspond to one of the four types of Snigglefingers.
ME: There are users!?
PM: Of course!
ME: But I’ve not coded anything for users.
PM: Shit. I’ve told the client they can have it today. How long to add in users?
ME: And Babelsets, and Snigglefingers and the new Gubblefluff rules?
PM: Yeah.
6 days later…
ME: This is done now. It’s a beast but it works. Who should it email the PDFs to?
PM: Client X, plus cc to Y and bcc to Z.
ME: What? It doesn't support CC and BCC!
1 hour later…
ME: This is done. I’ve tested it and sent you a copy of the PDF it generates.
PM: Okay thanks. Is the cron running daily?
ME: What cron?
…
ME: Okay, so the cron’s running once a day at 8pm.
PM: Oh, it’ll need to be at 3:15pm. That’s when we’ve told the client they’ll get it.
ME: Right. I’ll change it...
PM: Also, the PDF you sent me looks nothing like the visual.
ME: What visual?
...
-
boss' revenge
So here https://devrant.com/rants/1349878/... I posted a prank played on my boss. For 3 days I'd been freaking out over what boss would do as revenge (checking env and alias every time I log in). Then yesterday his revenge happened.
I was testing my programs & sometimes a program would run but sometimes it'd get a segmentation fault. Seemed random at first but then I saw a pattern... every time I got a segmentation fault and ran it again, it would be fine. Checked alias... nothing, /etc/crontab, env, ps -ef... nothing seemed off, cksum of my binary... correct. Fuck! "What did my boss do?" I asked myself. Finally .5hrs later I saw an entry in my id's crontab, but then 1min later it was gone from my crontab
From there figured out how boss did it:
1) He replaced ntpd with his C program that runs in background creating an entry in my crontab every few mins
2) The entry in my crontab set to run /foobar/ulittleprick.sh every 2mins
3) ulittleprick.sh picks a random binary owned by me, renames binary.name to .binary.name.nitwit and creates a script named binary.name
4) Then ulittleprick.sh removes itself from cron
What does the generated binary.name script do? Sleeps for 2 secs, echoes "Segmentation fault", then renames .binary.name.nitwit back to binary.name. It even exits with status 139! I want to cry! Worst part is the comment on the 2nd line of ulittleprick.sh... kill me now
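For the curious, the generated fake-binary script only needs a handful of lines. A sketch pieced together from the description (names and paths simplified):

#!/bin/bash
# dropped in place of "binary.name": fake a crash, then restore the real binary
sleep 2
echo "Segmentation fault" >&2
me=$(basename "$0"); dir=$(dirname "$0")
mv "${dir}/.${me}.nitwit" "${dir}/${me}"
exit 139   # 128 + SIGSEGV(11), the same status a real segfault produces
-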
A guy on another team who is regarded by non-programmers as a genius wrote a python script that goes out to thousands of our appliances, collects information, compiles it, and presents it in a kinda sorta readable, but completely non-transferable format. It takes about 25 minutes to run, and he runs it himself every morning. He comes in early to run it before his team's standup.
I wanted to use that data for apps I wrote, but his impossible format made that impractical, so I took apart his code, rewrote it in perl, replaced all the outrageous hard-coded root passwords with public keys, and added concurrency features. My script dumps the data into a memory-resident backend, and my filterable, sortable, taggable web "frontend" (very generous nomenclature) presents the data in html, csv, and json. Compared to the genius's 25 minute script that he runs himself in the morning, mine runs in about 45 seconds, and runs automatically in cron every two hours.
Optimized!
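The concurrency part is where most of the speedup comes from, and it's not much code. A rough sketch of the idea (the host list, key and remote command are placeholders):

# fan out to all appliances at once instead of polling them one by one
while read -r host; do
  ssh -i ~/.ssh/collector_key "collector@${host}" 'gather-stats' > "out/${host}.json" &
done < appliances.txt
wait   # ~45 seconds of wall clock instead of a 25-minute serial crawl
# cron: 0 */2 * * * /opt/collector/collect.sh
-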
Probably the biggest one in my life.
TL;DR at the bottom
A client wanted to create an online retirement calculator. Sounds easy enough, I said sure.
A few days later I got an email with an excel file saying the online version has to work exactly like this, and they're on a tight deadline.
Having a little experience with excel, I thought, eh, what could possibly go wrong; if anything I can lift the calculations straight from the excel file.
I WAS WRONG !!!
17 Sheets, Linking each other, Passing data to each sheet to make the calculation
( Sure they had lot of stuff to calculate, like age, gender, financial group etc etc )
First thing I said to myself was, WHAT THE FREAKING FUCK IS THIS?, WHAT YEAR IS THIS?
After messing with it for a couple of hours just to get one calculation out of it, I gave up
Thought about making a mysql database with the cell data and making the calculations, but NOOOO.
Whoever made it decided to put an excel calculation in each cell ( so even if I managed to get it into a database and recode all the calculations it would be wayyy past the deadline )
Then I had an epiphany
"What if I could just parse the excel file and get the data?"
Did a bit of research; sure enough, there's a php project
( But I think it was outdated, takes about 15-25 seconds to parse, and makes a copy of the original file )
But this seemed like the best option at the time.
So downloaded the library, finished the whole thing, wrote a cron job to delete temporary files, and added a loading spinner for that delay, so people know something is happening
( and had a few days to spare )
Sent the demo link to the client; they were very happy with it, cause it worked the same as their cute little excel file and gave the same result,
It's been live on their website for almost a year now, lots of submissions, no complaints
I was feeling a bit guilty just after finishing it, cause I could've done better, but not anymore
Sorry for making it so long; to understand the whole thing, you need to know the full story.
TL;DR - Replicated the functionality of a 17-sheet excel calculator in php, hack-ishly.
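That temp-file cleanup cron can be a single line, for what it's worth. A sketch (the path and retention period are guesses):

# nightly at 03:00: drop parser temp copies older than a day
0 3 * * * find /var/www/app/tmp -type f -mtime +0 -delete
-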
So that high level prank from yesterday.
Senior Linux engineer, the fucker.
He somehow installed shitloads of cron jobs onto my system.
Every few minutes it would create a new user with a freaking complicated password. Then it would install openssh server in case it wasn't installed yet. After that it'd set all iptables rules to allow incoming AND outgoing connections on port 22.
That was one badass ansible script though!
I'm not sure what more there's to it because sometimes when I removed crons, they'd magically appear again later AND I forgot to check the boot scripts, so I might be fucked again when I get to work today!
Plus side, I finally fully understand cron 😅
-
Every Unix command eventually becomes an internet service.
grep -> Google
rsync -> Dropbox
man -> Stack Overflow
cron -> IFTTT
-
Pranks again today. Mother of God the level of those pranks is becoming high as fuck.
Define high?
Having to debug shit at system (cron, firewalling, users, sometimes even digging through logs/dmesg) level because weird shit happens all day long.
This is upping my Linux skills a lot though! I love it 😍
-
***Interviewing potential sys admins so us devs don't have to build everything and run everything***
Coworker: Do you know how to use cron and cron jobs?
Candidate: Yes I'm familiar with setting up users and permissions.
Me: 😳
Coworker: 😳
Boss: We will give you a call have a good day.
If you had just admitted you didn't know, but we thought you could learn, we might have been open to teaching you. But brazenly acting like you know something when you don't is dangerous if you're running a multi-thousand-user production system.
-
More Unix commands are becoming web services. What else can you think of?
grep -> Google
rsync -> Dropbox
man -> Stack Overflow
cron -> IFTTT
-
I now know another person's password without even wanting to.
He was sitting in the row in front of me, logging into our course page and then *brrrrraaaaapppp* - ran his index finger along the top number row and hit enter.
1234567890
I don't even know what to say.
-
So someone on my team decided to create a cron job that auto-sends the client an email saying "GO FUCK YOURSELF", set up in the client's Hostinger (shared hosting) account.
My friend, why?
-
I've got a confession to make.
A while ago I refurbished this old laptop for someone, and ended up installing Bodhi on it. While I was installing it however, I did have some wicked thoughts..
What if I could ensure that the system remains up-to-date by running an updater script in a daily cron job? That may cause the system to go unstable, but at least it'd be up-to-date. Windows Update for Linux.
What if I could ensure that the system remains protected from malware by periodically logging into it and checking up, and siphoning out potential malware code? The network proximity that's required for direct communication could be achieved by offering them free access to one of my VPN servers, in the name of security or something like that. Permanent remote access, in the name of security. I'm not sure if Windows has this.
What if I could ensure that the system remains in good integrity by disabling the user from accessing root privileges, and having them ask me when they want to install a piece of software? That'd make the system quite secure, with the only penetration surface now being kernel exploits. But it'd significantly limit what my target user could do with their own machine.
In the end I discarded all of these thoughts, because it'd be too much work to implement and maintain, and it'd be really unethical. I felt filthy for even thinking about these things. But the advantages of something like this - especially automated updates, which are a real issue on my servers, where I tend to forget to apply them within a couple of weeks - can't just be disregarded. Perhaps Microsoft is on to something?
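For reference, the "Windows Update for Linux" part really is one cron line on a Debian/Ubuntu-based system like Bodhi. A sketch, with all the stability caveats above (this would sit in root's crontab):

# daily unattended upgrade; convenient, occasionally explosive
@daily apt-get update && apt-get -y upgrade >> /var/log/auto-update.log 2>&1
-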
So I looked at our dashboard and noticed a banner mentioning scheduled maintenance set for 7:00 AM. And I thought to myself, "I never released an update, and even if I had, the maintenance would be performed 15 minutes after the build finished, not at 7:00 AM." So I emailed my coworkers, asking if they had put up the banner: no, no. I started pulling my hair out trying to figure out what caused this banner to be created. Was there some old job that was just now running? I combed through the server logs, thousands of entries later, and I found the banner was installed by some user with the IP 172.18.0.1...which was the local machine. I went through all the users on the system, running atq to see if anyone had jobs scheduled. And there was one job scheduled, under the root user. At that moment, I legit thought to myself, "have we been hacked? How is that possible?" It wasn't! Then I looked under /var/spool/atjobs to see what the job actually was. And then I saw it. My weekly updater cron job had installed updates and had scheduled a maintenance window to reboot the system. And I smiled, realizing that my code was now sentient.
-
Just set a cron on a coworker's machine to play "What does the fox say" at max volume at 8 when he's the only one here.
May need to review the security footage in the morning.
-
Alright, so my previous rant got a way better response than I expected! (https://devrant.io/rants/832897)
Hereby the first project that I cannot seem to get started on too badly :/.
DISCLAIMER: I AM NOT PROMOTING PIRACY, I JUST CAN'T FIND A SUITABLE SERVICE WHICH HAS ALL THE MUSIC I WANT. I REGULARLY BUY ALBUMS. Before everyone starts to go batshit crazy regarding piracy: this is legal in The Netherlands for personal use. I think that supporting the artists you love is very good and I actually regularly pay for albums and so on, but:
- I want all the music from about every artist in my scene. Either on Deezer or on Spotify this is not available and I'm not gonna get them both (they both have about half of the music I want). Their services are awesome but I'm not going to pay for something if I can't listen to all the music I like, hell even some artists (on deezer mostly) only have half their music on there and it's mostly not better on Spotify.
I'd happily buy all albums because I love supporting the artists I love, but buying everything is just way too fucking much. "Get a premium music streaming subscription!" - see the first point.
You can either agree or disagree with me but that's not what this rant is about so here we go:
The idea is to create a commandline program (basically only needs to be called by a cron job every day or so) which will check your favourite youtube (sorry, haven't found a suitable non-google youtube replacement yet) channels every day through a cronjob and look for new uploads. If there are, it will download them, convert them to MP3 or whatever music format you'd like and place them in the right folder. Example with a favourite artist of mine:
1. Script checks if there are any new uploads from Gearbox Digital (underground raw hardstyle label).
2. Script detects two new uploads.
3. Script downloads the files (I managed to get that done through the (linux only or also mac?) youtube-dl software) and converts them to mp3 in my case (through FFMPEG maybe?).
4. Script copies them to the music library folder but then the specific sub-folder for Gearbox Digital in this case.
You should be able to put as many channels in there as you want, I've tried this with the official YouTube Data API which worked pretty fine tbh (the data gathering through that API). The ideal case would be to work without API as youtube-dl and youtube-dlg do. This is just too complicated for me :).
So, thoughts?
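For what it's worth, youtube-dl can already cover steps 1-4 on its own. A sketch (the channel URL, paths and schedule are placeholders):

# fetch-music.sh: grab new uploads as mp3, skipping anything already archived
youtube-dl -x --audio-format mp3 \
  --download-archive /music/.archive \
  -o "/music/Gearbox Digital/%(title)s.%(ext)s" \
  "https://www.youtube.com/user/GearboxDigital"

# crontab, nightly at 02:00:
0 2 * * * /home/me/fetch-music.sh
-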
So today was the worst day of my whole (just started) career.
We have a huge client, like 700k users. Two weeks ago we migrated all their services to our aws infrastructure. I basically did most of the work because I'm the most skilled in it (not sure anymore).
Today I discovered:
- The mail cron was configured the wrong way, so 3000 emails were waiting to be sent.
- The elastic search service wasn't yet whitelisted, so it didn't work for two weeks.
- The cron which syncs data between the production db and testing db only partly worked.
Just fucking end me. Makes me wonder what other things are broken. I still have a lot to learn... And I might have fucked their trust in me for a bit.
-
Just tested my GPU code vs my non-GPU code.
It's a simple game of life implementation. My test is on an 80 x 40 grid running for 100,000 cycles.
The normal code took 117 seconds.
The CUDA code took 2 seconds.
Holy fuck this is terrifying.
-
mysql server crashes every 18 days: no oom, no crash logs, no sigkill being sent (used auditd). So I figure it's an unknown corner-case bug in mysql. Now I use a cron job to restart the damn thing every week at 3am, not a problem anymore
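The workaround in crontab form, roughly (the service name varies by distro):

# every Monday at 03:00, bounce mysql before it can fall over on its own
0 3 * * 1 systemctl restart mysql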
-
I can't figure out what's worse in the new system we got handed over from a client:
1. that they have an every-minute cron job running "sudo chmod -R /var 777"
2. that the backup of their database hasn't worked since 2014 because the S3 bucket is full
3. that it's written in PHP, by one guy, who didn't know PHP when he started work on it (all MYSQL calls are string concats, etc.)
-
Official Release of Wikipedia Bot is Here
How to Use: @wiki <Query>
Runs on a cron job, every minute.
If there is literally no Wikipedia page for the query, it still gets a relevant result.
Demo in Comments.
-
Ngl I'm glad I found this community full of people who complain. I'm not even joking, I'm legitimately glad that I can complain without being reprimanded lol
-
me: I will major in CS so I can work with computers, not people
narrator: But little did he know...
-
So we ordered a piece of software from an external software house because I was low on time and we needed it asap.
So. Long story short, their software was bugged as hell. They deny all the bugs, they have their BDD that they'd done, and anything we say about it, like "feature XYZ is broken on firefox", they will deny it "because it wasn't in the BDD" or "let's get on a call" (in which +- 6-7 people participate from their side and we of course have to pay them for this...)
So they fixed like 20% of the bugs (mostly trivials/minors). The application is fairly small in scope: integration with like 3 endpoints on an arbitrary API, user registration/login, a few things to do in the database (mainly math running from cron).
They did it in ASP, so I don't know the language and environment and can't just fix it myself.
2 days ago (monday) they annoyed me to the point where I just started to break things. For starters I found that every numeric input is vulnerable to integer overflow (which is a blocker). I figured most of the fields are a perfect opportunity for XSS (but I didn't bother to do JS... anything but JS...). I figured I can embed anything in HTML into my name/surname/phone (none validated)...
So for now we have around 25 bugs, around 15 of them are blockers.
They figured it's somehow our fault that it's bugged and decided to do a demo with us to show off how perfectly it works. I'm happy to break their demos. I figured I'd register a bunch of users whose name is an image with fixed/absolute position top:0;left:0 width/height 100% - this will effectively brick the admin panel
Also I figured I can add some additional sounds in the background because why not. And I just don't know what to put in. It links to my server for now so I can freely change the content of the bricked admin panel.
I have curls ready to execute in case they reset the database.
I can put in GIFs or heck, even videos, doesn't really matter. The framework escapes some things for them, so at least that. But audio/image/video works.
Now I have 2 questions:
- what image + audio combo will work the best (of course we need to keep it civil). I'm thinking of finding some meme with bugs, or maybe a nuclear logo image with some siren sound
- am I an evil person?
Edit:
I haven't stated this clearly:
"There is no BDD that describes that if a user inserts malicious input the server should deny it" - that's almost literally what we get from them....
-
Self induced devRant rage.
The Setting:
I like to pull the "recent" feed up and put it on a third of one of my monitors staying out of the way, but letting me glance over to see new posts or new notifications without depending on the iOS app.
So I put this page on a 5 minute cron reload, so the feed and notifs will update.
Then I get a notification and go to the rant and wrote out like a long ass comment to somebody.
And as some of you have already guessed, just as I was about to post this long ass comment the five minute cron timer ran out and my fucking browser page reloaded.
TL;DR ~ I have played myself.
-
Whatever you do, don't do this.
- a sleep for 300s to avoid full 24 hour rollover (lol)
- sleep 1d instead of cron; so at random times emails will be sent out or they won't at all
- this is a laravel project, there's a thing called task scheduling: https://laravel.com/docs/5.8/...
Git blame: https://github.com/invoiceninja/...
The actual core project docs at least tell you to set up a cron, though not via * * * * * nor task scheduling, which isn't as much of an issue though as their dogshit docker compose: https://invoice-ninja.readthedocs.io/...
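For context, the cron entry Laravel's task scheduling docs ask for is a single line; the framework then works out internally which jobs are due:

* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1
-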
And this, ladies and gentlemen, is why you need properly tested backups!
TL;DR: user blocked on old gitlab instance cascade deleted all projects the user was set as owner.
So, at my customer, colleague "j" reviews gitlab users and groups, and notices a user who left the organisation
"j" : ill block this user
> "j" blocks user
> minutes pass away, working, minding our own business
> a wild team devops leader "k" appears
k: where are all the git projects?
> waitwut?.jpg
> k: yeah all git projects where user was owner of, are deleted
> j.feeling.despair() ; me.feeling.despair();
> checks logs on server, notices it cascade-deleted all projects tied to that user
> lmgt log line
> it's a bug report from 3(!) years ago
> gitlab hasn't been updated in 3 years
> gitlab system owner is not present, backup contact doesnt know shit about it
> I investigate further: no daily backup cron tasks, no backup has been made whatsoever.
> only 'backups' are on file system level, trying to restore those
> gitlab requires restore of postgres db
> backup does not contain postgres since the backup product does not support that (wtf???)
> fubar.scene
> filesystem restore finished...
> backup product did not back up all files from the git tree, like none of the refs were stored since the product cannot handle such filenames .. Git repos completely broken
Fuck my life
-
Well, for starters there was a cron to restart the webserver every morning.
The product was 10+ years old and written in PHP 5.3 at the time.
Another cron was running every 15 minutes, to "correct" data in the DB. Just regular data, not from an import or something.
Gotta have one of those self-healing systems I guess.
Yet another cron (there where lots) did run everyday from 02:00 to 4ish to generate the newest xlsx report. Almost took out the entire thing every time. MySQL 100%. CPU? Yes. RAM? You bet.
Lucky I wasn't too much involved at the time. But man, that thing was the definition of legacy.
Fun fact: every request was performed twice! First request gave the already logged-in client an unique access-token. Second request then processed the request with the (just issued) access-token; which was then discarded. Security I guess.
I don't know why it was built this way. It just was. I didn't ask. I didn't want to know. Some things are better left undisturbed. Just don't anger the machine. I became superstitious for a while. I think, in the end, it helped a bit: it feels like communicating with an alien monster but all you have is a trumpet and chewing gum. Gentle does it.
Oh and "Sencha Extjs 3" almost gave me PTSD lol (it's an ancient JS framework). Followed by SOAPs WSDL cache. And a million other things.5 -
My coworker doesn't know how to use a terminal. He talked himself into his position and instead of taking the time to learn about the basic commands he keeps asking someone else (including the team manager, who's actually a software engineer) to do things for him.
For reference: we need the terminal to tail log files, keep track of processes, cron jobs, manipulate file structures, use scp (I use sshfs) to move things between other workstations and servers etc. Being able to use a terminal is one of the basic requirements for our job.
What.
Why.
How.
Why do people do this?
-
Set up a mail server for our office using one of those all-in-one mail server installations. It would keep denying connections every few days, so me, being the inexperienced shit I was, set up a cron job to restart the server at midnight every day. It worked and, to this day, it still works.
-
We're digital plumbers.
90% of this job is figuring out what thing to connect to what thing and then figuring out how to connect them.
Writing the code that goes in-between both ends of the pipe is easy if not trivial 90% of the time.
Meaningful change in this industry is centered around endpoints: contracts, deployments, etc. Nobody needs yet another way to organize and import their leftpad().
-
Fuck this
I get to work with an API where you CAN authenticate with username/password and get a token
But you CAN'T get user info from the token (the auth response contains ONLY the token)
So what I have to do:
1. Get token
2. Request ALL FUCKING USERS and load them into my DB
3. Search through local DB by username and, yeah, here I go
Now I need to have a cron job to update the user DB 1-2 times per day
I can't think of ANY reason not to allow this
-
Typos kill, kids! And deploying to production.
Instead of "for item in items" in my script, I accidentally did "for items in items". Thus, an exponential loop has been entering things into the database for the past few hours before I found the place to fix it.
By the way, this runs on cron every minute. So there are processes still running exponentially right now, possibly 180+.
Yeah, I'm setting up a test server instead now.
-
Going to create a couple of cron jobs that delete stuff from the database.
What could possibly go wrong? ;)
-
First rant (well, first on devRant anyway)! So, I'm working on a project to refactor a decade-old codebase so that all references to ip addresses are in config. It sucks. But I did find an ascii art fish in an old cron script, so that's something.
-
This was about 3 years ago. I’m on vacation and just getting off the plane, when my boss calls me on his cellphone. Apparently the crontab on our main file upload server had gotten nuked, and he was asking if there were any backups.
A word about this server. I work with video, so this thing is doing a few gigabits of traffic incoming at any moment. The cron jobs are necessary to move and organize these massive files into a sane scheme for processing. Hundreds of drop folders receiving thousands of files, resulting in terabytes of data every single day. Our storage vendor tells us we have the third largest deployment they know about.
No cron jobs mean all of this content is just sitting around piling up. I tell him sorry, try contacting $otherAdmin since he’s more familiar with that system.
A few days later, after the vacation, I come back in. $boss and $otherAdmin have reconstructed the crontab from scratch after an all nighter.
I ask how it got deleted.
$boss was training some people how to set up new customers on this file server, and he told the trainees to open the crontab in read-only mode. One of them ran:
crontab -r
Yes, we back up our crontabs now.
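Cheap insurance against a repeat, sketched as one more cron line (the backup path is arbitrary):

# hourly: dump my crontab somewhere safe, so -r is never fatal again
0 * * * * crontab -l > "/var/backups/crontab.$(whoami)"
-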
05 13 * * * export DISPLAY=:0 && brave-browser https://www.swiggy.com/
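# fields: minute hour day-of-month month weekday, i.e. this fires at 13:05 every day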
Cron to make sure I eat on time. Swiggy is a food delivery service.
-
I hate time.
Yes, that dimension which unidirectionally rushes by and makes us miss deadlines.
Also yes, that object in most programming languages which chokes to death on formatting conversions, timezones, DST transitions and leap seconds.
But above all, I hate doing chronological things from the point of view of code, because it always involves scheduling and polling of some kind, through cron jobs and queues with workers.
When the web of actions dependent on predicted future and past events becomes complicated, the queries become heavy... and with slow queries, queues might lock or get delayed just a little bit...
So you start caching things in faster places, figure out ways to predict worker/thread priorities and improve scheduling algorithms.
But then you start worrying about cache warming and cascading, about hashing results and flushing data, about keeping all those truths in sync...
I had a nightmare last night.
I was a watchmaker, and I had to fix a giant ticking watch, forced to run like a mouse while poking at gears.
I fucking need a break. But time ticks on...
-
21 lines of business logic (including whitespace and comments)
9 lines of build config
386 lines of tests (and tests for the tests)
-
POV: You fixed a hairy bug at a FAANG as a:
L3: "You're a technical wizard!"
L4: "Good job."
L5: "Why did you allow this into your team's code base?"
L6: "Why didn't you delegate that?"1 -
I've fucking had it with youtube, fucking jizz slapping knob butlers. I'm going to setup a mirror on my server, the idea is:
- Setup a youtube-dl cron that fetches multiple times a day both audio and video versions of the music playlist I have, hopefully with some sort of progress tracking of each download and total, so I could check if it has run successfully and have a nice dashboard, might need to do that myself (except if compactd proves itself to manage that all)
- Need to figure out a way to download the "best" quality but not go beyond 1080p, since if some videos for some reason are uploaded @ 4k, that'll be a waste of space
- Have Compactd/Funkwhale/Koel as the music player frontend for the audio version of the files, preferably one of them should offer download of the files too, so I could have a similar setup to spotify, though I could probably also just have some filebrowser installed or have a password protected index.
- Not sure what to use for the video versions, since sometimes the video goes with the music; plex? emby? suggestions are welcome
- Saw somebody (ab)using google drive as their backup for all the music they download, so I want to set up something similar, rsyncing all videos and music to some account, so in case shit majorly hits the fan, I can just download everything back
-
Schrödinger's cron: when a cronjob does not appear to be running and you turn email on, it runs fine. Then you turn email off and it doesn't appear to be running anymore...
-
So I was bitching about some cron task a few days before
and guess the issue
The fucking client just shut down the server by himself for the night, and the cron job was scheduled for night time :(
I fucking wasted 2-3 days debugging the function, wondering what was wrong with it, until one day I checked and the server was shut down at night :(
-
Quite happy with myself: I just made a 60-line backup script that takes in a super simple JSON file and backs up your files in multiple formats. Attached to a cron job and boom, a 60-line home-baked backup solution
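The gist of it fits in a few lines for anyone curious. A sketch using jq, assuming a made-up config shape like {"targets": ["/home/me/docs"]}:

# backup.sh: tar up every path listed in backup.json
jq -r '.targets[]' backup.json | while read -r src; do
  tar czf "/backups/$(basename "$src")-$(date +%F).tar.gz" "$src"
done
# crontab: 0 1 * * * /home/me/backup.sh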
-
Hi! I'm new in freelancing. I've created a program that scrapes data from a website, parses it, runs DB queries, and emails the prepared data to the customer for whom I've created this program. The whole program is written in PHP and uses a MySQL table. There's almost no front-end, it's just like an automated background process that runs with a cron job. I've bought and set up a domain and hosting for them (my customer paid it all). I got the core part of the program running after ~2 days, and it took me ~a week to complete the project including adding features and the testing phase. Now, I'd like to know, how much does this kind of project cost? The business operates in Silicon Valley.
-
Every Unix command eventually becomes an internet service.
grep -> Google
rsync -> Dropbox
man -> Stack Overflow
cron -> IFTTT
Anything more you can think of?
-
-Writes a function that I'm going to schedule for django.
-works in development.
-adds it to production cron using django-crontab
-not working.
-spends 3 hours editing code, searching for similar problems and reading documentation, but finds nothing wrong and it's still not working.
-maybe it's django-crontab so I decide to just write a custom management command and call it through cron.
-still not working.
-calls function using what I'm telling cron to do.
-everything works.
-?????????
-adds logs to the cron command (sorry for not adding them earlier)
-mfw the code is not working because I imported 'patterns' in urls.py, which has been deprecated since django 1.8
-
#!/usr/bin/rant
So, we are a web development and marketing agency. That's fine... except now it seems that we are a marketing and web development agency. Where the head marketing guy feels it's his job to head up web development.
This is NOT what I signed up for.
When you offer web services to a client, the one meeting with the client should understand at least basic stuff, and know when to pull in a heavyweight for more questions. Instead, our web team is summarized by a guy who listens to 80's rock music in a shared office (used to be just me in there) and spends his days trying to get 30-year-olds on Facebook to click an ad.
He was on the phone yesterday with some ecommerce / CRM support, trying to tell them that they have an API, that "it's a simple thing, I'm sure you have it", and that's all we need to do business with them. Which is not his call, it's my call, but for some reason he's the one on the phone asking for API info. The last time I took someone else's word on an API, I underquoted the work and eventually found out that their "API" was nothing more than a cron job which places a CSV file on your server via FTP.
Anyway, we now have a full-time marketer and two part-time interns, with another ad out for an AdWords specialist. Meanwhile, I'm senior dev with a server admin / retired senior dev, and if we don't focus on hiring a front-end guy soon we're going to lose business.
Long story short, I'm getting sick of having a guy who does not understand basic web concepts run the show because he's the one who talks to the client.
-
Well fuck me, thought I could use pyload for my youtube music playlist downloader (has live progress updates, can download playlists, can be set up to download via cron too and ignore existing files..), but the plugin hasn't been maintained for a year (fails with ~60-80% of videos). It was based on youtube-dl code, which has been updated, but fixing that plugin with the code from youtube-dl would take a while, ughhh..
-
Coworker left his computer unlocked so I set up a cron to change his background to Hello Kitty every few minutes. It also played the audio from this https://youtu.be/yPxJnvSZrU0
-
I recently started using Linux on my desktop.
I just love how when I see some minor thing I don't like about the operating system I can just change it myself!
Want to remove the menu icon? Simple change to the settings. Want your downloads folder to be clean? Simple cron job which asks you if you want to clean your downloads folder every day.
Man I love having the freedom to screw up my operating system!
-
Changing the native browser scrollbar should warrant the death penalty.
Do not make it narrower. Do not make the colors blend with the background. Do not hijack its functionality. Do not minimize it until I hover.
I am so fucking tired of websites that think they are in charge of my browsing "experience" and hide or otherwise marginalize the single most useful part of the page's UI.
-
- Launch the new version of the system I have been refactoring for 2 years and counting, then ceremoniously burn (literally) the legacy code as well as the cluster fuck of hardware it runs on.
- Decrease my stress + bus factor by bringing another up to speed on my code & the new version (his cluster fuck now).
- Pay attention to & take better care of my health, my wrists in particular.
- Find a mentor and mentor someone else.
- Get out of crisis management mode and find the time to write tuts, experiment and live a little.
- Find & join a local dev meetup, maybe make a local dev friend.
- Book leave and actually take it, preferably without having to take my laptop to the beach - actually, preferably at least have the choice to take an offline vacation.
- Sort through the drives containing ALL the code I have ever written, migrate the useful interesting bits to Github.
Phew, that bit of self reflection was intense! I'm adding a cron to my server to SMS & email me this rant in a year to remind me what hope looks like.
-
Saturday 9.00 AM. I was sleeping; my colleague (on holiday) sent me a text: "We got a problem on our system, probably we ran out of space". I checked the log and found out that several cron jobs failed due to not enough space on the disk. I started deleting some unnecessary logs (we're paranoid) and ended up squeezing the vm like a lemon to save some space. Sent an email to the sysadmin: "We got to add more space ASAP, users are getting 500 errors for almost everything". Silence. I thought to myself: "Until Monday we're safe..". I did a df (96%) and sent a screen to the sysadmin, just to be sure that we understood each other. Finally Monday comes, nobody worries about the issue. At noon I literally tackled the guy from the IT dept. "Yeah, we read your email. I think the sysadmin didn't take you seriously". "Why? Which part of 'we're running out of space' isn't serious?!!!". "He just told me that we have unlimited space on that vm". Unlimited space...sure.... "Right.....the disk is at 96%, buuuuut if he said so, no news to worry. Don't call me if everything burns. Have a good day!!!"
-
Can anyone recommend a nice set of DnD dice for a gift? It's for my boyfriend's younger sister. She's just getting into it and we're all playing on her birthday.
There are cheap sets everywhere and I'd like to get her something of nice quality.
-
Sprint 0: This design is the appropriate amount of engineering abstraction.
Sprint 2: This is over-engineered, too much work
Sprint 5: This is under-engineered, too many edge cases
Sprint 10: This is over-engineered, component Foo could be replaced by a bash script
Sprint 42: Foo is now the cornerstone of half our business logic.
-
>Adds new feature
>New feature works fine on dev
>New feature works fine on staging
>New feature doesn't work on live
>You can't easily figure out what is wrong because you need to wait an hour for it each time :|
-
I've been using the Square REST API and I spent one hour thinking there was something wrong in my code until I f** found that THEY were not following OAuth 2 guidelines, which made their workflow incompatible with the OAuth lib I was using, so I had to mark an exception for Square's OAuth from the rest of my OAuths. Specifically, RFC 6749 Section 4.2.2 and 5.1.
However, after reading the OAuth 2 guidelines, I became angry at THEM instead. The parameter `expires_in` should be the "lifetime in seconds" after the response. This will always be inevitably inaccurate, since we are not taking into account the latency of the response. This is, however, not a huge problem, since the shortest token lifetimes are of an hour (like f** Microsoft Active Directory, which my cron jobs have to check every ten minutes for new access tokens). Many workflows (like Microsoft, Square, and Python's oauthlib) have opted to add the `expires_at` parameter to be more precise, which marks the time in UTC. However, there's no convention about this. oauthlib and Microsoft send the time in Unix seconds, but Square does this in ISO 8601. At this point, ISO 8601 is less ambiguous. Sending a raw integer seems ambiguous. For example, JavaScript interprets integer time as Unix _milliseconds_, but Python's time library interprets it as _seconds_. It's just a matter of convention, a convention that is not there yet.
Hope this all gets solved in OAuth 2.1 pleeeaasseee
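To make the ambiguity concrete, here's the same absolute expiry derived client-side from expires_in=3600, in both conventions (a sketch using GNU date):

now=$(date +%s)
expires_at=$(( now + 3600 ))                      # Unix seconds (what oauthlib/Microsoft send)
date -u -d "@${expires_at}" +%Y-%m-%dT%H:%M:%SZ   # ISO 8601 (what Square sends)
-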
Somebody tell me why I shouldn't use systemd timers, as opposed to crontab entries. Because I've been very impressed with them, so far.
-
The fact that there's only two characters between "run this job every 10 minutes" and "run this job every hour on the tenth minute" was the fix for the particular problem I just spent 5 hours on :facepalm:
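For anyone who hasn't been bitten yet, those two characters (the script path is a placeholder):

*/10 * * * * /opt/jobs/sync.sh   # every 10 minutes
10 * * * * /opt/jobs/sync.sh     # once an hour, at minute 10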
-
sudo rm -rf *
Just started out on linux, learning the ins and outs. All I wanted to do was remove two directories. Thankfully it was a fresh install, didn't lose anything important.
A valuable lesson was learned that day. 😂
-
So... My boss is "hard working", meaning that she'd rather edit and upload an HTML file every morning at 5am for the last 5 years, and manually send a push notification telling the user that the new file is up, than learn a little bit about automation (cron? IFTTT?) - and even after letting her know about those options she has "no time"
She'd rather keep source code (pug, sass), manually build on her local computer and upload to the live servers, instead of learning git and letting me set up CI/CD once and for all
SERIOUSLY!?!? NO TIME!?!? But there's time to do things at a turtle pace like in the 90s... 🤦♂️
-
Typed crontab -r instead of
crontab -e, gonna be a long weekend recovering crons from log files.
-
I have multiple contenders ;)
Contender one:
A program used to sort emails.
We were in the process of moving from lotus notes to exchange and needed a way to route emails to the right server internally.
Solution: a qmail to receive all emails, and a script run by cron every minute to read the emails, check the recipient name against a list and resend to the right server. The script was written in php :P since that was the only way we at the time had to read an email into an object; it was run just like any other shell script :D
Contender two:
A multi-threaded mail sender that fetched email addresses and content from a database and posted them through qmail, using background execution and pipes to get the results back and then update the database - written in bash script.
Contender three:
A C program used in a similar way as in contender one, but this time using dial-up and uucp to fetch email and then drop it either into lotus notes or into a bbs for our customers, to give them an email address. This was around 1993, so not too many isp's offered email and not too many had internet anyway; dial-up bbs was much more common.
-
NOTHING FUCKING WORKS OMG.
I WANT YOU TO RUN EVERY FUCKING 15 MINUTES, IS THAT SO FUCKING HARD? THE ONLY THING THAT'S GOING TO BE HARD AROUND HERE, IS ME!
Geez, all I am trying to do is run a php script every 15 minutes, and literally every solution I have tried has failed...
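For the record, the usual culprits here are relative paths and cron's near-empty environment. An entry like this tends to actually work (paths are placeholders):

# absolute paths for interpreter and script, and capture output for debugging
*/15 * * * * /usr/bin/php /var/www/app/script.php >> /var/log/script.log 2>&1
-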
Fuck me why do my unit tests pass with garbage input.
I can't go into a long weekend with this shit in my head rent free.
-
ZNC shenanigans yesterday...
So, yesterday in the midst a massive heat wave I went ahead, booze in hand, to install myself an IRC bouncer called ZNC. All goes well, it gets its own little container, VPN connection, own user, yada yada yada.. a nice configuration system-wise.
But then comes ZNC. Installed it a few times actually, and failed a fair few times too. Apparently Chrome and Firefox block port 6697 for ZNC's web interface outright. Firefox allows you to override it manually, Chrome flat out refuses to do anything with it. Thank you for this amazing level of protection Google. I didn't notice a thing. Thank you so much for treating me like a goddamn user. You know Google, it felt a lot like those plastic nightmares in electronics, ultrasonic welding, gluing shit in (oh that reminds me of the Nexus 6P, but let's not go there).. Google, you are amazing. Best billion dollar company I've ever seen. Anyway.
So I installed ZNC, moved the client to bouncer connection to port 8080 eventually, and it somewhat worked. Though apparently ZNC in its infinite wisdom does both web interface and IRC itself on the same port. How they do it, no idea. But somehow they do.
And now comes the good part.. configuration of this complete and utter piece of shit, ZNC. So I added my Freenode username, password, yada yada yada.. turns out that ZNC in its infinite wisdom puts the password on the stdout. Reminded me a lot about my ISP sending me my password via postal mail. You know, it's one thing that your application knows the plaintext password, but it's something else entirely to openly share that you do. If anything it tells them that something is seriously wrong but fuck! You don't put passwords on the goddamn stdout!
But it doesn't end there. The default configuration it did for Freenode was a server password. Now, you can usually use 3 ways to authenticate, each with their advantages and disadvantages. These are server password, SASL and NickServ. SASL is widely regarded to be the best option and if it's supported by the IRC server, that's what everyone should use. Server password and NickServ are pretty much fallback.
So, plaintext password, default server password instead of SASL, what else.. oh, yeah. ZNC would be a server, right. Something that runs pretty much forever, 24/7. So you'd probably expect there to be a systemd unit for it... Except, nope, there isn't. The ZNC project recommends that you launch it from the crontab. Let that sink in for a moment.. the fucking crontab. For initializing services. My whole life as a sysadmin was a lie. Cron is now an init system.
Fortunately that's about all I recall to be wrong with this thing. But there's a few things that I really want to tell any greenhorn developers out there... Always look at best practices. Never take shortcuts. The right way is going to be the best way 99% of the time. That way you don't have to go back and fix it. Do your app modularly so that a fix can be done quickly and easily. Store passwords securely and if you can't, let the user know and offer alternatives. Don't put it on the stdout. Always assume that your users will go with default options when in doubt. I love tweaking but defaults should always be sane ones.
One more thing that's mostly a jab. The ZNC software is hosted on a .in domain, which would.. quite honestly.. explain a lot. Is India becoming the next Chinese manufacturers for software? Except that in India the internet access is not restricted despite their civilization perhaps not being fully ready for it yet. India, develop and develop properly. It will take a while but you'll get there. But please don't put atrocities like this into the world. Lastly, I know it's hard and I've been there with my own distribution project too. Accept feedback. It's rough, but it is valuable. Listen to the people that criticize your project.
-
Created a simple bot for an online game using puppeteer.
After an evening (and night) of dev and debugging (quite a few rejected-promise errors), it worked fine and was ready for an every-10-minutes cron job.
Fixed a couple bugs in the first three hours. Then started playing minecraft, which lagged like hell.
Opened task manager and saw a list of about 25 headless chrome processes. They had not been closed because of unhandled errors before the close method call 😵
Now added some basic error handling ☺
-
I made a bash script for my website that anonymises the visitor IPs in the Awstats logs by replacing the last octet with 0. It can either process all logfiles except the one of the current month, or only the one of the previous month. The latter mode is how I put it in a cron job to be called on the first day of each month.
Everything worked flawlessly with test data, but on the server, some visitor IPs were not anonymised. I noticed that all of them were from the last day of the previous month. Looking at the time stamp of the logfile, it was indeed from the first of the current month, but not from 00:21 where my cron job runs - instead, it was modified around 14:30.
Then I realised that the Awstats engine seems to be configured to batch add the log entries once per day at 14:30 so that when my cron job ran, the visitor data from between 14:30 and 00:00 were not yet in the file!
Solution: batch process all previous logfiles once to clean them up, and schedule the cron job on the 2nd of each month at 00:21.
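The core of the anonymiser plus the fixed schedule, sketched (GNU sed; the log filename and script path are placeholders):

# zero the last octet of every IPv4 in the log (\1 is the first three octets, then a literal 0)
sed -E -i 's/([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)[0-9]{1,3}/\10/g' awstats012024.example.com.txt
# crontab, the 2nd of each month at 00:21:
21 0 2 * * /usr/local/bin/anonymize-awstats.sh
-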
Well it's a bit long but worth reading, two crazy stories in one rant:
So there are 2 things to consider as being my first job. If entrepreneurship counts, when I was 16 my developer friend and I created a small local music magazine website. We had 2 editors and 12 writers, all music enthusiasts of more or less our age. We used a CMS to let them add the content. We used a non-profit organization's mentorship program, which got us a mentor who already had his exit and was close to his next one. The guy was purely a genius; he taught us all about business plans, advertising, SEO, and a no-pay model for the young journalists (we promised to give formal journalist certificates and a salary when the site grew up)
We hired a designer, we hired a flash expert to make some advertising campaigns and started filling the site with content.
Due to our programming enthusiasm we added to the raw CMS some really cool automation: We scanned our country's radio charts each week using a cron job and the charts' RSS, made a bot to search the songs on youtube and posted the first search result as an embedded video using some reg-exps. This was one of the most fun coding times I've had. Doing these crazy stuff with none to little prior knowledge really proved me I can do anything with the power of will.
Then my partner travelled to work in an internship in the Netherlands and I was too lazy to continue it on my own, so it closed - not so surprisingly for a 16-year-old slacker boy.
Then the mentor offered my real first job. He had a huge forum (14GB of historical SQL) but it was dying; the CMS version was very old and he wanted me to upgrade it to the latest. It didn't seem hard at first, because there were very clear instructions on the CMS website on how to do that. However, the automatic upgrade scripts didn't work well because the forum owners had added some raw code (not MVC plugins but bad undocumented code) and some columns to the SQL tables. I didn't give up and decided to migrate between the versions without the scripts. I opened a new CMS and started learning by heart all of the database columns so I could make a script to migrate between the versions. The first tests ran forever, because processing 14GB of data on a single home computer is not a task meant to be done. I didn't give up. I made an old forum and compared the table structures and code with my mentor's. I think I didn't exhaustively finish this solution; the task was too big for my shoulders and eventually I gave up. I still owe thanks to that mentor for teaching me how to bear with seemingly (and practically) impossible tasks, for learning not to fear being a leader and an entrepreneur, and also for paying me on time even though I didn't deliver anything 😂
-
Yesterday my tests caught more typos in copy/paste test cases than in actual logic.
'twas a decent day.
-
Developer problems:
6 different package managers to keep up to date.
Gem
Pip
Npm
Emacs
Homebrew
Aptitude
Good thing bash scripts and cron jobs exist.
-
I'm currently between jobs and have a few rants about my previous job (naturally). In retrospect, it's somewhat therapeutic to rant about the sheer brainfuckery that has taken place. Enjoy!
First, let me set the scene: legacy B2B web app made with LEMP stack and sencha ext.js 3 + 4 (don't ask) and a lot of madness. Let's call that app "Alpha".
Alpha is a self-made CMS built for typical ERP stuff. Yes, a self-made CMS: entities are containers, containers have types and fields and values. Like so many legacy PHP apps, it does not have a dedicated FE: the HTML is rendered on the server and then spewed out to the browser.
Easy right? Coding like it's 1999! But there was a twist: because everything is basically a container, the HTML templates are saved in the DB. Along with the necessary JS and the CSS. And the translation variables. Why? Because fuck you! That's why. Who needs a git history anyways.
For some reason, Alpha was kinda slow.
There was also an editor, that allowed you to modify templates (web, mail, pdf) on the fly in prod. Because templates contain repeating data (header/footer), one template could contain additional templates. Much confusion. You could change templates via migration (slow, boring) or just ctrl-c/ctrl-v that sucker (fast, much excitement).
Did I mention Alpha was slow?
On with the rant: e-mails! How do they work? No one knows. How to send mail asynchronously in PHP? Witchcraft is the only possible answer to that riddle. Here is your enterprise™ solution:
1. create mail
2. insert mail into DB
3. WAIT UP TO 59 SECONDS FOR A FUCKING CRON TO SEND MAIL
Why? "Because that way, we can resend mails in case the network is down :)"
Same procedure for the SOAP-API (db-queue + cron). You read that right: all requests to various other systems are processed once a minute.
Alpha slow.
Alpha was only one of several systems. Imagine a bunch of monolithic php apps, interconnected via SOAP, REST and GraphQL like a godamn intergalactic orgy. Imagine having to debug that cluster fuck.
Let's say there is a bad request. These things happen. No biggie. Remember the db-queue? Let's try to send the bad request a second time! And a third time! Still no luck? How odd. Let's create a specific file in a specific directory: a LOCK-file. Now, "the db-queue is on hold and no request gets processed :)"
Golly gee thanks Alpha.
Anyhow, did you know that MySQL has a join limit of 61 tables?
-
Official Release of Meme Bot is here. Though it's a meme bot, it can be used as any image bot, since it googles the text. For better results, give a proper description.
The script runs on a cron job, checking for mentions every minute, so it will reply within 1 minute.
@memesbot <name-of-the-meme>
Here's the source code: https://gist.github.com/theabbie/...
Demo in comments.
-
I fucked up. I used the shebang line #!/usr/bin/env python3 in a script that was being run every 5 minutes by a cron job. This generated an email to a system that dropped a file for processing, which then sent an age email for each file every minute. Because the Linux OS generated emails didn't contain a keyword, the script closed by design, but I forgot to uncomment the delete-temp-file line. This started on Wednesday before a 4-day weekend. By the time I got in on Monday I was 40GB over my email quota and receiving 2500 emails a minute. I fixed the script and stopped the emails, but now I have to clear out those emails. Here it is Wednesday and I am deleting 1 MB every 3 seconds. This is painful.
-
Manager said we need to use a Queue. Several meetings later, I looked at the prototype by 6 senior devs:
A QueueListener connects to RabbitMQ, checks for a payload, then *disconnects*;
A TaskProvider in ASP.Net.MVC.Core (whatever it is) listens on http and dependency-injects that QueuePoller;
A Visual Cron timer calls that http url every 5 minutes.
Wait for it: a set of database tables to store messages for another MessageProcessor.
It's an XML-to-CSV file conversion project consisting of 43 unique projects under a solution. I did it within 500 lines of Node with ElasticSearch and was told we don't use fancy new stuff here.
-
Work has been inefficiently using multiple cron jobs to run php scripts to generate pre-baked data.
The last two days I took the steps needed to internalize all those scripts and run them from an individual php controller which is run from Jenkins. My script keeps track of scheduling and error tracking.
I'd say I'm pretty proud of what I came up with.
-
Please, share your website backup strategies and practices - I have a simple php/mysql webapp and the files don't actually have any backup other than the fact that they're also saved in Dropbox, and for the DB I have a cron job that exports it daily and sends it to my email.
How do you do it? How large are the sites/apps that you're backing up?
-
FUCKING SYSTEMD PIECE OF CRAP.
*Punches a wall or something*
Ugh, the newest version of PHP-FPM apparently has a dependency on a systemd package. The package doesn't change the system's init daemon to systemd, but the mere fact that it's there, that more and more stuff is becoming dependent on that bloated crap of a piece of software, is driving me crazy.
I hate systemd from the bottom of my soul, not for being a bad piece of software by any means (the systemd environment is quite well fitted together), but for being a monolithic monstrosity that is taking over more and more of the traditionally independent system services.
It would be absolutely fine in my book if it let a user or admin choose which parts of systemd to install, so that at its core it would remain a mere init daemon.
But noooooo, systemd has to take over cron, the system DNS resolver, home and user management, and I bet it's not the end.
GNU/Linux is becoming GNU/SystemD/Linux...9 -
When your Alma Mater wishes you on your birthday.
My name is not Ajay Reddy and it's not my birthday4 -
Found out today that someone has a cron job that has been doing a daily DB export since June 1st, 2015. The dumps average 2GB. I am amazed that storage engineering hasn't complained. Here's the kicker: it's a Dev database. WTF!? Dev? I freed up a terabyte of space today. Now I need to find out who owns that cron job.2
-
So, if I was to emigrate, should I come to your country?
I'll finish my bachelors in Comp Sci next May and Ireland isn't really livable right now with property rental prices. Time to look elsewhere I think.18 -
Spent over an hour on a shell script that wasn't working properly. When I run it, it works perfectly. Every time cron executes it: nothing, not even a logged error.
It took me that long to realize that the user the cron was running as didn't have permission to write to my log file... You would think I'd have realized this when my error scripts didn't log...
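The check that would have saved the hour, as a sketch (user and paths assumed): first confirm the cron user can actually write the log, which fails loudly if not:
sudo -u www-data touch /var/log/myscript.log
And while debugging, point the job's output somewhere world-writable:
*/10 * * * * /home/www-data/myscript.sh >> /tmp/myscript.log 2>&1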
(on that note, the Bandit games at OverTheWire have been an awesome refresher for getting back into the swing of Linux - highly recommend) -
How to run PHP in a container:
1. Begin a Dockerfile for an existing PHP cron app (when all you know is PHP, everything looks like a PHP app)
2. Set the FROM.. apt-get update.. do composer install
3. Build the image
4. Discover I need git
5. Add git to the apt-get install step
6. Build the image
7. Launch the PHP script
8. Fatal : use of undefined constant SOL_UDP
9. Open the source code of the third party. There's no mention of where that constant is from.
10. Spend many minutes online finding out what's missing.
11. Find the PHP sockets page about that option. Dig into the documentation to find out it's missing from the installed PHP.
12. Find out I need to add a step to install the sockets extension in my Dockerfile.
13. Build the image again
14. Execute it, finally it works
15. Remember why I hate PHP
(for brevity I've omitted the even more complex part of having to set up zlib)
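For reference, steps 8-12 boil down to one line, assuming one of the official php base images (they ship a helper for compiling bundled extensions):
FROM php:7.2-cli
RUN docker-php-ext-install sockets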
How to install node js in a container image:
FROM node:8
COPY package.json .
RUN npm install7 -
New Year's day 10am I got a text message from the product owner.
To wish me a happy new year one would guess. But no! She was telling me that the cron job that I developed 2 years ago failed and I need to apply the change on the configuration manually.
What really irritated me was that it was no matter of life and death; it could have waited until we were back at the office. Or even worse, she could have made the change herself; after all, she was checking emails anyway.
What a b.2 -
Currently working on a GUI config generator using MFC in VS.
Firstly, fuck sake Microsoft. Why can't I just use a normal string? The number of times I've had to do god-awful conversions to/from CString using their zoo of L, _T and friends, and don't even get me started on LPCTSTR, LPCWSTR... It's just ugly and tedious. I've gotten used to it and all but still, ugh.
Secondly, some of the functions are just stupid. Want to disable a control? Hmm, well, there's a function called EnableWindow, but no DisableWindow. How did I do it before? Oh, so to disable the control it's EnableWindow(FALSE). Of course it is, duh. Why am I so stupid?
Let's use the GetWindowText function. Simples. CString something_txt = GetWindowText().
Nope, it takes the CString as a parameter and copies it into that rather than just returning the text. Now one line becomes two. I get that this is a really small semantic thing but it irks me.
I just want to go back to my fedora partition. Wah.
PS: I'm sure there's good reasons for what I'm ranting about, but I really don't care. I just need to rant about my frustrations. 😂1 -
When a safety check you added to a cron script triggers early in the morning and you know you might be walking into a shitstorm.
Still better than not having that check and replicating an incomplete database to production. -
We have a cron job. We have alerts to monitor the cron job. We also have a monthly task to check the above are working.
Why? I wrote the first 2 so as far as I'm concerned, I'm going to tick yes on the last.6 -
Sometimes there's no way to get someone to understand. Sometimes it's better to just build a cron job that will compensate for these asinine mistakes.
-
A senior co-worker started a cron job via cPanel to fetch tweets every minute.
The problem: he didn't know about/use
'/dev/null'
so cron sent an email to the admin for every successful fetch.
After a week we discovered the problem: the admin inbox was full of emails, and our server had been blacklisted (i.e., it could no longer send emails).
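The fix is a one-liner in the crontab: either blank the mail target or silence the output (the script path here is a stand-in):
MAILTO=""
* * * * * /usr/local/bin/fetch_tweets.sh > /dev/null 2>&1
-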
Most of the faculty on my college's IT engineering department aren't exactly adept with Linux, despite the fact that 10/12 labs in our building run on Ubuntu.
Last week, a really great professor (who doesn't take any classes I can attend) from the Electronics and Communications department and I wrote some bash scripts to automate updates and so on, staying back after college until late evening to try to get the PCs updated.
We'll be trying to use SSH to update as many computers as we can remotely, and trying to learn to use Cron to automate the whole updating deal.
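Roughly, the plan is a loop like this (the host list file is illustrative):
while read -r host; do
    ssh "$host" 'sudo apt-get update && sudo apt-get -y upgrade'
done < lab-hosts.txt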
I'm learning this stuff on the side, since it's not on my syllabus at all, and the professor isn't even related to the departments that run the labs usually.
We're not getting anything for doing this, the head of my department (who has it in for me) has no idea about this, and nobody else is bothered enough to learn either. -
Deleted the database of an application I built for college since they were replacing it with a better one. Later, the teacher remembered that he didn't take a backup.
Fortunately, I remembered I had configured a cron job a year back in the app which saved me that day. 😅 -
A shitty internet connection and Visual Studio make the best of fucking friends. This has been going on for half an hour now.
Had to switch to my windows partition for a project and I'm not happy.
Cry for me Fedora. -
Fucking group projects fuck them oh so much fucking fuck fuck fuck.
What's that? You want to basically ignore the spec and do something else? Fuck.
Wait, let's not use the great resources given to us? Fucking fuck.
Oh, you're just going to ignore the fact that everyone else disagrees with you? Fuckity fuck fuck.
I am so angry. You don't get to railroad your team.
You fucker. Ugh. -
Recently briefed a manager that we need to adjust Q2 plans to address the institutional knowledge lost during layoffs.
Yes, I know, write documentation, etc. This is reality, all our shit ain't documented.2 -
Made this one-liner today:
hostname $(curl nsanamegenerator.com | grep body | sed -e 's/<.*;//g' | sed -e 's/<.*>//g')
and added it to my laptop's crontab...4 -
Who/what are y'all's favorite coding related YouTube channels? Mine are Kalle Halden and NetworkChuck.7
-
when the git push hasn't worked since the very first one
(cron job, not able to access and fix the script as i'm not on-site and won't be for a while)3 -
Imagine filling 50 files full of garbage unreadable code to build what is essentially a cron job microservice...
Oh we have a console program
then a module to pull in all the services
then a manager to manage the actual jobs
then if they fail it all cascades back up
My god, this isn't NASA.
The amount of overengineering I have seen in the past few hours is insane.
Keep It Simple, Stupid!!!2 -
PHP features the best of the wicked minds.
In this legacy but still-used project, just to save the cost of opening TCP connections (I suppose), some guy wrapped JS libs like jQuery and MooTools in script tags inside individual PHP files, then included all those libraries from a main.php. This produces a 2MB file to send to the client, and it's not even compressed. This guy never gave a thought to maintenance.
This is one symptom of the problem with PHP: every company develops or has an in-house, undocumented, unmaintained framework made by devs with no idea about testing, security and more.
Gosh, at a previous job I saw a PHP cron that passed arguments into a switch with 25 cases.
It took 19 years for the language to get a standard, meanwhile leaving the web landscape a mess of bad coding practices, bad design practices, SQL injections, outdated tutorials and more. PHP is the example that being used on almost all of the web doesn't make something good; it only means it's cheap! Cheap like asking a redneck to build you a car, and he tows (deploys) it to your house with a tow truck he built himself.
https://blog.codinghorror.com/codin... -
New form of self-inflicted torture:
Formatting changelogs . .
https://github.com/ElectronicsArchi...2 -
Officially running node-cron as a separate thread in Node.js to update 27,600 rows every minute. Honestly, I am surprised how well it is behaving.1
-
Spent most of the day debugging a timezone related bug in a cron job.
Reminded me of this video.. Relateable.
https://youtube.com/watch/... -
triggers {
cron('H */1 * * 1-5') // 23 every working day
}
I hate it when comments are not updated along with the code.
Even more when git blame points at me xD1
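For the record, a trigger that actually matches that comment would be (assuming 23:00 on weekdays really was the intent):
triggers {
    cron('H 23 * * 1-5') // some minute during 23:00-23:59, every working day
}
-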
I have a small NUC-like machine in my home with an old external hdd connected to it. I use it to run my local gitlab, nextcloud and to test a few websites I build for the lolz.
If you too have a homelab, whether it's a single Raspberry or an entire room full of racks, you know damn well that everything you have running locally as a web service keeps going until it doesn't, for whatever fucking reason. This time, it was my Nextcloud's turn.
The machine has arch linux running, I chose it since I already use it on my coding laptop and being a rolling release means I don't have to manually upgrade to a newer version, risking various fuck-ups and consequent screaming of profanity.
The downside is that arch is a bleeding-edge distro, so, despite being pretty good for what concerns security, as updates are pushed out some packages may still require legacy software to work as intended, since obviously not all developers for all packages can release simultaneously.
The problem was that php reached 8.2.x but nextcloud couldn't use anything beyond 8.1, so the highlighted solution was to download php-legacy, a package with a set of utilities which the cloud could use instead of mainline php.
Pretty easy, right? fuck my life, here we go.
I edited apache-httpd's configurations to link the new libraries, updated every reference in every virtual host that could possibly screw up the web server.
Done.
Then I went on and disabled the php-fpm mainline, creating a new systemd unit that would instead run the legacy executable and afterwards I edited nextcloud's additional configs so they use that instead.
Done, getting a bit dizzy, but I reboot everything and breathe.
At this point the migration should be complete, but wait, the server returns an error saying that the application is still trying to use php 8.2+...wait, what in the sysadmin Christ?
Back to nextcloud config, everything is set, everything else in every other fucking php-legacy and web server is fine, the old fpm service is disabled, I am confused, and why in the FUCKING FUCK is the new php-fpm unit failing to start at boot with "error 78/config - directory not found"? Hello? Am I being trolled by a shitty dual-core amazon fake NUC?
Maybe yes, because it turns out the unit was referencing a directory on the external hdd, which gets mounted at boot after the unit itself starts. So nothing much: a bit of tinkering with cron jobs, a reboot, and at least this one is off my balls.
But why is the server still not responding correctly? Why? WHY?
After slamming my cock on the keyboard here and there scrolling back through all the config files I think to myself, hmmm, my gitlab is working flawlessly, well yeah, I didn't need to install the whole web stack, everything was nice and easy wrapped in a docker container...so why am I even here, why the fuck am I bothering with all this layered web-app bullshit, why don't I just run the up-to-date docker image that someone else has already set up for me, back up all the data and reupload them on the application?
Oh joy, you can't imagine, after 3...almost 4 hours of pure computer-touching the relief I had from seeing the blue web page with the "welcome to nextcloud" title.
Right now it's copying back all the files, and the external hdd is now linked to include the data folder.
Like really, everything was solved in two lines of bash.
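In case anyone wonders, those two lines were roughly this (image tag and host path are my own stand-ins; the official nextcloud image serves on port 80):
docker pull nextcloud
docker run -d -p 8080:80 -v /mnt/exthdd/nextcloud:/var/www/html nextcloud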
I am still fuming, but at least I learned a valuable lesson, if you want a service up for yourself, implement it and deploy it as fucking easy straight-forward as you can, giving MAXIMUM priority to already fully-working options that are out there just waiting to be downloaded and used. I swing my scrotal sack on web-apps elegance as long as it's MY homelab in MY place.
Eat a fat dick php.
sudo pacman -Rns nextcloud
sudo systemctl disable --now php-fpm-legacy
sudo pacman -Rns php-legacy
sudo pacman -Rns $(sudo pacman -Qdtq)2 -
Need some help finding a free weather api please.
I need to be able to search the api based on temperature or humidity. Alternatively I could locally store the next 2 weeks of data for every country in a database and update it every night with a cron.
Has anyone used weather APIs before? Could you suggest me some?6 -
So I was instructed today, after lunch, to spend an hour teaching a member of my team how to SSH, store keys, handle basic I/O routines, and create CRON jobs to auth our ECR registry, by my team lead. Why am I wasting dev time teaching someone how to use an operating system? Need I add, our primary dev workspace is spun up with Vagrant running Xubuntu. I just can't comprehend how this person has been using Xubuntu as their primary OS for two months and doesn't know the SSH protocol. Much less how they landed a dev job without any prior experience with a *NIX based OS.2
-
!rant
Avatar request: separate colour choice for beard.
... I'm not exactly in the 99%, but my beard hair isn't the same colour as my head hair. 😂5 -
I hate cron jobs. Hours of googling and double-checking. My job is perfect; according to the logs, it still doesn't run.2
-
!rant with a l'il rant at the end.
Anyone have any Android games they would recommend? I just want something to help me unwind, without being baited every 10 minutes into buying upgrades or coins or whatever.
In app transactions have ruined Android games for me. It drives me insane, but, I still would like something for when I'm not bothered to boot up my ps4 and spend hours living in The Witcher.4 -
Not learning to unit test, as I was embarrassed that I'd missed it in college.
Now, thanks to a great ruby module I've taken this year, I'm leaning towards TDD. I really enjoy it. -
Just realized that CRON only supports up to year 2099.
Y3K might become what Y2K was hyped up to be. Time is ticking...15
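To be fair, classic Vixie cron has no year field at all; presumably this is about Quartz-style expressions, whose optional seventh field is documented to stop at 2099:
0 0 12 1 1 ? 2099
Seconds, minutes, hours, day-of-month, month, day-of-week, year: noon on January 1st, 2099, the last year the parser will accept.
-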
Because of a cache split-brain issue I have to invalidate the cache every 5 minutes. I told the lead dev about this hack and we both agreed to solve it ASAP.
This was 3 months ago...
The temporary fix became the production solution. And it only took me 10 minutes to add the cron entry to every prod server.
So productive!
Btw, you should see users' faces when a page refresh changes the page completely because of load balancing xD1 -
Hello, I have a question for anyone familiar with multithreading!
I just started working with threading for the first time (I mostly write PowerShell scripts 😅), and I found that under certain conditions multithreading is an absolute time saver. And of course for some tasks it's not such a big deal.
I am currently working on a project that runs multiple threads and each thread might invoke one of my functions that also threads the work.
I'm a total newb when it comes to this stuff, but if my main process is 4 threads, and each can spin up up to 4 more threads to run one of my functions, does the math work out to a possible total of 16 threads? Or is it possible for the threading to go ape-shit bananas and utterly thrash the CPU with rampantly created threads?
I've looked online, and based on some of the info I've managed to come across on my own, the answers suggest it's safe because I'm creating pools for running the threads first, and the pool is responsible for maintaining min/max threads. But I can't seem to find good info on running a pool+threads inside another thread.
Just to let you in on what the function does that requires threading in the first place, I need to basically query CloudTrail based on ARN's to find events, but I can only pass a single ARN to the find-ctevent cmdlet. So I'm essentially making 1500-ish really really small calls to AWS just to get back event data for the ARN.
Serially, this takes almost 20 minutes; on my laptop, using stupid settings like 24 threads, it completes in about 95 seconds. On the actual server that will be running this code, I'm going to limit it to 4 threads and try to figure out a way to cache the info locally and update it on a cron or schedule, so only the initial scrape takes forever and the updates can be done nightly or something.
Thank you in advance for your help. I'm not too sure if the question is dumb, but please let me know either way!8 -
I need an adult. I know no one who would understand my worries, so you guys need to be it.
I have a Nextcloud running on my Raspberry Pi. Performance is horrible, don't ask, but it works.
I mostly use it to back up the photos on my phone's SD card every night when my phone charges. Internally this works well. If I am elsewhere it won't, for obvious reasons.
In my youthful joy of doom I opened port 443 and forwarded it to my raspi. I get internet via cable and my IP is pretty much static (it was the same for 10 months). So external access is provided.
Then I thought: it's stupid that I cannot sign an SSL certificate because I don't have a domain. Let's buy a domain. But before doing that I did some trial runs with DuckDNS to test the principle.
Some back and forth, and it works now. Pretty good; I could even make a cron job on the raspi to renew (that should work, right?). Only problem: randoname.duckdns.org doesn't work internally. Or shouldn't, at least.
So I googled a bit and it turns out that my router (a cable FRITZ!Box I bought myself) can be a local network DNS. Or cannot. Regardless of what I try, it doesn't accept the changed config file.
Now, the weird part.
It works anyway. randoname.duckdns.org points to my external "static" IP and resolves to that from my internal network, so it works on my phone or laptop. If I traceroute the thing it goes via two hops out and finishes in less than 1ms.
Now to the actual problem:
I have no fokkin clue why. The expected behaviour would be that it shouldn't work. If I do what I intended to do on the PC, in the hosts file, tracert works correctly, pointing directly at the internal IP.
What I cannot figure out: is it the FRITZ!Box being smart? Is it my ISP being smart?
Reason to rant: I have absolutely NO ONE to ask; I know not a single person who would even understand what troubles me. I want to learn, I want to know WHY, not just some mindless Russian patchwork of "if it works it's good enough".
That's depressing.8 -
Constantly looking at the clock, waiting for background processes to execute. My colleagues must think that I can't wait to go home.
-
I had a colleague, who built a bunch of smaller systems for the company I'm working in. He didn't want to waste his time building a "perfect" system (which I generally agree with, the question is just where to draw the line).
But because it took him so long to build the prototype, it usually went into production without being hardened (basic input validations were missing, for example; it wouldn't allow anything malicious, but instead of a validation error it'd just 500).
When he left, literally less than a week later one of his systems started crashing; it was a prototype nobody except him could maintain, because it was built on a fancy new technology which wasn't even v1 at the time, and whose documentation said it would be production-ready when v1 was released.
At some point, we gave up and just configured a cron job to reboot it every 12h. He could have probably fixed it, but to us it was just black magic.
Anyhow, this rant isn't about him; AFAIK all the systems are still working, as long as you provide the correct input. Nor is it about the management decisions which led to this Frankenstein service being on life support, restarted ever more often: every 8 hours, 6h, 4h, 3h, .....
It's about the service itself, and the day I'm looking forward to: when the rewrite is done and I can nuke the whole git repository.
I was even thinking about moving all the related files onto a USB stick and putting that on 🔥, once we're done rewriting it....
Maybe next month, or in 2. Hopefully before we have to configure the cron job to restart the service every couple of minutes.... -
!rant
Tablet recommendations? I know this isn't really the place to ask but I trust you devs.
I'm just about to start back college and I'd like to have a light carry around for days I don't need my laptop. I love my 17" Dell, it's a beast and that's why I bought it but damn it's overkill for taking notes and running little things through a bash terminal. A tablet and keyboard seems like a nice idea.
Ideally, I wanna run Linux. But I'm not sure if there's a commercial tablet that facilitates OS changes easily out of the box.
No iPad. Not an apple fan, and it's just not what I'm after.
The MS surface seems pretty good, but I haven't looked too deeply into replacing the OS.
I just want a nice Linux tablet. I dunno.
Thanks!5 -
I should run a daily cron job for my Mac rants. Today it was just this: I connected to some other iMac over network discovery, but from within the GUI it is impossible for me to get the IP or any other information about that machine. All the information it displays you see below. Thanks, very helpful. (Only at the very end did I find the information, by pinging administrators-iMac.local)3
-
When your customer has been facing a lot of 503s on the main website for weeks and you keep confirming that the code changes have nothing to do with it.
Later, in a 2-hour call, you see that the machine with the MySQL server (which wasn't monitored) had issues with tables that were never optimized/repaired, because the cron that did that was removed as useless (to them). -
/me trying to automate stuff with fancy tools
colleague: you are going too fast. We want to keep everything under control, by hand (ssh -> cron * 1000 servers)3 -
I've always valued my every minute, but it seems I have given up that principle for a cron job I have to wait a minute for on every run, just to see what I am doing in the log file.
-
Statically linking to qt5 is quickly driving me fucking insane.
I've a list of unresolved dependencies during linking longer than a really long fucking list. Ugh.
Cmake, why can't you save me?
Think I'll just go back to dynamic and build on each needed system.1 -
Manual EC2 instances + Elastic load balancer or Elastic beanstalk for a PHP 7 application? I might have some cron jobs to be run too...
-
I've searched for this a few times now but can't really find a good solution for my "problem".
I want to update multiple Debian servers / machines at once, or from one machine.
Puppet doesn't seem to support newer versions of Debian and I don't really want to force the updates via cron.
Are there any other ways to do it?1 -
Is the current humble book bundle of any use? Dev ops by Packt. Lots of docker and kubernetes stuff.
https://humblebundle.com/books/...1 -
Was using an open source piece of software for data storage and visualisation to work with the loggers my company makes. When importing old data for historical views, some of the csv imports would fail without any specific error messages.
It took me a couple of hours but after looking at their csv parser and making my own little one to test with, I eventually found out that it was all down to the way datatime (I think it was?) in java deals with DST, which apparently was to just fuck shit up.
Anyhow, a few simple lines added into the parser later and it all works just fine.
Was super proud of that one as it was the first time I actually looked somewhat good in front of my senior dev. -
An already-languishing custom software project on a test system automatically emails hundreds of expired users asking them to renew via the test system, because I wasn't paying attention to the fact that a developer had added a cron job? Sure. Bring on the suck. Because I have nothing better to do than clean up after myself and my lack of attention to detail.
-
Our #bigtech repo is like 19 billion LOC so how am I the first one to try to do simpleThing with obviouslyRelatedThing?
My search-fu yields nothing and I fear it's not supported yet.
fml4 -
I work so hard to name things well and yet last year's code is always full of esoteric and misleading nonsense.2
-
- Create a Cron to delete users with course status 'enrolled/incomplete' for several months based on existing courseUpdateLogs
- Run Cron manually 1st time
- Find out courseUpdateLogs were very incorrect
- Find out a LOT of valid users got deleted and can't access their worksite
- F***9 -
!rant
Soooo, badges. They seem to have some prevalence in the open source community. I'd love to earn some from fedora. One day!
Anyone have any fun ones to show off?2 -
It's the small things. Chronos task run syntax:
- P10M = 10 months
- PT10M = 10 minutes
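The trap is ISO-8601 duration syntax: everything before the T is date components, everything after it time components. For example:
- P10D = every 10 days (date part, no T)
- PT10S = every 10 seconds (time part, after the T)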
And for some reason, lots of our tasks are defined to only run every 10 months, though they should run every 10 minutes.
Even weirder: why do most of our tasks run daily anyway? Either we have some cron job firing curl requests at the Chronos REST API, or some poor content manager clicks a button daily. -
Caching in Prestashop 1.6 (idk about 1.7) is fucking bullshit. I don't know who made it but he surely must be an idiot. There is no way that the cache is going to speed up your website after a few days of using it.
Memcache/d - For some strange reason, it gets slower and slower after just a few hours. There are literally almost no entries in memcache, but it becomes slower than without a cache? WTF
APC - Do you have multiple websites running? You are out of luck. Do you make a change to your website? Restart PHP to see changes. WTF
Redis - Same as APC, but you have to run flushall manually. WTF
CacheFS - God, this is a fucking monstrosity. It rapes the storage drives so hard, it is like running a fucking benchmark nonstop. 400-600MB writes are completely "normal". I have no idea what it is doing though. I would expect that writing a ~3MB file to disk doesn't require over 100MB/s of disk writes for 2(!) or more seconds. Also, it doesn't clean up after itself, so after a few days you are out of disk inodes and you have to set up a CRON to clean this shit up regularly (see the sketch below). In the end, it makes your website fast, but only as long as you have <={number of CPU cores} customers shopping. Then it becomes a complete disaster and requests take 5+ seconds to finish.3
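Something like this, with the path being an assumption based on Prestashop 1.6's usual layout (adjust to your install):
0 3 * * * find /var/www/prestashop/cache/cachefs -type f -mtime +1 -delete
-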
Of course I came down with a frickin fever during my first hackathon and missed the last like 14 hours of it. Smh.4
-
> Get sent to local client that manages most services on prem themselves
> We just deliver the software and setup instructions, client is self-proclaimed "technical enough" to handle the rest
> Never had issues with them, a client for about 1.5 years, we assume they are indeed technical enough
> Local client needs me for some help with their "backup solution"
> Cron job that dd's entire disk every week to external ssd.
> External ssd finally caved in after what was most likely years of torture
> Has nothing even remotely to do with our software (which has built-in backups, which they apparently don't use)
> I get scolded and screamed at when I say not our problem
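> For the curious, that "backup solution" boils down to a crontab line like this (device names are my guess):
> 0 2 * * 0 dd if=/dev/sda of=/dev/sdb bs=4M conv=noerror,sync
> Every week, the whole disk, raw, onto the same poor SSD.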
Fml2 -
!rant
Just did some really satisfying refactoring. Much happier with my work now. It's a little CLI app to poll M-Bus devices and write the data to a file if the user wants. It can scan the whole range, search for specific devices and VIFE codes, parse an input file for lots of the previous data, and one or two other things.
How's everyone else's weekend? -
Participating in an online hackathon in a few days. Still don't have a single person on my team lol6
-
What is the best alternative to cronjobs, guaranteeing high availability and jobs not being duplicated?6
-
Effective 1:1s are perhaps the most important soft skill that no one teaches you.
The HR onboarding section for 1:1s is only chapter one. But your manager won't teach you, your skip level won't teach you, and your mentor won't teach you. At best one of them even has an effective 1:1 skill set.
90% of 1:1s become operations: What went wrong this week and what needs to happen next week. Basically a private standup.
You attend 1:1s all year and yet somehow your manager doesn't know the difficulties you overcame, what you'd like to change, or how you're pushing yourself to grow. Then you get re-orged to a new manager.
I'd like to meet someone with effective 1:1s *and* low job satisfaction.3 -
I'm mad I didn't find out about Skillshare in 2020; I would be so good at Python by now... I was using some Udemy course that I didn't like. Dude, Skillshare is so much better in so many ways.4
-
Having problems getting the user's IP address with PHP.
So basically I made a custom DDoS protection for my linux server.
It works like this: the PHP website gathers the visitor's IP address when they perform a certain action (in this case, registering an account). All visitor IPs are stored in ips.txt, kept securely on my website's FTP.
Then my linux server has iptables rules set up so that it blocks all traffic except my website traffic.
On the linux server I have a cron job which pulls the whitelisted IPs every 5 minutes from my website's FTP and then whitelists them all in iptables.
That way only visitor IP's (of those who registered account in my website) are being whitelisted in my linux server.
In case of a DDoS attack, all traffic is dropped except for the whitelisted visitor's IP's gathered from website ips.txt
Now I'm having a problem. My PHP script is not accurate. Some visitors to my website are not being whitelisted because they might have a different IPv4 address than the one the PHP website sees. So basically I am looking for some PHP script/library that would gather ALL of a visitor's IPv4 addresses, then whitelist them.
Also, regarding IPv6: my rules there are all default (which means all IPv6 visitor traffic is allowed), so the problem is not with visitors on IPv6. The problem is my script not getting ALL the IPv4 addresses assigned to the user.
Can you recommend a PHP library for that? So far I've used https://github.com/marufhasan1/... but apparently it's not accurate enough.16
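For what it's worth, a minimal sketch of the cron side (chain name and file path are my own stand-ins):
#!/bin/bash
# rebuild a dedicated whitelist chain from the freshly pulled list
iptables -F WHITELIST
while read -r ip; do
    iptables -A WHITELIST -s "$ip" -j ACCEPT
done < /opt/ddos/ips.txt
As for the PHP side: a single request only ever exposes one address ($_SERVER['REMOTE_ADDR']), so no library can hand you "all" of a visitor's IPv4 addresses; if their IP changes later (mobile data, CGNAT, a DHCP renewal), the whitelist entry simply goes stale.
-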
CRONS :(
I have a module which is not working at all. It is written as a cron, and now I have no idea how to debug it.
I miss the office :(
This WFH sucks
I could have gotten help from my seniors, but in WFH they just leave messages on seen, or on a call it's --> network problem -> talk to you later :(5 -
Hello guys, just want to ask if anyone has made a speedtest-result Twitter auto-poster? Like a cron job that executes a speedtest... then posts the result to Twitter, so as to show our telco how shitty their service is.
I am thinking of a Selenium script, but I want it to run on a Raspberry Pi. Do you think it's doable?1
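Doable, and without Selenium: speedtest-cli is a real pip package with plain-text output you can pipe; the posting script here is a stand-in you'd write against the Twitter API:
0 * * * * /usr/local/bin/speedtest-cli --simple | /home/pi/post_tweet.sh
-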
I personally hate that ad blockers block analytics software, especially since (by law) users must give consent before it collects data, several opt-out options exist, and they are not f'ing ads.
I've been thinking of setting up a cron job to fetch analytics.js regularly and serve it locally as, let's say, jquery-4.0.js, and screw it. What's your insight on it, fellow frontend devs?8
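The cron job itself would be trivial; a sketch, with the destination path as my stand-in (analytics.js really is served from that Google URL):
0 4 * * * curl -s https://www.google-analytics.com/analytics.js -o /var/www/static/js/jquery-4.0.js
-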
At old e-commerce job, some orders were coming through with most of the shipping info missing. The only info filled out was the State. When we looked at Heap, we could see the user was filling in those fields. There was both frontend and backend validation for required form data, so the user shouldn’t have been able to checkout without an address.
When I looked at the BE logic, I saw addresses were retrieved from our database by using a method called GetOrCreateDefaultAddress. When the website couldn’t find the address in the db, it created a new one where the only address field that was filled in was the state.
Unfortunately, this default address creation was happening after the submit button had been hit. There was no logic to validate the address this late in the checkout because the earlier form validation in the process should have caught this.
The orders did have email addresses, so customer service did have a way to contact the customer. I have no idea what happened to the user’s address. Was it never saved? Did it get caught up in a cron job to delete old users and addresses from the db??1 -
How can I find where a function is called from? It looks like a ghost to me :(
When a payment is completed, it updates the status.
It is well written and working.
I checked all the files:
cron files,
webhook references.
I am not able to find where it runs from :(
Any suggestions?
The platform was not built by me2 -
Favorite / most used systemd timers?
I recently wrote one to pull the location data of users for an app and make a heat map of sorts, which updates every hour. I could probably run it on a quicker timer, but there aren't many eyes on it at the moment.5
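For anyone who hasn't written one yet, the hourly pair looks roughly like this (unit and script names are my own):
# heatmap.timer
[Unit]
Description=Rebuild the heat map hourly
[Timer]
OnCalendar=hourly
Persistent=true
[Install]
WantedBy=timers.target
# heatmap.service
[Unit]
Description=Rebuild the heat map
[Service]
Type=oneshot
ExecStart=/usr/local/bin/build_heatmap
Enable it with systemctl enable --now heatmap.timer.
-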
Does anybody know of a Java library that you can use to parse cron expressions? I mainly want to give it a cron expression and have it calculate its next execution time.6
-
Just built out my first app using Cloudflare Workers, Typescript, and DurableObjects. Holy shit, this is nice stuff.
It's taken little to no time to build out:
* JSON API written in Typescript
* JWT verification against my OAuth backend (SAML support too)
* CI Automated Deployments including unit tests
* DurableObject support
* 3rd party HTTP calls + caching (built in to the framework!) to reduce network latency and hiccups.
* Cron-like tasks on each stored object so they can awaken the app on a schedule and update themselves as necessary
* Rapid deployment to new environments
The local testing with coordinated "miniflare" is dreamy too. -
CRON JOBS SUCK. @LINUX YEAH YOU HEARD ME
MY PROGRAM WRITES INFO TO A DATABASE, SENDS EMAILS AND OUTPUT IS PIPED TO A LOG FILE. NONE OF THESE THINGS HAVE OCCURRED DURING THE CRON RUN SO I DON'T KNOW WHAT IS OR ISN'T WORKING.5
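Step one of any cron exorcism, as a sketch (job path is a placeholder): make the job tell you what it's actually doing, then check whether the schedule even fired.
* * * * * /usr/local/bin/myjob.sh >> /tmp/myjob.log 2>&1
grep CRON /var/log/syslog
The second line works on Debian-likes; the first captures stdout and stderr somewhere you can actually read them.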