Good morning! It's time for practiseSafeHex's most incompetent co-worker!
Today's contestant is a very special one.
*sitcom audience: WHY?*
Glad you asked. You see, if you were to look at his LinkedIn profile, you would see a job title unlike any you've seen before.
*sitcom audience oooooooohhhhhh*
We're not talking software developer, engineer, tech lead, designer, CTO, CEO or anything like that. No, no, our new entrant "G" surpasses all of those with the title ..... "Software extraordinaire".
*sitcom audience laughs hysterically*
I KNOW! Wtf does that even mean? As a previous dev-ranter pointed out, does this mean he IS quality code? I'd say he's more like a trash can ... where his code belongs
*ba dum tsssss*
Ok ok, let's get on with the show. Here are some reasons why "G" is on the show:
One of G's tasks was to build an analytics-gathering library for iOS, similar to Google Analytics, where you track pages and events (we couldn't use Google's). G was SO good at this job he implemented 2 features we didn't even ask for:
- If the library was unable to load its config file (for any reason) it would throw an uncatchable system integrity error, crashing the app.
- If anything was passed into any of the functions that wasn't expected (null, empty array etc.) it would crash the app as it was "more efficient" to not do any sanity checks inside the library.
This caused a lot of issues, as some of the data needed to come from the client's server. The day we launched the app, within the first 3 hours we had over 40k crash logs and a VERY angry client.
Now, what makes this story important is not the bugs themselves; come on, how many times have we all done something stupid? No, the issue here was that G defended all of this as the right thing to do!
.. and no he wasn't stoned or drunk!
G claimed if he couldn't get the right settings / params he wouldn't be able to track the event and then our CEO wouldn't have our usage data. To which I replied:
"So your solution was to not give the client an app instead? ... which also doesn't give the CEO his data".
He got very angry and asked me "what would you do then?". I offered a solution: something like, why not have a default tag for "error" or "unknown", where if there's an issue, we send up whatever we have, plus the file name, and store it somewhere else. I was told I was being ridiculous as it wasn't built to track anything like that and that would never work ... his solution? ... pull the library out of the app and forget it.
... once again giving everyone no data.
G later moved onto another cross-platform style project. The backend team were particularly unhappy as they got no spec of what needed to be done. All they knew was it was a single endpoint dealing with a very complex model. There were no Java classes, super classes, abstract classes or even interfaces, just this huge chunk of mocked data. So myself and the lead sat down with him, and asked where the interfaces for the backend were, or designs / architecture for them etc.
His response, to this day frightens me ... not makes me angry, not bewilders me ... scares the living shit out of me that people like this exist in the world and have successful careers.
G: "hhhmmm, I know how to build an interface, but i've never understood them ... Like lets say I have an interface, what now? how does that help me in any way? I can't physically use it, does it not just use up time building it for no reason?"
us: "... ... how are the backend team supposed to understand the model, its types, integrate it into the other systems?"
G: "Can I not just tell them and they can write it down?"
**
I'll just pause here for a moment, as you'll likely need to read that again out of sheer disbelief
**
I've never seen someone die inside the way the lead did. He started a syllable and his face just dropped, eyes glazed over and he instantly lost all the will to live. He replied:
" wel ............... it doesn't matter ... its not important ... I have to go, good luck with the project"
*killed the screen share and left the room*
Now I know you are all dying in suspense to know what happened to that project. I can drop the shocking bombshell that it was in fact cancelled. Thankfully only ~350 man-hours were spent on it.
... yep, not a typo.
G's crowning achievement however will go down in history. VERY long story short, backend got deployed to the server and EVERYTHING broke. Lead investigated, found mistakes and config issues on every second line, load balancer wasn't even starting up. When asked had this been tested before it was deployed:
G: "Yeah I tested it on my machine, it worked fine"
lead: "... and on the server?"
G: "no, my machine will do the same thing"
lead: "do you have a load balancer and multiple VM's?"
G: "no, but Java is Java"
... and with that it's time to end today's episode. Will G be our most incompetent? ... maybe.
Tune in later for more practiseSafeHex's most incompetent co-worker!!!
-
pm: our client wants a proprietary pdf compression app.
me: Okay gimme 3 days and some sample PDFs.
pm: they won't supply any sample PDFs because they contain confidential information.
me: okay fine, I'll download some from the interwebs.
** 3 days later **
me: here is the pdf compression app. all done and works with all of about 100 PDFs we tested with.
pm: okay great I'll have the client take a look.
** half an hour later **
pm: the client said that the compression app errors out.
me: okay I'll go look at the server logs to see what's up.
** 10 seconds later **
me: what the shit is a "Foxit PhantomPDF" file.
pm: it's the proprietary pdf format that they are using.
me: oh joy. I'll go try to find some sample files and see if I can fix it.
** 1 hour later, no sample files found **
pm: got anything?
me: *sobs obnoxiously*
-
this.title = "gg Microsoft"
this.metadata = {
rant: true,
long: true,
super_long: true,
has_summary: true
}
// Also:
let microsoft = "dead" // please?
tl;dr: Windows' MAX_PATH is the devil, and it basically does not allow you to copy files with paths that exceed this length. No matter what. Even with official fixes and workarounds.
Long story:
So, I haven't had actual gainful employ in quite a while. I've been earning just enough to get behind on bills and go without all but basic groceries. Because of this, our electronics have been ... in need of upgrading for quite a while. In particular, we've needed new drives. (We've been down a server for two years now because its drive died!)
Anyway, I originally bought my external drive just for backup, but due to the above, I eventually began using it for everyday things. including Steam. over USB. Terrible, right? So, I decided to mount it as an internal drive to lower the read/write times. Finding SATA cables was difficult, the motherboard's SATA plugs are in a terrible spot, and my tiny case (and 2yo) made everything soo much worse. It was a miserable experience, but I finally got it installed.
However! It turns out the Seagate external drives use some custom drive header, or custom driver to access the drive, so Windows couldn't read the bare drive. ffs. So, I took it out again (joy) and put it back in the enclosure, and began copying the files off.
The drive I'm copying it to is smaller, so I enabled compression to allow storing a bit more of the data, and excluded a couple of directories so I could copy those elsewhere. I (barely) managed to fit everything with some pretty tight shuffling.
but. that external drive is connected via USB, remember? and for some reason, even over USB3, I was only getting ~20MB/s transfer rate, so the process took 20some hours! In the interim, I worked on some projects, watched netflix, etc., then locked my computer, and went to bed. (I also made sure to turn my monitors and keyboard light off so it wouldn't be enticing to my 2yo.) Cue dramatic music ~
Come morning, I go to check on the progress... and find that the computer is off! What the hell! I turn it on and check the logs... and found that it lost power around 9:16am. aslkjdfhaslkjashdasfjhasd. My 2yo had apparently been playing with the power strip and its enticing glowing red on/off switch. So. It didn't finish copying.
aslkjdfhaslkjashdasfjhasd x2
Anyway, finding the missing files was easy, but what about any that didn't finish? Filesizes don't match (compression, remember?), so writing a script to check doesn't work. And using a visual utility like WinDirStat won't work either because of the excluded folders. Friggin' hell.
Also -- and rather the point of this rant:
It turns out that some of the files (70 in total, as I eventually found out) have paths exceeding Windows' MAX_PATH length (260 chars). So I couldn't copy those.
After some research, I learned that there's a Microsoft hotfix that patches this specific issue! for my specific version! woo! It's like. totally perfect. So, I installed that, restarted as per its wishes... tried again (via both drag and `copy`)... and Lo! It did not work.
After installing the hotfix. to fix this specific issue. on my specific os. the issue remained. gg Microsoft?
Further research.
I then learned (well, learned more about) the unicode path prefix `\\?\`, which bypasses Windows kernel's path parsing, and passes the path directly to ntfslib, thereby indirectly allowing ~32k path lengths. I tried this with the native `copy` command; no luck. I tried this with `robocopy` and cygwin's `cp`; they likewise failed. I tried it with cygwin's `rsync`, but it sees `\\?\` as denoting a remote path, and therefore fails.
However, `dir \\?\C:\` works just fine?
So, apparently, Microsoft's own workaround for long pathnames doesn't work with its own utilities. unless the paths are shorter than MAX_PATH? gg Microsoft.
At this point, I was sorely tempted to write my own copy utility that calls the internal Windows APIs that support unicode paths. but as I lack a C compiler, and haven't coded in C in like 15 years, I figured I'd try a few last desperate ideas first.
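In hindsight, a scripting runtime might have spared me the C: Python 3 on Windows calls the wide-character file APIs, so the `\\?\` prefix should work from `shutil`. A rough sketch, an untested assumption on my part with made-up paths, not something I actually ran at the time:

import os
import shutil

def longpath(p):
    # prefix an absolute Windows path so NTFS accepts ~32k characters
    p = os.path.abspath(p)
    return p if p.startswith("\\\\?\\") else "\\\\?\\" + p

def copy_long(src, dst):
    os.makedirs(longpath(os.path.dirname(dst)), exist_ok=True)
    shutil.copyfile(longpath(src), longpath(dst))

copy_long(r"E:\Backup\some\absurdly\deep\tree\file.dat",
          r"C:\Restore\some\absurdly\deep\tree\file.dat")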
For the hell of it, I tried making an archive of the offending files with winRAR. Unsurprisingly, it failed to access the files.
... and for completeness's sake -- mostly to say I tried it -- I did the same with 7zip. I took one of the offending files and made a 7z archive of it in the destination folder -- and, much to my surprise, it worked perfectly! I could even extract the file! Hell, I could even work with paths >340 characters!
So... I'm going through all of the 70 missing files and copying them. with 7zip. because it's the only bloody thing that works. ffs
Third-party utilities work better than Microsoft's official fixes. gg.
...
On a related note, I totally feel like that person from http://xkcd.com/763 right now ;;
-
Holy fuck, this is starting to work!
Problem: I am highly anti Google/Facebook/a few others, and I'd rather null-route those DNS requests.
The problem is that the pihole can only blacklist domains or wildcard domains, not words. So if Google were to come up with a new name for some of their domains, I'd be fucked, because I can't filter out the word Google through the pihole.
Today I fucking found the solution (still a work in progress but a PoC is nearly working):
Compiled a program which can monitor DNS queries/requests and log them to a file.
Have a php (yes I write most of my cli tools in php) script tailing the log file and gathering the requested domains from it.
Then I can see if the domain contains the substring which I don't like (google as word for example) and echo it to the end of my hosts file with 0.0.0.0 in front of it if that's the case.
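For illustration, here's the shape of that tail-and-append loop sketched in Python instead of my usual PHP. The regex assumes dnsmasq-style "query[A] domain from client" log lines, and writing to /etc/hosts needs root (plus a resolver reload to take effect), so treat it as a sketch:

import re
import time

BLOCKLIST_WORDS = ("google", "facebook")
QUERY = re.compile(r"query\[\w+\] (\S+) from")   # dnsmasq-style line format
seen = set()

with open("/var/log/pihole.log") as log:
    log.seek(0, 2)                               # start at the end, like tail -f
    while True:
        line = log.readline()
        if not line:
            time.sleep(0.5)
            continue
        m = QUERY.search(line)
        if not m:
            continue
        domain = m.group(1).lower()
        if domain not in seen and any(w in domain for w in BLOCKLIST_WORDS):
            seen.add(domain)
            with open("/etc/hosts", "a") as hosts:
                hosts.write("0.0.0.0 " + domain + "\n")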
Holy fuck this seems to be working! 😍
-
Site (I didn't build) got hacked, lots of data deleted, trying to find out what happened before we restore backup.
Check admin access, lots of blank login submissions from a few similar IPs. Looks like they didn't brute force it.
Check request logs, tons of requests at different admin pages. Still doesn't look like they were targeting the login page.
We're looking around asking ourselves "how did they get in?"
I notice the page with the delete commands has an include file called "adminCheck".
Inside, I find code that basically says "if you're not an admin, now you are!" Full access to everything.
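To spell out the punchline, the include apparently boiled down to something like this (a hypothetical reconstruction, not the site's actual code or language):

def admin_check(session):
    # hypothetical reconstruction of the broken logic
    if not session.get("is_admin"):
        session["is_admin"] = True   # "if you're not an admin, now you are!"
    return True                      # everyone passes, full access to everything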
I wonder if the attack was even malicious.
-
So, in the printing industry, FTP has a long and storied history as the standard method of sending art assets. But as time has gone on, more and more people are utterly incapable of handling FTP.
Customer: "I sent you the file. It's called xyz.zip"
PM: "I don't see the file."
Customer: "I know I sent it."
PM: "Let me check with IT."
I check the logs. No such file was uploaded.
PM: "What program did you use to send the file?"
Customer: "Firefox"
Every. Fucking. Time.
It turns out the Germans actually have a word for this:
-
Had an interview at an MNC.
He: Propose a solution for reading a huge log file, say 1 GB, and parsing out the errors with today's date.
Me: Gave two solutions, one with regex and a second with buffered reading of the log (reason: reading the entire file in one shot would cause a CPU spike and huge memory consumption), and I fell in love with my second approach. By the way, it was on paper.
He: (Without seeing the logic) Your syntax is wrong.
Me: Got frustrated. Who the hell checks syntax in an interview? I asked how many years of experience he had.
He: 10 years.
Me: I don't wanna continue, and I left.
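For the record, the buffered approach fits in a dozen lines; a sketch, assuming one timestamp-prefixed log line per entry:

import re
from datetime import date

today = date.today().isoformat()             # assumes "YYYY-MM-DD ..." prefixes
error_re = re.compile(r"\bERROR\b")

def todays_errors(path):
    with open(path, errors="replace") as log:
        for line in log:                     # iteration reads buffered chunks,
            if line.startswith(today) and error_re.search(line):
                yield line.rstrip("\n")      # never the whole 1 GB at once
-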
I think this is so far one of the most priceless WTF moments I encountered at my current work:
A coworker of mine came up to me explaining the problem he had with Russian characters in a filename. He explained in detail that everything works OK (the other part of the code he was fixing) if he changes the name of the file to test1.xlsx, for example, which doesn't use Russian characters. OK, great.
Then he goes on to show me how he fixed the other stuff, and of course everything blows up. The file he used for demonstration was of course the original file our customer provided; he just deleted the obvious Russian chars and left the rest.
МТС != MTC
I cracked up: but you still have Russian chars in the name.
The guy: no way, I deleted them all.
Me: but what about that МТС in the name? Guy: what about it?
Me: did you actually type that in or did you leave it there? Those are Russian chars that are fucking things up for you.
Guy: no way, it's MTC.
Me: checked the logs, you have ??? in the filename instead of МТС.. don't you find that at least a little bit suspicious?!
Guy: but it looks the same. How does it (the computer) know it is in Russian?!? //Why doesn't it understand?!
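(For anyone wondering how the computer "knows": the two strings are entirely different code points. A quick check:)

for s in ("МТС", "MTC"):
    print(s, [hex(ord(c)) for c in s])
# МТС ['0x41c', '0x422', '0x421']   Cyrillic Em, Te, Es
# MTC ['0x4d', '0x54', '0x43']      Latin M, T, C
print("МТС" == "MTC")                # False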
O.o I still can't believe it.. Is it just me & my high standards, or should it be normal for coders to know things such as character encoding & stuff?!?
I almost died of laughter; he and some other guy had problems finding customers in the software due to not being able to type the Russian chars << happened more than once before, even after I told them about a quick hack on how to use the Google Translate onboard keyboard & other stuff to produce proper chars so they can get a match..
I think when they bury me, I'll still be facepalming and laughing over this incident. 🤣🤣🤣🤣🤣🤣🤣
-
From my work -as an IT consultant in one of the big 4- I can now show you my masterpiece
INSIGHTS FROM THE DAILY LIFE OF A FUNCTIONAL ANALYST IN A BIG 4 -I'M NOT A FUNCTIONAL ANALYST BUT THAT'S WHAT THEY DO-
- 10:30, enter the office. By contract you should be there at 9:00 but nobody gives a shit
- First task of the day: prepare the power point for the client. DURATION: 15 minutes to actually make the powerpoint, 45 minutes to search all the possible synonyms of RESILIENCE BIG DATA AGILE INTELLIGENT AUTOMATION MACHINE LEARNING SHIT PISS CUM, 1 hour to actually present the document.
- 12:30: Sniff the powder left by the chalks on the blackboards. Duration: 30 minutes, that's a lot of chalk you need to snort.
13:00, LUNCH TIME. You get back to work not one minute sooner than 15:00
- 15:00, conference with the HR. You need to carefully analyze the quantity and quality of the farts emitted in the office for 2 hours at least
- 17:00 conference call, a project you were assigned to half a day ago has a server down.
The client sent two managers, three senior Java developers, the CEO, 5 employees -they know logs and mails from the last 5 months line by line-, 4 lawyers and a beheading teacher from ISIS.
On your side there are 3 external Ukrainians for the maintenance, successors of the 3 (already dead) developers who put the process in place 4 years ago according to God knows which specifications. They don't understand a word of what is being said.
Then there's the assistant of the assistant of a manager from another project that has nothing to do with this one, a feces officer, a sysadmin who is going to watch porn for the whole conference call and won't listen to a word, and two interns to make up the numbers and look like you're prepared. Current objective: survive. Duration: 2 hours and a half.
- 19:30, snort some more chalk for half an hour, preparing for the mail in which you explain to the associate partner how, because of the aforementioned conference call, we're going to lose a maintenance contract worth 20 grand per month (and a legal proceeding worth a number of dollars you can't even read) and you have no idea how this could happen
- 20:00, timesheet! Compile the weekly report, write what you did and how long it took for each task. You are allowed to log 8 hours per day; you worked at least 11 but nobody gives a shit. Duration: 30 minutes
- 20:30, update your consultant! Training course, "tasting cum and presenting its organoleptic properties to a client". Bearing on your job: none at all. Duration: 90 minutes, then there's half an hour of evaluation test where you'll copy the answers from a sheet given to you by a colleague who left 6 months ago.
- 22:30, CHANCE CARD! You have a new mail from HR: you asked for a refund for a 3$ sandwich, but the receipt isn't there and they realized it with a 9-month delay. You need to find that wicked piece of paper. DURATION: 30 minutes. The receipt most likely doesn't even exist anymore and will be taken directly from your next salary.
- 23:00 you receive a message on Teams. It's the intern. It's very late, but you're online and have to answer. There's an exception on a process which has been running for 6 years with no problems and which nobody ever touches. The intern doesn't know what to do, but you wrote the specifications for the thing 6 years ago, and everything MUST run tonight. You are not a technician and have no fucking clue about anything at all. 30 minutes to make sure it's something on our side and not on the client side, and in all that the intern is as useful as a confetto to wipe your ass. Once you're sure it's something on our side, you need to find the senior dev who received the maintenance of the project, call him and solve the problem.
It turns out a file in a shared folder nobody ever touches was unreachable 'cause one of your libraries left it open during the last run and Excel showed a warning modal while opening it; your project didn't like that last thing one bit. It takes 90 minutes to find the root of the problem; you solve it by rebooting one of your machines. It's 01:00.
You shower, look at yourself in the mirror and search for the line where your forehead ends and your hair starts. It has receded a little bit from yesterday; the change can't be seen with the naked eye, but you know it's there.
You cry yourself to sleep. Tomorrow is another day, but it's going to be exactly like today.
-
Long rant ahead.. so feel free to refill your cup of coffee and have a seat 🙂
It's completely useless. At least in the school I went to, the teachers were worse than useless. It's a bit of an old story that I've told quite a few times already, but I had a dispute with said teachers at some point after which I wasn't able nor willing to fully do the classes anymore.
So, just to set the stage.. le me, die-hard Linux user, and reasonably initiated in networking and security already, to the point that I really only needed half an ear to follow along with the classes, while most of the time I was just working on my own servers to pass the time instead. I noticed that the Moodle website that the school was using to do a big chunk of the course material with, wasn't TLS-secured. So whenever the class begins and everyone logs in to the Moodle website..? Yeah.. it wouldn't be hard for anyone in that class to steal everyone else's credentials, including the teacher's (as they were using the same network).
So I brought it up a few times in the first year, teacher was like "yeah yeah we'll do it at some point". Shortly before summer break I took the security teacher aside after class and mentioned it another time - please please take the opportunity to do it during summer break.
Coming back in September.. nothing happened. Maybe I needed to bring in more evidence that this is a serious issue, so I asked the security teacher: can I make a proper PoC using my machines in my home network to steal the credentials of my own Moodle account and mail a screencast to you as a private disclosure? She said "yeah sure, that's fine".
Pro tip: make the people involved sign a written contract for this!!! It'll cover your ass when they decide to be dicks.. which spoiler alert, these teachers decided they wanted to be.
So I made the PoC, mailed it to them, yada yada yada... Soon after, next class, and I noticed that my VPN server was blocked. Now I used my personal VPN server at the time mostly to access a file server at home to securely fetch documents I needed in class, without having to carry an external hard drive with me all the time. However it was also used for gateway redirection (i.e. the main purpose of commercial VPN's, le new IP for "le onenumity"). I mean for example, if some douche in that class would've decided to ARP poison the network and steal credentials, my VPN connection would've prevented that.. it was a decent workaround. But now it's for some reason causing Moodle to throw some type of 403.
Asked the teacher I had routers and switches classes from at the time: why is my VPN server blocked? He replied with the statement that "yeah, we blocked it because you can bypass the firewall with that and watch porn in class".
Alright, fair enough. I can indeed bypass the firewall with that. But watch porn.. in class? I mean I'm a bit of an exhibitionist too, but in a fucking class!? And why right after that PoC, while I've been using that VPN connection for over a year?
Not too long after that, I prematurely left that class out of sheer frustration (I remember browsing devRant with the intent to write about it while the teacher was watching 😂), and left while looking that teacher dead in the eyes.. and never have I been that cold to someone while calling them a fucking idiot.
Shortly after, I also received an email from them in which they stated that they wanted compensation for "the disruption of good service". They actually thought that I had hacked into their servers. Security teachers, ostensibly technical people, if I may add. Never seen anyone more incompetent than those 3 motherfuckers that plotted against me to save their own asses for making such a shitty infrastructure. Regarding that mail, I not so friendly replied to them that they could settle it in court if they wanted to.. but that I already knew who would win that case. Haven't heard from them since.
So yeah. That's why I regard those expensive shitty pieces of paper as such. The only thing they prove is that someone somewhere with some unknown degree of competence confirms that you know something. I think there's far too many unknowns in there.
Nowadays I'm putting my bets on a certification from the Linux Professional Institute, a renowned and well-regarded certification body in sysadmin. Last February at FOSDEM I did half of the LPIC-1 certification exam; next year I'll do the other half. With the amount of reputation the LPI has behind it, I believe that's a far better route to go than some random school somewhere.
-
The website for our biggest client went down and the server went haywire. We don't provide any infrastructure for this client though, so we called their IT partner to start figuring this out.
They started blaming us, asking us if we had upgraded the website or changed any PHP settings, all of which got a firm no from us. So they told us they had competent people working on the matter.
TL;DR: their people aren't competent and I ended up fixing the issue.
Hours go by, nothing happens; the client calls us and we call the IT partner; nothing, they don't understand anything. Told us they can't find any logs etc.
So we set up a conference call with our CXO, me, another dev and a few people from the IT partner.
At this point I’m just asking them if they’ve looked at this and this, no good answer, I fetch a long ethernet cable from my desk, pull it to the CXO’s office and hook up my laptop to start looking into things myself.
IT partner still can’t find anything wrong. I tail the httpd error log and see thousands upon thousands of warning messages about mysql being loaded twice, but that’s not the issue here.
Check top and see there are 257 instances of httpd, 256 of which are workers spawned by httpd; mysql is using 600% CPU, and whenever I try to connect to mysql through the CLI it throws a too-many-connections error.
I heard the IT partner talking about a DDoS attack, so I asked them to pull it off the public network and only give us access through our VPN. They did that, rebooted the server: same problems.
Finally we get the IT partner to roll back the VM to earlier last night. Everything works great; 30 min later, it crashes again. At this point I'm getting tired and frustrated; this isn't my job, I thought they had competent people working on this.
I noticed that the db had a few corrupted tables, and asked the IT partner to get a DBA to look at it. To no avail.
5 o'clock is here; we decide to give the VM rollback another try, but first we go home, get some dinner and resume at 6pm. I had told them I wanted to be in on this call, and said let me try this time.
They spend ages doing the rollback, and then for some reason they have to reconfigure the network and shit. Once it booted, I told their tech to stop mysqld and httpd immediately and prevent them from starting at boot.
I can now look at the logs that led to this issue. I notice our debug flag was on and had generated a 30GB log file. Tail it and see it's what I'd expect: warnings upon warnings. All the other logs for mysql and apache are huge as well, so the drive is full. Just gotta delete them.
I quietly start apache and mysql, see the website is working fine, shut it all down, and take a copy of the /var/lib/mysql directory and the /etc directory, just to have backups.
Starting to connect a few dots, but I wasn't exactly sure if it was right. Had the full drive caused mysql to corrupt itself? Only one way to find out: start apache and mysql back up, and just wait and see. Meanwhile I fixed mysql being loaded twice; some genius had put load mysql.so at both the top and bottom of php.ini.
While waiting for the server to crash again, I'm talking to the IT support guy, who told me they haven't updated anything on the server except security patches now and then, and that they don't have anyone familiar with this setup. No shit, it's running PHP 5.3 -.-
Website up and running 1.5 hours later: mission accomplished.
-
I had a huge epiphany on Friday... not all developers enjoy coding.
Discovered when they brought down 2 of our environments. We told them what was wrong with the changes in their code that caused the environments to break, gave them links directly to the file in the GitLab repo that needed to be updated, and...
They fucking went home. The change would’ve taken all of about 30-45 seconds to update and they fucking left.
This person's team lead comes storming in, pissed off because her manager is furious about 2 environments going down and preventing everyone else from being able to deploy their changes.
We provide the exact same details to the team lead about what needs to be changed, and advise that her team member took off....
30 mins later, her manager is storming up to us (devops/sre) livid as hell.
Explain the situation for a third time... manager is like, why can’t you guys fix it?
Look here, you dense motherfuckers: we can fix the code. We can be the plumbers that clean up your shit. But what value do you gain as a developer if you don't understand how the systems work and you keep pushing shit in?
Made the changes, fixed the environments, done right? Wrong.
The original developer made more changes not knowing what would happen and thoroughly fucked the environments again.
This dumb-fucking dumpster fire of a dude then sends us a slack message. “It’s down again, can you fix it?”
Our manager steps in and tells us to send him a link to the logs and have him fix it himself!
Thank goodness we have a badass manager.
Send logs, send repo file links (again), and send line numbers in the logs to try and help just a bit more. Dude goes almost the whole day without fixing it, environments are down, other devs are pissed, we throw this dude to the wolves. His manager starts to head over and was about to talk with my team lead when our manager steps out of his office and tells him the ins and outs of the situation, and that our job isn't to play log parser/error fixer for the developers. This dude that's breaking the environments needs to be the one to fix the issue, and his team lead should be aware of the problems and should have been able to catch his errors before they ever came to us.
The amount of hand-holding we do is ridiculous.
(Disclaimer, this one guy making some mistakes doesn’t sound too bad, but this is actually a common occurrence for like 40% of all of our developers)
We literally have interns still in college running circles around some of our full time devs. I know I’m not a developer, but for anyone that’s new-ish to developing, when you see shit like that please don’t lose hope. Those ass-hats got into programming purely for a paycheck, not because of passion.
Stick with it and your greatness will know no bounds 👍
As for you craptastic dipstick lickers, FUCK YOU!!! Go back to school and learn how to give a damn.
-
> dockerized gitea stops working 502,
> other gitea with same config works just fine
> is the same config the issue? maybe the network names can't be the same?
> no
> any logs from the reverse proxy?
> no
> does it return anything at all on that port?
> no
> any logs inside the container?
> no
> maybe it logs to the wrong file?
> no others exist
> try to force custom log levels
> ignored
> try to kill the running pid
> it instantly restarts
> try to run a new instance with specifying the new config
> ignores config
> check if theres anything even listening
> nothing is listening on that port, but is listening in the other working gitea container
> try to destroy the container and force a fresh container
> still the same issue
> maybe the recent docker update broke it? try to make a new one and move only necessary
> mkdir gitea2
> all files seem necessary
> guess I'll try to move the same folder here
> it works
> it is exactly the same files as in gitea1, just that the folder name is different
-
Have been trying to set up Netdata as a monitoring system for a while now and finally got it working!
Instead of the built-in webhooks, I just did a curl to a URL containing a PHP page/file which error-logs the status and description (just for testing).
It took me way too long to get it to work but BAM.
Immediately made a new CPU load rule (one-minute high load):
The satisfaction of getting an error message in the PHP logs containing my custom rule as a warning, and a minute later as critical 😍
Netdata ❤
-
And this, ladies and gentlemen, is why you need properly tested backups!
TL;DR: user blocked on an old gitlab instance cascade-deleted all projects the user was set as owner of.
So, at my customer, colleague "j" reviews gitlab users and groups, and notices a user who left the organisation
"j" : ill block this user
> "j" blocks user
> minutes pass away, working, minding our own business
> a wild team devops leader "k" appears
k: where are all the git projects?
> waitwut?.jpg
> k: yeah, all git projects the user was set as owner of are deleted
> j.feeling.despair() ; me.feeling.despair();
> checks logs on server, notices it cascade-deleted all projects belonging to that user
> lmgt log line
> it's a bug report from 3(!) years ago
> gitlab hasn't been updated in 3 years
> gitlab system owner is not present, backup contact doesn't know shit about it
> I investigate further: no daily backup cron tasks, no backup has been made whatsoever.
> only 'backups' are on file system level, trying to restore those
> gitlab requires restore of postgres db
> backup does not contain postgres since the backup product does not support that (wtf???)
> fubar.scene
> filesystem restore finished...
> backup product did not back up all files from the git tree; none of the refs were stored, since the product cannot handle such filenames .. Git repos completely broken
Fuck my life
-
What features would you want in a logger?
Here's what I'm planning so far:
- Tagged entries for easy scanning of log file
- Support for indenting to group similar sequential entries
- Multiple entry types (normal, info, event, warning, error, fatal, debug, verbose)
- Meta entries, so the logger logging about itself, e.g. disk i/o failures.
- Ability to add custom entry types, including tag, log-level, etc.
- Customizable timestamp function
- Support for JS's async nature -- this equates to passing a unique key per 'thread'; the logger will re-write all the parent blocks for context, if necessary. if that sounds confusing, it's okay; just trust that it makes sense.
- Caching, retries, etc. in the event of disk i/o issues.
- Support for custom writers, allowing you to e.g. write logs to an API rather than console or disk.
How about these features?
- Multiple (named) logs with separate writers (console, disk, etc.)
- Ability to individually enable/disable writing of specific entry types. (want verbose but not info? sure thing, weirdo!)
- Multiple writers per log. Combined with the above, this would allow you to write specific entry types (e.g. error, warning, fatal) to stderr instead of stdout, or to different apis.
- Ability to write the same log entry to multiple logs simultaneously
What do you think of these features?
What other features would you want?
I'm open to suggestions!
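To make the discussion concrete, here's the core I'm picturing for tagged entries, individually toggled entry types, and pluggable writers; sketched in Python for brevity even though the real thing targets JS:

import sys
from datetime import datetime, timezone

class Logger:
    TYPES = ("normal", "info", "event", "warning",
             "error", "fatal", "debug", "verbose")

    def __init__(self, enabled=None, writers=None, clock=None):
        self.enabled = set(enabled if enabled is not None else self.TYPES)
        self.writers = writers or [sys.stdout.write]    # console, disk, API, ...
        self.clock = clock or (lambda: datetime.now(timezone.utc).isoformat())

    def log(self, tag, message):
        if tag not in self.enabled:                     # verbose but not info?
            return                                      # sure thing, weirdo!
        entry = "[%s] [%-7s] %s\n" % (self.clock(), tag.upper(), message)
        for write in self.writers:                      # same entry, every writer
            write(entry)

log = Logger(enabled={"verbose", "error", "fatal"},
             writers=[sys.stdout.write, open("app.log", "a").write])
log.log("error", "disk i/o failure while flushing the cache")
log.log("info", "this type is disabled, so it goes nowhere")
-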
Ok so I've been working on this bug for the past four days, fucking non-stop. I wanted to fucking die, was wishing I could just "pkill -f mylife". I tried fucking everything, did what the documentation told me to, Stack Overflow, tried different versions of the API, read through more lines of documentation than there are lines in the bible, to no avail. Start comparing screenshots of error logs from the past four days, notice that I started getting a line saying that it's connecting to the config file in a different location from the default. I realize that the config file does not match the config file provided by the package installed, so I switch it to the default location. IT FUCKING WORKED, I've tested it nearly 10 times now and I am still in disbelief. It was a rollercoaster of emotions fixing the bug, but now I'm just smiling like a fool in my chair at work.
-
Never mess with a motivated developer. I will make your life difficult in return.
Me: we need server logs and stats daily for analysis
DBA: to get those, you need to open a ticket
Me: can't you just give me SFTP access and permissions to query the stats from the DB?
DBA: No.
*OK.... 🤔🤔🤔*
*Writes an Excel Template file that I basically just need to copy and paste from to create a ticket*
This process should not take me more than 2mins 👍😁😋🙂😙😙😙😙😙😙😙😙
For them.... 😈😈😈😈😈😈😈😈😈😈😈
-
Our website stopped working. A coworker said her guess, from an error she saw in the logs, was that it might have something to do with part of a commented line in .htaccess taking itself out of the comment and into code, due to a specific set of characters the line contained. I looked at .htaccess and thought "Nah, that can't be it." and proceeded to troubleshoot pretty much everything but that. After 30 minutes my coworker opened the file and fixed the comment, and all was well. I felt stupid because on our team I'm supposed to be the expert that figures out stuff like this.
-
Holy FREAKING shit!! This was worst stupidest mistake I have ever made!
About 9 hours ago, I decided to implement brotli compression on my server.
It looked a bit challenging for me, because all the guides involved compiling and building nginx with the brotli module, and I was not that confident doing that on a live site.
By the end of the guide, the site was not reachable anymore. I panicked.
Even the error logs and access logs were not picking up anything.
About a dozen guides, a new server, and a few major undocumented errors later, it turns out the main nginx.conf file had a line that was looking for *.conf files in the sites-enabled directory.
But my conf file was named after the domain name and ended with .com, and hence was not picked up by the new nginx.conf.
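The gotcha in miniature: that include line is plain glob matching, which you can sanity-check yourself (paths here are assumptions):

import glob

# nginx.conf had something like: include /etc/nginx/sites-enabled/*.conf;
print(glob.glob("/etc/nginx/sites-enabled/*.conf"))
# a vhost file named "mydomain.com" doesn't match *.conf,
# so nginx silently never loads it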
I'm not sure if I wasted my 9 hours because of that single line or not. But man, this was a really rough day!
-
That strange moment an entire PHP project no longer throws any errors, warnings or notices and you don’t know if you should believe it or not.
It’s been a long day trawling the logs file by file, action by action, surely I’ve missed something or it’s just waiting to break like hell when I commit the changes. -
Java dev here. I rewrote an app and replaced a system call to ssh with a modern JAX-RS POST for uploading a file and (new) some additional data.
I even used a stream.
1 hour in production, first client doesn't get his file. Log says OutOfMemoryError: heap.
Me: wtf? I already use streams.
Looking at the Jersey library. Docs say nothing. An issue from 2013 says: oh, if you silly don't use the Apache HttpClient addon, we disable chunking and buffer the whole body, because our tests fail with the JDK-included HTTP client otherwise.
Me: meh.
No warning in the logs. Thank you soooooo much! Who could have known?
-
Rant!!!!!!!
When you work hard on building the frontend and suddenly realise that whenever you restart your localhost, some URLs don't work. And it's random. The error logs also seem meaningless, as the latest error report keeps changing the error location from file to file. Wasted hours identifying the abnormal behaviour.
I'd always had the mentality to treat it as the programmer's fault, in order to always consider all possible flaws.
But I realised later that it was an OS settings issue. Did a stack trace about 300 lines deep and found the root cause (hopefully, as there have been no issues till now). The bug was related to the total allowed open files at a time.
-
ALWAYS read warnings guys.
Story time!
A client of ours has a synchronization app (we wrote it) between his in-house DB and our app. (No, no APIs on their end. It's a scheduled task.)
Because we didn't want to ask them for logs every single time, the app writes logs to disk (normal) and to Application Insights in Azure.
When needed, I can go into the portal and get all logs for the last execution in a nice CSV file.
Well, recently we added more logs (Some problems were impossible to track).
So client calls us : “problem with XXX”
Me : Goes to Azure, does the same manipulation as always. Dismiss a smaaaaaalish warning without reading. Study logs. Conclusion: “The XXX is not even in the logs, check your DB”.
Little did I know, the warning was telling me "Results are truncated at 10,000 lines".
So the client was right, I was wrong, and I needed to develop a small app to get logs with more than 10,000 lines. (It's per execution. Every 3 hours.)
-
Want to make someone's life a misery? Here's how.
Don't base your tech stack on any prior knowledge or what's relevant to the problem.
Instead design it around all the latest trends and badges you want to put on your resume because they're frequent key words on job postings.
Once your data goes in, you'll never get it out again. At best you'll be teased with little crumbs of data but never the whole.
I know, here's a genius idea: instead of putting data into a normal database then using a cache, let's put it all into the cache, and by the way, it's a volatile cache.
Here's an idea. For something as simple as a single log, let's use a queue that goes into a queue that goes into another queue that goes into another queue, all of which are black boxes. No rhyme or reason; queues are all the rage.
Have you tried: let's use a newfangled tangle, trust me it's safe, INSERT BIG NAME HERE uses it.
Finally it all gets flushed down into this subterranean cunt of a sewerage system and good luck getting it all out again. It's like hell except it's all shitty instead of all fiery.
All I want is to export one table, a simple log table with a few GB to CSV or heck whatever generic format it supports, that's it.
So I run the export-table-to-file command and off it goes, only for timeouts to start piling up less than a minute later until it aborts. WTF. So then I set the most obvious timeout setting in the client, no change; then another timeout setting on the client, no change; then I try to put it in the client configuration file, no change; then I set the timeout on the export query, no change; then finally I bump the timeouts in the server config, no change. Then I find someone has downloaded it from both tucows and apt, but they're using the tucows version, so its real config is in /dev/database.xml (don't even ask). I increase that timeout from seconds to a minute, and it's still timing out after a minute.
In the end I have to make my own exporter, and this involves working out how to parse non-standard binary-formatted data structures. It's the umpteenth time I have had to do this.
These aren't some no-name solutions, and it really terrifies me. All this is doing is taking some access logs, storing them in one place, then indexing by timestamp. These things are all meant to be blazing fast, but grep is often faster. How the hell is such a trivial thing turned into a series of one nightmare after another? Things that should take a few minutes take days of screwing around. I don't have access logs anymore because I can't access them anymore.
The terror of this isn't that it's so awful, it's that all the little kiddies doing all this jazz for the first time and using all these shit wipe buzzword driven approaches have no fucking clue it's not meant to be this difficult. I'm replacing entire tens of thousands to million line enterprise systems with a few hundred lines of code that's faster, more reliable and better in virtually every measurable way time and time again.
This is constant. It's not one offender, it's not one project, it's not one company, it's not one developer; it's the industry standard. It's all over open source software and all over dev shops. Everything is exponentially becoming more bloated and difficult than it needs to be. I'm seeing people pull up a hundred cloud instances for things that'll be happy at home with a few minutes to a week's optimisation effort. Queries that are O(N*N) and would only take a few minutes to turn into O(log N), but instead people rent out a fucking off huge-ass SQL cluster that not only costs gobs of money but takes a ton of time to maintain and configure, which isn't going to be done right either.
I think most people are bullshitting when they say they have impostor syndrome but when the trend in technology is to make every fucking little trivial thing a thousand times more complex than it has to be I can see how they'd feel that way. There's so bloody much you need to do that you don't need to do these days that you either can't get anything done right or the smallest thing takes an age.
I have no idea why some people put up with some of these appliances. If you bought a dish washer that made washing dishes even harder than it was before you'd return it to the store.
Every time I see the terms enterprise, fast, big data, scalable, cloud or anything of the like I bang my head on the table. One of these days I'm going to lose my fucking tits.
-
The company considers the project manager I work with to be the best. After working with him, I consider him to be everything that is wrong with project management.
This PM injects himself into everything and has a way of completely over-complicating the smallest of things. I will give an example:
We needed to receive around 1000 rows of data from our vendor, process each row, and host an endpoint with the data in json. This was a pretty simple task until the PM got involved and over complicated the shit out of it. He asks me what file format I need to receive the data. I say it doesnt really matter, if the vendor has the data in Excel, I can use that. After an hour long conversation about his concerns using Excel he decides CSV is better. I tell him not a problem for me, CSV works just as good. The PM then has multiple conversations with the Vendor about the specific format he wants it in. Everything seems good. The he calls me and asks how am I going to host the JSON endpoints. I tell him because its static data, I was probably going to simply convert each record into its own file and use `nginx`. He is concerned about how I would process each record into its own file. I then suggest I could use a database that stores the data and have an API endpoint that will retrieve and convert into JSON. He is concerned about the complexities of adding a database and unnecessary overhead of re-processing records every time someone hits the endpoint. No decision is made and two hours are wasted. Next day he tells me he figured out a solution, we should process each record into its own JSON file and host with `nginx`. Literally the first thing I said. I tell him great, I will do that.
Fast forward a few days and it's time to receive the payload of 1000 records from the vendor. I receive the file and open it up. While they sent it in CSV format, the headers and column order are different. I quietly, without telling the PM, adjust my code to fit what I received, run my unit tests to make sure it processed correctly, and output each record into its own JSON file. The job is now done and the project manager gets credit for getting everything to work on the first try.
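For scale, the whole deliverable, minus the meetings, is roughly the snippet below (the file names and the id column are my stand-ins). Reading rows with csv.DictReader is also why the reshuffled headers and column order didn't matter:

import csv
import json
from pathlib import Path

def csv_to_json_files(csv_path, out_dir, key="id"):
    # one <key>.json per record, ready for nginx to serve statically
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):   # rows keyed by header, order-independent
            (out / (str(row[key]) + ".json")).write_text(json.dumps(row))

csv_to_json_files("vendor_payload.csv", "/var/www/records")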
This is absolutely ridiculous: the PM has logged an absurd 120 hours against this task! Because of all the meetings, constant interruptions, and changing of his mind, I have 35 hours against this task. In reality the actual time I spent writing code was probably 2-3 hours; all the rest was dealing with this PM's meetings and questions and indecisiveness. From a higher level, he appears to be a great PM because of all the hours he logs, but in reality he takes the easiest of tasks and turns them into a nightmare. This project could have easily been worked out between me and the vendor in a 30 min conversation, but this PM makes it his business to insert himself into everything. And then he has the nerve to complain that he is so overwhelmed with all the stuff going on. It drives me crazy, because this inefficiency and unwanted help makes everything he touches turn into a logistical nightmare, but yet he is viewed as one of the company's top Project Managers.
-
Set up a 2GB upload to run and a 6GB folder to compress while I went to do an errand. Came back to find the computer had rebooted itself while I was out. No reason for it in the event logs. Just a random reboot for giggles, I guess. The file upload aborted with no resume, and I'm unsure if the full folder compressed. Have to start over.
-
Now this looks stupid already, but here is the kicker: by "partially hydrated cursor" I mean that once every page size, an SQL query is run to get the next page's content. This code is put in an event handler, executed every time a file is uploaded in a DMS where files get uploaded by the thousand.
To sum things up, this simple snippet achieves triple dipping:
* waste time on useless sql queries
* waste cpu on useless iterations
* waste disk space on useless logs
Icing on the cake, the author of this piece of shit was complaining about the overall slowness of the process.
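The snippet itself isn't reproduced here, but the shape of it was roughly this (a hypothetical reconstruction with invented names and page size, not the actual code):

import logging
import sqlite3

PAGE = 50
log = logging.getLogger("dms")
db = sqlite3.connect(":memory:")          # stand-in for the real database
db.execute("CREATE TABLE documents (id INTEGER)")

def on_file_uploaded(event):              # fires for every single upload
    page = 0
    while True:
        rows = db.execute("SELECT id FROM documents LIMIT ? OFFSET ?",
                          (PAGE, page * PAGE)).fetchall()   # query per page
        if not rows:
            break
        for (doc_id,) in rows:            # full table walk on each upload
            log.warning("checked document %s", doc_id)      # log line per row
        page += 1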
Needless to say, when I stumbled on this, both internal *and* external screaming ensued...
-
Not a question per se but an assignment -
Design an application that could find logs between two timestamps where the logs are stored in 10000 files, each with a file size of ~16GB.
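One workable shape for an answer, sketched below: log lines within a file are typically time-ordered, so binary-search each file for a starting byte offset instead of scanning all 16GB, then fan the 10000 files out over a process pool. The fixed-width ISO timestamp prefix is an assumption:

import os
from concurrent.futures import ProcessPoolExecutor
from itertools import repeat

TS_LEN = 19        # "YYYY-MM-DDTHH:MM:SS"
BLOCK = 1 << 20    # probe granularity: 1 MiB

def find_block(f, start, size):
    # binary search over blocks for the first one at/after `start`
    lo, hi = 0, size // BLOCK
    while lo < hi:
        mid = (lo + hi) // 2
        f.seek(mid * BLOCK)
        f.readline()                      # discard the partial line
        line = f.readline()
        if not line or line[:TS_LEN] >= start:
            hi = mid
        else:
            lo = mid + 1
    return max(lo - 1, 0) * BLOCK         # back off one block to be safe

def grep_range(path, start, end):
    start, end, hits = start.encode(), end.encode(), []
    with open(path, "rb") as f:
        off = find_block(f, start, os.path.getsize(path))
        f.seek(off)
        if off:
            f.readline()                  # resync to a line boundary
        for line in f:
            ts = line[:TS_LEN]
            if ts < start:
                continue                  # still inside the back-off window
            if ts > end:
                break                     # time-ordered file: we're done
            hits.append(line)
    return hits

# fan out across all 10000 files:
# with ProcessPoolExecutor() as ex:
#     for lines in ex.map(grep_range, files, repeat(t0), repeat(t1)):
#         ...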
For an entry level position this was a really good and interesting problem to solve.
-
TLDR; I was editing the wrong file, let's go to bed.
We have this huge system that receives data from an API endpoint, does a whole bunch of stuff, going through three other servers, and then via some calculation based on the data received from the UI, and data received from the endpoint, it finally sends the calculated fields to the UI via websocket.
Poor me sitting for over 4 hours debugging and changing values in the logic file trying to understand why one of the fields ends up being null.
Of course every change needs a reboot of all 4 servers involved, and a hard refresh of the UI.
I even tried to search for the word null in that file, but to no avail.
After scattering hundreds of console logs, and pulling my hair out, I found out that I was editing the wrong file.
I guess it's time for some sleep.
-
I used to think that I had matured. That I should stop letting my emotions get the better of me. Turns out there's only so much one can bottle up before it snaps.
Allow me to introduce you folks to this wonderful piece of software: PaddleOCR (https://github.com/PaddlePaddle/...). At this time I'll gladly take any free OCR library that isn't Tesseract. I saw the thing, thought: "Heh. 3 lines quick start. Cool.", and the accuracy is decent. I thought it was a treasure trove that I could shill to other people. That was before I found out how shit of a package it is.
First test, I found out that logging is enabled by default. Sure, logging is good. But I was already rocking my own logger, and I wanted it to shut the fuck up about its log because it was noise drowning out the stuff I actually wanted to log. Could not intercept its logging events, and somehow just importing it set the global logging level from INFO to DEBUG. Maybe it's a Python quirk, who knows. Check the source code: ah, the constructor takes a `show_log` arg to control logging. The fuck? Why? Why not let the user opt into your logs? Why is the logging on by default?
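For context, the advertised quick start really is about three lines, and that `show_log` flag is the opt-out in question (true for the 2.6.x versions I was on; treat the result shape as version-dependent):

from paddleocr import PaddleOCR

ocr = PaddleOCR(lang="en", show_log=False)   # logging is ON unless you opt out
result = ocr.ocr("scan.png")
for box, (text, score) in result[0]:         # [0]: the extra layer 2.6.0.2 added
    print(text, score)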
But sure, it's just logging. Surely, no big deal. SURELY, it's got decent documentation that is easily searchable. Oh, oh sweet summer child, there ain't. Docs are just some loosely bundled-together Markdowns chucked into /doc. Hey, docs at least. Surely, surely there's something in there about all the args to the OCRer constructor. NOPE! Turns out, for all the args, you gotta reference the `--help` switch on the command line. And like all "good" software from academia, unless you're part of academia, it's obtuse as fuck. Fine, fuck it, back to /doc, and it took me 10 minutes of rummaging to find the correct Markdown file that describes the params. And good-fucking-luck to you trying to translate all them command-line args into Python constructor params.
"But PTH, you're overreacting!". No, fuck you, I'm not. Guess whose code broke today because of a 4th number version bump. Yes, you are reading correctly: My code broke, because of a 4th number version bump, from 2.6.0.1, to 2.6.0.2, introducing a breaking change. Why? Because apparently, upstream decided to nest the OCR result in another layer. Fuck knows why. They did change the doc. Guess what they didn't do. PROVIDING, A DAMN, RELEASE NOTE. Checked their repo, checked their tags, nothing marking any releases from the 3rd number. All releases goes straight to PyPI, quietly, silently, like a moron. And bless you if you tell me "Well you should have reviewed the docs". If you do that for your project, for all of your dependencies, my condolences.
Could I just fix it? Yes. Without ranting? Yes. But for fuck's sake, if you're writing software for a wide audience you're kinda expected to be even more sane in your software's structure and release conventions. Not this. And note: the people writing this aren't random people without coding expertise. But man, they feel like they are.
-
So today I found a file share containing some super super sensitive information accessible to what I think was our entire user base (6,500 users) if you knew the server name and had an interest in nosing around.
I reported it to our head of IT and heard nothing after, although 5 mins after reporting I could no longer access...
I suspect the infrastructure lead is going to be a dick (because he's one of them awkward non-team-player kind of guys) and not thank me for preventing our company from being in the national newspapers... but instead try to spin it into why I was nosing around his servers in the first place..
I actually feel 50/50 about whether I should have told or not.. but on the flip side, I guess the access logs of me listing the files as I flicked through to confirm my suspicions would have caused a bigger headache.
Fucking useless infrastructure engineers!
-
I made a bash script for my website that anonymises the visitor IPs in the Awstats logs by replacing the last octet with 0. It can either process all logfiles except the one of the current month, or only the one of the previous month. The latter mode is how I put it in a cron job to be called on the first day of each month.
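The core substitution is tiny; here it is sketched in Python (the real script is bash, where something like a sed substitution does the same thing):

import re

IP = re.compile(r"\b(\d{1,3}\.\d{1,3}\.\d{1,3})\.\d{1,3}\b")

def anonymise(line):
    # replace the last octet of every IPv4 address with 0
    return IP.sub(r"\1.0", line)

print(anonymise("66.249.66.1 - - [01/May/2024:12:00:00 +0200] GET /"))
# -> 66.249.66.0 - - [01/May/2024:12:00:00 +0200] GET /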
Everything worked flawlessly with test data, but on the server, some visitor IPs were not anonymised. I noticed that all of them were from the last day of the previous month. Looking at the time stamp of the logfile, it was indeed from the first of the current month, but not from 00:21 where my cron job runs - instead, it was modified around 14:30.
Then I realised that the Awstats engine seems to be configured to batch add the log entries once per day at 14:30 so that when my cron job ran, the visitor data from between 14:30 and 00:00 were not yet in the file!
Solution: batch-process all previous logfiles once to clean them up, and schedule the cron job for the 2nd of each month at 00:21.
-
Stupid pipeline bullshit.
Yeah, I get it, it speeds up development/deployment time, but debugging this shit, with secret variables and generated config only viewable inside Kubernetes after everything has been entered into the Helm charts through Key Vaults in the pipeline, just to see the Docker image fail with "no such file found" or similar errors...
This means, a new commit, a new commit message, waiting for the docker build and push to finish, waiting for the release pipeline to trigger, a new helm chart release, waiting for kubernetes deployment and taking a look at the logs...
And another error which shouldn't happen.
Docker, fixes "it runs on my machine"
Kubernetes, fixes "it runs on my docker image"
Helm, fixes "it runs in my kubernetes cluster"
Why is this stuff always so unnecessarily hard to debug?!
I sure hope the devs appreciate my struggle with this... well guess what, they won't.
Anyways, weekend is near and my last day in this company is only four months away.
-
Once upon a time I went on vacation.
It was for 5 days and I went to Leh-Ladakh with my family (me, my big bro and my parents).
It's a beautiful and cold place. Snow and high mountains and no phone calls from anyone.
It was supposed to be no calls. But on day 3, I got a call from my junior and he said the server was not working and was giving a 404 error.
So I told him to go to cPanel (it was the client's server). After 1 hour I got a call back from him; he was not able to fix it.
So I had to open cPanel on my Galaxy Note 8, open the file manager, go through all the files and logs, and fix the code in 2 or 3 files.
It took 4 hours to fix the problem. But that day I understood the value of my Note 8 and its big screen. Thank you, Samsung.
Note: The lake in the photo is Pangong Lake.
-
Spend literally two days trying to figure out why I have a 2-hour offset in my timezones for a LAMP web app.
Check logs, reset Apache/MySQL/PHP timezones in like 100 places. Use 3rd party server side and client side timezone libraries. Moment.js you say? Shit works like a charm... but is, of course, still two hours off.
MySQL is right. PHP is right. Apache is right. PHP libs are in place. Finally convert the entire damn project to use epoch time because I have a deadline, I have no more time to read backwater AWS docs and try to figure out why the hell this Ubuntu EC2 is fucked up, and I literally cannot figure out why in the hell the damn clock is off.
Several days later notice a variable in the main .config file... right in root... 2 hour timezone offset.
Fuuuuuuuuuuuuuuuuuuuuck.8 -
Pentesting for an undisclosed company. Let's call them X, so as not to get us into trouble.
We are students and are doing our first pentest at an actual company instead of assignments at school. So we're very anxious. But today was a good day.
We found some servers with open ports, so we checked a few of them out. I had a set of them with a bunch of open ports like FTP and... 8080. Time to check this out.
"Please install Flash Player"... Security risk 1 found!
The system seemed to be some monitoring system. Trying to log in using admin admin... Fucking works. The group loses it, 'cause the company was being all high and mighty about being secure af. Other shit is pretty tight though.
Able to see logs, change the password, add a new superuser, do some searches for USERS_LOGGEDIN_TODAY! I shit you not, the system even had SUGGESTIONS for usernames to search for, one of which had something to do with sftp and auth keys. Unfortunately, every search gave a SQL syntax error. Used sniffing tools to maybe intercept messages so we could do some queries of our own, but nothing. The query is probably not issued from the local machine.
Tried to decompile the flash file, but no luck; only got some weird lines and a few function names, I presume. But decompressing it and opening it in a text editor allowed me to see and search the text. No GET or POST found. No SQL queries or name checks or anything else we could think of.
That's all I could do for today, so we'll have to think of stuff for next week. We've already planned XSS, so maybe we can do that on this server as well.
We also found some older network printers with open telnet, servers running a specific SQL variant with a potential exploit to execute terminal commands, and some FTP and SMB servers we need to check out next week.
Hella excited about this!
If you guys have any suggestions let us know. We are utter noobs when it comes to this.
-
Been using Ubuntu for years; never knew this log file named 'syslog.1' could start filling about 80% of my root space 🤦
-
Yesterday was a horrible day...
First of all, as we are short a few devs, I was assigned production bugs... A few applications from the mobile app were getting fucked up. All fields in the DB were empty: no customer name, email, mobile number, etc.
I started investigating: took a dump from the DB, analyzed the created_at timestamps. Installed the app, tried to reproduce the bug, everything worked. Tried API calls from Postman, again worked. There were no error emails either.
So I asked for the server access logs; devops took 4 hrs just to give me the log. Went through 4 million lines and found 500 errors on the mobile APIs. Went to the file: no error handling in place.
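For reference, the kind of quick filter that gets you through 4 million lines, sketched in Python (the log path and format are assumptions):

import re
from collections import Counter

hits = Counter()
# Matches combined-log-format lines that returned HTTP 500
pattern = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+" 500 ')
with open("/var/log/nginx/access.log") as fh:
    for line in fh:
        m = pattern.search(line)
        if m:
            hits[m.group(1)] += 1

for endpoint, count in hits.most_common(10):
    print(count, endpoint)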
So I have a bug to fix which occurs in 1 of 100 cases, no stack trace, no idea what is failing. Fuck my job.
-
When starting projects, following semver.org quickly gets out of hand. Nothing is "backwards-compatible"
0.0.0 gitignore
0.1.0 prints arguments
1.0.0 prints text of web requests
2.0.0 prints parsed web requests
3.0.0 prints filtered requests
4.0.0 prints using yaml
5.0.0 logs to file
5.0.1 catch error
5.1.0 interactive
-
Been working on a new project for the last couple of weeks. New client with a big name, probably lots of money for the company I work for, plus a nice bonus for myself.
But our technical referent....... Goddammit. A PhD in computer science, and he probably approved our project outline himself. 3 days into development, the basic features of the application are there for him to see (yay, Agile), and guess what? We need to change the user roles hierarchy we had agreed on. Oh, and that shouldn't be treated as extra development, it's obviously a bug! Also, those features he never talked about and that were never in the project? That's also a bug! That thing I couldn't start working on before yesterday because I was still waiting on the specs from him? It should've been ready a week ago, it's a bug that it's not there! He also noted how he could've developed it within 40 minutes, and offered to send us the code to implement directly in our application, or he may even do so himself.... Ah, I forgot to say, he has no idea what language we are developing the app in. He has said he doesn't care many times so far.
But the best part? Yesterday he signalled an outstanding bug: some data had been changed without anyone interacting. It was a bug! And it was costing them moneeeeey (on a dev server)! Ok, let's dig in, it may really be a bug this time, I did update the code and... Wait, what? Someone actually did upload a new file? ...Oh my Anubis. HE had replaced the file a few minutes earlier and tried to make it look like a bug! ..May as well double check. So, 15 minutes later I answer his e-mail, saying that 4 files had been compromised by a user account with admin privileges (not mentioning I knew it was him)... And 3 minutes later he answered me. It was a message full of anger, saying (oh Lord) it was a bug! If a user can upload a new file, it's the application's fault for not blocking him (except users ARE supposed to upload files, and admins have been requested to be able to circumvent any kind of restriction)! Then he added how lucky I was, because "the issue resolved itself and the data was back, and we shouldn't waste any more time on this". Let's check the logs again.... It's true! HE UPLOADED THE ORIGINAL FILES BACK! He... He has no idea that logs exist? A fucking PhD in computer science? He still believes no one knows it was him....... But... Why did he do that? It couldn't have been a mistake. Was he trying to troll me? Or... Or is he really that dense?
I was laughing my ass off there. But there's more! He actually phoned my boss (who knew what had happened) to insult me! And to threaten me not to dwell on that issue anymore, because "it's making them lose money". We were both speechless....
There's no way he's a PhD. Yet the piece of paper he has is legit. Funny thing is, he actually managed to launch a couple of sort-of-nationally-popular webservices, and takes every opportunity to remind us how he built them from scratch and so knows what he's saying... But digging through Google, you can easily find out how he actually outsourced the development to Chinese companies while he "watched over their work" until he bought the code.
Wait... Big ego, a decent amount of money... I'm starting to guess how he got his PhD. I also get why he's a "freelance consultant" and why none of the places he worked for ever hired him again (he couldn't even cover his own tracks)....
But I can't get his definition of "bug".
If it doesn't work as intended, it's a bug (ok)
If something he never communicated is not implemented, it's a bug (what.)
If development has been slowed because he failed to provide specs, it's a bug (uh?)
If he changes his own mind and wants to change a process, it's a bug it doesn't already work that way (ffs.)
If he doesn't understand or like something, it's a bug (I hope he dies of sonic diarrhoea)
I'm just glad my boss isn't falling for him... If anything, we have enough info to accuse him of sabotage and delaying my work....
Ah, right. He also didn't get that to publish our application we needed access to the server he wanted us to deploy it on. Also, he doesn't understand why we have access to the app's database while admin users created in the webapp don't. These are bugs (seriously, his own words). Outstanding ones.
Just..... Ffs.
Also, sorry for the typos.
-
This literally happened in my current team, and I'm not even an experienced dev yet.
The incident happened like this:
Our team is working on an RCP based on Eclipse plugins, which has a headless mode and a GUI mode. Now, in the GUI mode, my manager cum architect thought there was no need for user log files (long story) because the user can see the info on screen, whereas in the headless mode, she wanted me to print the logs to the console and a log file as well.
Now it just so happened that our team had got a recent addition as a replacement for our lead developer (she left the company), who claimed she had 3 years of expertise and a masters degree, and she was assigned a task. The task was to format a custom file we were generating out of the product (basically dumping info into a file) in a human-readable format. Miss new-addition-masters-degree decided it would be a very good idea to redirect the standard Java output stream to a file output stream (which she used for generating the formatted file) but somehow never realized that she needed to reset the output stream back to standard output.
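The same foot-gun, sketched in Python for brevity (the original was Java's standard output stream; all names here are made up):

import sys

def write_formatted(path):
    # The mistake: hijack the process-wide stdout to produce the formatted file...
    sys.stdout = open(path, "w")
    print("formatted dump of the custom file")
    # ...and never set it back. From now on, every "log" line that targets
    # standard output silently lands in the formatted file instead.

write_formatted("formatted.txt")
print("this log line ends up in formatted.txt, not where the logger expects")

# The fix (besides never touching sys.stdout at all): restore the real stream.
sys.stdout = sys.__stdout__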
The consequences were devastating. I wrote the logic for the logger (yes, apparently any available logging mechanism wouldn't do, again, long story) and had it printing to a file in the tmp directory. The logs seemed to be working fine initially, but after a few logs, specifically from the point where the formatter started working, all the logs got printed into the formatted file. And this file was supposed to be used by our clients to develop something on top of it. Naturally, I got the heat for it, and then, worried and nervous and curious and in a frenzied state of mind, I started debugging.
When I got to the actual fault, I seriously could not decide whether to cry or laugh or call up miss masters and scream at her. I decided to ask her about what the hell she had written, and her answer was that most of it was written by the developer she replaced, so she didn't know it would cause this much of a problem. Anyway, I fixed the leak after that and averted the catastrophe.
And that, fellow devs, is the story of how I solved a crisis in my first year at corporate.
-
Existing code:
Logger class would block the caller, lock a mutex, call CreateFile(), write a single line to the file, unlock the mutex and return.
Improvement:
Added two logging queues and created a thread that periodically locks one queue and writes it to disk, around 500 entries at a time, while new entries are being inserted into the other queue. Kinda like a bed pan or urine bottle: while emptying one bottle, the logs go into the other one. Added fatal exception handlers so that the log queues are dumped when the application is crashing. When the exception handler is triggered, the logging method does not return, so that the application STOPS working, to make sure there are no "not logged" activities.
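A minimal sketch of the two-queue idea in Python (the original was native code around CreateFile(); the names and the flush interval are made up):

import threading
import time

class SwappingLogger:
    def __init__(self, path, interval=1.0):
        self.path = path
        self.interval = interval
        self.active = []      # the bottle being filled
        self.draining = []    # the bottle being emptied
        self.lock = threading.Lock()
        threading.Thread(target=self._flush_loop, daemon=True).start()

    def log(self, line):
        with self.lock:       # callers only ever touch the active queue
            self.active.append(line)

    def _flush_loop(self):
        while True:
            time.sleep(self.interval)
            with self.lock:   # swap bottles, then write without holding the lock
                self.active, self.draining = self.draining, self.active
            if self.draining:
                with open(self.path, "a") as fh:
                    fh.write("\n".join(self.draining) + "\n")
                self.draining.clear()
-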
So, it's been a while since I've been working on my current project, and I'd never had the "luck" to touch the legacy project written in PHP, until this week when I got my first issue.
And damn, this goddamn issue. It was a bug, a very strange bug that only happened in production and that nobody had any idea about, so yeah, I didn't have anyone to ask, and I got less time than usual (because Thanksgiving).
And thus, I have no starting point, no previous knowledge on PHP and less time! I expected a very fun week 😀 and it was beyond my expectations.
First I tried to understand what might be causing the issue, but there wasn't any real clue to start with, so no choice: time to read through the flow of the code and see what they're doing and using (1k-line files, yay, legacy). Luckily I got some clues: we're using a cookie and a PHP session variable for the session. Ok, let's start with the session variable. Where is it initialized? Well, spoiler alert, I shouldn't have started with that, because my search ended up in the login method of the API that sets that variable, and for some reason in the frontend app it was always false, which led me to think that some of the new backend functions were failing, but after checking the logs I got no luck.
Ok, maybe the cookie is the issue. I should try opening the previous website in the brow... redirect to the new project's login. What? Why? I ask around, and it's a new feature pushed on Monday. Ok, I've got Chrome dev tools: I can see which value the cookie is being set to, and THERE IT WAS, it had a wrong domain! After 2 days (I'm summarising a lot of my pain) I had what I'd been looking for, so now I should be able to fix the bug.
Then where is the cookie initialized? In the first file the server hits whenever you try to enter any page of the app. Ok, I found the method, but it's using a function that processes the domain and sets it correctly? wtf? Then how in heaven do I get the incorrect domain? Hello? Ok, relax, you still have one more day to fix this, let's take it easy.
Then, at the end of Wednesday: nope, I still have no clue how this is happening. I talked with the DevOps guy and he explained to me how this redirection happens and what it depends on. Because I'm not from the US and I'm not in the US, I still have time, but the sprint is messed up already, so whatever, I'm going to get this bug done anyhow.
Thursday! I got sick, yay, what else could happen this week. Somehow I managed to work a little and start thinking about what external issue could affect the processing. Maybe the redirection was pointing somewhere wrong? Let's talk with the DevOps guy again, and he answers that the redirection is done by PHP code, IN A FILE THAT DOESN'T EXIST IN THE REPOSITORY. Amazing, it's just amazing. Then he explained to me why this file might be missing and how the deployment of this app works (btw the DevOps guy is really cool and I will buy him a beer). After that I checked the file and saw a random session_start in its first line, without any configuration. Eureka! There was the cause, and I only needed to ask someone if that line is still necessary, but oh, they're on holiday. Damn. Well, I'll wait till Monday to ask them. But once and for all, that bug was done for! 🎉
What did I learn? PHP, and that I don't want any more PHP tickets 😆.
-
A large pool of application instances is writing logs to the same physical file. No way to distinguish which instance wrote which line.
Welcome to hell
We're being asked questions. We're replying that we cannot help unless the logging is fixed. No one's bothering to fix this mess; instead they return the tickets with requests to investigate more.
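What "fixed" might look like, as a minimal sketch (Python for brevity; tagging each line with hostname and PID is an assumption):

import logging
import os
import socket

instance = f"{socket.gethostname()}:{os.getpid()}"
logging.basicConfig(
    filename="shared.log",
    format=f"%(asctime)s [{instance}] %(levelname)s %(message)s",
    level=logging.INFO,
)
logging.info("now you can tell which instance wrote which line")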
F.U.N
/s
-
I really like helping others learn how to use a programming language or solve problems in general. I often stop working on my hobby projects and go out of my way just to help someone.
That being said, I'm no programming god. I myself am striving to become a better programmer.
I make mistakes, I can't always help you, I am still learning, but I only have good intentions. And you are by no means obligated to follow my advice. Quite the contrary: fight me, try to prove me wrong, point out possible flaws. THINK ABOUT WHAT I TELL YOU. DON'T JUST BLINDLY FOLLOW MY ADVICE AND BITCH AT ME LATER.
This happens rather often and I can see why you want to blame me. And I can't deny that part of this is also my fault.
Situations like these don't really tilt me.
But today someone had the fucking nerve to pop a file into the chat and get mad at me for suggesting a cleaner, shorter and more efficient solution. LIKE I DON'T FUCKING CARE THAT IT TOOK YOU A WHOLE DAY TO IMPLEMENT SOMETHING I CAN DO BETTER IN MINUTES, I JUST WANT TO HELP YOU.
But the best thing I get afterwards: "But you told me to do it like that." BITCH, WHAT!?
I have chat logs telling me loud and clear that we never talked about that concept before, in private or on a public server (bless Discord's search function). And I will not accept your lousy excuse of having confused me with someone else. You disrespected me greatly, you put words in my mouth, just to justify your petty anger, when I'm trying to help you?!
Get crucified and put on a shooting range!
I offer my help out of pure goodwill. Something you'd normally have to pay for. And this is the treatment I get in return?
Just rm -rf your disastrous code, dd if=/dev/urandom your hard drive, and sod off!
-
Let me start this off by stating I'm a Java dev, and a noob with C++.
Thought it'd be cool to learn some OpenCL, since I want to do some maths stuff and why not learn something new.
So I sat down, installed the Nvidia proprietary drivers, broke my X.Org server, purged, reinstalled, rebooted, and after a while I got stuff sorted out.
Then on to my IDE. I use CLion, which uses CMake. C++ noob knows shit about CMake, so I struggled for two hours trying to figure out wtf is going on with the OpenCL libs and why they're only partially detected. Fml.
Finally, everything is configured and I'm set. I start working on a Hello World program using OpenCL. Finish it in 20 mins, all good. No output. Do some googling, check my program a million times. Nothing wrong here. Check the kernel, everything as in the tutorial.
After a while I start checking the error codes reported by OpenCL (which I had no clue was a thing), and I get some code saying the program was not created properly (to run the kernel). No fucking clue what's up with that. Google around, find another tutorial, rewrite my code in case I'm using outdated code or something. Nothing.
Fast forward an hour: I find out that OpenCL has logs! So I grab some code from the website I found it on, and voila, I finally get some info on what's going on.
Get a load of this bs.
In the kernel file, so that OpenCL knows that it's a function to run, you have to put __kernel. But in all the places I read, it said to put it as _kernel.
Add the underscore, compile, run and everything is perfect.
Then I tried just putting 'kernel'. Also compiles and runs fine.
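For reference, a sketch of what finally surfaced the problem: reading the build log from the host side (shown with pyopencl rather than the rant's C++; note that both __kernel and plain kernel are valid OpenCL C qualifiers, while _kernel is just an unknown identifier to the compiler, so the build fails):

import pyopencl as cl

KERNEL_SRC = """
__kernel void hello(__global char *out) {  /* plain `kernel` also works */
    out[0] = '!';                          /* `_kernel` = build error   */
}
"""

ctx = cl.create_some_context()
prog = cl.Program(ctx, KERNEL_SRC)
try:
    prog.build()
except cl.RuntimeError:
    for dev in ctx.devices:
        # This log is where "unknown type name '_kernel'" actually shows up.
        print(prog.get_build_info(dev, cl.program_build_info.LOG))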
Two hours, and my program was fixed by adding an underscore. IF ONLY C++ GAVE AN INDICATION OF WHAT BLEW UP INSTEAD OF SITTING BACK AND BEING LIKE "oh wow man feels bad, work some magic and try again" THEN THIS WOULD NOT HAVE TAKEN SO LONG.
Then again, it was OpenCL that was being shitty with its styling enforcement or whatever the hell the underscore business is. But screw it. C++ eats shit too for this. Sure, maybe Java babies you by giving you the exact error and position that the error took place at. But at least that way you don't waste hours of your life chasing invisible bugs 😠😠
I'm going to eat some food... Too much energy was consumed fighting the system... Then I'll get back to OpenCL because 😇 but that doesn't make it less bs.
-
>finally gets around to installing vsftpd on home server RPi
>doesn't work
hmm.mp2
>configurating
>confusing as fuck template documentation
>man page isn't much better
>gets it working
>goes to log in
User: pi
Password: a
(What? It's a home file/command server isolated from the Internet. Sue me.)
nope.avi
>why
>tries again
nope.svg
>FUCK
>sees small raw-command log in bottom-right of phone FTP client
hmm.flac
>tries again, watches log
PASS *****
>the fuck
>goes to change user pass over SSH
# passwd
"Current password?"
about half a second later
"passwd: auth token manipulation denied"
>the delay tho
>WAIT A SECOND
One time I got past some parental-software bullshit on a tablet by abusing the delay between opening a banned app and the redirect to the normal software, at like age 7. (Doing so let me enable remote wipe through Google. Bye bye software!)
>*inner 7 year old has autistic screech*
# nano temp
a
abcdefghi
abcdefghi
^O Y ^X
# passwd < temp
>fucking works
>logs in to FTP server successfully
>does the one file download that was needed
why and how did that fucking work
-
Remember kids, /etc/passwd is a world-readable file! You can have a very bad day trying to figure out a user's shell from side-channel attacks and getting nowhere, or you could remember that it LITERALLY SAYS WHAT IT IS PUBLICLY IF YOU DON'T FORGET THAT IT'S THERE.
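A quick sketch; Python's pwd module reads the entry for you (the username is hypothetical):

import pwd

entry = pwd.getpwnam("pi")   # field 7 of the /etc/passwd line is the shell
print(entry.pw_shell)        # e.g. /bin/bash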
On the plus side, I learned a ton about what you can do with ssh arguments and debugging logs. Shit's pretty cool.
-
Mount an Azure file share in an App Service container? Sounds handy. Nice clicky-draggy wizard to set it up: pick your file share, type a path to mount it to, hit save.
And does it work?
Does it buggery.
And is there a helpful error message so you can see what you've done wrong?
In a pig's arse is there a helpful fucking error message.
"Application error", and a link to some "diagnostic resources" that displays the exact same error message, including the same link, so a link to itself, in an infinite recursive loop of rank, inhuman stupidity.
Let me see what's in the logs. Absolutely fuck all. No, wait! There's the html markup for the fucking useless error message I'm looking at in the browser. So the UI is telling me to fuck off, and the logs are recording that I have been told to fuck off.
But this is Azure. So there isn't just one place to look at the logs, there are many places to look at the logs. And they are all geologically slow and most of them don't work.
It's probably a firewall issue. I'll have a look later on if I can be arsed, but frankly I'd rather be performing cunnilingus on a lion.
-
A command line tool built in Python that helps you analyse your git logs by exporting them into a CSV/JSON file.
Can fetch the logs from a given file path or a git directory.
https://github.com/dev-prakhar/...
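The gist of the approach, sketched in Python (not the linked tool's actual code): shell out to git log with a parse-friendly separator and write the rows out as CSV.

import csv
import subprocess

SEP = "\x1f"  # unit separator; very unlikely to appear in commit metadata
fmt = SEP.join(["%H", "%an", "%ae", "%ad", "%s"])
out = subprocess.run(
    ["git", "log", f"--pretty=format:{fmt}", "--date=iso"],
    capture_output=True, text=True, check=True,
).stdout

with open("gitlogs.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["hash", "author", "email", "date", "subject"])
    writer.writerows(line.split(SEP) for line in out.splitlines())
-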
I feel like a fucking god now!
We run a webshop and we are under contract with the national post office. Every time there is an update to their program, I fear ahead of time what will be fucked up again.
After today's update we weren't able to open any shipment list; we just saw a mile-long error message. After customer care couldn't figure out the problem, and the suggested solution might take up to 2 days (and is basically only a new customer file), I fired up my good old SQLite viewer friend to check whether I'd get lucky...
Guess what! That shit is using unsecured SQLite DBs, so I had no problem examining and even rewriting the values. So, checking the logs and scraping the DB, I found the problem.
Apparently some asshole thought that deleting a service but keeping all of its references scattered around in other tables is a good fucking idea. And take that, customer care: the new customer file won't fix shit, because the problem was in the global DB. I swear I am getting more familiar with that piece of garbage than the ones who made it.
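The kind of orphan hunt that turns this up, sketched with Python's sqlite3 (all table and column names are made up):

import sqlite3

con = sqlite3.connect("postoffice.db")
orphans = con.execute("""
    SELECT s.id, s.service_id
    FROM shipments AS s
    LEFT JOIN services AS v ON v.id = s.service_id
    WHERE v.id IS NULL      -- references to a deleted service
""").fetchall()
print(orphans)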
On top of that, customer care told us that if we couldn't manage to send the shipment list with the program, we were not eligible for our contractual prices.
It is not enough that I had to fix their fucking shit program; they also "would like to charge us" because their program isn't working. What a fucking great service. (At least the lady on the telephone was friendly.)
-
Software has no pre-built packages. Clone the repo and try to compile from source. Spend 1.5 hrs hunting for the libraries - no list published. Configure, of course, had trouble finding one I had installed; had to debug the configure file to see how it was searching for it. Turns out it was appending a subdirectory to whatever path I gave it. FINALLY configures, and I run "make all". Everything compiles!!! Try to follow the documentation to set up the software, run the 1st CLI command -> Segmentation Fault with no logs....
-
Newbie Linux user - a story about a GUI that stopped working
I have been a proud openSUSE user for about a year, but I'm still struggling with some basic stuff, the terminal, etc.
The story begins a few days ago when I tried to log in to the system, to my trusty Gnome, and got stuck in a login loop:
successful login -> black screen for a second -> back to the login screen.
Zero feedback, not a single error message
Stress level increases, taking into account that I am at a critical point at my university, with tons of projects on my computer.
I assemble the Team A:
Me, Google, Stackoverflow, and for desperate times Russian Stackoverflow
Over 4 hours, I found out that my user is affected by this, tried restoring the default Gnome configuration, and went through a bunch of logs, only to find that every user gets the same errors, yet still only mine wasn't working. Even KDE refused to cooperate, with the same result.
So what went wrong, you may be thinking?
One line in a file, replaced by Miniconda, that changed the PATH.
Linux is the best detective game that I've ever played.
Is it something that I should get used to?
-
Hey there, did anybody else notice unauthorized login attempts over SSH? I have a demo DigitalOcean droplet that I just left running for some logs - there isn't any important data on it - but when I SSH back into that machine after an interval of at most 5 to 6 days, the post-login message says 9876 login attempts were made. So I went straight to the SSH log (the secure log file) and got all those IPs. Turns out most were from China, some from France, and all are trying random login names like user, admin, etc., with random passwords, over multiple ports, even non-standard ones. Does anyone else see this happening?
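For anyone wanting to do the same triage, a quick sketch (the log path varies by distro: /var/log/auth.log on Debian-likes, /var/log/secure on RHEL-likes):

import re
from collections import Counter

ips = Counter()
with open("/var/log/auth.log") as fh:
    for line in fh:
        m = re.search(r"Failed password for .* from (\S+)", line)
        if m:
            ips[m.group(1)] += 1

# Top offenders first
for ip, n in ips.most_common(10):
    print(n, ip)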
-
Pushed some changes to PROD today. Go to log in and check the changes... noooooope!
Still a bit new to Symfony 5... but I'm just not a fan right now. The login screen just jumps back to itself. No "login failed" message, and the prod log had a size of 0, so that was no help.
Traced this thing way down into the CSRF authentication functions. \is_callable(...namespace) just returning null, so no go on getting a token for isTokenValid() =/
Ugh! This is truly the most torturous junk I've ever seen. Nothing in the logs, so I decided to just use the good old ECHO'HERE' debugger.
What was the issue, you might ask?... the effin' YAML file.
The fix for now is to set the session handler_id back to null.
-
The bullshittery continues. This time around it's absolutely innocent: clamav is the root cause. For once it's not an incompetent idiot but a piece of software. IDK if that makes me happy or upset.
So our email server that I configured and took care of died. RIP. Damn, better put it back together ASAP. So I'm under pressure, while still pissed at everything I ranted about before (actually my last 2 rants were throttled; all of this happened within the past 60 minutes, but devRant rate limiting), and I start auditing logs. You can imagine: we kinda need it NOW, it's the second time this month clamav is pulling stunts, and the MTA (properly) refuses to work without the antivirus.
clamav deamonize() failed - cannot allocate memory
Hmm. Interesting, but sounds like bullshit. I know the server is quite micro, because they wanted to save on costs as much as possible, but it has well over half a gig of free RAM just before it crashes (like 800 MB) with that message. Is it allocating almost a gig in one call or what? Looked carefully at trusty htop while it was starting, and indeed, suddenly it just dies with quite a bit of RAM free, almost as much as it already weighs. And I remember booting it up when I was configuring it, and it had a fair bit of headroom.
Google, help me, friend... Okay, great, so apparently at some point clamav loads the virus DB into RAM (dafuq?) and then forks, which causes a spike of 2x the RAM usage, and then immediately frees it up.
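A toy illustration of that spike (Python standing in for clamd, which is native code; the size is made up): the parent holds a big buffer and forks, and for a moment the kernel has to account for two copies, which is exactly the headroom the swap file provides.

import os

blob = bytearray(512 * 1024 * 1024)  # stand-in for the virus DB held in RAM

pid = os.fork()   # commit charge roughly doubles for a moment right here
if pid == 0:
    os._exit(0)   # child exits immediately; the spike is gone
os.waitpid(pid, 0)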
Great, that sounds like a great design decision... At least now I know I can just slap on a swap file, restart it and call it a day.
It worked; the swap file is almost empty (15 megs used, 900 megs of free RAM, whatever).
That leaves me wondering: who decided to load the DB into RAM? It pretty much means clamav will eat a little more RAM with each virus DB update, and that millisecond "double RAM" spike will confuse innocent people who just wanted to run clamav, and it worked for *a long period of time* and now crashes without warning, without any changes to the configuration.
Maybe there is a logical explanation; I want to know it.
-
Quick Plesk config question...
Been getting open_basedir() notices in the WordPress logs, and frankly it's flooding the log right now. Sample below:
[24-Feb-2019 07:05:19 UTC] PHP Warning: file_exists(): open_basedir restriction in effect. File(/var/www/vhosts/webspacedomain.com/SiteInstallDirectory/wp-content/db.php) is not within the allowed path(s): (/var/www/vhosts/webspacedomain.com/:/tmp/) in /var/www/vhosts/webspacedomain.com/SiteInstallDirectory/wp-includes/load.php on line 397
Checking the settings for open_basedir in the domain's PHP settings, it's currently set to the following default value:
{WEBSPACEROOT}{/}{:}{TMP}{/}
By my read, that **should** be granting permission to the directory. I just checked it against the setting on the dev server (which doesn't report this error), and it's configured in the same manner. The only difference between the dev environment and this one is that the one in dev is in vhosts/webspacedomain.net/DEV instead of just vhosts/webspacedomain.net
Is there something I'm missing here?
-
When an error page fails to download a file crucial to the error page and there's nothing in the logs
-
Relatively often, the OpenLDAP server (slapd) behaves a bit strangely.
While it is a little bit slow (I didn't do a benchmark, but Active Directory seemed a bit faster; then again it has other quirks and is Windows-only), with a small number of users it's fine. slapd is the reference implementation of the LDAP protocol, and I didn't expect it to be much better.
Some years ago slapd migrated to a different configuration style: instead of a configuration file and a required restart after every change, it now uses an additional database for "live" configuration, which also allows deploying multiple servers with the same configuration (I guess this is nice for larger setups). Much of the documentation online does not reflect the new configuration style, so using it requires some knowledge of LDAP itself.
It is possible to revert to the old file-based method, but that option might be removed in any future version - and restarts may take a little bit longer. So I guess, don't do that?
To access the configuration over the network (using only the command line on the server to edit the configuration is sometimes a bit... annoying), an additional internal user has to be created in the configuration database (while working on the local machine as root, you are authenticated over a Unix domain socket). I mean, I had to create an administration user during the installation of the service, but apparently that one is only for the main database...
The password in the configuration can be hashed as usual - but strangely it only accepts hashes of some passwords (a hashed version of "123456" is accepted, but not the hashes of different passwords - I mean, what the...?), so I have to use a single plaintext password... (Secure password hashing works for normal user and admin accounts.)
But even worse are the default logging options: by default (at least on Debian) the log level is set to DEBUG. Additionally, if slapd detects optimization opportunities, it writes them to the logs - at least once per connection, if not per query. Together with an application that made a lot of connections and queries (this was not intended and got fixed later), THIS RESULTED IN 32 GB OF LOG FILES IN ≤ 24 HOURS! - enough to fill up the disk and crash other services (lessons learned: add more monitoring, monitoring, and monitoring, and /var/log should be an extra partition). I mean, logging optimization hints is certainly nice - it runs faster now (again, I did not do any benchmarks) - but the verbosity was way too high.
The worst parts are the error messages: when entering a query string with a syntax error, slapd returns error code 80 without any additional text - the documentation reveals the SO MUCH BETTER meaning: "other error". THIS IS SO HELPFUL... In the end I was able to find the reason why the input was rejected, but in my experience most error messages are a little bit more precise.
-
tldr: I am looking for recommendations for a basic website for my parents. GOTO question;
Pre-Story:
My parents have a small (offline) business. They have a website to give some general information and list their weekly offers.
When I felt that what had come out of the website-building tool (you know, clicky-clicky stuff) looked a bit too early-2000s and was a total ripoff for what you get (almost 20€ per month), I created something with Google Sites for them. Feel free to roast me, but web development is not my field, and now it looks much more modern, is mobile friendly and does what it is supposed to do. Weekly offers are edited in a Google Sheets file, which is embedded in the website. Not great, but this way my mom doesn't have to deal with editing tables on the page - trust me, it wouldn't look good. This also meant they could downgrade the hosting package, dropping the clicky-tool and keeping just the domain (maybe 1€ per month). The website itself is hosted for free by Google.
Some time ago GDPR became a thing, and then I was tasked to have a look at it. (Side note: I don't want to rant about being responsible for it; that's fine. My parents don't really ask me to do a lot for them.) You can't enter any data on the website, it's just very basic stuff, and data-protection-wise there's just the "usual" stuff (cookies, embedded tools, logs). I added another page with a halfway complete privacy policy. Regarding the whole cookie issue (do not enforce unnecessary cookies) I couldn't find an easy solution. It's not 100%, but what can you really expect from a small business like this? I've seen worse.
Now to the question:
Can you recommend a good alternative to the current solution (Google Sites)?
It should be cheap (<3€/month incl. domain) and my parents should be able to make some basic changes (just text in predefined locations). I am not afraid to get my hands dirty - I can deal with some HTML, CSS, JS - but I don't want to sink a lot of time into this. No need for analytics or the like. Maybe a newsletter would be cool (with the weekly offers), but that's just a random thought of mine and definitely not necessary.
Thanks for reading :)
-
Hey guys!
Once again, I got a little stumped while writing one thingamajig in Python.
I am normally not a programmer (I work as a sysadmin), so I don't really know all the fancy abstract ways things are done "properly", which is why I need to ask here:
I have a program separated into parts. The "core" is the part that sets up the command-line argument structure (using the argparse library), loads the master configuration file, sets up the main logging facility, and then proceeds to load "plugins": Python files with one or more classes that implement one specific abstract class, which forces them to implement a common interface of init, run and cleanup functions.
The core then proceeds to initialize those classes, run the "run" function, and run the "cleanup" function.
If the plugin class throws a Warning, it is only logged and runtime continues. If it is anything else, the program logs it and stops.
Now, the issue is, sometimes, a user may want to continue even if a non-warning occurs.
Let's say that I am creating a user, and the user already exists. Sometimes the person running the program might want to continue with further plugin execution. And what I was told was to implement specific command-line switches that force continuation of the run despite the plugin failing.
How should I implement it? The most obvious thing is to add a specific switch for every plugin, but that is exactly what I am trying to avoid. I want to keep the core as abstract as possible.
Another solution I thought of is to have a file of some sort that lists extra switches to implement; then it would be up to each class to decide whether it uses a switch or not (I pretty much pass on the entire Namespace received from the parse_args() function), but this also feels kinda hackish.
I thought about having some sort of function that the plugin could call in the core to add a new argument, but at the point that plugins start loading, the argument parser is already compiled and cannot be changed further.
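One more option, sketched below: parse in two phases with parse_known_args(), so plugins loaded after the first pass can still register their own switches (load_plugins and register_arguments are hypothetical names for the pieces described above).

import argparse

def load_plugins(config_path):
    # Hypothetical stand-in for the plugin loader described above.
    return []

core = argparse.ArgumentParser(add_help=False)
core.add_argument("--config", default="core.conf")
known, _ = core.parse_known_args()   # phase 1: core options only

plugins = load_plugins(known.config)

full = argparse.ArgumentParser(parents=[core])
for plugin in plugins:
    plugin.register_arguments(full)  # e.g. adds --continue-if-user-exists
args = full.parse_args()             # phase 2: core + plugin switches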
Any other ideas of how to re-implement the program are also welcome! I may not do it this time, but I'd at least learn something new again.
-
chmod a+w storage/logs/laravel.log
This command makes the file writable.
So why can't I edit the file after running this command? No errors were given when it ran.
Also tried with -R on the logs folder.
What is happening with this software? Why does nothing work?
-
Moengage is one of the worst pieces of analytics software I have ever worked with...
Integrating it into a React website is a pain in the ass: they don't have an npm package, you need to add a script tag to the HTML file.
It also has a weird bug where the service worker mentioned in the documentation doesn't work when the debug logs are off.
Aaaargh. Now I have to make a service worker handler to import this service worker and see if it works...
-
Despite already having a few years of professional experience dealing with Linux servers, I still, to this day, get confused about which environment file gets sourced and when...
There's /etc/profile, /etc/bashrc, ~/.bash_profile, ~/.profile, ~/.bashrc
I think it's... Bashrc for interactive shells, profile for login shells.
But then I have examples like "ssh user@server 'echo $var'" that... don't source any of the files!
You can enable user environment files for SSH that get sourced whenever a user logs on through SSH (~/.ssh/environment / environment specified for a key in ~/.ssh/authorized_keys)
Is there some sort of master environment file that gets sourced *every* time, no matter what kind of shell starts?