Paranoid Developers - It's a long one
Backstory: I was a freelance web developer when I managed to land a place on a cyber security program with an organisation I consider to be the world leader in the field (details deliberately withheld; who's paranoid now?). Other than the basic security practices of web dev, my experience with cyber was limited to the OU introduction course, so I was wholly unprepared for the level of, occasionally hysterical, paranoia that my fellow cohort seemed to perpetually live in. The following is a collection of stories from several of these people, because if I only wrote about one, they would accuse me of providing too much data and allowing an attacker to aggregate it and steal their identity. They do use devRant, so if you're reading this, know that I love you and that something is wrong with you.
That time when...
He wrote a social media network with end-to-end encryption before it was cool.
He wrote custom 64kb encryption for his academic HDD.
He removed the 3 HDDs from his desktop and stored them in a safe whenever he left the house.
He set up a pfSense VirtualBox VM with a firewall policy to block the port the student monitoring software used (effectively rendering it useless and definitely in breach of the IT policy).
He used only hashes of passwords as passwords (which isn't actually good).
He kept a drill on the desk, ready to destroy his HDD at a moment's notice.
He started developing a device to drill through his HDD when he pushed a button. May or may not have finished it.
He set up a new email account for each individual online service.
He hosted a website from his own home server so he didn't have to host the files elsewhere (which is just awful for home network security).
He unplugged the home router and began scanning his devices and manually searching through the process list when his music stopped playing on the laptop several times (turns out he had a wobbly spacebar, and the shaking washing machine provided enough jitter for a button press).
He brought his own privacy screen to work (remember, this is a security place, with like background checks and all sorts).
He gave his C programming coursework (a simple messaging program) 2048-bit encryption, which was not required.
He wrote custom encryption for his other C programming coursework, as well as writing out the Enigma encryption himself because there was no library; again, not required.
He bought a burner phone to visit the capital city.
He bought a burner phone whenever he left his hometown come to think of it.
He bought a smartphone online, wiped it and installed new firmware (it was Chinese; I'm not saying anything about the Chinese, you're the one thinking it).
He bought a smartphone and installed Kali Linux NetHunter so he could test WiFi networks he connected to before using them on his personal device.
(You might be noticing it's all he's. Maybe it is, maybe it isn't).
He ate a sim card.
He brought a balaclava to pentesting training (it was pretty meme).
He printed out his source code as a manual read-only method.
He made a rule on his academic email to block incoming mail from the academic body (to be fair this is a good spam policy).
He withdraws money from a different cashpoint every time to avoid patterns in his behaviour (the irony).
He reported someone for hacking the centre's network when they had built their own website for practice using XAMPP.
I'm going to stop there. I could tell you so many more stories about these guys, some about them being paranoid and some about the stupid antics Cyber Security and Information Assurance students get up to. Well done for making it this far. Hope you enjoyed it.
Often I hear that one should block spam email based on content matching rather than IP matching. Sometimes I even hear that blocking Chinese ranges in particular is prejudiced and racist. Allow me to debunk that, having watched traffic on port 25 with tcpdump for several weeks now, and having gotten rid of most of my incoming spam in the process.
There are these spamhausen that communicate with my mail server as often as every minute.
- biz-smtp.com
- mailing-expert.com
- smtp-shop.com
All of them are Chinese. They make up - rough guess - around 90% of the traffic that hits my edge nodes, if not more.
The network ranges I've blocked are apparently as follows:
- 193.106.175.0/24 (Russia)
- 49.64.0.0/11 (China)
- 181.39.88.172 (Ecuador)
- 188.130.160.216 (Russia)
- 106.75.144.0/20 (China)
- 183.227.0.0/16 (China)
- 106.75.32.0/19 (China)
.. apparently I blocked that one twice, heh
- 116.16.0.0/12 (China)
- 123.58.160.0/19 (China)
It's not all China, but holy hell, a lot of spam sure comes from there, given how Golden Shield supposedly blocks Chinese citizens' access to the internet. A friend of mine who lives in China (how he got past the firewall is beyond me, and he won't tell me either) told me that while incoming information is "regulated", they don't give half a shit about outgoing traffic to foreign countries. Hence all those shitty filter bag suppliers and whatnot. The Chinese government doesn't care.
So what is the alternative like, that would block based on content? Well there are a few solutions out there, namely SpamAssassin, ClamAV and Amavis among others. The problem is that they're all very memory intensive (especially compared to e.g. Postfix and Dovecot themselves) and that they must scan every email, and keep up with evasion techniques (such as putting the content in an image, or using characters from different character sets t̾h̾a̾t̾ ̾l̾o̾o̾k̾ ̾s̾i̾m̾i̾l̾a̾r̾).
But the thing is, all of that traffic comes from a certain few offending IP ranges, and an iptables rule that covers a whole range is very cheap. China (or any country, for that matter) has too many IP ranges to block all of them, but the certain few offending ranges? I'll take a cheap IP-based filter over expensive content-based filters any day. And I don't want to be shamed for that.
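To put numbers on "cheap": the whole filter on my end is a handful of DROP rules. Here's a minimal sketch of how I'd script them using the ranges listed above; it assumes a Linux edge node with iptables and root privileges, and it's an illustration of the idea, not my actual setup script.

```python
# Minimal sketch: one cheap DROP rule per offending range, on port 25 only.
# Assumes a Linux edge node with iptables installed and root privileges.
import subprocess

BLOCKED_RANGES = [
    "193.106.175.0/24",  # Russia
    "49.64.0.0/11",      # China
    "181.39.88.172",     # Ecuador
    "188.130.160.216",   # Russia
    "106.75.144.0/20",   # China
    "183.227.0.0/16",    # China
    "106.75.32.0/19",    # China
    "116.16.0.0/12",     # China
    "123.58.160.0/19",   # China
]

for cidr in BLOCKED_RANGES:
    # Equivalent to: iptables -A INPUT -p tcp --dport 25 -s <cidr> -j DROP
    subprocess.run(
        ["iptables", "-A", "INPUT", "-p", "tcp", "--dport", "25",
         "-s", cidr, "-j", "DROP"],
        check=True,
    )
```

Each of those rules is a single match in the kernel's packet path; compare that to feeding every single message through a content scanner.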
As a developer, I constantly feel like I'm lagging behind.
Long rant incoming.
Whenever I join a new company or team, I always feel like I'm the worst developer there. No matter how much studying I do, it never seems to be enough.
Feeling inadequate is nothing new for me, I've been struggling with a severe inferiority complex for most of my life. But starting a career as a developer launched that shit into overdrive.
About 10 years ago, I started my college education as a developer. At first things were fine, I felt equal to my peers. It lasted about a day or two, until I saw a guy working on a website in notepad. Nothing too special of course, but back then as a guy whose scripting experience did not go much farther than modifying some .ini files, it blew my mind. It went downhill from there.
What followed were several stressful, yet strangely enjoyable, years in college where I constantly felt like I was lagging behind, even though my grades were acceptable. On top of the college stress, I had a number of setbacks, including the fallout of my parents' divorce, childhood pets, family and friends dying, little to no money coming in, and my mother being in a coma for a few weeks. She's fine now, thankfully.
Through hard work, a bit of luck, and a girlfriend who helped me to study, I managed to graduate college in 2012 and found a starter job as an Asp.Net developer.
My knowledge on the topic was limited, but it was a good learning experience, I had a good mentor and some great colleagues. To teach myself, I launched a programming tutorial channel. All in all, life was good. I had a steady income, a relationship that was already going for a few years, some good friends and I was learning a lot.
Then, 3 months in, I got diagnosed with cancer.
This ruined pretty much everything I had built up so far. I spent the next 6 months in a hospital, going through very rough chemo.
When I got back to working again, my previous Asp.Net position had (understandably) been given to another colleague. While I was grateful to the company that I could come back after such a long absence, the only position available was that of a junior database manager: not something I studied for, and not something I wanted to do each day either.
Because I was grateful for the company's support, I kept working there for another 12-18 months. It didn't go well. The number of times I got to do C# jobs can be counted on both hands, while new hires got the assignments I regularly begged my PM for.
On top of that, the stress and anxiety that going through cancer brings come AFTER the treatment. During the treatment, the only important things were surviving and spending my potentially last days as best as I could. The months back at work were spent mostly living in fear and having to come to terms with the fact that my own body had tried to kill me. It caused me severe anger issues, which in time cost me my relationship and some friendships.
Keeping up to date was hard in these times. I was not honing my developer skills and studying was not something I'd regularly do. 'Why spend all this time working if tomorrow the cancer might come back?'
After much soul-searching, I quit that job and pursued a career in consultancy. At first things went well: there was not a lot to do, so I could do a lot of self-study. A month went by like that. Then another. Then, about 4 months into the new job, there was still no work to be done. My motivation quickly dwindled.
To recoup the costs, the company had me do shit jobs which had little to nothing to do with coding, like creating labels or writing blogs. Zero coding experience required. Although I was getting a lot of self-study done, my amount of field experience remained pretty much zip.
My prayers asking for work must have been heard because suddenly the sales department started finding clients for me. Unfortunately, as salespeople do, they looked only at my theoretical years of experience, most of which were spent in a hospital or not doing .Net related tasks.
Ka-ching. Here's a developer with four years of experience. Have fun.
Those jobs never went well. My lack of experience was always an issue, no matter how many times I told the salespeople not to exaggerate it. In the end, I resigned there too.
After all the issues a consultancy job brings, I went out to find a job I actually wanted to do. I found a .Net job in an area with little traffic. I even warned them during my intake that my experience was limited, and I did my very best every day that I worked here.
It didn't help. I still feel like the worst developer on the team, even superseded by someone who took photography in college. Now on Monday, they want me to come in earlier for a talk.
Should I just quit being a developer? I really want to make this work, but it seems like every turn I take, every choice I make, stuff just won't improve. Any suggestions on how I can get out of this psychological hell?
MTP is utter garbage and belongs to the technological hall of shame.
MTP (Media Transfer Protocol, or, more accurately, MOST TERRIBLE PROTOCOL) sometimes spontaneously stops responding, causing Windows Explorer to show its green placebo progress bar inside the file path bar, which never reaches the end, and sometimes to whiningly show "(not responding)" with that white layer of mist fading in. It sometimes lists files' dates as 1970-01-01 (the Unix epoch), and sometimes shows folders' former names from before they were renamed, even after refreshing. I refer to them as "ghost folders". As is well known, large directories load extremely slowly over MTP. A directory listing with one thousand files can take well over a minute to load. On mass storage and FTP? Three seconds at most. Sometimes, new files are not even listed until the smartphone is rebooted!
Arguably, MTP "has" no bugs. It IS a bug. There is so much more wrong with it that it does not even fit into one post. Therefore it has to be expanded into the comments.
When moving files within an MTP device, MTP does not directly move the selected files, but creates a copy and then deletes the source file, causing both needless wear on the mobile device's flash memory and the loss of the files' original date and time attributes. Sometimes, the simple act of renaming a file causes Windows Explorer to stop responding until the MTP device is unplugged. It once actually unfroze after more than half an hour, during which I did something else in the meantime, but come on, who likes to wait that long? Thankfully, this has not happened to me on Linux file managers such as Nemo yet.
When moving files out using MTP, Windows Explorer does not move and delete each selected file individually, but only deletes the whole selection after finishing the transfer. This means that if the process crashes, no space has been freed on the MTP device (usually a smartphone), and one will have to carefully sort out a mess of duplicates. Linux file managers thankfully delete the source files individually.
Also, for each file transferred from an MTP device onto a mass storage device, Windows has the strange behaviour of briefly creating a file on the target device with the size of the entire selection. It does not actually write that amount of data for each file, since it couldn't do so in this short time, but the current file is listed with that size in Windows Explorer. You can test this by refreshing the target directory shortly after starting a file transfer of multiple selected files originating from an MTP device. For example, when copying or moving out 01.MP4 to 10.MP4, while 01.MP4 is being written, it is listed with the file size of all 01.MP4 to 10.MP4 combined, on the target device, and the file actually exists with that size on the file system for a brief moment. The same happens with each file of the selection. This means that the target device needs almost twice the free space as the selection of files on the source MTP device to be able to accept the incoming files, since the last file, 10.MP4 in this example, temporarily has the total size of 01.MP4 to 10.MP4. This strange behaviour has been on Windows since at least Windows 7, presumably since Microsoft implemented MTP, and has still not been changed. Perhaps the goal is to reserve space on the target device? However, it reserves far too much space.
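If you want to watch this happen without mashing F5, a quick polling loop over the target directory does the trick. A rough sketch; the target path is a hypothetical example:

```python
# Rough sketch: poll the target directory during an MTP transfer and print
# each file's reported size once a second. While file N of the selection is
# being written, it briefly shows up with the size of the entire selection.
import os
import time

TARGET = r"E:\incoming"  # hypothetical mass-storage target directory

while True:
    for name in sorted(os.listdir(TARGET)):
        path = os.path.join(TARGET, name)
        try:
            if os.path.isfile(path):
                print(f"{name}: {os.path.getsize(path):,} bytes")
        except OSError:
            continue  # file vanished between listing and stat
    print("---")
    time.sleep(1)
```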
When transferring from MTP to a UDF file system, it sometimes fails to transfer ZIP files and only copies the first few bytes: 208 or 74 bytes in my testing.
When transferring several thousand files, Windows Explorer also sometimes decides to quit and restart in the midst of the transfer. Also, I sometimes move files out by loading part of the directory listing in Windows Explorer and then hitting "Esc", because it would take too long to load the entire listing. It once actually assigned the wrong file names, which I noticed because file naming conflicts occurred where the source and target files with the same names had different sizes and time stamps. Both files were intact, but the target file had the name of a different file. You'd think they would figure something like this out after two decades, but no. On Linux, the MTP directory listing is only shown after it has loaded in its entirety. However, if the directory has too many files, it fails with a "libmtp: couldn't get object handles" error without listing anything.
Sometimes, a folder appears empty until refreshing one more time. Sometimes, copying a folder out causes a blank folder to be copied to the target. This is why on MTP, only a selection of files and never folders should be moved out, due to the risk of the folder being deleted without everything having been transferred completely.
(continued below)
I am building a website inspired by devRant, but I have never built a server network before, and as I'm still a student I have no industry experience to base a design on, so I was hoping for any advice on what is important / what I have fucked up in my plan.
The attached image is my currently planned design. Blue is for the main site, and is a cluster of app servers to handle any incoming requests.
Green is a subdomain to handle images, as I figured it would help with performance to have image uploads/downloads separated from the main webpage content. It also means I can keep cache servers and app servers separated.
Pink is internal stuff for logging and backups and probably some monitoring stuff too.
Purple is databases. One is dedicated for images, that way I can easily back them up or load them to a cache server, and the other is for normal user data and posts etc.
The brown proxy in the middle is sort of an internal proxy which the servers need to authenticate with before they can connect. That way I can just open the database to the internal proxy and deny all other requests, and then I can have as many app servers as I want: as long as they authenticate with the proxy, they can access the database without me changing any firewall rules. The other 2 proxies just distribute requests between the available servers in the pool.
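To make that concrete, here's a rough sketch of the handshake I have in mind: the app server sends a shared token as its first bytes, and only then does the proxy relay the connection to the database. The addresses, ports and token are all made-up placeholders, not a real deployment.

```python
# Rough sketch of the internal auth proxy: app servers must present a shared
# token before their connection is relayed to the database. Addresses, ports
# and the token are hypothetical placeholders.
import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 6432)  # where app servers connect
DB_ADDR = ("10.0.0.5", 5432)     # the real database, firewalled off
TOKEN = b"replace-me\n"          # shared secret, rotated out-of-band

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes (or fewer if the peer closes early)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            break
        buf += chunk
    return buf

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until either side closes."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        dst.close()

def handle(client: socket.socket) -> None:
    # The very first bytes from the app server must be the token; else drop.
    if recv_exact(client, len(TOKEN)) != TOKEN:
        client.close()
        return
    db = socket.create_connection(DB_ADDR)
    threading.Thread(target=pump, args=(db, client), daemon=True).start()
    pump(client, db)  # relay the other direction on this thread

with socket.create_server(LISTEN_ADDR) as server:
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

The firewall on the database host then only needs a single allow rule for the proxy's IP, and new app servers never touch it.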
Any advice would be greatly appreciated! Thanks in advance :D
In my previous company we developed a CRM web app for the company to use internally, and it was, in my humble opinion, really easy to make sense of. But for some freaking reason we kept getting calls whenever someone got an error. Our default response was always "send us an email and we will get back to you", as it was mostly stupid things they called about. For example, a customer might have to be status "terminated" before you could click button A; button A would then be disabled, and employees would call asking why. Apparently, people got annoyed by our response and went to management to get some guidelines as to when they could call the "development department" for help, so management sent out some guidelines as to when to call, when to write, and whatever else... The following was done without consulting us in any way. ANY WAY AT ALL! Because we all know management knows fucking best, and why bother asking the people that sit with it every day.
The guidelines said: if the background color of your error is red, the error is fatal and you can call the developers immediately; if it's orange, send an email and they will answer within 48 hours. LIKE WTF... Seriously??? We had basically just been using colors without much thought; ofc red was an error, etc., but red errors were not "OMG EVERYTHING IS BREAKING" alerts. So we decided to spend a couple of hours refactoring the colors of the flash errors, and after that we did not have many red alerts (none, yes, none whatsoever). We changed all the red ones to orange and introduced some new colors.

That worked for some time, around 6 months or so, but then people obviously started calling again. Like, why even bother... So we created a simple service desk and blocked all incoming calls to our phones that came from regular employees. We heard a lot of complaints about this from the employees, management was mad, and we had so many meetings with those top-paid management fuckers (who know everything way better than you and me) about how to handle it, because it took way too much of our time that people couldn't bother trying simple things, or making some sense of why a button is disabled. We ended up "winning" and were allowed to block calls until the employees had learned to use a freaking simple service desk instead. It's not fucking rocket science, okay, stop being a pain in the ass... And it actually fucking worked! Most relaxing time after people got the hang of using the service desk instead of calling. Life was good after that... <3
Oddly enough, I have simultaneously been less busy and more productive since working 66% remotely.
I find myself with more time that feels "wasted" or not busy, but my metrics show that I have more output, better results, and far nicer documentation. A bunch of us also sat down and did a bunch of coursework on really putting together a domain script library for one-click onboarding of new servers or new client setups. We spun up a bunch of new virtual environments that literally solved headaches that had existed for years and never got dealt with because of too many other tickets.
Some of our web clients (small to midsize businesses) freaked out at us because the business is moving away from doing maintenance on legacy web work. But it didn't matter. Rather than respond with "make them happy", the response was "well, we will get rid of them as clients. We need to focus our energy on the essential service sectors we support."
Hell, we even got an automated test that had apparently been broken since 2018 working again.
Granted, the incoming workload has slowed down. But it's still interesting to me to see that despite the slowdown, there isn't any concern; it's still paying the bills and we are getting rid of technical debt everywhere. Tbh, this has really been a good reality check.
I'm trying to improve my email setup once again and need your advice. My idea is as follows:
- 2-5 users
- 1 (sub)domain per user with a catchall
- users need to be able to also send from <any>@<subdomain>.<domain>
- costs up to 1€ per user (without domain)
- provider & server not hosted in Five Eyes and reasonably privacy friendly
- supports standard protocols (IMAP, SMTP)
- reliable
- does not depend on me to manage it daily/weekly
- Billing/Payment for all accounts/domains at once would be nice-to-have, but not necessary
I registered a domain with wint.global the other day and actually managed to get this to work, but unfortunately their hosting has been very underwhelming... the server was unreachable for a few minutes at a time yesterday, not just once, but roughly once an hour, and I'd really rather be able to actually receive (and retrieve) my mail. Also, their Plesk is quite slow. To be fair, for their price it's more like I pay for the domain and get the hosting for free, but I digress...
I am also considering self-hosting, but realistically that means running it on a VPS and keeping it secure and patched, which I'd rather outsource to a company that can afford someone to regularly read CVEs and keep things running. I don't really want to worry about maintaining servers when I'm on holiday, for example, and while an unpatched game server is an acceptable risk, I'd rather keep my email server in good shape.
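For what it's worth, the catchall part itself wouldn't be the hard bit of self-hosting. If I did run my own Postfix, it would boil down to virtual alias domains plus a catchall map, something like the sketch below. The domains and mailboxes are placeholders, and this says nothing about the real work (TLS, DKIM, spam, patching) that makes me want a provider in the first place.

```python
# Sketch of the Postfix side of "one subdomain per user with a catchall".
# Domains and mailbox names are placeholders. postconf/postmap/postfix are
# the standard Postfix admin tools; this would run as root on the mail host.
import subprocess

SETTINGS = {
    "virtual_alias_domains": "alice.example.org, bob.example.org",
    "virtual_alias_maps": "hash:/etc/postfix/virtual",
}

VIRTUAL_MAP = """\
# Catchall: anything@<subdomain> lands in that user's real mailbox.
@alice.example.org  alice@example.org
@bob.example.org    bob@example.org
"""

for key, value in SETTINGS.items():
    subprocess.run(["postconf", "-e", f"{key} = {value}"], check=True)

with open("/etc/postfix/virtual", "w") as f:
    f.write(VIRTUAL_MAP)

subprocess.run(["postmap", "/etc/postfix/virtual"], check=True)  # rebuild table
subprocess.run(["postfix", "reload"], check=True)
```

Sending as <any>@<subdomain> is then a question of how strictly the server ties authenticated logins to sender addresses (smtpd_sender_login_maps and friends), which is exactly the kind of knob I'd want a provider to expose.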
So in the end the question is: Which provider can fulfill my email dreams?
My research so far:
1. Tutanota doesn't offer standard protocols. I get their reasons, but that also makes me dependent on their service/software, which I wouldn't like. Multiple domains only on the business plans.
2. With Migadu I could easily hit their limits on incoming mail if someone signs up for too many newsletters, and I can't (and don't want to) micromanage that.
3. Strato: Unclear whether I can create mail addresses for subdomains. Also, I don't like the company, for multiple reasons. However, I do have access to a domain hosted there and could try...
4. united-domains: Unclear whether I can create mail addresses for subdomains.
5. posteo: No custom domains allowed.
I'm getting tired... *sigh*
Oh let the rant time begin…
So, in my previous post I mentioned this dev who has resigned and how I was going to see about a Snr. position.
Management is now scrambling to figure out what to do, as this dev managed the entire migration to AWS etc. I know servers, but I don't have much familiarity with AWS.
Anyways, so I finally get a 1:1 with my new line manager. I ask about the position, and he says they don't know what they're going to do yet. One option is hiring a new dev in India to offset the loss, with the same knowledge, even though the guy leaving is in the U.K. Bad idea, as the servers are in the U.K., so if we get downtime or a server crashes, we have no one in the U.K. to reset it or with access to the servers. India is very cagey about who gets access, which is annoying to say the least, even though we (three devs) in the U.K. are the principal engineering team. So they're looking at all options.
Anyways, we have a back and forth where we discuss some of the plans for the app, some of which we are nowhere near ready to even conceptualise, given that the app in its current state sucks (Ruby 2.2.6 and Rails 5, but not really). It needs major refactoring and a rewrite. One thing they want to do is multi-tenancy, which again, given the state of the app, is laughable.
So, as my manager is speaking, my head is screaming "this is just going to be a massive disaster". Then we go on to how he's seeing what everyone's strengths are, etc. And then we get onto the upgrade, and that he wants me to work on it.
Yes... the upgrade I've been trying to do for the past 4+ months, but I keep getting told to stop and keep getting pushed back.
I’ve been told we have devOps looking into restructuring the app, not possible as how the app is written, we have India trying to multi tenant again disaster incoming as they’ll end up rushing it. Legal are going to have a field day. Every time I say the issues are the fundamentals with the app, here’s how we can sort it. In one ear out the other basically there patching the ship even though it’s still leaking.
I have so many ideas and things I can do to improve the app: get it back to working order, fix the performance issues, the data issues and everything else. Brick wall.
So rants ensue where I basically say I would love to do the upgrade, but management gives me no time in the roadmap (we have no say in planning). At this point I'm just speaking to a brick wall.
After the meeting I have a chat with the BAs. We all have the same issues, so honestly it sucks; we end up ranting to each other for an hour.
I’m being under-utilised, being told do this, do that even though I’ve had two stabs but told to stop and pushed back, I know what benefits I can bring to the app with a refactoring, ideas and how to properly lead the team because honestly we’re working on an old legacy app, and management are clueless and there priorities are all wrong, the company is getting frustrated and it’s a sinking ship. They would rather patch issues without solving them and everything I say goes in one ear and out the other.
Frustrating is not the word.
I'm in a big fat fucking stinking rut, as in progress on this project has absolutely stagnated.
Gonna rubber-face your duck now **UNZIPS**, except I don't have zippers, as joggers are the one true way; fake Adidas til I fucking drop.
Brain damage aside, I understand both how I've laid out the data and what I'm supposed to do with it. We have a virtual machine, an array of instructions and arguments for a given process within it, and we need to walk this array and map values to registers.
We also need to spill values inside registers to the stack IF they are required at a further point within that block. This also isn't terribly complex: we simply look forward in the array and see if the value is an argument to any instruction that *needs* it to be loaded (i.e., within a register).
So this implies multiple iterations; we need to better understand how one particular value is used throughout an F before we can make a final decision on how many registers and how much stack space are actually needed for the whole block.
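In toy form, that forward scan is nothing fancy. A sketch with an invented instruction encoding, since the real one obviously isn't shown here:

```python
# Toy sketch of the forward scan: a block is a list of instructions, each
# defining at most one value and consuming some arguments. A value only needs
# a stack spill if something later in the block still wants it in a register.
# The encoding and all names here are invented for illustration.
from typing import NamedTuple, Optional

class Instr(NamedTuple):
    op: str
    dest: Optional[str]    # value this instruction defines, if any
    args: tuple[str, ...]  # values this instruction needs loaded

def used_later(block: list[Instr], index: int, value: str) -> bool:
    """Is `value` an argument to any instruction after `index`?"""
    return any(value in instr.args for instr in block[index + 1:])

def spill_set(block: list[Instr], num_regs: int) -> set[str]:
    """First pass over the block: collect values that can't stay resident
    because live values outnumber registers at some point."""
    live: list[str] = []
    spills: set[str] = set()
    for i, instr in enumerate(block):
        # Retire values nothing downstream will ask for again.
        live = [v for v in live if used_later(block, i - 1, v)]
        if instr.dest is not None:
            live.append(instr.dest)
        while len(live) > num_regs:
            victim = live.pop(0)  # naive eviction; real allocators pick smarter
            if used_later(block, i, victim):
                spills.add(victim)
    return spills
```

Multiple iterations over the same block, exactly as described: one pass to learn each value's lifetime, another (not shown) to actually hand out registers and stack slots.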
Here's where it gets tricky. If there's a call, we need to be certain that the symbol being invoked has already been fully processed. Besides the obvious fact that recursion fucks me up, there's another matter: say a private method gets invoked by another private method. We can take advantage of this, by which I mean, sacrilege incoming so put on this toga.
Looking at the output of C compilers, it would seem this is not done in practice, I would assume because it's a pain in the ass. But when you have the guarantee that F will only be called internally, as that's what "private" means, there are two ways it can go:
0. It's well below the 13-20 cycle threshold, so you inline the fucker. No surprises there.
1. It's a more involved affair, and invoked in more than one place, so you don't inline it. Code size matters.
Recursion and [1] are the big things holding me back. Not because it's too hard; like I said, this is kindergarten-level abstraction. I'm just slow and fanatical, which is how I prefer to spell "constant obsessive paranoid delusions". I can see the potential optimization I can pull here, so I'm stuck trying to figure it out.
The idea would be handling the register allocation and stack spill for an internal-internal (or deep internal; what we like to call a "guts" method) in synchronization with the *calling* processes. This is, fundamentally, violating all conventions, but so far under the hood that no one will notice.
Let me give you an example. If we were to pass some value to a function, expecting to mutate it and get a different value back, in a lot of cases it'd be stupid to make an implicit copy by using two registers, one for input and another for the output. Dude, it's one cycle. Multiply it by a million, say sixty times per second, for every time you __needlessly__ make a copy of a value that we've already stated is mutable.
Clearly unacceptable. This is, in the strictest sense, everywhere, in every single codebase. Premature micro-optimization is the root of all goodness; God is great and praiseworthy. So how do we go about it?
The answer is: I know and I don't know. By which I mean to say, I've done this very thing by hand. Assembly is fun. Now the issue is teaching a calculator how to do it. Not so fun.
There is a dependency chain between processes, as I believe I've kind of alluded to. I'm trying to make decisions on the side of the caller depending on the details of the callee, which is why recursion is rawdogging my soul. It's the same situation: inverting the direction of one or more links in the dependency chain, which makes no fucking sense.
And yet it does.
Brain, explain yourself.
How do *you* handle this without crashing?
Brain?
<<ME STEWPED; BEEP-BOOP>>
Alright then, that was a useless attempt at fuckery. Let's have a nap then, maybe it'll come to me in the morning. That's what I've been saying to myself for almost a month now.
Perhaps it is a hardcoded fuk.