Search - "slow network"
-
!rant
After over 20 years as a Software Engineer, Architect, and Manager, I want to pass along some unsolicited advice to junior developers either because I grew through it, or I've had to deal with developers who behaved poorly:
1) Your ego will hurt you FAR more than your junior coding skills. Nobody expects you to be the best early in your career, so don't act like you are.
2) Working independently is a must. It's okay to ask questions, but ask sparingly. Remember, mid- and senior-level guys need to focus just as much as you do, so before interrupting them, exhaust your resources (Google, Stack Overflow, books, etc.).
3) Working code != good code. You are an author. Write your code so that it can be read. Accept criticism that may seem trivial such as renaming a variable or method. If someone is suggesting it, it's because they didn't know what it did without further investigation.
4) Ask for peer reviews and LISTEN to the critique. Even after 20+ years, I send my code to more junior developers and often get good corrections sent back. (remember the ego thing from tip #1?) Even if they have no critiques for me, sometimes they will see a technique I used and learn from that. Peer reviews are win-win-win.
5) When in doubt, do NOT BS your way out. Refer to someone who knows, or offer to get back to them. Oftentimes, people other than engineers will take what you said as gospel. If that later turns out to be wrong, a bunch of people will have to get involved to clean up the expectations.
6) Slow down in order to speed up. Always start a task by thinking about the very high level use cases, then slowly work through your logic to achieve that. Rushing to complete, even for senior engineers, usually means less-than-ideal code that somebody will have to maintain.
7) Write documentation, always! Even if your company doesn't take documentation seriously, other engineers will remember how well documented your code is, and they will appreciate you for it/think of you next time that sweet job opens up.
8) Good code is important, but good impressions are better. I have code that is the most embarrassing crap ever still in production to this day. People don't think of me as "that shitty developer who wrote that ugly ass code that one time a decade ago." They think of me as "that developer who was fun to work with and busted his ass." Because of that, I've never been unemployed for more than a day. It's critical to have a good network and good references.
9) Don't shy away from the unknown. It's easy to hope somebody else picks up that task that you don't understand, but you won't learn it if they do. The daunting, unknown tasks are the most rewarding to complete (and trust me, other devs will notice.)
10) Learning is up to you. I can't tell you the number of engineers I passed on hiring because their answer to what they know about PHP7 was: "Nothing. I haven't learned it yet because my current company is still using PHP5." This is YOUR craft. It's not up to your employer to keep you relevant in the job market, it's up to YOU. You don't always need to be a pro at the latest and greatest, but at least read the changelog. Stay abreast of current technology, security threats, etc...
These are just a few quick tips from my experience. Others may chime in with theirs, and some may dispute mine. I wish you all fruitful careers!
-
Developer (master's degree, -bleeping- smart guy, no kidding) was bragging about how he made a piece of code 3x faster (with the usual jab that the original dev was incompetent) and spent nearly 6 weeks working on it (wrote his own parallel-foreach library because Microsoft's parallel library was "too slow").
I was the original dev, and he didn't know I had my own performance counters that broke down each part of the stack (database access, network I/O, and the code logic).
Average time was around 5ms (yes, milliseconds) and the worst case was around 10 seconds. His '3x improvement' was based on the worst test case, which improved by about a second. Showed our boss my graph (he laughed out loud, said 'WTF' and other curse words) and the dev hasn't spoken to me in weeks (I say 'hi' in the hall and he keeps walking).
Take that master's degree and high IQ and shove it.
-
This is kind of a horror story, with a happy ending. It contains a lot of gore images and some porn. Very long story.
TL;DR Network upgrade
Once upon a time, there were two companies, HA and HP, both owned by HC. Many years went by and the two companies worked alongside one another, but sometimes there was trouble, because they weren't sure who was supposed to bill the client for projects HA and HP had worked on together.
At HA there was an IT guy, an imbecile of sorts. He's very slow at doing his job and doesn't exactly understand what he's doing, nor security principles.
The IT guy at HA also did some IT work for HP from time to time when needed. But he was not in charge of the infrastructure for HP; that was the job of a developer who didn't really know what he was doing either.
Whenever a new server was set up at HP, the developer tried many solutions, until he landed on one, but he never removed the other tested solutions, and the config is scattered all around. And no documentation!!
Same goes with network, when something new was added, the old was never removed or reconfigured to something else.
One dark winter, a knight arrived at HP. He had many skills: networking, server management, development, design, and generally being a fucking awesome viking.
This genius would often try to cleanse the network and servers, and begged his boss to let him buy new equipment to replace the old, to no avail.
Whenever he would look in the server room, he would get shivers down his back.
(Image: https://i.bratteng.xyz/Ie9x3YC33C.j...)
One and a half years later, the powerful owners of HA, HP and HC decided it was finally time to merge HA and HP into HS. The knight thought this was his moment; he should ask the CEO if he could be in charge of migrating the network and do a complete overhaul so they could get 1Gb interwebz speeds.
The knight had to come up with a plan and some price estimates, as would the IT guy.
The IT guy proposed his solution: a Sonicwall gateway at 22 000 NOK, plus a 3rd party company to manage it for 3000 NOK/month.
"This is absurd", said the knight to the CEO and CXO, "I can come up with a better solution that is a complete upgrade. And it will be super easy to manage."
The CEO and CXO gave the knight a thumbs up. The race was on: "We're moving in 2 months, I've got to have the equipment by then, so I need a plan by the end of the week."
He roamed the wide internet, looked at many solutions, and ended up with going for Ubiquiti's Unifi series. Cheap, reliable and pretty nice to look at.
The CXO had mentioned the WiFi at HA was pretty bad, as there was WLAN for each meeting room, and one for the desks, so the phone would constantly jump between networks.
So the knight ended up with this solution:
2x Unifi Security Gateway Pro 4
2x Unifi 48port
1x Unifi 10G 16port
5x Unifi AP-AC-Lite
12x pairs of 10G unifi fibre modules
All with a price tag around the one Sonicwall for 22 000 NOK, not including patch cables, POE injectors and fibre cables.
The knight presented this to the CXO, who is not very fond of the IT guy, and the CXO thought this was a great solution.
But the IT guy had to have a say in this too, so he was sent the solution and had 2 weeks to dispute it.
Time went by, and the CXO started to get tired of waiting, so he called a meeting with the knight and the IT guy; this was the IT guy's chance to dispute the solution.
All he had to say was he was familiar with the Sonicwall solution, and having a 3rd party company managing it is great.
He was given another 2 weeks to dispute the solution, yet nothing happened.
The CXO gave the thumbs up, and the knight orders the equipment.
At this time, the knight asks the IT guy for access to the server room at HA, and a key (which would take 2 months to get sorted, because the IT guy is a slow imbecile).
The horrors, Oh the horrors, the knight had never seen anything like this before.
(Image: https://i.bratteng.xyz/HfptwEh9qT.j...)
(Image: https://i.bratteng.xyz/hmOE2ZuQuE.j...)
(Image: https://i.bratteng.xyz/4Flmkx6slQ.j...)
What are all these for? Why is there a fan duct-taped to one of the servers?
WHAT IS THIS!
Why are there cables tied in a knot?
WHY!
These are questions we will never know the answers to.
The knight needs access to the servers and the Sonicwall to see how this is configured.
After 1.5 months he gains access to the Sonicwall and one of the Xserves.
What the knight discovers baffles him.
All ports are open, sonicwall is basically in bridge mode and handing out public IPs to every device connected to it.
No VLANs, everything, just open...
-
A conversation with our network/system admin.
Me: Can I install Linux on my computer? Windows is slow and terrible.
Him: No, if you use anything but Windows in this company, you will be fired for bypassing our security protocols. It's written in your contract.
Me: *boots up my MacBook*
-
In a user-interface design meeting over a regulatory compliance implementation:
User: “We’ll need to input a city.”
Dev: “Should we validate that city against the state, zip code, and country?”
User: “You are going to make me enter all that data? Ugh…then make it a drop-down. I select the city and the state, zip code auto-fill. I don’t want to make a mistake typing any of that data in.”
Me: “I don’t think a drop-down of every city in the US is feasible.”
Manager: “Why? There cannot be that many. Drop-down is fine. What about the button? We have a few icons to choose from…”
Me: “Uh..yea…there are thousands of cities in the US. Way too much data for anyone to realistically scroll through.”
Dev: “They won’t have to scroll, I’ll filter the list when they start typing.”
Me: “That’s not really the issue and if they are typing the city anyway, just let them type it in.”
User: “What if I mistype Ch1cago? We could inadvertently be out of compliance. The system should never open the company up for federal lawsuits”
Me: “If we’re hiring individuals responsible for legal compliance who can’t spell Chicago, we should be sued by the federal government. We should validate the data the best we can, but it is ultimately your department’s responsibility for data accuracy.”
Manager: “Now now…it’s all our responsibility. What is wrong with a few thousand item drop-down?”
Me: “Um, memory, network bandwidth, database storage, who maintains this list of cities? A lot of time and resources could be saved by simply paying attention.”
Manager: “Memory? Well, memory is cheap. If the workstation needs more memory, we’ll add more”
Dev: “Creating a drop-down is easy and selecting thousands of rows from the database should be fast enough. If the selection is slow, I’ll put it in a thread.”
DBA: “Table won’t be that big and won’t take up much disk space. We’ll need to set up stored procedures, and data import jobs from somewhere to maintain the data. New cities, name changes, etc.”
Manager: “And if the network starts becoming too slow, we’ll have the Networking dept. open up the valves.”
Me: “Am I the only one seeing all the moving parts we’re introducing just to keep someone from misspelling ‘Chicago’? I’ll admit I’m wrong or maybe I’m not looking at the problem correctly. The point of redesigning the compliance system is to make it simpler, not more complex.”
Manager: “I’m missing the point to why we’re still talking about this. Decision has been made. Drop-down of all cities in the US. Moving on to the button’s icon ..”
Me: “Where is the list of cities going to come from?”
<few seconds of silence>
Dev: “Post office I guess.”
Me: “You guess?…OK…Who is going to manage this list of cities? The manager responsible for regulations?”
User: “Thousands of cities? Oh no …no one in our area has time for that. The system should do it”
Me: “OK, the system. That falls on the DBA. Are you going to be responsible for keeping the data accurate? What is going to audit the cities to make sure the names are properly named and associated with the correct state?”
DBA: “Uh..I don’t know…um…I can set up a job to run every night”
Me: “A job to do what? Validate the data against what?”
Manager: “Do you have a point? No one said it would be easy and all of those details can be answered later.”
Me: “Almost done, and this should be easy. How many cities do we currently have to maintain compliance?”
User: “Maybe 4 or 5. Not many. Regulations are mostly on a state level.”
Me: “When was the last time we created a new city compliance?”
User: “Maybe, 8 years ago. It was before I started.”
Me: “So we’re creating all this complexity for data that, realistically, probably won’t ever change?”
User: “Oh crap, you’re right. What the hell was I thinking…Scratch the drop-down idea. I doubt we’ll have a new city regulation anytime soon, and how hard is it to type in a city?”
Manager: “OK, are we done wasting everyone’s time on this? No drop-down of cities...next …Let’s get back to the button’s icon …”
Simplicity 1, complexity 0.
-
Fucking 20 hour days. Third one this week.
Been at work since 6am, it is now midnight. Spent the morning fixing bush league code mistakes from "expert" onshore developers, and explaining how-to-wipe-your-ass level concepts to some rude cunt who is absolutely going to take credit for my work after I leave.
Now I'm just waiting on this slow boat scp to finish, because the invalids the customer hired to manage their infra can't figure out the 3-minute exercise that is standing up a registry, so the container deployment process is fucking exporting multiple 500MB RedHat images as tars and shipping them across the cripplenet they call a datacenter. And of course the same badmins don't understand rsync and can't manage to get network throughput in a datacenter with a $300M annual budget over 128kbps. I guess that's fast for whatever jugaad horseshit network they're used to.
I've said it before, but it bears repeating: Fuck IBM. They're a cancer, and at this point I question the moral compass of anyone who works for them.
-
Manager: The site is loading too slow. How can we improve this?
Me: *f5 & look at the network log* the server is taking too long to respond some image requests. We could encode them into the Html to have them all delivered in a single request.
Manager: GTMetrix says we need to compress the images.
Me: *reads GTMetrix report* we would only get a 150KB improvement. It won't even be noticeable.
Manager: If the images take a long time to load, it means that they're too big, right?
Me: or the server is taking a long time to respond to our request for them, which is the case.
Manager: compress the images and upload them.
Me: *compresses the images and uploads them* done.
Manager: I don't see any improvement.
Me: if only there was someone who could have predicted such an outcome...
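(For the record, the inlining from my first reply is just base64 data URIs. A rough sketch of the idea in Python, with a made-up filename, not this project's actual code:)

    import base64, pathlib

    def img_to_data_uri(path):
        # embed the image bytes directly, so the page needs no extra request for it
        b64 = base64.b64encode(pathlib.Path(path).read_bytes()).decode("ascii")
        return f"data:image/png;base64,{b64}"  # adjust the MIME type per image

    # e.g. bake it straight into the page: f'<img src="{img_to_data_uri("logo.png")}">'

The trade-off: inlined images can't be cached or compressed separately, so it only pays off when the server's per-request latency dominates, as it did here.
-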
--- GitHub 24-hour outage post mortem ---
As many of you will remember, GitHub fell over earlier this month and cracked its head on the countertop on the way down. For more or less a full 24 hours the repo-wrangling behemoth had inconsistent data being presented to users, slow response times and failing requests during common user actions such as reporting issues and questioning your career choice in code reviews.
It's been revealed in a post-mortem of the incident (link at the end of the article) that DB replication was the root cause of the chaos, after a failing 100G network link was replaced during routine maintenance. I don't pretend to be a rockstar-ninja-wizard DBA, but after speaking with colleagues who went a shade whiter when the term "replication" was used - it's hard to predict where a design decision will bite back and leave you untangling the web of lies and misinformation reported by the databases for weeks if not months after everything's gone a tad sideways.
When the link was yanked out of the east coast DC undergoing maintenance - Github's "Orchestrator" software did exactly what it was meant to do; It hit the "ohshi" button and failed over to another DC that wasn't reporting any issues. The hitch in the master plan was that when connectivity came back up at the east coast DC, Orchestrator was unable to (un)fail-over back to the east coast DC due to each cluster containing data the other didn't have.
At this point it's reasonable to assume that pants were turning funny colours - Monitoring systems across the board started squealing, firing off messages to engineers demanding they rouse from the land of nod and snap back to reality, that was a bit more "on-fire" than usual. A quick call to Orchestrator's API returned a result set that only contained database servers from the west coast - none of the east coast servers had responded.
Come 11pm UTC (about 10 minutes after the initial pant re-colouring) engineers realised they were well and truly backed into a corner, the site was flipped into "Yellow" status and internal mechanisms for deployments were locked out. 5 minutes later an Incident Co-ordinator was dragged from their lair by the status change and almost immediately flipped the site into "Red" status, a move i can only hope was accompanied by all the lights going red and klaxons sounding.
Even more engineers were roused from their slumber to help with the recovery effort, By this point hair was turning grey in real time - The fail-over DB cluster had been processing user data for nearly 40 minutes, every second that passed made the inevitable untangling process exponentially more difficult. Not long after this Github made the call to pause webhooks and Github Pages builds in an attempt to prevent further data loss, causing disruption to those of us using Github as a way of kicking off our deployment processes (myself included, I had to SSH in and run a git pull myself like some kind of savage).
Glossing over several more "And then things were still broken" sections of the post mortem; Clever engineers with their heads screwed on the right way successfully executed what i can only imagine was a large, complex and risky plan to untangle the mess and restore functionality. Github was picked up off the kitchen floor and promptly placed in a comfy chair with a sweet tea to recover. The enormous backlog of webhooks and Pages builds was caught up with and everything was more or less back to normal.
It goes to show that even the best laid plan rarely survives first contact with the enemy, In this case a failing 100G network link somewhere inside an east coast data center.
Link to the post mortem: https://blog.github.com/2018-10-30-...
-
A quite normal Windows day:
Bios to Windows: "Go now! Get up!"
Windows to Bios: "Hold your horses, young circuit boards."
"I've got something weird on screen."
Windows' answer: "Ignore it first."
Hardware assistant to Windows: "The user is applying pressure. He wants me to identify this thing. Could be an ISDN card."
Windows: "Well, well."
Unknown ISDN card to all: "Will you please let me in?"
Network card to intruder: "You can't spread out here!"
Windows: "Quiet in the case! Or I'll cut support for both of you!"
Device Manager: "Offer compromise. The network card is allowed on Mondays, the ISDN card is on Tuesday."
Graphics card to Windows: "My driver retired yesterday. I'm crashing now."
Windows to graphics card: "When will you be back?"
Graphics card: "Well, not at first."
CD-Rom drive to Windows: "uh, I would have a new driver here..."
Windows: "What am I supposed to do with that?!"
Installation software to Windows: "Leave it, I'll handle that."
Windows: "That's nice to hear."
USB connection to interrupt management: "Alarm! Just been penetrated by a scanner cable. Request response."
Interrupt management: "Where are you coming from?"
USB connection: "I was in the computer right from the start. I'm joined by another colleague."
"You're not on my list." - "Say something."
Windows: "Hopefully there won't be another printer."
Graphics card: "The new driver twitches."
Windows: "We'll just have to get the old one out of retirement."
Uninstall program to new driver: "Go away."
Unwanted driver: "Fuck you."
Windows to Norton Utilities: "Kill him and his brood!"
Utilities to driver rests: "Sorry, we have to delete you."
Important system file: "Arrrrrrgghh!"
Windows on blue screen: "Geez, the Norton boys went over the top again."
Blue screen to user: "So, that's it for this week."
Excuse me for stealing your time
And I know it's way too long.
-
Dear Android:
I know I'm not on wifi. I get it. Sometimes data coverage isn't amazing or the network is congested. It's cool. You can just flash "no service" and I just won't try. Or even "3G" and I'll have some patience. I remember how slow 3G was. It's okay, I'll wait.
But fucking stop showing 4G LTE if you can't make a fucking GET request for a 2kb text file in less than 5 minutes! Fucking really? Don't fucking lie to me with your false hope bullshit, just tell me the truth and I'll probably sigh and say shit and put my phone away.
But fuck you and your progress bar eternally stuck in the middle. As if to say you're making progress! Wasting my time!
If you can't download a kilobyte in a 5min period, why even say I have data at all? What good does that do me?
-
Do you know what annoys the living fuck out of me?
Me: no...?
Me: may I tell?
Me: yes, please do!
Me: okay here we go:
Sites which use Google fonts or apis or ajax or other Google-hosted libraries.
It takes fucking ages to load those sites (if they load at all) since I block as much as possible from that cocksucking mass surveillance network.
Google, feel free to die in a fucking corner while getting an acid shower and being stripped of your skin layer by layer, as slow as possible to increase the pain and suffering.
-
Fucking cloud providers always trying to steal your shit and spy on your things, fucking prying eyes. That's why I've decided to go back to hosting my own private cloud from home. Running on some very energy efficient shit: dual-core Intel Atom CPU (so slow that it can't fucking run Windows normally), 16GB of RAM, because why the fuck not? And a 1TB 2.5" HDD, along with unlimited data and a 100/100 Mbit/s internet connection with a server response time less than 95ms, just to back up my shitty iPhone selfies and cat pics, host some very important files and regularly back up my contacts. This shit runs CentOS, Nginx, https, bitch! This platform is more trustworthy than your shitty Dropbox or whatever other shit they offer you. I can choose whether I back up my shit from the local network or over the internetz, costing me no more than 25€ annually (just to keep the machine on 24/7/365).
-
- Ran `brew install maven`
- Left my terminal to have some fun because of the slow network (fuck my ISP btw)
- Came back after ~15 mins to look at the terminal 'cause installation should've finished.
- Checked terminal.
- Realised I ran `brew isntall maven` instead.
FML
-
Worked with a European consulting company to integrate some shared business data (aka. calling a service).
VP of IT called an emergency meeting (IT managers, network admins) deeply concerned about the performance of the international web site since adding our services.
VP: “The partner’s site is much slower than ours. Only common piece that could cause that is your service.”
Me: “Um, their site is vastly different than ours. I don’t think we can compare their performance to ours.”
VP: “Performance is #1! I need your service fixed ASAP!”
Me: “OK, but what exactly is slow? How did you measure their site? The servers are in Germany”
VP: “I measured performance from my house last night.”
Me: “Did you use an application?”
VP: “<laughs> oh no, I was at home. When I opened the page, I counted one Mississippi, two Mississippi, three Mississippi, then the page displayed.”
Me: “Wow…um…OK…uh…how long does our page take to load?”
VP: “Two Mississippis”
Me: “Um…wow…OK…wow…uh, no, we don’t measure performance like that, but I’ll work with our partners and develop a performance benchmark to determine if the shared service is behaving differently.”
VP: “Whatever it is, the service is slow. Bill, what do you think is slowing down the service?”
NetworkAdmin-Bill: “The Atlantic Ocean?”
VP got up and left the meeting.
-
Our internship and placement tests start from the 30th, and this is the message we got from our coordinators.
😤
(We do have WiFi on campus and in labs, idk why they aren't letting us use that!!)
They are asking us to use DONGLES! Who the hell has those these days!
Uni's response to this: if you can't get your own internet source, then don't give the test. (Translation: we don't give a single flying fuck)
Got myself a JioFi, I hope it will work fine.
BUT!!!!!!!!
Often our phones catch no network in the labs! And if they do, the internet speed is slow.
The tests will go GREAT! 🙃
-
TL;DR
I accidentally surpassed(?) my user permissions, closed some of my classmates' browsers, and locked up a terminal for myself
In school we have 2 primary operating systems: Windows and Ubuntu. Windows is hell in general, but not as hellish as the Firefox installation on Ubuntu.
"Just loaded this page. Now wait half a minute so that I can render it"
"Woah, woah, woah. Slow there. You just made an input event. Give me those 5 seconds to compute what you just did"
Executing "top" or "htop" shows you a long list of firefox processes with a cpu usage of 99.9%, since the whole school shares that linux environment.
Anyway, one day it was way more severe than normal and I was forced to kill my Firefox instances. So I pressed CTRL+ALT+T for a terminal, waited 5 minutes until it accepted input, typed "killall firefox" at a delay of half a minute per character, and smashed that enter key.
At this very point in time I could hear confusion from every corner of the room. "What happened to firefox?"
Around 30% of the open browsers were abruptly stopped. I looked back at my screen and noticed I was logged out. I couldn't log in from that terminal for the rest of that day.
Our network admin, who happened to be there, since the server is just next door, said that this was just coincidence, but the timing was too perfect so I highly doubt that.
I felt like a real hackerman even if it was by accident :)
-
I was noticing a slow network and it was dropping some connections. So I booted up my old XP install with Java 6 so I could connect to the ASA 5505, and I see it's logging that the max of 10000 connections has been reached.
Fine, I reckon it's my colleague backing up his entire machine to Google Drive.
And indeed, when he shut it off, the connection count dropped.
I check back in the log, and I see there are 400-500 connections happening per second. I think WTF and check the source IPs. Lots of random IPs from Vietnam, all going to a Windows 2008 server using RDP.
(I didn’t setup our servers, so I didn’t know which server it was accessing)
Ask my other colleague, he told me it’s a windows server from an earlier project that’s not used anymore.
I RDP into it, see there are users logged in from around the world, and I immediately do a shutdown.
Would you look at that, connections per second dropped to about 50.
I guess that server isn’t going back online ever.
And I now need to ask management for a budget to update our network infrastructure, because the old ASA 5505 is begging me to die.
TL;DR: gg, previous employees didn't shut down old servers and left them open for the world to enjoy.
-
Got pulled out of bed at 6 am again this morning, our VMs were acting up again. Not booting, running extremely slow, high disk usage, etc.
This was the 6th time in as many weeks this happened. And always the marching orders were the same: find the bug, smash the bug, get it working with the least effort. I've dumped hundreds of hours maintaining this broken shitheap of a system, putting off other duties to keep mission critical stations running.
The culprits? Scummy consultants, Windows 10 1709, and Citrix Studio.
Xen Server performed well enough, likely due to its open source origins and Centos architecture.
Whelp. DasSeahawks was good and pissed. Nothing like getting rousted out of bed after a few scant hours rest for patching the same broken system.
DasSeahawks lost his temper. Things went flying. Exorcists were dispatched and promptly eaten.
Enough. No consultants, no analysts, and no experts touched it. No phone calls, no manuals, not even a google search. Just a very pissed admin and his minion declaring blitzkrieg.
We made our game plan, moved the users out, smoked our cigs, chugged monster, and queued a gnu-metal playlist on spotify.
Then we took a wrecking ball to the whole setup. User docs were saved, all else was rm -r * && shred && summon -u Poseidon -beast Land_Cracken.
Started at 3pm and finished just after midnight. Rebuilt all the vms with RDP, murdered citrix studio (and their bullshit licenses), completely blocked Windows 10 updates after 1607, and load balanced the network.
So what do we get when all the experts are fired? Stabbed lightning. VMs boot in less than 10 seconds, apps open instantly, and server resources are half their previous usage state. My VMs are now the fastest stations in our complex, as they should be.
Next to do: install our MxGPU, script up snapshots and heartbeat, destroy Windows ads/telemetry, and set up PDQ. Damn it's good to be good!
What I learned --> never allow testing to go to production, consultants will fuck up your shit for a buck, and vendors are half as reliable as consultants. Windows works great without Microsoft, thin clients are overpriced, and getting pissed gets things done.
This, my friends, is why admins are assholes.
-
I might lose my job this week
I'm part of a team of 2 tech people
We were hired as programmers. But over these past 10 months we've done everything from helpdesk to fixing network infrastructure; I set up a backup server for the company, started properly managing the company's passwords, and a host of other things not in my contract.
But my boss is changing the deadline again and she refuses to listen to anyone's concerns. She doesn't understand the complexity of what she wants, and since the best we've done so far can be considered a prototype at best, in my opinion she's going to be disappointed.
So at the next meeting my coworker and I are going to politely list our grievances, pointing out everything she's had us do at the same time and the impossible deadlines.
I've seen her pitch a fit for less, so I'm fully prepared to be fired in a rage, in which case I'll compile the documentation and information on what we've done and email it to her.
But I'm pretty sure she won't find anyone long term for the 40k salary she's expecting. Especially with how slow she is to do work herself. I was supposed to be on company health insurance since October 2020.
In a way I'm kinda relieved at the potential of being fired.
-
Buffer usage for a simple file operation in Python.
What the code "should" do, was using I think open or write a stream with a specific buffer size.
Buffer size should be specific, as it was a stream of a multiple gigabyte file over a direct interlink network connection.
Which should have speed things up tremendously, due to fewer syscalls and the machine having beefy resources for a large buffer.
So far the theory.
In practice, the devs made one very very very very very very very very stupid error.
They used dicts for configurations... With extremely bad naming.
configuration = {}
buffer_size = configuration.get("buffering", int(DEFAULT_BUFFERING))
You might immediately guess what has happened here.
DEFAULT_BUFFERING was set to true, evaluating to 1.
Yeah. Writing in 1-byte chunks results in an enormous speed deficit, as the system is basically bombing itself with syscalls every few nanoseconds.
Kinda obvious when you look at it in the raw pure form.
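A quick repro of the failure mode (sizes shrunk, and the original copy loop isn't shown above, so this just models a chunked read/write with that buffer size; BytesIO means no real syscalls, but the per-call overhead alone is telling):

    import io, time

    def copy(chunk_size):
        src = io.BytesIO(b"x" * (4 * 1024 * 1024))  # 4 MiB stand-in for the multi-GB stream
        dst = io.BytesIO()
        start = time.perf_counter()
        while chunk := src.read(chunk_size):        # chunk_size == 1 means ~4 million calls
            dst.write(chunk)
        return time.perf_counter() - start

    DEFAULT_BUFFERING = True
    configuration = {}                              # "buffering" was never set...
    buffer_size = configuration.get("buffering", int(DEFAULT_BUFFERING))  # ...so this is 1

    print(f"chunk size {buffer_size}: {copy(buffer_size):.2f}s")   # byte-by-byte
    print(f"chunk size 1 MiB: {copy(1024 * 1024):.4f}s")           # sane buffer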
But I guess you can imagine how configuration actually looked....
Wild. Pretty wild. It was the main dict, hard coded, I think 200-plus entries, and of course it looked like my toilet after a spicy food evening and eating too much....
What's even worse is that no one made the connection to the buffer size.
This simple and trivial thing entertained us for 2-3 weeks because *drumroll please* none of the devs tested with large files.
So as usual there was the deployment and then the "it suddenly, miraculously works totally slow, must be the admin's / IT's fault" game.
At some point it landed on my desk, as pretty much everyone who had to deal with it was confused and angry, for understandable reasons (blame game).
It then took me and the admins/devs a few days to track down, as we started at the entirely wrong end of the problem: the network...
So much joy for such a stupid thing.
-
ChatGPT implementation details leaked !!!
This explains everything:
- slow response typing
- occasional nonsensical answers and factual mistakes
- delays before answering
- sudden and random network errors
- "servers" over-capacity
- responses with an Indian accent
-
So for context, I'm doing an Apprenticeship in IT and naturally I've been put on help desk.
I've recently been given a phone on my desk since I'm trusted enough and know enough about our software that there's no risk to me accepting calls.
I get the standard ones: a number from a different country, poorly pronouncing a co-worker's name, asking if they can speak to them. I give my normal response, "I'll just check if they're in a meeting and I'll get back to you" (which they somehow always are) and ask if they would like to leave a message. They obviously don't, since they're usually scams.
Since Tuesday I've started getting calls from "BT Technical Support". I don't use BT. My company doesn't use BT. So, it's clearly a scam.
Yesterday, the same guy calls me up; Thomas, he says his name is. I go along with it for a while, agreeing that I've noticed our network has been slow, until the point where he asks me to install TeamViewer. I realise what he's going to do, so I ask him what the problem with our network is.
I hear him start to respond but he stops. He's got no clue what to say, so I say to him, "Thomas mate. I think our biggest problem with our BT network is that we don't have BT."
He puts the phone down.
So I ask you for help, lovely people of devRant.
I have a Windows 10 VM ready to go. I have a couple notepad files labelled as "Passwords" and "Bank Details". What else can I throw on there to make this guy think he's hit the jackpot without really causing too much damage?
Any ideas would be appreciated. <3
-
Customer complains that the deployed desktop app is slow at site x.
I check it out with users at site x, and indeed, it does have a delay when trying to connect to a share on a server.
Checks with users at site y and z, no issues.
After a bit of digging, the resolution of a DNS record is most likely the culprit.
Send the ticket to the customer network team to investigate.
Get it back after an hour.
"We have pinged the DNS name, and it responds fine, there must be a bug in the application".
Oh and also, I wrote this rant at work, in my head, with a lot more curse words involved.
-
TL;DR: Fuck Wordpress and their shitty “editor”.
Client told me the Wordpress editor was unusually slow on their site. I inspected the network traffic while fiddling around in the admin pages. What I found was an even worse nightmare than expected. Somehow the fucktard of an "engineer" decided to wire the spell check module up to parse all other text areas on the page - even the fucking image sources. The result is a browser sending a GET request to fetch the images from the server every time an author triggers a keyup event. Disabled the spell check and everything was back to budget-ineffective-feces-Wordpress normal.
-
My school blocks all UDP traffic going outside their network. 1 - 65535. Which in turn means that I had to switch to TCP for my VPN to connect. Now my VPN is slow as shit. -.-
-
Why the fuck do offline-enabled (music, video) apps load forever on a slow network but instantaneously in airplane mode? Is it so difficult to show cached content first and refresh only once downloaded? Yes, I mean you, Spotify, Amazon Music, Amazon Video and Audible!!! 🤬
-
So recently I installed Windows 7 on my thiccpad to get Hyperdimension Neptunia to run (yes 50GB wasted just to run a game)... And boy did I love the experience.
ThinkPads are business hardware, remember that. And it's been booting Debian rock solid since.. pretty much forever. There are no hardware issues here. Just saying.
With that out of the way I flashed Windows 7 Ultimate on a USB stick and attempted to boot it... Oh yay, first hurdle to overcome. It can't boot in UEFI mode. Move on Debian, you too shall boot in BIOS mode now! But okay, whatever right. So I set it to BIOS mode and shuffled Debian's partitions around a bit to be left with 3 partitions where Windows could stick in one more.
Installed, it asks for activation. Now my ThinkPad comes with a Windows 7 Pro license key, so fuck it let's just use that and Windows will be able to disable the features that are only available for Ultimate users, right? How convenient would that be, to have one ISO for all the half a dozen editions that each Windows release has? And have the system just disable (or since we're in the installer anyway, not install them in the first place) features depending on what key you used? Haha no, this is Microsoft! Developers developers developers DEVELOPERS!!! Oh and Zune, if anyone remembers that clusterfuck. Crackhead Microsoft.
But okay whatever, no activation then and I'll just fetch Windows Loader from my webserver afterwards to keygen my way through. Too bad you didn't accept that key Microsoft! Wouldn't that have been nice.
So finally booted into the installed system now, and behold finally we find something nice! Apparently Windows 7 Enterprise and Ultimate offer a native NFS driver. That's awesome! That way I don't have to adjust my file server at all. Just some fuckery with registry keys to get the UID and GID correct, but I'll forgive it for that. It's not exactly "native" to Windows after all. The fact that it even has a built-in driver for it is something I found pretty neat already.
Fast-forward a few hours and it's time to Re Boot.. drivers from Lenovo that required reboots and whatnot. Fire the system back up, and lo and behold the network drive doesn't mount anymore. I've read that this is apparently due to Windows (not always but often) mounting the network drive before the network comes up. Absolutely brilliant! Move out shitstaind, have you seen this beauty of an init Mr. Poet?
But fuck it we can mount that manually after every single boot.. you know, convenient like that. C O P E.
With it now manually mounted, let's watch a movie! I've recently seen Pyro's review on The Platform and I absolutely loved it. The movie itself is quite good too. Open the directory on my file server and.. oh. Windows.. you just put db.thumb on it and db.thumb:encryptable. I shit you not, with the colon and everything. I thought that file names couldn't contain colons Windows! I thought that was illegal in NTFS. Why you doing this in NFS mate? And "encryptable", am I already infected with ransomware??? If it wasn't for the fact that that could also be disabled with something as easy as a registry key, I would've thought I contracted ransomware!
Oh and sound to go with that video, let's pair up some Bluetooth headphones with that Bluetooth driver I installed earlier! Except.. haha nope. Apparently you don't get that either.
Right so let's just navigate the system in its Aero glory... Gonna need to flick the mouse for that. Except it's excruciatingly slow, even the fastest speed is slower than what I'm used to on Linux.. and it's jerky as hell (Linux doesn't have any of that at higher speed). But hey it can compensate for that! Except that slows down the mouse even more. And occasionally the mouse driver gets fucked up too. Wanna scroll on Telegram messages in a chat where you're admin? Well fuck you mate, let me select all these messages for you and auto scroll at supersonic speeds! And God forbid that you press delete with that admin access of yours. Oh maybe I'll do it for you, helpful OS I am!
And the most saddening part of it all? I'd argue that Windows 7 is the best operating system that Microsoft ever released. Yeah. That's the best they could come up with. But at least it plays le games!
-
Took dad's phone in hand to find out a contact number and realised that his phone has become quite slow to respond.
Asked him why that is; he didn't answer my question and said that he'd got a message that Airtel is giving away free 4G SIMs.
Bewildered, I asked him what that is supposed to have to do with his phone being slow, and he said, "If we take the 4G sim, then won't that make my phone faster? I have heard that 4G is the fastest network".
And my reaction was like...
-
Just learnt perfectly what the below joke means:
'I wanted to improve the world, but they wouldn't give me the source code'
I really don't understand why the world is full of obsolete processes that people fight against daily, when changing things ever so slightly could take the weight of the world off their shoulders. The same goes for my work: I work in finance, and we use a remote app built in Windows Forms (not XAML or WPF, the original Forms) that's insecure, slow, buggy, and crashes whenever you press ESC (yes, really). Even worse, I've offered to rewrite their whole network for nothing, just for the improvement to people's lives. And they say no! WELL FUCK YOU FOR BEING A PLAGUE ON THE FUCKING WORLD! Why do people insist on staying behind the times when the world could be such a beautiful place?!?
-
This was originally a reply to a rant about the excessive complexity of webdev.
The complexity in webdev is mostly necessary to deal with Javascript and the browser APIs, coupled with the general difficulty of the task at hand, namely to let the user interact with amounts of data far beyond network capacity. The solution isn't to reject progress but to pick your libraries wisely and manage your complexity with tools like type safe languages, unit tests and good architecture.
When webdev was simple, it was normal to have the user redownload the whole page everytime you wanted to change something. It was also normal to have the server query the database everytime a new user requested the same page even though nothing could have changed. It was an inefficient sloppy mess that only passed because we had nothing better and because most webpages were built by amateurs.
Today webpages are built like actual programs, with executables downloaded from a static file server and variable data obtained through an API that's preferably stateless by design and has a clever stateful cache. Client side caches are programmable and invalidations can be delivered through any of three widely supported server-client message protocols. It's not to look smart, it's engineering. Although 5G gets a lot of media coverage, most mobile traffic still flows through slow and expensive connections to devices with tiny batteries, and the only reason our ever increasing traffic doesn't break everything is the insanely sophisticated infrastructure we designed to make things as efficient as humanly possible.
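(As a tiny example of what "programmable client cache" means in practice: a conditional GET with an ETag lets unchanged data cost a few header bytes instead of a full download. A sketch; the library choice and single-URL design are mine, not the original commenter's:)

    import requests

    class CachedFetcher:
        """Conditional GET against one endpoint: send the last ETag, reuse the cache on 304."""
        def __init__(self, url):
            self.url, self.etag, self.cached = url, None, None

        def fetch(self):
            headers = {"If-None-Match": self.etag} if self.etag else {}
            resp = requests.get(self.url, headers=headers)
            if resp.status_code == 304:          # server: nothing changed since your ETag
                return self.cached
            self.etag = resp.headers.get("ETag")
            self.cached = resp.json()
            return self.cached
-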
I started a short term contract job that requires access to company online resources. The only problem is the office I'm working in has really bad internet. The connection speed at best is comparable to dial-up and at worst just non-existent. I tried tethering to my phone but this wasn't working either due to low signal. I mentioned this as an issue early in the week to the boss. Later in the week the boss asks how things are going, at the same time that the network is down. I tell him the same problem. He then tells me his computer is fast and he has internet, so I show him the 2 computers I have access to and how they are too slow/have no internet. He then tells me a bad workman blames his tools and he's not happy with me for having problems.
Don't even know what to say to that. I just told him this role wasn't working for me and clocked out.
-
In the Netherlands most trains have a wifi network called "WiFi in the train"; it's rather slow, so I always use my hotspot.
Guess who decided to name their hotspot 'Internet Cable in the train' 😛
-
You think NORMAL updates are slow?
I have to install these twice, once from the desktop then again at boot-time!
(gift desktop for my uncle, but we're almost out of bandwidth for the LAN network for the month... had to go sit outside (yes, OUTSIDE) a McD's so I could have both power and network access. It was 2° outside.)
-
Fucking shit-for-brains authors who think the digital world is a fantasy realm where everything can happen just to aid their story. Out of boredom I watched "Scorpion" today, a TV series about a group of geniuses who are a special-case task force.
They got a visitor from the government saying the servers from the federal reserve bank were encrypted with ransomware. I already twitched when they said the economic system would collapse if the servers were left inoperable for a few days. Then one guy got to his desk and "hacked" the fed network to check... he then tried to remove the malware but "it changed itself when observed". But they got the magical fingerprint of the device that uploaded it. In the end some non-programmers created the malware, but it is super fast and dangerous because it runs on a quantum computer, which makes it hyper fast and dangerous. They got to the quantum computer, which was a glowing cube inside another cube with lasers going into it, and they had to use mirrors to divert the lasers to slow down that quantum thingy. And be careful with that, otherwise it explodes. In the end the anti-malware battled the malware and won, all in a matter of minutes.
This is a multimillion Hollywood production. How can a show this abusive to computer science even air on television? Shit like this is the reason people still think the cyberworld is some unstable thing that can explode any second. It's not; it's an unstable thing that can break down any second. I remember "ghost in the wires" and people had surreal imaginations about the internet already. Shit like this is why people stay dumb and think everything can be done in seconds. If I ever encounter one of these idiots I'll tell him I have an app that can publish his browser history by taking a picture of his phone, and watch his reaction.
Time to shut down the TV and learn vim again.
-
I'm currently between jobs and have a few rants about my previous job (naturally). In retrospect, it's somewhat therapeutic to rant about the sheer brainfuckery that has taken place. Enjoy!
First, let me set the scene: legacy B2B web app made with LEMP stack and sencha ext.js 3 + 4 (don't ask) and a lot of madness. Let's call that app "Alpha".
Alpha is a self-made CMS built for typical ERP stuff. Yes, a self-made CMS: entities are containers, containers have types and fields and values. Like so many legacy PHP apps, it does not have a dedicated FE: the HTML is rendered on the server and then spewed out to the browser.
Easy right? Coding like it's 1999! But there was a twist: Because everything is basically a container, the HTML templates are saved in the DB. Along with the necessary JS and the CSS. And the translation variables. Why? Because fuck you! That's why. Who needs a git history anyways.
For some reason, Alpha was kinda slow.
There was also an editor that allowed you to modify templates (web, mail, pdf) on the fly in prod. Because templates contain repeating data (header/footer), one template could contain additional templates. Much confusion. You could change templates via migration (slow, boring) or just ctrl-c/ctrl-v that sucker (fast, much excitement).
Did I mention Alpha was slow?
On with the rant: e-mails! How do they work? No one knows. How do you send mails asynchronously in PHP? Witchcraft is the only possible answer to that riddle. Here is your enterprise™ solution:
1. create mail
2. insert mail into DB
3. WAIT UP TO 59 SECONDS FOR A FUCKING CRON TO SEND MAIL
Why? "Because that way, we can resend mails in case the network is down :)"
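The pattern itself is simple enough; a minimal sketch of such a cron-driven drain (in Python rather than the original PHP, with invented table and column names):

    import sqlite3, smtplib
    from email.message import EmailMessage

    def drain_mail_queue(db: sqlite3.Connection) -> None:
        rows = db.execute(
            "SELECT id, recipient, subject, body FROM mail_queue WHERE sent = 0"
        ).fetchall()
        with smtplib.SMTP("localhost") as smtp:
            for mail_id, recipient, subject, body in rows:
                msg = EmailMessage()
                msg["From"], msg["To"], msg["Subject"] = "noreply@example.com", recipient, subject
                msg.set_content(body)
                smtp.send_message(msg)
                # the row survives a crash before this point, hence the "we can resend" argument
                db.execute("UPDATE mail_queue SET sent = 1 WHERE id = ?", (mail_id,))
        db.commit()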
Same procedure for the SOAP-API (db-queue + cron). You read that right: all requests to various other systems are processed once a minute.
Alpha slow.
Alpha was only one of several systems. Imagine a bunch of monolithic PHP apps, interconnected via SOAP, REST and GraphQL like a goddamn intergalactic orgy. Imagine having to debug that cluster fuck.
Let's say there is a bad request. These things happen. No biggie. Remember the db-queue? Let's try to send the bad request a second time! And a third time! Still no luck? How odd. Let's create a specific file in a specific directory: a LOCK-file. Now, "the db-queue is on hold and no request gets processed :)"
Golly gee thanks Alpha.
Anyhow, did you know that MySQL has a join limit of 61 tables?
-
Not much of a SQL dev, still an apprentice who's only had basic run-throughs. A client needed a migration script run, which I was assigned. Took me a good 6/7 days to make; the transfer over a secure (and VERY slow) network took 2 hours. The infrastructure 3rd party took 2 days to clear and run it. After all that process, I then realised I'd left the fucking rollback in.
-
Anyone have any idea why release downloads from GitHub are slow af?!
It is not a network issue on my end.
Any ideas?
-
0. working PCs
0.0 technically they are working, but they are too slow to even open up eclipse
0.1 maybe this gets better at university
1 coding on paper
2 not using google + usbs + network drives for code sharing
3 if it might be applicable PLEASE ENABLE THE FUCKING CMD! OR LET ME USE ARCH ON MY STICK! C'MON
-
So I'm currently "assigned" a task in which I need to fix a slow query problem, which isn't a big deal. The biggest problem is that the original team of this project haven't got any means to develop things on your local machine. Looking at their docs and scripts, it seems like everything is deployed to a dev server. But whilst looking for details for this server, I found out that the network team have decommissioned the server!
So my dilemma right now is that I can't test any of my fixes anywhere besides staging, or possibly production! Inheriting projects is the bloody worst!
-
How about incompetent management? The company absolutely murders any possible increase in productivity. Laptop provided? Slow as balls. Takes minutes to log in. I get a Mac for mobile development and that's OK. SSD and adequate memory, but I'm primarily a .NET dev. Can't get on the network with a virtual machine. They won't install even a managed image. So I can't use databases because they're all AD authenticated. Got a virtual desktop environment and that sucks worse in performance than the laptop. Add the assault on local administration rights and the monitoring software that constantly thrashes memory and hard drive usage, and I'm about to quit over all this... All this was decided by a non-developer, without asking for our opinions. Yay large Enterprises
-
!rant
"It was a slow day at work and another intern suggested finding a way to send messages discretely between our computers on the local network.
To accomplish this I chose any true programmer’s favorite tools: excel sheets and VBA."
http://tristancalderbank.com/2016/...
-
🚨 EMERGENCY ALTRANT UPDATE 🚨
Release Notes:
- Fixed critical UI hangs when scrolling up a rant's comments on slow networks
- Fixed critical UI hangs when loading the profile screen on slow networks
Today, I discovered that there is a huge issue with UI responsiveness when the device is connected to a slow (or subpar) network connection. I deemed this absolutely unacceptable and not up to the standard I strive to achieve, and scrambled to make a fix. The fix is now *live* and available.
A week from now, I will expire the update I released yesterday (build 2070) in favor of this new one (build 2084). The build before yesterday's update (build 1607) is still scheduled to expire on Wednesday, 11/23/2022, 6 days from the upload of this post.
-
So I have a semi big project ongoing:
Because my modem+router combo sucks dick and gets buttfucked too much, I want to make my own router with PPPoE.
So I ordered an 8 euro used switch, 24 ports, for management, but our IPTV provider is sucking cock too! They use multicasting to send the fucking IPTV signal. And the switch is not VLAN-ready, so the network will be flooded.
But that's not the worst...
I don't know how to route VOIP over QoS correctly... So I just hope that part works!
I also ordered another network switch which is manageable + a goddamn networking closet. 80 bucks gone again! Wish me luck this works better...
BUT THATS NOT THE WORST BECAUSE NOW COMES THE HEAVY PART!
I wanted to use my old AMD Athlon X2 64bit and 4 gigs of RAM PC to be my powerhouse of the router. I plugged it into the wall, booted, screen error... Thought it might be the integrated graphics card... Unplugged my old one, inserted it.... AND IT WOR... NOPE NOPE IT DIDN'T NOW MY DAMN MOTHERBOARD IS FUCKING FRIED TO DUST BECAUSE OF THE GOD DAMN ... I DONT EVEN KNOW! AAAAA
So I thought I could temporarily use my Raspberry Pi 1 Model B, a good fellow with multiple years of usage! I plugged the SD card into my girl's laptop (I wasn't at home), and her goddamn internet downloaded that shitty Raspbian (sorry raspi, but your servers sometimes are very slow), and after the download I realized her GOD DAMN SD READER DIDN'T FUCKING WORK!!!
SO I GUESS I WILL WAIT!
-
!rant
Ever find something that's just faster than something else, but when you try to break it down and analyze it, you can't find out why?
PyPy.
I decided I'd test it with a typical discord-bot-style workload (decoding JSON, theoretically from an API, checking if it contains stuff, formatting and then returning it). It was... 1.73x the speed of CPython.
(Though, granted, this code is more network dependent than anything else.)
Mean +- std dev: [kitsu-python] 62.4 us +- 2.7 us -> [kitsu-pypy] 36.1 us +- 9.2 us: 1.73x faster (-42%)
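(That line is pyperf's compare_to output; the harness was presumably something along these lines. A sketch from memory with a made-up payload, run once under CPython and once under PyPy, each with -o to dump a JSON result:)

    import json
    import pyperf

    PAYLOAD = json.dumps({"data": [{"id": 1, "attributes": {"canonicalTitle": "t"}}]})

    def workload():
        # decode a JSON "API response", check it, format a string from it
        doc = json.loads(PAYLOAD)
        entry = doc["data"][0]
        if "attributes" in entry:
            return f"#{entry['id']}: {entry['attributes']['canonicalTitle']}"

    runner = pyperf.Runner()
    runner.bench_func("kitsu", workload)

(`python -m pyperf compare_to cpython.json pypy.json` then prints exactly that kind of "Mean +- std dev" line.)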
Me: Whoa, how?!
So, I proceed to write microbenches for every step. Except for the JSON decoding, every step of that 1.73x-faster workload was at least twice as slow (in one case, one hundred times slower) when tested individually.
The combination of them was faster. Huh.
By this point, I was all "sign me up!", but... asyncpg (the only sane PostgreSQL driver for Python IMO, using prepared statements by default and such) has some of its functionality written in C, for performance reasons. Not Cython, actual C that links to CPython. That means no PyPy support.
Okay then.
-
This is a true story from when I was working as an application technician a couple of years ago!
Before I started working there, they had a couple of incidents where people with less knowledge accidentally deleted stuff in prod databases, so only a handful of people got full access to them. I started working in this team, and one day I was asked by a co-worker with less privilege to run a snippet in one of the prod databases.
Logged in, ran the snippet, and the server STALLED for a couple of minutes! When the snippet finished I looked at the screen and saw the output "1724217 rows deleted". The fun part here was that we went for a coffee break right after this, and after a couple of minutes we started to hear people mumbling that the network was slow as f*ck, servers didn't respond, etc etc.
Well, I responded that I had run a snippet that deleted 1724217 rows in a table, and we ran back to our computers and started to work backwards to solve this.
The best part in this story is that:
* It was not my fault! Even though I was the one who executed it.
* The data was deleted from a live prod server that was not heavily used!
* I had asked for a lifeline for our team: we needed a predicted output from the developers so we could "match" the actual output against it after running things in prod!
Even though it was not my fault, this is the worst mess-up I have made working in IT over 10 years. O_o
-
Windows, why the fuck do you slow down to a crawl when I upload something to network storage?! I only upload at 100Mbit and you lose your shit opening another file explorer??!
-
I hate the elasticsearch backup api.
From beginning to end it's a painful experience.
I try to explain it, but I don't think I will be able to cover it all.
The core concept is:
- repository (storage for snapshots)
- snapshots (actual backup)
The first design flaw is that every backup in a repository is incremental. ES creates an incremental filesystem tree.
Some reasons why this is a bad idea:
- deletion of (older) backups is slow, as newer backups need to be checked for integrity
- you simply have to trust ES that it does the right thing (given the bugs it has... It seems like a very bad idea TM)
- you have no way of verifying snapshots
Workaround... create many repositories, as each new repository forces a full backup.........
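In practice that workaround can look like this (a sketch, assuming a filesystem repository and the requests library; host, paths and the naming scheme are made up):

import datetime
import requests

ES = "http://localhost:9200"
repo = "backup-{}".format(datetime.date.today().isoformat())

# a fresh repository holds no previous snapshots, so the first snapshot
# written into it is effectively a full backup
# ("location" must live under a path.repo entry in elasticsearch.yml)
resp = requests.put(
    "{}/_snapshot/{}".format(ES, repo),
    json={"type": "fs", "settings": {"location": repo}},
)
resp.raise_for_status()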
The second thing: ES scales. Many nodes / es instances form a cluster.
Usually backup APIs incorporate these in their design. ES does not.
If an index spans 12 nodes and you use network storage, then yes: up to 12 nodes will each open an NFS connection (for example) and start backing up.
It might not sound so bad with 12 nodes and one index...
But it gets pretty bad with hundreds of indexes and several dozen nodes...
And there is no real limiting in ES. You can plug a few holes, but all in all, if you don't plan your backups carefully, you'll get some pretty f*cked up network congestion.
So traffic shaping must be added manually. Yay...
The last thing is the API itself.
It's a... very fragile thing.
Especially in older ES releases, the documentation is like handing you a flex instead of toilet paper for a wipe.
Documentation != API != Reality.
Especially the fault handling has left me speechless more than once...
E.g.:
/_snapshot/storage/backup
gives you a state PARTIAL
/_snapshot/storage/backup/_status
gives you a state SUCCESS
Why? The first one is blocking and refers to the backup itself. The second one shouldn't be blocking and refers to the backup operation.
And yes: the backup operation's state is SUCCESS, while the backup's state might be PARTIAL (meaning no full backup was made, there were errors).
So we now have an additional API that we query, which then wraps the API of Elasticsearch, with all these shiny, scary workarounds like polling, since some APIs are blocking, which might lead to a gateway timeout...
Gateway timeout? Yes. Some operations can run a LONG time (multiple hours), and you don't want a ton of open connections hogging resources... so you let the loadbalancer kill them. Most operations simply keep running in ES in the background while the connection gets killed.
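Our wrapper boils down to something like this (a rough sketch using requests; host and names match the example above, and the state strings are the ones I have seen, not a guarantee):

import time
import requests

ES = "http://localhost:9200"

# trigger the snapshot without blocking, so there is no long-running
# connection for the loadbalancer to kill
requests.put(
    "{}/_snapshot/storage/backup?wait_for_completion=false".format(ES)
).raise_for_status()

# poll the operation status until it settles
while True:
    status = requests.get(
        "{}/_snapshot/storage/backup/_status".format(ES)).json()
    if status["snapshots"][0]["state"] not in ("STARTED", "IN_PROGRESS"):
        break
    time.sleep(30)

# then check the snapshot itself, because SUCCESS up there
# can still mean PARTIAL down here
info = requests.get("{}/_snapshot/storage/backup".format(ES)).json()
if info["snapshots"][0]["state"] != "SUCCESS":
    raise RuntimeError("backup ended up {}".format(info["snapshots"][0]["state"]))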
So much joy and fun, isn't it?
Now add the latest SMR scandal and a few faulty (as in SMR instead of CMR) HDDs in a hundred-terabyte ZFS pool and you'll get my frustration level.
PS: The cluster has several dozen terabytes and a lot of nodes. If you have good advice, you're welcome - but please think carefully about that fact.
I might have accidentally vaporized people who sent me links to solutions that don't work at large scale TM.2 -
Ideas I've had over the years that could pan out and be useful:
SMS-DB: Stands for SMS Data Burst. Used to allow those with low cell signal or no data plan to transfer data between a phone and some client via the standard SMS text space. Would be slow, but would act kinda like dial-up over SMS (mobile voice lines are compressed at all service levels, even LTE, so traditional dial-up wouldn't work!). I have a general idea of how the packets would be laid out, but that's about it so far...
everything2PNG: Allows one to transpose any file's data into a PNG at 3 bytes per pixel (full-color RGB), which allows for a "compression" of sorts (output at about 91-93% of the original size in preliminary tests) AND allows further, more efficient compression of the resulting file. (Plus... it's just kinda cool to see files transposed as PNGs.) I actually have a simple transposer that goes to PNG, but it can't go back yet. Large files (around 600 MB) use upwards of 4 GB of memory even with efficient paging and other optimizations via NumPy, so it's not *viable* yet, but it's coming along nicely.
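The core of the transposer is tiny. A minimal sketch, assuming Pillow, with naive padding and square dimensions (not the paging/NumPy-optimized version):

import math
from PIL import Image

def file_to_png(src, dst):
    data = open(src, "rb").read()
    pixels = math.ceil(len(data) / 3)      # 3 bytes -> 1 RGB pixel
    side = math.ceil(math.sqrt(pixels))    # smallest square that fits
    data += b"\x00" * (side * side * 3 - len(data))  # zero-pad the tail
    Image.frombytes("RGB", (side, side), data).save(dst)  # PNG is lossless

file_to_png("movie.mkv", "movie.png")

Going back mostly requires storing the original byte length somewhere (say, in the first few pixels or a PNG tEXt chunk) so the padding can be chopped off again.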
RPi-GPIO Interconnection Bus: A master/slave or round-robin method to allow Raspberry Pis to communicate using GPIO, which can help free up network bandwidth in RPi cloud-computing clusters. At most, this would use 4 bits for pushing to the GPIO "bus" and 4 bits for pulling from it. Usually at least 8 pins are unused, so either 3 or 4 pins for upload, 3 or 4 for download, and potentially 1 or 2 for commands and general non-data communication, etc. I made a version of this concept using round robin for a client, but it was horribly slow. (I also don't have distribution rights for the code, so I'm working from scratch.) Definitely doable - rough sketch of the upload half below.
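A very rough sketch, assuming the RPi.GPIO library; the pin numbers are arbitrary, and the sleep()-based pacing is exactly the kind of thing that made my round-robin version slow (a real design wants a clock or ready line):

import time
import RPi.GPIO as GPIO

TX_PINS = [17, 27, 22, 23]  # 4 bits for pushing to the "bus"

GPIO.setmode(GPIO.BCM)
for pin in TX_PINS:
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

def send_byte(value):
    # a byte goes out as two 4-bit nibbles, high nibble first
    for nibble in (value >> 4, value & 0x0F):
        for i, pin in enumerate(TX_PINS):
            GPIO.output(pin, bool((nibble >> i) & 1))
        time.sleep(0.001)  # crude pacing; the peer samples in between

for byte in b"hi":
    send_byte(byte)
GPIO.cleanup()
-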
Our office just moved to a new place and our internet line is not connected yet. In the meantime our network reaches the web through a mobile access point, and it's pretty slow.
One of our managers tried to download some large files without success, then decided to try again later from home, via TeamViewer onto her office PC.
We tried not to laugh.1 -
! Dev
I don't know much about biology, but from what I know, a virus itself is never treatable. In due course we might develop a medicine that teaches our immune system to fight it, like with polio, and once that medicine is available and the whole human race gets it, that's how this epidemic ends.
Until then, we will all need total social isolation at some point in time, as is being done now.
But here is my main question: what do we do until then? How will the economy survive? General stores, grocery markets, restaurants and fast food, clothing and many other industries predominantly involve direct interaction.
Shutting down and going online is not a solution either. Poor/small businesses can't afford it. Companies like Amazon, Domino's, etc. have huge delivery networks for e-shopping, but won't that soon be banned too?
It looks like our technology in robotics and drone delivery is moving too slowly to be effective in this situation. I am hoping technology will eventually be the answer to situations like this.
What are your thoughts about it?4 -
rant && what do you think?
so one of our ISPs (Orange Slovakia) had trouble with its service for like two days. Their DNS servers translated domains to IPs reaaally slowly, or not at all. So when I saw the DNS error in Chrome (yes, I use Chrome and not Quantum), I changed my DNS to Google DNS and ignored it.
Two days later, when the service was back up and running, the ISP went to the local media and made a statement: "we had a DDoS attack, no user data were harmed, blabla". That was when my BS radar went bananas... so somebody DDoSed your DNS server... for two fucking days straight... this is probably a lie, or they have really noob engineers (or both).
I'm not an expert on network services, routing, or servers, but how about turning that server/IP off and setting up a backup on a different IP? Anyone here with experience handling DDoS? What's the chance of this happening? I'm really curious23 -
git push # via a slow network
> ssh: connect to host github.com port 22:
> Connection timed out
> fatal: Could not read from remote repository.
>
> Please make sure you have the correct access rights
> and the repository exists.
yes, if I have to wait for your server to time out, I can waste even more time checking my permissions and verifying that the repo still exists - as if ...1
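A band-aid that at least fails fast instead of hanging (a sketch for ~/.ssh/config, assuming OpenSSH; the timeout values are arbitrary choices):

Host github.com
    ConnectTimeout 10
    # keep long pushes alive on a flaky link instead of dying silently
    ServerAliveInterval 15

Or sidestep port 22 entirely with git remote set-url origin https://github.com/<user>/<repo>.git - HTTPS often survives networks that eat SSH.
-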
Since the last update of the company antivirus, some things have become terribly slow. IE dev tools are slow by default, but now they are horribly slow; copying a 500 MB file over the network to my PC now takes 10 minutes, and the worst part is git. Git is unusably slow, so I can't use git-tfs anymore and have to go back to standard TFS.
And whenever any of this is happening, the same thing always sits at the top of the CPU usage: 'Trend Micro Unauthorized Change Prevention Service'.
Oh, how I hate that antivirus crap
I'm working on a Java EE webshop (uni assignment) that has to send and receive JMS messages to and from a server located inside the university network, so I have to use a VPN to run the shop. The problem is, the VPN is so goddamn slow that I get SocketTimeoutExceptions regularly! I have a gigabit connection, but with the VPN it slows down to ~1 Mbps for whatever reason, which is apparently too slow for Java.2
-
I'm trying to waste time reading rants, but don't get half of them because our work wifi network is too slow to load pictures
fml1 -
So I just installed Elementary OS Loki on my older desktop, and on it the wifi is incredibly slow - like 30 seconds to load Google's home page. It also randomly stops working and reports no network connection. When this system was on Windows I would average ~50 Mb/s down; after changing to Linux I'm lucky to maintain 2 Mb/s. I've been googling for hours and nothing I try seems to work - any Linux pros here able to give me some suggestions? The network card in the PC is an Atheros one; it supports a/b/g/n and Bluetooth, and I'm currently using the desktop with a Bluetooth mouse/keyboard. (None of the hardware/setup has changed since using Windows)2
-
Another part of the messy network gone.
Caching fucked me hard....
Isn't it just lovely that nowadays you nearly need to wipe a machine to stop it from claiming stale data....
And thanks to DNS, HAProxy, service names, ... I think I now know why the curse of Babel is so powerful.
When you have to think for 2 minutes to make sure you've set the zones right, because otherwise you need to ProxyJump with SSH through more tunnels than imaginable (VPN/HO) to fix possible caching on several DNS servers... you'll realize that it's Russian roulette with too many bullets. :(
And if a monitoring service asks another monitoring service for status information, which asks the first monitoring service, which then asks the second monitoring service because you were too late...
you'll get very funky monitoring statistics.
Too slow; had to nuke it (I mismatched a DNS name - the second monitoring service should have been a service node).
I think I've had more near death scenarios in the last 2 weeks than I like.
Hopefully I'll never have to do that again.
(Splitting and reordering a few dozen VLANs, assigning proper DNS names, loadbalancer migration....) -
I'm on vacation and we wanted VPN connections back to Sweden to access some sites that are only available in Sweden.
So I set up a Raspberry Pi as an access point using hostapd, with OpenVPN running from there.
So we have two wireless network options where we are: fast but unsecured, or slow and from Sweden - just choose based on what you are going to do on the device you connect with (minimal config sketch below).
Tablets, computers, phones and so on.
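For reference, the slow-but-Swedish side is only a few lines of hostapd.conf (a sketch; SSID, channel and passphrase are made up, and routing clients into the OpenVPN tunnel is separate NAT/iptables work):

interface=wlan0
ssid=slow-and-from-sweden
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_pairwise=CCMP
wpa_passphrase=changeme-please
-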
Relatively often, the OpenLDAP server (slapd) behaves a bit strangely.
While it is a little bit slow (I didn't do a benchmark, but Active Directory seemed a bit faster - though it has other quirks and is Windows-only), it's fine with a small number of users. slapd is the reference implementation of the LDAP protocol, and I didn't expect it to be much better.
Some years ago slapd migrated to a different configuration style: instead of a configuration file and a required restart after every change, it now uses an additional database for "live" configuration, which also allows deploying multiple servers with the same configuration (I guess this is nice for larger setups). Much of the documentation online does not reflect the new configuration style, so using it requires some knowledge of LDAP itself.
It is possible to revert to the old file-based method, but that possibility might be removed in any future version - and restarts may take a little bit longer. So I guess, don't do that?
To access the configuration over the network (editing the configuration only via the command line on the server is sometimes a bit... annoying), an additional internal user has to be created in the configuration database (while working on the local machine as root, you are authenticated over a Unix domain socket). I mean, I had to create an administration user during the installation of the service, but apparently that is only for the main database...
The password in the configuration can be hashed as usual - but strangely it only accepts hashes of some passwords (a hashed version of "123456" is accepted, but not hashes of different passwords - I mean, what the...?), so I have to use a single plaintext password... (secure password hashing works fine for normal user and admin accounts).
But even worse are the default logging options: by default (at least on Debian) the log level is set to DEBUG. Additionally, if slapd detects optimization opportunities, it writes them to the logs - at least once per connection, if not per query. Together with an application that made a lot of connections and queries (this was not intended and got fixed later), THIS RESULTED IN 32 GB OF LOG FILES IN ≤ 24 HOURS! Enough to fill up the disk and crash other services (lessons learned: add more monitoring, monitoring, and monitoring, and /var/log should be a separate partition). I mean, logging optimization hints is certainly nice - it runs faster now (again, I did not do any benchmarks) - but the verbosity was way too high.
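At least the log level is quick to tame once you know the cn=config style. A sketch (olcLogLevel "stats" is the usual sane level; applied as root over the Unix domain socket mentioned above):

# loglevel.ldif
dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: stats

# apply with:
# ldapmodify -Y EXTERNAL -H ldapi:/// -f loglevel.ldif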
The worst part is the error messages: when entering a query string with a syntax error, slapd returns error code 80 without any additional text - and the documentation reveals the SO MUCH BETTER meaning: "other error". THIS IS SO HELPFUL... In the end I was able to find out why the input was rejected, but in my experience most error messages are a little bit more precise.2 -
Anyone else using the ownCloud client on Linux? The client is pretty buggy.
I recently got a new laptop and now I'm downloading all my files from my ownCloud instance. Might take a while though, with my slow (free) home network.2
Am I the only one having bad mojo with Android 7? It loses the 4G network all the time, restarts itself, is generally slow... It's like they follow the Microsoft release model: one version stable (4), the next one buggy and bad (5), then nice again (6), and now bad (7)...
-
If I were a network administrator, I would troll everyone by creating 2 networks, one called x-fast and another called x-slow. Obviously I would reverse them, so the one called slow is actually fast, and the fast one is slow. That way I'd benefit from a private network with no one on it, because it's "slow" :-)
Evil me or genius?1 -
For progress and success all you have to do is be the right person in the... holy shit, I missed it! Damn lag!
-
When you just want to play a game on your computer, but first Steam needs to update, and then the game you want to play needs updating. Come on, I just want to play.