Search - "bandwidth"
-
A few months ago, I ate so many MBs (just 300+ GBs) in a month that my ISP blocked my connection and sent a worker to check if I was sharing my internet connection with neighbours etc. { They say UNLIMITED downloads when selling packages }
I was so pissed that after restoration I wrote an autorun-on-startup PowerShell script which keeps downloading a 100MB file forever, just to eat bandwidth.
This month my downloads crossed a TB.
I feel like I've pissed in the ISP's face just to show that if I'm not eating TBs every month, it doesn't mean I can't do it.
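If you want to picture the script: a minimal Python equivalent of that bandwidth-eater loop (the original was PowerShell; the download URL here is a placeholder assumption):

```python
import time
import urllib.request

# Hypothetical large test file -- any static file from a speed-test
# mirror would do; this URL is an assumption, not from the rant.
URL = "https://speed.hetzner.de/100MB.bin"

while True:
    try:
        # Stream the file in 1 MiB chunks and throw the bytes away;
        # the only goal is to burn downstream bandwidth.
        with urllib.request.urlopen(URL) as resp:
            while resp.read(1 << 20):
                pass
    except OSError:
        time.sleep(30)  # back off briefly if the connection drops
```

-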
Dear public transport,
please don't advertise your fucking free wifi if you don't give enough bandwidth to even load freaking Google.
Yours sincerely,
Me
-
So my landlord just came up and asked why I'm using so much bandwidth (they've just had a new line installed, so they're monitoring it like hell for some reason). So we had a chat, I told him I'm a Web Developer so I'm uploading and downloading a lot, and bear in mind this is student housing. He offered to install a wired connection in my flat only, so I'll have a decent and stable connection when all the other students come back in September.
This is the first time in my life I feel like I'm not paying enough rent!
-
!rant
I was in a hostel in my high school days. I was studying commerce back then. Hostel days were the first time I ever used Wi-Fi. But it sucked big time. I barely got 5-10 Kbps. It was mainly due to overcrowding and download accelerators.
So, I decided to do something about it. After doing some research, I discovered NetCut. And it did help me for my purposes to some extent. But it wasn't enough. I soon discovered that my floor shared the bandwidth with another floor in the hostel, and the only way I could get the 1Mbps was to go to that floor and use NetCut. That was riskier, and I was lazy enough to convince myself to look for a better solution rather than go to that floor every time I wanted to download something.
My hostel used Netgear's routers back then. I decided to find some way to get into those. I tried the default "admin" and "password", but my hostel's network admin knew better than that. I didn't give up. After searching all night (literally) for how to get into that router, I stumbled upon a blog that gave brief info about the "telnetenable" utility, which could be used to access the router from the command line. At that time, I knew nothing about telnet or the command line. In the beginning I just couldn't get it to work. Then I figured out I had to enable telnet from Windows settings. I did that and got a step further. I was now able to get into the router's shell by using the default superuser login. But I didn't know how to get the web access credentials from there. After googling some and a bit of trial and error, I got comfortable using the cd, ls and cat commands. I hoped that some file in the router would have the web access credentials stored in cleartext. I spent the next hour just using cat to read every file. Luckily, I stumbled upon NVRAM, which is used to store all the config details of the router. I went through all the output from cat (it was a lot of output) and discovered http_user and http_passwd. I tried those in the web interface, and when it worked, my happiness knew no bounds. I literally ran across the floor screaming and shouting.
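For the curious, that kind of session is easy to script. A rough Python sketch, with a hypothetical router address, example credentials, and an assumed `nvram show` command (the telnetenable unlock itself is a separate step the blog covered):

```python
import telnetlib  # note: removed from the Python stdlib in 3.13

HOST = "192.168.1.1"                  # hypothetical router address
USER, PASS = b"Gearguy", b"Geardog"   # example Netgear telnet defaults (assumption)

tn = telnetlib.Telnet(HOST, 23, timeout=10)
tn.read_until(b"login: ")
tn.write(USER + b"\n")
tn.read_until(b"Password: ")
tn.write(PASS + b"\n")

# Dump the router's NVRAM config -- the same hunt the rant did by
# hand with cd/ls/cat -- and pick out the web UI credentials.
tn.write(b"nvram show\n")
tn.write(b"exit\n")
output = tn.read_all().decode("ascii", errors="replace")

for line in output.splitlines():
    if line.startswith(("http_user", "http_passwd")):
        print(line)
```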
I knew nothing about hiding my tracks and soon my hostel’s admin found out I was tampering with the router's settings. But I was more than happy to share my discovery with him.
This experience planted a seed inside me and I went on to become the admin next year and eventually switch careers.
So that’s the story of how I met bash.
Thanks for reading!
-
In a user-interface design meeting over a regulatory compliance implementation:
User: “We’ll need to input a city.”
Dev: “Should we validate that city against the state, zip code, and country?”
User: “You are going to make me enter all that data? Ugh…then make it a drop-down. I select the city and the state, zip code auto-fill. I don’t want to make a mistake typing any of that data in.”
Me: “I don’t think a drop-down of every city in the US is feasible.”
Manager: “Why? There cannot be that many. Drop-down is fine. What about the button? We have a few icons to choose from…”
Me: “Uh..yea…there are thousands of cities in the US. Way too much data for anyone to realistically scroll through”
Dev: “They won’t have to scroll, I’ll filter the list when they start typing.”
Me: “That’s not really the issue and if they are typing the city anyway, just let them type it in.”
User: “What if I mistype Ch1cago? We could inadvertently be out of compliance. The system should never open the company up for federal lawsuits”
Me: “If we’re hiring individuals responsible for legal compliance who can’t spell Chicago, we should be sued by the federal government. We should validate the data the best we can, but it is ultimately your department’s responsibility for data accuracy.”
Manager: “Now now…it’s all our responsibility. What is wrong with a few thousand item drop-down?”
Me: “Um, memory, network bandwidth, database storage, who maintains this list of cities? A lot of time and resources could be saved by simply paying attention.”
Manager: “Memory? Well, memory is cheap. If the workstation needs more memory, we’ll add more”
Dev: “Creating a drop-down is easy and selecting thousands of rows from the database should be fast enough. If the selection is slow, I’ll put it in a thread.”
DBA: “Table won’t be that big and won’t take up much disk space. We’ll need to set up stored procedures, and data import jobs from somewhere to maintain the data. New cities, name changes, etc.”
Manager: “And if the network starts becoming too slow, we’ll have the Networking dept. open up the valves.”
Me: “Am I the only one seeing all the moving parts we’re introducing just to keep someone from misspelling ‘Chicago’? I’ll admit I’m wrong or maybe I’m not looking at the problem correctly. The point of redesigning the compliance system is to make it simpler, not more complex.”
Manager: “I’m missing the point to why we’re still talking about this. Decision has been made. Drop-down of all cities in the US. Moving on to the button’s icon ..”
Me: “Where is the list of cities going to come from?”
<few seconds of silence>
Dev: “Post office I guess.”
Me: “You guess?…OK…Who is going to manage this list of cities? The manager responsible for regulations?”
User: “Thousands of cities? Oh no …no one in our area has time for that. The system should do it”
Me: “OK, the system. That falls on the DBA. Are you going to be responsible for keeping the data accurate? What is going to audit the cities to make sure the names are properly named and associated with the correct state?”
DBA: “Uh..I don’t know…um…I can set up a job to run every night”
Me: “A job to do what? Validate the data against what?”
Manager: “Do you have a point? No one said it would be easy and all of those details can be answered later.”
Me: “Almost done, and this should be easy. How many cities do we currently have to maintain compliance?”
User: “Maybe 4 or 5. Not many. Regulations are mostly on a state level.”
Me: “When was the last time we created a new city compliance?”
User: “Maybe, 8 years ago. It was before I started.”
Me: “So we’re creating all this complexity for data that, realistically, probably won’t ever change?”
User: “Oh crap, you’re right. What the hell was I thinking…Scratch the drop-down idea. I doubt we’ll have a new city regulation anytime soon, and how hard is it to type in a city?”
Manager: “OK, are we done wasting everyone’s time on this? No drop-down of cities...next …Let’s get back to the button’s icon …”
Simplicity 1, complexity 0.
-
Worst dev team failure I've experienced?
One of several.
Around 2012, a team of devs were tasked to convert an ASPX service to WCF that had one responsibility: returning product data (description, price, availability, etc...simple stuff)
No complex searching, just pass the ID, you get the response.
I was the original developer of the ASPX service, whose API took an XML request and returned an XML response. The 'powers-that-be' decided anything XML was evil and had to be purged from the planet. If this thought bubble popped up over your head "Wait a sec...doesn't WCF transmit everything via SOAP, which is XML?", yes, but in their minds SOAP wasn't XML. That's not the worst WTF of this story.
The team, 3 developers, 2 DBAs, network administrators, several web developers, worked on the conversion for about 9 months using the Waterfall method (3~5 months was mostly in meetings and very basic prototyping) and using a test-first approach (their own flavor of TDD). The 'go live' day was to occur at 3:00AM, and it was mandatory that nearly the entire department be on-site (including the department VP) and available to help troubleshoot any system issues.
3:00AM - Teams start their deployments
3:05AM - Thousands and thousands of errors from all kinds of sources (web exceptions, database exceptions, server exceptions, etc), site goes down, teams roll everything back.
3:30AM - The primary developer remembered he made a last minute change to a stored procedure parameter that hadn't been pushed to production, which caused a side-effect across several layers of their stack.
4:00AM - The developer found his bug, but the manager decided it would be better if everyone went home and get a fresh look at the problem at 8:00AM (yes, he expected everyone to be back in the office at 8:00AM).
About a month later, the team scheduled another 3:00AM deployment (VP was present again), confident that introducing mocking into their testing pipeline would fix any database related errors.
3:00AM - Team starts their deployments.
3:30AM - No major errors, things seem to be going well. High fives, cheers..manager tells everyone to head home.
3:35AM - Site crashes, like white page, no response from the servers kind of crash. Resetting IIS on the servers works, but only for around 10 minutes or so.
4:00AM - Team rolls back, manager is clearly pissed at this point, "Nobody is going fucking home until we figure this out!!"
6:00AM - Diagnostics found the WCF client was causing the server to run out of resources, with a mix of clogging up server bandwidth, and a sprinkle of N+1 scaling problem. Manager lets everyone go home, but be back in the office at 8:00AM to develop a plan so this *never* happens again.
About 2 months later, a 'real' development+integration environment (previously, any+all integration tests were on the developer's machine) and the team scheduled a 6:00AM deployment, but at a much, much smaller scale with just the 3 development team members.
Why? Because the manager 'froze' changes to the ASPX service, the web team still needed various enhancements, so they bypassed the service (not using the ASPX service at all) and wrote their own SQL scripts that hit the database directly and utilized AppFabric/Velocity caching to allow the site to scale. There were only a couple client application using the ASPX service that needed to be converted, so deploying at 6:00AM gave everyone a couple of hours before users got into the office. Service deployed, worked like a champ.
A week later the VP schedules a celebration for the successful migration to WCF. Pizza, cake, the works. The 3 team members received awards (and an envelope, which probably equaled some $$$) and the entire team received a custom Benchmade pocket knife to remember this project's success. Myself and several others just stared at each other, not knowing what to say.
Later, my manager pulls several of us into a conference room
Me: "What the hell? This is one of the biggest failures I've been apart of. We got rewarded for thousands and thousands of dollars of wasted time."
<others expressed the same and expletive sediments>
Mgr: "I know..I know...but that's the story we have to stick with. If the company realizes what a fucking mess this is, we could all be fired."
Me: "What?!! All of us?!"
Mgr: "Well, shit rolls downhill. Dept-Mgr-John is ready to fire anyone he felt could make him look bad, which is why I pulled you guys in here. The other sheep out there will go along with anything he says and more than happy to throw you under the bus. Keep your head down until this blows over. Say nothing."11 -
Holy shit, I love this, that's fucking amazing. It's basically a modern terminal browser that actually has HTML5, CSS support, etc., not like elinks. Especially nice inside tmux for sure.
"Browsh is a fully-modern text-based browser. It renders anything that a modern browser can; HTML5, CSS3, JS, video and even WebGL. Its main purpose is to be run on a remote server and accessed via SSH/Mosh or the in-browser HTML service in order to significantly reduce bandwidth and thus both increase browsing speeds and decrease bandwidth costs."
https://www.brow.sh/
demo: https://youtube.com/watch/...
https://motherboard.vice.com/en_us/...
-
My new phone will probs arrive tomorrow, and my mates and I are going on a vacation to Germany next week.
Currently using nearly all bandwidth/ram of one of my dedicated servers to download maps for offline use (OpenStreetMaps) and convert them to formats OsmAnd can use.
No need for Google Maps ❤️
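In case anyone wants to replicate that map-hoarding setup, a rough Python sketch (the Geofabrik region paths are illustrative, and the OsmAndMapCreator invocation is a guess; check both tools' docs before relying on either):

```python
import pathlib
import subprocess
import urllib.request

# Geofabrik hosts per-region OpenStreetMap extracts; region list is illustrative.
REGIONS = ["europe/germany/bayern", "europe/germany/berlin"]

for region in REGIONS:
    name = region.rsplit("/", 1)[-1]
    pbf = pathlib.Path(f"{name}-latest.osm.pbf")
    url = f"https://download.geofabrik.de/{region}-latest.osm.pbf"
    print("downloading", url)
    urllib.request.urlretrieve(url, str(pbf))  # the bandwidth-eating part

    # Convert to OsmAnd's .obf format with OsmAndMapCreator
    # (hypothetical jar name and arguments -- adjust to the real CLI).
    subprocess.run(
        ["java", "-jar", "OsmAndMapCreator.jar", str(pbf)],
        check=True,
    )
```

-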
Internet Download Manager costs about $24. It's not cross-platform either. uGet's UI looks old as fuck and shows positive on VirusTotal.
So I decided to do what most other devs would do in my situation: I created my own download manager in Qt 💪. It uses 16 different threads to download files and pretty much utilises all my bandwidth.
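The core trick behind that kind of download manager is HTTP range requests. A bare-bones Python sketch of the idea (placeholder URL; a real manager adds resume, retries, and a fallback for servers that ignore Range headers):

```python
from concurrent.futures import ThreadPoolExecutor
import urllib.request

URL = "https://example.com/big.iso"  # placeholder URL
THREADS = 16

def fetch(span):
    # Download one byte range of the file on a worker thread.
    start, end = span
    req = urllib.request.Request(URL, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return start, resp.read()

head = urllib.request.Request(URL, method="HEAD")
size = int(urllib.request.urlopen(head).headers["Content-Length"])
step = size // THREADS + 1
spans = [(i, min(i + step - 1, size - 1)) for i in range(0, size, step)]

with open("big.iso", "wb") as out, ThreadPoolExecutor(THREADS) as pool:
    for start, chunk in pool.map(fetch, spans):
        out.seek(start)   # each chunk is written at its own offset
        out.write(chunk)
```

-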
Stepmom told me I take up the most bandwidth in the house because I have 3 monitors. ATT guy said the problem was I forwarded a Minecraft port on our router. Am I the only person who knows anything about routers?!
-
⚡️ devRantron v1.4.1 ⚡️
I strongly urge all the users of the devRantron to upgrade their app. We have added some major features and made a lot of bugfixes. For example:
1. Edit Rants and Comments
2. Browse Weekly
3. Save drafts of rants so that you can edit and post them later. Also, the app now autosaves when you are typing a new rant and will keep it until you post it.
4. Fixed macOS startup. Previously the app used to open a terminal in the background to launch the app. That has been removed.
5. Confirmation before deleting a rant or comment
6. Huge performance optimization. We have upgraded to React 16 and also changed the way our compiler compiles the application. The way we fetch the notifications has also been changed and it uses less bandwidth.
7. The app will only have single instance now. If you accidentally open the app again, it will just switch to the currently running instance.
8. We now show a release info dialogue before updating. Linux and macOS users will now receive an update notification for new updates.
9. Added the ability to select rant types.
You can get it from here: https://devrantron.firebaseapp.com/
macOS users, please remove the devRantron from "Login Items" in Settings > Users and Groups.
We would like to thank all our users for giving us feedback. If you like the app, you can show your appreciation by giving a star to the repo.
Thank you!
-
!rant
My ISP just sent me an email:
I have double the bandwidth for the next 6 months while still paying the same as before. No fishy things, no special conditions...
What a time to be alive.
-
Hashedram's compilations #1
List of most annoying website designs.
1) Pages with AUTO PLAYING VIDEOS.
Yes I'm looking at you Netflix. Along with every news website known to man. I'm looking to read a fucking article, so why would you even waste your money and bandwidth trying to shove a video of some shit I don't care about in my face, and make it follow me as I scroll down like a fucking insecure puppy. Also, fuck you Instagram.
2) Pages that redirect once immediately after you visit them, thereby fucking with the browser history, so the BACK BUTTON just leads back to the same fucking site.
I mean, just why. Did you think I would just go "Hey the back button doesn't work so let's stay on the site and read their awesome content"?
3) Sites showing things in a SLIDESHOW, when it actually should be in a list.
Slideshows are for progressive stories or for showing lists where you don't care about what's in them. Top 10 foods that reduce weight. Slideshow 1/15. Fuck you.
4) LOOKS LIKE YOU'RE USING AN AD BLOCKER
Yes. Yes I am. No I will not turn it off for you, you narcissistic snowflake fuck. And don't even try to guilt shame me into turning it off, because I know you're just going to bombard me with videos of sexy singles in the area if I do.
5) Pages where I see the first 3 lines of an article and have to SUBSCRIBE to see more.
Yes. Brilliant fucking idea. A user wants to see what your site has to offer, so within the first three seconds, don't show him exactly that.
6) Looking up an article and having to read through the entire motivational life story of the author.
I just want to know how to boil eggs, not read about your journey across Africa learning how to make different recipes using boiled rhino dung.
7) CLICK BAIT.
Title: School boy designs blockchain machine learning game engine
Actual Content: Tic tac toe program made using linked lists
-
"I don't have enough bandwidth to watch the twitch stream, is there a way to get the audio only?"
"yes, start the stream and turn off the monitor"
*sigh*
-
A brilliant article that talks about the state of the internet
The Bullshit Web - https://pxlnv.com/blog/...
TL;DR: as internet speeds increased, page loading times did not decrease, because the extra bandwidth is being stuffed with unnecessarily big scripts and autoplaying videos.
AMP is nothing more than a business tactic by Google
-
At the office we sometimes lose our internet connection. The strange part is that it's not fully gone: if you (for example) ping an IP directly, it's fine, but if you try to load any web pages, or do any other kind of internet usage, it won't work.
We finally know why...
It's because another company in the same building is uploading some huge thing and using all of the available upload bandwidth (200 Mb/s)
So that's nice... Let's put a limiter on that so they DON'T FUCKING KICK US OFFLINE WHEN THEY NEED TO UPLOAD SOME.... WHATEVER THEY MAKE...
-
Peaceful protest #2:
Spotify desktop application is blocked due to "bandwidth issues", but the web version works fine. Now I listen to music on YouTube, in 1080p, and I like to leave it on for the night...
-
We are on a roll here people (side note, if You are joining the site, thank you but if you are using disposable email accounts at least wait for the verification code to arrive to said account):
So our most well-known and beloved CMS, that brings lots of love and feels to those that have to (still) deal with it, had something interesting going on:
Oh Joy! "Backdoor in Captcha Plugin Affects 300K WordPress Sites", well aren't You a really naughty little boy, eh?
https://wordfence.com/blog/2017/...
Remember that "little" miner thingy that some users here has thought about using for their site? Even Yours truly that does make use of Ads Networks (fuck you bandwidth is not free) even I have fully condenmed the Miner type ads for alot of reasons, like your computer being used as a literal node for DDoSing, well... how about your "Antivirus" Android phone apps being literally loaded with miner trojans too?
https://securelist.com/jack-of-all-...
"When You literally stopped giving any resembles of a fuck what people think about Your massive conglomerate since You still literally dominate the market since alot of people give zero fucks of how Orwellian We are becoming at neck-breaking speed" aka Google doesnt want other webbrowsers to get into market, Its happy with having MemeFox as its competitor:
https://theregister.co.uk/2017/12/...
Talking about MemeFox fucking up again:
https://theregister.co.uk/2017/12/...
And of course, here at Legion Front we can't finish a report without our shitting-at-Amazon news segment:
"French gov files €10m complaint: Claims Amazon abused dominance
Probe found unfair contracts for sellers"
More News at:
https://legionfront.me/page/news
And for what you may actually came and not me reporting stuff at Legion's Orwell Hour News™ ... the free games, right?:
Oxenfree is free on GoG. It's a good game; I played it about 2 months after its release, and I think I heard they wanted to make a Live Action movie or some sort of thing after it:
https://www.gog.com/game/oxenfree
Kingdom Classic is also free:
http://store.steampowered.com/app/...
Close Order Steam Key: HWRMI-2V3PQ-ZQX8B
More Free Keys at:
https://legionfront.me/ccgr
-
A few weeks ago, my neighbor came to me saying his WiFi was hacked and someone was abusing it.
So I tried the wifi and found out there is no password. And the one who was abusing a simple open wifi was me XD.
So I set a password for him and disabled WPS. But hopefully no one (except devRant) will know I used that much bandwidth.
-
This rant is particularly directed at web designers, front-end developers. If you match that, please do take a few minutes to read it, and read it once again.
Web 2.0. It's something that I hate. Particularly because the directive amongst webdesigners seems to be "client has plenty of resources anyway, and if they don't, they'll buy more anyway". I'd like to debunk that with an analogy that I've been thinking about for a while.
I've got one server in my home, with 8GB of RAM, 4 cores and ~4TB of storage. On it I'm running Proxmox, which is currently using about 4GB of RAM for about a dozen VM's and LXC containers. The VM's take the most RAM by far, while the LXC's are just glorified chroots (which nonetheless I find very intriguing due to their ability to run unprivileged). Average LXC takes just 60MB RAM, the amount for an init, the shell and the service(s) running in this LXC. Just like a chroot, but better.
On that host I expect to be able to run about 20-30 guests at this rate. On 4 cores and 8GB RAM. More extensive migration to LXC will improve this number over time. However, I'd like to go further. Once, I was able to build a Linux which was just a kernel and busybox, backed by the musl C library. The thing consumed only 13MB of RAM; it was a VM whose whole 13MB of RAM consumption was dedicated entirely to the kernel. I could probably optimize it further with modularization, but at the time I didn't, due to its experimental nature. In a chroot, the kernel of the host is used, meaning that said setup in a chroot would border on mere kBs of RAM consumption. The busybox shell would be its most important RAM consumer, which is negligible.
I don't want to settle with 20-30 VM's. I want to settle with hundreds or even thousands of LXC's on 8GB of RAM, as I've seen first-hand with my own builds that it's possible. That's something that's very important in webdesign. Browsers aren't all that different. More often than not, your website will share its resources with about 50-100 other tabs, because users forget to close their old tabs, are power users, looking things up on Stack Overflow, or whatever. Therefore that 8GB of RAM now reduces itself to about 80MB only. And then you've got modern web browsers which allocate their own process for each tab (at a certain amount, it seems to be limited at about 20-30 processes, but still).. and all of its memory required to render yours is duplicated into your designated 80MB. Let's say that 10MB is available for the website at most. This is a very liberal amount for a webserver to deal with per request, so let's stick with that, although in reality it'd probably be less.
10MB, the available RAM for the website you're trying to show. Of course, the total RAM of the user is comparatively huge, but your own chunk is much smaller than that. Optimization is key. Does your website really need that amount? In third-world countries where the internet bandwidth is still in the order of kB/s, 10MB is *very* liberal. Back in 2014 when I got into technology and webdesign, there was this rule of thumb that 7 seconds is usually when visitors click away. That'd translate into.. let's say, 10kB/s for third-world countries? 7 seconds makes that 70kB of available network bandwidth.
Web 2.0, taking 30+ seconds to load a web page, even on a broadband connection? Totally ridiculous. Make your website as fast as it can be; after all, you're playing along with 50-100 other tabs. The faster, the better. The more lightweight, the better. If at all possible, please pursue this goal and make the Web a better place. Efficiency matters.
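The arithmetic above, spelled out in a few lines of Python (the numbers are this rant's own rough estimates, nothing more):

```python
# Rough per-site RAM and page-weight budget from the rant's estimates.
total_ram_mb = 8 * 1024                 # the visitor's machine
open_tabs = 100
per_tab_mb = total_ram_mb / open_tabs   # ~80 MB per tab
per_site_mb = per_tab_mb / 8            # browser overhead leaves ~10 MB

slow_link_kbps = 10                     # kB/s on a slow third-world link
attention_span_s = 7                    # visitors click away after ~7 s
page_budget_kb = slow_link_kbps * attention_span_s

print(f"~{per_tab_mb:.0f} MB per tab, ~{per_site_mb:.0f} MB for your site")
print(f"{page_budget_kb} kB page-weight budget on a slow connection")
```

-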
Recently, Comcast limited my bandwidth to 1TB. I'd be upset, but my service is so slow I don't think I could use it all in just a month!
-
We have a very crappy Internet connection at my office (I believe it's 100Mb/s for 50 people to share), so when somebody starts downloading a big file they pretty much hijack all the available bandwidth and fuck up everybody else.
Now, we have ONE, just ONE SINGLE FUCKING COMPUTER RUNNING FUCKING WINDOWS 10 AND EVERY WEEK IT FUCKS UP THE ENTIRE OFFICE'S INTERNET CONNECTION WITH ITS STUPID FUCKING UNCANCELLABLE MANDATORY UPDATES.
FUCK YOU MICROSOFT.
-
So I disconnected all the other nodes consuming bandwidth on the uni's network and it was ALL mine!
(They thought something had happened to the WiFi because they were all disconnected)
SO WHAT?! 😎
-
Who else agrees that the Play Store should have a section which tells you if an app needs a mandatory login to use it? Like, why the hell does a simple offline planner/todo app need you to sign up if you don't wanna use cloud backup? Jeez, the user just trusted you by spending valuable time and bandwidth downloading your app. You owe it to him to show him some features of the app before shoving a sign-up in his face. As an app developer myself, I really think that this kind of behaviour turns off more users than anything else.
-
The company I interned at last summer decided to adopt a JS framework a little over a year ago. The managers went with the old Angular 1.x because they didn't want a JS build process. Each page has ~100 script tags on it, and these are manually included in various files (no automated way to include dependencies). None of the CSS/JS files are minified, either.
They really should have chosen Angular 2+, or an entirely different framework (React, VueJS). They're also just now upgrading the codebase from PHP 5.6 to PHP 7.2 (5.6 support ended a long time ago, and security support ends this month).
I love the company itself but these practices are poor.
I may be working there full time eventually. I hope to eventually help with the inevitable transition to a newer framework once Angular 1.x is dead, since I am an avid user of newer JS technologies. Any tips on convincing manager(s) towards newer technology? (Or at least convincing them to combine+minify these files in production to reduce the number of requests and bandwidth.)
Also, this company's product has millions of active users.
-
Had 2 days of vacation. Theoretically (plus weekend, plus 2 days) 6 days.
Worked today… on a Saturday.
Some administrators forgot to properly check bandwidth limitations....
*rolls eyes*
We had a major version upgrade of some server software at Monday.
Guess why I got called...
Of course it MUST be the software upgrade.
It couldn't be the new hardware that was set up 2 weeks ago and to which a lot of "important" VMs had been migrated.
*eyes roll inside till only white is visible*
The even more annoying thing is that it wasn't that hard to figure out.
Looking at monitoring, we had spikes on the 20 Gbit/s (roughly 2.x Gigabyte/sec Ethernet) connection of one server at roughly 1.9-plus Gigabyte/sec.
IO latency spikes that made the graph look like a heartbeat EKG with severe tachycardia...
*additionally to white eyes starts cursing in reverse latin*
Incompetent admin answer: Booboo that can only be your fault - the developers must investigate.
Me (just a tad more polite): Meep Meep mother fucker, get your shit together. If the software would eat that much, the network would be a nice chunk of charcoal. Plus the time (sending pictures instead of links to monitoring… guess the lazy fucktard whose brain is a vacuum didn't even bother to check it)...
NOTICE SOMETHING?!
Incompetent admin: It starts at the same time. Always.
After wasting roughly another hour discussing with him, I just hung up the video call.
Called someone I knew from the admin department and turns out that - drumrolls please - the incompetent admin was someone who got recruited 3 months ago…
*turning into antichrist*
I then had a not so polite discussion about how all the competent people could take days off at once (all except the incompetent admin were on vacation) and how the seemingly incompetent fresh recruit - who by the way NEVER mentioned this - was the only one left in the admin department. Which would be bad alone, but no - he even got the 24/7 emergency support role for the whole weekend.
Sometimes this company and HR especially notoriously drive me insane...
Guess next week there will be some HR barbecue.
But yeah. After a lot of raging around we nailed it down to the traffic of backups and could fix it.
Roughly 4 hours of analysis, communication, raging and hatred.
Just one hour implementing shit.
*goozfraba*
-
---WiFi Vision: X-Ray Vision using ambient WiFi signals now possible---
“X-Ray Vision” using WiFi signals isn’t new, though previous methods required knowledge of specific WiFi transmitter placements and connection to the network in question. These limitations made WiFi vision an unlikely security breach, until now.
Cybersecurity researchers at the University of California and University of Chicago have succeeded in detecting the presence and movement of human targets using only ambient WiFi signals and a smartphone.
The researchers designed and implemented a 2-step attack: the 1st step uses statistical data mining from standard off-the-shelf smartphone WiFi detection to “sniff” out WiFi transmitter placements. The 2nd step involves placement of a WiFi sniffer to continuously monitor WiFi transmissions.
Three proposed defenses to the WiFi vision attack are Geofencing, WiFi rate limiting, and signal obfuscation.
Geofencing, or reducing the spatial range of WiFi devices, is a great defense against the attack. For its advantages, however, geofencing is impractical and unlikely to be adopted by most, as the simplest geofencing tactic would also heavily degrade WiFi connectivity.
WiFi rate limiting is effective against the 2nd step attack, but not against the 1st step attack. This is a simple defense to implement, but because of the ubiquity of IoT devices, it is unlikely to be widely adopted as it would reduce the usability of such devices.
Signal obfuscation adds noise to WiFi signals, effectively neutralizing the attack. This is the most user-friendly of all proposed defenses, with minimal impact on user WiFi devices. The biggest drawback to this tactic is the increased WiFi bandwidth consumption, though compared to the downsides of the other mentioned defenses, signal obfuscation remains the most likely to be widely adopted and optimized against this kind of attack.
For more info, please see journal article linked below.
https://arxiv.org/pdf/...
-
I've optimised so many things in my time I can't remember most of them.
Most recently, something had to be the equivalent of `"literal" LIKE column` with a million rows to compare. It would take around a second on average per literal to look up, for a service that needs to be high load and low latency. This isn't an easy case to optimise; many people would consider it impossible.
It took me a couple of hours to reverse engineer the data and write a few-hundred-line implementation that would look it up in 1ms on average, with the worst possible case being very rare and not too distant from this.
In another case there was a lookup of arbitrary time spans that most people would not bother to cache because the input parameters are too short lived and variable to make a difference. I replaced the 50000+ line application acting as a middle man between the application and database with 500 lines of code that did the lookup faster and was able to implement a reasonable caching strategy. This dropped resource consumption by a factor of ten at least. Misses were cheaper and it was able to cache most cases. It also involved modifying the client library in C to stop it unnecessarily wrapping primitives in objects for the high level language, which was causing it to consume excessive amounts of memory when processing huge data streams.
Another system would download a huge data set for every point of sale constantly, then parse and apply it. It had to reflect changes quickly but would download the whole dataset, containing hundreds of thousands of rows, each time. I whipped up a system so that a single server (barring redundancy) would download it in a loop, parse it using C, which was much faster than the traditional interpreted language, then use a custom data differential format, a TCP data streaming protocol, binary serialisation and LZMA compression to pipe it down to points of sale. This protocol also used versioning for catchup and differential combination for additional reduction in size. It went from being 30 seconds to a few minutes behind to being able to keep up to within a second of changes. It was also previously using so much bandwidth that it would reach the limit on ADSL connections and then get throttled. I looked at the traffic stats afterwards, and it dropped from dozens of terabytes a month to around a gigabyte or so a month for several hundred machines. From the drop in the graphs you'd think all the machines had been turned off, as that's what it looked like. It could now happily run over GPRS or 56K.
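To make the differential-format idea concrete, here's a toy Python sketch of the ship-only-what-changed pipeline with binary packing and LZMA on top (the field layout and versioning are invented for illustration; the real protocol also did catch-up and delta combination):

```python
import lzma
import struct

def diff(old: dict, new: dict) -> dict:
    """Rows (id -> price) that changed since the last version."""
    return {k: v for k, v in new.items() if old.get(k) != v}

def encode(version: int, changes: dict) -> bytes:
    # Header: version + row count, then fixed-size binary rows.
    payload = struct.pack("<II", version, len(changes))
    for row_id, price in changes.items():
        payload += struct.pack("<Id", row_id, price)
    return lzma.compress(payload)

old = {1: 9.99, 2: 4.50, 3: 1.25}
new = {1: 9.99, 2: 4.75, 3: 1.25, 4: 0.99}
blob = encode(42, diff(old, new))
print(len(blob), "bytes on the wire")  # vs. resending every row
```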
I was working on a project with a lot of data and noticed these huge tables and horrible queries. The tables were all the results of queries. Someone wrote terrible SQL, then to optimise it ran it in the background with all possible variable values and stored the results of joins and aggregates in new tables. On top of those tables they wrote more SQL. I wrote some new queries and query generation that wiped out thousands of lines of code immediately and operated on the original tables, taking things down from 30GB and rapidly climbing to a couple of GB.
Another time a piece of mathematics had to generate all possible permutations, and the existing solution was factorial. I worked out how to optimise it to run in n*n, which, believe it or not, made a world of difference. It went from hardly handling anything to handling anything thrown at it. It was nice trying to get people to "freeze the system now".
I built my own frontend systems (admittedly rushed) that do what angular/react/vue aim for, but with higher (maximum) performance, including an in-memory database to back the UI that had layered event-driven indexes and could handle referential integrity (an overlay on the database only revealing items with valid integrity) or reordering and repositioning events very rapidly using a custom AVL tree. You could layer indexes over it (data inheritance) that could be partial and dynamic.
So many times have I optimised things on automatic just cleaning up code normally. Hundreds, thousands of optimisations. It's what makes my clock tick.
-
The handle on the faucet in one of the bathrooms broke off today. You can still operate the faucet with some finger strength. It is just difficult. We also got a reminder today that we are not to be streaming video or music using the company wifi. They ask that we use our own bandwidth on our phones.
So on the bathroom door where the faucet handle is broken I placed this sign:
-
Fucking piece of shit German internet man. Some of you might know that Germany probably has the shittiest internet in the EU. And by shitty, I don't mean the downstream speeds you can get (which is how most ISPs justify their crappy network), but the GODDAMN UPSTREAM SPEEDS.
See, I'm just a student, right? I don't run a fucking company or something like that. I don't need / can't afford a symmetrical gigabit connection. But I do a lot of stuff that requires a decent upstream connection.
Fucking Unitymedia (my ISP), if I already decide to buy the goddamn "business plan" (IPv6 & static adresses), at least supply me with some decent upstream speeds. PLEASE!
My current plan costs ~45€ a month for internet and TV (I don't watch, but my two other flat-mates do).
Internet speeds are 150 Mbit/s down and FUCKING 10 Mbit/s up! What??! What the hell am I supposed to do with only 10 Mbit/s?? I'm already completely exhausting the bandwidth and I'm not even done setting everything up! Fucking hell...
I was planning on getting their "upload package" to get at least 20 Mbit/s up – but they removed that option! IT'S GONE, PEOPLE! They said in an interview last year that "customers are not interested in higher upload speeds" and consequently removed that option. WHAT???
"You wanna have state-of-the-art downstream speeds of 400 Mbit/s? Here you go. Oh, our maximum limit of 10 Mbit/s upstream is not enough for you? TOO FUCKING BAD, NOTHING THAT WE CAN OFFER YOU!"
(Seriously though, the best customer internet plan is 400D & 10U)
Goddamn... in this day and age of things like cloud storage etc. even "normal" people definitely need higher upload speeds.
Man, this rant got so long, but I really wanted to get this out. This wasn't even everything though, maybe I'll make a separate rant to elaborate on other issues.
If you are interested, you might want to read up on the following report:
https://speedtest.net/reports/...
-
Managed a 97% reduction in bandwidth usage for our internal host monitoring tool by converting the dashboard from using AJAX polling to websocket events.
Completely unnecessary, but I wanted an excuse to do some development with websockets. (:
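The shape of that change, sketched with Python's third-party `websockets` package (an assumption; any WS library works): instead of every dashboard polling "anything new?" on a timer, the server pushes a status event only when a host actually changes.

```python
import asyncio
import json

import websockets  # pip install websockets

CLIENTS = set()

async def handler(ws):
    # Each dashboard keeps one socket open instead of polling.
    CLIENTS.add(ws)
    try:
        await ws.wait_closed()
    finally:
        CLIENTS.discard(ws)

async def push_events(events: asyncio.Queue):
    while True:
        event = await events.get()  # e.g. {"host": "db1", "up": False}
        websockets.broadcast(CLIENTS, json.dumps(event))

async def main():
    events: asyncio.Queue = asyncio.Queue()
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await push_events(events)   # a monitor task would feed the queue

asyncio.run(main())
```

-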
You need to answer your manager, but don't have a clue, try these randomly:
I'll circle back to you
I will run the numbers on it
Let's go back to the drawing board
Let's touch base in a bit. Ping me
I don't have the bandwidth right now
It's on my radar
Let's put this on the back-burner
I have too many balls in the air right now
I have a lot on my plate
I'll get back by close of play tomorrow
I'll have to deep dive/drill down into this and get back
I have a hard stop at X hours
Let's park that for now
It's in the pipeline
-
After months of development, testing, testing and even more testing the app was ready for deployment to production. Happy days, the end was in sight!
I had a week's leave so I handed over the preparation for deployment to my Senior Developer and left it in his capable hands while I enjoyed the sun and many beers.
I came back on the day of deployment and proudly pressed the deploy button. Hurrah!
Not long after I got loads of phone calls from around the country as the app wasn't working. What madness is this?! We tested this for months!
Turns out my Senior didn't like the way I'd written the SQL queries, so he changed them. Which is obviously both annoying and unprofessional, but even worse, he got a join wrong, so the memory usage was a billion times higher and it drained the network bandwidth for the whole site when I tried to debug it.
I got all the grief for the app not working and for causing many other incidents by running queries that killed the network.
So...much... rage!!!
-
Working in Asian IT. Network bandwidth sucks the soul out of your body faster than downloading a web page...
A man(ager) asks, why do you need Internet?
....?
-
It's my second rant about Windows here in two days, but here we go:
Windows used to be a cool OS (and in part it still is). Yes, it's made for the end user, not power users, yes it has many flaws. But it was my gateway to computers and programming. I have fond memories of my first PC, playing around with the old win98 themes (my favorite was the baseball one!).
However, I am very disappointed now. I just had to basically force Windows 10 to stop hogging my bandwidth. It was an actual battle, with the OS simply (I kid you not) running update and other services EVEN AFTER I SPECIFICALLY DISABLED THEM. I just saw the Windows update service running, while its status was disabled. It's absurd.
Sorry Windows, but that's not what I want. I want to choose what happens on my own OS. Linux gives me exactly that, why can't you?
-
"Do you have enough bandwidth?"
"No."
*Passes me another project
What's the entire point of asking in the first place?
-
First rant!
The first time I got in touch with programming was when I was about 14 years old. I started a private server for a game called Maplestory (yeah you know it, I know you do) and had one of the most popular servers.
Topping all the rankings of best servers, getting lots and lots of traffic...
Anyway, I started modding the game and implementing new features and quests. Right until my father saw our bandwidth. Because the server was running on my computer in my own bedroom 24/7 and blowing nice hot air in my room.
Our bandwidth limit was reached just a couple of days into the next billing cycle and I had to shut everything down from that point. And this happened a few times.
I was devastated shutting it down but learned so much from it. And it introduced me to programming.
Up till now, I'm almost graduating in computer science, already have 2 companies that are willing to hire me, and am probably even going to work with my dad on a huge app soon
-
Ok seriously what the actual fuck is this even supposed to be
Narnia has better bandwidth than this
-
Here it is: get MythTV up and running.
In one corner, building from source, the granddaddy Debian!
In the other, prebuilt and ready to download, the meek but feisty Xubuntu!
Debian gets an early start, knowing that compiling on a single core VM won't break any records, and sends the compiler to work with a deft make command!
Xubuntu, relying on its user friendly nature, gets up and running quickly and starts the download. This is where the high-bandwidth internet really works in her favor!
Debian is still compiling as Xubuntu zooms past, and is ready to run!
MythTV backend setup leads her down a few dark alleys, such as asking where to put directories and then not making them, but she comes out fine!
Oh no! After choosing a country and language, the frontend committed suicide with no error message! A huge blow to Xubuntu, as this will take hours to diagnose!
Meanwhile, Debian sits in his corner, quietly chugging away on millions of lines of C++...
Xubuntu looks lost... And Debian is finished compiling! He's ready to install!
Who will win? Stay tuned to find out!
-
So I have this 13 year old cousin who's pretty determined to follow my footsteps as a developer someday. He really likes gaming and all internet stuff. His future plans make me happy, since I may finally have a relative that is a developer. But darn it! He's kinda weird coz he still throws tantrums. One of his major tantrums (which happened again last night) is that he wants the wireless Karaoke machine to be turned off, because he thinks that it's slowing down the internet. It was his sister's birthday party and the guests were partying. I've told him many times that the signal for the karaoke is different from that of the router and has nothing to do with the internet slowing down. It must be caused by a device that is updating some apps or whatever. We live in the Philippines and our internet provider is quite fast, but it has this stupid fair usage policy that caps our bandwidth to a minimum speed if we reach a certain amount of data usage. Since he goes to YouTube every day in 480 and 720p, I explained to him that it was one of the causes.
Last night, I almost got triggered because I wanted him to believe that the wifi is different from the karaoke machine's radio, which is not connected to the wifi and not using data. I also told him about the different kinds of wireless signals, which I studied as a Software Engineering student back then, and yet he still doesn't believe me. And what almost triggered me is that I saw his Steam client updating while he was watching YouTube. I told him that was it. But instead of agreeing, he refused to believe me and just told me that Steam is just updating and he's not downloading anything, which made me think about why he keeps going to YouTube, because... he's "not downloading". Oh God! Good luck to this kid. 😂
-
I was wondering: if anybody gets to sniff my WiFi and finally finds my pass, he is able to listen to my encrypted traffic and fully decrypt it (websites without HTTPS)!
That is far worse than just using my bandwidth!!
What do you think?
What else can the attacker get?
-
Dear customer, disregarding the bullshit your agency has dumped into Figma, I hereby deliver a clean, minimalist, and usable website without carousel sliders, chatbots, call-to-action teasers for newsletter signup, and muted auto-play videos consuming your end users' bandwidth.
One day you will understand and be grateful, too!
-
Oi mates!
Little #ad (Not annoying don't worry - it's a cool project)
Just wanted to let y'all know about the awesome project from Stanford University named Folding@Home!
Basically you donate CPU/GPU power and they use it for researching cancer/alzheimer's/etc.
All you need to do is install some software on your server/computer.
Then the software downloads so called "Work Units" (no big bandwidth required - really small packets) and simulates/calculates some stuff. Afterwards the client send the results back to their server.
This way they are able to create a "supercomputer" that is spread all over the world.
You don't need to pay anything except maybe some increased electricity bills (but you change some settings to use only a little part of the CPU/GPU and therefore create less heat).
Of course the program only uses the CPU/GPU power that's not required by any other software on the computer. I can literally play games while the client is running. No performance decrease.
That's a short intro by me. I can suggest you to visit their website and maybe even start folding by yourself!
> https://foldingathome.com
Also @cr78, @kescherRant and me are in a team together. If you want to join our team as well just use our Team ID:
235222
Teams?
Yup, there's this little stats site (https://stats.foldingathome.com) where all teams can compete against each other. Nothing big.
I hope I convinced atleast some of you!
Feel free to ask questions in the comments!
See ya.
-
So our last project was a hybrid application in Cordova
During client meeting:
Client (digital mobile lead) : So we have to integrate Nodejs in our App
Me: huh :|||
BD guy: yes SIR, yes SIR
Me: we cant integrate like that, both are different things and have different applications :|
Client: I am told that Nodejs is FAST and it's Javascript
BD guy: yes SIR, yes SIR
Me: but (just started to explain the difference)
Client: we need to increase the 'bandwidth', we want another senior resource for this project
BD guy: yes SIR, yes....
Me: what the FUCK :|
-
The technician from my previous ISP was creating a mess. The cables were worn out and overall the service quality started degrading. Maybe I too had an old router.
After 10 years of loyalty, decided to switch the ISP. Similar plan, better rates. However, this one is fibre optic.
Expecting better service and less bandwidth drop. We just got the installation done and now will get the connection activated next week.
The ISP has also agreed to provide me a free 5G router, so yayayay!!
-
Arguing with a co-worker.... He is writing a serial data plotter and wants me to send the data as text. I’m like, ugh, no, I’m not wasting bandwidth on text data, you are getting it as binary, as my embedded system has a lot of other stuff to do besides sending debug info, so the quicker I get the data to you the better... Plus, his program is running on a PC; there is no issue regarding resources for handling binary data.
He tells me I’m wrong and tries to defend his stance. Then all the electrical engineers and other software engineers stand up and ask why in the hell it would be faster to send text than binary. He has no response.
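For a feel of the difference, here's the same sample encoded both ways in Python (the field layout is made up for illustration):

```python
import struct

timestamp_ms, ch1, ch2, ch3 = 163842, 1.2345, -0.5678, 3.1415

# Text framing: variable length, needs parsing on the PC side.
text = f"{timestamp_ms},{ch1:.4f},{ch2:.4f},{ch3:.4f}\n".encode()

# Binary framing: one uint32 + three floats, always the same size.
binary = struct.pack("<Ifff", timestamp_ms, ch1, ch2, ch3)

print(len(text), "bytes as text")   # ~30 bytes, plus formatting cost on the MCU
print(len(binary), "bytes packed")  # exactly 16 bytes, fixed size
```

-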
Why the fuck does every operations app do popouts now? I don't want a simple view of the data, I want all the data so I can compare it together.
It's not like you're saving any bandwidth! All the data is there, I can fucking see it 👀 in the dev tools!
I hate how every product now desperately tries to be like their competitor and everyone fails at it because everyone is copying everyone else.
-
Starting to use everything in Incognito, with a VPN, and (thinking of) switching to Tor.
A heck of a privacy gain that will be, but the cost in speed (bandwidth) will be terrible.
-
Can someone help me understand?
I subscribed to a nifty IT-related magazine, and on its back there's an ad for "Dedicated root server hosting". Nothing unusual at first glance, but after I read the issue, I decided to humor them and see what it is that they offered, and... It just... Doesn't make sense to me!
An ad for "Dedicated Root Server" - What is a dedicated root server first of all? Root servers of any infrastructure sound pretty important.
But, the ad also boasts "High speed performance with the new Intel Core i9-9900K octa-core processor", that's the first weird thing.
Why would anyone responsible enough want to put an i9 into a highly-reliable root server, when the thing doesn't even support ECC? Also, come on, octa-core isn't much, I deal with servers that have anywhere between 2 and 24 cores. 8 isn't exactly a win, even if it has a higher per-core clock.
Oh, also, further down the ad has a list of seeming advantages/specs of the servers; they proclaim that the CPU "incl. Hyper-Threading-Technology"... Isn't that... standard when it comes to servers? I have never seen a server without hyperthreading so far at my job.
"64 GBs of DDR4 RAM" - Fair enough, 64 gigs is a good amount, but... Again, its not ECC, something I would never put into a server.
"2 x 8 TB SATA Enterprise Hard Drive 7200 rpm" - Heh, "enterprise hard drive", another cheap marketing word, would impress me more if they mentioned an actual brand/model, but I'll bite, and say that at least the 7200 rpm is better than I expected.
"100 GBs of Backup Space" - That's... Really, really little. I've dealt with clients who's single database backup is larger than that. Especially with 2x8 TB HDD (Even accounting for software raids on top)
This one cracks me up - "Traffic unlimited"
Whaaaat?! You are not gonna give me a limit to the total transferred traffic to the internet for my server in your data center? Oh, how generous of you, only, the other case would make the server just an expensive paperweight! I thought this ad was for semi-professionals at least, so why mention traffic, and not bandwidth, the thing that matters much more when it comes to servers? How big of a bandwidth do I get? Don't tell me you use dialup for your "Dedicated Root Server"s!
"Location Germany or Finland" - Fair enough, geolocation can matter when it comes to latency.
"No minimum contract" - Oooh, how kiiiind of you, again, you are not gonna charge me extra for using the server only as long as I pay? How nice!
"Setup Fee £60" - I guess, fair enough, the server is not gonna set itself up, only...
The whole ad is for "monthly from £55.50", that's quite the large fee for setup.
Oh, and a cherry on top, the tiny print on the bottom mentions: "All prices exclude VAT and are a subject to..." blah blah blah.
Really? I thought that this sort of near customer deceit is present only in the common people's sphere!
I must say, there's being unimpressed, and then... There's this. Why, just... Why? Does anyone understand this? Because I don't...
-
Best thing about doing WiFi setup/network maintenance for the local coffee shop for free? Putting my laptop's MAC address into the QoS table and guaranteeing myself bandwidth.
-
-
(long post is long)
This one is for the .net folks. After evaluating the technology top to bottom and even reimplementing several examples I commonly use for smoke testing new technology, I'm just going to call it:
Blazor is the next Silverlight.
It's just beyond the pale in terms of being architecturally flawed, and yet they're rushing it out as hard as possible to coincide with the .Net 5 rebranding silo extravaganza. We are officially entering round 3 of "sacrifice .Net on the altar of enterprise comfort." Get excited.
Since we've arrived here, I can only assume the Asp.net Ajax fiasco is far enough in the past that a new generation of devs doesn't recall its inherent catastrophic weaknesses. The architecture was this:
1. Create a component as a "WebUserControl"
2. Any time a bound DOM operation occurs from user interaction, send a payload back to the server
3. The server runs the code to process the event; it spits back more HTML
Some client-side js then dutifully updates the UI by unceremoniously stuffing the markup into an element's innerHTML property like so much sausage.
If you understand that, you've adequately understood how Blazor works. There's some optimization like signalR WebSockets for update streaming (the first and only time most blazor devs will ever use WebSockets, I even see developers claiming that they're "using SignalR, Idserver4, gRPC, etc." because the template seeds it for them. The hubris.), but that's the gist. The astute viewer will have noticed a few things here, including the disconnect between repaints, inability to blend update operations and transitions, and the potential for absolutely obliterative, connection-volatile, abusive transactional logic flying back and forth to the server. It's the bring out your dead approach to seeing how much of your IT budget is dedicated to paying for bandwidth and CPU time.
Blazor goes a step further in the server-side render scenario and sends every DOM event it binds to the server for processing. These include millisecond-scale events like scroll, which, at least according to GitHub issues, devs are quickly realizing requires debouncing, though they aren't quite sure how to accomplish that. Since this immediately becomes an issue with tickets saying things like, "scroll event crater server, Ugg need help! You said Blazorclub good. Ugg believe, Ugg wants reparations!" the team chooses a great answer to many problems for the wrong reasons:
gRPC
For those who aren't familiar, gRPC has a substantial amount of compression primarily courtesy of a rather excellent binary format developed by Google. Who needs the Quickie Mart, or indeed a sound markup delivery and view strategy when you can compress the shit out of the payload and ignore the problem. (Shhh, I hear you back there, no spoilers. What will happen when even that compression ceases to cut it, indeed). One might look at all this inductive-reasoning-as-development and ask themselves, "butwai?!" The reason is that the server-side story is just a way to buy time to flesh out the even more fundamentally broken browser-side story. To explain that, we need a little perspective.
The relationship between Microsoft and its enterprise customers is your typical mutually abusive co-dependent relationship. Microsoft goes through phases of tacit disinterest, where it virtually ignores them. And rightly so, the enterprise customers tend to be weaksauce, mono-platform, mono-language types who come to work, collect a paycheck, and go home. They want to suckle on the teat of the vendor that enables them to get a plug and play experience for delivering their internal systems.
And that's fine. But it's also dull; it's the spouse that lets themselves go, it's the girlfriend in the distracted boyfriend meme. Those aren't the people who keep your platform relevant and competitive. For Microsoft, that crowd has always been the exploratory end of the developer community: alt.net, and more recently, the dotnet core community (StackOverflow 2020's most loved platform, for the haters). Alt.net seeded every competitive advantage the dotnet ecosystem has, and dotnet core capitalized on. Like DI? You're welcome. Are you enjoying MVC? Your gratitude is understood. Cool serializers, gRPC/protobuff, 1st class APIs, metadata-driven clients, code generation, micro ORMs, etc., etc., et al. Dear enterpriseur, you are fucking welcome.
Anyways, b2blazor. So, the front end (Blazor WebAssembly) story begins with the average enterprise FOMO. When enterprises get FOMO, they start to Karen/Kevin super hard, slinging around money, privilege, premiere support tickets, etc. until Microsoft, the distracted boyfriend, eventually turns back and says, "sorry babe, wut was that?" You know, shit like managers unironically looking at cloud reps and demanding to know if "you can handle our load!" Meanwhile, any actual engineer hides under the table facepalming and trying not to die from embarrassment.
-
How much does your internet cost?
My stats:
Speed: 50Mbps
Bandwidth: Unlimited with no FUP
Yearly: $67.16
Monthly: $5.59
Downtime: Almost never
Note: This is for a home internet plan, NOT a mobile plan51 -
Fire your whole fucking web team Bethesda
* Your design is a classic ipecac. Whatever the fuck you are doing in the frontend doesn't justify the 4MB of bandwidth I wasted on a single JS file. Why the fuck can I see the whole fucking node_modules directory when looking at the sources?
I know this is supposed to be a webpage for a game development studio, but I'm seriously wondering if your budget would even get me a prostitute.
I'm a greedy fuck and want a free game. apparently your servers are only good enough to register me, but login is apparently too much to ask for. Yeah sure. Oh and also thank you for choosing an "incorrect username and password" error message by default, even though your fucking gateway timed out. Please be kind enough and punch me directly into my face next time. Not like I'll ever access that shit ever again3 -
I haven't had a smartphone in a while now. So I just started using one again. I am getting upsold on an app to "protect me from dangerous calls/texts" on my service. Really? You want to charge me more money for overpriced bandwidth to protect me from YOUR service? This is like selling an aftermarket seatbelt for a car.
At least Microsoft has the decency of providing basic security/virus protection for their flagship product. -
We'd just finished a refactor of the gRPC strategy. Upgraded all the containers and services to .NET Core 3, pushed a number of perf changes to the base layer, and added a custom adaptive thread scheduler with a heuristic analyzer to adjust between various strategies.
Went from 1.7M requests/s on 4 cores and 8GB RAM to almost 8M requests/s on the same; ended up having to split everything out into distributed 2-core instances because we were bottlenecking against 10GbE bandwidth in AWS.2 -
Ok, first rant, about my struggles getting reliable internet over the past 6 years. It's not too interesting of a topic, but here we go:
I'm living in a more rural part of Germany and internet here is shit. I pay more than 50 bucks a month for 700kb/s downstream (let's just not talk about upstream...), which is meh by itself but it gets worse. Before this I had roughly 230kb/s downstream using DSL. My provider came out with a new oh-so-fucking-fancy solution for giving people faster internet without upgrading their lame ass fucking backbone and POS infrastructure from 70 years ago: they sell you hybrid internet, which combines your shit DSL and an LTE connection using Multipath TCP. Not only do I get only 6 of my promised (and paid for) 50 Mbit, no, it's also a fucking piece of nonworking shit!!!
Let me illustrate:
You constantly have problems with web content (or any remote content) not loading because the host server does not play nice with Multipath TCP. It either refuses the connection altogether or takes about 30-50 seconds to establish one. Think about your life when it takes two or three fucking minutes to load 5 YouTube thumbnails or load new tweets at the bottom of the Twitter page! Also, you never know if you a) have an error in your implementation of a new API or if b) the remote host just doesn't like MPTCP (there's never an error for that! Fuck you!), your SSH sessions ALWAYS drop at the most inopportune fucking moments because the LTE thing lost connection, you always have to turn on a VPN if you want to visit specific websites (for example your school's website), and so on....
Oh and also, my provider started throttling specific services again these days with Netflix and YouTube struggling to display 240p, fucking 240p video without buffering when I get 600kbit down on steam (ofc the steam download is paused when watching videos). When using a VPN, YouTube 720p and Netflix HD work like a charm again. Fucking Telekom bastards
Then there is the problem with VPNs. The good thing about them is that they solve all the TCP Multicast problems. Yay. Now for the bad things:
First of all, as soon as I use a VPN, access times to remote go up by like fucking 500%. A fucking DNS lookup takes 8-15 seconds!!! The bandwidth is there but it takes forever.. because reasons I guess. Then the speed drops to DSL speeds after a while because the router turns off my LTE connection when it is unused and it does not detect VPN traffic as traffic (again because... Reasons?) And also, the VPN just dies after an hour and you have to manually reconnect (with every VPN provider so far)
And as if that wasn't enough, now the LAN is dying on me too, with the router (the fucking expensive hybrid piece of shit, 230 bucks..) not providing DHCP service anymore, or completely refusing all wifi connections, or randomly dropping 5GHz devices, or.....
You get the point.
The worst thing is, they recently layed down 400mbit fiber in my neighborhood. Guess where the FUCKING PIECE OF SHIT CABLE ENDS??? YEAH, RIGHT IN FRONT OF MY NEIGHBORS HOUSE. STREET NUMBER 19 IS SERVED WITH 400MBIT AND MY HOME, THE 20, IS NOT IN THEIR FUCKING SERVICE REGION. Even though there is a fucking cable with the cable companies name on it on my property, even leading up to my house! They still refuse to acknowledge it! FUCK YOU!!!!
Well anyways thanks for reading. Any of you got the same problems? :/2 -
So recently I had an argument with gamers about the memory required on a graphics card. The guy suggested the 8GB model of.. idk, I forgot the GPU model already, some Nvidia crap.
I argued on that: well, why does memory size matter so much? I know that it takes bandwidth to generate and store a frame, and I know how much size and bandwidth that is. It's a fairly simple calculation - you take your horizontal and vertical resolution (e.g. 2560x1080, which I'll go with for the rest of the rant) times the number of subpixels (so red, green and blue) times the bit depth (i.e. the number of values you can set the subpixel/color brightness to, usually 8 bits i.e. 0-255).
The calculation would thus look like this.
2560*1080*3*8 = the resulting size in bits. You can omit the last 8 to get the size in bytes, but only for an 8-bit display.
The resulting number you get is exactly 8100 KiB, or roughly 8MB, to store a frame. There is no more to storing a frame than that. Your GPU renders the frame (it might need some memory for that, but not 1000x the size of the frame itself, that's ridiculous) and stores it into a memory area known as a framebuffer, for the display to eventually pick up and put on the screen.
Assuming that the refresh rate for the display is 60Hz, and that you didn't overbuild your graphics card to display a bazillion lost frames for that, you need to display 60 frames a second at 8MB each. Now that is significant. You need 8x60MB/s for that, which is 480MB/s. For higher framerate (that's hopefully coupled with a display capable of driving that) you need higher bandwidth, and for higher resolution and/or higher bit depth, you'd need more memory to fit your frame. But it's not a lot, certainly not 8GB of video memory.
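In runnable form, for the disbelievers (Python, same numbers as above):

width, height = 2560, 1080
subpixels, bit_depth = 3, 8

frame_bits = width * height * subpixels * bit_depth
frame_bytes = frame_bits // 8           # same as dropping the final *8
print(frame_bytes / 1024, "KiB")        # 8100.0 KiB, roughly 8MB per frame

fps = 60
print(frame_bytes * fps / 1e6, "MB/s")  # ~498 MB/s (~480 if you round each frame to 8MB first)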
Question time for gamers: suppose you run your fancy game off an iGPU, in a laptop or whatever with 8GB of memory in the whole system you're resorting to running the filthy iGPU from. Are you actually using all that shared general-purpose RAM for frames and "there's more to it" juicy game data? Where does the rest of the operating system's memory fit in such a case? Ahhh.. yeah it doesn't. The iGPU magically doesn't use all that 8GB memory you've just told me that the dGPU totally needs.
I compared it to displaying regular frames, yes. After all that's what a game mostly is, a lot of potentially rapidly changing frames. I took the entire bandwidth and size of any unique frame into account, whereas the display of regular system tasks *could* potentially get away with less, since most of the frame is unchanging most of the time. I did not make that assumption. And rapidly changing frames is also why the bitrate on e.g. screen recordings matters so much. Lower bitrate means that you will be compromising quality in rapidly changing scenes. I've been bit by that before. For those cases it's better to have a huge source file recorded at a bitrate that allows for all these rapidly changing frames, then reduce the final size in post-processing.
I've even proven that driving a 2560x1080 display doesn't take oodles of memory because I actually set the timings for such a display in order for a Raspberry Pi to be able to drive it at that resolution. Conveniently the memory split for the overall system and the GPU respectively is also tunable, and the total shared memory is a relatively meager 1GB. I used to set it at 256MB because just like the aforementioned gamers, I thought that a display would require that much memory. After running into issues that were driver-related (seems like the VideoCore driver in Raspbian buster is kinda fuckulated atm, while it works fine in stretch) I ended up tweaking that a bit, to see what ended up working. 64MB memory to drive a 2560x1080 display? You got it! Because a single frame is only 8MB in size, and 64MB of video memory can easily fit that and a few spares just in case.
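For the curious, the relevant bits of /boot/config.txt look roughly like this (values are illustrative; the hdmi_cvt recipe is the standard one for custom modes, but exact timings depend on the panel):

# /boot/config.txt
gpu_mem=64             # ARM/GPU memory split: 64MB comfortably fits a 2560x1080 framebuffer
hdmi_group=2           # DMT (computer monitor) timings
hdmi_cvt=2560 1080 60  # custom mode: width height framerate
hdmi_mode=87           # select the custom hdmi_cvt mode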
I must've sucked all that data out of my ass though, I've only seen people build GPU's out of discrete components and went down to the realms of manually setting display timings.
Interesting build log / documentary style video on building a GPU on your own: https://youtube.com/watch/...
Have fun!20 -
If you ask me to run your reports, saying that it's urgent, then leave for a week's vacation, I will throttle your bandwidth to that of a 1200 baud modem for the next 3 months.
-
Management: "Let's hire some folks and expand the team in this region. And then we will think about what they will exactly do. And then get more work or create more work with our own vision"
Hired developers' *minds*: "Why the fuck is this small work done by such a big team? Why the fuck is this work done by my team (a different team) and not by the same team that built the product? Oh, they don't have bandwidth? They want to parallelize? And all decisions still have to go through the product team? They have months of experience and context? Right"2 -
You think NORMAL updates are slow?
I have to install these twice, once from the desktop then again at boot-time!
(gift desktop for my uncle, but we're almost out of bandwidth for the LAN network for the month... had to go sit outside (yes, OUTSIDE) a McD's so I could have both power and network access. It was 2° outside.)7 -
I didn't know "bandwidth" can be so hard to understand even after 2 hrs explaining..client still wants widgets with autoplay videos..God why?4
-
I hate AMP sites so much. Fuck you Google! I'm not living in some third world country, nor do I use a decade old smartphone. And even if so, it's none of your fucking business what I do with my bandwidth!!
Just give me the real website, instead of downgraded shit!!1 -
We released a website for a client 10 days ago. The site was up and running and everything seemed fine.
But it turns out that the site used 15GB of bandwidth in 5 days (WTF???).
So now I need to go and examine my code to make sure I didn't forget something, and implement new caching methods to try and reduce the amount of bandwidth being used.
But I still don't understand how a small "newspaper" website with a max upload size of 5MB could have used so much in so little time.
I also added a screenshot showing the number of visits from an addthis dashboard5 -
Me: the web app is downloading a lot of static content while loading the page, leading to the app being very slow in low-bandwidth locations. Can you ensure compression is enabled while serving static files?
UI Developer: sure, I'll look into that. Btw, I have a question reg that.
Me: yes, pls.
UI Developer: once the compressed static files are downloaded to the browser, should I write a separate module to uncompress them ?
Me: :-(Strategic Facepalm)
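For the record, the whole point is that no client-side module is needed: the browser undoes Content-Encoding transparently. A minimal sketch of the server side, Python stdlib only (the app.js path is hypothetical):

import gzip
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        with open("static/app.js", "rb") as f:
            raw = f.read()
        body = gzip.compress(raw)  # compress once on the server
        self.send_response(200)
        self.send_header("Content-Type", "application/javascript")
        self.send_header("Content-Encoding", "gzip")  # browser inflates this on its own
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8000), Handler).serve_forever()

In production you'd let nginx or a CDN handle this (and honor the client's Accept-Encoding header), but the "uncompress module" already ships with every browser.
-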
I HATE YOU STREAMING SERVICES! FUCK YOU!
Here's the setup:
I work in a rather small office, where we are like 7 people (including me). Now, there's one person in charge of putting music through speakers (obviously, not everyone enjoys the same kind of music)
Well, we have a hell of a small bandwidth (1.5 Mbps tops); now, add to that that every single fucker here uses Spotify and is streaming their music...
WHY!!!!
Good side: I have my earphones and ~30GB of my music on my phone, so it's not an issue for me. Also, I'm kind of an audiophile, so Spotify quality sucks.
Bad side: I can't even fucking load Google because those fuckers are eating the bandwidth.5 -
Fuck my internet connection. I really don't get it: sometimes it works fine and I can download small files while using Skype without any problem, and then the other day, without any apparent reason, I kept getting kicked out of online games, websites took ages to load and TeamSpeak audio cut out. What the fuck, I even closed everything that might take up the smallest amount of bandwidth. It fucking ruins my night to the point where I want to run through my computer setup with an axe.11
-
*this is gonna be a long one*
This year has been a Year™️. I'm kinda fed up with the industry in general, and I'm not sure how I'm gonna get back to working.
I also got an official autism diagnosis, which makes me feel like there isn't really gonna be a workplace where I'm not gonna want to die. It's fucking exhausting to deal with corporate bs and I don't have the bandwidth for that.
Recently I've been focusing on finishing my studies and I've been considering a hard turn to academia. tbh it's not an idea i like to entertain, but i do like that it has more autonomy and room to breathe. I also like teaching, that's not the problem for me, i just hate the research culture in general. I find it pedantic and gatekeepy in a way that really pisses me off.
Anyway, I'm mostly exhausted, but i do enjoy this field, I just don't know where to go from here.3 -
!rant
Since I only have internet access via mobile phone while on the go, the bandwidth varies from place to place.
Just one suggestion for all those who use npm and do not have the space to clone the entire repository (almost 2 TB) or have a slow internet connection.
Modserv (https://github.com/wmhilton/modserv). Works flawlessly and saves a huge amount of data volume.
I've been using it for almost six months without a single problem. -
HELP ME OUT BRUTHA AND SISTUR..
I've finally finished my website - now's the time to do the tedious thing and get decent hosting for as little money as possible.
Does anyone know a hosting that has:
- High privacy ethics (not that I'm gonna store porn there, just my screenshots posted via ShareX)
- VPS-based hosting I can put a nice Linux on.
- Unlimited or 'really high' bandwidth.
- Located in Europe (UK included lol).
I would be most thankful :P24 -
Why do people hate jQuery so much? 30KB is basically nothing in the age of modern bandwidth, and for me CSS selectors are much more readable than the vanilla JS functions.
I'm not judging and would be happy if someone explains it to me or shows better alternatives.4 -
So I am working 12+ hours a day to finish the story assigned to me in the sprint. Turns out that after completing the tasks allocated to me, I am given the stories of people of way higher designation than me, just because they are working 8 hrs a day and therefore don't have the bandwidth to do their own tasks... So I have to come in on weekends too, to finish my tasks and their tasks, without getting anything extra in return....🙁😶😐😭4
-
Set up an account on cPanel to show the client their final product. Set the bandwidth limit to a low number so that the client can only view the website once, and so that if something is not right, I can blame the bandwidth error.
After the client reported an issue, I fixed it before resetting the bandwidth limit. -
Demoing our product at the customer's site by remoting into one of our internal environments. Their internet is slow so product looks slow.
Project manager after the demo: hey, next time, think of yourself as the tech lead, not just the software lead. Next time hop into the command prompt and do whatever you guys do, check the bandwidth or something.
Me biting my tongue: so I can tell you the customer's internet is too slow?1 -
Since my ISP doesn't allow port forwards on that port, does anyone want to open their port 445 (if not otherwise occupied by Microsoft-DS) and have netcat reply with an empty string to any request sent over TCP (or has anyone already done so)? It would help me out a lot with testing a network's full port range (since portquiz.net's hoster blocks that port) and would take close to no bandwidth for you.6
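If netcat is a hassle, the equivalent responder is a few lines of Python (binding to 445 needs root/admin, and Microsoft-DS must not already hold the port):

import socket

srv = socket.socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 445))
srv.listen()
while True:
    conn, addr = srv.accept()  # a completed connect already proves the port is reachable
    conn.sendall(b"\n")        # near-zero-bandwidth reply
    conn.close()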
-
When the system throttles your bandwidth during load tests and doesn't tell you, and you waste an afternoon investigating1
-
I think what's worse than bad coding is bad network connection. Can't load StackOverflow, Network Assets, Run Reports, or pull updated repositories because somebody is hogging up the bandwidth. FFFFFFFFFFFFUUUUUUUUUUU
-
So, due to the insanely annoying YouTube algo, I just wanted to listen to music related to a given artist - not really what I usually listen to. You know the YouTube drill; its suggestion feed sucks hard.
So I kick in incognito, my typical workaround, and wait a second... it prompts me to check out YouTube Music (didn't know it was a thing) and well, well, well... It's actually pretty nice. And it solves my suggestion feed problems.
Nice. I have no idea when they deployed that, but it's looking pretty nice (at first glance) and doesn't use as much bandwidth. Sweet, exactly what Dubby needed.
(said the person who dislikes Spotify cuz 60% of the music I listen to is just not there) -
So, I'm looking into something and end up on Stack Overflow. Someone posted the question:
"Does minified javascript improve performance?"
This question was old as shit, all the way from 07/25/09, and about an Adobe Air application. (Remember that? Me neither...) It had a great, accepted, and still accurate answer, posted the same day the question was asked. Now, fast forward 8 years, and on 12/08/17 (a mere 7 months ago...) the following answer was posted. I don't know what they were thinking, but here it is, complete and unabridged, with my comments in square brackets:
"I'd like to post this as a separate answer as it somewhat contrasts the accepted one: [Somewhat contrasts? More like completely contradicts...]
Yes, it does make a performance difference as it reduces parsing time - and that's often the critical thing. For me, it was even just simply linear in the size and I could get it from 12s to 4s parse time by minifying from 3MB to 1MB. [First off, your parse time should NEVER be THE critical thing, but secondly, and more importantly, WHO THE FUCK HAS 1MB OF MINIFIED JS ON A PAGE!!!]
It's not a big app either, it just has a couple of reasonable dependencies. [THERE IS ABSOFUCKINGLUTELY NOTHING REASONABLE ABOUT ANYTHING HE JUST SAID! What dependancies is he using?! You could use minified and not even gzipped jQuery, AngularJS, Vue, Ember, React, AND Dojo libraries on the SAME PAGE, AND have 118k of application code, AND STILL NOT HAVE HIT 1MB QUITE YET!!!]
So the moral of the story here is: Yes, minifying is important for performance - and not because of bandwidth, but because of parsing. [Javascript should NEVER take longer to parse then to download, even on a low powered device...]"
So, yeah, I'm at a loss for what this guy was thinking, but the thought the people like this exist, and that my browser might one day be subjected to their horrific nightmare of code terrifies me...2 -
I love how the people that stay here at this student accommodation use the WiFi without consideration. There are about 14 people in the house, each with maybe 2 devices that connect to the WiFi, which connects to an ADSL line with a speed of 10 Mbps. One of my roommates video calls on Skype with his gf or whoever DAILY. You'd probably expect them to be talking and stuff, right? No, they just have the video call on but aren't talking, just doing their own thing on their phones and stuff... Wtf? Don't they realize they're hogging up the bandwidth?
Every time I restart the WiFi he waits for the connection to come back up and goes back to having the video call up but not chatting... Ffs...18 -
I've got a kinda basic networking question I can't quite figure out
How does a push notification work?
Like, on an Android app. A good example is an authenticator. Say I don't login to the service for 4 months.
Then, one day, I try to log into the web portal and it prompts me to accept the request on my authenticator app on my phone.
Immediately, there's a push notification on my phone.
Wtf.
Is there a socket open for 4 months? Does it send requests every few seconds for 4 months? I can't imagine that either of these options scales whatsoever: both horrendously waste bandwidth and server connections.
How the fuck does it work? I don't even have the first idea.7 -
Just upgraded to Win 10. Windows Update keeps sucking my bandwidth. Stopped Windows Update and BITS, set them to manual, yet they keep popping up. Finally blocked Windows Update's IP via the firewall. Now oddly satisfied..1
-
50, 100, 200, and most recently my ISP upgraded my download speed to 250Mbps. I mean, it's nice, but through all these upgrades, my upload speed has stayed at the same dogshit level. Also, what good is additional bandwidth when my monthly usage is capped?
-
In a meeting where the executives are talking about bandwidth issues that affect productivity during the day, while the laptop being used for the presentation is running uTorrent
-
I love Mikrotik. Just fucking love them. I also love my residential fiber service. Small company. Symmetric 125M service. No caps. Bandwidth is always there.
BUT... They use PPPoE (seriously guys?), and the IP changes on *every single reconnect*. Also: no IPv6 support. I know. I don't need it. But I want it.
Enter DNSMadeEasy's DDNS, Hurricane Electric's 6to4 tunnel service, and my Routerboard AH100x4. I wrote a script that runs on the router whenever my IP changes. It updates my DDNS record, updates my 6to4 tunnel IP using HE's API, and updates my local 6to4 interface's IP.
It just works. My public IPv4 may change, but the /48 IPv6 networks on my LAN side stay fully routable.4
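The script itself is RouterOS-specific, but the gist fits in a few lines of Python; the update endpoints here are from memory, so check both providers' docs before trusting them:

import requests

new_ip = "203.0.113.42"  # handed to the script on each PPPoE reconnect

# 1) DNS Made Easy dynamic DNS update (record id and credentials are placeholders)
requests.get("https://cp.dnsmadeeasy.com/servlet/updateip",
             params={"username": "me", "password": "ddns-password",
                     "id": "1234567", "ip": new_ip})

# 2) Hurricane Electric tunnel endpoint update (dyndns-style API; hostname = tunnel id)
requests.get("https://ipv4.tunnelbroker.net/nic/update",
             params={"hostname": "100000"},
             auth=("he-username", "he-update-key"))

# 3) then re-point the tunnel's local address on the router itself,
#    roughly: /interface 6to4 set <tunnel> local-address=203.0.113.42

On the Routerboard it's all done with /tool fetch in a RouterOS script hooked to the PPP reconnect, but those are the three moving parts.
-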
I'm not one of those "windows sucks lol" guys, but I got used to having my dev environment set on Linux and due to some technical problems I'm setting things up on Windows for a while (dual boot).
Now... Jesus CHRIST how annoying this is. First, I use Laravel and the whole documentation assumes you're using either Mac or Linux. Second, everything has to be added to the god-damned PATH. Third, Windows' sole purpose now seems to be updating the PC (and hogging my bandwidth in the process), so I had to waste time taming the beast called Windows Update.
Again, not the stupid old Linux vs Windows thing. I use both for different things, but had never set up a dev environment on Windows.9 -
First post yay!
I'm a "tech" lead for my team. The "tech" stands for technically, I can go on a whole different rant there but that's not why we're here today.
So we have a new PM on our side and a new PM on the client side. I've been working on this project longer than any of the devs and PMs have.
One of the tasks that my team does is validate and ingest data. It's pretty straightforward and it's fully automated. It takes minutes, at most an hour, to complete. We get these tasks from users randomly, with no schedule to them. It's on a FIFO basis: we just add a task to the current sprint if we have bandwidth, or to the next one if we don't. Not a big deal, no users have complained about it before; it's just business as usual. And we have a tracker of when we received it, how big it was and when it was ingested. Super simple.
So now comes in the new client PM. He's been asking us to come up with timelines for these ingestions. My project's new PM is bending over to him and saying okay, we'll come up with it, no problem. Well, there is a problem: we don't know that far in advance when these tasks are coming in. Even if we did, now we're supposed to create timelines for a 10-min task? It literally is uploading a file, and our system handles everything; I've explained that to my PM, but he's still like, well, that's what they want. It takes less effort to do the ingestion than to make these timelines. It just means project managers bothering devs about timelines.
Idk how to deal with this. Thoughts? Any similar experiences?5 -
Got a code for the Adobe suite which made it the same price as the student package, so I could use PS for a tiny project and dick around a little bit: mostly to see what has changed since CS5. The project was done in 2 weeks and I didn't open PS again for months, letting my subscription lapse too. Well, if your subscription lapses, Adobe deletes all of its apps from your machine with no fucking notice. Pretty fucked when you are out in the sticks with shit internet and limited bandwidth, not to mention the ethical shitfuckery of it. So I tried to cancel my subscription, but because of the deal I got using the discount thingy, there is no button to cancel like there normally is, and tech support tells you that you can BUY OUT OF YOUR CONTRACT by paying ~60% of the yearly cost upfront. I told them to fuck themselves and 30 mins later had the subscription canceled.
Am I the only one that sees anything wrong with all this??6 -
"Can you go through this hours-long process to reproduce an issue i saw and debug it? I don't have bandwidth."
"Sure, but I'm pretty sure the issue is actually due to your recent changes in [related feature], and I'm pretty busy myself."
"No, that's not how that works. Please figure out the real issue." (Strongly implying it was my fault)
*Goes through hours-long process to reproduce* (yes this procedure could be improved but this is a rant not a planning meeting)
*Of course, it was his change*
"Oh. Well, it's not really a priority." -
Unable to access cPanel/WHM due to an "IP changed" error.
Called HR.
me: please connect me to the networking team (outsourced)
hr: why?
me: I have an issue accessing cPanel. I contacted the hosting company but it's not their fault, so maybe it's our network issue.
hr: explain in detail.
me: ok
Since morning I have been trying to access WHM because our website is over its bandwidth limit and showing a 509 error. I contacted the hosting company, but they explained the problem is on our side. So I wanted to talk with the network team about this issue, because I am not using any proxy or VPN - even my Tor browser is off - yet the "IP changed" error keeps frustrating me. The second reason I'm frustrated is that my public IP and private IP have NOT changed.
One more thing: your Windows PC froze 3 times since morning.
Do you need the detailed technical reason why I want to talk with them?
hr: no no no *hangs up*
after 2 minutes *my landline rings*
hr: network engineer on the other side.
fair enough2 -
Oi windows, your highly intelligent "transmission optimization process" is not really helping me by hogging the whole bandwidth while I'm downloading something.2
-
While waiting for the internet to load on crappy bandwidth, the wait can burn you out just as easily. Response timeouts included...
-
My bandwidth is ordinarily a few hundred kbps, but whenever I torrent it can reach up to 2 Mbps, while all other traffic from me and my housemates is stuck in the single-digit kbps range.
What does BitTorrent do so fucking well, and how can other protocols replicate this success? Would the total available bandwidth be different if every protocol did whatever enables BitTorrent to summon bandwidth from thin air?10
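Best guess at part of the answer: TCP congestion control divides a bottleneck roughly per connection, and BitTorrent talks to dozens of peers at once, so it claims dozens of "fair shares" while everything else sits on one. Download managers pull the same trick over HTTP with parallel Range requests; a minimal sketch in Python (URL hypothetical, and the server has to support ranges):

import concurrent.futures
import requests

URL = "https://example.com/big.iso"
PARTS = 8

size = int(requests.head(URL).headers["Content-Length"])
ranges = [(i * size // PARTS, (i + 1) * size // PARTS - 1) for i in range(PARTS)]

def fetch(rng):
    start, end = rng
    r = requests.get(URL, headers={"Range": f"bytes={start}-{end}"})
    return start, r.content

with concurrent.futures.ThreadPoolExecutor(PARTS) as pool:
    chunks = dict(pool.map(fetch, ranges))

with open("big.iso", "wb") as f:
    for start in sorted(chunks):
        f.write(chunks[start])
-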
I'm officially convinced that my computer is cursed by now:
I get a Oculus Touch Bundle. Connect it to the computer, both sensors through USB 2, HMD too. One of them on an extension cord, experimental 360 degree setup (and yes, I'm covering the lenses when not playing).
Works great for a couple weeks, then I start getting 8603 and 8609 errors (USB connection bad or too little bandwidth. Usually happens when you do something else on the same USB controller).
Trying all of the setups that comply with the setup manual, none works...
... Thinking "fuck it, can't get any worse now", I connect both sensors to the USB 3 ports on my board (A big thou shalt not according to the manual).
Works perfectly. No lag, no loss of tracking.
Well, I guess if something applies to 99.9% of all computers in the world, mine is among the 0.1%. I'm a living corner case, 🤣
Guess I'll move to the Netherlands and become a Ganja farmer.2 -
What's your workspace setup?
Curious because it took a while and a lot of experimenting/thinking to get mine set up the way it is, but now I can't even think properly unless I have things set up that way after booting up in the morning.
Here goes:
Workspace 1: General stuff, personal email. social media, random research for non work related things, etc
Workspace 2: My main project local development, includes terminals, database, browser research for bugs, debugging software, error logs, etc.
Workspace 3: My main project, production workspace, consoles, browser, etc related to production server, you get the idea
Workspace 4: local dev on my side project
I found it crucial to set up workspaces 2 and 3; it has helped me avoid countless stupid errors. For example, accidentally working in a production terminal and wanting to rip my hair out wondering why the fuck _____ isn't working, then realizing, oh shit, I'm on production, not local. Huge brainspace bandwidth saver when I set up like this.
How about you?2 -
Once I had a junior who was the child of the boss, who would refuse to listen because they thought they knew better. That junior was ambitious and always wanted harder tasks, even though at the end of the day someone would need to "pair" with them (aka go and do the task for them).
I left that place. Now I have a junior who knows so little that if they came and told me they don’t know how to turn on a computer, I wouldn’t be surprised (yes, unfortunately the bar went down incredibly fast).
I feel sorry for the new junior. I don’t have bandwidth for all of this. Nobody in the team has.
I do think it sucks that companies in general are so against juniors, but I wish at least that the ones who still make the cut were a little bit more prepared.6 -
What's your favorite VPS host?
I'm currently using Scaleway and love it, but recently learned that they offer no protection against data loss.
So I'm looking for an alternative for a project in production that has automatic backups as well as unmetered bandwidth.7 -
Dear client, when I reply to your email with "Noted with thanks", you really don't have to reply back to me with "Thanks". You are just wasting internet bandwidth. Do you fucking know how expensive bandwidth is?1
-
Great fucking job, GitHub and git-lfs
GitHub,
First you don't tell anyone about your fucking limits, and then when one goes to delete the files that clogged up the storage, you fucking don't let them
Also, even for the unsuccessful commit, let's charge it against their fucking bandwidth
And as for git-lfs
You can't even fucking use the goddamn help command on git-lfs, which they suggest you use. (I installed git-lfs just as they said)2 -
Was interested to learn React Native and started with the demo project too. But then *bam*, Flutter shows up with its butter-smooth UI and all its new features.
Well, now I'm in a dilemma as to where I should direct my 'learning bandwidth'. I'm a huge fan of JavaScript; it's my favorite language. Also, Dart feels a lot like JavaScript.
What do you guys think? Any suggestions and experiences are welcome..2 -
So, I browse to a video livestream and an annoying ad starts before the livestream is shown. Furthermore, the page jumps around because of a cookie notification that also blocks some UI elements at the top.
Note: this is the site of a public (government-funded) national news organization with very high standards and a good reputation.
Action 1: refresh page; I hope the ad is skipped. Nope, annoying ad restarts. Page jumps around again because of the cookie notification.
Action 2: accept cookies to remove notification blocking the top UI (it's OK, I know it can't actually save any cookies on my machine). Instead of some nice JS doing it for me in the background, the page refreshes because you know, HTTP requests and whatnot.
Annoying ad restarts again... FML 🤬
Lessons to be learned from this for any web dev: these annoyances can and *will* exponentially get worse if used simultaneously against your users, instead of being used to help or inform your users.
As a user of you website, I want to watch a livestream. I don't care what stupid legislation forced you to shove a fucking cookie notification in my face. Make sure it is not annoying me to the point that I close you website and take minutes to rant about it!
Also, give me the freedom of choice to watch an ad or not. You and I both know that some ads simply are not for me. Better save yourself and myself the bandwidth.
And go get good at web development. You're a news site. That's more than just text and images. If you want great apps, social media coverage, videos, live streams, blogs, etc. go get some better web devs. Your current web frontend devs only qualify to get fired.1 -
I feel like i am being forced to own a shitty module in our codebase.
It was developed by previous owners who made a Frankenstein's monster out of it: it's the one part of the codebase that is very huge, does not follow the code standards, makes complex kinds of API calls and uses very niche components. It gets bugs once in a while, BUT IT WORKS.
It fuckin works and is one of the important steps before a customer purchases a company product, so it's kinda part of the revenue generation flow.
But this module was never part of the codebase we would usually touch. It was owned by another team; they would add enhancements and new features to it and fix the bugs.
When I joined the team, I was once asked to help those guys as a "resource" because they wanted to get something shipped and were low on bandwidth. So I just worked on one of the screens, added a small bugfix and voila, task done, and I was back to the other parts of the app.
But now, out of nowhere, they decided to pass on the ownership to our team, gave a small KT which didn't really explain much of the actual codebase, but rather the business functionality of it (and that too, poorly). And my TL is saying that I should own it because "I worked on that module before".
I don't know how to deal with this Frankenstein's monster. Recently a bug came in and I was at my wits' end trying to understand why. Their logging is weird and doesn't explain much; their backend devs helped by providing AWS logs, but those aren't very helpful either.
The best I could do was declare that their technical approach is wrong and we should modify it, but that idea was quickly squashed.
It's quite possible that the company isn't going to change this module or add any new features further. But every time a bug comes, I'll be getting frustrated looking at their Frankenstein's monster5 -
Ideas I've had over the years that could pan out and be useful:
SMS-DB: Stands for SMS-Data Burst. Used to allow those with low cell signal or no data plan to transfer data between a phone and some client via the standard SMS text space. Would be slow, but would act kinda like dial-up over SMS (as mobile lines are compressed on all service levels, even LTE, so traditional dial-up wouldn't work!) I have a general idea on how packets would be laid out, but that's about it so far...
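One possible layout, invented on the spot just to size things up (Python; 160 chars is the classic text-SMS limit):

import base64
import hashlib

SMS_CHARS = 160  # classic text-SMS length
HEADER = 12      # seq(4) + total(4) + checksum(4), all hex
RAW = ((SMS_CHARS - HEADER) // 4) * 3  # payload bytes that survive base64 inflation: 111

def to_frames(data: bytes) -> list[str]:
    pieces = [data[i:i + RAW] for i in range(0, len(data), RAW)]
    frames = []
    for seq, piece in enumerate(pieces):
        crc = hashlib.md5(piece).hexdigest()[:4]  # cheap integrity check
        frames.append(f"{seq:04x}{len(pieces):04x}{crc}"
                      + base64.b64encode(piece).decode())
    return frames

print(len(to_frames(b"x" * 10_000)), "texts for a 10 kB payload")  # 91

At ~111 usable bytes per message it really would be dial-up speeds, which is the point.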
everything2PNG: Allows one to transpose any file's data into a PNG with 3 bytes per pixel (full-color RGB), which allows for a "compression" of sorts (about 91-93% on preliminary tests) AND allows further, more efficient compression of the resulting file. (Plus... it's just kinda cool to see files transposed as PNGs.) I actually have a simple transposer to go to PNG, but can't yet go back. Large files (around 600MB) use upwards of 4GB with efficient paging and other optimizations via NumPy so far, so it's not *viable* yet, but it's coming along nicely.
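Both directions fit in a few lines of PIL, for what it's worth - a sketch rather than my current NumPy code, and you'd still need to stash the original length somewhere (a header pixel, the filename) to strip the padding on the way back:

import math
from PIL import Image

def file_to_png(data: bytes, out_path: str) -> None:
    data += b"\x00" * (-len(data) % 3)         # pad to whole RGB pixels
    side = math.isqrt(len(data) // 3 - 1) + 1  # roughly square canvas
    data += b"\x00" * (side * side * 3 - len(data))
    Image.frombytes("RGB", (side, side), data).save(out_path)

def png_to_file(path: str) -> bytes:
    return Image.open(path).convert("RGB").tobytes()  # caller strips the padding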
RPi-GPIO Interconnection Bus: A master/slave or round-robin method to allow Raspberry Pis to communicate using GPIO, which can help free up network bandwidth in RPi cloud computing clusters. At most, this'd allow for 4 bits used for pushing to the GPIO "bus" and 4 bits used for pulling from the "bus". 8 pins at minimum are usually unused, so either 3 or 4 pins for upload, 3 or 4 for download, and potentially 1 or 2 for commands and general non-data communication. I made a version of this concept using round robin for a client, but it was horribly slow. (I also don't have distribution rights for the code, so I'm working from scratch.) Definitely doable.
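The sender half of that could look like this with RPi.GPIO (pin choices arbitrary; the receiver side and any master/slave arbitration are left out):

import time
import RPi.GPIO as GPIO

DATA = [17, 27, 22, 23]  # 4 "upload" pins, BCM numbering
CLK = 24                 # receiver samples DATA on the rising edge

GPIO.setmode(GPIO.BCM)
GPIO.setup(DATA + [CLK], GPIO.OUT, initial=GPIO.LOW)

def send_byte(b: int) -> None:
    for nibble in (b >> 4, b & 0xF):  # high nibble first
        for bit, pin in enumerate(DATA):
            GPIO.output(pin, (nibble >> bit) & 1)
        GPIO.output(CLK, GPIO.HIGH)   # strobe: data lines are valid now
        time.sleep(0.001)             # crude settle time; a real bus would handshake
        GPIO.output(CLK, GPIO.LOW)
        time.sleep(0.001)

for byte in b"hello":
    send_byte(byte)
GPIO.cleanup()
-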
I have mixed feelings about Elon’s Neuralink. Just read a bit of the abstract.
“Neuralink’s first steps toward a scalable high-bandwidth BMI system. We have built arrays of small and flexible electrode “threads”, with as many as 3,072 electrodes per array distributed across 96 threads.”
I’m curious, will this be this be the next “form of cognition”?6 -
I'm working on a project which monitors streaming videos from several Rasp Pi devices wirelessly. Should packet loss occur, the bitrate is reduced remotely by the central computer. (Packet loss is inevitable as wireless devices will compete for wireless bandwidth.)
To demonstrate my work I asked for a Wi-Fi router rather than bringing one in from home. I wish I hadn't bothered... -
So many forked processes curling webpages to get response statuses that I exhausted my bandwidth... I need a server with a good connection -_-
-
Weekend ruined supporting legacy and poorly designed services coupled with poor architecture.
But "no project bandwidth" to refactor said services.
5 hours of data loss should now hopefully inspire a backlog re-shuffle. -
Opus is an amazing codec, but did SoundCloud really have to switch to it with a bitrate of 64kbps? Even 80kbps would have been worlds better.
Bandwidth isn't _that_ expensive, even post-neutrality.4 -
Toggl for timekeeping.
Trello for project mgmt.
Report for team performance.
Plan for the month.
oh and by the way, do you have bandwidth for a new project? :/1 -
WINDOWS SUCKS!
Since yesterday I've been cursing my ISP for shitty internet speed and have called them a few times about it.
As it turns out, Windows was updating in the background, sucking up my whole bandwidth!
It took me a day to download a 1GB file1 -
Why is the GitHub compare functionality hidden? After going through all the repo views I had to actually Google how to do it, and apparently you HAVE to append "/compare" to the repo URL... is it to save bandwidth, or to hide a buggy feature?...1
-
Holy fucking shit, I hate ubuntu SO much.
So here's what happened..
I was trying to set up an Ubuntu server on my machine using VirtualBox, and I know what you are thinking: "VirtualBox?" Yeah, it's the only machine I had lying around, it had Windows on it, and I didn't wanna reformat its hard drive.
So here's how it goes...
Install went fine.. But when I was trying to manage multiple network interfaces, it was Terrible & a pain in the ASS 😡...
So initially I needed 2 network interfaces: one NAT adapter and another host-only interface for SSH and stuff.. so I made the changes in the VirtualBox settings and rebooted the VM, and it got stuck on "a start job is running for wait for network to be configured". I was like okayy, removed the host-only adapter and rebooted; it booted fine :/ Then I tried a combo of a bridged adapter on my Ethernet and a host-only adapter, and what? It finally booted! But this wasn't an optimal solution, because it got an IP address within the subnet of the other devices on my router and half the bandwidth (like 50mbps or something).. I reverted back to the NAT network & checked with ifconfig, and it STILL didn't have an IP address assigned for the host-only adapter!! FFS, I deleted the VM and reinstalled the whole thing again, but this time with both interfaces attached..
After installing, it got stuck on the same shit again :'(
"a start job is running for wait for network to be configured"... FUCK!
After about an hour of troubleshooting and trying different configurations, I still couldn't get it to work.. I never had such problems with CentOS.
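For anyone hitting the same wall: that "start job" hang is usually systemd waiting for every interface to come up, and marking the secondary interface as optional in netplan tends to fix it. A sketch, assuming Ubuntu 18.04+ and the usual VirtualBox adapter names:

# /etc/netplan/01-netcfg.yaml
network:
  version: 2
  ethernets:
    enp0s3:            # NAT adapter
      dhcp4: true
    enp0s8:            # host-only adapter, used for SSH
      dhcp4: true
      optional: true   # don't block boot waiting for this one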
Fuck you ubuntu.. fuck you in the ass7 -
3/17 is my last day of running-out-of-bandwidth this month. What should I do with the extra like 80GB?11
-
TL;DR I have to bump a Redis cluster from t3.medium to m6g.large just to get enough network bandwidth even though I have no need of the extra memory.
Debugged an interesting issue today.
I am adding Elasticache to a project to reduce strain on the single node postgres DB.
Deployed a Redis replication group with 2 shards, with multi-AZ replication for resilience.
Everything was going well. We aren't caching that much atm, so it was barely using 100MB of memory.
Suddenly, when our US region comes online, latency skyrockets and the logs are full of Jedis timeout errors.
Still no issue with memory or node CPU.
The cause? Arbitrary network bandwidth throttling by AWS. The app currently processes about 3,000 requests per second, so we were exceeding Amazon's random-ass allowances, which aren't documented anywhere.1 -
Ugh sometimes I really question my luck... soyoustart just doubled the bandwidth for the servers for free - on new servers. I got my new one last month...1
-
Rant 1
---
I have so much shit to talk about and its annoying to wait 2+ hours between each rant just to rant so ill start off by ranting about not being able to rant as often as i want to rant
Rant 2
---
What is an ORM doing under the hood that makes the queries so much slower compared to writing raw SQL?
Rant 3
---
Im thinking of creating more accounts just to be able to say what i want to say without waiting these dumbass 2+ hours. Who tf even made that and thought it was a good idea. You're not saving bandwidth or storage by making devs wait to rant bro, itll be the same shit
Rant 4
---
Now by writing 3 rants in a row i forgot what i wanted to rant about most, and it's an entirely different topic, so ill rant about not remembering what i wanted to rant about because of devrant's dumbass 2+ hour wait logic
Rant 5
---
Wow this new york company looking for senior devops dev requires a lot less shit to know compared to the saudi arabian shithole company for the exact same position. But how do i learn all of what they require fast so i can apply for this position since the recruiter has contacted me20 -
I have been trying to get fiber installed at my location for over a month now. It is really frustrating. They have been out 4 times, and every time there is another reason they cannot get it installed. I finally got all my ducks in a row to make it happen, and I have to wait until the 20th for the install. Great, my country is literally going full retard and now I wonder if war will stop the install. Grrrrr......
This brings me to tethering. I have been tethering my internet right now. I have 20GB of bandwidth at 4G speeds. When that runs out I get shitty-ass 300KBps. Yes, 300 kilobytes per second. You can do almost nothing at that speed. That is using the built-in tethering program that comes with Android. This is where I get to the grey area of tethering.
I decided to try the ClockworkMod Tether app to do USB tethering. Apparently that gets full-speed 4G. So I can work and do whatever with that. I feel like I am gaming the system. Part of me doesn't care, part of me says I shouldn't.
What are your thoughts on tethering using alternate methods. Am I going to the Nether?5 -
So an old boss phones and suggests I interview for him at his new company.
A week or so later a couple of his senior guys conduct a virtual interview - which is interrupted by the main guy having to go and stop his Sky box downloading so he has enough bandwidth to conduct the interview.
I impress and they disappear for a week. Then I'm finally called by a recruiter to say that they weren't willing to pay my asking salary which was provided to the original ex-boss who contacted me.5 -
!dev
So I found out that
A) there's no Ham radio bandwidth
B) no Ham radio community
C) that Ham radio is technically illegal
D) but "easily could be made legal
And was considering applying for bandwidth allocation however, the way that applying works, implies a single studio, with equipment costs/budget and the likes.
Have any of you ever played with ham radio?3 -
In the war on bandwidth consumption, work has cut out torrent access. So I, like a child looking for porn (actually I was doing that too), found a way around it. I use http://filestream.me to cache my torrents. Then I go to the http://Uptobox.com file host and log in to my account, which I created with my fake mailinator.com email address, and use the remote URL upload feature to download my files from Filestream. I change the file name to VM-update.dll (I don't know why I chose a DLL originally, but I realize no one asks why you were downloading a DLL). Then download. All of this, except the downloading, is done in Opera Web Browser with VPN on (a little extra paranoia goes a long way).2
-
apparently, indirectly allocating tasks means that it doesn't chew up your bandwidth,
so where is the deliverable? -
-- Have you ever self hosted a Linux/Free Bsd server at home?
-- What was the maintenance like in terms of operation and cost compared to an online service?
My partner and I are planning to self-host our Ubuntu server locally because, though we currently spend less than $1100 a month on Azure with moderate CPU usage, we plan to scale out and believe the server cost might skyrocket.
We made a budget of $25k for the setup which includes cost of hardware, bandwidth and power.
We also did some research concerning the most-used hardware for home servers, because we really are newbies when it comes to hardware. What we found were options built around the Intel Xeon as CPU; some others say use a NAS, while the rest was more advertising than advice.
$25k on the desk,
we care more about speed than space. How can we make the setup totally worth it? You don't have to spare us change, just some pointers and a way to go.
Your advice are needed. Thank you.8 -
Obviously the top item on the table is NN: the "end users" on both sides of the connection are for saving it, and the middlemen that only own the "cables" want it repealed.
We have the solution to end this issue forever. It won't be easy, nor will it be fast.. unless certain "entities" team up with us in secrecy. (There's a reason why certain "entities" have stayed silent regarding NN: agreements to not get involved, due to the risk of backlash. AND if NN is repealed, those entities cannot fix the problem, as their hands are tied to continue providing content to the end users.) Read between the lines and you will understand; it will all make sense later.
I will make The Official Public Statement within 24 hours of the FCC vote. That statement will cover how to get involved, how to help, how to get us jump-started in your area, funding, the ENTIRE details of the plan, goals, and timeline, AS WELL as how to contact us. This will take time and we are not a magic solution that will fix the problem overnight.
We are, however, THE solution to the underlying problem with the ISPs of today. We have been researching for quite a while, digging deep into the entities that have caused us to get where we are now. The further you dig into 'THEM', the more pissed off you become as you truly realize what's going on and has been going on among the ISPs; it's MUCH deeper than you are being told.
OUR solution will remove all of "them" from the equation completely, as well as being faster and cheaper than the Tier 1s, as you won't be paying for the connection or speed; you would be paying for the hardware/overhead cost. AND we will be bringing you closer to the content providers than EVER before.
AND we will be the only solution capable of competing in the current Tier 1 monopoly zones. I promise you they cannot match our plan's price; IF they did, it would only be as a loss leader and NOT a sustainable long-term solution for those competing with us as for-profits....
In order for our solution to work, and to keep the internet service non-biased - well, non-biased apart from OUR members :) - this will need to be a collective effort, focused on one clearly defined vision. WE WILL AND WE MUST ALL set "profits" aside on this, as profits from selling nothing other than a "connection" to the internet are what got us into this mess. AND YES, we realize profits help maintain and upgrade the infrastructure, BUT that isn't true in this case... Overhead, from our view, includes those anticipated costs.
Smaller ISPs will need to make a decision: give up profits, become one with us, and be a part of the mission, OR they will be left to suffer at the mercy of the ISPs above them setting the cost of bandwidth, eventually leading to their demise.
This will happen because we won't be bound by the T1s.... WE would be the "Tier 0" that doesn't exist ;)
This sounds crazy, impossible, BUT it's not; it will work and WILL happen, regardless of the FCC's vote. Even if the FCC chooses to keep NN, it's only a matter of time till the big lawyers of the ISPs find some loophole, or lobby enough to bring us back to this.
Legislation is NOT the solution; it's just a band-aid fix while the cancer continues to grow within.
PLEASE understand that
Until the vote is made and we release what we are doing: stay put, hang in there. It will all be explained later; we are the only true solution.
BIG-ISPs WILL REGRET WHAT THEY HAVE DONE!
What needs to be understood by all is this: with net neutrality in place, the ability to compete against the Tier 1s directly over customers and reinvent the internet - to lower or remove costs completely, increase speeds AND expand to underserved/unserved communities - IS NOT POSSIBLE.
NN REPEAL is the only way to fix the problem for good... yes, the for-profit BIG ISPs will benefit, but not forever.. as repealing it opens the doors for outside-the-box, big-picture innovators to come in and offer something different. The big ISPs have clearly overlooked one small detail: the possibility of a "NonProfit CoOp TIER 1 ISP" entering the game thru end users and businesses working together as one entity to defeat them... THE FOR-PROFIT ISPs overlooked this because they are blinded by the profit potential of NN repeal; never did they consider our option as a possible outcome, because no one has attempted it....
We will unite as one
Be the first to know! -stay updated
SnapChat: theqsolution