Search - "protocol"
-
You see a web, I see:
CLIENT: TCP SYN
SERVER: TCP SYN ACK
CLIENT: HTTP GET
SERVER: HTTP Response
...
CLIENT: TCP FIN
SERVER: TCP FIN ACK
All I’m saying is that this spider has a clear understanding of the Transmission Control Protocol.
-
Internet Explorer:
You type a local IP without the protocol.
It doesn't add http automatically.
It doesn't add https automatically.
IT TRIES TO SEARCH IT ON BING
I freaking hate IE.
-
--- HTTP/3 is coming! And it won't use TCP! ---
A recent announcement reveals that HTTP - the protocol used by browsers to communicate with web servers - will get a major change in version 3!
Before, the HTTP protocols (versions 1.0, 1.1 and 2) were all layered on top of TCP (Transmission Control Protocol).
TCP provides reliable, ordered, and error-checked delivery of data over an IP network.
It can handle hardware failures, timeouts, etc. and makes sure the data is received in the order it was transmitted in.
It also makes it easy to detect whether any corruption occurred during transmission.
All these features are necessary for a protocol such as HTTP, but TCP wasn't originally designed for HTTP!
It's a "one-size-fits-all" solution, suitable for *any* application that needs this kind of reliability.
TCP does a lot of round trips between the client and the server to make sure everybody receives their data. Especially if you're using SSL. This results in a high network latency.
So if we had a protocol which is basically designed for HTTP, it could help a lot at fixing all these problems.
This is the idea behind "QUIC", an experimental network protocol, originally created by Google, using UDP.
Now we all know how unreliable UDP is: You don't know if the data you sent was received nor does the receiver know if there is anything missing. Also, data is unordered, so if anything takes longer to send, it will most likely mix up with the other pieces of data. The only good part of UDP is its simplicity.
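To make that concrete, here's roughly what bare UDP looks like in code - a minimal Python sketch (host and port are made up): no handshake, no acknowledgements, no ordering.

```python
import socket

# UDP is "fire and forget": no handshake, no acknowledgement, no ordering.
# If this datagram gets lost, nobody ever finds out.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"hello", ("192.0.2.10", 9999))  # placeholder host/port

# Receiving is equally bare: you get whatever datagrams arrive,
# in whatever order they happen to arrive.
sock.settimeout(1.0)
try:
    data, addr = sock.recvfrom(2048)
    print("got", data, "from", addr)
except socket.timeout:
    print("nothing came back - and UDP itself will never tell you why")
```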
So why use this crappy thing for such an important protocol as HTTP?
Well, QUIC fixes all these problems UDP has, and provides the reliability of TCP but without introducing lots of round trips and a high latency! (How cool is that?)
The Internet Engineering Task Force (IETF) has been working (or is still working) on a standardized version of QUIC, although it's very different from Google's original proposal.
The IETF also wants to create a version of HTTP that uses QUIC, previously referred to as HTTP-over-QUIC. HTTP-over-QUIC isn't, however, HTTP/2 over QUIC.
It's a new, updated version of HTTP built for QUIC.
Now, the chairman of both the HTTP working group and the QUIC working group for IETF, Mark Nottingham, wanted to rename HTTP-over-QUIC to HTTP/3, and it seems like his proposal got accepted!
So version 3 of HTTP will have QUIC as an essential, integral feature, and we can expect that it will no longer use TCP as its transport protocol.
We will see how it turns out in the end, but I'm sure we will have to wait a couple more years for HTTP/3, when it has been thoroughly tested and integrated.
Thank you for reading!
-
So this was a couple years ago now. Aside from doing software development, I also do nearly all the other IT related stuff for the company, as well as specialize in the installation and implementation of electrical data acquisition systems - primarily amperage and voltage meters. I also wrote the software that communicates with this equipment and monitors the incoming and outgoing voltage and current and alerts various people if there's a problem.
Anyway, all of this equipment is installed into a trailer that goes onto a semi-truck as it's a portable power distribution system.
One time, the computer in one of these systems (we'll call it system 5) had gotten fried and needed to be replaced. It was a very busy week for me, so I had pulled the fried computer out without immediately replacing it with a working system. A few days later, system 5 leaves to go work on one of our biggest shows of the year - the Academy Awards. We make well over a million dollars from just this one show.
Come the morning of show day, the CEO of the company is in system 5 (it was a Sunday, my day off), goes to set up the data acquisition software to get the system ready to go, and finds there is no computer. I promptly get a phone call with lots of swearing and threats to my job. Let me tell you, I was sweating bullets.
After the phone call, I decided I needed to try and save my job. The CEO hadn't told me to do anything, but I went to work, grabbed an old Windows XP laptop that was gathering dust and installed my software on it. I then had to build the configuration file that is specific to system 5 from memory. Each meter speaks the ModBus over TCP/IP protocol, and thus each meter has a different bus id. Fortunately, I'm pretty anal about this and tend to follow a specific method of id numbering.
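For anyone who hasn't poked at ModBus TCP before: a read request is just a tiny binary frame. A rough sketch in Python (the meter IP, unit id and register addresses are made up for illustration):

```python
import socket
import struct

# Modbus TCP "read holding registers" request, built by hand.
# MBAP header: transaction id, protocol id (always 0), remaining length, unit id.
# PDU: function 0x03, starting register, register count.
UNIT_ID = 5          # the per-meter "bus id" mentioned above (invented here)
transaction_id = 1
pdu = struct.pack(">BHH", 0x03, 0x0000, 2)            # read 2 registers at address 0
mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, UNIT_ID)

with socket.create_connection(("192.0.2.50", 502), timeout=2) as s:  # placeholder meter IP
    s.sendall(mbap + pdu)
    reply = s.recv(256)

# Reply: MBAP (7 bytes), then function code, byte count and the register values.
print(reply.hex())
```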
Once I got the configuration file done and tested the software to see if it would even run properly on Windows XP (it did!), I called the CEO back and told him I had a laptop ready to go for system 5. I drove out to Hollywood and the CFO (who was there with the CEO) had to walk about a mile out of the security zone to meet me and pick up the laptop.
I told her I put a fresh install of the data acquisition software on the laptop and it's already configured for system 5 - it *should* just work once you plug it in.
I didn't get any phone calls after dropping off the laptop, so I called the CFO once I got home and asked her if everything was working okay. She told me it worked flawlessly - it was Plug 'n Play so to speak. She even said she was impressed, she thought she'd have to call me to iron out one or two configuration issues to get it talking to the meters.
All in all, crisis averted! At work on Monday, my supervisor told me that my name was Mud that day (by the CEO), but I still work here!
Here's a picture of the inside of system 8 (similar to system 5 - same hardware).
-
I wrote a Student Information system for my midterm project back in 94 written in Clipper and runs on MS-DOS.
I demoed & explained to the panel of professors how it tracks enrollments, payments, class schedules, grades and attendance of each and every student. Has user authentication, auditing and reporting functionalities.
It has a lite version also written in Clipper that can be installed on a Professor's laptop so that he/she can update records even at home, and would be able to sync with the db at school via a BBS. Telix for DOS (self-taught) was my choice for the BBS as it was shareware, has built-in Zmodem support and comes with its own programming language called SALT (Script Application Language for Telix) that can be used for automating tasks. The lite version of my project would dump the updates to an ASCII file, compress the file using PKZIP, use the laptop's modem to dial up the school's BBS and send the file across using the Zmodem protocol.
The main version would then download the file(s) from the BBS and proceed to do a sync.
After doing the demo and answering all their questions, the panel asked me to wait outside the room, called me back in after 15 mins and told me that I didn't have to attend that class for the remainder of the term. The happiness as my classmates outside the room gawked at me felt like King Midas himself gave my balls his golden touch.
Then in 97, 2 yrs after I graduated, I accompanied my cousins to a different campus of the same school for their enrollment and right there on the bottom of the screen were my initials on a very, very familiar UI! They actually used, and were still using, my school project. Needless to say my cousins didn't believe that it was written by me.
-
It's maddening how many people working with the internet don't know anything about the protocols that make it work. Web work, especially; I spend far too much time explaining how status codes, methods, content-types etc. work, how they're used, and basic fundamental shit about how to do the job of someone building internet applications and consumable services.
The following has played out at more than one company:
App: "Hey api, I need some data"
API: "200 (plain text response message, content-type application/json, 'internal server error')"
App: *blows the fuck up*
*msg service team*
Me: "Getting a 200 with a plaintext response containing an internal server exception"
Team: "Yeah, what's the problem?"
Me: "...200 means success, the message suggests 500. Either way, it should be one of the error codes. We use the status code to determine how the application processes the request. What do the logs say?"
Team: "Log says that the user wasn't signed in. Can you not read the response message and make a decision?"
Me: "That status for that is 401. And no, that would require us to know every message you have verbatim, in this case, it doesn't even deserialize and causes an exception because it's not actually json."
Team: "Why 401?"
Me: "It's the code for unauthorized. It tells us to redirect the user to the sign in experience"
Team: "We can't authorize until the user signs in"
Me: *angermatopoeia* "Just, trust me. If a user isn't logged in, return 401, if they don't have permissions you send 403"
Team: *googles SO* "Internet says we can use 500"
Me: "That's server error, it says something blew up with an unhandled exception on your end. You've already established it was an auth issue in the logs."
Team: "But there's an error, why doesn't that work?"
Me: "It's generic. It's like me messaging you and saying, "your service is broken". It doesn't give us any insight into what went wrong or *how* we should attempt to troubleshoot the error or where it occurred. You already know what's wrong, so just tell me with the status code."
Team: "But it's ok, right, 500? It's an error?"
Me: "It puts all the troubleshooting responsibility on your consumer to investigate the error at every level. A precise error code could potentially prevent us from bothering you at all."
Team: "How so?"
Me: "Send 401, we know that it's a login issue, 403, something is wrong with the request, 404 we're hitting an endpoint that doesn't exist, 503 we know that the service can't be reached for some reason, 504 means the service exists, but timed out at the gateway or service. In the worst case we're able to triage who needs to be involved to solve the issue, make sense?"
Team: "Oh, sounds cool, so how do we do that?"
Me: "That's down to your technology, your team will need to implement it. Most frameworks handle it out of the box for many cases."
Team: "Ah, ok. We'll send a 500, that sound easiest"
Me: *..l.. -__- ..l..* "Ok, let's get into the other 5 problems with this situation..."
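For what it's worth, returning the right code is a one-liner in most frameworks. A minimal sketch using Flask (just one framework choice; the endpoint and permission check are invented for illustration):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def token_has_permission(token):
    # stub so the sketch runs; real logic would check the user's rights
    return False

# Hypothetical endpoint, purely for illustration.
@app.route("/orders")
def list_orders():
    token = request.headers.get("Authorization")
    if token is None:
        # not signed in -> the client knows to redirect to the sign-in flow
        return jsonify(error="not signed in"), 401
    if not token_has_permission(token):
        # signed in, but not allowed to see this resource
        return jsonify(error="forbidden"), 403
    return jsonify(orders=[]), 200
```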
Moral of the story: If this is you: learn the protocol you're utilizing, provide metadata, and stop treating your customers like shit.
-
Some weeks ago a client came to me with the request to restructure the nameservers for his hosting company. Due to the requirements, I soon realised none of the existing DNS servers would be a perfect fit. Me, being a PHP programmer with some decent general linux/server skills, decided to do what I do best: write a small nameserver which could execute the zone transfers... in PHP. I proposed the plan to the client and explained to him how this was going to solve all of his problems. He agreed and I started work.
After a few weeks of reading a dozen RFC documents on the DNS protocol I wrote a DNS library capable of reading/writing the master file format and reading/writing the binary wire format (we needed this anyway, we had some more projects where PHP did not provide us with enough control over the DNS queries). In short, I wrote a decent DNS resolver.
I spent another two weeks working on the actual DNS server which would handle the NOTIFY queries and execute the zone transfers (AXFR queries). I used the pthreads extension to make the server behave like an actual server which can handle multiple requests at once. It took some time (in my opinion the pthreads extension is not extremely well documented and a lot of its behavior has to be discovered through trial and error, or reading the C source code. However, it still is a pretty decent extension.)
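For a sense of scale, the client side of a zone transfer is only a few lines. A rough sketch using dnspython (purely my stand-in for illustration - the actual code here was PHP, and the nameserver/zone are placeholders):

```python
# Pull a whole zone over AXFR and print every record it holds.
import dns.query
import dns.zone

xfr = dns.query.xfr("ns1.example.com", "example.com")   # placeholder nameserver/zone
zone = dns.zone.from_xfr(xfr)
for name, node in zone.nodes.items():
    for rdataset in node.rdatasets:
        print(name, rdataset)
```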
Yesterday, while debugging some last issues, the DNS server written in PHP received its first NOTIFY about a changed DNS zone. It executed the zone transfer and updated the real database of the actual primary DNS server. I was extremely euphoric and I began to realise what I had written in the weeks before. I shared the good news with the client and with some other people (a network engineer, a server administrator, a junior programmer, etc.). None of whom really seemed to understand what I did. The most positive response was: "So, you can execute a zone transfer?", in a kind of condescending way.
This was one of those moments I realised again: most people, even those who are fairly technical, will never understand what we programmers do. My euphoric moment soon became a moment of loneliness...
-
Mother of god.
I spent hours and hours last week trying to get OpenVPN working. I mean, OpenVPN is working perfectly fine (on a VirtualBox (nope, no vmware for me on servers) machine on a friend's dedicated server) but it wouldn't get through! As in, every forwarding/firewall rule just didn't work.
Was seriously about to lose my shit just now when I suddenly noticed the term 'TCP' in a forwarding rule.
Looked at the .ovpn file: proto udp
I added the exact same rule for UDP as a forward within VirtualBox.
It worked.
Well, there goes quite some hours 😐
And solely because I didn't realise that I setup a forwarding thingy for the wrong protocol.
I feel very stupid now :(
-
IBM
I have replied to them with scripts, curl commands, and Swagger docs (PROVIDED TO SUPPORT THEIR API), everything that could possibly indicate there's a bug. Regardless, they refuse to escalate me to level 1 support because "We cant reproduce the issue in a dev environment"
Well of course you can't reproduce it in a dev environment, otherwise you'd have caught this in your unit tests. We have a genuine issue on our hands and you couldn't give less of a shit about it, or even understand less than half of it. I literally gave them a script to use and they replied back with this:
"I cannot replicate the error, but for a resource ID that doesn't exist it throws an HTTP 500 error"
YOUR APP... throws a 500... for a resource NOT FOUND?????????!!!!!!!!!! That is the exact OPPOSITE of spec, in fact some might call it a MISUSE OF RESTFUL APIs... maybe even HTTP PROTOCOL ITSELF.
I'm done with IBM, I'm done with their support, I'm done with their product, and I'm DONE playing TELEPHONE with FIRST TIER SUPPORT while we pay $250,000/year for SHITTY, UNRELENTING RAPE OF MY INTELLECT.
-
My first hack... Back in the days when phones had rotary dials to dial a number. I was a kid of course, I'm not that old. I used to like to call my grans. Once, when I was supposed to be asleep already, I found out that there was a phone socket in my room (the one connected to the copper wire, which is where the word "phone line" came from).
It took me about half an hour to detach the handset from the toy phone and about two hours to reverse engineer the dialing protocol (you just need to disconnect the line sequentially the corresponding number of times).
And after that I heard my granny's voice. I was literally overwhelmed that it worked.
-
Me: Why is there such a delay between the app and the hardware device?
Colleague: Ah, same old same old, TCP is just an inefficient protocol. We should stop development and build our own replacement to TCP.
(PS. The actual problem was his code)
-
So today (or a day ago or whatever), Pavel Durov attacked Signal by saying that he wouldn't be surprised if a backdoor would be discovered in Signal because it's partially funded by the US government (or, some part of the us govt).
Let's break down why this is utter bullshit.
First, he wouldn't be surprised if a backdoor would be discovered 'within 5 years from now'.
- Teeny tiny little detail: THE FUCKING APP IS OPEN SOURCE. So yeah sure, go look through the code! Good idea! You might actually learn something from it as your own crypto seems to be broken! (for the record, I never said anything about telegram not being open source as it is)
sources:
http://cryptofails.com/post/...
http://theregister.co.uk/2015/11/...
https://security.stackexchange.com/...
- The server side code is closed (of Signal and Telegram both). Well, if your app is open source, built on one of the strongest cryptographic protocols in the world, and has been audited, then even if the server gets compromised, the hackers are still nowhere.
- Metadata. Signal saves the following and ONLY the following: timestamp of registration, timestamp of the last connection with the server (both rounded to the day so not on the second), your phone number and your contact details (if you authorize it) (only phone numbers) in HASHED (BCrypt I thought?) format.
There have been multiple Telegram metadata leaks and it's pretty well known that it saves way more than necessary.
So, before you start judging an app which is open, uses one of the best crypto protocols in the world while you use your own homegrown horribly insecure protocol AND actually tries its best to save the least possible, maybe try to fix your own shit!
*gets ready for heavy criticism*
-
me: do you know what is so great about UDP jokes?
you: No
me: the fact that i don't care if you got them.
-
If there is SMTP (Simple Mail Transfer Protocol), is there also HMTP (Hard Mail Transfer Protocol)?
-
Biggest dev insecurity?
Probably http://
It’s not secure at all, never feeling very confident when browsing that protocol.
-
So, some time ago, I was working for a complete puckered anus of a cosmetics company on their ecommerce product. Won't name names, but they're shitty and known for MLM. If you're clever, go you ;)
Anyways, over the course of years they brought in a competent firm to implement their service layer. I'd even worked with them in the past and it was designed to handle a frankly ridiculous-scale load. After they got the 1.0 released, the manager was replaced with some absolutely talentless, chauvinist cuntrag from a phone company that is well known for having 99% indian devs and not being able to "hear you now". He of course brought in his number two, worked on making life miserable and running everyone on the team off; inside of a year the entire team was ex-said-phone-company.
Watching the decay of this product was a sheer joy. They cratered the database numerous times during peak-load periods, caused $20M in redis-cluster cost overrun, ended up submitting hundreds of erroneous and duplicate orders, and mailed almost $40K worth of product to a random guy in outer Mongolia who is, we can only hope, now enjoying his new life as an instagram influencer. They even terminally broke the automatic metadata, and hired THIRTY PEOPLE to sit there and do nothing but edit swagger. And it was still both wrong and unusable.
Over the course of two years, I ended up rewriting large portions of their infra surrounding the centralized service cancer to do things like, "implement security," as well as cut memory usage and runtimes down by quite literally 100x in the worst cases.
It was during this time I discovered a rather critical flaw. This is the story of what, how and how can you fucking even be that stupid. The issue relates to users and their reports and their ability to order.
I first found this issue looking at some erroneous data for a low value order and went, "There's no fucking way, they're fucking stupid, but this is borderline criminal." It was easy to miss, but someone in a top down reporting chain had submitted an order for someone else in a different org. Shouldn't be possible, but here was that order staring me in the face.
So I set to work seeing if we'd pwned ourselves as an org. I spend a few hours poring over logs from the log service and dynatrace trying to recreate what happened. I first tested to see if I could get a user, not something that was usually done because auth identity was pervasive. I discover the users are INCREMENTAL int values they used for ids in the database when requesting from the API, so naturally I have a full list of users and their title and relative position, as well as reports and descendants in about 10 minutes.
I try the happy path of setting values for random, known payment methods and org structures similar to the impossible order, and submitting as a normal user, no dice. Several more tries and I'm confident this isn't the vector.
Exhausting that option, I look at the protocol for a type of order in the system that allowed higher level people to impersonate people below them and use their own payment info for descendant report orders. I see that all of the data for this transaction is stored in a cookie. Few tests later, I discover the UI has no forgery checks, hashing, etc, and just fucking trusts whatever is present in that cookie.
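The missing piece is trivial, too: sign the cookie payload server-side so tampering is detectable. A minimal sketch of the idea (the secret and the fields are invented for illustration):

```python
import hmac
import hashlib
import json

SECRET = b"server-side-secret"          # never leaves the server

def sign(payload):
    body = json.dumps(payload, sort_keys=True)
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify(cookie):
    body, _, sig = cookie.rpartition(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    # constant-time compare, then and only then trust the contents
    return json.loads(body) if hmac.compare_digest(sig, expected) else None

cookie = sign({"impersonating": 42, "payment_method": 7})   # invented fields
print(verify(cookie))   # tampering with either field breaks the signature
```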
An hour of tweaking later, I'm impersonating a director as a bottom rung employee. Score. So I fill a cart with a bunch of test items and proceed to checkout. There, in all its glory are the director's payment options. I select one and am presented with:
"please reenter card number to validate."
Bupkiss. Dead end.
OR SO YOU WOULD THINK.
One unimportant detail I noticed during my log investigations (and that the shit slinging GUI monkeys who butchered the system didn't) was that, on a failed attempt to submit a payment to the DB, the logs were filled with messages like:
"Failed to submit order for [userid] with credit card id [id], number [FULL CREDIT CARD NUMBER]"
One submit click later and the user's credit card number drops into lnav like a gatcha prize. I dutifully rerun the checkout and got an email send notification in the logs for successful transfer to fulfillment. Order placed. Some continued experimentation later and the truth is evident:
With an authenticated user of any privilege, you could place any order, as anyone, using anyone's payment methods and have it sent anywhere.
So naturally, I pack the crucifixion-worthy body of evidence up and walk it into the IT director's office. I show him the defect, and he turns sheet fucking white. He knows there's no recovering from it, and there's no way his shitstick service team can handle fixing it. Somewhere in his tiny little grinchly manager's heart he knew they'd caused it, and he was to blame for being a shit captain to the SS Failboat. He replies quietly, "You will never speak of this to anyone, fix this discreetly." Straight up hitler's bunker meme rage.
-
Start a development job.
Boss: "let's start you off with something very easy. There's this third party we need data from. They have an api, just get the data and place it on our messaging bus."
Me: "sure, sounds easy enough"
Third party api turns out to have the most retarded conversation protocol. With us needing a service to receive data on while also having a client to register for the service. With a lot of timed actions like, 'send this message every five minutes' and 'check whether our last message was sent more than 11 minutes ago'.
Due to us needing a service, we also need special permissions through the company firewall. So I have to go around the company to get these permissions, FOR EVERY DATA STREAM WE NEED!
But the worst of it all is... This whole api is SOAP based!!
Also, Hey DevRant!
-
What I'm posting here is my 'manifesto'/the things I stand for. You may like it, you may hate it, you may comment but this is what I stand for.
What are the basic principles of life? one of them is sharing, so why stop at software/computers?
I think we should share our software, make it better together and don't put restrictions onto it. Everyone should be able to contribute their part and we should make it better together. Of course, we have to make money but I think that there is a very good way in making money through OSS.
Next to that, since the Snowden releases from 2013, it has come clear that the NSA (and other intelligence agencies) will try everything to get into anyone's messages, devices, systems and so on. That's simply NOT okay.
Our devices should be OUR devices. No agency should be allowed to warrantless bypass our systems/messages security/encryptions for the sake of whatever 'national security' bullshit. Even a former NSA semi-director traveled to the UK to oppose mass surveillance/mass govt. hacking because he, himself, said that it doesn't work.
We should be able to communicate freely without spying. Without the feeling that we are being watched. Too badly, the intelligence agencies of today do not want us to do this and this is why mass surveillance/gag orders (companies having to reveal their users' information without being allowed to alert their users about this) are in place but I think that this is absolutely wrong. When we use end to end encrypted communications, we simply defend ourselves against this non-ethical form of spying.
I'm a heavy Signal (and since a few days also Riot.IM (matrix protocol) (Riot.IM with end to end crypto enabled)), Tutanota (encrypted email) and Linux user because I believe that only those measures (open source, reliable crypto) will protect against all the mass spying we face today.
The applications/services I strongly oppose are stuff like WhatsApp (yes, encryted messages but the metadata is readily available and it's closed source), skype, gmail, outlook and so on and on and on.
I think that we should OWN our OWN data, communications, browsing stuffs, operating systems, softwares and so on.
This was my rant.
-
I'm working on a project with a teacher to oversee the project at my school that is responsible for the confidential student data...
Teacher: How are we going to authenticate the kiosk machines so people don't need a login?
Me: Well we can use a unique URL for the app and that will put an authorized cookie on the machine as well as local IP whitelisting.
Teacher: ok but can't we just put a secret key in a text file on the C drive and access it with JavaScript?
Me: well JavaScript can't access your drive it's a part of the security protocol built into chrome...
Teacher: well that seems silly! There must be a way.
Me: Nope definately not. Let's just make a fancy shortcut?
Teacher: Alright you do that for now until I find a way to access that file.
I want to quit this project so bad.
-
Found this on mastodon:
I sometimes imagine that somewhere there must be a Ministry for Messing Up the Internet. It would be like a Monty Python sketch.
Each day a new idea would arrive in the intray of an official who looks like a young John Cleese. They would form a large pile of papers.
[reads] "Make a protocol so complicated that nobody can understand it. No the Sematic Web has already been tried".
[reads] "Ban all the cat photos for spurious copyright reasons. No, we already have an upload filter in progress to do that".
[reads] "Fill Tim Berners-Lee's socks with elephants. No - much too silly."
"Ah yes, [reads] make a giant man in the middle that everything on the internet has to go through like a sausage machine and get squirted out on the other side, hopefully in the correct order. Bernard, get Cloudflare on the phone immediately."
@bob@soc.freedombone.net
-
I absolutely love the email protocols.
IMAP:
x1 LOGIN user@domain password
x2 LIST "" "*"
x3 SELECT Inbox
x4 LOGOUT
Because a state machine is clearly too hard to implement in server software, clients must instead do the state machine thing and therefore it must be in the IMAP protocol.
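That tagged-command dance is exactly what client libraries end up hiding. A rough equivalent of the session above using Python's imaplib (host and credentials are placeholders):

```python
import imaplib

# The library issues the same tagged commands shown above and
# keeps track of the state machine for you.
conn = imaplib.IMAP4_SSL("imap.example.com")        # placeholder host
conn.login("user@domain", "password")               # LOGIN
print(conn.list())                                  # LIST "" "*"
conn.select("INBOX")                                # SELECT Inbox
typ, data = conn.search(None, "UNSEEN")
print(typ, data)
conn.logout()                                       # LOGOUT
```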
SMTP:
I should be careful with this one since there's already more than enough spam on the interwebs, and it's a good thing that the "developers" of these email bombers don't know jack shit about the protocol. But suffice it to say that much like on a real letter, you have an envelope and a letter inside. You know these envelopes with a transparent window so you can print the address information on the letter? Or the "regular" envelopes where you write it on the envelope itself?
Yeah not with SMTP. Both your envelope and your letter have them, and they can be different. That's why you can have an email in your inbox that seemingly came from yourself. The mail server only checks for the envelope headers, and as long as everything checks out domain-wise and such, it will be accepted. Then the mail client checks the headers in the letter itself, the data field as far as the mail server is concerned (and it doesn't look at it). Can be something else, can be nothing at all. Emails can even be sent in the future or the past.
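A quick sketch of that envelope-vs-letter split using Python's smtplib (addresses and relay are placeholders) - the envelope sender passed to sendmail() and the From: header inside the message are set completely independently:

```python
import smtplib

envelope_from = "mailer@example.org"        # what the server sees ("MAIL FROM")
header_from = "you@example.org"             # what the mail client displays ("From:")

message = (
    f"From: {header_from}\r\n"
    "To: victim@example.net\r\n"
    "Subject: totally from yourself\r\n"
    "\r\n"
    "The envelope and the letter never have to agree.\r\n"
)

with smtplib.SMTP("mail.example.net", 25) as server:     # placeholder relay
    # sendmail() sets the envelope; the headers live inside the message data.
    server.sendmail(envelope_from, ["victim@example.net"], message)
```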
Postfix' main.cf:
You have this property "mynetworks" in /etc/postfix/main.cf where you'd imagine you put your own networks in, right? I dunno, to let Postfix discover what your networks are.. like it says on the tin? Haha, nope. This property defines which networks are allowed to use the mail server without any authentication at all, and that is exactly what makes an open relay an open relay. If any one of the addresses in your networks (such as a gateway, every network has one) is also where your SMTP traffic flows into the mail server from, congrats, the whole internet can now send through your mail server without authentication. And all because it was part of "your networks".
Yeah when it comes to naming things, the protocol designers sure have room for improvement... And fuck email.
Oh, bonus one - STARTTLS:
So SMTP has this thing called STARTTLS where you can.. unlike mynetworks, actually start a TLS connection like it says on the tin. The problem is that almost every mail server uses self-signed certificates so they're basically meaningless. You don't have a chain of trust. Also not everyone supports it *cough* government *cough*, so if you want to send email to those servers, your TLS policy must be opportunistic, not enforced. And as icing on the cake, if anything is wrong with the TLS connection (such as an MITM attack), the protocol will actively downgrade to plain. I dunno.. isn't that exactly what the MITM attacker wants? Yeah, great design right there. Are the designers of the email protocols fucking retarded?
-
I reversed engineered the network protocol for a game.
I uploaded the source code to GitHub and made a post on UC Forums.
I kept getting bombarded with messages from the same person, it went something like this:
Him: "I can't get this hack to work, pls send finish hack, thanks"
Me: "First of all this is not a complete hack. You actually need to know how to code to use this library."
Guy: "Ok, can u help me make hack for game?"
To keep this short, I basically told him:
"No. Look through the code, learn it, use what you learned."
Couple of hours later he replied:
"Ok. I look through code but don't know how work. Send me code pls."
Out of the kindness of my heart I made an extremely simplified wrapper for the already simple code and sent him the project files.
He replies with: "Thank for hack, I not able make it work. I build I try inject game but no work. How to run dll file."
At that point I gave up...
-
I die, go to hell and my punishment is to write software for hell network that is having power problems due to light source disruptions and is running on Windows 95 on FAT32 without any service pack.
Network speed is trough 300bps dial up modem. Protocol is over IPX/SPX.
My task is to write interactive websites that are replacement of modern websites but in VBScript, ActiveX, IE 4.0.
I have 10 managers that tell me what to do and scream when I miss a deadline that is set every day, at random times, without my knowledge.
They send me an email and 5 minutes later they arrive at my desk to ask me about it.
I must work 16 hours a day before I can leave the place, and if I don't show up the police beat me and escort me to the office.
If I'm late by a second I don't get paid.
I can't afford to rent a place so I sleep in a sleeping bag.
It doesn't matter much cause as soon as I fall asleep the phone rings until I wake up and my manager screams about the problems he has for about an hour.
-
Senior manager: I cant understand how this project has taken so long?
Me: Well you hired me as a C# WPF developer and then asked me to deliver an android app without any kind of training, so I had to teach myself app development and reverse engineer the undocumented protocol it needs to use to communicate with our product.
Senior manager: Ok. I get that, but it should only take around 3 months to get up to speed though right?
Me (to myself): how in the hell? New platform, self teaching, undocumented protocol for a complex low level real-time system, other responsibilities taking at least 50% of my time, and I should be as productive as an outsourced app dev company in 3 months???!! FFFFFUUUUUUUUUUUUUU!!!!!!!!!
-
Intern's CV says they have technical skills with MS Office, MySQL and JavaScript. Last month I let my manager know that this intern doesn't really know anything, so we let her do a Freecodecamp course, after which she still cannot build a basic HTML and CSS page and doesn't understand the relationship between HTML and CSS.
My manager bought her a Laravel course for beginners and today I discovered that she also doesn't understand databases, because she tried to enter an alphabetic character into a column that only accepts integers. She doesn't read/understand the error codes thrown by the application.
She tried to access a route which she created in her Laravel app by accessing it via the phpmyadmin dashboard and called me and wasted my time by asking me why her route isn't working. She literally does not understand how computers work, or how the HTTP protocol works, even less so how a file structure works. She cannot translate abstractions to practical solutions.
She either deliberately lied on her CV to get a job, or she's just really dumb and doesn't understand what the term "technical skills" mean.
I've told my manager multiple times how I think she's in the wrong job, but they keep pushing things beyond her capabilities onto her desk. I was told I'd get an intern to help me with my work load, but I got signed up for an experiment I did not consent to (manager's words, it's an experiment to help uplift people with bad degrees and a poor background). I am not a good teacher, I hate doing it.
-
Hi,
I'm not a ranty person so I never actually thought I'd post anything here but here it goes.
From the beginning.
We use ancient technologies: PHP 5.2, Symfony 1.2 and a non-RFC-compliant SOAP with NO documentation.
A year ago we were thrown a new temporary project. A VOIP app for every OS.
That being iOS, Android, Mac, PC, Linux, Windows Mobile. With a 3 month deadline. All that thrown at 4 PHP developers. The idea being that they'll take it, sign the delivery protocol, everyone happy. No more updates for the app needed. They get the funds they needed the app for and we get paid.
Fast forward to today...
Our dev team started the year with the great news that we'll most likely have to create a new project. Since the amount of new features would be far greater than the current feature set, we managed to finally force our boss to use newer technologies (i.e. separate backend Symfony 4 PHP 7+ / frontend React, REST API and so on). So we were ecstatic to say the least. With pre-estimates aimed at a minimum 3 month development period. Since we're comfortable with everything that needs to be done.
Two days later our boss came to me saying that one of our most annoying clients needs a new feature. Said client uses an ancient version written on a napkin, because they changed half of the specification 2 weeks before the deadline, in software made not by a developer but by some sysadmin who didn't know anything. His MVC model was practically a VVV model since he even had SQL queries in some views. The feature will take 3 days - fixing everything that will break in the meantime - 1-2 months.
F*** it, fine. A little overtime won't kill me.
Yesterday the boss comes again... Apparently someone lost the delivery protocol for a project we finished half a year ago. What's even better: when we asked at the time for hardware to test on, we never got any. When we asked about any testing environment - nothing. Calling the app SEMI-stable on everything is an overstatement, but it was working on the OSes available at the time. Since the client started testing again now, it turns out that the Android app does not work on 8.1/9 and the iOS app does not work on iOS 12. The client obviously does not want to pay, and we can do little about it without the protocol, other than rewriting the apps.
It will take months at least, since all of those apps were written by people who knew neither the OSes nor the languages. For example I started writing the iOS one in Swift, only to learn after half of the development time that Swift doesn't like working by C library rules and I had to use ObjC also. With some C thrown in due to the library. 3 unknown languages, on an unknown platform, in 3 months. I never had any Apple device in my hand at that time, nor do I intend to now. I'm astonished it worked out then. It was a clusterf**k of bad design and sticking everything together with deprecated APIs and gum.
If the boss decides we'll take all of those at the same time, I'll f***ing jump off a bridge.
-
Just thought I'd share my current project: Taking an old ISA sound card I got off eBay and wiring it up to an Arduino to control its OPL3 synth from a MIDI keyboard. I have it mostly working now.
No intention to play audio samples, so I've not bothered with any of the DMA stuff - just MIDI (MPU-401 UART) and OPL3.
It has involved learning the pinout of the ISA bus connectors, figuring out which ones are actually used for this card, ignoring the standards a little (hello, amplifier chip that is wired up to the +12V line but which still happily works at +5V...)
Most of the wires going to it are for each bit of the 16-bit address and 8-bit data. Using a couple of shift registers for the address, and a universal shift register for the data. Wrote some fairly primitive ISA bus read/write code, but it was really slow. Eventually found out about SPI and re-wrote the code to use that and it became very fast. Had trouble with some timings, fixed those.
The card is an ISA Plug and Play card, meaning before I could use it I had to tell it what resources to use. Linux driver code and some reverse-engineering of the official Windows/DOS drivers got me past this stage.
Wired up IRQ 5 to an Arduino interrupt to deal with incoming MIDI data, with a routine that buffers it. Ran into trouble with the interrupt happening during I/O and needing to do some I/O inside the handler and had to set a flag to decide whether to disable/re-enable interrupts during I/O.
It looks like total chaos, but the various wires going across the breadboard are mainly to make it easier to deal with the 16-bit address and 8-bit data lines. The LEDs were initially used to check what addresses/data were being sent, but now only one of them is connected and indicates when the interrupt handler is executing.
There's still a lot to do after that though - MIDI and OPL3 are two completely different things so I had to write some code to manage the different "channels" of the OPL3 chip. I have it playing multiple notes at the same time but need to make it able to control the various settings over MIDI. Eventually I might add some physical controls to it and get a PCB made.
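The "channel" bookkeeping mostly boils down to mapping MIDI note-on/note-off events onto a small pool of OPL3 voices. A rough sketch of that allocation logic (in Python for brevity - the real version is C++ on the Arduino, and the register-write functions are stand-ins):

```python
NUM_VOICES = 18                        # OPL3 offers 18 two-operator channels
free_voices = list(range(NUM_VOICES))
playing = {}                           # MIDI note number -> OPL3 channel

def write_opl3_keyon(channel, note, velocity):
    print(f"key-on  ch={channel} note={note} vel={velocity}")   # stand-in for register writes

def write_opl3_keyoff(channel):
    print(f"key-off ch={channel}")                              # stand-in for register writes

def note_on(note, velocity):
    if not free_voices:
        return                         # voice stealing left out of this sketch
    channel = free_voices.pop()
    playing[note] = channel
    write_opl3_keyon(channel, note, velocity)

def note_off(note):
    channel = playing.pop(note, None)
    if channel is not None:
        write_opl3_keyoff(channel)
        free_voices.append(channel)

note_on(60, 100)   # middle C down...
note_off(60)       # ...and back up
```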
The fun part is, I only vaguely know what I'm doing with the electronics side of this. I didn't know what a "shift register" was before this project, nor anything about the workings of the ISA bus. I knew a bit about MIDI (both the protocol and generally how the MPU-401 UART works) along with the operation of a sound card from a driver/software perspective, but everything else is pretty new to me.
As a useful little extra, I made some "fake" components that I can build the software against on a PC, to run some tests before uploading it to the Arduino (mostly just prints out the addresses it is going to try and write to).46 -
This has been said countless times before me, and way better than me that’s supper tired, but I need to rant out
And what I’m ranting out today, is Apple. Its essence, its core, the reason it still exists: the ECOSYSTEM!
The problem with Apple ecosystem is that it’s the ecosystem of a fucking PRISON!
People like it because it works well together, but sure, in a prison the path from your cell to the canteen is pretty optimized; you get forced there! And you might try to get your food elsewhere, but the walls of the prison are made to be difficult to cross. Especially on mobile, where they're making it harder and harder to escape, to make a jailbreak (pun intended). Keeping you the loyal little sheep, or forcing you to become one.
That prison is also made private, a little club, to attract people to it. They even got their own little system to talk to each other, but oh, god forbid their little messages ever pass the walls of the prison.
And all that prison is guarded by the warden, watching from high in the cloud. Forcing you to report yourself to him to be part of that prison.
That prison, also, can only be entered with specific vehicles, provided by the prison, to ensure maximum compatibility and efficiency. Good luck entering with a disguised vehicle if you find the official ones too pricey for their parts.
They also provided pressure tubes to send things from one cell to another. While being only simple pressure tubes like any other, they're acclaimed because they're apparently easier to use than the other 3rd party pressure tubes that can send things to the outside. Why? Because, oh yes, it's already in everybody's cells (of that prison; outside is dangerous) and the other tubes have conveniently been placed somewhere harder to reach.
Another thing they have are those windows that can view the outside. While being maybe less clear than some other windows, they are ok. But if you ever consider going mobile to enjoy that safari with lions, then man do they love bringing you back to that window.
Ok so I’m done with the prison metaphor, or I won’t sleep.
The ecosystem is probably the major reason Apple is still there. You buy from there because you’re a prisoner (I guess I’m not finished with the metaphor after all).
This is a prime example of RMS’s quote “If the user doesn’t control the software, the software controls the user”
AirDrop isn’t some sort of revolutionary tech, it uses a well established protocol that other implementations use to do the same thing. They could really easily open source the protocol and allow everyone to profit, but they won’t, because that would mean you don’t have to buy Apple.
That’s why I militate for open source, decentralized and standardized protocols. Because that way, we control the software, and it doesn’t control us.
All the things I said aren’t so bad because when you buy Apple, you make a choice. But I don’t have a choice, I am typing this on an Apple device, because I need to (I won’t elaborate on that) because of that fucking *ecosystem*
I am really tired, so half the sentences probably don't make sense, but thanks for coming to my stupid TED talk.
-
That's actually something that happened fairly recently.. just that I didn't have the energy left at the time to write it down. That, or I got my ass too drunk to properly write anything.. not sure actually.
So on paper I'm unemployed, but I do spend some time still on pretty much voluntary work for HackingVision, along with a handful of other people.
At the time, we were just doing the usual chit-chat in the admin channel, me still sick in my bed (actually that means that I wasn't drunk but really tired for once.. amazing!) and catching up to what happened, but unable to do any useful work in this sick state. So, tablet, typing on glass, right. I didn't have any keyboard attached at the time.
One of the staff members (a wanketeer from India) apparently had an assignment in a few hours for which he needed to write a server application in Java. Now, performance issues aside, I figured.. well I've got quite a bit of experience with servers, as well as some with client-server protocols. So I got thinking.. mail servers, way too overengineered. Web servers.. well that could work, I've done some basic netcat webservers that just sent an HTTP 200 OK and the file, those worked fine.. although super basic of course. And then there's IRC, which I've actually talked to an InspIRCd server through telnet before (which by the way is pretty much the only thing that telnet is still useful for, something that was never its purpose, lol) and realized that that protocol is actually quite easy to develop around. That's why I like it so much over modern chat protocols like XMPP, MQTT and whatnot. So I recommended that he'd write a little IRC server in Java. Or even just a chatbot like I attempted to at the time, considering that that's - with a stretch of course - a sort-of server too.
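And honestly, the protocol is that approachable - a minimal client fits in a screenful of Python (server, nick and channel below are placeholders): registration, PING/PONG and a message, done.

```python
import socket

HOST, PORT = "irc.example.net", 6667          # placeholder server
NICK, CHANNEL = "ranter", "#test"             # placeholder nick/channel

sock = socket.create_connection((HOST, PORT))

def send(line):
    sock.sendall((line + "\r\n").encode())

send(f"NICK {NICK}")                          # registration, step 1
send(f"USER {NICK} 0 * :{NICK}")              # registration, step 2
send(f"JOIN {CHANNEL}")
send(f"PRIVMSG {CHANNEL} :hello from a few lines of python")

while True:
    data = sock.recv(4096)
    if not data:
        break
    for line in data.decode(errors="replace").split("\r\n"):
        if line.startswith("PING"):
            send("PONG" + line[4:])           # keepalive: echo the token back
        elif line:
            print(line)
```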
His fucking response however, so goddamn fucking infuriating. "If the protocol is so easy, then please write me down how to implement it in Java."
Essentially do his fucking work for him. I don't know Java, but as a fucking HackingVision admin, YOU SHOULD FUCKING KNOW THAT HACKERS CAN'T STAND LAZY CUNTS THAT CAN'T EVEN BE ASSED TO GOOGLE SHIT!!! If I wanted to deal with cunts like that, I'd have opened the page inbox with all its Fb h4xx0ring questions, not the fucking admin chat!
And type it on a goddamn fucking piece of glass, while fucking sick?! Get your ass fucked by a bobs and vegana horny fuck from the untouchable caste, because that's where you fucking belong for expecting THAT from me, you fucking bhenchod.
But at least I didn't get my ass enraged like that to say that to him in the admin chat. Although that probably wouldn't have been a bad thing, to get his feet right back on the ground again.
-
Modern web frontend is giving me a huge headache...
Gazillion frameworks, css preprocessors, transpilers, task runners, webpack, state management, templating, Rxjs, vector graphics,async,promises, es6,es7,babel,uglifying,minifying,beautifying,modules,dependecy injection....
All this for programming apps that happen to run inside browsers on a protocol which was designed to display simple text pages...
This is insanity. It cannot go on like this for long. I pray for webasm and elm to rescue me from this chaos.
I work now as a fullstack dev as my first job but my next job is definitely going to be backend/native stuff for desktop or mobile. It seems those areas are much less crazy.
-
A few days ago a friend of mine asked me to teach him to code. When I wanted to know which language he'd like to learn, he hesitantly replied "https".
Then I explained that this was a data transfer protocol. His next idea was "http". 🙄
Guess who will learn Python
-
At college (UK) and taking a general IT course (databases, programming, networking) a friend suggested "We should make our own network protocol." (The only language we had covered at the time was Visual Basic)
-
The IT head of my Client's company : You need to explain me what exactly you are doing in the backend and how the IOT devices are connected to the server. And the security protocol too.
Me : But it's already there in the design documents.
IT Head : I know, but I need more details as I need to give a presentation.
Me : (That's the point! You want me to be your teacher!) Okay. I will try.
IT Head : You have to.
Me : (Fuck you) Well, there are four separate servers - cache, db, socket and web. Each of the servers can be configured in a distributed way. You can put some load balancers and connect multiple servers of the same type to a particular load balancer. The database and cache servers need to be replicated. The socket and http servers will subscribe to the cache server's updates. The IOT devices will be connected to the socket server via SSL and will publish the updates to a particular topic. The socket server will update the cache server and the http servers which are subscribed to that channel will receive the update notification. Then the http server will forward the data to the web portals via web socket. The websockets will also work on SSL to provide security. The cache server also updates the database after a fixed interval.
This is how it works.
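A rough sketch of that fan-out, with Redis pub/sub standing in for the cache server's subscription mechanism (purely my assumption for illustration; the topic and values are made up):

```python
import json
import redis   # assumed client library; Redis stands in for the cache layer here

r = redis.Redis(host="localhost", port=6379)

# Web/HTTP side: subscribe to the topic the device publishes on.
sub = r.pubsub()
sub.subscribe("device/42/metrics")

# IOT device side (normally a different process): publish a reading.
r.publish("device/42/metrics", json.dumps({"temp": 21.5}))

# Anything that arrives gets forwarded to the web portals over websockets.
for message in sub.listen():
    if message["type"] == "message":
        print("forward to websocket clients:", message["data"])
        break
```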
IT Head : Can you please give the presentation?
Me : (Fuck you asshole! Now die thinking about this architecture) Nope. I am really busy.
-
My typical morning Teams exchange:
Newb: GM (requesting connection)
Me: GM (connection established)
Newb: How r u? (requesting headers)
Me: Good (headers sent)
Newb: You free? (ready for comms?)
Me: Sure (comms ready)
…
Feels like a bad internet protocol.
-
*working on a programming assignment for a graduate-level course*
"We will provide you code that implements the protocol in the server. You do not need to touch this code."
*provided file has syntax errors, including a block comment which doesn't close before EOF*
-
It has been bugging the shit out of me lately... the sheer number of shit-tier "programmers" that have been climbing out of the woodwork the last few years.
I'm not trying to come across as elitist or "holier than thou", but it's getting ridiculous and annoying. Even on here, you have people who "only do frontend development" or some other lame ass shit-stain of an excuse.
When I first started learning programming (PHP was my first language), it wasn't because I wanted to be a programmer. I used to be a member (my account is still there, in fact) of "HackThisSite", back when I was about 12 years old. After hanging out long enough, I got the hint that the best hackers are, in essence, programmers.
Want to learn how to do SQL injection? Learn SQL - write a program that uses an SQL database, and ask yourself how you would exploit your own software.
Want to reverse engineer the network protocol of some proprietary software? Learn TCP/IP - write a TCP/IP packet filter.
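The first of those exercises fits in a dozen lines. A small sqlite3 sketch showing the exploitable string-concatenated query next to the parameterised one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

name = "x' OR '1'='1"   # attacker-controlled input

# Naive string concatenation: the input rewrites the query itself.
leaky = conn.execute(f"SELECT secret FROM users WHERE name = '{name}'").fetchall()
print("concatenated query leaks:", leaky)

# Parameterised query: the input stays data, nothing comes back.
safe = conn.execute("SELECT secret FROM users WHERE name = ?", (name,)).fetchall()
print("parameterised query returns:", safe)
```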
Back then, a programmer and a hacker were very much one and the same. Nowadays, some kid can download Python, write a "hello, world" program and they're halfway to freelancing or whatever.
It's rare to find a programmer - a REAL programmer, one who knows the systems he develops for better than the back of his hand.
These days, I find people want the instant gratification that these simpler languages provide. You don't need to understand how virtual memory works, hell many people don't even really understand C/C++ pointers - and that's BASIC SHIT right there.
Put another way, would you want to take your car to a brake mechanic that doesn't understand how brakes work? I sure as hell wouldn't.
Watching these "programmers" out there who don't have a fucking clue how the code they write does what it does, is like watching a grown man walk around with a kid's toolbox full or plastic toys calling himself a mechanic. (I like cars, ok?!)
*sigh*
Python, AngularJS, Bootstrap, etc. They're all tools and they have their merits. But god fucking dammit, they're not the ONLY damn tools that matter. Stop making excuses *not* to learn something, Mr."IOnlyDoFrontEnd".
Coding ain't Legos, fuckers.
-
I've optimised so many things in my time I can't remember most of them.
Most recently, something had to be the equivalent of `"literal" LIKE column` with a million rows to compare. It would take around a second on average per literal to look up, for a service that needs to be high load and low latency. This isn't an easy case to optimise; many people would consider it impossible.
It took me a couple of hours to reverse engineer the data and write a few-hundred-line implementation that would look it up in 1ms average, with the worst possible case being very rare and not too distant from this.
In another case there was a lookup of arbitrary time spans that most people would not bother to cache because the input parameters are too short lived and variable to make a difference. I replaced the 50000+ line application acting as a middle man between the application and database with 500 lines of code that did the look up faster and was able to implement a reasonable caching strategy. This dropped resource consumption by a minimum of factor of ten at least. Misses were cheaper and it was able to cache most cases. It also involved modifying the client library in C to stop it unnecessarily wrapping primitives in objects to the high level language which was causing it to consume excessive amounts of memory when processing huge data streams.
Another system would download a huge data set for every point of sale constantly, then parse and apply it. It had to reflect changes quickly but would download the whole dataset each time, containing hundreds of thousands of rows. I whipped up a system so that a single server (barring redundancy) would download it in a loop, parse it using C (which was much faster than the traditional interpreted language), then use a custom data differential format, a TCP data streaming protocol, binary serialisation and LZMA compression to pipe it down to points of sale. This protocol also used versioning for catchup and differential combination for additional reduction in size. It went from being 30 seconds to a few minutes behind to being able to keep up to within a second of changes. It was also using so much bandwidth that it would reach the limit on ADSL connections then get throttled. I looked at the traffic stats afterwards and it dropped from dozens of terabytes a month to around a gigabyte or so a month for several hundred machines. From the drop in the graphs you'd think all the machines had been turned off, because that's what it looked like. It could now happily run over GPRS or 56K.
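The general shape of that pipeline (not the actual proprietary format, just an illustration): compute a delta against the version the point of sale already has, serialise it, LZMA-compress it, and that's what goes down the wire.

```python
import json
import lzma

# Send only what changed since the version the point of sale already has.
previous = {"item_1": 100, "item_2": 250, "item_3": 999}
current = {"item_1": 100, "item_2": 275, "item_4": 10}

delta = {
    "from_version": 41,
    "to_version": 42,
    "changed": {k: v for k, v in current.items() if previous.get(k) != v},
    "removed": [k for k in previous if k not in current],
}

blob = lzma.compress(json.dumps(delta).encode())      # what actually goes down the pipe
print(len(blob), "bytes instead of the full dataset")
```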
I was working on a project with a lot of data and noticed these huge tables and horrible queries. The tables were all the results of queries. Someone wrote terrible SQL, then to optimise it ran it in the background with all possible variable values and stored the results of joins and aggregates into new tables. On top of those tables they wrote more SQL. I wrote some new queries and query generation that wiped out thousands of lines of code immediately and operated on the original tables, taking things down from 30GB and rapidly climbing to a couple of GB.
Another time a piece of mathematics had to generate all possible permutations and the existing solution was factorial. I worked out how to optimise it to run n*n which believe it or not made the world of difference. Went from hardly handling anything to handling anything thrown at it. It was nice trying to get people to "freeze the system now".
I built my own frontend systems (admittedly rushed) that do what angular/react/vue aim for but with higher (maximum) performance, including an in-memory database to back the UI that had layered event-driven indexes and could handle referential integrity (an overlay on the database only revealing items with valid integrity) or reordering and repositioning events very rapidly using a custom AVL tree. You could layer indexes over it (data inheritance) that could be partial and dynamic.
So many times have I optimised things on automatic just cleaning up code normally. Hundreds, thousands of optimisations. It's what makes my clock tick.
-
Arrived at the office, then almost immediately the boss, who's also a developer, tells me he changed something on the protocol/api.
Me thinking: u just broke the api without thinking on consequences but hey... Ur the boss...
Later, he says: look, our app crashes!
Me: obviously...
:/ what the f**k was he thinking... :/
-
I opened a post starting with a "NO TOFU" logo and I was wondering what relationship existed between the SSH protocol and anti-vegan people.
After some paragraphs it explained that TOFU stands for Trust On First Use (a security anti-pattern).
-
Clicking "share" on directory in Windows Explorer, digging through config panel, fidgeting with network discovery options, toggling password protection, digging through account management, jumping over a chair 3 times to channel my inner Bill Gates, checking directory permissions, sacrificing 7 virgin unicorns, go into lusrmgr.msc, curse various gods, install CIFS1.0 protocol, reboot computer, disable encryption, checking registry, trying to summon Steve Ballmer using the blood of a bald goat and sweat-scented candles... 5 hours.
Install Ubuntu on spare SSD, mount Windows NTFS drive, start SMB daemon and set up samba users... 15 minutes.12 -
During the last couple of days, I got to hear quite a horrible story...
So we start at the beginning, where I have a dev-related chat with some other strangers on the internet. One of them was working on a custom protocol implementation with an API to go with it, written in Python. There were plans to migrate the codebase to another language like Rust in the long term. So the project seemed to be going well.
Another guy, the main subject of this story, chimed in on various of our messages, and long story short - he uses Express.js for everything he does, and he doesn't know jack shit about what he's talking about. Yet he still talks.
Later we got the delight of hearing that he had beaten up his mother, and that she's now in the hospital because of it, with broken arms, hands, fingers and severe bleeding. Yet he has the audacity to complain about his sore throat, caused by all his shouting. He refuses to seek any help, or to take the medicines he's been given. This has been going on for several days now.
As much as I hate to even think about it, these too are "developers". I too have skeletons in my closet, but goddamn.. that these people even exist. The very idea that you may be talking to them every day. It disgusts me.16 -
!dev - cybersecurity related.
This is a semi hypothetical situation. I walked into this ad today and I know I'd have a conversation like this about this ad but I didn't this time, I had convo's like this, though.
*le me walking through the city centre with a friend*
*advertisement about a hearing aid which can be updated through remote connection (satellite according to the ad) pops up on screen*
Friend: Ohh that looks usefu.....
Me: Oh damn, what protocol would that use?
Does it use an encrypted connection?
How'd the receiving end parse the incoming data?
What kinda authentication might the receiving end use?
Friend: wha..........
Me: What system would the hearing aid have?
Would it be easy to gain RCE (Remote Code Execution) to that system through the satellite connection and is this managed centrally?
Could you do mitm's maybe?
What data encoding would the transmissions/applications use?
Friend: nevermind.... ._________.
Cybersecurity mindset much...!11 -
My friend, Gavin, an air steward (a job that he had done for decades), told me about an incident at work. He said that (shockingly to me) passengers occasionally die on a flight (particularly long-haul), just as a matter of course. This can be because people sometimes travel to visit loved ones BECAUSE they are dying, people sometimes find travelling itself stressful (so it can exacerbate an existing medical condition), or simply that, if you took a large number of people and shut them up in a space together for some considerable time, some of them would pop off through sheer statistical probability. Cabin crew are, apparently, fully trained to deal with this eventuality in a calm, almost routine manner.
This particular flight, Gavin was working with another gay man: Peter, who was actually a VERY funny personality. Camp, extravagant and loud, Peter really lit up the place. But naturally, when the very elderly male passenger in seat 38b died peacefully in his sleep halfway across the Atlantic, Peter acted (like the entire crew) with decorum and dignity. As per the protocol, all the lights in the cabin were dimmed. A hush fell over the passengers (Gavin told me that, although no announcement is ever made, the other passengers nearly always instinctively know what's happened, with the news spreading via the medium of hushed whispers and nudges). Then, as per standing instructions, two of the crew carefully lifted the deceased out of his seat and gently carried him to the crew station where he was laid down on a bed for the remainder of the flight.
After the late gentleman disappeared behind the discreetly drawn curtain, you could have heard a pin drop. There was a demure pause during which, slowly, the lights went back up.
Suddenly Peter's cheery face appeared, poking through the gap in the drapes. He looked around, blinking brightly with curiosity at the seated passengers, and said, in a voice that echoed around the whole cabin:
"SO! Anyone else have the fish?"
He narrowly avoided getting sacked.10 -
The new mobile app codebase i'm working with, was clearly written by someone who just read a book on generics and encapsulation.
I need to pull out 2 screens into a separate library to have it shared around. The 1 networking request used is wrapped up in a 'WebServiceFactory' and `WebServiceObjectMapper`, used by a `NetworkingManager` which exposes a generic `request` method taking in a `TopLevelResponse` type (Which has imported every model) which uses a factory method to get the real response type.
This is needed by the `Router` which takes a generic `Action` which they've subclassed for each and every use case needing server communication.
Then the networking request function is part of a chain of 4 near identical functions spread across 4 different files, each one doing a tiny bit more than the last and casting everything to a new god damn protocol, because fuck concrete types.
It's not even used in that many places, there's like 6 networking calls. Why are people so goddamn fucking stupid and insist on over-engineering the shit out of their apps? I'm fed the fuck up with these useless skidmarks.3
Sorry for breaking the protocol, but I'm not here to rant. I want to thanks all the ranters (is that a thing now?) for recommending Mr. Robot (the TV series). Just watched the first episode and I can see myself watching it all day. Be back tomorrow.6
-
In the before time (late 90s) I worked for a company that worked for a company that worked for a company that provided software engineering services for NRC regulatory compliance. Fallout radius simulation, security access and checks, operational reporting, that sort of thing. Given that, I spent a lot of time around/at/in nuclear reactors.
One day, we're working on this system that uses RFID (before it was cool) and various physical sensors to do a few things, one of which is to determine if people exist at the intersection of hazardous particles, gasses, etc.
This also happens to be a system which, at that moment, is reporting hazardous conditions and people at the top of the outer containment shell. We know this is probably a red herring or a faulty sensor, because the presence the system reports doesn't match the access logs and cameras, but we have to check anyways. A few building engineers climb the ladders up there, find that nothing is really visibly wrong, and we have an all clear. They did not, however, know how to check the sensor.
Enter me, the only person from our firm on site that day. So in the next few minutes I am also in a monkey suit (bc protocol), climbing a 150 foot ladder that leads to another 150 foot ladder, all 110lbs of me + a 30lb diag "laptop" slung over my shoulder by a strap. At the top, I walk about a quarter of the way out, open the casing on the sensor module and find that someone had hooked up the line feed, but not the activity connection wire so it was sending a false signal. I open the diag laptop, plug it into the unit, write a simple firmware extension to intermediate the condition, flash, reload. I verify the error has cleared and an appropriate message was sent to the diagnostic system over the radio, run through an error test cycle, radio again, close it up. Once I returned to the ground, sweating my ass off, I also send a not at all passive aggressive email letting the boss know that the next shift will need to push the update to the other 600 air-gapped, unidirectional sensors around the facility.11 -
In my previous rant about IPv6 (https://devrant.com/rants/2184688 if you're interested) I got a lot of very valuable insights in the comments and I figured that I might as well summarize what I've learned from them.
So, there's 128 bits of IP space to go around in IPv6, where 64 bits are assigned to the internet, and 64 bits to the private network of end users. Private as in, behind a router of some kind, equivalent to the bogon address spaces in IPv4. Which is nice, it ensures that everyone has the same address space to play with.. but it should've been (in my opinion) differently assigned. The internet is orders of magnitude larger than private networks. Most SOHO networks only have a handful of devices in them that need addressing. The internet on the other hand has, well, billions of devices in it. As mentioned before I doubt that this total number will be more than a multiple of the total world population. Not many people or companies use more than a few public IP addresses (again, what's inside the SOHO networks is separate from that). Consider this the equivalent of the amount of public IP's you currently control. In my case that would be 4, one for my home network and 3 for the internet-facing servers I own.
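To make that 64/64 split concrete, a quick sketch with Python's ipaddress module (using the documentation prefix, not a real allocation):

import ipaddress

prefix = ipaddress.IPv6Network("2001:db8:abcd:12::/64")  # routed part: handed out by your ISP
host = ipaddress.IPv6Address("2001:db8:abcd:12::1")      # interface part: yours to assign

print(prefix.num_addresses)  # 18446744073709551616, i.e. 2**64 addresses behind one prefix
print(host in prefix)        # True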
There's various ways in which overall network complexity is reduced in IPv6. This includes IPSec which is now part of the protocol suite and thus no longer an extension. Standardizing this is a good thing, and honestly I'm surprised that this wasn't the case before.
Many people seem to oppose the way IPv6 is presented, hexadecimal is not something many people use every day. Personally I've grown quite fond of the decimal representation of IPv4. Then again, there is a binary conversion involved in classless IPv4. Hexadecimal makes this conversion easier.
There seems to be opposition to memorizing IPv6 addresses, for which DNS can be used. I agree, I use this for my IPv4 network already. Makes life easier when you can just address devices by a domain name. For any developers out there with no experience with administration that think that this is bullshit - imagine having to remember the IP address of Facebook, Google, Stack Overflow and every other website you visit. Add to the list however many devices you want to be present in the imaginary network. For me right now that's between 20 and 30 hosts, and gradually increasing. Scalability can be a bitch.
Any other things.. Oh yeah. The average amount of devices in a SOHO network is not quite 1 anymore - there are currently about half a dozen devices in a home network that need to be addressed. This number increases as more devices become smart devices. That said of course, it's nowhere close to needing 64 bits and will likely never need it. Again, for any devs that think that this is bullshit - prove me wrong. I happen to know in one particular instance that they have centralized all their resources into a single PC. This seems to be common with developers and I think it's normal. But it also reduces the chances to see what networks with many devices in it are like. Again, scalability can be a bitch.
Thanks a lot everyone for your comments on the matter, I've learned a lot and really appreciate it. Do check out the previous rant and particularly the comments on it if you're interested. See ya!25 -
This brings joy
https://reddit.com/r/technology/...
Bypass paywall:
A series of scandals and missteps has damaged Facebook's reputation so much that the company is being forced to pay ever larger compensation to hire and retain workers, according to industry recruiters, former employees, and data reviewed by Insider.
The company has always competed aggressively for talent, and the tech job market in general is on fire. But a deteriorating public image means the social-media giant now has to outbid other major tech companies, such as Google.
"One thing Facebook can still do is pay a lot more," said Jose Guardado, an experienced tech recruiter and the founder of Build Talent. "They can easily throw more compensation at people they currently have, and cover any brand tax and pay a little more to get people to come on."
Silicon Valley companies thrive or whither based on their ability to recruit the smartest employees. Without a steady influx of engineers and other technical experts, new products and important updates take longer to release, and rivals can quickly get ahead. Then there's the financial cost: In 2022, Facebook projected, expenses could jump as high as $97 billion from $70 billion this year, in large part because of "investments in technical and product talent." A company spokesperson did not respond to a request for comment.
Other companies, and even whole industries, have had to increase compensation to overcome hiring and retention problems caused by scandal and shifting public perceptions, said Alan Johnson, a managing director at the compensation consulting firm Johnson Associates. "If you're an oil company, if you make cigarettes, if you're in cattle or Wells Fargo, sure," he said.
How well this is working for Facebook is debatable as the company has more than 4,300 open jobs and has seen decreasing rates of acceptance on job offers, according to internal documents reported by Protocol. It's also seen dozens of high-level executives leave this year, and recruiters say employees are now more open to considering jobs elsewhere. Facebook used to be a place that people rarely left, given its reach, pay, and perks.
A former Oculus engineer who left last year said Facebook could now be seen as a "black mark" on someone's career. A hardware engineer who exited in 2020 shared similar sentiments: They said they quit because of concerns about misinformation on the platform and the effect of that on children. Another employee said their department was dissolved in late 2019 by Facebook and, although the company offered another position that paid more, they left last year anyway for a different industry. The workers, and many other people who spoke with Insider for this story, asked not to be identified because of the sensitive nature of the topic.
For those who stick around and people who take new jobs at Facebook, base pay and stock grants have gone up a "sizable" amount in the past year, said Zuhayeer Musa, cofounder of Levels.fyi, a platform that collects pay data based on verified offers and compensation disclosures.
During the second quarter of 2021, the median compensation for an upper-mid-level engineer, an E5, was $400,000, up from $380,000 a year earlier. For an E4, the median pay jumped to $276,000 from $256,000 in the same period. For both groups, the increases were double the gains between 2018 and 2019, Levels.fyi data showed.
Musa, whose firm also offers pay-negotiation coaching, said previously that the total compensation ceiling for an E5 engineer at Facebook was $450,000. "We recently had a client get up to $510,000 for E5," he added.
Equity awards at the company are getting more generous, too. At the group-director and VP levels, Facebook staff are getting $3 million to $6 million in restricted stock units each year, another tech recruiter said. Directors and managers are getting on average $1 million a year. In engineering, a high-level engineer is getting $600,000 in stock and a $75,000 bonus, while even an entry-level engineer is getting $50,000 to $100,000 in stock and a $20,000 to $50,000 bonus, Levels.fyi data indicated.
Even compared to Google, Facebook's stock awards are generous and increasing, Levels.fyi data shows. While base pay is about the same, Facebook offers more in stock grants, significantly increasing total compensation. At Google, entry-level equity awards range from $20,000 to $38,000, while Facebook grants are worth $40,000 to $60,000. Sign-on bonuses at Facebook are often about $50,000, while Google gives about $20,000, according to the data.
"It's not normal, but it's consistent with the craziness that's happening in the market right now," said Aalap Shah, a managing director focused on the tech industry at the consulting firm Pearl Meyer.10 -
A recruiter emailed me.
And called me (and left a voicemail).
AND texted me.
About a job opportunity in California (I live in Texas).
That requires experience writing performance critical and thread-safe code in a large multi-threaded codebase (I work primarily in JavaScript/TypeScript ecosystem, fat chance of that).
Responsibilities listed as: Focus on Supercharger Open Charge Point Protocol (OCPP) software features. I don’t even know what the fuck that means.
Opportunity is for a 3 month contract.
Why are you so desperate, lady?10 -
I am currently in a bit of a (well-deserved) lull at work, both of my projects are finishing up/ finished, so tomorrow should be pretty light, as the latter half of today was.
And I have really gotten interested in the HTTP protocol. It's so interesting learning how it all works under the hood.
So I think I'm going to be researching/ messing around with creating a cpp project that essentially implements cURL from the ground up, creating sockets, reading from them, parsing the HTTP requests... all that. I don't expect to actually get it done, but it should be an immense learning experience. I have a clear goal: implement this function:
std::string get(const std::string&);
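For reference, minus TLS, redirects and chunked encoding, that call roughly boils down to this. Sketched in Python here rather than C++, just to show the shape of it:

import socket

def get(host, path="/", port=80):
    # build a minimal HTTP/1.1 request and read until the server closes the socket
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode()
    with socket.create_connection((host, port)) as s:
        s.sendall(request)
        response = b""
        while chunk := s.recv(4096):
            response += chunk
    headers, _, body = response.partition(b"\r\n\r\n")
    return body.decode(errors="replace")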
Once I'm able to just GET as simple as that, I know I have achieved my goal!3 -
Physics class, groupwork
Me: *Writing a protocol in Markdown and LaTeX*
Partner: Are you currently using Excel?
Me: "No"
Partner: *yells* We need a new computer!7 -
Another incident which made a Security Researcher cry
[ NOTE : Check profile to read older incidents ]
-----------------------------------------------------------
So this all started when I was at my home (bunked the office that day xD) and I got a call from a..... Let's call him Fella as I always do . So here we go . And yeah , our Fella is a SysAdmin .
-----------------------------------------------------------
Fella - Hey man sup!
Me - Good going mate , bunked the office , weather's nice , gonna spend time with my girl today . So what's goinon?
Fella - Bruh my network sharing folders ain't working no more .
Me - Did you changed or modified anything?
Fella - Nope
Me - Okay , gimme your login creds lemme check .
Fella - Check your inbox *texts me the credentials*
*I logged in and what I'm seeing is that the server runs on Windows 2008 R2. Checked the event logs, everything's fine, and all of a sudden what I find is fucking embarrassing: this wise man had disabled the SMB service*
Me - Did you closed SMB service?
Fella - Yeah
Me - You know what it does?
Fella - Yeah it's a protocol , I turned it off to protect the server from Wannacry .
Me - Fuckerrrr!!!!! Asshole dumbass you fuckin piece of Dodo's shit!! SMB is the service responsible for files and network sharing!!!
Fella - But....I just wanted protection
Me - 😭😭😭
*A long conversation continues with a lot of specially made words to decrease the rate of frustration which I used already*
Fella - Okay I'm turning it on .
Me - Go on....... Asshole
Fella - It worked! Thanks a lot bro
Me - Just leave me and my soul away from evil and hang up .
*Now the question is , who the hell gives them the post of SysAdmin? While thinking this question , I almost thought of committing suicide but then my girl came with coffee and my rubber duck*1 -
Google: buys Android
Makes tons of $ from Ads
Meanwhile 7 year old bugs
Are still not fixed
A bug reported in 2012: recently created files are not visible when using MTP protocol.
Guess what? I still have this bug on my 2017 phone, like many other people.
Probably has something to do with file cache.
Because obviously 7 years is not enough to fix a stupid bug. Especially when Google is busy implementing all the other features nobody asked for except marketing department4 -
Apparently DELETE and... most of the HTTP verbs are disabled by default in IIS (ASP/ MVC/ Microsoft server software)
Am I wrong in saying that's fucking bullshit?!
Why make an HTTP serving environment with a massive array of tools to help you do everything you need in the web environment... And then DISABLE some of the web protocol??? What???
Not even the obscure verbs. DELETE. Is microsoft the type of bitch to delete using a GET request?? I bet they send passwords as GET parameters.8
Spent a lot of time designing a proper HTTP (dare I even say RESTful) API for our - what is until now a closed system, using a little-known/badly-supported message-over-websocket protocol to do RPC-style communications - supposedly enterprise-grade product.
I make the API spec go through several rounds of review with the rest of the dev team and customers/partners alike. After a few iterations, everybody agrees that the spec will meet the necessary requirements.
I start implementing according to spec. Because this is the first time we're actually building proper HTTP handling into the product, but we of course have to make it work at least somewhat with the RPC-style codebase, it's mostly foundational work. But still, I manage to get some initial endpoints fully implemented and working as per the spec we agreed. The first PR is created, reviews are positive, the direction is clear and what's there already works.
At this point in time, I leave on my honeymoon for two weeks. Naturally, I assume that the remaining endpoints will be completed following the outlines/example of the endpoints which I built. When I come back, the team mentions that the implementation is completed and I believe all is well.
The feature is deployed selectively to some alpha customers to start validation testing before the big rollout. It's been like that for a good month, until a few days ago when I get a question related to a PoC integration which they can't seem to get to work.
I start investigating and notice that the API hasn't been implemented according to the previously agreed upon spec at all. Not only did the team manage to implement the missing functionality in strange and some even broken ways, they also managed to refactor my previously working endpoints into being non-compliant.
Now, I'm a flexible guy. Just because something isn't done exactly as I imagined it doesn't automatically make it bad. However, I know from experience that designing a good/clear/future-proof API is a tricky exercise. I've put a lot of time and effort into deliberate design decisions that made up the spec that we all reviewed repeatedly and agreed upon. The current implementation might also be fine, but I now have to go over each endpoint again and reason about whether the implementation still fulfills the requirements (both soft and hard) that we set out to meet.
I'm met with resistance, pushback and disbelief from product management and dev co-workers alike when I raise the concern that the API might actually not be production-ready (while I'm frantically rewriting my integration tests and figuring out how the actual implementation works in comparison to what was spec'ed).
Oh, and did I mention that product management wants to release this by end-of-week?!7 -
Someone is trying to launch a brute force attack on one of my servers that I set up for an old project. According to the logs, they've tried Jorgee, they've tried directly accessing the MySQL database (with the laziest passwords), and they're now on day 4 of their brute force attack against my SSH server. I'm fairly certain that they won't be getting in (not that there's anything worth getting in the first place), but what's the standard protocol for this? Do I just wait this out, or is there something I can do to break their bot? I have fail2ban enabled, and it is doing its job, but the attacker is changing their IP address with every attack.10
-
Boss: "We need this change implemented tomorrow"
Me: "No problem, it's completed"
Boss: "Wait you didn't follow change protocol you need to allow 5 business days for review and approval"
Me: ....2 -
I used to work IT in an entertainment startup, and now I’m an iOS dev at a big entertainment company. Several people from my old company have been reaching out to eagerly tell me about their new app idea I just have to hear, asking me to help code their app— and have even hinted at me quitting my nice safe job to join their great new startup that doesn’t even exist yet.
I know this must happen to app devs all the time. What do you say?
How do you deal with telling these nice people who just don’t understand it doesn’t work that way, without crushing their dream? I have a coffee meeting planned to tell one of them “You should learn to code so you can make a proof of concept,” but I fear that won’t be received well.
What’s the standard protocol for telling people you won’t be able to code their magic app idea?10 -
I'm getting more and more triggered by my colleagues overusing words in seemingly random fashion.
The word 'perspective' comes up at least 6 times during a meeting, from an x perspective, from a y perspective. It would be fine in a design meeting but it's used _so fucking much_ I cringe every time I hear it.
Another one is 'standard', that gets put in front of every word nowadays, standard process, standard protocol, standard machine, standard pipeline. What does it mean? No clue, what does it add? Nothing.
'Please put this add the standard location.'
Where?
'The default one'
What?!
I remove it from documentation every chance I get.
Furthermore, some documentation changes make small pieces of information super long. A nice summary list of features? Make it at least 3 sentences for every bullet point. 1-sentence info with a reference link to more info? Scratch that let's include all information in that reference paragraph anyway. Sometimes they even expand English expressions for no reason, making them longer and harder to read.
WHYYYY
We always complain about shit documentation and yet we're oblivious to the fact that our own docs are so bloated. Stop repeating information, stop using useless adjectives, just put it all in 1 sentence and add dozens of code examples. One piece of code says more than a billion words.
I'm not innocent either. As a teen I was great at writing long pieces of text that seemed like a great read but were actually way too bloated for the information I needed to convey. It was great for reaching word limits.
Now I'm trying my absolute best to be as concise and to-the-point as possible because I know that nobody likes reading and people just want the information that they're looking for.
Even this rant is overly long, but thank god that it's just a rant and I can let off some steam.
Btw same thing goes for diagrams, too many icons, too much text, too many lines. When I try to submit a clean-as-fuck diagram I get asked to add more info/features to which I say No, we're already at the max.
I even got a PR for review that made some changes to add unnecessary information, I pointed it out and never heard anything from them again. I rejected the PR, and never saw a new one.
* Sigh *
It's just so strange to me, it's never clear to me why these things happen. I'm too much of a coward to point these things out unless they endanger the quality of the product. But maybe they just need somebody to tell it to them.6 -
I asked my CS teacher why my institution's domain had only the www subdomain pointing to the webspace, but not also the second-level domain itself. He then explained to me that www is the *protocol* on the internet and it's necessary for the website to be accessible, and that pointing the SLD to the webspace in addition therefore wouldn't work.
How could I ever take him serious again? He's supposed to teach networking btw.2 -
Just now while watching one of the many anime I've saved onto my file server I noticed something.. all of their files are incomplete, and so are the copies on the NTFS mirror on this WanBLowS host. The files got corrupted. I recall that I used robocopy to move the files back and forth, and yet again it lives up to its reputation of being a motherfucking piece of Winshit. FUCK YOU ROBOCOPY!!! If I wanted to fetch that anime yet again just to deal with your developers' incompetence, I'd have watched it online!! Meanwhile tell me, HOW DIFFICULT IS IT TO DEAL WITH A NETWORK FILE TRANSFER THAT EVEN USES YOUR OWN SHITFEST OF A PROTOCOL, FUCKING SMB?!! MSFT certified pieces of shit!!!!7
-
Siemens Step7 code block protection (PLC's).. It was designed to lock code that you don't want others to be able to read. All blocks are in a dbf file, so you just need to find the block record and uncomment one line, voila - source code available.
Given the massive use of Siemens PLCs in plants all over the world, and the simplicity of hacking via the S7 protocol, usually Internet-connected, it's a breeze to steal or modify the controllers' code, with possibly critical implications.
Enter Stuxnet.1 -
Anything I (am able to) build myself.
Also, things that are reasonably standardized. So you probably won't see me using a commercial NAS (needing a web browser to navigate and up-/download my files, say what?) nor would I use something like Mega, despite being encrypted. I don't like lock-in into certain clients to speak some proprietary "secure protocol". Same reason why I don't use ProtonMail or that other one.. Tutanota. As a service, use the standards that already exist, implement those well and then come offer it to me.
But yeah. Self-hosted DNS, email (modified iRedMail), Samba file server, a blog where I have unlimited editing capabilities (God I miss that feature here on devRant), ... Don't trust machines or services you don't truly own, or at least make an informed decision about them. That is not to say that every compute task should be kept local - search engines, AI and whatever else is best suited for centralized use are a different story.. but ideally, I do most of my computing locally, in a standardized way, and in a way that I completely control. Most commercial cloud services unfortunately do not offer that.
Edit: Except mail servers. Fuck mail servers. Nastiest things I've ever built, to the point where I'd argue that it was wrong to ever make email in the first place. Such a broken clusterfuck of protocols, add-ons (SPF, DKIM, DMARC etc), reputation to maintain... Fuck mail servers. Bloody soulsuckers those are. If you don't do system administration for a living, by all means do use the likes of ProtonMail and Tutanota, their security features are nonstandard but at least they (claim to) actually respect your privacy.2 -
OSC, or Open Sound Control, does not have a Boolean type. Instead, the Boolean true has type true and the Boolean false has type false.
what.
well.
at least the problem wasn't in my code? -
Got an assignment in school to make an easy project in c for embedded real time processors with a free complexity level (it was really early in the course and many had never been programming before).
Since I'd already been working a few years in development, I decided to create my own transmitter and receiver for a custom protocol between processors (we had just spent a week learning how to use existing protocols, but I made my own).
The protocol used only 1 line to communicate, half-duplex, and was self-adjusting the sync frequency during transmission. I managed to transmit data at up to 1 kbps after tweaking it a bit (the only holdback was the processor's clock frequency).
Then I got the feedback from our teacher, which basically said:
"Your protocol looks like any other protocol out there. Have you considered using an UART?"
Like yeah, I see the car you built there looks like any other car out there, have you considered using a Volvo instead?1 -
Last Monday I bought an iPhone as a little music player, and just to see how iOS works or doesn't work.. which arguments against Apple are valid, which aren't etc. And at a price point of €60 for a secondhand SE I figured, why not. And needless to say I've jailbroken it shortly after.
Initially setting up the iPhone when coming from fairly unrestricted Android ended up being quite a chore. I just wanted to use this thing as a music player, so how would you do it..?
Well you first have to set up the phone, iCloud account and whatnot, yada yada... Asks for an email address and flat out rejects your email address if it's got "apple" in it, catch-all email servers be damned I guess. So I chose ishit at my domain instead, much better. Address information for billing.. just bullshit that, give it some nulls. Phone number.. well I guess I could just give it a secondary SIM card's number.
So now the phone has been set up, more or less. To get music on it was quite a maze solving experience in its own right. There's some stuff about it on the Debian and Arch Wikis but it's fairly outdated. From the iPhone itself you can install VLC and use its app directory, which I'll get back to later. Then from e.g. Safari, download any music file.. which it downloads to iCloud.. Think Different I guess. Go to your iCloud and pull it into the iPhone for real this time. Now you can share the file to your VLC app, at which point it initializes a database for that particular app.
The databases / app storage can be considered equivalent to the /data directories for applications in Android, minus /sdcard. There is little to no shared storage between apps, most stuff works through sharing from one app to another.
Now you can connect the iPhone to your computer and see a mount point for your pictures, and one for your documents. In that documents mount point, there are directories for each app, which you can just drag files into. For some reason the AFC protocol just hangs up when you try to delete files from your computer however... Think Different?
Anyway, the music has been put on it. Such features, what a nugget! It's less bad than I thought, but still pretty fucked up.
At that point I was fairly dejected and that didn't get better with an update from iOS 14.1 to iOS 14.3. Turns out that Apple in its nannying galore now turns down the volume to 50% every half an hour or so, "for hearing safety" and "EU regulations" that don't exist. Saying that I was fuming and wanting to smack this piece of shit into the wall would be an understatement. And even among the iSheep, I found very few people that thought this is fine. Though despite all that, there were still some. I have no idea what it would take to make those people finally reconsider.. maybe Tim Cook himself shoving an iPhone up their ass, or maybe they'd be honored that Tim Cook noticed them even then... But I digress.
And then, then it really started to take off because I finally ended up jailbreaking the thing. Many people think that it's only third-party apps, but that is far from true. It is equivalent to rooting, and you do get access to a Unix root account by doing it. The way you do it is usually a bootkit, which in a desktop's ring model would be a negative ring. The access level is extremely high.
So you can root it, great. What use is that in a locked down system where there's nothing available..? Aha, that's where the next thing comes in, 2 actually. Cydia has an OpenSSH server in it, and it just binds to port 22 and supports all of OpenSSH's known goodness. All of it, I'm using ed25519 keys and a CA to log into my phone! Fuck yea boi, what a nugget! This is better than Android even! And it doesn't end there.. there's a second thing it has up its sleeve. This thing has an apt package manager in it, which is easily equivalent to what Termux offers, at the system level! You can install not just common CLI applications, but even graphical apps from Cydia over the network!
Without a jailbreak, I would say that iOS is pretty fucking terrible and if you care about modding, you shouldn't use it. But jailbroken, fufu.. this thing trades many blows with Android in the modding scene. I've said it before, but what a nugget!8 -
You know that you've made it as a dev when you realize that your creation has the ability to affect your life and also the lives of others.
It came to me much earlier in life (college final semester).
F: Hey there is this girl that i am trying to talk but she never replies me on Facebook i waste to much time looking for her online status , i wish if i can say hi as soon as she comes online
HF: (first reaction) leave her alone man, (dev reaction) hmm, fb is probably using a jabber protocol like xmpp, I could make an xmpp client and sync her online status. If the status changes, drop a notification; also the asmack lib provides a way to send a msg to a user in your chat room, sooo we good !!
At the time I was handling 3 android apps, implemented this and called it FacebookStalker - you can select who you wanna stalk and what msg you wanna send them as soon as they come online.
Google obviously didn’t liked it
For a long time I judged myself: how could I make this creepy app?
Later I realized that it was not the app - I was suspended because I used a DRM-marked image as the icon.
Google never tells you the actual reason why your app is suspended so you cannot fix it.
I learned to be mindful of what I code because it started having real impact. Losing my dev account was like losing everything at that point. I had nothing else25
I've been appreciating working with GRPC for quite a long time now.
Amazingly fast and full-featured protocol! No complaints at all.
Although I felt something was missing...
Back in the days of HTTP, we were all given very simple tools for making requests to verify behaviours and data of any of our HTTP endpoints, tools like curl, postman, wget and so on...
This toolset gives us definitely a nice and quick way to explore our HTTP services, debug them when necessary and be efficient.
This is probably what I miss the most from HTTP.
When you want to debug a remote endpoint with GRPC, you need to actually write a client by hand (in any of the supported language) then run it.
There are alternatives in the open source world, but those wants you to either configure the server to support Reflection or add a proxy in front of your services to be able to query them in a simpler way.
This is not how things work in 2018 almost 2019.
We want simple, quick and efficient tools that make our life easier and having problems more under control.
I'm a developer my self and I feel this on my skin every day. I don't want to change my server or add an infrastructure component for the simple reason of being able to query it in a simpler way!
However, This exact problem has been solved many times from HTTP or other protocols, so we should do something about our beloved GRPC.
Fine! I've told to my self. Let's fix this.
A few weeks later...
I'm glad to announce the first release of BloomRPC - the first GRPC client GUI that is nice and simple.
It allows you to query and explore your GRPC services with just a couple of clicks, without any additional modification to what you have running right now! Just install the client and start making requests.
It has been built with Electron, so it's a desktop app and it supports the 3 major platforms: Mac, Linux, Windows.
Check out the repository on GitHub: https://github.com/uw-labs/bloomrpc
This is the first step towards the goal of having a simple and efficient way of querying GRPC services!
Keep in mind that It is in its first release, so improvements will follow along with future releases.
Your feedback and contributions are very welcome.
If you have the same frustration with GRPC I hope BloomRPC will make you a bit happier!3 -
MTP is utter garbage and belongs to the technological hall of shame.
MTP (media transfer protocol, or, more accurately, MOST TERRIBLE PROTOCOL) sometimes spontaneously stops responding, causing Windows Explorer to show its green placebo progress bar inside the file path bar which never reaches the end, and sometimes to whiningly show "(not responding)" with that white layer of mist fading in. Sometimes lists files' dates as 1970-01-01 (which is the Unix epoch), sometimes shows former names of folders prior to being renamed, even after refreshing. I refer to them as "ghost folders". As well known, large directories load extremely slowly in MTP. A directory listing with one thousand files could take well over a minute to load. On mass storage and FTP? Three seconds at most. Sometimes, new files are not even listed until rebooting the smartphone!
Arguably, MTP "has" no bugs. It IS a bug. There is so much more wrong with it that it does not even fit into one post. Therefore it has to be expanded into the comments.
When moving files within an MTP device, MTP does not directly move the selected files, but creates a copy and then deletes the source file, causing both needless wear on the mobile device's flash memory and the loss of the files' original date and time attributes. Sometimes, the simple act of renaming a file causes Windows Explorer to stop responding until the MTP device is unplugged. It actually once unfroze after more than half an hour, during which I did something else in the meantime, but come on, who likes to wait that long? Thankfully, this has not happened to me on Linux file managers such as Nemo yet.
When moving files out using MTP, Windows Explorer does not move and delete each selected file individually, but only deletes the whole selection after finishing the transfer. This means that if the process crashes, no space has been freed on the MTP device (usually a smartphone), and one will have to carefully sort out a mess of duplicates. Linux file managers thankfully delete the source files individually.
Also, for each file transferred from an MTP device onto a mass storage device, Windows has the strange behaviour of briefly creating a file on the target device with the size of the entire selection. It does not actually write that amount of data for each file, since it couldn't do so in this short time, but the current file is listed with that size in Windows Explorer. You can test this by refreshing the target directory shortly after starting a file transfer of multiple selected files originating from an MTP device. For example, when copying or moving out 01.MP4 to 10.MP4, while 01.MP4 is being written, it is listed with the file size of all 01.MP4 to 10.MP4 combined, on the target device, and the file actually exists with that size on the file system for a brief moment. The same happens with each file of the selection. This means that the target device needs almost twice the free space as the selection of files on the source MTP device to be able to accept the incoming files, since the last file, 10.MP4 in this example, temporarily has the total size of 01.MP4 to 10.MP4. This strange behaviour has been on Windows since at least Windows 7, presumably since Microsoft implemented MTP, and has still not been changed. Perhaps the goal is to reserve space on the target device? However, it reserves far too much space.
When transferring from MTP to a UDF file system, it sometimes fails to transfer ZIP files and only copies the first few bytes. 208 or 74 bytes in my testing.
When transferring several thousand files, Windows Explorer also sometimes decides to quit and restart in the midst of the transfer. Also, I sometimes move files out by loading a part of the directory listing in Windows Explorer and then hitting "Esc" because it would take too long to load the entire directory listing. It actually once assigned the wrong file names, which I noticed since file naming conflicts would occur where the source and target files with the same names would have different sizes and time stamps. Both files were intact, but the target file had the name of a different file. You'd think they would figure something like this out after two decades, but no. On Linux, the MTP directory listing is only shown after it is loaded in its entirety. However, if the directory has too many files, it fails with a "libmtp: couldn't get object handles" error without listing anything.
Sometimes, a folder appears empty until refreshing one more time. Sometimes, copying a folder out causes a blank folder to be copied to the target. This is why on MTP, only a selection of files and never folders should be moved out, due to the risk of the folder being deleted without everything having been transferred completely.
(continued below)29 -
If only students would write proper answers in their answer sheets, that would be great. FFS one of them wrote HTML - HyperText Transfer Protocol.2
-
You know, I agree with the opinion that everyone uses the tools they know can get the job done.
However, sometimes I just wish people wouldn't pick the first tool for the job that comes up in Google's search results. People should look at more tools and then decide which one is going to suit their use case best.
I can't for the life of me figure out why some people prefer using ad-ridden tools over ad-free, even open-source ones that work better in every way. The best example for this is people using μTorrent or BitTorrent® for the BitTorrent protocol instead of Deluge, Transmission, qBittorrent, and some others. They just typed in "how2download torrent for free uwu" and downloaded the objectively worst tool.
Pick your tools wisely, not by letting some search algorithm recommend you the worst one.9 -
Okay, mine is actually mildly interesting.
I was, at the time, obsessed with operating systems. The only thing I knew how to do (and I only knew how to do it poorly!) was make websites. And thus, Frames(TM) was born.
It was really labored for what it was. The whole thing worked off iframes to create different "Windows" which you could drag around the screen in a typical window-based environment. It had a start menu (Without search - I wasn't that good yet), task bar, background image, the whole 9 yards.
Some highlights from that project:
- Not hosted anywhere. Everything was file:/// protocol
- Originally, everything was statically created, and I learned about document.createElement during this project
- To communicate between the "Operating system" and the different frames, I used localStorage, which was continuously exec'ing anything it could find. Smart smart boi.
- Of course, the only thing available was web storage. The "Hard drive" was about 5MB, and if you cleared browsing data, goodbye everything!
Hours and hours happily dumped into that project, but I am definitely happy it is gone forever. -
Firefox and Chrome removing FTP support in 2021 was a terrible decision.
Web browsers were simply the more convenient FTP browsers, more than file managers, due to browsers' built-in multimedia capabilities like photo viewing and opening documents, distinct purple highlighting of already opened directories and files, browsing history, familiar mouse shortcuts like middle click for new tab, and no possibility of accidental writes due to a botched drag-and-drop operation or similar.
If I wanted to browse an FTP server in "read-only mode", web browsers used to be the preferred choice.12 -
Why do I program everything myself in C, even a rest service? By writing everything yourself in C you make simple things complex to make complex things simple.
Writing a rest service, for example, teaches you part of the http protocol, how sockets work, and how to create a parser (in this case json). Three things you would miss if you used python.
On top of that, your rest service uses WAY fewer resources than one written in python, for example. Especially CPU.
Allocating and free-ing still often have issues there, but I consider it a skill problem / discipline issue. Not blaming C for that. The rules are clear.12 -
I'm an embedded devices dev. As part of my job I take off the shelf devices and write libraries so that our JavaScript team can interface with them. If I have to deal with one more custom implementation of a standard protocol I'm going to freak the fuck out. Like fuck there are only 5 or so major communication protocols at the hardware level there is no fucking reason to reinvent some shittier protocol over a well documented, widely used protocol that has been around for 30 years. (Modbus for anyone who cares)2
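And it's not like the standard is hard to talk to. Reading a few holding registers over Modbus TCP is roughly this much Python (host, unit id and register addresses are made up, and there's no retry or error handling):

import socket
import struct

def read_holding_registers(host, start, count, unit=1, port=502):
    pdu = struct.pack(">BHH", 0x03, start, count)                # function 0x03 = read holding registers
    adu = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit) + pdu   # MBAP header + PDU
    with socket.create_connection((host, port), timeout=2) as s:
        s.sendall(adu)
        tid, proto, length, unit_id = struct.unpack(">HHHB", s.recv(7))
        body = s.recv(length - 1)                                # function code, byte count, register data
        return struct.unpack(">" + "H" * count, body[2:])

print(read_holding_registers("192.168.1.50", start=0, count=4))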
-
I've made the json protocol. It's a protocol containing only json. No http or anything.
To parse a json object from a stream, you need a function that returns the length of the first object/array in all your received data. The result of that function is used to cut the right chunk of json to deserialize.
For such a function, the json needs to be parsed, so I wrote that function in C to be used with my C server and Python client. I finally wrapped a C function in a Python function in a way that has a real benefit / use case. Otherwise you'd have to validate bit by bit with the python json parser, and that's slow while streaming. Some messages are quite big.
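A pure-Python equivalent of what that C function does looks roughly like this (the real one is C for speed, but the logic is just bracket counting while skipping strings):

def first_json_length(data: bytes) -> int:
    # byte length of the first complete JSON object/array in `data`, 0 if incomplete
    depth, in_string, escaped = 0, False, False
    for i, byte in enumerate(data):
        char = chr(byte)
        if in_string:
            if escaped:
                escaped = False
            elif char == "\\":
                escaped = True
            elif char == '"':
                in_string = False
        elif char == '"':
            in_string = True
        elif char in "{[":
            depth += 1
        elif char in "}]":
            depth -= 1
            if depth == 0:
                return i + 1
    return 0

The receive loop then just buffers incoming bytes, calls it, and json.loads(buffer[:length]) whenever it returns non-zero.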
The advantage of this protocol is that it's full-duplex.
I'm very happy!36 -
It's still in development. It often says the opposite from what is expected. Try Retoor1b chatbot at https://llm.molodetz.nl
This was the result after building the bot + chat website from scratch, including training with embeddings. The design is generated by GPT; I tried my own but they were all ugly.
It's quite cool huh? Ask it to write some code for you. It's absolutely terrible. If it's down, try again in 5 minutes. I'm still working on it.
What's the result? I finally have a toolkit to make good/serious bots. The code could be a bit better, but that's for another day.
Stack: self-written webserver (and yes, you can post a gb to it or ddos it. Not sure if it survives the first one. I should limit requests to one mb anyway. Http headers may officially not be more than 4096 in total) since I know the http protocol by heart anyway. Python websockets module. Asyncio, chromadb.
It could have xss issues. Don't care.
Let me know what you think41 -
"let's use git for this game jam"
Wait! Don't go! I love git and use it on every project I work on! You'll have to hear me out here.
This was 4 years ago, at my first Global Game Jam. Every jam and game I'd worked on up to that point, I was the only Dev; no need for git, as backups were more than enough. I joined a group with high hopes for the game jam, with three coders and a proper art team.
The entire jam was "1 step forward 2 steps back", as git somehow constantly overwrote code as fast as we could write it.
By the end of the jam we barely had anything to show for our hard work. The takeaway isn't even about git. It's simply to never work with other people. Git is a great protocol but it can't stop people from accidentally fucking other people over. Every jam since, I've worked on my own and had a far better time of it.3 -
Soon I will be talking about a new communication protocol between Raspberry Pi and Arduino ... At the meeting ... IoT enthusiasts.
I am excited and slightly upset.7 -
I just found out there's a 418 HTTP status code that stands for "I'm a teapot", specified by RFC2324 which "describes HTCPCP, a protocol for controlling, monitoring, and diagnosing coffee pots". I know it's an april fools joke but I still find it hilarious that there is an RFC for that.9
-
I don't know if this is a problem only in Belgium or also in other countries but while I love Bluetooth for audio playback (headsets, speakers and everything) despite being extremely convoluted as a protocol.. FUCK Bluetooth keyboards.
I've tried several of them, from various brands. Pairing, setting the Belgian keyboard layout (which on that shitty Android 7.0 tablet that I want to use the fucking things with apparently has to be done *every fucking time you connect*, because reasons) - all fine. Except half the keys don't fucking map properly. A keymap, it doesn't get easier than that! How hard is it to make buttons map to the right keys!? They're literally fucking push buttons on a matrix! Seeing which points in the circuit make contact and sending that off to wherever it needs to go!
And to put the icing on the cake? USB keyboards with the same fucking layout settings work without any problems. So it's extremely likely that it's something in those shitty keyboards' controllers or Bluetooth going full rart on all of them.
Of course, Bluetooth being as convoluted as it is, manufacturers just copy each others' implementations of it if they can.. so there's that.
Can really nobody make a product halfway decent anymore before putting it on the market!?
Another one bites the dust.. JUNK!!! Every single goddamn one of them!1 -
When I first started my current job, 2.5 years ago, I helped write the class that told the machine how to dispense and deposit money.
When the other programmer left, I decided to refactor that section. I wrote a new class that told the machine how to dispense and deposit money.
We are integrating new hardware that has a very different protocol of communication. I am making a library that will convert universal commands into vendor specific function calls. I am writing a new library that tells the machine how to dispense and deposit money.3 -
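The library part is basically the adapter pattern. The vendor calls below are hypothetical, but the shape is this:

class CashDevice:
    # the universal commands the rest of the codebase talks to
    def dispense(self, amount_cents: int) -> None: ...
    def deposit(self) -> int: ...

class VendorXDevice(CashDevice):
    # wraps one vendor's own protocol/driver behind the universal commands
    def __init__(self, driver):
        self.driver = driver

    def dispense(self, amount_cents: int) -> None:
        self.driver.payout(amount_cents / 100.0)        # hypothetical vendor call, wants float units

    def deposit(self) -> int:
        return int(self.driver.collect_notes() * 100)   # hypothetical vendor call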
Is it just me or has german tv become more and more ad-ridden? A few years ago it was considered a dick move to play a single ad during an episode.
Now some channels continue the tv program but make it smaller so they can show an ad on the sides. And whenever i switch to the most popular channels i get an overlay that is basically an ad too.
I switched to livestreams and web-based tv for a reason, and that's because tv ads get more intrusive with every year. And don't get me started on the bullshit that smart tv's do nowadays, by that i mean sending data home.
I can't wait for tv to die out or be replaced by an ip-based protocol, just like telephones were.6
#wk13
Client: Let's get our car online using the phone as the router!
Me: let's do that!
Client: Can we use NFC as the protocol?
Me: Probably, but just to automate the connection..
Client: No we should use NFC for the entire session!
Me: No!
Client: Why not? It's new, it's happening, bosses will be excited!
Me: You do know what the N in NFC stands for right!
Client: New?
Me: -_- thinking "I hope you lose your genitals to a horrible case of blue waffles.."8 -
Being 46 and finally having the chance to focus on software development after years of BA/PM roles, flogging the market trying to get a junior gig, then one day painting a shed with my 16-year-old, who I introduced to programming about 6 months ago, and listening to him speak at length on protocol programming, the finer variances between python and swift and his own development of a text-based RPG system where he is creating randomized map generation, gear customization etc., only to realize as paint glides down my arm:
" I'M FREEKIN' OLD!!!!"
When did my brain stop absorbing like a sponge and behave more like a brick?1 -
Fuck you windows 10. Fuck you private keys. Fuck you tortoise git. Fuck you git bash. Fuck you cygwin. Want 3x hours of my life back. Had an auth problem... Had to reinstall all the above on windows to connect to my private repo. Took me 5 minutes to connect after reinstalling all the tools. Grrrrrr. And I'll never know why it wouldn't connect apart from fatal protocol error: bad line length character.. I tried every stack overflow answer... I nearly bricked my gitlab CE...and it was windows being a motherslut8
-
My grandfather is at age 72 & don't know much about technology. He forward me this message on whatsapp bcz I'm a software engineer. He made my day...
What is the difference between http and https ?
Time to know this with 32 lakh debit cards compromised in India.
Many of you may be aware of this difference, but it is
worth sharing for any that are not.....
The main difference between http:// and https:// is all
about keeping you secure
HTTP stands for Hyper Text Transfer Protocol
The S (big surprise) stands for "Secure".. If you visit a
Website or web page, and look at the address in the web browser, it likely begins with the following: http://.
This means that the website is talking to your browser using
the regular unsecured language. In other words, it is possible for someone to "eavesdrop" on your computer's conversation with the Website. If you fill out a form on the website, someone might see the information you send to that site.
This is why you never ever enter your credit card number in an
Http website! But if the web address begins with https://, that means your computer is talking to the website in a
Secure code that no one can eavesdrop on.
You understand why this is so important, right?
If a website ever asks you to enter your Credit/Debit card
Information, you should automatically look to see if the web
address begins with https://.
If it doesn't, You should NEVER enter sensitive
Information....such as a credit/debit card number.
PASS IT ON (You may save someone a lot of grief).
GK:
While checking the name of any website, first look for the domain extension (.com or .org, .co.in, .net etc). The name just before this is the domain name of the website. Eg, in the above example, http://amazon.diwali-festivals.com, the word before .com is "diwali-festivals" (and NOT "amazon"). So, this webpage does not belong to amazon.com but belongs to "diwali-festivals.com", which we all haven't heard before.
You can similarly check for bank frauds.
Before your ebanking logins, make sure that the name just before ".com" is the name of your bank. "Something.icicibank.com" belongs to icici, but icicibank.some1else.com belongs to "some1else".
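For the devs scrolling past: the same check the message describes fits in a few lines of Python (a naive sketch only - it ignores multi-part TLDs like .co.in, and expected_name is just a placeholder):

    from urllib.parse import urlparse

    def looks_legit(url, expected_name):
        # The message's two checks: https scheme, and the label just
        # before the TLD must be the brand you expect.
        parts = urlparse(url)
        if parts.scheme != "https":
            return False
        labels = (parts.hostname or "").split(".")
        return len(labels) >= 2 and labels[-2] == expected_name

    print(looks_legit("http://amazon.diwali-festivals.com", "amazon"))  # False
    print(looks_legit("https://www.amazon.com", "amazon"))              # True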
👆 *Simple but good knowledge to have at times like these* 👆3 -
When your co-worker uses needless terminology. It’s your day off and you’re texting from bed.
cw: Do you have access to the email client?
me: You mean the work email? Yes.
cw: Did you set up access to the database or an FTP protocol for userX?
me: You mean an admin account? Yes.
cw: Were we planning on adding more calls to action on projectX?
me: You mean site links? Yes.2 -
All sysadmins, PLEASE! For the love of God, just block port 21 in any direction, from anywhere, going anywhere... FTP needs to die. The f**king protocol predates TCP/IP, for God's sake! We need to stop project managers using it, it's a nightmare!!9
-
Finally found my topic for the 10 min presentation :
Network protocol security
After giving an overview, I'm gonna talk about HTTP, FTP, Telnet and NetBIOS, showing them a sample packet and how easily the username and password can be found if these protocols are used.
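If you want a live demo to go with the sample packets, a minimal scapy sketch along these lines does the trick (assuming scapy is installed and you run it with capture privileges; the matching is deliberately simplistic):

    from scapy.all import sniff, TCP, Raw

    def show_cleartext(pkt):
        # FTP (21) and Telnet (23) send credentials in plain text
        if pkt.haslayer(TCP) and pkt.haslayer(Raw):
            payload = pkt[Raw].load.decode(errors="ignore")
            if payload.upper().startswith(("USER ", "PASS ")):
                print(pkt[TCP].dport, payload.strip())

    sniff(filter="tcp port 21 or tcp port 23", prn=show_cleartext, store=False)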
Any other recommendations?5 -
When you think about it... Moses was the first guy ever to download data from The Cloud and distribute it via a P2P protocol [torrent]. The first IT pirate ever.
Happy Easter!2 -
Today I learned about binary encoding formats that serve as alternatives to JSON, such as Google Protocol Buffers.
I like these binary formats.
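Not Protocol Buffers itself, but a quick stdlib sketch of why these binary formats appeal: the same record, once as JSON and once with a fixed binary layout (the struct format here is just an illustration):

    import json, struct

    record = {"id": 42, "temperature": 21.5, "ok": True}

    as_json = json.dumps(record).encode()                # self-describing text
    as_binary = struct.pack("<i d ?", 42, 21.5, True)    # schema lives in code

    print(len(as_json), len(as_binary))                  # 43 bytes vs 13 bytes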
Just thought I would share this here so others would benefit as well (and please share your experience if it is relevant)8 -
4 years ago I made a personal goal/plan to be a full stack developer. Meaning a good understanding of any development between os level code and web/front end user experience.
Over the years this term 'full stack' has been abused greatly and now basically means 'a javascript developer that generally knows what they are talking about'.
So now, devRant collective I ask you. What do you call a developer with good skills in:
- os level code (c, c++ and os apis)
- database level tech (advanced querying and db algo/modeling)
- software architecture
- application level (workflow and business logic)
- transport level (protocol design and usage)
- front end tech (graphics programming and event driven paradigm)
- user experience14 -
My first task in my current company, a few years ago.
I had to add features to a 10 year old microcontroller-based device written in C.
There was a struct named "global", which held hundreds of other structs that held variables or even more structs.
If one had printed the structure of this mess, it would have needed several pages.
This "global"-struct was used in every single sourcefile to store and pass data around. Obviously there was no documentation and often useless comments.
Additionally there were a few protocol stacks involved, mainly similar, only differing in one or two protocol layers.
The implementation of the protocol stack was by setting flags in the "global"-struct in every protocol layer and having the application data in a buffer.
The complete telegram with all layer specific data (header, checksums, etc.) was then build at one single point right before sending it, based on the flags and the data buffer.
As there was no chance to reuse protocol layers with this implementation, three protocol implementations with their own special telegram builders existed in parallel, although they were nearly identical.
I needed a fourth variant of the protocol stack, so I had no chance but to make another copy with some minor changes.
But there was a benefit from this task.
As I had to write the software for the successor of this device from scratch, I had learned, for many things, how not to do them :-) -
Probably had my worst half-week ever this week.
Customer's CRM system, the read and edit masks just...stopped existing on last week friday. CRM fell back on some default masks for the dataset. No way to create new masks directly without putting the whole system upside down.
We couldn't do anything anyway because they reported the issue literally as we all were about to leave for the weekend and our boss was like "Ah nah, we'll do it next week."
Our brains were already fried anyway...
I mail the reporter that we've registered their issue, will investigate and report back ASAP once we've got news.
Monday rolls around, I'm whacking my head against their system trying to figure out what the fuck went wrong and how to solve it, and I come up empty; not that terrible, since the masks only stopped existing in the webclient version of the system and they can still use the windows client, so they can still work.
Tuesday rolls around, I'm at an on site training for an ERP system with my boss at a remote company. Get an email in midst of the training, I was doing protocol.
Guy from the afflicted company goes and tells me that the issue has somehow spread to his colleague and him...IN THE WINDOWS CLIENT.
I'm fucking flabbergasted, so to speak, since the masks for the windows client and the web client are totally isolated from one another.
After we're back at our company, I investigate, less efficiently this time because my brain got fried at the training. I come up empty again.
NOW TODAY: Discuss further proceedings with my boss, he's not pissed at me or anything, just to say, but we're both worried, obviously.
Then at 10:20, a guy from the afflicted company mails me in an annoyed tone that the masks are still broken.
11:00, we figure out a workaround so the windows client users can at least work again, albeit limited.
11:10, I mail the guy, telling him that although we're still not able to fully work everything out and are still investigating, we've made a workaround so they can at least work again.
11:20, the guy mails me in a pissed tone around the lines of "This is very very important and must be fixed ASAP or else we'll not be able to work at all [...]"
And I think like "Dude I literally just told you like 8 minutes ago that there's are workaround so you'll be able to at least work again..."
Forward the mail to boss, we meet up quickly to discuss how in God's name we can deescalate this mfer.
11:31, the guy mails me again, all apologetically this time: "Stop! All is good, I just now fully read your mail, thanks for implementing the workaround, nothing will come to a standstill [...]"
BRUH CAN YOU NOT FUCKING READ BEFORE ESCALATING SHIT
Fuck customers. Dumb fucking cretins unable to fucking read.
The issue is still unresolved. Support of the CRM software lets us sit on our collective asses and wait.
There is no such thing as stable software, it's a myth.
Every corporate software is like an ever-decaying semi-corpse of a brain dead patient slowly getting worse and worse but not fucking dying.
Rant over. -
Apache Tomcat vulnerability "GHOSTCAT" allows attackers to read config files and implant web shells. All versions from the last 13 years are vulnerable.
According to a Security Researcher at Chaitin Tech: due to a flaw in the Tomcat AJP protocol (the channel Tomcat uses to accept connections from the outside, pass requests to the corresponding web application for processing and return the response), an attacker can read or include any files in Tomcat's webapp directories.
For example, an attacker can read the web app's configuration files or source code. In addition, if the target web application has a file upload function, the attacker may execute malicious code on the target host by exploiting file inclusion through the "GHOSTCAT" vulnerability.
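If you just want to know whether one of your own boxes even exposes the AJP connector before worrying about the rest, a tiny check is enough (a sketch, not an exploit; the hostname is a placeholder and 8009 is Tomcat's default AJP port):

    import socket

    def ajp_reachable(host, port=8009, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(ajp_reachable("my-tomcat-box.local"))  # placeholder hostname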
Apache Tomcat has officially released versions 9.0.31, 8.5.51, and 7.0.100 to fix this vulnerability.5 -
Fax machines connected to VoIP connections...
Had a nightmare recently, where my father's machine that he really needs refused to work after he moved apartments.
Countless calls with different tech departments, a furious father and two weeks later, they found out that they had forgotten to activate the protocol.3 -
What if people, life, humanity, the universe is just a cluster of CPUs running a giant Recurrent Neural Network algorithm? 🤔
-Sun and food == power source
-People == semiconductors
-Earth/a Galaxy == a single CPU
-Universe == a local grouping of nearby nodes; so far the ones we've discovered are dead or not on the same data transport protocol/port as us
-Universal Expansion == the search algorithm
-Blackholes == sector failures
-Big Bang == God turns on his PC, starts the program
-Big Crunch == rm -rf4 -
So I just had a bit of a shower thought. Suppose you could get the linguists to break a language down and define all the rules that make up that language as if it were a protocol - exceptions included. If you get an arbitrary string of text, could you match against those rules, then break that down to the information it contains, and use that information against a new rule set to construct a new valid sentence containing the same information. Would you just have made the ultimate translator?16
-
So, have you all got your HTTPS protocols in order yet? Aren't you excited about the future?
Sincerely, Google27 -
I wanted to develop a programming language, since all programming languages have some shortcomings of their own. As I got further along in developing a custom parser generator and so forth, I reached the point where I had to consider implementing the Language Server Protocol for the language, only to realize that while LSP was, ironically, supposed to make autocompletion and other features easily available to any editor, you still end up having to write plugins/extensions for editors like Visual Studio and Visual Studio Code anyway, despite the fact that LSP was meant to solve exactly that. Meanwhile over in Linux land, we have the Kate editor, which can be configured to simply connect to an LSP server and requires no plugin/extension to do so; you just specify it in a JSON config and that's that.
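That's the absurd part: the wire format itself is trivial - JSON-RPC messages with a Content-Length header - which is exactly why Kate can speak it from a plain JSON config. A rough sketch of what a client writes to the server's stdin (params trimmed to the bare minimum):

    import json

    def lsp_frame(message):
        # LSP framing: a Content-Length header, a blank line, then the JSON-RPC body
        body = json.dumps(message).encode("utf-8")
        return b"Content-Length: %d\r\n\r\n" % len(body) + body

    initialize = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {"processId": None, "rootUri": None, "capabilities": {}},
    }
    print(lsp_frame(initialize))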
Microsoft... you created the LSP protocol and yet you still want plugins/extensions for VSCode/Visual Studio even though LSP was made to address that... Make up your mind, ffs. P.S. I have no interest in writing 100,000 LOC of extension/plugin for your editor if it can't get its $#!^ together.19 -
Legacy tech be like:
"The connection to this site uses TLS 1.0 (an obsolete protocol), RSA (an obsolete key exchange), and AES_128_CBC with HMAC-SHA1 (an obsolete cipher)."2 -
I just saw Kickstarter's blog post about moving over to the Blockchain. They're doing it because, uh, protocols, or something. No joke, here's a direct quote from their post:
"You may have heard of HTTP (Hypertext Transfer Protocol) which helps you browse the web, or SMTP (Simple Mail Transfer Protocol) which helps you send email. Protocols like these make up the unseen infrastructure of the internet. Imagine that, but for crowdfunding creative projects."
What the fuck does that even mean? The rest of the blog post is more of the same. They packed it full of every crypto buzzword they could find while also not actually providing any useful information.
Full article here, if anyone wants to read a headache-inducing pile of nonsense: https://kickstarter.com/articles/...9 -
WHAT THE FUCK!!!
Whoever says that MacOS is superior or at least on par with Linux in terms of ease of maintenance -- feel free to stick a backwards pinecone up your asshole and push it down with a baseball bat with 5" nails.
The FUCK is this nightmare. You can't even start a process w/o logging in via GUI, password changes are another horror story, esp. for users who have never logged in [warning says to change keystore pass separately... Which doesn't exist...], vnc uses some proprietary protocol, ...
Seriously, even SunOS is easier to maintain, not to mention AIX, compared to this BSD nightmare of a UNIXoid....
Wtf....3 -
I just pulled an all-nighter to write an usability testing protocol in Microsoft Word for a medical mobile app.
- statement of consent and privacy declaration; easy: 1 hour
- structuring the protocol and writing the different use cases; easy: 1-2 hours
- layouting the document so the tables don't look like utter shit and adding dotted lines into the columns so the user can write in it without fucking up the whole document when resizing a simple column width; a fucking nightmare: 5 hours
Why is the creation of a nice layout so inefficient to the point where I'd rather design a form in CSS and send it to my printer, get your shit together!3 -
Today in Cursed Java error messages, this beauty: `java.net.MalformedURLException: no protocol: "http://knowledgebase-api.development.svc.cluster.local/..."`
Yes, no protocol. You read that right. There is in fact a protocol there.10 -
Years ago I was working in local cinema as a student job from time to time and used to sleep after shifts at my uncle's. Uncle did not had internet but there were so many wlans all around. Since I had nothing to do for hours after shift, I downloaded Backtrack linux at home, made live dvd of it and saved a two articles of "how to hack wifi" to text files.
It took me 4 hours to break WEP, since I was total lame, and it was the only one WEP around. They also had mac restrictions set to router, so I changed my mac address to one of their devices, logged in to router and added our mac address. For my uncle it was complete magic but since he is total geek to linux he liked it.
Fast forward weeks later. When I came to my uncle's house he was downloading like ton of linux distributions. Literally each one. Gigabytes of data. I told him not to do so because sooner or later neighbour will notice, but he did not care. Guess what, he notices, probably slow internet and (maybe) bigger bills, I do not know, but owner just changed protocol to WPA2, not changing password. So the story continued for almost 2 years. Felt a bit sorry for neighbour but did not expect such an outcome. I just wanted to watch youtube videos and scroll social networks, keeping low profile so no one notice.1 -
I used to be a sysadmin and to some extent I still am. But I absolutely fucking hated the software I had to work with, despite server software having a focus on stability and rigid testing instead of new features *cough* bugs.
After ranting about the "do I really have to do everything myself?!" for long enough, I went ahead and did it. Problem is, the list of stuff to do is years upon years long. Off the top of my head, there's this Android application called DAVx5. It's a CalDAV / CardDAV client. Both of those are extensions to WebDAV which in turn is an extension of HTTP. Should be simple enough. Should be! I paid for that godforsaken piece of software, but don't you dare to delete a calendar entry. Don't you dare to update it in one place and expect it to push that change to another device. And despite "server errors" (the client is fucked, face it you piece of trash app!), just keep on trying, trying and trying some more. Error handling be damned! Notifications be damned! One week that piece of shit lasted for, on 2 Android phones. The Radicale server, that's still running. Both phones however are now out of sync and both of them are complaining about "400 I fucked up my request".
Now that is just a simple example. CalDAV and CardDAV are not complicated protocols. In fact you'd be surprised how easy most protocols are. SMTP email? That's 4 commands and spammers still fuck it up. HTTP GET? That's just 1 command. You may have to do it a few times over to request all the JavaScript shit, but still. None of this is hard. Why do people still keep fucking it up? Is reading a fucking RFC when you're implementing a goddamn protocol so damn hard? Correctness be damned, just like the memory? If you're one of those people, kill yourself.
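And that's not an exaggeration - a bare HTTP/1.1 GET really is one request line plus a couple of headers over a socket. A minimal sketch:

    import socket

    host = "example.com"
    request = (
        "GET / HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode()

    with socket.create_connection((host, 80)) as sock:
        sock.sendall(request)
        response = b""
        while chunk := sock.recv(4096):
            response += chunk

    print(response.split(b"\r\n")[0])  # e.g. b'HTTP/1.1 200 OK'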
So yeah. I started writing my own implementations out of pure spite. Because I hated the industry so fucking much. And surprisingly, my software does tend to be lightweight and usually reasonably stable. I wonder why! Maybe it's because I care. Maybe people should care more often about their trade, rather than those filthy 6 figures. There's a reason why you're being paid that much. Writing a steaming pile of dogshit shouldn't be one of them.6 -
MTP is complete garbage. I want mass storage back.
The media transfer protocol (MTP) occasionally discovers new creative ways of failure. Frequently, directory listings take minutes to load or fail to load at all, and it freezes up infinitely (until disconnected) when renaming an item, and I can not even do two things simultaneously.
While files are being moved, I can not browse pictures or watch videos from the smartphone.
Sometimes, files are listed with the date 1970-01-01 (Unix epoch) instead of their correct date. Sometimes, files do not appear at all, which makes it unsafe to move directories from the device.
MTP lacks random access. If I want to play a two-gigabyte 4K 2160p video and seek in the video, guess what: I need to copy it to my computer's local mass storage first because MTP lacks random access.
When transferring high numbers of files, MTP has to slooooowly enumerate (or "prepare" or "calculate the time of") them all, which might even take longer than mass storage would need for the entire process. This means MTP might start copying or moving the actual files when mass storage is already finished.
Today, the "preparing to move" process was especially slow: five minutes for around 150 files! How am I supposed to find out what caused this random malfunction?
MTP sometimes drives me insane. I want mass storage back, at least for the MicroSD memory card, which uses a widely supported file system.
Imagine a 2010 $100 Android phone is better at file transfer than a 2022 $1000 Android phone (or iPhone, for that matter).3 -
Here's something I'm sick of seeing: server software documentation that doesn't fully list what ports they are using. Too often I've read things like this: "AcmeServe uses ports 400, 8001, and 8002". Great, but why are you making me guess if those are TCP or UDP?
And sometimes it's: "AcmeServe uses ports 400 (UDP), 8001 (TCP), and 8002 (TCP)". Soooo, which ones do I port forward? Are you really going to make me have to use netstat -a to find out?
I can't understand the mentality behind that. They obviously realise you need to setup firewalls, but they half-arse it by only telling you the port numbers but not the protocol and/or if they're inbound/outbound.
Please, list what protocol the port is and if it's listening or outbound. Oh, and consider also mentioning where the port numbers come from in your config files, so I don't have to go playing a guessing game with a bunch of XML files should someone have overridden the default port numbers.1 -
For all the hate that Java gets, this *not rant* is to appreciate the Spring Boot/Cloud & Netty for without them I would not be half as productive as I am at my job.
Just to highlight a few of these life savers:
- Spring security: many features but I will just mention robust authorization out of the box
- Netflix Feign & Hystrix: easy circuit breaking & fallback pattern.
- Spring Data: consistent data access patterns & out of the box functionality regardless of the data source: eg relational & document dbs, redis etc with managed offerings integrations as well. The abstraction here is something to marvel at.
- Spring Boot Actuator: Out of the box health checks that check all integrations: Db, Redis, Mail,Disk, RabbitMQ etc which are crucial for Kubernetes readiness/liveness health checks.
- Spring Cloud Stream: Another abstraction for the messaging layer that decouples application logic from the binder ie could be kafka, rabbitmq etc
- SpringFox Swagger - Fantastic swagger documentation integration that allows always up to date API docs via annotations that can be converted to a swagger.yml if need be.
- Last but not least - Netty: Implementing secure non-blocking network applications is not trivial. This framework has made it easier for us to implement a protocol server on top of UDP using Java & all the support that comes with Spring.
For these & many more am grateful for Java & the big big community of devs that love & support it. -
A senior argued with me that Java can't do serial programming, so I proved him wrong by reading the hardware protocol in hex and converting it to ASCII using Java. He said the F word to me, and I was like "what does he even mean?"8
-
Translation:
"According to Xiaomi, the U-Disk (Xiaomi's new USB drive) has UDP technology that prevents damage by splashing and dust."
HOW CAN A TRANSPORT PROTOCOL PREVENT THAT?
What's next? "The new notebook has HTTP technology that makes it waterproof."7 -
After a management meeting about the company's first e-commerce initiative, which I proposed and prototyped with assistance from internal and 3rd party resources, I returned with my boss to her office feeling on cloud nine as everything had been accepted / approved and the project was green lit!!!
She turns to me and says “I’m going down to the local sex shop and buying the largest dildo, strap it on, and then they will listen to me too”... I just sat, staring at the floor ...
Cue the crickets...4 -
That shitty moment when you are reverse engineering an app (LINE), but can't find any useful hints.
Web analysis didn't help. Decompiling the windows executable also didn't help. Testing the app on different behaviour with python scripts didn't help. Analysing the android app on windows with the jadx decompiler and other decompiler didn't help that much.
BUT today it worked. I did use a paid "Dex dump" android application. I found some methods that the app receives from the servers with a thrift protocol.
Now I just need to find the right parameters to be finally able to make a bot. Hehehe.
That was a hard way, but it paid off. I did learn so many things. It took me like a whole year.5 -
I'm working on a firmware for 3d printers. I had to send a lot of data to another microcontroller and I was making a very sophisticated protocol. When I finished, I was so proud of my work, but in that moment I remembered that there is a thing called JSON - and I didn't care. Now I have to send the same data to a webserver and need to move from my own protocol to JSON.
Fuck me. -
Finally upgraded my webserver and php modules to support HTTP/2 ^^ Everything works fine.
Found out devrant.com doesn't support it though. @dfox8 -
DevRant doesn't let you choose the protocol for your website. Seeing http:// on my profile makes me feel insecure.6
-
I can't talk to our management. It's not like they aren't present nor that I wouldn't find them. I'm infront of their office actually.
But apparently their room number of 505 suggests that they don't support a modern protocol version and I disabled my legacy support. This won't work. -
I decided to upgrade my intellij ultimate from 2019.3 to 2020.2 and I saw there is update button.
I clicked on it.
As I expected, it didn't work: 30 minutes of waiting, watching the progress bar go back and forth a couple of times, before I decided to just download the latest version and drag and drop it into the Applications folder (took me 5 minutes) - I use a Mac, so it replaces all the crap (I think).
I cleared the old cache, which had grown to 2 gigabytes, leaving some configuration files.
Next, as always, a crash on startup because of incompatible plugins, with a long Java stacktrace - at least I could click the close button, or the popup closed itself, I can't remember (in one version, I remember, this button couldn't be clicked cause it was off the screen and you needed to do some cheating to launch the IDE).
The font has changed and I see that it at least works a little faster - that is nice. Indexing is finally fixed after all those years - probably thanks to Visual Studio Code's IntelliSense pushing those lazy bastards to deal with this.
But the preloader on first logo disappears so I think they decided to remove it cause it’s so fast - no it loads the same time or maybe little longer when I launch it on my old macbook.
After that, as always, I looked at the plugins to see if there's something interesting; to be able to scroll over the whole plugin list I needed to click a couple of times. I think they assume I remember all the nice plugins in their marketplace and only ever type into search.
Maybe I should be the type of user who reads "best 2020 plugins for your IDE" crap articles filled with advertising, or even waste more time watching all those great videos about the IDE (is there any kind of this stuff?)
After a few operations I unfortunately clicked apply instead of restart IDE, and it hung on uninstalling some plugin I'm no longer interested in for 5 minutes, so I decided to use the always-working 'kill -9' from the command line.
Launched again and this time success.
Fortunately indexing finished for this workspace and I can work.
I've been an IntelliJ Ultimate subscriber for 7+ years and I see this crap hasn't changed in, like, forever.
What's the point of automating something that you can't regression test?
I started thinking that now, when most people are Facebook-wall-scrolling zombies, companies assume that when new software comes out everyone installs it right away - and if not, they're probably not our customers cause they're dead.
What a surprise they have when I pay for another year I can only imagine ( to be fair probably they even don’t know who I am ).
Yeah for sure I am subscribed to newsletters and I have jetbrains as a start page cause I shit myself with money and have nothing better to do then be grupie ( is there corporate grupies already a big community? )
Well I am a guy who likes to spend some time when installing anything and especially software that is responsible for my main source of income and productivity speed up.
Anyway I decided to upgrade cause editing es7 and typescript got to be pain in the ass and I see it’s working fine now. I don’t know if I like the font but at least the editor it’s working the same or maybe faster then the original that is huge improvement as developers lose most of their time between keyboard and screen communication protocol.
I don't write this to discourage IntelliJ, as it's a great independent IDE that I have loved and supported for such a long time, but they should focus on the code editor and developer efficiency, not on things that don't make sense.
Congratulations if you reached this point of this meaningless post.
Now I started thinking that maybe it’s working faster cause I removed 2 gigs of crap from it.
Well we’ll see.1 -
I've implemented my own version of IoT all over my room and home.
Hope the protocol I've designed has proper security...1 -
I've used ngrok since its earlier days when it was free to use. Now it's hard to use ngrok for certain use cases. And it is very slow for parallel calls. So I started working on my own ngrok implementation from scratch, using QUIC as the communication protocol between the exposed server and the tunneling client. The project is very new as the QUIC library I'm using is not that mature, yet I'm getting good results. It is very easy to set up. Would love to know if you guys have any thoughts. https://github.com/aki237/qxpose1
-
Whenever I suspended my laptop, my OpenVPN would get stuck on reconnecting and I'd have to Ctrl-C and wait for minutes for it to close correctly. So I only used the VPN when I really needed it.
But then I found out: Mullvad (my VPN host) supports WireGuard! And WireGuard is a more passive protocol that doesn't need to keep the connection open. So now I can just set my VPN to "always on" and not worry about it anymore, yay!
ps: you should have seen my face when I found out mullvad gives away free stickers! :D -
2nd post progress of this project https://devrant.com/rants/9985730/...
I went to shop to buy missing ir diode and bluetooth for arduino.
Launched the Arduino today with an IR receiver and I managed to reverse engineer the protocol.
Turns out it's just NEC remote codes.
I used this library https://arduino.cc/reference/en/... to easily send and receive IR signals.
Everything took me a whole day cause I'm a rookie in hardware.
I can now remote control medion md 19500 using arduino.
Next step is to make it drive itself.
I need to measure speed and turn angle with error rates.
I will probably use pen and paper and let the vacuum cleaner draw the angle for me, and after that I will use the most modern, accurate and cheapest angle measurement tool there is: a protractor - welcome back, school.
Speed can be more complicated and needs other external tools: a tape measure and a clock.
I also bought a second robot because I got this stupid idea to allow people to control robots over the internet.2 -
Lady comes over to my cube and stands silently until I notice her in the mirror. She cheerfully asks that I help her reset her password.
Okay...one, I'm buried up to my balls in work that needs to be done, and here she is camping, expecting me to feel a disturbance in The Force to help on her whim, when our company has an issue system for shit like this. 👊
Two, I'm 👏 a 👏 developer 👏! My sign says Software Engineer on it, which might give some context as to why she forgot her password.
Look, I was nice to her. But it seems like I'm getting more and more phone calls and surprise visits lately from people that I shouldn't be.1 -
No one can have a personal repo, the git protocol is disabled by the proxies, and we're calling ourselves DevOps-welcoming.
This is simply fuckops.5 -
You made a very important device used in pharmaceutical labs which stores important data, but for some fucking reason you decided to write the communication protocol so poorly that I want to cry.
You can't fucking have unique IDs for important records, but you still ask me for the "INDEX" (not a unique ID, a fucking INDEX) to delete a particular one. YOU HAVE IT IN MEMORY, WHY DON'T YOU USE IT?!
How the fuck have you made such a stupid decision… it's a device that communicates using USB, so theoretically I could unplug it for a moment, remove records, add them, plug it in again and then delete the wrong one.
I can't fucking check every 2 seconds whether it's still the correct one and the user isn't being an asshole, because this dumb device takes about 3 seconds for each request made.
WHY?
Why I, developing a third party system, have to be responsible for these dumb vulnerabilities you've created? -
I'm part of a robotics team in my highschool. We work on autonomous robots, which are driven with microcontrollers like the Arduino and "bare" Atmel chips.
Last year we were using a protocol called CAN. CAN is a bus which runs at 1Mbps and it is quite easy to connect devices together. (It's a bus ofc it is). BUT... it needs a terminator at the end, mostly 120 ohm.
Every year we are on a deadline and something broke on our current board and we needed to solder up a new one, but we couldn't find the 120 ohm terminator... ANYWHERE!
In the end, after searching for it in the workshop for 4h straight (12am-4am), we finally found it and soldered up the new board, and guess what... it wasn't what we thought - the code was the problem.
After realizing the problem my teammate and I, in silence just stood up, packed our things and went home. Argh!2 -
I need some opinions on Rx and MVVM. Its being done in iOS, but I think its fairly general programming question.
The small team I joined is using Rx (I've never used it before) and I'm trying to learn and catch up to them. Looking at the code, I think there are thousands of lines of over-engineered code that could be done so much simpler. From a non Rx point of view, I think we are following some bad practises, from an Rx point of view the guys are saying this is what Rx needs to be. I'm trying to discuss this with them, but they are shooting me down saying I just don't know enough about Rx. Maybe thats true, maybe I just don't get it, but they aren't exactly explaining it, just telling me i'm wrong and they are right. I need another set of eyes on this to see if it is just me.
One of the main points is that there are many places where network errors shouldn't complete the observable (i.e. can't call onError), I understand this concept. I read a response from the RxSwift maintainers that said the way to handle this was to wrap your response type in a class with a generic type (e.g. Result<T>) that contained a property to denote a success or error and maybe an error message. This way errors (such as incorrect password) won't cause it to complete, everything goes through onNext and users can retry / go again, makes sense.
The guys are saying that this breaks Rx principals and MVVM. Instead we need separate observables for every type of response. So we have viewModels that contain:
- isSuccessObservable
- isErrorObservable
- isLoadingObservable
- isRefreshingObservable
- etc. (some have close to 10 different observables)
To me this is overkill: so many streams, each of which frequently only ever delivers one message or none.
Currently, due to each viewModel having different numbers of observables and methods of different names (but effectively doing the same thing) the guys create a new custom protocol (equivalent of a java interface) for each viewModel with its N observables. The viewModel creates local variables of PublishSubject, BehavorSubject, Driver etc. Then it implements the procotol / interface and casts all the local's back as observables. e.g.
import RxSwift

protocol CarViewModelType {
    var isSuccessObservable: Observable<Car> { get }
    var isErrorObservable: Observable<String> { get }
    var isLoadingObservable: Observable<Void> { get }
}

class CarViewModel {
    let isSuccessSubject = PublishSubject<Car>()
    let isErrorSubject = PublishSubject<String>()
    let isLoadingSubject = PublishSubject<Void>()
    // other stuff
}

extension CarViewModel: CarViewModelType {
    var isSuccessObservable: Observable<Car> {
        return isSuccessSubject.asObservable()
    }
    var isErrorObservable: Observable<String> {
        return isErrorSubject.asObservable()
    }
    var isLoadingObservable: Observable<Void> {
        return isLoadingSubject.asObservable()
    }
}
This has to be created by hand, for every viewModel, of which there is one for every screen - and there are 40+ screens.
The guys think that subscribing to isSuccess and / or isError is perfect Rx + MVVM. But for some reason subscribing to status.filter(success) or status.filter(!success) is a sin of unimaginable proportions. Also the idea of multiple buttons and events all "reacting" to the same method named e.g. "load", is bad Rx (why if they all need to do the same thing?)
My thoughts on this are:
- To me its indentical in meaning and architecture, one way is just significantly less code.
- Let's say I agree it's not textbook: is it not worth bending the rules to reduce code?
- We are already breaking the rules of MVVM to introduce coordinators (which I hate, as they add even more unnecessary code), so why is breaking them to reduce code such a no-no?
Any thoughts on the above? Am I way off the mark or is this classic Rx?16 -
## building my own router
I hoped things would go more smoothly :)
Anyway, my new miniPC easily accepted CentOS 8 - no fuss here. And I've got to say - I love CentOS8 so far! Shell has amazing nifty tricks, UI (gnome3) is also snappy, video/audio/ethernet,.. everything works.
What I did NOT expect is hardware being off. Well okay, the price was low - it was obvious smth is not right. But still.. I decided to build my own router so that I could swap wifi card whenever I want. So that I could run my own network services in there. Turns out - the card swapping is not as easy as one might think.
I got the AX200 WiFi6 card for that very purpose. But once plugged in, the OS can only see its bluetooth module. Weird... What's even weirder is that even though the card is PCIe, the OS uses the btusb module to talk to that device. What? USB?? emm.. What??
And there it is. After opening it up again I noticed that the mPCIe area is marked with a label: "USB WIFI / WWAN". USB? Does that mean this PCIe slot is wired into the USB bus? Not impossible I guess.
Googling for "pcie wifi over usb" or smth like that brought me to one reddit thread (I think?) where someone wanted to build a DIY wifi mPCIe -> USB adapter and someone else advised him that (for some reason) at best he could only get bluetooth working (hey! just like me!). It's got to do smth with pcie channels and USB being too weak to handle all that load, or smth.. IDK, I'm not a HW guy.
Well that sucks then! I have a mPCIe slot that does not work as a PCIe. Shit! So I guess the best I could do is to plug back in the same wifi card that came with the device. It smells like 2003 - supports only g protocol. Fine, let's try that. Maybe I'll find a way to work around this mPCIe limitation later on (USB adapter or smth... except there are no USB WIFI6 dongles yet :( ). So I plug it back in and start turning it into a router. Disable NetworkManager, configure static NCs' settings, install dhcpd, hostapd, bind and others. Looks like all is done! Now it's time to start it all. systemctl start hostapd --> FAILED. wtf? journalctl says it could not initialize a driver. umm okay? Why? Forums say I should airodump-ng check and kill whatever's using that device. Fine. airodumo reveals avahi and wpa_suppl are still using it. kill, kill, GOTTA KILL 'EM ALL!! Starting hostapd again -- same shit... wtf?
iw list
My gawd... That shitty network card does not even support AP mode :( I mean.. My USB wifi dongle for 2€ supports 2x more modes, is faster, has better range and is easier to work with than this old tart!
Yeah. That was an interesting day. When enfironment engineers break my testing environments at work I'm glad I have where to spend my time now.
BTW any ideas how to bypass this mPCIe nonsense? Come on, there are USB GPUs out there.. Why can't they make a USB (or dual-USB if they really need to) mPCIe adapter?8 -
So I see this code:
class ViewWithDisplayLayer {
    func viewDisplayLayer() -> CALayer? {
        FatalErrorMustOverride()
        return nil
    }
}
If only Swift had some way of defining some sort of interface or protocol with methods to be implemented by a class without using class inheritance, and wouldn't it be great if that feature also gave a compile-time error if you forgot to override/implement said method(s). If only.....
😳 -
It must be good to at least know computer networking?! I remember nothing about these TCP, UDP, whatever the fudge protocol shite. I don't remember these megabit and megabyte things. All I do is code from one end to another. Anyone else watched Eli The Computer Guy's series?2
-
First you make a filthy JSON protocol where numbers are encapsulated into strings.
Then you document this little fact nowhere. Actually you don't document anything at all.
Then you make a shitty parser that ignores any exception, so that when I tried to send my objects, it took two hours to figure out it was "my fault", as I was sending actual integers instead of strings.
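For anyone stuck talking to the same kind of API, a workaround sketch (field names made up) is to wrap every number in a string before serializing:

    import json

    def stringify_numbers(value):
        # bool is a subclass of int in Python, so check it first
        if isinstance(value, bool):
            return value
        if isinstance(value, (int, float)):
            return str(value)
        if isinstance(value, dict):
            return {k: stringify_numbers(v) for k, v in value.items()}
        if isinstance(value, list):
            return [stringify_numbers(v) for v in value]
        return value

    payload = {"amount": 42, "items": [{"price": 9.99}]}
    print(json.dumps(stringify_numbers(payload)))
    # {"amount": "42", "items": [{"price": "9.99"}]}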
I think you deserve to suffer a terrible agony for exactly the amount of time I lost.2 -
What bothers me most with the Matrix hack is that so many people say oh look the secure messenger got hacked. From what I can tell it had nothing to do with their software nor their protocol. If you're running your own Homeserver you're totally unaffected.1
-
Talking about the Open Graph protocol (http://ogp.me/)
Why the fuck does the Facebook Object Debugger tell me that my image in the og:image meta tag could not be loaded when I put a HTTPS link in there, but when there is a HTTP link with a permanent redirect to the HTTPS link it can load the fucking image.2 -
Freenas update from 11.1 to 11.2 beta 2
They added experimental smb direct / multichannel support, yay.
Me tries to connect to the smb share:
->Connection timed out 🤔
Tries something.
->Connection refused 😐
Google foo ....
->Nope, no connection 😔
"Failed to retrieve list of shares from server"
Reinstalls freenas to be sure it's not some janky install.
->Nope.
Google some more
->Nope 😭
*Like a year later*
Look into /etc/samba/smb.conf
Client max protocol = NTLM1
Motherfucker! 😬
Who thought that to be a good Idea!?
😠
It's the default Manjaro smb conf from the official repository by the way.
Seriously.
Didn't even know there was a setting for max client protocol.
Thought it was a server only config.
😵
Nope, some motherfucker trolled me long and hard this time. 😩
But back to getting smb direct working on my setup.
Thunar gvfs is like its own completely separate thing.
Smb status, and all the other commands don't see any open connections anywhere.
Gvfs still connects fine to the share even though the smb.conf is deleted and everything else is complaining that there is no config.
On the one hand, it uses samba, on the other it's not actually.
Where the heck can I see the connection properties and whether RDMA works or not?
Mother trucking, fracking, leg breaking piece of a dance type.1 -
I have been working on IoT projects for the last five years. After using MQTT in many of my projects, I have realized that there is a huge learning curve for beginners to understand and implement MQTT in their projects. The packet structure of MQTT is complex and MQTT packets are difficult to debug. Also, customizing the open source MQTT brokers is difficult for beginners, and sometimes even for the experts.
To make IoT and Messaging simple, I am designing a new protocol which uses JSON packets for data exchange and is far less complex than MQTT. I am also developing an open source project which will contain a server (with load balancer support), a python client, a Javascript client and a python based load balancer. I hope this project will reduce the development time as the protocol is easy to understand and the open source code is fully modular & easy to customize.
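To give a feel for the idea (purely illustrative packet shapes - not the project's actual format, and the broker address is a placeholder), a JSON pub/sub exchange can be as small as:

    import json, socket

    subscribe = {"type": "subscribe", "topic": "home/livingroom/temp"}
    publish = {"type": "publish", "topic": "home/livingroom/temp", "payload": 21.5}

    with socket.create_connection(("broker.local", 5000)) as sock:  # placeholder broker
        sock.sendall((json.dumps(subscribe) + "\n").encode())       # newline-delimited JSON
        sock.sendall((json.dumps(publish) + "\n").encode())
        print(sock.makefile().readline())                           # broker's JSON reply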
This will be my very first contribution to the open source community. Wish me luck! -
Still using a database from the '90s - Enea Polyhedra:
- no decent visual sql client
- utterly limited scripting language
- weird communications protocol
- no redundancy beyond master-replica
- no encryption of communication protocols
- etc. -
Me: there seems to be a problem in the Web Sphere app server...I would recommend u change it to weblogic
Client( IT division head of his company): is it compatible with websphere soap..??
Me: soap is generic, websphere is just an app server
Client: no but we have been told to use only websphere soap, is weblogic having that..??
Me: soap is protocol, app server is changeable..
Client: no we want only websphere soap.
Me:....(trying to find the nearest exit)4 -
Always nice when you discover that your hardware has an *ELABORATE* HARDWARE OFFLOADING ENGINE with full protocol implementation for something you spent two months writing software for...
Well, at least the current solution works like it is supposed to. Don't know that yet of the hardware implementation.
It would save 4 euro component cost though if we switch to the offloading engine -
VSCode is doing really strange things to my language server, in such variety that I'm starting to suspect that it's simply incorrect because it's very unlikely that I'd misunderstand so many distinct things at once.
- The trace level is verbose, yet VSCode absolutely spams the LS with trace: off requests
- the capability update request I used to set file watchers never gets a response even though the standard clearly states that all requests must get responses or progress reports quickly, and I'm not getting file updates even after vscode responds to a file system change. By the way, if file watching is a capability, why can't I set it in the protocol handshake with all the other capabilities?
- my semantic token provider (used for syntax highlighting) is simply ignored, no requests, no errors
- the debug console is spamming editor internal errors2 -
Moving files is emotionally easier than copying and deleting files, and moving eliminates the risk of selecting the wrong files at the deletion part.
I have read that it is safer to manually copy and manually delete files rather than to move it, but copying and deleting has a hidden risk that was not mentioned: selecting the wrong files for deletion.
Moving files feels like moving an obstacle from one room to another. The deletion part of copying and deleting feels like destroying something, which is an added emotional barrier.
Technically, copying and deleting is safer, since there is no risk of source files being deleted without having been transferred as a result of a device disconnecting or the buggy media transfer protocol (MTP) failing to load the entire file list. However, on mass storage devices, this pretty much never happened to me, and on MTP, data loss can be avoided by not moving folders but opening the source folders and selecting all files and moving those out. This prevents a parent folder with incompletely loaded file listing from being deleted.
However, something that is not considered about copying and deleting is that the risk of selecting the wrong files in the deletion step exists. One might end up selecting files that were never copied.
Not only is moving straightforward and time-saving, but it has no emotional barrier and the risk of selecting the wrong files to delete from the source is eliminated, since a proper file manager like Nemo or Windows Explorer (mass storage only, not MTP) only deletes a moved file from the source after it has been properly transferred. The user does not need to pay attention to select the correct files to delete, since the file manager already did it.4 -
Betty: Opens slack chat with Bob, Tony and me to ask me to fix some data for a client who messed the setup. (Don’t worry just building a script that takes 3 hours to complete and that I must supervise)
Betty: Opens slack chat with Ron, Tim and me to ask me to force the system I made to ignore protocol because someone else’s fuck up made it so she didn’t get the output she expected.
Betty: proceeds to ask for status updates constantly on both chats. She also disguises them as her asking what she can do to “get it across faster” knowing there’s jack shit I or anyone can do to make it go “faster”.
Also, Betty vomits BS about my microservice being unstable in front of managers, even though it was its correctness that brought to light a bug silently fucking up thousands of records.
Go fuck yourself Betty ☺️ and fuck the client5 -
Hello everyone!! This is my first rant so I'm not sure what the protocol is.
I just wrote my first ever Medium Post on Dynamic Theming in Android.
Just wanted to share it with you all.
https://medium.com/@nihitb06.dev/...
Any constructive criticism is welcome. -
When the ops team needs to go through a 5 step "protocol" over a couple of days, just to open a damn port in the firewall, so that our CI server can access the local GitLab server..
Seems like the migration of the last couple of projects from SVN to Git is going to take a little longer than I expected.. -
So matplotlib can do 3d plots. However, when you try to then label your axes...
plt.xlabel("protocol") # ok
plt.ylabel("volume") # ok
plt.zlabel("time") # error: no such method zlabel (ಠ_ಠ)2 -
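For what it's worth, the z label lives on the 3D axes object rather than on pyplot, so something like this should work (assuming a reasonably recent matplotlib):

    import matplotlib.pyplot as plt

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")  # needs matplotlib >= 3.2 for this form
    ax.set_xlabel("protocol")
    ax.set_ylabel("volume")
    ax.set_zlabel("time")                  # only 3D axes have a z label
    plt.show()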
I hate this feeling.
Changing stuff with the grim reaper's scythe called doubt hanging around my neck, because the available data isn't too convincing.
Then having to go big or nothing as it is an ecosystem change (e.g. changing the cipher suites of TLS, changing protocol - e.g. HTTP 1.1 to 2) so it needs to be consistent as otherwise fun stuff could happen (fun as in the grim reaper cuts off my neck except a few centimeters and plays "now your head is off, now your head is on" ).
To top it off - just a few seconds after the change has happened, people come up in the support channel.
My hands are - mysteriously - not sweaty then. Rather cold.
Lil prayer to the heavens and getting the whiskey bottle...
Opening an ongoing discussion in support channel....
And they're discussing whether the page needs to have an additional arrow for going back to the last page or if the default page navigation is enough.
Constantly using @all so everyone gets pissed off due to being pinged every few seconds in a channel that was meant for emergency support.
Now my hands go from a dark red to a bright red, my nostrils flare out, my adrenaline goes through the roof and I literally wanna murder people....
Those days.
I hate those days.
And I hate the timing of some people...
Like they're deliberately fucking with me without knowing it, like the universe told them explicitly to do so just to fuck with me.
*gooozfraba*
And of course, everything else is fine and running smooth like butter, except that said discussion now goes on in a total flamewar so I get even more pings.
Sucks to be in management.
You have way to many rooms where people can annoy you.
To top it off - after being grumpy and pissed and angry for people just annoying the fuck out of me, I have to mediate.
Yeah. Cause the usual person is on vacancy.
*slowly strangling the whiskey bottle like homer does with bart*
Turns out, after 15 mins of listening to the enraged UX designer vs the Frontend Team Lead, that the UX designer meant a completely different thing - uploaded the wrong screenshot, the whole discussion was unnecessary.
*Nah. Fuck it. Drinking whiskey*
Reminding everyone what the fucking frigging support channel is meant for and that penis fights aka who got the longest schlong don't belong there....
"Yeah it was a mistake, but it wasn't so bad"
...
You pinged fucking 32 people like it was the end of the world, you ignorant fucktwads.
For over 5 mins.
For fucking frigging nothing except your tiny dicks and shitty egos.
*Second round of whiskey*
Back to work after a wasted half hour.
What says monitoring?
Ah. Everything's working.
At least luck hasn't failed me.
Good server. Brave server.
Then I hear this lil voice in my head: no.
The servers know your personality.
They're afraid. Terrified.
Somehow that thought always makes me giggle...
Childish? Maybe. But it helps on those days.... Funnily enough, for the remaining 3 hours no one said anything in any chat channel.
"I wonder why, I wonder how...."... *hum* -
!rant
Interview on Monday. Buzzing! The company is pretty cool, they have a startup buzz but part of a wider umbrella of businesses so don't suffer from the financial uncertainty that destroyed my last company (that and my old boss was pretty clueless about everything except sales). They also give time for personal projects, allow remote working, bonuses if the company does well and provide its employees transparency to its finances.
In short, I'm not going to be a cog in a big corporate wheel. If I get this.
Well they liked the code I produced for their programming test, so good start.
Meta: categorised this as rant because it's tech related, but obviously it's not a rant, what's the protocol? Random?2 -
Web 3 is coming and I really love that. The decentralized web is a new way for devs. Some projects have already started, like Patchwork, built on the SSB protocol (Secure Scuttlebutt), and the Dat Project, which created the dat protocol for sharing files over a P2P network. Has anyone started a similar project on the new web?
P.S.: All the projects above are built in Node.js/JavaScript1 -
Having just endured 30 excruciating minutes of utter braindead idiocy that is trying to setup and configure WPA2-Enterprise on a Windows 10 machine, I wanna go and fucking kill myself.
How can it be so bad after so many years this protocol has been out?! Not only can the authentication options be changed only in the who knows how many years old control panel settings and not the modern settings app, but once you finish setting up the network, you can no longer modify some of the key attributes like which CA certificates to validate the radius server against!
What. The. Fuck. Microsoft.
I swear, I don't usually get my jimmies rustled at work, but this... This just bloody infuriated me!2 -
Mozilla has announced plans to remove support for the FTP protocol from Firefox. Users won't be able to download files via the FTP protocol and view the content of FTP folders inside the Firefox browser.
According to the report of ZDNet: Michal Novotny, a software engineer at the Mozilla Corporation said "We're doing this for security reasons, FTP is an insecure protocol and there are no reasons to prefer it over HTTPS for downloading resources. Also, a part of the FTP code is very old, unsafe and hard to maintain and we found a lot of security bugs in it in the past." Novotny says Mozilla plans to disable support for the FTP protocol with the release of Firefox 77, scheduled for release in June this year.
Users will still be able to view and download files via FTP, but they'll have to re-enable FTP support via a preference inside the about:config page.13 -
I hate it when companies have 5 payment options while 4 of them basically lead to a credit card payment.
I'm renting some servers from Vultr and they recently changed something in their payment protocol. Now you need a credit card, even while paying with PayPal, and I don't have a credit card. Using their BTC option doesn't work either since my wallet tells me they are using an incompatible payment protocol (error reason, address & amount) . There is not even a wallet address shown through their BTC checkout to which I could directly send the amount to. You need to open the website on the device your wallet is stored on and then make the payment (so no address is required from their side). Account management is taking a look at it now, I got very quick replies back from their support but this is the first time I'm having such an issue with them.
Oh well, hope they won't take down my servers in the meantime.2 -
When file managers copy and delete files within the same partition instead of moving or renaming them…
When Google's Storage Access Framework was introduced, it did not feature a move command, so file managers just resorted to copying and deleting files within the same storage. Not only does this cause needless wear and is much slower, but it also destroys the date/time attribute (it gets changed to current).
When moving files through MTP (miserable transfer protocol, used for connecting smartphones to PC), they are also copy-deleted. This makes moving a 20-Gigabyte DCIM folder impractical. Also, if one cancels the operation, it might end up whoopsie-daisy deleting some files from the source before they have been transferred.
MTP is so bogus that it is incapable of a simple operation that would JustWork™ on mass storage devices. Not to mention, MTP lacks parallelism and its directory listing loading is S-L-O-W. Upwards of a minute for just 1000 files. Sometimes, it fails to load at all.
Also, trying to rename a file through MTP using the terminal through GVFS, even if just within the same folder, it copy-deletes it. If I want to rename a 1 GB 2160p 4K video in a highly populated DCIM folder, I can not do so through the terminal. At least, the 4K video has a time stamp in its internal metadata, but it still renames slowly and adds needless wear to the smartphone's flash memory.14 -
So I reverse engineered the protocol of QONQR: World in Play and made a mitmproxy addon, running locally inside Termux, that can see when I launch in the game and uses Termux:API to notify me when my in-game resources are replenished.
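The addon itself is tiny, roughly like this sketch (the host check and the JSON field name are made up here, the real QONQR responses obviously look different):

```python
# qonqr_notify.py -- run with: mitmproxy -s qonqr_notify.py
import json
import subprocess
from mitmproxy import http

class ResourceWatcher:
    def response(self, flow: http.HTTPFlow) -> None:
        if "qonqr" not in flow.request.pretty_host:
            return
        try:
            data = json.loads(flow.response.get_text() or "")
        except ValueError:
            return  # not a JSON response, ignore it
        bots = data.get("deployableBots") if isinstance(data, dict) else None  # hypothetical field
        if bots is not None:
            # Termux:API turns this into a regular Android notification
            subprocess.run(["termux-notification",
                            "--title", "QONQR",
                            "--content", f"{bots} bots replenished"])

addons = [ResourceWatcher()]
```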
I direct the traffic through mitmproxy using Drony. I configured it so that by default Drony passes traffic directly to the internet except if it comes from the QONQR app.
The problem is that while Drony is running, there is a chance of network traffic being corrupted so I often get spammed by connection and ssl errors.
So I have to either continue sacrificimg my network integrity or stop getting assistance ppaying QONQR :-/
Does anyone know an alternative to Drony (basically an app that can connect you to a proxy without root using the android vpn api, if possible with filtering by app or ip)?
Also does anyone else have problems with drony on Android 9 or other versions? I don't really have an opportunity to test it.
Edit: It only took 4 tries to post this yay3 -
I was in the network lab today, trying to wrap my head around basic dynamic routing protocols, but I could not ping the third computer..
30 minutes of frustration later, while debugging the protocol, I noticed that router 2 was ignoring messages from router 3 because they were not version 2...
RIP -
Is this a technological metaphor?
For some hacker challenge I was reading up on different keyboard layouts, Dvorak and stuff. And the technological lock-in is baffling me: the rationale for QWERTY was to reduce jamming of the typewriter letter arms. Today that doesn't make sense anymore, yet we stick to it. Wondering how much of today's tech is dragged down by things like that.
This stuff often also makes me wary of the first decisions, like choosing a protocol or database - its kind and layout - because we might be stuck with it for reasons of backwards compatibility.... Like when Microsoft opted for the backslash as a directory separator..25 -
I'm really surprised at how when I type in a domain without the protocol it automatically goes to http in this 'privacy' browser (firefox focus)3
-
what is that annoying little bug in my brain that triggers those annoying chemicals which make sleeping at night, even after being awake for 27 hours, so goddamn hard?
Nature's protocol spec describes the natural and optimal daily rhythm as sleeping 8 hours each night, so why is obeying the defaults so goddamn troubling?!5 -
*intense hacker vs hacker situation in a movie
One of them: "Let's see how you like Hyper Text Transfer Protocol"
*Continues intensively hacking1 -
So I'm apparently not allowed to work with what I've learned in my work in my free time.
My boss gave me the job of creating modifications for an already existing tool. I had always wanted to do that, and a long time ago I started collecting ideas of what I want to have. So I kindly shared my ideas with my boss and started working on it. Since I'm leaving the company I no longer work on these things, and now I've started continuing work on MY ideas in my free time.
And for protocol: I didn't take any of my code I wrote in my working time and I didn't apply anything else that clearly belongs to the company.
Now I have a problem with my boss. I shared him my ideas so now they belong to the company. And I learned how to create modifications for this tool in my working time so now I'm not allowed to use this knowledge for anything else. I had an argument with my boss but he persists on the idea that since he gave me this little feedback that my ideas are great, they now belong to his company and he wants to put me into big trouble now...11 -
Just created my own publish-subscribe-based IoT protocol for the NodeMCU. It's like a simplified version of MQTT and pretty error-rich. (So it shouldn't be used in important cases). But the cool thing is that you can use a simple NodeMCU to host a server and don't need to set up a Mosquitto Server on a Linux Machine.
Will release on GitHub soon!
Also made an example Client in PHP!4 -
This person suggests that YouTube went down because something went wrong while moving to a different transport-layer network protocol.
https://twitter.com/fahadjax/...1 -
Writing simple driver for AT24C256 eeprom on pico (RP2040)
It turned out it was an FT24C256A, which should follow the same protocol.
After literally over a month of coming back to it, getting stuck again, and rewriting things (including some functions of the pico-sdk), I almost gave up and started just yolo-trying random shit.
After all, the documentation on addressing the chip fucking misled me -_- (1st bit is the r/w flag and bits 2-7 are the address, counted from MSB->LSB)
I made it work yesterday.
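For anyone fighting the same chip: the random-read sequence itself is short. Roughly this, shown here as a MicroPython sketch rather than my pico-sdk C (pins, bus frequency and the 0x50 base address are assumptions, not my actual wiring):

```python
from machine import I2C, Pin

# assumed wiring: I2C0 with SDA on GP0 and SCL on GP1, address pins tied low
i2c = I2C(0, sda=Pin(0), scl=Pin(1), freq=100_000)
EEPROM_ADDR = 0x50  # 0b1010 + A2 A1 A0

def eeprom_read(mem_addr, length):
    # dummy write of the 16-bit memory address, keep the bus (no stop)...
    i2c.writeto(EEPROM_ADDR, bytes([mem_addr >> 8, mem_addr & 0xFF]), False)
    # ...then a repeated start and the actual sequential read
    return i2c.readfrom(EEPROM_ADDR, length)

print(eeprom_read(0x0000, 16))
```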
In the meantime I've rewritten the Wire library, modified someone else's rewrite, extended the SDK to allow getting I2C registers, tried to use TinyGo just to learn that it doesn't support I2C slave mode, resoldered the entire thing a few times, measured connections a few too many times, etc.
Frustrated, I doubted I would ever manage to finish putting this project together because it looked like I'm just too much of a noob.1 -
Trying to implement WebRTC for Voice chat in the company app in Unity.
Pros:
- it's super fucking fast
- it kinda is peer to peer
Cons:
- WebRTC comes in very different ways and therefore you either need to properly config the server or change the way the app works
- Each signaling server might have different config so you can't even connect to different servers like you do for http, ftp and so on
- You need to use a server to know each peer
- You need to use another server to make the actual messages go through
- None of it seems to actually be p2p except the fact that you will need to make a different connection to each and every other client in the conference
So basically it was engineered to be as compatible as possible, and therefore no server-side default was defined in the protocol, which means it won't ever be actually very compatible with anything at all since everyone will roll their own configuration.
Fuck me, fuck WebRTC and fuck this whole shit1 -
Nooooo, you can't just enable kernel modules at will, you need to know what they do!
Me: Höhö, B.A.T.M.A.N. Advanced meshing protocol goes brrrrr2 -
2005 called. It wants its numbered file names back.
While I am mostly satisfied with "celluloid" as a worthy successor to xplayer, the first major disappointment I stumbled upon is `celluloid-shot0001.jpg`. Are we in 2005?
Just like xplayer, Celluloid, the new default media player of Linux Mint, should use proper, i.e. time-stamped, names such as `celluloid-2023-04-10T00-47-42.jpg` or `celluloid-video_file_name-2023-04-10T00-47-42.jpg` for screenshots taken from videos. That would eliminate the possibility of file name conflicts if files are moved into other directories, make screenshots searchable by video file name, retain the date and time information if the files are moved to a device that does not support date and time stamp retention such as MTP (Media Transfer Protocol), and allow for date range selection using wildcards in the terminal (e.g. `celluloid-2023-04*` for all screenshots from April 2023). Besides, PNG screenshots should be supported too, but that's out of scope here.
As a reference, the gnome and mate screenshot tools also pre-fill time stamps into the file name field.
Numbered file names were useful in an era when there was no VFAT and file names needed to be 8.3 names that could not possibly fit a date and a time, and compact cameras used such names, but those times are long over. Just like the useless and annoying pull-to-refresh gesture on mobile apps and the Media Transfer Protocol, numbered file names belong to the technological graveyard.
If numbers are really desirable, at least `celluloid-shot0001.2023-04-10T00-47-42.jpg` should be used, to include both a number and a date. The command to get this date format is `date +"%Y-%m-%dT%H-%M-%S"`. For compatibility across operating systems, dashes instead of colons have to be used to separate hours and minutes and seconds.
Numbered file names are a thing of the past. Use time stamps.2 -
The project I have been working on was growing and growing and growing... It reached a point where the front-end was really hard to maintain. The worst part was the communication protocol: we were using JSON to serialize really complex objects.
I took some initiative and suggested that we use protobuf instead of JSON. Long story short, data usage is 10% of what it used to be, serialization and deserialization are much faster, and best of all, everything is strongly typed, with auto-generated classes. Fucking awesome!1
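To make the difference concrete, a rough Python sketch; the message definition and the generated module name are invented here, not our actual schema:

```python
# assumes a schema like:  message Reading { string sensor = 1; double value = 2; }
# compiled with:          protoc --python_out=. readings.proto
import json
import readings_pb2  # hypothetical generated module

as_json = json.dumps({"sensor": "temp-01", "value": 21.5}).encode()

msg = readings_pb2.Reading(sensor="temp-01", value=21.5)
as_proto = msg.SerializeToString()                   # a fraction of the JSON size
decoded = readings_pb2.Reading.FromString(as_proto)  # strongly typed on the way back
print(len(as_json), len(as_proto), decoded.sensor)
```
-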
- Eclipse (especially when plugged in with any SCM, excluding Che)
- RichFaces / PrimeFaces (from the pre SPA era)
- WebLogic (how many times do you need to be restarted in a day? )
- SOAP (not a dev technology, but even as a protocol. Thank You Microsoft !!!)
- Struts (what were you doing at the same time as Spring ??? )
- GWT (how did this even find its place inside Google? )
Need more time for a deeper retrospective of each dev tech I've come across :( -
The Project Zero team found that a specially crafted URL could trick the Git client into sending credential information for an alternative host to an attacker's host. In this case, the specially crafted URL needs to contain a newline character to trick the credential handling (which performs URL decoding on most possible URL components with no additional validation) into sending the data off to an alternate host.
Update: the credential protocol code now forbids newline characters in any values.
More : https://lore.kernel.org/lkml/...1 -
I've gotta create a bidirectional communication protocol to link 2-3 RPis over GPIO. I have between 4-5 pins for TX and 4-5 for RX, so each directional bus is that wide.
Even better, I have to assume a 4-bit bus length unless told otherwise, since 4 to 6 pins on the GPIO are usually used for serial/UART, COM and/or 1-pin communications (used to get a console, not to throw data down).
The best part?
Needs to be a Python library.
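Roughly what I have in mind for the TX side, as a very rough sketch (pin numbers, the strobe line and the timing are all assumptions, not a finished protocol):

```python
# 4-bit parallel bus plus one strobe line, sender side only
import time
import RPi.GPIO as GPIO

DATA_PINS = [5, 6, 13, 19]   # 4-bit bus, LSB first (assumed pins)
STROBE = 26                  # receiver latches the nibble on the rising edge

GPIO.setmode(GPIO.BCM)
GPIO.setup(DATA_PINS, GPIO.OUT, initial=GPIO.LOW)
GPIO.setup(STROBE, GPIO.OUT, initial=GPIO.LOW)

def send_nibble(nibble):
    for i, pin in enumerate(DATA_PINS):
        GPIO.output(pin, (nibble >> i) & 1)
    GPIO.output(STROBE, GPIO.HIGH)   # data lines are valid now
    time.sleep(0.001)
    GPIO.output(STROBE, GPIO.LOW)

def send_byte(byte):
    send_nibble(byte & 0x0F)         # low nibble first
    send_nibble(byte >> 4)
```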
i wanna die4 -
Speaking with my former coworker about networking:
Me: Coaxial uses the Ethernet layer two protocol.
Coworker: No? Coax uses the RADIUS layer two protocol.
And this guy was my "superior." -
So I get to work on building a client at work for industrial automation. I am building a mini hmi to show customers how our server works. The code uses opcua. The reason I am making a client is because all the opcua hmis on the market are really expensive. There is nothing less than $600. There are hmis for free out there, but none of them say they support opcua. opcua has become a major protocol in the industrial automation industry.
It took me about 2 days to gin together a client that is pretty much abstracted and will be easy to maintain. A lot of that was just learning the opcua library client code.
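For scale, a bare-bones read with the python-opcua package is only a few lines; the endpoint and node id below are invented, not our server's:

```python
# pip install opcua
from opcua import Client

client = Client("opc.tcp://192.168.0.10:4840")  # assumed server endpoint
try:
    client.connect()
    temperature = client.get_node("ns=2;s=Line1.Temperature")  # assumed node id
    print("current value:", temperature.get_value())
finally:
    client.disconnect()
```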
Now I want to create servers and clients geared toward home automation for fun and profit. I want to take sensor data from arduinos using a simple serial protocol like modbus or other protocols that are supported. Then have an opcua server that collects this data. Then finally have an opcua hmi that I develop talk to these servers. The security model is much better and would be compatible with other vendors clients/servers. I already have a game engine I want to use for the hmi portion. It has tons of widgets for displaying data, graphs, lists, text, etc. It does both 2d and 3d.
This sounds like a project that could really fun, meshes with my work learning, and provides value to people that want to automate their lives.
The other side effect is that the next time I go looking for a simple and cheap hmi that supports opcua, there will be one. -
So, to get FCM to send messages from client to client, you have to implement a server-side protocol on the data layer, in a language that I have never worked with. I'm not getting enough money from my app to implement something like this.2
-
My bandwidth is ordinarily a few hundred kbps, but whenever I torrent it can reach up to 2 mbps, while all other traffic from me and my housemates is stuck in the single digit kbps range.
What does BitTorrent do so fucking well, and how can other protocols replicate this success? Would the total available bandwidth be different if every protocol did whatever enables BitTorrent to summon bandwidth from thin air?10 -
A system of universally accessible storage. Here is what I mean:
1. The fastest protocol available would be used (LAN:smb, else:sftp)
2. File transfers would be direct (no download from A, upload to B)
3. Transfers could be resumed
4. Transparent to normal programs
5. Integrated with GUI file manager
But I'm extremely bad at C, so... -
So I was instructed today, after lunch, by my team lead to spend an hour teaching a member of my team how to SSH, store keys, do basic IO routines, and create cron jobs to auth our ECR registry.. Why am I wasting dev time teaching someone how to use an operating system? Need I add, our primary dev workspace is spun up using Vagrant and runs Xubuntu. I just can't comprehend how this person has been using Xubuntu as their primary OS for two months and doesn't know the SSH protocol. Much less how they landed a dev job without any prior experience with a *NIX based OS.2
-
Didn't know how difficult it is to work with the UDP protocol. Doing local tests between two PCs on the same network it works well, but connecting to a public server over the internet has become a PITA: you have to do some shit like hole punching or UPnP (on some routers, and according to some users on the net it's not reliable) or some other shit in order to connect.
And all that is because of how NAT and UDP work. Libs like libtorrent (C++) can connect using NAT-PMP, PCP and UPnP, but there is nothing in C# that can help with that; this is a game of pure guessing4
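The hole-punching idea itself is small enough to sketch, here in Python just for illustration (the peer's public endpoint would come from a rendezvous server; the address and port are made up):

```python
import socket
import time

PEER = ("203.0.113.7", 40000)   # other side's public ip:port, learned out of band

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 40000))
sock.settimeout(1.0)

# both sides keep firing packets; our first outgoing packet opens a mapping
# in our own NAT so the peer's packets can get back in
for _ in range(10):
    sock.sendto(b"punch", PEER)
    try:
        data, addr = sock.recvfrom(1024)
        print("hole punched, got", data, "from", addr)
        break
    except socket.timeout:
        time.sleep(0.5)
```
-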
Either the coworker next to me doesn't understand social protocol, or my hair's too long for them to notice I've got earbuds in and don't want to be disturbed. Might have to invest in some over-ear cans just to get the message across.1
-
We need to talk about Matrix (the protocol thingy)
It's a pretty neat system: being able to communicate with each other using whatever you want (through whatever other service you want).
What are your experiences so far with it (+ Clients)? Can you give some tips on what to use and to avoid? I heard that depending on the home server it might behave rather slow10 -
Manager: Oh, this feature freeze you were talking about was no joke?
Me: Yes, that's why we have written it into the protocol of the last meeting and everyone agreed...
Manager: That's nonsense, add more Features! -
So, I’ve been given the task of sorting the security out in an application plugging the holes and whatnot as to be honest it’s shocking haha. It doesn’t help that we automate security audits but that’s a different rant for another day.
We’re using devise for authentication (rails standard, ♥️ devise), we have no password resets through the login page, it has to be manually reset by ringing support, why who knows, even though it’s built into the gem and we allow the user to login using an username instead of an email because for whatever reason someone thought it was a bright idea to not have the email field mandatory.
So I hop onto a call with the BAs, basically I go that we need to implement password resets into the login page so the user can do it themselves and also to cut down support calls a ticket is already in place for it. So I go through the standardised workflow for resetting a password. My manager goes.
“I don’t think this will be very secure”
Wait.. what. Have you never reset a password before? It’s following the same protocol as every other app.
We go back and forth and I said I'll get it checked with security, just to keep him happy.
The issue mainly is, well, we can't implement password resets due to 100s of users not having an email on their account.. 🙃 so before we push this change we need to try and notify all users to set a unique email.
Updated the tickets. All dandy.
Looking at the PRs to see what security things have been done if any and turns out one of the devs in India has just written a migration to add the same default email to every user that doesn’t have an email present and yep it got merged. So I go revert the change but talk about taking a “we don’t care about security approach”.
Eventually we want to have the user reset their passwords and login using their email and someone goes a head and does that. Not to mention the security risk.
Jesus Christ I wonder why I bother sometimes.2 -
Although I don't like facebook from various points of views (policies, etc) (still I use it with disabled platform and many other settings) I like most of their developer oriented projects. Graphql being one of them.
Likewise talking about microsoft I have grown to love vscode and language server protocol (which I find too awesome!!)
What's your rant about similar companies? -
Working full time as a "Protocol Engineer" for a big company, taking care of pretty much everything related to AS/NAS on the network layer (2G, 3G, 4G).
I hate it, but it pays really well.
On my free time, revising ML/DL stuff from Udacity's nano (finished it last year) while studying for the VR nano and keeping my coding skills fresh (basic to advanced structures, coding strategies, best practices and stuff).
Love it, but usually I pay a heavy price to keep my mind in place.
Sometimes I just wish to give everythin up and travel the world with my 2 bucks and just try to get some rest. :v
To all of you who go through this kind of stuff, how are you holding up?1 -
So, I need to figure out what IR protocol a controller board uses. I know how to do it with a remote, but how would I do it with the board? Can I reverse engineer the IR receiver to find its protocol?10
-
Second day/night with the Language Server Protocol, and after the "I hate my life" phase I think I am starting to understand this shit (read: found enough libraries and examples that are written in a manner somewhat understandable to my little brain).
Fucking learning process, and having no prior knowledge of TypeScript doesn't help.
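For what it's worth, the base protocol itself is just an HTTP-style Content-Length header followed by a JSON-RPC body, roughly this (a minimal initialize request with the params trimmed down):

```python
import json

def lsp_frame(payload: dict) -> bytes:
    # every LSP message is framed as: Content-Length header, blank line, JSON body
    body = json.dumps(payload).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n" % len(body) + body

initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {"processId": None, "rootUri": None, "capabilities": {}},
}
# write lsp_frame(initialize) to the server's stdin and read the response from stdout
```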
Time to write some simple language server prototype. -
Which of the following is related to Alert Protocol in SSL?
A. SELECT, ALARM
B. ALERT, ALARM
C. WARNING, FATAL
D. FATAL, ALARM
E. SELECT, FATAL
F. I don't always use SSL3 -
In reply to this:
https://devrant.com/rants/260590/...
As a senior dev for over 13 years, I will break you point by point in the most realistic way, so you don't get in troubles for following internet boring paternal advices.
1) False. Being proactive, showing initiative and being eager to learn is a good thing in most places.
This doesn't mean being an entitled asshole, but standing up for yourself (don't get put down and used to do shit for others, or it will become the routine) and showing good learning and exploration skills will definitely put you in a good light.
2) False. 2 things to check:
a) whether the guy over you is an entitled asshole who thinks you're going to steal his job and will try to sabotage you or not answer, acting annoyed, or whether he's a cool guy.
Choose your questions wisely and put them all together. Don't be that guy that fires questions in crumbs, one every 2 minutes.
Put them together and try to work out the obvious and what can be done through Google or ChatGPT by yourself. Then collect the hard ones for the experienced guy and ask them all at once. He's been put over you to help you.
3) Idiotic. NO.
Working code = good code. It's always been like this.
If you follow this idiotic advice you will annoy everyone.
The thing about renaming variables and crap is called a standard. Most companies will have a document with one if there is a need to follow it.
What remains are common programming conventions that everyone mostly follows.
Otherwise you'll end up going crazy over all the rules and small conventions and will start writing messy hot spaghetti code filled with syntactic sugar that no one likes, yourself included.
4)LMAO.
This mostly never happens in real life (seniors sending their code to juniors for review).
But it happens on the other side (junior code gets reviewed).
He must either be a crap programmer or stopped learning years ago(?)
5) This is absolutely true.
Programming is not a forgiving job if you're not honest.
Covering up mess in programming is mostly impossible, expecially when git and all that stuff with your name on it came out.
Be honest, admit your faults, ask if not sure.
Code is code, if it's wrong it won't work magically and sooner or later it will fire back.
6)Somewhat true, but it all depends on the deadline you're given and the complexity of the logic to be implemented.
If it's very complex, you have to divide and conquer (usually).
7)LMAO, this one might be true for multi billionaire companies with thousand of employees.
Normal companies rarely do that because it's a waste of time. They pass knowledge by word or with concise documentation that later gets explained by seniors or TL's to the devs.
Try following this and as a junior:
1) you will have written shit docs and wasted time
2) you will come up to the devs at the deadline with half of the code done and them saying wtf who told you to do that
8) See? What an oxymoron ahahah
Look at point 3 of this guy than re-read this.
This alone should prove you that I'm right for everything else.
9) Half true.
Watch your ass. You need to understand what you're going to put yourself into.
If it's some unknown deep sea shit, with no documentations whatsoever you will end up with a sore ass and pulling your hair finding crumbles of code that make that unknown thing work.
Believe me and not him.
I have been there. To say one, I've been doing some high level project for using powerful RFID reading antennas for doing large warehouse inventory with high speed (instead of counting manually or scanning pieces, the put rfid tags inside the boxes and pass a scanner between shelves, reading all the inventory).
I had to deal with all the RFID protocol, the math behind radio waves (yes, knowing it will let you configure them more efficently and avoid conflicts), know a whole new SDK from them I've never used again (useless knowledge = time wasted and no resume worthy material for your next job) and so on.
It was a grueling, hair pulling, horrible experience that brought me nothing in return execpt the skill of accepting and embracing the pain of such experiences.
And I can go on with other stories. Horror Stories.
If it's something that is doable but complex, hard or just interesting, go for it. Especially if the tech involved is something marketable.
10) Yes, and you can't stop learning, expecially now that AI will start to cover more and more of our work.4 -
Browser automation is a PITA. I’m going on my fourth side mission with this crap and I honestly still look like a newbie. I’ve tried Java Selenium with Chrome, Excel VBA with IE9, Vanilla JS in the browser console, and tonight I’m thinking to concoct some kind of hybrid CDP & Selenium approach in Chrome. Never used CDP before, not even sure where to start but I heard it sucks like anything else unless you get some extra libraries and plugins and stuff.
It doesn’t help that I can’t get just anything I want from our IT Department. It would be another PITA to ask for puppeteer. If puppeteer is totally legit please let me know.
Selenium sucks. The buttons don’t click, the waits don’t wait. Its unusable. Iframes are annoying as all hell but I can deal with that. HTML Tables suck too. It doesn’t help I have to restart my whole java program and whole Chrome every time an element doesn’t get picked correctly. Scripting one single element can take all fucking night.
Chrome dev tools what the fuck. Why the fuck is the DOM explorer in the same window as the web page I’m working on?? I can’t undock it. Am I supposed to use a fucking TV screen to work with this bastard?? If I use the remote chrome tools on port 9225 or whatever - It Still Renders The Whole Fucking Page Alongside The Console. Get Out Of My Way!!! The nested HTML CODE IS ONE CHARACTER WIDE ALL THE TIME. I can’t for the life of me figure out what the fuck I’m looking at. Haven’t you people ever heard of A HORIZONTAL SCROLL BAR at least.
Fuck, I tried using getElementById and the XPath thing, and it's not all that great seeing I have seemingly 1000s of nested divs all over the goddamned place, oftentimes containing a single element. I'm finally on Chrome now, should I learn jQuery now? I mean seriously wtf.
I use this one no-code tool for dev, it has web automation built in. As you can imagine it's just as broken as anything else!! I have 10 screens to navigate and it gets stuck on the second screen all the damn time. Fuck, I love clicking the buttons when my script misses and playing catch-up with it.
So as a workaround to Selenium not waiting even 1 millisecond when I use explicit wait or implicit wait or fluent wait, I'm guessing maybe I can attach both Chrome DevTools Protocol (CDP, as I've called it earlier) and Selenium to the same browser and maybe I can use CDP to perform a wait with any degree of success. Selenium will do nothing more than execute vanilla JavaScript Element.click(); this is the only way I know to even ACTUALLY use Selenium beyond the simplest HTML documents possible. Hell, I guess CDP can execute JS, idk.
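For the record, this is roughly the explicit-wait incantation the docs promise will do the job (Python bindings here, the locator is made up), i.e. the thing that in my experience waits for nothing:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/login")

# block for up to 10 seconds until the element is actually clickable
button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, "submit"))
)
button.click()
```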
I can’t get the new selenium that has CDP but I do have some buggy ass selenium from a few years back. Yeah, I remember reading there was a pretty impactful regression defect in the version I have. Maybe I’m being gaslighted by some shit copy of selenium?
The worst part is that I do seem to be having issues that the rest of the internet's devs do not seem to be having. People act like browser automation is totally viable and pretty OK. How in the fuck hell is my Selenium test suite going to be more reliable than my application under test?!!?? I'll have more fucking bugs in my test suite than in my application. Today, I have less than half a test script and, I. already. fucking. do.
I am still SUPER PISSED at the months of 12 hour days (always 8 hours spent on normal sprint work btw only 4 to automation) I spent trying to automate our regression tests. I got NOWHERE.
I did learn a lot about HTML and JS though like I’m not that mad…but I’m just trying to emphasize my achievement on my task was zero.
The buttons don't click. There are so many divs, and I swear you sometimes need to select a div somewhere in the middle to get it working. The waits don't wait. XHR requests are invisible. Java crashes 100 times before I find an xpath and thread.sleep() combo that works. I have no failure modes to use: sometimes I click the same element 20x in a script because I have no way to know if it clicked the first time! Sometimes you gotta scroll the page to make the click work. So many click methods, all broken. So many wait methods, all broken. It's not just that the elements don't click! There are so many ways to click that almost work, but surely they all fail the same in the end. Ok, at this point I'm just repeating myself…
There are yet more issues that I can't remember… and will soon remember as I journey into this project yet again…
thanks for reading I hope I entertained and would love to hear your experience!5 -
Over the last week I've slowly grown to fucking hate IMAP and SMTP. You'd think after so many years we'd have come up with better servers to manage email but no we still rely on fucking decades old protocols that can't even batch requests.
To make things worse I need to attach to IMAP through node and that has been a nightmare. All the libraries suck ass, and even the ones tailored towards Gmail don't work for Gmail because Google decided one day to fucking put the header at the bottom of some emails and split it into MIME parts. Also, why the fuck is fetching email asynchronous? There's no point at all since requests are processed line by line in IMAP, and if the library actually supported sending asynchronous requests it wouldn't require a new object to be created for each request and allow only a single listener.
Also, callbacks have been antiquated for a while and it pisses me off that node hasn't updated their libraries, i.e. TLS, to support async/await. I've taken to "return await new Promise" where the resolve of the promise is passed as the callback, which lets me go from callback to promise to async/await. If anyone has any other ideas I'm all ears, otherwise I might just rewrite their TLS library altogether...
And this is just IMAP. I wish browsers supported TLS sockets because I can already see a server struggling with several endpoints and users, it would be much easier to open a connection from the client since the relationship is essentially:
Client [N] --- [1] Server [1] --- [1] IMAP
And to make the legs of that N : N which would fix a lot of issues, I would have to open a new IMAP connection for every client, which is cool cause it could be serverless, but horrifying because that's so inefficient.
Honestly we need a new, unifying email protocol with modern paradigms...8 -
Ugh... I don't like how TCP is a stream protocol and how UDP is unreliable and unordered.
I want a semi-reliable, ordered, message protocol dang it!13 -
I spent the whole damn day trying to setup grpc-web, but this protocol is documented so damn poorly!
You manage to set grpc up for one language and it’s all cool, then you stupidly think that you are free to reuse the compiler you used for the nodejs version for your frontend part but nope! Our web module is now deprecated, please use this module instead!
“Ah yes just clone the repo and check out (…) and you can also check this link whic is in no way highlighted in the middle of a wall of text (…)”
*checking the other page*
Ah yes you need to install a package available only on your unix machine (great! Screw the devs in my team who use windows I guess, they’ll be happy to hear this!) and don’t forget to clone this repo to build your own plugin! And by that I ofc mean to compile it on your own!
- compiler error
After digging for an hour you find a requirement in an obscure issue opened and closed cause “ah yes we have a dependency not stated anywhere” *close issue and never add it to the project*
Fine, fine I can survive this bs
- another compiler error, no solution found after 2 hours
Honestly? Why the fuck do I need to compile this stuff? Just give me a damn npm package I can use? Goddamn it’s just transpiling, you don’t need access to my OS! (Aside for fs to save the files, and which btw is accessible via nodejs)
Now, I COULD download the latest release precompiled, but… honestly?
I give up, I’ll do some shitty rest apis cause the customer’s not paying me enough for even THINKING to go trough this shit again when they’ll ask an iOS app. Or having colleagues asking me to help them understand how to do it.
Side note: also add typescript support to the web-code-generation ffs! Why does node have it and web don’t?5 -
I am working on a pub-sub based protocol (like MQTT) with some added features. I am developing a python based server for my protocol which can be run on distributed architecture with load balancing without any tweaks. I am planning to make this server and the protocol open source.
The whole thing is getting so complex that I think about scrapping this project sometimes. I need your inspiration guys. Really, I need it. I know this protocol will be good enough to help people working on IoT, chat or any pub-sub based application if I can complete it. Cheer me up, please. -
Our current assignment in class is a group project where we develop a p2p chat client that works within the same network. The whole class needs to use a common protocol so that the different groups can communicate, which means the leaders have to choose/create one. I got democratically elected. I have also defined most of the protocol so far and kinda managed my group.
Since GUI-guy had the least stuff to do, I told him to copy a Persona 5 theme 😆 -
MQTT - all I used to know about it was its name, until a few months back a client sent us some requirements which included MQTT. I opened its specification and I was fucking shocked! I have been implementing an almost identical protocol in most of my applications (the ones that need a subscription-based service) for the last 3 years. I have developed IoT apps, remote monitoring systems and HMI systems using the same fucking protocol! I had even implemented the same thing over HTTP using long polling a few years back!!
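For anyone who hasn't looked at it yet, the whole publish/subscribe dance is only a handful of lines with the paho-mqtt client (1.x-style callbacks; the broker and topic below are made up):

```python
import paho.mqtt.client as mqtt

TOPIC = "sensors/livingroom/temperature"

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPIC)
    client.publish(TOPIC, "21.5")   # publish once we are connected

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883)
client.loop_forever()
```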
Now I feel like open sourcing my protocol. But I don't know where to start. Any help please?1 -
Is there any kind of protocol/method where I can use something like docker containers in order to "host" compilers like gcc and use that with vscode to compile and assemble source code?
No I'm not talking about volumes (it's a bit tedious if I want to use it to manage numerous projects)3 -
Online Multiplayer Mafia party game built on Ethereum.
Project Type: Existing open source project
Description: I found that most of the blockchain game projects in this space are using traditional web2 technology for hosting gameplay. So, we decided to create a game that utilizes web3 technologies as much as possible for our project and create services like real-time chat, game rooms, player profiles that can be used by other games. These services are very common among modern online multiplayer games and we need a reliable and scalable alternative that uses a web3 tech stack. So, we have decided to create a game that incorporates all these features.
Blockchain smart contracts development is complete. I need help in backend and frontend development. You don't need to have any experience in Blockchain.
Tech Stack: Express.js + React.js + IPFS + Solidity
Current Team Size: 1
URL: https://github.com/cryptomafias/...
Note: We are eligible for a grant from the protocol labs - the company behind IPFS.8 -
does anybody here use diaspora*? for those who don't, it's a free (as in freedom) social network and protocol thereof, and it employs a decentralized, distributed approach. you can choose a "pod" to store your data, and search for people and content inter-podly. as a decentralization/distribution/foss enthusiast, i love the project and check regularly, but sometimes i get the feeling that i'm all by myself there, as i have no friends yet and all the content i see is just my followed keywords. (so befriend me, maybe? :D)5
-
SMB/CIFS support on Linux distros is a nightmare! Switching from wired to wireless will cause ALL mounts to freeze, and they all become impossible to dismount normally. You can't even ls the root folder anymore if there are frozen mount folders inside. It's f#&%ing retarded to have to reboot your PC twice a day because you lost WiFi signal for one second, and the underlying processes don't understand SIGTERM. And I could go on about MTP! Standard file transfer protocol for Android but boy it is hellish. Trying to copy a structure with subfolders will take forever because every ls call to the phone is like an API call to some free webhosting company in Australia, takes forever, if it even succeeds. I won't even get started on WebDAV and SSHFS (the latter is even worse than CIFS). Those make me want to do unpleasant things to my computer. So frustrating! I can't be the only one who has experienced this, right?1
-
I was able to replace Okta Verify with an open source Python script and Android app and I wrote a tutorial for it:
https://battlepenguin.com/tech/...
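The non-push factor is presumably just standard TOTP once you have the shared secret, so the core of the script is tiny; a sketch with the pyotp package (the secret below is fake):

```python
# pip install pyotp
import pyotp

totp = pyotp.TOTP("JBSWY3DPEHPK3PXP")  # fake base32 secret for illustration
print(totp.now())  # same 6-digit code the authenticator app would show
```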
Unfortunately it won't work for our companies VPN which requires Okta Push. After fighting with Security for a bit, it looks like I'll have to do a Part II where I reverse engineer the Okta Verify protocol. -
Created an IoT communication framework generator which generated communication code for any IoT device for any communication protocol or any platform or programming language. Also managed to publish in an IEEE conference
-
Honest question. When do you consider yourself a "Big data engineer"?
Today I managed to create a system that collects historical metrics from monitoring tools every 5 minutes and does all sorts of crazy transformations to make them ingestible by Grafana Mimir over the OTLP protocol. Doing 600 GB a day, millions of active time series... And I still feel it's "small".
Thoughts?5 -
Relatively often the OpenLDAP server (slapd) behaves a bit strange.
While it is a little bit slow (I didn't do a benchmark, but Active Directory seemed to be a bit faster, though it has other quirks and is Windows-only), with a small number of users it's fine. slapd is the reference implementation of the LDAP protocol and I didn't expect it to be much better.
Some years ago slapd migrated to a different configuration style - instead of a configuration file and a required restart after every change made, it now uses an additional database for "live" configuration which also allows the deployment of multiple servers with the same configuration (I guess this is nice for larger setups). Many documentations online do not reflect the new configuration and so using the new configuration style requires some knowledge of LDAP itself.
It is possible to revert to the old file based method but the possibility might be removed by any future version - and restarts may take a little bit longer. So I guess, don't do that?
To access the configuration over the network (only using the command line on the server to edit the configuration is sometimes a bit... annoying) an additional internal user has to be created in the configuration database (while working on the local machine as root you are authenticated over a unix domain socket). I mean, I had to create an administration user during the installation of the service, but apparently that is only for the main database...
The password in the configuration can be hashed as usual - but strangely it only accepts hashes of some passwords (a hashed version of "123456" is accepted but not hashes of a different password, I mean what the...?) so I have to use a single plaintext password... (secure password hashing works for normal user and normal admin accounts).
But even worse are the default logging options: by default (at least on Debian) the log level is set to DEBUG. Additionally, if slapd detects optimization opportunities it writes them to the logs - at least once per connection, if not per query. Together with an application that did a lot of connections and queries (this was not intended and got fixed later) THIS RESULTED IN 32 GB OF LOG FILES IN ≤ 24 HOURS! - enough to fill up the disk and crash other services (lessons learned: add more monitoring, monitoring, and monitoring, and /var/log should be an extra partition). I mean, logging optimization hints is certainly nice - it runs faster now (again, I did not do any benchmarks) - but the verbosity was way too high.
The worst part is the error messages: when entering a query string with a syntax error, slapd returns error code 80 without any additional text - the documentation reveals the SO MUCH BETTER meaning: "other error", THIS IS SO HELPFUL... In the end I was able to find the reason why the input was rejected, but in my experience most error messages are a little bit more precise.2 -
Is it safe to assume that Dell's power button PCB-s all communicate the same way? They all have a single button and four wires.2
-
I have acquired a Bang & Olufsen surround sound system with a broken disc player. I cannot find anything about the connector for the intact speakers, but it seems to use a protocol instead of analog signals.
Does anybody here know something about them and their speakers?1 -
My nose you shouldn’t see
it behaves like protocol UDP
But with my faculties I should be considered a hero
my mind feels like I just divided by zero
I feel like a Java applicated newly created
with the garbage collector just activated
But I try to keep everything on the positive side
same as the COVID test I just tried…1 -
Would it be possible to use (S)FTP protocol in conjunction with push technology rather than pull? Perhaps websockets since both use TCP?
Say, something like an external server periodically sending my server files and when a new file arrives, I will get a notification. This instead of constantly polling my directory to check if there are files in it.
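One thought: on the receiving side, filesystem notifications already behave like push instead of polling; something like this rough sketch with the watchdog package (the directory is made up):

```python
# pip install watchdog
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class NewFileHandler(FileSystemEventHandler):
    def on_created(self, event):
        if not event.is_directory:
            print("new file arrived:", event.src_path)

observer = Observer()
observer.schedule(NewFileHandler(), "/srv/ftp/incoming", recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
```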
I think I can see this done with an Angular page that gives me a notification when a new file arrives on my FTP.
I think it might turn into an interesting little hobby project..4 -
So, I know that RIP (Routing Information Protocol) timers used to synchronise although they were supposed to be started at a random time. My question, why and how did this happen?
-
In addition to being able to lookup DNS queries over Twitter, telegram (even literal ones), devRant, HTTP(s), TLS and even the DNS protocol itself - Cloudflare will now offer DNS-over-HAM in London.
Sources:
- Heise Online (German): https://heise.de/newsticker/...
- Original Tweet: https://mobile.twitter.com/jgrahamc...1 -
Consider an API that uses the HTTP path to represent position in a tree that literally represents a file tree with minimal constraints, and GET/PUT/DELETE methods to read, write and destroy the nodes. How would you encode read/write operations to per-node metadata? The kinds of metadata are static and around 4, so inventing HTTP verbs for each of them is infeasible but filtering is not necessary.
Options considered so far:
- toplevel resources alongside a namespaced /data such as /acl, /lock
- magic keywords to the Range header (this is apparently compliant)
- mimetypes such as text/plain+acl
- SETPROP / PROP methods in the spirit of WebDAV
- headers (I worry this may become an immitigable bottleneck really fast)
I'm looking for any kind of suggestion or insight, not perfect answers.
I read the WebDAV specification and I won't even suggest that I'm trying to align with it, the only protocol I'd seen in the past with comparable scope bloat is WebRTC.22 -
I want to learn about the most important network protocols (HTTP 1/1.1/2, SSH, IMAP, SMTP, IMAP...) but reading the RFCs is extremely time consuming and probably not necessary for someone which doesn't need to implement these protocol.
Do you know more concise resources where I can learn more about the topic?9 -
!rant
Stupid licensing issue.
I have a licensing question/problem.
I'm porting Lemonbar (the fancy GNU/Linux X11 statusbar) to D (which is awesome imo).
I'm adding Wayland functionality and since D is part of the C syntax family some code is just about exactly the same (the XCB libs are protocol-generated external imports).
Also, the X-specific parts are in a specific file.
What do I license the project against? My own license (I prefer Apache) or Lemonbar's? What about the X-specific file?
BTW, it's a full rewrite using the same concepts, object-orienting the whole thing.2 -
Anyone with good understanding of hardware and/or an operating systems network protocols please assist me. I have questions
When using the socket API, I know it's not the actual sockets sending the data; the socket API tells the network protocol to send, receive, listen, connect, etc. Well, what I want to know is how that networking protocol works within the operating system.
My second question is more an extension of the first. After the operating system knows what the socket API wants to do and wants to do it, how does transmitting and receiving work on the physical layer within the hardware?
Idk if what I’m asking makes sense. But if anyone also has any resources or a link that’ll help me on the subject I’d appreciate it. I haven’t found anything on the subjects myself19 -
I've ran into some problems because I misunderstood iOS `Decodable` protocol. After a while I've compiled some utility classes to transform it into something more expected.
I've written a short post about it here and I don't have any place to share for feedback. So I thought I would post it here.
- - -
Thanks for reading. (I cannot post an URL yet... So I guess I can only attach a screenshot of the title...)2 -
!rant
TIL they created an open source e-mail protocol, JMAP (info at http://jmap.io), based on IMAP. The problem is, there is no client nor server that actually uses this. Do you know if they will ever develop one? -
Hello,
I was tweaking with Flutter lately and pushed an app to the stores, the framework looks promising. So I wanted to do some contributions.
If you look up how to create a chat app with Flutter, all you'll find is Firebase tutorials (Google is pushing it so hard).
I want an on-premise solution, so I used the MQTT protocol as the chat protocol (with a custom extension) and I created a Flutter client for that. I am really happy with the concept, it shows some real strengths and could be a thing, so here I am sharing the repository.
Any feedback will be welcomed.
https://github.com/WahidNasri/...4 -
Vivaldi browser seemed like a good idea to escape Google's misfeatures without swapping it for Microsoft extensions (Edge) or Firefox / Gecko idiosyncrasies (size / magnification issues on Ubuntu, slow Android version, clunky UI). But there are some ongoing issues that I never experienced in any other user agent (maybe I will when switching to Chromium), like URL completion (port URLs without a protocol aren't prepended with https but trigger an xdg-open dialog, autocomplete prefers obscure deep links with long paths instead of the base URL, the browser seems to forget login passwords by default, etc.) - so Chromium seems like the obvious choice. But there seem to be no more Chromium builds for Android? Does anyone else disappointed by Vivaldi have a preferred solution?4
-
Okay, so, I have a functional Snort agent instance, and it's spewing out alerts in its "brilliant" unified2 log format.
I'm able to dump the log contents using the "u2spewfoo" utility (wtf even is that name lol... Unified2... something foo) but... It gives me... data. With no actual hint as to *what* rule made it log this. What is it that it found?
All I see are IDs and numbers and timings and stuff... How do I get this
(Event)
sensor id: 0 event id: 5540 event second: 1621329398 event microsecond: 388969
sig id: 366 gen id: 1 revision: 7 classification: 29
priority: 3 ip source: *src-ip* ip destination: *my-ip*
src port: 8 dest port: 0 protocol: 1 impact_flag: 0 blocked: 0
mpls label: 0 vland id: 0 policy id: 0
into information like "SYN flood from src-ip to destination-ip" -
Need some help,
I am setting up postfix and I need it to accept all emails, from any domain (without a domain list), and forward it to a local address on the machine (It pipes into PHP, toscript@).
I have a catch-all working where it forwards the emails to the toscript@ mailbox regardless of the To address. But if I send an email to a domain that is not in the domain list, it gets rejected because it's not in the domain list. Is there a known way to force Postfix to accept mail for all domains without having a list of the domains on the server?
I have searched but had no luck finding a working solution; I have looked at the following without success:
Server Fault: 133190
Server Fault: 422468
Server Fault: 179419
Server Fault: 105641
Server Fault: 161321
Server Fault: 318426
Server Fault: 514643
Server Fault: 410053
Stack Overflow: 4772229
Super User: 353488
Looking at the docs I do not see anything for it other than making it an open relay, but I can't figure out what settings to update to make it an open relay that captures all of the mail.
I know I am missing something but I can't figure out what it is!
::Rant::
I'd like to use Postfix as it seems very stable and it's not a hack job like some of the projects that I have seen. It can also communicate over all of the proper channels for SMTP and the protocol, and it has some very easy configs.2 -
Damm bro life kinda suks now. Gonna move to Iraq or summin get away from these bafoons. Protocol 61 the snakeskin is shed.4
-
THIS is powering the internet:
"[...] was a protocol number, similar to the third argument to socket today. Specifying this structure was the only way to specify the protocol family. Therefore, in this early system the PF_ values were used as structure tags to specify the protocol family in the sockproto structure, and the AF values were used as structure tags to specify the address family in the socket address structures. The sockproto structure is still in 44BSD (pp. 626-627 of TCPv2) but is only used internally by the kernel. The original definition had the comment "protocol family" for the sp_family member, but this has been changed to "address family" in the 4.4BSD source code. To confuse this difference between the AF_ and PF_ constants even more, the Berkeley kernel data structure that contains the value that is compared to the first argument to socket (the dom family member of the domain structure, p. 187 of TCPv2) has the comment that it contains an AF_ value. But some of the domain structures within the kernel are initialized to the corresponding AF value (p. 192 of TCPv2) while others are initialized to the PF value." Richard Stevens 'Unix network programming' -
We can’t use google sheets, cause of security risks.
(Okay...)
Not even for our showcase content.
Which is public.
The showcase content which goal of the company is to have seen by as many ppl as possible.
Cause security issues which may lead to the possibility of people seeing it.
Seeing the content we want them to see.
Roses are red
My dog ate my led
I may be going crazy
It would be so easy
If they used their head
Or at least fucking read
Edit: if any security expert can give me a valid explanation better than: “it’s the protocol” I am willing to accept I am wrong, but then the point is that they (colleagues) are dicks for not explaining5 -
My reverse-engineered network protocol didn't work: the sent messages weren't acknowledged or denied... basically no response from the server at all.
Turned out, after weeks of cluelessness, that I had forgotten to append the PKCS#7 padding...
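For anyone who hasn't met it: PKCS#7 just pads the plaintext up to the cipher's block size with N bytes that each hold the value N. A tiny sketch:

```python
def pkcs7_pad(data: bytes, block_size: int = 16) -> bytes:
    # pad with n bytes of value n (a full extra block if already aligned)
    n = block_size - (len(data) % block_size)
    return data + bytes([n] * n)

def pkcs7_unpad(padded: bytes) -> bytes:
    # the last byte says how many padding bytes to strip
    return padded[:-padded[-1]]

assert pkcs7_unpad(pkcs7_pad(b"hello")) == b"hello"
```
-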
When I started with JavaScript a long time ago I thought JS was weird enough, but Swift is even more so.
Why does it allow me to compile the code below? In the last line, `taker.take(view)`, `view` is an optional passed into a function that expects a non-optional `some View`. How is this even possible!? I tried the same with some other protocols, and then it complains; why is the `View` protocol different from the others?
```
import SwiftUI

struct ViewStruct: View {
    var body: some View {
        Text("")
    }
}

class Taker {
    func take(_ view: some View) {
        print(view)
    }
}

class Container {
    var view: ViewStruct?

    func createView() {
        view = ViewStruct()
    }

    func test() throws {
        let taker = Taker()
        guard let viewIsView = view else {
            throw fatalError()
        }
        taker.take(view)
    }
}
```7 -
Can somebody give a working example of how to solve this:
Access to XMLHttpRequest at 'localhost:8000/index.php/api/companies/1/logo' from origin 'http://localhost:8080' has been blocked by CORS policy: Cross origin requests are only supported for protocol schemes: http, data, chrome, chrome-extension, chrome-untrusted, https.
This error is talked about so much, but I can't find a working solution. Maybe it is out there somewhere, but so far I can't find it in the internet trash.
Nginx server.
Not by installing a Chrome plugin, because other people would also need to install it. That's not a solution.20 -
Whilst procrastinating via semi-helpful browsing,(random blockchain news/info) I come across a new crypto that's really pushing for dev (advertising dev grants etc).
I click "why develop on *whatever*".
This is the start of the page it lead to:
"The Internet began with Web1, a read-only content delivery network. Users could only consume what was offered by site owners, which significantly limited their interaction with the web content."
I blink slowly a few times, figuratively scratch my head and leave.
Am I just too harsh on things like this? I mean, I get that internet history and knowing wtf web3 means is important and all...
Is it too high a bar to expect a link, one specifically trying to entice competent devs who are directly looking into a new web3/blockchain tech to dev with/on, to lead to a page that starts with information at least somewhat relevant to the originating link's stated topic?
Don't get me wrong, I definitely understand the frequent necessity to be pedantic... but starting with multiple paragraphs of internet history when the sole objective of the link is to inform/entice, specifically, competent devs, who are explicitly looking to leverage blockchain tech... just seems ridiculous.
Despite not actually super interested in changing or adding new blockchain tech to dev with in the near future (not dissatisfied with our relatively established groundwork/current approach), I was actually starting to consider branching out a bit to include initial functionality and/or tools/integrations with this protocol i wasnt aware of (not even just for grant $)... but if their idea of onboarding devs to build on their tech starts with an extremely pedantic intro as to Web1-3 basics... they must have a reeeeally low bar/very desperate for devs.
Seeing this makes me pretty certain it'd be easy/minimal effort to get a decent chunk of grant funding... but with a bar THAT low, I'm not wanting to be associated with them.8 -
I just found out about the firmata protocol and I’m geeking out a bit. I’ve been looking for something like it without realizing.
If you like microcontrollers, imagine what you could do with gpio pins on any device, with all the resources of a PC or even a smartphone.
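To give an idea, with the pyfirmata package the PC side of blinking an Arduino pin is about this much (the serial port is an assumption, and the board has to run the StandardFirmata sketch):

```python
# pip install pyfirmata
import time
from pyfirmata import Arduino

board = Arduino("/dev/ttyACM0")   # assumed serial port
led = board.get_pin("d:13:o")     # digital pin 13, output

while True:
    led.write(1)
    time.sleep(0.5)
    led.write(0)
    time.sleep(0.5)
```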
I mean I knew this could be achieved, but didn’t know that anyone designed a protocol. -
Hyper Text Coffee Pot Control Protocol
https://tools.ietf.org/html/rfc2324
This is one of the best things I've read in a while -
EY and ConsenSys announced the formation of the Baseline Protocol with Microsoft which is an open source initiative that combines cryptography, messaging and blockchain to deliver secure and private business processes at low cost via the public Ethereum Mainnet. The protocol will enable confidential and complex collaboration between enterprises without leaving any sensitive data on-chain. The work will be governed by the Ethereum-Oasis Project.
Past approaches to blockchain technology have had difficulty meeting the highest standards of privacy, security and performance required by corporate IT departments. Overcoming these issues is the goal of the Baseline Protocol.
John Wolpert, ConsenSys’ Group Executive for Enterprise Mainnet added, “A lot of people think of blockchains as the place to record transactions. But what if we thought of the Mainnet as middleware? This approach takes advantage of what the Mainnet is good at while avoiding what it’s not good at.”
Source : ConsenSys -
I have been working on a long time, low progress project of mine that keeps on giving and giving.
Let's begin like two years ago where I dipped my toes into "more then gigabit" networking thanks to a Linus Techtips video about infiniband.
I had the dream of booting my Workstation from my NAS, a so called diskless setup.
Well, since I run FreeNAS on my Nas , a very nice Freebsd based Nas OS, everything's gonna be good.
In the beginning, there was no infiniband support.
Turns out, you don't need it, since the mellanox CX2 nics can do ETH too.
Yay.
Just took me a few weeks of anger.
So, to be able to boot something over the network, you need firmware that finds the bookable stuff and loads it.
That protocol and firmware is called PXE.
PXE needs a DHCP server telling it what to do and what is where, etc.
FreeNAS here I come! Installing dnsmasq on the actual FreeNAS install turned out to be not that great of an idea because FreeNAS thinks of itself as an "appliance" that you don't fiddle with. So things work, until you update/upgrade, when everything will basically be wiped except what you have done through the UI.
Ok. So I'm gonna use a jail, a container-like thing, for that.
Everything is great, jail has internet, everything Installs fine, what could go wrong?
Dnsmasq can launch and work, but not as dhcp server. Some thing about permissions.
Turns out, jails have permission like things.
A few days of head scratching later, it has ALL the permissions.
Dnsmasq still can't work as DHCP server though, why you ask?
Because it needs a specific kernel module that isn't contained in the jail. Since jails are kind of like a Docker container, they run on the same OS kernel, which does not have this module, so I'd need to patch FreeNAS, which is an appliance, so fuck that.
Like a year later, freenas has finally added good VM support, so why not make a VM for the dhcpserver?
Well, about a year ago, I didn't know that the virtual Intel nic is a fucken unstable piece of garbage, crashing nearly any OS at some point.
So that was it for a while again.
Now to the last few weeks.
Finally dnsmasq is running in a freebsd VM with a good and working configuration which is rather simple, if those tutorial fuckers out there would explain shit instead of just telling you to copy, paste and replace X.
Now back to the PXE side.
I'm using iPXE because I have no clue how to boot anything over TFTP, so iSCSI it is, since that is what I can relate to.
The idea behind iscsi is to fake a SCSI disk over the network. Attached devices appear as if they are actually directly connected to the machine instead of over the network.
iPXE gets a lease from the server, can connect to it, everything is fucken great. Finally.
Except that if it "sanBoots" the iscsi drive, it can't find anything to boot.
Well fuck.
If I attach a Linux live USB over iscsi, it boots, finds grub, and crashes because the live iso isn't configured for network-boot.
But it boots.
So what's so different?
Well, iPXE is booted in legacy mode, whereas the content of the target is Windows 10 in EFI mode.
Ffff.
Ok. Can I get iPXE to boot in EFI mode?
Well yes, after like 3 days fiddling with it.
But it only finds the onboard Intel nic instead of the new Mellanox CX3 cards, and can't even connect to the target....
Sooo, I guess my options are as follows.
Either get EFI PXE to work on the network cards directly (it's called FlexBoot, and it might be able to, since I just found some firmware options for that).
Or give up on efi and install windows in legacy mode.
Which isn't that easy when it has to end up on a drive on my nas. -
What's the name of that protocol that federated social media software implements? ActivityPub? Well, whatever the name is, I know it does not implement any concept for fitness-specific social updates (like the app Hevy does).