Hacking/attack experiences...
I'm, for obvious reasons, only going to talk about the attacks I went through and the *legal* ones I did 😅 😜
Let's first get some things clear/funny facts:
I've been doing offensive security since I was 14-15. Defensive since the age of 16-17. I'm getting close to 23 now, for the record.
First system ever hacked (metasploit exploit): Windows XP.
(To be clear, at home through a pentesting environment, all legal)
Easiest system ever hacked: Windows XP yet again.
Time it took me to crack/hack into today's OSes (remote + local exploits, don't remember which ones I used by the way):
Windows: XP - five seconds (damn, those metasploit exploits are powerful)
Windows Vista: Few minutes.
Windows 7: Few minutes.
Windows 10: Few minutes.
OSX (in general): 1 hour (finding a good exploit took some time, got to root level easily afterwards. No, I do not remember how/what exactly, it was years and years ago)
Linux (Ubuntu): A month approx. Ended up using a Java applet through Firefox when that was still a thing. Literally had to click it manually xD
Linux (RHEL-based systems): Still not exploited. SELinux is powerful, motherfucker.
Keep in mind that I had a great pentesting setup back then 😊. I don't have that setup, nor do I do that anymore, since I love defensive security more nowadays and simply don't have the time anymore.
Dealing with attacks and getting hacked.
Keep in mind that I manage around 20 servers (including VPSes and dedis), so I get the usual amount of SSH brute force attacks (thanks for keeping me safe, CSF!), which is about 40-50K every hour. Those IPs automatically get blocked after three failed attempts within 5 minutes. No root login allowed + RSA key login with freaking strong passwords/passphrases.
linu.xxx/much-security.nl - All kinds of attacks: application attacks, brute force, sometimes DDoS (though that is also mostly mitigated at provider level), to name a few. So, except for my own tests and a few DDoSes on both those domains, nothing really threatening. (as in, nothing seems to have fucked anything up yet)
How did I discover that two of my servers were hacked through brute forcers while no brute force protection was in place yet? I'd installed a barebones Ubuntu Server onto both; they only come with system-default applications. Tried installing Nginx the next day - port 80 was already in use. I always run 'pidof apache2' to make sure it isn't running, and thought I'd run that for fun even though I knew I hadn't installed it and it doesn't come with the distro. It was actually running. Checked the auth logs and saw successful root logins - fuck me - reinstalled the servers and installed Fail2Ban. It bans any IP address which had three failed SSH logins within 5 minutes:
Enabled Fail2Ban -> checked iptables (iptables -L) literally two seconds later: 100+ banned IP addresses - holy fuck, no wonder I got hacked!
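For reference, that whole rule fits in a few lines of jail config - a minimal sketch, with the ban duration being an example value (I don't remember exactly what I used):

# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 3      # three failed logins...
findtime = 300    # ...within 5 minutes (in seconds)
bantime  = 3600   # how long the IP stays banned - example value

# sanity check that it's actually banning:
fail2ban-client status sshd
iptables -L       # look for the f2b-sshd chain (older versions: fail2ban-ssh)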
There's one other kind/type of attack I get regularly, but unless it gets much worse, I'll deal with it later :)
Dealing with different kinds of attacks:
Web app attacks: extensively testing everything for security vulns before releasing it into the open.
Network attacks: Nginx rate limiting/CSF rate limiting against SYN DDoS attacks, for example (rough sketch after this list).
System attacks: Anti brute force software (Fail2Ban or CSF), anti rootkit software, AppArmor or (which I prefer) SELinux, which actually catches quite some web app attacks as well, and REGULARLY UPDATING THE SERVERS/SOFTWARE.
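A rough sketch of what I mean by the rate limiting above (zone name and thresholds are made-up example values, not my production config):

# nginx, in the http {} block: cap each client IP at 10 req/s with a small burst
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
server {
    location / {
        limit_req zone=perip burst=20 nodelay;
    }
}

# iptables version of the SYN flood throttle (CSF wires up something similar for you)
iptables -A INPUT -p tcp --syn -m limit --limit 25/s --limit-burst 50 -j ACCEPT
iptables -A INPUT -p tcp --syn -j DROP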
So yah, hereby :P
-
So back story... I opened up my own company a while back. I provide not only general IT and phone repair etc but I also do ethical penetration testing and patch the holes.
Before opening my own business, me and some buddies went out to a bowling alley and bar to have a few drinks. I wanted to see what their network was like... I hacked into their entire network in less than two minutes. From my iPhone. I was in their switches, I was configuring their printers and fax machines. Lord knows what I could have done if I had my laptop.
Anyways, back to the rant... I got this text today. 😂😩🔫
-
Excuse the profuse amount of profanity below.
Fuck this fucking fucked up motherfucker of a fucking director. Money does not make you a fucking decent person, and you come in here and tell me that you pay my fucking measly salary so I must be fucking grateful.
Starts off with a boardroom meeting this morning. Wireless connection on my laptop takes two minutes to connect, I get told that I am wasting company time and that the salary of everyone in the meeting is quite a lot ("with me being the highest"- cuntface director) so stop wasting time. Fuck you man, it's a fucking wireless connection. I am building your motherfucking company applications and doing web design and for what, so I can earn fuckall and be told that I am fucking wasting time. I am presenting your fucking site you wanted, so give me a fucking minute extra to start up the fucking wireless connection.
The fucking mails are taking long to send, great, let's come down and fucking scream at the dev who regrettably said he would try and assist IT (by calling the provider). I literally just got told that I am the following. 1) Fucking stupid 2) He is going to close the dept down because I apparently fuck up (yet again cuntface, your fucking mailserver is NOT MY FUCKING PROBLEM) 3) He is going to contact an external company to come and check my work. 4) I am fucking useless. 5) I am telling him lies (yeah fuckface, I worked as a sys admin, I know what a motherfucking DNS server is and what it does. You don't - so don't fucking tell me that I am lying when I tell you there is a DNS fucking issue, because you don't know what the fuck you are talking about - to top that off motherfucker, I FUCKING BUILT YOUR FUCKING SERVER AND YOUR FUCKING NETWORK. I FUCKING KNOW HOW IT WORKS AND WHAT THE FUCK I AM TALKING ABOUT).
On top of that, I got pushed out of the way of my own PC, and my code got some fucked up gibberish in it (because he was trying to minimise my editor and typed some in it), and now I have to fucking roll back. He told me I am wasting company time and he will take my shit away from me if I download something again. It is an open network. I downloaded Java and fucking updated Sublime. Jesus man. What the fucking fuck.
"why is your gmail open?!?!" because I was testing your emails from an external network. "DON'T FEED ME BULLSHIT" (even though the top mail states "test"). It's the whole fucking "my money determines my dick size" mentality.
That being said, I got told that I need to work overtime, without pay, to resolve IT's issue, even if I have to on the weekend.
That being said, my new Dell that I had just bought (my own) got thrown on the floor and he fucked off out of my office. Stupid motherfucker. I fucking earn nothing but cannot leave. I will find another job, and when I do - you can go and fuck yourself and your fucking degrading opinions. I am not fucking stupid, so fuck you. Fuck your company and fuck you. Cunt.
-
Worst dev team failure I've experienced?
One of several.
Around 2012, a team of devs were tasked to convert an ASPX service to WCF that had one responsibility: returning product data (description, price, availability, etc...simple stuff)
No complex searching, just pass the ID, you get the response.
I was the original developer of the ASPX service, whose API took an XML request and returned an XML response. The 'powers-that-be' decided anything XML was evil and had to be purged from the planet. If this thought bubble popped up over your head "Wait a sec...doesn't WCF transmit everything via SOAP, which is XML?", yes, but in their minds SOAP wasn't XML. That's not the worst WTF of this story.
The team - 3 developers, 2 DBAs, network administrators, several web developers - worked on the conversion for about 9 months using the Waterfall method (3~5 months was mostly meetings and very basic prototyping) and a test-first approach (their own flavor of TDD). The 'go live' was set for 3:00AM, and it was mandatory that nearly the entire department be on-site (including the department VP) and available to help troubleshoot any system issues.
3:00AM - Teams start their deployments
3:05AM - Thousands and thousands of errors from all kinds of sources (web exceptions, database exceptions, server exceptions, etc), site goes down, teams roll everything back.
3:30AM - The primary developer remembered he'd made a last-minute change to a stored procedure parameter that hadn't been pushed to production, which caused a side-effect across several layers of their stack.
4:00AM - The developer found his bug, but the manager decided it would be better if everyone went home and get a fresh look at the problem at 8:00AM (yes, he expected everyone to be back in the office at 8:00AM).
About a month later, the team scheduled another 3:00AM deployment (VP was present again), confident that introducing mocking into their testing pipeline would fix any database related errors.
3:00AM - Team starts their deployments.
3:30AM - No major errors, things seem to be going well. High fives, cheers..manager tells everyone to head home.
3:35AM - Site crashes, like white page, no response from the servers kind of crash. Resetting IIS on the servers works, but only for around 10 minutes or so.
4:00AM - Team rolls back, manager is clearly pissed at this point, "Nobody is going fucking home until we figure this out!!"
6:00AM - Diagnostics found the WCF client was causing the server to run out of resources, with a mix of clogged-up server bandwidth and a sprinkle of an N+1 scaling problem. Manager lets everyone go home, but be back in the office at 8:00AM to develop a plan so this *never* happens again.
About 2 months later, a 'real' development+integration environment was stood up (previously, any+all integration tests were on the developer's machine) and the team scheduled a 6:00AM deployment, but at a much, much smaller scale with just the 3 development team members.
Why? Because the manager 'froze' changes to the ASPX service, and the web team still needed various enhancements, so they bypassed the service (not using the ASPX service at all) and wrote their own SQL scripts that hit the database directly and utilized AppFabric/Velocity caching to allow the site to scale. There were only a couple of client applications using the ASPX service that needed to be converted, so deploying at 6:00AM gave everyone a couple of hours before users got into the office. Service deployed, worked like a champ.
A week later the VP schedules a celebration for the successful migration to WCF. Pizza, cake, the works. The 3 team members received awards (and an envelope, which probably equaled some $$$) and the entire team received a custom Benchmade pocket knife to remember this project's success. Myself and several others just stared at each other, not knowing what to say.
Later, my manager pulls several of us into a conference room
Me: "What the hell? This is one of the biggest failures I've been apart of. We got rewarded for thousands and thousands of dollars of wasted time."
<others expressed the same and expletive sediments>
Mgr: "I know..I know...but that's the story we have to stick with. If the company realizes what a fucking mess this is, we could all be fired."
Me: "What?!! All of us?!"
Mgr: "Well, shit rolls downhill. Dept-Mgr-John is ready to fire anyone he felt could make him look bad, which is why I pulled you guys in here. The other sheep out there will go along with anything he says and more than happy to throw you under the bus. Keep your head down until this blows over. Say nothing."11 -
I think the coolest project I did was a few years ago, it was actually a Minecraft plugin.
I decided to learn Java for Minecraft, and a few months after I started learning Java, I was approached by someone who wanted to work with me to create this full-blown Gun Game style gamemode for Minecraft. I made it clear I didn't have the most knowledge, but I was willing to learn.
We began working on the project; the project's main class was bigger than any project I had worked on. Within a few months, it became one of the more popular plugins out there, even though we were still in an alpha mode. Had nearly 1,000 servers running the plugin, and 10k+ players total testing it out.
Cause of this project, I learnt how to properly organize my code, how to make it efficient, learnt how to network, learned how to properly secure and verify anything being sent by the client, working with dependencies, adding features that can support a bunch of other plugins that other developers had, and a bunch more.
Sadly we couldn't finish the plugin anymore, so we gave someone else the source code, who has kept it updated to this day. (I know I didn't provide much insight into what I'm saying and just gave a general overview, got a killer headache.)
-
Install Kali
Configure Kali
Testing with a VM
Testing with my own Network
FUCKED MY NETWORK FOR 4 DAYS
.
.
.
Analyse my Mistakes
Test again with my Network
Accomplished
Tested my neighbour's network
Found 3 ways to get in less than 5 Minutes
Went to my neighbour
Said that his network is pretty easy to hack
Earned 30€
-
Our web department was deploying a fairly large sales campaign (equivalent to a ‘Black Friday’ for us), and the day before, at 4:00PM, one of the devs emails us and asks “Hey, just a heads up, the main sales page takes almost 30 seconds to load. Any chance you could find out why? Thanks!”
We click the URL they sent, and sure enough, 30 seconds on the dot.
Our department manager almost fell out of his chair (a few ‘F’ bombs were thrown).
DBAs sit next door, so he shouts…
Mgr: ”Hey, did you know the new sales page is taking 30 seconds to open!?”
DBA: “Yea, but it’s not the database. Are you just now hearing about this? They have had performance problems for over a week now. Our traces show it’s something on their end.”
Mgr: “-bleep- no!”
Mgr tries to get a hold of anyone …no one is answering the phone..so he leaves to find someone…anyone with authority.
4:15 he comes back..
Mgr: “-bleep- All the web managers were in a meeting. I had to interrupt and ask if they knew about the performance problem.”
Me: “Oh crap. I assume they didn’t know or they wouldn’t be in a meeting.”
Mgr: “-bleep- no! No one knew. Apparently the only ones who knew were the 3 developers and the DBA!”
Me: “Uh…what exactly do they want us to do?”
Mgr: “The –bleep- if I know!”
Me: “Are there any load tests we could use for the staging servers? Maybe it’s only the developer servers.”
DBA: “No, just those 3 developers testing. They could reproduce the slowness on staging, so no need for the load tests.”
Mgr: “Oh my –bleep-ing God!”
4:30 ..one of the vice presidents comes into our area…
VP: “So, do we know what the problem is? John tells me you guys are fixing the problem.”
Mgr: “No, we just heard about the problem half hour ago. DBAs said the database side is fine and the traces look like the bottleneck is on web side of things.”
VP: “Hmm, no, John said the problem is the caching. Aren’t you responsible for that?”
Mgr: “Uh…um…yea, but I don’t think anyone knows what the problem is yet.”
VP: “Well, get the caching problem fixed as soon as possible. Our sales numbers this year hinge on the deployment tomorrow.”
- VP leaves -
Me: “I looked at the cache, it’s fine. Their traffic is barely a blip. How much do you want to bet they have a bug or a mistyped url in their javascript? A consistent 30 second load time is suspiciously indicative of a timeout somewhere.”
Mgr: “I was thinking the same thing. I’ll have networking run a trace.”
4:45 Networking ran their trace, and sure enough, there was some relative path of ‘something’ pointing to a local resource that didn’t exist outside development; the request was waiting/timing out after 30 seconds. Fixed the path and the page loaded instantaneously. Network admin walks over..
NetworkAdmin: “We had no idea they were having problems. If they told us last week, we could have identified the issue. Did anyone else think 30 second load time was a bit suspicious?”
4:50 VP walks in (“John” is the web team manager)..
VP: “John said the caching issue is fixed. Great job everyone.”
Mgr: “It wasn’t the caching, it was a mistyped resource or something in a javascript file.”
VP: “But the caching is fixed? Right? John said it was caching. Anyway, great job everyone. We’re going to have a great day tomorrow!”
VP leaves
NetworkAdmin: “Ouch…you feel that?”
Me: “Feel what?”
NetworkAdmin: “That bus John just threw us under.”
Mgr: “Yea, but I think John just saved 3 jobs. Remember that.”
-
Worst hack/attack I had to deal with?
Worst, or funniest. A partnership with a Canadian company got turned upside down and our company decided to 'part ways' by simply not returning his phone calls/emails, etc. A big 'jerk move' IMO, but all I was responsible for was a web portal into our system (submitting orders, inventory, etc).
After the separation, I removed the login permissions, but the ex-partner system was set up to 'ping' our site for various updates and we were logging the failed login attempts, maybe 5 a day or so. Our network admin got tired of seeing that error in his logs and reached out to the VP (responsible for the 'break up') and requested he tell the partner their system is still trying to log in and to stop it. A couple of days later, we were getting random 300, 500, 1000 failed login attempts (causing automated emails to notify us that there was a problem). The partner knew that we were likely getting alerted, and kept up the barrage. When alerts get high enough, they are sent to the IT-VP, which gets a whole bunch of people involved.
VP-Marketing: "Why are you allowing them into our system?! Cut them off, NOW!"
Me: "I'm not letting them in, I'm stopping them, hence the login error."
VP-Marketing: "That jackass said he will keep trying to get into our system unless we pay him $10,000. Just turn those machines off!"
VP-IT : "We can't. They serve our other international partners."
<slams hand on table>
VP-Marketing: "I don't fucking believe this! How the fuck did you let this happen!?"
VP-IT: "Yes, you shouldn't have allowed the partner into our system to begin with. What are you going to do to fix this situation?"
Me: "Um, we've been testing for months already went live some time ago. I didn't know you defaulted on the contract until last week. 'Jake' is likely running a script. He'll get bored of doing that and in a couple of weeks, he'll stop. I say lets ignore him. This really a network problem, not a coding problem."
IT-MGR: "Now..now...lets not make excuses and point fingers. It's time to fix your code."
IT-VP: "I agree. We're not going to let anyone blackmail us. Make it happen."
So I figure out the partner's IP address, and hard-code the value in my service so it doesn't log the login failure (if IP = '10.50.etc and so on' major hack job). That worked for a couple of days, then (I suspect) the ISP re-assigned a new IP and the errors started up again.
After a few angry emails from the 'powers-that-be', our network admin stops by my desk.
D: "Dude, I'm sorry, I've been so busy. I just heard and I wished they had told me what was going on. I'm going to block his entire domain and send a request to the ISP to shut him down. This was my problem to fix, you should have never been involved."
After 'D' worked his mojo, the errors stopped.
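For the curious, the network-level version of what I'd been hacking into the application is a one-liner (203.0.113.0/24 is a placeholder range here - I never saw D's exact rules):

iptables -A INPUT -s 203.0.113.0/24 -j DROP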
Month later, 'D' gave me an update. He was still logging the traffic from the partner's system (the ISP wanted extensive logs to prove the customer was abusing their service) and like magic one day, it all stopped. ~2 weeks after the 'break up'.
-
Are you using socat?
Any interesting use case you would like to share?
I am using it to create fake / proxy docker containers for network testing.
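A stripped-down example of the idea (ports, names and the alpine/socat image are just what I'd reach for, adjust to taste):

# fake 'service' container: accepts TCP on 5432 and replies with nothing
docker run --rm -p 5432:5432 alpine/socat TCP-LISTEN:5432,fork,reuseaddr SYSTEM:'printf ""'

# proxy container: forwards port 8080 into another service so you can observe/break the path
docker run --rm -p 8080:8080 alpine/socat TCP-LISTEN:8080,fork,reuseaddr TCP:real-service:80
-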
Just tech screened a kid for a senior Network automation role, in a specific niche.
He's never automated anything before. Didn't know networking basics, didn't know about the niche...
This guy hasn't heard of unit testing or UDP... good luck out there kid. You've got balls anyway.
-
Before shutting something down, verify that it's the right one ;)
A colleague was testing some performance setting on a server and needed to restart the network driver so he disabled it and was going to enable it when the remote session died ;)
Fast 30 min trip to the datacenter to enable network card.
-
A few years ago I was browsing Bash.org, and a user posted that he'd physically lost a machine.
A few weeks ago, I'd switched my router out for OPNSense. I figured it was time to start cleaning up my network.
Over the course of tracking down IP addresses and assigning statics to MAC addresses, I spotted an IP I didn't recognize.
Being a home network, I'm pretty familiar with everything on the network by IP, so was a little taken aback.
I did some testing, found out that it was a Linux box. Cool.
I can SSH into it. Ok.
Logs show that it's running fine, no CPU/Memory/Harddrive issues. Nice.
So where is it?
Traceroute shows it's connected directly to the router... Maybe over an unmanaged switch...
Hostname is "localhost"... That's no help.
I've walked the network 4 times now, and God knows where it is.
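One thing I haven't tried yet: since I can SSH in, I can make the box identify itself. A sketch, assuming the NIC driver supports it (and that the interface is really called eth0):

ethtool -p eth0 120   # blink the port LED for 120 seconds, then go stare at the switches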
I think maybe I'll just leave it alone. If it ain't broke...
-
Dear Product Owners,
If you tell me how I need to architect my software again I'm going to ask you to provide a network topology of the architecture you want me to build.
I'll also need you to request the new servers, work with the ops teams to setup credentials, provision the NAT, register the domains and document the routes that the proxy will need to use.
Then I'll need you to hook the repo up to our non-existent pipeline so that I can make sure I won't do all that testing I already can't do.
I hope you're paying attention, because that framework you told me I needed to use is going to be a pain to setup correctly.
After you're done with that, please attach any documentation you shit out to the ticket you never created.
Enragedly yours,
Looking for a new job
PS: get fucked
-
Writing a Firefox add-on which analyzes the current webpage's links and puts up a warning if one of those links points to one of the PRISM (surveillance network) integrated services/companies.
Nearly ready enough for testing (real working version) when I suddenly realize: web pages can request resources without links.
Gotta rewrite the fucker partly now.
How fucking stupid can I be. 😐
-
After months of development, testing, testing and even more testing the app was ready for deployment to production. Happy days, the end was in sight!
I had a week's leave so I handed over the preparation for deployment to my Senior Developer and left it in his capable hands while I enjoyed the sun and many beers.
I came back on the day of deployment and proudly pressed the deploy button. Hurrah!
Not long after I got loads of phone calls from around the country as the app wasn't working. What madness is this?! We tested this for months!
Turns out my Senior didn't like the way I'd written the SQL queries so he changed them. Which is obviously both annoying and unprofessional, but even worse he got a join wrong so the memory usage was a billion times more and it drained the network bandwidth for the whole site when I tried to debug it.
I got all the grief for the app not working and for causing many other incidents by running queries that killed the network.
So...much... rage!!!
-
I haven't ranted for today, but I figured that I'd post a summary.
A public diary of sorts.. devRant is amazing, it even allows me to post the stuff that I'd otherwise put on a piece of paper and probably discard over time. And with keyboard support at that <3
Today has been a productive day for me. Laptop got restored with a "pacman -Syu" over a Bluetooth mobile data tethering from my phone, said phone got upgraded to an unofficial Android 9 (Pie) thanks to a comment from @undef, etc.
I've also made myself a reliable USB extension cord to be able to extend the 20-30cm USB-A male to USB-C male cord that Huawei delivered with my Nexus 6P. The USB-C to USB-C cord that allows for fast charging is unreliable.. ordered some USB-C plugs for that, in order to make some high power wire with that when they arrive.
So that plug I've made.. USB-A male to USB-A female, in which my short USB-C to USB-A wire can plug in. It's a 1m wire, with 18AWG wire for its power lines and 28AWG wires for its data lines. The 18AWG power lines can carry up to 10A of current, while the 28AWG lines can carry up to 1A. All wires were cut into 1m pieces. These resulted in a very low impedance path for all of them; my multimeter measured no more than 200 milliohms across them, though I'll have to verify and fine-tune that on my oscilloscope with 4-wire measurement.
So the wire was good. Easy too, I just had to look up the pinout and replicate that on the male part.
That's where the rant part comes in.. in fact I've got quite uncomfortable with sentences that don't include at least one swear word at this point. All hail to devRant for allowing me to put them out there without guilt.. it changed my very mind <3
Microshaft WanBLowS.
I've tried to plug my DIY extension cord into it, and plugged my phone and some USB stick - whose filesystem I'd completely forgotten - into it. Windows certainly doesn't support it.. turns out that it was LUKS. More about that later.
Windows returned that it didn't support either of them, due to "malfunctioning at the USB device". So I went ahead and plugged in my phone directly.. works without a problem. Then I went ahead and troubleshot the wire I'd just made with a multimeter, to check for shorts.. none at all.
At that point I suspected that WanBLowS was the issue, so I booted up my (at the time) problematic Arch laptop and did the exact same thing there, testing that USB stick and my phone there by plugging it through the extension wire. Shit just worked like that. The USB stick was a LUKS medium and apparently a clone of my SanDisk rootfs that I'm storing my Arch Linux on my laptop at at the time.. an unfinished migration project (SanDisk is unstable, my other DM sticks are quite stable). The USB stick consumed about 20mA so no big deal for any USB controller. The phone consumed about 500mA (which is standard USB 2.0 so no surprise) and worked fine as well.. although the HP laptop dropped the voltage to ~4.8V like that, unlike 5.1V which is nominal for USB. Still worked without a problem.
So clearly Windows is the problem here, and this provides me one more reason to hate that piece of shit OS. Windows lovers may say that it's an issue with my particular hardware, which maybe it is. I've done the Windows plugging solely through a USB 3.0 hub, which was plugged into a USB 3.0 port on the host. Now USB 3.0 is supposed to be able to carry up to 900mA rather than 500mA, so I expect all the components in there to be beefier. I've also tested the hub as part of a review, and it can carry about 1A no problem, although it seems like its supply lines aren't shorted to VCC on the host, like a sensible hub would do. Instead I suspect that it's going through the hub's controller.
Regardless, this is clearly a bad design. One of the USB data lines is biased to ~3.3V if memory serves me right, while the other is biased to 300mV. The latter could impose a problem.. but again, the current path was of a very low impedance of 200milliohms at most. Meanwhile the direct connection that omits the ~200ohm extension wire worked just fine. Even 300mV wouldn't degrade significantly over such a resistance. So this is most likely a Windows problem.
That aside, the extension cord works fine in Linux. So I've used that as a charging connection while upgrading my Arch laptop (which, as you may know, had internet issues at the time) over Bluetooth, through a shared BNEP connection (Bluetooth tethering) from my phone. Mobile data, since I didn't set up my WiFi in this new Pie ROM yet. Worked fine, fixed my WiFi. Currently it's back in my network as my fully-fledged development host. So that way I'll be able to work again on @Floydian's LinkHub repository. My laptop's the only one that currently holds the private key for signing commits for git$(rm -rf ~/*)@nixmagic.com, hence why my development has been impeded. My tablet doesn't have them. Guess I'll commit somewhere tomorrow.
(looks like my rant is too long, continue in comments)
-
Best code performance incr. I made?
Many, many years ago our scaling strategy was to throw hardware at performance problems. Hardware consisted of dedicated web server and backing SQL server box, so each site instance had two servers (and data replication processes in place)
Two servers turned into 4, 4 into 8, 8 into around 16 (don't remember exactly what we ended up with). With Windows Server and SQL Server licenses getting into the hundreds of thousands of dollars, the 'powers-that-be' were becoming very concerned with our IT budget. With our IT-VP and other web mgrs being hardware-centric, they simply shrugged and told the company that's just the way it is.
Taking it upon myself, started looking into utilizing web services, caching data (Microsoft's Velocity at the time), and a service that returned product data, the bottleneck for most of the performance issues. Description, price, simple stuff. Testing the scaling with our dev environment, single web server and single backing sql server, the service was able to handle 10x the traffic with much better performance.
Since the majority of the IT mgmt were hardware centric, they blew off the results saying my tests were contrived and my solution wouldn't work in 'the real world'. Not 100% wrong, I had no idea what would happen when real traffic would hit the site.
With our other hardware guys concerned the web hardware budget was tearing into everything else, they helped convince the 'powers-that-be' to give my idea a shot.
Fast forward a couple of months (lots of web code changes), early one morning we started slowly turning on the new framework (3 load balanced web service servers, 3 web servers, one sql server). 5 minutes...no issues, 10 minutes...no issues,an hour...everything is looking great. Then (A is a network admin)...
A: "Umm...guys...hardly any of the other web servers are being hit. The new servers are handling almost 100% of the traffic."
VP: "That can't be right. Something must be wrong with the load balancers. Rollback!"
A:"No, everything is fine. Load balancer is working and the performance spikes are coming from the old servers, not the new ones. Wow!, this is awesome!"
<Web manager 'Stacey'>
Stacey: "We probably still need to roll back. We'll need to do a full analysis of why the performance improved and apply it to the current hardware setup."
A: "Page load times are now under 100 milliseconds from almost 3 seconds. Lets not rollback and see what happens."
Stacey:"I don't know, customers aren't used to such fast load times. They'll think something is wrong and go to a competitor. Rollback."
VP: "Agreed. We don't why this so fast. We'll need to replicate what is going on to the current architecture. Good try guys."
<later that day>
VP: "We've received hundreds of emails complementing us on the web site performance this morning and upset that the site suddenly slowed down again. CEO got wind of these emails and instructed us to move forward with the new framework."
After full implementation, we were able to scale back to only a few web servers and a single sql server, saving an initial $300,000 and a potential future savings of over $500,000. Budget analysis considering other factors, over the next 7 years, this would save the company over a million dollars.
At the semi-annual company wide meeting, our VP made a speech.
VP: "I'd like to thank everyone for this hard fought journey to get our web site up to industry standards for the benefit of our customers and stakeholders. Most of all, I'd like to thank Stacey for all her effort in designing and implementation of the scaling solution. Great job Stacy!"
<hands her a blank white envelope, hmmm...wonder what was in it?>
A few devs who sat in front of me turn around, network guys to the right, all look at me with puzzled looks, with one mouthing "WTF?"
-
Got pulled out of bed at 6 am again this morning, our VMs were acting up again. Not booting, running extremely slow, high disk usage, etc.
This was the 6 time in as many weeks this happened. And always the marching orders were the same. Find the bug, smash the bug, get it working with the least effort. I've dumped hundreds of hours maintaining this broken shitheap of a system, putting off other duties to keep mission critical stations running.
The culprits? Scummy consultants, Windows 10 1709, and Citrix Studio.
Xen Server performed well enough, likely due to its open source origins and CentOS architecture.
Whelp. DasSeahawks was good and pissed. Nothing like getting rousted out of bed after a few scant hours rest for patching the same broken system.
DasSeahawks lost his temper. Things went flying. Exorcists were dispatched and promptly eaten.
Enough. No consultants, no analysts, and no experts touched it. No phone calls, no manuals, not even a google search. Just a very pissed admin and his minion declaring blitzkrieg.
We made our game plan, moved the users out, smoked our cigs, chugged monster, and queued a gnu-metal playlist on spotify.
Then we took a wrecking ball to the whole setup. User docs were saved, all else was rm -r * && shred && summon -u Poseidon -beast Land_Cracken.
Started at 3pm and finished just after midnight. Rebuilt all the vms with RDP, murdered citrix studio (and their bullshit licenses), completely blocked Windows 10 updates after 1607, and load balanced the network.
So what do we get when all the experts are fired? Stabbed lightning. VMs boot in less than 10 seconds, apps open instantly, and server resources are half their previous usage state. My VMs are now the fastest stations in our complex, as they should be.
Next to do: install our mxgpu, script up snapshots and heartbeat, destroy Windows ads/telemetry, and set up PDQ. Damn, it's good to be good!
What I learned --> never allow testing to go to production, consultants will fuck up your shit for a buck, and vendors are half as reliable as consultants. Windows works great without Microsoft, thin clients are overpriced, and getting pissed gets things done.
This, my friends, is why admins are assholes.
-
I just got pissed off when someone on my team asked me how to start a web server on port 8080 (needed for network port testing)
I check the port, get a 404. 🤔🤔🤔
Spend half an hour explaining to them about ports and how there's already a web server running... And that 404 is an HTTP status code.
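Something like the following makes the point in two lines (python3's built-in server is just the quickest thing to hand; any server works):

python3 -m http.server 8080 &        # a web server is now listening on port 8080
curl -i http://localhost:8080/nope   # TCP connects fine; the *HTTP* response is 404 Not Found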
I'm pretty sure she works on our webapp and maybe even REST APIs...
New grad but still.... Not recognizing a 404?
Maybe those pretty 404 pages these days make it harder to realize that it's a fucking web server error response....
-
TL;DR Dear boss, firstly, you always get someone to review anything important done by a fucking intern.
Secondly, you do not give access to your fucking client's production server to an intern.
Thirdly, you don't ask your fucking intern to test the intern's work that has not been reviewed by anyone directly on your client's fucking production server.
Last week, the boss and one of the lead devs (the only guy with some serious knowledge about systems and networking) decided to give me (an intern who barely has any work experience) the task of fixing or finding an alternate solution to allowing their support team access to their client machines. Currently they used a reverse SSH tunnel and an intermediary VH but for some reason, that was very unreliable in terms of availability. I suggested using OpenVPN and explained how it would work. Seemed to be a far better idea and they accepted. After several days of working through documentations and guides and everything, I figured out how OpenVPN works and managed to deploy a TEST server and successfully test remote access using two VMs. On seeing my tests, the boss told me that he wanted to test it on the client network. I agreed. Today he comes to me and he tells me to prepare testing for tomorrow and that the client technician is going to give me access to one of their boxes. And then he adds, "It's a working prod server. We'll see if we can make it work on that" and left. I gaped at him for a while and asked another dev guy in the room if what I heard was right. He confirmed. Turns out, the lead dev and the boss's son (who also works here) had had a huge argument since morning on the same issue and finally the dev guy had washed it off his hands and declared that if anything goes wrong from testing it on production, it's entirely the boss's own fault. That's when the boss stepped in and approached me. I ran back to his office and began to explain why prod servers don't top the list of things you can fuck around with. But he simply silenced me saying, "What can go wrong?" and added, "You shouldn't stay still. You should keep moving". Okay, like firstly what the fuck and secondly, what the fuck?.
Even though OpenVPN client is not the scariest thing to install, tomorrow's going to be fun.
-
So a few months ago, I got a half-broken old iPhone (microphone, speaker and cameras not working) for testing purposes, and it sits 99.9% of the time on my shelf turned off. Today, I turned it on, and after I opened Safari, I was surprised in not exactly the most pleasant way.
When I started typing in the address bar, a strange suggestion from Siri came up for a website my mom had searched a few hours earlier on her Android tablet. Like what the actual fuck?? There is absolutely 0 connection between these 2 devices, and there is a PiHole running on the local network. The only thing that I can think of is that she is using Google (logged out) and it looks like they are actively sharing their data based on IP addresses. Wow...
-
So client wants an android app that implements some legacy Epson printer SDK, works on a chinese Windows device with an android emulator on it, connects to a local webservice that had to be configured and run (local network), sends and tracks data, handles server downtime on the client and reconnects as soon as the server is back up, and runs its own TCP server on the Android device that listens for specific http requests, which make the android connect to an Epson printer to start printing. The stuff that is being printed? A png file that has to be converted to a Bitmap, and a QR code that has to be generated from the buggy base64-encoded stuff coming in via http (webserver -> Android TCP server)
Don't forget the software design (MVP), documentation, research etc.. I'm about to finish the app, it's my 5th day on this project, and the 6th day was planned to be full testing. The client calls me and asks me how far I am, I reply, he says ok. 30 minutes later he tells me he won't pay me that much next time because this work should take 3 days, or even 2. "A senior Android developer could do this in 2 days"... When I sent him my notices he called me a liar, his webdev has a lot of experience and told him it should take 2-3 days...ffs
-
Testing a chat application that works within a local network on my mobile hotspot.
600 mb gone
WTF!
-
TIL how time-delay relays are used in circuits with a hell of a lot of power. For example, to power and control the motor of a machine that lifts 20 people up to the sky and back to ground again.
FYI this is a simple circuit. We will hopefully start experimenting with SPS/Speicherprogrammierbare Steuerung (Programmable Logic Controllers). I cannot wait to make use of our predesigned circuits with complex logic gates with the new unit we will get to know.
Unfortunately, we will do practical networking (testing cables for signal strength and speed, building a network of telephones and calling each other - guess this is going to be the funniest part lol, etc.) before the SPS and LOGO software phase.
Btw. I am doing an all-nighter rn and repeating everything we did in our recent signal calculation lesson. We have another exam tomorrow.
-
The most crazy issue I've fixed was caused by a TCP behavior which I didn't know, called the "half-closed connection".
There was a third-party application installed on a production server which called a LDAP server for retrieving users information. During the day we had several users using the application and all worked fine. During the night, when the application was not accessed, something happened and the first call to the application in the morning was stuck for about 5 minutes before returning a response. I tried to reproduce the issue in a testing environment without success. Then I discovered that the application and the LDAP server were located on two different networks, with a firewall between them. And firewalls sometimes drop old connections. For this reason network applications usually implement a keep-alive mechanism. Well, the default LDAP Java libraries don't set the keep-alive on their connections. So, I found a library called "libdontdie", which force the keep-alive on the connections. I installed the library on the server, loaded it at the startup and the weird stuck behavior in the morning disappeared.
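For anyone who hits the same thing: libdontdie gets LD_PRELOADed in front of the JVM so every socket has SO_KEEPALIVE forced on, with no code changes. Roughly like this - the library path and the variable names are from memory of the project's README, so double-check them:

DD_DEBUG=1 \
DD_TCP_KEEPALIVE_TIME=180 \
LD_PRELOAD=/usr/lib/libdontdie.so \
java -jar ldap-client.jar   # ldap-client.jar stands in for the real application
-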
New twist on an old favorite.
Background:
- TeamA provides a service internal to the company.
- That service is made accessible to a cloud environment, also has a requirement to be made available to machines on the local network so you can develop against it.
- Company is too cheap/stupid to get a s2s vpn to their cloud provider.
- Company also only hosts production in the cloud, so all other dev is done locally, or on production non-similar infra, local dev is podman.
- They accomplish service connectivity by use of an inordinately complicated edge gateway/router/firewall/message translator/ouija board/julienne fry maker, also controlled by said service team.
Scenario:
Me: "Hey, we're cool with signing requests using an x509 cert. That said, doing so requires different code than connecting to an unsecured endpoint. Please make this service accessible to developer machines and lower environments on the internal network so we can, you know, develop."
TeamA: "The service should be accessible to [cloud ip range]"
Me: "Yes, that's a production range. We need to be able to test the signing code without testing in production"
TeamA: "Can you mock the data?"
Me: "The code we are testing is relating to auth, not business logic"
TeamA: "What are you trying to do?"
Me: "We are trying to test the code that uses the x509 you provide to connect to the service"
TeamA: "Can you deploy to the cloud"
Me: "Again, no, the cloud is only production per policy, all lower environments are in the local data center"
TeamA: "can you try connecting to the gateway?"
Me: "Yes, we have, it's not accessible, it only has public DNS, and only allows [cloud ip range]"
TeamA: "it work when we try it"
Me: "Can you please supply repro steps so we can adjust our process"
TeamA: "Yes, log into the gateway and try issuing the call from there"
Me: (╯°□°)╯︵ ┻━┻
tl;dr: Works on my server -
I continue to internally read and study about Smalltalk in an effort to see where we might have FUCKED UP and gone backwards in terms of software engineering, since I do not believe that complex source-code-based languages are the solution.
So I have Pharo. Nothing too complex really, everything is an object, yet you do have room for building DSLs inside of it over a simple object model with no issue; the system browser can be opened across multiple screens (morph windows inside of a Smalltalk system) in which you can edit your code in composable blocks with no issues. Blocks, being a particular part of the language (think Ruby in more modern features), give ample room for functional programming. Thus far we have FP and OO (the original, mind you) styles out in the open for development.
Your main code can be executed to instantly ALTER the live environment of a running program; if what you are trying to do is stupid, it won't affect the live instance. Live programming is ahead of its time, and impressive, considering how old Smalltalk is. GUI applications can be run headless (this is also old in terms of how this shit was first distributed). So I can go ahead and package the virtual machine with the entire application into a folder, and distribute it across an organization ("but why!!!! that package is 80+ mbs!") yeah cuz it carries the entire virtual machine, but go ahead and give it to the Mac user, or the Linux user, it will run, natively, once it is clicked.
Server-side applications run in similar fashion to PHP, in terms of request lifecycles and how session storage is handled. This to me is interesting: no additional runtimes, drop it on a server, configure it properly and off you go. But this is common in other languages, so really not that much of a point.
BUT if over a network a user is using your application and you change it and send that change over the network, then the change is damn near instant and fault tolerant due to the nature of the language.
Honestly, I don't know what went wrong or why we are not bringing this shit to the masses. The language was built for fucking kids; it was the first "y'all too stupid to get it, so here is simple" engine, and we still said "nah fuck it, unlimited file system based programs, horrible build engines and {}; all over the place"
I am now writing a large budget managing application in Pharo Smalltalk which I want to go ahead and put to the test soon at my institution. I do not have any issues thus far, other than my documentation help being literally "read the source code of the package system", which is easy as shit since it is already included inside. My scripts are small, my class hierarchies are self-contained AND testing is part of the system. I honestly see no faults other than "well....fuck you I like opening vim and editing 300000000 files"
And honestly that is fine, my question is: why is a paradigm that fits procedural, functional and OBVIOUSLY OO styles, while including an all-encompassing IDE, NOT more famous? Selection is fine and other languages are a better fit for some things, but why is such an environment not more famous?
-
Let's just wait till the day before deploy to finally authorize API testing for users outside of the network - and then get super upset when we aren't adjusted, prepared, and tested in time for next-day deployment
-
My first rant. Woohoo!
Honestly I do the whole shebang usually, depending on what the needs are, from network to servers to coding, because for some reason nobody has any technical experience where I work.
I just started app development for a gamedev startup and I am in sheer awe of the amount of transpiling/compiling etc that needs to be done for a multiplatform app for iOS and android with js(x)/typescript, html, css.
I remember when I could just write some spaghetti code to make it work by following a couple of tutorials. Then refactoring and testing it for a couple of hours and being done with it. Push it into production.
Now I am lost having to learn OOP, functional programming, reactjs, react native, express, webpack, mongodb, babel, and the list goes on and on...
Why not just make a new backend in a single language that supports all of that natively?
I have no formal education in programming/coding and the last time I learned JS it was just some if else, switches and simple dom manipulation.
I just want to get to coding a freakin' game but I have to learn JSX for the frontend and TypeScript on the backend.
I am this close to going back to ye ol' LAMP stack and quitting this job. 😥
-
## building my own router
I hoped things would go more smoothly :)
Anyway, my new miniPC easily accepted CentOS 8 - no fuss here. And I've got to say - I love CentOS8 so far! Shell has amazing nifty tricks, UI (gnome3) is also snappy, video/audio/ethernet,.. everything works.
What I did NOT expect is hardware being off. Well okay, the price was low - it was obvious smth is not right. But still.. I decided to build my own router so that I could swap wifi card whenever I want. So that I could run my own network services in there. Turns out - the card swapping is not as easy as one might think.
I got the AX200 WiFi6 card for that very purpose. But once plugged in, the OS can only see its bluetooth module. Weird... What's even weirder is that even though the card is PCIe, the OS uses the btusb module to talk to that device. What? USB?? emm.. What??
And there it is. After opening it up again I noticed that the mPCIe area is marked with a label: "USB WIFI / WWAN". USB? Does that mean this PCIe slot is wired into the USB bus? Not impossible I guess.
Googling for a "pcie wifi over usb" or smth like that brought me to one reddit thread (I think?) where someone wanted to build a DIY wifi mPCIe -> USB adapter and someone else advised him that (for some reason) at best he could only get bluetooth working (hey! just like me!). It's got to do smth with pcie channels and USB being too weak to handle all that load, or smth.. IDK, I'm not a HW guy.
Well that sucks then! I have a mPCIe slot that does not work as PCIe. Shit! So I guess the best I could do is to plug back in the same wifi card that came with the device. It smells like 2003 - supports only the g protocol. Fine, let's try that. Maybe I'll find a way to work around this mPCIe limitation later on (USB adapter or smth... except there are no USB WIFI6 dongles yet :( ). So I plug it back in and start turning it into a router. Disable NetworkManager, configure static NICs' settings, install dhcpd, hostapd, bind and others. Looks like all is done! Now it's time to start it all. systemctl start hostapd --> FAILED. wtf? journalctl says it could not initialize a driver. umm okay? Why? Forums say I should airodump-ng check and kill whatever's using that device. Fine. airodump reveals avahi and wpa_suppl are still using it. kill, kill, GOTTA KILL 'EM ALL!! Starting hostapd again -- same shit... wtf?
iw list
My gawd... That shitty network card does not even support AP mode :( I mean.. My USB wifi dongle for 2€ supports 2x more modes, is faster, has better range and is easier to work with than this old tart!
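Lesson learned - the check I should have run before anything else (plus the aircrack-ng helper that kills whatever is holding the interface):

iw list | grep -A 10 'Supported interface modes'   # look for '* AP' in the output
airmon-ng check kill                               # stops NetworkManager/wpa_supplicant and friends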
Yeah. That was an interesting day. When environment engineers break my testing environments at work, I'm glad I now have somewhere to spend my time.
BTW any ideas how to bypass this mPCIe nonsense? Come on, there are USB GPUs out there.. Why can't they make a USB (or dual-USB if they really need to) mPCIe adapter?
-
I need some opinions on Rx and MVVM. It's being done in iOS, but I think it's a fairly general programming question.
The small team I joined is using Rx (I've never used it before) and I'm trying to learn and catch up to them. Looking at the code, I think there are thousands of lines of over-engineered code that could be done much more simply. From a non-Rx point of view, I think we are following some bad practices; from an Rx point of view the guys are saying this is what Rx needs to be. I'm trying to discuss this with them, but they are shooting me down, saying I just don't know enough about Rx. Maybe that's true, maybe I just don't get it, but they aren't exactly explaining it, just telling me I'm wrong and they are right. I need another set of eyes on this to see if it is just me.
One of the main points is that there are many places where network errors shouldn't complete the observable (i.e. can't call onError), I understand this concept. I read a response from the RxSwift maintainers that said the way to handle this was to wrap your response type in a class with a generic type (e.g. Result<T>) that contained a property to denote a success or error and maybe an error message. This way errors (such as incorrect password) won't cause it to complete, everything goes through onNext and users can retry / go again, makes sense.
The guys are saying that this breaks Rx principals and MVVM. Instead we need separate observables for every type of response. So we have viewModels that contain:
- isSuccessObservable
- isErrorObservable
- isLoadingObservable
- isRefreshingObservable
- etc. (some have close to 10 different observables)
To me this is overkill: so many streams, all frequently delivering one message or none. I would have aimed for 1 observable that returns an object holding properties for each of these things, and sends several messages. Is that not what streams are supposed to do? Then the local code can use filters as part of the subscriptions. The major benefit of having 1 is that it becomes easier to make it generic and abstract away, which brings us to point 2.
Currently, due to each viewModel having different numbers of observables and methods of different names (but effectively doing the same thing), the guys create a new custom protocol (equivalent of a Java interface) for each viewModel with its N observables. The viewModel creates local variables of PublishSubject, BehaviorSubject, Driver etc. Then it implements the protocol / interface and casts all the locals back as observables. e.g.
protocol CarViewModelType {
    var isSuccessObservable: Observable<Car> { get }
    var isErrorObservable: Observable<String> { get }
    var isLoadingObservable: Observable<Void> { get }
}

class CarViewModel {
    let isSuccessSubject = PublishSubject<Car>()
    let isErrorSubject = PublishSubject<String>()
    let isLoadingSubject = PublishSubject<Void>()
    // other stuff
}

extension CarViewModel: CarViewModelType {
    var isSuccessObservable: Observable<Car> {
        return isSuccessSubject.asObservable()
    }
    var isErrorObservable: Observable<String> {
        return isErrorSubject.asObservable()
    }
    var isLoadingObservable: Observable<Void> {
        return isLoadingSubject.asObservable()
    }
}
This has to be created by hand, for every viewModel, of which there is one for every screen - and there are 40+ screens. This same structure is copy / pasted into every viewModel. As mentioned above, I would like to make this all generic: have a generic protocol for all viewModels to define 1 observable and 1 local variable of generic type, and handle the cast back automatically. The method to trigger all the business logic could also have its name standardised ("load", "fetch", "processData" etc.). Maybe we could also figure out a few other bits too. This would remove a lot of code, as well as making the code more readable (less messy), and make unit testing much easier. While it could never do everything automatically, we could test the basic responses of each viewModel and have at least some testing done by default, and not have everything be very boilerplate-y and copy / paste in nature.
The guys think that subscribing to isSuccess and / or isError is perfect Rx + MVVM. But for some reason subscribing to status.filter(success) or status.filter(!success) is a sin of unimaginable proportions. Also the idea of multiple buttons and events all "reacting" to the same method named e.g. "load", is bad Rx (why if they all need to do the same thing?)
My thoughts on this are:
- To me it's identical in meaning and architecture, one way is just significantly less code.
- Let's say I agree it's not textbook; is it not worth bending the rules to reduce code?
- We are already breaking the rules of MVVM to introduce coordinators (which I hate, as they are adding even more unnecessary code), so why is breaking it to reduce code such a no-no.
Any thoughts on the above? Am I way off the mark or is this classic Rx?
-
Since my ISP doesn't allow port forwards on that port, does anyone want to open their port 445 (if not otherwise occupied by Microsoft-DS) and have netcat reply an empty string or so to any request (such as an empty string) sent over TCP (or has anyone already done so)? It would help me out a lot with testing out some network's full port range (since portquiz.net's hoster blocks that port) and would take close to no bandwidth for you.
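If you're up for it, roughly this should do it (needs root since 445 is below 1024; netcat flags differ between the traditional/GNU and BSD variants, hence the socat alternative):

# socat: accept TCP on 445, reply with an empty string, close
socat TCP-LISTEN:445,fork,reuseaddr SYSTEM:'printf ""'

# traditional/GNU netcat loop
while true; do printf "" | nc -l -p 445 -q 1; done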
-
Fk you Google!
My Samsung Note 10 screen went dead nearly a week ago... it's a secondary line, so waiting for parts wasn't the end of the world.
Ofc the screen (curved, and incl. a fingerprint reader that'd be a major pain to not replace) was integrated into the whole front half... back panel glued, battery glued immensely, and with all other parts out, only about 6mm of space at the bottom to get a tool in to pry it out.
New screen (off brand) ~200... all genuine parts amazon refurb ~230... figured I'd have some extra hardware for idk what... I like hardware and can write drivers so why not.
Figured I'd save a bit of time and avoid other potentially damaged (water) components by just swapping out the mobo unit that had my storage.
Put it back together, first checked that my SIM was recognised since this carrier required extraneous info when registering the dev... worked fine... fingerprint worked fine, Brave browser too...
Then i open chrome. It tells me im offline... weird cuz i was literally in a discord call. My wifi says connected to the internet (not that i wouldn't have known the second there was a network issue... i have all our servers here and a /28 block... ofc i have everything scripted and connected to alert any dev i have, anywhere i am, the moment something strange happens).
Apparently google doesnt like the new daughter board(i dislike the naming scheme... its weird to me)... so anything that is controlled by google aside from the google account that is linked to non-google reliant apps like this... just hangs as if loading and/or says im offline.
I know... itll only take me about the 5-10m it took to type this rant but ffs google... why dont you even have an error message as to what your issue is... or the simple ability to let me log in and be like 'yup it's me, here's your dumb 2fa and a 3rd via text cuz you're extra paranoid yet dont actually lock the account or dev in any way!'
I think it's a toss up if google actually knows that it's doing this or they just have some giant glitch that showed up a couple times in testing and was resolved via the methods of my great grama- "just smack it or kick it a few times while swearing at it in polish. Like reaaaally yelling. Always worked for me! If not, find a fall guy."7 -
Has hacking become a hobby for script-kiddies?
I have been thinking about this for a while now. I went to a class at Stanford last summer to learn penetration testing. Keep in mind that the class was supposed to be advanced, as we all knew the basics already. When I got there I was aggravated, as the whole course was using Kali Linux and the applications that come with it.
After the course was done and I washed off the gross feeling of using other people's tools, I went online to try to learn some tricks about pen-testing outside of Kali Linux's tools. To my chagrin, I found that almost 90% of the documentation from senior pen-testers was discussing tools like "aircrack-ng" or "Burp Suite".
Now I know that the really good pen-testers use their own code and tools, but my question is: has hacking become a script-kiddie hobby, or am I thinking about the tools the wrong way?
It sounds very interesting to learn HTTPS and network exploits, but it takes the fun out of it if the only documentation tells me to use tools. -
I had the idea that part of the problem in NN and ML research is that we all use the same standard loss and nonlinear functions. In theory most NN architectures are universal approximators. But there's a big gap between symbolic and numeric computation.
Some of our bigger leaps in improvement weren't just from new architectures, but from entirely new approaches to how data is transformed and how we calculate loss - KL divergence, for example.
And it occurred to me that all we really need is training/test/validation data, and with the right approach we can let the system discover not only the architecture (been done before), but also the nonlinear and loss functions themselves, and see what pops out the other side as a result.
If a network can instrument its own code, as it were, maybe it'd find new and useful nonlinear functions and losses. Networks wouldn't just specify a conv layer here or a maxpool there, but derive implementations of these all on their own.
More importantly, with a little pruning we could even use successful examples to bootstrap smaller, more efficient algorithms, all within the graph itself, and use genetic algorithms to mix and match nodes at training time to discover what works or doesn't - or do training, testing, and validation in batches, to anneal a network in the correct direction.
By generating variations of successful nodes and graphs, and using substitution, we can use comparison to minimize error (for some measure of error over accuracy and precision) and select the best graph variations, without strictly having to do much point mutation within any given node, minimizing deleterious effects - sort of like how gene expression leads to unexpected but fitness-improving results for an entire organism, while point mutations typically cause disease.
It might seem like this wouldn't work out of the gate, just on the basis of intuition, but I think the benefit of working through node substitutions, or entire subgraph substitutions, is that we can check test/validation loss before training is even complete. A toy sketch of the selection loop I mean is below.
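Something like this toy Swift sketch (Graph, substituteRandomSubgraph, and validationLoss are hypothetical stand-ins, not anything I've actually built yet):

struct Graph { /* nodes, edges, opcodes */ }

// Stand-in: swap one node or subgraph for a variant drawn from
// previously successful nodes.
func substituteRandomSubgraph(in graph: Graph) -> Graph {
    return graph
}

// Stand-in: run the candidate graph against held-out data.
func validationLoss(of graph: Graph) -> Double {
    return .random(in: 0...1)
}

func evolve(seed: Graph, populationSize: Int, generations: Int) -> Graph {
    var population = [seed]
    for _ in 0..<generations {
        // Variation by subgraph substitution rather than point mutation.
        let variants = population.map { substituteRandomSubgraph(in: $0) }
        // Score each candidate once, then select on validation loss -
        // checked before training is even complete.
        let scored = (population + variants).map { ($0, validationLoss(of: $0)) }
        population = Array(scored.sorted { $0.1 < $1.1 }
            .prefix(populationSize)
            .map { $0.0 })
    }
    return population[0]
}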
If we train a network to specify a known loss, we can even have that network evaluate the others, and run variations on our network's loss node to find better losses during training time - and at some point let nodes refer to these same loss-calculation graphs within themselves, switching between them dynamically via variation and substitution.
I could even envision probabilistic lists of jump addresses, or mappings of value ranges to jump addresses, or having await()-style opcodes on some nodes that, upon being encountered, queue up ticks from upstream nodes whose calculations the await()ed node relies on, to do things like emergent convolution.
I've written all the classes and started on the interpreter itself; just a few things need fleshing out now.
Here's my shitty little partial sketch of the opcodes and ideas.
https://pastebin.com/5yDTaApS
I think I'll teach it to do convolution, color recognition, maybe try MNIST, or teach it step by step how to do sequence masking and prediction - dunno yet. -
A colleague of mine was assigned to do manual testing of an application. It involved testing the application in no-network / offline conditions.
While doing this he disconnected the virtual Ethernet of the Remote Desktop, thus crashing the VM forever...
Dear Elon Musk, send him to Mars right away. -
They keep training bigger language models (GPT et al). All the researchers appear to be doing this as a first step, and then running self-learning. The way they do this is to train a smaller network, using the bigger network as a teacher. Another way of doing this is dropping some parameters and nodes and testing the performance of the network, to see if the smaller version performs roughly the same - on the theory that some initializations and configurations just happen, by happenstance, to be efficient (like finding a "winning lottery ticket").
My question is: why aren't they running these two procedures *during* training and validation?
If [x] is a good initialization or larger network and [y] is a smaller network, then after each training and validation round, we run [x] against a potential [y]. If the result is acceptable and [y] is a good substitute, y becomes x, and we repeat the entire procedure.
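In sketch form (Swift-flavoured pseudocode; Network, trainEpoch, validationLoss, and shrink are assumed placeholders for whatever the real training stack provides):

struct Network { /* weights, topology */ }

func trainEpoch(_ net: inout Network) { /* one pass over training data */ }
func validationLoss(_ net: Network) -> Double { return 0 } // stand-in
func shrink(_ net: Network) -> Network { return net } // prune/distil a smaller candidate

// After every train/validate round, test a shrunken candidate [y]
// and promote it to [x] if it's an acceptable substitute.
func trainWithOngoingCompression(start: Network, epochs: Int, tolerance: Double) -> Network {
    var x = start
    for _ in 0..<epochs {
        trainEpoch(&x)
        let xLoss = validationLoss(x)
        let y = shrink(x)
        if validationLoss(y) <= xLoss + tolerance {
            x = y // y becomes x, and the whole procedure repeats
        }
    }
    return x
}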
The idea is not to optimize mere training and validation loss, but to bootstrap a sort of meta-loss that exists across the whole span of training, amortizing the loss function.
Anyone seen this in the wild yet? -
Started testing my most recent side project today... renaming 1.1TB of movie files according to proper naming conventions, over a 100Mbps network. It's been running for 11 hours already and isn't even a quarter of the way through.
-
Today I spent 9 hours trying to resolve an issue with .NET Core integration testing on a project with SOAP services created using a third-party SOAP library, since .NET Core doesn't support SOAP anymore. And WCF is before my time.
The tests run in-process so that we can override services like the database and file storage - basically I/O settings, but not code.
This morning I wrote the first test by creating a connected service reference to generate a service client. That way I don't need to worry about generating SOAP messages and keeping them in sync with the code.
I sent my first request and... can't find endpoint.
3 hours later I learn via Fiddler that a real request is being made. It's not using the virtual in-process server and HTTP client; it's sending an actual network request that Fiddler picks up, and of course that needs a real server accepting requests... which I don't have.
So I start on MSDN. Please God help me. Nope. Nothing. Makes sense, since SOAP is dead on .NET Core.
Now what? Nothing on the internet, because of the above. Nothing in the third-party SOAP library. Nothing. At this point I question if I have hit my wall as a developer.
Another 4 hours later I have reverse-engineered the Microsoft code on GitHub and figured out that I am fucked. It's so hard to understand.
2 more hours later I have figured out a solution. It's pure filth... I hide it away in another tooling project and move all the filth to internal classes :D The equivalent of tidying your room as a kid by shoving it all under the bed. But fuck it.
My SOAP tests now use the correct HTTP client with the virtual server. I am a magician. -
Tested a script that watches a folder for any missing folders (team members kept moving them), then showed it to the guy who was mentoring me. It did what it needed to, so we were going to switch it from the test directory on the local box to the network share.
I had been testing against the live folder for a couple of days, didn't notice, and didn't screw anything up. There was a "good job, you idiot," and he added it right to his big script that runs 24/7. FeelsGoodMan.jpg -
Ok, so: I have a MacBook for work. And for the most part, I love it. It's a good-looking device with a fast CPU and enough RAM to run stuff locally for testing, even multiple services / environments at the same time, without getting overly sluggish.
And, the best thing: it isn't Windows. I have a good, working shell (zsh), so I can use all the command-line tooling I could wish for, I have a somewhat working package manager, and everything.
But there are just some little things I really can't wrap my head around. And since everything is so locked in by Apple, there are no sensible ways to fix those things without having a bunch of extra programs / services running all the time, introducing overhead and configuration for things I neither want nor need, and so on.
First of all, why the hell did you think the normal way of typing "@" on a German ISO keyboard should be the key combination for closing the currently focused application? I've been a daily user of macOS for over 2 years now, and I still keep quitting applications regularly, almost every day.
Or, scroll direction: I use a mouse (G Pro Wireless) and not just the touchpad, but when I am in a meeting or something (or when I take my MacBook with me to configure a switch that isn't accessible over the network), I don't want to take the mouse with me; the touchpad is pretty good - big, precise and everything. But for some dumb reason, they decided to reverse the scroll direction for the mouse by default, so if you change that to use the mouse like a normal person, it also changes the scroll direction for the touchpad. And the worst part is: there doesn't seem to be ANY easy way to separate those two settings, or to automatically set the scroll direction when a mouse is connected.
So every time I use my laptop somewhere else, which also happens regularly, the scroll direction is wrong, which means I have to go into the settings, change it, then change it back when I'm at my desk again.
It just doesn't make any sense. Stop trying to "know what our customers want", and please, dear Mr. Tim Apple, give your customers the freedom to know for themselves what they want.
Thanks for listening to my TED Talk. -
I currently use GitHub as my git server and have worked a little bit with Travis. Sadly, Travis can't deploy to local network targets, and that's why I had the idea to create my own basic CI for the local network: it would be a simple Node.js app that listens for pushes via GitHub webhooks, then fetches the code from GitHub, runs a task runner like Gulp over it, and tests it with any Node.js testing framework. Then it deploys the compiled, tested and linted app to the local network. What do you think of this idea?
-
My work product: Or why I learned to get twitchy around Java...
I maintain a Java-based test system that tests a raster image processor. The client is a Java Swing project that contains CORBA bindings to the internal API of the raster image processor. It also has custom-written UI elements and duplicated functionality that became available in later versions of Java - but because some of the third-party tools we use don't work with later versions of Java for some reason, it's not possible to upgrade Java to gain things as simple as recursive directory deletion. Yes, the version of Java we have to use does not support something that simple, and custom code had to be written to support it.
Because of the requirement to build the API bindings along with the client, the whole application must be built with the raster image processor's build chain, which is a heavily customised Jam build system. So an Ant task calls out to execute a Jam task, and Jam does about 90% of the heavy lifting.
In addition to the Java code, there's code for interpreting PostScript files, as these can be used to alter the behaviour of the raster image processor during testing.
As if that weren't enough, there's a BeanShell interface to allow users to script the test system, but none of the users know Java well enough to feel confident writing interpreted Java scripts (and that's too close to JavaScript for my comfort). I once tried swapping this out for the Rhino JavaScript interpreter and got all the verbal support in the world, but no developer time to design an API that'd work for all the departments.
The server isn't much better though. It's a Tomcat-based application written by someone who had never built a Tomcat application before - or any web application, for that matter. It uses raw SQL strings instead of an ORM, doesn't use MVC in any way, and an insane amount of functionality is dumped into the JSP files.
It too interacts with a raster image processor to create difference masks of the output, running PostScript as needed. It spawns off multiple threads and can spend days processing hundreds of gigabytes of image output (depending on the size of the tests).
We're stuck on Tomcat 7 because we can't upgrade beyond Java 6, which brings a whole manner of security issues, but that eager little Java updater will break the toolchain if it gets its way.
Between these two components we have the Java RMI server (sometimes) working to help generate image data on the client side, before all images are pulled across a UNC network path onto the server that processes test jobs (in PDF format), by reading into the xref table of said PDF, finding the embedded image data (for our server, consumed test files are just flate-encoded TIFF files wrapped in just enough PDF to make them valid), and using a tool to create a difference mask of two images.
This tool is very error-prone. It can't difference images of different sizes, colour spaces, orientations or pixel depths, but it's the best we have.
The tool is installed on both the client and the server; if the client can generate images, it'll query the server for which ones it needs to, and if it can't, the server will use the tool itself.
Our shells have custom profiles for linking to all manner of third-party tools and libraries, including a link to Visual Studio 2005 (more indirectly related build dependencies). The whole profile has to ensure that absolutely no operating system pollution gets into the shell; most of our apps are installed in our home directories, and we have to ensure our paths are correct for every single application we add.
And... Fucking and!
Most of the tools are stored as source bundles in a version control system... not Git or Mercurial, not Perforce or SVN, not even CVS... They use a custom-built version control system on top of RCS. It keeps a central database of locked files (using soft and hard locks, along with write-protecting the files in the file system) to ensure users can't get merge conflicts - by preventing other users from writing to the files at all.
Branching is heavyweight and can take the best part of a day to create a new branch and populate the history.
Gathering the tools alone to build the dev environment to build my project takes the best part of a week.
What should be a joy come hardware-refresh year becomes a curse ("Well fuck, now I lose a week setting up the dev environment on ANOTHER machine").
Needless to say, I enjoy NOT working with Java. A lot of this isn't Java's fault, but there's a lot that Java (specifically the Java 6 version we're stuck on) does not make easy.
This is why I prefer to build my web apps in Python or Node - hell, I'd even take Lua... Just... compiling web pages into executable Java classes, why? I mean, I understand the implementation of how this happens, but why did my predecessor have to choose this? Why? -
Ticket: there's something wrong with the export of transactions, please check.
Very useful description; let me just go over this logic I wrote months ago.
Yeah, I made extra sure that everything's right, besides the entries created during the initial testing that we left in. Took me a hell of a long time to prove, given such a vague description, but ok.
Of course I have the time to make an eye-candy of an Excel spreadsheet for you.
Only for you will I also go and fix these entries manually. If you want me to do it so badly, I'll gladly do it.
Oh what, you're upset that I wasted 5h on this complete bullshit? Well fucking go and learn the database structure yourself then, or get sued, idk.
Hope it was worth the 1€ difference the customer paid himself.
Not to mention that I also had to do an emergency setup to work from home, because the people responsible for giving me an appointment for a covid test sure like to wait until days after my sick leave is over. ffs, I just had a cold...
Also, fuck all this bullshit Mac software required to work in this network; half of it flat out requires you to use the same software, and ofc it's all closed source, to the point where I'd be glad to have an Electron app for everything. -
I am so close to crying it is just not funny. Every time I close my eyes I picture Superman's scream after snapping Zod's neck in Man of Steel, i.e. filled with pain, anguish and not being able to accept what you have become... I am not a dev, but I have been glued to a computer screen since 7 years old.
I work for a company as the I.T. Administrator; it does quite a bit of specialized work in the regulatory industry and has its own in-house software. This was built by one developer after another, hired straight out of university/college, and you cannot believe how big of a monster it became, built with direction from someone who can't code and a bunch of "drunk children" who do not know good principles (swear to God, thousands of lines with no comments and no OOP).
Now I am validating and testing a system. I keep being asked if we will be ready by the end of the week, and due to my lack of qualifications after dropping out of school I keep thinking yes, but every time I test something I find another problem. I may not be able to code, but understanding quickly is my strength, and I know this shit is not simple.
I am under constant pressure to deliver something quickly.
Any concerns I raise are almost brushed off, because I am an idiot with no qualifications who should be grateful for the work I am doing and the low-as-balls salary.
The problems I solve are commended by the senior developer with 10+ years of experience who is writing the application for us, yet I get shit for taking an hour to find the problem that existed in our network setup, because that is the dev's job (OMFG HE WOULD NEVER HAVE REALIZED WITHOUT COMING HERE AND LOOKING AT OUR INFRASTRUCTURE... WE WOULD HAVE BEEN STUCK FOR A FUCKING MONTH!!!!)
I see only 2 courses ahead for my life. The easy way and the hard way.
Easy way: buy a gun and end it all.
Hard way: suffer for 3 more years in the place that is causing constant breathing difficulty and the occasional pain in my left arm, finish my matric, continue learning to code, and leave.
But right now I just want to cry and scream like Superman!!! -
I work solo on the Network Services Team at American Eagle as a developer. I've been working on an application for diagnosing wireless from a device's perspective.
I'm extremely happy that my app will be rolled out to the first store for real testing, so I can get some feedback :)
Why the fuck did Apple just start testing my app using an IPv6-only network? I mean, that is a good idea in general, but did they just start doing this recently, or did a major appledick decide that now, after 15 good releases, they should reject the app and force me to rethink my whole backend structure... Screw you, why didn't you tell me when I started?
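(For anyone hitting the same wall: Apple reviews on an IPv6-only NAT64/DNS64 network, and the classic culprit is something in the app or backend assuming IPv4 - hard-coded IPv4 literals in lower-level networking code especially. A hedged Swift sketch; the host and addresses here are made up:)

import Foundation

// Risky on an IPv6-only NAT64 network: a raw IPv4 literal baked into
// the app can bypass the DNS64 synthesis that makes IPv4-only backends
// reachable, especially below the URLSession layer.
let brokenURL = URL(string: "http://203.0.113.10/v1/status")!

// Safer: connecting by hostname lets DNS64 hand back a synthesized
// IPv6 address, so the same IPv4-only backend keeps working.
let workingURL = URL(string: "https://api.example.com/v1/status")!

URLSession.shared.dataTask(with: workingURL) { data, response, error in
    // handle the response as usual
}.resume()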
-
Testing a new server deployment: in the test env it all works, then in production it all breaks down. The network didn't allow the right traffic. Took me a whole week to find that out, until some networking engineer said: you know there is a firewall between those networks?
-
So, as a personal project for work, I decided to start data-logging facility variables; it's something we might need to pick up at some point in the future, so I decided to take the initiative since I'm the new guy.
I set up some basic current-loop sensors for things like gas line pressures for bulk nitrogen and compressed air, but decided to go with a more advanced system for logging the temperature and humidity in the labs. These sensors come with 'software' - it's a website you host internally. Cool, so I just need to build a simple web server to run these PoE sensors. No big deal, right? It's just an IIS service. Months after ordering Server 2019 through SSC, I get 4 activation codes: 2 MAK and 2 KMS. I won the lottery; now I just have to download the Server 2019 retail ISO and... it won't take the keys. Back to purchasing: "oh, I can download that for you, what key is yours?" Um... I dunno, you sent me 4. Can I just get the link? "Well, you have to have a login." Ok, what building are you in, I'll drive over with a USB key (hoping they're on the same campus). "The download keeps stopping, I'll contact the IT service in your building." A week later I get an install ISO, and still no one knows which key is mine. The local IT service suggests it's probably a MAK key, since I originally got a quote for a retail copy and we don't run a KMS server on the network I'm using for testing. Well, doesn't Windows reject all 4 keys and then proceed to register with a non-existent KMS server on the network I'm using for testing. Great, so now this server - which is supposed to be connected to a private network for the sensors and use the second NIC for an internet connection - has to be connected to the old network I'm using for testing, because that's where the KMS server seems to be. Ok, no big deal, the old network has internet - except the powers that be want to migrate everything to the new, more secure network, but I still need to be connected to the KMS server because they sent me the wrong key. So I'm up to three network cards, some of my basic sensors are running on yet another network, and I want to migrate the management software to this hardware to have all my data logging in one system. I had to label the Ethernet ports so I could hand over the hardware for certification and security scans.
So at this point I have my system running, with a couple of sensors set up with static IPs because I haven't had time to set up the DNS for the private network the sensors run on. Local IT goes to install McAfee and can't, because it isn't compatible with anything 1809 or later. I get a message back that "we only support up to 1709"; I point out that it's Server 2019. "Oh yeah, let me ask about that." A bunch of back and forth ensues, and finally local IT gets a version of McAfee that will install, runs the security scan again, and I get a message back: "There are two high-risk issues on your server." My blood pressure is getting high as well. The risks they're looking at: the McAfee version is out of date, and Windows Defender is disabled (because of McAfee).
There's a low-risk issue as well, something relating to the DNS service I didn't fully set up. I tell local IT to just disable it for now, then think, what the heck, I'll remote in and do it myself. Nope, can't remote into my server. Oh, they renamed it - well, that's not going to stay that way, but whatever. Oh, here's the IP they assigned it - nope, can't remote in, no privileges. Ok, so I run up three flights of stairs to local IT before they leave for the day and log into my server: yup, RDP is enabled. Odd, but whatever, let's delete the DNS role for now - nope, you don't have admin privileges. Now I'm really getting displeased: I can't have admin privileges on the network you want me to use, to support the service on a system you can't support, and I'm supposed to believe you can migrate the life-safety systems you want us to move? I'm using my system to prove that the 2FA system works; at this rate I'm going to have 2FA access to a completely worthless, broken system in a few years. Good thing I rebuilt the whole server in a VM I'm planning to deploy before I get the official one back. I'm skipping a lot of the ridiculous back-and-forth conversations, because the more I think about it the more irritated I get.
This is a repost of an original rant posted on a request for "Community Feedback" from Atlassian. You know, Atlassian? Those beloved people behind such products as:
• Thing I Love™
• Other Thing You Used One Time™
• Platform Often Mentioned in Suicide Notes, Probably™*
Now this rant was written in early 2022 while I was working in an Azure Cloud Engineer role that transformed into me being the company's main Sysadmin/Project Manager/Hiring Manager/Network Admin/Graphic Designer.
While trying to simultaneously put out over 9000 fires with one hand, and jangling keys in the face of the Owner/Arsonist with the other, I was also desperately implementing Jira Service Desk. Normally this wouldn't have been as much of a priority as it was, but the software our support team was using had gone past 15 years old, then past extended support; then the lone developer died, then it didn't work on Windows 10, then it only functioned thanks to a keygen created by a dev cohort long past... which was now broken. So we needed a solution *now*.
The previous solution was shit of a different tier. The sight of it would make a walking, talking, anthropomorphised, sentient puddle of dogshit (who both eats and produces further dookie derivatives) blush with embarrassment. The CD-ROM/cereal box this software came in probably listed features like "Stores Your Customer's First AND (or) Last Name!" or "Windows ME Downgrade Disk Included!" and "NEW: Less(-ish) Genocide(s)!"
Despite this, our brain/fearless leader decided this would be a great time to have me test, implement, deploy, and train everyone up on a new solution that would suck your toes and sound your shaft - and to remind me that I was a lazy sack, since apparently he hadn't done that enough lately.
One day, during preliminary user testing, I received an email letting me know that the support team was having issues with a customer's profile on our new support desk. Thanks to our Owner/Firestarter/Real-World Michael Scott being deep in his latest project (fixing our "all 5 devs quit in the last 12 months and I can't seem to hire any new ones" issue (by buying a ping pong table)), I had a bit of fortuitous time on my hands to investigate this issue. I had spent many hours of overtime working on this project, writing custom integrations and automations, so what I found out was crushing.
Below is the (digitally) physical manifestation of my rage after realising I would have to create / find / deal with a whole new method for support to manage customer contacts.
I'm linking to the original forum thread because you kind of need the pictures embedded in said reply to really inhale the "Jira-Rant" ambiance. The part where I use several consecutive words as anchor links to tickets with other people screaming into the void gets a bit sweet n' savoury too - having those hyperlinks does improve the je ne say what of it all.
bit.ly/JIRANT (Case Sensitive)
--------------------------
There is some good news at the end of this brown n' squirty rainbow though!
Nice try silly little Jira button, you can't ruin *my* 2022!
• I was able to forget all about Jira a month later when I received a surprise vacation home! (To be there while my Mom passed away).
• Eventually work stress did catch up to me - but my boss thoughtfully gave me a nice long vacation! (By assaulting me *while* firing me (for emailing in a vacation request while he was having a bad (see: normal) day))
What is the best way to install an iOS app for employees in a company?
Disclaimer: I'm no iOS developer.
In 2019 my boss asked if I could make a small iOS app: 3 forms with CRUD operations. I did it because I wanted to learn iOS development. The company got an Apple Developer Account and I finished the project within the week. The app can only be used from the work wifi, so if you try to use it outside that network it will not work.
The issue is licensing. When I install the app from my Xcode onto one of the employees' iPhones, the app will work for 1 year (as long as I have paid for the Developer Account). Also, every time I make an update, the employee has to come to me with the iPhone so I can install it directly.
Yesterday I submitted the app for testing using TestFlight. The app updates automatically, but today I got a notification that the "Submission for testing is rejected", because the tester could not connect to the server.
My question is: how do you install iOS apps for your work employees? I do not want to get an Enterprise account, because that costs.
What is the best job title if you work in a company and do everything related to software development - planning, documentation, architecture, network, full-stack coding, testing, bug fixes, etc.? Full-stack engineer? Or... ?
-
Just built out my first app using Cloudflare Workers, TypeScript, and Durable Objects. Holy shit, this is nice stuff.
It's taken little to no time to build out:
* JSON API written in Typescript
* JWT verification against my OAuth backend (SAML support too)
* CI Automated Deployments including unit tests
* DurableObject support
* 3rd party HTTP calls + caching (built into the framework!) to reduce network latency and hiccups
* Cron-like tasks on each stored object so they can awaken the app on a schedule and update themselves as necessary
* Rapid deployment to new environments
The local testing with coordinated "miniflare" is dreamy too.