If you have an e-commerce site and you don't let me sort the products based on price, what the fuck are you doing?
-
Developer: We have a problem.
Manager: Remember, there are no such things as problems, only opportunities.
Developer: Well then, we have a DDoS opportunity.
-
So a few days ago I felt pretty h*ckin professional.
I'm an intern and my job was to get the last 2003 server off the racks (It's a government job, so it's a wonder we only have one 2003 server left). The problem being that the service running on that server cannot just be placed on a new OS. It's some custom engineering document server that was built in 2003 on a 1995 tech stack and it had been abandoned for so long that it was apparently lost to time with no hope of recovery.
"Please redesign the system. Use a modern tech stack. Have at it, she's your project, do as you wish."
Music to my ears.
First challenge: getting the data off the old server. The data lives in a 1995 .mdb file, so the most recent version of Access that can still open it is 2010 - tracking down a copy of that was option one.
Option two: There's an "export" button that literally just vomits all 16,644 records into a tab-delimited text file. Since this option didn't require scavenging up an old version of Access, I wrote a Python script to just read the export file.
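For the curious, a first pass at reading that export can be as simple as the sketch below. The file name, encoding, and column count are placeholders, not the real ones:

```python
# Naive first pass: one record per line, fields separated by tabs.
# "export.txt", latin-1, and the column count are all assumptions.
EXPORT_PATH = "export.txt"
FIELD_COUNT = 18

records = []
with open(EXPORT_PATH, encoding="latin-1") as f:
    for line in f:
        fields = line.rstrip("\n").split("\t")
        records.append(fields)

print(f"read {len(records)} lines as records")
```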
And something like 30% of the records were invalid. Why? Well, one of the fields allowed newline characters. That was a problem because the export also separates records with newlines, so any record with a newline inside a field got split across lines and came out invalid.
That did not stop me, though. Not even close. I figured it out and fixed it in about 10 minutes, and after that every record read into the program without issue.
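One way to handle those split records, sketched with the same placeholder names (an illustration of the idea, not the exact fix): a record is only complete once it has the expected number of tab-separated fields, so keep gluing physical lines onto the current record until it does.

```python
# A record that got split by an embedded newline has too few fields on its
# first physical line, so keep appending lines until the field count is met.
# File name and column count are placeholders, as before.
EXPORT_PATH = "export.txt"
FIELD_COUNT = 18

records = []
buffer = ""
with open(EXPORT_PATH, encoding="latin-1") as f:
    for line in f:
        line = line.rstrip("\n")
        # join with "\n" so the newline stays inside the broken field
        buffer = line if not buffer else buffer + "\n" + line
        fields = buffer.split("\t")
        if len(fields) >= FIELD_COUNT:  # record is complete
            records.append(fields[:FIELD_COUNT])
            buffer = ""

print(f"recovered {len(records)} records")
```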
Next up: designing the database. My stack is MySQL and NodeJS, which my supervisors approved of. A lot of the data looked like it would fit into an integer column, but one or two odd records would have something like "1050b", so just a few stray values kept me from having as slick a database design as I wanted. I ended up with about 18 columns per record, mostly varchar(64).
Next challenge was getting the exported data into the database. At first I thought of doing it record by record from my Python script: connect to the MySQL server and just iterate over all the data I had. What I actually ended up doing was generating a .sql file and running that on the server. It took a few tries thanks to a lot of inconsistencies in the data, but eventually I got all 16k records into the new database, and I had never been so happy.
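For illustration, a generator along these lines does the job; the table name, column names, and escaping below are made up (the real schema was about 18 mostly-varchar(64) columns):

```python
# Sketch of writing a .sql load file from the parsed records.
# Table and column names are hypothetical; the real schema was wider.
COLUMNS = ["doc_number", "title", "author", "revision"]

def sql_quote(value: str) -> str:
    """Wrap a value as a MySQL string literal with minimal escaping."""
    return "'" + value.replace("\\", "\\\\").replace("'", "''") + "'"

def write_sql(records, path="load_documents.sql"):
    with open(path, "w", encoding="utf-8") as out:
        out.write("CREATE TABLE IF NOT EXISTS documents (\n")
        out.write(",\n".join(f"  {col} VARCHAR(64)" for col in COLUMNS))
        out.write("\n);\n")
        cols = ", ".join(COLUMNS)
        for rec in records:
            values = ", ".join(sql_quote(v) for v in rec[:len(COLUMNS)])
            out.write(f"INSERT INTO documents ({cols}) VALUES ({values});\n")
```

Feed the resulting file to the mysql client on the server (something like `mysql -u someuser -p somedb < load_documents.sql`) and the whole import runs in one shot.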
The next two hours were very productive: I designed a clean front end and had just enough time to put together a rough prototype that works entirely off AJAX requests. I want to keep it that way so that other services can hit this data too, since an engineering data API may turn out to be useful.
Anyways, that was my win story of the week. I was handed a challenge: an old, decaying server full of important data. Despite the hitches one might expect from archaic data, I was able to rescue every byte. I will probably be presenting my prototype to the higher-ups in Engineering sometime this week.
Happy Algo!