Search - "streaming application"
-
Coolest project: I once worked for a customer who hosted an exhibition for a few thousand visitors in a big event arena in Stockholm.
They didn't want to use the arena's existing ticket reading system, so I had to build my own application compatible with barcode scanners (they told me this about one week before the event).
It wasn't a complicated application to dev but with the tight deadline and no time to actually stress test it, it was the coolest thing to see hundreds of people streaming through the ticket station flawlessly.
Day 2 of the event I built a simple web application so I could see the flow rate of read tickets while I sat in the arena pub with a beer.
-
The solution for this one isn't nearly as amusing as the journey.
I was working for one of the largest retailers in NA as an architect. Said retailer had over a thousand big box stores, IT maintenance budget of $200M/year. The kind of place that just reeks of waste and mismanagement at every level.
They had installed a system to distribute training and instructional videos to every store, as well as recorded daily broadcasts to all store employees as a way of reducing management time spent with employees in the morning. This system had cost a cool 400M USD, not including labor and upgrades, for round 1. Round 2 was another 100M to add a storage buffer to each store because they'd failed to account for the fact that the internet connections at the stores and the outbound pipe from the DC weren't capable of running the public-facing e-commerce and streaming all the video data to every store in real time. Typical massive enterprise clusterfuck.
Then security gets involved. Each device at stores had a different address on a private megawan. The stores didn't generally phone home, home phoned them as an access control measure; stores calling the DC was verboten. This presented an obvious problem for the video system because it needed to pull updates.
The brilliant Infosys resources had a bright idea to solve this problem:
- Treat each device IP as an access key for that device (avg 15 per store).
- Verify the request IP, then issue a redirect to ANOTHER IP unique to that device, which the firewall would allow to ingress only into the video subnet
- Do it all with the F5
A few months later, the networking team comes back and announces that after months of work and tens of person-years they can't implement the solution because iRules have a size limit and they would need more than 60,000 lines or 15,000 rules to implement it. Sad trombones all around.
Then, a wild DBA appears, steps up to the plate and says he can solve the problem with the power of ORACLE! A few months later he comes back with some absolutely batshit solution that stored the individual octets of an IPv4 address and used multiple nested queries against the same table to emulate subnet masking through some temp-table-spanning voodoo. Time to complete: 2-4 minutes per request. He too eventually gives up the fight, sort of, in that backhanded way DBAs tend to do everything. I wish I had paid more attention to that abortion because the rationale and its mechanics were just staggeringly Rube Goldberg and should have been documented for posterity.
So I catch wind of this sitting in a CAB meeting. I hear them talking about how there's "no way to solve this problem, it's too complex, we're going to need a lot more databases to handle this." I tune in and gather that all it really needs to do, since the ingress firewall is handling the origin IP checks, is convert the request IP to the video ingress IP, issue a 302 and call it a day.
While they're all grandstanding and pontificating, I fire up Visual Studio and do the following (sketched out below):
- write a method that encodes the incoming request IP into a single uint32
- write an http module that keeps an in-memory dictionary of uint32 -> string mapping request IP to response IP, converts the request IP and 302s the call, with blackhole support
- convert all the mappings in the spreadsheet attached to the meetings into a csv, dump to disk
- write a wpf application to allow for easily managing the IP database in the short term
- deploy the solution to one of our stage boxes
- add a TODO to eventually move this to a database
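A minimal sketch of that encode-and-redirect idea, in TypeScript/Node rather than the original C# module, with invented IPs and mappings (the real list came from the spreadsheet dumped to CSV):

```typescript
import * as http from "http";

// Pack a dotted-quad IPv4 address into a single unsigned 32-bit integer.
function ipToUint32(ip: string): number {
  return ip.split(".")
    .reduce((acc, octet) => (((acc << 8) | (parseInt(octet, 10) & 0xff)) >>> 0), 0);
}

// Request IP (as uint32) -> unique video-ingress IP for that device.
// These entries are made up for the example.
const ingressMap = new Map<number, string>([
  [ipToUint32("10.20.30.11"), "172.16.40.11"],
  [ipToUint32("10.20.30.12"), "172.16.40.12"],
]);

http.createServer((req, res) => {
  const source = (req.socket.remoteAddress ?? "").replace("::ffff:", "");
  const target = ingressMap.get(ipToUint32(source));
  if (target) {
    // Known device: 302 it to its ingress IP on the video subnet.
    res.writeHead(302, { Location: `http://${target}${req.url ?? "/"}` });
  } else {
    // Blackhole support: anything unrecognised gets nothing useful.
    res.writeHead(404);
  }
  res.end();
}).listen(8080);
```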
All this took about 5 minutes. I interrupt their conversation to ask them to retarget their test to the port I exposed on the stage box. Then watch them stare in stunned silence as the crow grows cold.
According to a friend who still works there, that code is still running in production on a single node to this day. And still running on the same static file database.
#TheValueOfEngineers
-
I'm thinking about doing a live coding stream on twitch this saturday, late afternoon or evening (CET).
I've never done a live stream before.
Do you have any suggestions or interests?
I'm thinking about something like a small RESTful API with Angular4/TypeScript (frontend, single page application) and CraftCMS/PHP (backend), with some basic theory about HTTP requests / responses, redirecting, data transfer and interfaces et cetera...
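To make the "data transfer and interfaces" part concrete, this is the kind of snippet the stream might walk through — a typed interface plus a plain GET against a placeholder endpoint (not an actual CraftCMS route):

```typescript
// Shape of the JSON a hypothetical backend endpoint might return.
interface Post {
  id: number;
  title: string;
  publishedAt: string;
}

// Plain HTTP GET; the URL is a placeholder, not a real API.
async function fetchPosts(): Promise<Post[]> {
  const response = await fetch("https://example.test/api/posts.json");
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return (await response.json()) as Post[];
}

fetchPosts()
  .then(posts => console.log(posts.map(p => p.title)))
  .catch(err => console.error(err));
```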
The duration will be around 2-4 hours, maybe longer if I have enough Mate & Beer.
But it's all just an idea at the moment. 😉
I will create an empty project for the stream on my Github and push to it during streaming, so you can pull it live or later.
-
I've optimised so many things in my time I can't remember most of them.
Most recently, something had to be the equivalent of `"literal" LIKE column` with a million rows to compare. It would take around a second on average to look up each literal, for a service that needs to be high load and low latency. This isn't an easy case to optimise; many people would consider it impossible.
It took me a couple of hours to reverse engineer the data and write a few-hundred-line implementation that would look it up in 1ms on average, with the worst possible case being very rare and not too distant from this.
In another case there was a lookup of arbitrary time spans that most people would not bother to cache because the input parameters are too short-lived and variable to make a difference. I replaced the 50000+ line application acting as a middle man between the application and database with 500 lines of code that did the lookup faster and was able to implement a reasonable caching strategy. This dropped resource consumption by a factor of ten at least. Misses were cheaper and it was able to cache most cases. It also involved modifying the client library in C to stop it unnecessarily wrapping primitives in objects for the high-level language, which was causing it to consume excessive amounts of memory when processing huge data streams.
Another system would download a huge data set for every point of sale constantly, then parse and apply it. It had to reflect changes quickly but would download the whole dataset each time, containing hundreds of thousands of rows. I whipped up a system so that a single server (barring redundancy) would download it in a loop, parse it using C, which was much faster than the traditional interpreted language, then use a custom data differential format, a TCP data streaming protocol, binary serialisation and LZMA compression to pipe it down to the points of sale. This protocol also used versioning for catch-up and differential combination for additional reduction in size. It went from being 30 seconds to a few minutes behind to being able to keep to within a second of changes. It was also using so much bandwidth that it would reach the limit on ADSL connections then get throttled. I looked at the traffic stats afterwards and it dropped from dozens of terabytes a month to around a gigabyte or so a month for several hundred machines. From the drop in the graphs you'd think all the machines had been turned off, as that's what it looked like. It could now happily run over GPRS or 56K.
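None of that code is in the rant, but the heart of such a pipeline — diff the freshly downloaded dataset against the last published version, then serialise and compress only the changes before streaming them to stores — might look roughly like this (a TypeScript sketch rather than the original C, gzip standing in for LZMA, all names invented):

```typescript
import { createHash } from "crypto";
import { gzipSync } from "zlib";

// A row keyed by ID; the full dataset is a map of these.
type Row = Record<string, string>;
type Dataset = Map<string, Row>;

// Versioned differential: only what changed since the previous version.
interface Diff {
  version: number;
  upserts: Array<{ id: string; row: Row }>;
  deletes: string[];
}

function fingerprint(row: Row): string {
  return createHash("sha1").update(JSON.stringify(row)).digest("hex");
}

// Compute the differential between the previously published dataset and the
// freshly downloaded one, so points of sale never re-download unchanged rows.
function diff(previous: Dataset, current: Dataset, version: number): Diff {
  const upserts: Array<{ id: string; row: Row }> = [];
  const deletes: string[] = [];
  for (const [id, row] of current) {
    const old = previous.get(id);
    if (!old || fingerprint(old) !== fingerprint(row)) upserts.push({ id, row });
  }
  for (const id of previous.keys()) {
    if (!current.has(id)) deletes.push(id);
  }
  return { version, upserts, deletes };
}

// Serialise and compress the diff before streaming it out over TCP.
// (JSON + gzip purely for illustration; the real thing was a custom
// binary format with LZMA.)
function pack(d: Diff): Buffer {
  return gzipSync(Buffer.from(JSON.stringify(d)));
}
```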
I was working on a project with a lot of data and noticed these huge tables and horrible queries. The tables were all the results of queries. Someone wrote terrible SQL, then to optimise it ran it in the background with all possible variable values and stored the results of the joins and aggregates into new tables. On top of those tables they wrote more SQL. I wrote some new queries and query generation that wiped out thousands of lines of code immediately and operated on the original tables, taking things down from 30GB (and rapidly climbing) to a couple of GB.
Another time a piece of mathematics had to generate all possible permutations and the existing solution was factorial. I worked out how to optimise it to run in n*n, which, believe it or not, made a world of difference. It went from hardly handling anything to handling anything thrown at it. It was nice trying to get people to "freeze the system now".
I built my own frontend systems (admittedly rushed) that do what angular/react/vue aim for but with higher (maximum) performance, including an in-memory database to back the UI that had layered event-driven indexes and could handle referential integrity (an overlay on the database revealing only items with valid integrity) or reordering and repositioning events very rapidly using a custom AVL tree. You could layer indexes over it (data inheritance) that could be partial and dynamic.
So many times I've optimised things on autopilot, just cleaning up code as I go. Hundreds, thousands of optimisations. It's what makes my clock tick.
-
OBS is advertised as the expert's screen recording and streaming tool; every list on the internet makes it out to be some incredibly difficult program not recommended for newbies.
It's also the only Linux screen recorder that works out of the box on PipeWire and records both microphone and system sounds, and all the configuration it took was to:
1. select recording as my main use case in the setup wizard which is a very verbose English popup, then accept all defaults
2. add a new source, following the instructions written in the box which are also the only instructions on screen after application launch
3. set the output directory (optional) by going to File > Settings > Output > Recording Path, all of which were the first items I guessed. If I had not done this, it would've written everything to my home folder which is a bit dumb but not confusing at all
4. click Start Recording
5. click Stop Recording when done
Some newbie-oriented screen recorders have a more complicated setup procedure than this super advanced experts-only tool that you allegedly shouldn't touch without safety gloves and a degree in video engineering.
-
Android is gonna be the death of me. Any fucking idea you have is impossible to implement, because libraries with clear documentation are deprecated. If a library is not deprecated, however, it has documentation written by a fucking caveman who thinks it's extremely self-explanatory how to use something that is extremely application-specific. Spent hours looking at Google example code that crashes almost immediately after execution. What a joke.
-
Worst code I ever had to touch: a React application, createClass era, before redux was a thing, that had everything in one fucking component.
Every fucking thing.
This was a simple video chat application, but still. The component's code included:
- Views (contact list and video call screen) and logic to switch between them;
- All application state;
- API calls;
- Websocket message handling;
- WebRTC logic (getUserMedia and p2p streaming).
This app was built by one person in one month for a demo. That person left the company after the demo and I had to maintain that mess with zero React knowledge (I was doing Angular at that time). On his last day he gave me a crash course and an overview of how the app worked.
Around that time I attended a few meetups and a conference with talks about React. That, my curiosity and ability to learn by refactoring helped me a lot when I had to add new features and fix bugs in that app.
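For context, the WebRTC logic that list mentions boils down to something like this (a minimal browser-side sketch, not the original code; the signalling callback stands in for the app's websocket handling):

```typescript
// Grab camera/mic, create a peer connection, and push the SDP offer and ICE
// candidates out through whatever signalling channel the app uses.
async function startCall(signal: (msg: object) => void): Promise<RTCPeerConnection> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.l.google.com:19302" }] });

  // Send our local tracks to the remote peer.
  stream.getTracks().forEach(track => pc.addTrack(track, stream));

  // Hand ICE candidates to the signalling layer as they are gathered.
  pc.onicecandidate = event => {
    if (event.candidate) signal({ type: "candidate", candidate: event.candidate });
  };

  // Create and send the SDP offer; the answer comes back via signalling
  // and gets applied with pc.setRemoteDescription().
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signal({ type: "offer", sdp: offer.sdp });

  return pc;
}
```
-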
I haven’t physically gone into an office in over 4 years; I’ve always worked from home. Today I’m starting a new position because of how unreliable government work is, and also it felt nice to go into an interview and have them be like “I know you! You go by fyroc and you’re the one who made the open source video streaming application!”
Now that it is 6:30 in the morning, I’m thinking I’ve made a mistake. I really wish I could be sleeping in like I’ve done every day for the last 4 years.
-
hey guys...
I need to create an application with video calls and streaming...
I was advised to use Node or Golang for the backend.
Does anyone have any advice for me about a library or framework? Another language, a database? idk
Thanks to all of you
-
Hmm... Okay, crazy deadlines. We hacked together a really makeshift application to handle streaming content to end users. The proof of concept was demonstrated to a partner company on a Wednesday. They said they wanted it on Saturday. Our CTO agreed. We didn't sleep.
-
- Finish "Introduction to algorithms"
- Learn some genetic algorithms
- Get my hands dirty on reinforcement learning
- Learn more about data streaming applications (my current app is still using plain stupid REST to transport images). I don't know, maybe Kafka or RabbitMQ.
- Learn to implement some distributed system prototypes to get fitter at this topic. There must be more than REST for communicating between components.
- Implement a search module for my app with Elasticsearch.
- Employ Redis at some point for background tasks.
- Get my hands dirty with some operating system concepts (interprocess communication, I am looking at you)
- Take a look at Assembly (I don't want to do much with Assembly, maybe just implement one or two programs to know how things work)
- Learn a bit of parallel computing with CUDA to know what the hell TensorFlow is doing with my graphics card.
- Maybe finish my first research paper
- Pass my electrical engineering exam (I suck at EE)
-
I've successfully launched a media streaming application on the Google Play Store called Rad TV... but I need support to run it.
-
I've been learning Android app development using Kotlin/Java for about 4 months, and I think I'm pretty good with Kotlin/Java. I've learned a lot of things related to Android development; I've cloned Netflix and Spotify and made streaming apps with Firebase as the backend, and I think I understand using Firebase quite well because Firebase itself is not difficult to use. With my current skills, do I deserve to work as a freelancer, or do I still have to improve? If yes, give me an example of what kind of application I should build to improve my skills! I've read what there is to know in the Android Studio docs and I've studied everything, even though I sometimes forget how to make this or that, but I understand the logic quite well. OK, please help