Search - "memory storage"
-
In a user-interface design meeting over a regulatory compliance implementation:
User: “We’ll need to input a city.”
Dev: “Should we validate that city against the state, zip code, and country?”
User: “You are going to make me enter all that data? Ugh…then make it a drop-down. I select the city and the state, zip code auto-fill. I don’t want to make a mistake typing any of that data in.”
Me: “I don’t think a drop-down of every city in the US is feasible.”
Manager: “Why? There cannot be that many. Drop-down is fine. What about the button? We have a few icons to choose from…”
Me: “Uh..yea…there are thousands of cities in the US. Way too much data for anyone to realistically scroll through”
Dev: “They won’t have to scroll, I’ll filter the list when they start typing.”
Me: “That’s not really the issue and if they are typing the city anyway, just let them type it in.”
User: “What if I mistype Ch1cago? We could inadvertently be out of compliance. The system should never open the company up for federal lawsuits”
Me: “If we’re hiring individuals responsible for legal compliance who can’t spell Chicago, we should be sued by the federal government. We should validate the data the best we can, but it is ultimately your department’s responsibility for data accuracy.”
Manager: “Now now…it’s all our responsibility. What is wrong with a few thousand item drop-down?”
Me: “Um, memory, network bandwidth, database storage, who maintains this list of cities? A lot of time and resources could be saved by simply paying attention.”
Manager: “Memory? Well, memory is cheap. If the workstation needs more memory, we’ll add more”
Dev: “Creating a drop-down is easy and selecting thousands of rows from the database should be fast enough. If the selection is slow, I’ll put it in a thread.”
DBA: “Table won’t be that big and won’t take up much disk space. We’ll need to set up stored procedures, and data import jobs from somewhere to maintain the data. New cities, name changes, etc.”
Manager: “And if the network starts becoming too slow, we’ll have the Networking dept. open up the valves.”
Me: “Am I the only one seeing all the moving parts we’re introducing just to keep someone from misspelling ‘Chicago’? I’ll admit I’m wrong or maybe I’m not looking at the problem correctly. The point of redesigning the compliance system is to make it simpler, not more complex.”
Manager: “I’m missing the point to why we’re still talking about this. Decision has been made. Drop-down of all cities in the US. Moving on to the button’s icon ..”
Me: “Where is the list of cities going to come from?”
<few seconds of silence>
Dev: “Post office I guess.”
Me: “You guess?…OK…Who is going to manage this list of cities? The manager responsible for regulations?”
User: “Thousands of cities? Oh no …no one in our area has time for that. The system should do it”
Me: “OK, the system. That falls on the DBA. Are you going to be responsible for keeping the data accurate? What is going to audit the cities to make sure they are properly named and associated with the correct state?”
DBA: “Uh..I don’t know…um…I can set up a job to run every night”
Me: “A job to do what? Validate the data against what?”
Manager: “Do you have a point? No one said it would be easy and all of those details can be answered later.”
Me: “Almost done, and this should be easy. How many cities do we currently have to maintain compliance?”
User: “Maybe 4 or 5. Not many. Regulations are mostly on a state level.”
Me: “When was the last time we created a new city compliance?”
User: “Maybe, 8 years ago. It was before I started.”
Me: “So we’re creating all this complexity for data that, realistically, probably won’t ever change?”
User: “Oh crap, you’re right. What the hell was I thinking…Scratch the drop-down idea. I doubt we’ll have a new city regulation anytime soon and how hard is it to type in a city?”
Manager: “OK, are we done wasting everyone’s time on this? No drop-down of cities...next …Let’s get back to the button’s icon …”
Simplicity 1, complexity 0. -
I'm getting ridiculously pissed off at Intel's Management Engine (etc.), yet again. I'm learning new terrifying things it does, and about more exploits. Anything this nefarious and overreaching and untouchable is evil by its very nature.
(tl;dr at the bottom.)
I also learned that -- as I suspected -- AMD has their own version of the bloody thing. Apparently theirs is a bit less scary than Intel's since you can ostensibly disable it, but i don't believe that because spy agencies exist and people are power-hungry and corrupt as hell when they get it.
For those who don't know what the IME is, it's hardware godmode. It's a black box running obfuscated code on a coprocessor that's built into Intel cpus (all Intel cpus from 2008 on). It runs code continuously, even when the system is in S3 mode or powered off. As long as the psu is supplying current, it's running. It has its own mac and IP address, transmits out-of-band (so the OS can't see its traffic), some chips can even communicate via 3g, and it can accept remote commands, too. It has complete and unfettered access to everything, completely invisible to the OS. It can turn your computer on or off, use all hardware, access and change all data in ram and storage, etc. And all of this is completely transparent: when the IME interrupts, the cpu stores its state, pauses, runs the SMM (system management mode) code, restores the state, and resumes normal operation. Its memory always returns 0xff when read by the os, and all writes fail. So everything about it is completely hidden from the OS, though the OS can trigger the IME/SMM to run various functions through interrupts, too. But this system is also required for the CPU to even function, so killing it bricks your CPU. Which, ofc, you can do via exploits. Or install ring-2 keyloggers. or do fucking anything else you want to.
tl;dr IME is a hardware godmode, and if someone compromises this (and there have been many exploits), their code runs at ring-2 permissions (above kernel (0), above hypervisor (-1)). They can do anything and everything on/to your system, completely invisibly, and can even install persistent malware that lives inside your bloody cpu. And guess who has keys for this? Go on, guess. you're probably right. Are they completely trustworthy? No? You're probably right again.
There is absolutely no reason for this sort of thing to exist, and its existence can only make things worse. It enables spying of literally all kinds, it enables cpu-resident malware, bricking your physical cpu, reading/modifying anything anywhere, taking control of your hardware, etc. Literal godmode. and some of it cannot be patched, meaning more than a few exploits require replacing your cpu to protect against them.
And why does this exist?
Ostensibly to allow sysadmins to remote-manage fleets of computers, which it does. But it allows fucking everything else, too. and keys to it exist. and people are absolutely not trustworthy. especially those in power -- who are most likely to have access to said keys.
The only reason this exists is because fucking power-hungry doucherockets exist. -
Guys...
DevRant logged in automatically on my new phone. I don't know how I feel about this.
Or maybe I've already forgotten.
Help. I'm getting old and senile. -
The world of storage gonna change..
Yesterday, scientists successfully stored and retrieved an entire OS, a movie and some other files from DNA.
It is estimated that a single gram of DNA can store 215 million GB of data. (Now the storage can be as small as invisible.) -
The solution for this one isn't nearly as amusing as the journey.
I was working for one of the largest retailers in NA as an architect. Said retailer had over a thousand big box stores, IT maintenance budget of $200M/year. The kind of place that just reeks of waste and mismanagement at every level.
They had installed a system to distribute training and instructional videos to every store, as well as recorded daily broadcasts to all store employees as a way of reducing management time spent with employees in the morning. This system had cost a cool 400M USD, not including labor and upgrades for round 1. Round 2 was another 100M to add a storage buffer to each store because they'd failed to account for the fact that their internet connections at the store and the outbound pipe from the DC weren't capable of running the public facing e-commerce and streaming all the video data to every store in realtime. Typical massive enterprise clusterfuck.
Then security gets involved. Each device at stores had a different address on a private megawan. The stores didn't generally phone home, home phoned them as an access control measure; stores calling the DC was verboten. This presented an obvious problem for the video system because it needed to pull updates.
The brilliant Infosys resources had a bright idea to solve this problem:
- Treat each device IP as an access key for that device (avg 15 per store).
- Verify the request ip, then issue a redirect with ANOTHER ip unique to that device that the firewall would ingress only to the video subnet
- Do it all with the F5
A few months later, the networking team comes back and announces that after months of work and 10s of people years they can't implement the solution because iRules have a size limit and they would need more than 60,000 lines or 15,000 rules to implement it. Sad trombones all around.
Then, a wild DBA appears, steps up to the plate and says he can solve the problem with the power of ORACLE! Few months later he comes back with some absolutely batshit solution that stored the individual octets of an IPV4, multiple nested queries to the same table to emulate subnet masking through some temp table spanning voodoo. Time to complete: 2-4 minutes per request. He too eventually gives up the fight, sort of, in that backhanded way DBAs tend to do everything. I wish I would have paid more attention to that abortion because the rationale and its mechanics were just staggeringly rube goldberg and should have been documented for posterity.
So I catch wind of this sitting in a CAB meeting. I hear them talking about how there's "no way to solve this problem, it's too complex, we're going to need a lot more databases to handle this." I tune in and gather all it really needs to do, since the ingress firewall is handling the origin IP checks, is convert the request IP to video ingress IP, 302 and call it a day.
While they're all grandstanding and pontificating, I fire up visual studio and:
- write a method that encodes the incoming request IP into a single uint32
- write an http module that keeps an in-memory dictionary of uint32,string for the request, response, converts the request ip and 302s the call with blackhole support (sketched below)
- convert all the mappings in the spreadsheet attached to the meetings into a csv, dump to disk
- write a wpf application to allow for easily managing the IP database in the short term
- deploy the solution one of our stage boxes
- add a TODO to eventually move this to a database
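For the curious, the whole trick really is that small. Below is a rough sketch of those first two steps in Python rather than the original http module; the CSV file name, port and URL scheme are made up for illustration:

```python
# Sketch only: encode the caller's IPv4 as a uint32 key, look up its video
# ingress IP in an in-memory dictionary, 302 it there, blackhole the rest.
import csv
import ipaddress
from http.server import BaseHTTPRequestHandler, HTTPServer

def ip_to_uint32(ip: str) -> int:
    return int(ipaddress.IPv4Address(ip))  # "10.1.2.3" -> one 32-bit integer

# store-device IP -> video-ingress IP, loaded from the flat CSV "database"
MAPPING = {}
with open("ip_map.csv", newline="") as f:
    for source_ip, ingress_ip in csv.reader(f):
        MAPPING[ip_to_uint32(source_ip)] = ingress_ip

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = MAPPING.get(ip_to_uint32(self.client_address[0]))
        if target is None:          # unknown caller: blackhole it
            self.send_error(404)
            return
        self.send_response(302)     # bounce the device to its ingress IP
        self.send_header("Location", f"http://{target}{self.path}")
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RedirectHandler).serve_forever()
```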
All this took about 5 minutes. I interrupt their conversation to ask them to retarget their test to the port I exposed on the stage box. Then watch them stare in stunned silence as the crow grows cold.
According to a friend who still works there, that code is still running in production on a single node to this day. And still running on the same static file database.
#TheValueOfEngineers -
This rant is particularly directed at web designers, front-end developers. If you match that, please do take a few minutes to read it, and read it once again.
Web 2.0. It's something that I hate. Particularly because the directive amongst webdesigners seems to be "client has plenty of resources anyway, and if they don't, they'll buy more anyway". I'd like to debunk that with an analogy that I've been thinking about for a while.
I've got one server in my home, with 8GB of RAM, 4 cores and ~4TB of storage. On it I'm running Proxmox, which is currently using about 4GB of RAM for about a dozen VM's and LXC containers. The VM's take the most RAM by far, while the LXC's are just glorified chroots (which nonetheless I find very intriguing due to their ability to run unprivileged). Average LXC takes just 60MB RAM, the amount for an init, the shell and the service(s) running in this LXC. Just like a chroot, but better.
On that host I expect to be able to run about 20-30 guests at this rate, on 4 cores and 8GB RAM. More extensive migration to LXC will improve this number over time. However, I'd like to go further. Once I was able to build a Linux which was just a kernel and busybox, backed by the musl C library. The thing, running as a VM, consumed only 13MB of RAM, nearly all of it dedicated to the kernel. I could probably optimize it further with modularization, but at the time I didn't due to its experimental nature. In a chroot, the kernel of the host is used, meaning that said setup would come down to mere kilobytes of RAM consumption. The busybox shell would be its most important RAM consumer, which is negligible.
I don't want to settle with 20-30 VM's. I want to settle with hundreds or even thousands of LXC's on 8GB of RAM, as I've seen first-hand with my own builds that it's possible. That's something that's very important in webdesign. Browsers aren't all that different. More often than not, your website will share its resources with about 50-100 other tabs, because users forget to close their old tabs, are power users, looking things up on Stack Overflow, or whatever. Therefore that 8GB of RAM now reduces itself to about 80MB only. And then you've got modern web browsers which allocate their own process for each tab (at a certain amount, it seems to be limited at about 20-30 processes, but still).. and all of its memory required to render yours is duplicated into your designated 80MB. Let's say that 10MB is available for the website at most. This is a very liberal amount for a webserver to deal with per request, so let's stick with that, although in reality it'd probably be less.
10MB, the available RAM for the website you're trying to show. Of course, the total RAM of the user is comparatively huge, but your own chunk is much smaller than that. Optimization is key. Does your website really need that amount? In third-world countries where the internet bandwidth is still in the order of kB/s, 10MB is *very* liberal. Back in 2014 when I got into technology and webdesign, there was this rule of thumb that 7 seconds is usually when visitors click away. That'd translate into.. let's say, 10kB/s for third-world countries? 7 seconds makes that 70kB of available network bandwidth.
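A quick back-of-the-envelope version of that budget in Python, using the same rough numbers from above (they are assumptions, not measurements):

```python
# The rant's arithmetic: many tabs share the RAM, and slow connections plus
# a ~7 second attention span cap how much a page may weigh.
total_ram_mb = 8 * 1024      # the visitor's machine
open_tabs = 100              # power user with a pile of forgotten tabs
print(f"per-tab share: ~{total_ram_mb / open_tabs:.0f} MB")   # ~82 MB

bandwidth_kb_per_s = 10      # slow connection, in kB/s
attention_span_s = 7         # rule of thumb before visitors click away
print(f"page weight budget: ~{bandwidth_kb_per_s * attention_span_s} kB")  # 70 kB
```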
Web 2.0, taking 30+ seconds to load a web page, even on a broadband connection? Totally ridiculous. Make your website as fast as it can be, after all you're playing along with 50-100 other tabs. The faster, the better. The more lightweight, the better. If at all possible, please pursue this goal and make the Web a better place. Efficiency matters. -
Meet 'SBI Online' app from Play Store, in their own words:
What they were supposed to do?
"Experience the new Retail Internet Banking of SBI"
What they do?
"SBI online app will redirect to SBI Retail Internet Banking (online SBI) site"
Why do they have app?
"No need to remember URL",
"Less memory space required on device"
App storage space?
F**king 2.6 MB, just to redirect users to their website in a third-party browser. -
!rant but history
I found this old micro controller: The TMS 1000 (from 1974). The specs: 100-400kHz clock speed, 4-bit architecture, 1kB ROM and 32 bytes (!) RAM. According to the data sheet, you sent the program to TI and they gave you a programmed controller back - updates to the once-uploaded program were impossible, but an external memory chip was possible.
I'm glad we have computers with more processing power and storage (and other languages than assembler) - on the other hand it enforced good debugging before deployment and efficient code.
Data sheet: http://bitsavers.org/components/ti/... -
2012 laptop:
- 4 USB ports or more.
- Full-sized SD card slot with write-protection ability.
- User-replaceable battery.
- Modular upgradeable memory.
- Modular upgradeable data storage.
- eSATA port.
- LAN port.
- Keyboard with NUM pad.
- Full-sized SD card slot.
- Full-sized HDMI port.
- Power, I/O, charging, network indicator lamps.
- Modular bay (for example Lenovo UltraBay)
- 1080p webcam (Samsung 700G7A)
- No TPM trojan horse.
2024 laptop:
- 1 or 2 USB ports.
- Only MicroSD card slot. Requires fumbling around and has no write-protection switch.
- Non-replaceable battery.
- Soldered memory.
- Soldered data storage.
- No eSATA port.
- No LAN port.
- No NUM pad.
- Micro-HDMI port or uses USB-C port as HDMI.
- Only power lamp. No I/O lamp so user doesn't know if a frozen computer is crashed or working.
- No modular bay
- 720p webcam
- TPM trojan horse (Jody Bruchon video: https://youtube.com/watch/... )
- "Premium design" (who the hell cares?!)14 -
I do not like the direction laptop vendors are taking.
New laptops tend to feature fewer ports, making the user more dependent on adapters. Similarly to smartphones, this is a detrimental trend initiated by Apple and replicated by the rest of the pack.
As of 2022, many mid-range laptops feature just one USB-A port and one USB-C port, resembling Apple's toxic minimalism. In 2010, mid-class laptops commonly had three or four USB ports. I have even seen an MSi gaming laptop with six USB ports. Now, much of the edge space is wasted "clean" space.
Sure, there are USB hubs, but those only work well with low-power devices. When attaching two external hard drives to transfer data between them, they might not be able to spin up due to insufficient power from the USB port or undervoltage caused by the impedance (resistance) of the USB cable between the laptop's USB port and hub. There are USB hubs which can be externally powered, but that means yet another wall adapter one has to carry.
A non-replaceable battery [the shortest-lived component] means difficult repairs and no more reserve batteries, as well as no extra-sized battery packs. When the battery expires, one might have to waste four hours at a repair shop for a replacement that would have taken a minute on a 2010 laptop.
The SD card slot is being replaced with inferior MicroSD or removed entirely. This is especially bad for photographers and videographers who would frequently plug memory cards into their laptop. SD cards are far more comfortable than MicroSD cards, and no, bulky external adapters that reserve the device's only USB port and protrude can not replace an integrated SD card slot.
Most mid-range laptops in the early 2010s also had a LAN port for immediate interference-free connection. That is now reserved for gaming-class / desknote laptops.
Obviously, components like RAM and storage are far more difficult to upgrade in more modern laptops, or not possible at all if soldered in.
Touch pads increasingly have the buttons underneath the touch surface rather than separate, meaning one has to be careful not to move the mouse while clicking. Otherwise, it could cause an unwanted drag-and-drop gesture. Some touch pads are smart enough to detect when a user intends to click, and lock the movement, but not all. A right-click drag-and-drop gesture might not be possible due to the finger on the button being registered as touch. Clicking with short tapping could be unreliable and sluggish. While one should have external peripherals anyway, one might not always have brought them with. The fallback input device is now even less comfortable.
Some laptop vendors include a sponge sheet that they want users to put between the keyboard and the screen before folding it, "to avoid damaging the screen", even though making it two millimetres thicker could do the same without relying on a sponge sheet. So they want me to carry that bulky thing everywhere around? How about no?
That's the irony. They wanted to make laptops lighter and slimmer, but that made them adapter- and sponge sheet-dependent, defeating the portability purpose.
Sure, the CPU performance has improved. Vendors proudly show off in their advertisements which generation of Intel Core they have this time. As if that is something users especially care about. Hoo-ray, generation 14 is now yet another 5% faster than the previous generation! But what is the benefit of that if I have to rely on annoying adapters to get the same work done that I could formerly do without those adapters?
Microsoft has also copied Apple in demanding internet connection before Windows 11 will set up. The setup screen says "You will need an Internet connection…" - no, technically I would not. What does technically stand in the way of Windows 11 setting up offline? After all, previous Windows versions like Windows 95 could do so 25 years earlier. But also far more recent versions. Thankfully, Linux distributions do not do that.
If "new" and "modern" mean more locked-in and less practical and difficult to repair, I would rather have "old" than "new".12 -
Firefox Quantum! So fast, so great, amazing, the best.
..
..
Open with 2 tabs (Slack and YouTube homepage). What's been rewritten exactly? -
Yes, today storage memory is "cheap".
But 100MB for an image flash tool
Wtf!?!
An arch iso is 600MB
Rufus on windows is ~1MB -
The challenge: a Dell Latitude XPi CD 133ST. It has a Pentium 133MHz, 32MB memory, and a 2GB HDD. I think I have some CD's with Mandrake from the late 90's. I hope. And mission impossible, an IBM Workpad z50 with a MIPS 4000 and 8MB split 50/50 between storage and program memory. Think that one will just be a large paper weight or door stop.
-
24th, Christmas: BIND slaves decide to suddenly stop accepting zone transfers from the master. Half a day of raging and I still couldn't figure out why. dig axfr works fine, but the slaves refuse a zone update according to tcpdump logs.
25th, 2nd day: A server decides to go down and take half my network with it. Turns out that a Python script managed to crash the goddamn kernel.
Thank you very much technology for making the Christmas days just a little bit better ❤️
At least I didn't have anything to do during either day, because of the COVID-19 pandemic. And to be fair, I did manage to make a Telegram bot with fancy webhooks and whatnot in 5MB of memory and 18MB of storage. Maybe I should just write the whole thing and make another sacred temple where shitty code gets beaten the fuck out of the system. Terry must've been onto something... -
My first contact with an actual computer was the Sinclair ZX80, a monster with 512 bytes of ram (as in 1/2 kbyte)
It had no storage so you had to enter every program every time, and it was programmed in BASIC using key combinations; you could not just type the commands out since it did not have enough memory to keep the full text.
So you pressed the cmd key along with one of the letter keys and possibly shift to enter a command, like cmd+p for print, and it stored a byte code. -
MTP is utter garbage and belongs to the technological hall of shame.
MTP (media transfer protocol, or, more accurately, MOST TERRIBLE PROTOCOL) sometimes spontaneously stops responding, causing Windows Explorer to show its green placebo progress bar inside the file path bar which never reaches the end, and sometimes to whiningly show "(not responding)" with that white layer of mist fading in. Sometimes lists files' dates as 1970-01-01 (which is the Unix epoch), sometimes shows former names of folders prior to being renamed, even after refreshing. I refer to them as "ghost folders". As well known, large directories load extremely slowly in MTP. A directory listing with one thousand files could take well over a minute to load. On mass storage and FTP? Three seconds at most. Sometimes, new files are not even listed until rebooting the smartphone!
Arguably, MTP "has" no bugs. It IS a bug. There is so much more wrong with it that it does not even fit into one post. Therefore it has to be expanded into the comments.
When moving files within an MTP device, MTP does not directly move the selected files, but creates a copy and then deletes the source file, causing both needless wear on the mobile device's flash memory and the loss of files' original date and time attribute. Sometimes, the simple act of renaming a file causes Windows Explorer to stop responding until unplugging the MTP device. It actually once unfroze after more than half an hour where I did something else in the meantime, but come on, who likes to wait that long? Thankfully, this has not happened to me on Linux file managers such as Nemo yet.
When moving files out using MTP, Windows Explorer does not move and delete each selected file individually, but only deletes the whole selection after finishing the transfer. This means that if the process crashes, no space has been freed on the MTP device (usually a smartphone), and one will have to carefully sort out a mess of duplicates. Linux file managers thankfully delete the source files individually.
Also, for each file transferred from an MTP device onto a mass storage device, Windows has the strange behaviour of briefly creating a file on the target device with the size of the entire selection. It does not actually write that amount of data for each file, since it couldn't do so in this short time, but the current file is listed with that size in Windows Explorer. You can test this by refreshing the target directory shortly after starting a file transfer of multiple selected files originating from an MTP device. For example, when copying or moving out 01.MP4 to 10.MP4, while 01.MP4 is being written, it is listed with the file size of all 01.MP4 to 10.MP4 combined, on the target device, and the file actually exists with that size on the file system for a brief moment. The same happens with each file of the selection. This means that the target device needs almost twice the free space as the selection of files on the source MTP device to be able to accept the incoming files, since the last file, 10.MP4 in this example, temporarily has the total size of 01.MP4 to 10.MP4. This strange behaviour has been on Windows since at least Windows 7, presumably since Microsoft implemented MTP, and has still not been changed. Perhaps the goal is to reserve space on the target device? However, it reserves far too much space.
When transfering from MTP to a UDF file system, sometimes it fails to transfer ZIP files, and only copies the first few bytes. 208 or 74 bytes in my testing.
When transfering several thousand files, Windows Explorer also sometimes decides to quit and restart in midst of the transfer. Also, I sometimes move files out by loading a part of the directory listing in Windows Explorer and then hitting "Esc" because it would take too long to load the entire directory listing. It actually once assigned the wrong file names, which I noticed since file naming conflicts would occur where the source and target files with the same names would have different sizes and time stamps. Both files were intact, but the target file had the name of a different file. You'd think they would figure something like this out after two decades, but no. On Linux, the MTP directory listing is only shown after it is loaded in entirety. However, if the directory has too many files, it fails with an "libmtp: couldn't get object handles" error without listing anything.
Sometimes, a folder appears empty until refreshing one more time. Sometimes, copying a folder out causes a blank folder to be copied to the target. This is why on MTP, only a selection of files and never folders should be moved out, due to the risk of the folder being deleted without everything having been transferred completely.
(continued below) -
Imagine asking your friends to help you rate your app on the google play store and instead of saying NO I DONT WANT TO RATE YOUR APPLICATION no... they decide to fuck with your mind.
1)
I will rate it tomorrow. (she never rated it tomorrow nor the next couple of weeks later)
2)
I will keep it in mind and rate it later :). (she never rated it later)
3)
I rated it haha (less than 30 seconds later they deleted the rating)
4)
Send me a link and I'll rate it (i send the link, they never respond or read my message again)
5)
I dont have memory on my phone :) (because 13MB of memory is a lot of storage requirements but taking 1 million selfies of up to 25GB is completely fine)
6)
I dont have memory on my phone what dont you understand :) x2 (this is the second girl)
7)
Your trying to give me a virus?? No (i got blocked multiple times)
8)
You want to hack me by making me install this application from the link that you sent me that leads to google play store? No (blocked)
9)
Rate your app? Haha i dont care about it because it doesnt bring me any benefit only the fat cocks that fill my pussy up satisfy me and not ur app haha
10)
Haha send me a link ill rate it (i send link, 8 hours later no reply or reading my message, i text her back if she had done it and im still put on ignore)
...
N)
more
----
Notice how none of these people have said the 2 letter word: "no".
All of these 10 examples are based on a true story.
All of these 10 examples are different people.
---
How hard
Can it be
To just
Write
no
---
.
---
For all of you who are about to trash talk saying i am desperately trying to beg people to rate my app:
i know all of those people for a long time. But when it comes to asking (and not forcing) someone to do you a favor for free that takes no more than 30 seconds, no one seems to have 30 seconds of their free time. Dont get me wrong, some of my friends did politely rate it and left a review, even the people who i barely knew left a review and rated it, but the people with whom I was closer by, didnt.
---
In the beginning i used to not care about this at all. Then i started falling into depression because of it. I fell then into deep depression. Then i sunk so deep that i couldn't feel any emotions anymore so i laughed as an anti depressive mechanism whenever something depressing happened. Now i cant even laugh because i have no more energy. Now i actually leave man tears
---
The only thing more valuable than people, any materialistic thing, animals, coding and even money - is time....
----
why do you waste my time
if i ask you to do something that takes 30 seconds and you dont want to do it
why cant you just say no
why do you drag me
why do you say you're going to do it when you know you wont do it
what do you gain by unnecessarily lying to someone for such a small thing?
to someone who has been a good person to you?
do you feel superior?
is your ego bigger?
----
This experience has taught me that not even a human from the same blood can be trusted.
All of your are fucked up in the head in your own style and i am guilty of it too, all of us are.
But i have never seen the human evolution went from simplicity to overengineered complexitory bULLSHit where you have to lie to someone and waste hours, days, weeks, months and sometimes years of his time just because you dont want to say a 2 letter word, no.
But when that person becomes more successful than you and achieves higher status, then you have those 30 seconds of free time. All of you are fucking cynics. and i am so much overly disgusted by all of this fucking bullshit....
-----
This experience has proven to me to simply focus on investing into myself and learn and improve myself and no one else. To not even bother asking even for a small kind of help, a feedback from my work because people don't have 30 seconds of their free time. That is all. -
My 27" 8-core imac, i7, 3.8ghz, AMD radeon pro 5500, 40 GB RAM 512 GB storage,
keeps screaming in agony.
But never stuttered.
Never lagged.
Never glitched
Never failed
Never ran out of memory
I can just hear how hard the ventilation was going. It was getting loud.
I touched its ass from behind. It was heated up and there was lots of dust from the holes
This has been going on for several days but i ignored it knowing what kind of a beast machine i have (big mistake)
IntelliJ popped up a notification to disable hints in order to improve cpu usage performance.
Immediately it struck me. Hold on lemme check the activity monitor stats and find out why my imac has been screaming for days
Turns out IntelliJ is using over 1090% of my fucking CPU?????
THAT SHIT U SEE ON THE IMAGE WENT ABOVE 1100% OF CPU USAGE AND IT WAS ONLY 1 PROCESS CAUSING IT - INTELLIJ
WHAT??? -
Just now I was reading on https://pve.proxmox.com/wiki/... about high availability. Now my Proxmox VE is just a tower (which happens to have ECC memory) that's stored in my storage room (and which is mostly used for experimental and home server purposes). But my mail servers.. those have been made with high availability in mind. Most importantly, I've made their services entirely redundant (but within the same datacenter). And when they have updates, I apply updates to one, reboot, see if it didn't break something and then do the same to the other server after the first one came up again. So no downtime whatsoever.
If memory serves me right, I think that I've been able to maintain these servers for the last year without any downtime at all (I reboot them every month to apply new kernels but they haven't both been simultaneously down at any moment). Does that make them High Availability? My interventions regarding their availability have been rather trivial. Is it really that hard..? -
Man wk89 awesome... bringing back a lot of memories. The one thing that really stands out to me though is the software.
I see a lot of rants about people shocked that turboC is still in use or other DOS programs are still in production. A lot of bad can be said here but I think often it's a case of we truly don't build things like we did in the good old days.
What those devs accomplished with such limited resources is phenomenal and the fact that we still haven't managed to replicate the feel and usability of it says a lot, not to mention just how fucking stable most of it was.
My favourite games are all DOS based, my most favourite of all time Sherlock is 103kb in size. When I started coding games I made a clone of it and to this day I am still trying to figure out what sorcery is in the algorithm that generates/solves puzzles that makes it so fast and memory efficient. I must have tried 100+ ways and can't even come close. NB! If you know you can hint but don't tell me. Solving this is a matter of personal pride.
Where those games really stand out is when you get into the graphics processing - the solutions they came up with to render sprites, maps and trick your eyes into seeing detail with only 4-16 colours is nothing short of genius. Also take a second to consider that taking a screen shot of the game is larger than the entire game itself and let that sink in...
I think the dramatic increase in storage, processing power and ram over the last decade is making us shit developers - all of us. Just take one look at chrome, skype or anything else mainline really and it's easy to see we no longer give a rats ass about memory anywhere except our monthly AWS/GCE bill.
We don't have to be creative or even mindful about anything but the most significant memory leaks in order to get our software to run nowadays. We also don't have constraints to distribute it, fast deliver-ability is rewarded over quality software. It's only expected to stay in production 3-4 years anyway.
Those guys were the true "rockstars" and "ninja" developers and if you can't acknowledge that you can take ya React app and shovit. -
it was not a technical interview.
just screening.
guy: tell me smth about redis.
me: key value, in memory storage.
guy: more
me: umm, the concept is similar to localStorage in browsers, key value storage, kinda in memory.
guy: so we use redis in browsers?
me: no, I mean the high level concept is similar.
guy: (internally: stupid, fail).
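To spell out the comparison in code: a tiny sketch assuming redis-py and a Redis server on localhost, nothing from the actual interview:

```python
# Redis as a key-value, in-memory store; conceptually like the browser's
# localStorage setItem/getItem, just living on a server instead.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

r.set("session:42", "alice")     # localStorage.setItem("session:42", "alice")
print(r.get("session:42"))       # localStorage.getItem("session:42") -> "alice"
r.expire("session:42", 3600)     # plus things localStorage can't do, like TTLs
```
-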
I just got a new phone, a Tecno device with a measly 8GB internal storage. Decided I'll have to root it, and force part of my class 10, 32GB mem card as adopted storage.
Went online, learnt for a few weeks, successfully rooted, and began enjoying the vast benefits.
But there are no good endings. Months later, while doing some heavy gaming, my phone reboots... and everything on the memory card packs up and goes on a vacation... -
MTP is complete garbage. I want mass storage back.
The media transfer protocol (MTP) occasionally discovers new creative ways of failure. Frequently, directory listings take minutes to load or fail to load at all, and it freezes up infinitely (until disconnected) when renaming an item, and I can not even do two things simultaneously.
While files are being moved, I can not browse pictures or watch videos from the smartphone.
Sometimes, files are listed with the date 1970-01-01 (Unix epoch) instead of their correct date. Sometimes, files do not appear at all, which makes it unsafe to move directories from the device.
MTP lacks random access. If I want to play a two-gigabyte 4K 2160p video and seek in the video, guess what: I need to copy it to my computer's local mass storage first because MTP lacks random access.
When transferring high numbers of files, MTP has to slooooowly enumerate (or "prepare" or "calculate the time of") them all, which might even take longer than mass storage would need for the entire process. This means MTP might start copying or moving the actual files when mass storage is already finished.
Today, the "preparing to move" process was especially slow: five minutes for around 150 files! How am I supposed to find out what caused this random malfunction?
MTP sometimes drives me insane. I want mass storage back, at least for the MicroSD memory card, which uses a widely supported file system.
Imagine a 2010 $100 Android phone is better at file transfer than a 2022 $1000 Android phone (or iPhone, for that matter). -
Reanimated an old e-ink tablet today.
First, I didn't even know it needed to be reanimated. I just copied my books there, but it didn't find them. When I connected it again, they were gone.
Factory reset. Format storage. The memory seems empty, but after rebooting I see that everything is still intact.
Ok, imma hit forums then. They tell me I need to replace the internal memory. But isn't that something you need soldering for? Wrong! The internal memory IS JUST A MICRO SD CARD on the motherboard. The card is some cheap no name one, and people tell the similar story of it burning out after like four years of use.
Damn! The vendor has the AUDACITY to charge for signing their firmware to be flashed to a new micro sd card.
But I won't go down this easily. I hit forums again, and apparently there is a tool to sign the firmware yourself, but you need to find the card's serial number. To do that, you have to flash a bootleg tool, boot from that card, and it will show you the data you need. Then, you have to insert them into some shady .ini file (why does everything touching bootleg firmware run only on windows?).
So I do that. The problem is, I need an image for my book. I find some shady one online, sign & flash it — touchscreen doesn't work. But I have the official firmware. I put two and two together and figure out that if the reader is able to display the ui, it probably has the firmware update tool working. So, immediately after flashing, I launch the firmware update utility that picks up my firmware from the second sd card (yes, they have an additional external slot).
Bingo. It works.
So, here are the steps:
1. Find a shady sd serial number detection tool
2. Flash it on a memory card with a shady vendor-specific flashing tool
3. Insert the new (now shady) card
4. Boot, write down the serial number
5. Find a shady boot image online
6. Edit a shady .ini file of a shady self-signing tool to sign the shady boot image
7. Flash the altered shady boot image with the shady flashing tool on your memory card
8. Copy a shady firmware update on a new card
9. Insert both cards
10. Pray
-
It's 2022 and Firefox still doesn't allow deactivating video caching to disk.
When playing videos from some sites like the Internet Archive, it writes several hundreds of megabytes to the disk, which causes wear on flash storage in the long term. This is the same reason cited for the use of jsonlz4 instead of plain JSON. The caching of videos to disk even happens when deactivating the normal browsing cache (about:config property "browser.cache.disk.enable").
I get the benefit of media caching, but I'd prefer Firefox not to write gigabytes to my SSD each time I watch a somewhat long video. There is actually the about:config property "browser.privatebrowsing.forceMediaMemoryCache", but as the name implies, it is only for private browsing. The RAM is much more suitable for this purpose, and modern computers have, unlike computers from a decade ago, RAM in abundance, which is intended precisely for such a purpose.
The caching of video (and audio) to disk is completely unnecessary as of 2022. It was useful over a decade ago, back when an average computer had 4 GB of RAM and a spinning hard disk (HDD). Now, computers commonly have 16 GB RAM and a solid-state drive (SSD), which makes media caching on disk obsolete, and even detrimental due to weardown. HDDs do not wear down much from writing, since it just alters magnetic fields. HDDs just wear down from the spinning and random access, whereas SSDs do wear down from writing. Since media caching mostly invovles sequential access, HDDs don't mind being used for that. But it is detrimental to the life span of flash memory, and especially hurts live USB drives (USB drives with an operating system) due to their smaller size.
If I watch a one-hour HD video, I do not wish 5 GB to be written to my SSD for nothing. The nonstandard LZ4 format "mozLZ4" for storing sessions was also introduced with the argument of reducing disk writes to flash memory, but video caching causes multiple times as much writing as that.
The property "media.cache_size" in about:config does not help much. Setting it to zero or a low value causes stuttering playback. Setting it to any higher value does not reduce writes to disk, since it apparently just rotates caching within that space, and a lower value means that it just rotates writing more often in a smaller space. Setting a lower value should not cause more wear due to wear levelling, but also does not reduce wear compared to a higher value, since still roughly the same amount of data is written to disk.
Media caching also applies to audio, but that is far less in size than video. Still, deactivating it without having to use private browsing should not be denied to the user.
The fact that this can not be deactivated is a shame for Firefox. -
I really love my mother but.
A couple of weeks ago she asked me for advice regarding a laptop. She wanted something cheap for office and stuff.
Since I know her I exactly knows she needs extreme fast boot and responsiveness. She'll go all hulk rage if the laptop doesn't boot in less than 30 seconds.
Told her to get something with an ssd since storage is no issue, and 4gb ram with a decent older i5. Took a whole day going through stores in my area and online to find good deals. Sent her everything I found. Really good laptops for under 500€ I would've killed for.
Fast forward. She bought some 300€ shit laptop because it had 1tb memory. She didn't ask for advice, just bought the cheapest one that read decently, description-wise.
Now she is raging all day and bitching about it being so slow and I should fix it for her since I'm an it guy etc.
Looking at the specs I nearly started to vomit. She seriously bought a laptop worse than she already had. Old i3 2gb ram 5200rpm HDD.
I told her she should return it because it is shit. But no. She insists that since it's newer it is better and I am only a lazy fuck who doesn't want to be bothered to do her a favor.
Offered the best thing I could think of. Told her I'd install Linux on it for her and teach her how to use it.
Explained it would run more smoothly since she refused to take that shit laptop back. But no. Of course she insists on using windows 10....
FUUUUUUUCK. I love my mother but seriously I'm about to explode. -
Time for a rant about shitstaind, suspend/hibernate, and if there's room for it at the end probably swappiness, and Windows' way of dealing with this.
So yesterday I wanted to suspend my laptop like usual, to get those goddamn fans to shut up when I'm sleeping. Shitstaind.. pinnacle of init systems.. nope, couldn't do it. Hibernation on the other hand, no problem mate! So I hibernated the laptop and resumed it just now. I'm baffled by this.
I'll oversimplify a bit here (but feel free to comment how there's more to it regardless) but basically with suspend you keep your memory active as well as some blinkenlights, and everything else goes down. Simple enough.. except ACPI and I will not get into that here, curse those foul lands of ACPI.
With hibernation you do exactly the same, but on top of that, you also resume the system after suspending it, and freeze it. While frozen, you send all the memory contents to the designated swap file/partition. Regarding the size of the swap file, it only needs to be big enough to fit the memory that's currently in use. So in a 16GB RAM system with 8GB swap, as long as your used memory is under 8GB, no problem! It will fit. After you've moved all the memory into swap, you can shut down the entire system.
Now here's the problem with how shitstaind handled this... It's blatantly obvious that hibernation is an extension of suspend (sometimes called S3, see e.g. https://wiki.ubuntu.com/Kernel/...) and that therefore the hibernation shouldn't have been possible either. The pinnacle of init systems.. can't even suspend a system, yet it can hibernate it. Shitstaind sure works in mysterious ways!
On Windows people would say it's a hardware issue though, so let's talk a bit about that clusterfuck too. And I'll even give you a life hack that saves 30GB of storage on your Windows system!
Now I use Windows 7 only, next to my Linux systems. Reason for it is it's the least fucked up version of Windows in my opinion, and while it's falling apart in terms of web browsing (not that you should on an EOL system), it's good enough for le games. With that out of the way... So when you install Windows, you'll find that out of the box it uses around 40GB of storage. Fairly substantial, and only ~12GB of it is actually system data. The other 30-ish GB are used by a hibernation file (size of your RAM, in C:\hiberfil.sys) and the page file (C:\pagefile.sys, and a little less than your total RAM.. don't ask me why). Disable both of those and on a 16GB RAM system, you'll save around 30GB storage. You can thank me later.
What I find strange though is that aside from this obscene amount of consumed storage, is that the pagefile and hibernation file are handled differently. In Linux both of those are handled by the swap, and it's easy to see why. Both are enabled by the concept of virtual memory. When hibernating, the "real" memory locations are simply being changed to those within swap. And what is the pagefile? Yep.. virtual memory. It's one thing to take an obscene amount of storage, but only Windows would go the extra mile and do it twice. Must be a hardware issue as well.
Oh, and swappiness. This is a concept that many Linux users seem to misunderstand. Intuitively you'd think that the swappiness determines what percentage of memory it takes for the kernel to start swapping, but this is not true. Instead, it's a ratio of sorts that the kernel uses when determining how important the memory and swap are. Each bit of memory has a chance to be put into either depending on the likelihood of it being used soon after, and with the swappiness you're tuning this likelihood to be either in favor of memory or swap. This is why a swappiness of 60 is default most of the time, because both are roughly equally important, and swap being on disk is already taken into account. When your system is swapping only and exactly the memory that's unlikely to be used again, you know you've succeeded. And even on large memory systems, having some swap is usually not a bad idea. Although I'd definitely recommend putting it on SSD in a partition, so that there's no filesystem overhead and so that it's still sufficiently fast, even when several GB of memory are being dumped in. -
Buying a thunder purple 1+6T on Thursday... It will have more memory and more storage than my daily use computer (a Chromebook). Going to install UserLAnd, get a folding Bluetooth keyboard and a stand.
Laptop replacement. No, seriously. -
Did I get old or did I just finish plucking all the low hanging fruit?
When I started on a programming journey about a decade ago everything felt exciting and I learned a lot of things per day (variable, loop, method, class, etc.)
Now a decade later I am more concerned with the overall system design, algorithm usage (Big O notation), how reliable the system is, and how the configurations are set up and how easy it is to change them.
I now notice that I don't really learn anything new. Everything feels the same.
Want redundancy? Use more servers
Want faster performance? Make a parallel system.
Want a program to run on a low-end device? Think about how memory and storage will be used in the system.
Is this a stage everyone went through, like puberty? Or am I just having a mid-life crisis?
PS: I haven't even reached 30 yet but I feel too old. -
LXC, no doubt.
I mean to be fair, LXC is an amazing container runtime once you manage to set it up. But setting it up is the hard bit. Starting off with LXC 2.x, it was a nightmare to find out how to get things like the storage backends working. But with ZFS it ended up being alright. Find some arcane values to stick in the /etc/lxc/default.conf to use ZFS as the backend and then the default storage location on those ZFS pools (I'll get back to that later), and it worked alright. Again, once it works it's great, but setting it up and finding the right configuration keys is absolute hell.
So, LXC 2.x for a while and a few months ago I finally ended up upgrading to 3.x. Every single configuration key changed. Every single one of them, and that's why I had to 1) learn LXC all over again, and 2) redeploy each and every one of my containers. That process is still not entirely completed. ZFS backend was once again a dive into arcane configuration keys found on forums and whatnot. Yeah.. official documentation has none of it. Oh and in 3.x you now also have to dodge the torrent of "just use LXD m8" messages. Yeah, very helpful when LXD is also the ONLY way to reasonably configure it. Absolutely beautiful. Oh and as far as the ZFS default storage location goes (such as ssd/lxc/ct)? Yeah forget about it. There's no configuration option for it anymore, and the default is "lxc". In ZFS lingo that means that LXC has the audacity to demand a whole pool for itself. No. No you don't deserve a whole pool for yourself. But hey at least you can define the storage location to use in the lxc-create command! Every single time you have to define it in lxc-create. I abstracted it away into my own LXC interface, so no big deal really. But yeah... That could absolutely be better. And in 2.x it was actually better.
Oh and btrfs, the filesystem I'd like to use on low memory systems because ZFS' ARC is too much on such systems? Yeah forget about it. I still have no idea how to do it. Thank you LXC and its amazing documentation!
And if you want the icing on the cake for LXC's terrible documentation, see their repo's index page at https://github.com/lxc/lxc/.... Yeah, it's totally still at 2.x... That's how well they maintain that. Even Debian has 3.x now. And if you look at the branches, you'll find that even 4.x is already available and considered stable. -
When file managers copy and delete files within the same partition instead of moving or renaming them…
When Google's Storage Access Framework was introduced, it did not feature a move command, so file managers just resorted to copying and deleting files within the same storage. Not only does this cause needless wear and take much longer, but it also destroys the date/time attribute (it gets changed to the current time).
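The difference in a nutshell, as a small sketch (the function names and paths are mine, purely for illustration):

```python
# A move within one partition should be a rename: instant, no data rewritten,
# date/time attribute untouched. Copy-and-delete rewrites every byte instead.
import os
import shutil

def move_properly(src: str, dst: str) -> None:
    os.rename(src, dst)            # atomic on the same filesystem, keeps mtime

def move_like_those_file_managers(src: str, dst: str) -> None:
    shutil.copyfile(src, dst)      # full rewrite: slow, wears the flash
    os.remove(src)                 # and the new file gets a fresh timestamp
```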
When moving files through MTP (miserable transfer protocol, used for connecting smartphones to PC), they are also copy-deleted. This makes moving a 20-Gigabyte DCIM folder impractical. Also, if one cancels the operation, it might end up whoopsie-daisy deleting some files from the source before they have been transferred.
MTP is so bogus that it is incapable of a simple operation that would JustWork™ on mass storage devices. Not to mention, MTP lacks parallelism and its directory listing loading is S-L-O-W. Upwards of a minute for just 1000 files. Sometimes, it fails loading at all.
Also, trying to rename a file through MTP using the terminal through GVFS, even if just within the same folder, it copy-deletes it. If I want to rename a 1 GB 2160p 4K video in a highly populated DCIM folder, I can not do so through the terminal. At least, the 4K video has a time stamp in its internal metadata, but it still renames slowly and adds needless wear to the smartphone's flash memory. -
So to give you a feel for what evil, clusterfuck code it was in: this project's largest part was coded by a maniac, witty physicist confined in the factory for a month; intended as a 'provisional' solution, of course it ran for years. The style was like C with a bit of classes.. and a big chunk of shared memory as a global mud of storage, communication and catastrophe. Optimistic or no locking of the memory between process barriers, arrays with self implemented boundary checks that would give you the zeroth element on failure and write an error to the log, of which there were often dozens. But if that sounds terrifying already, it is only baseline uneasiness which was largely surpassed by the sheer mass of code, special units, undocumented madness. And I had like three months to write a simulator of the physical factory and sensors to feed that behemoth with the 'right' inputs. Still I don't know how I stood it through, but I resigned a short time afterwards.
Well, lastly to the bug: there was some central map in that shared memory that held a kind of view of the central customer data. And somehow - maybe not that surprisingly given the surrounding codebase - it sometimes got corrupted. Once a month, or two times a day. Tried to put in logging, more checks - but never really could pinpoint the problem... Till today I still get the haunting feeling of a lurking memory corruption beneath my feet, if I get closer to the metal core of pure C. -
I like rants that are thought provoking and push a message forward regardless of whether they may sting a little, so for my first post on here I'd like to hit at home with many of you.
Html5 "Native" Applications are not needed. Let's cover mobile first of all, the misconception that apps are written in either javascript or Native android/ Native ios environment. Or even some third party paid tools like xamarin is quite strange to me. OpenGL ES is on both IOS and Android there is no difference. It's quite easy to write once run everywhere but with native performance and not having to jump through js when it's not needed. Personally I never want to see html or css if I'm working on a mobile app or desktop. Which brings me to desktop, I can't begin to describe how unthought out an electron app is. Memory usage, storage space for embedding chromium, web views gained at the expense of literally everything else, cross platform desktop development has been around for decades, openGL is everywhere enough said. Finally what about targeting browser if your writing a native app for mobile and desktop let's say in c++ and it's not in javascript how can it turn back into javascript, well luckily c++ has emscripten which does that simply put, or you could be using a cross complier language like haxe which is what I use. It benefits with type safety, while exporting both c++ and javascript code. Conclusion in reality I see the appeal to the js ecosystem it's large filled with big companies trying to make js cross development stronger every day. However development in my mind should be a series of choices, choices that are invisible don't help anyone, regardless of the popularity of the choice, or the skill required.8 -
Can anyone help me with this theory about microprocessors, CPUs and computers in general?
(I used to love programming during school days, when it was just basic searching/sorting and OOP. Even in college, when it advanced to language details, compilers and data structures, I was fine. But subjects like COA and microprocessors, which explain the working of the hardware behind the brain that is a computer, are so difficult for me to understand 😭😭😭)
How does a computer work? All I knew was that when a bulb gets connected to a battery via wires, some metal inside it starts glowing and we see light. No magic involved so far.
Then came the von Neumann architecture, which says a computer consists of 4 things: I/O devices, a system bus, memory and a CPU. I/O and memory interact with the system bus, which is controlled by the CPU. Thus the CPU controls everything, and that's how a computer works.
Wait, what?
Let's take the easy example of a calculator. I pressed 1+2= on the keyboard, it showed me '1+2=' and then '3'. How the hell did that happen?
Then some video told me this: every key on your keyboard is connected to a multiplexer, which gives a special "code" to the processor for the key press.
The "control unit" of the CPU commands the RAM to store every character until '=' is pressed (which is a kind of interrupt telling the CPU to start processing). RAM is simply a bunch of storage circuits (which can store some 1s) along with another bunch of circuits which can retrieve that data.
Up till now, the control unit knows that memory has (for example):
Value 1 stored as 0001 at some address 34A
Value + stored as 11001101 at some address 34B
Value 2 stored as 0010 at some address 23B
On receiving the code for the '=' press, the "control unit" commands the ALU of the CPU to fetch data from memory, understand it and calculate the result (i.e. the "fetch, decode and execute" cycle).
The ALU fetches the "codes" from memory, which translate to ADD 34A,23B, i.e. add the data stored at addresses 34A and 23B. The ALU retrieves the values present at the given addresses, passes them through its adder circuit and puts the result at some new address, 21H.
The control unit then fetches this result from the new address and, via the system buses, sends the new value to the display's memory, mapped at some memory port 4044.
The display picks it up and instantly shows it.
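To make that walkthrough concrete, here is a toy sketch in Java (nothing like a real 8085; it's just the fetch-decode-execute idea, with one int array standing in for RAM and another for the program):

// Toy "CPU" sketch, illustrative only: a tiny fetch-decode-execute loop.
public class ToyCpu {
    public static void main(String[] args) {
        int[] ram = new int[16];
        ram[10] = 1;                            // value 1 stored at address 10
        ram[11] = 2;                            // value 2 stored at address 11
        // Program: opcode 1 = ADD srcA srcB dest, opcode 0 = HALT
        int[] program = {1, 10, 11, 12, 0};

        int pc = 0;                             // program counter register
        while (true) {
            int opcode = program[pc];           // fetch
            if (opcode == 0) {                  // decode: HALT
                break;
            }
            if (opcode == 1) {                  // decode: ADD
                int a = ram[program[pc + 1]];   // execute: read operands
                int b = ram[program[pc + 2]];
                ram[program[pc + 3]] = a + b;   // store the result
                pc += 4;                        // advance to the next instruction
            }
        }
        System.out.println("result at address 12 = " + ram[12]);   // prints 3
    }
}

A real CPU does the same loop in hardware: the program counter, instruction register, control unit and ALU are circuits instead of local variables, and the "decode" step is wiring rather than an if statement.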
My problems:
1. Is this all correct? Is this the only way it happens?
2. Please expand on this more.
How do the system bus, ALU and CPU work together?
What are registers, accumulators and flip-flops in memory?
What are machine cycles?
What are instruction cycles, opcodes and instruction codes?
Where does assembly language come in?
How does the CPU manipulate memory?
The data bus and control bus, what are they?
I have come across so many weird words I don't understand: DMA, interrupts, memory-mapped I/O devices, etc. Somebody please explain.
Ps: I am learning about the fucking 8085 microprocessor in class and I can't even relate it to basic computer architecture. I flunked the COA paper, and now I realise why, coz it's so confusing. :'''(14 -
So my aunt called because her phone had run out of storage, as she had "by mistake" disabled Play Store, WhatsApp, Browser, Chrome and every other fucking app, and she had to get WhatsApp back. After an hour of struggling to explain how to move her songs to the memory card, enable Chrome and Play Store, and install WhatsApp, I have started to lose faith in humanity.
To make things worse, every Android phone manufacturer feels obligated to change the Settings app as they wish, and I didn't have a clue where the setting to enable apps was on her phone.
And I had to do all this through a phone call
And I can't say "No"
There should be a button in Android: I'm too dumb for all this stuff4 -
I'm thinking about getting a Raspberry Pi 3 or an Odroid-C2.
(Specs at the end)
It's to host a simple PHP server and maybe a GitLab server, both for personal use.
Should I go with the better performance or the better community support?
Odroid specs
System-on-chip used : Amlogic S905
CPU: 1.5 GHz 64-bit quad-core ARM Cortex-A53
Memory: 2 GB LPDDR3 RAM at 912 MHz
Storage: MicroSDHC slot, eMMC module socket
Graphics: Mali-450 MP3
Connectivity: 4× USB 2.0, micro-USB OTG, HDMI 2.0, Gigabit Ethernet (8P8C), Infrared, 40× GPIO ports
Raspberry specs:
SoC: Broadcom BCM2837
CPU: 4× ARM Cortex-A53, 1.2GHz
GPU: Broadcom VideoCore IV
RAM: 1GB LPDDR2 (900 MHz)
Networking: 10/100 Ethernet, 2.4GHz 802.11n wireless
Bluetooth: Bluetooth 4.1 Classic, Bluetooth Low Energy
Storage: microSD
GPIO: 40-pin header, populated
Ports: HDMI, 3.5mm analogue audio-video jack, 4× USB 2.0, Ethernet, Camera Serial Interface (CSI), Display Serial Interface (DSI)8 -
WFH gives me a legit reason to splurge and upgrade my pc: "using a VPN connection to remote into my workstation is less than ideal for development, Dear; I should really upgrade my memory and storage if I'm going to be developing at home for while. I assure you this has nothing to do with gaming or dicking around with virtualization."3
-
A beginner learning Java here. I have been beating around the bush on the internet for the past decade, so this is my understanding up to now. Take a bottle of water: the bottle may be considered the CLASS and the water in it the objects (atoms); objects may be of the same kind, or may differ in some properties. Another way of understanding it: human being is the CLASS, and male and female are objects of the class HumanBeing. Here again the objects may differ in properties such as gender, age and body parts. A zoo might be a class, with animals, elephants, tigers and others as objects; the human properties above can be added here too, such as male, female, body parts, age, eating habits, crawlers, four-legged, two-legged, flying, water animals, mammals, herbivores, carnivores... whatever. That is my understanding so far; corrections are always welcome, and I'll be happy if my answer gets improved, so comment below.
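A minimal Java sketch of that analogy (the names and property values are made up; it just shows one class, many objects):

// HumanBeing is the class; the individual people are objects that
// differ only in their property values.
public class HumanBeing {
    String name;
    String gender;
    int age;

    HumanBeing(String name, String gender, int age) {
        this.name = name;
        this.gender = gender;
        this.age = age;
    }

    public static void main(String[] args) {
        HumanBeing first = new HumanBeing("Asha", "female", 30);
        HumanBeing second = new HumanBeing("Ravi", "male", 25);
        System.out.println(first.name + " and " + second.name + ": one class, two objects.");
    }
}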
And at a basic level:
Start from input and output devices.
Then, memory-wise: cache (quick access), RAM (temporary runtime memory) and hard disk (permanent memory), all sitting in the machine around the CPU. To make the distinction concrete, as far as I know: I'm writing this answer on my phone with mobile data on. If I suddenly switch the phone off and on again during this time, the cache serves instant access for navigation, network status and so on; RAM is temporary, so the answer I was typing will be lost because it was only stored in RAM before the switch-off; but my Quora app, my gallery and the others sit in permanent internal storage (generally hard disks on a PC) and won't be affected. This all happens around the CPU, right? Okay, now one question: who manages all these commands, inputs and outputs? That's software: Windows or macOS, or Android for mobiles. These are the managers of the computer's components, the different OSs.
Java is a high-level language, whereas computers understand only binary, low-level code: 0s and 1s. A computer understands only patterns like 00101, 1110000101, 0010, 1100 (say these stand for A, B, C, D in binary). Numbers are coded in 0s and 1s, lowercase letters are in 0s and 1s, and so are the other symbols. The Java program we write gets converted into bytecode by the Java compiler, and the JVM (Java Virtual Machine) then takes that bytecode and executes it on the actual machine. That's not how it works in C.
Let us C…
Do comment. Thank you6 -
So I have been thinking...
SQL is a language that runs against specific software on the server and helps create data stores (databases and tables) that can be queried and manipulated.
Is there a way to run SQL-like queries on the client side, with no interaction with the backend at all?
Say I have 5 interrelated data models. In a backend world, they would form nice little tables of a DB with all their joins and composite keys. From the server, I would query them like "SELECT name from x where y=z AND ..."
But what if I could store them like tables in browser memory and run the same query filters via a query language... is this possible?
I know this poses a certain security risk, but we already use cookies, local storage and a lot of JSON-based shitty client-side storages. Surely it should be possible to have less-optimised SQL tables on the frontend with extremely good querying capabilities?
Or am I talking about something far-fetched here?8 -
1) Reader Rabbit on an Apple IIGS in the late 80s. I might've been in Kindergarten. Found the boxes stored in an unreachable storage area at my dad's house recently. Knowing how he took care of things, it probably still works. He won't let me touch it.
2) Fast forward to early middle school: Ultima VII on an NEC desktop, 90s. That game was great but also a pain in the ass. Had to make a startup floppy disk to help with memory allocation and something else. Learned DOS things. For some reason the disk wouldn't work from one day to the next, so I would have to reconfigure it frequently. Also learned the hard way not to fork too much with autoexec.bat during this period. -
So I am looking at top on our embedded system. I notice the memory amounts are in KiB. Then I thought to myself: "We are not far from this being in MiB units."
I am excited for Terahertz optical cpus and Petabyte storage drives.5 -
I'm facing a strange problem: I have a 400 GB microSD, formatted as exFAT.
I tried formatting it again to either NTFS or ext4, on both Linux and macOS, but every tool says the format completed, and then when it scans the card again it still shows the files that were on it, plus that it's still exFAT.
I tried GParted, Disk Utility (macOS), Disks (Ubuntu) and mkfs; all show the same result: the card is reported as successfully formatted, but after a refresh it still shows the old filesystem and the contents that were already there; no file was removed.
Can anyone help?21 -
!rant. I have a couple of Sky HD+ boxes knocking around; we've cancelled our Sky subscription as we're big on Netflix and Amazon Prime instead, and Sky don't want the boxes back. I was going to take the hard drives out (1 TB each) as I could use some more external storage for my work and pics, but before I do, is there anything I can do with the boxes instead?
After all, they have a circuit board, memory and a chipset; can I replace the Sky OS with something else?2 -
Tell me guys, which would you prefer:
function a(){
..
b(..)
..
b(..)
..
}
function b(p1,p2,p3,p4,p5,p6){
...
}
or
function a(){
..
b(..)
..
b(..)
..
}
function b(
p1,
p2,
p3,
p4,
p5,
p6
){
...
}
If you read this rant before expanding it, you got the complete context: what function a is, that it calls b two times, and how function b looks.
If, instead of the first option, I had used the second block, you wouldn't even know the second param of function b without expanding this rant.
My point?
I prefer keeping this kind of info on one line, and a lot of linters disagree by splitting up the code. And most importantly, my arrogant TL disagrees, saying he prefers the split code "for readability" and because "he likes code this way, old-eng1 likes this and old-eng2 likes this".
Why tf does an IDE have a horizontal scrolling option available when you are too stupid to use it?
OK, I know some smartass is going to point out that I too can use vertical scrolling, but hear me out: I am optimising this!
Case 1: a function with 7 params is NOT split into 7 lines. Let's calculate the effort to remember it.
- Since all params have similar characteristics (they will be of some type, might have defaults, might be a suspendable/async function, etc.), each param takes a similar amount of memory-effort points, say 5sp each.
- Total memory effort = 5sp * 7 = 35sp.
- Say a human has 100sp of fast memory storage; they can use the remaining 65sp for loading, say, 5 small lines above or below.
- But since the 5 lines above are already read and still visible on screen, they don't need to be loaded again and again, and we can just check the lines below.
- Thus we are able to store 65+35+65 = 165sp, or about 11 lines of code, in our fast memory with just 100sp of brain storage.
Case 2: a function with 7 params IS split into 7 lines.
- In this case all lines are somewhat similar: 5sp for the param lines as they are still similar, which implies the same 35sp for storing the current function and its params.
- The remaining 65sp can only be used to store the next 5 lines of 13sp each, as the previous code is no longer visible.
- Plus, if you wanna refresh the code above, you gotta scroll, which removes the bottom code from the screen, and now your 65sp of bottom code is overwritten by 65sp of top code.
- Thus at any time you are storing only about 6 lines' worth of code info. This makes you slow.
This is some imaginary math, but I believe it works.10
You can make your software as good as you want; if its core functionality has one major flaw that cripples its usefulness, users will switch to an alternative.
For example, an imaginary file manager that is otherwise the best in the world becomes far less useful if it imposes an arbitrary fifty-character limit for naming files and folders.
If you developed a file manager better than ES File Explorer was in the golden age of smartphones (before Google exercised their so-called "iron grip" on Android OS by crippling storage access, presumably for some unknown economic incentive such as selling cloud storage, and before ES File Explorer became adware), and if your file manager had all the useful functionality like range selection, tabbed browsing and navigation history, but it limited file names to 50 characters even though the file system supports far longer names, the user would have to rely on a different application for the sole purpose of giving files longer names, since renaming, as a file action, is one of the few core features of file management software.
Why do I mention a 50-character limit? The pre-installed "My Files" app by Samsung actually did once have a fifty-character limit for renaming files and folders. When entering a longer name, it would show the message "up to 50 characters available". My thought: "Yeah, thank you for being so damn useful (sarcasm). I already use you reluctantly because Google locked out superior third-party file managers likely for some stupid economic incentives, and now you make managing files even more of a headache than it already is, by imposing this pointless limitation on file names' length."
Some one at Samsung's developer department had a brain fart some day that it would be a smart idea to impose an arbitrary limit on file name lengths. It isn't.
The user needs to move files to a directory accessible to a superior third-party file manager just to give it a name longer than fifty characters. Even file management on desktop computers two decades ago was better than this crap!
All of this because Google apparently wants us to pay them instead of SanDisk or some other memory card vendor. This again shows that one only truly owns a device if one has root access. Then these crippling restrictions that were made "for security reasons" (which, in case it isn't clear, is an obvious pretext) can be defeated for selected apps.2 -
Holy crap, Meta Developer Connect keynote. Amazing innovation. This is what Apple **used to be**
Granted, the hardware is not as elegant as Apple but the cost is 1/10 and the capabilities are close, same, or exceed (Llama) what Apple is offering.
Now here is the gut punch, they figured out that the mobile app build system needs to build AR/VR/MR apps. That was Apple's edge.
As a developer, I am not enamored with Swift, and it is pretty clear that if I have to change and use a niche language like Swift, or change and do dev on Android, to target new Meta hardware and AI... well... let's just say I think Swift is crap from a language standpoint, and I suspect it is the reason Apple's hardware uses so much more memory, battery and storage than it should. At the same time, Meta's Orion runs on a goddamn battery in an early pair of glasses. My AVPs have a huge brick.
#define kApple kGigaBloat
If I were Apple I would be shitting my pants watching this Meta presentation.6 -
What is wrong with the Android storage access hierarchy? All I want to do is make a file explorer app which shows the user a list of all the files on their device and memory card (if available), but it's been days and I cannot find a proper way to do that.
I checked all the Environment class methods and context.getFilesDir() and the other methods on ContextCompat, but they point either to emulated storage or to the app's own folder, not the SD card. I have scratched my head and pulled my hair out researching deep into this area, but found nothing. The only thing that sometimes works is hardcoded paths (e.g. new File("/sdcard")), but that looks like a terrible hack and I know it's not good.
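For reference, this is roughly what those calls give you (a hedged sketch; getExternalFilesDirs() and getExternalStorageDirectory() are the real APIs, and walking up from the returned app-specific dir to the card root is exactly the kind of hack I mean):

// Lists the primary external storage root and the app-specific dir on every volume.
import android.content.Context;
import android.os.Environment;
import java.io.File;

public class StorageRoots {
    static void listRoots(Context context) {
        // Usually /storage/emulated/0 (emulated storage, not the SD card);
        // deprecated on newer Android versions.
        File primary = Environment.getExternalStorageDirectory();
        System.out.println("primary: " + primary.getAbsolutePath());

        // One entry per volume (internal + SD card), but only app-specific paths like
        // /storage/XXXX-XXXX/Android/data/<package>/files, not the card's root.
        for (File appDir : context.getExternalFilesDirs(null)) {
            if (appDir != null) {
                System.out.println("volume app dir: " + appDir.getAbsolutePath());
            }
        }
    }
}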
I have also read briefly about the Storage Access Framework, but I don't think that's what I want. From what I know, SAF works in the following manner: the user opens my app >> clicks a button >> my app fires an intent to SAF >> SAF opens its own UI >> the user selects one or more files >> and my app receives those file URIs. THAT'S A FILE PICKER, AND I DON'T WANT THAT.
I want the user to see a list of their files inside my app itself. Because if not, then what's the point of my app having the title "File explorer"?7 -
PhoenixOS (Android) in Windows
--booting from usb
1. Success
Boots well, with secure boot off, and legacy boot on
http://metroize.com/usb-boot-linux-...
2. Crash
Google Play Store and other Google services keep crashing, but other apps don't
When ignoring the error popups, the apps don't actually crash
3. Storage
The storage is allocated only to the system, which means no user file storage
Have to find a way to fix that3
Some mobile file managers kick me back to the beginning after selecting items for copying or moving.
When tapping on "copy" or "move" after selecting files/folders, some file managers like ES File Explorer (back when it was popular) conveniently remain in the current directory, whereas the stock Android file manager and many vendors' pre-installed file managers like that of Samsung kick me back to the initial directory. On phones with MicroSD, that's the storage selector, and on phones without, that's /storage/emulated/0/.
If I wanted to move files into a sub folder of the currently viewed directory, I have to navigate all the way back to that current directory, which is, needless to say, annoying.
Who thought it was a smart idea to kick the user back to the initial directory? But vendors' pre-installed file managers tend to be garbage anyway. Samsung's "My Files" file manager does not let me enter file names longer than 50 characters, does not let me change the extensions of files, does not support selecting files from search results or jumping to their parent directory, does obviously lack range selection, hides the status bar while opened (what's the point of that?!), its search feature is slow and sometimes crashes, and it can only search the entire device storage or memory card and not individual directories.
It's almost like Samsung deliberately tried to design a file manager as terrible as they possibly could.5 -
This question might make you lose a brain cell because of the stupidity in it. Read with caution.
Is there a way to compile a game for Windows from Linux in Unreal Engine? I did google some posts, but the answer was either to use a virtual machine (which will not be done), use the theoretical method of building with MinGW (though the forum posts state that it will be tricky business), or use a Windows machine. I have dual-booted Windows with Linux on my machine.
However, since the machine has a 512 GB SSD, most of the storage space is devoted to Unreal Engine, which takes 47 gigs by itself, and with a lot of programs installed I have a usable 20 gigs left out of a 145 gig partition. Windows has around 318 gigs of storage, but I have 100 gigs free at most. So after installing the Windows SDK, Visual Studio with extensions, Unreal Engine and some other stuff, I don't have much space left for myself. I need that much space since I install a lot of games to my SSD. So now I can't load my bigger projects for playing on Windows. I could use my HDD, which is mostly used for backups and 100+ gig stuff. HDDs are of course far slower than SSDs, which shouldn't be a problem; however, the last time I used Visual Studio it ate more than 2 gigs of RAM for a solution, meaning the compiler has very little memory left to actually compile, so for any large files the HDD becomes more of a bottleneck.
Oh, and I can't upgrade my SSDs or RAM because I don't have enough money.
Thanks for the answers in advance4 -
ENOSPC = random things go wrong.
There are many synonyms for ENOSPC, like "disk full", "storage space full", "storage space exhausted", "no space left on device", and those other repulsive errors. For the sake of simplicity, I am going to refer to it as ENOSPC.
If you are in this condition on the operating system partition, get out of it quickly or random things will go wrong. Text editors which write directly to a text file, rather than creating a temporary file and then replacing the original, could end up blanking the text file; software configuration files might fail to save, which causes a reset; and web browsers might spontaneously reset cookies and lose history.
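For comparison, here is a minimal sketch (plain Java, purely illustrative, not any particular editor's code) of the temp-file-then-rename save pattern: if the write fails because the disk is full, the original file is left untouched instead of being blanked.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class SafeSave {
    static void save(Path target, String contents) throws IOException {
        // Create the temporary file next to the target so the final rename
        // stays on the same filesystem.
        Path tmp = Files.createTempFile(target.getParent(), target.getFileName().toString(), ".tmp");
        try {
            Files.writeString(tmp, contents);          // Java 11+; this is the step that fails on ENOSPC
            Files.move(tmp, target,
                    StandardCopyOption.REPLACE_EXISTING,
                    StandardCopyOption.ATOMIC_MOVE);   // atomic rename on typical POSIX filesystems
        } catch (IOException e) {
            Files.deleteIfExists(tmp);                 // clean up; the original is still intact
            throw e;
        }
    }
}

Either the rename happens and you get the complete new contents, or it doesn't and you keep the complete old contents; there is no in-between state where the file ends up empty.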
For example, Firefox has created a gap in the web browsing history, as shown here. The history that is now memory-holed initially appeared to have been recorded successfully. Apparently, a failed write to the places.sqlite database when closing the browser created this gap.4 -
You know my earliest design relating to ML was something intended to mimic human evolution by creating large trees of ideas and rules regarding emotions and how they regulated decisions and priorities.
I somehow think that was a better approach. It was more complex but it was better.
And I could reproduce the stolen diagram from memory as well.
Hey, is it illegal for someone to sell the contents of a storage locker with your birth certificate in it?2