Search - "mutable"
-
As a Java developer, reasons to kill other programmers:
- static mutable variables
- WRITING to static mutable variables
- API call with Framework X didn't work? Add Framework Y alongside X and try that: wrap X in a try/catch statement whose catch block fires Framework Y.
- six, seven, ten levels of nested code. Zero thought put into organization
- 6K LOC Java files
- a Spring (singleton? maybe) object assigning values to static mutable variables (see pt. 1)
- a couple of unit tests in the code base that no longer work. Zero unit tests in new code
- unit testing disabled in CI pipeline
- empty catch blocks
- passing mutable data between threads and modifying it in various places concurrently (sketch of how that ends below)
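(A minimal sketch of how the first sin and the last one combine, in case anyone has been spared the sight; the class is made up:)
```java
import java.util.ArrayList;
import java.util.List;

public class SharedStateDemo {
    // Sin #1: static mutable state, reachable from everywhere.
    static List<Integer> results = new ArrayList<>();

    public static void main(String[] args) throws InterruptedException {
        Runnable worker = () -> {
            for (int i = 0; i < 10_000; i++) {
                results.add(i); // sin #2: unsynchronized writes from two threads
            }
        };
        Thread a = new Thread(worker), b = new Thread(worker);
        a.start(); b.start();
        a.join(); b.join();
        // Expected 20000; prints less, or dies with an
        // ArrayIndexOutOfBoundsException, depending on how the
        // unsynchronized ArrayList internals happen to interleave.
        System.out.println(results.size());
    }
}
```
-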
Functional Programming. Because Moore's Law has moved from making processors faster to multiplying cores, and we may eventually have to code on machines that have 1024 cores or more. Mutable state will cause all kinds of hell in those scenarios. We already have problems with it when we have like 2-3 different threads.
-
Okay, story time.
Back during 2016, I decided to do a little experiment to test the viability of multithreading in a JavaScript server stack, and I'm not talking about the Node.js way of queuing I/O on background threads, or about WebWorkers that box and convert your arguments to JSON and back during a simple call across two JS contexts.
I'm talking about JavaScript code running concurrently on all cores. I'm talking about replacing the god-awful single-threaded event loop of ECMAScript – the biggest bottleneck in software history – with an honest-to-god, lock-free thread-pool scheduler that executes JS code in parallel, on all cores.
I'm talking about concurrent access to shared mutable state – a big, rightfully-hated mess when done badly – in JavaScript.
This rant is about the many mistakes I made at the time, specifically the biggest – but not the first – of which: publishing some preliminary results very early on.
Every time I showed my work to a JavaScript developer, I'd get negative feedback. Like, unjustified hatred and immediate denial, or outright rejection of the entire concept. Some were even adamantly trying to discourage me from this project.
So I posted a sarcastic question to the Software Engineering Stack Exchange, which was originally worded differently to reflect my frustration, but was later edited by mods to be more serious.
You can see the responses for yourself here: https://goo.gl/poHKpK
Most of the serious answers were along the lines of "multithreading is hard". The top voted response started with this statement: "1) Multithreading is extremely hard, and unfortunately the way you've presented this idea so far implies you're severely underestimating how hard it is."
While I'll admit that my presentation was initially lacking, I later made an entire page to explain the synchronisation mechanism in place, and you can read more about it here, if you're interested:
http://nexusjs.com/architecture/
But what really shocked me was that I had never understood the mindset that all the naysayers adopted until I read that response.
Because the bottom line of that entire response is an argument: an argument against change.
The average JavaScript developer doesn't want a multithreaded server platform for JavaScript because it means a change of the status quo.
And this is exactly why I started this project. I wanted a highly performant JavaScript platform for servers that's more suitable for real-time applications like transcoding, video streaming, and machine learning.
Nexus does not and will not hold your hand. It will not repeat Node's mistakes and give you nice ways to shoot yourself in the foot later, like `process.on('uncaughtException', ...)` for a catch-all global error handling solution.
No, an uncaught exception will be dealt with as in any other self-respecting language: by not ignoring the problem and pretending it doesn't exist. If you write bad code, your program will crash, and you can't rectify a bug in your code by ignoring its presence entirely and using duct tape to hold something together.
Back on the topic of multithreading, though. Multithreading is known to be hard, that's true. But how do you deal with a hard problem? You simplify it and break it down; you don't just disregard it completely, because multithreading has great advantages, too.
Like, how about we talk performance?
How about distributed algorithms that don't waste 40% of their computing power on agent communication and pointless overhead (like the serialisation/deserialisation of messages across the execution boundary for every single call)?
How about vertical scaling without forking the entire address space (and thus multiplying your application's memory consumption by the number of cores you wish to use)?
How about utilising logical CPUs to the fullest extent, and allowing them to execute JavaScript? Something that isn't even possible with the current model implemented by Node?
Some will say that the performance gains aren't worth the risk. That the possibility of race conditions and deadlocks isn't worth it.
That's where cooperative multithreading comes in: it's a way to work around these issues smartly.
If you use promises, they will execute in parallel to the best of the scheduler's abilities, and if you chain them, they will run consecutively, as planned, according to their dependency graph.
If your code doesn't access global variables or shared closure variables, or your promises only deal with their provided inputs without side-effects, then no contention will *ever* occur.
If you only read and never modify globals, no contention will ever occur.
Are you seeing the same trend I'm seeing?
Good JavaScript programming practices miraculously coincide with the best practices of thread-safety.
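To make that concrete, a tiny sketch in plain Node-flavoured JS. Under stock Node these tasks are merely interleaved on one thread; under a scheduler like the one described, the Promise.all batch is exactly what could fan out across cores, because each task touches only its own input:
```javascript
const crypto = require('crypto');

// A pure task: it reads only its own argument and shares nothing,
// so no contention is possible no matter how many cores run these.
const checksum = (buf) =>
  Promise.resolve(crypto.createHash('sha256').update(buf).digest('hex'));

async function main() {
  const chunks = [Buffer.from('a'), Buffer.from('b'), Buffer.from('c')];

  // Independent promises: free to execute in parallel.
  const parts = await Promise.all(chunks.map(checksum));

  // Chained work: ordered by its dependency on `parts`, no locks needed.
  console.log(await checksum(Buffer.from(parts.join(''))));
}

main();
```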
When someone says we shouldn't use multithreading because it's hard, do you know what I like to say to that?
"To multithread, you need a pair."18 -
I've told the same story multiple times but the subject of "painfully incompetent co-worker" just comes up so often.
I have one coworker who never really grew out of the mindset of a college student who just took "Intro to Programming". If a problem couldn't be solved with a textbook solution, then he would waste several weeks struggling with it until eventually someone else would pick up the ticket and finish it in a couple days. And if he found a janky workaround for a problem, he'd consider that problem "solved" and never think about it again.
He lasted less than a year before he quit and went off to get a job somewhere else, leaving the rest of our team to comb through his messy code and fix it. Unfortunately, our team is mostly split across multiple projects and our processes were kind of a mess until recently, so his work was a black box of code that had never been reviewed.
I opened the box and found only despair and regret. He was using deprecated features from older versions of the language to work around language bugs that no longer existed. He overused constants to a ridiculous degree (hundreds of constants, all of which are used exactly once in the entire codebase, stored in a single mutable map variable named "values" because why not). He didn't really seem to understand DRY at all. His code threw warnings in the IDE and had weird errors that were difficult to reproduce because there was just a whole pile of race conditions.
I ended up having to take a figurative hacksaw to it, ripping out huge sections of unnecessary crap and modernizing it to use recent language features to get rid of the deprecation warnings and intermittent errors. And then I went through the same process again for every other project he'd touched.
Good riddance. -
"In programming mutable state is evil. In future programming we need to avoid mutable state because we will throw cores at our code which will work on it in parallel."
debate.init(); -
One of the most infuriating ideas in software development culture is that you can build maintainable applications without a strictly enforced type system and structured data.
Sure, it's more fun to whack around a dynamically typed system until it works, or to write a major application with mutable data structures... At least it's fun until a few years in, when you have to debug an unexpected overwrite, or an inconsistent use of an object property, or whatever.
Anyone who writes maintainable code eventually figures out that you need rules and procedures; the issue with JavaScript, Python, Ruby, Lisp, etc. developers is that they think it's us developers who need to enforce these rules instead of the compiler (which is infinitely better at it).
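A minimal TypeScript sketch of the point; the Invoice shape is invented, but the mechanics are the language's own: the compiler enforces the rules so no human has to.
```typescript
interface Invoice {
  readonly id: string;   // identity can never be overwritten
  amountCents: number;   // integer cents, structurally enforced
}

function applyDiscount(inv: Invoice, pct: number): Invoice {
  // Return a fresh object instead of mutating the argument.
  return { ...inv, amountCents: Math.round(inv.amountCents * (1 - pct)) };
}

const inv: Invoice = { id: "A-1", amountCents: 1000 };
// inv.id = "A-2";          // compile error: 'id' is read-only
// applyDiscount(inv, "5"); // compile error: a string is not a number
console.log(applyDiscount(inv, 0.05).amountCents); // 950
```
-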
What kind of seizure of insanity led the devs of Python to believe that static, mutable default parameters could be a good idea? You can literally share whole arrays between multiple calls of the same function with the same input, and the number of cases where a sane person would intentionally want to do that is FUCKING ZERO.
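For the uninitiated, the footgun in question, plus the sentinel idiom everyone eventually adopts:
```python
def append_to(value, bucket=[]):   # the default list is built ONCE, at def time
    bucket.append(value)
    return bucket

print(append_to(1))  # [1]
print(append_to(2))  # [1, 2]  <- same list, silently shared across calls

def append_to_fixed(value, bucket=None):  # sentinel default
    if bucket is None:
        bucket = []                       # fresh list per call
    bucket.append(value)
    return bucket

print(append_to_fixed(1))  # [1]
print(append_to_fixed(2))  # [2]
```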
-
Reading JS written by “creative” types:
```javascript
var myArray = []; // un-sorted values
var SortedArray = myArray.sort();
OrderedHtmlElement.innerText = myArray[0];
```
Look here, if you are going to do it wrong, at least commit to the wrongdoing!!!
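For the record, the non-wrong version: sort() mutates in place and returns the receiver, so copy first if you want to keep both orders.
```javascript
var myArray = ['c', 'a', 'b']; // genuinely un-sorted values
var sortedArray = myArray.slice().sort(); // copy, THEN sort the copy
// (newer engines: myArray.toSorted() does the copy for you)

console.log(myArray);     // ['c', 'a', 'b'] -- original order intact
console.log(sortedArray); // ['a', 'b', 'c']
```
-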
If you wanna think that I'm a bad programmer, that's ok, but I can't put up with Xcode anymore.
Jesus Christ. An entire afternoon spent trying to make an array with two dimensions. I tried every fucking way I found on SO, on the Apple site and on every other site that crossed my path.
First: For every example for Swift 3 there are another ten for Swift <3.
Second: Mutable arrays, as I'm noticing, aren't a thing anymore, so on to declaring array sizes we go! Except it's impossible to. Tried three different ways. Not a single one worked.
Third: Actually, one of the three tries worked, for int arrays, and for some obscure reason it won't work for strings, as declaring the array as [String] is too general for Swift. I mean, I completely agree with it, a [String] array could contain anything, right???? FUCK NO. IT CONTAINS STRINGS, YOU FUCKER!!!!
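(For posterity, one incantation that does work in Swift 3 for a fixed-size two-dimensional String array, element type pinned so the compiler can't call it "too general":)
```swift
let rows = 3, cols = 4
// A rows × cols grid pre-filled with empty strings.
var grid = [[String]](repeating: [String](repeating: "", count: cols),
                      count: rows)
grid[1][2] = "hello"
print(grid[1][2]) // hello
```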
I swear, if the equipment was mine and not the office's, I would have already thrown that piece of shit Apple calls a keyboard (the one that disconnects from the fucking computer every 30 seconds) out of the window.
Why the fuck do I need to develop for iOS in Swift/Xcode?? There are so many cross-platform alternatives out there, good ones in fact, but no, we must build the applications natively or else the phone will catch fire, according to my boss.
I kinda liked Apple until now.
From now on? Fuck Apple. -
Scala's default Seq is MUTABLE. Why the fuck do I want mutability in my functional scala code!?
Now I have to riddle my code with imports of scala.collection.immutable.Seq, which just looks ugly.
Gosh dangit.
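One way to avoid the import spam, for what it's worth: shadow the default once in a package object, and every file in the package sees the immutable Seq (the package name here is made up). Scala 2.13 eventually made scala.Seq immutable by default anyway.
```scala
// package.scala, at the root of the codebase
package object myproject {
  // Names defined here out-rank the scala._ wildcard import,
  // so `Seq` now means the immutable one everywhere in myproject.
  type Seq[+A] = scala.collection.immutable.Seq[A]
  val Seq = scala.collection.immutable.Seq
}
```
-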
This way of converting "string" to "s" (for example)
0) program reads the whole buffer, stores it as an array of instructions
1) program reverses the order of the instructions
2) parser makes standard token from an instruction ("asd" -> ASD)
3) parser2 assigns operands to instruction
4) parser 3 makes string from instruction token ?????
5) parser fucking 4 makeS A MUTABLE STRING INSTRUCTION
6) PARSER FIVE SUBSTITUTES OPERANDS
7) AND THEN CUTS IT TO A REVERSED ARRAY OF COMMANDS
8) AND PUSHES IT ONTO THE STACK
WHAT -
[Rust]
I have a bunch of computational steps in a Rust program, all very expensive. They all depend on each other, forming a cycle-free and rather small graph of dependencies which is not a tree. The results of each of them for a given input are likely used tens of times by the others, so I would like to cache the subresults dynamically.
How would I go about doing this, considering that caching (rightfully) requires mutable access to the cache and multiple operations often refer to the same subresult?
I can't ask SO because they'd just tell me to use another language or recalculate everything every time, fully convinced that difficult questions can only emerge from design mistakes.
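For the single-threaded case, one minimal sketch (all names invented): thread a single &mut cache through the recursion, which keeps the borrow checker happy because only one mutable borrow ever exists. If the call structure can't thread it through, RefCell<HashMap> buys the same thing with a runtime check, and Mutex or RwLock once threads enter the picture.
```rust
use std::collections::HashMap;

type NodeId = usize;
type Value = f64;

struct Graph {
    deps: HashMap<NodeId, Vec<NodeId>>, // dependency DAG: node -> inputs
}

impl Graph {
    // Stand-in for the real expensive computation.
    fn compute(&self, node: NodeId, inputs: &[Value]) -> Value {
        inputs.iter().sum::<Value>() + node as Value
    }

    // Memoized evaluation: the single &mut cache is threaded through
    // the recursion, so exactly one mutable borrow exists at any moment.
    fn eval(&self, node: NodeId, cache: &mut HashMap<NodeId, Value>) -> Value {
        if let Some(&v) = cache.get(&node) {
            return v; // subresult already computed
        }
        let mut inputs = Vec::new();
        if let Some(deps) = self.deps.get(&node) {
            for &d in deps {
                inputs.push(self.eval(d, cache));
            }
        }
        let v = self.compute(node, &inputs);
        cache.insert(node, v);
        v
    }
}

fn main() {
    // Diamond-shaped DAG: 3 depends on 1 and 2, which both depend on 0.
    let g = Graph {
        deps: HashMap::from([(1, vec![0]), (2, vec![0]), (3, vec![1, 2])]),
    };
    let mut cache = HashMap::new();
    println!("{}", g.eval(3, &mut cache)); // node 0 is computed only once
}
```
-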
Django devs, prepare your eye bleach:
(I love that this is possible, but I really don't trust myself to wield the power of making things mutable at will...) -
One of my supervisors once said: "A computer without mutable state is just a glorified electrical heater."
Meaning that at some level you'll need some mutability.
A processor/memory unit without mutability is not worth very much, except if you want to build a new one for every clock tick... -
And here I am, staring at a piece of code shared by a friend. I've never seen mutable state being pushed so far, nor a darker night than the one approaching the poor soul that will have to deal with it.
-
It's a shame that people don't want to use F# but praise C# for how cool it has become and continues to become. At the same time, little do they know that many of the features were simply drawn from F#.
It's just ridiculous how far this OO and C-style syntax crap has progressed. They keep copying things from functional languages, turning the initial language into a monstrosity like C++ is now, instead of just using languages like F#. I mean, it was all right there before C#: async/task, immutability, records, indexes, lambdas, non-null by default, and who the hell knows what else.
Besides, many people (in my company at least) are just blindly overengineering with patterns and shit where a simple function would be enough.
Watch some NDC talks about F#, in particular those of Scott Wlaschin. It's just better in so many ways: less noise (I'm looking at you, brackets, commas and semicolons), a whole LOT more type inference and less duplication (just look at the C# signatures of LINQ methods; they're difficult to read), immutability by default, non-nullable by default, ADTs and pattern matching, and some neat features like type providers (how many times have you used "Paste Special" or an online tool to create C# classes from a JSON/XML file, and how many times have you regenerated them because of schema changes?) and units of measure.
Of course, in some cases it's not optimal; in some cases the mutable data structures of C# are better for performance. But dude, how many performance-critical systems have you written in C#? I mean, if it comes to performance, you should use Rust or C++ or C after all.
*sighs* -
"Rust, the language that makes you feel like a memory astronaut navigating through a borrow-checker asteroid field. Lifetimes? It's more like love letters to the compiler. Safety first, even if it means writing a Ph.D. thesis to move a mutable reference around."2
-
Can someone please explain why LISP and LISP inspired langs breed the most insufferable twats?
I mean, just look at this. I'm trying to learn Clojure and happened across this site-slash-book: braveclojure.com
Some highlights:
>Chapter 7 - Clojure Alchemy: Reading, Evaluation, and Macros:
>The philosopher’s stone, along with the elixir of life and Viagra, is one of the most well-known specimens of alchemical lore, pursued for its ability to transmute lead into gold. Clojure, however, offers a tool that makes the philosopher’s stone look like a mere trinket: the macro.
> The -> also lets us omit parentheses, which means there’s less visual noise to contend with. This is a syntactic abstraction because it lets you write code in a syntax that’s different from Clojure’s built-in syntax but is preferable for human consumption. Better than lead into gold!!!
>Chapter 10 - Clojure Metaphysics: Atoms, Refs, Vars, and Cuddle Zombies:
>The Three Concurrency Goblins are all spawned from the same pit of evil: shared access to mutable state.
>In fact, Clojure embodies a very clear conception of state that makes it inherently safer for concurrency than most popular programming languages. It’s safe all the way down to its meta-freakin-physics.
And look at this: https://quora.com/Why-are-Lisp-prog...
It reminds me of Python before the data-science craze, when its adherents thought IT was God's programming language.
Is there anything like React Context or Unix envvars in any functional language?
Not global mutable state, but variables with a global identity that I can set to a value for the duration of a function call to influence the behavior of all deeply nested functions that reference the same variable without having to acknowledge them.
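For what it's worth, Clojure's dynamic vars are pretty much exactly this: binding rebinds the var for the dynamic extent of one call tree, and everything nested sees the new value without being told. A minimal sketch:
```clojure
(def ^:dynamic *log-level* :info) ; root value, like a default envvar

(defn deep-helper []
  ;; references the var directly; takes no extra argument
  (when (= *log-level* :debug)
    (println "debug details")))

(defn entry-point []
  ;; rebinds *log-level* for this call tree only
  (binding [*log-level* :debug]
    (deep-helper)))
```
Haskell's Reader monad covers similar ground without the mutation, at the cost of threading the environment through the types.
-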
Imagine building an app using JavaScript (React Native) that gets a list of skills to display to the user for selection, and the same list displayed as interests for the user to pick from too. If you try to treat them separately, but they come from the same source, you get a shit result, e.g.:
```javascript
const list = responseFromApi; // array of objects
let skills = list;
let interest = list;
skills.checked = true;
```
interest is changed, and the list constant is changed too; the values change wherever list is assigned to a variable and that variable is tampered with.
The above code was the frustration I got with JavaScript when I didn't know about this mutation guy.
Mutation is a nightmare. I never knew JavaScript objects are passed by reference, which makes them mutable. This sucks...
Another crazy behavior below, though it's the same thing:
```javascript
const person = { name: 'Dan', age: 43 }; // object
let person0 = person;
let person1 = person;
person1.name = 'David';
```
The value of both person0 and person gets changed.
This is madness.
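The way out, for anyone landing here with the same bug: copy the object instead of aliasing it. A minimal sketch:
```javascript
const person = { name: 'Dan', age: 43 };
const alias = person;        // copies the REFERENCE, not the object
alias.name = 'David';
console.log(person.name);    // 'David' -- one object, two names

const shallow = { ...person };        // new object, one level deep
const deep = structuredClone(person); // full copy (modern Node/browsers)
deep.name = 'Dan';
console.log(person.name, deep.name);  // 'David' 'Dan' -- independent now
```
-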
Finally I finished the exams; now I have to write my thesis. An agency that wants to remain anonymous for the moment told my supervisor to choose a student to work on the ransomware topic. My advisor was a little bit scared about the consequences, but I'm pressing to write a controlled ransomware in a closed network between virtual machines. What qualities should a good ransomware have?
A mutable structure to avoid antivirus detection? Good exploits and vulnerability scanners to make itself viral? Should the payload stay in the code or be downloaded from a server? I need some references on the analysis of vx code, any help? -
Use a mutable object as a HashMap key and the hash code changes when the object gets updated. Every textbook told you this is a stupid bug and impossible to debug, and I spent a week to prove you can debug it out in a single-threaded app.
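The bug in miniature, for anyone who wants to reproduce it in thirty seconds (the key class is made up): mutate the key after insertion and the entry is stranded in the wrong bucket.
```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

class Key {
    String name;
    Key(String name) { this.name = name; }
    @Override public boolean equals(Object o) {
        return o instanceof Key && Objects.equals(((Key) o).name, name);
    }
    @Override public int hashCode() { return Objects.hash(name); }
}

public class MutableKeyDemo {
    public static void main(String[] args) {
        Map<Key, String> map = new HashMap<>();
        Key k = new Key("a");
        map.put(k, "value");

        k.name = "b"; // mutate the key AFTER insertion
        System.out.println(map.get(k));                    // null: hashes to the wrong bucket
        System.out.println(map.containsKey(new Key("a"))); // false: right bucket, equals fails
        System.out.println(map.size());                    // 1: the entry is still in there
    }
}
```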
-
Why did I go broke?
Because every time I tried to cash in my promises, I’d end up nullifying my income with undefined deductions! I thought I could get by on short functions and quick closures, but in the end, I found myself callbacked into a corner with no scope for improvement.
I tried to debug my financial situation, but my stack overflowed with deferred payments, and I couldn’t even parse where all my money was going. The compiler of my life just kept throwing unexpected "expenses" errors.
In a final attempt, I refactored my entire approach, renaming myself Async to buy some time, but it was too late. My funds were hoisted to the global scope, and before I knew it, I was reduced to Boolean poverty.
Now, I live my life in strict mode, always awaiting the day I’ll finally get a return on my investment... but deep down, I know I’m just an object in a mutable state.
P.S.: I'm a JS DEV -
I'm in a big fat fucking stinking rut, as in, progress on this project has absolutely stagnated.
Gonna rubber face your duck now **UNZIPS** except I don't have zippers, as joggers are the one true way; fake Adidas til I fucking drop.
Brain damage aside, I understand both how I've laid out the data and what I'm supposed to do with it. We have a virtual machine, an array of instructions and arguments for a given process within it, and we need to walk this array and map values to registers.
We also need to spill values inside registers to the stack, IF they are required at a further point within that block. This also isn't terribly complex. We simply look forward in the array and see if the value is an argument to any instruction that *needs* this value to be loaded (i.e., within a register).
So this implies multiple iterations; we need to better understand how one particular value is used throughout an F before we can make a final decision on how many registers and stack space are actually needed for the whole block.
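(A sketch of that forward scan, with a made-up instruction shape; one pass records each value's last use, and the walk that follows can free or spill registers against it:)
```rust
use std::collections::HashMap;

// Hypothetical instruction shape: just the value ids it reads.
struct Inst {
    args: Vec<usize>,
}

// One forward pass records, for every value, the index of its final use
// in the block. Walking the block afterwards, a register whose value's
// last use is behind the cursor is free; a live value that loses the
// fight for a register is a spill candidate.
fn last_uses(block: &[Inst]) -> HashMap<usize, usize> {
    let mut last = HashMap::new();
    for (i, inst) in block.iter().enumerate() {
        for &v in &inst.args {
            last.insert(v, i); // later uses overwrite earlier ones
        }
    }
    last
}

fn main() {
    let block = vec![
        Inst { args: vec![0, 1] },
        Inst { args: vec![0] }, // value 0 stays live until here
    ];
    println!("{:?}", last_uses(&block));
}
```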
Here's where it gets tricky. If there's a call, we need to be certain that the symbol being invoked has already been fully processed. Besides the obvious fact that recursion fucks me up, there's another matter: say a private method gets invoked by another private method. We can take advantage of this, by which I mean, sacrilege incoming so put on this toga.
Looking at the output of C compilers, it would seem this is not done in practice, I would assume because it's a pain in the ass. But when you have the guarantee that F will only be called internally, as that's what "private" means, there are two ways it can go:
0. It's well below the 13-20 cycle threshold, so you inline the fucker. No surprises there.
1. It's a more involved affair, and invoked in more than one place, so you don't inline it. Codesize matters.
Recursion and [1] are the big deal things holding me back. Not because it's too hard, like I said this is kindergarten level abstraction. I'm just slow and fanatical, which is how I prefer to spell "constant obsessive paranoid delusions". I can see the potential optimization I can pull here, so I'm stuck trying to figure it out.
The idea would be handling the register allocation and stack spill for an internal-internal (or deep internal; what we like to call a "guts" method) in synchronization with the *calling* processes. This is, fundamentally, violating all conventions -- but so far under the hood that no one will notice.
Let me give you an example. If we were to pass some value to a function, expecting to mutate it and get a different value back, in a lot of cases it'd be stupid to make an implicit copy by using two registers, one for input and another for the output. Dude, it's one cycle. Multiply it by a million, say sixty times per second, for every time you __needlessly__ make a copy of a value that we've already stated is mutable.
Clearly unacceptable. This is, in the strictest sense, everywhere in every single codebase. Premature micro optimization is the root of all goodness, God is great and praiseworthy. So how do we go about it?
Answer is I know and I don't know. By which I mean to say, this very thing I've done by hand. Assembly is fun. Now the issue is teaching a calculator how to do it. Not so fun.
There is a dependency chain between processes, as I believe I've kind of alluded to. I'm trying to make decisions on the side of the caller depending on the details of the callee, which is why recursion is rawdogging my soul. This is the same situation, it's inverting the direction of one or more links in the dependency chain, which makes no fucking sense.
And yet it does.
Brain, explain yourself.
How do *you* handle this without crashing?
Brain?
<<ME STEWPED; BEEP-BOOP>>
Alright then, that was a useless attempt at fuckery. Let's have a nap then, maybe it'll come to me in the morning. That's what I've been saying to myself for almost a month now.
Perhaps it is a hardcoded fuk. -
Why are there some mutable objects in Java? Why can't everything just be immutable? Are there any disadvantages?
-
I'd like to make an Android Git client eventually. I tried React Native because I already know React, and isomorphic-git would have been a godsend, but the hello world example has a splash screen. Can you recommend an Android dev toolkit with the following constraints?
1. Is accessible to a mobile dev newbie
2. Launches very quickly
3. Hopefully accommodates an existing mutable Git implementation -
For years now I've been "dreaming in code" but in the stupid way, which is only appropriate.
I try to explain it to myself and *I* can't understand it.
One, by some oneiric enchantment, is capable of communicating signals through the use of some symbolic language; and any time one speaks, they are affecting all that follows.
So a sequence of these, of any size, corresponds to some kind of program, and the self is some sort of collection of mutable structures being affected by them. And new symbols arise from within the self, corresponding to sequences of previously spoken symbols.
This process in itself can be satisfying, for the mere challenge of engaging with its bottomless complexity, but it also suffers from a complete lack of purpose.
What does it mean? It's all undefined, yet doing something, so it must *mean* something. But what is it doing? One simply cannot grasp it!
I go to bed at night and traverse my tree, I recognize it, I've been working on it for years. Time is different there, you can just keep infinitely building shit, it never ends. Then I wake up and everything makes sense, for a little while.
But what I see isn't quantifiable; I can't turn it into a representation that works outside of a dream. Does it give me some vague ideas for the "actual" code I'm working on? Yes, of course. Yet it's all so... elusive, I can never put it into words. How exactly could I think of this? Well, it's in my tree; I know it because I wrote it as I slept. But how?
Fucking brains, maan.