Search - "low level language"
-
I had to open the desktop app to write this because I could never write a rant this long on the app.
This will be a well-informed rebuttal to the "arrays start at 1 in Lua" complaint. If you have ever said or thought that, I guarantee you will learn a lot from this rant and probably enjoy it quite a bit as well.
Just a tiny bit of background information on me: I have a very intimate understanding of Lua and its C API. I have used this language for years and love it dearly.
[START RANT]
"arrays start at 1 in Lua" is factually incorrect because Lua does not have arrays. From their documentation, section 11.1 ("Arrays"), "We implement arrays in Lua simply by indexing tables with integers."
From chapter 2 of the Lua docs, we know there are only 8 types of data in Lua: nil, boolean, number, string, userdata, function, thread, and table.
The only unfamiliar thing here might be userdata. "A userdatum offers a raw memory area with no predefined operations in Lua" (section 26.1). Essentially, it's for the API to interact with Lua scripts. The point is, this isn't a fancy term for array.
The misinformation comes from the table type. Let's first explore, at a low level, what an array is. An array, in programming, is a collection of data items all in a line in memory (The OS may not actually put them in a line, but they act as if they are). In most syntaxes, you access an array element similar to:
array[index]
Let's look at C, so we have some solid reference. "array" would be the name of the array, but what it really does is keep track of the starting location of the array in memory. Memory in computers acts like a number. In a very basic sense, the first sector of your RAM is memory location (referred to as an address) 0. "array" would be, for example, address 543745. This is where your data starts. Arrays can only be made up of one type; this is so that each element in that array is EXACTLY the same size. So, this is how indexing an array works: if you know where your array starts, and you know how large each element is, you can find the element at index 6 by starting at the start of the array and adding 6 times the size of the data in that array.
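To make that concrete, here's a minimal sketch in C of that address math (the values are made up for illustration):

#include <stdio.h>

int main(void) {
    int arr[] = {10, 20, 30, 40, 50, 60, 70};
    /* arr[6] is literally *(arr + 6): the start address plus 6 * sizeof(int) */
    printf("%d\n", arr[6]);     /* prints 70 */
    printf("%d\n", *(arr + 6)); /* identical: explicit pointer arithmetic */
    return 0;
}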
Tables are incredibly different. The elements of a table are NOT in a line in memory; they're all over the place depending on when you created them (and a lot of other things). Therefore, an array-style index is useless, because you cannot apply the above formula. In the case of a table, you need to perform a lookup: search through all of the elements in the table to find the right one. In Lua, you can do:
a = {1, 5, 9};
a["hello_world"] = "whatever";
a now holds 4 entries (the 4th being the key "hello_world" with the value "whatever"), but a[4] is nil, because even though there are 4 items in the table, a[4] looks for something "named" 4, not the 4th element of the table. (In fact, Lua's length operator #a still returns 3 here, because only the integer keys 1-3 form a sequence.)
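A naive C sketch of what a lookup means (real Lua uses a proper hash table under the hood, which is far faster than this linear scan, but the point stands: it's a search, not address math):

#include <stddef.h>
#include <string.h>

struct entry { const char *key; const char *value; };

/* Lookup: walk the entries, comparing keys, until one matches. */
const char *table_get(const struct entry *t, int n, const char *key) {
    for (int i = 0; i < n; i++)
        if (strcmp(t[i].key, key) == 0)
            return t[i].value;
    return NULL; /* Lua would give you nil here */
}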
This is the difference between indexing and lookups. But you may say,
"Algo! If I do this:
a = {"first", "second", "third"};
print(a[1]);
...then "first" appears in my console!"
Yes, that's correct. Lua, because it is a nice language, makes keys in tables optional by automatically giving them an integer key. This starts at 1. Why? Let's look at that formula for arrays again:
Given array "arr", size of data type "sz", and index "i", find the desired element ("el"):
el = arr + (sz * i)
This NEEDS to start at 0 and not 1 because otherwise "sz" would always be added to the start address of the array and the first element would ALWAYS be skipped. But in tables this is not the case, because tables do not have a defined data-type size, and this formula is never used. This is why actual arrays are incredibly performant no matter the size, while a table lookup always costs more than raw address math and tends to get slower as the table grows.
That felt good to get off my chest. Yes, Lua could start the auto-key at 0, but that might confuse people into thinking tables are arrays... well, I guess there's no avoiding that either way.
-
Confuzzled over whether I should go the low-level way and learn more about software architecture and foundations, or go the artificial intelligence / machine learning way, because I want to get out of this infinite loop of only developing apps!
-
I just found a game (have not played it yet) that I think everyone here will cream over.
It's an insanely detailed hardware / low-level / make-your-own-computer game.
I watched the trailer and it sets you up by teaching you logic gates and basic circuitry.
Then, it eventually teaches you how to build your own computer using these gates.
Then, you start creating your own assembly language using the computer you made.
Then, you use your computer to solve problems like sending a robot through a maze or just building snake on a display.
Absolutely check it out, it's on sale for $13 USD. I just bought it. Turing Complete on Steam.
-
I learnt programming by making cheats for games and reverse engineering them. It was a fun experience; it wasn't always easy to start with C++ and assembly, but it was definitely worth it. Though when you come from a low-level language such as C++, looking at highly abstract languages such as Javascript makes everything feel wrong in Javascript, especially when it comes to types and how you can just switch types in the middle of the code :D. But it also gives you an understanding of how Javascript could be implemented, what the engine is doing in the background when you create an object, etc.
-
I've optimised so many things in my time I can't remember most of them.
Most recently, something had to be the equivalent of `"literal" LIKE column` with a million rows to compare. It took around a second on average to look up each literal, for a service that needs to be high-load and low-latency. This isn't an easy case to optimise; many people would consider it impossible.
It took me a couple of hours to reverse engineer the data and write a few-hundred-line implementation that would do the lookup in 1ms on average, with the worst possible case being very rare and not too distant from that.
In another case there was a lookup of arbitrary time spans that most people would not bother to cache, because the input parameters are too short-lived and variable to make a difference. I replaced the 50000+ line application acting as a middle man between the application and the database with 500 lines of code that did the lookup faster and was able to implement a reasonable caching strategy. This dropped resource consumption by a factor of ten at minimum. Misses were cheaper, and it was able to cache most cases. It also involved modifying the client library in C to stop it unnecessarily wrapping primitives in objects for the high-level language, which was causing it to consume excessive amounts of memory when processing huge data streams.
Another system would constantly download a huge data set for every point of sale, then parse and apply it. It had to reflect changes quickly, but would download the whole dataset, containing hundreds of thousands of rows, each time. I whipped up a system so that a single server (barring redundancy) would download it in a loop and parse it using C, which was much faster than the traditional interpreted language, then use a custom data-differential format, a TCP data-streaming protocol, binary serialisation and LZMA compression to pipe it down to the points of sale. The protocol also used versioning for catch-up, and differential combination for an additional reduction in size. It went from being 30 seconds to a few minutes behind to keeping within a second of changes. It had also been using so much bandwidth that it would hit the limit on ADSL connections and get throttled. I looked at the traffic stats afterwards: it dropped from dozens of terabytes a month to around a gigabyte or so a month for several hundred machines. From the drop in the graphs you'd think all the machines had been turned off. It could now happily run over GPRS or 56K.
I was working on a project with a lot of data and noticed these huge tables and horrible queries. The tables were all the results of queries: someone had written terrible SQL, then to "optimise" it ran it in the background with all possible variable values and stored the results of the joins and aggregates in new tables. On top of those tables they wrote more SQL. I wrote some new queries and query generation that wiped out thousands of lines of code immediately and operated on the original tables, taking things down from 30GB and rapidly climbing to a couple of GB.
Another time, a piece of mathematics had to generate all possible permutations, and the existing solution was factorial-time. I worked out how to optimise it to run in n*n, which, believe it or not, made a world of difference. It went from hardly handling anything to handling anything thrown at it. It was nice trying to get people to "freeze the system now".
I built my own frontend systems (admittedly rushed) that do what angular/react/vue aim for, but with higher (maximum) performance, including an in-memory database backing the UI that had layered, event-driven indexes and could handle referential integrity (an overlay on the database revealing only items with valid integrity) or reordering and repositioning events very rapidly using a custom AVL tree. You could layer indexes over it (data inheritance) that could be partial and dynamic.
So many times have I optimised things on automatic just cleaning up code normally. Hundreds, thousands of optimisations. It's what makes my clock tick.
-
Application-level developer: It's so simple, just import that dependency and use that API.
Low-level developer: That's cute.
-
My family is pretty clueless about what I do, but they are genuinely curious. My mom especially. She always asks questions about stuff I'm learning and tries her best to understand.
I might do a little course in programming for anyone in my family who wants to learn. It helps a lot with how people solve problems, and it would help reinforce my knowledge.
Question is, do I teach them a low level language like C, or something that's a bit easier to understand, like Python?
-
Working in the embedded systems industry for most of my life, I can tell you methodical testing by the software engineers is significantly lacking. Compared to higher-level language development, with unit tests and so on, it's something I think the higher-level, abstracted side of the industry actually hit out of the park.
The culture around unit testing and testing in general is far superior in java and the rest.
Down here in embedded, all too often I hear "well, it worked on my setup... it worked at my desk", or "oh, I forgot to test that part", or "I didn't think that particular value could get passed in", etc. I've heard it all. Then I've also heard "you can't do TDD or unit tests on embedded like you can at high level"... HORSESHIT!
You most definitely can! This book is a great book to prove the point, or to use as confirmation that you are doing things correctly. My history with this book: I was already doing my own technique of unit testing, based on my experience at the high level. Was it perfect? No, but I caught much more than if I hadn't done the testing. THEN I found this book and was like, oh cool, I'm glad I'm on the right thought process, because essentially what they were doing in the book is what I was doing, just that mine was slightly less structured and missing a few things.
I've seen coworkers immediately assume it's impossible to utilize host testing... wrong.
Come to find out, most of the problems are actually related to a lack of abstraction or forethought in the software/system design by many lone-wolf embedded developers: either being alone, or not having to think about the repercussions of writing direct register writes in application code, or creating 1500-line "main functions" because their perception is "main = application". (Not everyone is like this.) It seems to be related to EEs writing code (they don't know what the CS folks know) and CS folks writing over-abstracted code that won't fit on embedded... then you have CEs that either get both sides or don't. The ones who understand the low-level needs but also get high-level concepts and paradigms, and can adapt them to low-level requirements: BOOM, those are the special folks.
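To illustrate the abstraction point, here's a minimal sketch in C (all the names and the register address are made up, purely for illustration): if register access goes through a thin seam, the logic above it can be unit tested on a host machine with no hardware at all.

#include <assert.h>
#include <stdint.h>

/* The seam: inject the register write so tests can fake it. */
typedef void (*reg_write_fn)(uintptr_t addr, uint32_t value);

#define LED_REG_ADDR 0x40020014u /* made-up address */

/* Production code calls this with the real register-write function. */
void led_set(reg_write_fn write, int led, int on) {
    uint32_t mask = 1u << led;
    write(LED_REG_ADDR, on ? mask : 0u); /* simplified: no read-modify-write */
}

/* Host-side fake and test: runs at your desk, no target board needed. */
static uintptr_t last_addr;
static uint32_t last_value;
static void fake_write(uintptr_t addr, uint32_t value) {
    last_addr = addr;
    last_value = value;
}

int main(void) {
    led_set(fake_write, 3, 1);
    assert(last_addr == LED_REG_ADDR);
    assert(last_value == (1u << 3));
    return 0;
}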
ANYway... the book is great because it's a great beginner book for those embedded folks who don't understand what TDD or unit testing is and think they can't do it because they are embedded. So all they do is ad-hoc testing on the fly: no recorded results, no conclusive data, a very quick spot check and done...
If your embedded software engineers say they can't unit test, or do TDD, or anything other than ad-hoc testing... throw the book at them, say you want the unit test results report by next Friday, and walk away.
Lol
-
Start teaching people how and why to delete code, instead of teaching them only how to write code.
Compare functional and object-oriented languages, as well as high-level and low-level languages, and explain the advantages of using a certain language without going into the syntax.
Let people make mistakes, and don't punish them for making them, but let them explain what happened; knowing what caused a mistake is worth ten times more than doing things correctly.
Mix teams per period of time instead of per project.
Make showcases of how to turn ugly code into pretty code, what the steps are, and what patterns people should look for.
Teach not by showing old stuff, but by showing where the old stuff lives on in modern things, why it's important there, and what the purpose of doing things a certain way is, instead of flat theory based on ancient examples.
-
I fucking hate chained methods. Ok, not all of them. Query things like array.where.first... that stuff is ok.
Especially if it's part of the std lib of a lang, which would probably be written by a very competent coder and under scrutiny.
But if you're not that person, chances are you'll produce VASTLY inferior code.
I'm talking about things like:
expect(n).to.be(x).and.not(y)
And the reason I don't like it is because it's all fine and dandy at first.
But once you get to the corner cases, jesus christ, prepare to read some docpages.
You end up reading their entire fucking docs (which are sometimes suboptimal), trying to figure out if this fucking dsl can do what you need.
Then you give up and ask in a GitHub issue. And the dev first condescends to you, and then tells you that the beautiful Eden of code he created doesn't let you do what you want.
The corner cases usually involve nesting or some very specific condition, albeit reasonable.
This kind of design is usually present in testing or validation js libraries. And I hate all of those for it.
If you want a modern js testing lib that doesn't suck ass, check avajs. It's as simple as testing should be.
No magic globals, no chaining, zero config. Fuck globals forced by libs.
But my favorite thing about it is that I can put a breakpoint wherever the fuck I want, and the debugger stops right fucking there.
Code is basically lines of statements, that's it, and by overusing chaining, by encouraging the grouping of dozens of statements into one, you are preventing me from controlling those statements in MY code.
As an end dev, I only expect complexity increases to come from the problems themselves rather than from needlessly "beautified" apis.
When people create their own shitty dsl, an image comes to my mind of an incoherent rambling man that likes poetry a lot and creates his own martial art, which looks pretty but will get your ass kicked against the most basic styles of fighting.
I fucking hate esoteric code.
Even if I had to execute a list of functions, I'd rather send them in an array, because:
a) tree shaking would spare me all the functions I didn't import
b) that's what fucking arrays are for: to contain several things.
This bad style of coding is a result of how low the barrier to coding in higher-level langs is.
As a language or library gets easier to use you might think that's a positive thing. But at the same time it breeds laziness.
Js has such a low learning curve that it attacts the wrong kind of devs, the lazy, the uninspired, the medium.com reader, the "i just care about my paycheck" ones.
Someone might think that by bashing bad js devs I'm trying to elevate myself.
That'd be extremely stupid. That's like beating a retarded blind man in a game and then saying "look, I'm way better than this retarded blind man".
I'm not on a risky point of view, just take a stroll down npmjs.com. That place is a landfill. Not really npm's fault, in fact their search algorithm is good.
It's just the community.
Every lang has a ratio of competence. Of competent to incompetent devs.
You have the lang devs and most intelligent lib devs at the top. At the bottom you have the bottom.
Well, js has a horrible ratio. I wouldn't be shocked to find out that most js devs still consider import or await to be "the future".
You could say that js improved a lot, that it was way worse before. But I hate chaining now, and I hated it back then!
On top of this, you have these blog web companies, sucking the "js tutorial" business tit dry, pumping out the most obscenely unprofessional and bar-lowering tutorials you can imagine, further capping the average intelligence of most js devs.
And abusing SEO while they're at it, littering the entire web with copy-paste content.
-
Kotlin
All languages have a basic objective in mind that shapes both the language and its community:
for C/C++ it was low-level hardware access and performance; for Java, OOP and learning. Kotlin was mostly made to make a dev's life easier: it tries to anticipate what you want to do instead of forcing its patterns on you, and tries to help you instead of punishing errors.
As a dev, I at least feel a little more cared for and less left alone (especially in the ugly world of Java for Android).
-
Learning Go. How did I not learn it before? It has the efficiency of a low-level language with the beautiful syntax and abstraction of a high-level one.
-
First year: intro to programming, basic data structures and algorithms, parallel programming, databases, and a project to finish it. Homework should be tracked via some version control system. There should also be some calculus and linear algebra.
Second year:
Introduce more complex subjects such as programming paradigms, compilers and language theory, low-level programming + logic design + basic processor design, logic for system verification, statistics and graph theory. There should also be a project with a company.
Year three:
Advanced algorithms, data structures and algorithm analysis. Intro to computer and data security. Optional courses in graphics programming, machine learning, compilers and automata, embedded systems etc. It ends with a big project that goes in depth into a CS subject, not a regular software project in Java, basically.
-
I just installed Opera Mini on my PSP. That alone isn't very exciting on its own, although I am stoked that my website does in fact render on a device from 2009. With the helpful guidance of a laptop from 2004 that's doing the hotspot duties for this thing.
No, what really got me stoked is that Opera still supports these old platforms, and how small they managed to make it. The .jar file for Opera Mini 4.5 is ~800kB large. There's a .jad file as well but it's negligible in size and seems to be a signature of sorts.
Let that sink in for a moment. This entire web browser is 800kB. Firefox, meanwhile, consistently consumes 800 MEGABYTES... in MEMORY. So then I got to thinking: how on earth did they manage to cram an entire functioning web browser into 800kB? Hell, what makes up a web browser anyway?
The answer to that question I got to is as follows. You need an engine to render the web page you receive. You need a UI to make the browser look nice. And finally you need a certificate store to know which TLS certificates to trust. And while probably difficult to make, I think it should be possible to do in 800k. Seriously, think about it. How would you go *make* a web browser? Because I've already done that in the past.
Earlier I heard that you need graphics, audio, wasm, yada yada backends too... no. Give your head a shake. Graphics are the responsibility of the graphics driver; a web browser shouldn't dabble with those at all. Audio: you connect to PulseAudio (in Linux at least) and you're done. Hell, I don't even care about ALSA or OSS here; you just connect to the thing that does that job for you. And WebAssembly... God, I could rant about that shit all day. How about making it a native application? It's not like actual assembly, which is used for BIOS and low-level drivers; and we already have a better language for the more portable stuff, called C.
Seriously, think about it. Opera, a reputable browser vendor, managed to do it in 800kB on a 12-year-old device. Don't go full wank on your framework shit in the comments. And don't you fucking dare tell me that there's more to it. They did it, for crying out loud. Now you take a look at your shitpile of JS code and refactor that shit already. Thank you.
-
I was talking to a friend of mine (more of an acquaintance, really) about our shared interest in Go and how I am trying to see if I can implement it in more and more of my daily activities (simple CLI utilities, maybe a web app or two), and he mentioned how much he likes it after being part of a Java shop for such a long time. He said that he got tired of the verbosity of Java and how Go was such a "breath of fresh air"
var i SomeShit
do.SomeShit(&i)
if do.Error != nil {
    panic(do.Error)
}
fmt.Println("Could not agree at all")
On how bullshitty it is to say that one switched over to Golang because of the verbosity of other languages, especially when anything meaningful that you might do with the code requires constant checking.
And let us not
forget := lol.bullshit(); forget != nil {
about some of the other bs you get to do
oh look scoped errors
}
.....like I get it, man. I like the language. No, it ain't replacing C or C++ for low-level shit, not with a garbage collector, are you fucking high?
But yes, I do like the language, they got a lot of shit right, the thing is, I feel like I know everything about it already since A) shit is way too simple, simple enough to be used by anyone really and B) other than goroutines this language does not really bring anything new to the table, far as I can tell.
I mean, shit. I thought I was at odds with Python, disliking syntactical whitespace enough to make me try to not use an otherwise perfectly good lang (Python, I love you, but I hate syntactical whitespace), but Golang really puts me at odds. I love it but dislike it at the same time.
-
20 years from now, us high-level language developers will be considered low-level language developers, programming gods and goddesses.
-
Ye, so after studying for an eternity and doing some odd jobs here and there, all I have to show for it are the following traits:
* Super knowledgeable in arm/Intel assembly language
* C-Veteran with knowledge of some sick and nasty C-hacks/tricks which would even sour the mood of your grandma
* Acquired disdain for any and all scripting languages (how dare you write something in one line for which I need a whole library!)
* All-in-all low-level programmer type of guy (gimme those juicy registers to write into!)
After completing the mandatory part of my computer science studies, all I did was immerse myself in low-level stuff. Even started to hold lectures and all.
Now I'm at the cusp of being let free into the open market.
The thing is: I'm pretty sure that no company is really interested in my knowledge, as no one really writes assembly anymore.
Sure, embedded programming is still a thing, but even that is becoming increasingly more abstract, with God knows how many layers of software between the hardware and the dev, just to hide all the scary bits underneath.
So, are there people in here who're actually exposed to assembly or any hands-on hardware-programming?
Like, in a "which bit in which register/address do I need to set" kind of way.
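For reference, that kind of work looks roughly like this in C (the register name and address here are made up for illustration):

#include <stdint.h>

/* Hypothetical memory-mapped GPIO output register. */
#define GPIO_ODR (*(volatile uint32_t *)0x48000014u)

void gpio_set_pin(int pin)   { GPIO_ODR |=  (1u << pin); }  /* set one bit   */
void gpio_clear_pin(int pin) { GPIO_ODR &= ~(1u << pin); }  /* clear one bit */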
And if so, what would you say someone like me should look out for in a company, to match my interests to theirs?
Or is it just a pipe dream, and I need to brace myself for a mundane software engineering career where I process one ticket at a time?
(Just to give a reference: even the most hardware-inclined companies I found "near" me are developing UIs with HTML5 to be used in some such environment...)
-
To the guys saying "OOP is dumb" / "I don't need OOP" / "I've never worked with OOP"...
I have some questions:
- which language are you working with
- which problems are you solving
- how big is your code base
- how do you maintain readability of your code?
Don't get me wrong, I don't believe that OOP is always the answer. I'm just curious which fields these statements are coming from: whether they all come from low-level (assembler, C, ...) or functional languages or "scripty" languages (Python, JS), or whether there are also people working with languages like C++, where OOP is pretty much established. And if the latter, I'm curious how people design their code and what problems they solve... tell me your story :D
-
Well, I wanna specialize in low-level software as I get older. Everyone is telling me to go out and learn a processor architecture. I'm willing to be patient, so I do what people recommend and download the Intel x86_64 manual. I was excited... UNTIL I REALIZED THE MANUAL WAS 4474 PAGES LONG! Like, how am I supposed to jump into assembly, machine language, and low-level programming with a beginner's task like that? I cannot find ANY resources online to simplify the transition, and college sure ain't gonna teach me anytime soon.
-
We had 1 Android app to be developed for a charity org: data collection for a groundwater-level-increase competition among villages.
The initial scope was very small and feasible: around 10 forms with 3-4 fields each, to be developed in 2 months (1 for dev, 1 for testing). There was a prod version which had similar forms with no validations etc.
We had received the prod source, which was total junk. No KT (knowledge transfer) was given.
The existing source had spelling mistakes, in the era of spell/grammar-checking tools.
There were rural names for classes and variables, written in a regional language in English letters, and that regional language is only somewhat known to some of the developers; even they don't know the meanings of those rural names. This cost us dearly in visualizing the data flow between entities. Even Google Translate wasn't reliable for this language, due to low Internet penetration in that language's region.
OOP wasn't followed, so the exact same code exists in 10 places. If an error or bug needed to be fixed, it had to be fixed in all those 10 places.
There were no foreign-key relationships in the database, while there actually were logical relations among the different entities.
No created/updated timestamps in records on the app side to provide an audit trail.
A small part of the existing source was quite good, with Fragments, MVP etc., while the other part was ancient Activities with business logic.
We had to support Android 4.0 to 9.0 across many screen sizes and resolutions, without any target devices issued to us by the client.
Then the Corona lockdown happened, and during it the client-side professionals suddenly became over-efficient.
The client started adding requirements like very complex validation with inter-entity dependencies. Then they started filing bugs from the prod version against us.
Now let's come to the developers' expertise:
2 developers with 8+ years of experience who don't know how to resolve conflicts in a git merge; conflicts created by them alone, by not following git best practices, like only appending new implementations to existing classes for easy auto-merging.
They think that handling click events is called development.
They don't want to think about OOP or well-structured code. They mostly don't want to re-use code, and when they copy-paste, they think that's called re-use.
They want to follow old-school Java development inside the memory-scarce Android app lifecycle on an end user's phone. They don't understand memory leaks, even when those are pinpointed by memory-leak detection tools (LeakCanary etc.).
Now 3.5 months are over, the competition was called off for this year due to Corona, and development is still ongoing.
We are nowhere close to completion, even for the initial internal QA round.
On top of this, nothing is billable, so it's like financial suicide.
Remember, whatever is said here is only 10% of what we faced.
- An engineering lead in a half-billion-dollar company.
-
!rant
Started learning Rust yesterday. As a web developer, I like the static typing and the speed. I wanted to know a low-level language to complement Python, but I kind of dislike C and C++, and that's why I chose Rust. At the moment the syntax still feels kind of foreign, but I probably just need to man up and embrace it. :)
-
<opinion>
You may be a prod ninja, but I believe that every dev should have a decent level of exposure to a low-level language (or a few of them). Sure, you can make an HTTP server, do sentiment analysis, topic modeling, set up multi-node clusters, write ORM queries against DBs and all sorts of awesome stuff with Python/Ruby/PHP/JS/Go etc., but none of them teaches you what happens at the kernel level. Things like memory leaks, threading, multiprocessing and memory allocation are far better learnt from a low-level language.
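A tiny illustration of the kind of thing a GC quietly hides from you (a minimal sketch in C):

#include <stdlib.h>

void handle_request(void) {
    int *buf = malloc(1000 * sizeof *buf); /* heap allocation */
    /* ... use buf ... */
    /* no free(buf): call this in a loop and the process grows forever.
       A GC'd language cleans this up for you; C makes you own it. */
}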
</opinion>
P.S. Not a C/C++ fanboy. I'm a Python dev 😄
-
My boss keeps pushing me to do „any“ courses..
I'd say I'm doing my job exceptionally well. In fact, he even told me so before he promoted me.
I had to tell him what I wanna learn in the next 2-3 years. I told him I wanna get decent at C++, because I love the language and, in my opinion, every dev can improve by learning a low-level language.
I have some MITx courses and stuff I wanna do (I actually want to do them), but he keeps pushing me to send him the courses so he can push me and (I think) monitor my progress...
C/C++ and asm have always been my love. I wanna improve and learn. But I wanna do it for myself, not for my boss. The company doesn't have any use for it anyway...
And those courses are 4 weeks to 12 months with scheduled assessments.
I shouldn’t have mentioned it. Now it’s an expectation they have.
Now I have to force myself into doing those courses in time.. on a schedule..?
90% of them will bore the shit out of me cause I already know the material, and the remaining 10% is stuff I wanna look at when I feel like it. But I don't have a paper that says I know those 90%, so yeah...
Why can't he just be happy with the work I do during working hours and leave my free time up to me???
-
As a junior dev from a sysadmin and security background, this is a list of software development concepts I never seemed to truly understand but hope to (rated from most intimidating to least):
1) Frontend web development and all the huge world of javascript frameworks and tools. - It's more overwhelming than the political geography of the Holy Roman Empire in the Middle Ages.
2) Machine Learning, Deep Learning and A.I- too much math that fucks with my brain.
3) low-level programming(kernel,drivers) - sounds extremely interesting but the code in assembly/C/C++ looks like Linear A Minoan hieroglyphics.
4) Rx(insert language here) - I never get why it is useful or why someone invented this. Seems interesting though.
5) Code Reflection - sounds like Thelemic magick.
6) Packaging, automation, build tools, devops, CI, Testing - seems too complicated. I just want to run an executable at the client or make a web app that does something. Why all this process?
-
I'm just frustrated. I wanted a simple, statically-typed language that doesn't get in your way and offers GC. I can't find anything "just perfect".
- Go: enforces a style on you, nono.
- Rust: ownership system. I love it, but it's too low level for what I want.
- Scala: seems to have a bunch of useless and bug-prone features.
- Java: I hate how you have to declare and catch checked exceptions. Good practice, yes, but the code gets bloated with try-catch statements.
- C and C++: Too low level, no GC.
- C#: maybe? idk
I want to make a back-end for an app, but I want it to be easy and fast. I need something with a gentle learning curve, where I won't keep fighting the language. I'm between Java and Rust. Java's easier to use. Rust is Rust <3, but it's hard; I haven't learned it properly and I just keep fighting the fucking compiler.
-
I know I’ll get mixed views for this one...
So I'll state my claim: I agree with the philosophy of Uncle Bob. I also feel like he is the high-level-language, older version of myself, personality-wise (when I learned about Uncle Bob I was like, this guy is just like me, but not low-level, haha).
Anyway, I don't agree with everything, because I think he thinks, or at least I get the vibe he thinks, that everything can be solved by OOP and high-level languages. This is probably where Bob and I disagree. Personally, I don't touch Ruby, Python, Java and those with a 10-foot pole.
Does he make valid arguments? Yes. Is agile the solve-all solution? No. But agile ideas do come naturally, and the feedback loop of product development is much smaller, so the managers, clients and customers can "see things" sooner than with purely waterfall. I mean, agile is the natural approach of disciplined engineers; waterfall was developed because the market was flooded (and continues to be flooded) with undisciplined engineers. Agile is great for them, but only if they are skilled in what they are doing and can see the forest through the trees, which is the entire point of waterfall: to see the forest, the end goal. Now, I'm not saying that with agile you only see a branch of a single tree of the forest, but too often young engineers and beginners jump on agile because it's "trendy" or "everyone's doing it" or whatever the fuck reason. The point is, they do it but only focus on the immediate use case, needs and deliverables due next week.
What's wrong with that?? Well, an undisciplined engineer doing agile (no, I'm not talking about damn scrum shit and all that marketing bullshit), pure, true agile:
They will write code for the need due next week, but they won't realize that, hmm, I will have a need 3 months from now for some feature that has to connect to this, so I'd better design this code with that future feature in mind...
The disciplined engineer would do that. That is why waterfall exists: so that, ideally, the big picture is painted beforehand.
The undisciplined engineer will then be frustrated in the future when he has to act like the Kool-Aid Man, bursting through the hard, premature architectural boundaries he created, now that links or connections are needed.
Does moving to agile fix that? Hell no, because the undisciplined engineer is still undisciplined.
One could argue that the project manager or scrum secretary... (yes, scrum secretary, I said that right)... is supposed to organize, create and order the features with the future in mind, etc...
Bullshit. So basically you're saying the scrum kid is supposed to be the disciplined engineer, with the foresight to anticipate future features and to write the requirements and tasks now that cover those things? No!
1. The scrum bitch focuses too much on pleasing "stakeholders", especially, taken literally, in startups where the non-technical idiots are too involved with the engineering team, and the scrum bastard tries to kiss ass and get everything organized and the tasks working so the non-technical person can see pretty things work.
The scrum master is a gatekeeper and is not needed; he actually hinders the whole process of turning an undisciplined engineer into a disciplined engineer, and instead makes the undisciplined engineer a "forever" code grunt, filling weekly orders of story points, unable to see the forest until it's over, because the forest isn't shown to the grunt; only the scrum keeper knows the big picture... This is bad; this is why waterfall is needed.
Waterfall has its own problems, but that's another story for another day.
ANYWAY... so, where were we...
Ah, yes...
Clean Code...
Is it a good book? Yes. Does Uncle Bob's personality show through the book? Yes, lol.
If you know Uncle Bob, you will understand what I just did with this post, lol. I had to tangent (at least mine was related to the topic)...
I agree with the principles of the book; I don't agree with the extreme viewpoint. It's like religion: there are the modest folks and then there are the extremists. Well, he's the preacher of the cult and he's on the extreme side... but that doesn't mean he's wrong... many things he nails... he just hits the nail through the wall a bit.
OOP languages are not the solution... high-level languages do not solve everything... principles and concepts can be used across the board and prove valuable... just don't hold everything up like the 10 commandments from which you cannot deviate. That's the difference here, I think.
Good book, just don't take it as the Bible as a beginner. Actually, in fact, DON'T read this book as a beginner. Wait a bit, learn, then reflect by reading it.
-
It's over.
I've been working on you for months, and thinking about you for nearly a year.
I built you with a shitty language first and some crappy ideas. I obviously got bad results, but I didn't lose courage and I continued you.
I became nearly obsessed with improving you. Every time. Switched to a fast but hard language. Got into my first low-level fuss. All for you.
Now that I've reached the end, with no more improvements and tweaks I can imagine, I can tell you that:
I had a lot of expectations from you.
But turns out you were nothing more than a nasty brain fart pretending to be a good idea.
The core of the concept was rotten. Blinded by my lust for success (perhaps cupidity ?) I didn't see you just couldn't work.
I'm utterly disgusted, of course. Who wouldn't be, after working so hard on something that looks right but is completely useless?
But even though this was all in vain, you taught me some great lessons down the road.
Efficiency matters over facility.
Make sure you're using the right tools, and stay open to changing them.
But some others were harsher, though just as important.
There's times you just have to admit defeat.
Putting a lot of effort into something doesn't always bring a reward.
If after a long time you can't get the thing right, then stop. Your time is precious. Don't waste your time or time will waste you (Thanks Muse, I love this sentence).
And the most important: next time I get some "grand" idea that is not about improving some random software, I'll bang my head on my desk enough times to forget about it.
So now the time has come.
Goodbye, project "hpym". You put me in grief, but I know I matured a lot in my concepts of development because of you.
Now take your place in the project graveyard, among the other clunky half-assed shit I got rid of.
-
Has anyone here worked a whole lot with low-level programming?
I have always worked with high level languages like Python and C++, but I’ve also had an interest in working with embedded systems, real close to the metal.
Any directions on where I should go to start learning low-level programming? Sites, languages, etc.?
Appreciated, devRant fam! 😊
-
Starting to learn Rust... It's hard! I've never worked with a language this low-level before; there are a lot of concepts to learn. It's a good kind of hard, though.
-
I feel like saying "I know C#" (or Java or other similar languages) to mean that you know it as a language, as opposed to more of a framework, is ridiculous. We should say what programming-language level we know (high, mid, low...), since the difference between, say, C# and Java is pretty much the same as the difference between, say, WinForms and WPF. Depending on which two languages and which two frameworks you choose, the difference between the frameworks can be much bigger than the difference between the languages.
In a CV I'd like to say "I know x-level languages, with experience in [actual programming language + frameworks]", instead of saying "I know C#" and then having recruiters and HR people assume I don't know Java at all but do know MVC, WebForms and whatever else, even though I might specialise in something else, and it would take me pretty much the same effort to get proficient in Java as it would to get proficient in that framework or in something else that's technically C#.
It just makes so much more sense to me. As a dev you're supposed to know the principles; the syntax should be secondary. A pointer is a pointer, regardless of whether it's marked with a *, an IntPtr, or is just a value in a register with no special marking that it's a pointer...
Can we, as devs, come up with something like this?
-
If I could, I'd attempt to create an ideal language. I'd aspire for its features to be:
-The easyiness of Python
-The library ecosystem of Javascript
-The readability and cross- platformness of Java
-Functional features of Haskell
-Modularity of Lisp
-Low level features of C/C++
-Powerful with strings and data, like Perl
-Both compiled and interpreted, with REPL
Anything missing from your favorite languages?
-
That feeling when your friends' college life kind of depends on you helping them out with an assignment in a low-level programming language (low-level meaning it was meant to operate at the machinery level) that you were really good at in the first semester. Then you realize that you have forgotten a lot of things, simply because the logic and approach are totally different from a high-level programming language; you forget how a programming language works once you stop using it, and it takes time to dive back in. And you really like being friends with them. Now all you're left with is the fear of letting them down.
-
I initially chose System Administration simply because it was attractive to me to be the HMFIC, and generally above the law as far as corporate policy is concerned, since said law for the most part applied to people with less comprehensive knowledge of how any given system or technology works.
Since then though, I've learned that there's basically no better way to become a jack of all trades than being a sysadmin. There's no other position in the tech field that more easily and gracefully parlays into other specialties.
I write automation and aggregation software now, but I still consider myself a sysadmin by trade, as automation is just another function of system administration. I write everything in vim, and almost entirely in perl, because I am concerned above most other concerns about performance. I could learn C or Go or Rust or some other low-level compiled language, and I'm sure I could create even more performant software that way, but that would take me farther away from my passion: System Administration.
-
Good code is a lie imho.
When you see a project as code, there are 3 variables in most cases:
- time
- people / human resources
- rules
Every variable plays a certain role in how the code (project) evolves.
Time comes in two problematic forms: when certain parts of the code are changed at either a very high or a very low frequency, it's a bad omen.
Too high: somehow this area seems to be relentless. Be it features, regressions or bugs, in larger code bases it usually takes 3-4 weeks until all code paths have been triggered.
Too low: it can be a good sign. But it should be on the radar, imho. Code that never changes should be reviewed in an audit, at most yearly, depending on the size of the codebase. Git / VCS is very helpful here.
Why? Mostly because the chances are very high that the code was once written for a completely different requirement set. Hence the audit: check whether this code is still doing the right job, or whether you have a ticking time bomb that needs to be defused.
People
If a project has only one person working on it, it most certainly isn't verified by another person. Meaning only one person worked on it; I'd say that's pretty bad, as no discussion / review / verification was done. The author did the best he/she could, but maybe another person would have had a better idea?
Too many people working on one thing is only bad when there are no rules ;)
Rules. There are two different kinds of rules.
Styling / organisation / documentation: everything that doesn't have much to do with the coding itself. These should be enforced at a certain point, otherwise the code will become a hot-glued mess no one wants to work on.
Coding itself. This is a very critical thing.
Do: forbid things that are known to be problematic in the programming language itself, e.g. usage of variables in variables, reflection, deprecated features.
Do: define a feature set for each language. Feature set not meaning every feature you want to use! Rather, a fixed minimum version every developer must use and, in the case of library / module / plugin support, which additional extras are supported.
Every extra costs. Most developers don't want to realize this... and a code base that evolves over time should have minimal dependencies. Every new version of an extra can have bugs, breakages, incompatibilities and so on.
Don't: don't prescribe a single way of coding. Most coding guidelines are horrific copy-pastes from some books that smart people wrote who have no fucking clue what you're doing and why.
If you don't know how to operate on people, standing in an OR and doing what a book told you to do would pretty surely end in a dead person. Same for code.
Learn from mistakes and experience, respect knowledge from other persons, but always reflect on whether it makes sense in this specific area of code.
There are very few things which are applicable to a large codebase at a global level. Even DRY / SOLID and whatever else you can come up with can be, at a certain point, completely wrong.
Good code is a lie, because it can only exist at a certain point in time.
A codebase should be a living thing: when certain parts rot, other parts will be affected too.
The reason for the length of this comment was to give some hints on what my principles are for keeping code in an "okayish" state; "good" is a very rare state.
-
The near future is in IoT and device programming...
In ten years most of us will have some kind of central control and more and more stuff connected to IoT; security will be an even bigger problem, with all the firmware bugs and 0-day exploits, and in 10 years IoT programmers will be like today's plumbers: you need one for a custom build, and you must pay an excessive hourly rate.
My country is already getting ready: next month I'm starting a 1-year course on automation and electronics programming, paid for by the government.
On the other hand, most users will use fewer computers and more tablets and phones, meaning more jobs in backend and device-app programming and fewer in general computer programs for the general public.
Programming jobs will increase as general jobs decrease, as many jobs will be replaced by machines; but such machines still need to be programmed, meaning trading 10 low-level jobs for 1 or 2 programming jobs.
Unlike in most job areas, self-taught and bootcamp programmers will have a chance at a job, as experience and knowledge will be more important than a "canudo" (Portuguese expression for the paper you get at the end of a university course). And we will see an increase in the range of programmer jobs, with lower-paid jobs for the less experienced and well-paid jobs for engineers.
In 10 years the market will be flooded with programmers and computer engineers, as many countries are investing in computer classes in kids' first school years, so most kids will know at least one programming language by the end of their schooling, and more about computers than most people know these days.
-
I'm thinking about what language to dive into next.
I already have a pretty good knowledge of Go and mediocre knowledge of C and Java.
So far I have thought about...
1. C++, as I need it for school and it runs on literally anything.
2. Rust, as it seems to be spreading, and the combination of low-level control, memory safety and abstraction seems pretty appealing to me.
3. Kotlin, specifically Kotlin/Native, as it combines Java-like high-level programming with native speed.
4. Nim, as it combines high-level techniques with C-like freedom.
What do you people recommend? Or something completely different?
-
A beginner learning Java here. I was beating around the bushes on the internet for the past decade. As per my understanding up to now: let us suppose a bottle of water. Here the bottle may be considered a CLASS and the water in it objects (atoms); objects may be of the same kind, or may differ in some properties. Another way of understanding it would be: Human Being is a class, and male and female are objects of class Human Being. Here again, objects may differ in properties such as gender, age, body parts. A zoo might be a class, with animals (objects): elephants (objects), tigers (objects) and others too. The human properties above can be added to the Zoo class as well: male, female, body parts, age, eating habits, crawlers, four-legged, two-legged, flying, water animals, mammals, herbivores, carnivores... whatever. This is up to my understanding. Corrections are always welcome; I will be happy if my answer is improved. Comment below.
And for the basic level:
Learn about input and output devices first.
Then, memory-wise: cache (quick access), RAM (temporary memory for runtime access), hard disk (permanent memory); all of these live in the machine alongside the CPU. To express the memory types clearly (as I write this answer on my phone, with mobile data on): suppose I suddenly switch off my phone during this time and switch it on again. Cache exists for instant access to things like navigation and the network. RAM is temporary, so my Quora answer, which was being stored in RAM before the switch-off, will be lost. But my Quora app, my gallery and the others are on permanent internal storage (generally hard disks in PCs) and won't be affected. Okay, now one question: who manages all these commands, inputs and outputs? That's software, be it Windows, macOS, or Android for mobiles. These are the managers of the computer's components under the different OSes.
Java is a high-level language, whereas computers understand only binary, low-level machine code: 0s and 1s. They understand only patterns like 00101, 1110000101, 0010, 1100 (let these stand for A, B, C, D in binary); numbers, lower-case letters and other symbols get encoded in 0s and 1s the same way. The program we write is converted into bytecode, which the JVM (Java Virtual Machine) then executes, acting as an interpreter. C doesn't work this way; it compiles straight to machine code.
Let us C...
Do comment. Thank you.
-
I am a fan of Rust. Whilst I do not consider it a good contender for the "low level" area that I am privy to (game dev), I still find it an absolute joy of a language to learn and use.
So this example here made me lol:
https://youtube.com/watch/...
-
What programming books do you all recommend?
Language-wise, any books on C, Go, Python, Rust, and Lua are welcome.
Topic-wise, I'm interested in books about computer science theory, network programming, low-level programming, and backend programming.
I know it's a wide variety of topics, but some are things I'm currently doing, some I've already messed with and just really want to learn more about or focus on, and some I plan to get to when I get around to it.
-
Human is just computer with meats. There are several subsystems:
0️⃣ low-level hormonal,
1️⃣ mid-level primal instinct and
🅰️ high-level natural language processor
-
Please don't use OS-specific libraries/binaries/build tools... etc.
I'm talking to C/C++ users here. Once in a while I see something on GitHub; maybe I'm just curious, maybe I find your niche code useful, but then you use make (who the hell still uses make?), or your library depends on another library that can only be mindlessly installed in a Unix environment. And the most obscene of all: a solution file...
Thank god for Rust.
-
I started university in September, and after one semester using Java (I learned Java three years ago and have used it since), we started C.
I noticed that high-level languages, or modern languages (like Rust, which is low-level but still has many modern language features), are for spoiled developers.
-
Are there any best practices for binding C libraries to a higher-level language?
Like, regarding concurrency safety, memory safety, general fault tolerance, and glue data structures with low overhead?
-
Why do we still use floating-point numbers? Why not use fixed-point?
Floating-point has precision errors, and for some reason each language seems to exhibit a different level of error, despite all of them running on the same processor.
Fixed-point numbers don't have those precision issues (unless you get way too big, but then you have another problem), and while they might be a bit slower, I don't think there is enough of a difference in speed to justify the (imho) stupid, continued use of floating-point numbers.
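To make the idea concrete, here's a minimal decimal fixed-point sketch in C (the scale factor of 100 is an arbitrary choice for illustration):

#include <stdio.h>

/* Scale 100: the integer 12345 represents 123.45.
   All arithmetic is exact integer math, so 0.10 + 0.20 is exactly 0.30. */
#define SCALE 100LL

int main(void) {
    long long a = 10;                     /* 0.10 */
    long long b = 20;                     /* 0.20 */
    long long sum = a + b;                /* 30, i.e. exactly 0.30 */
    long long product = (a * b) / SCALE;  /* multiply, then rescale: 0.02 */
    printf("%lld.%02lld\n", sum / SCALE, sum % SCALE);
    printf("%lld.%02lld\n", product / SCALE, product % SCALE);
    return 0;
}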
Did you know that some (low-power) processors don't have a floating-point unit at all? That makes floating-point effectively pointless there; it offers no advantage over fixed-point.
Please, use a type like Decimal, or suggest that your language of choice adds support for it, if it doesn't yet.
There's no need to suffer from floating-point accuracy issues.
-
How do I get into low-level programming?
I already know Java, JS and Python, and I feel I want to take my skills to the next level and learn C or Go.
But what do I start with in that area after I learn the language? I have no idea what to do with low-level stuff.