Search - "virtual environments"
-
For almost twenty years I have sheltered in the protective, safe, warm bosom of Debian. For a long time it had the largest body of available software of any distro, and when Ubuntu rose to prominence, that lead grew larger by far. So I used Ubuntu for years for the depth of package availability, and because if something esoteric was released, it would almost certainly come out first on Ubuntu, and sometimes only on Ubuntu. I was happy. Things were good.
But over time, Ubuntu and even Debian started to lean harder and harder on gnome, which I've always hated, along with all desktop environments, because they obscure the system from the user and introduce graphical layers of abstraction, until the actual job of getting things done becomes a black art, hidden behind gnome-specific tools. That's my preference, anyway, and it's been disheartening in recent years to see the direction the desktop appears to be taking.
Then I joined devrant in 2017. Until then I had only heard of Arch peripherally, never more than that, and I had not heard of Manjaro at all. People started posting success stories and happy screenshots, and I was intrigued.
In 2018 I built a windows machine to use for parsec streaming games that wouldn't run on my linux rig. For not a great deal of money, I built a solid machine that's unequivocally better than any machine I've ever used, and installed windows on it. For a while, I was pleased. I had the best of both worlds: a windows box to stream some games from, and a linux desktop for everything else.
But after a couple of months, as proton matured, I found fewer and fewer reasons to use my windows machine. My use of it declined to the point where I was last week: it had been months since I'd even powered it on. The most powerful machine I've ever used was just collecting dust behind the TV in the living room. The full realization came while I was fighting yet another battle in the Gnome Takeover War: I don't have to do this.
I pulled the newer machine out from behind the TV and installed Manjaro Architect edition on it. The flexibility in the install was staggering. I am using nilfs2 for my /boot and / partitions, an option Ubuntu has never offered. Ubuntu just defaults you into the garbage ext4 filesystem, and while you can dig deep enough to install with something else, you have to really want it, in my opinion.
But Manjaro has been a dream come true. Pacman is easily the best package manager I have ever used, and pamac's intuitive, easy commands are a great window into the AUR. Booting into the virtual console instead of a display manager has been wonderful too. On Ubuntu, I had to disable systemd's version of runlevel 5 (the graphical target) to even get that working. Here I just popped my xrandr script into my .xinitrc, and X opens with startx in less than a second. On Ubuntu, it took 5-10 seconds.
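For anyone who wants to replicate the setup, it goes roughly like this; just a sketch, with a placeholder script name and window manager rather than my actual config:

```bash
# Boot to the virtual console by disabling the graphical target,
# systemd's analogue of runlevel 5 (on Ubuntu you may also need to
# disable the display manager's own unit):
sudo systemctl set-default multi-user.target

# ~/.xinitrc -- apply the display layout, then hand off to a WM:
sh ~/bin/set-displays.sh   # hypothetical xrandr wrapper script
exec openbox-session       # placeholder; use whatever session you run

# Then log in on the console and type:
startx
```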
This has nothing to do with Manjaro, but I also switched to Radeon for this install, and I couldn't be happier about that. No more "installing" nvidia's drivers.
No more gnome. No more PPAs. No more settling. I am a Manjaro user now. Full stop. Thank you, devrant, for bringing it to my attention.
-
If you’re writing in Python and you find yourself in dependency management hell and you don’t know about pipenv, consider this a friendly PSA:
pipenv is your friend.
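A minimal sketch of the workflow, with a made-up project and package purely for illustration:

```bash
pip install --user pipenv    # install pipenv itself

cd myproject
pipenv install requests      # creates a Pipfile plus its own virtualenv,
                             # and pins exact versions in Pipfile.lock
pipenv shell                 # drop into the environment
pipenv run python app.py     # or run one-off commands inside it
```
-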
Python is a fucking joke. "Readability" disguised as 150+ magic methods and values. Virtual environments to hide shitty dependency management. Strings that may or may not act as comments. No correlation between package and module names - install Pillow; import PIL. **kwargs instead of options=dict(), because why separate function arguments from arbitrary extra data? And finally, the only way to have tkinter on Windows is to install IDLE, so that some fucktard can stick their shitty app right up yours ...
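For the unconvinced, here is the package/module mismatch in one terminal session (a sketch):

```bash
pip install Pillow        # the package is called Pillow on PyPI...
python -c "import Pillow" # ModuleNotFoundError: No module named 'Pillow'
python -c "import PIL"    # ...but the module you actually import is PIL
```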
-
I have been keeping this inside for a long time, and I need to rant about it somewhere and hear your opinion.
So I'm working remotely as a Team Lead Developer at a small company based in the Netherlands. I've been there for about 8 years now and I am the only developer left, so the company basically consists of me and the owner, who is also the project manager.
As my role title says, I am responsible for many things, and I maintain multiple environments:
- Maintain the web version of the app
- Maintain a Cordova app for Android, iOS and Windows
- Working with pure JavaScript (ES5..) and CSS
- Development and maintenance of Cordova plugins for the project in Java/Swift
- Trying to keep things stable while working hard to transition ancient code to new standards
- Testing, testing, testing
- Keeping the app stable without a single unit test (sadly, yes..)
- Pure JavaScript, no framework apart from jQuery and Bootstrap, which I strongly insist on removing, and it's slowly being done
On the backend side I maintain:
- A Symfony project
- MySQL
- RabbitMQ
- AWS
- FCM
- Stripe/In-App Purchases
- Other things I can't disclose
I can't disclose the nature of the app, but it is feature-rich and complex. It's limited to certain regions only, yet so far we have around 100K monthly users across all platforms. It involves a lot of work, especially because I am the only developer: when I implement a feature on one side I also have to think about the other side, constantly switching between different languages and environments. Not to mention that I maintain very old code, and the project owner doesn't want to transition to more modern technologies because that would be expensive.
The last raise I had was 3 years ago, and so far he hasn't invested in anything to improve my development process. As an example, we have an iOS version of the app in Cordova, which of course involves building, testing, and working on both the frontend and native side, and I do all of it in a somewhat slow Monterey virtual machine with just 16 GB of RAM. It consumed days of my free time just to get working, and when it's running I need to close other apps. Keep in mind I have been working there for about 8 years.
The last time I needed to reconfigure my work computer and set up the virtual machine, it cost me 4 days of the small unpaid holiday I had taken for Christmas, just because he doesn't have enough money to provide me with a decent MacBook. I do get that it's not a large company, but I am the only developer there; it's not like he needs to keep paying 10 developers.
Also:
- I don't get paid vacation
- I don't have paid holiday
- I don't have paid sick days
- My Monthly salary is 2000 euro GROSS (before taxes) which hourly translates to 12 Euro per hour
- I have to pay taxes by myself
- Working remotely has its own expenses: food, heating, electricity, internet, etc.
- There is other technical stuff I am responsible for that I can't disclose in this post.
I don't know if I'm overreacting and asking for a lot, but summing everything up, the only expense he has regarding me is the 2000 euro he sends me, on which he of course doesn't need to pay taxes, as I'm doing that in my country.
Apart from that, just in case, I spend my free time keeping myself up to date with other tech which I would say I'm fairly experienced with, like Flutter/Dart, ES6, NodeJS, Express, GraphQL, MongoDB, WebSockets, ReactJS, and React Native, to name a few; some I know better than others. And still I feel like I don't get what I deserve.
What do you think: am I asking for a lot, or should I start searching for another job?
-
This always gets me:
Developers complaining that their 4 year old / cheap ass computer is slow.
Get. A. New. One.
It's not that hard.
Here, let me do one for you:
https://computeruniverse.net/en/...
I just went to a site that delivers across Europe, and selected a cheap laptop with a decent CPU and SSD. Short on RAM, sure, and without a Windows License. But you can buy RAM for an additional 50$, and that brings you to a total of 550€, delivery included. And it will WORK. And it will be fast.
It's too expensive?
No, not exactly. Wherever you are in the world, if you can code decently, well enough to have the right to complain about development tools, you are eligible for an income of at least 10$ per hour as a freelancer across the globe. I've had such opportunities offered to me by many organizations, especially non-profit ones that need cheap employees. I was actually offered more, but let's stick to 10$ per hour.
So that's 1600$ per month. Enough to buy 3 such laptops. Oh, taxes, I forgot. So you get 2 laptops. Wait! You need food and everything else. Well if you're in a country where that offer actually makes sense, then it's likely that you can live off of 400$ per month quite well. Maybe 800$ if you need to pay rent.
So that's roughly 1 month of work for a laptop that will stop you wasting time waiting for stuff.
Sweet! 1 Month! What does it get me?
Well assuming that you have no laptop, it gets you A JOB that pays you 1600$ per month.
But if you DO have a laptop, you can sell it for cheap, and benefit from the following:
1. Boot-up time from 30-60 seconds to 10 seconds.
2. Installing software - from 1 minute to 10 seconds.
3. Opening a browser - from 10 seconds to 1 second.
4. Opening an advanced text editor (Atom, VS.Code) - from 10 seconds to 1 second.
5. Searching for a file on your entire hard drive - from 1 hour to 2 minutes.
....
You get the point. Waiting is reduced by several times.
So how much do you really wait when coding?
Well are you compiling? Are you opening a new project and the IDE needs to re-index the files? Are you opening programs like a terminal emulator, browser and such? Are you using virtual machines for dev environments?
Well all of these processes become several times faster. Depending on how often you do it, you'll be saving yourself from 1 hour per day to upto 4 hours per day (my case, where a HDD would be just out of the question).
How much is that time worth? At least 10$ per day. If you're working for 20 days per month, 240 days per year, that's a total of 2400$. And for the life time of that crappy laptop of 2 years, that's 4800$ saved. And that's with hugely conservative numbers. Nobody pays 10$ per hour any more, except if you've just started in the industry. I know because I've been there.
Please, for all that's sacred to you, justify right here, right now, HOW THE FUCK can you not afford to get that 8GB of RAM, that cheap ass SSD for 100$, or even a brand new laptop (hey! it's even portable and has FHD graphics on it!) for 550$.
That's why every time I hear someone who is a professional developer complain that they don't have money for a decent machine, I have to ask: why the fuck are you wasting yours and everyone else's time?!
-
Worst mistake ever :
I didn't care about which environment I was in when running pip or apt, and always used sudo for no reason.
One day I installed Conda and Unity. I didn't realise it changed everything over to python3-gi, and everything got fucked up.
I tried to fix it by removing Conda, which removed gnome*, and I lost the GUI.
For the first time I worked in a tty, and after a 6-hour redbull session I got the system working again.
The first thing I did after that was learn to install packages in virtual and local environments.
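If you'd rather skip the 6-hour redbull session, the pattern looks roughly like this; package and path names are only examples:

```bash
# Never again: sudo pip install <pkg> -- it can clobber files that the
# distro's package manager (and packages like python3-gi) depend on.

# Per-user install, kept out of the system site-packages:
pip install --user requests

# Better: one virtual environment per project:
python3 -m venv ~/venvs/myproject
source ~/venvs/myproject/bin/activate
pip install requests          # lands inside the venv only
deactivate
```
-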
The source engine is interesting, because it has reached that stage of life where it's old enough to be remarkable-- in the sense that it could be called 'legacy', a sort of milestone in development practices and thinking, both in software, and design.
That said, a better way to look at it might be through the lens of its *uses today*.
A lot of former source engine (SE) devs are now going to unity or unreal, I don't blame them.
But it's interesting to examine examples of games that haven't.
One such game is the freeware "No More Room In Hell". A couple of online playthroughs show a wealth of well designed maps (and an even greater horde of shovelware maps, but hey, you take the good with the bad).
The engine's age shows, even in games like Left 4 Dead. This has in some respects been a drag, but also a blessing. Where other games could rely on effects, shaders, and other tech, the modders, map makers, and designers here have had to rely on wit and creativity.
Enter "situated environments."
In an age where many people desire to travel, to go places, and have grown up doing the exact OPPOSITE, there is a great desire for variety of locations in games: not merely 'environmental' in the shallow sense of a 'theme' such as 'lava', 'tundra', etc., but in the sense of setting in general.
We want places that are both out of reach and yet familiar. Fire-fights happen in city streets. Apocalypses happen in neighborhoods where the skyline is both broken and at once something we know by sight. Open air markets, grocery stores, neighborhoods, all of these provide the back drops of popular games and series such as COD, Battlefield, The Last of Us, and yes, the example game, NMRIH.
I call this idea of familiar-but-out-of-reach level design "situated environments", because familiarity with these places, combined with a *lack of real-life, day-to-day experience* of them, allows people's expectations to fill in the gaps.
No one for example would argue the layouts of 7 Days To Die are familiar, but most of us don't spend all day in a junkyard or a high rise hotel.
So they *feel* familiar. Likewise with Skyrim, the villages and towns, both iconic and strange, our expectations formed by cultural inheritance, hollywood films, television shows, stories, childrens books, and yes, other games.
In a way, familiarity-without-real-in-person-experience is a shortcut for designers, one that lets them play with the player's head-space, the player's subconscious idea of how a space and setting *should* work, what to *expect* out of the area, how to *operate* within the area. And the more it conforms to expectations, the more an overdesigned element reads as surprising rather than immersion-breaking. A real life example of this is people's idea of Chernobyl. When they discover the amusement park and ferris wheel, they're blown away by the juxtaposition of the wasteland that surrounds them and the associations ('nostalgia', as it were) that such a carnival ride carries for many of us. It simultaneously *doesn't belong* and is yet all at once *perfectly situated in the environment*.
That is to say, 'surreal', which is adjacent to the idea of *being real*, in terms of our "perception of what is and isn't plausible, if not possible."
This is at the heart of suspension of disbelief, because in essence virtual worlds are a lie, like fiction, and good fiction violates expectations in order to tell us truths about reality. As part of our ability to differentiate bullshit from reality, there is, so to speak, an element in our bullshit detectors (doubtless evolved over tens of thousands of years) that is designed not merely to detect what is absurd in our limited experience, but to incorporate absurdity into everyday experience. In that sense, part of our rationality is the acceptance of irrational experiences: learning from them, and discovering 'a proper place for each thing' in the "models of the world" we all carry around in our heads. Eventually we normalize the absurd, it becomes the new reality, and what remains unassimilated becomes superstition (real or otherwise), a figment, or an anomaly.
One of the best examples I've encountered is The Last of Us: Left Behind, a good chunk of which is spent in a mall. And they nailed the environment perfectly I would say.
Or for those who don't own a PS4, a more accessible example is a map in NMRIH aptly called "the museum", and few words better do it justice than to go play it yourself--that is, if you really want to know what I mean by a 'situated environment'.
What better way, during this pandemic, to get out of the news cycle and into your own head? Sometimes the best way to escape isn't outside, it's within.
-
I am amazed at human stupidity.
I always enjoyed the idea of DevOps: use virtual machines and continuous integration to avoid errors and free developers from hard-to-set-up environments and somehow-it-works compilations.
I am amazed how [company I used to work for] managed to turn this into a nightmare.
Just imagine: silent forests, the smell of flowers, and so little trust in developers that your devs can neither build docker environments (because reasons) nor access your actual machines programmatically, because they are filthy peasants. They are forced to do everything manually, so every deployment becomes a frustrating editing process that takes up to an hour. But here lies the trick... it will still have continuous integration... or better: every feature will be deployed as if it were a release.
The true peak of illumination:
Turning a tool into a disease.
Take a sip of tea, manager... you deserve it.
Just thought about this job because I keep being tempted to just start my own company. The more I think about it, the less being employed makes sense, given my end goal.
-
Got the GitHub student developer pack in 10th grade (highschool)
I recently applied for the GitHub Student Developer Pack, and my application got accepted.
If you don't know what this pack is all about, let me tell you: it gives you free access to various tools that world-class developers use. The pack currently contains 23 tools ranging across Data Science, Gaming, Virtual Reality, Augmented Reality, APIs, Integrated Development Environments, Version Control Systems, Cloud Hosting Platforms, code tutorials, bootcamps, integration platforms, payment platforms and lots more.
I thought my application wouldn't qualify, because after reading the documentation I thought it was oriented more towards college and university students, but I applied nonetheless and my application got accepted. It turns out all you need is a verifiable school-issued email address or proof of your current academic status (marksheets etc.).
A few minutes after applying I got the "pro" tag on my GitHub profile, although I didn't receive any emails.
I tested it out and claimed the Canva Pro subscription for free after signing up with my GitHub account.
I definitely recommend applying. If you are currently enrolled in a degree- or diploma-granting course of study, such as a high school, secondary school, college, university, homeschool, or similar educational institution; have a verifiable school-issued email address or documents that prove your current student status; have a GitHub user account; and are at least 13 years old: PLEASE APPLY FOR THE PROGRAM.
Check out the GitHub docs for more info.
Thanks!
My GitHub username:
satvikDesktop
PS. I would have posted links to some sites and documentation for further reading, but I can't post URLs in a rant yet :(
-
Oddly enough, I have simultaneously been less busy and more productive since going 66% remote.
I find myself with more time that feels "wasted" or not busy, but my metrics show more output, better results, and far nicer documentation. A bunch of us also sat down and did a bunch of coursework on really putting together a domain script library for one-click onboarding of new servers or new client setups. We spun up a bunch of new virtual environments that solved headaches that had existed for years but never got dealt with because of too many other tickets.
Some of our web clients freaked out at us because the business is moving away from doing maintenance of legacy web work (small to midsize businesses). But it didn't matter. Rather than respond with a "make them happy," the response was "well, we will get rid of them as clients. We need to focus our energy on the essential service sectors we support."
Hell, we even got an automated test that has been broken apparently since 2018 to work again.
Granted, the incoming workload has slowed down. But it's still interesting to me that despite the slowdown there isn't any concern; it's still paying the bills and we are getting rid of technical debt everywhere. Tbh, this has really been a good reality check.
-
Alright, I've got a confesstion. It's a confession and a question, combined, get it?
Anyway, I've been a happy Linux user for over 20 years now, and I've used all kinds of graphical envs, from tiling wms like dwm and xmonad (I didn't care for hyprland, sorry if that's weird) to full DEs like kde, cinnamon, gnome, etc.
The "question" here is why do people hate Gnome so much? It's the one environment that I keep coming back to, especially now that my main machine is a beast, and RAM usage is nary a concern. Even then, my system is sipping RAM compared to KDE (running two docker dev environments, three browser windows with several tabs - one of which is streaming music, slack, and steam is sitting on the fourth virtual desktop, chilling), and I'm still at just over 18 GB of ram.Being able to push one single key/key combo, and type anything at all that is vaguely relevant to what you want to accomplish, and having that thing be instantly available (including searching for individual files) is super nice. Easy virtual and multi monitor switching is intuitive; little to no effort needed.
Even when I want to do other stuff, like play a game, or edit a photo, video, or some of my shitty musical-aspirational material - GNU+Linux with Gnome has been and continues to be the easiest, most neato way to get shit done.
Why the hate, gnome haters? Maybe you're using it wrong?
-
Design in Motion: Real-Time Rendering's Impact on Architecture
Architecture, a discipline that once relied heavily on blueprints, models, and lengthy render times, has undergone a revolutionary transformation in recent years. The advent of real-time rendering technology has fundamentally altered the way architects visualize, present, and interact with their designs. This paradigm shift has not only enhanced the creative process but has also empowered architects to make more informed decisions and create immersive experiences for clients and stakeholders.
Real-time rendering, a technological marvel that harnesses the power of high-performance graphics hardware and advanced software algorithms, allows architects to generate photorealistic visualizations of their designs in a matter of milliseconds. Gone are the days of waiting hours or even days for a single rendering to complete. This acceleration in rendering time has not only expedited the design process but has also encouraged architects to explore multiple design iterations rapidly.
One of the most significant impacts of real-time rendering on architecture is the ability to visualize a design in various lighting conditions and environmental settings. Architects can now instantly switch between daytime and nighttime lighting scenarios, experiment with different materials, and observe how their designs respond to different seasons or weather conditions. This level of dynamic visualization offers insights into how a building's appearance and functionality evolve throughout the day, contributing to more holistic and thoughtful design solutions.
Moreover, real-time rendering has transformed client presentations. Architectural concepts can now be communicated with unprecedented clarity and realism. Clients can virtually walk through spaces, observing intricate details, exploring different angles, and even experiencing the play of light and shadow in real-time. This immersive experience fosters a deeper understanding of the design intent, enabling clients to provide more targeted feedback and make informed decisions.
The impact of real-time rendering on collaboration within architectural teams cannot be overstated. Traditionally, architects and designers would need to wait for a rendering to complete before discussing design changes or improvements. With real-time rendering, team members can make adjustments on the fly, observing the immediate effects of their decisions. This seamless collaboration not only enhances efficiency but also encourages interdisciplinary collaboration as architects, engineers, and other stakeholders can work together in real-time to refine designs.
The integration of virtual reality (VR) and augmented reality (AR) into the architectural workflow is another transformative aspect of real-time rendering. Architects can now create VR environments that allow clients to step inside their designs and explore every nook and cranny. This not only enhances client engagement but also enables architects to identify potential design flaws or spatial issues that might not be apparent in 2D drawings. AR, on the other hand, overlays digital information onto the physical world, facilitating on-site decision-making and construction supervision.
Real-time rendering's impact extends beyond the design phase. It has proven to be a valuable tool for public engagement and community involvement in architectural projects. By creating virtual walkthroughs of proposed structures, architects can offer the public an opportunity to experience the design before construction begins. This transparency fosters a sense of ownership and allows for constructive feedback, contributing to the development of designs that resonate with the community's needs and aspirations.
The environmental implications of real-time rendering are also noteworthy. The ability to visualize designs in various environmental contexts contributes to more sustainable architecture. Architects can assess how natural light interacts with interior spaces, optimizing energy efficiency and reducing the need for artificial lighting during the day.
In conclusion, real-time rendering has ushered in a new era of architectural design, propelling the industry into a realm of dynamic visualization, immersive experiences, and enhanced collaboration. The ability to witness designs in motion, explore different lighting conditions, and interact with virtual environments has redefined how architects approach their craft. From facilitating client presentations to fostering sustainable design solutions, real-time rendering's impact on architecture is profound and multifaceted. As the technology continues to evolve, architects have an unprecedented opportunity to push the boundaries of creativity, efficiency, and sustainability in the built environment. -
I've always managed the different versions of python manually on my machine, and set up virtual environments off of them.
Now that I've found pyenv, it's all so simple.
Can't wait to see what plugins are available for it.
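The basics, for anyone who hasn't tried it yet (version numbers here are just examples):

```bash
pyenv install 3.12.3      # build and install an interpreter
pyenv global 3.12.3       # make it the default python for your user
pyenv local 3.10.14       # pin a version for the current project dir
pyenv versions            # list everything installed

# With the pyenv-virtualenv plugin, environments hang off those versions:
pyenv virtualenv 3.12.3 myproject
pyenv activate myproject
```
-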
Just discovered https://twitter.com/ExpertBeginner1. It's the story of my life. Giant classes, copying and pasting, and architects who create frameworks. It's great when we combine all three: A "framework" created by an architect which is made of giant classes that you copy and paste. Imagine a giant generic class where the generic argument is only used by dead code. Pause for a moment and try to visualize that.
It inherits from a base class with lots of virtual methods called by base methods that throw NotImplementedException, so if you don't need them you have to override them to return empty collections. If you're going to do something so messed up you could just put those default implementations in the base. But no, you can inherit, it compiles, and then it throws a runtime error unless you override methods the compiler doesn't require you to override.
The one method you're required to override has a TODO comment telling you what to put there. Except don't ever do what the comment says because that's the old standard. The new standard says never, ever do that.
Most of the time when I read about copy-and-paste coding it's about devs who copy and paste because they don't know how to write or reuse code. They don't mention the environments where copying and pasting the same classes over and over again is the requirement and you're not allowed to write your own code.
Creating base classes where you just override a method or two can potentially work, but only in the right scenarios and only if you do it right. If you're copying and pasting a class that inherits from the base class and consists entirely of repeated code, why the heck isn't that the base class? It could be a total mess, but at least it would be out of sight and each successive developer wouldn't become responsible for it by including it in their own code.
It's a temporary engagement, but I feel almost violated. I know it's a first-world problem, and I get to work indoors and take vacations. I'm grateful for those things.
Before leaving I had to document the entire process of copying and pasting an entire repo, making a ton of baseline edits that should just be in the template but aren't, and then copying and pasting from other places into the copied and pasted code. That makes me a collaborator. I apologize more than once in the documentation, all 20 pages of it that you have to read and follow before you even get to the part where you write the code for what you actually need it to do.
This architect has succeeded in making every single thing anyone does more about servicing the needs of his "framework" than about writing actual code to do what needs doing. Now that the framework is in and around everything it creates the illusion that it's a critical part of our operations. It's not. It's useless overhead.
Because management is deceived into thinking they need it, they overlook the fact that it blows up, big and small, every single day. The log is full of failures that I know no one ever sees. A big chunk of what they think it does fails silently, and they don't even notice until months later, when they realize how much data they're missing. But if they lose, say, 25%, they'll never notice.
When they do notice, they just act like it's normal, go into fire-drill mode, and fix it. Doom. You're all doomed. I'm standing on the deck of the Titanic next to my jet ski.