Joined devRant on 9/6/2018
-
@whimsical I use the $3/month GLM Coding plan. I only tested it on sst/OpenCode and Roo, but you can use it inside Claude Code too. Somebody claimed they used it on Zed, but I was totally unsuccessful
Yesterday I did a pretty big feature on a legacy Yii2 / Bootstrap 3 app using GLM 4.6 through Roo, and it did an amazing job with minor hiccups around Bootstrap 3 -
@whimsical hi, cheaper country man here, where are they hiring? lol
-
@whimsical I noticed it depends on the model; it's much more autonomous with GLM and CC, for instance.
Zed is fantastic, but I only use it with Claude Code.
What are your thoughts on Grok-Code-Fast? I tested a few frontend generations and was not that impressed -
@Lensflare when they fire people out there, do they use actual flamethrowers too?
-
I'm testing stuff on OpenRouter with Cline and Roo Code (Roo is just amazing). Despite Qwen Coder 30b being able to generate complex Next.js projects, it's not very good at making them work.
I had it create a Svelte project, and then 3 prompts later it decided Svelte was not really required (despite my prompts) and recreated everything in pure HTML/JS.
All in all, I think I'm stuck with cloud providers at least for now, as I was aiming for a 48GB machine and it feels like that's not up to the task -
@gitstashio You can run several open-weight models, the LLM equivalent of open source, on your own hardware.
OpenAI released GPT-OSS, there's Qwen Coder like you mentioned, Gemma, Llama from Meta, etc.
The models themselves are stuck with the knowledge they had during training. Some models can do RAG (I don't know about the specific open-weight ones), and there are people who do fine-tuning, including LoRA, on local models, which requires some pretty hefty hardware; I'm not sure Apple Silicon is well suited for that.
My interest is mostly in consuming and prompt engineering, and Apple Silicon seems to do great with those (quick example below).
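For the consuming part, this is roughly what it looks like locally; a minimal sketch assuming an Ollama server on its default port, with the model tag just as an example of something you might have pulled:

from openai import OpenAI  # pip install openai

# Ollama (and most local servers like llama.cpp) expose an OpenAI-compatible API;
# the api_key is required by the client but ignored by the local server
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

resp = client.chat.completions.create(
    model="qwen2.5-coder:32b",  # example tag, swap in whatever model you actually run
    messages=[{"role": "user", "content": "Explain LoRA fine-tuning in two sentences."}],
)
print(resp.choices[0].message.content)
-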
@gitstashio > why does no one explain these things in simple terms: millions of "AI will take your job" videos and only 2 about running LLMs locally
Because it's easier and more lucrative to create desperation and fear than to actually teach stuff -
I'm convinced now that pure vibe coding is solely for the fun of seeing the frontend appear.
I vibe-coded some stuff, and every time I saw the code going sideways and kept going anyway, it became just a train wreck.
My hot take here is that using GenAI for coding is a skill in itself -
@BordedDev which providers are you guys using?
Something like OpenRouter/Groq, or bare metal? -
Yes, fellow human, I also am a flesh and blood human, with a normal amount of appendages and eye sockets
-
@retoor that's the kind of data I don't know how to get for my use case.
-
@12bitfloat upgrading is not an option, my PC is just too old; I'd have to replace the whole thing
-
@12bitfloat also, for some reason, specifically here the M4 is slightly cheaper than the RTX 5090.
I have to stick to a local vendor due to warranty (super important for me) and taxes (which are many, and probably why the RTX is so expensive here) -
@retoor Right now I'm thinking mostly of using it for coding with tools like SST/OpenCode and Cline, with medium/smaller models like QwenCoder 30b, DevMistral, etc.
Maybe using local models as a testbed for future LLM-enabled applications before going online.
Imagine a local AI-enabled development machine.
Also there are the economics/politics of it: while the hardware is (obscenely) expensive out here, it's something you own.
While I don't think I'll get rid of APIs/subscriptions, I don't have any hope they'll become cheaper, so it's nice to have options. -
@afaIk two things mostly: having the ability to play and learn locally is a great plus, since it's a (very needed) upgrade,
and depending on what I'm doing, cloud services get incredibly expensive for me due to exchange rates. Yes, the computer would be expensive, but it's there no matter what.
Still, this is a good question; maybe I should test some 30b models on Groq to gauge the cost before making any purchase (rough numbers below).
I'll probably wait till the M5 gets released, though
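Back-of-envelope for that Groq test, just to make the comparison concrete; all the numbers here are placeholders, check current per-million-token rates:

# rough cloud cost estimate for a coding workload -- every number is a placeholder
input_tokens_per_day = 2_000_000   # prompts + context sent to the model
output_tokens_per_day = 300_000    # generated code coming back
price_in_per_million = 0.30        # USD, example rate for a ~30b model
price_out_per_million = 0.60       # USD, example rate

daily = (input_tokens_per_day / 1e6) * price_in_per_million \
      + (output_tokens_per_day / 1e6) * price_out_per_million
print(f"~${daily:.2f}/day, ~${daily * 22:.2f} per month of workdays")
-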
@djsumdog mostly I want to try the coding models, but it would be general lab stuff really.
A friend showed me Qwen Coder running on a 24GB MacBook Pro and I was impressed with the result.
My main hangup with going Nvidia is that the setup would cost more for less total LLM-usable RAM: for the price of a single 5090 with 32GB, I can buy an M4 mini Pro with 64GB of unified RAM (at least here in the jungle); rough math below.
I understand it's not a replacement for Claude/Codex tho
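The memory math that pushes me toward unified RAM, roughly; this is weights only, KV cache and context add on top:

# rough memory footprint of model weights at different quantizations
params_b = 30  # ~30b parameter coder model
for bits in (16, 8, 4):
    gb = params_b * bits / 8  # billions of params * bytes per param ~= GB
    print(f"{bits}-bit: ~{gb:.0f} GB of weights")
# ~60 GB at fp16, ~30 GB at int8, ~15 GB at 4-bit: a 32 GB card is tight
# above 4-bit quants, while 64 GB of unified RAM leaves headroom for context
-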
this is a triumph
-
Talk about reality, it's what people care about.
I honestly have no idea how the market is in the US (I'm assuming the US, since you talked about dollars) -
Pff, you store the password? I just throw it away; I only store the length, then just check the size of the password and the username client-side.
But I always add a null-filled column called password, that way if the password gets leaked by hackers they'll be confused and give up -
@whimsical let's add an NSFW tag to that URL lol
-
@D-4got10-01 thinking about it now, the sad part is that they really were clueless... they really thought everything in tech was free.
That idea essentially made them market their product (a B2B two-way marketplace) as a freemium product...
They thought they had traction, but they really just burned lots of money for no return. -
for a second there I thought you were Freddie and the company was Mystery Incorporated
-
websockets keep disconnecting for me :/
-
@jonathands
http://localhost:8080/
Quick, before the admins discover us! -
don't worry, I'll vibecode some stuff with Lovable and GLM, BRB
-
@IHateForALiving don't worry he'll try to fill that void by overworking people
-
comes with old age
-
@Lensflare yes, pressure can be an issue, but they're used mostly because they're much cheaper than installing gas heating, due to infrastructure costs... most people are used to them. Gas heating is considered something for hotels; in very specific places people use it, mostly upper-class apartments and households