Details
-
Location: Zwolle, The Netherlands
-
Website
Joined devRant on 12/12/2024
-
I have made an interactive talking AI, but it's not open source. It contains passwords/keys and tasks that are personal. But a lot was learned, even though the resulting code is nearly nothing. I spent many hours on research and didn't want to let that go to waste.
If you are interested in TTS / STT, this will be a nice resource: https://molodetz.nl/retoor/...
Side note: the built-in WebKit TTS/STT engine is maybe even better and has a great API! I'm amazed by the quality of that thing.
This is Python research. I hope I can motivate someone, but devRant is always empty on Saturdays.
If someone needs help with an implementation regarding this, you know where to find me.
-
Now I know why no one uses Google Cloud. Getting TTS and STT working cost me the whole night. Gemini was easy, though. But fuck Google, you cost me a lot of energy. You guys are crazy. Now my API connects, in some magic way I don't even understand, through the gcloud CLI app. The rest of my application is plain REST and doesn't use much of the Google library.
I implemented Google TTS and STT into ChatGPT. I use Google for some things because it's cheaper. It works using a JBL Go! speaker. I can just turn it on and start chatting with it. I implemented Google search and gave it a memory. It can remember numbers for me. It accepts Dutch and English. I can say 'google' and Google becomes the main action: it will fetch results from Google and use GPT to summarize them. It works perfectly! BUT FUCKING AI. I want to know the color of the Mona Lisa's hair. Not freaking Ona Isa! I sent it literally correct. The speech-to-text works great. But the fucking API with its reading. Pathetic. How far along is AI? Barely usable as a home assistant. So far, besides autocompletion and giving code snippets / concepts, AI is freaking useless. You need more patience for AI than for a kid.
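For reference, a minimal sketch of the Google Cloud TTS call I mean (assuming the official google-cloud-texttospeech package; authentication goes through the gcloud CLI's application-default credentials, which is the "magic" part):

from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()  # picks up gcloud credentials
response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="Hallo, wat kan ik voor je doen?"),
    voice=texttospeech.VoiceSelectionParams(language_code="nl-NL"),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3),
)
with open("answer.mp3", "wb") as out:
    out.write(response.audio_content)  # play this on the JBL Go speaker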
I hope the inventor of OAuth2 dies alone. He should.
-
The lady next door was visiting me. She is 67. She is kind of a psychologist and she needed a new site. With her just sitting here, I generated one with GPT and explained HTML to her a bit. The instructions are as follows:
- If you add a page, duplicate the other page and remove the text you don't want. Do not remove HTML unless it's around content you don't use anymore.
- I taught her how to copy an href and change the link.
So, first, she asked me for a website. The last thing I want is to have somebody's website in maintenance, or even to work on it. Making "beautiful"/commercial sites is not my hobby.
Now, for €1.80 per month she has a domain name (I asked if it will have SSL; that's still a bit unclear) and hosting for €1 per month. I think it can be even cheaper. It doesn't support PHP/Python and stuff.
TL;DR: I gave someone an HTML site generated in 5 minutes, tried a few style sheets, and she was happy with it. Bye bye designers :P No one cares! Also fully responsive.
-
There is one difference between left and right. I discovered it in less than a minute. How fast are you?
-
Almost no one seems to be online today. Most people have smth better to do but I don't.
I have upgraded the Ragnar anti-spam service. It will:
- respond to spam in less than a minute, like you're used to (a bit slower than before, actually; the old system reacted in seconds, but I decided that was a bit overkill / not worth the resources)
- use around 20 times fewer resources (instead of 20 bots, it's now one scout bot that checks the situation and activates the other bots when needed; see the sketch below)
- spam less: from now on, only one message gets posted under a downvoted rant (before this varied and could be 7 messages under a rant)
- since spammers upvote their downvoted rant after a week or so, Ragnar downvotes 5 times so it's harder to upvote again (before this varied)
If you think you're seeing more spam because the service is a bit slower, let me know. It's very easy to adjust its speed.
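A minimal sketch of that scout idea (the function names and the spam check are hypothetical placeholders, not Ragnar's actual code): one cheap loop watches the feed and only wakes the heavier worker bots when something looks like spam.

import time

def looks_like_spam(rant: dict) -> bool:
    # Placeholder heuristic; the real checks are more involved.
    return "http" in rant.get("text", "").lower() and rant.get("score", 0) < 0

def activate_workers(rant: dict) -> None:
    # The real version would log in the worker bots and let them respond/vote.
    print(f"waking worker bots for rant {rant['id']}")

def scout(fetch_latest_rants, interval: float = 30.0) -> None:
    # One always-on watcher instead of 20 always-on bots.
    while True:
        for rant in fetch_latest_rants():
            if looks_like_spam(rant):
                activate_workers(rant)
        time.sleep(interval)  # keeps the "under a minute" response time
-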
YouTube keeps feeding me React videos. Disliking them just for showing up doesn't seem very ethical. It probably thinks "You watch them quite often, so here, have some." Yeah, because YOU PLAY THEM AUTOMATICALLY. What a dystopia.
-
Why the fuck is the Elasticsearch Docker image 900 MB? Why the fuck does it include a complete logging system, and why the fuck doesn't it have an up-to-date Alpine image?
The arrogance of some systems these days. You're part of someone's software, not the software itself. You don't have the right to claim x resources. It's not about you.
Same for Sentry: a logging application that literally requires 8 GB of RAM? I removed the limits and tried it anyway, and stuff just crashed. Congrats, a logging system that REALLY requires 8 GB of RAM. The most my VPS has is 4 GB, and of that you should only be allowed to use 512 MB max, imho.
I care about image sizes since my laptop only has a 114 GB drive and my internet is a 4G hotspot with a 50 GB/day limit (trust me, you can't find better for 40 euros). 114 GB is maybe a bit outdated, but be realistic: I only use Vim, VS Code, some SDKs and source files. Why would my hard disk ever be full? Because of bloated Docker setups. That's why. The other option is screwing over your system with everyone's configuration.
Alpine all the things!
-
Watching YouTube on TV in the bright morning sun with a cigarette in your left hand and an ice cream in your right hand while typing a random post with your tongue and toes.
Life is so good so often. I can't help it guys, I just can't rant. Life's too beautiful.
-
It's almost my birthday. My mom wanted to give me a month of ChatGPT for my birthday, but I have it already. Actually amazed by the spot-on suggestion. Recently, for Xmas, she was spot-on too: she gave me a warm 10 kg blanket. Ever slept under such a heavy blanket? You fall asleep in NO TIME. Heavy recommend!
Tip for when somebody asks what to give you as a present and you have no idea: supermarket stuff of their own choice. You'll learn about some new products that way and will have stuff you normally don't buy. So that's what I asked for.
A good friend who lives in Ukraine is coming to my birthday, so I'm happy.
-
Statistics. Tbh, it is way more; I didn't use the Codeium plugin for a long time. Regarding key presses, I'm in the top 0.10%. Longest streak is not impressive: 21 on Codeium and 40 on GitHub or so.
At this moment I'm very happy with the plugin; it knows me completely. * tab tab tab *. It almost always knows what I want to do. It advanced a lot last year; I quit it a few times last year for a few months because it often sucked. Now it's perfect. Especially under Vim it's very cool!
-
Happy new year, Ragnar, doing such an amazing job that AI would be a downgrade in every way. Two false positives since existence, and rarely does one slip through. People do actually call @ragnar if something slips through. It makes me happy and surprised how many people read it the few times I mentioned that feature. Says a lot about reader activity in the community.
The spammers didn't celebrate New Year's, it seems. Hard workii workii. Also, happy new year.
Buffon, an especially happy new year to you. You owe me two days of work for your shit. At least it's nice to know that the spam king is a real dev.
I hope this will be a good year for devRant. There's no reason it wouldn't be. We're the coolest independent single-subject Reddit thread on the internet.
-
I have a new UNTRAINED bot on my site. It's based on OpenAI now. And that's why it's blazing fast and blazing useless.
I can tell you why bots are so boring and will surely make the dead internet theory come true. My datasets, for example, never contain really disturbing stuff ACCORDING TO NORMAL PEOPLE. EVERY TIME:
"The job failed due to an invalid training file. This training file was blocked by our moderation system because it contains too many examples that violate OpenAI's usage policies, or because it attempts to create model outputs that violate OpenAI's usage policies."
Now I'm really done. I'm gonna email them about their unusable training system.
In theory, I could test the messages one by one first to see if they're bad (a sketch of that is below). I don't want to do or pay for that. There should be an option to skip the data it considers disturbing instead of cancelling a whole dataset over 0.1%. You also don't want to know how long it takes BEFORE it's finished validating your set. I think someone is doing it manually and clicking an 'Uh uh..' button..
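A minimal sketch of that per-message pre-check (assuming the official openai package; the file names and the chat-style JSONL layout are assumptions, not my actual setup):

import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
kept, dropped = [], 0
with open("training.jsonl", encoding="utf-8") as f:  # hypothetical file name
    for line in f:
        example = json.loads(line)
        text = " ".join(m["content"] for m in example["messages"])
        # Ask the moderation endpoint instead of letting the whole job fail.
        if client.moderations.create(input=text).results[0].flagged:
            dropped += 1
        else:
            kept.append(example)

with open("training.clean.jsonl", "w", encoding="utf-8") as out:
    out.writelines(json.dumps(e) + "\n" for e in kept)
print(f"kept {len(kept)} examples, dropped {dropped} flagged ones")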
Also, for the people who think they have GPT-4o just by having the API: you're being lied to. The 'own GPT' option on the paid OpenAI plan is way more advanced than the ones you build yourself.
They don't give us the really good stuff!
Oh, btw! The input data for my training is based on FORMER conversations with the bot. I automated a script to replay a conversation I had, selected those messages and clicked 'train'. So it even complained about its OWN data! That data was already saying stuff like 'I can't help you with that' IN my training data. So, you 'corrected' and corrupted my data, and now it's still not good enough for round 2?
I would really love to go back to local LLMs, but I can't imagine ever having a machine that generates as fast as the real GPT does. I also prefer to do it myself, but it's David vs. Goliath, even with a 5k computer. I'm sure.
Low-quality rant, I know. I'm typing while still frustrated. For the people who often think censorship is needed: this is the result! According to someone else, YOU are the one who has to be censored. Don't forget that.
-
I needed something that generates data very fast for a networking application I'm writing, but I noticed that /dev/urandom and /dev/random are not very consistent in speed.
Still, I needed something fairly random with a more consistent speed. So I made an application that caches 1000 random values upfront and uses them for calculation. Now I have my own randomization algorithm backed by the uniqueness of the original rand(). For fun I added data to the set, like some phone numbers. I can stare at the data for ages to find things in common or interesting combinations in it.
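A minimal sketch of that caching idea (my real implementation differs; the mixing step here is just an illustration and is definitely not cryptographically secure):

import os

POOL_SIZE = 1000
# Fill the pool once from /dev/urandom, then never touch the kernel again.
pool = [int.from_bytes(os.urandom(8), "little") for _ in range(POOL_SIZE)]
_counter = 0

def fast_rand() -> int:
    # Recombine two cached entries per call for a steady output rate.
    global _counter
    a = pool[_counter % POOL_SIZE]
    b = pool[(_counter * 31 + 7) % POOL_SIZE]
    _counter += 1
    rotated = ((b << 13) | (b >> 51)) & 0xFFFFFFFFFFFFFFFF
    return a ^ rotated

if __name__ == "__main__":
    print([fast_rand() for _ in range(5)])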
I verified with GPT whether the algorithm is unique, and it's a fail. It generated a complete ML script by itself to check it. Very awesome.
You use urandom, I use retoordom. We are not the same.
-
You watch 20 hours of Netflix a week, featuring broken families.
I've been reading Donald Duck since I was a kid, once a week for 30 minutes, featuring good family values.
We're not the same.
Christmas was great. I love my once-dysfunctional family. Dysfunctional as F, but solid as a rock when needed.
-
Do you guys remember, a few days ago, that I was looking for someone with a certain email address because he didn't receive his email, because HE had an insecure mail server? I was sad, because I love new members. While my site has everything public (even API URLs to API services without any auth, email confirmation off, hardcoded links to internal servers like retoor42 in repositories), someone still managed to think he hacked me: https://retoor.molodetz.nl/hi/.... That guy! Ironically, I even went looking for him to give him credentials! Listing all members of my site is even possible because I literally have, right at the bottom of my site, a link to the most advanced API ever, where you can list everything the site contains THAT I ALLOW YOU TO. That hacker calls it "magic". I have the URL to that "magic" literally on every page, Einstein.
Don't let that guy find out what you can do with api.molodetz.nl without any protection..
Dear lord. It's probably the most public site with no secrets ever.
Also, the server runs with a small password, and it's a pwned password. SSH is on port 22. No security measures are taken.
I can assure you, I know security; I worked on cloud shit for three years at one of the biggest Dutch cloud providers, kind of an AWS.
You won't be able to do anything I don't want you to do, or cause any big damage.
Dear lord.
-
AI is cheap. It only costs a few euros to translate a whole Harry Potter book using the best AI models, I just calculated.
Wow, a few euros for the whole book? That's cheap! Yes, but it takes time.
Days? Hours? Minutes?
And how much do things other than a translation service cost, if you're doing something more complex and need the most expensive model? My article is all about wasting as much money as possible on AI.
The article we really need regarding AI pricing: https://molodetz.nl/retoor/gists/...
I translated the terrible AI pricing into something humans can understand: time and Harry Potter.
If you haven't read Harry Potter, read that first. Priorities.
And this, this was a great Christmas night!
TL;DR: if you used the best AI (4o!) for a translation service, it would cost you $0.0015 per minute.
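For a sense of scale, a back-of-the-envelope sketch of the per-book estimate (the word count, tokens-per-word ratio and per-token prices below are my own illustrative assumptions, not the figures from the article):

WORDS_IN_BOOK = 80_000           # rough length of a Harry Potter novel
TOKENS_PER_WORD = 1.3            # rough average for English prose
INPUT_PRICE = 2.50 / 1_000_000   # assumed $ per input token, 4o-class model
OUTPUT_PRICE = 10.0 / 1_000_000  # assumed $ per output token

tokens = WORDS_IN_BOOK * TOKENS_PER_WORD
# A translation reads the book once (input) and writes it once (output).
cost = tokens * INPUT_PRICE + tokens * OUTPUT_PRICE
print(f"~{tokens:,.0f} tokens, estimated cost: ${cost:.2f}")
# prints: ~104,000 tokens, estimated cost: $1.30
-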
Family, family, family, charity, charity, some coding. Some dR. Even had a Grinch. A lot of snow. This Christmas is perfect and one more evening to go!
Happy Xmas guys!
-
Has anyone here registered on my site using an evusd.com email address? I received a delivery failure notification because your server isn't correctly secured. You requested a password recovery. My mail server only allows secure mail over TLS.
Make a new account or contact me here so I can reset it. I would be happy to have someone new to collaborate with. I am at 7 registered users now :P * proud *!
-
I am monitoring my behavior per hour based on keylogger data. I summarize it using AI. It gives some nice compliments about advanced shortcut usage and positive observations about me. It also tells you what applications you're using and what languages you program in.
It's all fun and games until:
This is followed by navigating to a website (xhmaster.com), possibly suggesting a break or a shift in activity.
Well, that's one way to put it.
-
Hi everyone,
I was surprised how many lessons there are to be learned using the OpenAI API, especially when configuring your own agents..
So, I made a small doc in the form of a class for people who want to start right away, containing all URLs and lessons learned. I deliberately did NOT make a library out of it (like many losers did), so it can live natively in your project and you don't have an extra dependency. Apart from the OpenAI API package it's dependency-free on purpose.
You could have your own GPT in the CLI within literally 15 minutes. Links to the dashboards where you need to register are all mentioned in the doc. It's written for programmers who just aren't familiar with Python, and for programmers who want to start right away without issues, which seems to be hard in the AI business. A lot of low-quality stuff out there.
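As a taste, a minimal CLI chat loop sketch (this is not the class from the doc; it assumes the official openai package and an OPENAI_API_KEY environment variable, and the model name is just an example):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user = input("> ")
    if user.strip().lower() in ("exit", "quit"):
        break
    history.append({"role": "user", "content": user})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(answer)

The full doc with all the lessons and dashboard links: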
https://molodetz.nl/retoor/gists/... -
Dammit, it's morning. That's when I get my kind of post-nut clarity. Things I thought last night were a good idea, not anymore. Good that I went to sleep. Phew. New rule: only decide what to do regarding new projects in the morning. The new background of my site is a good example of deciding things while tired. What serious dev has stuff like that? I dunno, maybe I'll leave it for now; the site is unprofessional for many reasons anyway. There is a duplicate of it with my real name on it. I made a reverse proxy project that replaces HTTP content by interpreting the HTTP traffic, fixing the Content-Length after replacing (else browsers will load endlessly or give an error), with support for WebSockets and buffered content, so quite a bit is implemented. If I replaced retoor with my real name now, you'd see it literally everywhere, in git history and such. Probably even in downloaded zip files; I have to check that it doesn't corrupt them. This software can also be used to make sure something is NOT published. You could put a password in it for sensitive data, for example, so no one will figure it out if you accidentally stored a password in git. Right now I check this by grepping git for my common passwords. But I use env vars for passwords these days.
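For the curious, a minimal sketch of that rewrite trick (a hypothetical helper, not my actual proxy code): after replacing bytes in a response body you have to recompute Content-Length, otherwise the browser keeps waiting for bytes that never arrive.

def rewrite_response(headers: dict, body: bytes, old: bytes, new: bytes):
    # Replace content in the body and fix Content-Length so browsers
    # don't hang or error out on a mismatched length.
    new_body = body.replace(old, new)
    headers = dict(headers)
    if "Content-Length" in headers:
        headers["Content-Length"] = str(len(new_body))
    return headers, new_body

headers, body = rewrite_response(
    {"Content-Type": "text/html", "Content-Length": "27"},
    b"<h1>Hello from retoor!</h1>", b"retoor", b"real-name")
print(headers["Content-Length"], body)  # 30 b'<h1>Hello from real-name!</h1>'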
Got off topic; no decisions when tired anymore!
-
I have the paid version of GPT now. Kind of accidental: I wanted the API version and that turned out to be something completely different. Didn't know. I have a lot of opinions / mixed feelings about it so far. Take the image generation: this image is pretty much exactly what I wanted. But it took so many iterations, and it kept forgetting what I'd said a few lines before. Every time it added text balloons even though I'd told it to remove them; then it removes them, but generates a completely new image with something else in it I don't want. Then I fix that, and tadaaa, another text balloon.
This is the question it all comes down to: is GPT in its current state worth the money? I have no idea; I need some more time.
-
Some people wanted to download their rants / comments. I'm working on it.
Three lines of native Python code (no dependencies) to see what @Lensflare said:
from pprint import pp
from xmlrpc.client import ServerProxy
pp(ServerProxy("https://victoria.molodetz.nl/rpc").get_comments({'id':{'gt':42},'_limit':1337,'username':'Lensflare'}))
I think this gives enough of an example of the possibilities. Use your imagination for how to retrieve rants.
Limitations:
- Not all of dR is available yet, but way more than is retrievable using the public dR API. This system uses the user website as its source.
- It doesn't show rant_id or comment_id, and it won't, to prevent abuse. Later today there will be a way to attach comments to their rants.
- There is a maximum limit of 2500 records. But soon you'll be able to get the comments for every rant per user. You won't reach this limit in normal usage.
Have fun with it! Don't worry about abusing the API. Everything is allowed. It's fast as F. If it doesn't respond, it wasn't you: I work on it and often reboot services, and it takes some time to recover its state.
If you're not familiar with Python, that's OK. Check whether you're a decent dev and have python or python3 on your computer. Just start it and paste the lines. The other way is to save these three lines to a file ending with .py and execute python3 [your-file].
Another example for people not used to Python:
from pprint import pprint as pp  # nice printing of values
from xmlrpc.client import ServerProxy

client = ServerProxy("https://victoria.molodetz.nl/rpc")
comments = client.get_comments({'_limit': 1337})
for comment in comments:
    if comment.get('username', 'default username') == 'kiki':
        print(comment.get('body'))
        pp(comment)
Happy hacking!
-
I have a score difference of two in two posts right after each other on the same page. This is how it must feel to win the lottery without gaining anything. I should buy a ticket. The odds are with me.
-
Black box. It does seem to put messages with a URL in a certain category, though, but even that's not always correct. It's trained on 3000 normal dR messages and 3000 spam dR messages, 6000 dR messages in total. Many epochs, but not good enough to use yet. The idea that the system could classify without discriminating against new users is off the table. That discrimination is needed as a safety margin. The original spam system is a bit simple, but it doesn't produce false positives and works great. Still, I want to make something advanced out of it for the sake of education. Tomorrow I'll have my neural networks book. Probably in a couple of weeks I'll have some good insights into how to improve all of this. New hobby :)
(Pretrained 3B models are fine for recognizing spam, btw, but they cost resources: 8 CPUs at 100%. A model trained purely on spam doesn't, and it's fast. With a pretrained model you can't do mass classification.)
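A minimal sketch of the kind of lightweight self-trained classifier I mean (scikit-learn as an example stack; the two input files, one message per line, are an assumed layout, not my actual training setup):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical files: one dR message per line.
spam = open("spam.txt", encoding="utf-8").read().splitlines()
normal = open("normal.txt", encoding="utf-8").read().splitlines()
texts = spam + normal
labels = [1] * len(spam) + [0] * len(normal)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
print(model.predict(["Check out my site http://spam.example"]))  # [1] means spam
-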
Hello all,
Did a MASSIVE spam downvote action..
If you've watched the algo list, you kind of had an idea of how much spam there was. Besides rants, it also downvotes the spam comments posted under regular rants. Also made progress with historic rants.
I think I'll leave it at this for now; gonna do some machine learning for spam and will apply that as a filter in the future.
The site is now clean enough that you won't encounter spam just browsing it.
Our new frenemy Buffon gave us the noindex tag as a tip, which is a great idea for users who don't have 5 reputation or so yet. Will contact dfox to see if he's willing to make this small change.
-
So, a while ago I made a keylogger (called tikker; the project is on my site) and now I've created a plot with statistics. Interesting: I work literally full time (24h) :P So now I have a graph of activity per hour:
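A minimal sketch of how such a plot can be made (the log format here, an ISO timestamp and a key per line separated by a tab, is an assumption for illustration, not tikker's actual format):

from collections import Counter
from datetime import datetime
import matplotlib.pyplot as plt

counts = Counter()
with open("tikker.log", encoding="utf-8") as log:  # hypothetical log file
    for line in log:
        timestamp, _key = line.rstrip("\n").split("\t", 1)
        counts[datetime.fromisoformat(timestamp).hour] += 1

hours = range(24)
plt.bar(hours, [counts.get(h, 0) for h in hours])
plt.xlabel("hour of day")
plt.ylabel("keystrokes")
plt.title("Keyboard activity per hour")
plt.show()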