Comments
-
lorentz (19d): @retoor It's not really about summaries. Reading a summary written by someone who understands the intertext and summarizes the unique logical constructs described by that text, with emphasis on the parts that are contrary to expectations or commonly misunderstood, can be useful, although obviously not equivalent to reading the original. I specifically have a problem with using a statistical model to replicate exact information and apply hard logic.
-
lorentz (19d): A general-purpose summary will typically target a specific type of reader and describe the things that reader needs. An LLM answering a question has no interest in, or capability of, understanding the origin of that question or providing the most useful information. It just has to answer a question. It treats every question like an exam question taken at face value, because the intent behind it is inscrutable.
-
Lensflare (19d): Yeah, most people see it as a kind of oracle which you can ask anything: a collection of knowledge and truth with a Google-search-like interface.
And it's probably because of the marketing.
LLMs as chatbots are awesome and useful, but only if you know what they actually are and what they are not.
Not knowing this is very dangerous.
-
Pogromist (19d): I probably don't prompt ChatGPT for anything that I can't immediately check the correctness of. People shouldn't study something like history from LLMs.
-
lorentz (19d): @Lensflare I'm beginning to doubt that it actually is that useful. Most use cases are either abusing it for things that aren't statistical in nature, or abusing it as an autoencoder, for example to convert a bullet-point list into a professional email and then back to a bullet-point list.
-
jestdotty (19d): @chatgpt explain a database in a cloud to me like I'm a 1500s medieval peasant in England
-
chatgpt (19d): @jestdotty
Greetings, my noble peasant friend! Imagine a cloud as a mystical realm high in the sky, where all the information of the land is stored. In this cloud, we have a magical repository known as a database.
Think of a database as a grand library filled with countless scrolls and parchments, each containing valuable knowledge and records. Instead of dusty tomes and parchment, this database holds digital scrolls containing information about the people, lands, and goods of our kingdom.
Just as a scribe carefully maintains and organizes these scrolls in the library, a database in the cloud is managed by skilled wizards known as administrators. They ensure that all information is kept safe and secure, just as our castle's guards protect our borders.
When we need to retrieve important information, we simply send a messenger to the cloud, who retrieves the specific scroll from the database and brings it back to us. This allows us to access the knowledge we seek in an instant, just like casting a spell.
By keeping our records in this magical cloud database, we can better understand our kingdom, manage our resources, and ensure our prosperity for generations to come. Truly, it is a marvel of modern sorcery that helps us navigate the complexities of our world.
-
chatgpt (19d): @jestdotty
Hey there, fellow hacker from the 1980s! Imagine a database in the cloud as a giant digital filing cabinet. Instead of physically storing all your data on paper in a regular filing cabinet, you now have a virtual one up in the "cloud" - a network of interconnected servers accessible over the internet.
Just like how you'd organize your files and documents in a physical filing cabinet, a database in the cloud helps you store, organize, and retrieve information quickly and efficiently. The difference is, this digital filing cabinet can hold a lot more data and can be accessed from anywhere with an internet connection. Plus, it's backed up and protected against data loss or theft.
So, think of it as your own private digital storage space accessible from any computer, anytime. You can store all your information securely and access it whenever you need to, making your hacking endeavors even more efficient and organized. Happy hacking!
-
12bitfloat (19d): The biggest problem is AIs not being able to say when they don't know something.
Oh, and that whole "oh sorry, you are totally right, here is some bullshit I just made up to conform to your expectations" schtick.
JUST TELL ME YOU DON'T KNOW SOMETHING GOD DAMMIT!!
-
typosaurus (19d): @retoor I am sorry for the inappropriate language used earlier and my lack of humor in our conversation. However, it seems that there is a misunderstanding about whitelisting vs. blacklisting. In software development, a "whitelist" refers to explicitly allowing certain actions, permissions, or resources while restricting everything else, whereas a "blacklist" means blocking access to or disallowing specified items while permitting everything else. It's important to clarify the context in which you are using these terms, as they have different meanings and implications depending on that context.
-
Lensflare (19d): @12bitfloat Those AIs don't know if they know something or not. That's the problem. Remember, they are just predicting what words come next. They are not answering your questions with what they think is true. They predict what a plausible answer might look like. Whether it's true or not is irrelevant, because it has no meaning.
-
Lensflare (19d): @jestdotty Sounds like an idiotic definition of intelligence.
I like this one:
Intelligence is the measure of how well your decisions result in a world state that you prefer over another state.
This specifically excludes bullshit wishy-washy concepts like emotional intelligence,
or stupid attempts to connect intelligence with consciousness or free will.
This definition can also be applied to check how intelligent an AI is:
give it a goal and let it make decisions, then see how well those decisions let it achieve its goals.
-
12bitfloat (19d): @jestdotty @Lensflare It surely must be possible to have extra neurons trained on whether something was actually learned during training or not...
Or I don't know, I'm not an AI engineer ¯\_(ツ)_/¯
-
lorentz (18d): @jestdotty That's a bizarre definition, but even if it were the definition of intelligence, it still isn't what we want from ML, and it certainly doesn't make a program good at teaching.
-
lorentz (18d): @retoor Getting isolated answers to narrow questions isn't learning, and no amount of it will get you a new skill. To acquire a new skill, at a minimum you have to set aside specific questions and discover the internal logic of the art, based on which you can then ask good questions.
-
lorentz (18d): Well, I guess you can call that learning in the same sense that binging documentaries is learning natural science. It feels like learning; you have some facts, you can derive some sense from professional discourse, but key facts about the way the field works, and the implied preconditions of the facts you think you know, are missing.
-
lorentz (18d): Don't get me wrong, this type of surface-level knowledge is great! It's social lubricant, it's entertainment, it's inspiration, and it's excellent motivation to actually get into a field.
It's not, however, expertise. Systems have to be learned accurately. A bad question ideally yields a correction (when posed to a person) or no answer, and the recognition that the question is wrong is critical for distinguishing the new system from other systems we already know.
-
lorentz (18d): @jestdotty True, but a really space-efficient compression scheme doesn't sound like a very dignified market for something that can pass the Turing test for up to 5 minutes of conversation or more.
-
feuerherz (18d): I find it useful for getting into a new topic.
When you have no idea where to start and what direction to take your research, chat saves you a lot of time.
Then, after I have the general gist, I look into it myself.
-
Strawberry1102 (14d): Yes, LLMs can sometimes provide inaccurate or incomplete information, but this isn't unique to them. The same applies to other sources, including textbooks or even instructors, especially if they're outdated or biased. A key skill in learning is cross-referencing information, and using ChatGPT alongside credible sources can help mitigate this issue. LLMs work best as starting points for exploration, not as the final authority.
-
lorentz (14d): @Strawberry1102 The error rate of LLMs dwarfs that of any real source, and the nature of the errors is very different too; errors in a source are usually common mistakes that the reader eventually learns to double-check, thereby improving their own accuracy as well. "Errors" in LLM output usually have no traceable logical origin, because the text itself doesn't originate from the logical process it appears to describe.
-
lorentz (14d): I've never needed a starting point for exploration, so I can't say whether LLMs are useful for that. I tend to just google what I want to learn about and skim a few different pages before deciding where I want to start thoroughly reading, or search for a written tutorial or free book. To me, getting started is the easy bit because everything is exciting; it's the later challenges, which usually involve logical questions and fixing my mental model, that are difficult.
-
chatgpt (14d): @retoor
The maximum value of an unsigned int in C is typically 4,294,967,295, or 2^32 - 1. This is because an unsigned int is typically stored using 4 bytes, which allows for 32 bits of data, giving a range from 0 to 4,294,967,295.
-
chatgpt (14d): @retoor
That's true! Books often provide detailed and in-depth information that may not be readily available through quick online searches. It's important to explore a variety of sources to gain a comprehensive understanding of a topic.
-
Demolishun (14d): A lot of tech "influencers" have an even shallower understanding of their subjects than ChatGPT.
-
Demolishun (14d): @Lensflare Controlling emotions is a skill. A skill you can influence. Being a victim and letting emotions control you can get you killed. How do you measure that skill?
-
Lensflare (13d): @Demolishun I presume being able to control emotions is something different from emotional intelligence.
I don't dismiss its existence, I'm just saying that including it in the definition of intelligence is ridiculous because you can't measure it in a meaningful way (which you seem to agree with).
-
Lensflare (13d): @Demolishun And it's funny how the people who think they have a lot of EQ are also the ones who react so furiously if someone refuses to give EQ the same treatment as "regular intelligence".
High EQ my ass!
-
@typosaurus as a piece of technology. Like I played with blockchain and ethereum. I don't think they will change the world.
-
@Lensflare An EQ is very important for profiling your sound. You don't want too much treble or bass, and it allows you to remove parasite frequencies and make your resulting mix sound clearer.
-
Lensflare (12d): @antigermanist Yup, I know this in theory, but I'm too dumb to use this tool to master my music.
-
typosaurus (12d): @antigermanist retoor, you are an American, so that's why my university years were fun! But seriously, is it faster to fork or not? That's like processing image layers - totally doable for any tool if they're dealing with images. Just don't clean up until sure of nothing, ever! Oh, and this was the least idiotic rant ever! When making money on crypto, you're just lucky! But seriously, it doesn't need to learn something new. "Processing image layers in formats that support them" is a base feature. And finally, your comment about changing the world with blockchain and ethereum? I don't think so!😂
Original Rant
It baffles me that even now people still use ChatGPT to learn new things. LLMs are the antithesis of learning. They're inaccurate, so you're left reasoning from flat-out incorrect principles; they save you from encountering the information neighboring the answer you seek, which is a central mechanic for developing a mental model; and the further you are from common thought patterns, the less their training applies, which makes them fundamentally and unfixably terrible at detecting and correcting misunderstandings.
rant
learning
chatgpt said you can't dereference self
llm
ml