<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing.]]></title><description><![CDATA[<p>My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. I became a professional computer toucher because they do exactly what you tell them to. Not always what you wanted, but exactly what you asked for.</p><p>LLMs turn that upside down. They turn a very autistic do-what-you-say, say-what-you-mean communication style with the machine into a neurotypical conversation talking around the issue, but never directly addressing the substance of the problem.</p><p>In any conversation I have with a person, I’m modeling their understanding of the topic at hand, trying to tailor my communication style to their needs. The same applies to programming languages and frameworks. If you work with a language the way its author intended, it goes a lot easier.</p><p>But LLMs don’t have an understanding of the conversation. There is no intent. It’s just a most-likely-next-word generator on steroids. You’re trying to give directions to a lossily compressed copy of the entire works of human writing. There is no mind to model, and no predictability to the output.</p><p>If I wanted to spend my time communicating in a superficial, neurotypical style my autistic ass certainly wouldn’t have gone into computering.
LLMs are the final act of the finance bros and capitalists wrestling modern technology away from the technically literate proletariat who built it.</p>]]></description><link>https://forum.other.li/topic/74345ad8-6a28-435a-ad57-3eb5d657c4c2/my-biggest-problem-with-the-concept-of-llms-even-if-they-weren-t-a-giant-plagiarism-laundering-machine-and-disaster-for-the-environment-is-that-they-introduce-so-much-unpredictability-into-computing.</link><generator>RSS for Node</generator><lastBuildDate>Mon, 20 Apr 2026 20:12:43 GMT</lastBuildDate><atom:link href="https://forum.other.li/topic/74345ad8-6a28-435a-ad57-3eb5d657c4c2.rss" rel="self" type="application/rss+xml"/><pubDate>Fri, 20 Mar 2026 01:07:15 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. on Sat, 28 Mar 2026 22:44:38 GMT]]></title><description><![CDATA[<p><span><a href="https://hachyderm.io/@EmilyEnough">@<span>EmilyEnough</span></a></span> THIS! So much this. I've said before that the worst thing about how we use LLMs is they destroy the basic computing concept of Garbage In=Garbage Out. They turn it into Anything In=Maybe Garbage Out.</p>]]></description><link>https://forum.other.li/post/https://gamerstavern.online/ap/users/116167869750398409/statuses/116309221586616656</link><guid isPermaLink="true">https://forum.other.li/post/https://gamerstavern.online/ap/users/116167869750398409/statuses/116309221586616656</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Sat, 28 Mar 2026 22:44:38 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. 
on Tue, 24 Mar 2026 17:33:18 GMT]]></title><description><![CDATA[<p><span><a href="https://wargamers.social/@evildrganymede">@<span>evildrganymede</span></a></span> This post is a hallucination. It's weird how concepts people came up with 2 years ago and have since been disproven are repeated as fact. You're not an LLM, but here you are, bullshitting because you need updated training. Not sure why you're better, I guess because you have authority as a human being, and have totally misled us... that's better?</p>]]></description><link>https://forum.other.li/post/https://beige.party/users/joe/statuses/116285348166269928</link><guid isPermaLink="true">https://forum.other.li/post/https://beige.party/users/joe/statuses/116285348166269928</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 17:33:18 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. on Tue, 24 Mar 2026 17:26:36 GMT]]></title><description><![CDATA[<p><span><a href="https://infosec.exchange/@SearingTruth">@<span>SearingTruth</span></a></span> <span><a href="https://hachyderm.io/@EmilyEnough">@<span>EmilyEnough</span></a></span> It's cute to read humans say they are the height of invention</p>]]></description><link>https://forum.other.li/post/https://beige.party/users/joe/statuses/116285321822292626</link><guid isPermaLink="true">https://forum.other.li/post/https://beige.party/users/joe/statuses/116285321822292626</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 17:26:36 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. 
on Tue, 24 Mar 2026 14:27:43 GMT]]></title><description><![CDATA[<p><span><a href="https://techhub.social/@DocBohn">@<span>DocBohn</span></a></span> <span><a href="https://hachyderm.io/@EmilyEnough">@<span>EmilyEnough</span></a></span> That is true! People defer to things they shouldn't all the time. I just think LLMs are the next level of this, one that's about to be way worse, and way more societally impactful, than any before. I mean, look at what it's doing to primary education, like smartphones - the shiny silicon tablets designed to a tee to trap your attention - didn't do enough damage to it already.</p>]]></description><link>https://forum.other.li/post/https://bark.lgbt/users/wallabra/statuses/116284618430136775</link><guid isPermaLink="true">https://forum.other.li/post/https://bark.lgbt/users/wallabra/statuses/116284618430136775</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 14:27:43 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. 
on Tue, 24 Mar 2026 11:53:33 GMT]]></title><description><![CDATA[<p><span><a href="https://hachyderm.io/@EmilyEnough">@<span>EmilyEnough</span></a></span> <br />Not to be gatekeeping, but normies should have never gotten control of the Internet.</p>]]></description><link>https://forum.other.li/post/https://infosec.exchange/users/kallisti/statuses/116284012187959416</link><guid isPermaLink="true">https://forum.other.li/post/https://infosec.exchange/users/kallisti/statuses/116284012187959416</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 11:53:33 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. on Tue, 24 Mar 2026 11:53:09 GMT]]></title><description><![CDATA[<p><span><a href="https://hachyderm.io/@EmilyEnough">@<span>EmilyEnough</span></a></span> </p><p>"I found a computer. Wait a second, this is<br />cool. It does what I want it to do. If it makes a mistake, it's because I screwed up."</p><p>Horrible that this amazing core trait of computers is getting eroded.</p>]]></description><link>https://forum.other.li/post/https://infosec.exchange/users/kallisti/statuses/116284010636619738</link><guid isPermaLink="true">https://forum.other.li/post/https://infosec.exchange/users/kallisti/statuses/116284010636619738</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 11:53:09 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. 
on Tue, 24 Mar 2026 04:25:18 GMT]]></title><description><![CDATA[<p><span><a href="https://mastodon.social/@ennenine">@<span>ennenine</span></a></span> <span><a href="https://indiepocalypse.social/@gourd">@<span>gourd</span></a></span> <span><a href="https://neurodifferent.me/@mikemccaffrey">@<span>mikemccaffrey</span></a></span> <span><a href="https://hachyderm.io/@EmilyEnough">@<span>EmilyEnough</span></a></span> I guess I'm the wrong kind of disabled because this is how search engines do work now</p>]]></description><link>https://forum.other.li/post/https://beige.party/users/joe/statuses/116282249613116773</link><guid isPermaLink="true">https://forum.other.li/post/https://beige.party/users/joe/statuses/116282249613116773</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 04:25:18 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. on Tue, 24 Mar 2026 03:49:41 GMT]]></title><description><![CDATA[<p><span><a href="https://infosec.exchange/@SearingTruth">@<span>SearingTruth</span></a></span> <span><a href="https://infosec.exchange/@david_chisnall">@<span>david_chisnall</span></a></span> <span><a href="https://hachyderm.io/@EmilyEnough">@<span>EmilyEnough</span></a></span> Which is why the decision to apply it is made by people. 
And people can decide how to weight the mass death of innocents and we should not allow those decisions to be made by people who will get it wrong.</p>]]></description><link>https://forum.other.li/post/https://mastodon.nz/users/rupert/statuses/116282109564301976</link><guid isPermaLink="true">https://forum.other.li/post/https://mastodon.nz/users/rupert/statuses/116282109564301976</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 03:49:41 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. on Tue, 24 Mar 2026 03:47:51 GMT]]></title><description><![CDATA[<p><span><a href="https://mastodon.nz/@rupert">@<span>rupert</span></a></span> <span><a href="https://infosec.exchange/@david_chisnall">@<span>david_chisnall</span></a></span> <span><a href="https://hachyderm.io/@EmilyEnough">@<span>EmilyEnough</span></a></span> </p><p>It's a perfect example.</p><p>As machine learning comprehends nothing.<br />ST</p>]]></description><link>https://forum.other.li/post/https://infosec.exchange/users/SearingTruth/statuses/116282102369203312</link><guid isPermaLink="true">https://forum.other.li/post/https://infosec.exchange/users/SearingTruth/statuses/116282102369203312</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 03:47:51 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. 
on Tue, 24 Mar 2026 03:39:40 GMT]]></title><description><![CDATA[<p><span><a href="https://infosec.exchange/@SearingTruth">@<span>SearingTruth</span></a></span> <span><a href="https://infosec.exchange/@david_chisnall">@<span>david_chisnall</span></a></span> <span><a href="https://hachyderm.io/@EmilyEnough">@<span>EmilyEnough</span></a></span> Right, and if that asymmetry doesn't apply, as in your example, then it's not a good candidate for ML.</p>]]></description><link>https://forum.other.li/post/https://mastodon.nz/users/rupert/statuses/116282070178646896</link><guid isPermaLink="true">https://forum.other.li/post/https://mastodon.nz/users/rupert/statuses/116282070178646896</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 03:39:40 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. 
on Tue, 24 Mar 2026 03:23:24 GMT]]></title><description><![CDATA[<p><span><a href="https://mastodon.nz/@rupert">@<span>rupert</span></a></span> <span><a href="https://infosec.exchange/@david_chisnall">@<span>david_chisnall</span></a></span> <span><a href="https://hachyderm.io/@EmilyEnough">@<span>EmilyEnough</span></a></span> </p><p>"This kind of asymmetry is great for ML-based probabilistic approaches: the benefit of a correct answer massively outweighs the cost of an incorrect one."<br /><span><a href="https://infosec.exchange/@david_chisnall">@<span>david_chisnall</span></a></span></p>]]></description><link>https://forum.other.li/post/https://infosec.exchange/users/SearingTruth/statuses/116282006238957641</link><guid isPermaLink="true">https://forum.other.li/post/https://infosec.exchange/users/SearingTruth/statuses/116282006238957641</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 03:23:24 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. 
on Tue, 24 Mar 2026 03:12:08 GMT]]></title><description><![CDATA[<p><span><a href="https://infosec.exchange/@SearingTruth">@<span>SearingTruth</span></a></span> <span><a href="https://infosec.exchange/@david_chisnall">@<span>david_chisnall</span></a></span> <span><a href="https://hachyderm.io/@EmilyEnough">@<span>EmilyEnough</span></a></span> <br />I don't think anyone's claiming that there's any benefit of a correct answer that "massively outweighs the cost" of mass death.</p>]]></description><link>https://forum.other.li/post/https://mastodon.nz/users/rupert/statuses/116281961892123310</link><guid isPermaLink="true">https://forum.other.li/post/https://mastodon.nz/users/rupert/statuses/116281961892123310</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 03:12:08 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. on Tue, 24 Mar 2026 03:07:13 GMT]]></title><description><![CDATA[<p><span><a href="https://infosec.exchange/@david_chisnall">@<span>david_chisnall</span></a></span> <span><a href="https://mastodon.nz/@rupert">@<span>rupert</span></a></span> <span><a href="https://hachyderm.io/@EmilyEnough">@<span>EmilyEnough</span></a></span> </p><p>"This kind of asymmetry is great for ML-based probabilistic approaches: the benefit of a correct answer massively outweighs the cost of an incorrect one."<br /><span><a href="https://infosec.exchange/@david_chisnall">@<span>david_chisnall</span></a></span></p><p>Good god. Not if the incorrect answer leads to the mass death of the innocent. 
Which it almost always does.<br />ST</p><p>"Evil knows no ideology or boundary, only an eloquent stance behind them."<br />SearingTruth</p>]]></description><link>https://forum.other.li/post/https://infosec.exchange/users/SearingTruth/statuses/116281942542679881</link><guid isPermaLink="true">https://forum.other.li/post/https://infosec.exchange/users/SearingTruth/statuses/116281942542679881</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 03:07:13 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. on Tue, 24 Mar 2026 02:53:55 GMT]]></title><description><![CDATA[<p><span><a href="https://bark.lgbt/@wallabra">@<span>wallabra</span></a></span> <span><a href="https://hachyderm.io/@EmilyEnough">@<span>EmilyEnough</span></a></span> This isn't unique to LLMs. I've seen people defer to an Excel spreadsheet that plainly had been built with faulty assumptions.</p>]]></description><link>https://forum.other.li/post/https://techhub.social/users/DocBohn/statuses/116281890260723352</link><guid isPermaLink="true">https://forum.other.li/post/https://techhub.social/users/DocBohn/statuses/116281890260723352</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 02:53:55 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. on Tue, 24 Mar 2026 02:46:35 GMT]]></title><description><![CDATA[<p><span><a href="https://hachyderm.io/@EmilyEnough">@<span>EmilyEnough</span></a></span> </p><p>"There is zero artificial intelligence today. 
There could have been, but 50 years ago the decision was made by most scientists and companies to go with machine learning, which was quick and easy, instead of the difficult task of actually reverse engineering and then replicating the human brain.</p><p>So instead what we have today is machine learning combined with mass plagiarism which we call ‘generative AI’, essentially performing what is akin to a magic trick so that it appears, at times, to be intelligent.</p><p>While the topic of machine learning is complex in detail, it is simple in concept, which is all we have room for here. Essentially machine learning is simply presenting many thousands or millions of samples to a computer until the associative components ‘learn’ what it is, for example pictures of a daisy from all angles and incarnations.</p><p>Then companies scoured the internet in the greatest crime of mass plagiarism in history, and used the basic ability of machine learning to recognize nouns, verbs, etc. to chop up and recombine actual human writings and thoughts into ‘generative AI’.</p><p>So by recognizing basic grammar and hopefully deducing the basic ideas of a query, and then recombining human writings which appear to match that query, we get a very faulty appearance of intelligence - generative AI.</p><p>But the problem is, as I said in the beginning, there is no actual intelligence involved at all. These programs have no idea what a daisy, or love, or hate, or compassion, or a truck, or horse, or wagon, or anything else, actually is. 
They just have the ability to do a very faulty combinatorial trick to appear as if they do.</p><p>And while the human brain consumes around 20 watts, these massive pattern matching computers consume ever increasing billions.</p><p>However there is hope that actual general intelligence can be created because, thankfully, a handful of scientists rejected machine learning and instead have been working on recreating the connectome of the human brain for 50 years, and they are within a few decades of achieving that goal and truly replicating the human brain, creating true general intelligence.</p><p>In the meantime it's important for our species to recognize the danger of relying on generative AI for anything, as it's akin to relying on a magician to conjure up a real, physical, living, bunny rabbit.</p><p>So relying on it to drive cars, or control any critical systems, will always result in massive errors, often leading to real destruction and death."<br />SearingTruth</p>]]></description><link>https://forum.other.li/post/https://infosec.exchange/users/SearingTruth/statuses/116281861469845450</link><guid isPermaLink="true">https://forum.other.li/post/https://infosec.exchange/users/SearingTruth/statuses/116281861469845450</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 02:46:35 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. on Tue, 24 Mar 2026 00:48:47 GMT]]></title><description><![CDATA[<p><a href="https://lgbtqia.space/@anyia">@anyia@lgbtqia.space</a><span> There's not really much specific preparation involved in Type A infodumps. Preparation in immersing myself enough to do it, yes, but I did that, anyway, because I wanted to for my own reasons?<br />For example, the genetics of horse coats. 
I could spontaneously give a little speech and presentation about the Leopard complex that feels like I only lack a presentation running in the background to give it on a stage somewhere, but that's just from the whole topic being well understood and filed away. It was definitely a useful trait to have in university, when no one in the group actually prepared their part in the presentation, but since I understood what we were writing about... </span><a href="https://twipped.social/@twipped">@twipped@twipped.social</a> <a href="https://chaosfem.tw/@JoscelynTransient">@JoscelynTransient@chaosfem.tw</a> <a href="https://anarres.family/@faithisleaping">@faithisleaping@anarres.family</a></p>]]></description><link>https://forum.other.li/post/https://blahaj.zone/notes/ak7gnn0g5txs000d</link><guid isPermaLink="true">https://forum.other.li/post/https://blahaj.zone/notes/ak7gnn0g5txs000d</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 00:48:47 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. on Tue, 24 Mar 2026 00:41:43 GMT]]></title><description><![CDATA[<p><span><a href="https://blahaj.zone/@thatfrisiangirlish">@<span>thatfrisiangirlish</span></a></span> neat, thank you! I'd say I'm more likely to venture down path B. Path A feels like a lot of prep work. Or maybe it's a mix of the two? Often as I'm explaining something I realise I need to detour to provide necessary foundational knowledge, before returning to the first train of thought. 
Sometimes I get lost in the nesting.</p><p><span><a href="https://twipped.social/@twipped">@<span>twipped</span></a></span> <span><a href="https://chaosfem.tw/@JoscelynTransient">@<span>JoscelynTransient</span></a></span> <span><a href="https://anarres.family/@faithisleaping">@<span>faithisleaping</span></a></span></p>]]></description><link>https://forum.other.li/post/https://lgbtqia.space/users/anyia/statuses/116281370439568880</link><guid isPermaLink="true">https://forum.other.li/post/https://lgbtqia.space/users/anyia/statuses/116281370439568880</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 00:41:43 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. on Mon, 23 Mar 2026 23:35:35 GMT]]></title><description><![CDATA[<p><span><a href="https://hachyderm.io/@EmilyEnough">@<span>EmilyEnough</span></a></span> I'm stressing so hard over this... 
Like I've got 19 years of experience, senior engineer, went through the pipeline of:<br />- company over-relies on telemetry and fails to make product better<br />- blindly invests in ai to try and save themselves <br />- shit hits fan and mass layoffs</p><p>And honestly I'm not sure if I've got any job prospects in my future, in a field that's prioritizing getting it "done" regardless if the engineers understand the code they're committing.</p>]]></description><link>https://forum.other.li/post/https://fursuits.online/users/tkwolf/statuses/116281110403923598</link><guid isPermaLink="true">https://forum.other.li/post/https://fursuits.online/users/tkwolf/statuses/116281110403923598</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Mon, 23 Mar 2026 23:35:35 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. on Mon, 23 Mar 2026 23:05:31 GMT]]></title><description><![CDATA[<p><span><a href="https://hachyderm.io/@EmilyEnough">@<span>EmilyEnough</span></a></span> all that said I don’t use the slop for anything other than finding my own way to say things.</p>]]></description><link>https://forum.other.li/post/https://hachyderm.io/ap/users/115802204577229878/statuses/116280992200496756</link><guid isPermaLink="true">https://forum.other.li/post/https://hachyderm.io/ap/users/115802204577229878/statuses/116280992200496756</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Mon, 23 Mar 2026 23:05:31 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. 
on Mon, 23 Mar 2026 23:04:55 GMT]]></title><description><![CDATA[<p><span><a href="https://hachyderm.io/@EmilyEnough">@<span>EmilyEnough</span></a></span> I have a slightly different view. An LLM has some of the same language processing issues that I do, to the point that “I have LLM brain” is a useful cognitive model. It makes them surprisingly easy to “play” for me. The ability to take something I don’t understand and rewrite it into something else that aligns better with the corpus of normals-thought is definitely useful to me for understanding how normals communicate and bypassing my own limitations there.</p>]]></description><link>https://forum.other.li/post/https://hachyderm.io/ap/users/115802204577229878/statuses/116280989790153728</link><guid isPermaLink="true">https://forum.other.li/post/https://hachyderm.io/ap/users/115802204577229878/statuses/116280989790153728</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Mon, 23 Mar 2026 23:04:55 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. on Mon, 23 Mar 2026 22:58:49 GMT]]></title><description><![CDATA[<p><a href="https://lgbtqia.space/@anyia">@anyia@lgbtqia.space</a><span> This is largely a work in progress for myself, as well, so there are some edges that I'm not too sure about myself, and it's definitely a subjective thing - this certainly works like that for me, but for you or anyone else, I don't have the faintest idea.<br /><br />Type A is mostly structured, and basically there to share something with you that I find extremely interesting. To do this justice, and to give you the full picture like you deserve, I have to give you the exhaustive rundown. I looked hard into this, and I'm just so excited to share this with you! 
This is mostly motivated by some need to share, meant to convey a complex bit of information, and I'll probably get upset if you're not excited, as well.<br /><br />Type B is more exploratory, where I mostly verbalize the train of thought going on in my head. And believe you me, I can think and speak like an extremely pedantic text book. What I say draws on other things I know, but I am not quite sure where this one goes. This is mostly motivated by sharing my thoughts on a topic as they happen, meant to collaboratively work on a topic, but unfortunately, I'll get very upset if you cut into this, because that's cutting right into my thought process, and who likes to be interrupted just as you have an idea at the tip of your tongue.<br /><br />Anyway, I don't know which belongs where, or even if they belong to specific neurotypes, but it is a hypothesis. From the outside, both probably feel quite like getting this text read at you at pretty high speed.<br /><br /></span><a href="https://twipped.social/@twipped">@twipped@twipped.social</a> <a href="https://chaosfem.tw/@JoscelynTransient">@JoscelynTransient@chaosfem.tw</a> <a href="https://anarres.family/@faithisleaping">@faithisleaping@anarres.family</a></p>]]></description><link>https://forum.other.li/post/https://blahaj.zone/notes/ak7cq85wgc5p0086</link><guid isPermaLink="true">https://forum.other.li/post/https://blahaj.zone/notes/ak7cq85wgc5p0086</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Mon, 23 Mar 2026 22:58:49 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. 
on Mon, 23 Mar 2026 19:58:19 GMT]]></title><description><![CDATA[<p><span><a href="https://hachyderm.io/@EmilyEnough">@<span>EmilyEnough</span></a></span> (fwiw I think that ALL of the “AI” companies are some form of investment scam)</p>]]></description><link>https://forum.other.li/post/https://hachyderm.io/users/thomasfuchs/statuses/116280256037706106</link><guid isPermaLink="true">https://forum.other.li/post/https://hachyderm.io/users/thomasfuchs/statuses/116280256037706106</guid><dc:creator><![CDATA[thomasfuchs@hachyderm.io]]></dc:creator><pubDate>Mon, 23 Mar 2026 19:58:19 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. on Mon, 23 Mar 2026 19:56:51 GMT]]></title><description><![CDATA[<p><span><a href="https://hachyderm.io/@EmilyEnough">@<span>EmilyEnough</span></a></span> I think they only have a future and indeed utility when 1) run locally, 2) being based not on stolen data and 3) being highly customized to a specific task (there’s a few tasks I find them useful for, e.g. searching a text corpus with very vague terms)</p><p>and definitely not with a subservient chatbot userinterface</p>]]></description><link>https://forum.other.li/post/https://hachyderm.io/users/thomasfuchs/statuses/116280250307974129</link><guid isPermaLink="true">https://forum.other.li/post/https://hachyderm.io/users/thomasfuchs/statuses/116280250307974129</guid><dc:creator><![CDATA[thomasfuchs@hachyderm.io]]></dc:creator><pubDate>Mon, 23 Mar 2026 19:56:51 GMT</pubDate></item><item><title><![CDATA[Reply to My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. 
on Mon, 23 Mar 2026 19:45:04 GMT]]></title><description><![CDATA[<p><span><a href="https://hachyderm.io/@EmilyEnough">@<span>EmilyEnough</span></a></span> Yeah I stopped reading when you disrespected autism</p>]]></description><link>https://forum.other.li/post/https://mastodon.world/users/humanhorseshoes/statuses/116280203976655672</link><guid isPermaLink="true">https://forum.other.li/post/https://mastodon.world/users/humanhorseshoes/statuses/116280203976655672</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Mon, 23 Mar 2026 19:45:04 GMT</pubDate></item></channel></rss>