<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[This is always the thing I think about with LLMs.]]></title><description><![CDATA[<p>This is always the thing I think about with LLMs. By definition, they are the statistical average of everything ever written. Using one only pushes you into being mediocre. They are homogenizing humanity, eliminating any variation in how people speak, write, and even think.</p><p><span><a href="https://tech.lgbt/@ngaylinn">@<span>ngaylinn</span></a></span> <a href="https://tech.lgbt/@ngaylinn/116284172328690293" rel="nofollow noopener"><span>https://</span><span>tech.lgbt/@ngaylinn/1162841723</span><span>28690293</span></a></p>]]></description><link>https://forum.other.li/topic/44382f7d-d02a-430b-a4c2-8b6178f7453f/this-is-always-the-thing-i-think-about-with-llms.</link><generator>RSS for Node</generator><lastBuildDate>Tue, 21 Apr 2026 05:48:23 GMT</lastBuildDate><atom:link href="https://forum.other.li/topic/44382f7d-d02a-430b-a4c2-8b6178f7453f.rss" rel="self" type="application/rss+xml"/><pubDate>Tue, 24 Mar 2026 12:55:21 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to This is always the thing I think about with LLMs. on Tue, 24 Mar 2026 15:12:41 GMT]]></title><description><![CDATA[<p>I have a hard time understanding how anyone who cares deeply about some problem would find a mediocre solution acceptable. Every time I hear someone promote how some AI system can do such and such, what I hear is a person saying they are okay with mediocre output in this situation. They are basically saying that they don't actually care at all, and that you shouldn't either.
</p><p>That's what I find so offensive: a kind of supremely arrogant nihilism and apathy.</p>]]></description><link>https://forum.other.li/post/https://thingy.social/users/malcircuit/statuses/116284795212731198</link><guid isPermaLink="true">https://forum.other.li/post/https://thingy.social/users/malcircuit/statuses/116284795212731198</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 15:12:41 GMT</pubDate></item><item><title><![CDATA[Reply to This is always the thing I think about with LLMs. on Tue, 24 Mar 2026 15:02:34 GMT]]></title><description><![CDATA[<p>Cory Doctorow has gone on record saying he believes the anti-AI backlash has its roots in "purity culture" — that people find genAI systems offensive because they were derived from morally corrupt sources, thus tainting the entire endeavor. I can only speak for myself, but my aversion to LLMs and genAI has nothing to do with maintaining "purity" in any way.</p><p>To me, it has to do with this idea of LLMs being a force of homogenization and mediocrity.</p>]]></description><link>https://forum.other.li/post/https://thingy.social/users/malcircuit/statuses/116284755417051256</link><guid isPermaLink="true">https://forum.other.li/post/https://thingy.social/users/malcircuit/statuses/116284755417051256</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 15:02:34 GMT</pubDate></item><item><title><![CDATA[Reply to This is always the thing I think about with LLMs. on Tue, 24 Mar 2026 14:02:22 GMT]]></title><description><![CDATA[<p><span><a href="https://thingy.social/@malcircuit">@<span>malcircuit</span></a></span> I feel insane; I don’t think most technology throughout the last 20+ years *ever* did what you describe.</p><p>I feel like such an outsider thinking that most of this cursed industry never respected anyone. Including a lot of FOSS etc.
too.</p><p>Not an AI defense but rather my pov on a fundamentally broken system.</p><p>Venture capital has ruined nearly every company I’ve worked for… far before any of this, and any diversity was tolerated only to the point it allowed them to hit deliverables.</p>]]></description><link>https://forum.other.li/post/https://mastodon.social/users/dotsie/statuses/116284518728854769</link><guid isPermaLink="true">https://forum.other.li/post/https://mastodon.social/users/dotsie/statuses/116284518728854769</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 14:02:22 GMT</pubDate></item><item><title><![CDATA[Reply to This is always the thing I think about with LLMs. on Tue, 24 Mar 2026 13:45:42 GMT]]></title><description><![CDATA[<p><span><a href="https://dice.camp/@johnzajac">@<span>johnzajac</span></a></span> <span><a href="https://thingy.social/@malcircuit">@<span>malcircuit</span></a></span> Is it unreasonable to consider the public domain?</p>]]></description><link>https://forum.other.li/post/https://mastodon.green/users/thematic/statuses/116284453195879279</link><guid isPermaLink="true">https://forum.other.li/post/https://mastodon.green/users/thematic/statuses/116284453195879279</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 13:45:42 GMT</pubDate></item><item><title><![CDATA[Reply to This is always the thing I think about with LLMs. on Tue, 24 Mar 2026 13:42:22 GMT]]></title><description><![CDATA[<p><span><a href="https://thingy.social/@malcircuit">@<span>malcircuit</span></a></span> </p><p>You know, in reading this thread I remembered reading something about how almost all of "everything ever written" [that we have access to]  happened after 1990. 
On the internet.</p><p>That medium famous for accuracy, rigor, and its compassion and kindness.</p><p>So really, when we use LLMs we're taking ~4k years of human ingenuity and language, discarding it, and replacing it with subreddits, fanfic, Facebook, and 4chan.</p>]]></description><link>https://forum.other.li/post/https://dice.camp/users/johnzajac/statuses/116284440074468929</link><guid isPermaLink="true">https://forum.other.li/post/https://dice.camp/users/johnzajac/statuses/116284440074468929</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 13:42:22 GMT</pubDate></item><item><title><![CDATA[Reply to This is always the thing I think about with LLMs. on Tue, 24 Mar 2026 13:36:27 GMT]]></title><description><![CDATA[<p>This is one of the main aspects of my philosophical opposition to "generative AI" and large language models. I don't care how "useful" they might be. Making my life easier or more productive isn't a sufficient justification to submit myself to a system that fundamentally does not respect anyone's unique experience and perspective. It's a system that's biased to enforce cultural conformity and stagnation, rather than embracing diversity and evolution.</p>]]></description><link>https://forum.other.li/post/https://thingy.social/users/malcircuit/statuses/116284416830125506</link><guid isPermaLink="true">https://forum.other.li/post/https://thingy.social/users/malcircuit/statuses/116284416830125506</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 13:36:27 GMT</pubDate></item><item><title><![CDATA[Reply to This is always the thing I think about with LLMs. on Tue, 24 Mar 2026 13:16:22 GMT]]></title><description><![CDATA[<p>All of the most important writing in history has been at least slightly difficult to read. Any truly novel idea is uncomfortable to a degree.
It often requires stepping outside of the status quo in some way and challenging assumptions.</p><p>LLMs never challenge assumptions. They are the assumptions crystallized — freezing and anchoring cultural development to one moment in time.</p><p>Using one isn't the future. It's trapping you in the past.</p>]]></description><link>https://forum.other.li/post/https://thingy.social/users/malcircuit/statuses/116284337862532969</link><guid isPermaLink="true">https://forum.other.li/post/https://thingy.social/users/malcircuit/statuses/116284337862532969</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 13:16:22 GMT</pubDate></item><item><title><![CDATA[Reply to This is always the thing I think about with LLMs. on Tue, 24 Mar 2026 13:14:34 GMT]]></title><description><![CDATA[<p><span><a href="https://thingy.social/@malcircuit">@<span>malcircuit</span></a></span> <span><a href="https://tech.lgbt/@ngaylinn">@<span>ngaylinn</span></a></span> my first inkling that LLMs were not good was many years ago, when I received a clearly AI-generated email from a colleague I’d known and had worked with for over a decade. This person admittedly was not a good writer, but they had their own voice, which I had come to appreciate, even if imperfect. I grieved that day when they chose to give up their own voice in favor of some bland corporatized copypasta.</p>]]></description><link>https://forum.other.li/post/https://mspsocial.net/users/iamdoon/statuses/116284330761341922</link><guid isPermaLink="true">https://forum.other.li/post/https://mspsocial.net/users/iamdoon/statuses/116284330761341922</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 13:14:34 GMT</pubDate></item><item><title><![CDATA[Reply to This is always the thing I think about with LLMs.
on Tue, 24 Mar 2026 13:06:41 GMT]]></title><description><![CDATA[<p>This is something I was thinking about when I learned that Cory Doctorow is an adamant user of Ollama, an open source LLM runner (whatever that means), to do spelling and grammar checking on his writing.</p><p>If I were a professional writer with an established voice, I wouldn't touch anything based on an LLM for fear that it would subtly erase that voice. So slowly you wouldn't even notice, your writing would become barely distinguishable from anyone else's.</p>]]></description><link>https://forum.other.li/post/https://thingy.social/users/malcircuit/statuses/116284299751132915</link><guid isPermaLink="true">https://forum.other.li/post/https://thingy.social/users/malcircuit/statuses/116284299751132915</guid><dc:creator><![CDATA[[[global:guest]]]]></dc:creator><pubDate>Tue, 24 Mar 2026 13:06:41 GMT</pubDate></item></channel></rss>