My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing.
-
@EmilyEnough this is a very justified rant
But the thought of computers being so autistic that people had to turn them neurotypical by adding LLMs is just so funny
-
My biggest problem with the concept of LLMs, even if they weren’t a giant plagiarism laundering machine and disaster for the environment, is that they introduce so much unpredictability into computing. I became a professional computer toucher because they do exactly what you tell them to. Not always what you wanted, but exactly what you asked for.
LLMs turn that upside down. They turn a very autistic do-what-you-say, say-what-you-mean communication style with the machine into a neurotypical conversation: talking around the issue, but never directly addressing the substance of the problem.
In any conversation I have with a person, I’m modeling their understanding of the topic at hand, trying to tailor my communication style to their needs. The same applies to programming languages and frameworks. If you work with a language the way its author intended, it goes a lot easier.
But LLMs don’t have an understanding of the conversation. There is no intent. It’s just a most-likely-next-word generator on steroids. You’re trying to give directions to a lossily compressed copy of the entire works of human writing. There is no mind to model, and no predictability to the output.
If I wanted to spend my time communicating in a superficial, neurotypical style my autistic ass certainly wouldn’t have gone into computering. LLMs are the final act of the finance bros and capitalists wrestling modern technology away from the technically literate proletariat who built it.
@EmilyEnough I completely agree. This rant inspired a tangential thought. There’s an article, “ChatGPT is Bullshit,” that talks a lot about how LLMs are bullshit generators. It starts with Harry Frankfurt’s famous essay “On Bullshit,” which defines bullshit as distinct from lying. As I recall, a lie requires two things: some reference to the truth (you can’t lie without knowing that what you’re saying isn’t true) and some intent. It argues that a liar needs both, while a bullshitter simply doesn’t care.
It’s clear that LLMs have no reference to something like truth. That’s easy. But intent? The article makes a decent case that LLMs have a built-in intent: deception. Pretending to be human is their intent. They “intend” to write words that are very human-like. So do they have intent? Maybe. It’s part of why all the best uses of LLMs are around fraud.
I thought this might be an interesting slight pivot off the idea that they don’t have intent. You’re right: they don’t have it like a human, who presumably has some point, some reason for writing what they write. But maybe there is a latent intent.
https://link.springer.com/article/10.1007/s10676-024-09775-5
-
@EmilyEnough Yeah, very telling that the people most excited about LLMs seem to be middle managers and C-levels: people adept at the "waffling about" conversations.
-
@EmilyEnough Wow, I have thought a lot about how coding LLMs are antithetical to my own OCD tendencies that want everything to be built and formatted in a very specific way (i.e. the right way), but had not considered how terrible the interface would be for folks who prefer not to have to process information conversationally.
I would love to read an entire book or series of articles about how LLMs as an interface enforce neurotypical modes of communication on neurodiverse people.
@mikemccaffrey Neurotypicality is just one of many biases that LLMs amplify. They also amplify the latent racism, sexism, ableism, and Western ideologies that dominate English-language writing online.
But until I read this post by @EmilyEnough , I didn’t realise what a neurodivergent torture device LLMs are. I think not enough has been written on that subject yet. My adult son is neurodivergent and an awesome programmer. He also hates LLMs with a passion. I’m now seeing how this all comes together.
-
@EmilyEnough thank you, I can absolutely relate to that!
The struggle of coworkers/managers not seeing the ambiguity or inaccuracy in the requirements they wanted me to write software for seems to be the same lack of understanding I run into when talking with the same people about software produced by LLMs. They seem to favor "something, but faster" over "the correct thing," and when this is pointed out, the "solution" seems to be to generate multiple iterations until finally reaching a "good enough" version. This is absolutely not how I understand my profession.
-
@EmilyEnough Another thing is that they seem to hijack the thinking autonomy of a lot of people. People defer to an LLM instead of putting the struggle and effort into researching and learning. I'm not anti-convenience, but when we don't need to think about things anymore, the brain's thinking faculties just atrophy.
-
@EmilyEnough as a non-autistic person, they are also horrible at the other communication styles, since those require comprehension and intuition. Like, I can’t read what an LLM is getting at because it’s not getting at anything. It’s a parlor trick at best, with no memory and no real relationship with me.
And yeah, the whole point of computers was to have something more dependable and predictable than human capacities, especially in…computing. Like, it’s almost impressive to make a computer bad at computing.
@JoscelynTransient "as a non-autistic person" says the lady with a hyperfixation on a cartoon character she strives to personify
-
@EmilyEnough “ You’re trying to give directions to a lossily compressed copy of the entire works of human writing.” — Perfect.
-
@JoscelynTransient @twipped I mean... If you're going to have an autistic hyper-focus on an ADHD cartoon character, Harley Quinn is the one...
-
@mikemccaffrey @EmilyEnough The "you can write natural language queries" idea has always gotten a response from me of "why the fuck would I want to do that?" Standard search engine queries and stuff are so much easier.
@gourd @mikemccaffrey @EmilyEnough "I don't want to spend thirty minutes learning! I don't want to read a guide! I don't want to learn how to use a tool! I'm afraid of learning!"
People are taught to be uncurious & to be terrified of learning things now. Maybe the reason most people don't complain about search engines being nonfunctional now is because most people do not use search engines, libraries, or other methods of seeking information. They're OK with not knowing. They prefer to not know. Very dystopian.
-
@JoscelynTransient @twipped As a total side note, I love how the HowToADHD girl has started an entire YouTube career around infodumping about ADHD coping strategies and hasn't come out as autistic yet.
-
@EmilyEnough @drahardja
Exactly. I too need my automations to be deterministic. The element of surprise is fine for a novel, but not for a health care integration.
-
@twipped On a serious note though, I really am not on the spectrum. I seriously considered it, but I fail almost every aspect: I don't get overstimulated (and have to actively work to understand what causes that for my autistic friends), and I'm guilty of failing to be explicit and direct enough with autistic friends, causing communication difficulties. (Thanks to living in Japan and in Japanese-language contexts for a number of years, I'm actually sometimes too indirect even for neurotypical USians.)
A person can just be severely ADHD and have the overlapping manifestations, like hyperfocus, communication difficulties with more neurotypically-aligned folks, and infodumping. To be a bit more direct: it does feel a bit invalidating to have people insist ADHD doesn't exist or doesn't have some of these manifestations, so I would really encourage people not to do that, please.
-
@faithisleaping @twipped hey, ADHD people also hyperfocus and infodump! This isn’t only an autistic thing

@faithisleaping @twipped going to leave this here too. Kinda don’t like where this joke is heading, cause it’s kind of miserable having people deny one’s ADHD symptoms

-
@JoscelynTransient@chaosfem.tw @twipped@twipped.social @faithisleaping@anarres.family I'm sitting here with both, and I can infodump in two different modes. They feel very different, even if they look the same from the outside.
-
@JoscelynTransient I'm sorry. I was just being playful. I didn't mean to deny your ADHD struggles. FWIW, I can usually clock an autistic person from a mile away, and I'm also pretty sure you're not. I'm just making dumb (and apparently insensitive) jokes.