Firefox uses on-device downloaded-on-demand ML models for privacy-preserving translation.
They're not LLMs. They're trained on open data.
Should translation be disabled if the AI 'kill switch' is active?
-
@firefoxwebdevs you came up with the "kill switch" as if it were opt-in (it's *clearly* opt-out!), you put translation and LLM stuff into one box, *you* are the ones engaging in bad faith. Why don't you go ahead and ask us why we're punching ourselves?
@malte there will be granular options for this stuff. The question is about the non-granular "kill switch".
-
@firefoxwebdevs come on man.
-
I chose “No”. I find the translation feature very useful and greatly appreciate that it is local.
I do, however, think the local translation functionality should have its own enable/disable switch right next to the AI enable/disable switch, along with a brief (and expandable) description of what the feature does and where it runs.
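The suggestion above separates the broad "AI" switch from a dedicated translation switch. A minimal sketch of that settings model, assuming hypothetical names (these are not actual Firefox preferences):

```typescript
// Hypothetical settings model: two independent toggles, as proposed above.
interface MlSettings {
  aiFeaturesEnabled: boolean;  // the broad "AI" kill switch
  translationEnabled: boolean; // dedicated toggle for local translation
}

// Translation runs purely on-device, so under this proposal it is governed
// only by its own switch, not by the broad AI kill switch.
function translationActive(s: MlSettings): boolean {
  return s.translationEnabled;
}
```

With this layout, flipping the AI kill switch off leaves translation untouched unless the user also flips its adjacent switch.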
-
@firefoxwebdevs Here's a concrete example of what I mean, that should be pretty consistent with the Firefox UI design:
@joepie91 I think a lot of people in the replies would consider this sneaky. It's a tricky UX problem. But yes, granular control needs to be part of the solution, along with a kill switch.
-
@firefoxwebdevs you can't cherry-pick yourself out of your general bad faith engagement.
-
@firefoxwebdevs As worded, and if we can trust Mozilla, then the acceptable answer should be No for these reasons: ML is not AI, and on-device means nothing is sent out of the device. In exchange you get free translation. Win.
BUT… there’s the trust issue now.
And what we REALLY need is not an AI kill switch but more of a “data transfer/phone-home kill switch”, almost like a firewall, where we know the browser is not taking any data and sending it to a device we don’t control ourselves.
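The "phone-home kill switch" idea above amounts to a single gate that all non-user-initiated outbound traffic must pass through. A sketch of that semantics, with hypothetical names (this is not how Firefox's networking is actually structured):

```typescript
// Origin of an outbound request: explicitly triggered by the user
// (e.g. navigating to a page), or initiated in the background.
type RequestOrigin = "user-action" | "background";

// With the kill switch on, only traffic the user explicitly asked for is
// allowed out; all background/telemetry-style traffic is blocked.
function allowOutbound(phoneHomeKillSwitch: boolean, origin: RequestOrigin): boolean {
  if (phoneHomeKillSwitch) return origin === "user-action";
  return true;
}
```

The firewall framing is the key design choice here: instead of per-feature opt-outs, one chokepoint decides whether the browser may send anything to a device the user doesn't control.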
-
@mdavis folks want to disable 'AI' for more reasons than privacy. Privacy is important of course, but folks are also concerned about the training data and the energy used for training.
-
@firefoxwebdevs tbh, the open embrace of AI, and its addition into the browser, while knowing full well your user base is well known for being anti-big-tech and privacy-focused, was a mask-off moment.
I've already switched to librewolf, and I didn't have to disable/remove bullshit.
I recommend your ELT 1) get a grip and 2) remember you exist because of your userbase, not to please tech giants. If big tech had their way, they'd eat you alive. People who want AI slop aren't using Firefox.
-
@firefoxwebdevs That's exactly the motivation behind my suggestion, though - I've attached a mockup in an additional reply to hopefully make it clearer. The idea is not so much to redefine "AI" as to explicitly pick a definition, and then provide an additional option covering the broader definition, so that a user can essentially pick whichever definition they follow without getting into the technical weeds too much.
@joepie91 agreed.
@firefoxwebdevs we're not in those meetings, so we don't know what all is actually included in the AI module suite, or even whether that has been fully defined internally at this point. So of course there won't be a clean external consensus on what "it" is and whether it should be included or excluded; it's up to our interpretation.
-
@firefoxwebdevs But if the ML/AI training work is processed on the device and is not shared off device, and it is in support of a feature like translating a page (which should be prompted/selectable), then what’s the issue? You can say no and nothing happens. Or you can say yes, and the worst that happens is you chew up some local power on your laptop or PC. Or are you saying that even though the translation happens on the device, the RESULT of that training data is sent back out?
-
@firefoxwebdevs I can only speak for myself of course, but I'm someone who is strongly opposed to sneaky approaches, like hiding things in submenus or requiring people to go back later to disable new things, for example. And I'm also strongly opposed to basically everything in the current generation of "AI" (LLMs, GenAI, etc.) - but personally I wouldn't consider this sneaky, as it's immediately visible that there's a second choice to make, at the exact moment you disable "AI".
Of course if that stops being the case and the second option gets hidden behind an "Advanced..." button or foldout for example, it would be sneaky. But in the way it's shown in my mockup, I would consider it fine as it's both proactively presented and immediately actionable.
(I do still think that exploitative "AI" things should be opt-in rather than opt-out, but it doesn't seem like that's within the scope of options that will be considered by Mozilla, so I'm reasoning within the assumption of an opt-out mechanism here)
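The mockup's flow described above can be sketched as a single state transition: flipping the broad "AI" switch off immediately surfaces a second, visible choice about translation, rather than hiding it in a submenu. All names here are hypothetical, not Mozilla's actual implementation:

```typescript
// Hypothetical preference state for the two-step disable flow.
interface Prefs {
  aiEnabled: boolean;
  translationEnabled: boolean;
}

// keepTranslation is the answer to the second, proactively presented
// choice shown at the exact moment the user disables "AI".
function disableAi(prefs: Prefs, keepTranslation: boolean): Prefs {
  return { aiEnabled: false, translationEnabled: keepTranslation };
}
```

The point of the design is that `keepTranslation` is never defaulted silently: it is answered by the user at disable time, which is what makes the flow "proactively presented and immediately actionable" rather than sneaky.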
-
@mdavis I believe it's a moral stance due to how the models were produced.
-
@chillicampari @joepie91 fwiw I asked about translation because we're figuring out what to do specifically about translation.
-
@firefoxwebdevs Hookay… then this is less about a local feature or data sharing and more about an overall “Made with AI” concern where nothing related to AI *at*all*ever* taints the user’s browser, in or out. In that case, if the user turns on the AI kill switch, it should totally kill anything having to do with AI for those who take that position.
That’s an issue with these polls — too much undisclosed nuance to be able to answer properly.
-
@angelfeast @twifkak No, I don't think so. It says this (with a takedown compliance process posted afterward)...
License
These data are released under this licensing scheme: PD
We do not own any of the text from which these data has been extracted.
We license the actual packaging of these parallel data under the Creative Commons CC0 license ("no rights reserved").
-
@tasket @angelfeast https://paracrawl.eu/moredata says "This is a release of text from Internet Archive.... The project also used CommonCrawl which is already public." Those crawls quite famously/infamously include copyrighted content. I don't see anything to suggest they filtered those datasets for public domain annotations. (Not that such an annotation would be enforceable, but it would at least be an indication of intent.)
-
@firefoxwebdevs But wait… what if the developers used AI to help develop the code in the browser itself? Does that mean AI kill switch purists should rather not use the product at all?
-
@joepie91 they will be opt-in, but different people have different opinions about what that means. For us, it means models won't be downloaded or data sent to models without the user's request.
However, some folks have said the only meaningful opt-in would be a separate binary for the browser-with-AI, or even having to compile it manually.
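The opt-in semantics described above ("models won't be downloaded or data sent to models without the user's request") can be sketched as a gate on the download itself. This is a hypothetical API for illustration, not Mozilla's actual code:

```typescript
// Hypothetical model state: nothing exists on disk until requested.
interface ModelState {
  downloaded: boolean;
}

// A model is neither downloaded nor sent data until the user explicitly
// requests the feature that needs it.
function requestFeature(state: ModelState, userRequested: boolean): ModelState {
  if (!userRequested) {
    // No user request: do nothing -- no download, no data sent.
    return state;
  }
  // Download on demand, only at the moment of the user's request.
  return { downloaded: true };
}
```

Under the stricter definitions some replies propose, even this gate would not count as opt-in, since the capability still ships in the default binary.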
-
@firefoxwebdevs stop putting AI in your products, full stop. The machine translations made with the help of native speakers are 1000x better than the slop you're feeding us
-
@mdavis it's definitely a complicated topic! I guess it's down to us to figure out a model that best serves most people, while providing options to cover the rest.
-
@tasket @angelfeast It's not clear to me that I'm looking at the right place. Is this the data being used by Mozilla? I'm hoping that could be resolved by more than the 10 minutes of research I spent on it. I'd like even more for it to require much less research to understand the supply chain of a product offered as a public service. I've also got lots of reasons not to give them the benefit of the doubt here.