other.li Forum


The root problem with a lot of Fediverse moderation is a problem that is well known in the reputation-system literature:

Uncategorized · 18 posts · 12 commenters
Guest · #1

The root problem with a lot of Fediverse moderation is a problem that is well known in the reputation-system literature:

If the cost of creating a new identity is zero, then a reputation system cannot usefully express a lower reputation than that of a new user.

A malicious actor can always create an account on a different instance, or spin up a new instance on a throw-away domain. The cost is negligible. This means that any attempt to find bad users and moderate them is doomed from the start. Unless detecting a bad user is instant, there is always a gap between a fresh identity existing in the system and that identity being marked as bad.

A system that expects to actually work at scale has to operate in the opposite direction: assume new users are malicious and provide a reputation system that allows them to build trust. Unfortunately, this is in almost direct opposition to the desire to make the onboarding experience frictionless.

A model where new users are restricted from the things that make harassment easy (sending DMs, posting in other users' threads) until they have established a reputation (other people in good standing have boosted their posts or followed them) might work.
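
A minimal sketch of that gating model, in Python. Everything here is an assumption for illustration: the action names, the User shape, and the endorsement threshold of 3 are placeholders, not part of any real Fediverse software.

```python
from dataclasses import dataclass, field

# Hypothetical threshold: how many endorsements (boosts or follows)
# from users in good standing unlock the restricted actions.
GOOD_STANDING_ENDORSEMENTS = 3

# The actions the post singles out as harassment-prone.
RESTRICTED_ACTIONS = {"send_dm", "reply_in_others_thread"}

@dataclass
class User:
    handle: str
    in_good_standing: bool = False
    endorsers: set[str] = field(default_factory=set)

def endorse(endorser: User, newcomer: User) -> None:
    """Record a boost/follow from a user already in good standing."""
    if endorser.in_good_standing:
        newcomer.endorsers.add(endorser.handle)
        if len(newcomer.endorsers) >= GOOD_STANDING_ENDORSEMENTS:
            newcomer.in_good_standing = True

def may_perform(user: User, action: str) -> bool:
    """Deny harassment-prone actions until reputation is earned;
    everything else (e.g. posting to one's own timeline) stays open."""
    return action not in RESTRICTED_ACTIONS or user.in_good_standing
```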


Guest · #2

@david_chisnall What about malicious or overzealous moderators?


Guest · #3

@david_chisnall I'm seeing a lot of talk about reputation systems at the moment, applied to open source contribution and to social media.

Every time, I'm reminded of how awful it was getting started on Stack Overflow.

I had an account for years before I ground through the painful process of building a reputation.

I'm not surprised that they're dying, and it's not just AI: if you build walls in front of new users, they'll give up and go somewhere else.

Much of my angst was that I'd put in the work elsewhere, but there was seemingly no means of transferring that reputation.

But there will always be new people trying to start from scratch, and somehow we need to welcome them whilst keeping out the abusers.


Guest · #4

@humanhorseshoes

Always a problem, but that's usually where the second layer comes in: moderation decisions from other instances are adopted only if you trust the moderators of that instance. And that is a reputation they earn by sharing moderation decisions and by you deciding that you agree with them.
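
A sketch of that second layer. The decision shape and names are invented for illustration; real ActivityPub software does not expose this exact interface.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModerationDecision:
    origin_instance: str  # e.g. "mods.example"
    target: str           # account or instance being actioned
    action: str           # "silence", "suspend", ...

class ModerationTrust:
    """Apply remote moderation decisions only when they come from
    instances whose moderators have earned our trust."""

    def __init__(self) -> None:
        self.trusted_instances: set[str] = set()

    def grant_trust(self, instance: str) -> None:
        # Trust is earned out-of-band: an admin reviews the decisions
        # an instance shares over time and decides they agree with them.
        self.trusted_instances.add(instance)

    def should_apply(self, decision: ModerationDecision) -> bool:
        return decision.origin_instance in self.trusted_instances
```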


Guest · #5

@david_chisnall it couldn't work here, but I like Something Awful's approach: you pay a one-time nominal fee (US$10) to get to post there.

It stops all but the most determined, demented bad actors (there is one specific lunatic who keeps re-registering accounts with all-numeric names, but apart from him the system works pretty well).


Guest · #6

@david_chisnall
Would love your thoughts on moderation, @shlee, because here's a possible shortcut to keeping bad actors out from the beginning: crowdfund a new instance, so costs are covered and users have skin in the game from the start. Has anyone done that?


Guest · #7

@david_chisnall

Admins and moderators themselves are often ignored as being part of the threat model.

A key difference between one large instance and a federation of many small instances is that the "social pressure" to ensure good moderation decisions is a lot smaller too.

It isn't always malicious. The long tail of small instances is run by people who are tech enthusiasts first, not trained regulators or PhDs in the contentious topics being moderated.

But the sum of many such small biased decisions adds up to a large effect.

Kinda like how money laundering breaks large sums into smaller ones to pass under the regulatory filters.

This is just my opinion, formed by experience; I may be wrong.


Guest · #8

@david_chisnall how do new users get discovered if they can't even comment in other threads?


Guest · #9

@david_chisnall @humanhorseshoes

I've long pushed for actual transparency on fedi moderation decisions.

Today's common practice of obscurity makes that trust transfer almost impossible.


Guest · #10

@rzeta0 @david_chisnall It is a big gap in the model for sure


Guest · #11

@david_chisnall does it have to be at instance level? Could we let users individually turn on a rainbow of filters: 'trusted by N community-trusted users', 'boosted by someone I follow', 'banned by fewer than N users I personally trust', 'lives on an instance known for strict moderation'...?

Is this a path towards community moderation? What would be an efficient set of filters to implement and update?
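
Read one way, this is a set of composable predicates each user can toggle. A sketch under that reading; the post metadata fields are invented:

```python
from typing import Callable

# A filter is a predicate over a post's metadata dict.
Filter = Callable[[dict], bool]

def trusted_by_at_least(n: int) -> Filter:
    return lambda p: p["author_trusted_by"] >= n

def boosted_by_followee() -> Filter:
    return lambda p: p["boosted_by_followee"]

def banned_by_fewer_than(n: int) -> Filter:
    return lambda p: p["bans_from_trusted_users"] < n

def visible(post: dict, enabled: list[Filter]) -> bool:
    """A post shows up only if it passes every filter the user enabled."""
    return all(f(post) for f in enabled)

# One user's individually chosen mix:
my_filters = [trusted_by_at_least(2), banned_by_fewer_than(1)]
post = {"author_trusted_by": 3, "boosted_by_followee": False,
        "bans_from_trusted_users": 0}
assert visible(post, my_filters)
```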


Guest · #12

@rzeta0 @humanhorseshoes

Yup, I often see 'this user / instance is hidden by your instance, show it anyway?' and have no context for knowing why.


Guest · #13

@david_chisnall @rzeta0 This is the problem for scaled systems: once the degrees of separation increase, so does the governance burden.


Guest · #14

@david_chisnall a downside of the negative-starting-reputation model is that it erodes privacy: by putting a barrier in front of starting a new account, it encourages users to stay on the same account. This is particularly risky for marginalized groups, where (pseudo-)anonymity can be a matter of life and death.


Guest · #15

@TomBerend @david_chisnall this has been tried: the web of trust. The biggest problem is that trust is subjective. It varies not only by person, but by person+topic.

Furthermore, as cliques form, it becomes harder for an authentic newcomer to enter a circle of trust.

Conflict resolution is hard too: I trust 100 people who trust person A, but I don't trust A. Does person B, who trusts me, trust A because the 100 people I trust outweigh me?

Finally, there's the matter of dealing with account compromise: a highly trusted person's account becomes a sought-after target for threat actors.
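
The conflict can be made concrete with a toy score in which a direct opinion overrides anything transitive (one possible rule among many, not a recommendation):

```python
def trust_score(me: str, target: str,
                edges: dict[tuple[str, str], float]) -> float:
    """Toy web-of-trust score; direct edges are weights in [-1, 1].

    Rule (one of many possible): a direct opinion overrides transitive
    trust; otherwise average one-hop paths, attenuated by how much I
    trust each intermediary. Negative intermediaries don't propagate."""
    if (me, target) in edges:
        return edges[(me, target)]
    paths = [w * edges[(mid, target)]
             for (src, mid), w in edges.items()
             if src == me and w > 0 and (mid, target) in edges]
    return sum(paths) / len(paths) if paths else 0.0

# The conflict from the post: I trust 100 people, all of whom trust A,
# but I distrust A directly. Under this rule, my opinion wins...
edges = {("me", f"u{i}"): 1.0 for i in range(100)}
edges.update({(f"u{i}", "A"): 1.0 for i in range(100)})
edges[("me", "A")] = -1.0
assert trust_score("me", "A", edges) == -1.0

# ...and person B, who trusts only me, inherits my distrust of A,
# even though 100 well-regarded people disagree.
edges[("B", "me")] = 1.0
assert trust_score("B", "A", edges) == -1.0
```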


Guest · #16

@david_chisnall

Moderation is always essentially a game of defense; nothing is going to change that.

I fear what you're saying will just turn new users off.

I could see a posting limit for new accounts, though.

And it shouldn't be "after 7 days the limits are off"; it should be "from the moment they first post, the number of posts they can make in the next hour/day/whatever has a ceiling", because otherwise spammers will just create new accounts and sit on them until they are able to firehose.
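
A sketch of that idea: the newcomer ceiling is anchored to the first post rather than to registration, so parking an account gains nothing. The window and ceiling values are placeholders:

```python
import time

MAX_POSTS_IN_WINDOW = 10       # placeholder ceiling
WINDOW_SECONDS = 24 * 60 * 60  # placeholder window length

class NewAccountLimiter:
    """Rate-limit a new account starting from its FIRST post, not from
    registration, so sleeper accounts can't age out the limit before
    firehosing."""

    def __init__(self) -> None:
        self.first_post_at: float | None = None
        self.posts_in_window = 0

    def try_post(self, now: float | None = None) -> bool:
        now = time.time() if now is None else now
        if self.first_post_at is None:
            self.first_post_at = now  # the clock starts here
        if now - self.first_post_at > WINDOW_SECONDS:
            return True  # newcomer window has passed
        if self.posts_in_window >= MAX_POSTS_IN_WINDOW:
            return False  # ceiling hit: post rejected
        self.posts_in_window += 1
        return True
```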


Guest · #17

@david_chisnall even collecting reputation over time is not going to help. Reddit is the best example of that: many bot accounts lurk around and contribute mediocre reposts and comments for years before being used for something like a smear or astroturfing campaign (thus completely negating account-age or reputation filters)...


Guest · #18

@cpswan @david_chisnall So, either you have a system with anonymity and abuse, or you have a system where new users struggle. It's very naive to believe, once again, that technology could solve such a NON-technical, social dilemma. Good technology can optimize and minimize these issues, and it should. But it cannot make them go away.
