Author: eustace.link (did:plc:yupjodsb265htel3yennasoq)


Record

uri: "at://did:plc:yupjodsb265htel3yennasoq/app.bsky.feed.post/3l4i22p4tya2j"
cid: "bafyreie7qq7b5gblslfp2e5aqufixqlushv7hyk3ies7tbh7lxzj5aejua"
value:
  text:
    "This business will get out of control. It will get out of control and we'll be lucky to live through it.

    (Imagine attracting like, ALL the shitposters to your site then telling 'em you're gonna ban 'em for "rudeness." 😂)"
  $type: "app.bsky.feed.post"
  embed:
    $type: "app.bsky.embed.images"
    images:
      • alt:
          "Bluesky

          Toxicity detection experiments

          Addressing toxicity is one of the biggest challenges on social media. On Bluesky, the two areas that made up 50% of user reports in the past quarter are for content that is rude and for accounts that are fake, scams, or spam. Rude content especially can drive people away from forming connections, posting, or engaging for fear of attacks and dogpiles.
          In our first experiment, we are attempting to detect toxicity in replies, since user reports indicate that is where they experience the most harm. We'll be detecting rude replies, and surfacing them to mods, then eventually reducing their visibility in the app.
          Repeated rude labels on content will lead to account level labels, and suspensions. This will be a building block for detecting group harassment and dog-piling of accounts.
          Automating spam and fake account removals
          Harm on social media can happen quickly."
        image:
          $type: "blob"
          mimeType: "image/jpeg"
          size: 936501
          aspectRatio:
            width: 923
            height: 2000
  langs:
    • "en"
  createdAt: "2024-09-19T02:51:54.027Z"
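The record's `at://` URI encodes the three pieces needed to retrieve it over XRPC: the repo (a DID), the collection (an NSID such as `app.bsky.feed.post`), and the record key. A minimal sketch of turning that URI into a `com.atproto.repo.getRecord` request URL follows; the `public.api.bsky.app` host and the helper's name are illustrative assumptions, not part of the record above.

```python
# Sketch: map an at:// record URI onto the query parameters of the
# com.atproto.repo.getRecord XRPC method. The host is an assumption;
# any AppView or PDS that serves the repo would work.
from urllib.parse import urlencode


def at_uri_to_getrecord_url(at_uri: str,
                            host: str = "https://public.api.bsky.app") -> str:
    """Build a getRecord request URL from an at:// URI (hypothetical helper)."""
    # An at:// record URI has the shape at://<repo>/<collection>/<rkey>.
    repo, collection, rkey = at_uri.removeprefix("at://").split("/")
    query = urlencode({"repo": repo, "collection": collection, "rkey": rkey})
    return f"{host}/xrpc/com.atproto.repo.getRecord?{query}"


url = at_uri_to_getrecord_url(
    "at://did:plc:yupjodsb265htel3yennasoq/app.bsky.feed.post/3l4i22p4tya2j"
)
print(url)
```

Fetching that URL with any HTTP client should return JSON containing the same `uri`, `cid`, and `value` fields shown in the record viewer above.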