Facebook is still struggling to curb the spread of spam, hate speech, violence and terrorism on its site.
In its first quarterly Community Standards Enforcement Report, Facebook disclosed that it disabled 1.3 billion 'fake accounts' over the past two quarters, many of which had 'the intent of spreading spam or conducting illicit activities such as scams'.
The update marks the first Community Standards report since Facebook was hit with a massive data privacy scandal earlier this year.
The tech giant also revealed millions of standards violations that occurred in the six months leading up to March.
This includes inappropriate content such as hate speech, graphic violence, adult nudity and sexual activity, terrorist propaganda, spam and fake accounts.
Facebook acknowledged that its artificial intelligence detection technology 'still doesn't work that well,' particularly when it comes to hate speech, and that flagged content still needs to be checked by human moderators.
'It's important to stress that this is very much a work in progress and we will likely change our methodology as we learn more about what's important and what works,' said Guy Rosen, vice president of Product Management at Facebook, in a statement.
'...We have a lot of work still to do to prevent abuse,' he added.
The firm has said previously that it plans to hire thousands more human moderators to 'make Facebook safer for everyone'.
Facebook took action on 2.5 million posts for violating hate speech rules, but only 38% of these were flagged by its automated systems, which struggle to interpret nuances such as counter-speech, self-referential comments or sarcasm.