(Note: Jack didn't write, sign or send this message.)
Dear All,
Thank you for making our social platform the most exciting and diverse public forum on the planet.
Because of you, we have survived and thrived for nearly twelve years now. The road has not been without bumps, but here we are, thanks to your support. Our first profitable quarter is a good time to reflect on what we have done well and where we have gone wrong. Most importantly, I’d like to talk about what we have learned and how we will do much better from now on.
We need to be a profitable company – one able to sustain itself. But we also want to be a force for good in the world, as we have been from Day 1. In striving to build a better and more sustainable product for you, we may sometimes have lost sight of this mission.
We have been listening closely to your feedback and remain committed to making our platform more transparent and more useful, so let’s get something out of the way right now. We are biased, and we know it. We are biased because we are human. We are biased and we cannot credibly police speech or “fake news” beyond obvious threats and harassment. We cannot and will not act as a Ministry of Truth. But we can take steps to get our bias out of the way, and today I want to talk about some of the things we are going to do about it.
We are committed to being the go-to place for free exchange of ideas, promoting peace, open dialog and innovation. We are already taking action on these commitments and in the coming days we will roll out a number of improvements, many of them suggested by the community.
First, we will upgrade our safety review process. We understand that trust is a two-way street, so we are going to make significant changes aimed at reducing bias and increasing transparency in our decision-making. We hope the result will be improved safety and less unnecessary disruption for our users and advertisers.
- While posts and accounts can be flagged by both algorithms and users, no user account will be suspended without review by at least three different reviewers in three different locations around the world. We will deviate from this rule and allow automatic suspension only when we have more reports of abuse, violence or self-harm than we can handle at a given time. Reports requiring immediate action will obviously have priority in the (human) review process, followed by accounts that have been auto-suspended.
- Our reviewers will not be able to see any identifying information about the author of a flagged post or about those who reported it. If a post contains possible threats of violence or other information requiring urgent action, reviewers will immediately escalate to a supervisor, who can unmask the metadata and notify law enforcement. Any decision to suspend, however, will still be made through our double-blind process.
- Reviewers will see a randomized mix of flagged and non-flagged items to help us calibrate the process, improve training and further reduce bias.
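For those who like to see policy as code, the three-reviewer rule above can be sketched as a simple quorum check. This is a minimal illustration with hypothetical names (`Review`, `may_suspend`), not our actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Review:
    reviewer_id: str    # anonymous reviewer identifier
    location: str       # reviewer's region, e.g. "EU", "US", "APAC"
    vote_suspend: bool  # True if the reviewer recommends suspension

def may_suspend(reviews, min_quorum=3):
    """Allow suspension only if at least `min_quorum` distinct reviewers,
    working from at least `min_quorum` distinct locations, all voted to suspend."""
    votes = [r for r in reviews if r.vote_suspend]
    reviewers = {r.reviewer_id for r in votes}
    locations = {r.location for r in votes}
    return len(reviewers) >= min_quorum and len(locations) >= min_quorum
```

Under this sketch, three suspend votes from reviewers in the same location would not be enough – the geographic spread is part of the quorum.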
We do reserve the right to deviate from this process at our own discretion, but rest assured that it will be applied broadly. Obviously, we remain obliged to suspend accounts when legally required to do so by local law enforcement.
We will notify the community as soon as these changes take effect. We are working around the clock to make them happen as soon as possible. Expect – and demand – more improvements to our review process as we learn from it.
Second, we are going to change how we verify accounts. Within a few days, we will provisionally replace the current verification symbol with a small “identity verified” label. We will then work to design the most judgment-free yet convenient way to mark accounts whose identity has been verified. We will engage the community in this process, along with expert psychologists and anthropologists with a proven record of designing human-centric interfaces.
Please keep in mind that “verified” does not mean we approve of or agree with the speaker – only that you can be confident of the speaker’s identity. The rest is up to you, the community.
Third, as part of our commitment to transparency, in the coming days we will start rolling out a new feature called TestDrive. Users and advertisers will be able to access TestDrive through their Settings panel and opt into trying out new tools and features before their full rollout. We may also provide ad credits to select users and advertisers as an incentive to participate in TestDrive. We will also use ad credits to reward those of you who have provided useful feedback on existing and upcoming features.
Here is a partial list of what will soon become available in TestDrive:
- Threaded posts.
- A dislike button.
- Ability to designate posts as public or private.
- Private-post subscriptions.
- One click opt-out from all timeline curation (including ICYMI).
- A system for instant (one-click) micro-payments to your favorite posters.
- Custom character counters for composing messages.
- An option to reduce all message previews in the timeline to three lines or fewer.
- A much more powerful and accessible advanced search (with reverse image search).
- Ability to delete or hide notifications from the Notifications list.
- Notification and direct-message search on both desktop and mobile.
- “Live Now” and “Subscription” tabs for current video broadcasts.
And more.
Please understand that to keep providing you with an ever-better community platform, we have to stay far ahead of the pack. It may therefore not always be feasible to offer new features in TestDrive long before full release, or while they are still in early development. In making those decisions, we will err on the side of transparency and listen to our community.
With gratitude,
Jack