(Pending Approval From Slack)
This is a Slack bot that uses machine learning to infer whether content posted to Slack is NSFW.
There are two core services involved (sketched below):
- Safely Listener, the real-time Slack events processor that dispatches classification tasks.
- Safely Moderator, the classifier that receives tasks from the listener and makes inferences about content.
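As a rough illustration of that split, here is a minimal sketch with both services collapsed into one process and an in-memory queue standing in for whatever transport connects them. `classify` is a hypothetical placeholder for the Moderator's model, and the threshold is an assumption, not a project default.

```python
import queue
import threading

tasks = queue.Queue()  # stand-in for the transport between the two services

def listener(event):
    """Safely Listener: receive a Slack event and dispatch a classification task."""
    if event.get("type") == "message":
        tasks.put({"channel": event["channel"], "text": event["text"]})

def classify(text):
    """Hypothetical stand-in for the Moderator's NSFW model."""
    return 0.0  # probability that the content is NSFW

def moderator():
    """Safely Moderator: pull tasks and make inferences about content."""
    while True:
        task = tasks.get()
        if classify(task["text"]) > 0.8:  # assumed threshold, for illustration
            print(f"NSFW content flagged in {task['channel']}")
        tasks.task_done()

threading.Thread(target=moderator, daemon=True).start()
listener({"type": "message", "channel": "#general", "text": "hello world"})
tasks.join()
```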
If you would rather not deploy and run your own infrastructure, you can join Safely (coming soon), a paid content moderation service that provides the same capability in addition to:
- Fine-tuning inference parameters.
- Continuously trained classification models.
- Custom reporting and behaviors.
- Additional content-moderation insights.
- Daily/weekly digests.
If you prefer to run this yourself, follow these initial steps:
- Create a Slack app and bot user in Slack.
- Export your Slack token as `SAFELY_SLACKBOT_TOKEN=<your token>`.
- Export your list of Slack admins as `SAFELY_ADMINS_LIST=@michael,@admin` (comma-separated); a sketch of reading both variables follows this list.
- Invite your Slack bot to all the channels you would like to monitor.
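A minimal sketch of picking up those two variables in Python (the variable names come from the steps above; how Safely actually loads its configuration may differ):

```python
import os

# SAFELY_SLACKBOT_TOKEN and SAFELY_ADMINS_LIST are the variables exported above.
token = os.environ["SAFELY_SLACKBOT_TOKEN"]  # raises KeyError if unset
admins = os.environ.get("SAFELY_ADMINS_LIST", "").split(",")
print(admins)  # e.g. ['@michael', '@admin']
```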
See Deployment Options to learn how to deploy using Kubernetes or Docker.
- Safely focuses on basic NSFW content, including nudity and profanity. See the Open NSFW project for further information about the scope of the model; a hedged scoring sketch follows below.
- This tool is imperfect: there will be some false positives and negatives. Please see the license file to learn more about guarantees (there are none).
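To make that scope concrete, here is a hedged sketch of scoring one image with a frozen TensorFlow export of the Open NSFW model. The file name `open_nsfw.pb`, the tensor names `input:0` and `predictions:0`, and the preprocessing are assumptions for illustration; the actual Caffe-to-TensorFlow conversion may use different names and VGG-style mean subtraction.

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# Load a frozen graph. "open_nsfw.pb" is a hypothetical file name.
with tf.io.gfile.GFile("open_nsfw.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.compat.v1.import_graph_def(graph_def, name="")

# Open NSFW expects 224x224 RGB input; the real preprocessing may differ.
img = Image.open("photo.jpg").convert("RGB").resize((224, 224))
x = np.asarray(img, dtype=np.float32)[None, ...]

with tf.compat.v1.Session(graph=graph) as sess:
    # Tensor names are assumptions; inspect the export for the real ones.
    probs = sess.run("predictions:0", feed_dict={"input:0": x})

# Open NSFW outputs two softmax scores, conventionally [SFW, NSFW].
print("NSFW score:", probs[0][1])
```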
Special thanks to:
- Yahoo's NSFW Model, which has been converted from Caffe to TensorFlow.
- Profanity Check, which provides a model trained with scikit-learn rather than a blacklist (see the usage sketch below).
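For reference, the profanity-check package can be exercised directly; a quick sketch, assuming the package is installed (e.g. `pip install profanity-check`):

```python
from profanity_check import predict, predict_prob

messages = ["have a nice day", "an offensive, profane message"]
print(predict(messages))       # binary labels, e.g. [0 1], depending on the model
print(predict_prob(messages))  # per-message probability of profanity
```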
License: MIT