Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built using ⚡ Pytorch Lightning and 🤗 Transformers. For access to our API, please email us at [email protected].
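As a hedged illustration of the 🤗 Transformers workflow this entry describes, a minimal sketch of scoring a comment with a pretrained toxicity classifier; the model name `unitary/toxic-bert` is an assumption for the example, not something confirmed by this listing:

```python
# Minimal sketch: score a comment with a pretrained toxicity classifier.
# The model name "unitary/toxic-bert" is an assumption for illustration.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

# Returns a list of {"label": ..., "score": ...} dicts for each input.
print(classifier("You are a wonderful person."))
```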
Voice safety classifier
A web app to identify toxic comments in a YouTube channel and delete them.
AntiToxicBot is a bot that detects toxic users in a chat using Data Science and Machine Learning. The bot warns admins about toxic users, and an admin can also allow the bot to ban them.
A revolutionary AI-powered platform that helps you resolve doubts instantly, makes learning easy, and supports academic success.
A supervised-learning-based tool to identify toxic code review comments
Streams block game
This repository contains the code for the paper: "DeToxy: A Large-Scale Multimodal Dataset for Toxicity Classification in Spoken Utterances"
NLP deep learning model for multilingual toxicity detection in text 📚
Module for predicting the toxicity of messages in Russian and English
Toxformer is an attempt at using transformers to predict the toxicity of molecules from their molecular structure using the T3DB database.
Offensive Language Identification Dataset for Brazilian Portuguese.
An AI to Scan for Toxic Tweets
Fast text toxicity classification model
Classifying users on social media, using text embeddings from OpenAI and others
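A minimal sketch of the embeddings-plus-classifier approach the entry above describes, under assumed details: the OpenAI embeddings model name and the toy labels are illustrative choices, not the repository's actual setup:

```python
# Sketch: classify users from text embeddings (assumed workflow, not the
# repository's actual code). Embeds text with the OpenAI API, then fits
# a simple scikit-learn classifier on the embedding vectors.
from openai import OpenAI
from sklearn.linear_model import LogisticRegression

client = OpenAI()  # reads OPENAI_API_KEY from the environment

texts = ["great discussion, thanks!", "you are an idiot"]
labels = [0, 1]  # 0 = non-toxic, 1 = toxic (toy labels for illustration)

resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
X = [item.embedding for item in resp.data]

clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```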
A trained deep learning model to predict different levels of toxicity in comments, such as threats, obscenity, insults, and identity-based hate.
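Predicting several toxicity types at once is usually framed as multi-label classification: one sigmoid output per label, trained with binary cross-entropy. A hedged sketch of that setup (the encoder dimension and random inputs are illustrative assumptions, not this repository's code):

```python
# Sketch of a multi-label toxicity head: one sigmoid output per label,
# trained with binary cross-entropy. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

head = nn.Linear(768, len(LABELS))      # e.g. on top of a 768-dim encoder
logits = head(torch.randn(4, 768))      # batch of 4 pooled embeddings
targets = torch.randint(0, 2, (4, len(LABELS))).float()

loss = nn.BCEWithLogitsLoss()(logits, targets)  # independent per-label loss
probs = torch.sigmoid(logits)           # per-label probabilities
print(loss.item(), probs.shape)
```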
This repository contains code for the paper: Cisco at SemEval-2021 Task 5: What’s Toxic?: Leveraging Transformers for Multiple Toxic Span Extraction from Online Comments
A REST API for detecting toxicity in a sentence, using TensorFlow.js in the backend to score categories such as identity_attack, insult, obscene, severe_toxicity, sexual_explicit, and threat.
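A sketch of how a client might call such an API from Python; the endpoint URL and request/response shape here are hypothetical, since the listing does not document them:

```python
# Sketch of calling such a toxicity REST API. The endpoint URL and the
# request/response shape are hypothetical, for illustration only.
import requests

resp = requests.post(
    "http://localhost:3000/predict",         # hypothetical endpoint
    json={"sentence": "you are an idiot"},   # hypothetical request body
)
resp.raise_for_status()
print(resp.json())  # e.g. scores for identity_attack, insult, threat, ...
```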
Build a model to identify toxic statements and reduce bias in classification
This repository contains all the code for my Bachelor's thesis on the detection of toxic comments.