A simple Flask API which detects toxicity in text
Updated Aug 30, 2021 - Python
This repository contains a comment-toxicity classification model built with TensorFlow and bidirectional LSTMs to identify toxic content in online comments, exposed through a simple Flask API.
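The architecture described above can be sketched in Keras as an embedding layer feeding a bidirectional LSTM with a sigmoid output head for multi-label toxicity scores. The vocabulary size, sequence length, hyperparameters, and the count of six toxicity categories are illustrative assumptions, not details taken from this repository.

```python
import tensorflow as tf

MAX_TOKENS = 20000   # assumed vocabulary size
SEQ_LEN = 200        # assumed padded comment length (token IDs)
NUM_LABELS = 6       # assumed number of toxicity categories

# Minimal sketch: Embedding -> Bidirectional LSTM -> dense sigmoid head.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN,), dtype="int32"),
    tf.keras.layers.Embedding(MAX_TOKENS + 1, 32),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(64, activation="relu"),
    # Sigmoid (not softmax) because a comment can carry several
    # toxicity labels at once (multi-label classification).
    tf.keras.layers.Dense(NUM_LABELS, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="adam")
```

Binary cross-entropy with a per-label sigmoid is the standard choice when labels are not mutually exclusive; each output is an independent probability that the comment exhibits that category of toxicity.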