Aware is a unique tool by Securly that uses artificial intelligence (AI) to detect cyber-bullying, self-harm, profanity, and grief sentiments in students’ Gmail, Outlook, Google Drive, OneDrive, Google Chat, Microsoft Teams chat, Google Sheets, and Google Slides. Our AI also detects nudity in images on Gmail, Outlook, Google Drive, and OneDrive, and nudity in videos on Google Drive and OneDrive.
Our system uses sophisticated natural language processing and sentiment analysis algorithms that can infer the sentiment behind emails. This helps us effectively distinguish “This is an ugly sweater” from “You are ugly”, thereby helping detect bullying via email. Unlike other systems that rely on keywords, we analyze the sentiment and hence produce fewer false alarms. Our approach is fully automated, which allows us to provide this service at no cost to schools, as no human labor is required.
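To illustrate why sentiment- and context-aware analysis produces fewer false alarms than plain keyword matching, here is a minimal sketch. It is purely hypothetical: Securly’s actual models are proprietary, and the word lists and pronoun heuristic below are illustrative stand-ins.

```python
# Toy contrast between naive keyword matching and a context-aware
# check. This is NOT Securly's algorithm, just an illustration of
# why analyzing who an insult targets reduces false alarms.

INSULT_WORDS = {"ugly", "stupid", "loser"}  # illustrative list


def keyword_flag(text: str) -> bool:
    """Flags any message containing an insult word, regardless of target."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & INSULT_WORDS)


def context_aware_flag(text: str) -> bool:
    """Toy heuristic: flag only when the insult is aimed at a person
    (a second-person pronoun), not at an object like a sweater."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    has_insult = bool(set(words) & INSULT_WORDS)
    targets_person = any(w in {"you", "your", "you're"} for w in words)
    return has_insult and targets_person


# keyword_flag("This is an ugly sweater")       -> True  (false alarm)
# context_aware_flag("This is an ugly sweater") -> False (correct)
# context_aware_flag("You are ugly")            -> True  (correct)
```

A real system replaces the pronoun heuristic with trained sentiment and intent models, but the principle is the same: the context of a word, not its mere presence, determines whether a message is flagged.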
To analyze the emails, our system needs to read them first. For this, we require the school to add a mail router via their G Suite admin console. This sends a copy of the emails to the Securly servers. Only emails from a registered domain are sent over to Securly.
Once we receive them, our sentiment analysis and natural language processing algorithms get to work and analyze the content for any traces of self-harm, bullying, grief, nudity, and profanity. If none is found, the message is discarded. If something suspicious is discovered, the email is flagged and saved to the database so that the school admin can view it from the Aware tab in the Securly admin console. An email alert is also sent to the school, informing them of the activity.
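The flag-or-discard flow above can be sketched as follows. All function and field names here are hypothetical, and the `analyze` step is a trivial placeholder for the real NLP models.

```python
# Hypothetical sketch of the flag-or-discard pipeline described
# above; names are illustrative, not Securly's actual API.
from dataclasses import dataclass, field


CATEGORIES = ("self-harm", "bullying", "grief", "nudity", "profanity")


@dataclass
class ScanResult:
    flagged: bool
    categories: list = field(default_factory=list)


def analyze(message: str) -> ScanResult:
    # Placeholder for the real sentiment/NLP models: here we just
    # look for a category name appearing literally in the text.
    hits = [c for c in CATEGORIES if c in message.lower()]
    return ScanResult(flagged=bool(hits), categories=hits)


def process(message: str, db: list, alerts: list) -> None:
    result = analyze(message)
    if not result.flagged:
        return  # nothing suspicious: the message copy is discarded
    # Flagged: persist for review in the admin console...
    db.append({"message": message, "categories": result.categories})
    # ...and notify the school by email alert.
    alerts.append(f"Flagged for: {', '.join(result.categories)}")
```

The key design point is that clean messages never persist: only flagged content is stored and surfaced to the school admin.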