The Securly web filtering solution not only protects kids from inappropriate content, it also monitors their activity for signs of cyberbullying and self-harm. School admins and parents are notified of such suspicious activity so that they can intervene and help the kids involved.
Cyberbullying and self-harm activities are detected using a sophisticated machine learning and sentiment analysis algorithm. The database consists of thousands of words, phrases, and sentences, grouped into categories that help the engine identify the type of activity. We constantly update this database to ensure that the engine is trained on as comprehensive a list of words, phrases, and sentences as possible.
During training, groups of 2-3 words are used to help the engine understand how they should be categorized. At times the same group of words may occur in two different types of sentences, increasing the probability of false categorization. For example, if the group of words ‘you are a’ is used during training, and it appears followed by both ‘genius’ and ‘***hole’ in two different sentences, there is a chance that the engine throws a false positive or a false negative. In such cases, the engine returns the result for the class with the highest probability. As the size of our training set increases, the incidence of false positives and false negatives will also be reduced.
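To make the idea concrete, here is a minimal sketch of this kind of probabilistic classification, using a tiny Naive Bayes-style word-count model. This is purely illustrative and is not Securly's actual engine; the training phrases, labels, and smoothing choice are all assumptions for the example.

```python
import math
from collections import defaultdict

def train(samples):
    """samples: list of (phrase, label) pairs.
    Returns per-class word counts and per-class total word counts."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for phrase, label in samples:
        for word in phrase.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def classify(phrase, counts, totals):
    """Return the class with the highest (Laplace-smoothed) probability."""
    vocab = {w for c in counts.values() for w in c}
    best_label, best_score = None, float("-inf")
    for label in counts:
        score = 0.0
        for word in phrase.lower().split():
            # Add-one smoothing so unseen words do not zero out the score
            p = (counts[label][word] + 1) / (totals[label] + len(vocab))
            score += math.log(p)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical training data: the shared prefix "you are a" occurs in
# both classes, so the following word decides the outcome.
training = [
    ("you are a genius", "benign"),
    ("you are amazing", "benign"),
    ("you are a loser", "bullying"),
    ("you are worthless", "bullying"),
]
counts, totals = train(training)
print(classify("you are a genius", counts, totals))  # benign
print(classify("you are a loser", counts, totals))   # bullying
```

Because the prefix ‘you are a’ appears in both classes, a phrase containing only those words scores nearly equally for each class, which is exactly the ambiguity that can produce a false positive or false negative; more training data sharpens the separation.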
There is also the possibility that a specific group of words was not used during the training phase and therefore cannot be accurately identified by the engine. Whenever we encounter such instances, the training set is updated appropriately to ensure that subsequent instances are categorized correctly.
We recently introduced exact keyword matches so that while “depression symptoms” will be flagged, “great depression” will be excluded. This helps us reduce false positives and improve the accuracy of self-harm detection.
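A simple sketch of how such exact-match flagging with an exclusion list might work (again illustrative, not Securly's implementation; the phrase lists and the `should_flag` helper are assumptions for the example):

```python
import re

# Hypothetical phrase lists for illustration only
FLAGGED = {"depression symptoms"}
EXCLUDED = {"great depression"}

def should_flag(text):
    """Flag text containing an exact flagged phrase, unless it also
    matches a known benign context from the exclusion list."""
    t = text.lower()
    if any(phrase in t for phrase in EXCLUDED):
        return False
    # \b word boundaries keep the match exact (whole phrase only)
    return any(re.search(r"\b" + re.escape(p) + r"\b", t) for p in FLAGGED)

print(should_flag("searching for depression symptoms"))  # True
print(should_flag("essay on the great depression"))      # False
```

The exclusion check runs first, so a benign phrase like “great depression” short-circuits the flagging even though it contains the word “depression”.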