Abstract

Regrettable disclosures (i.e., things people wish they had not posted or shared) on social media platforms have become a serious issue in recent years. When engaged in self-presentation and impression management on online social networks (OSN), users often make disclosures that they subsequently regret. Such regrets have been shown to typically revolve around disclosures on sensitive topics, content with strong sentiment, lies, and secrets. As such, regrettable self-disclosures not only jeopardize people's privacy but also damage their public reputation and private relationships. In this work, we present WallGuard, a system for nudging OSN users to detect and avoid embarrassing, privacy-sensitive, and regrettable online disclosures. WallGuard's key building block is a hierarchical machine learning framework that predicts regret-specific labels for any given user-generated text. To achieve this goal, we designed and experimented with new deep learning models and propose Regret Embeddings, domain-specific pre-trained word embeddings for regrettable disclosures. Extensive evaluations of the proposed models demonstrate high classification performance (a weighted AUC of up to 0.975) on a real-world corpus of annotated regrettable user-generated texts. WallGuard allows OSN users to specify individual preferences regarding the types of topical content to be shared with specific audiences. Thus, while content analysis is performed objectively, personalized nudge-style disclosure recommendations are generated based on each user's privacy preferences. A proof-of-concept of our tool is available, currently as a Facebook third-party app, yet it is easily deployable on other social media platforms.