
Natural Language Processing for Mental Health

The Age of Mental Health & Social Media


In 2019, an estimated 13% of the world's population, up to 970 million individuals at the time, was living with a mental disorder, with a plurality of those residing in high-income countries. A slight majority of these individuals were female, and over 50% suffered from anxiety disorders and/or depressive disorders. Other common conditions included developmental, attention-deficit, bipolar, and eating disorders.


One of the great obstacles in tackling this prevalent issue is the stigma surrounding its very discussion. Many individuals may not be aware they are suffering from a mental condition, or are simply uncomfortable speaking up and seeking professional help.


One way to reach individuals on the scale of millions is through increasingly popular social media. With over 2 billion monthly users on Facebook in 2019, and just under 2 billion monthly users on YouTube in the same period, social media was already an immense global phenomenon. These numbers predate the Covid-19 pandemic and its subsequent lockdowns, which drove up the use of digital platforms even further.





Social Media in Support of Mental Health


Text classification is a task in Natural Language Processing (NLP) in which a machine learning model assigns a predefined category to a piece of text; typical applications include sentiment analysis, topic labelling, spam detection, and intent detection. Text classification is a fantastic tool for analysing large amounts of unstructured text, which is typical of content generated on social media, especially on sites such as Twitter and Reddit. We can thus use text classification to assess text posted by social media users and analyse it for signs of mental distress.
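As a toy illustration of the idea (not the specific models discussed later), a tiny bag-of-words Naive Bayes classifier can assign a label to short posts. The example posts and the "distress"/"neutral" labels below are entirely invented for illustration; real systems would be trained on far larger, professionally labelled datasets.

```python
from collections import Counter
import math

def train(posts):
    """Train a tiny bag-of-words Naive Bayes classifier.
    `posts` is a list of (text, label) pairs."""
    word_counts = {}          # label -> Counter of words seen under that label
    label_counts = Counter()  # label -> number of training posts
    vocab = set()
    for text, label in posts:
        words = text.lower().split()
        word_counts.setdefault(label, Counter()).update(words)
        label_counts[label] += 1
        vocab.update(words)
    return word_counts, label_counts, vocab

def classify(text, model):
    """Return the label with the highest (smoothed) log-probability."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + log likelihood with add-one smoothing
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented, hand-labelled examples (purely illustrative)
posts = [
    ("i feel hopeless and empty", "distress"),
    ("cant sleep everything feels pointless", "distress"),
    ("had a great day at the beach", "neutral"),
    ("loving the new season of my favourite show", "neutral"),
]
model = train(posts)
print(classify("everything feels so hopeless", model))  # distress
```

Real classifiers replace the raw word counts with learned representations, which is exactly where the embeddings and transformers discussed next come in.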


To prepare a text classifier for a specific task, an appropriately trained artificial intelligence model is required. In recent years, NLP has seen great advances in performance across various use cases by virtue of Deep Learning, Word Embeddings, and Transformers.


Word Embeddings represent each word as a numerical vector, allowing computers to capture that word's relationship to every other word: related words end up close together in the vector space. Transformers are deep learning models built around a mechanism known as self-attention, which allows the model to weight each part of its input with the appropriate significance.
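Both ideas can be sketched with a few lines of plain Python. The three-dimensional "embeddings" below are hand-invented for illustration (real models learn hundreds of dimensions from large corpora), and the attention helper computes only the softmax weighting step of scaled dot-product attention:

```python
import math

# Toy 3-dimensional "embeddings"; the values are purely illustrative
embeddings = {
    "sad":     [0.9, 0.1, 0.0],
    "unhappy": [0.8, 0.2, 0.1],
    "bicycle": [0.0, 0.1, 0.9],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1 = same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def attention_weights(query, keys):
    """Scaled dot-product attention weights: softmax over query-key scores."""
    scale = math.sqrt(len(query))
    scores = [sum(q * k for q, k in zip(query, key)) / scale for key in keys]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Related words score close to 1; unrelated words close to 0
print(cosine_similarity(embeddings["sad"], embeddings["unhappy"]))
print(cosine_similarity(embeddings["sad"], embeddings["bicycle"]))
# Attention gives the related key the larger weight
print(attention_weights(embeddings["sad"],
                        [embeddings["unhappy"], embeddings["bicycle"]]))
```

In a full transformer the queries, keys, and values are all learned projections of the input, but the weighting principle is the one shown here.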


With the capability to understand how words relate to one another, along with the capability to judge how important each word is, NLP models have the means to tackle various tasks, including those of a sensitive nature such as mental health.


Identifying Those at Risk


Social media websites such as Facebook already employ text analysis software to filter out potentially harmful language. With tools such as their anti-bullying filter, the technology for parsing user text on social media websites already exists, designed with the intent of making digital public spaces for interaction and conversation more protected, and safer for all involved.


The goal of combining NLP techniques for mental health with social media is to identify users who display signs of mental distress in the content they post. By analysing individuals over the medium term, an early warning system could encourage those whose content is indicative of mental distress to seek help. Furthermore, such a system could be used to improve social media algorithms: by identifying the type of content that induces mental distress, or is not conducive to good mental health, the major platforms could limit the reach of such content.


These techniques could be generalised beyond text to include imagery, and would be particularly useful for sites such as Instagram, which has been shown to have a notably harmful effect on the self-esteem of young girls.


The Challenges Involved


Gathering data to train such a model is not trivial. Supervised training requires large amounts of meticulously labelled data. Asking a significant number of individuals to come forward and explicitly state whether they suffered from a mental health disorder, and over what timeframe, is a significant task on both a human and a logistical level.


Furthermore, even if a significant number of people suffering from mental health disorders were identified, whether one is suffering from a mental health disorder is not binary. To take the example of depression, an individual is not simply depressed one day and not depressed the next. The transition into and out of a mental health disorder is a journey that often takes a significant amount of time, which is why mental health professionals typically assess individuals over weeks and months. It is therefore not suitable to simply analyse an individual's three most recent posts and look no further.
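As a sketch of why a longer window matters, the hypothetical helper below flags a user only when high classifier scores persist across many posts within a recent window. The function name, the per-post "distress scores", and the threshold values are all invented for illustration, not taken from any deployed system:

```python
from datetime import date, timedelta

def sustained_risk(scored_posts, window_days=60, threshold=0.7, min_flagged=5):
    """Flag a user only if enough recent posts score high.

    `scored_posts` is a list of (date, score) pairs, where `score` is a
    hypothetical per-post distress probability from a text classifier.
    A single dark post is not enough; the signal must be sustained.
    """
    if not scored_posts:
        return False
    latest = max(d for d, _ in scored_posts)
    window_start = latest - timedelta(days=window_days)
    flagged = sum(1 for d, s in scored_posts
                  if d >= window_start and s >= threshold)
    return flagged >= min_flagged

# One high-scoring post in isolation does not trigger the warning
print(sustained_risk([(date(2023, 5, 1), 0.9)]))  # False

# Repeated high scores over several weeks do
weekly = [(date(2023, 5, 1) + timedelta(days=7 * i), 0.85) for i in range(6)]
print(sustained_risk(weekly))  # True
```

The point of the sketch is the aggregation, not the numbers: any real threshold would need to be set, and continually re-validated, with clinical input.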


Finally, trained AI models are effectively black boxes. Once a model is trained and put into production, it is not a trivial task to reverse engineer why a given input produced a given output. This introduces questions of ethical, legal, and moral ambiguity.


Addressing the Data Issue


To address the data issue, various individuals and researchers in the field have taken to Reddit to harvest textual data from the relevant forums, known as subreddits. Reddit is a popular forum-based social media website, with subreddits dedicated to a vast number of topics and communities, including individual subreddits for various mental health disorders. These subreddits invite individuals struggling with the relevant disorder, their family members, and the curious to share their experiences, discuss, and engage with one another.


Though not every individual posting and commenting on the subreddit r/Depression may in fact be clinically depressed at the time of posting, the discussions and content are relevant, and represent a low-barrier, easy-to-obtain source of data.


Results


A comparison of standard architectures with different embeddings was conducted to determine each architecture's ability to identify probable signs of mental health concerns. In general, LSTMs, CNNs, and Deep Averaging Networks performed at comparable levels, though closer inspection indicated that the CNN architectures typically marginally outperformed their LSTM counterparts. Pre-trained DistilBERT and RoBERTa classifiers were used as a performance comparison, to determine whether there would be any benefit to including the CNN, LSTM, and Deep Averaging architectures.


The pre-trained embeddings of the transformers DistilBERT and RoBERTa improved performance, exceeding that of the custom-trained embeddings. DistilBERT and RoBERTa were trained on very large collections of text, making use of techniques such as byte-pair encoding, as opposed to a more traditional word-level encoding mechanism.
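The core idea of byte-pair encoding is to start from individual characters and repeatedly merge the most frequent pair of adjacent symbols, so common subwords become single tokens. A minimal sketch of one merge step, on an invented three-word corpus (real tokenizers repeat this thousands of times over huge corpora):

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent symbol pairs across a tokenised corpus."""
    pairs = Counter()
    for word in tokens:
        for a, b in zip(word, word[1:]):
            pairs[(a, b)] += 1
    return pairs.most_common(1)[0][0]

def merge_pair(tokens, pair):
    """Merge every occurrence of `pair` into a single symbol."""
    merged = []
    for word in tokens:
        out, i = [], 0
        while i < len(word):
            if i + 1 < len(word) and (word[i], word[i + 1]) == pair:
                out.append(word[i] + word[i + 1])
                i += 2
            else:
                out.append(word[i])
                i += 1
        merged.append(out)
    return merged

# Start from individual characters: "hug", "hugs", "hub"
corpus = [list("hug"), list("hugs"), list("hub")]
pair = most_frequent_pair(corpus)   # ('h', 'u'), strictly the most frequent
corpus = merge_pair(corpus, pair)
print(corpus)  # [['hu', 'g'], ['hu', 'g', 's'], ['hu', 'b']]
```

Because merges are learned from frequency, rare or misspelled words still break down into known subword pieces rather than becoming out-of-vocabulary tokens, which is one reason these tokenizers cope well with social media text.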


Overall, the highest performing model was the RoBERTa convolutional network (refer to Figure 1). RoBERTa was a higher-performing model than DistilBERT, but also larger and more computationally expensive; DistilBERT provided a significant improvement over the baseline whilst being smaller than RoBERTa. The choice between such models depends on the computational resources available and, ultimately, the importance placed upon maximising the performance of the text classifiers.



Figure 1: Comparison of the classification performance of different architectures, prior to data augmentation.



The quantity and quality of the data used to train these embeddings clearly exceeded those behind the custom-trained embeddings. The strength of BERT-style embeddings is remarkable, and they are becoming a standard in various text applications, with exceptions for tasks requiring highly unusual and/or technical terminology, such as medical journals with their frequent use of Latin terms.


Furthermore, data augmentation methods also showed great improvements to performance at test time (refer to Figure 2). Rule-based language data augmentation methods are well suited to text originating from social media. Text published by reputable sources, such as the BBC, corporate publications, or professional writers, is typically proofread and continuously edited to ensure no spelling or grammatical errors remain. Social media text, in contrast, is much more likely to contain spelling errors, incorrect grammar, accidentally repeated words, missing words, and swapped words, since social media users do not typically proofread or edit their posts. Thus, by introducing similar modifications, namely synonym replacement, random insertion, random word swap, and random word deletion, through easy data augmentation (EDA), a social media dataset can be augmented with sentences which are not necessarily grammatically perfect, much like the text predominantly found on social media.
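The four EDA operations can each be sketched in a few lines. The tiny synonym table below stands in for the WordNet lookup used in the original EDA paper and is purely illustrative, as is the example sentence:

```python
import random

# Toy synonym table; EDA proper draws synonyms from WordNet
SYNONYMS = {"sad": ["unhappy", "down"], "tired": ["exhausted", "drained"]}

def synonym_replacement(words, rng):
    """Replace one word that has a known synonym."""
    out = list(words)
    candidates = [i for i, w in enumerate(out) if w in SYNONYMS]
    if candidates:
        i = rng.choice(candidates)
        out[i] = rng.choice(SYNONYMS[out[i]])
    return out

def random_insertion(words, rng):
    """Insert a synonym of some word at a random position."""
    out = list(words)
    candidates = [w for w in out if w in SYNONYMS]
    if candidates:
        syn = rng.choice(SYNONYMS[rng.choice(candidates)])
        out.insert(rng.randrange(len(out) + 1), syn)
    return out

def random_swap(words, rng):
    """Swap two randomly chosen words."""
    out = list(words)
    if len(out) >= 2:
        i, j = rng.sample(range(len(out)), 2)
        out[i], out[j] = out[j], out[i]
    return out

def random_deletion(words, rng, p=0.2):
    """Drop each word with probability p, keeping at least one word."""
    out = [w for w in words if rng.random() > p]
    return out or [rng.choice(words)]

rng = random.Random(0)  # seeded for reproducibility
sentence = "i am so tired and sad today".split()
print(" ".join(synonym_replacement(sentence, rng)))
print(" ".join(random_swap(sentence, rng)))
```

Each operation yields a slightly mangled but semantically similar sentence, exactly the kind of noise real social media text already contains.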


Figure 2: Comparison of the classification performance of different architectures, after data augmentation.


Scope for Future Work


There is promise in using text classification across social media platforms to identify users experiencing mental distress. Deep learning provides state-of-the-art results across various fields of study, and this is no exception. Shortcomings do exist, in this work and others: the text classifiers trained here are appropriate for forum-based social media sites such as Reddit and Twitter; however, Facebook continues to dominate as the global social media platform of choice, whilst Instagram remains hugely significant. The greatest potential leap forward would be to move beyond text and begin to examine content such as imagery, videos, gifs, and more. A comprehensive multi-modal approach spanning text, image, and audio would better assess online users.




The research challenges above were investigated by Luke Abela for his Master’s thesis during his time at Queen Mary, University of London, under the supervision of Dr. Arkaitz Zubiaga.



