key: cord-0920772-wmyml9zc
authors: Saha, Koustuv; Yousuf, Asra; Boyd, Ryan L.; Pennebaker, James W.; De Choudhury, Munmun
title: Social Media Discussions Predict Mental Health Consultations on College Campuses
date: 2022-01-07
journal: Sci Rep
DOI: 10.1038/s41598-021-03423-4
sha: 2849711d5eaa390a3a0133a24945e6de8561f1b8
doc_id: 920772
cord_uid: wmyml9zc

The mental health of college students is a growing concern, and the mental health needs of college students are difficult to assess in real time and at scale. To address this gap, researchers and practitioners have encouraged the use of passive technologies. Social media is one such technology that has shown potential as a viable "passive sensor" of mental health. However, the construct validity and in-practice reliability of computational assessments of mental health constructs with social media data remain largely unexplored. Toward this goal, we study how assessments of the mental health of college students using social media data correspond with ground-truth data of on-campus mental health consultations. For a large U.S. public university, we obtained ground-truth data of on-campus mental health consultations between 2011 and 2016, and collected 66,000 posts from the university's Reddit community. We adopted machine learning and natural language methodologies to measure symptomatic mental health expressions of depression, anxiety, stress, suicidal ideation, and psychosis in the social media data. Seasonal auto-regressive integrated moving average (SARIMA) models forecasting on-campus mental health consultations showed that incorporating social media data led to predictions with r = 0.86 and SMAPE = 13.30, outperforming models without social media data by 41%. Our language analyses revealed that social media discussions during months of high mental health consultations consisted of discussions on academics and career, whereas months of low mental health consultations saliently showed expressions of positive affect, collective identity, and socialization. This study reveals that social media data can improve our understanding of college students' mental health, particularly their mental health treatment needs.

Mental health on college campuses is a matter of growing concern, as an increasing number of college students show rising levels of anxiety, depression, and suicidal ideation. According to the 2019 National College Health Assessment 1, 16.7% of students felt too depressed to function in the 2 weeks before the survey was conducted, while 8.6% seriously considered suicide or tried to harm themselves in the past 12 months. Another decade-spanning study found that the percentage of students diagnosed with mental illness rose from 22% in 2007 to 36% in 2017, even though the rate of treatment increased from 19% to 34% 2. Mental health services on college campuses, including on-campus counseling centers and psychiatric clinics, therefore continuously struggle to address the increasing demand for mental health consultations in a timely fashion 3. A research study conducted by Penn State's Center for Collegiate Mental Health, for instance, reported a 30-40% increase in on-campus counseling consultations between 2009 and 2015, despite only a 5% increase in enrollment 4. In short, these services often lack resources, staff, and preparedness, leading to long waiting lists and selective or infrequent consultations for many 5.
This underscores an urgent need to meet the rising demand for mental health services with adequate and accessible resources. However, campus mental health services do not currently have adequate means to assess the evolving nature of this demand. While periodic surveys of students' mental health provide some barometer of mental health incidence, in terms of medication use, daily lifestyle, suicidal thoughts, depression symptoms, as well as potentially contributing academic, environmental, personal, and social factors 6, they are accurate only in snapshots, and are prone to retrospective recall and response biases 7. Since it is practically and financially unsustainable to

Associating social media expressions and on-campus mental health consultations. Next, we examined whether the inferred symptomatic mental health expressions bear relevance to the ground-truth data of on-campus mental health consultations. Figure 1 also shows some form of trend and seasonality in the occurrence of symptomatic mental health expressions. A Dickey-Fuller test revealed that these time series are not stationary (p > 0.05). Therefore, for each time series, we conducted trend and seasonality decomposition, and applied moving-window-based trend and seasonality removal to obtain transformed residual time series that passed the stationarity test (p < 0.05). We conducted a similar time series decomposition on our ground-truth data. Then, we obtained the cross-correlation between the residual time series of social media mental health expressions and the ground-truth data of mental health visits. We built linear regression models at various lags, controlling for base rates of the previous month's number of mental health consultations and the prevalence of mental health expressions on social media. A lag of n months indicates a comparison where the social media data is shifted n months behind the ground-truth data. A higher standardized coefficient would indicate a greater predictive ability of the social media expressions toward the ground-truth data. Figure 2 presents these lag-wise associations.

Predicting on-campus mental health consultations. Next, we predict on-campus mental health consultations using seasonal auto-regressive integrated moving average (SARIMA) time series modeling. Table 1 shows the predictive performance of the two models, M0 and M1. M0 is the baseline model, which uses only the time series of on-campus mental health consultation data, and M1 combines the time series of on-campus mental health consultation data with the mental health expressions captured from the college subreddit. We find that M1 shows 13.16% better correlation and 41.25% lower error than M0. A dependent overlapping correlation test between the two models' predictions shows statistical significance (t = -2.07, p < 0.01). Figure 3 shows the model predictions in comparison to the actual values. Drawing on permutation test approaches 29,30, we permuted (randomized) the predictions of mental health consultations. 1000 such permutations of randomized predictions show a Pearson's r = 0.09 and SMAPE = 32.40 on average, and a probability of 0 of performing better than either M0 or M1. This rejects the null hypothesis that the prediction improvements occurred by chance. Overall, our results reveal that combining the baseline model (M0) with social-media-based inferences of symptomatic mental health outcomes (in M1) is an effective means of predicting on-campus mental health consultations.
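To make the permutation procedure concrete, the following is a minimal sketch in Python. The series and variable names are toy stand-ins, not the study's actual predictions: shuffling the predictions destroys their temporal alignment with the ground truth, yielding a chance baseline against which the real models can be compared.

```python
import numpy as np

rng = np.random.default_rng(0)
actual = rng.normal(1.0, 0.2, 60)             # stand-in ground-truth series
predicted = actual + rng.normal(0, 0.05, 60)  # stand-in model predictions

def smape(a, p):
    """Symmetric mean absolute percent error, bounded in [0, 100]."""
    return 100 * np.mean(np.abs(p - a) / ((np.abs(a) + np.abs(p)) / 2))

observed = smape(actual, predicted)

# Shuffle the predictions 1000 times to build a null distribution of errors.
null_smapes = np.array([smape(actual, rng.permutation(predicted))
                        for _ in range(1000)])

# The fraction of permutations matching or beating the real model
# approximates the p-value of the observed performance.
p_value = np.mean(null_smapes <= observed)
print(f"observed SMAPE={observed:.2f}, permutation p={p_value:.3f}")
```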
Examining how social media language explains mental health consultations. Finally, we illuminate the characteristics of social media language that correspond with our ground truth. For this, we separated the social media data of months that showed high mental health consultations (Hi-MHC) from months that showed low mental health consultations (Lo-MHC) using a median split. We conducted two types of language analysis, which we describe below.

Analyzing linguistic cues. First, we employed an unsupervised language modeling technique called the Sparse Additive Generative Model (SAGE). Table 2 shows the most salient keywords distinctly used in Hi-MHC and Lo-MHC months. We find that the Hi-MHC months show a greater prevalence of keywords related to academics and examinations, such as "finals", "hours semester", "summer classes", and "textbooks", and keywords related to disciplines, such as "cs majors", "geology", and "psychology", e.g., "I need urgent help. I'm about to get kicked out of my CS major. I need a 2.65 entry level GPA to advance. I made an A- in 312 and a C- in data structures, so my CS GPA is at 2.66." Hi-MHC months also show keywords related to "commencement" and "graduation", which could be associated with stress during students' post-college transition period 31, for instance, "When I met my advisor to apply for graduation he told me that I needed a BF certificate to count as my minor, I wish I knew this before." In contrast, Lo-MHC months show a greater prevalence of keywords related to events, such as "parties", "football", "events", "hangout", and "social", e.g., "The parties were pretty lame and we were bored at one. My friends and I stole some beers and broke into a pool only to get nearly arrested!" Likewise, Lo-MHC months also show keywords related to people and friends. Other forms of social gathering such as "game" and "football", and accommodations such as "frat" and "dorm", occur saliently in the Lo-MHC months, e.g., "I'm a freshman, currently pledging a frat! I like partying, programming, drinking, playing, lifting weights, and mindlessly scrolling social media for hours!"

Psycholinguistic characterization. We next discuss the results of our psycholinguistic characterization. First, we extracted the normalized occurrences of the 50 psycholinguistic categories as per LIWC 50. Then, for each category, we conducted an independent-sample t-test between the occurrences in Hi-MHC and Lo-MHC months, followed by a Benjamini-Hochberg-Yekutieli False Discovery Rate (FDR) correction. We present the results in Table 3.

Affective and cognitive attributes. Affective and cognitive attributes are indicative of an individual's disclosure and expressiveness in social media language. Among affective attributes, we find that Lo-MHC months show a greater prevalence of affective categories, including anger, negative affect, and swear. Although all of these categories bear a negative connotation, their greater occurrence reflects greater expressiveness, which is known to be a positive wellbeing indicator 32. This might be associated with people venting more often about their campus life, such as in, "Now I have even more reason to not live here next year. Fuck this place!" Among cognitive attributes, we find that the Hi-MHC months show a greater prevalence of tentativeness and discrepancies, which indicate an individual's insecurity and low degree of immediacy about the situation 50,33. In contrast, the Lo-MHC months show a greater prevalence of certainty, percept, hear, and see, e.g., "If you want to save your bandwidth, go to a computer lab, and watch youtube/listen to grooveshark/watch netflix all day long." Greater use of these categories of language has been associated with an individual's better cognitive functioning and mental health 34.
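As an illustration of the per-category significance testing described above, here is a minimal sketch in Python using toy stand-ins for LIWC output (LIWC itself is proprietary software, so the category names and data below are hypothetical). It applies an independent-sample t-test per category, then the Benjamini-Yekutieli FDR correction (`fdr_by` in statsmodels).

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
categories = ["anger", "negative_affect", "tentativeness", "certainty"]

# Toy per-post normalized category frequencies for the two groups of months.
hi_mhc = pd.DataFrame(rng.normal(0.50, 0.10, (200, 4)), columns=categories)
lo_mhc = pd.DataFrame(rng.normal(0.55, 0.10, (150, 4)), columns=categories)

# Independent-sample t-test per psycholinguistic category.
results = [ttest_ind(hi_mhc[c], lo_mhc[c]) for c in categories]
p_values = [r.pvalue for r in results]

# Benjamini-Yekutieli FDR correction across all categories.
reject, p_corrected, _, _ = multipletests(p_values, alpha=0.05, method="fdr_by")
for c, r, p, sig in zip(categories, results, p_corrected, reject):
    print(f"{c}: t={r.statistic:.2f}, corrected p={p:.3g}, significant={sig}")
```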
Linguistic style attributes. We first examine pronoun usage; pronouns are markers of social attention and connectedness 35. We find that Hi-MHC months show a greater prevalence of first-person singular and second-person pronouns; these could be indicative of heightened self-attentional focus, first-hand accounts of personal events, and narrative and conversational language 34, for example, "I added psychology in my second year. I have learned that this is a very rigorous path to take, a huge commitment, and that you may need to take an extra year to complete. I must have taken at least 15 semester hours for every semester I spent here, peaking at 21 hours last semester.", where an individual describes the challenges of their college journey. In contrast, Lo-MHC months show a greater prevalence of first-person plural pronouns, which is associated with narrating as a collective identity 23,36, such as in "There are plenty of ways to socialize here, as we have several student organizations." We also see a greater use of several function words, including preposition, conjunction, relative, and inclusive, in the Hi-MHC months, which are known to relate to a personal narrative writing style 36.

Table 2. Top salient n-grams (n = 1, 2, 3) distinguishing months of high and low mental health visits as per SAGE 49. Bar lengths and color indicate magnitude and sign of SAGE value; pink bars (positive SAGE) indicate saliency in Hi-MHC months, whereas green bars (negative SAGE) indicate saliency in Lo-MHC months.

Personal and social concerns. Among personal and social attributes, we find that Hi-MHC months show a greater usage of work and achievement keywords, which could be associated with discussions of career and self-actualization, and a greater use of money, which may be associated with students discussing financial concerns, e.g., "Is it possible to consolidate jobs, save money and improve level of service? What would be an implementation to achieve this?" Hi-MHC months also show a greater use of keywords related to home, which could include challenges with roommates, e.g., "I got stuck on a top floor between a bad roommate and an old, tiny room, such a terrible year." In contrast, Lo-MHC months show a greater use of social words, such as in "I have a lot of free time and realized I really don't have a lot of friends. I've always been a social person, but it's been hard to make friends at this time of the year, since classes and clubs and everything are ending. How do you recommend I meet some new peeps?"

Principal findings. This study showed that social media interactions of college students can help predict ground-truth data of on-campus mental health consultations. We adopted machine learning approaches to infer mental health expressions on a university's Reddit community, and then incorporated the model outcomes into time series forecasting models of the normalized number of on-campus mental health consultations. First, we found that (online) mental health expressions of college students correlate with (offline) mental health service utilization on college campuses.
Second, we found that the SARIMA model forecasting on-campus mental health consultations that accounted for social media data could predict the ground truth within 10.65% error, 38% lower than models that did not include social media data. Finally, we conducted a deeper dive into the language of social media posts by comparing the data of months with high and low mental health visits, using psycholinguistic characterization and an unsupervised language modeling technique called SAGE. We found that the months of high mental health visits tend to show a greater prevalence of words related to academics, academic examinations, and career, and psycholinguistic attributes indicative of worse mental wellbeing, whereas the months of lower mental health visits show a greater prevalence of words related to socializing, partying, and leisure, and psycholinguistic attributes indicative of better mental health. Taken together, social media data can capture the language and social interactions of college students, and can therefore function as a "verbal sensor" to assess the mental health needs and demands of college students.

Methodological and practical implications. This work establishes the construct validity of computational assessments of mental health from social media data. This data can therefore serve as an unobtrusive and passive lens to gauge offline critical measures that are otherwise challenging to predict, including other forms of mental health service utilization, such as uptake of peer support interventions, should that type of data be accessible or easily gathered. Our study also demonstrated the face validity of this data, where it revealed discussions and concerns related to local, contextual, and contemporary events of interest; for example, during a certain political event concerning gun laws in the U.S., a student posted, "It's fucking nonsensical to carry a pistol around campus despite a handgun license!" Likewise, following a student death on the campus, students felt stressed and anxious about the event, e.g., "It is so depressing! Seems like he jumped out wanting to die." Given this construct and face validity, and since our machine learning models were built on considerable amounts of social media data, we believe they would be applicable across time periods of various lengths, capturing the ebbs and flows of a typical academic year, such as expectedly stressful periods as well as those when students typically recuperate and rejuvenate. Nevertheless, we note that due to the underlying sensitive nature of the mental health consultation data and the practical challenges in gathering and gaining access to it, we had to rely on data from a single university. Consequently, we cannot claim generalizability at this stage. Still, this paper provides a first feasibility study of the validity of social media data that can be extended in future research spanning different universities, contexts, and datasets. Next, this work provides empirical evidence that can help move toward constructing practical applications of on-campus mental health assessments using passive and unobtrusive data sources. Recent research has highlighted the extent to which stakeholders, including campus stakeholders and, more generally, clinicians, value the potential of these technologies, such as in the form of proactive mental health assessment tools 37,38.
This work established the construct and face validity of these assessments, as described above, and can therefore guide the building of tools and dashboards that proactively assess the mental health of college students from online social chatter. Although not immediately ready for real-world use, we foresee two applications that our work could inspire. The first application concerns ways to assess campus pulse or campus morale: timely, contextual information regarding the mental wellbeing of students. These can take the form of interfaces, visualizations, and systems 37 which help college stakeholders, including administrators, policymakers, and wellbeing councils, to gauge the needs of students and accordingly ensure that adequate resources are available and measures are taken to meet the demand for mental health related services. Because our approach can yield assessments over time, the models can further be used to capture the ebbs and flows of mental wellbeing as well as its temporally varying and evolving characteristics, such as during a typical academic year. Since our methods were predictive of mental health consultations on campus, these assessments could also be used to understand the impacts of academic events like examinations, and of regulations and policy decisions, on campus life. A second application of this work could center around facilitating improved preparedness on campus in the case of an emergency or crisis, and assessing the mental resilience of the student body in response to adverse events that affect the mental wellbeing of students (e.g., a shooting incident on campus 23, an infectious disease outbreak like COVID-19 39, and so on). More specifically, such preparedness may mean managing or increasing the allocation of resources in the student health clinics on campus in the form of available clinicians or consulting hours, amplifying avenues for seeking alternative sources of mental health help, such as peer support, peer counseling, or crisis rehabilitation services, or organizing awareness and educational initiatives and campaigns that encourage students to seek help and care more proactively. In essence, with our models, decision-making and resource allocation around college student mental health could be made more evidence-based.

Limitations and future directions. Our study has limitations, some of which also suggest novel and impactful future directions. We cannot claim clinical validity for our assessments, and building upon prior work 40 is a direction to evaluate in the future. The findings of our study are limited to one college campus and a single form of ground-truth data (on-campus mental health consultations). However, our computational approaches can be translated and adapted to other college campuses and other wellbeing measures. We note that social media data suffers from limitations of sparsity and self-selection, i.e., this data only allows us to observe those who use and choose to post on social media. Therefore, the utility of these approaches is bounded by how active and generally engaged a college campus's social media discussion board and students are, although we expect our methods to be applicable to comparably sized institutions with a similar demographic makeup to the one studied in this paper. Prior work noted that Reddit communities with at least 500 subscribers are somewhat representative of the campus population 23,24.
Future research can thus expand the models developed here to varied university settings with an active social media presence, such as a rural or suburban institution, or a liberal arts college, to test the generalizability and robustness of the construct validity findings explored in this research. Further, as with any large data source, Reddit data is not immune to noise. Despite moderation strategies, this data can contain discussions irrelevant to the personal and campus lives of students (e.g., advertisements, promotions, etc.), and members who do not belong to the college communities; these need to be accounted for when considering practical implementations of computational and data-centric assessments. Future work can also validate mental wellbeing assessments from other social media streams that allow longitudinal posting, instantaneous interactions, and private socialization, such as Facebook, Twitter, or Snapchat, which can provide complementary information about individual and collective mental health on college campuses. Finally, replicating and reproducing the validity results of this work on other types of mental health service utilization data would bolster confidence in the application of our methods to real-world settings.

Ground-truth data of on-campus mental health service utilization. This research builds upon health center data stemming from a large public university in the southern U.S. with an enrollment of over 50,000 students. Our ground-truth dataset comprises the count of monthly health center visits by students at the same university. The visits are classified into two types: visits related to mental health issues, and those unrelated to mental health issues. This data spans a period of 84 months: September 2009 to August 2016. For the purposes of our study, we normalized the monthly measure of mental health consultations as the percentage of enrolled students who sought mental health services in the same month. Such a normalization facilitates two goals: (1) minimizing outliers and distortion confounded by the total number of enrolled students; and (2) preserving the privacy of the university and of the students whose data is being studied.

Social media data. We focus on social media data pertaining to college students at the same university. For this purpose, we used data from Reddit. Reddit is a popular social media platform among the 18-29 age group: Pew Research found that 65% of Reddit users are young adults 41. This age group aligns with the typical college student demographic. Reddit facilitates focused conversations through "subreddits" that comprise members interested in a specific topic. Many colleges have a dedicated subreddit community, which provides a common portal for the students on a campus to share and discuss a variety of issues related to their personal, social, and academic lives 23,24. Reddit is a suitable data source for this study as it allows us to isolate posts from students at a particular college campus. Reddit, by design, facilitates candid disclosures by allowing pseudonymous and throwaway accounts, and community-driven moderation that maintains the authenticity of members and the civility and relevance of discussions 42-44. In the case of college subreddits, members often need to provide proof of their authenticity to the moderators before participating in the discussions. While the subreddits also remain open to alumni and staff, typically only active students engage the most in ongoing discussion threads. Prior work has also leveraged Reddit data to study college students 23,24,28,45. We obtained the data from the subreddit corresponding to the same university under study using the BigQuery API, which hosts Reddit data archives 23,24. This archive included 66,020 posts by 18,401 unique users, averaging 33 posts per day between May 2011 and August 2016. The rest of the paper studies this period, as it overlaps with the availability of our ground-truth data.
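As a worked example of the ground-truth normalization described above, the following minimal Python sketch expresses monthly consultations as a percentage of enrollment. The counts and column names are hypothetical toy values, not the actual university data.

```python
import pandas as pd

# Toy monthly counts and enrollment (hypothetical numbers).
df = pd.DataFrame({
    "month": pd.date_range("2011-05-01", periods=3, freq="MS"),
    "mh_visits": [410, 395, 520],
    "enrollment": [51000, 50800, 52300],
})

# Normalize: percentage of enrolled students seeking consultations that month.
df["mh_rate_pct"] = 100 * df["mh_visits"] / df["enrollment"]
print(df)
```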
Modeling approach. Our primary objective was to examine whether the online college community data is reflective of on-campus mental health service consultations. We identified language indicative of symptomatic mental health outcomes in these social media posts. Then, we conducted time series modeling to predict the mental health visits. We evaluated whether including information gathered from social media data improved the predictions.

Measuring symptomatic mental health expressions in social media data. We quantified mental health related expression in Reddit posts using machine learning classifiers that identify language indicative of symptomatic mental health expressions of depression, anxiety, stress, suicidal ideation, and psychosis. We adopted the approach presented in 17. Essentially, these classifiers are built using transfer learning methodologies, i.e., transferring a classifier trained on a different labeled dataset. These classifiers are n-gram (n = 1, 2, 3) based binary SVM models where the positive class of the training datasets comes from appropriate subreddits, i.e., r/depression for depression, r/anxiety for anxiety, r/stress for stress, r/SuicideWatch for suicidal ideation, and r/psychosis for psychosis, and the negative class of training data comes from non-mental health content on Reddit: a collated sample of 20M posts gathered from 20 subreddits on Reddit's home page, such as r/AskReddit, r/aww, and r/movies. These classifiers perform at a high accuracy of approximately 0.90 on test data 17. We used the classifiers to label each post in our Reddit dataset with binary (0 or 1) labels for each symptomatic mental health expression.

Predicting mental health service utilization. To predict monthly mental health consultations, we adopted a time series modeling approach. We used seasonal auto-regressive integrated moving average (SARIMA) techniques, a standard time series forecasting method based on past behavior that accounts for seasonality 46. SARIMA incorporates seasonality in addition to auto-regressive integrated moving average (ARIMA) techniques 47, and is suitable for time series with seasonality (e.g., in our case there is known seasonality in academic cycles). We draw on a k-fold (k = 10) cross-validation approach to predict and evaluate our modeling approaches. We first set aside the data from the first year of our dataset (2011) as the default training set so that the models could learn from the same baseline historical data. Then, we obtained various combinations of the ten folds, i.e., 90% of the remaining data was used to build a model that predicted the monthly mental health consultations on the remaining 10% of the data, and we iterated over the various combinations to predict the entire dataset.
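To make the classifier setup concrete, below is a minimal sketch of an n-gram linear SVM of the kind described above. The toy post lists and pipeline choices (TF-IDF weighting, scikit-learn's LinearSVC) are illustrative assumptions, not the authors' released models or training data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins: positive posts from a symptom subreddit (e.g., r/depression),
# negative posts from non-mental-health subreddits (e.g., r/aww, r/movies).
positive_posts = ["i feel hopeless and empty", "can't get out of bed again",
                  "everything feels pointless lately"]
negative_posts = ["great movie last night", "look at this cute puppy",
                  "what's your favorite pizza topping"]

# n-gram (1-3) features feeding a linear SVM, mirroring the setup above.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 3)), LinearSVC())
classifier.fit(positive_posts + negative_posts,
               [1] * len(positive_posts) + [0] * len(negative_posts))

# Transfer step: label each campus-subreddit post with a binary 0/1 flag.
campus_posts = ["finals week is crushing me and i can't sleep",
                "tailgate before the football game anyone?"]
print(classifier.predict(campus_posts))
```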
As our work primarily targets the efficacy of social media data in understanding mental health service utilization, we built two kinds of models, as listed below.

• Model M0 is trained using only the time series of on-campus mental health consultation data. This model can be considered the one used for most in-practice purposes, i.e., our baseline model.
• Model M1 is trained using the time series of on-campus mental health consultation data in conjunction with the time series of monthly aggregated mental health discussions on social media. For this, we calculate the monthly average of posts relating to depression, anxiety, stress, and suicidal ideation, as identified by our classifiers.

We used the above trained models to separately predict the number of monthly consultations in the test data. We pooled all the predictions together to compare against the actual values and compute the Pearson correlation coefficient (r), where higher values indicate better performance. We also measured the prediction error between the actual and predicted data as mean absolute error (MAE) and symmetric mean absolute percent error (SMAPE). MAE calculates the arithmetic average of the absolute errors |y_i − x_i|, where y_i and x_i are the predicted and actual values respectively, and SMAPE calculates the percentage of relative errors, |y_i − x_i| / [(|y_i| + |x_i|) / 2], and is bounded between 0 and 100. For both error measures, lower values indicate lower error and better predictive performance. When comparing M0 and M1, if M1 shows comparatively better predictive performance than M0, we would conclude that using social media data contributes to better predicting on-campus monthly mental health consultations. To measure the statistical significance of prediction differences between M0 and M1, we conducted t-tests using the dependent overlapping correlation method, which accounts for comparisons against a common variable of interest (here, the ground-truth number of monthly on-campus mental health consultations) 48.

Analyzing the social media language of mental health. Finally, we interpreted how social media language associates with on-campus mental health consultations. We obtained the months of high and low numbers of mental health visits by adopting a median split on the normalized number of visits in a month. Then, we examined how these periods are distinguished by their social media language. This examination helps establish the face validity of the social media language in correspondence with the ground truth. We conducted two analyses. First, we adopted an unsupervised language modeling technique called the Sparse Additive Generative Model (SAGE) 49. Given two documents, SAGE finds the keywords that distinguish the documents by comparing the parameters of two logistically parameterized multinomial models, using a self-tuned regularization parameter that controls the tradeoff between frequent and rare terms. We aimed to obtain keywords that relate to the key concerns faced by college students that lead to heightened mental health concerns. Second, we conducted a psycholinguistic analysis. For this, we used the well-validated psycholinguistic lexicon, Linguistic Inquiry and Word Count (LIWC) 50. LIWC characterizes social media language in 50 psycholinguistic attributes ranging across affect, cognition and perception, interpersonal focus, temporal references, lexical density and awareness, and personal and social concerns. This analysis helps contextualize the social media language of college students within the literature on mental health and thereby explain the predictive ability of social media language.
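Returning to the forecasting setup above, the sketch below illustrates the M0 versus M1 comparison using statsmodels' SARIMAX, with the social media signal entering M1 as an exogenous regressor. All series here are toy stand-ins, and the (1,0,1)x(1,0,1,12) order is an assumption for illustration, not the authors' fitted specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
idx = pd.date_range("2011-09-01", periods=60, freq="MS")

# Toy series: a social media signal with academic-year seasonality, and a
# consultation rate partially driven by it.
social = pd.Series(np.sin(2 * np.pi * idx.month / 12)
                   + rng.normal(0, 0.1, 60), index=idx)
visits = pd.Series(1.0 + 0.5 * social.values
                   + rng.normal(0, 0.1, 60), index=idx)

train, test = slice(0, 48), slice(48, 60)

def smape(a, p):
    """Symmetric mean absolute percent error, bounded in [0, 100]."""
    return 100 * np.mean(np.abs(p - a) / ((np.abs(a) + np.abs(p)) / 2))

# M0: consultations only. M1: consultations + social media exogenous signal.
m0 = SARIMAX(visits[train], order=(1, 0, 1),
             seasonal_order=(1, 0, 1, 12)).fit(disp=False)
m1 = SARIMAX(visits[train], exog=social[train], order=(1, 0, 1),
             seasonal_order=(1, 0, 1, 12)).fit(disp=False)

pred0 = m0.forecast(steps=12)
pred1 = m1.forecast(steps=12, exog=social[test])
print(f"M0 SMAPE={smape(visits[test].values, pred0.values):.2f}")
print(f"M1 SMAPE={smape(visits[test].values, pred1.values):.2f}")
```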
References
1. American College Health Association. American College Health Association-National College Health Assessment Spring 2019 reference group data report (abridged). American College Health Association.
2. Increased rates of mental health service utilization by US college students: 10-year population-level trends.
3. Eisenberg. Technology and college student mental health: Challenges and opportunities.
4. Center for Collegiate Mental Health (CCMH).
5. National survey of college counseling centers.
6. The prevalence and socio-demographic correlations of depression, anxiety and stress among a group of university students.
7. The Psychology of Survey Response.
8. StudentLife: Assessing mental health, academic performance and behavioral trends of college students using smartphones.
9. Inferring mood instability on social media by leveraging ecological momentary assessments.
10. Diurnal and seasonal mood vary with work, sleep, and daylength across diverse cultures.
11. Estimating geographic subjective well-being from Twitter: A comparison of dictionary and data-driven language methods.
12. Tracking fluctuations in psychological states using social media language: A case study of weekly emotion.
13. A way with words: Using language for psychological science in the modern era.
14. Values in words: Using language to evaluate and understand personal values.
15. Predicting depression via social media.
16. Quantifying mental health signals in Twitter.
17. A social media study on the effects of psychiatric medication use.
18. Studying expressions of loneliness in individuals using Twitter: An observational study.
19. Personality, gender, and age in the language of social media: The open-vocabulary approach.
20. Variability in language used on social media prior to hospital visits.
21. Feeling bad on Facebook: Depression disclosures by college students on a social networking site.
22. Me and my 400 friends: The anatomy of college students' Facebook networks, their communication patterns, and well-being.
23. Modeling stress with social media around incidents of gun violence on college campuses.
24. A social media based index of mental well-being in college campuses.
25. Coming of age (digitally): An ecological view of social media use among college students.
26. The benefits of Facebook "friends": Social capital and college students' use of online social network sites.
27. A dynamic longitudinal examination of social media use, needs, and gratifications among college students.
28. A social media based examination of the effects of counseling recommendations after student deaths on college campuses.
29. Imputing missing social media data stream in multisensor studies of human behavior.
30. Influence and correlation in social networks.
31. Transition, stress and computer-mediated social support.
32. Expressive writing, emotional upheavals, and health. In Handbook of Health Psychology.
33. Linguistic styles: Language use as an individual difference.
34. Psychological aspects of natural language use: Our words, our selves.
35. Natural language analysis and the psychology of verbal behavior: The past, present, and future states of the field.
36. Linguistic markers of psychological change surrounding September 11, 2001.
37. Designing dashboards for campus stakeholders to support college student mental health.
38. Designing a clinician-facing tool for using insights from patients' social media activity: Iterative co-design approach.
39. Psychosocial effects of the COVID-19 pandemic: Large-scale quasi-experimental study on social media.
40. A collaborative approach to identifying social media markers of schizophrenia by employing machine learning and clinical appraisals.
41. Pew Research Center survey of Reddit users' demographics.
42. Mental health discourse on Reddit: Self-disclosure, social support, and anonymity.
43. Understanding social media disclosures of sexual abuse through the lenses of support seeking and anonymity.
44. The impact of context collapse and privacy on social network site disclosures.
45. Prevalence and psychological effects of hateful speech in online college communities.
46. Dynamic linear model and SARIMA: A comparison of their forecasting performance in epidemiology.
47. Time series analysis using autoregressive integrated moving average (ARIMA) models.
48. Comparison of tests of the equality of dependent correlation coefficients.
49. Sparse additive generative models of text.
50. The psychological meaning of words: LIWC and computerized text analysis methods.

K.S. conducted this research while at the Georgia Institute of Technology. We thank the SocWeB lab for their feedback. We also thank Amelia Glaese and Jayant Jain for preliminary data analysis. The authors declare no competing interests. Correspondence and requests for materials should be addressed to K.S. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.