How Can Deep Learning Transformer Models Amplify Positive Impacts of Journaling?

Using Zero-Shot Classification with Hugging Face Transformers 🤗 to Support Emotional Intelligence in a Journaling Context.

Journaling (Photo by Green Chameleon on Unsplash)

Picking up journaling can bring surprising benefits, such as improvements in mindfulness, memory and sleep, reduced stress and even a boost to the immune system.


One of the key ideas is that journaling helps us reflect on thoughts and feelings as we recount and organise past events in our minds. And this is where transformer language models come in with the potential to support engagement, awareness and understanding. We can decipher the meaning encoded in a journal entry and flag it with the most appropriate labels from our predefined, use-case specific set. This could be a wider pool of emotion and feeling descriptors or a narrow one, tailored to a specific part of the user journey.


Opportunity

This variation on sentiment analysis can be integrated into a product in many purpose-driven ways.


For instance, empathic probing questions could offer validation and support while furthering a reflective journaling posture, perhaps as one-offs or branching into dialogue-like, back-and-forth chatbot exchanges.

Feedback and probing questions promoting emotional awareness (by author).

Taking a more passive role, the model could help us automatically tag journal entries and consequently use this metadata for search, categorisation and analytics over time (similarly to how many mindfulness products incorporate manual mood tracking).
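A minimal sketch of what such automatic tagging could look like with the Hugging Face Pipeline API. The entry text, tag set and the 0.5 score threshold are illustrative assumptions; `multi_label=True` scores each tag independently, so a single entry can carry several tags at once.

```python
from transformers import pipeline

# Zero-shot classification pipeline (model choice is an assumption)
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

entry = ("Slept badly again, but the morning walk helped. "
         "Work felt overwhelming until I talked it through with Sam.")
tags = ["sleep", "exercise", "work stress", "social support", "gratitude"]

# multi_label=True gives each tag an independent score between 0 and 1
result = classifier(entry, candidate_labels=tags, multi_label=True)

# Keep tags above an arbitrary confidence threshold as entry metadata
entry_tags = [label for label, score
              in zip(result["labels"], result["scores"]) if score > 0.5]
```

The resulting `entry_tags` list is exactly the kind of metadata that could later power search, categorisation and trends over time.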


Automatic 'Empathic Tagging' of journal entries (by author).
 

Zero-Shot Classification Prototype

Compared to a sentiment analysis model trained to distinguish a pre-defined set of sentiments, a transformer language model with zero-shot learning capabilities gives us much more design freedom.


We can benefit from the semantic language understanding of the model when it comes to the categories themselves, allowing us to modify them on the fly at inference time. Thanks to this, we can employ the same pre-trained model in various contexts with different set-ups and even use user input or metadata to inform those set-ups. In addition, this allows for extremely fast-paced experimentation and fine-tuning of how the model behaves within our designed product features.
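To make this concrete, here is a small sketch of swapping label sets at inference time with the same pre-trained model. The journal entry and both label sets are illustrative assumptions:

```python
from transformers import pipeline

# One pre-trained model serves every context (model choice is an assumption)
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

entry = "I finally finished the project I'd been putting off for months."

# A broad pool of feeling descriptors...
broad_labels = ["pride", "relief", "anxiety", "gratitude", "frustration"]
# ...or a narrow set tailored to one step of the user journey
narrow_labels = ["accomplished", "stuck"]

# Nothing is retrained — only the candidate labels change per call
broad_result = classifier(entry, candidate_labels=broad_labels)
narrow_result = classifier(entry, candidate_labels=narrow_labels)

print(broad_result["labels"][0], narrow_result["labels"][0])
```

Because the labels are just arguments to the call, experimenting with a new set-up is as cheap as editing a Python list.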

The simple prototype below explores this in the context of a "Letter to your future self". Semantic understanding capabilities aid the design of automated feedback that supports defined objectives while utilizing some principles of cognitive behavioral psychology.


Streamlit web app with the model prototype (by author)

 

Implementation

We use the Hugging Face high-level Pipeline API for the model and Streamlit to quickly build a testable web application:

And deploy it with a local tunnel:
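A deployment fragment along these lines; the app file name, the tunnelling tool (localtunnel here) and port 8501 (Streamlit's default) are assumptions:

```shell
# Run the Streamlit app in the background on its default port
streamlit run app.py --server.port 8501 &

# Expose it through a public tunnel (prints a shareable URL)
npx localtunnel --port 8501
```

In a Colab cell, prefix each line with `!` to run it as a shell command.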

This kind of on-demand hosting within a Google Colab instance lets you conveniently experiment with the model and the app with minimal setup!


You can walk through the code and build the prototype directly in your browser! Continue to this Jupyter notebook 📔!


 

What’s Next?

Despite hurdles like privacy concerns or incompatibility with a physical form, journaling practice seems ready to benefit from contemporary natural language processing (NLP) capabilities. We've used broad strokes to outline a couple of possible directions where this could support our product goals. And we've built a simple web application that allows us to explore and test those ideas hands-on!

While startups like Morningpages or Reflectly are doing amazing work innovating in this space, something's telling me there's still a lot to discover. 🤔


What do you think? What does the future journaling experience have to offer? And how could such semantic understanding capabilities help solve problems you are dealing with?