In the world of technology, large language models (LLMs) have been making waves with their remarkable ability to generate text, translate languages, and provide insightful answers. Word order plays a crucial role in large language models and in language processing generally: the sequence of words determines the meaning of a sentence.
What Is an LLM?
Large language modelling refers to the use of advanced, large-scale language models in natural language processing (NLP). In machine learning and artificial intelligence, a language model is a model trained on vast amounts of text data to understand and generate human-like language.
“In Large Language Models, the sequence of words is used to predict the next word in a sentence. The model is trained on a large corpus of text and learns the probability of a word given the previous words. So, if you input the beginning of a sentence, the model can predict what word is likely to come next, and it does this by understanding the sequence in which words usually appear. This same concept can now be applied to recommendation systems. For instance, when recommending the next song or movie, previous actions (songs listened to, movies watched) can establish the context for the next recommendation. We can imagine that user actions on the platform (songs listened to, movies watched) are just like words, the building blocks of the platform’s language, and the sequence of interactions a user performs is just like a sentence. We can then leverage this formulation to predict the next likely content (or word) that the user might engage with, which is exactly what LLMs are being modelled for. Note that this is the core problem recommendation systems are trying to solve,” explained Aayush Mudgal, an expert in deep learning with experience in building large-scale recommendation systems.
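The actions-as-words formulation can be made concrete with a minimal sketch. The sessions, item names, and the bigram (previous-item → next-item) model below are all illustrative, not any production system: real recommenders use far richer sequence models, but the prediction principle is the same.

```python
from collections import Counter, defaultdict

# Toy "language of user actions": each session is a sequence of item IDs,
# treated like a sentence of words. (Data and item names are made up.)
sessions = [
    ["song_a", "song_b", "song_c"],
    ["song_a", "song_b", "song_d"],
    ["song_b", "song_c", "song_a"],
]

# Count how often each item follows each previous item (a bigram model).
transitions = defaultdict(Counter)
for session in sessions:
    for prev_item, next_item in zip(session, session[1:]):
        transitions[prev_item][next_item] += 1

def predict_next(item):
    """Return the most likely next item given the previous one."""
    counts = transitions[item]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("song_a"))  # "song_b": it follows "song_a" in both sessions
```

A transformer-based recommender replaces these raw counts with learned probabilities conditioned on the whole interaction history, but the interface is identical: history in, next item out.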
Aayush also explained that another concept borrowed from language processing is the embedding, particularly word embeddings (like Word2Vec or GloVe). “These embeddings, which represent words in a high-dimensional space, capture semantic meanings and relationships between words. Similarly, in recommendation systems, user and item embeddings (for items such as songs and movies) can be used to capture underlying tastes and preferences and item features. Embedding features aim to learn a high-dimensional representation of content so that similar content learns similar embeddings. This work has been foundational in improving the featurization of content as building blocks for recommendation systems. Having a good representation enables us to use language models,” he said.
In machine learning, there is a well-known result, the universal approximation theorem, which states that neural networks with enough parameters are powerful enough to approximate any continuous function. Transformers and related advances, both in algorithms and in the associated hardware, have made it possible to approach these theoretical limits in practice. Research in natural language processing, vision, and recommendation is converging, with techniques widely adopted across domains.
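A tiny, exact instance gives the flavour of the theorem: a one-hidden-layer network with just two ReLU units and fixed weights reproduces the absolute-value function, since |x| = ReLU(x) + ReLU(-x). (This hand-built example is for intuition only; in practice the weights are learned.)

```python
def relu(x):
    """Rectified linear unit: the standard neural-network nonlinearity."""
    return max(0.0, x)

def abs_net(x):
    # A one-hidden-layer network with two hidden units:
    #   hidden = [relu(1 * x), relu(-1 * x)]
    #   output = 1 * hidden[0] + 1 * hidden[1]
    return relu(x) + relu(-x)

for x in (-3.0, -0.5, 0.0, 2.0):
    print(x, abs_net(x), abs(x))  # the network matches abs(x) exactly
```

More hidden units buy more piecewise-linear segments, which is how such networks approximate arbitrary continuous functions to any desired accuracy.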
Aayush also shared how learning feature interactions is crucial for recommendation systems. “Before, features were handcrafted; for example, a feature combining the user’s location and age might be useful in understanding what they would like. Such hand-crafting started to slow down new innovation. With improvements in architectures like transformers, models can now self-learn feature interactions, making feature engineering easier while at the same time improving performance,” he said.
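One simple way models learn interactions, rather than having them hand-crafted, is the factorization-machine idea: give each feature value a small embedding and score the interaction between two features as the dot product of their embeddings. This is a stand-in for the transformer-based approaches Aayush describes, not his specific architecture, and the embedding values below are invented for illustration.

```python
# Illustrative learned embeddings for categorical feature values.
# In practice these are trained end-to-end; the numbers are made up.
feature_embeddings = {
    ("location", "NYC"):   [0.7, 0.1],
    ("age_band", "18-25"): [0.6, 0.2],
    ("age_band", "55+"):   [-0.4, 0.8],
}

def interaction_score(features):
    """Sum of pairwise dot products between feature embeddings
    (the second-order interaction term of a factorization machine)."""
    score = 0.0
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            u = feature_embeddings[features[i]]
            v = feature_embeddings[features[j]]
            score += sum(a * b for a, b in zip(u, v))
    return score

young_nyc = interaction_score([("location", "NYC"), ("age_band", "18-25")])
older_nyc = interaction_score([("location", "NYC"), ("age_band", "55+")])
print(young_nyc, older_nyc)  # the location-x-age cross emerges from training
```

The key point is that no one wrote a "location × age" rule: whatever interaction matters is absorbed into the learned vectors.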
He explained that zero-shot learning is making a big difference. Using deep learning and models like GPT, recommendation systems can now handle items they never saw during training. With improved transfer learning, better recommendation systems can be built without large amounts of training data, a significant change in how things are done.
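The zero-shot idea can be sketched in miniature: an item with no interaction history can still be scored if it is described in a shared attribute space (here, genre tags) that the user profile also lives in. All names and tags below are illustrative; real systems would use rich text embeddings from a pretrained model rather than hand-picked tags.

```python
# Shared attribute space linking seen and unseen items.
genres = ["jazz", "rock", "classical"]

def to_vector(tags):
    """One-hot encode a set of genre tags over the shared vocabulary."""
    return [1.0 if g in tags else 0.0 for g in genres]

# User profile built from items observed during training.
user_profile = to_vector({"jazz", "classical"})

# A brand-new item with no interaction history, only a description.
new_item = to_vector({"jazz"})
other_item = to_vector({"rock"})

def score(user, item):
    """Dot product between user profile and item description."""
    return sum(u * i for u, i in zip(user, item))

print(score(user_profile, new_item))    # matches the user's tastes
print(score(user_profile, other_item))  # does not
```

Because the unseen item is scored from its description rather than from past clicks, the system can recommend it on day one, which is exactly what zero-shot generalization buys.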