This course introduces the fundamental concepts underlying Large Language Models (LLMs). It begins with an overview of core problems in NLP and shows how language modeling can be approached with deep learning. It then covers the architectural intricacies of Transformers and the pre-training objectives of the various Transformer-based models, before turning to recent advances in LLM research, including alignment, prompting, parameter-efficient adaptation, hallucination, bias, and ethical considerations. By the end, students will be prepared to comprehend, critique, and approach a range of research problems on LLMs.