I was planning my routine for the new semester and thought I should document the learning protocols that have worked for me. This guide aims to maximize learning efficiency for anyone, especially students. The protocols below draw inspiration from Huberman Lab and Andrej Karpathy (sources below). These protocols (and AI “prompts”) have proven helpful for me, and I believe the underlying principles apply to all kinds of learning (human and machine).
When to learn
I build my routine around bouts of scheduled learning throughout the week (mostly classes). Regularity is key; it also reduces friction by removing the need to make the difficult decision to go to the library every single time.
Before starting any bout of learning, it helps to get your body alert. Exercise, breathwork, or cold exposure are useful ways to do that. This opens up a window of enhanced neuroplasticity, making learning easier (Huberman Lab). Limit learning bouts to 1.5 hours of focused work (one ultradian cycle). Don’t make yourself too comfortable during the process, as that can lower alertness, making focusing more difficult.
I usually do my weekly cardio an hour or so before classes start to get my adrenaline pumping. I drink electrolytes or eat a banana and take my daily creatine, both of which give me a boost in focus and help me recover. I attend lectures in person because I find that the slightly uncomfortable classroom environment helps me stay attentive.
How to learn
Setting up the environment is an important first step, but the magic really happens in the methodologies used to learn. Two principles that are proven to work are iteration and exploration. (I will discuss the generalization of this in a later post).
Iteration with a performance review – After consuming new information, ask your favorite AI the following question:
“Ask me 3 questions about {{topic_you_studied}} at an {{expertise_or_academic_level}}”
Building from scratch – The next step is to model the concept you just learned from scratch. This is inspired by Andrej Karpathy’s tutorial projects like Micrograd and NanoGPT and is similar to an assignment in a course. By working through each step of an algorithm or concept, the brain builds an internal representation of that knowledge, which can be far more valuable than retaining words from a textbook or lecture.
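To make the “from scratch” idea concrete, here is a minimal sketch in the spirit of Micrograd: a tiny scalar value class that records operations and backpropagates gradients. This is my own toy illustration, not Karpathy’s actual code; the class and variable names are made up for the example.

```python
# Toy scalar autograd, loosely modeled on the idea behind Micrograd (not the real library).
class Value:
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None   # how to push gradients to children
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # d(out)/d(self) = 1, d(out)/d(other) = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # product rule: each input's gradient is scaled by the other input
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # topologically sort the graph, then apply the chain rule in reverse
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

a, b = Value(2.0), Value(3.0)
c = a * b + a          # c = 2*3 + 2 = 8
c.backward()
print(c.data, a.grad, b.grad)   # 8.0 4.0 2.0  (dc/da = b + 1, dc/db = a)
```

Typing out the chain rule by hand like this, rather than reading about it, is exactly the kind of exercise that forces the internal representation to form.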
For example, the first time I saw the algorithm for long division (divide, multiply, subtract, bring down) I was confident I could long divide ANY number. However, when I actually tried and encountered 456 ÷ 5, I immediately blanked out and started scratching my head, not knowing how to even start. It wasn't obvious to me that I could start with 0. An exercise like this encourages exploration and, as a result, deeper learning.
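Implementing the procedure is a good way to test whether you really know it. Below is a small sketch of digit-by-digit long division (the function name and digit-list interface are my own choices for illustration); note how the very first quotient digit for 456 ÷ 5 comes out as 0, the step that tripped me up:

```python
def long_division(dividend_digits, divisor):
    """Digit-by-digit long division: divide, multiply, subtract, bring down."""
    quotient_digits = []
    remainder = 0
    for d in dividend_digits:
        cur = remainder * 10 + d                  # bring down the next digit
        quotient_digits.append(cur // divisor)    # divide (the digit may be 0!)
        remainder = cur - (cur // divisor) * divisor  # multiply and subtract
    return quotient_digits, remainder

print(long_division([4, 5, 6], 5))  # ([0, 9, 1], 1): 456 = 5 * 91 + 1
```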
This process demands patience, but the fruit it bears is truly worth the effort. Continual learning is one of the greatest challenges in machine learning but is something we humans have been bestowed with by default. I hope we can all make the most of it and keep learning.
References
Huberman Lab - Teach and Learn Better with a Neuroplasticity Super Protocol
Karpathy - Andrej Karpathy on Lex Fridman Podcast