SPEAKER_02: Alright, delighted to be joined here once again by Justin Skycak, Chief Quant and Director of Analytics at Math Academy. So Justin, you've been working away on a cool new Math Academy course. I've been seeing all the updates on Twitter. Wondering if you can tell us a little bit about that. SPEAKER_03: Oh yeah. Machine learning one. So it's going to be a legit machine learning course. We already have Math for Machine Learning. So far, anyone who signs up on our system wanting to learn machine learning goes through this Foundations sequence — Foundations One, Foundations Two, Foundations Three — that covers your standard high school and a bit of undergrad math, and then jumps into Math for Machine Learning, which covers all the math you'd need to know in order to take a proper machine learning course. But up until now, we haven't had a legit machine learning course — the actual course that goes through backprop and neural nets and that stuff. It's all just been the math supporting that: multivariable chain rule, gradients, working with probability distributions, whatever. Now we're actually building out a course that covers the real machine learning algorithms. The reason we got started on this actually came out of a conversation with Jason Roberts — he and his wife Sandy are the founders of Math Academy. We were just doing a vibe check: how are our learners doing, what are they most excited about? And I told him, you know, Jason, if I were to estimate what portion of our adults on Twitter/X are excited about machine learning, my guess would be about 70% of everybody out there. At this point, more than half of our users are people from Twitter, and they want to do machine learning — which makes sense. That's kind of what you guys are interested in too, right? So right now it just seems kind of silly that we support you with all the math up to machine learning, and then you get there, you know your math for machine learning, and you're like, okay, cool, I'm ready to go, teach me machine learning — and we're like, oh, just go find another course online. What? No. And the courses online — honestly, I've been pretty disappointed with the machine learning resources out there. There are some halfway decent ones, but by and large — you know how you can pick up an algebra or calculus textbook and it just lays it out topic by topic? Concrete examples, nice problems, scaffolded. Now, there are varying degrees of scaffolding; some resources are better than others. But generally, you go to a machine learning book or resource, and it just tells you in broad strokes how the algorithm works and gives you some conceptual intuition about it, so you can check a box: okay, I sort of kind of understand this now. And then you run some code off a tutorial — import TensorFlow, copy and paste the code over, run it, you get a result, and you're like, wow, I did machine learning. That's great.
But you're not actually doing the math yourself. You've got to really get into the nuts and bolts of this stuff. Now, there are some resources that point you in that direction, but then you hit a wall so quickly on the level of difficulty, and you just fall off. So this seems like the right area for us to build out more courses. So many people want it, there's such a lack of good scaffolded resources, and it's also kind of a missing piece in getting people where they want to go career-wise. A lot of people sign up for Math Academy — like we've talked about in previous podcasts — because they want to skill up in math, and the reason they're doing that is they think it's going to transform their life in some way, or improve their future prospects. Some amount of that comes just from being knowledgeable in math, but there's a jump you're trying to make — you're trying to get yourself into a new position in life, for a lot of our learners at least. And right now this knowledge of actual machine learning, and the ability to apply it, is like the one missing piece we need to put in to finish that bridge over to where you want to go. So that's really been the inspiration for it. Now, we're still going to do all the other math courses — real analysis, proof-based linear algebra, everything — it's just that the priority levels have shifted a bit. Previously we were thinking, okay, let's finish all of undergrad math and then expand out into math-adjacent subjects like machine learning and more computer science. But when the universe puts an opportunity in front of you and pushes you in that direction, it's just kind of dumb not to take it. So that's the story behind the machine learning course. Oh, I should say this is actually going to be several courses. Initially we were planning, okay, let's make a machine learning course, but then I started really scoping out what the topics in this course would be. Turns out there are a lot of topics in machine learning, as we all know, and when you start scoping it down to the individual scaffolded topics, you realize: oh dang, this is more than a semester course. We often talk about machine learning as if it's one standardized course, but that's almost like talking about high school math as if it's one course. No — you need your arithmetic, there's algebra, there's geometry, there's algebra two, there's precalculus, there's calculus. There's this whole spread of subjects baked into it. It's the same with machine learning.
And so our plan is: machine learning one is going to be mostly your classical machine learning — regression, like linear regression, logistic regression, clustering techniques, definitely decision trees, and neural nets. But it's going to stop around convolutional neural nets. If you pick up any classical machine learning textbook, or look at any first-course-in-machine-learning syllabus, they typically top out around convolutional neural nets. They'll introduce your neural net, your backpropagation, a simple multilayer perceptron, a feedforward net, and then the convolutional net is the start of, okay, there are some interesting architectures we can build out with these neural nets. That's your gateway drug to the rest of the neural nets — transformers, LSTMs, recurrent neural nets, this whole zoo of different architectures. And that just opens up a can of worms on how many topics there are to cover. It's a ton. You can't do all of classical machine learning and fit all of those neural net architectures into the same course, so that kind of stuff would go in machine learning two. And then there are also a lot of different ways to train these sorts of models. For example, with support vector machines, and even with things like logistic regression, you can train this sort of stuff through gradient descent: you define a cost function and then apply a gradient descent optimizer to it. But there are sometimes tricks, like using linear or quadratic programming, to do this in a less compute-intensive way. So really hardcore classical machine learning courses might have you do some of that, and I was starting to think, well, should we also do this in machine learning one? But we just started running out of room — there are so many core topics that fit into this one semester course. So stuff like that — the linear and quadratic programming — would also go in machine learning two. Machine learning two will be like a second pass at a lot of the topics from machine learning one, where you maybe train them using more sophisticated training algorithms, and you also learn about more advanced setups or architectures, variations of the models, the whole zoo of neural networks. And after that there will almost certainly be a machine learning three that gets into the cutting-edge stuff. Because just thinking about transformers: if you know how a transformer works and you can do some math on the nuts and bolts of it — even then you're not exactly at the cutting edge yet. You're pretty close to the cutting edge of machine learning. But if you want to go pick up a machine learning paper, there's other stuff going on.
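To make the training approach mentioned above a bit more concrete — "define a cost function and then apply a gradient descent optimizer to it" — here is a minimal sketch in Python. This is an editor's illustration with made-up toy data, not code from the Math Academy course: logistic regression trained by batch gradient descent on the cross-entropy cost.

```python
import numpy as np

# Illustrative only: logistic regression trained by batch gradient descent
# on the mean cross-entropy cost.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(X, y, lr=0.1, epochs=1000):
    """X: (n_samples, n_features), y: 0/1 labels."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)      # predicted probabilities
        grad_w = X.T @ (p - y) / n  # gradient of mean cross-entropy w.r.t. w
        grad_b = np.mean(p - y)     # gradient w.r.t. the bias
        w -= lr * grad_w            # gradient descent step
        b -= lr * grad_b
    return w, b

# Toy usage: two clusters of points, labels 0 and 1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(+1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
w, b = train_logistic_regression(X, y)
print("training accuracy:", np.mean((sigmoid(X @ w + b) > 0.5) == y))
```

The quadratic-programming route mentioned for SVMs would replace this loop with a specialized solver; the sketch only shows the generic cost-function-plus-gradient-descent recipe.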
That other stuff is a little more cutting edge than just, okay, how does the basic transformer work — there are more sophisticated techniques going on. So my guess is there'd probably be another machine learning three course, but that's so far ahead at this point that it's a little nebulous in my mind what it would be. Ultimately we want to get people to the point where they can just pick up a machine learning paper and know all the background information they need to read through it and implement it. SPEAKER_00: And they'll actually be able to build their own projects. SPEAKER_03: Yeah, exactly. In the short term, though, [unintelligible] — kind of like a puzzle where we show you part of an implementation and initially ask you to fill some other parts in. So that'll be kind of a repetition, skill-training sort of thing. But the goal will be to have these more project-like tasks, where we decompose some kind of little coding project into a bunch of steps, each of which comes back to one of the topics you learned, and you have to implement it in code. And some of the reasons why this approach feels like it fits: first of all, with these coding exercises, it's kind of hard to make them a repeated, spaced-repetition sort of thing. You code up a model from scratch — what do we do, ask you to do that again? You'd just copy and paste your code from last time. How do you create a new variation of the problem? At that point it's really just a different project. Right. SPEAKER_00: Plus it's too coarse-grained. I think Larry Sanger, co-founder of Wikipedia, built an open source tool for spaced repetition with coding. And it had that character where you would do a single task and it would have you repeat that task. It was a command-line-based thing — it would actually run the script that you wrote and check the output. But to me that felt too coarse-grained. You're combining all sorts of skills in writing that script, so you need to target things more specifically. Otherwise it's going to be so frustrating — you fail the whole thing because you forgot one bit of syntax or whatever it is. SPEAKER_03: Yeah, exactly. That's sometimes the challenge when you jump right into project-based learning: it's not targeting the core underlying skills as granularly as you need. But yeah, I'm excited for it. And in terms of when we're shooting to have it — we are seriously prioritizing this course, more than any course we've prioritized in the past. Just to give you an example: starting next week, I am going to be working full-time on content development for this course. Usually I do my quant stuff, my algorithms stuff — I've got a bunch of stuff I have to work on with analytics and task behavior analysis and various things with the model.
But right now, starting next week, once I close the loop on a couple of things, I'm going whole-hog into this machine learning course. SPEAKER_00: I was actually going to ask about that, because I saw you posting on Twitter some of the content you're working on. It looked like you had scaffolded out all of the knowledge points and the basic structure of the course, and you were already getting to work on specific questions. And I was going to ask — you don't have to reveal it if you don't want to — how is it that you guys write these questions? Because, like you said, you have all these difficulties with writing questions that are different every time and can be reintroduced. How do you make sure, in a QA, quality-assurance way, that the questions you're writing actually work for the vast majority of people? What's the process like? And how is it possible that you, who normally doesn't do this, can just slide in and do it? Is that just a talent of yours, or is there some process you guys follow internally? SPEAKER_03: Well, I think the secret ingredient is just having experience doing this stuff in the past. On one hand, yeah, I usually focus on the technical aspects of the system as opposed to the content. But when I initially joined Math Academy, I was actually working as a content developer, because that's what was needed at the time and I had a bunch of experience writing content. I wrote a ton of stuff, like hundreds of lessons in the system. I also spent three years teaching a highly accelerated quantitative coding course sequence within Math Academy's original school program. So a lot of these topics, particularly in classical machine learning, but also in coding in general — I actually spent three years teaching them to high schoolers, who came in knowing calculus, multivariable, probability, statistics, etc. Something we always joke about at Math Academy is that we came from the background of having to teach all this stuff to kids, to students. They had all the prerequisite information they needed, because we had taught it to them — we taught them the foundational math behind this stuff. But it's more challenging to get something to work for a kid with a lower attention span who is generally more adversarial — especially in a school program, where it's not always 100% "I'm just really intrinsically interested in this." If you can make it work for kids, then you can do it at scale for adults. So I think it just comes down to having years of experience going through the grunt work of doing all this manually, and it's kind of hard to replicate without that experience. I've tried previously — I mean, Alex and I have both tried using ChatGPT in the content development flow. It can be good for idea generation sometimes, but at the end of the day it doesn't really solve the problem for you. It can generate some ideas that help you solve the problem of scaffolding all this, but you need a really good mental model of how a student is thinking about these things, what kinds of things they get confused about, and what it takes to scaffold things up.
And it also helps having done this for math subjects like algebra and calculus and differential equations, where it's a little more straightforward how to scaffold things. In math, you can flip open a textbook — an algebra or calculus textbook — and people generally take the approach of trying to scaffold things. It's not always at the granularity that would be ideal, but it's at least directionally correct. Whereas in machine learning, a lot of the resources I've seen online are not even directionally correct. And it's hard to know how to do it unless you come from the easier case of scaffolding math — okay, we've done this for math, now let's do the same thing for something more challenging. Having done this stuff manually, in the case of math, has kind of scaffolded us up into a situation where we have an idea of how to do it for machine learning. Right, anyway. Sorry, go ahead. SPEAKER_02: So there's this phenomenon, I guess especially for me in math, where sometimes you read an explanation and it's so good that you can't help but understand it. It's almost like, even after having read it, you couldn't make yourself not understand it. You know what I mean? Sometimes you can just nail that explanation perfectly. I'm wondering if you've found any design principles for coming up with explanations that achieve that kind of level. SPEAKER_03: Yeah, I think there are a couple of components to it. I'd say there are kind of two groups of principles. One group is inherent to the explanation itself. And the other thing is, I think, that in order for this to really happen, you need to be up to speed on your prerequisite knowledge. Not only having learned your prerequisite knowledge, but having it relatively fresh in your head. Not like, oh, I learned this five years ago, then I forgot it, and now I've quote-unquote "learned" it but not really. You need to be pretty spun up on your prerequisite knowledge. Once that condition is satisfied, I think a good explanation can help make things click for you. So the question now is: what are the aspects of a good explanation? The first thing that pops into my head is just having a concrete example. It's always helpful to draw analogies and illustrate things conceptually, but a concrete example brings it down to actual numbers, in a simple case where you can wrestle with it in a hands-on way. Because at the end of the day, in machine learning there are a bunch of concepts like overfitting and underfitting, bias and variance, and so on, and the core of all of this is numerical measurements on various functions and data. If you can bring something down to a concrete numerical example, it almost feels like a ledge you can hold on to. Anytime you feel yourself slipping in a different direction, you can just hold on harder to that, go back to that concrete example: oh, this is what it means concretely.
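As one small illustration of "bringing it down to actual numbers" for a concept like overfitting — an editor's sketch with made-up data, not a Math Academy lesson — you can compare training error and held-out error for a simple model and a very flexible model fit to the same noisy points:

```python
import numpy as np

# Hypothetical concrete example of overfitting: training error vs held-out
# error for a degree-1 and a degree-9 polynomial fit to noisy data.
rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 30)
y = 2 * x + rng.normal(0, 0.3, size=x.shape)   # noisy line

x_train, y_train = x[::2], y[::2]   # even-indexed points used for fitting
x_test, y_test = x[1::2], y[1::2]   # odd-indexed points held out

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    mse_train = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    mse_test = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {mse_train:.4f}, test MSE {mse_test:.4f}")

# Typical outcome: the degree-9 fit has lower training MSE but higher test MSE,
# which is the numerical signature of overfitting.
```

The flexible model wins on the points it was fit to and loses on the points it never saw, which is the kind of concrete numerical handle being described.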
I think one of the things that makes math explanations sometimes hard to follow is when they're too hand-wavy and sort of nebulous. Maybe you can't follow it at all, and you're frustrated because it's too hand-wavy or nebulous. Or maybe you think you're following it, but later on you try to think in terms of this hand-wavy, nebulous concept you have, and it turns out you're confused again, because you thought you knew it but you didn't actually. When you boil things down to a concrete example, you're immune to all this confusion of "well, I thought I knew how it worked, but I don't actually." So yeah — concrete examples. That's the main thing, I would say. SPEAKER_00: Is there a way that you — well, I'll preface it with this, because I think it's important to understand the quality of explanation we're talking about here. I feel like we've been heaping praise on Math Academy for like a dozen hours, but I'm happy to continue doing it. If you look on Twitter at the way people describe their experience with Math Academy, some of it approaches a description of magic, almost. They say, I just did this and now I understand that, and I almost didn't realize it. You get to the end of a few lessons and now you understand a new topic, and you almost didn't even realize how simple it was — but because of the way you learned it just now, it was simple. So that's the kind of quality we're talking about. With that said, how do you avoid common errors in explanations, like the curse of knowledge — where you're writing an explanation that makes sense to you because you know it, but it doesn't make sense to somebody coming to it fresh? Is it all just experience — you have an internal sense — or is there something else, some more formal process you go through to check it? SPEAKER_03: Yeah, there are a couple of ways. There's some amount of internal knowledge, like, oh, I know from experience that students always mix up this with that, so we're going to be very careful not to intermingle those two things, or to clarify the difference between them in the explanation. And there's also the fact that we run analytics on all the lessons we put out. So we can tell, okay, what percentage of students who try this lesson pass it on the first try, how many pass it within two tries. And we can also drill down: suppose the pass rate is low — where are people getting stuck? Where specifically in the lesson, which knowledge point, which questions within that knowledge point? We can drill down to exactly what the issue is and take corrective measures, like scaffolding it up more in that area, or sometimes splitting the topic into two because we realize we were trying to bite off too much in one topic. We've done that for years, so we have a decent idea — a pretty good idea, I'd say — of what kinds of issues we've had in the past and how to avoid them in the future, so we don't run into as many of these issues going forward with new material.
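A rough sketch of the kind of drill-down described here, using a toy attempt log and hypothetical field names rather than Math Academy's actual schema: compute first-try pass rates per lesson, then break a weak lesson down by knowledge point and question to see where students get stuck.

```python
import pandas as pd

# Hypothetical attempt log: one row per (student, lesson, knowledge point, question, try).
attempts = pd.DataFrame([
    ("s1", "chain_rule",   2, "q3", 1, False),
    ("s1", "chain_rule",   2, "q3", 2, True),
    ("s2", "chain_rule",   1, "q1", 1, True),
    ("s2", "chain_rule",   2, "q4", 1, False),
    ("s3", "product_rule", 1, "q2", 1, True),
], columns=["student", "lesson", "knowledge_point", "question", "try_number", "passed"])

# First-try pass rate per lesson.
first_tries = attempts[attempts.try_number == 1]
print(first_tries.groupby("lesson")["passed"].mean())

# Drill down into a weak lesson: which knowledge point and question are the sticking points?
weak = first_tries[first_tries.lesson == "chain_rule"]
print(weak.groupby(["knowledge_point", "question"])["passed"].agg(["mean", "count"]))
```

This is a simplified analogue (per-question first-try rates standing in for whole-lesson pass rates), but it shows the shape of the "pass rate, then drill down" workflow.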
What I would say, though — the main thing — is that these are still secondary to concrete examples. Concrete examples, concrete examples, concrete examples. When you write a concrete example, it helps prevent you from falling victim to that curse of knowledge. Because when you write a concrete example and you write down a solution, you see in your solution, okay, these are the steps being taken — oh, crap, we haven't covered that step yet, we need another topic on this step. Or maybe you've covered a couple of the steps, but you realize, wow, we're really stringing these four steps together all the time in all these problems — that's really an underlying technique. Or, related to that: when you have all the underlying prerequisite material scaffolded up into prerequisite topics that the student needs to know before the current topic, your explanations should be decently short. It shouldn't be pages upon pages of math. If you write a concrete example and you have to go through pages upon pages of explaining "this means blah blah blah, that means blah blah blah, and here's how you solve this sort of equation," it means you're trying to do too much. So it forces you to think: okay, we need to compress this explanation down; this is way too long; we need to offload some of this cognitive load into lower-level topics that come as prerequisites. You do that, and then you can condense your explanation of the top-level concept. But if you don't have concrete examples, it's hard to get a sense of just how big these explanations might be. SPEAKER_00: Right. So how do you balance all of that — all the considerations of writing a good question — with this one-XP-per-minute thing? To make sure there's about, you know, an equal time commitment per XP. Is it just changing the number of questions per lesson, or what's the approach? SPEAKER_03: So you mean, how do we try to make each lesson the same sort of bite size? SPEAKER_00: Like, how does that scale to quite complex things? Does it go into the writing of the questions, or is there some other approach, where you change the number of total questions, or the number of knowledge points, or the number of XP for a given lesson? What's the approach? SPEAKER_03: Okay, so I'm still trying to understand the question. Are you asking how we keep the lessons calibrated so that there's that equivalent of one XP per minute? Okay. Yeah. So the XP is kind of done in hindsight. It's not like we say, okay, this lesson is going to be 20 XP, let's go write a 20-XP lesson. You just write the lesson, try to keep it bite-size, try to keep it manageable. And if you do that, it should be somewhere between, like, seven and twenty-five XP — just spitballing — like seven to twenty-five minutes. There are some topics that just have intrinsically more load in them and take longer, and some topics that are intrinsically easier, with less computation involved.
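The explanation that follows spells out how that per-lesson XP figure gets assembled; as a rough sketch of its shape — made-up numbers and function names, not Math Academy's actual formula — expected minutes come from knowledge points, questions per knowledge point, and per-question time estimates, with XP pegged at roughly a minute each, and the initial expert estimates later re-calibrated against observed student times.

```python
# Hypothetical sketch of a lesson-XP estimate (illustrative only, not the real model).

def estimate_lesson_xp(num_knowledge_points, avg_questions_per_kp, minutes_per_question,
                       tutorial_slides=0, minutes_per_slide=1.0):
    """Expected minutes for a lesson, treated as roughly 1 XP per minute."""
    question_minutes = num_knowledge_points * avg_questions_per_kp * minutes_per_question
    slide_minutes = tutorial_slides * minutes_per_slide
    return round(question_minutes + slide_minutes)

def calibrate(initial_minutes_per_question, observed_median_minutes, weight=0.5):
    """Blend the content writers' initial estimate with observed student timing data."""
    return (1 - weight) * initial_minutes_per_question + weight * observed_median_minutes

# Example: 3 knowledge points, ~2 questions each; writers guess 2.5 min per question,
# but students actually take a median of 3.1 minutes, so the estimate gets recalibrated.
mpq = calibrate(2.5, 3.1)
print(estimate_lesson_xp(3, 2, mpq, tutorial_slides=2))  # rough XP for the lesson
```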
So we have a lesson, and then we basically compute the amount of XP for that lesson. The XP computation starts with how many knowledge points are in the lesson, how many tutorial slides, and, for each knowledge point, roughly how many questions there are on average, and then we multiply by the expected time per question. We have these time estimates per question. We start out by having subject matter experts — our content writers — estimate how long it takes to work through a question: not speeding through it, but if you have an idea of what you're doing and you're going at a normal pace, how long does it take? We record that, and we have a couple of content writers do that estimate, so it's like a little hive-mind average. We take that as the initial estimate, and then we further calibrate it based on student data — how long students actually take to do these questions. Because sometimes, when you're doing a lesson and learning this thing for the first time, you might be a little slower than somebody who generally knows what they're doing and isn't having to go back all the time. And other times it clicks for most students really quickly, they speed through, and we actually underestimate — that's more rare, but it does happen sometimes. So yeah, it's a highly manual process that gets further calibrated based on student data. SPEAKER_00: You mentioned time there — maybe you've talked about this before, but does response time on a question play a role in the spaced repetition? Because I remember seeing something about how that determines something, but I'm not sure if it plays a role. SPEAKER_03: For diagnostics it does; for lessons and reviews and other tasks it doesn't; and for quizzes it sort of does. Let me go into that. For diagnostics, we do measure time, and if you're taking a long time to solve a problem, we're not going to give you very much credit for solving it. We'll give you a little bit, but we don't want to place you too far ahead of your ability. If somebody takes five minutes to solve a quadratic equation, they're not ready to go far beyond that — they need to build up more practice doing it. SPEAKER_00: And that is a direct result of the automaticity that runs through the whole system. That's the goal of everything. SPEAKER_03: Yeah, right. The goal is to get you to the point where you can do these foundational skills quickly, so they don't occupy a lot of mental effort and you can actually focus on the next thing you're learning. And for that reason, sometimes people get kind of smacked by the diagnostic and place a lot lower than they think they should have, because they're like, oh, well, I learned topics X, Y, Z in school. And then the question is: okay, well, can you solve problems on those topics correctly, consistently, quickly?
No? Okay, well, then you need more practice on it. It doesn't matter if you did it in school. What we're measuring is whether you're actually able to do it at the level you need in order to continue building on that skill. Right. Now, for lessons — let me preface this by saying that we eventually want to work time measurements, and all these micro-behaviors like going back to examples or to reference material, into all the spaced repetition mechanics. But right now, for lessons, all that matters is your accuracy on the questions. That's the only thing that feeds into the spaced repetition mechanics. Basically, if you're solving questions correctly, you're getting a lot of positive credit. If you're missing some questions, you're getting less positive credit. If you're missing a lot of questions, failing tasks, you're getting negative spaced repetition credit. The reason for that is that there's a lot more variance in lesson performance. Different students can come out of a lesson at different speeds, and if a student is able to solve the questions in a lesson and get them right consistently, even if they're kind of slow, you don't necessarily want to hold them back and have them keep practicing that over and over again. They can get implicit practice on those skills by continually building on their knowledge with more advanced topics. So we don't want to take too drastic an approach when factoring time into spaced repetition for lessons. But at the same time, you would want to trigger reviews earlier if a student is slow to solve problems, even when they're doing it correctly — like, okay, let's get you a little bit faster. And also, as we work in the task behavior analysis: if a student is going back to the example for every problem they solve, that's not a great sign. They should be trying to do it without the example. So not only do we want to measure that sort of behavior in lessons and bring it into the spaced repetition mechanics, we also want to incentivize proper learning behaviors. If you go back to the example by default, on every question, without even trying the question first — that's just not how you do it. You're using the example as a crutch. And not only do we not want to give you as much spaced repetition credit, we don't want to give you as much XP, because you're not aligning yourself with the optimal process for building up this knowledge. And at the same time, if you are trying to do questions without relying on the example — if you're working through this with the right habits in mind and building more as a result — that's worth a bigger reward. SPEAKER_02: I was curious — since you're the chief of analytics, you must have looked at a lot of different student behavior data points, and obviously you've mentioned some in the questions you've answered so far.
I'm curious if you've seen anything mysterious or interesting or unexplained in the student behaviors you've been looking at. SPEAKER_03: Yeah, that's a good question. Mysterious, unexplained behavior. This reminds me of a time back in the early days, when I saw something while poking around in the database. Initially it seemed like my model was just doing something stupid, because kids would do well on a quiz and then get a bunch of follow-up reviews afterwards on questions they got correct. So the question is: why is the model assigning them follow-up reviews? That sounds like something's broken. I dug into it, and it looked like the database was just changing itself afterwards. I was like, wow, this is weird behavior; whatever the student is doing is very unusual. And of course I talked to Jason about it. I'm like, Jason, I don't know how else to say this, but I think the database is doing something weird. And he's like, this is students cheating, 100%. So we looked into it, and yeah, of course, the students were cheating. They were opening up the quiz in a new tab before they submitted. They would submit their quiz, see which questions they got wrong and what the correct answers were, and then change their answers in the copy they still had open. It was some exploit we hadn't patched at the time. So any time we see this sort of weird, unexplained behavior — like, what are you doing? — it's often an indication that people are trying to exploit the system in some way. Now that I think about it more, your question is probably more about learner behavior — are there any weird things students do that they think are productive for learning but actually aren't? There is definitely weird behavior, but I wouldn't call it unexpected, probably just because I've seen it all and I'm used to it by now. But I can list off some of the things that are weird and would probably be unexpected to a reasonable adult using the system who hasn't dealt with these sorts of issues for years, but that are totally expected to me. First of all: going through a diagnostic and guessing on all the questions, or just submitting "I don't know" to everything, because they don't want to do the diagnostic. That's one of them. It's crazy. Another: going through the diagnostic and looking up material — getting a question and saying, oh, I bet I can figure out how to do this if I look it up online, then taking half an hour on the question, maybe still getting it wrong, going on to the next question, and then complaining that the diagnostic took them five hours. Another one: clicking on a lesson, skipping the tutorial slides, skipping the example slides, trying to solve a problem, getting it wrong, spending ten minutes on the problem just confused about how to do it, and not going back to the example, not going back to the tutorial. Kids do this.
When I was teaching classes with Math Academy, I'd see this sort of stuff all the time. I'd get a class of, like, ten sixth graders coming in on their first day using Math Academy, and I'm just like, okay, which one of you is going to be the one who races through, doesn't read anything, gets stuck on questions — and now I have to go sit with you and show you not how to do the problem, but how to work: you have to read the tutorial slide, you have to read the example slide, you have to write stuff out on paper. You get these kids who are sometimes speed demons — they just wouldn't read. And there'd be some kids who would take a while on a question, try to figure it out, struggle because they weren't reading the example, and then just take their best guess and plow through. You'd see like seven failed tasks in a row within the span of ten minutes, and you're like, what is going on? Just full of confidence. SPEAKER_00: Oh, I got this. No problem. SPEAKER_03: Yeah. Also: getting questions wrong and then not reading the solution to figure out what you got wrong — just going straight on to the next question, thinking that somehow, magically, you're going to be able to do it, and then making the same exact mistake. So people — kids in particular, but people in general — will just fall off the rails in all directions unless you intentionally try to corral them and coach them. I guess initially, when I started teaching and inspecting the user behavior, all this stuff was like, wow, people are really not approaching this the right way. But I've just seen that sort of stuff over and over and over again. Sometimes there's an email from a parent like, oh, my kid is struggling with Math Academy, they say this explanation isn't good — and then you look at their data, and it turns out they spent like two seconds on the question. Did they read the example? Just all these sorts of adversarial behaviors. Now, a lot of this is sometimes in good faith — the kid is actually trying to solve the problem and maybe they just forget that there's a tutorial slide or an example slide. I know it sounds silly, like they just skip it or forget that it's there, but when you're a kid, sometimes you just forget a lot of things. I remember one kid who forgot to write their name on the AP Calculus exam — like, the big standardized exam. And some other kid just circled answers in the test booklet instead of filling in the bubbles. So yeah, some of this is just being a kid. But other times they're trying to game the system, or maybe they don't want to do the work, so they'll try to create this confusing scenario where they can trick their parent into thinking they're doing a lot of hard work in the system. I remember one time there was an email from some parent like, oh, my kid has been struggling with the system, they're doing four hours of work every day for the past month and they just can't get any XP — and then we take a look, and in the past two weeks they've spent like eight minutes total in the system and answered one question.
It's like, what are you talking about? So yeah, there's a lot of that sort of thing, and you can slice and dice it: sometimes adults do weird stuff, sometimes kids do weird stuff; sometimes it's unintentional and in good faith, sometimes it's in very poor faith. There are all these different dimensions. SPEAKER_00: If an adult were going through Math Academy, what would be the incentive to do any trickery like that? You would think it's all self-directed, so what would be the point? SPEAKER_03: Yeah, okay. Adults are typically less adversarial. One kind of failure mode that adults sometimes get into, especially on their diagnostic, is that they'll try to grind through questions that are well beyond their capabilities. The reasoning is usually: well, I just want to get as far as I can, I'm faced with the most challenging problems, and you're supposed to try as hard as you can, right? So they take that as, just struggle with it for as long as you're willing to put up with it. When in reality, the purpose of the diagnostic is just to diagnose: are you able to do this comfortably and quickly and correctly, or do you need more practice on it? So what they should do is just say, okay, well, I don't really know how to do this without reference material — or, oh, I covered this once five years ago but it would probably take me half an hour to figure out how to do it again — and just click "I don't know." There's no need to struggle with it; you'll get more practice on it. Don't try to fake the system out into thinking you know how to do the problem, because then you're going to get placed further ahead than you should be. I mean, that sort of thing is in good faith — you're trying to work as hard as you can, you're trying to put your best foot forward — but sometimes that's not actually what you ought to do. We probably have to handle that a bit better in our diagnostic. I mean, we put this on the screen before the diagnostic, but who reads screens? So we kind of need a trigger where, if somebody is spending too much time on a question, it pops up and says: hey, seems like you're taking a while; if you're not sure how to do this, just click "I don't know," don't grind through it. And then if enough time goes on after that, just move them forward to the next question. SPEAKER_00: Yeah, so that's good practical advice. If you're listening to this and you haven't signed up and done the diagnostic, when you do it, just remember that: you're trying to be automatic on everything. You're not trying to work it out over an hour, or look things up and then try to work it out. You're trying to have it in your brain, actually recall it, and be able to do it with relative ease. So I think that's good, straightforward advice. And just another point on that — well, actually, I'll just ask you generally: are there any other good habits that some learners apply that would be good if more people did? Like, some people know that the quiz is meant to be purely recall — you're never supposed to look anything up.
If you have to look anything up, that means you didn't really get the quiz answer — that's my understanding of it. And I imagine people do look things up, and that kind of messes with the scheduling of the tasks that result from the quiz. SPEAKER_03: Yeah, I would say that's definitely a good one. Quizzes are meant to be closed-book, because they're trying to gauge whether you need more practice on this thing or not. If you have to look up how to do it, then you need more practice — you don't actually have the level of recall we would like. I'd put that as the number two suggestion. The number one suggestion I'd give to any adult wanting to get the most out of the system, in terms of learning behavior, is: try not to rely on the examples or the solutions to previous problems if you can help it. Initially, okay, use that stuff as reference to figure out how to solve the problem. But say you go through a worked example and you're like, okay, I think I've got this. You go to a problem and you're like, oh crap, I forgot how to start. Fine, go back to the worked example and go through it. But don't go to the next problem with the worked example up in a separate tab, trying to transpose the solution technique. The point is to try to recall as much as possible, and to use the worked example almost like a spotter at the gym. You're the one lifting the weight; the spotter is not lifting the weight. If you're really struggling under the weight — if you're going to end up dropping it on yourself — then okay, yes, have the spotter help you out and finish that set. But if you just rely on the spotter for everything, you're not lifting the weight. And that's the thing that'll get you into the situation on the quiz where you go, oh, I can't remember how to do this. Now, if you can't remember how to do it on the quiz, sometimes that happens naturally — it just means you need some more practice. But if it's happening all the time, it's typically an indication that you're not really engaging in recall and retrieval practice in the lessons and reviews. I've seen that in a handful of adults, actually, where — again, in good faith — they go back to the example every time so they can work the problem very carefully, just like the example. They have the example in a separate tab every time. They don't realize that the problem is they're not recalling; they think they're just being very conscientious and disciplined by working it out the same way. Or they'll take some great notes and then use those notes: they get a review on a topic, they open up their notes to look up how to do the problem — kind of like just refreshing on it — and then apply that same technique. But they're really shooting themselves in the foot, because they're not actually practicing that retrieval. They're just having the spotter lift the weight for them. SPEAKER_00: So, a finer point on that: is the goal simply to try, and therefore you're activating that testing effect, and then if you can't get it, it's always okay to look, as long as you tried first? And what if you keep trying, and you keep having to go look at the worked example every time? What should you do then? Do you just give the wrong answer? Do you just decide you're not going to look at it anymore? Or is the goal just to try to recall it every time?
And then that's enough to get the testing effect? SPEAKER_03: Yeah, the goal is to try to recall it every time. Now, trying alone doesn't really trigger the testing effect — it's more about the successful retrieval. But the idea is that if you are actually trying, putting in effort to wean yourself off this resource and only looking at it when needed, then you're going to get yourself into a position where you are succeeding in retrieving more and more of the solution technique each time you try. So in the scenario you suggested — you're continually trying but continually having to rely on the worked example — as long as you're relying on the worked example less and less, that's good. But if you repeatedly can't figure out how to start the problem and you always have to go back to the worked example, then it strikes me that you're not really trying to remember how to do the problem. If you just grab information from the worked example, apply it to the problem, go to the next problem, and then forget that same information right after — I've seen that happen with kids before, and it always turned out to be, like, dude, you're not even trying to remember this; you need to put a better effort forward. It's kind of like somebody going to the gym who says they're working out the proper way, but somehow the amount of weight or the number of reps they're doing isn't increasing. Biologically, that quantity should be increasing — your capacity to do things should be increasing. Excluding edge cases like overtraining, if you're a beginner going to the gym and doing your basic workout, you should see improvement. And if you're not, it indicates something is going wrong with the way you're lifting the weight — likely your spotter is just lifting it for you. SPEAKER_00: All right. So, kind of a corollary of that — and this could be confused with what you talked about as a bad behavior, which is just skipping to the problem without reading the explanation — but if you've already read the explanation and done the first set of problems, and then you're on a new knowledge point: is there value to be gained by trying to solve that introductory problem without looking at the explanation particular to that knowledge point first? Because very often they share the same base technique, and so you might be able to solve it with the knowledge you already gained from the first point. SPEAKER_03: That's a good question. I think the answer is that there's a trade-off. It's a little hard to optimize, and the safe thing is typically to just work through the example before doing the problem. But if you go about it exactly the right way — kind of threading the needle — then maybe you can get a slightly better outcome. So let me explain the failure modes. Most of the time, when people try that strategy of solving the next round of problems without looking at the example beforehand, there are a number of things that can happen.
One is that you just spend way too long on the problem. You lose track of time, you get lost in thought, you look up and twenty minutes have passed, and maybe you go back to the example after that. It's not necessarily that that was unproductive for learning, but when you consider the opportunity cost — what else you could have been doing in those twenty minutes — it's high. You could have finished the rest of the lesson in that time and moved on to other stuff. Another thing — in addition to just spending too much time on a question — I've talked to some adults who do this: they'll skip the example, and they'll actually manage to solve the problem, but they end up solving it with a kind of weird solution technique that's very overfit to the specific features of that one problem. And then they realize on the next problem: my solution technique got the previous problem right, but I applied it to this problem and got it wrong — what's going on? So what ends up happening is they get pretty frustrated. If they understand how learning works, they're like, oh, I bet I overfit the solution technique; time to go back and look at the example — how is the more general solution technique that Math Academy suggests different from what I came up with? What is the shortcoming in my approach? And again, there is some positive learning to be had from that. But there are a lot of ways you can fall off the rails. For instance, you might drag this whole process out over such a long period that the additional learning from it is, again, not worth the opportunity cost. Or it might feel demotivating, and you might just stop and say, this is too hard. So I wouldn't say it's necessarily a bad thing to take a look at the next set of problems and see if you can solve them without the worked example. But you've got to be really, really careful not to fall into the trap of spending too much time on it or getting demotivated. And even if it looks obvious — oh, well, you just do blah blah blah and then you solve the problem — it's still a good idea to go back to the worked example, or just look at the solution, and ask: does my solution match what they're doing? Is there a real difference between the approaches? Because sometimes there is, and sometimes it's subtle, and it'll get you in the future. You definitely don't want to be practicing a solution technique that doesn't generalize properly, because then you start building automaticity on the wrong thing. It's like building a bad reflex. I can imagine it in sports — like a goalie who's used to diving the wrong way when the shooter shows some characteristic. That's going to be hard to train out of them.
SPEAKER_00: Yeah, you see that in combat sports a lot, where a fighter, even a professional fighter, will have a weird tick — you know, whenever he throws an uppercut, he tilts his head up, he's looking at the guy while he does it. How do you train that out of somebody? You're putting yourself in huge danger every time you do it, but he's developed the habit, and it takes so long to get rid of those bad habits. SPEAKER_03: Yeah, that's a really good concrete analogy right there. I think that's exactly it. So I would say, unless you are 100% confident you know exactly what you're doing, just read the worked example and apply that technique to the next problem. SPEAKER_00: Okay, so a couple more questions about getting the most out of the system, because I think a lot of people listening probably want to know exactly how they can make the most of their time. This is very much related: when you're doing a review of a topic you've already learned, there's that button to review the topic to brush up, and I saw people recommending on Twitter, you know, don't guess, just click that button and read it. And I thought it'd be similar to what you're saying: at least try, struggle with it a little bit so you get that effect, and then if you have to, go look. So I assume that's what you'd recommend — just so people know. SPEAKER_03: Yeah, you're exactly right. Try to solve the problem; maybe struggle with it a little bit. Again, don't spend half an hour on the problem just sitting there staring at the screen or trying out a bunch of things that don't work. But give it a solid several minutes. Don't just give up after ten seconds, go "I don't know how to do this," and look back at the reference. Give it a real attempt — several minutes. If you're making progress on your attempt, keep going a little longer. But if you get to a point where you're just banging your head against the wall — a couple of minutes have passed and you still have no idea how to do this — then okay, go look back at the reference. But only peek at it. Don't look at the full solution; just peek at the part where you're getting stuck, go back to the problem, and try to carry out the rest on your own. Again, it's like the spotter: don't let the spotter lift the whole weight for you if you can help it; have the spotter get you just over the point where you're having trouble, and do as much as you can on your own. But yeah, I would agree: don't go to the reference material at the very outset. It should be kind of a last resort. I'd also say you should almost never be guessing on problems, because if you don't know how to do the problem, the reference material is there, and everything is mapped out so that the prerequisites and the worked examples the problem corresponds to should be sufficient for you to know how to do it. Now, sometimes there's some specific aspect of the problem that just doesn't totally click for you, and if you've wrestled with it for a while — five minutes, ten minutes — and you're not getting anywhere despite using the reference material, then okay, take your best guess, time to move on; you can't spend all day on this problem. But that should be very rare.
Yeah, that kind of guessing — taking your best guess — should be rare. SPEAKER_00: That makes a lot of sense to me. And just so people know, struggling with it is not a bad thing from a general learning perspective. If you struggle with it for three minutes and then you're able to do it successfully, that's actually, I would think, a very positive sign for your future ability on that task. You'll probably be able to do it better in the future as a result of having struggled for it. So that's just from a general learning perspective. SPEAKER_03: Yeah, totally agree. That's one of the main features of spaced repetition, right? You let your memory decay to a point where it's difficult to overcome the decay and retrieve the information, but if you're able to overcome that difficulty, it really increases your retention. Desirable difficulties in general. SPEAKER_00: And also, I would imagine you don't want to be neurotic about it. If you have to look at the reference, it's fine, because there are closed-book tests — there are quizzes where there's no reference, and if you get it wrong, you get it wrong. And also, because of the way Math Academy works, eventually there will be higher-level problems that integrate that type of thing without even making mention of it. So if you don't understand it fully, eventually you won't be able to get those, and it'll reintroduce the areas you're misunderstanding through the review process, and then you'll be good to go. So I would say don't be neurotic about it; just try to get the most out of it. SPEAKER_03: Yeah, totally. The thing you're describing we typically refer to as layering, where you just layer additional knowledge onto what you've learned. You keep building on it, and that gives you implicit practice with your lower-level skills. And the more deeply ingrained those lower-level skills get, the more structural integrity there is in your knowledge base. It's kind of like coding up a project: you have to add a bunch of new features or capabilities, and a lot of times you'll start building a feature and then realize, crap, I have to refactor some stuff at the lower level to get this into place. And if you keep building enough, eventually the lower-level parts of your system are just really, really strong. It's the same way with knowledge. SPEAKER_00: Yeah, 100%. I think that's one of the main benefits of the way the dependencies work — it's so smoothly introducing and layering everything. So now you're using a skill you acquired last week in a totally natural way: instead of just solving a set of problems about that skill, you're implicitly using it, and it becomes much more natural to do something you were previously explicitly trying to learn. And in that way, like you say, there's just so much integrity — from a neural perspective, I imagine it really generalizes the connections and makes the skill more stable. I would say that's a huge benefit of the system overall. But just one more point about getting the most out of it: I think I've seen you mention on Twitter that you don't recommend people take notes.
And I know some people make flashcards — I've seen some people make Anki flashcards for definitions and things like that. Do you explicitly recommend against that, and against making notes? Am I getting that right? SPEAKER_03: Yeah, okay, so notes — yeah, I explicitly recommend against that. Now, there's a difference between taking notes that you're going to refer to in the future and use as a crutch, versus just thinking on paper and diagramming something out. Basically, the reason I say don't take notes is that it's too tempting to go back to those notes later and use them as a crutch. You want to make it kind of annoying to look up material. If you've got your notebook right beside you while you're solving a problem, and the distance between you and looking up how to do that problem is just flipping a page and looking at your perfect notes, then you're typically going to over-rely on that. It's just too tempting — especially if you put a lot of effort into your notes and you're thinking, well, am I just never going to use these? You're going to be tempted to lean on them, and that's going to shoot you in the foot, because of what we talked about earlier with the retrieval process. You've got to try to retrieve as much as possible without the reference material; reference material should be a last resort. If you really need it, you can use the brush-up on a topic in the system and get to it pretty quickly. It might be a little annoying, but that's a good thing, because you want to be incentivized not to have to do that. But of course, if you're just reading an explanation and working stuff out on paper, or you want to diagram out how concepts relate to each other — you're just thinking on scratch paper. When the notes are just augmenting your thinking process, when you're basically reading and thinking on paper and you're going to throw that paper away later — yeah, no problem with that at all. The pitfalls to avoid are: you don't want to use your notes as a crutch, and you don't want note-taking to take so much time that it slows down your learning process. Those are the main things to avoid. SPEAKER_00: Yeah, I think the time point is important because, like you mentioned, it can be demotivating, and it's really inefficient if it's taking you five minutes to get through something instead of one minute. That all plays into it. And also — this may be wrong, but my internal sense is that a lot of the value of notes applies when the specificity of the material doesn't match the level at which you're trying to learn. You're reading a 300-page book and there are maybe five pages' worth of important detail in it, so you take notes on any important or interesting stuff you find, right? But in a system like Math Academy, every knowledge point is useful in a particular kind of way, so you'd be taking notes all the time. And also, you can easily access the material in that fine-grained way again, versus it being obscured in 500 pages of a book where you've got to go, oh, I've got to open right to that page and read it again.
So to me, the benefits you get from taking notes — memory and whatever else, plus just having an easy reference — are mostly built into the system already, I would think. SPEAKER_03: Yeah, pretty much. It's intentionally built in so that if you really need it, you can go back to the reference material pretty quickly, but at the same time we don't want to make it so easy that you're tempted to look at the material every time. So I would fully agree — I think we strike a nice balance there, and you really don't need to be taking notes on paper and referring back to them. Now, about the flashcard stuff: I've also heard that from a couple of people, like, do I need to make flashcards for topics, for various derivative rules or whatever. And I think that answer is a bit more yes-and-no. You don't want to just go through making flashcards for everything you read, because we have a spaced repetition system that is going to take care of most of this review for you — ideally all of it. And you don't want to freak out if you did a lesson, the review comes several days later, and you've kind of forgotten how to do the problem. That's expected at the beginning of spaced repetition, because your memory decay curve is plummeting so fast that if we're a little bit off on when you should review the topic — or even if we pick the correct moment but you take a day off, or you had a quiz and some quiz reviews to do first, whatever — a little bit of noise at the beginning of the spaced repetition process can mean you've kind of forgotten how to do it and need to glance back at the reference to refresh. So you want to give it some time, until you're further out in the spaced repetition process — maybe several weeks, like a month. If you find that you're still having to look stuff up all the time, and it's been a month since you learned it and you've had a number of exposures, then at that point it could maybe make sense to make a flashcard. But before you make a flashcard, the first thing to ask is: are you actually engaging in retrieval practice with this thing? Is the reason you always look it up that you just default to looking it up, or that you don't try very hard at the beginning? Or is it that you made some notes on it and you're referring back to those notes all the time, using them as a crutch, and you're in this vicious cycle of forgetting — you always have to go back because you never remember, because you never try to remember? If you're totally confident you've covered all those bases, then yeah, sure, it might make sense to make a flashcard. That should not happen very often. But I can see how, in certain cases, depending on how quickly you're forgetting this stuff — maybe some trig identities or derivative rules, stuff like that — you might want to make a flashcard if it's just not sinking in as well as you'd like after many exposures.
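To make that point about the early decay curve concrete, here is a minimal sketch under a generic exponential forgetting model (the stability numbers are made up, and this is not Math Academy's actual model): early on, a small scheduling delay costs a lot of recall probability, while a well-reviewed memory barely notices the same delay.

```python
import math

def recall_probability(days_elapsed: float, stability: float) -> float:
    """Probability of recall under a simple exponential forgetting model.

    `stability` is roughly the number of days for recall to drop to ~37%.
    Generic textbook model, not Math Academy's actual algorithm.
    """
    return math.exp(-days_elapsed / stability)

# Early in spaced repetition the memory trace is fragile (small stability),
# so reviewing one day late costs a lot; later it hardly matters.
for stability, label in [(2.0, "just learned"), (30.0, "well reviewed")]:
    on_time = recall_probability(3.0, stability)       # reviewed on day 3
    one_day_late = recall_probability(4.0, stability)  # reviewed on day 4
    print(f"{label:>13}: day 3 recall {on_time:.2f}, day 4 recall {one_day_late:.2f}")

# Output:
#  just learned: day 3 recall 0.22, day 4 recall 0.14
# well reviewed: day 3 recall 0.90, day 4 recall 0.88
```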
And the flashcard question is actually something Jason and I have talked about, with an idea to integrate this sort of thing into the system. When somebody is having trouble remembering derivative rules, or trig identities, or whatever — these are ultimately just math facts. It's an extension of multiplication tables; multiplication tables are just math facts. There are ways to get kids to automatic retrieval on multiplication tables, and it involves timed retrieval practice, flashcard style. So we want to build that into the system. I actually started working on that last summer, but I got pulled onto other things. We have plans to have math-facts practice for arithmetic built into the system, and we want to extend that same approach up to trig identities and derivative rules — all these facts you just need to know — and provide that sort of timed practice on those facts as well. SPEAKER_00: Yeah, I'd like to see that integrated. Some people seem to do it for things like the names of shapes or definitions of terms — where you know how to solve the problem, but when it asks whether this is one thing or another, you still can't remember exactly what the term is. SPEAKER_03: Yeah, or the recognition-type problems. SPEAKER_02: Exactly. This is kind of tangential, but related to the analytics discussion: I'm curious how much more information you'd like to gather about the user. Say, in a science-fiction ideal world where everyone has a Neuralink and you could read and write data from their Neuralink API — what additional information would you want, beyond what you already have in the system, to make the educational experience better? SPEAKER_03: Yeah, that's a really interesting question. SPEAKER_01: Yeah. SPEAKER_03: Okay. It'd be so nice if we could know exactly what the neural connectivity of all these math topics is in students' brains. We have this knowledge graph of math topics, right, and how far along you are in the spaced repetition process, which is kind of like how solidified something is in your brain. But these are all estimates based on answering questions correctly or incorrectly — it's based on behavioral data. It'd be amazing to actually know what this looks like at a biological level. That seems pretty far off, though. When I was in college, I got really interested in computational neuroscience. I came at it from a math background, thinking, wouldn't it be amazing if I could just have a dataset of a brain — all the neural connectivity and all the properties of the neurons — that I could do some analysis on. But just having that information to begin with is the problem. Right now, the level of granularity at which we can read out brain signals, even in brain-computer interfaces, is brain activity and brainwaves. These are still very aggregate metrics; you can do some machine learning on them to match properties of your aggregate brain metrics to the different actions you want to take, or whatever. But understanding the underlying conceptual mapping of information in somebody's brain —
that seems very, very far away. But yeah, it'd be so cool to actually have a literal, physical MRI of the student's brain. Our knowledge graph is, in some sense, an approximation of that — an approximation of what your math connectivity looks like. It'd be interesting, though. If or when that sort of technology exists, I'm sure there will be a number of ethical issues to grapple with. So, yeah, I don't know. It's an interesting question. SPEAKER_02: Yeah, I was interested in BCIs at one point — brain-computer interfaces. But, as you said, the signal you get from reading them is so noisy that it's unlikely you can actually derive anything very useful from it, which is really unfortunate. But there were some things I thought would be interesting — maybe not at the level of reading brainwaves, but if you could monitor more physical signals, like where people are looking on the screen, or their facial expression, or whether they're yawning. My use case is: I want a reading app where, as soon as I'm bored — and somehow the system can detect that through physical signals or through my behavior in the app — it immediately goes to the next article and keeps me interested, almost like a TikTok-style algorithm for reading. Maybe that's already sort of possible with the number of behavioral things you're tracking, because you can kind of infer whether someone's bored from their scrolling rate, or whether they're clicking around — when I tend to click really rapidly around the answer button, it's when I'm bored, stuff like that. I don't know. SPEAKER_03: Oh, that's interesting. Yeah, I agree that the platonic ideal of a learning system responds to these emotional behaviors, or the different cognitive states of a student. That's kind of what a good coach does, too: if they're working with an athlete, they can tell the athlete is just not in a good headspace for some particular kind of exercise — maybe it's time to switch it up. As far as us incorporating that sort of stuff, I think we want to start with as much information as we can get from a student's clicks within the system: their answer behavior, their going back to worked examples, their navigation within the system. I think you can infer a lot from that, kind of like you're saying — a lot of the actionable emotions a student is experiencing, based on their clicks and navigation. But of course, it'd be really interesting to have more affective information about what's going on with a student. There's always the trade-off, though, the hidden costs: it introduces a lot of complexity, privacy issues — doing that sort of stuff with kids just opens a whole can of worms about what kind of information we're storing. We typically try to avoid sensitive information, data that people are a little uneasy about. We just don't want to get involved in that. I don't know.
Maybe someday, way, way down the road, it turns out we can seriously improve the system by measuring that stuff, and it's worth the headache of holding that sort of data. Maybe that'd be cool, way down the road. SPEAKER_00: It would be insane to sign up for an app and have it ask you to authorize access to that kind of data — the idea that that's even a possible thing would be crazy. But the thing is, like you say, it's way off, and it's not just that we need the technology to get those data; we also need the research to know what they mean and what we should do in response. You see it with professional athletes: the coaches will say, oh, we put these new activity trackers on them, or we have a new thing in the helmet to see how hard they're getting hit. And I'm thinking, okay, that's great, but what are you doing in response to it? Like, oh, on the practice field he did this many steps — what does that mean? What are you supposed to do with it? Are you improving performance based on it, or is it just another number? I think that's what's going to happen: we're going to get all this neural data eventually, and then people are going to have to ask, okay, what does it actually mean? What does it correlate with in real life? And how can we intervene — what are the possible interventions to change it and correct things? I think it's many steps away. SPEAKER_03: Yeah, it's a good point. You'd have to get some kind of insight into what's going on in somebody's head that results in you actually taking different actions than what you could already infer from behavioral data. It'd be kind of silly to measure all these metrics and come to the same decision a coach could make just by watching video of the player on the field. SPEAKER_01: Well, of course — they've got to run more. SPEAKER_03: Right, they only ran three steps in practice. Exactly. You don't need a Neuralink to tell you that. But it'll be interesting to see how that develops. SPEAKER_00: So, we were talking about the graph earlier. I'm curious: I've seen some people who haven't gone through the Foundations sequence, and they choose Mathematics for Machine Learning as their course. Is it correct that no matter what course you choose, it will fill in whatever is needed to get you to that point in the graph? SPEAKER_03: Pretty much. There is technically a limit to how far back it looks. At least for now there are limits — it's on my to-do list to refactor the diagnostic algorithms a bit so they can look back all the way. But right now, I think Math for Machine Learning looks back either to the beginning of Foundations II or maybe all the way to Foundations I; I can't remember off the top of my head. For these university-level courses, it typically looks back to early high school math. So if you could reasonably take the course and you just have a bunch of foundations that need to be filled in, you're fine. But if you don't know how to add fractions and you sign up for Multivariable Calculus, it's probably not going to look back that far. It does look back pretty far, though.
So if you even remotely think the topics in Math for Machine Learning might be appropriate for you to work on, we can probably capture all of your missing foundational knowledge in the diagnostic for that course and just have you fill it in along the way. SPEAKER_00: But it's not the case that if you finish that course, you could go back to, say, Foundations II and you'd be at 100%? SPEAKER_03: Right. Whenever you take a diagnostic for a course, what we're trying to assess is your knowledge of all the topics in that course and all of the prerequisites of those topics. The easiest way to think about this is probably to consider somebody who signs up for Calculus and ask what kind of foundations are captured there. Well, it's going to look back through Precalculus, through Algebra, to Geometry — probably stopping around Algebra I or so. So we're going to assess you on all this algebra knowledge, a bunch of geometry knowledge, trig — definitely all the things that come up in calculus. But there's also going to be some stuff we don't assess you on that might be in Precalculus, like matrices. It's pretty common for precalculus courses to include some treatment of early linear algebra — matrices, linear transformations — and that sort of stuff doesn't pop up in your standard single-variable calculus course. So if you signed up for Calculus and did that course to 100%, you'd still have some stuff from Precalculus and probably Geometry that you were never assessed on and never learned in the system. You can always go back and fill out those courses. But basically, when you sign up for a course, we're just trying to get you to learn all the topics in that course as quickly as possible. SPEAKER_00: Right. So, for example, let's say you do the diagnostic and you complete Foundations I. When you go on to Foundations II, I've seen some people start there at 0% — or at least it seems that way from their progress posts on Twitter. Why is that? Wouldn't the diagnostic — unless the courses are perfectly non-overlapping — I'd think some people just have a bit more knowledge, including something that's not covered in Foundations I but is covered in II. Is that the case? How does it work? SPEAKER_03: Describe that again — so they sign up for...? SPEAKER_00: So you do the diagnostic, right, and it says, okay, you're whatever, 70% done with Foundations I — or say 100% done with Foundations I — and you start Foundations II. It doesn't always start at 0%, right? I'd imagine the diagnostic figured out that you knew some material that's in Foundations II. Is that right or wrong? SPEAKER_03: If you just take the diagnostic for Foundations I, then no, it wouldn't assess you on anything in Foundations II — unless there's some overlap between Foundations I and Foundations II, which is possible. I can't remember if there's overlap, but sometimes we have courses with topics in common, so it's possible there's some overlap between those courses.
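To picture the look-back idea, here is a toy sketch of collecting "all the topics in the course plus their prerequisites" from a prerequisite graph, with a cap on how many levels back it goes. The topic names, edges, and depth limit are invented for illustration; the real diagnostic logic isn't described here.

```python
from collections import deque

# Toy prerequisite graph: topic -> list of prerequisite topics.
# Both the topics and the edges are made up for illustration.
PREREQS = {
    "gradient_descent": ["partial_derivatives", "vectors"],
    "partial_derivatives": ["derivative_rules"],
    "derivative_rules": ["algebraic_manipulation"],
    "vectors": ["algebraic_manipulation"],
    "algebraic_manipulation": ["fraction_arithmetic"],
    "fraction_arithmetic": [],
}

def diagnostic_scope(course_topics: set[str], max_depth: int) -> set[str]:
    """Collect everything a diagnostic would assess: the course's own topics
    plus prerequisites, following prerequisite edges backwards up to
    `max_depth` hops (the 'how far it looks back' limit)."""
    scope = set(course_topics)
    frontier = deque((t, 0) for t in course_topics)
    while frontier:
        topic, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for prereq in PREREQS.get(topic, []):
            if prereq not in scope:
                scope.add(prereq)
                frontier.append((prereq, depth + 1))
    return scope

print(diagnostic_scope({"gradient_descent"}, max_depth=2))
# Looks back two levels: includes partial_derivatives, vectors,
# derivative_rules, and algebraic_manipulation -- but not fraction_arithmetic.
```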
Or, other times, what happens is somebody takes a diagnostic for Foundations II or Foundations III, gets totally hammered by it, switches down into Foundations I, takes that diagnostic, and starts climbing back up. But they may have gotten credit for some higher-level topics based on that higher-level diagnostic — that actually happens pretty commonly, people dropping down. Because you can know some offshoots that reach into Foundations II or III even if you're missing a ton of foundational knowledge. SPEAKER_00: Okay, that was my thought. And then — this is kind of a weird question — but when you're done with whatever course you're going for, let's say you go all the way through Mathematics for Machine Learning, now you're done with Math Academy, but you still do your reviews every day, right? What's the story on that? How long do people tend to have to spend each day to maintain what they've learned? SPEAKER_03: Right, so you'll continue to do your reviews. And actually, if you finish all the courses in your sequence, you unlock this kind of Easter egg in the system where it starts feeding you topics from various other courses, including unfinished ones. So if you learn everything — you go through Math for Machine Learning, Methods of Proof, your algebra, all the courses we've got out — you're still going to continue receiving topics on, say, differential equations, abstract algebra, some probability and statistics you haven't seen yet. These are unfinished courses, but there are decently finished topics within them, and a lot of it comes from material we made for our original school program, where high schoolers were learning university courses. So we have a bunch of that content floating around that you'll still get. But yes, you'll still review what you've learned. And how much review do you need to maintain your knowledge? That's a good question. Suppose somebody finishes Math for Machine Learning, they don't really care about any of the other courses, they're waiting for Machine Learning I to come out, and they just want to keep reviewing the math so they're fresh when it arrives. You'd put yourself into test-prep mode, which keeps you in that course and just continues feeding you reviews. How much of that would you have to do? Just spitballing — I haven't actually run the simulation or calculation — but I'd guess probably around 15 minutes a day, an hour or so a week, once you've finished all the topics and you're just in maintenance mode. SPEAKER_00: And I would say the standard intuition around spaced repetition doesn't quite apply, because of the dependencies.
So if you're using knowledge from an earlier lesson later on in a bunch of places, then when it comes up for review, your probability of recall isn't 90% or whatever it would be under standard spaced repetition — it could be 99%, even if it's been a very long time since you've seen that particular lesson. So I'd imagine the traditional review intervals and intuitions don't really apply. SPEAKER_03: Yeah, it's a little different. If something is a low-level topic that comes up in a lot of high-level topics, you may never see an explicit review on that low-level topic, because we might just be giving you reviews on the high-level topics. With every single task we give you, we're trying to get the most bang for the buck — we actually compute how much it's going to elevate your entire knowledge profile. So we choose the review topic that encapsulates the most implicit review, subject to the constraint that you're getting reviews on everything that's due at that time. I actually have this thing — I forget what I call it, I think the review optimizer — which is this hardcore graph algorithm where you tell it some topics that definitely have to be covered, you tell it the knowledge states of the other topics, and you give it the encompassings between them, all the fractional encompassings. And you say: okay, give me the minimal set of topics that covers all of these due reviews 100% and also maximizes the amount of additional review you get — the amount by which it pushes off other reviews that would otherwise be coming up. So every single task is very carefully chosen. Sometimes people do get a little confused, though: they're like, wait, I did a lesson on this topic and then I never saw a review on it. That's because the reviews are happening implicitly — that's the idea. We have some ideas to make this more visible in the system, to show exactly what implicit reviews you're getting. Imagine you do a task and afterwards you see a little knowledge graph animation showing the credit trickling down: oh, you got implicit credit, you got 50% on this one, your review was scheduled for 10 days from now and now it's 15 days from now — stuff like that. That's what's going on under the hood. SPEAKER_00: I think people would love a lot more visualizations like that. Even after you do a lesson — what did this fill in on the graph? SPEAKER_02: Something that shows you a skill tree. That'd be awesome. Yeah, definitely. It's interesting what you said about the Easter egg as well — once you finish your course, you just get shown previews of what's currently being worked on. I wonder if that would be an incentive for people to rush to finish, so they can see the new machine learning course you're working on. SPEAKER_03: Yeah, that'd be interesting. Let me think. Well, okay, I guess the one caveat with the Easter egg is that we have to have the course connected to the course graph.
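A minimal sketch of the review-covering idea described above: pick a small set of tasks that fully covers every due topic, preferring tasks whose implicit (encompassed) credit spills over onto other topics. The greedy strategy, topic names, and fractional encompassings are all assumptions for illustration; the conversation doesn't specify the actual optimizer.

```python
# encompassing[task][topic] = fraction of a full review of `topic` that
# doing `task` implicitly provides. All values invented for illustration.
ENCOMPASSING = {
    "systems_of_equations": {"linear_equations": 1.0, "graphing_lines": 0.5},
    "linear_equations":     {"linear_equations": 1.0},
    "quadratic_formula":    {"linear_equations": 0.3, "square_roots": 1.0},
}

def choose_reviews(due_topics: set[str]) -> list[str]:
    """Greedily pick tasks until every due topic has full (>= 1.0) coverage,
    breaking ties by how much extra implicit credit a task adds elsewhere."""
    coverage = {topic: 0.0 for topic in due_topics}
    chosen = []
    while any(c < 1.0 for c in coverage.values()):
        def gain(task):
            # Credit toward still-uncovered due topics, plus a small bonus
            # for implicit credit on anything else the task touches.
            direct = sum(
                min(frac, 1.0 - coverage[t])
                for t, frac in ENCOMPASSING[task].items() if t in coverage
            )
            spillover = sum(
                frac for t, frac in ENCOMPASSING[task].items() if t not in coverage
            )
            return direct + 0.01 * spillover
        best = max((t for t in ENCOMPASSING if t not in chosen), key=gain)
        chosen.append(best)
        for t, frac in ENCOMPASSING[best].items():
            if t in coverage:
                coverage[t] = min(1.0, coverage[t] + frac)
    return chosen

print(choose_reviews({"linear_equations"}))
# ['systems_of_equations'] -- one higher-level task fully covers the due
# review while also giving implicit credit on graphing_lines.
```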
Right now, all the higher-level material that the original school program high schoolers were doing is connected to the course graph, so you do get those topics that we haven't officially released yet. But our machine learning course is still in development, so even if we have live topics in there, it's not necessarily connected to the course graph. I don't know that anyone will be getting machine learning topics as part of the Easter egg — it'd be more the university courses that we haven't released yet but already have content for from our school program. SPEAKER_02: Honestly, it might be a good idea to tease that on Twitter. I wonder if that would get people kind of hyped — just to hook it up somehow to the course graph. SPEAKER_03: Yeah, it's not a bad idea, I guess. SPEAKER_00: Oh, my God. SPEAKER_03: The challenge is that whenever we hook something up to the course graph, we have to be very careful about structuring the course, because the model depends on the graph making logical sense. So if you're making heavy edits to the course — the connectivity and so on — it gets a little clunkier, because everything has to be validated really carefully and it takes time for the edits to go through. I guess it'd be worth chatting with Alex, our content director, about that and seeing whether it would work for him. But then again, we're trying to get this course out so fast that maybe it won't matter — hopefully it'll just be here before anyone even knows it. SPEAKER_00: I recognize how annoying a question this is, but do you have a timeline on when it'll be ready? SPEAKER_03: Oh, that's fine. Alright — don't hold me to this, and don't hold Alex to this, but the goal we're shooting for is the end of February. It's kind of all hands on deck with that course: I'm going to be working on it basically full time until it's out, as my main project. Alex is working on it, I've got Yuri, who's like Alex's right-hand man, working on it too, plus a couple of other content writers. It's making good progress so far, and it feels like it should be achievable. But yeah, that's the goal: end of February, if everything goes according to plan. SPEAKER_00: Yeah. How much new stuff are you having to learn in order to do this? Is this knowledge you already had? I imagine you've got to brush up on at least some of it. SPEAKER_03: Yeah, for the most part — at least for classical machine learning — I've got a pretty good understanding of how it all works. I taught a lot of these topics to the students in Math Academy's school program for several years, wrote a textbook on it, so it's pretty well spun up in my head. But there are times when the degree of scaffolding that's required — being very precise, coming up with these well-scaffolded examples — takes a lot of thought, and I do have to revisit the very core nuts and bolts of how everything works sometimes.
I'm in a position where it kind of feels like going back to stuff I either had a really great understanding of before, or a pretty good understanding of, and refining it further. SPEAKER_01: Right. SPEAKER_03: Scoping it out to the level of granularity that we need — which honestly is pretty fun for me, because it's right at the edge: it's not too difficult when you have that kind of background knowledge, but the background knowledge has faded a bit, and in some places it's not as sharp as you'd like it to be, so you get to fill in those little gaps. So yeah, it's been pretty fun. But for the most part, for me, it's less about re-learning how something works and more about thinking about how to scaffold it — how to create examples that can be done by hand, that aren't going to take you 10 or 15 minutes but are just a couple-minute problem, and that we can make variations on. It's not just "state the backpropagation algorithm" — we actually have to have questions on the computation that we can vary, so we can give you the same type of problem several times to drill the skill in more. And that's kind of difficult, because it doesn't seem like that's done very much at all in machine learning resources, compared to algebra and calculus. So most of the difficulty is just figuring out how to scaffold it. But it's coming along well. SPEAKER_00: I was going to say, I imagine there are points — probably a lot of points — where it only really works as a learning experience if you go and run some code or something like that. Is that the case, or are you trying to keep it all on the platform somehow? SPEAKER_03: So for all the topics in the course, for all the lessons, it's going to be drilled down to just the math by hand, kind of like the Linear Algebra course. Initially, when Jason and I were talking about this course, we were thinking, well, doesn't there have to be code for students to write? But really, the core, fundamental building-block skills are not so much about the code as about knowing the math that's going on underneath. We've definitely got plans to make some little mini-projects that pull these mathematical building blocks together into writing some code, and it seems like there are ways we could keep that within the platform — have a little Python editor where you write some Python code that's supposed to spit out some numbers for given inputs. It's almost like the free-response parser: we've got free-response questions where you type in a symbolic expression, and we've got a parser that figures out what mathematical expression it represents and evaluates it on a bunch of random inputs. As long as it evaluates correctly, it means you've got the right expression. We can do a similar thing with code. SPEAKER_00: Yeah, that's actually very surprising — I was almost certain there was going to be code. SPEAKER_03: Yeah, not as much as you'd think.
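The random-input checking idea is simple enough to sketch: compare the student's answer (whether it came from a parsed symbolic expression or from a snippet of student code) against a reference function at random sample points. The function names, tolerance, and sampling range below are assumptions; this is a sketch of the general approach, not Math Academy's actual grader.

```python
import random

def matches_reference(student_fn, reference_fn, trials: int = 20,
                      tol: float = 1e-9) -> bool:
    """Check that two single-variable functions agree at random sample points.

    The same idea works whether `student_fn` comes from parsing a typed
    expression or from running a short piece of student code: if the outputs
    match the reference on enough random inputs, accept the answer.
    """
    for _ in range(trials):
        x = random.uniform(-10, 10)
        if abs(student_fn(x) - reference_fn(x)) > tol:
            return False
    return True

# Reference answer: the derivative of x^3 is 3x^2.
reference = lambda x: 3 * x**2

# A correct but differently written answer still passes...
print(matches_reference(lambda x: x**2 + x**2 + x**2, reference))  # True
# ...while an answer that only works at one point fails.
print(matches_reference(lambda x: 12.0, reference))                # False
```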
The thing is, a lot of the machine learning tutorials and classes that are code-heavy are often really trying to teach some framework, like TensorFlow or whatever. That's definitely good to know if you're trying to get into a machine learning position, but it's kind of different from what's actually going on underneath: you're learning how to use a framework that does things efficiently — which is good — but actually doing things by hand and knowing what's being done under the hood boils down to something similar to your typical kind of math problems. Take gradient descent, for instance. It's not like we're going to ask a student to work out 100 iterations of gradient descent by hand. You scope it down: okay, here's one iteration, here are two iterations. Or: given this setup — say it's run for a thousand iterations, this is our function, this is the point we're at right now — does this meet this particular stopping criterion, defined in terms of the number of iterations, or the slope of the loss function, or the absolute difference between your previous two estimates, or whatever. So you can give practice at various parts of the algorithm without having to execute the whole thing. And then, of course, a coding project would be: okay, go implement gradient descent for this particular function. SPEAKER_00: Right, that makes sense — a neat approach. We're coming up on two hours already. I was hoping to ask you a lot of questions about how you personally learn, because obviously there's no Math Academy course for the new things you're trying to learn, so I wanted to get into that, but I'm coming up on some time constraints. SPEAKER_02: We'd always be glad to do another podcast and have you on again. There's no limit to the questions we can ask, really — we can always come up with more. SPEAKER_03: Yeah, I'd be happy to. It's always a lot of fun talking to you guys — such interesting questions. You know, sometimes on Twitter it's just an automatic response, like I've heard the same question ten times before. With you guys there's a lot of new stuff where I have to pause and think about it, so it's really fun. And you bring up a lot of good concrete examples for the things we talk about that I'd never heard before — like that uppercut head-tilt thing. That's such an interesting one. Anyway, yeah, I'm more than happy to do another chat. SPEAKER_00: Awesome. So maybe next month we'll get into how you actually learn. And I also want to get into more productivity stuff — I know we talked about productivity a little, but it seems to me you're able to do about a million things, and there's much to be learned from that.
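As a concrete version of the scoped-down gradient descent questions described a moment ago, here is a minimal sketch — one update step plus a stopping-criterion check. The loss function, learning rate, and threshold are all made up for illustration.

```python
def loss(x: float) -> float:
    return (x - 3) ** 2        # toy loss, minimized at x = 3

def grad(x: float) -> float:
    return 2 * (x - 3)         # d/dx of (x - 3)^2

def gd_step(x: float, learning_rate: float) -> float:
    """One iteration of gradient descent: x_new = x - lr * grad(x)."""
    return x - learning_rate * grad(x)

x_prev = 0.0
x_curr = gd_step(x_prev, learning_rate=0.25)   # 0 - 0.25 * (-6) = 1.5
x_next = gd_step(x_curr, learning_rate=0.25)   # 1.5 - 0.25 * (-3) = 2.25

# A stopping criterion of the "absolute difference between the previous two
# estimates" flavor mentioned above:
print(abs(x_next - x_curr) < 0.1)   # |2.25 - 1.5| = 0.75, so False: keep going
```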
And probably how you handle Twitter as well. SPEAKER_02: You've been going a little crazy with that. SPEAKER_03: Okay, let me just say something about the posting. I used to just fire off about one post a day. I'd take something I'd written before, or maybe some thought I had in the moment, and kind of wordsmith it into this beautiful, perfect Twitter post — maybe create an image for it. I was doing that for a while, and then I was talking to Jason and he was like, dude, you should just treat this like your Twitch stream: anything you're doing, just post about it; people will find it awesome. And I was like, really? And he said, yeah, you should totally do it. So I started doing it, and it's been getting more traction. Initially I was kind of worried that if you post too much it tanks your whole visibility, because people don't like it. But it doesn't seem like that's the case. It just seems like more shots on goal — more opportunities for a post to get traction. And it doesn't even seem like people are particularly picky about whether your writing is polished or not. If you have a decently interesting idea, even if you misspell a bunch of words and don't use uppercase, you just put it out there and sometimes people pick it up. So that's been my new strategy — just stream of consciousness, anything that's halfway interesting. Though there's a limit: I know it's a slippery slope where you end up — what is it called — schizo-posting. I don't know that I want to get to that level. SPEAKER_00: You're a long way from that. Alright, that's good. SPEAKER_03: Keep it in the middle ground. SPEAKER_00: Anyway, it's very impressive how much you're able to do. I'd love to get more into the details of that, and how you learn, next time.