
The saying comes from a slide by Noam Shazeer (see: https://www.youtube.com/watch?v=HgGyWS40g-g&ab_channel=Tenso...). It just means the current goal should be to have models with 1 trillion parameters.


It's the "front" of a flashcard.


Completely tangential, but it's common for a married woman to take the surname of her spouse's family and drop her maiden name...


I find that Lex often side-steps technical depth and wanders into the philosophical side of things. He has the opportunity to ask probing questions, but he simply scratches the surface with nearly every guest. It's not entirely his fault, since he's under time constraints, but I'd hope for more intellectual stimulation. That said, I do appreciate his efforts, and he seems like a genuinely nice fella.


Yeah, I have an issue with that as well. He gets some great guests on the show and spends most of the time pushing a philosophical debate about AGI and futurism instead of letting the guests talk about their areas of expertise. I'd even argue that it's dangerous, because it might give the casual listener the impression that ML/AI is much further ahead than it really is if all of these top minds are discussing AGI.

I'd much rather hear LeCun, Goodfellow, Schmidhuber, and Bengio talk about what they're currently working on and where they think the field will go in the next year or two instead of their wild guesses about AGI. I guess the futurism crowd is a much larger audience, though.


Well, consider that this podcast series spun out of the AGI class that Lex ran. As such, it's hardly surprising that he puts a touch more focus on the AGI side of things. And for me, I like that. I can find videos of LeCun, Goodfellow, Schmidhuber, and Bengio talking about the low-level technical details of their work. I actually appreciate hearing them talk about AGI, as my personal interests heavily involve thinking about the connection(s) between the currently trendy AI stuff (DL, DRL, GANs, etc.) and what might eventually become AGI.


Lex here. This is a point I think about a lot. I hear conflicting advice from brilliant folks I really respect. Some say "go deep on the philosophy" and others "go deep on the technical details of the person's expertise." The latter is something that surprised me, and something I'll definitely do more of in the coming months. In general, one of the things the internet pleasantly surprised me with is that people like depth (even those outside the field). I obviously love the details, especially in ML, CS, math, physics, and psych.

Thanks for the kind words. I work hard on this thing, and hopefully will improve with time.


Podcasts aren't great for super technical depth IMO, given the lack of graphics, much less equations. But also IMO, hearing about some of the technical details, as opposed to what I might hear on NPR, is useful.

It's a difficult tradeoff, I know, whether for a podcast or even for a lecture topic. I went to a couple of your sessions this IAP, and I'll admit I found one fantastic (in part because it was directly relevant to some things I'm working on and talking about around AI privacy) and one not so much, because I'm more focused on practical applications than the mathematical underpinnings.


I don’t know. He could ask Jeremy Howard about the architecture of fast.ai, and we’d get some interesting answer that becomes dated and irrelevant very quickly. But when he asks what Jeremy’s favorite programming language is and Jeremy comes back with Microsoft flippin’ Access, you get a view into Jeremy’s commitment to making software accessible to the masses. Or when Elon pauses, like Elon does, and asks the AGI what’s outside the simulation, which simulation is he talking about? Is he asking the AI to describe its God or ours?

I don’t know, I really like these forays into the philosophical.


Thank you. Exactly! I ask these questions in the hope of inspiring rare gems of brilliance. Sometimes those come from deep technical questions and sometimes from silly philosophical questions. Both have potential. I often fail to strike the right balance, but hopefully less and less over time.

