
It is not a "narrative", "philosophical paradigm", or him "getting high on his own supply". It is simply him sharing his thoughts about something.


He is in fact getting high on his own supply of narratives and philosophical paradigms. There are no facts in that entire blog post. It's a useless fart in the wind.


Could you please stop posting shallow and curmudgeonly dismissals? It's not what this site is for, and destroys what it is for.

If you want to make your substantive points without putdowns, that's fine, but please don't use this place to superciliously posture over others.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


Alright, that's a valid answer. Thank you.


It really doesn’t, at all. Every sentence has a clear, unequivocal meaning, and it doesn’t use any LLM tropes. Your LLM sensor is seriously faulty.


What is the goal of doing that vs using L2 loss?


To add to the existing answers - L2 losses induce a "blurring" effect when you autoregressively roll out these models. That means you not only lose important spatial features, you also truncate the extrema of the predictions - in other words, you can't forecast high-impact extreme weather with these models at moderate lead times.
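
A toy illustration of the blurring (my own made-up numbers, nothing from the paper): when the target distribution is bimodal, the L2-optimal point forecast is the conditional mean, which sits between the modes and never predicts the extreme.

    import numpy as np

    # Hypothetical setup: the truth at a grid point is either a calm state
    # (0.0) or an extreme event (5.0), each with probability 0.5.
    rng = np.random.default_rng(0)
    targets = rng.choice([0.0, 5.0], size=100_000)

    # The point forecast minimizing squared error is the conditional mean.
    candidates = np.linspace(-1.0, 6.0, 701)
    losses = [np.mean((targets - c) ** 2) for c in candidates]
    print(candidates[int(np.argmin(losses))])  # ~2.5: blurred, clips the extreme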


Yes, very good point. To me this is one of the most magical elements of this loss: how it suddenly makes the model "collapse" onto one output and the predictions become sharp.


Yeah, it's underplayed in the writeup, but the context here is important. The "sharpness" issue was a major impediment to improving the skill and utility of these models. When GDM published GenCast two years ago, there was a lot of excitement because the generative approach seemed to completely eliminate this issue. But there was a trade-off: GenCast was significantly more expensive to train and run inference with, and there wasn't an obvious way to make improvements there. Still faster than an NWP model, but the edge starts to dull.

FGN (and NVIDIA's FourCastNet-v3) show a new path forward that balances inference/training cost without sacrificing the sharpness of the outputs. And you get well-calibrated ensembles if you run them with different random seeds for their noise vectors, too!

This is a much bigger deal than people realize.


To encourage diversity among the different members of an ensemble. I think people are doing very similar things for MoE networks, but I'm not that deep into that topic.


The goal of using CRPS is to produce an ensemble that is a good probabilistic forecast without needing calibration/post processing.

[edit: "without", not "with"]
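
For concreteness, a rough sketch of the plain ensemble CRPS estimator (the "fair" variant typically used for training normalizes the spread term slightly differently). Lower is better: the first term rewards members that land close to the observation, the second rewards spread, so the members are pushed toward being a calibrated sample rather than M copies of the mean.

    import numpy as np

    def ensemble_crps(members, obs):
        # CRPS estimate for one scalar observation from M ensemble members:
        #   CRPS ~= mean_i |x_i - y| - 0.5 * mean_{i,j} |x_i - x_j|
        members = np.asarray(members, dtype=float)
        accuracy = np.mean(np.abs(members - obs))
        spread = np.mean(np.abs(members[:, None] - members[None, :]))
        return accuracy - 0.5 * spread

    # A spread-out ensemble around the truth beats M identical "mean-like"
    # forecasts with the same average error (lower CRPS is better).
    print(ensemble_crps([1.0, 1.5, 2.0, 2.5, 3.0], obs=2.0))  # 0.2
    print(ensemble_crps([2.6, 2.6, 2.6, 2.6, 2.6], obs=2.0))  # 0.6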


So, I have heard a number of people say this, and I feel like I'm the person in your conversations saying it's a coarse description that downplays the details. What I don't understand is: what specifically do we gain from thinking of it as a Markov chain?

Like, what is one insight, beyond the fact that LLMs are Markov chains, that you've derived from thinking of them that way? I'm genuinely very curious.


It depends on whether you already had experience using large Markov models for practical purposes.

Around 2009, I had developed an algorithm for building the Burrows–Wheeler transform at (what was back then) a very large scale. If you have the BWT of a text corpus, you can use it to simulate a Markov model with any context length. I tried that with a Wikipedia dump, which was amusing for a while but not interesting enough to develop further.
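
(For anyone curious: without the BWT machinery, the idea fits in a few lines; the BWT only makes the context lookups feasible for a huge corpus at arbitrary context lengths. A toy order-k character model, purely illustrative:)

    import random
    from collections import defaultdict

    def sample_markov(corpus, k=4, length=200):
        # Count which characters follow each length-k context in the corpus.
        follows = defaultdict(list)
        for i in range(len(corpus) - k):
            follows[corpus[i:i + k]].append(corpus[i + k])

        out = corpus[:k]
        while len(out) < length:
            nxt = follows.get(out[-k:])
            if not nxt:
                break
            out += random.choice(nxt)  # sample in proportion to corpus counts
        return out

    print(sample_markov("the cat sat on the mat. the dog ran after the cat. " * 40))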

Then, around 2019, I was working in genomics. We were using pangenomes based on thousands of (human) haplotypes as reference genomes. The problem was that adding more haplotypes also added rare variants and rare combinations of variants, which could be misleading and eventually started decreasing accuracy in the tasks we were interested in. The standard practice was dropping variants that were too rare (e.g. <1%) in the population. I got better results with synthetic haplotypes generated by downsampling the true haplotypes with a Markov model (using the BWT-based approach). The distribution of local haplotypes within each context window was similar to the full set of haplotypes, but the noise from rare combinations of variants was mostly gone.

Other people were doing haplotype inference with Markov models based on similarly large sets of haplotypes. If you knew, for a suitably large subset of variants, whether each variant was likely absent, heterozygous, or homozygous in the sequenced genome, you could use the model to get a good approximation of the genome.

When ChatGPT appeared, the application was surprising (even though I knew some people who had been experimenting with GPT-2 and GPT-3). But it was less surprising on a technical level, as it was close enough to what I had intuitively considered possible with large Markov models.


> Boglehead

> 140% gain on your holdings this year

Choose one.


Generally true, but NVDA and PLTR are normie stocks and can account for these returns from this year.


Boglehead is basically: pick 2-3 Vanguard ETFs and check back in 25 years.


That's my approach. I got my quarterly statement in the mail yesterday. Looks like the market must have gone up over the past three months. Not sure what to do with this information since it's not like I'm going to change anything.


But then it's not a Boglehead lol


https://www.bogleheads.org/wiki/Passively_managing_individua...

I understand where you’re coming from, but there isn’t an incongruity. Individual stock investments are a relatively small part of my overall portfolio.


> The discussion here assumes that you are not trying to beat the market, but instead passively managing individual stocks to create your own "DIY index fund."


Why would one be motivated not to use activation functions?

To my knowledge they’re a negligible portion of the total compute during training or inference and work well to provide non-linearity.

Very open to learning more.


One reason might be expressing the constructs in a different domain, e.g. homomorphically encrypted evaluators.


They are one of the reasons neural networks are a black box: we lose information about the data manifold the deeper we go in the network, making it impossible to trace the output back.

This preprint is not coming from the standpoint of optimizing inference/compute, but from trying to create models that we can interpret and control in the future.


Less information loss -> fewer params? Please correct me if I got this wrong. The intro claims:

"The dot product itself is a geometrically impoverished measure, primarily capturing alignment while conflating magnitude with direction and often obscuring more complex structural and spatial relationships [10, 11, 4, 61, 17]. Furthermore, the way current activation functions achieve non-linearity can exacerbate this issue. For instance, ReLU (f (x) = max(0, x)) maps all negative pre-activations, which can signify a spectrum of relationships from weak dissimilarity to strong anti-alignment, to a single zero output. This thresholding, while promoting sparsity, means the network treats diverse inputs as uniformly orthogonal or linearly independent for onward signal propagation. Such a coarse-graining of geometric relationships leads to a tangible loss of information regarding the degree and nature of anti-alignment or other neg- ative linear dependencies. This information loss, coupled with the inherent limitations of the dot product, highlights a fundamental challenge."


Yes, since you can learn to represent the same problem with fewer params. However, most of the architectures are optimized for the linear product, so we gotta figure out a new architecture for it.


Can you please explain the insight about reduced workweeks you are deriving from what you've linked? It is not obvious to me.


I was trying to point out that productivity has been steadily increasing without an obvious benefit to the workers (such as a pay increase), so basically workers have been producing more without getting a larger share of that increase. Therefore, keeping all other things constant, reduced workdays might be a way for workers to benefit from that increased productivity.


Okay, that makes perfect sense. Thanks for the explanation!


> we’d just have to do it

Highly economically disincentivized collective actions like “pulling the plug on AI” are among the most non-trivial of problems.

Using the word “just” here hand-waves away the crux.


Can you list some long textbooks on a single subject that are amazing?


PAIP

RE4B


Don’t forget the training data!


We are far from open training data... training data might even be incriminating.


100%, though I still feel as though open training data will eventually become a thing. It'll have to be mostly new data, synthetic data, or meticulously curated from public domain / open data.

Synthetic training data, even robotically acquired real-world "synthetic" data, can be created rapidly. It's just a matter of coordinating these efforts and building high-quality data sets.

I've made a few data sets using Unreal Engine, and I've been wanting to put various objects on turntables and go out on backpack 3D scan adventures.

Someone will have to pay for it, though.

