Hacker News | DigitalNoumena's comments

Coincidentally I recently watched these videos about this topic. Both great channels!

3D Guide - How to Build the Perfect Medieval Castle https://youtu.be/Syjg6PHYFBo?si=JceRfeOks3hOVqWu

How to Lay Siege to a Medieval Fortress (1000-1300) https://www.youtube.com/watch?v=aQ7hTNoK-OA&t=1900s


How to Lay Siege to a Medieval Fortress:

Bring food. LOTS of it.

Wait.

(However, it's apparently easier to do it the fast way. Food storage in the field was quite difficult, and supply lines are vulnerable.)


With competent defenders, "the fast way" has a high failure rate. And even if successful, a high body count.

Or, if you're referring to gunpowder/cannon tactics - those are the reason for medieval-style fortifications falling out of favor, 1500 CE-ish.


Nope, I meant pre-gunpowder.

But as always, money can breach any castle wall. Lobbing putrefied meat into wells, waiting out their stores, and so forth also work.


Well-constructed medieval fortresses imposed such a high cost on any attacker that there were rarely attempts to assault them directly. There was almost always an easier way.

However, if it was a city or sometimes a large town, the calculus changed because the payoff was much higher. Also it may not always have been possible to wait the defenders out, because of relief armies and the troops getting restless and possibly deserting. An example in the "medieval" period (let's say ~1050-1453) where walls were directly attacked would be Jerusalem.


I remember watching a video about a siege the Romans did, lasting a few years or so, where they built a whole friggin ramp up to the fortress, where otherwise they would only have had a thin path up, which would have been impossible to get siege towers and stuff up. Mind-blowing stuff.


It's quite fascinating, and defensive innovations drove a lot of interesting architecture. For example, machicolations were a response to sapping, used especially by Saracen armies (who seemed to favor sapping more than Europeans did) against the Crusaders.

Also the movement from square towers to round ones, once again as a defense against sapping, and the displacement of towers from the wall or keep in order to be able to flank assaulters attempting an escalade.

The mastery of stone fortifications made walls nearly impregnable to breaches until cannon were developed, and even then it was a dodgy proposition. For instance, at Constantinople in 1453 the Turks had huge cannon which could damage the walls, yet they were not the reason the city fell: the defenders were able to repair the walls before the cannon could reload for a second shot. The reason Constantinople fell was that a side gate was left open (either intentionally or accidentally), which allowed the Turks to pour through. There were a lot of incidents like this, such as the defenders of the Krak des Chevaliers (in modern-day Syria) being duped into believing their commander had ordered their surrender.


>Also the movement from square towers to round ones, once again as a defense for sapping

I thought the move to round towers was due to the increased effectiveness of cannon.


Round towers and concentric design started to become popular in the 13th century which was well before cannon were useful. Cannon were not all that practical until the mid-15th century, which is when you started to see the "star" fort design, and the replacement of stone with earth and brick, lower walls that were more sloped.



Yes, that looks like the video I saw and the text seems to describe it.


After a three month siege in 1136, Exeter Castle finally had to surrender when they ran out of wine. O cruel fate.


An app to estimate the risk of your job being automated and how to hedge against it professionally and financially


You can look at one of the discs here: https://sanctuaryonthemoon.com/discs


HyDE is the way to go! Just ask the model to generate a bunch of hypothetical answers to the question in different formats and do similarity on those.

Or even better, as the OP suggests, standardise the format of the chunks and generate a hypothetical answer in the same format.
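A minimal sketch of the HyDE idea, with a toy bag-of-words embedding standing in for a real embedding model (the hypothetical answers are hard-coded here; in practice they would come from an LLM prompt like "write a plausible passage answering: <question>"):

```python
import math
from collections import Counter

def embed(text):
    # Stand-in for a real embedding model: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hyde_retrieve(question, chunks, hypotheticals):
    # HyDE: instead of embedding the question itself, embed the generated
    # hypothetical answers and score each chunk by its best similarity
    # to any of them (answer-to-answer similarity beats question-to-answer).
    hyp_vecs = [embed(h) for h in hypotheticals]
    scored = [(max(cosine(embed(c), hv) for hv in hyp_vecs), c) for c in chunks]
    return max(scored)[1]

chunks = ["the capital of france is paris", "bananas are yellow"]
hyps = ["paris is the capital city of france"]
print(hyde_retrieve("what is the capital of france?", chunks, hyps))
# -> the capital of france is paris
```

The key design point is that a hypothetical answer lives in the same region of embedding space as the stored chunks, whereas a bare question often doesn't.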


It may interest you that Guy Podjarny, one of the Snyk founders, now has an AI coding company (https://www.tessl.io/about) that looks like a competitor of yours


There seems to be a deliberate, implicit value judgement about "busybodies" that would explain the negative connotations:

> Bassett hypothesizes that “in countries that have more structures of oppression or patriarchal forces, there may be a constraining of knowledge production that pushes people more toward this hyperfocus.”


The article describes the "hunters" as more focused, so I am fairly certain that statement refers to the "hunters" instead of the "busybodies".


Great post! I've been looking to get into the guts of large scale model training (I'm half-way between the design and application layer of LLMs, mostly in python, sometimes a bit of c++) and this will be a great reference to have.

PS. appreciate it if anyone can recommend more material like this


I think the issue of test set contamination is important, but it’s academic - when a model contains a good enough distilled representation of arguably all the code out there, does it really matter whether it can generalise OOD?

Realistically how many of the practical use cases where it’ll be applied will be OOD? If you can take GPT4 there then you are either a genius or working on something extremely novel so why use GPT4 in the first place?

I understand the goal is for LLMs to get there, but the majority of practical applications just don’t need that.


> when a model contains a good enough distilled representation of arguably all the code out there, does it really matter whether it can generalise OOD?

If it's contaminated by the test set being in the model's training set, then the test is no longer (assuming it ever was) a valid measure of whether the model has "a good enough distilled representation of arguably all the code out there".


How I understand it:

Entropy (according to Boltzmann) is proportional to the number of microstates that can give rise to a particular macrostate of a system. The macrostate with the highest number of possible microstates is the uniform one, where all the accessible microstates are equally likely. So if by the 2nd law entropy must increase, the system will tend towards the uniform configuration.

In other words, if a force is pushing water, the configuration with the highest number of states is the one where all the particles of water are moving, i.e. the microstates are indistinguishable: it doesn't matter how you rearrange the molecules, it looks the same because they are all moving uniformly. It would be extremely unlikely for there to be a pocket of water that is mysteriously still in the middle of the stream, no? There are not many rearrangements of the molecules that would keep that macrostate the same, so it is a low-entropy configuration, which tends not to occur.
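The counting argument can be made concrete with a toy model: take N particles split between two halves of a box, count the microstates W for each macrostate "n particles on the left", and use Boltzmann's S = k ln W (with k set to 1). The uniform 50/50 split has the most microstates, hence the highest entropy:

```python
from math import comb, log

N = 100  # particles split between two halves of a box

# Microstate count W for each macrostate "n particles on the left" is
# C(N, n); Boltzmann entropy is S = ln W (taking k = 1).
entropy = {n: log(comb(N, n)) for n in range(N + 1)}

# The uniform macrostate (50/50 split) maximises the number of
# microstates, and hence the entropy.
most_likely = max(entropy, key=entropy.get)
print(most_likely)                # -> 50
print(entropy[50] > entropy[10])  # -> True
```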

(Edit) How this connects to the "desire to dissipate energy efficiently":

Essentially, when we talk about "energy" we really mean "free energy". This is the amount of work that we can extract from a system, which is nothing but a measure of how far the system currently is from its maximum-entropy state. So dissipating energy = increasing entropy.

The mindblowing part for me is the connection that the process of extracting free energy is the same process that moves a system to its most likely state, uniformity, high entropy. So somehow, the ability to _accelerate_ an already inevitable process lets us reconfigure other systems _away_ from their most likely state!! So if we imagine the arrow of time to progress at the average rate of entropic decay, we are essentially reversing it for some systems by accelerating it for others!!!

Man...


Thanks for writing all that. The key part seems to be:

> So somehow, the ability to _accelerate_ an already inevitable process lets us reconfigure other systems _away_ from their most likely state

But this is exactly the part that doesn't make sense to me. Why would it accelerate? There's no principle I'm aware of that would prefer or cause this accelerated version. There's no such "ability".

Rather, the entropy process just happens in this "already inevitable" way you describe, without any self-organizing "resonant" structures or anything or the sort. I don't understand what would cause anything different.

It's kind of like saying boulders eventually roll downhill, so therefore hills spontaneously turn their rocky surfaces into smooth slides so the boulders can roll down faster. But that's not how it works.


Ah I see, sorry to not address that - got carried away on entropic musings. It seems that is not an assumption he’s making, but it’s what he’s proven:

> when a group of atoms is driven by an external source of energy (…) and surrounded by a heat bath (…), it will often gradually restructure itself in order to dissipate increasingly more energy.

To be honest I cannot go any deeper without reading the paper.


Indeed these are normative claims, not descriptive. The emergence of life is an extremely complex stochastic process, and likely impossible to describe. However, it does seem plausible that if a) highly dissipative systems are more likely to perdure in time given the 2nd law, b) self-replication enhances the global entropic dissipation of a category of systems (because there are more of them) and c) the process of developing such a system is more easily done by replication than by the same evolutionary pathway that led to the appearance of said system in the first place, then given enough time a self-replicating system MUST appear. I would like to see a formalisation of c) though, my formulation is quite tenuous.

I don't see why it would have to lead to a non-Darwinian explanation of evolution. As was said in the parent comment, survivability might just be a high-level manifestation of the entropic drive of self-replicating systems. It's remarkable that this is a claim that can be falsified, whereas Darwinian evolution, well...


> then given enough time a self-replicating system MUST appear.

The cause "given enough time" seems to leave the questions of how probable it is, and how widespread a phenomenon it would be, wide open.

> I don't see why it would have to lead to a non-Darwinian explanation of evolution.

To be clear, I don't think it does, and I don't read galenmarchetti as going that far, either.

> It's remarkable that this [survivability might just be a high-level manifestation of entropic drive of self-replicating systems] is a claim that can be falsified.

Can it be falsified? It seems a rather broad claim, and it is unclear to me that the falsification of England's theory would rule out some other theory succeeding.


Agree. I cannot comment on how probable or how widespread, but only that it is possible, and in fact it is arguably a certainty based on the premises above. Now if entropy was the only driver then self-replicating systems would be the norm. But alas, the Fermi paradox. I think we are asking too much.

Could it be falsified? I think if you could show that natural selection led to the survival of less dissipative life forms then you could claim falsification.


I take your point about falsification, though it seems to depend on accepting that the Darwinian theory of evolution via natural selection is itself sufficiently falsifiable to present the 'threat' of falsification to the premise in question.


When you have a vast near infinite canvas, it's just a numbers game.

