Hacker News | mcpherrinm's comments

Yeah, Chrome only partly supports revocation (I'm not sure exactly what the criteria are, but our test sites don't match them).

Squash merge is the only reasonable way to use GitHub:

If you update a PR with review feedback, you shouldn’t rewrite existing commits, because GitHub’s tools for showing what has changed since your last review assume you are pushing new commits.

But then you don’t want those multiple commits addressing PR feedback to merge as they’re noise.

So sure, there are workflows with Git that don’t need squashing. But they’re incompatible with GitHub, which is at least where I keep my code today.

Is it perfect? No. But neither is git, and I live in the world I am given.


Yes, I think the people who are anti-squash-merge are those who don't work on GitHub and use a patch-based system or something different. If you're sending a patch for Linux, it makes sense that you want to send one complete, well-described patch. But GitHub's tooling is built around the squash merge. It works well, and I don't know anyone in real life who has issues with it.

And to counter some specific points:

* In a GitHub PR, you write the main commit message and description once per PR, then tack on as many commits as you want, and everyone knows they're all just pieces of work toward the main goal of the eventually squashed commit

* Forcing a cleanup every time you make a new commit is not only annoying extra work, it also overwrites history that might be important for the review of that PR (but not for what ends up in the main branch).

* When follow up is requested, you can just tack on new commits, and reviewers can easily see what new code was added since their last review. If you had to force overwrite your whole commit chain for the PR, this becomes very annoying and not useful to reviewers.

* In the end, squash merge means you clean up things once, instead of potentially many times


> If you had to force overwrite your whole commit chain for the PR, this becomes very annoying and not useful to reviewers.

I'd say that optimizing for reviewers who don't know "git range-diff" exists has even less merit than optimizing for contributors who don't know how to edit their commit graph.

I'm regularly using GitHub, GitLab, Gerrit, Phabricator and mailing lists and while some make it more pleasant than others, none of them really put any actual obstacles on your path when manipulating the commit graph mid-review.
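For instance, `git range-diff` compares the before and after of a rewritten series. A self-contained sketch in a throwaway repo (all branch, tag, and file names here are illustrative; `git init -b` needs Git ≥ 2.28):

```shell
set -e
# Throwaway repo: create a commit series, rewrite it, then compare the two versions
git init -q -b main demo && cd demo
git config user.email you@example.com && git config user.name "You"
git commit -q --allow-empty -m "base"
git checkout -q -b feature
echo one > file.txt && git add file.txt && git commit -q -m "add file"
git tag old-tip                      # remember the tip before the rewrite
echo two > file.txt && git add file.txt
git commit -q --amend -m "add file (reworked after review)"
git tag new-tip
# Commit-by-commit comparison of the old and new series
git range-diff main old-tip new-tip
```

In a real review you'd typically compare the pre- and post-force-push tips of the PR branch instead of local tags.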


Forcing a single commit per PR is the issue, IMO. It's a lazy solution. Rebase locally into sensible commits that work independently and push with --force-with-lease. Reviewers can reset to the remote if needed.

If your goal here is to have linear history, then just use a merge commit when merging the PR to main and always use `git log --first-parent`. That will only show commits directly on main, and gives you a clean, linear history.

If you want to dig down into the subcommits from a merge, then you still can. This is useful if you are going back and bisecting to find a bug, as those individual commits may hold value.

You can also cherry pick or rollback the single merge commit, as it holds everything under it as a single unit.

This avoids changing history, and importantly, allows stacked PRs to exist cleanly.
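A minimal sketch of that workflow in a throwaway repo (branch, file, and message names are made up; `git init -b` needs Git ≥ 2.28):

```shell
set -e
# Throwaway repo demonstrating --no-ff merges plus a --first-parent view
git init -q -b main demo && cd demo
git config user.email you@example.com && git config user.name "You"
git commit -q --allow-empty -m "initial"
git checkout -q -b feature
echo a > a.txt && git add a.txt && git commit -q -m "wip: part 1"
echo b > b.txt && git add b.txt && git commit -q -m "wip: part 2"
git checkout -q main
git merge -q --no-ff feature -m "Merge feature: add the thing"
# Linear history: only the commits made directly on main
git log --oneline --first-parent
# The sub-commits remain reachable for digging down or bisecting
git log --oneline
# The whole PR can also be undone as one unit
git revert --no-edit -m 1 HEAD
```

The `-m 1` tells revert which parent of the merge to treat as the mainline, so the entire merged branch is backed out at once.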


IMO, git bisect is one of the important reasons to always squash-merge pull requests: the unit of review is the pull request.

I think this is all GitHub's fault, in the end, but until we can get GitHub to change, I'll keep using squash merges.


No.

The cases where bisect fails you are, basically, ones where it lands on a merge that does too much: you then have to manually disentangle that merge to find out exactly what interaction caused the regression. But this is on the rarer side, because it's rare for an interaction between changes to cause a regression; it's more common that a single change did, and that change will be in a non-merge commit.

The squash-merge workflow means every single commit is a merge that does too much. Bisect can't narrow anything down for you anymore, so you have to get lucky about how much the merge did, unaided by any of the history that you deleted.


git bisect --first-parent
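With `--first-parent` (Git ≥ 2.29), bisect only tests commits on the mainline, treating each merged branch as one unit. A sketch in a throwaway repo (all names made up; `git init -b` needs Git ≥ 2.28):

```shell
set -e
# Three merges onto main; the second one introduces a "bug"
git init -q -b main demo && cd demo
git config user.email you@example.com && git config user.name "You"
git commit -q --allow-empty -m "initial"
git tag good
for i in 1 2 3; do
  git checkout -q -b "feature$i" main
  echo "change $i" > "f$i.txt"
  if [ "$i" = 2 ]; then echo broken > bug.txt; fi   # the regression
  git add . && git commit -q -m "work for feature $i"
  git checkout -q main
  git merge -q --no-ff "feature$i" -m "Merge feature$i"
done
# Walk only the first-parent chain (the merges on main)
git bisect start --first-parent HEAD good
git bisect run sh -c 'test ! -f bug.txt'
git bisect reset
```

Bisect reports the merge of feature2 as the first bad commit; from there you can dig into that branch's individual commits if you kept them.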

> But then you don’t want those multiple commits addressing PR feedback to merge as they’re noise.

They're not noise, they tell future maintainers why something is the way it is. If it's done in an unusual way they can see in the blame that a couple of lines were changed separately from the rest and immediately get an explanation by checking that commit.


Yes, it's just a number referenced in one of a few databases.

> The 15-digit pet microchip is the international standard (see ISO 11784:1996 and ISO 11785:1996)

https://www.aaha.org/for-veterinary-professionals/microchip-...

https://en.wikipedia.org/wiki/ISO_11784_and_ISO_11785


>the international standard

Except the United States, because of course.

ISO is 134 kHz; the US uses both 125 and 128 kHz.


lol - the long tail of international standards dissent in the US


It does mean Unix timestamps. The blog post doesn’t have the full details.

You can read the RFC draft at https://datatracker.ietf.org/doc/html/draft-ietf-acme-dns-pe...

It says: "CAs MUST properly parse and interpret the integer timestamp value as a UNIX timestamp (the number of seconds since 1970-01-01T00:00:00Z ignoring leap seconds) and apply the expiration correctly."
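For example, interpreting a UNIX timestamp from the command line (GNU `date` syntax; BSD/macOS would use `date -r` instead):

```shell
# Seconds since 1970-01-01T00:00:00Z, rendered as UTC
date -u -d @1700000000 +%Y-%m-%dT%H:%M:%SZ
# → 2023-11-14T22:13:20Z
```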


Note that we only do best-effort submission of final certs, so it's not actually guaranteed that they end up being logged.


Two current mitigations and one future:

DNSSEC prevents any modification of records, but isn’t widely deployed.

We query authoritative nameservers directly from at least four places, over a diverse set of network connections, from multiple parts of the world. This (called MPIC) makes interception more difficult.

We are also working on DNS over secure transports to authoritative nameservers, for cases where DNSSEC isn’t or won’t be deployed.


Ah, that makes sense. I was wondering why I hadn't heard of cases of successful attacks like this. Thanks for the info!


This wasn’t the first version of the ballot, so there was substantial work to get consensus on a ballot before the vote.

CAs were already doing something like this (CNAME to a dns server controlled by the CA), so there was interest from everyone involved to standardize and decide on what the rules should be.


We (Let’s Encrypt) also agree 10 days seems too long, so we are migrating to 7 hours, aligning with the restrictions on CAA records.


Yes, you can limit both challenge types and account URIs in CAA records.

To revoke the record, delete it from DNS. Let’s Encrypt queries authoritative nameservers with caches capped at 1 minute. Authorizations that have succeeded will soon be capped at 7 hours, though that’s independent of this challenge.
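As a sketch, a CAA record restricting issuance using the RFC 8657 parameters (the account ID here is made up):

```
example.com.  IN  CAA 0 issue "letsencrypt.org; validationmethods=dns-01; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/123456"
```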


There is no username in ACME besides the account URI, so the UUID you're suggesting isn't needed. The account URIs themselves just contain a number (a database primary key).

If you’re worried about correlating between domains, then yes just make multiple accounts.

There is an email field in ACME account registration, but we don't persist it since we dropped sending expiry emails.


It’s still a valid point IMHO - why not just use the public key directly? It seems like the account URI just adds problems instead of resolving any.


It has these primary advantages:

1. It matches what the CAA accounturi field has

2. It's consistent across an account, making it easier to set up new domains without needing to make any API calls

3. It doesn't pin a user's key, so they can rotate it without needing to update DNS records - which this method assumes is nontrivial; otherwise you'd use the classic DNS validation method


Interesting.

I didn't realize the email field wasn't persisted. I assumed it could be used in some type of account recovery scenario.

