adontz's comments | Hacker News


Same in Georgia


In particular, the following are not accessible: ajax.googleapis.com, fonts.gstatic.com, gmail.com, google.com, www.googletagmanager.com, www.youtube.com.


certbot has a plugin for nginx, so I'm not sure why people think it was hard to use Let's Encrypt with nginx.


Maybe it's better these days, but even as an experienced systems administrator, I found certbot _incredibly_ annoying to use in practice. They tried to make it easy and general-purpose for beginners to web hosting, but they did it with a lot of magic that does Weird Stuff to your host and server configuration. It probably works great if you're in an environment where you just install things via tarball, edit your config files with Nano, and then rarely ever touch the whole setup again.

But if you're someone who needs tight control over the host configuration (managed via Ansible, etc) because you need to comply with security standards, or have the whole setup reproducible for disaster recovery, etc, then solutions like acme.sh or LEGO are far smaller, just as easy to configure, and in general will not surprise you.


Certbot is a giant swiss army chainsaw that can do everything middlingly well, if you don't mind vibecoding your encryption infrastructure. But a clean solution it usually isn't.

(That said, I'm not too thrilled by this implementation. How are renewals and revocations handled, and how can the processes be debugged? I hope the docs get updated soon.)


Certbot always worked fine for me. It autodetects just about everything and takes care of just about everything, unless you manually instruct it what to do (e.g. re-use a specific CSR), and then it does what you tell it to do.

It's not exactly an Ansible/Kubernetes-ready solution, but if you use those tools you already know a tool that solves your problem anyway.


From the seeming consensus I was dreading setting Let's Encrypt up on nginx, until I did it, and it was and has been... completely straightforward and painless.

Maybe if you step off the happy path it gets hairy, but I found the default certbot flow to be easy.


From a quick look it seems like a command you use to reconfigure nginx? And that's separate from auto-renewing the cert, right?

Maybe not hard, but Caddy seems like even less to think about.


I guess I should compare to this new Nginx feature rather than Caddy. It seems like the benefit of this feature is that you don't have a tool to run, you have a config to put into place. So it's easier to deploy again if you move servers, and you don't have to think about making sure certbot is doing renewals.


Certbot is a utility that can only be installed via snap. That crap won’t make it to our servers, and many other people view it the same way I do.

So this change is most welcome.


That doesn't sound right to me. It's been in Debian and Ubuntu for a while:

* https://packages.debian.org/bullseye/certbot

* https://packages.ubuntu.com/jammy/certbot


Last I was concerned with, this was the situation:

https://github.com/certbot/certbot/issues/8345#issuecomment-...

That’s been three years though. The EFF/Certbot team has lost so much goodwill with me over that, I won’t go back.


It's a Python package you can install with pip; I've never installed it with Snap.


An absolute nightmare to get this to work inside Docker Compose, dude. Nobody has documented a decent working solution for this yet. There are too many quirks, and third parties like nginx-proxy-manager or nginx-proxy/nginx-proxy on GitHub make it even more terrible.


What people often don't realize is that in a big business system a user may have no permission to the raw data of some table, but may have permission to a report which includes aggregated data from the same table, so report permissions cannot be deduced from base CRUD permissions.

If such a SIAAS:

    - Checks that query is SELECT query (can be tricky with CTE, requires proper SQL parser)
    - Allows editing said query by superuser only
    - Can be parametrized, including implicit $current_user_id$ parameter
    - Has its own permissions and users can run the query if they have permissions
then it's safe enough. I've seen and applied such "Edit raw SQL in an HTML form" designs many times. It's super flexible, especially combined with some CSV-to-HTML, CSV-to-PDF, or CSV-to-XLS rendering engine.


> - Checks that query is SELECT query (can be tricky with CTE, requires proper SQL parser)

Not only is this difficult parsing-wise, there's also no reason to assume that a select query is read-only even when no CTE or subqueries are involved. Function calls in the select clause can also write data.

> - Has its own permissions and users can run the query if they have permissions

This is the important one. If the role the query runs as doesn't have write permissions on any table, then the user can't write data, period.

Note that this is often not as easy to implement as it seems. For example, in PostgreSQL, neither set role nor set session authorization actually prevent the user from doing malicious things, because the user can just reset role or reset session authorization in the query. For PostgreSQL to properly respect a role's permissions, the SIAAS needs to actually connect to the database as that role.

Common GUC-based row level security approaches are also incompatible with this idea.


I'm not sure what database platform they used, but in SQL Server, functions cannot have side-effects.

https://learn.microsoft.com/en-us/sql/relational-databases/u...


Dear SQL Server user, welcome to the world of SQL/PSM.

https://en.wikipedia.org/wiki/SQL/PSM

Within this Ada-esque world, packages, procedures, and functions may initiate DML.

Assuming these objects are in the default "definer rights" context, the DML runs with the full privilege of the owner of the code (this can be adjusted to "invoker rights" with a pragma).

Perhaps this is why Microsoft ignores it (as Sybase did before it).


I am not an expert on SQL/PSM, but I have worked in an Oracle shop before, and used PL/SQL extensively. In SQL Server, the equivalent is T-SQL. T-SQL procedures can do pretty much anything (assuming it is executed by a user with sufficient privileges), including creating and altering tables, creating and executing other procedures, running dynamic sql, as well as ordinary CRUD style operations. The "no side effect" limitation applies specifically to SQL functions.


In Postgres, the main difference between functions and procedures is that procedures can do transaction management (begin/rollback/commit), whereas functions can't. Other than that, functions are not limited in any way, and can be written in SQL, Postgres's PL/SQL knockoff PL/pgSQL, any plugin language such as Java, or even be called via the C ABI. Obviously, Postgres can do nothing about side effects here, and doesn't attempt to.

(PS: Postgres does have a concept of "trusted" languages. Trusted languages are expected to run programs only with the database permissions of the calling context, and prevent all other IO. However, Postgres does nothing to ensure that they do; that's the language plugin's job. Also, those functions can still perform DML and DDL as long as the calling context has the permissions to do so.)

By the way, you can do the same in SQL Server via what Microsoft ambiguously calls an Assembly. Via Assemblies, SQL Server can load procedures, aggregates and, yes, functions, from a .NET DLL file.


It appears that there are efforts to bring full PSM to Postgres, as I see with PL/pgPSM.

https://postgres.cz/wiki/SQL/PSM_Manual


Transact-SQL, as Sybase originally named it, is not standardized.

As far as I know, the only implementation outside of Sybase and Microsoft is Amazon's Babelfish.

https://aws.amazon.com/rds/aurora/babelfish/

It goes without saying that SQL/PSM is far more pervasive on many more platforms, precisely because it is an ANSI standard.


In Postgres they absolutely can. They are all just happening inside the same transaction scope unlike stored procedures.


Speaking of Postgres, you don't even need a function: you can use the RETURNING clause of a data-modifying statement to provide a data source for the select. In Postgres the modifying statement has to live in a CTE:

    with deleted as (
        delete
        from users
        returning id
    )
    select * from deleted


Hell, your user can have no write access at all, but the function or procedure can use SECURITY DEFINER, and the code inside it will run with the permissions of the function owner rather than the calling user, allowing writes to happen.

Trusting a select to be read only is naive.


Most applications backed by kdb+ do just this. It comes with its own parser, and you can query tables using something like an AST.

For example the user might ask for data with the constraint

  where TradingDesk=`Eq, AvgPx>500.0
which kdb+ parses into

  ((=;`TradingDesk;(),`Eq);(>;`AvgPx;500.0))
As a dev on the system I can then have a function which takes in this constraint and a list of clients that I want to restrict the result to. That list of clients could come from another function related to the entitlements of the user who made the request:

  applyClientRestriction:{[constraint;clients] constraint,enlist(in;`Client;enlist clients)}
Which results in an extension of the constraint like this for two clients A and B:

  q)applyClientRestriction[((=;`TradingDesk;(),`Eq);(>;`AvgPx;500.0));`ClientA`ClientB]
  ((=;`TradingDesk;enlist`Eq);(>;`AvgPx;500.0);(in;`Client;enlist`ClientA`ClientB))
Then that gets passed to the function which executes the query on the table (kdb+ supports querying tables in a functional manner as well as with a structured query language) and the result has the restrictions applied.

It's really nice because, once parsed, it's list processing like in a lisp and not string processing which is a pain.


So you build a view on top of the tables, allowing users access to the aggregating views, but not to the underlying tables?


I've successfully replaced django.contrib.auth multiple times. It is not easy, but it is not too hard either. Honestly, everything else they do could be a regular Django app. Looks to me like forking a big project became a marketing move rather than a technical necessity.


100% agree. Let's hope they maintain compatibility so stuff developed for Plain works with Django.


In the FAQ they said that extensions would not be compatible


Thank you. I really needed that in my life today.


Location: Georgia

Remote: Yes

Willing to Relocate: Yes

Technologies: .NET, Ansible, AWS (DynamoDB, EC2, ElastiCache, Lambda, Organizations, RDS, etc.), Azure, C, C++ (Qt), C# (WPF), Jira, Go, Linux, Python (Django), TailwindCSS, Terraform, reverse engineering formats and protocols, cryptocurrencies

LinkedIn: https://www.linkedin.com/in/adontz/

Email: [email protected]

I am an Engineer with experience in backend software development, desktop software development, embedded development, electronics, and infrastructure management. I like to know things.

In recent years I was head of the DevOps department in a bank and CTO at an early-stage startup. I love open source.


A bit of first-hand witness context:

The Georgian Dream party rigged the elections, stole at least 20% of the votes, securing a constitutional majority.

During the election campaign, the Georgian Dream party promised its voters EU membership.

The party approved the composition of the parliament and government with a gross violation of the law.

The party presented an average football player without a higher education as a candidate for the country's president. This same person could not become the president of the football federation due to the lack of a higher education.

The self-appointed prime minister announced that the question of EU membership will not be considered until 2028.

Unorganized protests began simultaneously in five cities. Police dispersed people with a mixture of cold water and pepper spray. Dozens of protesters were hospitalized with chemical burns to their skin.


Honestly, this is so much worse than "catch". It's what a "catch" would look like in "C".


It might look worse than catch, but it's much more predictable and less goto-y.


goto was only bad when used to save code and jump indiscriminately. To handle errors is no problem at all.


yes, yes, yes! See the Linux kernel for plenty of such good and readable uses of goto, considered useful: "on error, jump there in the cleanup sequence ..."


..as long as you don't make mistakes. I fixed enough goto bugs while working through Coverity issues in Xorg to see the downsides of this easy way of error handling.


We're comparing with Go here, not with a language with proper error handling.


If "catch" is goto-y (and it kinda is), then so is "defer".


The biggest syntactic difference between try-catch and error values, IMO, is that the former lets you handle a specific type of error from an unspecified place, while the latter lets you handle an unspecified type of error from a specific place. So type checking is more cumbersome with error values, whereas enclosing every individual source of exceptions in its own try-catch block is more cumbersome than error values. You usually don't do that, but you usually don't type-check error values either.


This is a good example of "stringly typed" software. Golang's designers did not want exceptions (they still have them via panic/recover), but untyped errors are evil. On the other hand, how would one process typed errors without pattern matching? "catch" in most languages is [rudimentary] pattern matching.

https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...


Go has typed errors, it just didn't use it in this case.


In principle. In practice, most Go code, and even significant parts of the Go standard library, return arbitrary error strings. And error returning functions never return anything more specific than `error` (you could count the exceptions in the top 20 Go codebases on your fingers, most likely).

Returning non-specific errors is virtually encouraged by the standard library (if you return an error struct, you run into major issues with the ubiquitous `if err != nil` "error handling" logic). You have both errors.New() and fmt.Errorf() for returning stringly-typed errors. errors.Is and errors.As only work easily if you return error constants, not error types (they can support error types, but then you have to do more work to manually implement Is() and As() in your custom error type), so you can't easily have both a specific error and extra information attached to it.

For the example in the OP, you have to do a lot of extra work to return an error that can be checked without string comparisons, but that also tells you what the actual limit was. So much work that this was only introduced in Go 1.19, despite MaxBytesReader existing since Go 1.0. Before that, it simply returned errors.New("http: request body too large") [0].

And this is true throughout the standard library. Despite all of their talk about the importance of handling errors, Go's standard library was full of stringly-typed errors for most of its lifetime, and while it's getting better, it's still a common occurrence. And even when they were at least using sentinel errors, they rarely included any kind of machine-readable context you could use for taking a decision based on the error value.

[0] https://cs.opensource.google/go/go/+/refs/tags/go1:src/pkg/n...


You do not have to do more work to use errors.Is or errors.As. They work out of the box in most cases just fine. For example:

    package example

    import (
        "errors"
        "fmt"
    )

    var ErrValue = errors.New("stringly")

    type ErrType struct {
        Code    int
        Message string
    }

    func (e ErrType) Error() string {
        return fmt.Sprintf("%s (%d)", e.Message, e.Code)
    }
You can now use errors.Is with a target of ErrValue and errors.As with a target of *ErrType. No extra methods are needed.

However, you can't compare ErrValue to another errors.New("stringly") by design (under the hood, errors.New returns a pointer, and errors.Is uses simple equality). If you want pure value semantics, use your own type instead.

There are Is and As interfaces that you can implement, but you rarely need to implement them. You can use the type system (subtyping, value vs. pointer method receivers) to control comparability in most cases instead. The only time to break out custom implementations of Is or As is when you want semantic equality to differ from ==, such as making two ErrType values match if just their Code fields match.

The one special case that the average developer should be aware of is unwrapping the cause of custom errors. If you do your own error wrapping (which is itself rarely necessary, thanks to the %w specifier on fmt.Errorf), then you need to provide an Unwrap method (returning either an error or a slice of errors).


Your example is half right, I had misread the documentation of errors.As [0].

errors.As does work as you describe, but errors.Is doesn't: it only compares the error argument for equality, unless the target implements Is() itself to do something different. So `errors.Is(ErrType{Code: 1, Message: "Good"}, ErrType{})` will return false. But indeed errors.As will work for this case and allow you to check whether an error is an instance of ErrType.

[0] https://play.golang.com/p/qXj3SMiBE2K


As I said, errors.Is works with ErrValue and errors.As works with ErrType. I guess the word "Is" is doing too much work here, because I wouldn't expect ErrType{Code:1} and ErrType{Code:0} to match under errors.Is, though ErrType{Code:1} and ErrType{Code:1} would, but yes you could implement an Is method to override that default behavior.


If you want errors to behave more like value types, you can also implement `Is`. For example, you could have your `ErrType`'s `Is` implementation return true if the other error `As` an `ErrType` also has the same code.


If you have a value type, you don't need to implement Is. It's just that errors.New (and fmt.Errorf) doesn't return a value type, so such errors only match under errors.Is if they have the same identity, hence why you typically see them defined with package-level sentinel variables.


Probably worth noting that errors.As uses assignability to match errors, while errors.Is is what uses simple equality. Either way, both work well without custom implementations in the usual cases.


Go errors cannot be string-typed, since they need to implement the error interface. The reason testing error types sometimes won't work is that the error types themselves may be private to the package where they are defined or that the error is just a generic error created by errors.New().

In this case the Error has an easy-to-check public type (*MaxBytesError) and the documentation clearly indicates that. But that has not always been the case. The original sin is that the API returned a generic error and the only way to test that error was to use a string comparison.

This is important context to have when you need to make balanced decisions about Hyrum's law. As some commentators already mentioned, you should be wary of taking the extreme version of the law, which suggests that every single observable behavior of the API becomes part of the API itself and needs to be preserved. If you follow this extreme version, every error or exception message in every language must be left unchanged forever. But most client code doesn't just go around happily comparing exception messages to strings if there is another method to detect the exception.


They didn't have them when they implemented this code.

Back then, error was a glorified string. Then smarter errors started appearing, mostly due to popular third-party packages, and then the logic of those popular packages was more or less* put back into Go.

* except for stacktraces in native errors. I understand that they are not there for speed reasons but dang it would be nice to have them sometimes


Nobody teaches people to use them. There is no analog of the "catch the most specific exception" culture of other languages.


It has typed errors, except every function that returns an error returns the 'error' interface, which gives you no information on the set of errors you might have.

In other statically typed languages, you can do things like 'match err' and have the compiler tell you if you handled all the variants. In java you can `try { x } catch (SomeTypedException)` and have the compiler tell you if you missed any checked exceptions.

In go, you have to read the recursive call stack of the entire function you called to know if a certain error type is returned.

Can 'pgx.Connect' return an `io.EOF` error? Can it return a "tls: unknown certificate authority" (unexported string only error)?

The only way to know is to recursively read every line of code `pgx.Connect` calls and take note of every returned error.

In other languages, it's part of the type-signature.

Go doesn't have _useful_ typed errors since idiomatically they're type-erased into 'error' the second they're returned up from any method.


You actually should never return a specific error pointer type, because you can eventually break nil checks. I caused a production outage this way: an interface value is a tuple of type and pointer, so the literal nil becomes [nil, nil] when passed to a comparator, whereas your nil struct pointer return value becomes [*Type, nil], which compares unequal to nil.


It's really hard to reconcile behavior like this with people's seemingly unshakeable love for golang's error handling.


People who rave about Go's error handling, in my experience, are people who haven't used rust or haskell, and instead have experience with javascript, python, and/or C.

https://paulgraham.com/avg.html


Exceptions in Python and C are the same. The idea with these is, either you know exactly what error to expect to handle and recover it, or you just treat it as a general error and retry, drop the result, propagate the error up, or log and abort. None of those require understanding the error.

Should an unexpected error propagate from deep down in your call stack to your current call site, do you really think that error should be handled at this specific call-site?


Nope, exceptions in Python are not the same. There are a lot of standard exceptions

https://docs.python.org/3/library/exceptions.html#concrete-e...

and a standard for exception type hierarchies:

https://github.com/psycopg/psycopg/blob/d38cf7798b0c602ff43d...

https://peps.python.org/pep-0249/#exceptions

Also, in most languages "catch Exception:" (or a similar expression) is considered bad style. People are taught to catch specific exceptions. Nothing like that happens in Go.


Sure, there is a hierarchy. But the hierarchy is open. You still need to recurse down the entire call stack to figure out which exceptions might be raised.


C also doesn’t have exceptions, and C++ similarly can distinguish between exception types (unless you just throw a generic std::exception everywhere).


Yes, python and C also do not have properly statically typed errors.

In python, well, python's a dynamically typed language so of course it doesn't have statically typed exceptions.

"a better type system than C" is a really low bar.

Go should be held to a higher bar than that.


The consumer didn't, but the error in the example is typed, it's called `MaxBytesError`.


Matching the underlying type when using an interface never feels natural and is definitely the more foreign part of Go's syntax to people who are not super proficient with it. Thus, they fall back on what they know - string comparison.


Only since Go 1.19. It was a stringly-typed error from Go 1.0 until then.


Go didn't have them at the time.

Pessimistically this is yet another example of the language's authors relearning why other languages have the features they do one problem at a time.

