
Funny seeing State College referred to as a metro on HN :)

I think one difference is that rural PA and Kentucky have a lot of positive similarities, both being in greater Appalachia. Not as clearly so with CB.


"Metro" may not be merited, but "local area that doesn't feel like Kentucky" is way more characters. ;)

It’s also a little silly for the same reason discussions of theoretical computability often are: time and space requirements. In practice the Universe, even if computable, is so complex that simulating it would require far more compute than there are physical particles and far more time than remains until heat death.

Hehe yeah.. For me, it's just an inverted search for God. There must be something behind it; if it's not God, then it must be a simulation! Kinda sad, I would expect more from scientists.

The big riddle of the Universe is how all that matter loves to organize itself, from basic particles to atoms, basic molecules, structured molecules, things, and finally life.. Probably unsolvable, but that doesn't mean we shouldn't research and ask questions...


>The big riddle of the Universe is how all that matter loves to organize itself, from basic particles to atoms, basic molecules, structured molecules, things, and finally life.. Probably unsolvable, but that doesn't mean we shouldn't research and ask questions...

Isn't that 'just' the laws of nature + the 2nd law of thermodynamics? Life is the ultimate increaser of entropy, because for all the order we create we just create more disorder.

Conway's game of life has very simple rules (laws of nature) and it ends up very complex. The universe doing the same thing with much more complicated rules seems pretty natural.
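To make that concrete: the entire rule set fits in a couple of lines. A minimal sketch (assuming Python/numpy and a wrap-around grid of 0s and 1s):

    import numpy as np

    def step(grid):
        # Count each cell's eight neighbors by summing shifted copies of the grid.
        neighbors = sum(
            np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)
        )
        # The entire "laws of nature": a live cell survives with 2 or 3
        # neighbors; a dead cell becomes alive with exactly 3.
        return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

Iterate that one line of physics and gliders, oscillators, and even Turing-complete machines fall out.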


Yeah, agreed. The actual real riddle is consciousness. Why does it seem some configurations of this matter and energy zap into existence something that (allegedly) did not exist in their prior configuration?

I'd argue that it's not that complicated: if something meets the five criteria below, we must accept that it is conscious:

(1) It maintains a persisting internal model of an environment, updated from ongoing input.

(2) It maintains a persisting internal model of its own body or vehicle as bounded and situated in that environment.

(3) It possesses a memory that binds past and present into a single temporally extended self-model.

(4) It uses these models with self-derived agency to generate and evaluate counterfactuals: predictions of alternative futures under alternative actions (i.e. a general predictive function).

(5) It has control channels through which those evaluations shape its future trajectories in ways that are not trivially reducible to a fixed reflex table.

This would also indicate that Boltzmann Brains are not conscious -- so it's no surprise that we're not Boltzmann Brains, which would otherwise be very surprising -- and that P-Zombies are impossible by definition. I've been working on a book about this for the past three years...


If you remove the terms "self", "agency", and "trivially reducible", it seems to me that a classical robot/game AI planning algorithm, which no one thinks is conscious, matches these criteria.

How do you define these terms without begging the question?


If anything has, minimally, a robust spatiotemporal sense of itself, and can project that sense forward to evaluate future outcomes, then it has a robust "self."

What this requires is a persistent internal model of: (A) what counts as its own body/actuators/sensors (a maintained self–world boundary), (B) what counts as its history in time (a sense of temporal continuity), and (C) what actions it can take (degrees of freedom, i.e. the future branch space), all of which are continuously used to regulate behavior under genuine epistemic uncertainty. When (C) is robust, abstraction and generalization fall out naturally. This is, in essence, sapience.

By "not trivially reducible," I don't mean "not representable in principle." I mean that, at the system's own operative state/action abstraction, its behavior is not equivalent to executing a fixed policy or static lookup table. It must actually perform predictive modeling and counterfactual evaluation; collapsing it to a reflex table would destroy the very capacities above. (It's true that with an astronomically large table you can "look up" anything -- but that move makes the notion of explanation vacuous.)

Many robots and AIs implement pieces of this pipeline (state estimation, planning, world models), but current deployed systems generally lack a robust, continuously updated self-model with temporally deep, globally integrated counterfactual control in this sense.

If you want to simplify it a bit, you could just say that you need a robust and bounded spatial-temporal sense, coupled to the ability to generalize from that sense.


> so it's no surprise that we're not Boltzmann Brains

I think I agree you've excluded them from the definition, but I don't see why that has an impact on likelihood.


I don't think any of these need to lead to qualia for any obvious reason. It could be a p-zombie, why not?

The zombie intuition comes from treating qualia as an "add-on" rather than as the internal presentation of a self-model.

"P-zombie" is not a coherent leftover possibility once you fix the full physical structure. If a system has the full self-model (temporal-spatial sense) / world-model / memory binding / counterfactual evaluator / control loop, then that structure is what having experience amounts to (no extra ingredient need be added or subtracted).

I hope I don't later get accused of plagiarizing myself, but let's embark on a thought experiment. Imagine a bitter, toxic alkaloid that does not taste bitter. Suppose ingestion produces no distinctive local sensation at all – no taste, no burn, no nausea. The only "response" is some silent parameter in the nervous system adjusting itself, without crossing the threshold of conscious salience. There are such cases: Damaged nociception, anosmia, people congenitally insensitive to pain. In every such case, genetic fitness is slashed. The organism does not reliably avoid harm.

Now imagine a different design. You are a posthuman entity whose organic surface has been gradually replaced. Instead of a tongue, you carry an in‑line sensor which performs a spectral analysis of whatever you take in. When something toxic is detected, a red symbol flashes in your field of vision: “TOXIC -- DO NOT INGEST.” That visual event is a quale. It has a minimally structured phenomenal character -- colored, localized, bound to alarm -- and it stands in for what once was bitterness.

We can push this further. Instead of a visual alert, perhaps your motor system simply locks your arm; perhaps your global workspace is flooded with a gray, oppressive feeling; perhaps a sharp auditory tone sounds in your private inner ear. Each variant is still a mode of felt response to sensory information. Here's what I'm getting at with this: There is no way for a conscious creature to register and use risky input without some structure of "what it is like" coming along for the ride.


> The zombie intuition comes from treating qualia as an "add-on" rather than as the internal presentation of a self-model.

Haven't you sort of smuggled a divide back into the discussion? You say "internal presentation" as though an internal or external can be constructed in the first place without the presumption of a divided off world, the external world of material and the internal one of qualia. I agree with the concept of making the quale and the material event the same thing, (isn't that kinda like Nietzsche's wills to power?), but I'm not sure that's what you're trying to say because you're adding a lot of stuff on top.


I have more or less the same views, although I can’t formulate them half as well as you do. I would have to think more in depth about those conditions that you highlighted in the GP; I’d read a book elaborating on it.

I’ve heard a similar thought experiment to your bitterness one from Keith Frankish: You have the choice between two anesthetics. The first one suppresses your pain quale, meaning that you won’t _feel_ any pain at all. But it won’t suppress your external response: you will scream, kick, shout, and do whatever you would have done without any anesthetic. The second one is the opposite: it suppresses all the external symptoms of pain. You won’t budge, you’ll be sitting quiet and still as some hypothetical highly painful surgical procedure is performed on you. But you will feel the pain quale completely, it will all still be there.

I like it because it highlights the tension in the supposed platonic essence of qualia. We can’t possibly imagine how either of these two drugs could be manufactured, or what it would feel like.

Would you classify your view as some version of materialism? Is it reductionist? I’m still trying to grasp all the terminology, sometimes it feels there’s more labels than actual perspectives.


That is not what a p-zombie is. The p-zombie does not have any qualia at all. If you want to deny the existence of qualia, that's one way a few philosophers have gone (Dennett), but that seems pretty ridiculous to most people.

You're offering a dichotomy:

1. Qualia exist as something separate from functional structure (so p-zombies are conceivable)

2. Qualia don't exist at all (Dennett-style eliminativism)

But I say that there is a third position: Qualia exist, but they are the internal presentation of a sufficiently complex self-model/world-model structure. They're not an additional ingredient that could be present or absent while the functional organization stays fixed.

To return to the posthuman thought experiment, I'm not saying the posthuman has no qualia, I'm saying the red "TOXIC" warning is qualia. It has phenomenal character. The point is that any system that satisfies certain criteria and registers information must do so as some phenomenal presentation or other. The structure doesn't generate qualia as a separate byproduct; the structure operating is the experience.

A p-zombie is only conceivable if qualia are ontologically detachable, but they're not. You can't have a physicalism which stands on its own two feet and have p-zombies at the same time.

Also, it's a fundamentally silly and childish notion. "What if everything behaves exactly as if conscious -- and is functionally analogous to a conscious agent -- but secretly isn't?" is hardly different from "couldn't something be H2O without being water?," "what if the universe was created last Thursday with false memories?," or "what if only I'm real?" These are dead-end questions. Like 14-year-old-stoner philosophy: "what if your red is ackshually my blue?!" The so-called "hard problem" either evaporates in the light of a rigorous structural physicalism, or it's just another silly dead-end.


You have first-person knowledge of qualia. I'm not really sure how you could deny that without claiming that qualia don't exist. You're claiming some middle ground here that I think almost all philosophers and neuroscientists would reject (on both sides).

> "couldn't something be H2O without being water?," "what if the universe was created last Thursday with false memories?," or "what if only I'm real?" These are dead-end questions. Like 14-year-old-stoner philosophy: "what if your red is ackshuallly my blue?!"

These are all legitimate philosophical problems; Kripke definitively solved the first one in the 1970s in Naming and Necessity. You should try to be more humble about subjects which you clearly haven't read enough about. Read the Mary's room argument.


> You have first-person knowledge of qualia. I’m not sure how you could deny that...

I don't deny that. I explicitly rely on it. You must have misunderstood... My claim is not:

1) "There are no qualia"

2) "Qualia are an illusion / do not exist"

My claim is: First-person acquaintance does not license treating qualia as ontologically detachable from the physical/functional. I reject the idea that experience is a free-floating metaphysical remainder that can be subtracted while everything else stays fixed. At root it's simply a necessary form of internally presented, salience-weighted feedback.

> This middle ground would be rejected by almost all philosophers and neuroscientists

I admit that it would be rejected by dualists and epiphenomenalists, but that's hardly "almost all."

As for Mary and her room: As you know, the thought experiment is about epistemology. At most it shows that knowing all third-person facts doesn’t give you first-person acquaintance. It is of little relevance, and as a "refutation" of physicalism it's very poor.


Is there a working title or some way to follow for updates?

There is no objective evidence consciousness exists as distinct from an information process.

There is no objective evidence of anything at all.

It all gets filtered through consciousness.

"Objectivity" really means a collection of organisms having (mostly) the same subjective experiences, and building the same models, given the same stimuli.

Given that less intelligent organisms build simpler models with poorer abstractions and less predictive power, it's very naive to assume that our model-making systems aren't similarly crippled in ways we can't understand.

Or imagine.


That's a hypothesis, but the alternate hypothesis, that consciousness is not well defined, is equally valid at this point. Occam's razor suggests consciousness doesn't exist, since it isn't necessary and isn't even mathematically or physically definable.

For me the biggest riddle is: why something instead of nothing?

That's the question that prevents me from being an atheist and shifts me to agnosticism.


There is both in superposition.

> The big riddle of Universe is, how

A lot of people are more interested in the Why of the Universe than the How, though.

How is an implementation detail, Why is "profound". At least that's how I think most people look at it.


Yeah, I guess... But such a question is not really interesting.. The answer is simple: there is nothing behind it.. and people aren't comfortable with that answer. Hence "How" is more interesting and scientific..

You expect scientists to not ask: ‘what is behind all this?’

Ha


Yes, is that (obvious) point being addressed in the paper? At first skimming, it just says that a "sufficiently souped up laptop" could, in principle, compute the future of the universe (i.e. Laplace's demon), but I haven't seen anything about the subsequent questions of time scales.

Computing the future is cool, but computing the past state is also really cool as it essentially allows time travel into (a copy of) the past.

The real universe might be different and far more complex than our simulated reality. Maybe a species that can freely move within 4 or 5 dimensions is simulating our 3D + unidirectional-time reality, just like we "simulate" reality with SimCity and The Sims.

But then we don't have a universe simulating itself, just a universe simulating a low-fi imitation.

You're predicating this on particles, heat death, etc., as you understand them, being applicable to any potential universe. Such rules are only known to apply in this universe.

A universe is simply a function, and a function can be called multiple times with the same/different arguments, and there can be different functions taking the same or different arguments.


The issue with that, in terms of the simulation argument, is that the simulation argument doesn't require a complete simulation in either space or time.

It also doesn't require a super-universe with identical properties and constraints.

There's no guarantee their logic is the same as our logic. It needs to be able to simulate our logic, but that doesn't mean it's defined or bound by it.


Indeed, this is a common practice in the broader data. It seems the linked article is filtering to resolvable+hosted domains, a subset of overall domain parking.

Yup. That's why I am suggesting to stop that practice and just remove the IP rather than trusting the landing page someone else maintains. Or, if one would like to give bots something to do, point it to a multicast address or perhaps a MoD/US military address.

We did a large-scale study of this phenomenon recently: https://www.cs.bu.edu/faculty/crovella/paper-archive/wung-if...

Across a broad sample of typo domains of major sites, most registered domains aren’t actually reachable, implying they are registered for defensive, legitimate, or unrelated purposes. Interestingly, the typo space on major sites is actually very sparsely registered (2% at edit distance 1), meaning that typosquatting may actually be underexploited.
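For a sense of scale, the edit-distance-1 "typo space" of a domain label is easy to enumerate. A rough sketch (a hypothetical helper in Python; TLD handling omitted) of the candidate set of which, per the paper, only ~2% turn out to be registered:

    from string import ascii_lowercase

    def edit1(label):
        # All labels one deletion, substitution, insertion, or adjacent
        # transposition away from the input.
        alphabet = ascii_lowercase + "0123456789-"
        splits = [(label[:i], label[i:]) for i in range(len(label) + 1)]
        deletes = {l + r[1:] for l, r in splits if r}
        substitutions = {l + c + r[1:] for l, r in splits if r for c in alphabet}
        inserts = {l + c + r for l, r in splits for c in alphabet}
        transposes = {l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1}
        return (deletes | substitutions | inserts | transposes) - {label}

    print(len(edit1("facebook")))  # several hundred candidate typo labels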


>Interestingly, the typo space on major sites is actually very sparsely registered (2% at edit distance 1), meaning that typosquatting may actually be underexploited.

Anecdotally, autosuggestions and improved browsing-history recommendations may mean this is way less lucrative than it used to be.

Also, anyone doing search-like behaviour in their address bar is far more likely to see a knowledge-panel-style reply for prominent websites vs the ten-blue-links format of historical search engine results, which may have included the nefarious domains.

I'd leap to say that because of this, users find their intended domain by using natural language far more than they used to.


I would argue that it is 100% searching in the address bar. Mobile has trained people to do that, and search results are usually solid enough to take you to the right place.

Mobile? Haven’t all desktop browsers switched to having the address bar do search unless you type a fully qualified domain, since Chrome came out c. 2008? Before that there were 2 fields at the top of browser windows, but Firefox and Safari ditched those pretty quickly after Chrome.

Yeah, I'd lean towards a high % also; it would take some time to prove it.

Also, homograph attacks are likely much less of a thing for the above reasons.


A possible explanation for why typos for major sites are sparsely registered could be that the domain industry has put a lot of focus in the last decade on addressing malicious registrations, and many registrars that focus on the market segment of large companies sell products that monitor for malicious registrations, with a legal response in case one pops up. It also seems that bulk registrars have gotten better filters to reduce malicious registrations, which is a service some security companies offer to registrars. In theory it should be quite a bit more difficult today for a malicious actor to go to a major registrar and buy an obviously trademark-infringing domain for a major site.

Domain/trademark monitoring also directly competes with defensive registrations. Often it is a question of whether you want to pay the lawyers/monitoring service, a large number of registration/renewal fees, or both.


My guess is also that not all typos are equal. There should be a stricter edit-distance variant for 1-keystroke-away edits (that is, delete, swap, or add/replace one key away) instead of pure Levenshtein. Fqcebook is a more likely typo than Fjcebook, for example, but they are both edit-1.

Someone should make a qwertyshtein() function.
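Something like this, maybe. A toy sketch (the QWERTY adjacency map is abridged and the 0.5 weight is an arbitrary choice): plain Levenshtein dynamic programming, but substituting a neighboring key costs less than substituting a distant one.

    # Abridged QWERTY adjacency; a real version would cover the full keyboard.
    ADJACENT = {
        "a": "qwsz", "q": "wa", "w": "qeas", "e": "wrsd",
        "s": "awedxz", "f": "rtdgcv", "j": "uihknm",
    }

    def qwertyshtein(a, b):
        # dp[i][j] = cheapest way to turn a[:i] into b[:j]
        dp = [[0.0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i in range(len(a) + 1):
            dp[i][0] = float(i)
        for j in range(len(b) + 1):
            dp[0][j] = float(j)
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                if a[i - 1] == b[j - 1]:
                    sub = 0.0
                elif b[j - 1] in ADJACENT.get(a[i - 1], ""):
                    sub = 0.5  # adjacent-key substitution: a plausible typo
                else:
                    sub = 1.0  # distant substitution: an unlikely typo
                dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                               dp[i][j - 1] + 1,        # insertion
                               dp[i - 1][j - 1] + sub)  # substitution
        return dp[-1][-1]

    print(qwertyshtein("facebook", "fqcebook"))  # 0.5 -- plausible typo
    print(qwertyshtein("facebook", "fjcebook"))  # 1.0 -- unlikely typo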

If I understand correctly from the paper what qualifies as an edit distance of 1 is pure Levenshtein distance-1 right?

Just curious, because while the edit-1 space can be fairly big, I'd assume all edits have very different probabilities. So the squatted domains probably skew to higher-probability edits. By that I mean mostly keyboard-typo edits, e.g. on a phone: the "cwt" typo is more likely than "cpt" for "cat" because of the a and w keyboard proximity. I wonder what the squatting rate is when you filter for edits within one keystroke, for example (this only really changes the add and replace types of edits, not delete or swap).


> Interestingly, the typo space on major sites is actually very sparsely registered (2% at edit distance 1)

It seems to me that "edit distance 1" still describes some very implausible typos.


Yeah corner and comer is an edit distance of 2 but perhaps more lucrative than corner and corker, as a bad example.

I saw rnicrosoft in use the other day, somewhere.

Yes, Levenshtein in that case gives too big an exploration space. A keyboard edit distance would probably work better. Delete and swap are still 1, but replace and add should be within, say, 1 key at most.

"... meaning that typosquatting may actually be underexploited."

Missing from the paper is an examination of web user behaviour

Over time, so-called "direct navigation", where the domain name, e.g., example.com, was typed into the browser address bar, has declined. By the time Google terminated "AdSense for domains" in 2012, IMO it had managed to systematically subsume most of the traffic and associated revenue from the typosquatting/domain parking racket

https://web.archive.org/web/20250320184725if_/https://domain...

With the introduction of the so-called "omnibar" or "omnibox" in Firefox^1 and Chrome, typographical errors in domain names are submitted as "searches" to a company that sells ad services. For example, Safari, Firefox, and Chrome all send search traffic to Google, LLC. From the DoJ antitrust litigation we know that Google has been paying ridiculously large sums of money to various companies for this traffic

1. Firefox originally called this the "awesome bar"

https://web.archive.org/web/20250927011424if_/https://www.cn...

Not to mention the increasingly common user practice of direct navigation to a search engine webpage, e.g., google.com, then searching for the desired website, e.g., example.com

As everyone knows, one company, in some cases through acquisitions and/or anticompetitive conduct, came to control 1. search, 2. "the web browser", 3. online advertising services on the open web, 4. operating systems (mobile, "chromebook"), ...

If parked domains only get traffic from "direct navigation",^2 then it stands to reason that such traffic has declined as it has been increasingly captured by advertising-sponsored "default browsers" and, ultimately, Google. IMO, it makes sense that domain parking as a means of delivering ads and generating revenue would give way to these domains becoming unregistered or registered to malware distributors or the like

What are the registration histories for the unregistered edit-distance-1 typosquatting domains? Consider the number that are "currently unregistered" versus "never before registered"

2. Perhaps the registrants are using other ways to send traffic to these domains


There’s a fundamental trade-off between performance and privacy for onion routing. Much of the slowness you’re experiencing is likely network latency, and no software optimization will improve that.


File extension reference notwithstanding, what an awful choice of vanity domain name. Depending on where you look, .so isn’t even allowed to be used by non-Somali affiliates, and there are horror stories online of arbitrary non-renewals.


That's why we have PDF/A: https://en.wikipedia.org/wiki/PDF/A


My two cents: As someone who is actively hiring and looking at a lot of résumés from fresh grads (albeit looking for more systems programming experience), I would personally not move forward with an interview for this CV.

Red flags for me:

* Talks a big talk on AI but it’s inscrutable whether any of it goes beyond “I installed PyTorch and ran example code/prompted an API”

* Multiple projects but from demos it’s very unclear what they actually did. (Not “very legible technical work”)

* No GitHub on résumé despite claiming it on “skills”

I can get a good engineer onboarded to AI tooling quickly (heck, some of the referenced techniques have existed for only months), but I can’t reliably take someone from AI consumer to engineer.

These issues are very widespread. I’d say under 10% of junior résumés I look at give me confidence that they’d show up and know how to write real systems instead of just gluing things together.


>I’d say under 10% of junior résumés I look at give me confidence that they’d show up and know how to write real systems instead of just gluing things together

They're juniors. With that kind of mentality, I'm not sure you're looking for juniors, but instead are looking for someone with a few years in industry that is apparently masquerading as a junior. But perhaps my expectation of "real systems" is different than yours.

To put this into perspective, I mentor and have mentored lots of juniors from code schools and traditional four-year university computer science majors in web dev. Having some concept of both the web stack/language and a basic understanding of good coding practices is about the most I'd expect. All the things that sit on top of it, like scaling the stack, performance optimizations, and the like, are things I wouldn't even come close to expecting a junior to know. Those are things I'd expect to have to coach on.


> They're juniors. With that kind of mentality, I'm not sure you're looking for juniors, but instead are looking for someone with a few years in industry that is apparently masquerading as a junior.

This is just how the junior job market seems to operate now. Barely anyone wants some open-ended, curious recent graduate who's eager to expand their technical knowledge with new skills that are taught to them at the job. Everyone wants juniors to punch well above their weight - to even have a chance of an interview, ideally your resume should indicate that you're already an expert at every required skill in the job listing. They fish out the top 1-5% of all graduates and the really desperate people who are willing to go work a junior job despite extensive work experience - everyone else is welcome to keep putting in hundreds of applications elsewhere. Of course, it makes sense that you'd want the best - but it feels like there's active pressure now to hire as few people as possible regardless of circumstance. Companies will keep searching for the miracle candidate - if they don't find one, they'll just repost the listing until one shows up. Everyone else has locked the doors on hiring altogether. We're probably going to see a push on juicing more value out of existing workers than paying new ones, so the average graduates will continue having nowhere to go.


Indeed, real systems is a lower bar than you imply. In this case it’s unclear from the CV that I’d be getting someone who has ever written more than a 50-line one-off Python script.

You mention mentoring people in undergrad. Sure, by a year-3 course I’d expect to have to coach beyond basic understanding. To say that basic understanding of performance optimization is out of scope for a BS graduate is not supported by my experience, however. We’re not talking about boot camp grads here.


> These issues are very widespread. I’d say under 10% of junior résumés I look at give me confidence that they’d show up and know how to write real systems instead of just gluing things together.

You’re looking for seniors with junior pay grades.


When I hire, I always look at personal GitHub projects. They are a hint that the person loves coding and loves creating software. I'm not looking for 1000-star projects, and I don't even look at the kind of project, just the fact that the candidate has done some work in their spare time.

If there's no GitHub project, I ask the candidate what websites and web communities they watch/participate in regularly. I check if they are related to programming or building software (bonus points if you read HN :-)).

Both are good signs that there is an interest in the job that goes beyond paycheck.


While those are certainly indicators of interest, is their absence an indicator of lack of interest? In other words, "has side projects" is sufficient to prove "likes programming", but I don't think it's necessary.

There are only 24 hours in a day, and mastering any serious curriculum already takes the majority of those.

Then there's family: some have parents to support, and/or a spouse they can't just ignore all evening to sit in front of a terminal typing git push.

Lastly, plenty of people learn or tinker on their own, but they don't all have the (socioeconomically loaded) reflex of marketing themselves with a github repo.

Of my whole Bachelor's, Master's and PhD cohorts, I haven't known one person to have a github repo outside of employment. Some were building little things on the side but sharing them informally, never in public.

What you're looking for is people with no social life, no responsibilities towards anyone or anything (even just being an immigrant can be a huge time and energy sink), and with the social background to confidently market themselves by putting what's objectively noise online.


I happen to have kids and have always spent about 30 min a day, more like an hour, every day, looking at some coding side projects. I'd been doing that before I was a dad, and less since I became one. But I still do it. I'm past 50 now and I must admit the quantity of work I put out on side projects has decreased.

Side projects require passion, not discipline, not effort, just passion. And that's what I'm looking for. When you like what you do, you're generally good at it.

Now, I have hired people who don't have side projects, of course; they're not that common, but not rare either. And the side-project question is surely not the main question. But it's a cool differentiator.

Moreover, questioning people on their side projects is always more fun for them.

And of course, people with side projects sometimes daydream, so you have to keep them busy :-)

And having a GitHub repo is a 15-minute job. And most of the time, nobody will care about your side project. It's not like you're marketing yourself. As I have said, I don't care about the stars. I'm just looking at what you do on the side, what interests you in a domain that is not far from your job.

And side projects are always cool: I have interviewed people who were working on their own game engines, planet trajectories, websites for friends, playing with machine learning, writing little tools like Javadoc,...


Yeah, I had a personal GitHub project in 2010; nowadays it is not feasible with kids. Time is either spent making money or getting together with family, with an occasional something else; coding for fun isn’t usually in there (though I wish it were).

You’re looking for special snowflakes, but want to pay usual money. You may find that money also works as a motivation, a transactional relationship between an employer and an employee is healthy in that neither has to pretend there’s anything else that ultimately matters. (The illusion can be kept only until the first round of layoffs anyway.)


Hard to see how this provides substantial benefits over OIDC. Either one requires support from the email provider, but one is already standardized and has widespread support.


OIDC is usually limited to a small selection of providers.


Well, the problem is simply user base. There is no point in being a provider if you have 100 users. On the other hand, despite OIDC being standardised, there are way too many ways of implementing it. It is essentially impossible to have "wildcard" support for OIDC providers. How do I know? I just implemented one myself. For example, providers usually support only one or very few authorisation flows, so in reality you would likely end up with a lot of failed attempts to sign up with some "3rd world" provider.

PS: just take PKCE, where the provider has no way of communicating whether it is supported, or required, at all.


I have just added OIDC support for bring-your-own-SSO to our application, and it wasn’t as bad as you make it sound: As long as the identity provider exposes a well-known OpenID configuration endpoint, you can figure it out (including whether PKCE is required or supported, by the way!)

The only relevant flow is authorisation code with PKCE now (plus client credentials if you do server-to-server), and I haven’t found an identity provider yet that wouldn’t support that. Yes, that protocol has way too many knobs providers can fiddle with. But it’s absolutely doable.
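For what it’s worth, the discovery step itself is only a few lines. A sketch (Python with requests; the issuer URL is a placeholder, while the path and field names are the standard OIDC Discovery / RFC 8414 ones):

    import requests

    issuer = "https://id.example.com"  # hypothetical identity provider
    meta = requests.get(issuer + "/.well-known/openid-configuration").json()

    authorize_url = meta["authorization_endpoint"]
    token_url = meta["token_endpoint"]
    # PKCE support is advertised right in the metadata:
    supports_pkce = "S256" in meta.get("code_challenge_methods_supported", [])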


I didn't say it was impossible, just impractical, and that is why the majority of services that use SSO only support Google, Apple, Twitter, or Facebook. You write the code specific to these few providers once and are done with it for good. There is little reason to invest time and money in adding generic support for other providers. It's just the way it is. If the OIDC protocol were streamlined a bit, we could easily have universal support. But then again, these big providers would likely be stuck on the current version and not bother adjusting to the new, simpler version, if it were to come to be.


PKCE is trivially easy to announce support for; you put it in the issuer metadata.

code_challenge_methods_supported

https://datatracker.ietf.org/doc/html/rfc8414#section-2


With a metadata endpoint, things become much easier, that is true.

Though how would you implement it? Like, a user comes to your website and wants to sign in with some foo.bar provider; do you force the user to paste in the domain where you go look for the metadata? What about Facebook or Google, do you give them special treatment with prepared buttons, or do you force users to still put in their domains? What about people using your flow to "ddos" some random domain...?


FedCM offers some hope here, where the browser gets some capability to announce the federation domains to the RP. It's not straightforward, though, of course. In this case it's inverted: you are providing the URL of the MCP server, and the MCP server is providing the URL of an authz server it supports. The client uses the metadata lookup to know whether it should include the PKCE bits or not.


With DCR (dynamic client registration) you can have an unlimited number of providers. Basically, just query the well-known endpoint and then use regular OAuth with a random secret.

There's also a proposal to add stateless ephemeral clients.
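Roughly, registering against the provider's advertised registration_endpoint looks like this. A sketch per RFC 7591 (assuming the provider allows open registration; the URLs and client name are placeholders):

    import requests

    meta = requests.get(
        "https://id.example.com/.well-known/openid-configuration"  # hypothetical
    ).json()

    resp = requests.post(meta["registration_endpoint"], json={
        "client_name": "my-app",
        "redirect_uris": ["https://app.example.com/callback"],
        "grant_types": ["authorization_code"],
        # Public client: no pre-shared secret; rely on PKCE instead.
        "token_endpoint_auth_method": "none",
    })
    client_id = resp.json()["client_id"]  # then proceed with the normal auth-code flow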


DCR is cool, but I haven't seen anyone roll it out. I know it has to be enabled per tenant in Okta and Azure (which nobody does), and I don't think Google Workspace supports it at all yet. It's a shame that OIDC spent so long and got so much market share tied to OAuth client secrets, especially since classic OpenID had no such configuration step.


DCR is now being pushed by AI companies via the MCP protocol, which basically requires DCR.

So it might get some traction, and finally break the monopoly of "Login With Google" buttons.


This is because the MCP folks focus almost entirely on the client developer experience at the expense of implementability. CIMD is a bit better and will supplant DCR, but overall there has yet to be a good flow here that supports management in enterprise use cases.


This isn’t fundamental to its design, though. It’s a result of providers wanting to gate access to identities for various reasons. The protocol presented here does nothing to address this gating.


A major frustration in my life is that LinkedIn QR codes will not support all caps. It’s not even a profile capitalization issue; the app will refuse to scan the code if the “/in/” is capitalized. The resulting size difference is quite noticeable, particularly in small formats.
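The size difference is easy to reproduce. A sketch (assuming the python-qrcode library, which automatically picks the densest encoding mode the data allows; the profile URL is a placeholder):

    import qrcode

    def qr_version(data):
        qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_M)
        qr.add_data(data)
        qr.make(fit=True)  # choose the smallest QR version that fits
        return qr.version

    # All caps qualifies for alphanumeric mode (~5.5 bits/char)...
    print(qr_version("HTTPS://WWW.LINKEDIN.COM/IN/SOMEPROFILE"))
    # ...while any lowercase character forces byte mode (8 bits/char),
    # producing a denser, larger symbol.
    print(qr_version("https://www.linkedin.com/in/someprofile"))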


I can't help but interpret that as a clue as to which internal groups hold power over there.


Huh?


Lowercase "in" is a major part of their branding. Forcing usage of lowercase "in" in this scenario supports the branding even if it doesn't make sense from an engineering standpoint.


They could redirect from the uppercase to the lowercase URL so that it still looks the way they want.

It might not be intentional that it doesn’t work with uppercase; they just made it lowercase, and by default it’s case-sensitive on whatever software stack they use for hosting.


That's why engineering should silently add a toLowerCase and just generate QR codes in lowercase as well.


I would not link directly to LinkedIn. They have changed the optimal URL many times.


According to the article, / is not in the basic alphanumeric alphabet anyway?


The article has an off-by-one error: there are 45 characters in the basic alphanumeric alphabet, and / is the one it leaves out.
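For reference, the full QR alphanumeric set per ISO/IEC 18004, with / included:

    ALPHANUMERIC = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:"
    assert len(ALPHANUMERIC) == 45 and "/" in ALPHANUMERIC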

