
The show Fool Us is wonderful…and this clip in particular is my favorite: https://youtu.be/5_KcQt0z-eE?si=xO5gTByzV0spzS2e

Hello! I've got experience working on censorship circumvention for a major VPN provider (in the early 2020s).

- First things first, you have to get your hands on actual VPN software and configs. Many providers who are aware of VPN censorship and cater to these locales distribute their VPNs through hard-to-block channels and in obfuscated packages. S3 is a popular option but by no means the only one, and some VPN providers partner with local orgs who can figure out the safest and most efficient ways to distribute a VPN package in countries at risk of censorship or undergoing censorship.

- Once you've got the software, you should try to use it with an obfuscation layer.

Obfs4proxy is a popular tool here, and relies on a pre-shared key to make traffic look like nothing special. IIRC it also hides the VPN handshake. This isn't a perfectly secure model, but it's good enough to defeat most DPI setups.

Another option is Shapeshifter, from Operator (https://github.com/OperatorFoundation). Or, in general, anything that uses pluggable transports. While it's a niche technology, it's quite useful in your case.

In both cases, the VPN provider must provide support for these protocols.

- The toughest step long term is not getting caught using a VPN. By its nature, long-term statistical analysis will often reveal a VPN connection regardless of obfuscation and masking (and this approach can be cheaper to support than DPI by a state actor). I don't know the situation on the ground in Indonesia, so I won't speculate about what the best way to avoid this would be, long-term.

I will endorse Mullvad as a trustworthy and technically competent VPN provider in this niche (n.b., I do not work for them, nor have I worked for them; they were a competitor to my employer and we always respected their approach to the space).


Together with next-generation ML accelerators in the CPU, the high-performance GPU, and higher-bandwidth unified memory, the Neural Engine makes M4 an outrageously powerful chip for AI.

In case it is not abundantly clear by now: Apple's AI strategy is to put inference (and longer term even learning) on edge devices. This is completely coherent with their privacy-first strategy (which would be at odds with sending data up to the cloud for processing).

Processing data at the edge also makes for the best possible user experience, because it is completely independent of network connectivity and hence has minimal latency.

If (and that's a big if) they keep their APIs open to run any kind of AI workload on their chips, it's a strategy that I personally really welcome, as I don't want the AI future to be centralised in the hands of a few powerful cloud providers.


>* It seems to encourage highly dynamic code where even identifiers are dynamically created, so often you'll find an identifier, and try to search for its definition but get zero results.

I've had to take care of 3 large ruby codebases at different companies and this is what kills ruby for me.

A lot of ruby programmers think they are being clever when writing crazy dynamic ruby code, but they are only creating technical debt.

Years later, when they have left and the "context knowledge" is gone from the team, the Ruby code is a huge mess of "magical" code.

And Rails, with its "implicit" functionality that depends on method names, makes it worse. A lot of Ruby feels "magical" to me (in that things work because of some hidden implicit reason).

I prefer code that is explicit, in your face. You can quickly see what it does and how it does it. Principle of least surprise and "don't make me think".

As a language it's pretty. I like to say that Ruby is really object oriented, while Python is a mix of stuff (why len(x) instead of x.len()?).


For sure. I just started watching the new Dwarkesh interview with Zuck that was just released ( https://t.co/f4h7ko0M7q ) and you can just tell from the first few minutes that he simply has a different level of enthusiasm and passion and level of engagement than 99% of big tech CEOs.

Out of curiosity, what are flat earthers saying? That Russia faked a crash and India faked a success? Why?!

This will likely be a pattern across the planet as unsustainable practices are still the norm and will likely be so for many more decades.

The timescales of climate change, environmental degradation and biodiversity loss are too slow to be perceived as "real" risks. But they create underlying stresses that may push human systems beyond thresholds and tipping points at "random" times.

In other words, indirect impacts may be much more visibly damaging, although the causal links are not easy to internalize.

For the majority of people the detrimental outcomes of their own behavior might as well be attributed to supernatural forces, "others" or any other deflection of responsibility.


I manage this with conditional includes in ~/.gitconfig:

  [includeIf "gitdir:~/work/client1/"]
   path = ~/work/client1/.gitconfig
  [includeIf "gitdir:~/work/client2/"]
   path = ~/work/client2/.gitconfig
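
Each per-client file can then hold whatever identity or SSH settings that client needs. A minimal sketch (the name, address, and key path are hypothetical):

  [user]
   name = Jane Doe
   email = jane@client1.example
  [core]
   sshCommand = ssh -i ~/.ssh/id_client1

Note that `gitdir:` patterns only match once you're inside a repository under that path, so running `git config user.email` there is an easy way to confirm the override took effect.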

My advice to retain your sanity: stop using Time Machine and use Carbon Copy Cloner [0] instead. It works. It keeps working. It has excellent documentation for any possible backup and restore cases. It is transparent about what it is doing.

Time Machine works fine until it doesn't. And it won't tell you that a backup is broken until you try to restore from it. The errors are going to be cryptic. There is going to be no support and the forums are not going to help. The broken backup is not going to be able to be repaired. Time Machine uses the "fuck you user" approach of not providing any information about what it does, or doesn't, or intends to do or whatever.

If your data is worth backing up, don't use Time Machine.

[0] https://bombich.com


Real-world training for the newbies, exercise for the journeymen, and teaching moments for the experts. That alone is worth my tax dollars for keeping a robust search and rescue force.

At least some of the CEO's estate should go to paying for it, but pay out to victims' families first.


Why Software Patents are Bad, Period.

https://caseymuratori.com/blog_0027

Patents are out of control, and they’re hurting innovation

https://www.learnliberty.org/blog/patents-are-out-of-control...

Economic and Game Theory Against Intellectual Monopoly

https://web.archive.org/web/20120121014753/https://levine.ss...

Patents and Innovation in Economic History

https://gwern.net/doc/economics/2016-moser.pdf

Historical record shows how intellectual property systematically slowed down innovation

https://web.archive.org/web/20140306012646/http://blog.p2pfo...

Criticism of patents

https://en.wikipedia.org/wiki/Criticism_of_patents


This is quickly becoming a standard in apps, and it really shouldn't be handrolled since it's such a common requirement and easy to get wrong (between serializing/deserializing/unsetting states). In Svelte it is now as easy as using a store: https://github.com/paoloricciuti/sveltekit-search-params

in general i've been forming a thesis [0] that there is a webapp hierarchy of state that goes something like:

1. component state (ephemeral, lasts component lifecycle)

2. temp app state (in-memory, lasts the session)

3. durable app state (persistent, whether localstorage or indexeddb or whatever)

4. sharable app state (url)

5. individual user data (private)

6. team user data (shared)

7. global data (public)

and CRUD for each of these states should be as easy as possible to step up or down, with as few API changes as possible (probably a hard boundary between 4/5 for authz). this makes feature development go a lot faster, since you lower the cost of figuring out the right level of abstraction for a feature

0: relatedly, see The 5 Types of React Application State, another post that did well on HN https://twitter.com/swyx/status/1351028248759726088

btw my personal site makes fun use of it -> any 404 page takes the 404'ed slug and offers it as a URL param that fills out the search bar for you to find the link that was broken, see https://www.swyx.io/learn%20in%20public


All I know is, the parts of my app that I implemented with state machines are by far the most stable, least buggy.

fwiw, if xstate feels like too much for whatever reason, https://thisrobot.life has seemed like a decent alternative to me.


Because the documentation is bad. OAuth is really simple:

Let's say you want to use Google as an auth provider. You do this:

"Hey google who is this guy? I'm going to send them to google.com/oauth, send them back to example.com/oauth, and in the headers of the request include the word "Authorization: bearer" followed by a bunch of text"

Google says "Oh yeah I know that guy, here I'll send them back to where you said with a token"

Then later on you can take the token and say "Hey google, somebody gave me this token, who is it?"

That's pretty much it. You have to trust that Google isn't lying to you, but that's kind of the point of OAuth.

But that's never what the documentation says. It's always 10 pages long and the examples are like "here's a fully functioning python web server using flask and function decorators, oh the actual auth flow, which is really like 3 lines of code, is hidden inside of a library".

To people who write documentation: PLEASE for the love of god show me how to talk to your API both using your library, but also using something like urllib2 or requests or something.

Ideally the documentation is the absolute most minimal way of making the service work, and then adds more and more usefulness on top of that. I'm not going to judge you for writing bad code in an example. The example could practically be pseudocode for all I care. I just want to see generally how your API is supposed to work.
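
The redirect-and-token dance above really is just a couple of HTTP requests. Here's a minimal stdlib-only sketch of the first leg; the endpoint URL and parameter names are illustrative, not any provider's real contract:

```python
from urllib.parse import urlencode

# Illustrative endpoint only -- a real provider documents its own URL and scopes.
AUTH_ENDPOINT = "https://auth.example.com/oauth/authorize"

def build_auth_url(client_id: str, redirect_uri: str, scope: str = "openid email") -> str:
    """Step 1: 'Hey provider, I'm client_id; send the user back to redirect_uri.'"""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",  # ask for an authorization code
        "scope": scope,
    }
    return AUTH_ENDPOINT + "?" + urlencode(params)

# Step 2: the provider redirects the user back to redirect_uri with ?code=...
# Step 3: POST that code to the provider's token endpoint; the JSON response
# contains the token you then send as "Authorization: Bearer <token>".
```

The step-3 exchange is one more POST request; everything a library adds on top (state, PKCE, refresh tokens) is refinement of these three steps.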

edit: yes, auth0, I am looking at you.


Summary of near-14-minute video, since I certainly wished someone else had posted one to help me decide whether to watch:

Says the KGB works mainly at ideological subversion of the enemy society, not James Bond stuff. This has 4 stages:

- Demoralization. Slow, takes at least a generation to take hold or be reversed. U.S. is 3 generations in, graduates have positions of influence, and now it's mostly done by Americans to Americans.

- Destabilization. 2 to 5 years.

- Crisis. 3 to 6 weeks. E.g. in Central America at that time.

- Normalization.

KGB considers it total war, and we should too. Americans should unify behind stopping their government from aiding communism. Education system especially important.

(There wasn't really anything more specific, it was TV. I would've liked specifics on not just the claimed strategy, but how much real effect they had on mid-20th-C. U.S. education and culture. Of course there's only so much one defector would know.)


Every complex project lives and dies by a relatively small group of true believers.

From the outside, we don't know who they are, but sometimes we get a glimpse of their passion.


Have you considered technical writing? I've talked to a number of folks who are in a similar position, and writing (as opposed to generalist development) has a number of advantages:

- It rewards experience though, except for niche-specific writing, does not require understanding of specific frameworks or programming languages

- It is often 'important-but-not-urgent' work, so intermittent availability is less of a deal-breaker

- Clear writing is very much an orthogonal skill from programming aptitude writ large, and you don't need to compete with new grads


Oddly reminds me of Emily Dickinson:

  I’m Nobody! Who are you?
  Are you – Nobody – too?
  Then there’s a pair of us!
  Don't tell! they'd advertise – you know!

  How dreary – to be – Somebody!
  How public – like a Frog –
  To tell one’s name – the livelong June –
  To an admiring Bog!

You can also see which companies sent lobbyists to work on this bill.

https://www.opensecrets.org/federal-lobbying/bills/summary?c...


Pokemon games!! All of the algebra is solved, and you can dig as deeply as you like in any direction you like.

Graphics? Check. And simple, top down, tilesheet or character maps work just fine.

Battles? Check. You can leverage anything from purely functional, object oriented, websocket, long polling, SQL, you name it. Whether you use 3 elemental types or flesh out everything from multiturn, semivulnerable, exp/leveling, it's all up to you.

Wanna just build a REST/GraphQL/gRPC API? Or a UI? PokeAPI is an open-source database of nearly all game data, from moves to items to species.

Pokemon is an endless, any-scope, extremely documented, opensource-rich field of exploration.
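
As one concrete taste of "all of the algebra is solved": the classic damage formula is public and fits in a few lines. A sketch of the Gen-1-style core (simplified: no random factor, no critical hits, no later-generation tweaks):

```python
def damage(level: int, power: int, attack: int, defense: int,
           stab: float = 1.0, effectiveness: float = 1.0) -> int:
    """Simplified Gen-1-style damage: base formula times STAB and
    type-effectiveness modifiers. Omits the random factor and crits."""
    base = ((2 * level // 5 + 2) * power * attack // defense) // 50 + 2
    return int(base * stab * effectiveness)

# A level-50 attacker using a 60-power move (100 Atk vs. 80 Def):
print(damage(50, 60, 100, 80))                     # 35
print(damage(50, 60, 100, 80, stab=1.5))           # 52
print(damage(50, 60, 100, 80, effectiveness=2.0))  # 70
```

Implementing just this, then layering on types, STAB, and multi-turn moves, is exactly the kind of any-scope dig the parent describes.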


Conversely, I've been seeing more people than ever riding, building, and modifying electric and motorized bicycles. You can modify a regular $200 mountain bike with a $200 motor kit from Amazon and have a $400 motorcycle that you don't need a motorcycle license for. If you really don't want to burn fossil fuels, a $1000 Bafang mid-drive ebike kit can turn a regular bicycle into a hill-crushing monster and can be installed with almost no specialized tools.

The housing crisis is definitely a tougher nut to crack, but I don't believe the auto manufacturers have quite the same leverage over the American population long term, especially if gas prices and electric vehicle prices keep skyrocketing significantly faster than inflation.


An unsolvable problem. The rapid deterioration of skills will not stop; it is accelerating. People's desire for stability as they grow older will not change either.

The only attraction in software development is the relatively good pay. The job itself sucks. You'll spend your life sitting in a chair looking at a text editor, and that's the best part of your day, as about 50% of it is distractions. You're quite unlikely to work on something truly creative or thrilling, so it's mostly a boring grind.

Then, as the article mentions, it turns out the grind was for nothing: the rug is pulled every few years and you have to start over again. The job is cognitively taxing, so you'll turn into an absent person who lives in their head. It drains your life energy.

If I were young now, I'd say fuck it and go install solar panels or heat pumps. It's outside, physical but not too physical, thus healthy. You get to meet lots of people and you see the direct results of your work. There's no office politics and you're contributing to a tangible good thing for the world. Skill requirements don't change much.

You might come home somewhat physically tired (but over time it normalizes), with a clear head and not a care in the world. There's no overflow between work and personal life.

Choose wisely, young ones.


I find a huge disconnect between the main assertion of this article, and the lip-service paid to most workers. In the article, the author states:

> While a lot of ink has been spilled on the future of work, the majority of Americans, and most people around the world, can’t actually do their jobs remotely.

Ok, so a majority of workers can't do their jobs remotely. Granted, many of these are not anchored to a city (medical offices are in the suburbs, truckers are mobile etc.) but a huge swath of people will still work in cities, at least part time. So this strikes me as another example of tech's myopia and why the tech world is increasingly disconnected from the majority of the country.

Simply put, most people can't work from home. Most people can't afford to buy a house 2 hours away from their office and hope to have the leverage to stay employed. Most people don't make six-figure salaries. Most people are afraid of lay-offs, are afraid of offshoring, are afraid of being antiquated.

And as I have said elsewhere, this almost obsessive focus on remote work could very well be very detrimental to those six-figure salaries and signing bonuses. Why pay USA wages when you can hire a Canadian or Mexican at 75% the cost? We are writing our own obituary and everyone seems to be cheering.

Finally, this is also a step _backwards_ for the environment. In cities like Boston, NYC, SFO, etc., most workers commuted on public transit at least part of the way, or lived in dense neighborhoods without owning a car. Now they live in the exurbs and need a car to do anything. Now when they do go into the city, they don't take public transit. People didn't stop living; they just moved out of the density and thus became more reliant on ICE vehicles to move about.

Much like the long-term damage of education loss was hard to quantify against the immediate risks of COVID, we will be paying the debt incurred by this almost cultish movement to make remote work the norm.


Just speaking for myself, I've noticed that my habit is to eat what is in front of me, and clean my plate. I mean this both literally and figuratively.

If I have dessert in the house, like a bag of chocolate, then I eat one after dinner. If I don't have it in the house, then I just don't eat dessert.

If I have a social media feed full of content, then I'll scroll through all of it until there's nothing else that's new.

So what I've been doing is not entirely quitting Internet stuff; instead I've just been massively unsubscribing, unfollowing, and filtering all the feeds. Sort of a Marie Kondo thing. I go through every subreddit I'm in, every RSS feed, every account I follow on Twitter, and I strongly consider: "is this really providing lots of joy and/or value?" If not, it gets the chop.

I've cut out at least 2/3s of the stuff I was following since the peak, and it's only going down. Now when I doomscroll it's only for a few minutes. I hit the end of new content very very quickly. When that happens I start to look elsewhere. I've been reading a lot more actual books, done more chores, and been more productive overall.

As for the things I unfollowed? They clearly had no value because not only do I not miss them, I can barely even remember what they were.


On a related note, I seriously recommend everyone set merge.conflictStyle to diff3 (or even zdiff3 on newer Gits). It shows all of your version, their version, and the original, common-ancestor version. It really should be the default, and, as far as I know, the only reason it isn't is that some older diff(1) implementations don't support three-way diffs.
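
Concretely, the setting is one line in ~/.gitconfig:

  [merge]
   conflictStyle = zdiff3

With diff3/zdiff3, a conflict gains a ||||||| section showing the common ancestor, so you can see what each side changed it _from_ (file contents here are made up for illustration):

  <<<<<<< HEAD
  retries = 5
  ||||||| merged common ancestors
  retries = 3
  =======
  retry_count = 3
  >>>>>>> feature-branch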

See https://blog.nilbus.com/take-the-pain-out-of-git-conflict-re... and https://stackoverflow.com/q/27417656.


These are fun thought experiments, but I think having a personal Disaster Recovery plan is a far more applicable security exercise. What would you do if you lost your phone? If you were locked out of your google account? If you forgot your password manager master password? If your home was destroyed in a fire? Having a secure plan for quickly recovering from these scenarios is more important than trying to keep state actors or cybergangs out of your system, unless you are a VIP.

The young man, who does not know the future, sees life as a kind of epic adventure, an Odyssey through strange seas and unknown islands, where he will test and prove his powers, and thereby discover his immortality.

The man of middle years, who has lived the future that he once dreamed, sees life as a tragedy; for he has learned that his power, however great, will not prevail against those forces of accident and nature to which he gives the names of gods, and has learned that he is mortal.

But the man of age, if he plays his assigned role properly, must see life as a comedy. For his triumphs and his failures merge, and one is no more the occasion for pride or shame than the other; and he is neither the hero who proves himself against those forces, nor the protagonist who is destroyed by them.

- John Williams in Augustus


Private Relay uses ingress and egress relays. The ingress proxy does know your IP but not which sites you are visiting and what you are doing. The egress proxy is only connected to the ingress, sees what you visit but does not know who you are. Both proxies are run by different parties.

With a VPN you would have to trust one provider, who sees all of your traffic.


There was a great documentary series about the making of Frozen 2 that shows exactly this - https://en.wikipedia.org/wiki/Into_the_Unknown:_Making_Froze...

I found it fascinating how the team was still developing key plot points (let alone dialog/animation) down to the final few weeks/days before the film needed to be completed!


I have a list of adblockers I use, hope it helps other people here:

Desktop:

- Pi-Hole (network wide adblocking)

- AdGuard (device wide adblocking)

Web browsers:

- uBlock Origin

- uMatrix (not developed anymore but still works, can also use NoScript)

- SponsorBlock (blocks in-video sponsor segments, intros, outros, filler tangents, etc in YouTube)

Mobile:

- Firefox for Android / Kiwi Browser (both have web extension support so you can install uBlock Origin)

- YouTube Vanced (alternate YouTube app that blocks ads; also has SponsorBlock)

- NewPipe (alternate YouTube app that blocks ads; has SponsorBlock via a fork [0]; different UI than the main YouTube app)

- YouTube++ (for iOS, similar feature set as Vanced)

TV:

- SmartTubeNext (ad-free YouTube)

[0] https://github.com/polymorphicshade/NewPipe

