My wife moved to Belgium for me 8 years ago and also has dual citizenship. She assumed it would be fine to travel on her Belgian passport + ETA, but that's not allowed.
She also had to go through a very expensive process to renew her British passport at the last minute.
My native-Swedish friends all complain about the bureaucracy here, but it is so much more efficient than the British equivalent.
Some of that, though, is a side-effect of the ubiquitous "Bank ID" identity tool - which suffers from the same dependency on Apple/Google that the article complains about. Given the current political climate, I think the EU is going to have to figure something out to address this sort of thing.
The cost of a passport is negligible (~£10 per year on average), and it’s not reasonable to expect the UK to spend a lot of money architecting the system around a very small minority of dual citizens who don’t have passports.
Why should any special architecting be required, given that I am an EU citizen with an EU passport? On the contrary, effort was expended to prevent me from obtaining an ETA, to no purpose that has ever been justified.
Yes, I personally am not deeply inconvenienced by this, but that doesn't make it OK. Others are on much tighter budgets than I am.
It's such a nightmare at my current job as well. Everything constantly breaks and needs investigation to figure out how to fix it.
Even putting aside the MITM and how horrendous that is, the time people lose dealing with the fallout must have cost an enormous amount (in both time and money). I can't fathom why anyone competent would want to implement this, let alone fail to see how much friction and how many safety issues it causes everywhere.
These anti-security policies break TLS, thwart certificate pinning, encourage users to ignore certificate errors, expand the attack surface, increase data-leak risks, and more - all while wasting resources and money.
Zscaler and its ilk have conned the IT world. Much like CrowdStrike did before it broke the airlines.
Not to mention:
> We only use data or metadata that does not contain customer or personal data for AI model training.
I really like the way the new 'undo' works; it's much more intuitive! Especially combined with redo, it will give me even more confidence to play around.
The whole operation log was already so nice. It's saved me a few times when I did something stupid, but it also invited me to experiment and learn :).
I was super happy to discover jj a few months ago. I am on the path from barely getting by with git UIs to being able to do VCS magic with jj.
I'm in a very similar situation: been using git for a long time, but anything more complicated always via some kind of UI (often IntelliJ).
I've been using jj without significant issues for about a month and have been super happy to become comfortable with the CLI while slowly ramping up to more complicated operations.
The documentation still assumes a lot of prior knowledge, which sometimes makes it a little difficult. I love seeing blog posts like these, and hopefully more in-depth resources will appear over time. Steve's guide is good, but there are still gaps for me :).
Next I want to learn more of the revset language and become a bit more fluent with rebase operations. I love the simplified CLI, the conflict resolution, and the op log!
I love the idea of litestream and litefs and do use them for some smaller projects, but I have also worried they were abandoned. The line between "done" and "not maintained" is quite thin.
There clearly still is some untapped potential in this space, so I am glad benbjohnson is exploring and developing these solutions.
It's great that the new release will offer the ability to replicate multiple database files.
> Modern object stores like S3 and Tigris solve this problem for us: they now offer conditional write support
I hope this won't be a hard requirement, since some S3-compatible storage providers do not have this feature (yet). I also currently use the SFTP storage option.
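For context, a minimal litestream config sketch using the SFTP replica type (host, user, and paths are placeholders I made up; check the litestream configuration reference for the exact keys your version supports):

```yaml
# Hypothetical sketch: one SQLite database replicated over SFTP.
dbs:
  - path: /var/lib/app/db.sqlite
    replicas:
      - type: sftp
        host: backup.example.com:22
        user: litestream
        key-path: /etc/litestream/id_ed25519
        path: /backups/app-db
```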
I have the same issue, but the other way around: I cannot charge my laptop (Framework 13, AMD) from my power bank, which would sometimes be super useful.
I don't know why it doesn't work, nor whether it's a bug in the power bank or the laptop.
I have a Framework 13 and I've found that it's fairly finicky about what power sources it will accept. Anything 60W and higher seems to mostly work, but lower wattage chargers are much more dicey.
The one trick I've heard works (but haven't tried) is to "kick start" it by connecting two chargers, one with higher wattage and one with lower, then giving it a minute to begin charging, then disconnecting the higher-powered one. Apparently that's enough to get it past the initial issue and then it will continue charging (more slowly) from the lower wattage charger.
There was a firmware update a while back that was supposed to improve things, but it didn't change the behavior with my 27W charger.
As another data point, the firmware update fixed everything for me and I have no problems charging my Framework 13 from my 18W Pixel charger or 20W iPad charger.
For this cold spell it's a little too late, but the solution is to add a heating coil/wire to the bottom of the heat-pump housing that you can activate in those conditions.
The issue is that defrost mode will melt all the ice off the fins, but the water freezes again before it can all drain through the small drain hole in the bottom of the housing. It's a somewhat known issue with air-source heat pumps, sadly. I haven't run into it myself, but on a Dutch-language forum (Tweakers) there is a long thread about using this small modification to "survive" cold spells.
You can manually attack the ice with a heat gun or hair dryer to remove it, but that's a faff.
I think manufacturers should, and probably will, add something like this themselves in the future.
Yes, I've seen a few people drill additional drainage holes in the bottom of the outdoor unit when they experienced similar problems without a "Nordic" unit. By a Nordic unit I mean one with the features mentioned above: a heated compressor and the heated condenser vane.
Though, if snow is blowing directly inside, then I think creating a barrier or adding additional shielding around the outdoor unit is required, so that you minimize the chance of the snow DDoS-ing the unit (note: check your unit's service manual for the minimum clearances on all sides of the unit, especially the front, which is the most important one to keep free).
I wanted to like atuin. The idea is great. But it just could not match the instant search that ctrl-r with fzf offers, sadly. There was always a noticeable delay that annoyed me and made me revert to fzf for search.
For me, another issue was that I needed more keystrokes for the same behaviour (search a previously run command and execute it).
Fuzzy shell history search is just one of those mind-blowing things. I love my history, it's such a trove and I can trust it to work as some kind of external memory (it's enough to vaguely know the kubectl command, or that I want to "du -h | sort" to see what is using disk space, etc).
I also had a rather noticeable delay when launching atuin. As it turns out, this was because it checked for an update every time it launched! You can disable that update check by adding `update_check = false` to your `~/.config/atuin/config.toml` [1]. That made the delay pretty much disappear for me.
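Concretely, the relevant fragment of `~/.config/atuin/config.toml` (only this one key needs to change; the rest of the file can stay as generated):

```toml
# Don't check for new atuin versions on startup
update_check = false
```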
What? I don't expect any tool to interact with the network unless it's made specifically for that (curl, netcat, etc.). This betrays my expectations; I find it unacceptable.
But it's up to you: you can disable any sync features by installing with `--no-default-features --features client`. Your binary then won't have any networking features built in.
Oh, it's nice you fixed it, thanks! And don't worry, I updated atuin, as it's in my distro's repository (which is why I wasn't worried about disabling the update check).
I was intrigued by the local-directory-first feature of McFly (it brings up commands you previously executed in that folder first) and tried it for a bit. But there was noticeable lag compared to fzf, and the UI was a bit shaky.
Went back to fzf (in zsh).
I have seven years of zsh history. That may be a contributing factor to the issues?
I noticed that before and am glad to see it fixed. My current issue is that typing to search is noticeably slow on a 150,000-entry history, especially for the first few characters. fzf is instant for me.
Yeah, we accidentally had this blocking :/ It only checks once an hour though, and it can totally be disabled!
We introduced this because we found a lot of people reporting bugs that had already been fixed (they just needed to update), or users on the sync server that were >1yr behind on updates (making it really difficult to introduce improvements).
Same. I enjoyed atuin but found myself missing fzf's fuzzy-search experience, so I ported fzf's own ctrl-r zsh widget to read from atuin instead of the shell's history. Best of both worlds imo: you get fzf's fuzzy-search experience and speed with atuin's shell-history management and syncing functionality.
Zsh snippet below in case it's helpful to anybody. With this in your .zshrc, ctrl-r will search your shell history with fzf+atuin, and ctrl-e will bring up atuin's own fuzzy finder in case you still want it.
It only searches the last 5000 entries of your atuin history for speed, but you can tweak ATUIN_LIMIT to your desired value if that's not optimal.
    atuin-setup() {
        # Bail out if atuin isn't installed
        if ! which atuin &> /dev/null; then return 1; fi
        bindkey '^E' _atuin_search_widget

        # Stop atuin from installing its own ctrl-r binding
        export ATUIN_NOBIND="true"
        eval "$(atuin init zsh)"
        fzf-atuin-history-widget() {
            local selected num
            setopt localoptions noglobsubst noposixbuiltins pipefail no_aliases 2>/dev/null

            local atuin_opts="--cmd-only --limit ${ATUIN_LIMIT:-5000}"
            local fzf_opts=(
                --height=${FZF_TMUX_HEIGHT:-80%}
                --tac
                "-n2..,.."
                --tiebreak=index
                "--query=${LBUFFER}"
                "+m"
                "--bind=ctrl-d:reload(atuin search $atuin_opts -c $PWD),ctrl-r:reload(atuin search $atuin_opts)"
            )
            selected=$(
                eval "atuin search ${atuin_opts}" |
                    fzf "${fzf_opts[@]}"
            )
            local ret=$?
            if [ -n "$selected" ]; then
                # the += lets it insert at current pos instead of replacing
                LBUFFER+="${selected}"
            fi
            zle reset-prompt
            return $ret
        }
        zle -N fzf-atuin-history-widget
        bindkey '^R' fzf-atuin-history-widget
    }
    atuin-setup
Yeah, I got so frustrated with the odd workflow at work (having no sane way to locally test new/more advanced pipelines and having to make lots of "change .gitlab-ci" commits) that I started investigating alternatives.
At home, for some hobby projects, I've been using Earthly. It's just amazing: I can run the jobs fully locally, and they are _blazing_ fast thanks to the buildkit caching. CI now just executes the earthly targets and is super trivial (very little vendor lock-in; I personally use woodpecker-ci, but it would only take 5 minutes to convert to GH Actions).
I am not a fan of the syntax, but it's so familiar from Dockerfiles and so easy to get started with that I can't really complain. It's easy to make changes, even after months of not touching it. Unless I update dependencies or otherwise invalidate most of the cache, a normal pipeline takes <10s to run (compile, test, create and push an image to a registry).
This workflow is such a game-changer. It also makes it fairly easy to do very complicated flows [1].
I've tried to get started with dagger, but I don't use the currently supported SDKs, and the cue-lang setup was overwhelming. I think I like the idea of dagger's saner syntax, but Earthly's approachability [2] just rings true.
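To give a flavour of that Dockerfile-like syntax: a minimal Earthfile sketch for a Go project (the target names, base images, and paths here are made up for illustration; treat this as pseudocode, not a tested build):

```
VERSION 0.7
FROM golang:1.21
WORKDIR /src

build:
    # Cache dependency downloads separately from source changes
    COPY go.mod go.sum ./
    RUN go mod download
    COPY . .
    RUN go build -o app .
    SAVE ARTIFACT app

docker:
    FROM alpine:3.19
    # Reuse the binary produced by the +build target
    COPY +build/app /usr/local/bin/app
    ENTRYPOINT ["/usr/local/bin/app"]
    SAVE IMAGE my-app:latest
```

The appeal is that `earthly +docker` run on a laptop executes the same cached steps CI would, so the CI config shrinks to a single invocation.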
Silly system.