Thanks! I'd say the user experience is much better. Migadu is a great company but sometimes using it can be a headache. That's really one of our main goals, making it as easy as possible.
I've been using Migadu for a while, and haven't experienced any headaches. Which features/services do you offer that Migadu doesn't which makes things easier?
Excellent production quality, but I would have appreciated a bit more skepticism on the part of the host of Real Engineering.
In particular, adding helium-3 to a fusing deuterium plasma does not make all of the fusion aneutronic; it only makes some percentage of the fusion events aneutronic. So you're "just" adding a fuel additive that "just" changes the emissions somewhat.
Plus, D-D side reactions breed tritium, and D-T fusion releases fast neutrons; I'm sure those will also be tricky to either convert into useful energy or shield against.
I'm all in favor of trying, and I wish them luck, but it all smacked of a last-minute publicity event to try to reassure investors that they were just one last upgrade away from producing power.
I would suggest the (longer, drier) fusion master class series by Dr. Matthew Moynihan.
I use Google Podcasts because of its simple UI. I am subscribed to a few creators and the clean UX "just works", also on Android Auto as I listen to most podcasts on the road.
Hope at least they integrate the podcast functionality well into YouTube Music without bloating the Android Auto functionality too much...
It will not get cheaper until Nvidia is disrupted on the software side. There is already plenty of hardware that can do this more cheaply, starting but not ending with Google's TPUs.
It requires a software breakthrough: new, efficient methods for training and fine-tuning these AI models. Currently there is no way around training the whole thing and burning millions in the process.
Until then, unless you are a big tech company that can eat the cost, it doesn't seem wise to burn your entire VC funding on expensive fine-tuning and inference as your AI model scales to millions of users.
At this point, I think there is sufficient motivation (dramatically high training costs) that we could see major algorithmic, architectural, and/or training methodology improvements at the code level that make these sorts of things possible on commodity hardware within a few years.
We're already starting to see that with a few projects, and I think once the scale tips such that it becomes practical to train something of GPT-4 quality for < $10k, the main focus of current research will shift to generating new models trained on commodity hardware.
My true hope is that the entire problem domain eventually ends up falling within the range of commodity hardware and FANG finds it can't really add any value (other than perhaps convenience) regardless of their superior compute resources, resulting in massive democratization of this technology.
That will of course open things up and make LLMs more accessible to bad actors, but this is ultimately a much better thing than the likes of FANG / OpenAI / etc being the sole gatekeepers of this tech. Just like Google has very little real motivation to fight click-fraud (there have been rumors for years that it is responsible for a double-digit percentage of their revenue), these mega corporations will have very little real motivation to stop "bad actors" from paying to use their APIs. So the democratized situation is the less Orwellian one ultimately, since bad actors are going to use it either way.
Are there big reasons the training can't be done SETI@home style? You could even pay people for use of their graphics cards and run the training multiple times on different machines to make sure results weren't being gamed.
Training still relies on very low-latency connections between all the devices. When distributing training across multiple machines, most people use machines in close vicinity connected via InfiniBand to get the lowest possible latency.
Going from that to the dozens-to-hundreds of milliseconds of latency over the internet, or the hours if you do classical SETI@home, is a big step. There are people working on it, though.
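A rough back-of-the-envelope sketch of why this gap matters: the time to ship one full gradient copy is roughly latency plus size over bandwidth. All of the figures below (model size, link speeds, latencies) are illustrative assumptions, not measurements of any particular cluster.

```python
# Estimate the per-step cost of one full gradient exchange.
# All numbers are illustrative assumptions for a rough comparison.

def sync_time(grad_bytes, latency_s, bandwidth_bytes_per_s):
    """Time to ship one full gradient copy: latency + transfer time."""
    return latency_s + grad_bytes / bandwidth_bytes_per_s

GRAD_BYTES = 7e9 * 2  # assume ~7B parameters in fp16 -> ~14 GB of gradients

# Assumed figures: InfiniBand at ~200 Gb/s with ~5 microseconds of latency,
# vs. a fast home connection at ~100 Mb/s with ~50 ms of latency.
ib = sync_time(GRAD_BYTES, 5e-6, 200e9 / 8)
inet = sync_time(GRAD_BYTES, 50e-3, 100e6 / 8)

print(f"InfiniBand: {ib:.2f} s per sync")
print(f"Internet:   {inet:.0f} s per sync")
print(f"Slowdown:   {inet / ib:.0f}x")
```

Under these assumptions the transfer term dominates and a single sync over a home connection takes on the order of minutes rather than a fraction of a second, which is why naive SETI@home-style training stalls on communication.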
GPU memory bandwidth is a limiting factor for how fast training can happen, so it’s much more efficient to train models on locally connected high memory GPUs.
Also, gradient updates from all nodes need to be combined at least every few training steps, and syncing all those gradient updates across the network takes a while.
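The combining step described above can be sketched in a few lines: in synchronous data-parallel training, each worker computes a gradient on its own data shard, the per-worker gradients are averaged (which is what an all-reduce does over InfiniBand in a real cluster), and every worker applies the same averaged update. The toy linear-regression problem and constants below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])
X = rng.normal(size=(1000, 2))
y = X @ true_w

shards = np.array_split(np.arange(len(X)), 4)  # data shards for 4 "workers"
w = np.zeros(2)
lr = 0.1

for step in range(100):
    # Each worker computes the gradient of its shard's MSE loss locally.
    grads = []
    for idx in shards:
        err = X[idx] @ w - y[idx]
        grads.append(2 * X[idx].T @ err / len(idx))
    # "All-reduce": average the per-worker gradients so every worker
    # sees the same update. Over the internet, this is the step that
    # pays the latency and bandwidth cost every iteration.
    g = np.mean(grads, axis=0)
    w -= lr * g

print(w)  # converges toward true_w
```

Because this averaging has to happen every step (or every few steps), the whole cluster runs at the speed of its slowest link, which is the crux of the SETI@home objection.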
Hi, kind of hijacking this conversation, but as Cloudflare is unfortunately routing the majority of websites I visit, I have to ask this:
Can you guarantee my Firefox browser will keep working on 'the open internet' now that Chrome is moving towards "Web Environment Integrity" and Safari towards "Private Access Tokens", and Cloudflare is supporting and implementing such technologies at scale?
I intend not to participate in these DRM APIs with my Firefox browser and would like to keep browsing the internet.
"will you be supporting WEI and PAT in your captcha/ddos protection services" is a VERY different question to "can you guarantee my Firefox browser will continue to work on the open web"
That usually happens when I fake my user agent to the most popular combination (Windows + Chrome). Once I go back to the default (Linux + Firefox), Cloudflare seems to allow it.
I have been following Framework from a distance, but the latest video from Linus (LTT) interacting with Framework's CEO really pushed me towards buying a Framework as my next laptop. Great to see a caring CEO and company.