Hacker News | TechTechTech's comments

Great work! How does it compare to migadu.com?


Thanks! I'd say the user experience is much better. Migadu is a great company, but sometimes using it can be a headache. That's really one of our main goals: making it as easy as possible.


I've been using Migadu for a while, and haven't experienced any headaches. Which features/services do you offer that Migadu doesn't which makes things easier?


1. SimpleTransfer - automatically migrates all your old mailbox data / custom folders / custom tags for you

2. Extremely easy-to-use dashboard and overview page with insights on which domains/users are eating up your data

3. Docs. We've gotten a lot of love for our docs. It'd be easier just to show you: https://mymangomail.com/docs/


Thanks, I appreciate your answer. The docs are very well written and useful.


All fun and games, till some day Microsoft acquires Cloudflare and the internet effectively becomes one big corporate intranet.


If you want to learn more about Helion, I recommend you watch this excellent video by Real Engineering https://youtu.be/_bDXXWQxK38


Excellent production quality, but I would have appreciated a bit more skepticism on the part of the host of Real Engineering.

In particular, adding tritium to a fusing deuterium plasma does not make all of the fusion aneutronic. It just makes some percentage of the fusion events aneutronic. So you're "just" adding a fuel additive that "just" changes the emissions somewhat.

Plus, apparently D-T fusion tends to release gamma rays -- I'm sure that will also be tricky to either convert into useful energy or shield against.

I'm all in favor of trying, and I wish them luck, but it all smacked of a last minute publicity event to try to reassure investors that they were just one last upgrade away from producing power.

I would suggest the (longer, drier) fusion master class series by Dr. Matthew Moynihan:

https://youtu.be/BGu0cxrWWCA?si=lozD5-5MT1d-WEp9


For a skeptic's response I recommend https://www.youtube.com/watch?v=3vUPhsFoniw


This is sad.

I use Google Podcasts because of its simple UI. I am subscribed to a few creators and the clean UX "just works", also on Android Auto as I listen to most podcasts on the road.

I hope they at least integrate the podcast functionality well into YouTube Music without bloating the Android Auto functionality too much...


As someone who has run 10+ Hetzner servers for 3+ years now in both the Finland and Germany regions, I indeed get what I pay for.

Stellar performance, stable servers with specs as advertised, and very good pricing and connectivity.


At the moment of writing, the cost estimate for a 70B multimodal model trained on 7T tokens on 1000 H100 GPUs is $18,461,354, with 184 days of training time.

Anyone willing to share an estimate of how much the cost will come down each year as hardware keeps improving and new methodologies are found?

Personally, I would not be surprised if it is possible to train on the same dataset for half the cost 12 months from now.
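As a rough sanity check on the quoted figure, the total implies an hourly H100 rate of about $4.18. This back-of-envelope arithmetic is my own, not the estimator's actual model:

```python
# Back-of-envelope check of the quoted training cost.
gpus = 1000
days = 184
total_cost = 18_461_354  # USD, from the estimate above

gpu_hours = gpus * days * 24
implied_rate = total_cost / gpu_hours  # USD per H100-hour

print(f"{gpu_hours:,} GPU-hours")      # 4,416,000 GPU-hours
print(f"${implied_rate:.2f}/H100/hr")  # $4.18/H100/hr
```

That implied rate is useful for comparing against the cheaper reserved-cluster prices mentioned elsewhere in the thread.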


It will not get cheaper until Nvidia is disrupted on the software side. There is already plenty of hardware that can do this cheaper, starting, but not ending, with Google's TPU.


Correct.

It requires a breakthrough in software: new, efficient methods for training and fine-tuning these AI models. Currently there is no way around training the whole thing and burning millions in the process.

Until then, unless you are a big tech company that can eat the cost, it doesn't seem wise to burn your entire VC funding on expensive fine-tuning and inference costs as your AI model scales to millions of users.


I think this has a lot of potential https://www.modular.com/engine


At this point, I think there is sufficient motivation (dramatically high training costs) that we could see major algorithmic, architectural, and/or training methodology improvements at the code level that make these sorts of things possible on commodity hardware within a few years.

We're already starting to see that with a few projects, and I think once the scale tips such that it becomes practical to train something of GPT-4 quality for < $10k, the main focus of current research will shift to generating new models trained on commodity hardware.

My true hope is that the entire problem domain eventually ends up falling within the range of commodity hardware and FANG finds it can't really add any value (other than perhaps convenience) regardless of their superior compute resources, resulting in massive democratization of this technology.

That will of course open things up and make LLMs more accessible to bad actors, but this is ultimately much better than the likes of FANG / OpenAI / etc. being the sole gatekeepers of this tech. Just as Google has very little real motivation to fight click fraud (there have been rumors for years that it is responsible for a double-digit percentage of their revenue), these mega-corporations will have very little real motivation to stop "bad actors" from paying to use their APIs. The democratized situation is ultimately the less Orwellian one, since bad actors are going to use it either way.


You can train it at half the cost today if you use a LambdaLabs cluster at $1.89/H100/hr.

https://lambdalabs.com/service/gpu-cloud/reserved


Well, if you select "Trainium nodes", it's already "only" $11,085,287.


Are there big reasons the training can't be done SETI@home style? You could even pay people for the use of their graphics cards and run the training multiple times on different machines to make sure results weren't being gamed.


There is something like that; I think it's https://vast.ai/, and I'm pretty sure there is also a "community" one I've seen for generative AI, but I can't remember the name.


AI Horde


Yes that's what I was thinking of, thanks! https://aihorde.net


Is this inference, not training?


Yes it's inference only (and usually pretty slow at that).


Training still relies on a very low-latency connection between all the devices. When distributing training across multiple machines, most people use machines in close vicinity connected via InfiniBand to get the lowest possible latency.

Going from that to the dozens to hundreds of milliseconds of latency on the internet, or to hours if you do classical SETI@home, is a big step. There are people working on it, though.
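To get a feel for the gap, here is a toy per-step time model. The worker count, compute time, and latencies are illustrative assumptions, not benchmarks; the ring all-reduce round count is the standard ~2*(N-1) sequential hops:

```python
# Toy model: per-step time = compute + latency-bound all-reduce rounds.
# A ring all-reduce over N workers needs ~2*(N-1) sequential messages.
N = 100               # workers (assumed)
rounds = 2 * (N - 1)  # latency-bound hops in a ring all-reduce
compute = 0.5         # seconds of GPU work per step (assumed)

def step_time(latency_s):
    return compute + rounds * latency_s

infiniband = step_time(2e-6)  # ~2 microseconds inside a cluster
internet   = step_time(0.05)  # ~50 ms between volunteer machines

print(f"cluster:  {infiniband:.4f} s/step")   # 0.5004 s/step
print(f"internet: {internet:.2f} s/step")     # 10.40 s/step
print(f"slowdown: {internet / infiniband:.0f}x")  # 21x
```

Even with these generous assumptions, internet latency turns a compute-bound step into a communication-bound one.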


GPU memory bandwidth is a limiting factor for how fast training can happen, so it’s much more efficient to train models on locally connected high memory GPUs.

Also, gradient updates from all nodes need to be combined at least every few training steps, and it would take a while to sync all gradient updates across the network.


There's Petals[0], but the problem seems to be that the entire training data needs to be loaded into VRAM and can't be split up across devices.

[0] https://github.com/bigscience-workshop/petals


In 10 years you will be able to do it at home on a machine that costs less than $5k


Hi, kind of hijacking this conversation, but as Cloudflare is unfortunately routing the majority of websites I visit, I have to ask this:

Can you guarantee my Firefox browser will keep working on 'the open internet' now that Chrome is moving towards "Web Environment Integrity", Safari towards "Private Access Tokens", and Cloudflare is supporting and implementing such technologies at scale?

I intend not to participate in these DRM APIs with my Firefox browser and would like to keep browsing the internet.


How could Cloudflare guarantee websites don't implement WEI in their codebase? That makes no sense.


"If you’re a Cloudflare customer: You don’t have to do anything! Cloudflare will automatically ask for and utilize Private Access Tokens"

https://blog.cloudflare.com/eliminating-captchas-on-iphones-...


"will you be supporting WEI and PAT in your captcha/ddos protection services" is a VERY different question to "can you guarantee my Firefox browser will continue to work on the open web"


Heh, he posted the GP comment and went to bed. Good luck getting a response.


I haven't been able to visit a site with Cloudflare's bot protection for over a year because it goes into an infinite loop on Firefox.


That usually happens when I'm faking my user agent to look like the most popular one (Windows + Chrome). Once I go back to the default (Linux + Firefox), then Cloudflare seems to allow it.


What a truly ridiculous question to ask the CEO of Cloudflare.


Yes it is 𝕏 (U+1D54F, Mathematical Double-Struck Capital X)
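For anyone who wants to check this themselves, Python's standard unicodedata module reports the codepoint and name:

```python
import unicodedata

ch = "𝕏"
print(hex(ord(ch)))          # 0x1d54f
print(unicodedata.name(ch))  # MATHEMATICAL DOUBLE-STRUCK CAPITAL X
```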


Could be based on MetaHuman (Epic Games) as it is well integrated with Unreal Engine which is already in use for television production.

https://www.unrealengine.com/en-US/metahuman


I have been following Framework from a distance, but the latest video of Linus (LTT) interacting with Framework's CEO really pushed me towards buying a Framework as my next laptop. Great to see a caring CEO and company.

https://youtu.be/iU_iWa9LL_s

