I'm surprised that Discord isn't part of that ban, since it seems so much more like social media. One could argue that it's mostly small private and semi-private groups, but there are large servers with hundreds of thousands or even millions of users that are basically the same as Reddit in terms of content and users.


Almost all streamers have some third-party "tip/donation" system set up (usually StreamElements/StreamLabs via PayPal or Stripe, sometimes also triggering TTS effects on stream), so that's still possible.
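
To illustrate the "via PayPal or Stripe" part: under the hood a tip page just creates a one-off payment with the processor. A minimal sketch of what the Stripe side could look like, using the stripe Python library; the key, amount, and URL are placeholders, and this is not how StreamElements/StreamLabs actually implement it:

    import stripe

    # Placeholder test key; a real setup would load this from config.
    stripe.api_key = "sk_test_..."

    # Create a one-off Checkout session for a $5 tip.
    session = stripe.checkout.Session.create(
        mode="payment",
        line_items=[{
            "price_data": {
                "currency": "usd",
                "product_data": {"name": "Stream tip"},
                "unit_amount": 500,  # amount in cents
            },
            "quantity": 1,
        }],
        success_url="https://example.com/thanks",
    )

    print(session.url)  # redirect the donor to this hosted payment page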


>streamelements/streamlabs

I tried donating via one of these a while ago and got stopped by a requirement to link a Twitch account. Could you give an example of a donation page on there without that requirement?


I'm pretty sure Twitch/YT handles the payment processing; StreamElements/StreamLabs only provides things like the TTS stuff.


They run their own donations platform as far as I can tell; you can add various payment processors: https://streamlabs.com/donations


Really? After teaching/mentoring new devs and interns for the last two years at my job, I definitely think there's plenty of room for improvement over git in version control systems. Large files and large repos are one thing, but the bigger opportunity is user friendliness and accessibility, where even existing systems like Mercurial do a much nicer job in many ways.


The model looks incredible!

Regarding this part:

> Since flux-dev-raw is a guidance distilled model, we devise a custom loss to finetune the model directly on a classifier-free guided distribution.

Could you go into more detail on the specific loss used for this, plus any other finetuning tips you might have? I remember the open-source AI art community had a hard time finetuning the original distilled flux-dev, so I'm very curious about that.
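
For context on the terms: classifier-free guidance normally runs the model twice per step (conditional and unconditional) and combines the outputs as v_uncond + w * (v_cond - v_uncond); a guidance-distilled model like flux-dev bakes that combination into a single conditional pass. The post doesn't spell out the custom loss, but a typical guidance-distillation objective looks roughly like this sketch (the function names, the velocity parameterization, and the guidance scale w are my assumptions, not details from the post):

    import torch
    import torch.nn.functional as F

    def cfg_distill_loss(student, teacher, x_t, t, cond, null_cond, w=3.5):
        # Run the teacher twice: once with the text condition, once with
        # the empty/null condition (standard classifier-free guidance).
        with torch.no_grad():
            v_cond = teacher(x_t, t, cond)
            v_uncond = teacher(x_t, t, null_cond)
            # CFG-guided target the student should match in a single pass.
            v_target = v_uncond + w * (v_cond - v_uncond)
        # The distilled student predicts the guided output directly.
        v_pred = student(x_t, t, cond)
        return F.mse_loss(v_pred, v_target)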


What about ones trained on fully licensed art, like Adobe Firefly (based on their own stock library) or F-Lite by Freepik & Fal (also claimed to be copyright safe)?

