mikelemmon's comments | Hacker News

This post might as well be called “Check out what happens when you use a wrench to put in a screw”.

It’s just the wrong tool for the job.


How is this not the right tool? Where on the DALL-E 2 website does it say that it should not be used for artistic graphs?

Yes, we have all seen badly generated graphs from DALL-E 2 before, so it feels like this is an obvious limitation of AI image generation tools. But why should this be such an obvious thing to absolutely everyone?


Doesn't change the fact that it's a bad experiment. Anyone familiar with DALL-E would predict this result. He should have taken some time to understand what DALL-E can do and proposed an actually interesting experiment.


I don’t think we should limit experiments only to those areas where DALLE-2 succeeds. Seeing failures is also valuable.


Maybe it's sarcasm, but the author seems to downplay these images. Are we really that jaded? This seems like criticizing a dog that learned to speak only basic English... the fact that any of this is even possible is stunning.


Could not agree more. We keep moving the goalposts for what constitutes intelligence; perhaps that is a net good, but I can't help feeling that we are taking incredible progress for granted. Both DALL-E and large language models (GPT-3, PaLM) demonstrate abilities that the majority of people doubted would ever be possible for a computer 20 years ago.

Imagine telling someone in 2002 that they can write a description of a dog on their computer, send it over the internet and a server on the other side will return a novel, generated, near-perfect picture of that dog! Oh - sorry, one caveat, the legs might be a little long!


True, if you had asked a nerd. If you had asked a person on the street, they would have answered, “Why, of course. Can’t computers already do more complicated stuff today?”


I agree. I could not be more blown away if a real UFO landed and aliens introduced themselves. A week ago all of this was impossible.


This isn’t true. The quality of the images generated by DALL-E is really good, but they are an incremental improvement built on a long chain of prior work. See e.g. https://github.com/CompVis/latent-diffusion


Also Make-A-Scene, which in some ways is noticeably better than DALL-E 2 (faces, editing & control of layout through semantic segmentation conditioning): https://arxiv.org/abs/2203.13131#facebook


To people paying close attention, these results are unsurprising. The author is simply accurately assessing the novelty of the results.

edit: fixed typo


Just about all of these AI achievements are really impressive in the talking-dog sense, as you say.

The thing is that current AI has been producing more and more things that stay at the talking-dog level (as well as other definitely useful things, yes). So the impressiveness and the limits are both worth mentioning.


There are a huge number of naysayers in tech for some reason. Maybe it's just a common human defense mechanism.

Reminds me of the notorious Dropbox comment, or even the iPod one on Slashdot.

I agree these pictures are amazing and any nitpicking is distracting from the actual impressiveness.


I don’t think people are jaded. I just think it’s going to take a while for these types of generated images to pass the Turing test, and that’s why it doesn’t feel that impressive yet. It’s very clear they’re made by an AI. Which isn’t bad. It’s just obvious in a way that isn’t fooling anyone. There’s a very specific way that AI generates things like hands, other limbs, eyes, facial features etc. Whoever figures out how to fix that, and then fix the “fractal” like style that’s a hallmark of AI generated images will win the AI race. Maybe it’ll be openai. Maybe it’ll be someone else.


The images already look better than what something like 99.9% of humans would be able to produce, and it produces them orders of magnitude faster than any human could ever hope to, even when equipped with Photoshop, 3D software and Google image search.

The only real problem with them is that they are conceptually simple. It's always just a subject in the center, a setting and some modifiers. It doesn't produce complex Renaissance paintings with dozens of characters, or a graphic novel telling a story. But that seems more an issue of the short text descriptions it gets than a fundamental limit of the AI.

As for the AI-typical image artifacts, I don't see those as an issue. Think of it like upscaling an image: if you upscale it too much, you'll see the pixels. In this case you see the AI struggling to produce the necessary level of detail. Scale the image down a bit and all the artifacts go away. It's really no different than taking a real painting done by a human and getting close enough to see the brush strokes. The illusion will fall apart just the same.
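(To illustrate the "scale it down" point concretely, here's a minimal Python sketch using Pillow; the filenames and the 50% factor are just placeholders, not anything from the article.)

    from PIL import Image

    # Open an AI-generated image and shrink it to half its size.
    # Viewed at the lower resolution, small rendering artifacts
    # (odd fingers, smeared textures, etc.) are much harder to spot.
    img = Image.open("dalle_output.png")      # placeholder filename
    w, h = img.size
    small = img.resize((w // 2, h // 2), Image.LANCZOS)
    small.save("dalle_output_half.png")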


You could upload these to DeviantArt and nobody would notice.


This. Sure, if you see 10 of them in a row you notice it. But if you choose the best-looking one, I bet no one will notice.


Given this is a LessWrong article, my default assumption would be "author is being matter-of-fact while talking about a subject they're ridiculously excited / concerned about".


"The playing cards aren't right."

<side eye>


Oddly, the headline image of this article does not appear to be from an induction stove. Induction stoves do not glow red at all.


They don’t glow from heat, but they can glow by design: for safety reasons, some have lights that simulate glowing from heat. I guess the choice of image is a little curious, given that the primary distinction between induction and traditional stoves is, typically, the absence of external heat.


At risk of sounding dangerously stupid, what exactly is the safety concern? Given that induction is cool to the touch, it's not obvious to me how you'd hurt yourself (which is doubtless another major advantage over gas).


They're not cold to the touch once they've been used: the induction process heats the pan, but heat from the pan will transfer back to the glass. If you use an induction cooker for 20 minutes, and then immediately place your hand on the glass, your hand will be burned.


> Induction stoves do not glow red at all.

Which is great for safety. Yes, it gets hot if a pot has been boiling on it for a while, but it's a far cry from regular resistive ceramic tops.

Just bringing some water to the boil is not enough to make it more than uncomfortably hot. Try touching a resistive heater top after doing that...


The article mentioned that some induction stoves come with lights attached to emulate the feeling of fire…


My (Samsung) induction-capable cooktop has resistive heating as a "backup" for cookware that is not induction-compatible. Theoretically this can be disabled using an app, although the app has never worked.


The Planets is for fans of ambient/electronic post-classical: https://neue.bandcamp.com/album/the-planets

Releasing an album was a creative bucket list item for me. I released my first in 2010 as an accompanying soundtrack to an online digital art project. Then another, and another...enjoy.


I am really liking the track "Aftermath" off your latest album. It's really, really good. Gives me some nice Tron and Blade Runner vibes.


Yes, it's great!


Perhaps a better programming analogy is a language such as Logo, or the G-code used in CNC machines, which is used more to provide instructions to build something than to perform computation and logical operations.

http://en.wikipedia.org/wiki/Logo_(programming_language)
http://en.wikipedia.org/wiki/G-code
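
For a rough sense of that "instructions, not computation" style, here is a minimal sketch using Python's standard-library turtle module (which is itself modelled on Logo); the square is just an arbitrary example:

    import turtle

    # Each line is an instruction telling the turtle what to do next,
    # much like Logo commands or G-code moves on a CNC machine.
    t = turtle.Turtle()
    for _ in range(4):    # trace out a square
        t.forward(100)    # move 100 units forward
        t.right(90)       # turn 90 degrees clockwise
    turtle.done()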

