sghiassy's comments | Hacker News

I prototyped this exact thing by parsing the DOM, building corresponding UIKit elements and styling them via CSS.

My output looked exactly like an embedded WebKit UIView, though. So then the problem became: what was I making that was appreciably better?
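A minimal sketch of the approach described above (walking the DOM and building corresponding native view objects styled from inline CSS). It uses Python's `html.parser` with a stand-in `View` class instead of real UIKit types; `TAG_MAP` and the class names are illustrative assumptions, not the commenter's actual implementation, and text nodes are omitted:

```python
from html.parser import HTMLParser

# Stand-in for a native view. A real prototype would instantiate
# UIKit objects (UIView, UILabel, UIImageView, ...) here instead.
class View:
    def __init__(self, kind, style=None):
        self.kind = kind            # name of the native widget
        self.style = style or {}    # parsed inline-CSS declarations
        self.children = []

# Illustrative tag -> native-widget mapping (an assumption, not the
# commenter's actual table).
TAG_MAP = {"div": "UIView", "p": "UILabel", "img": "UIImageView"}

class DOMToViews(HTMLParser):
    """Builds a View tree mirroring the element tree it parses."""
    def __init__(self):
        super().__init__()
        self.root = View("UIView")
        self.stack = [self.root]    # current ancestor chain

    def handle_starttag(self, tag, attrs):
        kind = TAG_MAP.get(tag, "UIView")
        # Parse the inline style attribute into a dict, e.g.
        # "color: red" -> {"color": "red"}.
        style = dict(attrs).get("style", "")
        pairs = (p.split(":", 1) for p in style.split(";") if ":" in p)
        view = View(kind, {k.strip(): v.strip() for k, v in pairs})
        self.stack[-1].children.append(view)
        self.stack.append(view)

    def handle_endtag(self, tag):
        if len(self.stack) > 1:
            self.stack.pop()

parser = DOMToViews()
parser.feed('<div><p style="color: red">hi</p></div>')
```

The point the comment makes still holds: the resulting view tree reproduces the DOM's structure, so the output tends to look just like the web page it was derived from.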


That was exactly my question. It’s not like HTML + CSS / DOM is actually a good model for this domain. It’s just what we’ve been stuck with.


I think the author's point is that designers are choosing the wrong tool.

In my experience in FAANG, designers use Figma for everything. Like literally everything.

So when you say they should use a sketch tool when they need free-form work, they don't.



Sorry, but I'm not going to pay 56 dollars to read a 17-year-old paper with 18 citations.


Love the app; will use.

Scared of MAGA targeting brown people with this type of social enforcement


What if your primary way of learning isn’t reading? Are librarians still as necessary?


You’ve just ruined every growth Product manager’s dream


Does a football coach need to be in the lineup in order to be great?


Could you add more context? I’m unfamiliar with Grok. Is Elon being a hypocrite?


Grok has absolutely no safety mechanisms in place; you can use it for anything and it will not block a query, all under the pretext of "free speech". And it's not open source either.


>it will not block a query

This is good though?

>And it's not open source either

Wonder why Grok-1 is open source but not Grok-2. Maybe it will be when Grok-3 releases?


> > it will not block a query

> This is good though?

The opinion spectrum on AI seems to currently be [more safe] <---> [more willing to attempt to comply with any user query].

Consider as a hypothetical the query "synthesise a completely custom RNA sequence for a virus that makes ${specific minority} go blind".

A *competent* AI that complies with this, isn't safe to give to open source to the general public, because someone will use it like this — even putting it behind an API without giving out model weights is a huge risk, because people keep finding ways to get past filters.

An *incompetent* model might be safe just because it won't produce an effective RNA sequence — based on what they're like with code, I think current models probably aren't sufficiently competent for that, but I don't know how far away "sufficiently competent" is.

So: safe *or* open — can't have both, at least not forever.

(Annoyingly, there's also a strong argument that quite a bit of the recent work on making AI interpretable and investigating safety issues required open models, so we may be unable to have safe closed systems as well as being unable to have safe open systems.)


Safetyism is bad, but Grok definitely has safeguards and will block certain queries.


Why are you pro-censorship?


I think that’s already been achieved https://youtu.be/o1sN1lB76EA?si=4nMZryeOeoVrZewW


46.05% in the last 6 months
69.10% in the last year
235.05% in the last 5 years

Just the facts
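For context, the 5-year figure above can be annualized. A back-of-envelope calculation, assuming 235.05% is a cumulative total return (the standard compound-annual-growth-rate conversion, not anything stated in the comment):

```python
# Convert a cumulative 5-year return into an equivalent annual rate
# (CAGR). The 235.05% figure is taken from the comment above.
cumulative = 2.3505          # 235.05% expressed as a fraction
years = 5
cagr = (1 + cumulative) ** (1 / years) - 1
print(f"{cagr:.2%}")         # roughly 27% per year
```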

