GuB-42's comments | Hacker News

And what's wrong with not wanting to write functions yourself? It is a perfectly reasonable thing, and in some cases (ex: crypto), rolling your own is strongly discouraged. That's the reason libraries exist: you don't want to implement your own associative array every time your work needs one, do you?

As for plagiarism, it is not something to even consider when writing code, unless your code is an art project. If someone else's code does the job better than yours, that's the code you should use; you are not trying to be original, you are trying to make a working product. There is the problem of intellectual property laws, but it is narrower than plagiarism. For instance, writing an open source drop-in replacement for some proprietary software is common practice; it is legal and often celebrated as long as it doesn't contain the original software's code. In art, it would be plagiarism.

Copyright laundering is a problem though, and AI is very resource intensive for a result that is sometimes of dubious quality. But that just shows that it is not a good enough "plagiarism machine", not that using a "plagiarism machine" is wrong.


If I use a package for crypto stuff, it will generally be listed as part of the project, in an include or similar, so you can see who actually wrote the code. If you get an LLM to create it, it will write some "new original code" for you, with no ability to tell you any of the names of people whose code went into that, and who did not give their consent for it to be mangled into the algorithm.

If I copy work from someone else, whether that be a paragraph of writing, a code block or art, and do not credit them, passing it off as my own creation, that's plagiarism. If the plagiarism machine can give proper attribution and context, it's not a plagiarism machine anymore, but given the incredibly lossy nature of LLMs, I don't foresee that happening. A search engine is different, as it provides attribution for the content it's giving you (ignoring the "AI summary" that is often included now). If you go to my website and copy code from me, you know where the code came from, because you got it from my website.


Why is "plagiarism" "bad"?

Modern society seems to assume any work by a person is due to that person alone, and credits that person only. But we know that is not the case. Any work by an author is the culmination of a series of contributions, perhaps not to the work directly, but often to the author, giving them the proper background and environment to do the work. The author is simply one that built upon the aggregate knowledge in the world and added a small bit of their own ideas.

I think it is bad taste to pass another's work as your own, and I believe people should be economically compensated for creating art and generating ideas, but I do not believe people are entitled to claim any "ownership" of ideas. IMHO, it is grossly egoistic.


Sure, you can't claim ownership of ideas, but if you verbatim repeat other people's content as if it is your own, and are unable to attribute it to its original creator, is that not a bit shitty? That's what LLMs are doing

If a human learns to code by reading other people's code, and then writes their own new code, should they have to attribute all the code they ever read?

Plagiarism is a concept from academia because in academia you rise through the ranks by publishing papers and getting citations. Using someone else's work but not citing them breaks that system.

The real world doesn't work like that: your value to the world is how much you improve it. It would not help the world if everyone were forced to account for all the shoulders they have stood on like academics do. Rather, it's sufficient to merely attribute your most substantial influences and leave it at that.


If a human copies someone else's code verbatim, they should attribute the source, yes. If they learn from it and write original code, no, they don't have to cite every single piece of code they've ever read

Kind of like this: https://xkcd.com/1368/

And it is the kind of thing a (cautious) human would say.

For example, that could be my reasoning: it sounds like a stupid question, but the guy looked serious, so maybe there are some types of car washes that don't require you to bring your car. Maybe you hand over the keys and they pick up your car, wash it, and put it back in its parking spot while you are doing your groceries or something. I am going to say "most" just to be sure.

Of course, if I expected trick questions, I would have reacted accordingly, but LLMs are most likely trained to take everything at face value, as it is more useful this way. Usually, when people ask questions to LLMs they want a factual answer, not for the LLM to be witty. Furthermore, LLMs are known to hallucinate very convincingly, and hedged answers may be a way to counteract this.


Microsoft also gets millions of dollars from both hospitals, probably. There is a good chance the hospitals have computers running Windows and MS-Office. Microsoft also works closely with the Pentagon and whatever other "evil" organizations, selling Windows licenses, cloud services, etc...

Same idea here. Hospitals need some data analytics, which was probably done in Excel before but wasn't sufficient, so they turned to Palantir, because that's what they do.

I wish they turned to other solutions that would make better use of public money, and I also wish they didn't use Microsoft software.


I came to realize that logs don't matter. O(log(n)) and O(1) are effectively the same thing.

The reason is that real computers have memory, accessing memory takes time, and bigger n needs more memory, simply to store the bits that represent the number. Already, we have a factor of O(log(n)) here.

But also, each bit of memory takes physical space, and we live in a 3D world, so, best case, on average, the distance to a memory cell is proportional to the cube root of the number of memory cells we will have to access. And since the speed of light is finite, it is also a cube root in time.
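To spell that out (my own back-of-the-envelope framing, nothing rigorous), in LaTeX-ish notation:

    % N memory cells packed at bounded density in 3D space
    V \propto N                                        % volume needed for N cells
    r \propto V^{1/3} \propto N^{1/3}                  % average distance to a cell
    t \ge r / c \;\Rightarrow\; t = \Omega(N^{1/3})    % signal travel time at light speed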

So "constant time" is actually cube root on a real computer. And I am generous by saying "real". Real real computers have plenty of stuff going on inside of them, so in practice, complexity analysis at the O(log(n)) level is meaningless without considering the hardware details.

Going from log(n) to log^(2/3)(n) is just an exercise in mathematics; it may lead to practical applications, but by itself, it doesn't mean much for your software.


>logs don't matter. O(log(n)) and O(1) are effectively the same thing.

ever heard of a hashtable? that's because O(c) is better than O(log(N)). if they were the same, you would only have heard of binary search.


Hash tables are indeed O(1) in theory while binary search is O(log(N)) in theory, no problem with that.

But in practice, accessing a physical memory cell is not O(1), it is O(cuberoot(N)).

For a binary search, you need O(log(N)) operations, but each operation (accessing a memory cell) is O(cuberoot(N')) where N' is the number of elements up to the level of the tree you are at, ending at N for the last iteration. So the total is O(sum over i from 0 to log(N) of cuberoot(N/2^i)), which simplifies to O(cuberoot(N)), exactly the same!
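Spelled out (my notation, a sketch rather than a proof), the sum is dominated by a geometric series:

    \sum_{i=0}^{\log_2 N} \left(\frac{N}{2^i}\right)^{1/3}
      = N^{1/3} \sum_{i=0}^{\log_2 N} 2^{-i/3}
      \le \frac{N^{1/3}}{1 - 2^{-1/3}}
      = O(N^{1/3})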

In real practice, the choice of binary search vs hash table depends on considerations like cache behavior. Hybrid methods combining hash tables with hierarchical data structures are often used when N is large. But that's my point: you won't choose a hash table over a binary search because one is O(1) and the other is O(log(n)), you choose one over the other because of how it fits your hardware architecture.

Contrast with, say, O(n.log(n)) over O(n^2). When n is large, barring space-time tradeoffs, you should always pick the O(n.log(n)) algorithm, because the reduction in complexity will always beat any sane hardware considerations.


Please explain to me how you can hash n distinct strings into O(n) buckets in O(1) time. Please note that this process needs to work as n goes to infinity.

Hash tables are O(log n) structures when you don't hand-wave away the "compute a hash" part. The thing is, search trees are far worse than that in practice and you aren't hand-waving away the "compare two elements" part. That's where the real speed savings come from.


What I think you are saying is that computing the hash needs to process the entire string, and the length of that string roughly corresponds to log n, therefore it's O(log n). Not sure I am entirely convinced by that reasoning, but let's roll with it for now.

Because if you apply it to binary search, you need to compare the strings at every step, and by that logic, each of these operations is O(log n), which means your binary search is now O(log^2 n).

I guess the crux is that we are still comparing apples to oranges (or multiplication operations to comparison operations), and at the end what probably makes hashing faster is that we are not branching.

Still I don't think it makes sense to think of both hash tables and binary search as O(log n).


Most of this is very true, except for one caveat I'll point out: a space complexity of O(P(n)) for some function P implies at least an O(cuberoot(P(n))) time complexity, but many algorithms don't have high space complexity. If you have constant space complexity, this doesn't factor into time complexity at all. Some examples would be exponentiation by squaring, Miller-Rabin primality testing, Pollard's rho factorization, etc.
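To make the first example concrete, here is a rough sketch of modular exponentiation by squaring in C (my own illustration, not from the thread): the time is O(log(exponent)) multiplications, but the extra space is constant, so the cube-root-of-space factor never grows.

    #include <stdint.h>
    #include <stdio.h>

    /* Modular exponentiation by squaring: O(log exp) multiplications,
       O(1) extra space -- the working set never grows with the exponent. */
    static uint64_t pow_mod(uint64_t base, uint64_t exp, uint64_t mod) {
        uint64_t result = 1;
        uint64_t b = base % mod;
        while (exp > 0) {
            if (exp & 1)
                result = (__uint128_t)result * b % mod;  /* gcc/clang 128-bit type */
            b = (__uint128_t)b * b % mod;
            exp >>= 1;
        }
        return result;
    }

    int main(void) {
        /* ~30 squarings instead of a billion multiplications */
        printf("%llu\n", (unsigned long long)pow_mod(3, 1000000006ULL, 1000000007ULL));
        return 0;
    }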

Of course, if you include the log(n) bits required just to store n, then sure, you can factor in the log of the cube root of n in the time complexity, but that's just log(n) / 3, so the cube root doesn't matter here either.


> Already, we have a factor of O(log(n)) here.

Doesn’t that mean that O(log(n)) is really O(log²(n))?


You have to define what n is.

It’s clear from the parent comment that the number of bits needed to represent the input is meant here.

It can be some technical detail.

For example: imagine you have 2 windows, the lower right corner of one window almost touching the upper left corner of the other, so that the bounding rectangles overlap but the graphics don't.

With the inaccurate "false square" corners, you just had to check the bounding rectangles to know which window to resize; now you have to check the actual graphics (or more likely, a mask).
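Something like the difference between these two checks (a hypothetical sketch in C with made-up names, just to illustrate the idea):

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct { int x, y, w, h; } Rect;
    typedef struct {
        Rect bounds;                /* bounding rectangle, includes empty corners */
        const unsigned char *mask;  /* bounds.w * bounds.h opacity mask, may be NULL */
    } Window;

    /* Cheap test: is the point inside the bounding rectangle? */
    static bool hit_bounds(const Window *win, int px, int py) {
        return px >= win->bounds.x && px < win->bounds.x + win->bounds.w &&
               py >= win->bounds.y && py < win->bounds.y + win->bounds.h;
    }

    /* Accurate test: also check the per-pixel mask, so the transparent
       area outside a rounded corner does not count as a hit. */
    static bool hit_mask(const Window *win, int px, int py) {
        if (!hit_bounds(win, px, py))
            return false;
        if (win->mask == NULL)
            return true;  /* no mask: the whole rectangle is opaque */
        size_t i = (size_t)(py - win->bounds.y) * win->bounds.w + (px - win->bounds.x);
        return win->mask[i] != 0;
    }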

I am not saying it is the problem, but that's the kind of thing that can happen. Or it may be a simple bug, like a crash, memory corruption, an unhandled exception, the usual stuff, but they couldn't fix it in time and it is better to revert instead of leaving the buggy code or pushing an untested fix.


Just revert the code back to pre-26! This is ridiculous, it can't possibly be this hard and if it is, it just points to the degradation in the quality of Apple software! This is maddening!

This is already the pre-26 bounding box, isn't it? It's the new graphics that don't line up. (Not a great excuse, but the graphics are here to stay at least for a little while.)

> the graphics are here to stay at least for a little while

And that's the reason why I won't buy a new Mac.

Tahoe and Liquid Glass are so horrible that they're going to lose customers because of them. They should realize what they did and just backtrack: it wouldn't be the first time they admit they made a mistake [1].

[1] https://www.theverge.com/2020/5/4/21246223/macbook-keyboard-...


Remember how long it took for them to give up on that stupid touchbar and "butterfly" keyboard. Don't hold your breath.

That’s a hardware issue. They backtrack on software issues fairly quickly. Remember the discoveryd saga and the revert to mDNSResponder?

Still waiting on admission that the magic mouse was a mistake though

The Magic Mouse has been there, almost unchanged, since 2009. That is a lot for a tech product, and retiring a product after 16 years is not admitting to a mistake. For example, the Logitech G5 mouse and its direct evolutions were among the most successful Logitech products, and it didn't last that long.

No, it is not just that they refuse to admit the Magic Mouse was a mistake, they consider it the best ever. That USB port on the underside is still one of the great mysteries though, maybe it is some quirk of evolution, because it is certainly not intelligent design.


In addition to vertical scrolling, the Magic Mouse can do horizontal (or diagonal) scrolling, zooming in and out, and a couple of other tricks. This makes it worth it for the people who need this for their work. There are mice that can do horizontal or vertical scrolling -- but not both at the same time.

People who do their work on large documents (pics in Photoshop, videos, CAD, music, even Excel, etc.) use these capabilities every day, and they like their Magic mice very much. If you are not one of these people (software development, for example, can be done with vertical scroll only, for the most part), it doesn't mean it's a bad product -- all it means is that it's a product which is not for you.

I don't use Magic Mouse but am very far from expecting Apple to admit "the magic mouse was a mistake" though.


Pre-Tahoe windows didn't have these stupid round corners (which is the ACTUAL bug which should be fixed).

I am using Sequoia and the windows are definitely rounded! Though the radius is pretty small (the curved region is about a quarter of the mouse cursor area), so the fact you can drag it from outside the window doesn't look ridiculous.

> it can't possibly be this hard

Whenever I find myself saying this I remind myself it can in fact be this hard.


I would rather avoid having the government decide what I should run on my devices, private companies are already bad enough.

When Tesla debuted, the cost of batteries made electric cars more like an expensive novelty. The Tesla Roadster certainly was fun, but it wasn't a practical car for day-to-day use.

Of course, things have changed.

Had Tesla gone all-in on Lidar, they could have turned the technology into a commodity: they are a trillion-dollar company producing a million cars a year. Lidar is already present on cheap robot vacuum cleaners, and we have time-of-flight cameras in smartphones, so I don't believe it would have been a problem to equip $50k cars with Lidar.


On this topic, I think it is worth mentioning Framasoft [1]

It is a French organization that offers plenty of alternatives to Google and other big tech products. A lot of them are just rebranded and hosted open source software, but they also develop their own, such as PeerTube and Framaprout (the last one is a joke, but PeerTube isn't).

[1] https://framasoft.org/


Yup, I'm surprised this wasn't mentioned earlier, but they're the ones behind PeerTube (which I see posted on HN a lot) and many other tools. They've been building Google alternatives for over two decades now, and many of their tools are quite mature.

https://degooglisons-internet.org/en/


I hate to say it because it's cute but that website is not going to win over large companies to use these tools.

I don't think they can win over large companies, they are just a small nonprofit organization, and large companies want to work with other large companies.

Where they can make a difference is for fellow organizations and maybe small companies. A lot of them go to Google because that's the most convenient, even if it is sometimes against their principles; they are proposing an alternative.

One minor criticism I have is that while they are not hiding the fact that they are rebranding off-the-shelf free software, they could give them a bit more visibility, should users want to self host at some point.


The art looks a lot like Ankama's work: https://www.ankama.com

One of the tradeoffs graphics programmers make is about security. They typically work with raw pointers, using custom memory allocation strategies; memory safety comes after performance. There is not much in terms of sandboxing, bounds checking, etc... these things are costly in terms of performance, so they don't do them if they don't have to.
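For instance, a typical pattern is a per-frame "bump" arena (a hypothetical sketch in C, not from any particular engine): allocating is one pointer bump, freeing is resetting an offset, and nothing checks what callers later do with the returned memory.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint8_t *base;   /* backing memory for the arena */
        size_t   size;   /* total capacity in bytes */
        size_t   used;   /* bump offset */
    } Arena;

    /* Allocation is a single aligned pointer bump; there is no per-allocation
       bookkeeping and no guard against callers writing past their n bytes. */
    static void *arena_alloc(Arena *a, size_t n, size_t align) {
        size_t p = (a->used + (align - 1)) & ~(align - 1);  /* align the offset */
        if (p + n > a->size)
            return NULL;         /* arena exhausted */
        a->used = p + n;
        return a->base + p;
    }

    /* "Free" everything at once at the end of the frame. */
    static void arena_reset(Arena *a) {
        a->used = 0;
    }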

That's because performance is critical to games (where the graphics programmers usually are), and if the game crashes, no big deal as long as it doesn't happen so often as to seriously impact normal gameplay experience. Exploits are to be expected and sometimes kept deliberately if it leads to interesting gameplay, it is a staple of speedruns. Infinite money is fun in a game, but not in serious banking software...

I am all for performance, and I think the current situation is a shame, but there are tradeoffs, we need people who care about both performance and security, maybe embedded software developers who work on critical systems, but expect a 10x increase in costs.


An interesting example, because your body is literally something you have to hide. That is, it is illegal not to.

Personally, I hide it because that's what society is telling me, especially if children are around, and I have no real reason to go against that. I mean, who wants to see what I do in the bathroom? But should the government want to, I will gladly let them as it will nicely illustrate what I think of them.

There are many things I want to hide more than my body functions. It is a social taboo, not something that has to do with personal safety and security, which is what privacy advocates usually point to. Arguably, it is the opposite problem: something you have to hide, but for personal freedom, you shouldn't have to.


