jbri's comments | Hacker News

It's in the field of copyright rather than patents, but the Righthaven suits suggest some form of precedent in this area.

But that's kind of a sideshow - I believe the real issue being pointed at is if the defendant chooses to argue that a patent is invalid, and succeeds, then the only award they get is the now-invalidated patent.


> again, maybe I'm getting old, but to see the pixels on my current screen I have to get my nose almost right up to the display, which I'm never going to do

The point of going to higher and higher resolutions is that eventually you don't see the pixels. Pixels are an implementation detail; what you really want to see is the image they represent.


Exactly.

And for anyone who struggles to see the pixels on a typical desktop monitor, that's probably because a lot of tricks are used to disguise them. Try turning off anti-aliasing or sub-pixel font rendering and then tell me you can't see the pixels.


So the point of higher resolution is that you can't see the pixels, and to prove this point you tell people to turn off other (cheaper) technologies that already hide the pixels.

There seems to be some faulty logic here. What is the point of the higher resolution displays if the pixels are already hidden with other technologies?


The tricks aren't perfect--antialiasing is blurry for example. A properly high DPI monitor is a thing of beauty--crisp images and text.


Actually, in video/photography these tricks don't work. So BIG DEAL :)

It's like mixing music with $5 ear buds.


More pixels = sharper image. Try putting a printed high-DPI magazine next to your monitor and compare text and images.


Antialiasing typically produces artifacts and somewhat blurry letters.


Gee, thanks. All these years, and I've been looking at the pixels and not noticing the image! You've changed my life!

ahem...

Condescending explanations aside, you know that there's a limit to the human eye's ability to resolve detail, right? We can resolve up to about 150ppi at 2ft. The Apple Cinema Display is at 109ppi. There's room for improvement, but not 60% more...
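For anyone who wants to sanity-check that arithmetic, here's a rough Python sketch, assuming the textbook 20/20 figure of roughly one arcminute of acuity (real acuity varies a fair bit from person to person):

    import math

    def max_resolvable_ppi(distance_inches, acuity_arcmin=1.0):
        # PPI at which a single pixel subtends `acuity_arcmin` arcminutes
        # when viewed from `distance_inches` away.
        pixel_inches = distance_inches * math.tan(math.radians(acuity_arcmin / 60.0))
        return 1.0 / pixel_inches

    print(max_resolvable_ppi(24))  # ~143 ppi at 2 ft
    print(max_resolvable_ppi(12))  # ~286 ppi at 1 ft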


Yes, but when you're actually working on a hi-PPI display, you can then lean in to view more detail, rather than zooming in. Much like we inspect things in the real world.

As a photographer, this means a great deal. I can verify the sharpness of an image (a key component in deciding whether to keep it or chuck it) at a glance. Saves a lot of time.


Yeah, maybe...but there's still a practical limit. In order to see the pixels on a thunderbolt display (again, 109ppi) I have to get my face about 7 inches away from the screen.

Closer than about 5 inches, and I lose the ability to focus because the screen is too close -- so there's a band of about 2 inches where I can gain from a higher pixel density than 109ppi, without losing due to eyestrain. And in any case, I'm not going to spend much time in that zone. It's hard to work with your nose in the screen.

YMMV, but I think I'm fairly typical. Most people dramatically overestimate the precision of their eyes.


You can see image degradation from pixelation long before you can make out individual pixels. I can't really make out individual pixels on my MBA (130 ppi) at one foot, but looking at a MBP Retina at the same distance looks dramatically better. On the MBA, the fuzziness from the heavy anti-aliasing used to hide the pixelation is quite apparent, but on the MBP Retina pixels look like sharp-edged solid shapes.


I don't know whether it's due solely to the resolution, but I was pretty shocked to realise I can tell the difference between 300 and 600 dpi photographic prints (assuming there's enough detail in the image to do so; you need to print a 23 MP DSLR shot with high detail at a 7x10" print size to get there). I have had other photographers tell me that they don't see any benefit to retina screens at all... you're giving me the reason why here. :)


109ppi is good enough, but there is a significant difference. It's true that beyond a certain point it doesn't make a difference (1080p phones, I'm looking at you!) but 109ppi is not that point, for most people. Retina web content and applications look decisively better.


Of course, the reason that language was introduced into the standard was primarily to mitigate this sort of attack.


The author mentions that he only evaluated drives that have some form of power loss protection - doing some quick searching around, I couldn't find any Samsung drives in the given price range that claimed to have that.

Did I miss an appropriate Samsung drive in my quick searches? Or is there reason to suspect that a Samsung drive that doesn't claim to have power loss protection would nonetheless handle this case better than the non-Intel drives that did make that claim? Because if not, then I don't think not evaluating Samsung drives compromises the results in any way.


The Samsung SM843T has power loss protection, among other things, and is priced really competitively to the Intel S3500.


Given that these techniques seem to be well-known, it would be interesting to see a JPEG compressor which uses what we know of human vision to minimize filesize while keeping the image visually the same.

JPEGmini seems to go partway with selective compression, but it seems like there's room to improve the actual compression algorithm to do things like gaussian smoothing instead of just truncating the high-frequency components to reduce filesize.


Well the whole point of a compression scheme like JPEG is to use what we know about human vision to minimize file size while not visually changing anything.

But JPEG is a pretty crude algorithm designed mainly to be computationally cheap [to match the capabilities of computer hardware in 1992 when it was created]. Trying to make better encoders within the limits of what JPEG decoders can handle is quite limiting, and yields only marginal improvements.

It’s possible to do much better if we’re willing to accept more CPU time spent in encoding/decoding using fancier algorithms. Unfortunately it’s really hard to get traction for anything else, because the whole world already has JPEG decoders built into everything. For instance JPEG 2000, designed to be a general purpose replacement with many improvements over JPEG, is now 13 years old but used only in niche applications, such as archiving very high resolution images.

But for use cases where you control the full stack, better compression is quite viable. Video game engines for instance devote considerable attention to image compression formats.


I'm trying to write a better JPEG compressor: https://github.com/pornel/jpeg-compressor

IMHO the JPEG algorithm isn't crude. It's actually quite brilliant—simple and very closely tied to human perception. The core concept of DCT quantization hasn't been beaten yet—even the latest video codecs use it, just with tweaks on top like block prediction and better entropy coding.

Wavelet compressors like JPEG 2000 beat JPEG only in the lowest quality range, where JPEG doesn't even try to compete. Wavelets seem great because their blurring gives them high PSNR, but the lack of texture and softened edges make them lose in human judgement.
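To make the core concept concrete, here's a minimal sketch of DCT quantization on one 8x8 block (not the real JPEG pipeline -- no level shift, no perceptually weighted quantization table, no zigzag ordering or entropy coding):

    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(block):
        return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

    def idct2(coeffs):
        return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

    # A smooth 8x8 gradient block with values 0..112; a flat quantization
    # step stands in for JPEG's frequency-dependent table.
    x = np.arange(8, dtype=float)
    block = np.add.outer(x, x) * 8
    q = 16.0

    coeffs = np.round(dct2(block) / q)    # quantize: most high frequencies collapse to 0
    restored = idct2(coeffs * q)          # dequantize and invert

    print(np.count_nonzero(coeffs), "of 64 coefficients survive")
    print(np.abs(restored - block).max())  # modest error relative to the 0..112 range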


JPEG isn't crude, agreed. But your comments seem off to me.

The core trick isn't DCT per se, it is transform coding, which both DCT and wavelets are, followed by (usually) quantization and then entropy coding.

The wavelets typically used are orthonormal transforms, so there's no loss there. The "lack of texture and softened edges" are a choice of the model used, not a consequence of the transform. The same is true of DCT and blocking artifacts. This should be obvious, since both approaches allow for lossless encoders.

Typically wavelets will beat or match JPEG in any situation, both in terms of PSNR and the like, and perceptually (though the latter is much more controversial and poorly defined -- and to be fair, I am not up to date on the literature here, but I would be surprised if that has changed in the last decade).

The real reason for the huge popularity of DCT, in still-image compression at first and in video codecs later, is that it is cheap to implement in hardware. And once it's there, it becomes very cheap to use.

JPEG 2000 is needlessly complex; at its core, a wavelet codec is also very simple and elegant.
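To illustrate the "no loss in the transform itself" point, here's a toy single-level 1D Haar transform (the simplest orthonormal wavelet; real codecs use longer filters, 2D separable transforms and several decomposition levels). Any loss comes from quantizing the coefficients afterwards, not from the transform:

    import numpy as np

    def haar_1d(x):
        # One level of the orthonormal Haar transform: pairwise averages and details.
        x = np.asarray(x, dtype=float)
        avg = (x[0::2] + x[1::2]) / np.sqrt(2)
        diff = (x[0::2] - x[1::2]) / np.sqrt(2)
        return avg, diff

    def inverse_haar_1d(avg, diff):
        out = np.empty(avg.size * 2)
        out[0::2] = (avg + diff) / np.sqrt(2)
        out[1::2] = (avg - diff) / np.sqrt(2)
        return out

    signal = np.array([10, 12, 14, 13, 50, 52, 51, 49], dtype=float)
    avg, diff = haar_1d(signal)
    print(np.allclose(inverse_haar_1d(avg, diff), signal))  # True: perfect reconstruction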


I have tried your JPEG compressor. While it does make red less dull at 2x2 chroma subsampling, it does also blur red (or orange) into (white) backgrounds. IMO the sum is negative, at least with the particular image I tried.

I'm particularly interested in this because lossy WebP only supports 2x2 chroma subsampling. So any optimisation for JPEG could maybe also be applied to WebP.

I have made a topic about this once on the WebP group, which also has the image I tried with your compressor.

https://groups.google.com/a/webmproject.org/forum/#!topic/we...


Thanks for trying it out.

With chroma subsampling there's a tradeoff - either bleed black into color areas or bleed color outside. If you have any ideas or know of research on choosing which option is best when, I'm all ears.
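For concreteness, here's a toy illustration of the tradeoff on a 1D slice of a Cr channel across a hard red-to-white edge (2x1 averaging stands in for the full 2x2 case; the exact values are made up):

    import numpy as np

    # 240 = strong red chroma, 128 = neutral chroma (white background).
    cr = np.array([240, 240, 240, 240, 240, 128, 128, 128], dtype=float)

    subsampled = cr.reshape(-1, 2).mean(axis=1)   # encoder: average each 2-pixel pair
    upsampled = np.repeat(subsampled, 2)          # decoder: nearest-neighbour upsample

    print(upsampled)
    # [240. 240. 240. 240. 184. 184. 128. 128.]
    # The pair straddling the edge becomes 184: the red pixel is dulled and the
    # white pixel picks up a red tint. Shifting or reweighting the window only
    # changes which side takes the hit.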


> Unfortunately it’s really hard to get traction for anything else, because the whole world already has JPEG decoders built into everything.

Patents don't help either.


> uses what we know of human vision to minimize filesize while keeping the image visually the same.

This is what JPEGmini is doing: a perceptual compression metric.

> gaussian smoothing instead of just truncating the high-frequency components

Gaussian smoothing does remove high-frequency components.
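A quick way to see this on a toy 1D signal (scipy's gaussian_filter1d plus an FFT):

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    # One low-frequency and one high-frequency sine.
    t = np.arange(256)
    signal = np.sin(2 * np.pi * t / 64) + np.sin(2 * np.pi * t / 4)

    blurred = gaussian_filter1d(signal, sigma=2)

    before = np.abs(np.fft.rfft(signal))
    after = np.abs(np.fft.rfft(blurred))

    print(before[4], after[4])    # period-64 component: barely attenuated
    print(before[64], after[64])  # period-4 component: almost entirely gone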


If you want to build new crypto, people would be absolutely happy for you to design and present a new cryptographic scheme - hopefully with some advantages over existing ones. People will analyze it, tell you where the weaknesses are, and if they're not critical flaws you can fix them and come out stronger - and if they are critical flaws, you know your scheme is broken before it's been used for anything important.

Likewise, building a hot new secure messaging app with existing well-analyzed, battle-tested cryptographic schemes is generally going to be welcomed.

If you try to do both at once, you're building your application on shaky, untested cryptographic foundations. Cryptographers would similarly probably warn you not to base your application on a new cipher someone else announced at a cryptography conference last week - give it a bit of time for others to analyze it and spot any flaws it might have before you entrust anything sensitive to it.


If people want to support the TrueCrypt developers, there's a "Donate" button on the TrueCrypt website. People have instead chosen to give money to an initiative to have a thorough independent audit of the TrueCrypt code - it seems to me like it would be morally wrong to take their money and give it to the TrueCrypt developers instead. Presumably, if their preference was for that money to go to the TrueCrypt developers instead, they'd have skipped the middleman and donated directly.

Perhaps someone should organize a similar initiative for the Linux source code?


On the other hand, if you allow outside communication (such as unrestricted access to the internet), you open the door for "I'll pay someone who already knows this stuff to phone in the answers for me" - which isn't generally an applicable skill in the real world.

As an aside, I hated open book exams - if you were talented and knew the material, you could typically blast through a closed-book examination in half the allotted time and get out of there, while the open-book exams were far longer and more tedious.


>"I'll pay someone who already knows this stuff to phone in the answers for me" - which isn't generally an applicable skill in the real world.

I would argue that this is also a valuable skill: knowing who to hire, assessing the person's abilities, figuring out if they can actually get the job done in the time allotted. Sure, it's not at the same level as the decisions a hiring manager at a tech company would have to make in the real world, but then again the normal CS test questions aren't at that level either.


It shouldn't matter whether these specific charges are made-up or not. At this point they're unproven allegations, and it's up to a court to decide whether they're valid or not.

It's important to realize that anytime you give the police power over people still facing trial, you give them that same power over anyone they can fabricate spurious charges against. That's why it's important to have things like warrants and pre-trial court orders, so that there's some accountability when the police detain someone over unproven allegations.


Yes. And there has been. He was extradited by a Swedish court. And he was put in solitary by a Danish court before his trial begins. Or is that not close enough to warrants/pre-trial court orders?


I'll confess I'm not intimately familiar with the details of this case, and I'm speaking in generalities.

But still - it doesn't matter whether the charges are invented or not, or even what those charges are. What matters is that a judge (not the police) ruled that he should be held until his trial.


It is extremely wrong that a system trying to impose justice puts its suspects under psychological abuse before trial, making them hate society and therefore increasing the possibility that they actually become criminals.


In theory? Quantum and thermal effects inside an unstable configuration of transistors.

The simple example is a basic SR latch (two NOR gates, where the output of one gate feeds one of the inputs of the other, and vice versa), where you start things off by applying a signal to both S and R. When you remove the signals, the latch will eventually fall back into one of the two stable states - but which state it ends up in is random.

So you can easily produce a stream of bits from a potentially-biased random source, and then do some deterministic massaging of that stream to produce an unbiased stream of random bits.
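The classic way to do that massaging is von Neumann debiasing. A minimal sketch, assuming the raw bits are independent (which real hardware doesn't always guarantee -- actual RNGs usually run the raw bits through fancier conditioning):

    import random

    def von_neumann_debias(biased_bits):
        # Look at non-overlapping pairs: emit 0 for (0,1), 1 for (1,0),
        # and discard (0,0) and (1,1). P(0,1) == P(1,0) for independent
        # bits, so the output is unbiased regardless of the input bias.
        it = iter(biased_bits)
        for a, b in zip(it, it):
            if a != b:
                yield a

    # Simulated biased source: 80% ones, standing in for a lopsided latch.
    biased = (1 if random.random() < 0.8 else 0 for _ in range(100_000))
    out = list(von_neumann_debias(biased))
    print(sum(out) / len(out))  # close to 0.5 despite the 80/20 input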

