
What happens if you really run out of physical memory (including swap) after overcommitting? Does your process get a signal, or will OOM killer just run without notifying the process that triggered the condition?


If you try to allocate more than the system is willing to overcommit (e.g. a huge block all at once), malloc will fail. But if physical memory gets exhausted by accessing previously allocated pages, the OOM killer will eventually come around and kill processes without signalling. Signal handlers could still make the process (unknowingly) request more memory, so there is no guarantee that a handler could even run successfully.
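A minimal sketch of the two failure modes, assuming a Linux box with default overcommit settings (the sizes are arbitrary; the exact behaviour depends on /proc/sys/vm/overcommit_memory and on how much RAM and swap you have):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(void) {
      /* Asking for an absurd amount up front: malloc itself can refuse. */
      void *huge = malloc((size_t)1 << 46);            /* 64 TiB */
      if (huge == NULL)
          puts("malloc refused the reservation");

      /* A "plausible" overcommitted allocation usually succeeds... */
      size_t n = (size_t)8 << 30;                      /* 8 GiB */
      char *p = malloc(n);
      if (p == NULL)
          return 1;

      /* ...but touching every page forces real memory to be found.
         If the system runs out here, there is no error return: the
         OOM killer simply SIGKILLs someone, possibly this process. */
      memset(p, 1, n);
      puts("survived");
      return 0;
  }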


Oh yeah, I didn't think of that. I wonder if you could write a signal handler carefully to not allocate any memory, stack or otherwise, or is some return address or an internal structure being allocated transparently...


Just trying to allocate stack space for the signal handler may cause the stack to spill into a new page. That is absolutely out of anybody's control. And if that new page cannot be provided, it's game over.
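One mitigation is sigaltstack: hand the kernel a pre-allocated, pre-faulted stack so that signal delivery itself never has to demand a new page. A rough sketch (error handling omitted; the handler still has to stick to async-signal-safe, allocation-free calls):

  #include <signal.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  static char altstack[64 * 1024];

  static void handler(int sig) {
      (void)sig;
      /* write() is async-signal-safe and allocates nothing. */
      write(STDERR_FILENO, "caught\n", 7);
  }

  int main(void) {
      /* Touch and lock the pages now, so delivery never faults in a new one. */
      memset(altstack, 0, sizeof altstack);
      mlock(altstack, sizeof altstack);

      stack_t ss;
      memset(&ss, 0, sizeof ss);
      ss.ss_sp = altstack;
      ss.ss_size = sizeof altstack;
      sigaltstack(&ss, NULL);

      struct sigaction sa;
      memset(&sa, 0, sizeof sa);
      sa.sa_handler = handler;
      sa.sa_flags = SA_ONSTACK;
      sigaction(SIGTERM, &sa, NULL);

      pause();           /* try: kill -TERM <pid> */
      return 0;
  }

Even then, as noted above, nothing stops the OOM killer from SIGKILLing the process outright, which no handler can catch.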


Makes sense. I was thinking that maybe signal handlers could use the regular stack of the thread, but that would of course make everything fall down if the "real" code wrote to the stack before updating the stack pointer.


My dad used a 3D printer with a Dremel attached to carve PCBs. The PCBs were those that you normally etch, i.e. they had a full copper coating.


This one's my favorite (although I haven't read the whole thing):

"This movement is designed to double the speed by gears of equal diameters and numbers of teeth—a result once generally supposed to be impossible. Six bevel-gears are employed..."

http://507movements.com/mm_226.html


It's like an automobile differential --- if one wheel stays still, the other wheel turns faster than the input. Hence the warnings to not exceed a certain speed when one wheel is slipping, because the other one turns much faster than what the speedometer indicates.
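In speed terms this is just the standard open-differential relation: the carrier (cage) turns at the average of the two side gears, so

  w_left + w_right = 2 * w_carrier

and holding one wheel still (w_left = 0) forces w_right = 2 * w_carrier, which is presumably the doubling that movement 226's six bevel gears are arranged to exploit.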


I wonder if there are some general theorems on what sorts of speeds/powers can be achieved given access to specific types/sizes of gears.


It took five minutes of re-reading and visualizing but once it clicked there was very much a eureka moment. What a clever little design.

Thanks for sharing!


I read and re-read this for a solid 5 minutes and I have no idea how it produces the result. I get that gear D spins orbitally around the left-right axis instead of rotating clockwise, but I can't seem to grok how that would produce 2x rotations.


A differential is an adder.[1] This is a differential hooked up as a doubler.

Mechanical analog computers, especially gun predictors, had lots of setups like that.

[1] https://www.youtube.com/watch?v=s1i-dnAH9Y4&t=757


Thanks for that video. Very informative... like many of the videos from that age. There's something about training videos and physics videos from the 50s / 60s that makes them much better than what people produce today; it's hard to say exactly what it is, but they do feel cruft-free.


The editing, pacing, and style were better.

Educational videos today tend to go to one of two extremes. Either they're a talking head with PowerPoint, like most of the "massively online" courses. Or they have way too many jump cuts, like theatrical movies which want your attention, not your understanding.

There's also an annoying tendency to have distracting music and irrelevant graphics during narration.

Here's an old Jam Handy film, "Spinning Levers", on how a transmission works.[1] There's a whole series of these Chevrolet films on the Internet Archive, covering major vehicle systems. Things to note:

- There's a narrator and a demonstrator. We never see the narrator, and the demonstrator never talks. So the viewer can focus.

- The demo models of parts are really good. They start with a simple version and add features until a full transmission has been built up.

- There's some simple animation. Animation is used to point out how power flows through the gears. This is much clearer than someone using a pointer.

- The editing and narration are very well synchronized.

- There's an entertaining part at the beginning and end, so you don't feel like they're beating you over the head with the boring stuff.

[1] https://www.youtube.com/watch?v=JOLtS4VUcvQ


Not just the 50s/60s - this video explaining differentials is from 1937 and is a marvel of pedagogy:

https://www.youtube.com/watch?v=yYAw79386WI


Oh yes, that's legendary! I forgot it was pre-war! Speaking of war, I also recall pretty well-made video explanations for bomber pilots, about how to fly the planes to avoid anti-aircraft fire.

So I guess I should extend my original statement to the 1930s-1970s range.


They are not trying too hard to make it fun and entertaining. But it ends up much more entertaining than modern chewing gum tutorial videos.


I think you're gonna like this one [1]. The B-25 Mitchell instructional that was linked here a few weeks ago is also amazing [2].

[1]: https://www.youtube.com/watch?v=Ac7G7xOG2Ag

[2]: https://www.youtube.com/watch?v=-YQmkjpP6q8



I wish they would make some for convolutional neural networks.


3blue1brown videos are probably the best equivalent. Maybe a little basic? But really, all his math videos are just ridiculously good quality.

https://www.youtube.com/watch?v=aircAruvnKk


Thanks for the link. Those mechanical computers are really amazing; such an elegant solution.


Not a mechanical engineer but I think this is what is happening.

Note that Shaft F and shaft D are made to rotate in opposite directions.

Then another gear is made to pit those motions against each other, by means of that wacky rotating frame A.

To see why, let's unwrap it from polar coordinates to something more linear.

Imagine a wheel with gear teeth resting on a toothed rail. The wheel has an axle going through it. If you drag the wheel's axle forward horizontally, that's like what Frame A does to impart 1x rotation. But then the rail is also moving backwards, giving you 2x rotation.

I think!
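Putting rough numbers on that picture: drag the axle forward at speed v over a stationary rack and a wheel of radius r rolls at v/r. Slide the rack backwards at v at the same time and the contact point sees a relative speed of 2v, so the wheel turns at 2v/r: twice the rotation for the same axle motion.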


You're entirely correct - it's a modified differential drive.

The two shafts are coaxial to each other and drive their corresponding gears in opposite directions. The frame is fixed relative to one shaft but mobile relative to the other, which is where the extra 1/2 of relative motion comes from.

The traditional way to do this was with a crossed belt. This is the higher-friction lower-slippage version of that.


I think what happens is that E is moving the whole gear D around. I think you realize this. In other words, the motion of E does not change which teeth of E and D are in contact with each other.

But then the gear LeftC is making D rotate around the circumference of E, i.e. changing which teeth of E and D are in contact. Since this is in the same direction as above, you get 2x speed.

Essentially, imagine you're sitting on the edge of a plate. E is making the whole plate spin, and LeftC is making you run around the circumference of the plate in the same direction as the rotation. So your angular speed becomes twice that of the plate.


E is the output gear, and B is the input. The arrangement on the right is to make the frame A and gear C turn in opposite directions, and D adds the two rotations together again to turn E twice for each revolution of B.


This is blowing my mind at the mo, I need to see it working...


I think I got it. The simplest way I can describe it is the gear on the far right is turning the whole square frame on the left because the little shaft turns inside the big shaft. The 2nd gear from the right (that is connected to the big shaft) is turning the "thing-thats-already-turning", so you get 2X rpm.


I can imagine some timing-specific attacks for memory accesses, but they're likely not as robust as attacks against the branch predictor:

1. This is the simplest one - if the memory being accessed is in a cache (L1/L2, or page in TLB), the function will take a significantly shorter time to execute. If movfuscator achieves conditional execution by manipulating index registers to perform idempotent operations, this will be very easy to detect.

2. Prefetching - if movfuscator reads memory sequentially with a detectable stride, prefetching will shorten the execution time.

3. Write combining - if the code writes to nearby addresses (same cache line), the CPU will combine them to a single write. This will cause a measurable timing difference.

EDIT: One more: Store forwarding - if the code writes to a memory address and reads it soon, the CPU may bypass the memory access (and even cache access) completely.
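A rough sketch of the first point, telling a cached load from a cold one via the TSC (x86 with GCC/Clang intrinsics; the single-shot measurement is only illustrative, a real probe would serialize with rdtscp/cpuid and take the median of many runs):

  #include <stdint.h>
  #include <stdio.h>
  #include <x86intrin.h>            /* __rdtsc, _mm_clflush, _mm_mfence */

  /* Time a single load of *p in TSC ticks (very rough). */
  static uint64_t time_load(volatile char *p) {
      _mm_mfence();
      uint64_t t0 = __rdtsc();
      (void)*p;
      _mm_mfence();
      return __rdtsc() - t0;
  }

  int main(void) {
      static char probe[64];

      _mm_clflush(probe);                    /* evict: the "cold" case      */
      uint64_t cold = time_load(probe);
      uint64_t warm = time_load(probe);      /* second load hits the cache  */

      printf("cold: %llu ticks, warm: %llu ticks\n",
             (unsigned long long)cold, (unsigned long long)warm);
      return 0;
  }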


- Random stuff all over ~/

- User-specific bin folder ~/LocalApps (with lots of scripts)

- Projects etc. in a GoCryptFS-encrypted directory synced to a NAS with Syncthing (NFS+Kerberos on my desktop rather than Syncthing). Vendor directories (/vendor/, /node_modules/) are not encrypted so as not to slow down development; they are bind-mounted to a cleartext directory.

- PDFs, MP3s, etc. in Dropbox


I spent yesterday rewriting my ZSH config. A lot of that time was spent figuring out how to map random key combos to unused control characters. So if I could implement both the terminal and the shell from scratch (or make a significant addition), I would implement a scenario like this:

  1. Program informs the terminal that it supports "extended keyboard input". This would be done with an escape code, much like bracketed paste.
  2. Terminal informs the program that it is now enabled.
  3. Terminal informs the program of the initial modifier key state.
  4. When the user presses a key (including modifiers), it is sent in a format that standardizes over all the commonly used key codes. Also, if the key represents a Unicode codepoint, that is sent as well. E.g. (mod-status-at-key-press)(key-code)(utf8-char-or-null)(null-terminator).
  5. When the user releases a key, a similar message would be sent.
  6. When the program exits (or a shell runs another program), it asks the terminal to disable "extended keyboard input"
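To make step 4 concrete, here is a hypothetical C encoding of one key event; the struct fields, modifier bits, and the release flag are all invented for illustration, not part of any existing terminal protocol:

  #include <stddef.h>
  #include <stdint.h>

  /* One key event as described above: modifier state at press time, a
     standardized key code, and the UTF-8 text the key produces (if any). */
  struct key_event {
      uint8_t  modifiers;      /* bitmask: shift, ctrl, alt, super, ...    */
      uint8_t  is_release;     /* 0 = press (step 4), 1 = release (step 5) */
      uint16_t key_code;       /* standardized, layout-independent code    */
      char     utf8[5];        /* UTF-8 bytes of the codepoint, or ""      */
  };

  /* Serialize as (mod-status)(key-code)(utf8-or-nothing)(null-terminator).
     Returns the number of bytes written into out (at most 8). */
  static size_t encode_key_event(const struct key_event *ev, uint8_t out[9]) {
      size_t n = 0;
      out[n++] = ev->modifiers | (ev->is_release ? 0x80u : 0);
      out[n++] = (uint8_t)(ev->key_code >> 8);
      out[n++] = (uint8_t)(ev->key_code & 0xff);
      for (const char *c = ev->utf8; *c; c++)
          out[n++] = (uint8_t)*c;
      out[n++] = 0;             /* null terminator */
      return n;
  }

Layout-independent key codes and modifier bits are the main point; the UTF-8 field is only a convenience so programs that just want text don't have to carry their own keymap.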


Its port 26 enforces TLS though.


But doesn't bother to check if the certs (local and remote server) have been signed by a trusted authority. Nor does it attempt to pin these certs.

It provides encryption, but neither authentication nor authorisation. In short, an ever so slight improvement over normal SMTP.


It's actively ten thousand times worse than that. From the article:

> The device uses self-signed certs throughout and they aren't even device specific. It's using the default ssl-cert-snakeoil.pem and ssl-cert-snakeoil.key in the Postfix config.


Requiring the same standard of verification for activating an account. Of course, they want to make it easy so it's unlikely to happen...


If he knew his password he could deactivate the account using the same standard of verification as was used for reactivating it.


He does know it, but Facebook is requiring that he change it before deleting his account.


I'd be very cautious before moving SSH to a non-privileged port (over 1024). Any user on the server could start their own SSH server on that port if the real SSH server is dead. While this is hard to exploit (needs access to a normal user account, needs to kill the real SSH server, needs to get around SSH host key checking), it is still at least a theoretical reduction in security.
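The privileged/unprivileged split is easy to see for yourself; a small sketch (Linux/POSIX assumed, run as a normal user):

  #include <arpa/inet.h>
  #include <errno.h>
  #include <netinet/in.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <unistd.h>

  static void try_bind(uint16_t port) {
      int fd = socket(AF_INET, SOCK_STREAM, 0);
      struct sockaddr_in addr;
      memset(&addr, 0, sizeof addr);
      addr.sin_family = AF_INET;
      addr.sin_port = htons(port);
      addr.sin_addr.s_addr = htonl(INADDR_ANY);

      if (bind(fd, (struct sockaddr *)&addr, sizeof addr) == 0)
          printf("port %u: bound fine\n", (unsigned)port);
      else
          printf("port %u: %s\n", (unsigned)port, strerror(errno));
      close(fd);
  }

  int main(void) {
      try_bind(22);        /* <1024: EACCES unless root/CAP_NET_BIND_SERVICE */
      try_bind(22222);     /* >1024: any user can grab it if it's free       */
      return 0;
  }

If sshd were listening on, say, 22222 and died, the second bind would succeed for whoever asked first, which is the hijack scenario above.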


Besides local iptables you can forward the port at the firewall level. Many people (myself included) have observed failed ssh logins going from many thousands daily to on average zero just by changing the port on a netfacing server. Of course a determined hacker who is after you can trivially portscan. But why not block all that noise and a huge percentage of shotgun attacks? If someone is out just to find servers to root with a new zero day they're liable to spray the net searching on the target port rather than portscanning all IPv4 space. Why not buy yourself some free time?

It is basically zero inconvenience to add an extra argument or shell setting.


A user can always start their own SSH server. Just because you've decided to move it to a different port doesn't really encourage them. I suppose you could make this a bit more difficult for them by removing compilers (no really, you don't need compilers everywhere) and making sshd owned by root, mode 700...

However, proper ingress filtering or local iptables/pf rules would stop any unwanted inbound traffic from reaching your server, and you should definitely be using ingress and egress filtering on your network.


Removing compilers has effectively no benefit; a non-root user can trivially download and run an arbitrary distro or package manager (e.g. Nix from NixOS, Portage from a Gentoo prefix, etc.) and effectively do a chroot + package management without root.


The point was that you could more easily replace the SSH server with a malicious one and e.g. hijack your agent when you connect to it.


That's not the server's fault, it's just Drupal being slow. It was pretty slow in version 6 and has been getting consistently slower. I'm pretty sure that the request count is for dynamic content only.


>> That's not the server's fault, it's just Drupal being slow.

Agreed.

I had a static site running on Drupal 7 on Azure and it was really slow. I did a ton of front-end optimization and cached everything I could, and it still took between 3 and 5 seconds to serve pages.

Once I switched to a static site generator, I was floored at how much faster the pages loaded and the performance was night and day; and I was still hosting it on Azure. The only variable that had changed was Drupal.


Something's wrong with either the site architecture (too many modules), the web services used on the front end (something calling a web API on front or backend?), or the infra (slow MySQL?).

I've built some substantial D7 sites and see median completely-uncached page response times of 100-300ms. Slow, sure, but anything over 1s and I figure out what's causing such abysmal performance.

With PHP 7 on a standard D7 site, those times are down sub-200ms.

And these are sites with over 100 contrib modules, usually 20-30 extra custom modules, and at least a million-plus page views per day (mostly anonymous, so those are cached).


Surely the only variable that changed was the entire stack? I'm assuming your static generator basically means you're firing a bunch of bytes, probably sitting in RAM, out over a socket. No CPU/processing involved, no dynamic anything.

I'll be the last person to defend Drupal, but I don't think it's fair to say that Drupal was the only variable that changed. What about the amount of stuff coming off disk each time, the removal of an interpreted language, the database access?


While Drupal is slow, it is a CMS. How do you do a CMS with static HTML pages? E.g. authentication, classifying content for various access rights, serving dynamic book-module/forum/wiki/nodes/editor/video content, etc.?

I'm not impressed by Drupal's performance even on a low-traffic site, but then again I have not found a "better" CMS yet.

