Hacker News | sacheendra's comments

The authors point out that the PL community values formal methods and hard implementation, but not human-factors studies or alternative programming approaches like spreadsheets. I'm not from the PL community, but my first instinct was that other communities, such as social science, software engineering, and HCI, can do those things; PL has to be about building languages.

However, the example that drove the problem home for me was socioPLT. If award-winning work on why programming languages succeed is not highly successful within the community, then maybe the community should indeed look into incorporating more research approaches.

The points the authors make about lack of community engagement for non-mainstream problems, lack of female creators in books about PLs, and gatekeeping of the term programming language are also convincing.


No satellite was pointed at the path when MH370 was flying.


So either it isn't actually true that it is "constantly scanning every inch of the entire globe with IR cameras with missile/plane/boat/person(?) level resolution", or it is true and they know exactly where the plane went down but just don't tell anyone about it for some reason. It's one or the other.


Every inch of the globe is imaged every N hours, by satellites making continuous observations. Most of the earth is not under sensor coverage at any particular instant. Both are true.

It’s possible that NRO has a later observation of MH370 than any that have been publicly disclosed, but it would be sheer luck to have, like, video of the disappearance.


Constantly scanning != Constantly recording and storing in perpetuity.


I think "repeatedly scanning" would be a better phrase.


Some say otherwise. I don't know what to make of it, but here you go: https://www.youtube.com/watch?v=AwaC4AXFqRI


That's only for Gmail, Outlook.com, and the Samsung browser. It is possible to argue, without mental gymnastics, that these are not gateways.

It is fairly easy to migrate from Gmail and Outlook to another email provider. Not to self-host, but that's a different issue.

As for the Samsung browser... I don't know of any sites that only work in it.


Yes and no. Isn’t there an argument that Gmail and Outlook.com (combined with Microsoft 365/Exchange Online), while superficially interoperable, have enough market share that their actions make running other mail providers very difficult in practice?


> Isn’t there an argument that Gmail and Outlook.com (combined with Microsoft 365/Exchange Online), while superficially interoperable, have enough market share that their actions make running other mail providers very difficult in practice?

Yes, there is an argument to be made for that. I personally agree with that viewpoint. But that doesn’t mean arguing against it is “mental gymnastics”. It’s possible for reasonable people to disagree about things.


Indeed. I hope the EU revisits this soon.


Storage manufacturers now give applications more control over SSD FTL operations through ZNS (Zoned Namespaces): https://zonedstorage.io/docs/introduction/zns. Curious to see how it will be used.
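Since ZNS moves data placement and space reclamation from the drive's FTL up to the host, a toy model can show what that contract looks like. This is a conceptual sketch only, not the real NVMe ZNS command set; the `Zone` class and its method names are illustrative.

```python
# Toy model of ZNS semantics: a zone only accepts sequential writes at its
# write pointer, so the host (not the drive's FTL) decides where data lands
# and when a whole zone is reset (erased) to reclaim space.
class Zone:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.write_pointer = 0          # next writable offset in the zone
        self.data = bytearray(capacity)

    def append(self, payload: bytes) -> int:
        """Write at the write pointer; random in-place writes are not allowed."""
        if self.write_pointer + len(payload) > self.capacity:
            raise IOError("zone full: host must pick another zone or reset")
        start = self.write_pointer
        self.data[start:start + len(payload)] = payload
        self.write_pointer += len(payload)
        return start                    # host learns where its data landed

    def reset(self):
        """Whole-zone reset is the only way to reclaim space (host-driven GC)."""
        self.write_pointer = 0

zone = Zone(capacity=16)
off = zone.append(b"hello")
print(off, zone.write_pointer)  # 0 5
```

The point of the model: because overwrites are forbidden, garbage collection becomes the application's job, which is exactly the control (and burden) ZNS hands to the host.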


The project's other GitHub repo is a better place to learn more about the project: https://github.com/xtofalex/naja

The linked repo doesn't have an informative README. An example showing how Naja differs from existing tools would help people unfamiliar with Electronic Design Automation, like me.


Thanks for the feedback, sacheendra. You are right about the lack of documentation; starting with a proper README is definitely at the top of the TODO list. I'm still working on testing the structural Verilog language support, and will soon focus on documentation. If there are developers around willing to early-test structural Verilog support or Naja's C++ netlist data structure, don't hesitate to contact me. One thing to note: Naja-Verilog and Naja interact but are not tied together; the Naja structural Verilog parser can be used to build any data structure, or in any project needing structural Verilog support (Yosys or Vivado synthesis outputs, for instance).


Hi again, I've updated the README with a minimal set of information. I hope this clarifies the purpose of the project. I'll work on the detailed API documentation later. All feedback is welcome.


It is not that surprising. Facebook is a complex ecosystem, and therefore experiences lots of outages/problems.

Looking at their status page (https://developers.facebook.com/status/dashboard/), they seem to have problems every week. And these are only the public ones!

I guess they figured they are losing more money due to these problems than the additional 5% they have to spend on infrastructure.


I can see why an org would choose to do this, but the number is still frightening. At Google, we were held to a limit of at most 0.01% cost for sampling LBR events. 5% for debuggability just seems really high.


This is highly unlikely. Hyperscalers like AWS have custom power delivery, rack organization, redundant networking, etc., which make their instances reliable.

Long-running jobs often use checkpoints, which require high-speed networking and storage, which I don't see an option for. E.g., I can get EC2 instances with 100 Gbps networking.

Great job starting the service! But I think you have a ways to go before reaching hyperscaler-level reliability.


It's highly likely that the reliability of this service will be as good or better than AWS.

I have servers hosted in a similar quality datacenter with literally 9 years of uptime. It would be 10 but the customer shut down his services...

By the way, unless something has changed, you can't get more than 10 Gbps of throughput on a single TCP stream in those 100 Gbps setups...
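For anyone who wants to sanity-check single-stream behavior, a rough sketch is a one-connection benchmark. This measures your local loopback stack, not a real 100 Gbps EC2 path, and all the constants here are illustrative:

```python
import socket
import threading
import time

# Push a fixed number of bytes over ONE TCP connection and compute the
# achieved rate. On loopback this reflects the local stack, not a NIC;
# on real hardware the same pattern exposes per-stream limits.
CHUNK = 1 << 20        # 1 MiB per send
TOTAL = 64 * CHUNK     # 64 MiB total

def sink(conn):
    # Drain the connection until the peer closes it.
    while conn.recv(1 << 16):
        pass

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # OS picks a free port
srv.listen(1)
port = srv.getsockname()[1]

client = socket.create_connection(("127.0.0.1", port))
conn, _ = srv.accept()
reader = threading.Thread(target=sink, args=(conn,), daemon=True)
reader.start()

buf = b"\0" * CHUNK
start = time.perf_counter()
sent = 0
while sent < TOTAL:
    client.sendall(buf)
    sent += CHUNK
client.close()               # lets the sink thread see EOF and exit
reader.join()
elapsed = time.perf_counter() - start

gbps = sent * 8 / elapsed / 1e9
print(f"single stream: {gbps:.1f} Gbit/s")
```

To test multi-stream aggregate bandwidth (the scenario where 100 Gbps instances shine), you'd run several of these connections in parallel and sum the rates.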


Why do you believe an individual stream is limited to 10Gbps?


They have stated as much in the documentation.

See the various speeds and feeds, depending on instance size and type and on whether traffic stays inside the VPC, e.g. here:

https://d1.awsstatic.com/events/reinvent/2019/REPEAT_2_Deep-...


yes


It seems like they plan to do hardware-software co-design. Devices (servers, switches?) and firmware designed specifically for their software stack. A software stack (drivers, OS) designed specifically for their hardware. Really interested in seeing where this goes.


They're apparently doing hardware-software co-design for rack-scale servers, integrating some kind of state-of-the-art systems-management features of the sort that "cloud" suppliers like AWS are assumed to be relying on for their offerings.

Kinda interesting ofc, but how many enterprises actually need something like this? If there was an actual perceived need for datacenter equipment to be designed (and hardware-software co-designed) on the "rack scale", we would probably be doing it right and running mainframe hardware for everything.


It starts looking appealing when your monthly AWS bill is deep into the 5 figures and you’re kinda trapped: do you pour your engineering resources into cost-optimizing your infrastructure (doable, but time consuming) or kick the can down the road?

Believe it or not, going private but not having to give up the niceties of AWS or Azure would be quite appealing.


Remember Eucalyptus? It was led by the former CEO of MySQL and built an open-source project that attempted to be compatible with AWS. The project is still alive.

https://en.m.wikipedia.org/wiki/Eucalyptus_(software)


To say the project is still alive is a bit of an exaggeration, don't you think?


Looks like the last release was in 2017. Eight years of releases is a relatively good run. It may be of interest to those, like Oxide, who are considering similar paths, with the added complexity of firmware, BMCs, roots of trust, and OCP-style hardware.

https://github.com/eucalyptus/eucalyptus/releases


Also, some companies just want their data in-house and not hosted on a remote cloud. Sometimes contracts preclude the use of 3rd party external companies to house critical data.


This is certainly a need in HPC environments. Think of natgas / big oil, finance, science (protein folding, genetic synthesis, genome research, etc). The cloud simply makes no sense for a lot of these industries. We're talking 20k physical computers and using MPI for their research jobs kind of scale. The cloud is not a good fit for those types of envs where they're using 100% of their compute 100% of the time if at all possible.


I would guess the problem with business as usual is that the cost magnitude means only companies throwing off serious cash are doing this.

Which has the side effect of their solutions being bespoke, because acceptable cost looks like "Tell me when I need to stop adding zeros to make this happen."

To go another way you either (1) need to be Amazon-scale already (i.e. "there aren't enough zeros to make inefficiency worth our time") or (2) be willing to say no to huge profits out of ideological purity.

The well-funded client / custom trap is real.


I work for one of those HPC / financial firms that I alluded to above. The "no well funded client trap" is also real :)


"Hey look we have this cool solution" -> "Hey look we have a few small to middle tier customers" -> "Hey look we have a big customer" -> "Hey look our big customer bought our company". It's the silicon valley startup shuffle...


But if you have fun and make money in the process, what's wrong with it? Companies are not (or should not be) like children.


First comment talking about what they are doing.


That is not true. Synthesis and layout have to be done separately for the ASIC, and that is a more time-consuming process than writing Verilog code.

