The authors point out that the PL community values formal methods and hard implementation work, but not human-factors studies or alternative programming approaches like spreadsheets. I'm not from the PL community, but my first instinct was that other communities, such as social science, software engineering, and HCI, can do those things; PL should be about building languages.
However, the example that drove the problem home for me was socioPLT. If award-winning work on why programming languages succeed is not highly successful, then maybe the community should indeed look into incorporating more research approaches.
The points the authors make about the lack of community engagement with non-mainstream problems, the lack of female creators in books about PLs, and the gatekeeping of the term "programming language" are also convincing.
So either it isn't actually true that it is "constantly scanning every inch of the entire globe with IR cameras with missile/plane/boat/person(?) level resolution", or it is true and they know exactly where the plane went down and just aren't telling anyone for some reason. It's one or the other.
Every inch of the globe is imaged every N hours by satellites making continuous observations, yet most of the Earth is not under sensor coverage at any given instant. Both statements are true.
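A back-of-envelope calculation shows how both statements hold at once. All the numbers below are rough illustrative assumptions (swath width, ground speed, a single satellite), not real sensor specs; a constellation divides the sweep time by the number of satellites.

```python
# Back-of-envelope: one imaging satellite in low Earth orbit.
# Numbers are illustrative assumptions, not real sensor specs.

EARTH_AREA_KM2 = 510e6      # total surface area of Earth, km^2
SWATH_KM = 100              # assumed sensor swath width
GROUND_SPEED_KMS = 7.0      # typical LEO ground-track speed, km/s

# Area swept per second, and the fraction of Earth in view at any instant
area_per_second = SWATH_KM * GROUND_SPEED_KMS            # ~700 km^2/s
instantaneous_fraction = (SWATH_KM * SWATH_KM) / EARTH_AREA_KM2

# Time for one satellite to sweep the whole surface (ignoring polar overlap)
hours_to_cover_earth = EARTH_AREA_KM2 / area_per_second / 3600

print(f"in view right now: {instantaneous_fraction:.1e} of Earth")
print(f"one satellite sweeps everything in ~{hours_to_cover_earth:.0f} hours")
```

So a single satellite sees a vanishingly small fraction of the surface at any moment, yet sweeps everything within days; a constellation brings "every inch, every N hours" within reach without ever watching any one spot continuously.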
It’s possible that NRO has a later observation of MH370 than any that have been publicly disclosed, but it would be sheer luck to have, like, video of the disappearance.
Yes and no. Isn’t there an argument that Gmail and Outlook.com (combined with Microsoft 365/Exchange Online), while superficially interoperable, have enough market share that their actions make running other mail providers very difficult in practice?
> Isn’t there an argument that Gmail and Outlook.com (combined with Microsoft 365/Exchange Online), while superficially interoperable, have enough market share that their actions make running other mail providers very difficult in practice?
Yes, there is an argument to be made for that. I personally agree with that viewpoint. But that doesn’t mean arguing against it is “mental gymnastics”. It’s possible for reasonable people to disagree about things.
The linked repo doesn't have an informative README. An example showing how Naja differs from existing tools would help people unfamiliar with Electronic Design Automation, like me.
Thanks for the feedback sacheendra.
You are right about the lack of documentation; starting with a proper README is definitely at the top of the TODO list.
I'm still working on testing the structural Verilog language support, and will focus on documentation soon after.
If there are developers around willing to do early testing of the structural Verilog support or the Naja C++ netlist data structure, don't hesitate to contact me.
One thing to note: Naja-Verilog and Naja interact but are not tied together. The Naja structural Verilog parser can be used to build any data structure, or in any project needing structural Verilog support (Yosys or Vivado synthesis outputs, for instance).
Hi again, I've updated the README with a minimal set of information. I hope this clarifies the purpose of the project. I'll work on the detailed API documentation later.
All feedback is welcome.
I can see why an org would choose to do this, but the number is still frightening. At Google, we were held to a limit of at most 0.01% cost for sampling LBR events; 5% for debuggability (500x that budget) just seems really high.
This is highly unlikely.
Hyperscalers like AWS have custom power delivery, rack organization, redundant networking, etc. which make their instances reliable.
Long-running jobs often use checkpoints, which require high-speed networking and storage, and I don't see an option for that here. E.g., I can get EC2 instances with 100 Gbps networking.
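To make the bandwidth point concrete, the standard Young/Daly rule of thumb ties checkpoint cost directly to how often you should checkpoint. This is a sketch with made-up numbers (job state size, bandwidths, MTBF are all assumptions for illustration):

```python
import math

# Young/Daly first-order approximation of the optimal checkpoint interval:
#   t_opt ~ sqrt(2 * checkpoint_cost * MTBF)
# All numbers below are illustrative assumptions, not measurements.

def optimal_checkpoint_interval(checkpoint_seconds, mtbf_seconds):
    """Optimal seconds of work between checkpoints (Young's formula)."""
    return math.sqrt(2 * checkpoint_seconds * mtbf_seconds)

state_gb = 2000                 # assumed job state: 2 TB
fast_ckpt = state_gb / 12.5     # 100 Gbps ~ 12.5 GB/s -> 160 s per checkpoint
slow_ckpt = state_gb / 1.25     #  10 Gbps ~ 1.25 GB/s -> 1600 s per checkpoint
mtbf = 24 * 3600                # assume one failure per day across the job

print(f"100 Gbps: checkpoint every {optimal_checkpoint_interval(fast_ckpt, mtbf)/60:.0f} min")
print(f" 10 Gbps: checkpoint every {optimal_checkpoint_interval(slow_ckpt, mtbf)/60:.0f} min")
```

Slower networking means each checkpoint costs more, so you checkpoint less often and lose more work per failure; that's why checkpoint-heavy jobs care so much about network and storage bandwidth.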
Great job starting the service! But I think you have a ways to go before reaching hyperscaler-level reliability.
It seems like they plan to do hardware-software co-design.
Devices (servers, switches?) and firmware designed specifically for their software stack.
A software stack (drivers, OS) designed specifically for their hardware.
Really interested in seeing where this goes.
They're apparently doing hardware-software co-design for rack-scale servers, integrating some kind of state-of-the-art systems-management features of the sort that "cloud" suppliers like AWS are assumed to be relying on for their offerings.
Kinda interesting ofc, but how many enterprises actually need something like this? If there was an actual perceived need for datacenter equipment to be designed (and hardware-software co-designed) on the "rack scale", we would probably be doing it right and running mainframe hardware for everything.
It starts looking appealing when your monthly AWS bill is deep into the 5 figures and you’re kinda trapped: do you pour your engineering resources into cost-optimizing your infrastructure (doable, but time consuming) or kick the can down the road?
Believe it or not, going private but not having to give up the niceties of AWS or Azure would be quite appealing.
Remember Eucalyptus? It was led by the former CEO of MySQL and built an open-source project that attempted to be compatible with AWS. The project is still alive.
Looks like the last release was in 2017. Eight years of releases is a relatively good run. May be of interest to those like Oxide, who are considering similar paths, with the added complexity of firmware, BMCs, roots of trust and OCP-style hardware.
Also, some companies just want their data in-house and not hosted on a remote cloud. Sometimes contracts preclude the use of 3rd party external companies to house critical data.
This is certainly a need in HPC environments. Think of natgas / big oil, finance, science (protein folding, genetic synthesis, genome research, etc). The cloud simply makes no sense for a lot of these industries. We're talking 20k physical computers and using MPI for their research jobs kind of scale. The cloud is not a good fit for those types of envs where they're using 100% of their compute 100% of the time if at all possible.
I would guess the problem with business as usual is that the sheer cost of doing this means only companies throwing off serious cash attempt it.
Which has the side effect of their solutions being bespoke, because acceptable cost looks like "Tell me when I need to stop adding zeros to make this happen."
To go another way you either (1) need to be Amazon-scale already (i.e. "there aren't enough zeros to make inefficiency worth our time") or (2) be willing to say no to huge profits out of ideological purity.
"Hey look we have this cool solution" -> "Hey look we have a few small to middle tier customers" -> "Hey look we have a big customer" -> "Hey look our big customer bought our company". It's the silicon valley startup shuffle...