Speaking of which: at my last job I was the dev lead and we were doing everything on prem. Then we hired “consultants” to help us “move to the cloud”. I didn’t know the first thing about AWS when the migration started.
What I didn’t know at the time was that the consultants were a bunch of old-school netops folks who had taken one AWS exam, and all they knew was how to duplicate an on-prem infrastructure in AWS.
Of course, as always, if you take your on-prem thinking and processes to the cloud, you get the worst of both worlds: all of the process and people overhead of being on prem, and you’re spending more for the privilege.
I started studying for the AWS Solutions Architect Associate exam just so I could talk the talk, and I realized how much unnecessary work we were doing instead of using managed services. I presented my entire plan to the consultants, and not once did they tell me there was a better way.
For phase 2, I wanted to be more “cloud native”, but there was so much pushback from the netops people, who could tell they were losing their grip.
Not long after, I changed companies to work somewhere the new CTO - a smart guy who is definitely not young - always prefers managed services.
I’ve worked for small companies, and everything I do in AWS, I’ve had to do on prem at some point in the last 20 years. Never again.
Python packaging is definitely in need of something, but my sense is that "yet another tool trying to be a catch-all package/environment management interface" is not it.
I finished this recently, and ended up with a mixed-but-overall-negative view.
As someone who had a reasonable aptitude for mathematics in the past but essentially no practice in the last 10 years, it seemed like the content varied wildly in how well it built on previous chapters, or more generally adhered to the "introduction" label. The programming-based content was thin to the point of being pretty much irrelevant, leaving the whole as something more like a scatter-brained textbook for an odd curriculum rather than a refocused version of a sensible one.
They don't call it out in the announcement, but this was published in the same week as the "Supercomputing" conference - GCP hasn't had an answer for HPC workloads thus far, so maybe this is their attempt.
It's not a proper answer for HPC without an actual low-latency, low-contention RDMA fabric. A typical HPC system's workload includes a large fraction of materials-science computation that really needs it. There's an example of scaling cp2k (I think) on Azure, but I haven't used Azure enough to say how well it works compared with an actual HPC cluster where you can also monitor the fabric (which is quite important).
Fully agree, but the announcement is pretty bare-bones, so there's no reason to think that adding InfiniBand is totally out of the question, particularly since this offering seems to be pretty far from a standard GCP experience.
Having experimented with IB on Azure, I can comfortably say that the problem is not so much the fabric but the general usability of Azure itself - hot garbage doesn't do it justice.
If that's the case, can you describe how you ended up with customers?
Whenever I consider entrepreneurship as an option for myself, it is always the customer-facing sales part which leaves me feeling like I would struggle.
People come to the website and sign up, from ads, affiliates, social media, or through word of mouth from other customers. Most business does not involve any "customer-facing sales".
They have been describing this as shipping (in the Dell systems mentioned) for a while now - I would guess two years. It isn't clear to me where the hardware is going, or why the company hasn't made more of a splash if performance is so good.
Coming from the scientific computing realm, I'd only ever done "real" code in Python until a couple of weeks ago, when I was forced to do some work in TypeScript. Once I got over my initial worry about the best way to install npm without running into Python-esque problems, and gave nvm a try, I was very pleasantly surprised by the package management process and was able to just get on with the work.
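For anyone in the same boat, the nvm workflow that made this painless looks roughly like the sketch below (the installer version and the Node version "20" are just placeholders; check the nvm-sh/nvm README for current values):

```shell
# Install nvm itself (see the nvm-sh/nvm README for the current installer URL)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash

# Install and activate the current LTS Node.js; npm comes along with it
nvm install --lts
nvm use --lts

# Pin a per-project version: nvm reads a .nvmrc file in the project directory
echo "20" > .nvmrc
nvm use    # with no argument, switches to the version named in .nvmrc
```

The `.nvmrc` file is the part that avoids the "which interpreter am I on?" problem familiar from Python: each project declares its Node version, and `nvm use` picks it up.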
I've tried various Python env management solutions in the past (mostly leaning towards conda), but had recently settled on just using separate LXC/LXD containers for each project.
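The per-project container approach is simple enough to script; a minimal sketch of the LXD lifecycle (the container name `myproject-env` and the image alias are just examples):

```shell
# Launch a fresh Ubuntu container for a new project
# (image aliases vary by configured remote)
lxc launch ubuntu:22.04 myproject-env

# Drop into a shell inside the container and do all env setup there,
# so pip/conda state never touches the host
lxc exec myproject-env -- bash

# When the project is finished, tear it all down; nothing leaks onto the host
lxc stop myproject-env
lxc delete myproject-env
```

The appeal over venv/conda is that *everything* is isolated, not just the Python packages: system libraries, compilers, and any stray `pip install --user` damage all die with the container.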