Goldman has done this for decades, pushing it even further by developing their own language (Slang), graph db (SecDB), and IDE (SecView). Many engineers resist working in it, but for any strat it's mandatory.
Bloomberg did a similar thing decades ago, with their own database and their own take on TCP/IP. But that was done out of necessity, since they started in the 80s and the database landscape looked very different from how it does today.
They continued with their own db for decades out of inertia, and also because it worked fine. I think they've long since switched to TCP/IP and the public internet (for a time there was a Bloomberg network running parallel to the public internet).
Yes, and our database, Comdb2, was open-sourced. It still powers the company today. And yes, we use TCP/IP :) And yes, we still have one of the largest private networks in the world.
I would never want to work for a place like Goldman anyway, but knowing they had their own idiosyncratic tech environment would certainly be an additional negative.
The strat role has been downplayed and realigned a lot over the last several years. Traditionally, a strat was indeed a "quantitative strategist" who sat on the desk, often next to voice traders, developing tools, pricing models, trading systems, etc.
One of the people I regard as the Godfather/OG strat is Emanuel Derman (https://emanuelderman.com/). Another transformative (in my time) person in the Strat complex was Marty Chavez (https://www.rmartinchavez.com/). Both amazing people on and off the trading floor.
It means "strategy". I imagine each "strategy" looks at some data and decides what kinds of trades to make in response, or predicts the price of some securities and sends that to a downstream process, which decides what trades to make based on those price predictions.
If this and/or the policies, leadership, and mindset that enable these kinds of things interest you, I highly recommend Abundance by Ezra Klein and Derek Thompson. IIRC, they outline data centers in space in the book's utopian introduction.
What the "right answer" is will vary widely by application and analyst. That's one reason there are so many coordinate reference systems. If you keep everything in EPSG:3857, you'll get answers in ~meters, but whether that's "right" depends on where the geometry is, how large the distance or geometry is, and what your precision threshold is. So, really, everyone's needs are necessarily "specific."
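A minimal sketch of that caveat in Python (assuming pyproj is installed; the two points are illustrative values, roughly Oslo and Stockholm): it compares a planar distance computed in EPSG:3857 with a geodesic distance on the WGS84 ellipsoid, to show how "answers in ~meters" drift away from the equator.

```python
from math import hypot
from pyproj import Geod, Transformer

# Two points in lon/lat (EPSG:4326), roughly Oslo and Stockholm.
lon1, lat1 = 10.75, 59.91
lon2, lat2 = 18.07, 59.33

# Geodesic distance on the WGS84 ellipsoid (meters).
geod = Geod(ellps="WGS84")
_, _, geodesic_m = geod.inv(lon1, lat1, lon2, lat2)

# Planar distance after projecting to Web Mercator (EPSG:3857).
to_3857 = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)
x1, y1 = to_3857.transform(lon1, lat1)
x2, y2 = to_3857.transform(lon2, lat2)
planar_m = hypot(x2 - x1, y2 - y1)

print(f"geodesic: {geodesic_m / 1000:.1f} km, EPSG:3857 planar: {planar_m / 1000:.1f} km")
# At ~60N the Mercator figure is roughly 2x the geodesic one (scale ~ 1/cos(lat)),
# which is exactly the "depends on where and how large" point above.
```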
DuckDB > geopandas, certainly for anything out of core. Though I recently gave up on importing 70GB worth of large multipolygons (from a CSV of hex-encoded WKB) and just used a PostGIS container. In concert with DuckDB's growth, I'd also mark the advent of GeoParquet.
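For the "just used a PostGIS container" route, a rough sketch of streaming a huge hex-WKB CSV into PostGIS in chunks rather than holding it in memory (the table name, column names, chunk size, and connection string are all hypothetical):

```python
import pandas as pd
import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect("postgresql://postgres:postgres@localhost:5432/gis")
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS polygons (
            id BIGINT PRIMARY KEY,
            geom geometry(MultiPolygon, 4326)
        )
    """)
    # Read the CSV in manageable chunks; each row holds an id and a hex WKB string.
    for chunk in pd.read_csv("multipolygons.csv", chunksize=50_000):
        rows = list(chunk[["id", "wkb_hex"]].itertuples(index=False, name=None))
        # decode(..., 'hex') turns the hex string into bytea; ST_GeomFromWKB parses it.
        execute_values(
            cur,
            "INSERT INTO polygons (id, geom) VALUES %s",
            rows,
            template="(%s, ST_GeomFromWKB(decode(%s, 'hex'), 4326))",
        )
```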
The big change, in my view, over the past decade in GIS software is in compute and storage efficiency across the typical stack. DuckDB has become a part of this, but h/t to the advances from shapely, geopandas, GeoParquet, and GDAL. There's a lot of overlap in that Venn diagram, and credit should be spread around. QGIS is great too, though I feel there's a market opportunity to apply 90/10 to its massive feature set and move it to the web.
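A small sketch of the GeoParquet piece of that stack: round-tripping a layer through Parquet with geopandas (which delegates to pyarrow and shapely under the hood). File names are placeholders; the point is the columnar, compressed, CRS-aware storage compared with older formats like shapefiles.

```python
import geopandas as gpd

gdf = gpd.read_file("neighborhoods.gpkg")        # any GDAL-readable source
gdf.to_parquet("neighborhoods.parquet")          # GeoParquet output
roundtrip = gpd.read_parquet("neighborhoods.parquet")
assert roundtrip.crs == gdf.crs                  # CRS metadata survives the round trip
```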
Many criminal records, petty or otherwise, are public record. Once archived by third parties, expunged or dismissed infractions never truly go away. A traffic violation or other petty misdemeanor from 20 years ago that has been expunged from the official record can still show up on a background check, because companies archive public data. So there is a flip side to this.
Public data is incompatible with secrecy. Expunged records still appear in newspaper archives if the local reporter on the crime beat covered the proceedings. IMO, "expunged" means removed from official court records, not from public memory, including newspapers, archived websites, police blotters, and prosecutors' files.
Getting something removed from your criminal record doesn't mean it gets forgotten. Think about a newspaper article written about your crime: that will be public and archived forever.