
I'm an active researcher in TCS. For me, AI has not been very helpful on technical things (or even technical writing), but has been super helpful for (1) literature reviews; (2) editing papers (e.g., changing a convention everywhere in the paper); and (3) generating TikZ figures/animations.


For perspective, the CS programs in the NSF already have a two-submission limit per year [1].

Besides reducing the incentive to spam, this rule has had another positive effect: As a researcher without funding, you don't have to spend your whole year writing grants. You can, instead, spend your time on actual research.

With that said, NIH grants tend to be much narrower than CS ones, and I imagine that it takes a lot more grants to keep a lab going...

[1] https://www.nsf.gov/funding/opportunities/computer-informati...


Describing this as a limit on "CS programs" is a common, but erroneous, understanding of the proposal limit.

This specific solicitation — CISE Core Programs — has a 2-proposal-per-year limit. However, that limit applies only to this solicitation, and only counts proposals submitted to this solicitation. CISE Core Programs is an important CS funding mechanism, but there are quite a few other funding vehicles within CISE (Robust Intelligence, RETTL, SATC, and many more, including CAREER). Each has its own limits, and submissions to them generally don't count against the Core Programs limit (or vice versa).


For perspective, in the same time period, the number of employees at Google grew fivefold. I wouldn't be surprised if the growth of the software industry, at least, actually outpaced the increase in H-1B visas.


For perspective, in the same time period, the number of employees at Google grew fivefold. It seems likely that the number of positions for highly educated workers, in general, increased by quite a bit during that time.


Cool paper!

As a small comment, this seems closely related to another recent paper: History-Independent Dynamic Partitioning: Operation-Order Privacy in Ordered Data Structures (PODS 2024, Best Paper).

I'm not sure how they compare, since neither paper seems to be aware of the other. And I'm also not sure which paper came first, since the geometric search paper doesn't seem to list a publication date.


Whoah, cool. I'm one of the authors of the geometric search tree paper, and we totally hadn't seen that paper, but will for sure dig in! Thanks for mentioning it.


I don't think the claim is true in quite as much generality as the author suggests. Some deterministic data structures use much more space than time; for example, the deterministic implementation of a van Emde Boas tree.


I think you're missing the time it takes to build the tree in the first place, which is O(M), equal to the space complexity. People usually ignore this cost as a "preprocessing" factor, but it's a cost that's really there.

After this initial O(M) time and space cost, you do additional operations which only take up time, not space, so the claim Time >= Space holds here as well.
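
To make that concrete, here's a tiny toy sketch in Python (my own example, a plain direct-address table rather than an actual van Emde Boas tree, with a made-up universe size) of why counting the build cost restores Time >= Space:

    # Toy illustration, NOT a vEB tree: a direct-address table over a
    # universe of size M. Building it touches every one of the M slots,
    # so the preprocessing step alone costs as much time as the space
    # the structure occupies.

    M = 1 << 16               # universe size, chosen arbitrarily for the sketch

    present = [False] * M     # build: O(M) time and O(M) space

    def insert(x):
        present[x] = True     # O(1) time per operation, no new space

    def member(x):
        return present[x]     # O(1) time per operation

    # Total time = O(M) build + O(1) per operation >= O(M) = space used,
    # so Time >= Space still holds once preprocessing is counted.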


Maybe quotient filters?


Yes, that's one of the Bloom-filter successors I was looking for. Thanks.

Link for people reading this: https://systemdesign.one/quotient-filter-explained/

I remember there's more obscure newer stuff someone showed me in 2019 tho.
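
For anyone curious, here's a rough Python sketch of the core "quotienting" idea (my own toy version with made-up parameters, not a faithful quotient filter: a real one stores the remainders in-place in a single compact array with a few metadata bits per slot, which is what makes it cache-friendly and mergeable):

    import hashlib

    Q_BITS = 16   # quotient bits -> 2^16 buckets (arbitrary for this sketch)
    R_BITS = 8    # remainder bits actually stored per element

    def fingerprint(item: str) -> int:
        # Hash the item down to a (Q_BITS + R_BITS)-bit fingerprint.
        h = int.from_bytes(hashlib.sha256(item.encode()).digest()[:4], "big")
        return h & ((1 << (Q_BITS + R_BITS)) - 1)

    # Toy storage: bucket the remainders by quotient. A real quotient filter
    # lays the remainders out contiguously in one flat array instead.
    buckets = [set() for _ in range(1 << Q_BITS)]

    def insert(item: str) -> None:
        f = fingerprint(item)
        q, r = f >> R_BITS, f & ((1 << R_BITS) - 1)
        buckets[q].add(r)          # only the short remainder is stored

    def maybe_contains(item: str) -> bool:
        # Like a Bloom filter, this can return false positives
        # (two items may share a fingerprint), but never false negatives.
        f = fingerprint(item)
        q, r = f >> R_BITS, f & ((1 << R_BITS) - 1)
        return r in buckets[q]

    insert("hello")
    print(maybe_contains("hello"))   # True
    print(maybe_contains("world"))   # almost certainly False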


I think these are more often called "cache-oblivious" algorithms.


IcebergHT isn't just for persistent memory (although I can see why you might think it is based on the paper's title). The paper also gives experiments showing that the hash table performs very well in standard RAM, much better than other concurrent implementations.


But weren't the comparisons against other PMEM hash tables, unless I misread? Like, no comparison against common C++ or Rust implementations.


I think you may have it backwards. Libcuckoo, CLHT, and TBB are widely used high-performance C/C++ DRAM hash tables. I think TBB is the hash table Intel maintains, if I remember right.

So the DRAM experiments are apples to apples. It's actually the PMEM experiments, I think, that are comparing a new hash table on a new technology to previous hash tables that weren't designed specifically for that technology.


You’re right. I didn't see the part later in the paper where they compare against TBB.


As a super minor grammar point for the author, "ubiquitous" is a rare example of a word that starts with a vowel but should be preceded by "a" instead of "an".


I think that's got to be a great test for foreign spies!

"ubiquitous" starts with the sound of "you-biquitous" and so the suffix -n is a duplicated non-vowel. ("y" is probably a vocalic glide, but still not in {a,e,i,o,u}.)

I bet the real rule is some reality about feasibility of pronunciation, even though native English speakers see the rules explained in terms of spellings.


> I bet the real rule is some reality about feasibility of pronunciation, even though native English speakers see the rules explained in terms of spellings.

The rule as often given in English classes is to use "an" if a word starts with a "vowel sound", rather than with a vowel letter. So, it's "an herb" (since the 'h' is silent in (American) English), but "a ubiquitous".

Relatedly, you can infer whether someone likely pronounces "URL" by spelling it out (like "you are ell") or as a word (like "earl") based on whether they write "a URL" or "an URL".


As someone who uses a dialect/accent of English in which I often pronounce these "silent" letters (e.g. I would pronounce the "h" in "herb"), I've often wondered whether the choice of "a" vs "an" is supposed to be accent-specific.


I think so, yes; if you say "herb" with a non-silent 'h', you'd write "a herb".

By way of example, the name Herb is commonly pronounced with a non-silent H, so even in a dialect where "herb" is pronounced "erb", you'd write "A Herb picked up an herb.".


> I bet the real rule is some reality about feasibility of pronunciation

YUP! Sorry I don't have a good citation handy, but a lot of English came about as a result of misaligned word boundaries: "napron" (from the French naperon) became "an apron"; "orange" (from the Arabic naranj) lost its initial n the same way; "an ewt" became "a newt"; etc.


It's regional, depending on silent or transmuted consonants in the local vernacular. An 'ospital, an 'orse, a nuncle: it's a yewbiquitous phenomenon.


Oy, keep yer spillage off my napron!


A union

A urinal

...

