Hacker News | snake_doc's comments

Cell towers just need power to keep functioning; Starlink adds no utility in a dense urban environment with fiber.

> Models were run with maximum available reasoning effort in our API (xhigh for GPT‑5.2 Thinking & Pro, and high for GPT‑5.1 Thinking), except for the professional evals, where GPT‑5.2 Thinking was run with reasoning effort heavy, the maximum available in ChatGPT Pro. Benchmarks were conducted in a research environment, which may provide slightly different output from production ChatGPT in some cases.

Feels like a Llama 4 type release. The benchmarks are not apples to apples: reasoning effort is higher across the board, so more compute is used to achieve a higher score.

The note also concedes that some results may not be reproducible in production ChatGPT.

Also, the vision benchmarks all use a Python tool harness, and scores that are low without the harness are excluded.


China added ~90GW of utility-scale solar per year over the last 2 years. There's ~400-500GW of solar+wind under construction there.

It is possible, just maybe not in the U.S.

Note: since renewables can't provide base load and capacity factors run ~10-30% (lower for solar, higher for wind), actual energy generation will vary...


> It is possible

Sure, but GP was clearly talking about the US specifically.

> just may be not in the U.S.

Absolutely 100% not possible in the US. And even if we could do it, I'm not convinced it would be prudent.


If you really had to, you'd probably run turbines off natural gas, but that's not a good idea environmentally.


I am a huge proponent of renewables, but you cannot compare their capacity in GW with that of other energy sources, because their output is variable and rarely maximal. To realistically get 100GW from solar you would need at least 500GW of panels.
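As a rough sketch of that arithmetic (the 20% capacity factor below is an illustrative assumption, not a measured value):

```python
# Rough nameplate-capacity estimate: how much solar you would need to
# install so that *average* output hits a target, given a capacity factor.
def required_nameplate_gw(target_avg_gw: float, capacity_factor: float) -> float:
    """Nameplate GW needed so that average output equals target_avg_gw."""
    return target_avg_gw / capacity_factor

# Assuming ~20% capacity factor for solar (varies a lot by site):
print(required_nameplate_gw(100, 0.20))  # -> 500.0
```

So the "at least 500GW of panels" figure follows directly from assuming a solar capacity factor of 20% or less.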

On the other hand, I don't think we will actually need 100GW of new installations, because capacity can be freed up by making current usage more efficient. The term "negawatt" comes to mind. A lot of people are still in the stone age on this, even though it was demonstrated quite effectively by reduced gasoline use in the US after the oil crisis of the '70s, which only recently recovered to pre-crisis levels.

High gas prices caused people to use less and favor efficiency. The same thing will happen with electricity and we'll get more capacity. Let the market work.


> China added ~90GW of utility solar per year in last 2 years. There's ~400-500GW solar+wind under construction there.

Source?


I'm not sure about the utility/non-utility mix, but according to IRENA it was actually ~500GW of added capacity in 2023 and 2024.

https://ourworldindata.org/grapher/installed-solar-pv-capaci...


Wot? Is this what AI-generated nonsense has come to? This is totally unrelated.


Nope. Construction induces ECC-driven emergent modular manifolds in latent space during KVQ math. You can't use just any old ECC; that's the crux of why it works. More in another reply.



Doesn't apply as long as the improvements obtained there scale with compute.

Now, are there actual meaningful improvements to obtain, and do they stick around all the way to frontier runs? Unclear, really. So far, it looks like opening a can of hyperparameters.


This is a bad example to apply the bitter lesson to; it's about the fundamentals of optimization techniques, not about tying the solution space to hand-crafted structure.


Aren’t they all optimization techniques at the end of the day? Now you’re just debating semantics.


believe what you want, i guess



Yeah, it's just the wrong architecture for the job, so I found it to be a strange example.

Here's the top model on DAWNBench - https://github.com/apple/ml-cifar-10-faster/blob/main/fast_c...

It trains for 15 epochs and, like all the others, is a 9-layer ResNet.


Usually there's more to an ML / data-science idea (one that's not a fully fleshed-out journal paper) than beating a SOTA benchmark.

In fact, beating SOTA is often the least interesting part of an interesting paper, and SOTA-blind reviewers often use it as a gatekeeping device.


Sure, of course. I wasn't suggesting "are you beating a SOTA benchmark?" I'm floating the idea of an ablation that matches a realistic scenario for the dataset/task. I'm personally curious how manifold Muon performs compared to AdamW in a thoroughly explored context. This is the first time I've seen a 3-layer MLP on CIFAR-10.

I probably should have made the 9-layer ResNet part more front-and-center in my point.


Got you, this time.


SCMP is owned by Alibaba, which is subject to the purview of the Chinese Central Government [1].

[1]: https://www.cecc.gov/agencies-responsible-for-censorship-in-...


Mafia behavior continues… (not my observation, but Texas senator Ted Cruz's [1]).

$100k is a big pizzo (protection fee)!

[1]: https://www.bloomberg.com/news/articles/2025-09-19/ted-cruz-...

> “That’s right outta ‘Goodfellas,’ that’s right out of a mafioso going into a bar saying, ‘Nice bar you have here, it’d be a shame if something happened to it,’” Cruz said, using the iconic New York accent associated with the Mafia.


It does go to the government and not to Trump's personal wallet (unlike the memecoins and lavish gifts); it's a tax that's just not being called a tax, and frankly it's a good idea. The current abuse of the H1B doesn't work out positively for anyone but the companies making a boatload of money exploiting people.


Oh? And taxes can’t be used to buy influence and votes? How naive… Money is fungible; it moves from one pocket into another.

Exhibit 1: Tariff revenues to bail out American farmers: https://www.ft.com/content/0267b431-2ec9-4ca4-9d5c-5abf61f2b...


Are you querying from an EC2 instance close to the S3 data? Are the CSVs partitioned into separate files? Does the machine have 500GB of memory? It’s not always DuckDB's fault when there's a clear I/O bottleneck…


No, the EC2 instance doesn't have 500GB of memory. Does DuckDB require that? I actually downloaded the data from S3 to local EBS and it still choked.


Works fine for me on TB+ datasets. Maybe you were using an in-memory rather than a persistent database and running out of RAM? https://duckdb.org/docs/stable/clients/cli/overview.html#in-...
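For what it's worth, the difference can be sketched like this (the bucket path is hypothetical; the point is that opening a database file lets DuckDB spill intermediate results to disk instead of holding everything in RAM):

```sql
-- In-memory session: `duckdb` with no argument; large operators must fit in RAM.
-- Persistent session: open a database file so DuckDB can spill to disk:
--   $ duckdb mydb.duckdb
INSTALL httpfs; LOAD httpfs;  -- needed for s3:// paths

SELECT some_col, count(*)
FROM read_csv_auto('s3://my-bucket/data/*.csv')  -- hypothetical bucket
GROUP BY some_col;
```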


Wait, do you insert the data from S3 into DuckDB? I was just doing a SELECT straight from the files.


Nope, just reading from S3. Check this out: https://duckdb.org/2024/07/09/memory-management.html
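In the same vein, a couple of settings from DuckDB's configuration can help bound memory use (the values and path here are just examples):

```sql
-- Cap DuckDB's memory usage and point spill files at fast local storage.
SET memory_limit = '8GB';
SET temp_directory = '/mnt/ebs/duckdb_tmp';  -- hypothetical EBS path
```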


Maybe it's your terminal that chokes because it tries to display too much data? 500GB should be no problem.


These just seem like over-engineered solutions aimed at job security. When the dataflows are this straightforward, just replicate into your OLAP of choice and transform there.


I came from traditional engineering into data engineering by accident and had a similar view. But every time I tried to build a pipeline from first principles, it eventually turned into something like this, for a reason. This is especially true when trying to bridge many teams and skillsets: everyone wants their favourite tool.


Our approach wasn't about over-engineering; we were trying to leverage our existing investments (like Confluent BYOC) while optimizing for flexibility, cost, and performance. We wanted to stay loosely coupled to adapt to cloud restrictions across multiple geographic deployments.


What is the current (open source) state of the art for OLTP-to-OLAP pipelines these days? I don't mean a one-off ETL-style load at night, but a continuous process with relatively low latency.


Idk what the state of the art is, but I've used change data capture with Debezium and Kafka, sunk into Snowflake. Not sure Kafka is the right tool since you don't need the persistence, and having replication slots makes a lot of operations (e.g. DB engine upgrades) a lot harder.
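To make that setup concrete, a Debezium Postgres source connector registration looks roughly like this (hostnames, connector name, and database names are hypothetical; property names follow recent Debezium releases, and credentials are omitted):

```json
{
  "name": "orders-cdc",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "db.example.internal",
    "database.port": "5432",
    "database.user": "cdc_user",
    "database.dbname": "orders",
    "topic.prefix": "oltp",
    "plugin.name": "pgoutput",
    "slot.name": "debezium_slot"
  }
}
```

The `slot.name` is the replication slot mentioned above: it's what forces Postgres to retain WAL for the connector, which is exactly why engine upgrades get harder while it exists.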


I would say ClickHouse.

