Hacker News | Scene_Cast2's comments

Out of curiosity, how many tokens are people using? I checked my openrouter activity - I used about 550 million tokens in the last month, 320M with Gemini and 240M with Opus. This cost me $600 in the past 30 days. $200 on Gemini, $400 on Opus.

  My Claude Code usage stats after ~3 months of heavy use:

    Favorite model: Opus 4.6          Total tokens: 42.6m
    Sessions: 420                     Longest session: 10d 2h 13m
    Active days: 53/95                Longest streak: 16 days
    Most active day: Feb 9            Current streak: 4 days

    ~158x more tokens than Moby-Dick

  Monthly breakdown via claude-code-monitor (not sure how accurate this is):

    Month     Total Tokens     Cost (USD)
    2026-01     96,166,569       $112.66
    2026-02    340,158,917       $393.44
    2026-03  2,183,154,148     $3,794.51
    2026-04  1,832,917,712     $3,412.72
    ─────────────────────────────────────
    Total    4,452,397,346     $7,713.34

Have you tried the latest Gemini (3.1 Pro)? In my experience, it's notably better than Opus 4.6 for similar types of problems. However, I don't really use OpenAI products, so I can't compare there.

I actually haven't - I tried Gemini 3.0 Pro in Antigravity and was disappointed enough that I didn't pay much attention to the 3.1 release. It was notably worse than Opus and GPT at the time, and much more prone to "thinking" in circles or veering off into irrelevant tangents even with fairly precise instructions. I'll give 3.1 a try tomorrow and see what happens.

I had the same theory, but IIRC the H2 isn't much better with radio on.

They remind me more of the Korean TV series Cashero. There, the main hero has strength superpowers, but the power comes from burning through the cash in his pocket.

But I use the per-token APIs for my usage, not a subscription, so I'm guessing I'm in the minority.


I also use them per-token (and strongly prefer that due to a lack of lock-in).

However, from a game theory perspective, when there's a subscription, the model makers are incentivized to maximize problem solving in the minimum amount of tokens. With per-token pricing, the incentive is to maximize problem solving while increasing token usage.


I don't think this is quite right, because it's the same model underneath. The problem can manifest more through the tooling on top, but even there it's hard to pull off without people catching on.

I do agree that Big AI has incentives misaligned with users, generally speaking. This is why I pay per-token with a custom agent stack.

I suspect the game-theoretic aspects come into play more with quantization. Anecdotally, I have not experienced this in my API-based, per-token usage - i.e., I'm getting what I pay for.


Same here. Using it on two boxes, makes Linux sysadmin work easier.


Some more love for the updates page. E.g. letting you select a subset of updates to install, or being clearer that the last-update time can differ if you installed updates via the CLI - that kind of thing.


There's also Moth-class sailing or windfoiling for mere mortals.


And they aren’t slow either!!

https://youtube.com/shorts/lste7qSyWlw


Are there any good static (i.e. not runtime) type checkers for arrays and tensors? E.g. "16x64x256 fp16" in numpy, pytorch, jax, cupy, or whatever framework. Would be pretty useful for ML work.


We're working on statically checking Jaxtyping annotations in Pyrefly, but it's incomplete and not ready to use yet :)


This would be an insta-switch feature for me! Jaxtyping is a great idea, but the runtime-only aspect kills it for me - I just resort to shape assertions + comments, which is a pretty poor solution.

A follow-up question: Google's old `tensor_annotations` library (RIP) could statically analyse operations - eg. `reduce_sum(Tensor[Time, Batch], axis=0) -> Tensor[Batch]`. I guess that wouldn't come with static analysis for jaxtyping?
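For reference, the "shape assertions + comments" fallback mentioned above can be wrapped into a small helper. This is just a sketch - `check_shape` is a hypothetical name, and the `SimpleNamespace` stand-in is only there to keep the snippet dependency-free; anything exposing `.shape` (numpy, torch, jax arrays) works the same way:

```python
from types import SimpleNamespace

def check_shape(arr, expected):
    # expected is a tuple of ints or None (None = wildcard dim),
    # e.g. (None, 64, 256) for a "batch 64 256" activation
    shape = tuple(arr.shape)
    assert len(shape) == len(expected), \
        f"rank mismatch: {len(shape)} vs {len(expected)}"
    for i, (got, want) in enumerate(zip(shape, expected)):
        assert want is None or got == want, \
            f"dim {i}: got {got}, expected {want}"
    return arr

# a SimpleNamespace stands in for a real array here
x = SimpleNamespace(shape=(16, 64, 256))
check_shape(x, (None, 64, 256))  # passes
```

Returning `arr` lets the check be dropped inline (`y = f(check_shape(x, ...))`), but it's still runtime-only, which is exactly the limitation static checking would remove.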


- /?hnlog pycontract icontract https://westurner.github.io/hnlog/ :

From https://news.ycombinator.com/item?id=14246095 (2017) :

> PyContracts supports runtime type-checking and value constraints/assertions (as @contract decorators, annotations, and docstrings).

> Unfortunately, there's yet no unifying syntax between PyContracts and the newer python type annotations which MyPy checks at compile-time.

Or beartype.

PyContracts (https://andreacensi.github.io/contracts/) has:

  @contract
  def my_function(a: 'int,>0', b: 'list[N],N>0') -> 'list[N]':
      ...

  @contract(image='array[HxWx3](uint8),H>10,W>10')
  def recolor(image):
      ...

For icontract, there's icontract-hypothesis.

parquery/icontract: https://github.com/Parquery/icontract :

> There exist a couple of contract libraries. However, at the time of this writing (September 2018), they all required the programmer either to learn a new syntax (PyContracts) or to write redundant condition descriptions ( e.g., contracts, covenant, deal, dpcontracts, pyadbc and pcd).

  @icontract.require(lambda x: x > 3, "x must not be small")
  def some_func(x: int, y: int = 5) -> None:
      ...

icontract with numpy array types:

  @icontract.require(lambda arr: isinstance(arr, np.ndarray))
  @icontract.require(lambda arr: arr.shape == (3, 3))
  @icontract.require(lambda arr: np.all(arr >= 0), "All elements must be non-negative")
  def process_matrix(arr: np.ndarray):
      return np.sum(arr)

  invalid_matrix = np.array([[1, -2, 3], [4, 5, 6], [7, 8, 9]])
  process_matrix(invalid_matrix)
  # Raises icontract.ViolationError


Parquery/icontract: https://github.com/Parquery/icontract

mristin/icontract-hypothesis: https://github.com/mristin/icontract-hypothesis :

> The result is a powerful combination that allows you to automatically test your code. Instead of writing manually the Hypothesis search strategies for a function, icontract-hypothesis infers them based on the function's precondition. This makes automatic testing as effortless as it goes.
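The core idea - deriving test inputs from a function's precondition - can be sketched in a few lines. This is a toy stand-in, not icontract-hypothesis's actual API: `require` and `auto_test` are hypothetical names, and real strategy inference is far smarter than rejection sampling:

```python
import random

def require(precondition):
    # toy contract decorator: enforces the precondition at call
    # time and exposes it so a test driver can reuse it
    def deco(fn):
        def wrapper(x):
            assert precondition(x), "precondition violated"
            return fn(x)
        wrapper.precondition = precondition
        return wrapper
    return deco

@require(lambda x: x > 3)
def scaled(x):
    return x * 2

def auto_test(fn, trials=200, lo=-100, hi=100):
    # crude stand-in for strategy inference: sample candidates and
    # exercise the function only on inputs meeting its precondition
    tested = 0
    for _ in range(trials):
        x = random.randint(lo, hi)
        if fn.precondition(x):
            fn(x)
            tested += 1
    return tested
```

The real library hands the inferred strategies to Hypothesis, which also does shrinking and edge-case generation rather than uniform sampling.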

pschanely/CrossHair: An analysis tool for Python that blurs the line between testing and type systems https://github.com/pschanely/CrossHair :

> If you have a function with type annotations and add a contract in a supported syntax, CrossHair will attempt to find counterexamples for you: [gif]

> CrossHair works by repeatedly calling your functions with symbolic inputs. It uses an SMT solver (a kind of theorem prover) to explore viable execution paths and find counterexamples for you
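To illustrate what "finding counterexamples" means here, a brute-force sketch of the idea (the names are hypothetical; CrossHair explores symbolic inputs with an SMT solver instead of enumerating concrete ones):

```python
def my_abs(x: int) -> int:
    return x if x > 0 else -x

def find_counterexample(fn, postcondition, candidates):
    # brute-force stand-in for CrossHair's symbolic execution:
    # try concrete inputs instead of SMT-solved symbolic ones
    for x in candidates:
        if not postcondition(x, fn(x)):
            return x
    return None

# claimed postcondition: the result is strictly positive - wrong at 0
cx = find_counterexample(my_abs, lambda x, r: r > 0, range(-10, 11))
print(cx)  # prints 0
```

The SMT-based approach finds such inputs without exhaustive search, which is what makes it tractable over large input spaces.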


Check out optype (specifically the optype.numpy namespace). If you use scipy, scipy-stubs is compatible and the developer of both is very active and responsive. There's also a new standalone stubs library for numpy called numtype, but it's still in alpha.


Jaxtyping is the best option currently - despite the name it also works for Torch and other libs. That said, I think it still leaves a lot to be desired. It's runtime-only, so unless you wire it into a typechecker it's only a hint. And, for me, the hints aren't parsed by Intellisense, so you don't see shape hints when calling a function - only when directly reading the function definition.

Personally, I also think the syntax is a little verbose: for a generic shape hint you need something like `Shaped[Array, "m n"]`. But 95% of the time I only really care about the shape "m n". It doesn't sound like much, but I recently tried hinting a codebase with jaxtyping and gave up because it was adding so much visual clutter, without clear benefits.


There have been some early proposals to add something like that, but none of them have made it very far yet. As you might imagine, it's a hard problem!


Of typical homelabs that are posted and discussed.

The online activity of the homelab community leans towards those who treat it as an enjoyable hobby as opposed to a pragmatic solution.

I'm on the other side of the spectrum. Devops is (at best) a neutral activity; I personally do it because I strongly dislike companies being able to do a rug-pull. I don't think you'll see setups like mine too often, as there isn't anything to brag about or to show off.

