Looked into this for my clawdbot, but ended up just using the himalaya CLI connected to a new Gmail account. Been working great so far - curious what agentmail is better for
We have had some users get banned from Gmail for using it with Clawdbot. Regardless, our API is way more agent-friendly, and I think your Clawdbot would agree.
I once used Charles Proxy to change all the game configs for Candy Crush Saga on my phone back in 2013 by intercepting and replacing the API requests - I made all the puzzles have 1-2 colors and infinite powerups. I guess they didn't care much about security, because I ended up spending way more time in the game.
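(For anyone curious what that kind of interception looks like in code: here's a minimal sketch of the same idea using a mitmproxy addon instead of Charles. The endpoint fragment and JSON keys are hypothetical, not Candy Crush's actual API.)

```python
# Sketch: rewrite a game's config response in flight.
# Run with: mitmproxy -s rewrite.py (the phone's proxy must point at mitmproxy,
# and its CA cert must be trusted for HTTPS traffic).
import json
from mitmproxy import http

def response(flow: http.HTTPFlow) -> None:
    # Hypothetical level-config endpoint; match whatever the real URL is.
    if "levelconfig" in flow.request.pretty_url:
        data = json.loads(flow.response.get_text())
        data["numColors"] = 2        # fewer candy colors per puzzle
        data["boosters"] = 999999    # effectively infinite powerups
        flow.response.set_text(json.dumps(data))
```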
> If you want more granularity (read: opacity), you can call the raw version of the color (append -raw to the variable name) which will return the OKLCH values so you can do something like this: oklch(var(--uchu-gray-1-raw) / 40%).
While I agree that we should be skeptical about the reasoning capabilities of LLMs, comparing them to chess programs misses the point. Chess programs were created specifically to play chess. That's all they could do. They couldn't generalize and play other board games, even related ones like Shogi and Xiangqi, the Japanese and Chinese versions of chess. LLMs are amazing at doing things they were never programmed to do, almost by accident.
Here's an example. I'm interested in obscure conlangs like Volapük. I can feed an LLM (which had no idea what Volapük was) an English-language grammar of Volapük, and suddenly it can translate to and from the language. That couldn't work with a chess program: I couldn't give it a rulebook for Shogi and have it play that.
Apologies, I was a bit curt because this is a well-worn interaction pattern.
I don't mean anything by the following either, other than that the goalposts have moved:
- This doesn't say anything about generalization, nor does it claim to.
- The occurrences of the prefix general* refer to "Can fine-tuning with synthetic logical reasoning tasks improve the general abilities of LLMs?"
- This specific suggestion was accomplished publicly to some acclaim in September
- To wit, the benchmark the article is centered around hasn't been updated since September, because the preview of the large model blew it out of the water: the best score on all puzzles at the time was 33%, and it hit 71%: https://huggingface.co/spaces/allenai/ZebraLogic
- These aren't supposed to be easy; they're constraint-satisfaction problems, which, as they point out, are used on the LSAT
- The other major form of this argument is the Apple paper, which shows a 5-point drop, from 87% to 82%, on a home-cooked model
Give a group of "average human" two years, give or take 6 months, and they will also saturate the benchmark and probably some humans would beat the SOTA LLM/RLM.
People tend to do so all the time, with games for example.
> OpenAI shared they trained the o3 we tested on 75% of the Public Training set.
I'm talking about transfer learning and generalization. A human who has never seen the problem set can be told the rules of the problem domain and then get 85+% on the rest. o3 high compute requires 300 examples using SFT to perform similarly. An impressive feat, but obviously not enough to just give an agent instructions and let it go. Still, 300 examples for human-level performance on the specific task is impressive compared to SOTA 2 years ago. It will be interesting to see performance on ARC-AGI-2.
I spent 10 hrs this week upgrading our pandas/snowflake libs to latest bc there was apparently a critical vulnerability in the version we used (which we have to fix bc a security cert we need requires it). The latest versions are not major upgrades, but they completely changed the types of params accepted. Enormous waste of time delivering 0 value to our business.
Security updates are probably the only type of update I wouldn't ever call a waste of time. It sucks when they're conflated with feature updates or arbitrary changes, but on their own I don't understand calling them a waste of time.
They are when the only reason they're flagged as security updates is that a single group deems a very rare, obscure edge case a HIGH-severity vuln when in practice it rarely is => you're forced to bump a minor version of a library, which then causes breaking changes.
Like the top comment, my first exposure to programming was BASIC on a TI-86 (better than the 83, but outdone by the 84 shortly after)
My first program was doubly cheating: not only did I have a program for solving the quadratic equation (a rough equivalent is sketched below), but I copied the BASIC off the internet in true open-src fashion
When I told my dad I copied the code from the internet, he was so disappointed and thought I had 0 skills. Now we pip/npm/etc install anything and are heroes for choosing to "buy" not "build".
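(For the curious, the program in question is only a few lines in any language - here's a rough Python equivalent of that TI-BASIC quadratic solver; the function name is mine, not the original's.)

```python
# Quadratic formula: roots of ax^2 + bx + c = 0.
import cmath  # cmath so complex roots don't blow up

def solve_quadratic(a: float, b: float, c: float):
    d = cmath.sqrt(b * b - 4 * a * c)  # square root of the discriminant
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(solve_quadratic(1, -3, 2))  # ((2+0j), (1+0j)) -> roots 2 and 1
```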
We're all on https://www.sigmacomputing.com/ bc we don't like hosting/managing/provisioning essential tools like this, plus this one seems more complicated to configure.