I wonder if there's a strategy behind all of this on China's side. I know the CCP takes a direct hand in many affairs in China, but is there an actual coordinated effort to compete with, or sabotage, the West?
Seems obvious to me that China would not want to give the AI market to US companies. You don't even need anything like an attempt to "sabotage the West". If I were them (the companies or the government) I'd be very hesitant to let US companies dominate this space. Especially companies that are so close to the current US administration.
Hypothesizing here, but maybe the idea is a form of technological/economic warfare? Releasing performance-equivalent yet more cost-efficient open-weight models should, in theory, drive the cost of inference down everywhere.
This, I assume, will make it more difficult for US AI labs to turn a profit, which might make investors question their sky-high valuations.
Any sort of meltdown in the AI sector would almost certainly spread to the wider US market.
In contrast, in China, most of the funding for AI is coming directly from the government, so it's unlikely the same capital flight scenario would happen.
Chinese AI companies want investors too. Nobody would believe they can compete with western companies unless they release something you can run on your own hardware.
After all, historically, both the statistics and the research that come out of China have not been very trustworthy.
Even the smaller quantized models that can run on consumer hardware pack in an almost unfathomable amount of knowledge. Before the LLM boom, I don't think I expected to be able to run a 'local Google' in my lifetime.
I'm extremely curious how these models learn to pack a lossily-compressed representation of the entire Internet (more or less) into a few hundred billion parameters. like, what's the ontology?
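A back-of-envelope calculation makes the compression angle concrete. Every number below is a rough assumption I picked purely for illustration, not a measurement of any real model:

```python
# Back-of-envelope: how much raw text could a few hundred billion
# parameters "hold"? All figures are illustrative assumptions.

params = 400e9            # assume a ~400B-parameter model
bits_per_param = 8        # assume 8-bit quantized weights
model_bytes = params * bits_per_param / 8      # -> 400 GB of weights

corpus_bytes = 50e12      # assume ~50 TB of deduplicated training text

print(f"model:  {model_bytes / 1e9:.0f} GB")
print(f"corpus: {corpus_bytes / 1e12:.0f} TB")
print(f"lossy compression ratio: ~{corpus_bytes / model_bytes:.0f}:1")
```

Under those assumptions you get a roughly 100:1 lossy "compression" ratio, which is why the representation has to be semantic rather than verbatim.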
On the client side, how did they do this? I worked with a team reverse-engineering another MMO a few years ago, and it was only because of a plain XML config and game launch args that we could easily make the client connect to a private server without modifications. Blizzard could just implement DRM and put an end to all this, right?
WoW (classic era through MoP) stores all game assets in MPQ archives. The client has a built-in override system: it loads patch files in order (patch-1.mpq, patch-2.mpq, ..., patch-A.mpq, patch-B.mpq, etc.), and later patches override earlier ones. So to add custom content, you just drop a new patch-X.mpq into the Data/ folder with your modified files, and the client picks them up automatically.
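In case the override semantics aren't clear, here's a toy model of that load-order behaviour. This is not real MPQ parsing; the archive names and file contents are made up for illustration:

```python
# Toy model of the client's patch-override behaviour: archives are applied
# in load order, and a file in a later archive shadows the same virtual
# path from any earlier archive.

def resolve(archives):
    """archives: [(archive_name, {virtual_path: data}), ...] in load order."""
    resolved = {}
    for name, files in archives:
        for path, data in files.items():
            resolved[path] = (name, data)  # later archive wins
    return resolved

base = ("common.mpq", {r"DBFilesClient\ChrRaces.dbc": b"stock races"})
custom = ("patch-X.mpq", {r"DBFilesClient\ChrRaces.dbc": b"stock + custom race"})

source, _ = resolve([base, custom])[r"DBFilesClient\ChrRaces.dbc"]
print(source)  # -> patch-X.mpq
```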
For something like Turtle WoW's custom races and zones, that means shipping modified DBC files (the client-side database tables: ChrRaces.dbc, CharBaseInfo.dbc, etc.), new models/textures, modified Lua for the character-creation UI, and map data for the new zones, all packaged in an MPQ that players download alongside the client.
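Those DBC tables are simple fixed-record files, which is part of why this kind of modding is tractable. A sketch of a header reader, assuming the documented classic-era layout (a 'WDBC' magic followed by four uint32 counts; the example path is illustrative):

```python
import struct

def read_dbc_header(path):
    """Read the fixed 20-byte header of a classic-era DBC file.
    Little-endian layout: b'WDBC' magic, then four uint32s:
    record count, field count, record size, string block size."""
    with open(path, "rb") as f:
        magic, records, fields, rec_size, str_block = struct.unpack(
            "<4sIIII", f.read(20))
    if magic != b"WDBC":
        raise ValueError(f"not a DBC file: {magic!r}")
    return {"records": records, "fields": fields,
            "record_size": rec_size, "string_block_size": str_block}

# e.g. read_dbc_header("Data/DBFilesClient/ChrRaces.dbc")
```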
As for DRM: Blizzard moved away from MPQ to CASC (its own proprietary archive format) starting in WoD, which makes this kind of modding significantly harder on modern clients. But the classic-era client binaries have been in the wild for 15+ years, so that ship has sailed.
Each patch is essentially a mod on top of the previous version, originally done to limit download bandwidth; WoW had a number of mechanisms like that which aren't strictly necessary anymore.
When I played on a private server, you used an old version of the client binary. So even if Blizzard implemented DRM now, it wouldn’t impact these old versions.
It’s not a waste of time.
As the boundaries of AI are pushed, we increasingly struggle to define what intelligence actually is. It becomes more useful to test what models cannot do rather than what they can. Random tasks like the pelican test can show how general the intelligence really is, putting aside the obvious flaw that labs can optimise for such a simple public benchmark.
The whole point of this benchmark is that it asks the model to work in a modality it is not trained in and does not understand well. The result is largely meaningless. This is just like the people who are endlessly surprised by the fact that a raw LLM does not work with numbers well, or miscounts letters. In short, this test benchmarks the intelligence of the person running it, not of the model.
The rasterised SVG is just a different representation of the same data. A sufficiently advanced LLM may not need to 'see' the rasterised image to be able to draw a good picture; a human could draw a very basic image in raw SVG just by mentally plotting points.
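To make "mentally plotting points" concrete, here's a minimal sketch of drawing purely in text: a Python script that writes a crude bird as raw SVG coordinates without ever rendering a pixel. The shapes and coordinates are invented for illustration:

```python
# "Drawing" by emitting raw SVG text, the way an LLM does it token by
# token: pick coordinates in your head, never look at the rendered image.

def bird_svg():
    body = "M 40,60 Q 60,40 80,60 Q 60,75 40,60 Z"  # lens-shaped body
    beak = "M 80,58 L 95,61 L 80,64 Z"              # small triangle beak
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" width="120" height="100">'
        f'<path d="{body}" fill="grey"/>'
        f'<path d="{beak}" fill="orange"/>'
        '<circle cx="55" cy="52" r="2"/>'           # eye
        '</svg>'
    )

with open("bird.svg", "w") as f:
    f.write(bird_svg())
```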
CEO is one of the least meritocratic jobs ever. It's all just vibes, and the vibes are based on what school you went to, who your parents know, and where you grew up. Deep down they probably know this, hence the insecurity. If it were a meritocracy, they'd be toppled fast.
Can we agree that you are exaggerating? Not that you are totally wrong, but the flip side is that CEOs do need a different skill set. Workers who excel at the bureaucratic grind might not make the best leaders, for lack of vision and empathy. Then again, the concept of an empathetic leader also seems to be foreign to many. It's hard to see anything with all the bullshit covering everything.
This is the beginning of AI clouds, in my estimation. Cloud services provide needed lock-in and support the push to offer higher-level services on top of the models. It just makes sense; they'll never recoup the costs on inference alone.
Sort of, but not really. It's more that people get complacent/ignorant when it comes to matters of power (they shy away from it), so by choosing to let go (a selfish endeavor) they create a power vacuum, which others seize for their own interests. In other words, democracy is not self-sustaining; it requires constant participation by everyone. As soon as people opt out, you have a minority determining things for everyone, and you're no longer truly a democracy.