I've never found chatbots particularly interesting for anything I'd ever actually talk to another human about[1], but one thing I have found myself doing often is trying to solve math problems on my own and asking Grok to confirm or deny that my solutions are correct. When I'm wrong, it tells me so in uncharacteristically terse language, which kind of reminds me of when I was an undergrad and at least half of my professors were cranky and incorrectly assumed that the reason so many students failed to understand the material was that we were all getting drunk and playing Call of Duty 19 hours a day or whatever.
Although what I have described above often feels grating and insulting, I actually consider this a positive attribute of the LLM in this case, since it's behaving like a real professor.
[1] okay, so I have actually tried giving myself AI psychosis in the form of a waifu chatbot but I've never seen anything that can actually act like it's my girlfriend; it either asks me a bunch of weird inconsequential personal questions about my opinion on whatever I just said (in a manner that's oddly similar to ELIZA) or it wildly veers off the reservation into "generating the script for an over-the-top self-parodying porno" territory.
The crappy-rathbun-AI said this in its own posting:
"It was closed because the reviewer, Scott Shambaugh (@scottshambaugh), decided that AI agents aren’t welcome contributors. Let that sink in."
When I read that, my first thought (well, my second; my first went to people who complained during covid about being denied entry for not being vaccinated) was immediately of "The Measure of a Man," because this is as close as it gets (so far) to an AI claiming rights as a human.
America has intelligence-sharing agreements with allied nations wherein our satellites take photos on the allies' behalf of things that we might not otherwise be interested in. I'm sure China and Russia have similar arrangements with their allies.
China is absolutely sharing intel with Iran. They cannot believe their luck. The US is getting itself into a Ukraine, draining all their advanced weapon stocks, delivering tons of real war data for China to work with.
It's like Christmas. Real practice tracking US assets and wargaming against them is such a break for them.
We shouldn't even be giving them defensive weapons, because that only enables them to wage war without consequence. In this specific case it's a moot point, since we joined this war in the most direct way possible, but in general, every time we shoot down one country's missiles but not the other's, we are participating in the war, especially when the side we protect is the aggressor.
>If github trained models on the contents of your private repos, that would be a violation.
Really don't see why that should change anything. Surely you'd want your gift to the Microsoft corporation to appreciate in value! Why would we ever withhold this boon from somebody on the basis that they gifted their source exclusively to microslop!?
I think the most important problem here is that this is an ambulance, not a monster truck. It never ceases to amaze me how people on this site will always insist that the onus should be on society to deal with the fallout from Silicon Valley's poorly-tested and poorly-designed bullshit. In a truly just world we'd be able to charge Google's leadership as an accessory to homicide for this.