It’s part of the change management process that all code is reviewed, as required by several compliance agreements. What’s probably happened is that poor peer reviews from other junior engineers got missed. That’s a lot of code reviews to send upstream.
The wildfires are entirely down to lack of land management, which is why the insurance companies noped out. The crazies believe that managing the lands is an environmental impact, and then it all burns down. At least someone can blame the fires on something else. I guess it’s a win?
Such wrongthink! Any forum that adopts this would be worth studying, if only for helping to create a score rating how soulless a site is. Maybe they could make the inverse: something that rewrites whole forums into happy thoughts. A rose-tinted-glasses client? The name needs work.
The author's bias is different for each specific author. We should not pretend that there are moderators without bias; every AI-driven moderation tool inherits the bias of its human author.
The LLMs that power all this are "aligned", that is, they're subjected to manipulation that installs specific biases in them, and so on.
My friend's kids have access to his home servers. They don't get to roam the internet. It's shocking to think parents might structure their children's lives.
I bullet-pointed some ideas on cobbling together existing tooling to identify misleading results, like artificially elevating a particular node of data that you want the LLM to use. I have a theory that in some of these cases the data presented is intentionally incorrect. A related theory is that the tonality abruptly changes in the response. All theory and no work so far. It would also be interesting to compare multiple responses and filter them through another agent.
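A minimal sketch of the "compare multiple responses" idea, assuming a hypothetical hand-written sample of model outputs and using plain string similarity as a crude stand-in for the judge agent:

```python
import difflib
from itertools import combinations

def response_agreement(responses):
    """Mean pairwise similarity of each response against the others.

    A response whose wording diverges sharply from its peers is a
    candidate for the "intentionally incorrect / tonal shift" check.
    """
    scores = {i: [] for i in range(len(responses))}
    for i, j in combinations(range(len(responses)), 2):
        s = difflib.SequenceMatcher(None, responses[i], responses[j]).ratio()
        scores[i].append(s)
        scores[j].append(s)
    return {i: sum(v) / len(v) for i, v in scores.items()}

def flag_outliers(responses, threshold=0.45):
    """Indices of responses that agree poorly with the rest of the sample."""
    agreement = response_agreement(responses)
    return [i for i, score in agreement.items() if score < threshold]

# Hypothetical sample: three consistent answers and one divergent one,
# as if the same prompt had been sent to the model several times.
sample = [
    "Paris is the capital of France.",
    "The capital of France is Paris.",
    "Paris is the capital city of France.",
    "Actually, the answer depends on who you ask!",
]
print(flag_outliers(sample))
```

In practice the `SequenceMatcher` comparison would be replaced by a second model scoring semantic (or tonal) agreement, but the filtering structure stays the same.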