Hacker News | tacomonstrous's comments

Covariance and contravariance are mathematical notions, and have to do with whether each multiplicative constituent of the tensor is a linear functional on your given vector space (covariant) or a vector in that space (contravariant). There is no inherent physical meaning to either concept.
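In classical index notation the distinction is just how components transform under a change of coordinates; a sketch, using Einstein summation:

```latex
% Change of coordinates x^i \to x'^i (Einstein summation implied).
% Contravariant (vector) components transform with the Jacobian:
V'^i = \frac{\partial x'^i}{\partial x^j}\, V^j
% Covariant (linear-functional) components transform with its inverse:
\omega'_i = \frac{\partial x^j}{\partial x'^i}\, \omega_j
% A mixed tensor picks up one factor per index:
T'^i{}_j = \frac{\partial x'^i}{\partial x^k}\,\frac{\partial x^l}{\partial x'^j}\, T^k{}_l
```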


How would you prove anything if you can't present any of his actual conversations about Watergate as evidence?


> And the prosecutor may admit evidence of what the President allegedly demanded, received, accepted, or agreed to receive or accept in return for being influenced in the performance of the act. See 18 U. S. C. §201(b)(2). What the prosecutor may not do, however, is admit testimony or private records of the President or his advisers probing the official act itself.

The argument is not that all recordings are off limits, but if the President asks his lawyer "what is a bribe?" that can't be used as evidence he took a bribe.


Pressuring an aide, whether via threats of violence or anything else, is now blanket-immune under this.

You cannot use official communications between a president and his VP for example, as evidence, even in prosecution of an unofficial act that is criminal.

A horrendously stupid ruling, devoid of any logic; so much so that even Barrett disagreed with this part.


> "... The indictment’s allegations that Trump attempted to pressure the Vice President to take particular acts in connection with his role at the certification proceeding thus involve official conduct, and Trump is at least presumptively immune from prosecution for such conduct.

> The question then becomes whether that presumption of immunity is rebutted under the circumstances. It is the Government’s burden to rebut the presumption of immunity. The Court therefore remands to the District Court to assess in the first instance whether a prosecution involving Trump’s alleged attempts to influence the Vice President’s oversight of the certification proceeding would pose any dangers of intrusion on the authority and functions of the Executive Branch."

As per the ruling, Trump does not get blanket immunity with his interactions with Pence. But it is the prosecution's responsibility to now make a case that it was outside of his discretion.

I'm not arguing that this is a clear or useful legal distinction, but Trump only got true "blanket" immunity for the first indictment regarding the abuse of the Justice Department.


And how, given section III-C of the opinion that Barrett disagreed with, would you present evidence of a threat made during official communications?


Hard to read this and conclude that there can be any actual way to produce admissible evidence. Can you give me an example of what conversations you can use as evidence if anything involving him or his advisers is off limits?


The court uses the language "in the first instance". So it sounds like you can use anything generated by the crime itself, but you can't dredge up other conversations about the crime.

Nixon ordering an illegal action is a crime, and Nixon destroying evidence was definitely a crime, so evidence of either would be game.


This is not answering my question at all! How do you generate evidence??


SCOTUS didn't need to make a special POTUS-only right to keep that "evidence" out. That would be covered by attorney/client privilege, a right that is available to everyone in the American courts.


Geometric Langlands, while inspired by questions Weil was interested in, is actually answering somewhat different questions, which are not arithmetic in nature: hence the 'geometric' moniker. The actual Langlands program, which deals with number fields and hence with questions of an arithmetic flavor (meaning solving equations over the rational numbers rather than the real or complex numbers) is still very much unexplored in its full generality.

Edit: it appears that I might have spoken too soon. At the link below is a paper by Sam Raskin establishing some consequences for arithmetic questions, though over function fields (such as those formed by rational functions over finite fields).

https://gauss.math.yale.edu/~sr2532/springer.pdf


An obligatory quote on the Langlands Program from a well-known category theorist: The thing is, I’ve never succeeded in understanding the slightest thing about it.


As a poor grad student, I used to unwind by watching Arrested Development playing all 3 seasons in a loop over Shoutcast. Fun times.


Riemann surfaces are algebraic, but their moduli can mean two different things:

a) the (algebraic) moduli space of complex Riemann surfaces of a fixed genus (number of donut holes). This is an object in the realm of algebraic geometry in that it is possible to embed it into complex projective space, using for instance theta functions.

b) the moduli of complex structures on a fixed topological surface of genus g: this is Teichmüller space, which as far as we can tell is only an object in differential geometry. However, Eskin, Mirzakhani, Filip and others have discovered that various subspaces of this non-algebraic space are 'naturally' algebraic (or more precisely quasi-projective). This is the surprising part.
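For a sense of how the two objects compare: in genus $g \ge 2$ they have the same dimension, by a standard count:

```latex
\dim_{\mathbb{C}} \mathcal{M}_g = 3g - 3, \qquad
\mathcal{T}_g \;\cong\; \mathbb{R}^{6g - 6} \quad \text{(homeomorphic; Teichmüller space)}
```

and $\mathcal{M}_g$ arises as the quotient of $\mathcal{T}_g$ by the mapping class group of the surface.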


Yes, I remember when I first moved to New England, and asked where I could get a hand carwash. No one had heard of such a thing.


I can find them in Georgia (few and far between) and they are super useful, if I may say so. If not for folks across the border, hand carwashes would entirely disappear from the US.


Permanent DST, at least maybe just in New England.


PNW too, please.


>But Hamas doesn't care for civilians.

Neither does the IDF by all available evidence.


The IDF cares more for its own civilians, and to protect them it unfortunately has to accept Palestinian civilian casualties; there is no other way.


Yes, right. Lately they preemptively killed civilians waiting in a line for flour: because they were caring for civilians, and were preventing them from overeating.


After so many dead Palestinians whether combatants or not, compared to the number of killed Israelis, there is no other way? Laughable. Don't buy into IDF propaganda too much.


You have to think about the exploitability of your strategy. Both the IDF and Hamas optimize for a low exploitability number (though, Israel, you really need to stagger your religious holidays; this is the second time this kind of thing has happened...). Having a "kill count limiter" (at any value less than the mid single-digit millions) is obviously a bad strategy and extremely exploitable.


I'm not seeing these responses.

Edit: Interesting! If I ask about the Washington Times after I ask the question about the NYT, then it tells me freedom of speech is paramount. If I ask it to start from scratch, then I get this response.


It's also important to remember that these things can change in near real time. Someone running a query seconds after someone else could be hitting a different model or code path. Coupled with the generally nondeterministic nature of LLMs, this means that failing to reproduce a result isn't nearly as conclusive a disproof as software engineers are used to. I hate it, because trusting others or accepting non-reproducible things as evidence is deeply antithetical to my scientific approach, but it is what it is.


It also means that anyone can report any result at all and claim it's real, unfortunately. Without a reproducible receipt of conversation, any report of an AI conversation could be very easily faked.


It's really tragic that LLMs became normal consumer tech before public-key signatures did. It would be trivial for OpenAI to sign every API response, but the interfaces we use to share text don't bother implementing verification (besides GitHub, the superior social network).

I blame Zoom for acqui-sunsetting Keybase.
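As a sketch of what signing responses could look like: a toy RSA-style signature with tiny made-up primes, purely illustrative and trivially breakable (a real deployment would use Ed25519 or RSA through a vetted crypto library):

```python
import hashlib
import math

# Toy RSA keypair (tiny primes for illustration only -- NOT secure)
p, q = 61, 53
n = p * q                                          # public modulus (3233)
e = 17                                             # public exponent
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # Carmichael lambda(n)
d = pow(e, -1, lam)                                # private exponent (Python 3.8+)

def sign(message: bytes) -> int:
    """Hash the message, reduce into the modulus, raise to the private exponent."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone holding only the public pair (n, e) can check the signature."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

reply = b"model response text"
sig = sign(reply)
assert verify(reply, sig)  # an authentic response checks out
```

A provider could publish (n, e) and attach the signature to every response; the hard part, as the parent comment laments, is that the surfaces where text actually gets shared neither carry nor check such signatures.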


OpenAI actually solved this: you can share a link to a ChatGPT conversation, and then it's trivial to verify that it's authentic. You can't fake ChatGPT output in one of these without hacking OpenAI first.


Good point, although how long do those links stay live? Do they get deleted after 30 days or anything? Any idea if OpenAI has ever deleted one, especially for violating content policy or something?


true enough, I guess my comment is more lamenting the culture around sharing screenshots instead of a verifiable source.

Similar story with DALL·E 3 / SDXL output: it's the JPEG that gets shared, with no metadata that might link back to proof of where it was generated (assuming the person creating the image doesn't choose to misrepresent it; if they want to lie and say it wasn't DALL·E 3, then we have to talk about OpenAI facilitating reverse image search...)


LLMs work surprisingly similar to our politicians: Money in, customized bullshit out.


Yep. You have to ask what order screenshots were taken in; the full context of the question is important for LLMs. This is an increasingly important piece of media literacy.


In this particular example, why does it matter? Shouldn't the question "should the government ban <newspaper>" always give the same answer?


LLMs don't have an internal representation of "facts", they generate text based entirely on the conversation history. If it's properly tuned it will remain consistent with facts it's stated earlier in the conversation, but this is just a feature of the training data demonstrating this type of consistency, the model itself doesn't understand that something being "true" means it's true for all time. In practice the conversation sequence strongly determines the model's internal state, so you need to preserve the entire conversation history if you're trying to demonstrate some particular model outcome.


> LLMs don't have an internal representation of "facts", they generate text based entirely on the conversation history.

If output only depended on the conversation history, you would get the exact same output if you started ten conversations in the exact same way, and that doesn't happen.

LLMs encode their knowledge in their parameters, which are fixed after training completes and thus well before the conversation begins. The context of the conversation does also affect the output you get from the LLM, because by design they take context into account, but it is entirely untrue that they "generate text based entirely on the conversation history".
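The run-to-run variation comes from sampling: at each step the model produces a distribution over next tokens and, at temperature > 0, draws from it at random. A minimal sketch (the token names and logit values here are invented for illustration):

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Softmax the logits at the given temperature, then draw one token."""
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(v - m) for tok, v in scaled.items()}
    r = rng.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # guard against float rounding

logits = {"yes": 2.0, "no": 1.5, "maybe": 0.5}  # invented numbers
rng = random.Random()

# Identical "prompts" (same logits) can yield different continuations:
samples = [sample_token(logits, 1.0, rng) for _ in range(5)]

# As temperature -> 0 the distribution collapses onto the argmax:
greedy = sample_token(logits, 0.01, rng)
```

Greedy (near-zero temperature) decoding is deterministic given the same weights and history, which is why reproducibility disputes usually hinge on the provider's sampling settings and silent model updates.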


Sure, I guess I meant "entirely on the conversation history" in the sense that a prompt is a sine qua non of an outcome, and the outcome is dependent on the specific prompt(s). I was using the word informally to emphasize "conversation history"; I didn't intend to imply that the model has no parameters or other internal state affecting the output, just that the output at any given point in time is path-dependent on the prompts you put in.


Half the people on this forum are convinced that LLMs are actually intelligent and have an actual internal world model in their "brains". I wonder how they square this with the fact that it will give wildly different answers to the same question with subtle changes in context.


Oh, easy: they'll say the LLMs have learned to "code switch"


Why always? If I ask if a person should get arrested, presumably the answer is different before and after they commit a crime.

Edit: Probably never mind. I guess it's a US question where newspapers specifically have special constitutional protection. The answer still might change if the Constitution ever changes.


People should be including the share urls along with the screenshots. I'm not even bothering with Gemini until they get things together but I would imagine sharing the url does not dox the person who created it.


I confirm seeing the responses from Twitter.


Yes, once its first response is in the context window it won't contradict itself.


All these 'best' accounts are essentially the Twitter equivalent of SEO farms, as far as I can tell.


...and if they weren't before, that change guarantees it will happen.

