Interesting! I have the opposite experience. I've gone on a bunch of very low-carb diets, maybe ten at this point (not sure if they were exactly ketogenic or not), and I've enjoyed them. But every time I try low-carb, my sleep gets extremely light (I'll wake up 3 or 4 times a night), I sleep a couple hours less than normal, and I get weirdly vivid dreams. It's always the weird sleep that makes me quit the diet.
It's analogous to the real and imaginary parts of a complex number. Does it make sense to add a real number and a purely imaginary number? Aren't they different kinds of things? Yes, and yes!
It would be really helpful if the article had some worked examples of this arithmetic, with actual numbers, the kind with digits and decimal points in. Then, things like the structure of a geometric product would be absolutely clear.
Let me have a go. I'll use an asciified version of the symbols, with * to mean multiplication of two scalars, ** to mean raising one scalar to the power of another, and _ to mean taking a component of a vector.
The dot product makes a scalar, the wedge product makes a bivector, and the geometric product makes a scalar plus a bivector.
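For concreteness, here's a little Python sketch with made-up numbers: two vectors that are nearly parallel, mostly y and z with a little x.

```python
# Two example vectors (made up for illustration), as (x, y, z) components.
a = (0.2, 1.0, 1.0)
b = (0.1, 0.9, 1.1)

def dot(u, v):
    # Scalar part: sum of componentwise products.
    return sum(ui * vi for ui, vi in zip(u, v))

def wedge(u, v):
    # Bivector part: coefficients of (x^y, x^z, y^z).
    return (u[0] * v[1] - u[1] * v[0],
            u[0] * v[2] - u[2] * v[0],
            u[1] * v[2] - u[2] * v[1])

def geometric(u, v):
    # Geometric product: a scalar plus a bivector.
    return dot(u, v), wedge(u, v)

scalar, bivector = geometric(a, b)
print(scalar)    # about 2.02
print(bivector)  # about (0.08, 0.12, 0.2), up to float rounding
```

So the geometric product here is roughly 2.02 + 0.08 (x^y) + 0.12 (x^z) + 0.2 (y^z).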
You will note that the scalar part is much bigger than the coefficients of the bivector part. That's because the input vectors are actually quite similar - pointing mostly along y and z, with a little bit of x. Hence, their projection onto each other is large, whereas the parallelogram they form is quite small (long and thin). The dot product measures the former, the wedge product the latter.
Have I got that right?
EDIT And to clarify this:
> For any basis vector, such as the x axis, the result [of taking the geometric product with itself] is 1
That '1' isn't the scalar number 1, it's the scalar-plus-bivector 1 + 0 (x^y) + 0 (x^z) + 0 (y^z).
Sure, you've defined a map V⊗V → R⊕T, where T means 2-forms. But I still don't see why this is useful, apart from being able to extract from it both the wedge product and the inner product which you started with.
I personally don't find the "bits-y" explanation of entropy/cross-entropy/KL etc. to be all that intuitive; as fundamental as it may be, I don't think about compression/encodings all that often. I've always preferred the "surprise" interpretation: http://charlesfrye.github.io/stats/2016/03/29/info-theory-su...
In short: given some event of probability p, -log p = log 1/p is its "surprise". (If p = 1, log 1/1 = 0, so zero surprise; as p -> 0, the surprise gets bigger and bigger; and the surprise for two independent events, p = p1 * p2, is the sum of their individual surprises: log 1/(p1*p2) = log 1/p1 + log 1/p2.)
The entropy of a distribution is its average surprise: Sum/Integral of p log 1/p.
KL(p || q) is your excess surprise if you think something's distribution is q but it's actually p: Sum/Integral p (log 1/q - log 1/p). The KL divergence is always non-negative because surely if you think the distribution is q but it's actually p, on average you're going to be more surprised than someone who knows it's p.
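A quick Python sketch of all three (the distributions are made up; I'll use base-2 logs, but any base works):

```python
import math

def surprise(p):
    # -log2 p = log2 1/p: improbable events are more surprising.
    return -math.log2(p)

def entropy(dist):
    # Average surprise of a distribution under itself.
    return sum(p * surprise(p) for p in dist)

def kl(p, q):
    # Excess surprise of believing q when the truth is p.
    return sum(pi * (surprise(qi) - surprise(pi)) for pi, qi in zip(p, q))

p = [0.5, 0.25, 0.25]
q = [1/3, 1/3, 1/3]

print(surprise(0.5))  # 1.0 (one bit of surprise)
print(entropy(p))     # 1.5
print(kl(p, q))       # positive: the q-believer is more surprised on average
print(kl(p, p))       # 0.0: no excess surprise if your beliefs are right
```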
If you're introducing a new term solely for the sake of explaining something, then your fundamentals are wrong.
Bits are fundamental to understanding why we can encode simple numbers in a GNN. If you don't understand that, then surprise-surprise - you need to create another framework, possibly misleading further down the line.
Hard to say if it's worth the effort, but I've taught myself enough French to read classics (e.g. Hugo, Descartes, Tocqueville), and it really is an immense pleasure. I imagine I'll get a kick out of it for the rest of my life.
I've been teaching myself French (from scratch) for the past two years. I read nearly as well as I do in English, and I listen well (some slangy television dialogue is still a bit tough).
Some things that have worked for me:
0. Goes without saying, but consistent effort. I've done at least a little French every day for the past two years.
1. Reading a lot. I started with the Harry Potters and now read for pleasure pretty much exclusively in French.
2. Reading on a Kindle. Instant dictionary lookup! This is such a big efficiency boost that I think that reading physical books is simply a mistake.
3. Listening a lot. I listen to about an hour of French podcasts/youtube channels a day.
4. Studying grammar. I mainly study grammar when I run into something tricky while reading, but I really do study.
5. Flashcards. I've only started making them in the past month or so, but yes, they really do work. I feel silly for not realizing that sooner. I highlight interesting words/expressions as I read and periodically dump them onto index cards.
I like to think of a functor/applicative/monad f a as being "more or less" an a.
For example, take a list of Strings. It's "more or less" a single string, it just hasn't made up its mind yet :) When you say fmap reverse myStrings, you're saying: yeah yeah, I know my string hasn't made up its mind yet, but just reverse it. So you reverse all the possibilities. When you say
(++) <$> myStrings <*> yourStrings
you're saying: yeah yeah, I know my string hasn't made up its mind yet, and neither has your string, but just concatenate them. So you concatenate all the pairwise possibilities. The monad stuff says you can do control flow: (yeah yeah), but if my string is this long, and your string is a palindrome...
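If it helps, here's roughly what that list example looks like in Python (just a sketch of the idea, not Haskell's actual machinery):

```python
from itertools import product

my_strings = ["ab", "cd"]    # my string hasn't made up its mind yet
your_strings = ["X", "YZ"]   # neither has yours

# fmap reverse myStrings: reverse every possibility.
reversed_strings = [s[::-1] for s in my_strings]
print(reversed_strings)  # ['ba', 'dc']

# (++) <$> myStrings <*> yourStrings: concatenate all pairwise possibilities.
concatenated = [a + b for a, b in product(my_strings, your_strings)]
print(concatenated)  # ['abX', 'abYZ', 'cdX', 'cdYZ']
```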
Promises: A Promise Int isn't an Int, but eh, it's more or less an Int. Just as you can add 1 to a regular Int, you can fmap (+1) promisedInt. Just as you can replicate 3 'a' to get "aaa", you can
replicate <$> promisedInt <*> promisedChar
to promise some repetitive String. (And presumably promisedInt and promisedChar will resolve themselves in parallel!) The monad stuff: if this promised Int ends up being prime, then...
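In Python you could sketch the same idea with futures (fmap and lift2 here are made-up helper names, not part of concurrent.futures):

```python
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor()

def fmap(f, fut):
    # Lift f over a future: a new future for f applied to the eventual value.
    return pool.submit(lambda: f(fut.result()))

def lift2(f, fut_a, fut_b):
    # Applicative-style: combine two futures with a binary function.
    return pool.submit(lambda: f(fut_a.result(), fut_b.result()))

promised_int = pool.submit(lambda: 3)     # a Promise Int: more or less an Int
promised_char = pool.submit(lambda: "a")  # a Promise Char

plus_one = fmap(lambda n: n + 1, promised_int)
repetitive = lift2(lambda n, c: c * n, promised_int, promised_char)

print(plus_one.result())    # 4
print(repetitive.result())  # aaa
pool.shutdown()
```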
Parsers: A Parser Int (something that can gobble up a string and produce an Int) isn't an Int, but (in some contexts, if it's helpful) you can think of it as being more or less an Int. Saying fmap (+1) intParser says make a parser that parses whatever intParser does, and then adds 1 to the result. Doing
replicate <$> intParser <*> charParser
makes a parser that tries to parse an Int, and then a Char, and then gives you a repetitive String. Monad stuff: if I parse a prime number, then...
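A toy Python sketch of the parser idea, treating a parser as a function from a string to (value, rest) or None (all names made up):

```python
def int_parser(s):
    # Gobble up leading digits and produce an Int.
    i = 0
    while i < len(s) and s[i].isdigit():
        i += 1
    return (int(s[:i]), s[i:]) if i else None

def char_parser(s):
    # Gobble up a single character.
    return (s[0], s[1:]) if s else None

def fmap(f, p):
    # Parse whatever p does, then apply f to the result.
    def q(s):
        r = p(s)
        return (f(r[0]), r[1]) if r else None
    return q

def lift2(f, p1, p2):
    # Applicative-style: run p1, then p2 on the leftovers, combine results.
    def q(s):
        r1 = p1(s)
        if r1 is None:
            return None
        r2 = p2(r1[1])
        return (f(r1[0], r2[0]), r2[1]) if r2 else None
    return q

plus_one = fmap(lambda n: n + 1, int_parser)
print(plus_one("41!"))  # (42, '!')

repetitive = lift2(lambda n, c: c * n, int_parser, char_parser)
print(repetitive("3a"))  # ('aaa', '')
```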
Functions that take some fixed argument type, say Float. A Float -> String isn't a String, obviously, but it's more or less a String :) It's just missing a Float. If I say fmap reverse justNeedAFloatToBeAString, I'm making a new function that will take a Float and then reverse whatever you get by feeding it to justNeedAFloatToBeAString. If I
(++) <$> almostAString <*> almostAnotherString,
I'm making a new function of type Float -> String that feeds a Float (the same Float) to both subcomputations and then concatenates their results. Monad stuff: if this almost-Int ends up being prime...
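A Python sketch of that function case (the names are made up): both almost-Strings get fed the same Float, then the results are concatenated.

```python
def almost_a_string(x):
    # Float -> String: more or less a String, just missing its Float.
    return f"temp={x}"

def almost_another_string(x):
    return f" ({x * 9 / 5 + 32}F)"

def ap_concat(f, g):
    # The function applicative: feed the same argument to both, then (++).
    return lambda x: f(x) + g(x)

combined = ap_concat(almost_a_string, almost_another_string)
print(combined(100.0))  # temp=100.0 (212.0F)
```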
Functor says you can treat a single f a as more or less a single regular a. Applicative says you can treat any number of f a values as more or less any number of regular a values. Monad says you can do control flow with f a values as if they were regular a values.
Neat, I was just wondering about this. I'm using the same idea with core.async channels in ClojureScript to parse the BitTorrent peer protocol[0], but I wasn't sure how to describe it.