I recently tried to do something very similar, although a bit tighter in scope. What stopped me was that deserializing floating-point numbers cannot currently be done at compile time; the only standard utility available for it is `from_chars`, and it is only constexpr for integers.
I did not see any mention of this in the post, so are you simply extracting the string versions of the numbers, without verifying or deserializing them?
The problem with this is that it will not actually parse doubles per IEEE 754: you accumulate inaccuracies at every step of the loop. When parsing a float, you are supposed to return the floating-point value closest to the number in the original string, and this code will not do that. Even if you accept the inaccuracy, if you for some reason load the same JSON with a runtime library, you'll get different numbers, and consequently any result that depends on those numbers will differ. For my use case this was unfortunately not acceptable.
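To make the failure mode concrete, here's the kind of naive accumulate-digits loop I mean (my own sketch, not code from the post):

```cpp
#include <cstddef>
#include <string_view>

// Naive constexpr parser for "123.456"-style strings. Illustration only:
// every `value * 10 + digit` and the final division can round, so the
// result may be a few ULPs away from the correctly rounded IEEE 754
// double that std::strtod (or a full from_chars) would give you.
constexpr double naive_parse(std::string_view s) {
    double value = 0.0;
    std::size_t i = 0;
    for (; i < s.size() && s[i] != '.'; ++i)
        value = value * 10.0 + (s[i] - '0');  // may round
    double scale = 1.0;
    for (++i; i < s.size(); ++i) {
        value = value * 10.0 + (s[i] - '0');  // may round
        scale *= 10.0;
    }
    return value / scale;                     // may round again
}

static_assert(naive_parse("2.5") == 2.5);     // exact cases still work
```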
Yes, very true. I noticed that even at 3 decimal places the floats start to compare unequal. Using long double helped, but it's not really a fix.
I googled and found a couple of constexpr float-parsing repositories, but from the sound of things you understand this problem better than I do and will have seen them already.
You can do something similar here, no? std::pow is not constexpr (most floating-point functions are not, presumably because of floating-point state), but you can implement 10^x yourself anyway.
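Something like this, say (a minimal sketch; fine for small exponents, with the usual rounding caveats):

```cpp
// constexpr 10^x for integer x, since std::pow is not constexpr here.
// Plain repeated multiplication: fine for small |x|, but be aware that
// each multiply/divide can round, and negative powers of ten are not
// exactly representable in binary anyway.
constexpr double pow10(int x) {
    double result = 1.0;
    for (int i = 0; i < x; ++i) result *= 10.0;
    for (int i = 0; i > x; --i) result /= 10.0;
    return result;
}

static_assert(pow10(3) == 1000.0);
static_assert(pow10(-1) == 0.1);  // a single division is correctly rounded
```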
I mean, surely if you are doing something that requires this level of precision, you could just ask the user to input their current known location? I doubt the compensation would differ meaningfully even if the user mistyped the location by ten or twenty meters (or even if the camera were actually moving around).
Also, in case anyone is interested, the uninformative Jeffreys prior for this in Bayesian statistics (meaning it does not assume anything and is invariant to certain transformations of the inputs) is Beta(0.5, 0.5). Thus the initial guess is 0.5, and from there it evolves with the data.
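Concretely, after k successes in n trials the posterior is Beta(0.5 + k, 0.5 + n - k), so the point estimate (the posterior mean) is (0.5 + k) / (1 + n). A trivial sketch:

```cpp
// Posterior mean of a Bernoulli success rate under the Jeffreys prior
// Beta(0.5, 0.5): after k successes in n trials the posterior is
// Beta(0.5 + k, 0.5 + n - k), whose mean is (0.5 + k) / (1 + n).
constexpr double jeffreys_posterior_mean(int k, int n) {
    return (0.5 + k) / (1.0 + n);
}

static_assert(jeffreys_posterior_mean(0, 0) == 0.5);  // the initial guess
static_assert(jeffreys_posterior_mean(7, 10) == 7.5 / 11.0);
```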
There is a Vim plugin called EasyMotion, which I absolutely love. When invoked, it creates a simple one- or two-key "checkpoint" at the start of every word on your screen and superimposes the labels on the text. Then it's just a matter of looking at the point you want to move to and pressing the keys written there, and there you are!
It's also fairly customizable, in that you can specify which characters are allowed as labels (so you don't end up with very awkward keys to press), among other things. So every time I need to move somewhere and can't be bothered to figure out the standard motions that would get me there, it saves me.
> No, not necessarily. Several studies have shown that different people can get different amounts of energy out of the same food, depending among other things on their gut microbiome, though stress also seems to play a role. It's never going to be that simple.
Sure, so they just need to compute their at-rest calorie consumption differently, and from there the rest is the same.
> Conventional thermodynamics don’t work when you consider a full human, which is a very out-of-equilibrium and not-isolated system. Conservation of energy does not tell you anything about the efficiency of the energy extraction process.
This is like saying that your car will never stop even if you don't refuel it, because different cars have different mpg ratings. A human is indeed a closed system when you consider the work it outputs and the calories it ingests, unless I somehow missed a newfound capacity for photosynthesis. The fact that computing calorie requirements might be a bit harder than the naive approach does not allow you to just dismiss everything else.
> Sure, so they just need to compute their at-rest calorie consumption differently, and from there the rest is the same.
Not at all, because different gut biomes are more efficient than others at processing certain kinds of foods. Famously, some populations are more effective at processing fish than others. "Amount of calories" really only works on average, and people with food issues tend not to be average; otherwise they would not be outliers.
You cannot take any assumption for granted. If it were so easy, we'd have solved it. It's only very recently that we started to grasp the role of the microbiome, and we are far from having explored it all.
> A human is indeed a closed system when you consider the work it outputs and the calories it ingests, unless I somehow missed a newfound capacity for photosynthesis.
You missed a lot of things. Radiative heating and convective heat exchange, for one. Our bodies spend a lot of energy heating up when it's cold, and try really hard not to do anything when it's hot. This works differently for different people; I tend to heat up fairly efficiently and am very rarely cold, while my GF is the other way around. Obviously, this is also environment-dependent.
Plus, we eat all the damn time. How can you seriously argue that we are a closed system? Again, the problem is not that it's slightly harder. The problem is that it is multifactorial, that we don't know all the factors, and that the importance of each factor varies with genetics, history, and environment.
> The fact that computing calorie requirements might be a bit harder than the naive approach does not allow you to just dismiss everything else.
He is a chess teacher who decided to train his daughters in chess from a very young age. And what do you know: two of them became the first and second best players, the stronger of whom is considered the best woman chess player of all time. It seems unlikely that all of them somehow just got chess-genius "genes" from him.
It really does seem that heavy investment from a young age by a good teacher can work wonders.
I don't agree that becoming a chess grandmaster is on the same level, or even near the same level, as becoming a Fields medalist or being on the shortlist of the best mathematicians alive.
> It really does seem that heavy investment from a young age by a good teacher can work wonders.
This is not the bet. It is not any arbitrary skill.
I specifically said:
> I'd bet any amount of money you want that you could just do the same thing with any arbitrary human and create a mathematician of his caliber.
I thought about this for a bit, and I have a feeling that as long as everything is touching the ground, cycles of objects covering one another are impossible, and so there exists a simple ordering you can compute.
The ordering is as follows. I'm assuming the isometric rendering of a map as a 45-degree tilted square, and I'm only considering tile ordering for simplicity, but it should generalize fine. The uppermost tile is where you want to start rendering. From there, you render following the two 45-degree diagonals until you are done (so you don't only walk down the y axis). Once this is done, you restart the process from the tile just below the uppermost corner, and so on. This ordering makes sure that all rectangular objects aligned with the 45-degree diagonals are rendered correctly.
Now you need an additional trick to correctly render rectangular objects that are transversal to those diagonals. You keep track of the boundaries of all such objects, so that the rendering loop described above can tell when it encounters one. When it does, it pauses rendering the current diagonal and considers it temporarily complete. The diagonal on the other side still needs to be rendered fully, or at least as far as possible with the same stopping condition. The next rendering pass will likely re-encounter the same transversal object at a further point: stop again, and start the next diagonal. Once the rendering reaches the lowest, final part of the transversal object, that object can be rendered and the first stopped diagonal can be resumed (followed by all the other paused diagonals, in order).
This should always give you the correct order to render everything without errors. Let me know if this made sense, otherwise I can try to clarify.
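For what it's worth, here's a minimal sketch of just the basic sweep, leaving out the pause/resume bookkeeping for transversal objects:

```cpp
#include <cstdio>

// Minimal sketch of the basic sweep on a W x W map (illustrative only).
// Shell k starts at the corner tile (k, k) and walks the two 45-degree
// diagonals leaving it: the rest of grid row k and the rest of grid
// column k. Smaller k is further "up" the screen, so it is drawn first.
int main() {
    constexpr int W = 4;
    for (int k = 0; k < W; ++k) {
        std::printf("render tile (%d, %d)\n", k, k);      // shell corner
        for (int x = k + 1; x < W; ++x)
            std::printf("render tile (%d, %d)\n", x, k);  // one diagonal
        for (int y = k + 1; y < W; ++y)
            std::printf("render tile (%d, %d)\n", k, y);  // the other
    }
}
```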
This is an interesting approach, though I don't think it will work. Vehicles/units move pixel by pixel, not tile by tile. Each wall tile in my game takes up one tile, and walls sit on the edge of a tile. I don't think this will handle unit/wall rendering order properly unless I also form larger shapes from the walls.
We document that the ban on dumping dioxins in rivers induced the exit of a number of chemical plants, and that following implementation, entry of dangerous-chemical-handling facilities fell by half. Whatever the health benefits of banning dioxins, they come at substantial costs of forgone innovation.
I'm not sure I understand why dynamic programming wouldn't work (and the author explicitly mentions Knuth). TeX's main job is literally computing line breaks, which is the exact same problem being tackled here. I would expect a similar approach (progressively building a graph of the most promising breaking points) to be effective. Why wouldn't that be the case here?
As someone very familiar with the Knuth–Plass line-breaking algorithm (https://tex.stackexchange.com/a/423578/48), an important difference I see here is that for paragraphs (the domain of TeX), there is no "state" that needs to be preserved across lines: if you know that your paragraph is going to choose a certain break-point, then you can pretty much typeset the "before" and "after" parts independently, each optimally. (With one exception: there is a penalty for hyphens being on successive lines, so we need to track whether the previous line was hyphenated.) This is the "optimal substructures" property that makes it so amenable to dynamic programming.
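To make that property concrete, here's a toy sketch of the paragraph DP (illustrative only; `badness` stands in for TeX's real demerit computation):

```cpp
#include <algorithm>
#include <climits>
#include <vector>

// Illustrative sketch, not Knuth-Plass itself: badness(j, i) stands in
// for the demerits of a line running from break j to break i. The key
// property: best[i] depends only on best[j], never on *how* the text
// before j was broken. That is the optimal substructure.
template <typename Badness>
int optimal_demerits(int n, Badness badness) {
    std::vector<int> best(n + 1, INT_MAX);
    best[0] = 0;
    for (int i = 1; i <= n; ++i)        // i: candidate break point
        for (int j = 0; j < i; ++j)     // j: previous break point
            if (best[j] != INT_MAX)
                best[i] = std::min(best[i], best[j] + badness(j, i));
    return best[n];  // cost of the best way to break the whole text
}
```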
With the code formatter, to format the part after a certain character, you need to keep track of the indentation depth of all the expressions that have not yet terminated at this point — because you presumably want parallel expressions to be formatted with the same indentation depth, for closing parentheses to match their corresponding opening parentheses, etc.
For example, knowing that there's a break after a `&&` is not enough; you also need to know the indentation of the preceding expressions to decide how you're going to format the part after the `&&`.
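Here's a tiny made-up illustration (mine, not the post's example):

```cpp
#include <string>

// Made-up snippet, not from the post. Once a break is taken after the
// first `&&`, each continuation line has to line up under the `(` of the
// enclosing `if`. The right column for any continuation therefore depends
// on every still-open parent expression, not just on the break position.
bool admit(const std::string& user, int load, int limit, bool maintenance) {
    if (!maintenance &&
        load < limit &&
        user != "banned") {
        return true;
    }
    return false;
}
```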
This is what the author alludes to in the post:
> A line break changes the indentation of the remainder of the statement, which in turn affects which other line breaks are needed. Sorry, Knuth. No dynamic programming this time. […] For most of the time, the formatter did use dynamic programming and memoization. […] It worked fairly well, but was a nightmare to debug.
> It was highly recursive, and ensuring that the keys to the memoization table were precise enough to not cause bugs but not so precise that the cache lookups always fail was a very delicate balancing act. Over time, the amount of data needed to uniquely identify the state of a subproblem grew, including things like the entire expression nesting stack at a point in the line, and the memoization table performed worse and worse.
In TeX, every line of a paragraph has the same width (in the simple case), or the widths can follow a \parshape (in general), but these are "global" constraints that don't depend on which breaks you choose.
I'm not sure that saying you can't use dynamic programming is accurate, though; you simply can't use a direct translation of Knuth's algorithm, since that misses indentation.
If I recall correctly, Knuth uses Dijkstra on a graph whose nodes are 'line break at position x', and I don't see why you couldn't use a graph whose nodes are 'line break at position x, indentation y' or something similar.
It's the "or something similar" that's the catch. For another example, search the post for `scriptLoadingTimeout` — to know how to indent the code after a break immediately after that position, you need to know the indentation of the `.timeout` before it, of the immediately preceding `.then(`, and of the `return` at the top — basically you need to know the indentation level of every parent in the expression tree. That means the graph's states are something like "line break at position x, with indentation of parent expression nodes being …, …, …", and then you have too many states as mentioned in the post. There's a combinatorial explosion of the state space. Using dynamic programming with this large state space is still possible, but approaches the running time of the brute-force algorithm. (I do wonder how extensively it was tried, though.)
Hi, I'm the author of a C++ library focused on tabular bandits, MDPs, and POMDPs. It's called AI-Toolbox, and it's one of the largest non-NN libraries out there.
The library is fully documented, but the text is probably a bit dry. I'd love for somebody to help me improve its accessibility, and I'd be willing to help them along in learning how things work.
My email is my nickname at Gmail; feel free to reach out if you are interested.