Look into using duckdb with remote http/s3 parquet files. The parquet files are organized as columnar vectors, grouped into chunks of rows. Each row group stores metadata about the set it contains that can be used to prune out data that doesn’t need to be scanned by the query engine. https://duckdb.org/docs/stable/guides/performance/indexing
LanceDB has a similar mechanism for operating on remote vector embeddings/text search.
> Look into using duckdb with remote http/s3 parquet files. The parquet files are organized as columnar vectors, grouped into chunks of rows. Each row group stores metadata about the set it contains that can be used to prune out data that doesn’t need to be scanned by the query engine. https://duckdb.org/docs/stable/guides/performance/indexing
But, when using this on frontend, are portions of files fetched specifically with http range requests? I tried to search for it but couldn't find details
I'm conflicted about the implicit named returns using this pattern in go. It's definitely tidier but I feel like the control flow is harder to follow: "I never defined `user` how can I return it?".
Also those variables are returned even if you don't explicitly return them, which feels a little unintuitive.
I haven't written any Go in many years (way before generics), but I'm shocked that something so implicit and magical is now valid Go syntax.
I didn't look up this syntax or its rules, so I'm just reading the code totally naively. Am I to understand that the `user` variable in the final return statement is not really being treated as a value, but as a reference? Because the second part of the return (json.NewDecoder(resp.Body).Decode(&user)) sure looks like it's going to change the value of `user`. My brain wants to think it's "too late" to set `user` to anything by then, because the value was already read out (because I'm assuming the tuple is being constructed by evaluating its arguments left-to-right, like I thought Go's spec enforced for function arg evaluation). I would think that the returned value would be: `(nil, return-value-of-Decode-call)`.
I'm obviously wrong, of course, but whereas I always found Go code to at least be fairly simple--albeit tedious--to read, I find this to be very unintuitive and fairly "magical" for Go's typical design sensibilities.
No real point, here. Just felt so surprised that I couldn't resist saying so...
> I would think that the returned value would be: `(nil, return-value-of-Decode-call)`.
`user` is typed as a struct, so it's always going to be a struct in the output; it can't be nil (it would have to be `*User` for that). And Decoder.Decode mutates the parameter in place. Named return values essentially create locals for you. And since the function does not use naked returns, it's essentially saving space (and adding some documentation in some cases, though here that value is nil) compared to this:
```go
func fetchUser(id int) (User, error) {
	var user User
	var err error
	resp, err := http.Get(fmt.Sprintf("https://api.example.com/users/%d", id))
	if err != nil {
		return user, err
	}
	defer resp.Body.Close()
	return user, json.NewDecoder(resp.Body).Decode(&user)
}
```
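For what it's worth, here's a minimal, runnable sketch of the named-return behavior being discussed. It uses a naked `return` (which sidesteps the operand-evaluation-order question entirely), and `strings.NewReader` stands in for `resp.Body` so it runs without a network; the `User` type and JSON payload are assumptions for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

type User struct {
	Name string `json:"name"`
}

// Named results declare `user` and `err` as zero-valued locals visible
// throughout the body; a naked `return` hands back their current values.
// strings.NewReader stands in for an HTTP response body here.
func decodeUser(body string) (user User, err error) {
	err = json.NewDecoder(strings.NewReader(body)).Decode(&user)
	return // returns (user, err) as they stand now, after Decode mutated user
}

func main() {
	u, err := decodeUser(`{"name":"Ada"}`)
	fmt.Println(u.Name, err) // prints "Ada <nil>"
}
```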
yeah, not really an expert, but my understanding is that naming the return struct automatically declares it and places it into the function's scope.
I think the user example works because Decode is writing into the same memory as the named return variable.
I like the idea of having named returns, since it's common for Go functions to return several items as a tuple, and I think it's clearer to have those named rather than leaving it to the reader to work out, especially if a function returns several values of the same primitive type, like ints/floats.
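As a hypothetical illustration of that last point, named results can disambiguate same-typed return values right in the signature (`bounds` here is an invented helper, not from the thread):

```go
package main

import "fmt"

// Without names, the signature would read ([]float64) (float64, float64)
// and callers would have to guess which value is which.
// Assumes xs is non-empty.
func bounds(xs []float64) (min, max float64) {
	min, max = xs[0], xs[0]
	for _, x := range xs[1:] {
		if x < min {
			min = x
		}
		if x > max {
			max = x
		}
	}
	return
}

func main() {
	lo, hi := bounds([]float64{3, -1, 4, 1.5})
	fmt.Println(lo, hi) // prints "-1 4"
}
```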
> I feel like the control flow is harder to follow: "I never defined `user` how can I return it?"
You defined user in the function signature. Once you know you can do that, there is nothing magic about it, and it makes it more explicit what the function will return.
It has some functionality of this type (you can see in the overall map a small number of "sensor" nodes), but it's not super well fleshed out or documented ATM.
What I think you can do for sure today is poll a sensor over the mesh, unlike the meshtastic way where you generally automatically broadcast telemetry.
I was able to add it as a "Color Filters" "quick control" toggle from the top right drag-down menu (not sure what you call that) in iOS 17. We'll see how long I last with it. I'm intrigued as well.
As someone who was recently diagnosed and treated for Uveal Melanoma (get your annual eye exam and retinal scans!), and occasionally struggling with some intrusive thoughts about the potential for liver mets, reading about this treatment brought me so much joy. Bless Zhen Xu!
no symptoms. first identified the lesion a few years back, and it hadn't changed over a few subsequent appointments. at this year's exam, it had grown a small amount, from 5mm x 5mm to 6mm x 8mm. still considered small, but the change was enough for the doctors to recommend treatment. I have been treated by Dr. Dan Gombos[1] at MD Anderson and received excellent care.
I have a relative with the same disease. They went to an eye doctor because of visual artifacts. Turns out the tumor was so big it had caused retinal detachment. Basically, most people get diagnosed at a very late stage because it's mostly asymptomatic.
Not having to deal with npm and a build step can remove a huge barrier to adoption for a large number of potential adopters, or people that just want some lightweight interactivity in an app.
That's what got me into Vue, and I still use build-less Vue all the time for tiny little sites that aren't worth setting up a whole CI process for. It's really lovely that it's an option.
Just like how easy jQuery was to get started with back in the day, but it's a whole framework.
Yep, can confirm. I first used Vue in 2016 to write some simple calculators for my group's use in Eve Online. Without its "progressive" affordances, I don't think I would have gotten anything off the ground. I had no idea how to set up a build pipeline at that point, and I think Vue was new enough that there weren't many vue-specific tutorials so I'd have been learning from React tutorials and trying to figure out what to change with zero JS background.
a posteriori knowledge. the pelican isn't the point, it's just amusing. the point is that Simon has seen a correlation between this skill and the model's general capabilities.
`rustworkx` is older and much more mature than PyGraphina. So at the moment, it includes a larger collection of graph algorithms. But the goal is to keep PyGraphina focused on specific applications like community detection and link prediction with a high-level API like NetworkX.
```
BREAKING CHANGE The following packages are removed from the Pyodide distribution because of the build issues. We will try to fix them in the future:
arro3-compute
arro3-core
arro3-io
Cartopy
duckdb
geopandas
...
polars
pyarrow
pygame-ce
pyproj
zarr
```
Bummer, looks like a lot of useful geo/data tools got removed from the Pyodide distribution recently. Being able to use some of these tools in a Worker in combination with R2 would unlock some powerful server-side workflows. I hope they can get added back. I'd love to adopt CF more widely for some of my projects, and seems like support for some of this stuff would make adoption by startups easier.
It’s a fun time to be a dev in this space!