Hacker News | rupurt's comments

Howdy folks,

I built an agent observability & orchestration toolkit over the holidays: https://github.com/run-vibes/vibes. It uses Iggy as the message streaming layer and currently supports PTY streaming of Claude Code to the browser, Cloudflare tunnels, and automatic Claude Code hook instrumentation into Iggy. I'm using that to build a cross-harness continual learning system that I'm hoping will act as error correction for the model environment. It runs on Iggy server v0.6 (io_uring on Linux) and compiles to native platform binaries for Linux & Mac (Windows support is on the list...)


This blog post is intended for Nix users and Zig developers familiar with DuckDB who are looking to extend its capabilities with custom extensions.


We're doing Mainframe modernizations/integrations with the cloud @Mechanical Orchard. It's been really fun to see and learn the entomology of how computing got to where it is today.

As other folks have described, programming on the mainframe is about learning COBOL + JCL. I would also add that you need to understand how to read and write binary files according to a defined schema (Copybook) and encoding (EBCDIC, COMP, COMP-3, COMP-5). It's also helpful to deeply understand the intricacies of Db2 and the different methods of connecting to it (ODBC, JDBC, Db2 CLI).
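COMP-3 in particular trips people up: it's packed decimal, two digits per byte with the sign carried in the final nibble. Here's a minimal decoder sketch in Python (the `PIC S9(5) COMP-3` field layout and the `scale` parameter are illustrative assumptions, not a production-hardened implementation):

```python
def unpack_comp3(data, scale=0):
    """Decode a COMP-3 (packed decimal) field.

    Each byte holds two decimal digits (one per nibble); the final
    nibble is the sign: 0xD means negative, anything else positive.
    """
    digits = 0
    for byte in data[:-1]:
        digits = digits * 100 + (byte >> 4) * 10 + (byte & 0x0F)
    last = data[-1]
    digits = digits * 10 + (last >> 4)           # high nibble is the final digit
    sign = -1 if (last & 0x0F) == 0x0D else 1    # low nibble is the sign
    value = sign * digits
    return value / (10 ** scale) if scale else value

# A PIC S9(5) COMP-3 value of +12345 is stored as bytes 0x12 0x34 0x5C
print(unpack_comp3(b"\x12\x34\x5C"))  # 12345
```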

I've even gone so far as to write an ODBC extension for DuckDB so that we can scan and process queries faster and with less MIPS usage. https://github.com/rupurt/odbc-scanner-duckdb-extension

https://www.mechanical-orchard.com


Not sure what word you were looking for, but entomology is the study of bugs. Etymology, maybe? The study of the history of words?

"People who don't know the difference between etymology and entomology bug me in ways I cannot put into words."


Thank you :) Etymology is what I meant. Been using the wrong word my whole life... :0


Seems like a typo and they meant "evolution"


This is super interesting @richieartoul. I specced out something similar myself and was going to implement it in Zig https://github.com/fremantle-industries/transit.

FWIW I came to a similar conclusion: a lot of the power in Kafka comes from the API, and much of the complexity of managing the cluster will eventually be abstracted away across multiple implementations. I also felt that if I could implement Kafka persistence over the S3 keyspace, I could start with persistence direct to S3, like you've done with WarpStream, and then layer on faster hot-disk and in-memory tiering mechanisms to lower end-to-end latencies.

I love where you're going with this so hit me up on twitter if you ever want to chat more in-depth https://twitter.com/rupurt.


DMd you on twitter


I've created an ODBC DuckDB extension to query any database that has an ODBC driver. It's modeled after the fantastic official Postgres scanner extension https://github.com/duckdblabs/postgres_scanner.

It supports fetching rowsets in batches to minimize network overhead, with the batch size defaulting to DuckDB's standard vector size of 2048.
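The batched-fetch idea can be sketched in Python with the DB-API's `fetchmany` (in-memory SQLite stands in for a remote ODBC source here; with a real driver you'd pay one network round trip per batch instead of one per row):

```python
import sqlite3

BATCH_SIZE = 2048  # mirrors DuckDB's default vector size

# In-memory SQLite stands in for a remote ODBC data source in this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ledger (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO ledger VALUES (?, ?)",
                 [(i, i * 1.5) for i in range(5000)])

cursor = conn.execute("SELECT id, amount FROM ledger")
total_rows = 0
while True:
    rows = cursor.fetchmany(BATCH_SIZE)  # one fetch call per batch, not per row
    if not rows:
        break
    total_rows += len(rows)

print(total_rows)  # 5000
```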

I've tested it against the IBM DB2 & Postgres ODBC drivers and will continue to test and add support for all major databases. If you've got one you'd like to see let me know in the comments.

I've got plenty of improvements in the pipeline including:

  - Write to ODBC database
  - Predicate push down
  - Automatic catalog discovery
  - Performance benchmarks
  - Configurable batch size
  - Relative & absolute cursors
  - OS X builds using rosetta for databases without native aarch64 drivers (IBM/Oracle)


I recommend modeling it after the ClickHouse ODBC integration: https://clickhouse.com/docs/en/sql-reference/table-functions...

It already has the items from your list, like the predicate push-down. Plus, it is memory-safe, thanks to clickhouse-odbc-bridge which runs as a separate process and does not poison the main server process's address space.

PS. I'm one of the developers of the ODBC integration.



If there's demand ;) Would be handy for some of the newer DBs like Snowflake.


Actually, Snowflake is the reason I ask :)


Dependency on unixODBC? Wondering if this could be used on Windows.



Definitely. I will need to link against the ODBC library Windows ships with. I don't have a Windows machine, so I haven't been able to test against it.

It's now on the list of improvements!


I've found a lot of value in using Zig as my C & C++ toolchain. I really like that my build logic is more procedural, which I find easier to comprehend.


Happy LiveView user here :) Can confirm it can be used in many scenarios. One of my favorites is streaming realtime data to a chart. I serialize my data as JSON in the LiveView template and read it back in from a Hook. Saves me time by not having to create a separate API endpoint. e.g.

  <div
   phx-hook="MyChart"
   data-days="<%= Jason.encode!(days(@balances)) %>"
   data-amounts="<%= Jason.encode!(amounts(@balances)) %>"
  >
   <div phx-update="ignore">
   </div>
  </div>


Of note regarding Plataformatec open source projects: http://blog.plataformatec.com.br/2020/01/important-informati...


Why doesn't he get more involved in the project instead of complaining? Yes, Ember is lacking in the prioritization department, but it's a community project and there is always room for more people.

There are a number of forks/patches out there that provide hasOne semantics and saving multiple records in the same commit (ours are the mhelabs ones):

- https://github.com/mhelabs/ember-data/tree/has_one

- https://github.com/mhelabs/ember-data/tree/parent-child-comm...

- https://github.com/ghempton/data/tree/relational-adapter


yep :)

