Hacker News | OttoCoddo's comments

With the right code, NTFS is not much slower than ext4: nearly 3% slower for a tar.gz workload, and more than 30% slower for a heavily multi-threaded use case like Pack.

https://forum.lazarus.freepascal.org/index.php/topic,66281.m...


SQLite can be faster than the file system for small files. For big files, it can do more than 1 GB/s. While building Pack [1], I benchmarked these speeds, and you can go very fast: it can even be 2X faster than tar [2].

In my opinion, SQLite could be faster for big reads and writes too, but the team has not optimised that path as much (for example, by loading the whole content into memory), perhaps because it was not the main use case of the project. My hope is that we will see even faster speeds in the future.

[1] https://pack.ac [2] https://forum.lazarus.freepascal.org/index.php/topic,66281.m...
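
A minimal sketch of the idea, using Python's stdlib sqlite3. The schema here is a hypothetical stand-in for illustration, not Pack's actual format:

```python
import sqlite3

# Hypothetical minimal schema: one row per file, content stored as a blob.
# This illustrates the "SQLite as a small-file store" idea, not Pack itself.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE files (path TEXT PRIMARY KEY, content BLOB)")

# Writing many small files becomes one transaction,
# instead of one open()/write()/close() round trip per file.
small_files = {f"dir/file{i}.txt": f"payload {i}".encode() for i in range(1000)}
with con:
    con.executemany("INSERT INTO files VALUES (?, ?)", small_files.items())

# Random access is a single indexed lookup, with no per-file syscall overhead.
row = con.execute(
    "SELECT content FROM files WHERE path = ?", ("dir/file42.txt",)
).fetchone()
print(row[0])  # b'payload 42'
```

Batching many small writes into one transaction is the main reason this pattern can beat the file system for small files.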


A sane choice, albeit a delayed one, as Zstandard has been the leading algorithm in the field for quite some time. I tested most of the candidates while developing Pack [1], and Zstandard looked like the best alternative to the good old DEFLATE.

The dictionary feature [2] will help in designing new ways of handling small resources.

[1] https://news.ycombinator.com/item?id=39793805

[2] https://facebook.github.io/zstd/zstd_manual.html#Chapter10
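
The concept can be sketched with Python's stdlib zlib, which exposes the same preset-dictionary idea through `zdict` (zstd's dictionary API in the manual chapter above is the richer equivalent, and real zstd dictionaries are trained from sample data rather than written by hand):

```python
import zlib

# Many small, similar payloads compress poorly on their own, because each
# one must re-describe the structure they all share.
samples = [b'{"user": "u%d", "role": "admin", "active": true}' % i
           for i in range(100)]

# A preset dictionary seeds the compressor with that shared structure.
# (zstd trains dictionaries from samples, e.g. with `zstd --train`;
# this hand-written one is only a stand-in for illustration.)
zdict = b'{"user": "", "role": "admin", "active": true}'

def pack_one(data: bytes, zdict: bytes = b"") -> bytes:
    c = zlib.compressobj(zdict=zdict)
    return c.compress(data) + c.flush()

plain = sum(len(pack_one(s)) for s in samples)
with_dict = sum(len(pack_one(s, zdict)) for s in samples)
print(plain, with_dict)  # the dictionary version is noticeably smaller
```

The same trade-off applies to zstd: each payload stays independently decompressible, but the dictionary must ship alongside the data.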


Readers may find Pack interesting: https://news.ycombinator.com/item?id=39793805


Easy to build using this document: https://pack.ac/source

Each binary has its own build script that you can use yourself. The binaries are used for static builds and to ease future needs. https://github.com/PackOrganization/Pack/blob/main/Libraries...

I know you are under no duty to look around for your answer, but neither are you under a duty to say "yuck" to a project that was built with a lot of effort. I am OK with your comment, but maybe go easier on the next project.


Yes, I am. Pack can hold millions of files with no problem. One field in which it shines is that, aside from being fast at processing large amounts of data, it can process many small files much faster than similar tools or even many popular file systems.

As for piping, it can be done, and it is on my list. I will prioritise features based on their popularity and how much sense they make.

As Pack has random access support, you can choose a file in a big pack, and it can stream it out to the output. It is already able to unpack partially to your file system (using --include="file path in pack"); streaming/piping it would not be a problem.


It makes me happy.

It looks clean and pseudocode-like, which helps readers from around the world, with different native languages, understand it.


You can do that with Pack:

`pack -i ./test.pack --include=/a/file.txt`

or a couple of files and folders at once:

`pack -i ./test.pack --include=/a/file.txt --include=/a/folder/`

Use `--list` to get a list of all files:

`pack -i ./test.pack --list`

Such random access using `--include` is very fast. As an example, extracting just one .c file from the whole Linux codebase takes (on my machine) about 30 ms, compared to nearly 500 ms for WinRAR or 2500 ms for tar.gz. And the gap only widens once you add encryption. For now, Pack encryption is not public, but when it is, you will be able to access a file in a locked Pack file in a matter of milliseconds rather than seconds.
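
For contrast, here is why a classic .tar.gz is slow at this, sketched with Python's stdlib tarfile: a gzip stream has no index, so finding one member means decompressing and walking headers from the start of the archive.

```python
import io
import tarfile

# Build a small .tar.gz in memory with a few members.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tf:
    for name in ("a.c", "b.c", "c.c"):
        data = name.encode() * 100
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tf.addfile(info, io.BytesIO(data))

# Extracting one member still forces a sequential scan: tarfile walks the
# gzip stream header by header until it reaches "c.c". An indexed container
# (a zip central directory, an SQLite table) can seek straight to it instead.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r:gz") as tf:
    content = tf.extractfile("c.c").read()
print(content[:3])  # b'c.c'
```

With three tiny members the scan is invisible; with the Linux tree it is the difference between milliseconds and seconds.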


It is not disabled, as you think it is; it is secured by Pack's internal code. Almost all parts of Pack are multithreaded.


It's not safe to disable SQLite's thread safety, as you do here: https://github.com/PackOrganization/Pack/blob/main/Libraries... and then do your own locking. You attempt to pass a flag at open time to enable serialized mode; however, quoting the SQLite docs for the build flag you set:

  Note that when SQLite is compiled with SQLITE_THREADSAFE=0, the code to make SQLite threadsafe is omitted from the build. When this occurs, it is impossible to change the threading mode at start-time or run-time.

SQLite's APIs are often hazardous in ways like this. It really should raise an error rather than silently ignoring the fullmutex flag, but alas.
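
As an aside, the "one connection plus your own mutex" pattern being debated here can be sketched with Python's stdlib sqlite3. This only illustrates app-level serialisation; it says nothing about whether the underlying C library was compiled with SQLITE_THREADSAFE=0, which is the concern above (Python exposes the compiled mode as `sqlite3.threadsafety`):

```python
import sqlite3
import threading

# check_same_thread=False lets one connection be shared across threads;
# from then on, the application is responsible for serialising every call.
con = sqlite3.connect(":memory:", check_same_thread=False,
                      isolation_level=None)  # autocommit, to keep it simple
con.execute("CREATE TABLE t (v INTEGER)")
lock = threading.Lock()  # the app-level mutex guarding the shared connection

def worker(n: int) -> None:
    for i in range(100):
        with lock:  # never touch the connection without holding the lock
            con.execute("INSERT INTO t VALUES (?)", (n * 100 + i,))

threads = [threading.Thread(target=worker, args=(n,)) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

with lock:
    print(con.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 400
```

The pattern is sound only if the lock really covers every use of the connection; a library compiled without thread safety can still misbehave in ways an external lock cannot see.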


Did you compile it yourself? I would be happy to hear about any problem or the steps you used, at pack.ac or on GitHub, as it is hard to follow build issues here. I should prepare more documentation on how to build it. I suspect there is a problem with the custom build; errors and speed issues are not something you see in the official build.


That was the binary downloaded from the website. You have a build.sh for the Linux binary artifacts, but no equivalent for the Windows artifacts, so I did not bother preparing a Windows build.


The Pack binary? Can you tell me what machine and what steps?

Build.sh can be used for Windows too, using MSYS2 UCRT64.


Windows 11 sandbox, running atop Windows 11. Binary downloaded from your webpage.

The data being packed was a copy of linux-master.zip fetched from GitHub and unpacked with the built-in Windows zip tool, selecting "skip" for the files whose names collide only by case.


What parameters did you give the CLI program? This issue is interesting, as these files have been tested countless times on Windows 11.

To be clear, you can run Pack as: `pack.exe ./linux-master/`


I ran pack that way, and observed the error I posted.

