
A FLAC encoder/decoder written in Guile Scheme. I struggled for a while to get the decoder working with most test files until recently; it's more or less a fully functional decoder now. It's also currently 1:1 with the reference metaflac command.

https://github.com/steve-ayerhart/guile-flac


Relax, it's just a play on https://www.poetryfoundation.org/poems/43291/sailing-to-byza...

The code is a mouse wheel scroll down.


> Sorry, that poem is currently unavailable.

The poem actually loaded, but then flipped to that. Thank god Yeats' work is protected by clever JavaScript from people in third-world countries who dare try to read it.


Wow, that is a hefty poem. Thanks for explaining the reference.

Edit: Just to share a bit, here's more background on it from Wikipedia. https://en.wikipedia.org/wiki/Sailing_to_Byzantium


Calm down it was a little joke, no need to flag me


That is absolutely not true. Just because early compilers acted more like linker/loader doesn't mean they used the word compiler to mean "linker". When the term was coined it absolutely meant translating mathematical formulas into machine code. Compiler very much had the same meaning it has today.


Sorry that's not my understanding of the history - early compilers were more like what we'd call today template compilers - linking blocks of pre-defined machine code together.


Yes, that is very much what they were. However, they used the term compiler in very much the same way it's used today. It didn't mean something different. The concept of the "compiler" was to translate mathematical symbols (or predefined machine code representing that math) and later English like words into programs a machine could execute.


> Use a good text parsing library. Regexes are probably not enough for your use case. In case you are not aware of the limitations of regexes you may want to learn about Chomsky hierarchy of formal languages.

Most programming languages offer a regex engine capable of matching non-regular languages. I agree though, if you are actually trying to _parse_ text then a regex is not the right tool. It just depends on your use case.
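A quick illustration of the "beyond regular" point, using POSIX BRE backreferences (supported by standard grep): a backreference forces a repeated word, and the language of repeated words is provably not regular.

```shell
# \1 must match exactly what the group matched, so this pattern
# recognizes { "w w" : w in [a-z]* }, a non-regular language.
echo "hello hello" | grep -q '^\([a-z]*\) \1$' && echo "matched"
echo "hello world" | grep -q '^\([a-z]*\) \1$' || echo "no match"
```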


Just because you seem to be unfamiliar with bash doesn’t mean it’s unreadable. Almost any language will look cryptic if you don’t know it. That function is mostly just parameter expansion and very common in most bash scripts. I bet if you read the manual you would easily be able to figure it out. You just have to learn the language.


I'm with jmnicolas on this. I've been using bash on and off for years now (approaching decades). I still get it very wrong almost always. It's the only language where this is a persistent problem for me.

Just yesterday I was struck by the difference between if [[ ]]; and if [ ];

I didn't even bother grokking the difference in the end. I simply found something that worked and moved on with my day.


Single square bracket conditionals [] are the "older" (POSIX) version but do not have certain features that we would like to use in our shell scripts. Bash uses double square brackets to extend the functionality.

This is my understanding. Possibly not 100% correct but essentially correct enough for me to understand the reason/rationale for the difference.


You got it! `test` is a shell builtin. `[` is basically syntactic sugar, and is a synonym for `test` with the additional requirement that the last argument must be a closing bracket `]`. `[[` is a shell keyword with more features.
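A small demo of the practical differences (bash-specific, sketching the points above):

```shell
var=""
# With [ the variable must be quoted, or an empty value breaks the test;
# [[ is parsed as a keyword, so quoting is optional there.
r1=$([ -n "$var" ] && echo non-empty || echo empty)
r2=$([[ -n $var ]] && echo non-empty || echo empty)
echo "$r1 $r2"   # empty empty

# Pattern matching is a [[-only feature:
[[ hello == h* ]] && echo "glob match"
```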


I'm in no way defending bash as a language. There are lots of gotchas and weird constructs. I avoid bash too. It's just that trim function isn't that cryptic or "unreadable" if you know the syntax.


I think the point is that it takes a lot longer to pick up the syntax compared to some if/then/for-loop version of trim, even if that version would be way more verbose.


Yes, but I think that's antithetical to what languages like bash and perl are trying to do. Someone well versed in the language can do some pretty complex operations in a few keystrokes.


Which gets back to the original point that these languages have chosen to be more powerful over being more immediately readable. You have to know many more features of the language to understand what is going on compared to colloquially used constructs like if/then or for loops. If you understand English, you are a lot closer to understanding the latter than the former, more domain-specific syntax.


I don't agree at all. I don't find it less "readable". You just have to be used to it. If you understand English, German isn't that hard of a language to learn. Japanese would probably be a little bit harder to pick up, though. Doesn't mean Japanese is unreadable.


Maybe "unreadable" and "readable" aren't the best ways to approach this conversation.

It's certainly less readable than, say, my_str.strip()


Are you comparing a call to a definition? `trim_string "$my_str"` is no less readable than `my_str.strip()`.


I mean, I would say it is. Why are there quotes around the variable? What happens when I remove the quotes?

The quotes don't function like the parens. If this were a two argument function, you wouldn't put one pair of quotes around the whole thing. They're clearly transforming the variable somehow, but I'm not sure how and/or why they're necessary.
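For the record, a quick demo of what the quotes actually prevent (word splitting; unquoted expansion would also undergo glob expansion):

```shell
my_str="a    b"
set -- $my_str            # unquoted: split on whitespace into two words
unquoted_count=$#
set -- "$my_str"          # quoted: passed through as a single argument
quoted_count=$#
echo "$unquoted_count vs $quoted_count"   # 2 vs 1
```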


Sure, but bash only has certain language constructs. This implementation is using parameter expansion and some other built ins. It's definitely more complex than yours, but that's what bash has to offer.
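For reference, a pure-bash trim in that parameter-expansion style might look like this (a sketch in the spirit of the function under discussion, not the exact code):

```shell
trim_string() {
    local s=$1
    # ${s%%[![:space:]]*} is the leading-whitespace run; strip it as a prefix.
    s=${s#"${s%%[![:space:]]*}"}
    # ${s##*[![:space:]]} is the trailing-whitespace run; strip it as a suffix.
    s=${s%"${s##*[![:space:]]}"}
    printf '%s\n' "$s"
}

trim_string "   hello world   "   # -> hello world
```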


I highly recommend running shellcheck on all of your bash code. It is what finally taught me the practical difference between [[ and [. For example, there are some tests that will always pass in [ but work properly in [[; one project I worked on never realized this.


My (personal) better recommendation is to avoid bash scripts whenever possible ;)

I generally tell people that, once a script is longer than ~100 lines and/or you start adding functions, you're probably better off with something like Python.

I know that's not a popular opinion with shell enthusiasts, but it's saved me so much frustration both in writing new scripts and coming back to them later for refactoring.


> I generally tell people that, once a script is longer than ~100 lines and/or you start adding functions, you're probably better off with something like Python.

Agreed. My threshold is usually "when you want to start using arrays or dictionaries". I find bash best for file manipulation and running programs in other languages.


Bash is a typical Unix tool, and Unix follows the KISS principle. Hence, also Bash scripts should be KISS.

If you need >100 lines of code, your code is too complex in the script world, and you should split it into several scripts where each does one thing, and that one thing well. Usually, bash scripts are applied with pipes. A sequence like cat x | tr a b | sort >output is much more likely and easier to handle than a single script which does all these things.

KISS with Bash is a very different approach compared to other scripting languages like Python and Perl. There you can write long code easily and conveniently. However, things can get tough when larger scripts need to be maintained.

I consider the examples in the "Bash Bible" a collection of useful black boxes. It's fine if they just work. Regarding the "unreadable" trim_string, for instance: if you have trouble understanding that code and you have to change something, then you can simply write your own new trim_string script, even in Python or Perl if you like. Pipes work well with those too.

Update: Another advantage of bash and pipes over Python/Perl is that Unix runs each stage of a pipe as a separate process, which the kernel can schedule on separate cores in parallel. That means simple bash scripts with pipes can work _much_ faster than single scripts in Python or Perl.
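An easy way to see this in bash (using the bash-specific $BASHPID): each pipeline stage gets its own process id, distinct from the parent shell's $$.

```shell
# The { ...; } stage of the pipeline runs as its own process,
# so $BASHPID there differs from the parent shell's pid.
stage_pid=$(true | { echo "$BASHPID"; })
echo "parent: $$  pipeline stage: $stage_pid"
```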


I tend to work in Windows, Linux and MacOS pretty regularly... so I tend to follow a similar approach (node instead of python though). Windows-isms under msys(git) bash in Windows just gets painful to coordinate at times.

I am curious if people will uptake PowerShell Core (though not much of a fan there either).


I'm so glad I'm not alone in this.

I never even bothered to learn Bash properly because even that's difficult, and I figured it'd be "good mental money after bad". And, now that Python ships with all distros, I feel even less need to.

I do like Bash's range syntax though:

    for i in {0..5} ; do echo $i ; done
Like Ruby's. It's so nice! D: I lament Python's lack of it.


    echo {1..5} 
is even nicer. Or

    echo {1..5}{1..5}{1..5}
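For anyone unfamiliar: brace expansion happens before the command runs, and adjacent braces expand as a cartesian product (smaller ranges here to keep the output short):

```shell
one=$(echo {1..3})
two=$(echo {1..2}{1..2})
echo "$one"    # 1 2 3
echo "$two"    # 11 12 21 22
```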


My rule of thumb is to use a real language if there is more than one if/fi block.


So, what in your opinion constitutes a "real language", and why?


a "real language" would be any programming language that is primarily designed and presented as a programming language.


Nice recursive definition..

The practice of programming in shell languages was well established when Bash was designed and Bash was definitely designed with that use in mind. So by your definition Bash is a "real" programming language.

Lisp, on the other hand, was much more designed as a system and formal notation to reason about certain classes of logic problems. So, Lisp is not a "real" language?


Bash is primarily a shell, it even has the word shell in its full name.


And a shell can't be a real language? https://scsh.net/


> a shell can't be a real language?

No, it can not.


By "programming language" he probably means something that has robust flow control mechanisms and metaprogramming facilities.

You can definitely tell there's a different "feel" to bash and Tcl, Python, Perl, Go, etc, yes? Shell languages basically evolved out of batch processing languages that were meant to only run programs in sequence and it shows.


I agree, but I like phrasing it as "only run commands with substituted arguments." There's some more discussion on this below. :)

In the past I almost exclusively used Python, but I'm starting to like Go.


My disclaimer is that I love bash. I write way too many things in bash. But eventually I go back and refactor the scripts into Python (or lately, Go) because I am so sick of people refusing to touch my bash scripts.

String munging in Python is objectively so much easier to read than bash. I would say that's because it is similar to a lot of the more popular languages, rather than the sort of cryptic parameter substitution bash provides. And if that makes it easier to maintain some automation, then it's worth the effort just to use Python IMO.


Python is a portability minefield. I'll spend more time trying to get the script/package run than reading the bash script.


Couldn't agree more, but honestly I think that is a completely overplayed issue. Just be explicit ("This requires Python 3.7"), have a requirements.txt file, maybe bundle it with a `make install` command, and move on.

If people can't figure that out... I don't know how you're going to expect them to read a cryptic bash script.

Don't get me wrong, there are totally some good use cases for bash. Init scripts come to mind, when you don't want to lug around a Python VM in a lightweight container, for example.


I don't think it's a matter of familiarity. Most of us here have used multiple languages, and I have no problem saying that the Bash syntax is atrocious.

"How do we close a statement.... errr. Let's spell it backwards".


How is it not a matter of familiarity? Just because it's not intuitive to you doesn't mean it's bad. I find anything not in an S-expression to be atrocious. Doesn't mean I can't learn and understand "horrible" syntax where I have to separate things with semicolons...


I am saying we are allowed to have an opinion on it. You can be good at something and still acknowledge that it sucks.


> "How do we close a statement.... errr. Let's spell it backwards".

That's on the original Bourne shell and its author's evident love for Algol 68. Wikipedia has a fine summary.


"Almost any language will look cryptic if you don’t know it."

Seems disingenuous to me. Java and Python are in a different class of readability than Bash or Perl.

Example: "abc" + "def" = "abcdef" vs "abc"."def" = "abcdef"


> "abc" + "def" = "abcdef" vs "abc" . "def" = "abcdef"

It would be a better comparison if you used whitespace in the other case as well, which makes it just as readable.

Also: would "0" + "42" be "042" or 42? It may be more readable, but the semantics are unclear.


That's an argument for types. While it's more convenient to use a language with weak typing, it's more error prone. So I guess in the end strong typing wins.

  >>> 0 + 42
  42
  >>> "0" + "42"
  '042'
  >>> "0" + 42
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  TypeError: cannot concatenate 'str' and 'int' objects
  >>> int("0") + 42
  42


It is implemented in Guile. The language is just syntactic sugar via macros and utilizes WISP.

https://git.savannah.gnu.org/cgit/gwl.git/tree/gwl/sugar.scm https://srfi.schemers.org/srfi-119/srfi-119.html


I'm pretty curious on how you conclude that GNU is "intentionally obfuscating code" from [1]. I read [1] as good advice to avoid inadvertently getting code into GNU that could be claimed by copyright. Focus on speed instead of memory; simplicity instead of speed. I don't see "make it different for difference sake."

I also conclude the opposite from the cat implementations. I really don't see how the BSD cat is a more "pure" or straight forward implementation. I find the GNU cat to have way more options and easier to read source code. My guess is that novice programmers would find the GNU version easier to grok, which also probably aligns more to the goal of GNU. Or I just like to read obfuscated code.


> I really don't see how the BSD cat is a more "pure" or straight forward implementation. I find the GNU cat to have way more options

With way more options, is "cat -v considered harmful" no longer a consideration?


If you aren't trying to be Unix I would imagine you aren't thinking that. GNU probably doesn't subscribe to that notion.


> I find the GNU cat to have way more options and easier to read source code. My guess is that novice programmers would find the GNU version easier to grok, which also probably aligns more to the goal of GNU.

Perhaps, but also, on a source-based unix system (I'd argue the only real unix system, since this is what unix was from the v5 research days, and what it continued to be for source license holders through the dawn of PC-BSD and then in open-source BSD onwards), one can do something like this:

    $ which ls
    /bin/ls
    $ man ls > /dev/null # output hidden for example purposes
    $ ls /usr/src/bin/ls     
    CVS      cmp.c    ls.1     ls.h     obj      utf8.c
    Makefile extern.h ls.c     main.c   print.c  util.c
    $ grep include /usr/src/bin/ls/ls.c |sed -ne 2p     
    #include <sys/stat.h>
    $ ls /usr/src/sys/sys/stat.h                                              
    /usr/src/sys/sys/stat.h
    $ grep ^sys_statfs /usr/src/sys/kern/*.c    
    /usr/src/sys/kern/vfs_syscalls.c:sys_statfs(struct proc *p, void *v, register_t *retval)
    $ less /usr/src/sys/kern/vfs_syscalls.c  # oh so that's how it works in the kernel..
    $ (cd /usr/src/bin/ls && cp ls.c ls.c.orig && $EDITOR ls.c && make && make install && ls -Q)  # yay! my new option -Q works
    $ (cd /usr/src/bin/ls && diff -urw ls.c.orig ls.c |Mail -s 'new patch' [email protected])
whereas, for example, on a binary-package-based linux or proprietary UNIX, this would entail either a separate download of N different tools and disparate compilation steps in the former, or not being possible at all in the latter. While there is nothing constraining a linux or proprietary unix from being made available in such a way, they quite simply aren't, and the 'culture' and concept of having a single binary+source+documentation tree, fully self-hosting, with a single build system managing the whole build as one unit, as 'what the unix system is' is lost.


"If you have a vague recollection of the internals of a Unix program, this does not absolutely mean you can’t write an imitation of it, but do try to organize the imitation internally along different lines, because this is likely to make the details of the Unix version irrelevant and dissimilar to your results. "

On its own, not intentionally obfuscating. But when the naive implementation is straightforward and the only clear-cut way, and everything else involves abstracting something, then yes.

see also the 'yes' example from the other thread:

https://github.com/coreutils/coreutils/blob/master/src/yes.c https://github.com/openbsd/src/blob/master/usr.bin/yes/yes.c

Really, my main beef is the suppression of history and misrepresentation of a broader culture as 'proprietary' to serve RMS's political aims, whereas in the actual Usenet/BSD culture, it quite simply wasn't. Unix wasn't just AT&T, it was the community around it as well, which lives on in the BSD's with a clear lineage and 'cultural context' which is also truly open source and built around software freedom, but the promotion of GNU devoid of context obscures this fact.


The official Guix channels do not package proprietary software. However, nothing is stopping someone from doing it.


Tried to install nvidia drivers (the only piece of proprietary software I needed) — no luck.

Hope that with 1.0, somebody will find a way.


The only distro I know of which packages the nvidia proprietary driver as something other than the original blob plus a bunch of scripts is Debian. To say it's a mess is an understatement (see the list of dependencies in the metapackage[0], which doesn't include all driver parts either).

[0]https://packages.debian.org/sid/nvidia-driver

edit: wording


This is trivial to do on NixOS:

  services.xserver.videoDrivers = [ "nvidia" ];
For a full install for eg machine learning, you'd want these lines, too:

  environment.systemPackages = with pkgs; [
    cudatoolkit
  ];

  systemd.services.nvidia-control-devices = {
    wantedBy = [ "multi-user.target" ];
    serviceConfig.ExecStart = "${pkgs.linuxPackages.nvidia_x11}/bin/nvidia-smi";
  };


You can download the blob from the official site, but after installation it won't work in a lot of cases, and it gives you no options for some of the choices Debian's metapackage offers (like the GLX provider or vendor-neutral OpenGL libs). That's the difference.


(you do need to set config.allowUnfree = true; or the nvidia stuff will be hidden)


Pop!_OS by System76 makes these accessible at the installer. It's why I chose it for a recent Lenovo laptop purchase.

So now you know of 2.


As Ubuntu does on theirs. The thing is, Ubuntu is stated as officially supported by Nvidia's blob itself, although it doesn't remove all the fun of their driver's installation and configuration processes.


Yeah I only mention pop_os because it truly is a painless process for a fresh install and that's worth calling out. Life is too short to suffer installing uncooperative Linux distros on modern laptops.


Pop_OS is based on Ubuntu, which is itself derived from Debian.


Pop_OS does not reuse Debian's packaging for this, does not do it the same way Ubuntu does. It also bundles it with the OS installer image, which can be pretty important.

These are decisions that OS authors make. Even if they are derivative, it is worth calling out these specifics. Especially if it enables painless adoption of the OS on specific hardware rather than saying "well it's just Debian."


Is it possible to convert the packages for nixos to guix? Seems like someone should create a nonfree repo for guix to allow people to simply add a new source for nonfree software.

If I understand properly this is done via GUIX_PACKAGE_PATH which works somewhat like PATH in that multiple sources are searched in order of precedence.

https://www.gnu.org/software/guix/manual/en/html_node/Packag...

Instead of hoping that someone does it you could always do it yourself and host it somewhere everyone can benefit from.
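For example, something like this (hypothetical directory name; the variable itself is the one documented in the linked manual section):

```shell
# Prepend a hypothetical directory of your own package definitions;
# earlier entries take precedence over later ones, like PATH.
export GUIX_PACKAGE_PATH="$HOME/my-guix-packages${GUIX_PACKAGE_PATH:+:$GUIX_PACKAGE_PATH}"

# guix package -i some-nonfree-package   # would now see those definitions
```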


GUIX_PACKAGE_PATH is inconvenient. We prefer to use channels instead:

https://www.gnu.org/software/guix/manual/en/html_node/Channe...


It's definitely not trivial to do. I have the Guix system installed on my mid 2013 MBP. I installed the mainline Linux kernel and drivers along with the proprietary Broadcom wireless module. Everything seems to be running fine.


Did you create a package for it? Is it available?


This is going to be a deal breaker for a lot of HPC folks, so it's surprising to me that there isn't an "unofficial official" way to get non-free Nvidia drivers given Guix's aims at reproducible builds for HPC.


Sounds like you can use just the package manager, so I guess there's nothing stopping you from installing just the proprietary blobs with your system package manager and build everything else with guix?


I don't think it has Nvidia drivers, but there is work being done for Guix and HPC. https://guix-hpc.bordeaux.inria.fr/


> By definition, if you're programming Emacs Lisp, you're an Emacs user. Emacs is the interpreter.

That's not necessarily true. Guile supports Elisp as a language. You can write Elisp sans Emacs.


Not true. You can write your own packages in the same high level interface. Including the mainline Linux kernel and broadcom drivers. I have GuixSd running on my 2013 MBP


Is there a...non official repository of these somewhere?


There are a smattering of code snippets around.

This[1] is the one I used as a reference for my kernel build which includes brcmfmac

There is also a feature called 'channels' (not widely documented AFAIK) which allows you to describe an outside repository of code you can reference to install packages. There is one[2] that provides Chromium (though I cannot get the build to succeed).

The downside to all of these not being provided in mainline is the amount of time and energy to build them. I suspect there are more people than I would like curating their build of the linux kernel, manually specifying drivers included in the kernel. I really wish the community were a little less hard-set on the principle of 'freedom' and a little more pragmatic; I am sure the community is smaller due to people being turned away at the sheer amount of effort required to get their machine to a usable state.

[1]: https://github.com/wingo/guix-nonfree [2]: https://gitlab.com/mbakke/guix-chromium


As someone who is sympathetic to their cause and their stance, I think the extreme position of excluding all non-free software can actually hamper the movement by making it less accessible to people. I get that you always have to draw a line somewhere, but the FSF draws it too firmly.


> you always have to draw a line somewhere, but FSF draw it too firmly.

They actually don't draw it exactly where I would. I don't care so much whether games are open/free (it's nice when they are, of course), but I am concerned with unauditable code on, for instance, the SATA controllers in my machine, even if it's 'read only' code. The FSF is more concerned with the first and less with the second.

On the other topic, I think the ability to create (and share) your own recipes/channels for software on Guix allows for people to create fairly easy ways to install non-free software (or using Nix on top of GuixSD) without the Guix team having to support unethical software.


^ I stand corrected: My WiFi (ath10k) WOULD probably work...


None that I know of currently. The OS is version 0.15.0. It's very young and doesn't have a huge user base. I'm sure there will be non gnu repositories that will eventually spring up. I've even toyed with the idea of setting one up.


Even as a toy example, a blog post and code that demonstrates setting up a 3rd-party package repository would be helpful. I'm just afraid of the GUIX team deciding they don't want to support 3rd-party repos because it encourages non-free software use, and deliberately stonewalling development of related features.


Guix has had support for external package definitions for a long time (via GUIX_PACKAGE_PATH) and recently gained a channels feature, which makes this even easier.

At the institute where I work we use this feature to provide package variants that aren't going to be added to Guix proper (e.g. because they are no longer supported upstream or because they are only useful on obscure systems).

This mechanism can be used to include package definitions for non-free software, of course, but this doesn't mean that the Guix project encourages the use of non-free software.

You have the freedom to package up and use non-free software, but we won't make them part of Guix, build them on our build farms, and we won't hold back changes to the mechanism even if that might break 3rd-party collections of non-free software.

You don't really need a blog post about how to set up a package repository, because it's right there in the Guix manual.

PS: It's "Guix", not "GUIX" ;)


Thanks for the reply. As long as it's easy to mix free and non-free packages at my discretion, I'm willing to give it a shot.

Also, thank you for the capitalization correction. It grinds my gears when people capitalize things unnecessarily and I wouldn't want to be one of them!


can you tell me how you got that running in some more detail? I also have a 2013 MBP and would like to try out GuixSD


I don't have access to it now. When I do I can put the package definitions and OS config up somewhere.

When you first boot GuixSD most of the core stuff should work fine. Wireless will not work but you should have ethernet. I created mainline linux kernel and firmware packages [1]. Then you need the broadcom b43 firmware to get wireless support. I think I made 2 packages: one for b43-fwcutter and one for the broadcom-wl firmware, which uses b43-fwcutter to cut and install it. I think just about everything on my MBP was working after that.

[1] example linux package https://github.com/wingo/guix-nonfree/blob/master/gnu/packag... [2] http://linuxwireless.sipsolutions.net/en/users/Drivers/b43/


Here is a gist of the relevant packages and configs I have for my MBP.

https://gist.github.com/steve-ayerhart/81c9dfc773472e08002cd...

