Hacker News: thosakwe's comments

I remember being 11 or so and installing this homebrew onto my DS...

I had NO IDEA what Linux was at the time, but DSLinux helped me deepen my interest in computer science.

So, thanks to the creators, and everyone who contributed code.


In my humble opinion (I'm still new to Haskell), the best way to do that is to contribute to existing Haskell tooling.

That's something I'm looking into doing myself. It would be great to help improve tools like cabal and the haskell-language-server.

I think that will go a long way towards making Haskell more beginner-friendly, and easier to use in production.


I learned Haskell this year.

After reading this article, the conclusion I drew was, "Cool, so I can `fmap` over my parser now and transform what I parse using functions."

To answer your other questions: I'm not sure it means much for the code that does the actual parsing, nor for how you specify the grammar's rules; it's more about being able to transform the output using functions.

If your static analyzer is a function, you could now write `fmap staticAnalyzer myParser`.


> about being able to transform the output using functions.

Can't one always use functions to transform other functions' outputs?

> If your static analyzer is a function, you could now write `fmap staticAnalyzer myParser`.

rolls eyes

    def compile(text):
        ast = parse(text)
        ir, symtable = static_analyze(ast)
        asm = lower(ir, symtable)
        return assemble(asm)
Ability to write that in the point-free style is really not that important, IMHO.


Yes, but if we're talking about parser monads, then usually you can't apply a function directly to a parser's result without either:

1. Being within the same monad. For example, you can `bind` a `Parser a` to a function only if it returns `Parser b`.

2. Performing an actual action and breaking it out of its monad.

For example, if you're using the Parsec library and you have a `Parser Int`, you can't get to that int without using a function like `parse`, which performs the actual action of parsing input text.

With a functor, you can compose a `Parser a` with an `a -> b` function, instead of having to return `Parser b` from your function.

So if you have a `Parser Int`, and you want to turn it into a parser that multiplies its parsed input by 2, you can write `fmap (*2) myParser`, instead of having to write `myParser >>= \a -> return (a * 2)`.

Parsers being functors means it's easier to compose them with other things, without having to actually perform the parse until you need to.
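To make that concrete, here's a minimal Python sketch of the same idea. (The `Parser` class, `fmap` method, and `integer` parser are hypothetical, not from any real library.) The point is that the function passed to `fmap` knows nothing about parsers; it just transforms the eventual result.

    # A parser wraps a function: str -> (result, leftover) or None on failure.
    class Parser:
        def __init__(self, run):
            self.run = run

        def fmap(self, f):
            # Functor map: apply a plain function to the eventual result,
            # without running the parser yet.
            def run(text):
                out = self.run(text)
                if out is None:
                    return None
                result, rest = out
                return (f(result), rest)
            return Parser(run)

    # A parser that consumes leading digits and yields them as an int.
    def integer():
        def run(text):
            i = 0
            while i < len(text) and text[i].isdigit():
                i += 1
            if i == 0:
                return None
            return (int(text[:i]), text[i:])
        return Parser(run)

    doubled = integer().fmap(lambda n: n * 2)
    print(doubled.run("21abc"))  # (42, 'abc')

Nothing is parsed until `run` is finally called; `fmap` just builds a new parser.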


Right, so you're essentially talking about how in e.g.

    class Parser:
        def __init__(self, text):
            self.text = text
            self.result = None
            # other inner state/context fields

        def parse(self):
            self.parse_top_level()
            return self.result

    def parse(text):
        return Parser(text).parse()
the free function "parse" throws away the inner context. When one uses parser combinators, there is no single monolithic Parser class which one may extend with whatever additional methods necessary (and maybe take some free functions as callbacks, why not); instead there are lots of smaller functions returning the leftover contexts together with the results, and you have to combine them somehow, together with free functions.

> without having to actually perform the parse until you need to.

I'd rather parse my input before starting semantic analysis on it. Although if you really want a one-pass translator, literally being a composition of three functions, `parse`, `analyze`, `emit`, with no visible intermediary structures, then yeah, this allows for it.

But I'd argue that's more complicated than having three non-entwined passes with explicit, designed data structures serving as interfaces between them, instead of invisibly unevaluated thunks; not to mention the sheer complexity of doing everything in one go (why do people even claim that single-pass compilers are simple?)


You make a valid point about the added complexity of combining parser combinators with functors and other concepts.

Honestly, if I were writing a compiler, I would go with your approach as well.


I don't get the last part of the comment. `myParser >>= \a -> return (a * 2)` doesn't have to parse immediately, right? You could push that function at the end of a list and only apply it during `parse` anyway.


Correct, that function doesn't have to parse immediately, but it does have to be aware your monad exists.

This isn't a problem if you're the one writing the function, but if you're using a 3rd party library that doesn't know about your monad, then fmap can be very useful.


Thanks for sharing this great article. It explained lenses very clearly, without me having to read a paper first.


Another reason is that imperative languages have a lot of business inertia around them. It's expensive to rewrite existing code or switch to a new language, and most businesses can't justify this cost.

I love functional programming, but I doubt most companies that sell CRUD apps care about it.


Another reason is that imperative languages are all that's necessary for many (maybe even the majority) of business use cases.

Really a lot of it boils down to if this else that, and little more.


"all that's necessary" implies there's something fundamental about imperative languages that's intrinsically more basic. In fact, the opposite is true. If you're problem statement is "in response to event X, transform Y into Z, stick it in a database, then take A from the database, reshape it into B, and send it back to the user" then weak functional approaches are the natural solution. (languages like Elixir or idiomatic JavaScript).


I read "Start Small, Stay Small" in 2021. More modern advice is on YouTube instead of books. Try channels like Noah Kagan, or Microconf.


https://microconf.com/youtube for the MicroConf YT channel.


It's nice to get a pretty good, well-thought-out set of advice in one place that you can easily scan, bookmark, and refer back to. I don't find video very good for that.


Seeing so many new languages running on WASM is exciting. I wonder if we'll see a language with a just-in-time WASM compiler soon...


DBT (Dynamic Binary Translation) counts: https://github.com/copy/v86


I maintained a project like this for several years. My genuine advice to anyone considering creating an open-source library: either keep it super small forever, or make it closed-source + charge for licenses.


I wonder why tree-shaking wasn't always the default for, say, JS bundlers. If a compiler/analyzer knows what the entry point of a program is, as well as any symbols it exports to the outside world, isn't it relatively simple to figure out what's not being used?

I could be misunderstanding something.


A couple of weird language edge cases:

JS allows functions to be called by name.

  A.foo();

can be written as

  A["foo"]();

Because "foo" is now a string, it's possible to add levels of indirection:

  function Action(name) { A[name](); }

It's even possible that the list of actions to perform is sent to the client as data.

This ambiguity has left enough doubt that most tools, and most people, don't commonly tree shake their code. The longer that goes on, the scarier it becomes to start investigating.

Since the browser will effectively tree shake for you, the incentive to clean up your source is also weaker.


But in this case A is exported. No one is going to tree shake methods of a class.


It doesn't need to be a class; that's a detail:

  function foo() { ... }
  var callthis = "foo"
  window[callthis]()
And this is true for any object; attaching functions to objects is kinda common.

And of course, "callthis" is usually defined in a much more complex way, such as if (user_input_is_this) callthis = "foo" else callthis = "bar". Or what about callthis = "foo"; callthis += "_bar"?

In general this kind of stuff is tricky in dynamic languages (and also in static languages once you start using reflection); JavaScript isn't really an exception here. You really need to have a deep understanding of the code and logic to truly be certain that something will never be called.
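The same hazard shows up in any dynamic language. A tiny Python sketch (hypothetical names) of why an analyzer can't safely prove a method is dead:

    class Handlers:
        def foo(self):
            return "ran foo"

        def bar(self):
            return "ran bar"

    # The attribute name is computed at runtime, so static analysis
    # can't prove that foo (or bar) is never called.
    name = "f" + "oo"
    h = Handlers()
    print(getattr(h, name)())  # ran foo

Removing either method would be unsafe unless the tool can track every string that could ever flow into `getattr`.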


> [...] Often contrasted with traditional single-library dead code elimination techniques common to minifiers, tree shaking eliminates unused functions from across the bundle by starting at the entry point and only including functions that may be executed.

Assuming that methods of classes count as "functions" in the terminology of that wiki page, it seems to say methods of a class are shaken out. If they are not seen as functions (again, in that terminology), then I guess not.


I think the article is over-simplifying; I believe most tree-shaking is done at the `import`/`export` boundaries.

Code that isn't used locally in a module and is not imported by any other modules is omitted.

---

It helps that there isn't a global object for the local module scope, otherwise in-module dead code detection would also not be possible, since any line of code within it could use dynamic access and static analysis wouldn't be able to prove that it isn't used.


Dead code elimination is only straightforward if your language has sane naming semantics (lexical scope, static name resolution, etc). If your language does crazy stuff like allowing function lookup by the string representation of its name in the source code (e.g. JavaScript) it becomes much harder.


Which is why Google's jscompiler requires a stricter subset of JS for "advanced optimizations" including dead code elimination.


C allows you to call functions by string...

C++ also allows it, but the function names are a bit harder to guess.


> C allows you to call functions by string

please elaborate


Possibly OP meant dlopen/dlsym on itself? But that's a stretch, and not really a language feature...


And functions aren't guaranteed to be included in the binary unless you pass additional compiler flags with that strategy.


Very few things are guaranteed in C. If you want to guarantee the function exists, let the operating system look up the function names by string for you.


Create a string, look up the address of the function with that name in the library, assign the address to a function pointer, and call it.
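For what it's worth, that same dlopen/dlsym dance is exposed in Python via `ctypes`. A sketch, assuming a Unix-like system with a loadable libm:

    import ctypes
    import ctypes.util

    # dlopen: load the math library by name.
    libm = ctypes.CDLL(ctypes.util.find_library("m"))

    # Build the function name at runtime, like any other string.
    name = "co" + "s"

    # dlsym under the hood: look the symbol up by string, then
    # describe its signature so the call is well-defined.
    cos = getattr(libm, name)
    cos.restype = ctypes.c_double
    cos.argtypes = [ctypes.c_double]

    print(cos(0.0))  # 1.0

No compiler ever sees the name `cos` here, which is exactly why this style of lookup defeats static dead-code analysis.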


I think tree-shaking will be the default in new JS bundlers. esbuild seems to enable it by default: https://esbuild.github.io/

I think the reason some other bundlers like webpack didn't/don't have it by default is that tree-shaking became popular after they were invented, and it was added to those bundlers later on as an extra nice-to-have feature.

It seems webpack has had built-in support for tree-shaking since version 2. The latest version of webpack might have it enabled by default (not 100% sure, but possible): https://webpack.js.org/guides/tree-shaking/


Tim Pope has a plugin for that: https://github.com/tpope/vim-obsession


That's true so often, you could probably write a bot for that response


And half the time it will reference a plugin made by tpope

