
Ruby does have support for callcc, which most Lisps, especially the fast ones like Common Lisp, don't, since it is a pain to implement a fast call/cc without implications for the whole language.
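For reference, Ruby's callcc lives in the stdlib 'continuation' library (an explicit require is needed since Ruby 1.9). A minimal sketch using it as an escape continuation — `find_first_negative` is a made-up example, not a standard method:

```ruby
require 'continuation'  # Kernel#callcc lives here since Ruby 1.9

# Use callcc as an escape continuation: invoking `escape` jumps out of
# the callcc block immediately, making its argument the block's value.
def find_first_negative(list)
  callcc do |escape|
    list.each { |x| escape.call(x) if x < 0 }
    nil  # value of the callcc block if we never escaped
  end
end

find_first_negative([3, 1, -4, 5])  # => -4
find_first_negative([3, 1, 5])      # => nil
```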

Gambit-C is one of the fastest Lisp implementations around, and it does have a full and performant implementation of call/cc. And I'm afraid that Common Lisp happens to be "fast by accident" rather than "fast by design" - there's probably more extra speed to be gained by having a more easily implementable standard (which CL isn't, or at least not that much) than by omitting call/cc.

(Also note that call/cc is sort of out of favour these days; delimited continuations are where it's at.)
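Ruby itself offers something in this spirit: a Fiber behaves roughly like a one-shot delimited continuation, since Fiber.yield suspends control only up to the fiber's own block boundary rather than capturing the whole rest of the program. A small sketch:

```ruby
# A Fiber delimits control at its block boundary: Fiber.yield suspends
# up to that boundary and resume re-enters where the fiber left off.
gen = Fiber.new do
  3.times { |i| Fiber.yield i }
  :done
end

gen.resume  # => 0
gen.resume  # => 1
gen.resume  # => 2
gen.resume  # => :done (the block's final value)
```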



Well, actually speed was a design goal for Common Lisp. Thus the standard defines the semantics of a compiler, including a file compiler. The CL standard provides stuff like stack allocation, inlining, fast function calling, compiler macros, a type system, type declarations, compiler strategy annotations, and lots of primitive constructs (like GO in TAGBODY). Most real implementations add several more speed-relevant features.

If you say that Common Lisp 'happens to be fast by accident', then this is completely misleading. 'Fast' was a design goal, and many implementors have done quite a lot to make demanding applications (like huge object-oriented CAD systems) fast.


> Well, actually speed was a design goal for Common Lisp.

It might have been one of the design goals, but it wasn't the ultimate design goal (which, I believe, was making a convergent compromise dialect to rule them all), otherwise certain parts of the language would most likely have been simpler. The things you mention are the good stuff. The problem is that there's also some bad stuff that hindered performance of CL on stock hardware until quality compilers were written. Yes, SBCL is quite fast. But you'd have a hard time trying to convince me that even modern implementations of Common Lisp represent some sort of performance optimum in the same way that, say, Stalin Scheme represents an optimum.

Also, wouldn't TAGBODY/GO be replaceable, in the presence of proper tail calls, by structuring the "TAGBODY basic blocks" into mutually recursive function calls? The same argument could be applied to inlining: yes, the Common Lisp standard mentions inlining. But did you notice how nowadays that's more or less considered an implementation issue rather than a language issue? How JVMs are capable of inlining just fine without having to be hinted at what to inline and what not to inline?
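That transformation can be sketched in Ruby (which, like the JVM, doesn't guarantee tail-call elimination, so a trampoline stands in for proper tail calls; all names here are made up for illustration):

```ruby
# A trampoline stands in for proper tail calls: each "basic block" is a
# lambda that returns either the next block to run or a final value.
def trampoline(block)
  block = block.call while block.is_a?(Proc)
  block
end

# The moral equivalent of (tagbody loop ... (go loop)) -- a single
# self-referencing block here for brevity instead of mutual recursion.
def count_to(n)
  i = 0
  step = lambda do
    return i if i >= n  # falling off the end of the "TAGBODY"
    i += 1
    step                # "(go loop)": hand the next block to the trampoline
  end
  trampoline(step)
end

count_to(100_000)  # => 100000, without blowing the stack
```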

Now, I'd understand some of the hints: perhaps DYNAMIC-EXTENT makes somewhat more sense if the purpose is to say "it's my intent to create this value for a limited period of time, yell at me if I let it escape", but in general, from where I stand, a lot of the design decisions seem to make writing Common Lisp compilers actually more complicated for modern compiler writers.


Common Lisp had several design goals. Speed was an important design goal. The users of Common Lisp wanted to be able to deploy and run large and demanding software on stock hardware.

> ultimate design goal (which, I believe, was making a convergent compromise dialect to rule them all), otherwise certain parts of the language would have been most likely simpler.

No, there was no ultimate design goal, and Common Lisp was not designed as a compromise dialect at all. Common Lisp was designed to be a modern replacement for Maclisp and incorporated several projects working on a modern Maclisp variant, for example NIL, Spice Lisp and Zetalisp. Common Lisp was not designed to compromise over Emacs Lisp, Franz Lisp, Portable Standard Lisp, Le Lisp, UCI Lisp, Interlisp, Scheme and whatever other dialect was there at that time. (Common Lisp's first round of design was between 1982 and 1984; then the standardization of ANSI CL was worked on until the early 90s.) If Common Lisp was a compromise, then mostly between Maclisp successors. Later, during standardization, the condition system of Zetalisp was integrated, as was the LOOP of Maclisp, and CLOS was developed as a new object system (based on New Flavors from Zetalisp and Portable Common Loops from Xerox). CLOS was also not a compromise, but a radically new system, optionally based on a Meta-Object Protocol.

CLtL1 says: Common Lisp is a new dialect of Lisp, a successor to Maclisp, influenced strongly by Zetalisp and to some extent by Scheme and Interlisp.

> that hindered performance of CL on stock hardware until quality compilers were written

Quality compilers were written almost from day one: CMUCL, Allegro CL, LispWorks, Lucid CL. Of those Lucid CL was the fastest with the most optimizations.

Note that there are two or three different types of speed that need to be addressed in CL: 1) the speed of unoptimized, fully dynamic and safe Lisp; 2) the speed of optimized, non-dynamic, and possibly unsafe Lisp; 3) the speed of production applications, which need to be safe, somewhat dynamic, but optimized.

> Also, wouldn't TAGBODY/GO be replaceable in presence of proper tail recursion by structuring the "TAGBODY basic blocks" into mutually recursive function calls?

TCO is done by some implementations. Generally it is not seen as a useful default feature. Common Lisp favors iteration over TCO-based recursion. That's also what I favor. Since some implementation targets don't support TCO (or TCO makes implementations more difficult), one went with the simpler language. For example, some of the Lisp Machines at that time did not support TCO. Today, for example, the JVM (which Common Lisp also runs on) does not support TCO. Scheme is seen as a different Lisp dialect, and CL mostly learned from Scheme the default of lexical binding - not more. CL did not adopt Scheme's single namespace, its calling semantics and conventions, its naming conventions, CALL/CC, its macro system, ... This again shows that CL is not a compromise dialect - it simply does not support a lot of Scheme-style programming, even though Scheme is a decade older than CL. By design.

> Yes, the Common Lisp standard mentions inlining. But did you notice how nowadays, that's more or less considered an implementation issue rather than a language issue? How JVMs are capable of inlining just fine without having to be hinted what to inline and what not to inline?

Common Lisp was not designed for one special implementation platform. Some CL compilers honor INLINE declarations (many do), others do inlining automatically (like Allegro CL). If an implementation of Common Lisp on the JVM uses its inlining capabilities, fine. But there are many other implementations. ECL, for example, compiles to C, and nobody would say it is a good thing to stop working on that just because the JVM exists. The JVM itself is a language issue: it's the Java Virtual Machine. It was not designed to host Lisp or to support efficient implementations of Lisp. One can implement Lisp on the JVM, but it is not a good fit.

> a lot of the design decisions seem to make writing Common Lisp compilers actually more complicated to modern compiler writers.

You are vague on these issues. Sure, a good Common Lisp compiler is complex, but the complexity is in things like the type system, type inferencing, etc. The base language itself is relatively simple. The result is that CL compilers still produce faster code than those of most dynamic languages. Native implementations are still much faster and tighter than what the JVM can offer.



