I landed my current job after being contacted through my "Who wants to be hired?" post. I'm not exactly a rockstar ninja, but I was pinged by multiple people: a pleasant surprise and a welcome change from the online job application robo-filter/recruiter dance. Absent a concrete lead, I plan to post there again next time I'm looking for work.
There is some type funkiness: a Perl variable can have both string and numeric components. However, which one is in use only really matters to serializers (JSON, etc.) -- in normal usage you can rely on choosing the right operator for the type at hand.
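To make that concrete, here's a minimal sketch (using core JSON::PP; exact behaviour varies a bit between JSON modules and versions):

    use strict;
    use warnings;
    use JSON::PP qw(encode_json);

    my $n = "42";                                # starts life as a string
    print encode_json({ n => $n }), "\n";        # {"n":"42"}
    print encode_json({ n => $n + 0 }), "\n";    # {"n":42} -- numify first

    # in normal code you just pick the operator for the type you want:
    print "numerically equal\n" if $n == 42;     # == / + / - for numbers
    print "stringily equal\n"   if $n eq '42';   # eq / . / lt for strings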
If a bit of self-promotion is allowed, check out https://github.com/spring1944/spring1944. It's an RTS, so a little more action-oriented than the (typically) more detailed turn-based wargames, and a bit larger scale than something like Company of Heroes, but hopefully fun.
Players tend to be around in the evenings, GMT. Contributions welcome, either in player-hours or pull requests -- hacking in Lua, primarily, with some supporting tools/infra in Perl and Python.
Gogs uses a mix of GoGits, a partial Go implementation of git, and the traditional git binary via os/exec. A quick scan shows that it relies on the binary for perhaps the majority of tasks.
Another project I'm aware of that aims to re-implement git in go is https://github.com/kourge/ggit, which looks promising but has been slowing down, activity-wise.
Remote: Potentially, but I'd probably just move to you if you're in a neat place.
Willing to relocate: If it involves crossing an ocean, yes, absolutely. Native English speaker, professional level Russian, delighted to have a reason to learn others.
I'm closing in on two years as a full-stack (emphasis backend) web developer/junior devops type, and I'm looking to mix it up a little bit: something in the direction of systems programming, tool development/more serious DevOps, or programmatic data crunching would be very exciting. That said, I'm open to all sorts of experiences, and look forward to hearing from you -- especially if you can land me overseas.
Some of my recent personal projects include:
* a special-purpose sshd for serving git traffic using go and the crypto/ssh package (no crypto code from my hands, honest)
* failure mode, a tool for burn-testing web apps against external conditions (latency, packet loss, etc. -- similar to the Simian Army at Netflix)
* year seven as one of the lead developers of Spring: 1944, an open source RTS game.
* documentation for Perl 6
I can get into detail about $work projects if we end up chatting. Thanks!
Check out http://mojolicio.us/ for a more modern take on web dev in Perl -- the most common deployment is an nginx reverse proxy in front of the built-in (non-blocking and preforking) web server -- just like the suggested setups in node/python/ruby/etc.
One cool bonus that mojo provides is hot deployments, so there's no need to restart the server process to get new code.
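If you haven't seen it, a minimal Mojolicious::Lite app is only a few lines (a sketch, not a production setup -- the hot-deploy story comes from running it under hypnotoad, typically behind nginx):

    #!/usr/bin/env perl
    use Mojolicious::Lite;

    # a single route, rendered inline
    get '/' => sub {
        my $c = shift;
        $c->render(text => 'Hello from Mojolicious!');
    };

    app->start;

During development you run it with 'morbo app.pl' (auto-reload on file changes); in deployment, 'hypnotoad app.pl', and running the same command again after pulling new code does the zero-downtime restart I mentioned.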
Interesting to think about the sorts of cultural markers that other languages have. Maybe this is just another way of saying 'code smell', but there must be similar elements for JS/Ruby/Python (for example) where the code isn't strictly 'wrong' (or even debatably wrong), but is still clearly indicative of a particular cultural approach towards coding in that language.
Like overriding Array.prototype.push in JS - it might be just fine, but I tend to pause and re-evaluate my attitude towards the code when I see that going on, because it's a very different approach to coding in JS than I personally use.
I tend to see much clearer indicators in JavaScript code. Global variables? Lots of functions outside any closure? Old skool js coder ahead.
You can also tell JavaScript-only programmers a mile off. Anonymous functions everywhere? Functions with multiple concerns? 'var aFunction = function()' everywhere? Monolinguist and spaghetti code ahead.
In Ruby, using explicit "returns" or ternary operators can be a hint that someone is newer to Ruby. On the other hand, using symbol-to-proc syntax indicates some experience (e.g. list_of_objects.map(&:method)).
Great article! Wish I'd found it a few months ago when I was reading HOP and trying to wrap my head around subroutine installation using globs.
One of my favorite design elements of Perl is the ability to lexically declare "Ok, now for some magic!" (aka "no strict 'refs';") while keeping strict checking in effect throughout the rest of the code.
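For anyone who hasn't run into the trick, it looks roughly like this (install_sub and greet are just illustrative names):

    use strict;
    use warnings;

    sub install_sub {
        my ($name, $code) = @_;
        no strict 'refs';              # the magic is confined to this block
        *{"main::$name"} = $code;      # assign a coderef through the glob
    }

    install_sub(greet => sub { print "hello, $_[0]\n" });
    greet('world');                    # prints "hello, world"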
Judging by the ample supply of version number rimshots, some clarification might be helpful: this isn't a Perl release, nor is it a new Perl version. It's the name of a general project in the Perl community to try to decouple Perl 5 the language from perl the runtime, and hopefully achieve interoperability between Perl 5 and Perl 6 code in the process. Perl 5 and 6 are distinct languages in the same family, much like (for example) Common Lisp vs Racket Scheme, which is why interop is an interesting and worthwhile goal.
There are some really cool things happening in the specific projects on the linked page, if you're into compilers and VMs.
It would be nice if it actually said this on the page. I had to poke around for several minutes to figure out what this project actually was, and I'm a Perl person myself.
I'm not sure I get the motivation. Isn't Perl a Unix-y language? You can get "interop" by using pipes or other IPC.
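E.g. the low-tech version, shelling out to a Rakudo one-liner and reading the output back (assuming a perl6 binary on the PATH):

    open my $p6, '-|', 'perl6', '-e', 'say 6 * 7'
        or die "couldn't spawn perl6: $!";
    chomp(my $answer = <$p6>);
    close $p6;
    print "perl6 says: $answer\n";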
This seems like a heavy, Java-esque solution. 50 lines of Perl does a lot, let alone 1000. Not sure I want to see 20,000 lines of mixed Perl 5 and Perl 6.
Well, the goal is to make sure that the plentiful modules on the CPAN (all in Perl 5) are usable for everybody involved. You're right, directly mixing Perl 5 and Perl 6 isn't likely to be a common usage - at least, doesn't seem that way to me.
The p2 goal is to use as much non-conflicting p6 syntax within p5 as possible, and to mark conflicting parts in special lexically scoped syntax blocks.
But data and methods should be shared; ditto the AST, compiler, VM, threads and event model.
Syntax blocks also allow easier FFI and SQL interaction: { use syntax "C"; c declarations ... } being a nice FFI language, compared to "extern { c decls; ... }", which is also nice.
We want to use efficient signatures, methods, classes, tasks, coros, promises, hyper operators, types, lazy lists and so on in Perl, regardless of whether you call it p5, p6 or p2.
That being said, I still contend that a Perl 5 parser is inextricably tied to the runtime. Any parser that wants to be fully compatible with Perl 5 will need to be able to also run Perl 5 code. That is, the parser will need to be coupled with the runtime.
A consequence of the parser needing to be able to execute arbitrary code is that parsing is undecidable. This follows from the halting problem, which is the point of the article I posted and is also pointed out in the one you posted.
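The usual illustration (something like this shows up in the "Perl cannot be parsed" write-ups) is the following, where some_runtime_condition() stands in for an arbitrary computation the parser would have to run to completion:

    BEGIN {
        if (some_runtime_condition()) {     # hypothetical: any arbitrary code
            eval "sub whatever () { }";     # empty prototype: takes no arguments
        }
        else {
            eval "sub whatever { }";        # no prototype: takes a list
        }
    }

    # With the () prototype: / is division, and everything after # is a comment.
    # Without a prototype:   / starts a regex argument, and the die is live code.
    whatever  / 25 ; # / ; die "this dies!";

Deciding which parse is correct means running the BEGIN block, which can do anything at all.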
As a sidenote, I find the distinction between "static" and "dynamic" parsing to be rather silly. The only people I've ever heard make that distinction are Perl apologists. No need to be so defensive: parsing is allowed to be ambiguous, and it is allowed to be undecidable -- but it's not considered ideal.
Dynamic and static parsing are different from just calling eval within the parser.
"Dynamic parsing problems" such as solving the halting problem, by e.g. changing the prototypes or eval'ing code in BEGIN blocks are just small practical problems. In practice the parser only has to be GC-safe, which needs a few days work.
| BEGIN b:block { p2_eval(P, b) }
On the other hand, the "dynamic vs static parsing" described on perl11.org is about compiling and extending parser rules at run-time, which needs a fast VM in order to support macros.
Not perl5, perl6 or nqp. Efficiency should be near the C level, but you shouldn't be forced to go the rakudo way with its insane bootstrapping and serialization overhead just to support BEGIN blocks in its stdlib and support run-time parsing.
The JVM or .NET could do that, but I don't buy the overhead, especially when you look at fast parser combinators in Lua or Lisp, or at maru, which is basically a JITting parser bootstrapping itself.
Of the two types of macros I'm familiar with, neither demands a "dynamic" parser. C-style macros (conceptually) work as a text pre-processor prior to the compiler's lexing phase. Lisp-style macros work on the syntax tree after the parse phase has finished.
In some Lisps, such as Common Lisp, macros are simply functions from syntax tree to syntax tree. Since these execute at compile time, you can still run into the same types of problems you get with Perl's BEGIN. Scheme's syntax-rules macros are more limited: they cannot execute arbitrary code, but they can do pattern matching and substitution on syntax trees. Despite this limitation, syntax-rules covers the vast majority of macro use cases. The term rewriting systems in Haskell and Cat are reasonably similar.
My point to all of this is that macros are no justification for "dynamic" parsing. The only distinction I can find between the two kinds of parsing is that a "dynamic" parser sometimes needs to execute bits of the program before it is able to progress. This is not necessary for macros!
As far as parser combinators go, I conjecture that they will not be powerful enough to deal completely with Perl's grammar. In general, they're limited to LL(k) grammars. Still, you may be able to jump through some hoops to get what you want out of them.