Hacker News | mattetti's comments

> Series of modular synthesisers produce real-time audio, minimising memory load.

> Miniature speaker is acoustically tuned through internal air spacing to amplify and clarify sound within the LEGO SMART Brick’s enclosure.

> Responsive audio effects are tied to live play actions; there are no pre-recorded clips.

Very interesting!


The Unreal Engine editor is itself built with the UE engine, which is pretty cool (but I wouldn't recommend writing desktop/mobile apps in UE).


Adding more context: Fluid uses a mix of CRDT + OT to maintain state across multiple clients. I wrote a quick, high-level explanation of how Fluid uses eventual consistency and why it matters for real-time collaboration: https://matt.aimonetti.net/posts/2020-09-solving-real-time-c...


> Elm has a low entry barrier

Sorry Vincent, you lost me there. Elm is interesting for sure, but it's far from having a low entry barrier. Something as basic as having a button play a sound takes forever to figure out. You almost have to learn everything all over again. It's a cool project and I'm glad it's working out well for you, but Elm has a serious learning curve.


Just my 2c:

Paradigm-wise: yes, functional programming can seem strange and hard if you haven't done it before. But that really isn't the fault of Elm; Elm makes FP as simple as it can be.

Language-wise: yes, the Elm architecture takes a bit to grok, but for me it was just an afternoon. Once you grok it, there isn't much else you need to learn - you can hit the ground running and look up what you need when you need it.

Tooling-wise: just the basic tooling you get will take you very far, with no setup either - just install and use.

If you do JS interop, yes, that can be cumbersome, but it is intentional. Elm strives to keep JS out and have libraries written in Elm, so we can keep more of the guarantees that Elm provides (no runtime exceptions!).

I'd rather say: "Elm has a low entry barrier for an FP language". One of its main goals is to bring FP to the masses, after all. :)


Elm is one of the simplest languages you will ever stumble across. I realise that simplicity is subjective, but you must be hitting some kind of mental roadblock, because it really does almost ALL the work for you.


Same as Tom for both TypeScript and Go. Couldn't be happier (I don't use the vim plugin, though, even though I was a vim user for many years).


The faster compilation time & speed-up are quite real in our real-life prod application.

From Go 1.6 to 1.7:
Test suite (CI): from 3:34 to 1:48
Docker image building: from 3:05 to 1:50

https://twitter.com/mattetti/status/763913903600349184


On our code base, we're seeing >2x build speed increases vs. Go 1.6! Compiling the entire Go monorepo, including vendored dependencies, dropped from 73 seconds to 29 on 4 CPUs when moving from 1.6.3 -> 1.7rc6. Huge improvements to CI times, when amortized across go build, go test, docker builds, etc.


That's a ridiculous speed improvement. Holy shit.


[flagged]


Yes, compile times have gotten worse in some releases, but there were good technical reasons for the trade-off, mainly general quality improvements to the Go environment. The good news is that the team isn't resting; they are trying to win back the lost speed - though you won't always observe the gains in larger programs.

Calling people happy about this "g-sheep" is unnecessary name-calling.


It would be interesting to see some benchmarks from each version... :)


Wow. That's impressive. Thanks for the example numbers, Matt!


Thanks to the Go team, getting such an improvement without having to do anything is awesome. And I can't believe there is more coming in 1.8


Following the twitter link and got this: "Something is technically wrong. Thanks for noticing—we're going to fix it up and have things back to normal soon."


I've noticed this too on random pages. Seems Twitter is having issues today.


The link works for me. :)


From Go 1.5.4 to 1.7, I can see that compilation time has improved. I didn't measure it, though.

However, the binary size didn't change appreciably: from 11,129,144 bytes, it shrank by roughly 0.0018%.


How large-ish is your codebase?


That's what I was wondering. Our largest Go application takes maybe 5 seconds to compile clean. We largely use Go for a number of microservices, though.


167912 LOC


We've been running splice.com on Go for 3 years now and handle 5TB of audio/binary data per day. Our memory usage is around 10-15MB per server and GC pause times have been really low. You do need to stream your IO instead of reading everything into memory.

Regarding dependency management, we honestly had no issues, and now with vendoring it's even easier. We use a main repo with lots of smaller packages and only a few 3rd-party dependencies that are vendored or available via private git repos. We don't use Google Cloud, but I heard they have 2 repos: one with auto-generated code and one with hand-written code (but less complete).


Could you elaborate on what you mean by streaming your IO? Is this just using buffered IO, or is there a concept I could read up on here?


Yes, buffered IO: you use readers to read/write small chunks at a time instead of loading everything at once. Go offers ways to do both, since in some cases loading an entire file into memory can be fine/better/faster.


OT about splice.com: how do you version-control DAW projects - do you have a custom parser + text diff? Would you support Orion and Reaper? :)


Your tagline should be: splice.com - no more sample packs. Haha.

Nice business model.


The main changes are the SSA compiler backend and smaller binary files. 1.7 should also better support binary-only packages (so vendors can sell compiled packages without providing sources; it kinda works right now, but it was hacky/buggy).


Stability increases. In our case, we just update Go on our CI and get smaller binaries and better performance as updates come in. The code is always backward-compatible, so our code always compiles. We obviously make sure our test suite passes and that we don't see any regressions. It was nice to get HTTP/2 for free in a recent update, especially with CloudFlare now supporting HTTP/2 push.


We've been slowly migrating https://splice.com from Angular 1 to Angular 2, and so far it's been great. We switched to TypeScript and cleaned up our code organization. We are writing new components in ng2 and migrating old code as we go. The betas have been somewhat painful with bugs and breaking changes, but that's the price to pay. When it comes to performance and consistency, ng2 is a big winner. Why not React, you might ask? React is great, but having to figure out/argue for each piece of the frontend stack isn't something we were willing to do. ng2 has its quirks, but at least it comes as a whole, and that's super useful for us at this moment.


Re: switching to TypeScript, manual rewrite or transpilation? My codebase is CoffeeScript, so I'm also interested to hear if you have any experience with that (with Angular 2).


My advice would be to compile your CoffeeScript to JS, then start from there. That's assuming that CoffeeScript still produces readable, idiomatic JavaScript...?

As of TypeScript 1.8(?) you can set an "allowJs: true" flag in tsconfig.json, which tells TypeScript to include JS files in your build.

Then you can just manually add type annotations and ES2015/2016 goodness to your code and change the suffix to '.ts' on a file-by-file approach.

I'm doing this at the moment with a fairly large AngularJS 1.5 project, using Webpack with awesome-typescript-loader as the build system, and it's working perfectly so far.
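The allowJs migration described above boils down to a tsconfig.json along these lines (apart from allowJs, the other options and paths here are illustrative, not from the comment):

```json
{
  "compilerOptions": {
    "allowJs": true,
    "target": "es5",
    "module": "commonjs",
    "outDir": "dist"
  },
  "include": ["src/**/*"]
}
```

With that in place, the compiler passes plain .js files through the build, so you can rename files to .ts and add type annotations one at a time.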


I'm using ts-loader. Have you tried both and found awesome-typescript-loader to be better? I'm trying to decide if I should switch to it.


This is exactly what we're doing.


Progressive switch: we use TypeScript for new code, and whenever we touch old code, we try to convert it from JS to TS. The tooling is amazing; even the skeptics on the team wouldn't go back to JS now. Defining interfaces and using types really tightened the feedback loop and sped up our dev time. The one thing I wish is that templates could be verified by the compiler instead of waiting for runtime (React uses views written in code, so you get that for free when using React + TS).

