
No, the SMS is initiated by the device when an emergency call is placed, not requested by the emergency service. The standard is called AML (Advanced Mobile Location).

The format is not secret either, it's just binary encoded.


Ahh OK, well that just sounds reasonable.

This is hilarious to me:

  Android, which uses it for some large component of the system that I've never quite understood
Ninja really is a huge part of AOSP. The build system initially used makefiles; things got complex really fast, so Google added a custom declarative build system (Soong) and later attempted a migration to Bazel that was aborted. Google also developed kati (https://github.com/google/kati), which converts the remaining Makefiles to ninja build files (or should I say file), which really is huge:

  λ wc -l out/build-qssi.ninja    
    3035442 out/build-qssi.ninja
Going from makefiles/Soong to ninja is painful; it takes several minutes even on a modern machine. But once ninja picks it up, it simply flies.
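
For a sense of why it executes so fast: ninja's input language is deliberately tiny, with no functions or branching; all the thinking is done ahead of time by the generator (Soong/kati). A hand-written example with made-up names, not from the AOSP output:

  rule cc
    command = gcc -c $in -o $out
  build foo.o: cc foo.c
  build bar.o: cc bar.c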


As someone who has not used Ninja: what advantage does it have compared to Makefiles? And is it worth introducing yet another tool to translate one to the other, especially when the Ninja files are that huge and possibly human-unreadable?


The Ninja files being that huge likely has more to do with the Android build environment or the tool that generates them. The main advantages of Ninja as a build executor are that the language is simple and it processes the build graph very quickly.


They are huge because Android has hundreds of smallish makefiles, but the generated ninja file is a single flat file.

The advantage in Android is that the different build systems all generate ninja, so they can interoperate.


The JVM does garbage collection, which can stop all threads at safepoints while the GC runs.

Those stops can be enough to ruin your low-latency requirements in the high percentiles. A common strategy is to divide workloads between JVMs so that you meet the requirement.
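
If you want to see those stops for yourself, the JDK's unified logging (JDK 9+) will record every GC and safepoint pause; the flags are standard, the log file name and jar are just placeholders:

  java -Xlog:gc*,safepoint:file=gc.log -jar app.jar

The high-percentile pause times in that log are what matter for latency targets, not the averages.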


CoralRing and CoralQueue (available on GitHub) are completely garbage-free. You can send billions of messages without ever creating garbage for the GC, so no GC overhead. This is paramount for real-time ultra-low-latency systems developed in Java. You can read more about it here => https://www.coralblocks.com/index.php/java-development-witho...


Interesting. But how do you ensure a worker that picks up a task does not pause on GC as well?


CoralRing does not produce garbage, but it cannot control what other parts of your application choose to do. It will hand your application a message without producing any garbage; if you then go ahead and produce garbage yourself, there is nothing CoralRing can do about that. Ultra-low-latency applications in Java are designed so that nothing in the critical path produces garbage.
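
The usual trick is to preallocate mutable message objects and reuse them, so the hot path never calls new. A generic sketch of the idea (hypothetical names, not CoralRing's actual API):

  // Illustration of a garbage-free hot path, not CoralRing's API.
  final class QuoteHandler {

      // Flyweight message: allocated once at startup, mutated in place.
      static final class Quote {
          long instrumentId;
          long priceMicros;
      }

      private final Quote scratch = new Quote();

      // Hot path: no "new", no boxing, nothing for the GC to collect.
      void onQuote(long instrumentId, long priceMicros) {
          scratch.instrumentId = instrumentId;
          scratch.priceMicros = priceMicros;
          process(scratch);
      }

      private void process(Quote q) {
          // Application logic: must read the fields before returning
          // and must never retain a reference to q.
      }
  }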


Maybe they run a small heap with a zero-pause JVM like Zing, as pause-less GC generally has lower throughput than normal GC.


Java doesn't have real pause-less GC.


Well, "pause too short to matter" just doesn't have the same ring to it.


One millisecond is not a short pause.


Modern low-latency GCs have never reached 1 ms on any of the workloads I've put them through. Mind you, I don't GC terabytes of RAM, so who knows what happens there.


You can generate binder wrappers from AIDL; that would work. This is fairly common when doing platform work (for those like me who work on the operating system rather than on apps).

However, this would be a terrible idea, because the Android API is usually stable at the Java wrapper level (e.g. ActivityManager), not at the AIDL level, which would make this very fragile for app development across a multitude of devices and platform versions.
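
For context, an AIDL file is just an interface declaration that the build turns into generated Java Stub/Proxy classes (the "binder wrappers"); a made-up example:

  // IExample.aidl (hypothetical interface, for illustration)
  package com.example;

  interface IExample {
      int add(int a, int b);
  }

Stable app-facing classes like ActivityManager wrap that generated code, which is why the AIDL surface underneath is free to change between platform versions.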


I also want to know this, but in reverse.

I build older Android (the OS) versions inside Docker containers because they have dependencies on older glibc versions.

This is a memory-heavy, multi-threaded process, and the OOM killer will kill build threads, making my build fail, even though there is plenty of available (but not free) memory on the Docker host; apparently it is not available in the container. If I drop caches on the host periodically, the build generally succeeds.
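
For reference, the cache drop is a single (blunt) command on the host:

  sync && echo 3 | sudo tee /proc/sys/vm/drop_caches

Writing 3 drops both the page cache and the reclaimable slab objects (dentries and inodes); it's safe, but everything is slower until the caches warm up again.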


And perhaps k8s is a specific category to consider here. I've read, and think I've experienced, that "active" (as opposed to inactive) page cache does count towards the k8s memory limit.


Well, it's understandable. If you talk about it with other Linux-on-MacBook tinkerers, I'm sure they'll be more sympathetic to your cause.

But generally, if you pointed that out to me, I'd say the same: picking that hardware puts you on a harder path. Also, I'm not sure if Linux users in general have any interest in winning people over.


I’m not talking about Linux users in general. I’m talking about the subset who hang around forums and other online spaces and purport to help newbies with problems.

Ironically enough, then, I figured out on my own that the biggest obstacle I was encountering with my Intel MBP was Ubuntu's distribution of firmware, where they officially pretend not to know about certain non-free device drivers, even though they vaguely gesture at the process of acquiring them in their respective official docs. Whereas Debian straight-up distributes them in its ISOs.

Can’t say I’m a big fan of Ubuntu’s wink-wink, nudge-nudge approach to “standing up for free software,” where they get you started down a path then steadfastly refuse to help because VIRTUE! Debian’s approach is saner. And so I’ll be tinkering with Debian, I guess.


Not quite. Treble is more about separating hardware specifics in a versioned, backwards-compatible way. It's more of a replacement for the previous HAL system, which in itself already abstracted the kernel for driver support (not necessarily just the kernel, because modern platforms run things like audio, sensors, and telephony outside the kernel's purview, in specialized DSPs or secondary MCUs).


But that's just the kernel. This is more akin to compiling the kernel plus all packages and applications that make up the operating system.


I'm very conflicted about Flutter. The developer story is tempting: cross-platform consistency, but extensible to each system's specifics.

On the flip side, there's the odd programming language, the bundle size, and the game-engine-like rendering (seemingly wasteful, but that may improve as hardware evolves).


I find it odd that you consider Dart an "odd programming language", given it is a very "mainstream" language. One of its core goals is to be unsurprising and easy to learn.


Except their web story is horrific. They render the entire thing in a canvas, which means a giant F.U. to non English-FIGS and a big F.U. to accessibility as well as extensions.


Definitely. I remember from reading the documentation that they can also target HTML/CSS (https://docs.flutter.dev/development/tools/web-renderer) instead of canvas, but I'm not sure how complete/coherent the output is.


what does FIGS stand for?


A short search revealed that it’s a term of art in localization, referring to French, Italian, German, and Spanish. Typically these are the first targets when localizing an initially-English product.

I have no idea what, if anything, Flutter’s canvas-based approach has to do with localization.

Also Flutter exposes a hidden DOM with accessibility information for the canvas. This might someday be superseded by a system like AOM: Accessibility Object Model, which is an API for directly constructing an accessibility tree for non-DOM content like a canvas.


Flutter, because it's using canvas, doesn't play well with text input for non-E-FIGS languages. Even when it does work, it's a second-class experience. Open a Flutter demo, switch your OS to Chinese, Japanese, or Korean, type some CJK, and watch as it shows placeholders while it goes and downloads fonts. Now try to select the text and use the OS's reconversion features (something you might not be aware of if you don't regularly use non-Roman-character languages). Or install some language or accessibility helper extension, run a Flutter demo, and watch as the extension has no way to find the text, because there is no text to find, just pixels.

That Flutter even got this far strongly suggests the people making it are monocultural. That they are thinking "maybe someday there will be a solution" is not the right way to approach building an inclusive GUI framework in this day and age.


Aha, I think we misunderstood your initial post. You meant "non-English and non-FIGS" languages. Thanks for replying with additional context. I definitely learned a few things.

I see what you mean about Flutter having to download the 16+ MB CJK fonts, and about the lack of visibility for extensions trying to read DOM text. It's possible there are fixes for some of these: the browser local font API could make your local system's fonts available to the Flutter runtime, which would be significantly faster, and the accessibility object model could make content visible to extensions (but only if they were rewritten to read AOM data!).

Also TIL about "reconversion" in CJK IMEs. Pretty neat!

I'm working with some of these same issues right now with a web-based PDF viewer app that runs PDFium in WebAssembly. Trying to make PDF content visible to extensions, IMEs, etc. the same way DOM content is turns out to be quite difficult.

Still, all of this works out of the box with DOM content. It's frustrating how many of these things have given up on "render to DOM".


First thing that comes to mind (as a GUI developer who is currently translating a product) is that the text renderer can't handle accents above and below Latin characters (e.g. "Let's buy crème fraîche in Curaçao"). I'd also be surprised if it could handle right-to-left strings.


Flutter uses Skia for text rendering and layout, the same rendering engine used by Chrome, Android, and others. I'm beyond confident it can handle all of those situations.

Flutter demos have a localization dropdown for selecting different locales, of which the gallery demo at least supports many, well beyond FIGS: https://gallery.flutter.dev/#/

Note those demos have other problems that make me crazy, like the copious overuse of non-selectable text. Why the default "Text" widget is non-selectable is a total mystery to me: https://api.flutter.dev/flutter/widgets/Text-class.html You have to instead use the "SelectableText" widget: https://api.flutter.dev/flutter/material/SelectableText-clas...

I have plenty of complaints about Flutter, but accessibility and localization aren't among them.


Except Skia is not a text renderer. Skia won't correctly position glyphs for you. It can only render glyphs for you at positions that you provide.


Your understanding of Skia may be out of date.

See SkParagraph: https://skia.googlesource.com/skia/+/refs/heads/main/modules...

And the experimental SkText API: https://skia.googlesource.com/skia/+/refs/heads/main/experim...


It's possible; I haven't looked at Skia for some time. As far as I can tell, in the past, https://skia.org/docs/user/tips/#does-skia-shape-text-kernin... applied.


I'm not familiar with Fuchsia, but those times are what I'd consider normal for an initial compilation of an operating system on a regular consumer workstation.

I work on the Android operating system and very rarely compile the whole thing from scratch in development environments. Incremental builds plus adb sync (think rsync between compiled artifacts on the host and the device) make it a manageable workflow.

Even incrementally, it takes a few minutes to see your changes, and that can be a source of frustration for newcomers who are used to instant feedback. Being productive requires making good decisions about how often and what to compile, as well as strategies for filling up compilation time.
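
Concretely, the inner loop looks something like this (the module name is just an example; m is the AOSP build entry point):

  m framework-minus-apex    # rebuild only the module you touched
  adb root && adb remount   # make the system partitions writable on a dev build
  adb sync                  # push only the changed artifacts to the device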


The last sentence is an eyebrow raiser


I work in videogames - making any change to code takes about 2-4 minutes to compile (huge C++ codebase with a billion templates; from scratch it actually takes about 40 minutes with a distributed build, a few hours without it), plus about the same again for the game to start so you can test. And god forbid you made any changes to data - then it's about 20-30 minutes to binarize the data packs again and deploy them. It really makes you slow down and think any change through before trying it. The "spray and pray" approach to coding simply doesn't work here.


The compile times are part of the reason why scripting languages like Lua are popular in games. In engine code it's fine, but you don't want to wait minutes to see a change while working on higher-level parts of the game.

The data thing though, that sounds like your engine is poorly optimized for development time. There should be a way to work with data without repacking everything.


While I remember those days of C++ development, I never want to go back there. It's surprising people are willing to put up with this shit in this day and age. No wonder there's so much crunch time.


Even though I mostly do managed languages, I keep coming back to C++. Why am I willing to put up with it?

It is the language those managed-language runtimes are written in, the main language for GPGPU programming and for two major compiler-building frameworks, and it works out of the box in all OS SDKs while giving me a way to avoid C's flaws unless forced into them by third-party libraries.

I could be spending my time creating libraries for other ecosystems, but I have better things to do with my time.


I sometimes wonder if these sorts of observations provide some reason to either make embeddable languages or NOT self-host the compiler for a new language, but instead keep writing the compiler in C or C++. This is a weak argument compared to self-hosting a language, but might be something to consider in passing.


Indeed, that is why a language often isn't self-hosted.

It isn't because the language is lacking in capabilities; rather, the authors decided to spend their resources elsewhere.

Naturally, this has the side effect of reinforcing the use of the languages that are already being used.


C++ is fine. It's just that nobody likes designing and writing lighter-weight tests and simulations to keep local dev workflows productive.

For things like AAA games and OS development, I'm not convinced simply picking another language solves the problem. At least not while keeping all the same benefits. C builds faster, sure, but it doesn't have the same feature set.


I'm trying to! https://youtu.be/fMQvsqTDm3k?t=86

I'm getting 150 ms of iteration time on small cases, 200-300 ms on average ones.


We do zero crunch or overtime as a company policy, but yeah, it does slow things down a bit.

>> It's surprising people are willing to put up with this shit in this day and age

All the major APIs and the SDKs for PS5/Xbox are only provided in C++, so it's almost a necessity. Same reason we all use Windows - the platform tools are only provided for Windows.


How are you building your code? Here's my iteration time when working on the main codebase I'm on, https://github.com/ossia/score, which is very template-heavy and uses Boost, C++20, and a metric ton of header-only libraries: first on a semi-heavy file and then on a simple one. https://streamable.com/hfkbzg


I'm actually curious why they bother converting game data into binary formats at all before production builds. The logic to load the data is all there, and you already have the raw data locally; I would assume it takes more logic to unpack the binary format, and it would seem faster to just use the raw data.


The data before packing and the data after unpacking are probably not in the same formats.

Some of the packing steps are also probably lossy (e.g. take this super-high-poly-count model and cut away 99% of the polygons). If you skip that culling step, the game probably won't run.


Yes, that's exactly what it is. The "data" might be 8K textures; as part of the binarization, they get converted into a format that the client can understand, following a "recipe" set for the texture through the editor (so, convert it to a 2K texture with a specific colour spec). Same for models etc.

And yes, the client itself usually can't read the raw data, and even if it could, there is not enough RAM on consoles to load everything as-is. The workstations we use for development all have 128 GB of RAM just so you can load the map and all the models in-editor.


Have you tried ccache now that it has MSVC support?


We use clang for all configurations and all platforms, and we use FastBuild(we also used to pay for Incredibuild but it wasn't actually any faster than FastBuild in our testing).


Yeah, I mentioned it because I see peers doing different things to stay productive during compilation, while newcomers will stare at compiler output. Some will jump to writing documentation, take care of issue management, work on some other ticket entirely, etc.


I'm vastly oversimplifying the issue (also I'm not a doctor), but didn't studies show that this type of multitasking is bad for our mental health and increases the likelihood of burnout?


Not a doctor either, just going on articles I've read on this. The sort of multitasking that causes those problems is when both tasks need frequent attention. If you can essentially leave a task alone for several hours, with some sort of notification if there's a problem, that's fine. Even things that don't take much attention but are ongoing tasks, like doing the ironing, are fine depending on what else you're combining them with.


This resonates. I usually wrap my long-running commands in something that sends a push notification when they finish, so that I don't jump around checking whether things completed or failed. I find the distraction of the push notification less disruptive than continually checking for completion.
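
The wrapper can be a tiny shell function; the endpoint here is a placeholder for whatever push service you use:

  # Run any command, then push a note with its exit status (hypothetical endpoint).
  notify() {
    "$@"
    local status=$?
    curl -s -d "done: $* (exit $status)" https://push.example.com/notify
    return $status
  }
  notify make -j8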


macOS comes with a command, say, which is a text-to-speech tool. I'd do things like make && say "build complete" || say "compile failed" with different voices I thought were funny. It generally worked great.

One day, I stepped away and had a particularly intimidating voice say "your build has failed"; apparently my headphones had been knocked out. I came back just in time to hear that, and to see a couple of coworkers jump at the sound.

After that, I was much more consistent at disabling sound when I stepped away. I got a little teasing about that day, but generally it worked great.


This is possible in Linux as well. The program is called speech-ng IIRC.


espeak-ng / espeak. But notify-send(1) mitigates the potential for scaring your coworkers.


I use espeak and it works on Termux too.


I used a similar approach before working mostly remotely; now I just curl to Pushbullet to get the notification wherever I might be working.


This is actually a great idea. I've used CLI notifications for things that I knew would take hours, but for coding-related stuff I always think "it's not that long." So I would immediately go into "active attention multitasking", rechecking whether the compilation tab had finished, and then deciding from there where to shift my active attention, without giving it any breathing room.

Lately I've actively tried not to do that, though (it's a hard habit to break, and I still feel guilty sometimes).


This makes sense about moving your active attention to different object(ive)s. I think I slowly stopped doing "active" multitasking and switched to this low-attention multitasking once my burnout got worse. FWIW, I don't think multitasking on its own caused the burnout either; it was probably a combination of a few things.


It's certainly possible; I honestly wouldn't know. Anecdotally, I find it worse for my mental health to just sit around waiting for things to finish.


fast cycles are liberating, because it's easy to just ask the compiler or your testing harness if you're doing the right thing. I can't speak for the parent, but in my experience, typing isn't the hard part.

With slower cycles, I think more about how much to try before submitting work. Sometimes I feel comfortable pounding out quite a bit of code. Other times, I know there's some subtlety, so I need to double-check things. I don't want to stumble on forgetting a const declaration, or something silly like that. Iterations are slower, but you can spend time in flow, thinking harder about each loop.

Although, sometimes, I do just stare at the console waiting for feedback. That's usually a good time to go to the bathroom and maybe grab a snack.

Not necessarily multitasking. Just being careful about what plates are spinning, and which I can set down or pick up between steps.


> fast cycles are liberating, because it's easy to just ask the compiler or your testing harness if you're doing the right

Type checking is still a very quick part of compilation, so it can still support a "fast cycle" workflow if most simple errors are detected via incompatible types. You just need to go all-in on type-driven development, rather than relying simply on unit tests.
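
A toy Java sketch of the idea (hypothetical types, purely for illustration):

  // Distinct wrapper types turn unit mix-ups into compile errors,
  // caught by the fast type-checking phase rather than by a test run.
  class Units {
      record Meters(double value) {}
      record Feet(double value) {}

      Meters climb(Meters altitude, Meters gain) {
          return new Meters(altitude.value() + gain.value());
      }
      // climb(new Meters(100), new Feet(30));  // would not compile
  }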


Ah, I alluded to that with "ask the compiler".

But yeah, types are great. QuickCheck is great too. But you'll have to pry my oracles from my cold dead hands. Computers show me over and over how stupid I am. Yeah, if I have a regression, I'm adding a specific test for that.


> newcomers will stare at compiler output.

I know at least for me, I sit in quiet contemplation and think while the project is compiling. I expect the context switch of going between writing docs and writing code on every compile would be too much for my brain. Is it managers making you feel like you need to be doing something else while your code is compiling? I guess I just never felt like I was being unproductive while my code was compiling.


I just browse HackerNews.

