Hacker News | tyhoff's comments

The most important part, no matter if it's email, Nextdoor, or WhatsApp, is that we can use it to disperse information quickly, then get off the platform and meet up in person. Our stoop coffees are broadcast via WhatsApp, then some people tell their immediate neighbors via text, and then people show up.


You say the 'watch parties' are too much, but these are actually the best events. They're at 8pm, people come over once their kids are in bed, we have 4-10 people show up, everyone's in their sweatpants, and we watch a TV episode. We discuss it for 10-15m afterwards, and then we generally part ways.

It's such a low-effort and small event, and it allows people to get into other people's homes in a low-judgement way. It's been one of the more successful events at getting neighbors to become friends with each other.


> We discuss it for 10-15m afterwards

Yeah, that sounds awful. That's the great part though, everyone can participate on their own terms. Do the stoop coffee, do the watch parties, whatever they feel energizes them and brings joy.

It's such a great initiative.


Surely it's just very individual? A watch party of that kind sounds like something I'd never wish to do, but I'd gladly hang out outside for a beer or two.


Author of the post here - would love to hear about your trials and tribulations of building battery powered devices.


I’ve been working with IoT for about 8 years, and the one thing that has rung true across platforms, designs, customers, and use-cases is that you can only squeeze so much performance from a setup that wasn’t properly optimized for low power consumption.

I’ve had customers desperately approach me trying to make an IoT device survive just one night on a small LiPo battery, enough so the sun in the morning would charge it up again, but their solution was a cobbled-together mess: an ESP32 searching for their administration network to connect to, and a uBlox modem powering up and sending off a 4MB packet every 5 minutes. Turns out, it would have been more power-efficient to leave the modem powered on and connected to the cell network, since you need to exchange something like 25-50 packets per handshake versus 2 packets per minute if you’re just idling.

I’ve been cursed as the guy who just fixes everything, because I have a background in both hardware and software, in addition to knowing cell networks and the TCP/IP stack (usually DTLS, in this case) at a low level. When I optimize, I attack the problem from all directions. For example, it costs more power to receive messages, so for anything non-critical (such as a periodic data packet) I use non-confirmable UDP packets, i.e. fire and forget. I try to avoid heavy RTOSes on my devices and opt for a simple messaging library to properly format data for optimal transfer over the cell network. The devices I build have low-power MCUs with a restart timer to wake up periodically. I managed to make a solar-powered environment sensor with only a supercapacitor as reserve power.
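At its simplest, a non-confirmable, fire-and-forget uplink is a single UDP datagram with no acknowledgement awaited. This is a minimal POSIX sketch of that idea (the collector address and function name are hypothetical placeholders, not the DTLS/CoAP stack described above):

```cpp
// Sketch: fire-and-forget telemetry over plain UDP. No ACK is awaited,
// so the radio can return to low power immediately after transmit.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstddef>
#include <cstdint>

int send_telemetry(const char *ip, uint16_t port,
                   const void *payload, std::size_t len) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) return -1;

    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(port);
    inet_pton(AF_INET, ip, &dst.sin_addr);

    // One datagram, no retransmit, no confirmation: fire and forget.
    ssize_t n = sendto(fd, payload, len, 0,
                       reinterpret_cast<sockaddr *>(&dst), sizeof dst);
    close(fd);
    return n == static_cast<ssize_t>(len) ? 0 : -1;
}
```

The tradeoff is explicit: periodic data you can afford to lose costs one transmit and zero receives, which is exactly where the power savings come from.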

This went on a bit, but I think my point is that for well-architected, low-power devices you need to start from the ground up, and sometimes that means ditching your IoT platform and rolling your own hardware and firmware. My last observation is that many hardware engineers are not the ones who install or test the solutions they design, and so are unaware of power consumption beyond the specs in the datasheet.


Nice job.

Effective capacity also drops with load for many batteries, and there can be subtleties. Read all the data sheets and applications manuals from your suppliers.

Even very good firmware engineers can need reminders that everything you do with a battery operated device drains the battery.

Keysight, R&S, and Tektronix/Keithley all have nice battery test devices and battery test simulators. You can rent one if buying one takes your breath away.

Also, IoT devices can require very fast ammeters or sourcemeters to correctly measure net current or power. The RMS reading on your multimeter might not even register the fast spin-up and spin-down of a BLE device. That's another use case for the Qoitech tool. Again, the big instrument makers make even nicer stuff. Call an FAE.


Good insights here. I'm going to steal a couple of notes here and update the post if you don't mind!


Thanks. Of course. I'm flattered.


Use a CR2032 battery or be prepared for a life of misery. :)

To a first, second, and third approximation: CR2032 is the largest coin cell that exists. Anything else has terribly weird quirks and may not actually be better than a CR2032.

Basically, there are only three battery choices:

1) Alkaline

2) CR2032

3) Full blown LiPol rechargeable

Any divergence is pain. Lots and lots of pain.


We're building a bicycle product involving loadcells, and one of the issues with them is that strain gauges typically have pretty low resistance. As we need readings at a relatively high sample rate (measuring pedalling dynamics), it's a lot of fun waking up the loadcells, getting the sigma-delta ADCs to produce nicely-settled 24-bit results across multiple channels, and synchronising the whole thing across 4 separate sensor nodes. Basically we have to pedal and chew minimal joules at the same time.

Latest hardware has coulomb counters to keep track of charge state; you probably already know it but the nPM1100 has some good press.

* https://www.nordicsemi.com/products/npm1100


What's wrong with using Grafana?


<3 Me and a few others created this course in 2014. Glad you found it useful! I got so much joy out of creating the content for the lectures and labs.


Hope you liked the course! I was the one who first made and taught it. To my knowledge, it’s still a required course for all CS freshmen and is still taught by students.


You speak such truths :D. It's a seriously under-documented knowledge gap on the Internet. There are an infinite number of Cortex-M4 MCUs, but only a few I would recommend hardware companies actually use, primarily due to power consumption under production workloads (sleep and deep sleep) and the software packages that are bundled, both of which are generally not assessed before choosing hardware.


Curious, which ones are good & which ones are bad?


Any STM32 L series or nRF can do very low power draw if used correctly. Gotchas include making sure other components on your board aren't drawing power.


It’s primarily about the number of hours. At Pebble, we’d be able to get a pretty good gauge on battery life with about 10-20 devices and about 24 hours worth of data. We recorded the battery’s percentage drop over the course of each hour, averaged all samples together, and then used that to estimate how long the battery charge would last.

This strategy is covered here: https://interrupt.memfault.com/blog/device-heartbeat-metrics...

Of course it’s better with thousands of devices, but we were always surprised how accurate it was (as long as we had a few Android users in the office - it reliably got worse battery life due to worse Bluetooth performance)
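The averaging described above is simple arithmetic; here is a sketch of it (the function name and shape are mine, not from the linked post):

```cpp
#include <cstddef>

// Average the per-hour battery percentage drop observed across sampled
// devices, then project how many hours a full (100%) charge would last.
double estimate_battery_hours(const double *pct_drop_per_hour,
                              std::size_t n) {
    if (n == 0) return 0.0;
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i) sum += pct_drop_per_hour[i];
    double avg = sum / static_cast<double>(n);  // mean % drop per hour
    return avg > 0.0 ? 100.0 / avg : 0.0;       // hours per full charge
}
```

For example, hourly drops of 1.0, 1.5, and 0.5 percent across three devices average to 1.0 %/hour, i.e. an estimated 100-hour battery life.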


Preface: I love Stacked Git and have convinced many co-workers to use it and they all love it. It's really great, especially when used with Phabricator!

---

A question for the maintainers... I was enjoying the command `stg publish` to allow myself to keep refreshing patches locally but every so often push to a branch that other non-stg users were pulling from and pushing to. The command was removed in 1.0.

I'm curious why it was deprecated and removed and if there is a good replacement flow for working with non-stg users on branches I can't force push to.


Ah ha, someone did use `stg publish`!

I removed it because it was challenging to test and maintain, its semantics are somewhat complex, and it wasn't clear that it provided value to anyone.

If you wouldn't mind making your case on the issue tracker [1], I will reconsider the fate of `stg publish`.

[1] https://github.com/stacked-git/stgit/issues


I'm not using it any more, but for a while I used stg publish as a way of working with a stack of patches but also providing other people with a "this branch doesn't rebase, you can pull it" interface. It was also nice when working continuously with the same stack of patches (periodically rebasing them onto a moving upstream, cleaning them up, etc) to have the publish view's commits as a record of "this is the delta since the last publish". But it's been years since I had a need for the functionality...


`stg publish` was cumbersome to use, so I wrote a simple bash script:

    cat ~/bin/stg-update 
    #!/bin/bash
    set -e
    current=$(stg top)
    git push -f origin HEAD:$current
It pushes changes to a remote branch with the same name as the current patch.


These two commands ultimately do different things. `stg publish` was in the business of not force pushing and only committing the 'delta' between what lived on a branch and the currently applied patches, so that you can `git push [no -f]` to a branch.

Definitely cumbersome to use, but it had fair reasons to be cumbersome.


Author here. Firmware, for better or worse, is stuck in C/C++ land, so many of the topics here are essential to keep a complex, multi-MCU, multi-board setup in working order.

I'm actually really curious what you all do in C/C++ to prevent bad operations from ever being performed in the first place.

For example, should we just change our internal malloc / free to use double pointers instead?

  bool malloc(size_t n, void **ptr) {
    *ptr = <memory>;        /* underlying allocation of n bytes */
    return *ptr != NULL;
  }

  void free(void **ptr) {
    <free *ptr>;
    *ptr = NULL;            /* poison the caller's copy */
  }
This way, if anyone actually tries to use that pointer, it will crash the system (hardfault), instead of potentially using corrupted memory, which IMO is much worse.


> I'm actually really curious what you all do in C/C++ to prevent bad operations from ever being performed in the first place.

1. Go as far as you can with static memory. Don't call the allocator: it's slow, and sometimes requires inter-thread communication.

2. Dynamic uses often can use the stack: almost always in L1 of your local core. Stack allocations are self-cleaning, just don't pass those pointers to anyone else.

3. unique_ptr<blah> covers most of the true dynamic issues.

4. shared_ptr<blah> covers the rest of them.
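A rough sketch of that preference order, with invented types and sizes purely for illustration:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <memory>

struct Frame { uint8_t bytes[64]; };

// 1) Static: lives for the whole program, no allocator involved.
static std::array<Frame, 8> g_frame_pool;

// 2) Stack: self-cleaning, but don't let the pointer escape this scope.
std::size_t sum_scratch() {
    uint8_t scratch[32] = {};
    std::size_t total = 0;
    for (uint8_t b : scratch) total += b;
    return total;  // scratch is reclaimed automatically on return
}

// 3) unique_ptr: single owner, freed automatically on scope exit.
std::unique_ptr<Frame> make_frame() {
    return std::make_unique<Frame>();
}

// 4) shared_ptr: only when ownership genuinely must be shared.
std::shared_ptr<Frame> share_frame() {
    return std::make_shared<Frame>();
}
```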

---------

The above advice is just general-purpose / low-performance code.

If you're entering high performance coding (aka: you're actually worried about fragmentation, memory sizes, and such), then things get more complicated. Embedded is often RAM-restricted, so you end up in this complicated place where you need to think of highly-efficient techniques.

That's almost always a complexity / resource tradeoff. Efficient techniques are just innately more complex than general-purpose techniques. Try not to reach for efficient coding unless you know it's something you need to do.


> Dynamic uses often can use the stack

But they should also be very careful when deciding when to use the stack. Uncontrolled input + alloca = a bad time.


This gives a false sense of security, since the most common use-after-free bugs are if you kept a pointer around. Your method doesn't stop things like this:

    void *p1, *p2;
    malloc(100, &p1);     /* the double-pointer wrapper from above */
    p2 = p1;
    ...
    free(&p1);            /* p1 is now NULL, but p2 still dangles */
    *(char *)p2;          // oops! use-after-free through the stale alias
Obviously it's trivial in an example like this, but when p2 leaves the lexical scope, there's a problem.


If you are transferring ownership, you would do

    p2 = p1; p1 = NULL;
However if you are intentionally doing multiple ownership, then yes you can still have problems.


For MCU code, it's best practice to statically allocate all memory. If you do need dynamic allocation, it's better to use either fixed-size pool allocators or allocate-only pools.

For performance critical code running on a more powerful system with an OS, it's still good to stick with static allocation, or malloc and free only during initialization time.

Over the last decade I've written firmware for spaceships, self-driving cars, and autonomous aircraft, and I've never needed more than an allocate-only pool or an O(n) LRU cache data structure. I've written plenty of software that was hard on the heap, but rarely does memory need to be freed during runtime, and it's rare to need raw pointers in C++ when writing such code.

Ideally, your "firmware" is written in a way that abstracts away the target hardware enough that it's basically just weird software. Then, you can unit test it, and even run linters and memory sanitizers on it.
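A fixed-size pool allocator of the kind mentioned above can be sketched in a few lines (block size and count are template parameters you'd pick for your workload; this is an illustrative minimal version, not flight code): statically reserve N slots and hand them out from an intrusive free list.

```cpp
#include <cstddef>
#include <cstdint>

// Minimal fixed-size pool: N statically reserved blocks of SIZE bytes,
// chained through an intrusive free list. No heap, O(1) alloc/free.
template <std::size_t SIZE, std::size_t N>
class Pool {
public:
    Pool() {
        for (std::size_t i = 0; i < N - 1; ++i)
            slot(i)->next = slot(i + 1);
        slot(N - 1)->next = nullptr;
        head_ = slot(0);
    }
    void *alloc() {
        if (!head_) return nullptr;   // pool exhausted
        Node *n = head_;
        head_ = n->next;
        return n;
    }
    void free(void *p) {
        Node *n = static_cast<Node *>(p);
        n->next = head_;              // push back onto the free list
        head_ = n;
    }
private:
    // Each slot is big enough for either a payload or a free-list link.
    union Node { Node *next; uint8_t bytes[SIZE]; };
    Node *slot(std::size_t i) { return &storage_[i]; }
    Node storage_[N];
    Node *head_;
};
```

Because all memory is reserved up front, exhaustion shows up as a testable `nullptr` rather than heap fragmentation discovered in the field.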


> This way, if anyone actually tries to use that pointer, it will crash the system (hardfault), instead of potentially using corrupted memory, which IMO is much worse.

Oh you sweet summer child. Null pointer dereference is undefined behaviour; the compiler will happily turn it into corrupting memory under the right circumstances.


> Oh you sweet summer child

Should have clarified. On the embedded systems I work on, generally Cortex-M4, you set an MPU region on address 0x0 and it turns into a hardfault, so it's pretty easy to catch.


If you were writing assembly that would be reliable, but in C it isn't. The compiler knows that NULL dereferences are undefined behaviour and so won't necessarily compile them into memory accesses.


Yeah, in C you need to use assertions (or simple checks) for things that might be null.

That said the compiler isn't infinitely smart (thank god) and complex null derefs will "safely" make it to runtime. (What a sad world we're in.)


A good optimizing compiler can remove the dereference entirely if it notices, unfortunately.


How big is this MPU region? Pointers to large structs can easily allow access past the protected region.


It is undefined by the standard, so a compiler is free to turn null pointer dereferences into hard faults. You need to know your toolchain and your environment.


Why is it stuck in C/C++ land? I think if you can write firmware in C++, then it should be possible to write it in Rust, too. Am I wrong?


It is very possible to write firmware in Rust, as long as the device is using a chip that’s supported.


One thing I try to do in c++ is pass things by reference in lower level libraries so that client code has to be defensive but library code assumes non null.
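A sketch of that split (the names here are invented): the client-facing entry point does the one defensive null check, then hands a reference down so library internals never re-check.

```cpp
#include <cstddef>

struct Packet { std::size_t len; };

// Library internals: reference parameter, non-null by construction.
static std::size_t payload_length(const Packet &pkt) {
    return pkt.len;
}

// Boundary: the client-facing entry point is the only place that has
// to be defensive about null; inner code can assume a valid object.
bool get_payload_length(const Packet *pkt, std::size_t *out) {
    if (pkt == nullptr || out == nullptr) return false;
    *out = payload_length(*pkt);
    return true;
}
```

The reference type documents the non-null contract in the signature itself, instead of relying on comments or repeated runtime checks.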


There is at least the option to use a more modern language if it translates to C.

Nim for example does that, and has compiler options for embedded systems.

https://nim-lang.org/docs/nimc.html#nim-for-embedded-systems


Author here! I don't think we are in disagreement, just mostly in different interpretations. You are talking about a car, with likely 200-300 embedded systems baked into it.

If the light sensor or the braking module goes into a completely unexpected state where it can no longer be communicated with, it has corruption, etc, you need to do something about it.

The solution isn't to reboot the car, it's to reboot the individual module and quickly get it back into working order. Rebooting the braking or lighting module should take on the order of milliseconds (if done correctly), and that is much, much better than having a piece of software in a corrupted state for seconds/minutes/hours.

In a car, there is a mothership board orchestrating everything, pinging every module, and ensuring everything is in working order. Just like a Kubernetes setup, if any one piece fails to ping or looks off, just reboot it and get it working ASAP.

