
Boids is so satisfying!

Counterpoint: if a small part of the process is getting tweaked, how responsive can the team responsible for these apps be? That’s the killer feature of spreadsheets for business processes: the accountants can change the accounting spreadsheets, the shipping and receiving people can change theirs, and there’s no team in the way to act as a bottleneck.

That’s also the reason that so-called “Shadow IT” exists. Teams will do whatever they need to do to get their jobs done, whether or not IT is going to be helpful in that effort.


I've seen many attempts to turn a widely used spreadsheet into a webapp. Eventually, it becomes an attempt to re-implement spreadsheets. The first time something changes and the user says "well, in Excel I would just do this..." the dev team is off chasing existing features of Excel for eternity, and the users are pissed because it takes so long and is buggy. Meanwhile, Excel is right there, ready and waiting.

I always see this point mentioned in "app vs. spreadsheet" debates, but no one gives a concrete example. The whole point of using a purpose-built app is to give some structure and consistency to the problem. If people are replicating spreadsheet features, then they needed Excel to begin with, since that is a purpose-built tool for generalizing a lot of problems. It's like saying "my notebook and pen are already in front of me, so why would I ever bother opening an app?" Well, because the app provides some additional value.

I have never heard of shadow IT. What is that?

It's when the users start taking care of IT issues themselves. Maybe the name comes from the Shadow Cabinet in England?

Where it might not be obvious is that IT in this context is not just pulling wires and approving tickets, but is "information technology" in the broader sense of using computers to solve problems. This could mean creating custom apps, databases, etc. A huge amount of this goes on in most businesses. Solutions can range from trivial to massive and mission-critical.


I think the term is mainly because it tends not to be very visible or legible to the organization as a whole. That's probably the main risk of it: either someone leaves and a whole section of the IT infrastructure collapses, or someone sets up something horrifically insecure and the company gets pwned. It's especially hidden because most IT departments hate it, so there's a strong incentive to keep it quiet. (I personally think IT organizations should consider shadow IT a failing of their own, and should seek out ways to collaborate with those setting it up, or figure out what is lacking in the service they provide to the rest of the company that means they get passed over.)

That's quite possible. I've done a certain amount of it myself. A couple of programs that I wrote for the factory 15+ years ago are being used continually for critical adjustment and testing of subassemblies. All told it's a few thousand lines of Visual Basic. Not "clean code" but carefully documented with a complete theory of operation that could be used as a spec for a professionally written version.

My view is that it's not a failing, any more than "software development can't be estimated" is, but a fact of life. Every complex organization faces the dilemma of big versus little projects, and ends up having to draw the line somewhere. It makes the most sense for the business, and for developer careers, to focus on the biggest, most visible projects.

The little projects get conducted in shadow mode. Perhaps a benefit of Excel is a kind of social compromise, where it signals that you're not trying to do IT work, and IT accepts that it's not threatening.

There's a risk, but I think it's minimal. Risk is probability times impact, measured in dollars. The biggest risks come from the biggest projects, just because the potential impact is big. Virtually all of the project failures that threaten businesses come from big projects that are carried out by good engineers using all of the proper methods.


It's where you have processes set up to manage your IT infrastructure, but those very processes often make it impossible, or too time-consuming, to actually get anything done.

The team that needs it ends up managing things itself without central IT support (or visibility, or security, etc.).

Think of being given a locked-down laptop and no admin access. Either get IT to give you admin access, or buy another laptop that isn't visible to IT and lets you install whatever you need to get your job done.

The latter is often quicker and easier.


I'm laughing because I used that exact same phrase: "shit umbrella". Like some of the other replies mentioned, telling your team it's raining is great. The balance I found was to let them know what's coming and why, but to let leadership's "pivots" stabilize for a few days before sharing the unfiltered shit stream with my reports. This meant that the team still knew what was going on early but didn't panic as much when there was a sudden crazy random request from leadership that would be highly disruptive.

> but for technical tools it can be pragmatic

I've thoroughly enjoyed using ImGui for tooling around image processing, computational geometry, and a bunch of 3D projection stuff. The fact that it's based on OpenGL or Vulkan or whatever backend you want is a big win for this kind of work. I can just take a bunch of pixels, throw them into a texture, and render some quads with those textures painted on them after going through some 2D transformations or 3D projection/transformation. It's quite beautiful for all of this. ImPlot handles basic data plotting, and the built-in ImGui widgets control the whole thing.
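For anyone curious what that looks like, here's a minimal sketch of the pixels-to-texture-to-quad flow, assuming a Dear ImGui + OpenGL3 backend is already set up (the upload helper is mine, not part of ImGui):

    #include "imgui.h"
    // Assumes a GL loader (e.g. glad) and an ImGui OpenGL3 backend
    // are already initialized elsewhere.

    // Upload a raw RGBA pixel buffer into an OpenGL texture once.
    GLuint upload_rgba(const unsigned char* pixels, int w, int h) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        return tex;
    }

    // Then, in the per-frame UI code, ImGui draws it as a textured quad:
    // ImGui::Begin("Image viewer");
    // ImGui::Image((ImTextureID)(intptr_t)tex, ImVec2((float)w, (float)h));
    // ImGui::End();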


Hahahaha well that behaviour is certainly fun!

I recently had a less wild but similarly baffling experience on an embedded-but-not-small device. Address 0 was actually a valid address. We were getting a HardFault because a device driver was dereferencing a pointer to an invalid but not-null address. Working backwards, I found that it was getting that invalid address not from 0x0 but rather from 0xC… because the pointer was stored in the third field of a struct and our pointer to that struct was null.

   foo->bar->baz->zap
foo = 0, so &foo->bar = 0xC; the value read from address 0xC was garbage (an invalid but non-null address), and dereferencing that to get at baz/zap is what blew up.
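In other words, something like this hypothetical layout (field sizes assume a 32-bit target; names are made up for illustration):

    #include <cstdint>

    struct Baz { int zap; };
    struct Bar { Baz* baz; };
    struct Foo {
        uint32_t other[3];  // 12 bytes of earlier fields
        Bar* bar;           // lands at offset 0xC
    };

    Foo* foo = nullptr;
    // foo->bar loads from address 0x0 + 0xC = 0xC. Since address 0 is
    // mapped on this platform, that load succeeds; whatever garbage
    // lives at 0xC becomes 'bar', and dereferencing *that* invalid
    // address is what finally raised the HardFault.
    int v = foo->bar->baz->zap;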

In my own experience with "modern-ish C++" (the platform I work with only supports up to C++17 for now), since we started using smart pointers like unique_ptr and shared_ptr, iterator invalidation has been the primary source of memory-safety errors. You have to be so careful any time you have a reference into a container.

In a lot of cases the solution is already sitting there for you in <algorithm>, though. One of the more common places we've encountered this problem is when someone has a simple task like "delete items from this vector that match some predicate" and then just writes a for-loop that does it but doesn't handle the fact that the iterators can go bad when you modify the vector. The algorithms library has functions to handle that, but without a good mental checklist of what's in there, people will generally just do the simple (and unfortunately wrong) thing.
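For what it's worth, a minimal sketch of the erase-remove idiom that avoids the invalidation (C++17-compatible; the function name and predicate are just for illustration):

    #include <algorithm>
    #include <vector>

    // "Delete items from this vector that match some predicate",
    // without ever erasing while iterating.
    void drop_negatives(std::vector<int>& v) {
        // std::remove_if compacts the survivors to the front and
        // returns the new logical end; one erase() trims the tail.
        v.erase(std::remove_if(v.begin(), v.end(),
                               [](int x) { return x < 0; }),
                v.end());
    }

    // The common bug this replaces: erasing mid-loop without using
    // the iterator that erase() returns.
    // for (auto it = v.begin(); it != v.end(); ++it)
    //     if (*it < 0) v.erase(it);  // 'it' is invalidated here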


How much performance do you give up with smart pointers?

Not enough that we’ve ever noticed it being at all significant in the profiling we do regularly. The system is an Edge ML processor running about 600 megapixel/sec through a pipeline that does pre- and post-processing on multiple CPUs and does inference on the GPU using TensorRT. In general allocation isn’t anywhere near our bottleneck, it’s all of the image processing work that is.

To give you a bit of insight, around the same timeframe (late October/early November) I directly observed two high-accuracy RTK GPS receivers reporting high accuracy (2cm), full 3D DGPS lock with carrier phase, and positions wandering within about a 5m circle horizontally. The altitude was staying pretty consistent (within about 1m, which was outside of the reported accuracy but not bad) until there was a sudden 60m altitude shift. This was all while they were sitting static on the ground, verified both by the crew and the accelerometer, gyro, and RADAR data.

There wasn’t a software fix per se, but we were able to quickly add a check to verify that the Kalman Filter’s position variance estimate was on the same order of magnitude as the accuracy level that the receivers were reporting and put a big red warning up. This wasn’t a flight-critical system, but it is the first time we’d ever seen that behaviour from those receivers and we’ve used them for 5 years.


Not my area at all, but I'm extremely surprised that a fly-by-wire system would use GPS as an altitude reference. Is that really the case?

It’s a combined-signal system, using pressure-based sensors + GPS.

And inertial guidance too?

I don’t know what Airbus uses; I’ve only looked into the schematics of commercial avionics like Garmin. I wondered, though, whether IMU drift and calibration introduce more error than the useful signal they provide; old-school pressure sensors + GPS, adjusted manually or automatically for regional pressure settings (pilots get these numbers over the radio when they enter a new pressure area), are accurate enough (~1m). I’ll let a real avionics engineer correct me here; I’d be curious whether that signal is worth the hassle. Plus, I can imagine such tiny SMD sensors ARE the biggest victims of radiation hallucination.

I would expect a huge shift like that to violate the Gaussian assumption of the Kalman filter (which I guess is what you're checking, sort of?). Regardless, I would expect the Kalman filter to smooth the shift over some substantial time, at least?

I think it more likely that these receivers fell for a spoofed GPS signal, or hit some software bug internal to the receiver, than that it was a solar bitflip.

In our case I don’t believe it was solar bitflips but rather wildly changing ionospheric conditions. I was primarily pointing out how you can have a software-only fix for problems like this.

Without going too far into the weeds: the fact that the receivers in question were reporting high accuracy under uncertainty is, from my perspective, definitely a software bug in the receivers. There was also a different receiver on-site with a completely different chipset that was experiencing similar issues but was reporting low accuracy. Without going into too much detail, I’ve got pretty good reasons to believe it wasn’t spoofing.


> moving my legs using her upper body, it felt quite intimate and I admired her for being so professional

This highlights something that I've been chewing on a lot lately. I'm not sure what you specifically meant by the word "intimate" when you said that, but I do think it's really interesting to distinguish between "intimate" and "sexual", even though they often coincide.

As an example, years ago I was staying with some out-of-town friends after a break-up and they wanted to introduce me to a couple of lovely single women they knew. I hadn't really been taking great care of myself in the fallout of the breakup, so I went and shaved and got cleaned up. While doing my hair, I realized that my eyebrows were pretty unruly and somewhat sheepishly asked my friend's wife if she'd be comfortable taking some tweezers to them and helping me get them cleaned up. It wasn't, even a little bit, a sexual moment but it ended up being incredibly and unexpectedly intimate. We were both pretty surprised by it and ended up getting closer (as friends) afterwards.


What a nice story! Cleaning up eyebrows nicely shows the distinction between intimacy (feeling love and care) and erotica (feeling lust?).

The hair grooming in the article probably felt similar.

Thinking about the physiotherapy: her upper body felt very warm and soft, but it was probably a rather standard technique for firmly moving the joints and ligaments in the legs and hips.

What made it most intimate was not just the softness of her body but also the care she took with the movements, knowing that it would help.

So my limbic system went into oxytocin-producing mode, which the aware mind easily picks up as warm thoughts. I think that's where the bridge between intimacy and sexual thoughts can happen, but my thinking there was not going in that direction; it just felt warm and comfortable, even a bit emotional.

In your case the feelings apparently came from both directions; it was not a professional/client context, after all.


> What made it most intimate was not just the softness of her body but also the care she took with the movements, knowing that it would help.

100% that was a big part of it too for me. It was the care and attention that was going into it, plus the element of trust that goes into giving someone consent to inflict sharp but short-lived pain.

I’d actually be really curious whether, on the physiotherapist’s side, there’s a combination of intimacy and professionalism happening too. I’ve done physio with people who did not give me warm fuzzies at all, and with people who, like for you, left me with that nice oxytocin sense of satisfaction. I wonder if the people who left me with that feeling are good at what they do because they have some added degree of empathy or mirror neurons or whatever that makes them feel good when they treat their patients softly and intentionally.


Indeed, I think it is a win-win between caregiver and patient, which has little to do with financials. One of the feats of the limbic system is promoting emotional resonance, which can happen in both directions and does not have to imply romance.

I mean... you're completely right, and some of the stuff is as ridiculous as you're suggesting (e.g. the furniture). However... what we're agreeing on is that the fungus is absorbing alpha/beta particles and gamma rays coming off the radioactive material, which in theory should mean that it would act as a radiation shield. Whether it's a more effective radiation shield than other options is the big question, and for space travel in particular the question I'd want answered is how effective a given mass of this fungus is relative to other options (e.g. water).

I'd like to know about its failure modes. Does the fungus die when kept in less-than-ideal conditions? How quickly?

With respect to the frequency changing as the signal's downsampled, I'm pretty sure the author isn't correctly keeping track of the fact that by having fewer samples they're effectively changing the sample rate. It looks like every FFT is using 2048 bins, which is somewhat unexpected; they're not documenting how they're taking a 2048-point FFT with fewer than 2048 samples. Otherwise, fantastic article!
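To make the sample-rate point concrete, the bin-to-frequency mapping looks like this (my own illustrative helper, not the article's code):

    // Frequency represented by FFT bin k for a given sample rate and
    // FFT size: f_k = k * fs / N. Downsample by 2 and fs halves, so
    // the same bin index now maps to half the frequency it did before.
    double bin_frequency_hz(int k, double sample_rate_hz, int fft_size) {
        return k * sample_rate_hz / fft_size;
    }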

