
It doesn't really matter if it is a mistake; the point is this sort of thing should not happen. Processes should prevent it.

Oops, we lost your data - but don't worry, the team here all agrees that it was a mistake.


I have a similar take to yours.

Early in my career, I really bought into what the Agile founding fathers promoted (Martin, Uncle Bob, Beck, etc.). I tried bringing it into my code and pushing it on my teams, but it never went well. I tried finding great examples they'd implemented to use as, well, examples - but never found anything. It turns out these gurus of coding almost never release any open source code to be scrutinized, outside of toy examples in their books.

I've realized the reality is that many of their suggestions are actually pretty reasonable, but they take time to implement. Most business software is written under a relative time crunch, and the time required to slow down and properly implement them is excessive and unaffordable.

More specifically, in the case of DDD, most businesses cannot afford to make every developer a domain expert so they can properly refactor to the DDD guidelines. The required think time and one-on-one time with an expert to learn the domain would be far too costly.

Further, it pushes a different type of complexity into the implementation that a non-expert cannot understand, which actually slows down future development when new hires fumble the domain. They end up with a longer ramp-up time and lower productivity, needing more handholding along the way.


I studied a number of books in that field last year, and much of this entire thread is missing some key points.

The learning psychology field often differentiates practice from deliberate practice, where practice is just about anything and deliberate practice is focused learning.

In deliberate practice, you have to work at near your skill level, you need to apply proven learning methods, it should take maximal effort, you need a feedback mechanism to course correct, etc. A great deal of focus goes into creating effective mental models and intentionally removing ineffective mental models. It requires good coaching. As you progress in expertise, your practice should involve more risks and failures. (And there's still much more to be said here)

Gigging and noodling aren't going to make for efficient practice, as they likely won't involve full concentration, feedback, or challenging material at your skill level.

Also, you can spend 10,000 hours jamming alone in your bedroom on the same set of guitar tabs and make shockingly little progress compared to someone who spends just a few hundred hours intensely studying music books with a metronome and tape recorder.


The article doesn't do a great job of explaining that this isn't always just filtering; sometimes it's aggregation too.

A mobile client may need to call 20 different APIs to gather the data points for a single page. Even if every single backend offered options for filtering as efficiently as possible, you may still need an aggregation service to bundle those 20 calls into a single service call (or a small set of them) to save on round-trip time.
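As a minimal sketch of what that aggregation service might do (the service names and the `fetch` stub are made up; a real version would make HTTP calls to internal backends):

```python
# Hedged sketch of a BFF aggregation endpoint: one client request fans out
# to many backend services concurrently, and the results are merged into
# a single response payload.
from concurrent.futures import ThreadPoolExecutor

def fetch(service: str) -> dict:
    # Stand-in for an HTTP call to an internal backend service.
    return {service: f"data from {service}"}

def aggregate(services: list[str]) -> dict:
    # The 20 in-datacenter calls run in parallel, so the client pays
    # roughly one round trip instead of 20.
    merged: dict = {}
    with ThreadPoolExecutor(max_workers=len(services)) as pool:
        for result in pool.map(fetch, services):
            merged.update(result)
    return merged

page_data = aggregate([f"service-{i}" for i in range(20)])
```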


You still have to aggregate somewhere. You can do it on the client or in the frontend backend; either way, it still has to get done. In the case of the latter, we're adding one extra hop before the client gets their data.

This pattern advocates reduced technical performance to accommodate organizational complexity, which I think the parent finds odd. Either the client calls 20 endpoints like service/data?client_type=ios, or the frontend backend makes those 20 calls (after the client calls it).


> In the case of [backend for frontend] we’re adding one extra hop before the client gets their data.

> You either have the client call 20 service/data?client_type=ios or you have the frontend backend call 20 different service/data?client_type=ios

The article touches on this point, and it mirrors what I've seen as well. The time from client -> backend can be significant, for reasons completely outside of your control.

By using this pattern, you have 1 slow hop that's outside of your control followed by 20 hops that are in your control. You could decide to implement caching a certain way, batch API calls efficiently, etc.

You could do that on the frontend as well, but I've found it more complex in practice.

Also a note: I'm not really a BFF advocate or anything, just pointing out the network hops aren't equal. I did a spike on a BFF server implemented with GraphQL and it looked really promising.
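To make the "hops aren't equal" point concrete, here's a back-of-the-envelope comparison. The numbers are illustrative assumptions, not measurements: ~200 ms RTT from a mobile client, ~1 ms RTT inside the data center.

```python
# Rough latency comparison of client-side fan-out vs. one call to a BFF.
MOBILE_RTT_MS = 200      # assumed client <-> data center round trip
DATACENTER_RTT_MS = 1    # assumed service <-> service round trip
N_CALLS = 20

# Worst case: the client calls all 20 backends directly, one after another.
direct_sequential = N_CALLS * MOBILE_RTT_MS                  # 4000 ms

# BFF: one mobile round trip, then 20 parallel in-datacenter calls
# (bounded by the slowest one, ~1 ms here).
via_bff = MOBILE_RTT_MS + DATACENTER_RTT_MS                  # 201 ms
```

Even if the client parallelizes its 20 direct calls, it still pays 20 slow connections over the spotty link instead of one.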


You won't necessarily have to have ?client_type=xyz params on your endpoints if the BFF can do the filtering, so it saves having to build out all sorts of complexity in each backend service to write custom filtering logic. Of course, you'll pay the price in serialization time and data volume to transmit to the BFF, but that's negligible compared to the RTT of a mobile client.

I'd much rather issue 20 requests across a data center with sub-millisecond latency and pooled connections than try to make 20 requests from a spotty mobile network that's prone to all sorts of transmission loss and delays, even with multiplexing.
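The BFF-side filtering mentioned above can be quite simple. A sketch, where the field lists and client types are entirely made up for illustration:

```python
# Hypothetical per-client field filtering in the BFF, so each backend
# service can keep returning its full representation unchanged.
FIELDS = {
    "ios":     {"id", "title", "thumb_url"},
    "desktop": {"id", "title", "thumb_url", "description", "stats"},
}

def filter_for(client_type: str, record: dict) -> dict:
    # Drop everything the requesting client type doesn't display.
    wanted = FIELDS[client_type]
    return {k: v for k, v in record.items() if k in wanted}

full = {"id": 1, "title": "Hello", "thumb_url": "/t/1.png",
        "description": "...", "stats": {"views": 9}}
mobile_view = filter_for("ios", full)
```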


> You still have to aggregate somewhere.

Tbh, I'm not entirely sold on this - although I see this (server-side aggregation across data sources) as the main idea behind GraphQL. So it seems like it belongs in your GraphQL proxy (which can proxy GraphQL, REST, and SOAP endpoints - maybe even databases).

But for the "somewhere" part - consider that your servers might be on a 10 Gbps interconnect (and a 1-10 Gbps link to external servers), while your client might be on a 10 Mbps link over a much larger distance (higher latency).

Aggregating on client could be much slower because of the round-trip being much slower.

In addition, you might be able to do some server-side caching of queries that are popular across clients.


I agree with your assessment here, but one additional benefit is the capability to iterate faster on the backend. You have control over _where_ the aggregated data is coming from without waiting months for users to update their mobile app so that it sends requests to a new service, for example.


My company uses this pattern extensively, just as described in the post. Frontend teams deliver their own backend-for-frontend, and the backend teams just worry about their own microservices. It works out pretty well most of the time.

The big issue I've been seeing is that occasionally frontend teams will decide to develop "features" by stringing together a massive series of calls to the backend to implement logic that a single backend could do much more efficiently. For example, they'll commonly have their backend-for-frontend query large lists of data, even walking through multiple pages, in order to compute a summary that a backend service could produce with one SQL query. That puts unnecessary load on the backend service and on the DB, transmitting all that data needlessly to the BFF.
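The anti-pattern and the fix look something like this, using an in-memory SQLite table as a stand-in for a backend service's database (table and column names are hypothetical):

```python
# Contrast: the BFF paging through every row to build a summary vs. a
# backend endpoint running one aggregate query close to the data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, 10.0) for i in range(1000)])

# Anti-pattern: ship all 1000 rows to the BFF, 100 at a time, just to sum them.
PAGE = 100
total_paged, offset = 0.0, 0
while True:
    rows = conn.execute(
        "SELECT amount FROM orders LIMIT ? OFFSET ?", (PAGE, offset)
    ).fetchall()
    if not rows:
        break
    total_paged += sum(a for (a,) in rows)
    offset += PAGE

# What a backend service endpoint could do instead: one row over the wire.
(total_sql,) = conn.execute("SELECT SUM(amount) FROM orders").fetchone()
```

Same answer either way, but the first version moves every row across the network and hammers the DB with paged reads.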

I know the easy answer is to blame the frontend devs, but this pattern seems to almost encourage this sort of thing. Frontends try to skip involving the backend teams due to time constraints or just plain naivety, and suddenly the backend team wakes up one morning to find a massive increase in load on their systems triggering alerts, and the frontend team believes it's not their fault. This just feels like an innate risk of promoting a frontend team to own an individual service living in the data center.


Many of these backend systems will be off-the-shelf products, e.g., SAP, Salesforce, Marketo, and lots of other systems people use. Many of these systems don't have APIs that work well for interactive frontends. And you have many different frontends: for your sales people, your project engineers, your marketing department, your different suppliers and partners. These different apps have different data and API needs, and will need to aggregate data from different systems - systems that cannot be modified to execute the most optimal SQL query for your frontend. And it's too much business logic to put in an API gateway. And all these systems have their own lifecycles, so different project timing, different releases, etc. A BFF makes a lot of sense to make all of this manageable.


this is what happens when you give frontend teams tools with too much power in an environment where the frontend and backend teams don't communicate well with each other.

i have seen this first hand with a customer. to their credit, they didn't even have a backend team, until i was brought on board to help them rewrite some of their slower frontend functionality into some more efficient backend functions


The optimist in me hopes this really is a quality of life improvement for the cattle. The pessimist in me fears this is a way to begin patenting livestock.


Being out in the sun itself would be a huge step up in quality of life for cattle. I doubt free-range cows represent the majority.


Either way, 1/2" long hair probably isn't a big overheating factor


It's a fun concept, and maybe it will be useful in some weird edge case of a lawsuit, but no. Most recent music infringement lawsuits seem to argue that some combination of the sound design, groove, rhythms, chord progressions, melody or reduced melody, structure, and lyrics winds up giving a song the same "feel" as a prior song, and that's the basis of the copyright infringement. Then pseudoscientific experts come in and pick and choose common musical elements that the two songs share to attempt to justify the claim, oftentimes wrongfully taking credit for inventing genre-wide defining musical elements. Adam Neely did a good job touching on this in his recent analysis of the Dua Lipa Levitating lawsuit [1].

An AI-generated song machine would have to nail a lot more elements than just the melody notes to properly stop music copyright cases. In my view, a more interesting project that might be more effective at defusing lawsuits would be to catalog all of the musical tropes that define genres, then detect how common they are within each genre. In an ideal world, this could drive a metric of how similar two specific songs are vs. any two songs picked at random from the same genre.

[1] https://www.youtube.com/watch?v=HnA1QmZvSNs


>Adam Neely did a good job touching on this in his recent analysis of the Dua Lipa Levitating lawsuit [1]

He most certainly did not. Of all the different takes out there, his is very weak.

>Most recent music infringement lawsuits seem to argue that some combination of...

There is a very good reason: as he mentions, the diversity of chord use in pop songwriting is typically so poor that, based on that alone, the number of things considered plagiarism would be ridiculous. If the similarities span almost all dimensions (style, arrangement, rhythm, melody, ...) to the point of being "essentially the same", then it's exactly what people would want the law to exist for.


> He most certainly did not. Of all the different takes out there, his is very weak.

I think Adam Neely did a good job explaining what infringement lawsuits mean in the context of popular music production. Whether or not you agree with the strength of his case on this particular lawsuit, well that's not quite the point I was trying to make here. Still, what do you consider to be a strong take on this case?


It's unfair to point out weakness without an argument. If you could elaborate on your point, that would be enlightening.


Most superstar pop singers have fantastic singing voices and great pitch control. Autotune shows up because of some mix of 1) the modern pop aesthetic demands superhuman tuning, 2) some degree of autotune artifacts are expected as part of the modern sound, and 3) it can intentionally be used as an effect (T-Pain).

To give some more detail about both 1 and 2 -

Pitch control is more than just hitting the note; it's about how well you can onset at the right pitch, how well you can hold the pitch once hit, how well you can jump each interval and land on the right pitch, how well you can keep pitch through different articulations, different vocal ranges, etc. The modern pop sound has accepted that superhuman levels of pitch control, locking the vocal into tune with the perfectly tuned synthesizers/samplers, are more important than a natural sound.

Also, since we've been using autotune for so long, it has almost become natural. We expect to hear it to some degree on every track, especially in more difficult vocal areas. If it weren't present, one might feel the song sounds "indie" or, worse, dated.

Lastly, one thing that fascinates me about the autotune complaints is that it's just one stage of a very long vocal processing chain. To my ears, the tweaks provided by dynamics processors are much more dramatic than autotune when applied to a reasonably proficient singer. Autotune is just one step of a processing chain that can easily run through 10+ processors to end up at the right sound.


The best definition I've seen for the success of a piece of music is this: "What emotion is the artist trying to convey and how well does it convey it?"

Throughout the composing, arranging, recording, mixing, and mastering process, there are thousands of choices to be made, and the correctness of each choice is entirely linked back to that goal: Does the choice help to convey the emotion, or does it detract from it?

To that end, there is no correct choice, no correct or optimal harmony, no correct note, no correct rhythm, no correct timbre. It's all contextual in relation to conveying the desired emotion.

I'm really not sure how you could ever train a NN to make choices in that regard without first teaching it to understand the impact of its choices on the emotions conveyed.

At best, you may be able to train a NN to reproduce emotionally-void works in a particular style, and perhaps assign some emotion through the timbres selected (ambient music comes to mind here). Still, this isn't much of an achievement. You could easily codify the rules taught in Music 101 about harmonization and melody composition to a computer and have it spit out bland but pleasant excerpts, no deep learning required.
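To illustrate the "no deep learning required" point: the sketch below hard-codes a simplified version of Music 101 chord-movement rules and spits out a valid-but-bland diatonic progression in C major. The specific movement table is my own simplification, not any standard reference.

```python
# Toy rule-based "composer": a seeded random walk over common-practice
# chord-movement rules in C major. No learning involved.
import random

C_MAJOR = ["C", "Dm", "Em", "F", "G", "Am", "Bdim"]
# Simplified assumption: which diatonic chords commonly follow which.
NEXT = {
    "C": ["F", "G", "Am", "Dm", "Em"],
    "Dm": ["G", "Bdim"],
    "Em": ["Am", "F"],
    "F": ["G", "C", "Dm"],
    "G": ["C", "Am"],
    "Am": ["Dm", "F"],
    "Bdim": ["C"],
}

def progression(length: int = 4, seed: int = 0) -> list[str]:
    rng = random.Random(seed)            # seeded for reproducibility
    chords = ["C"]                       # start on the tonic
    while len(chords) < length - 1:
        chords.append(rng.choice(NEXT[chords[-1]]))
    chords.append("C")                   # force a cadence back home
    return chords

print(progression(8))
```

Pleasant, harmonically "correct", and utterly emotionally void, which is rather the point.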


As someone with a fair amount of experience in both domains, I'd agree with this.

I see a lot more utility in sculpting sound and helping streamline the production side of things than the uses of NNs on the composition side.

There already are tools for automatically composing melodies and harmonies, and some of them are actually quite good, but they're hardly used.

