
5-hour mission? Surely something so awesome can't actually exist...


Back in my day, we used to do 17-hour overnight missions, where the crew would have to sleep in the ships, since the simulation would continue through the night.

We once did an away mission where the crew was stranded on a remote planet and running out of water. We split the crew: one half was going to find a viable water source, and the other needed to track down a local scientist who had the parts we needed to repair the ship.

After dressing them up in our weirdest costumes, we trekked them outside. The water group was sent to a local convenience store with a couple of dollar bills and told to "act natural" as they picked up some bottled water. I was happy to find that they absolutely did not act natural.

The other group went to a local staff member's house, which had been decorated to look like the scientist's laboratory.

Needless to say, the whole experience was quite memorable, and highlights how the computer controls are only part of what makes the simulation fun.


I agree with you. Remote optimizes for one set of things, using a model that works great in a low-trust environment like open source, and you can probably get pretty good results with it. I also find Zoom calls far more fatiguing than in-person meetings.


Why wouldn't it?

We're talking about a mortgage on the rental property itself, not on the landlord's personal residence.


> We're talking about a mortgage on the rental property itself, not on the landlord's personal residence.

Yes. Everyone in this conversation understands that. There's no relation between the amount and terms on your mortgage for your rental property and the market price to rent a given unit.

If I take a 20-year mortgage, does that mean I should charge more rent than if I took 30 years, because my payment is higher? Should I drop rent by half when I'm done paying my mortgage, since I no longer have a payment? It's nonsensical.

There's especially no reason to expect that rent should cover 100% of the principal fraction of your mortgage payment. Taxes, maintenance, interest, etc. are pure costs which are understandably passed on to tenants. The principal fraction of a mortgage payment is not a cost, it is building your equity.

If rent is covering all costs and "only" 50% of the principal fraction of your mortgage payment, you're still making a profit, because you're paying, let's say, $1000 of the mortgage payment with your own money but getting a $2000 increase in equity.
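
To put rough numbers on it (the figures below are made up, purely to illustrate the arithmetic):

    # Made-up monthly figures, only to illustrate the arithmetic above.
    principal_portion = 2000  # part of the payment that builds equity
    pure_costs = 1500         # taxes, insurance, maintenance, interest

    # Rent covers all pure costs plus "only" 50% of the principal portion.
    rent = pure_costs + principal_portion // 2              # 2500

    out_of_pocket = pure_costs + principal_portion - rent   # 1000 of own money
    equity_gain = principal_portion                         # 2000 of new equity

    print(out_of_pocket, equity_gain)  # pay 1000 cash, gain 2000 in equity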


Indeed - it is incredible that so many people just assume rent should basically buy another person a house by default.


If the increase in equity alone should be satisfactory, that's an argument for just letting everyone live in the place for free. I mean, you're coming out ahead.

I know other landlords with multiple units. If they don't think something will be an income producing property, they won't buy.


My point is that you charge market price. Because that's how markets work.

Price-to-rent ratios vary widely between markets. If you don't want to buy in a market with a high price-to-rent ratio, that's fine. There's simply no rule that guarantees a price-to-rent ratio such that rent covers all costs plus the full payment on a 30-year mortgage in all markets at all times.


Because that's how it normally works? Why would any renter pay that much? If they have that much disposable income each month, they could just buy instead of renting.


You aren't aware that plenty of people rent houses?

By renting instead of buying, you don't incur any of the costs or headaches associated with home ownership or repair. There are also plenty of people who are only living in one place for a limited period of time, and plenty of corporations will rent out homes in decent locations if their employees need to be there for extended projects.


I am aware; I was a renter until a couple of years ago. I have rented houses in the East Bay (Alameda, CA) and Seattle, and in each case, my rent was 70%-80% of what the monthly payment on a 30 year mortgage on the same house would have been. The same was true of apartments I rented in Manhattan.

Maybe it doesn't work that way in other markets, but you'd have to be nuts as a renter to pay enough to cover your landlord's mortgage.


Let me assure you then, that in most markets, you will pay far more than the cost of a 30 year mortgage plus taxes and insurance for renting a house. You're subsidizing the cost of vacancies and also paying for some of the maintenance.

A 3 bedroom, 2 bath home in a decent but inexpensive part of the DFW area where I live will run you $2K per month. These prices will skyrocket as rental prices catch up to the mortgage market. But just for comparison, 2 years ago that same house would have been on the market for about $220K and would have had a monthly mortgage payment of about $1200 on a 15-year loan or $800 or so on a 30-year loan.
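
As a rough sanity check on those figures, here's the standard amortization formula; the interest rates and the 20% down payment below are my assumptions, not numbers from the comment:

    # Standard mortgage payment formula: M = P * r(1+r)^n / ((1+r)^n - 1)
    def monthly_payment(principal, annual_rate, years):
        r = annual_rate / 12  # monthly interest rate
        n = years * 12        # total number of payments
        return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

    loan = 220_000 * 0.8  # $220K price with an assumed 20% down payment

    print(round(monthly_payment(loan, 0.035, 30)))  # ~790, near the $800 figure
    print(round(monthly_payment(loan, 0.030, 15)))  # ~1215, near the $1200 figure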


In my neighborhood now, people rent for considerably more than they would pay per month if they bought here. They want to live here but nothing is for sale. It’s large single family homes. Renters tend to be people who have frequent job transfers and people who are having custom homes built nearby.

Also I’ve never rented anything for less than what I could own something comparable. Maybe you can in some markets. But generally I expect to pay more monthly. It’s a trade off for avoiding everything that comes with home ownership. Down payments, maintenance, and the overall commitment. It’s hard to just pick up and move when you have to go through the process of selling, making contingent offers on another place, multiple closings, all that.


What markets have you rented in?

I have been a renter in NYC, Seattle, and the Bay Area; every single time, I looked up how much the property last sold for and calculated how much a mortgage on it would cost me per month. My rent was normally 70%-80% of the expected payment on a 30 year mortgage.

I was never paying below-market-rate rent, either! Maybe it doesn't work out this way if you're renting in an area with lower average property values?


I do, yes, and I ask my team to do the same. Even in software development it's very helpful. It helps the team stay in sync, reducing the need to meet. It also helps me stay in sync from yesterday-me to today-me, and it avoids duplicated work. I consider doing this part of professionalism 101 at this point.


I don't know if linking to one's own work is considered gauche here, but since it fits with the original question, my own book Practical Microservices fits this description (https://pragprog.com/book/egmicro/practical-microservices). It takes the reader from the inception of a project to a functioning system, explaining the basics of microservices, event sourcing, and CQRS along the way. Each chapter builds on the previous ones.


There are some similarities, and there are definitely worse technologies you could choose for a message store than Kafka. It's worth calling out the difference between event-sourced and event-based. The former is necessarily the latter, but the reverse doesn't hold.

Event-based just means that communication happens over events. Event-sourced means that the authoritative state of the system is sourced from events. If the events are literally the state, then how those events are retrieved begins to matter.
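
A minimal sketch of that distinction (the account example is invented, not anything from Message DB): in an event-sourced system there is no separately maintained "current state"; state is derived on demand by folding the entity's events.

    # Minimal event-sourcing sketch: the stream of events *is* the
    # authoritative state; a projection folds the events into whatever
    # current-state view a consumer needs.
    events = [
        {"type": "Deposited", "data": {"amount": 100}},
        {"type": "Withdrawn", "data": {"amount": 30}},
        {"type": "Deposited", "data": {"amount": 25}},
    ]

    def project(events):
        balance = 0
        for event in events:
            if event["type"] == "Deposited":
                balance += event["data"]["amount"]
            elif event["type"] == "Withdrawn":
                balance -= event["data"]["amount"]
        return balance

    print(project(events))  # 95 -- derived on demand, never stored directly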

Kafka breaks down as a message store in 2 key ways that I mentioned elsewhere in all these threads.

> The first is that one generally has a separate stream for each entity in an event-sourced system. Streams are sort of like topics in Kafka, but it would be quite challenging to, say, make a topic per user in Kafka. The second is Kafka's lack of optimistic concurrency support (see https://issues.apache.org/jira/browse/KAFKA-2260). The decision to not support expected offsets makes perfect sense for what Kafka is, but it does make it unsuitable for event sourcing.

If my only tool were Kafka, then I wouldn't be able to use the messages in the same way that I can with something like Message DB. And that's okay, different tools for different jobs.


In addition to what the sibling comments to this one said, Message DB solves a different problem than what queues solve. Message DB is a good fit in a microservice-based architecture, and the "micro" in "microservice" comes from the concept of Dumb Pipes / Smart Endpoints (https://martinfowler.com/articles/microservices.html#SmartEn...). Message DB is a "dumb pipe," whereas queues and brokers fit into the "smart pipe" side of the divide.

Message DB is a message store, a database optimized for storing message data. The database doesn't track the status of anything; that responsibility falls to consumers of the data.

Message queues are fine pieces of technology when what you need is a message queue. Message DB, on the other hand, is used for event-sourced systems. "Event-sourced" in turn differs from merely "event-based." Since I started building event-sourced systems, I haven't run across the need for message queues, but ymmv, and of course, like all humans, I too have my own hammer/nail biases.


> Message DB is a good fit in a microservice-based architecture

I don't know about that; I have seen many microservices that process data that comes in from messages on message queues. Frequently that is a better design than receiving the data over a synchronous HTTP PUT or POST.

By "message queues" I mean AWS SNS/SQS and Azure event hubs. Would a SQL Db be "a good fit in a microservice-based architecture" replacement there?

or are you suggesting that such a microservice's first action on receiving a message from a queue would be to store it in a local, private Message DB? That might make sense; I've seen enough tables that store a JSON blob already.


100% with you that messages are a better way for microservices to receive their input. I don't see how a service could receive its input over HTTP and still retain its autonomy.

A message store like Message DB can serve at the same time as a record of system state and a communication channel. Writing a message to the store is the same as publishing it for consumers to pick up. Messages are written to streams, and interested parties subscribe to those streams by polling them for new messages.

The store itself isn't aware of which components are subscribing, nor does it manage their read positions.

So, to your question: the first action of a component receiving a message is to do whatever it does to handle the message and then write new messages (specifically events) to record and signal what happened in response to the original message.
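
A rough sketch of that flow (the handler shape and the write_message signature here are my stand-ins, not Message DB's exact interface):

    import uuid

    # Hypothetical handler: process an incoming command message, then record
    # the outcome as an event. Writing the event to the entity's stream both
    # records what happened and "publishes" it to anyone polling that stream.
    def handle_deposit(command, write_message):
        account_id = command["data"]["account_id"]
        event = {
            "id": str(uuid.uuid4()),
            "type": "Deposited",
            "data": command["data"],
            "metadata": {"causation_id": command["id"]},  # trace the trigger
        }
        write_message(stream_name=f"account-{account_id}", message=event)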


Take a look at that 2nd link in the grandparent post. I remember it as one that was often cited when this debate first went around the Internet half a decade or so ago.

The almost-but-not-quite unstated major premise of the whole argument is that you're putting a key piece of smarts into the queue: Tracking whether a message has been processed.

One could argue that that is the real antipattern. It'll hurt you big time if you're storing messages in a database table. But it'll hurt you even if you don't. For example, by tracking that kind of information inside of the queue, you're losing the ability to add a second listener without either affecting what messages are being seen by (and therefore the behavior of) the existing listener(s), or modifying the queue itself. Which you don't want to do any more than necessary on a critical piece of shared infrastructure like that.

It might be fine if it's an oldschool monolith that's just processing some sort of work queue on multiple threads. But, if you're doing microservices, you're probably trying to keep things more flexible than that, and want to allow them to evolve more independently.


> The almost-but-not-quite unstated major premise of the whole argument is that you're putting a key piece of smarts into the queue: Tracking whether a message has been processed. One could argue that that is the real antipattern.

Indeed.

Lots of good writing on the tradeoffs, for example: https://sookocheff.com/post/messaging/dissecting-sqs-fifo-qu...

At the risk of beating the point to death, the transition from SOA to Microservices is marked by the transition of smart pipes to dumb pipes, and dumb pipes have no knowledge of whether a message has been processed. In the microservices era, the basic presumption is that the transport has no knowledge of the state of the message processors' progress through a queue.


> the transition from SOA to Microservices is marked by the transition of smart pipes to dumb pipes, and dumb pipes have no knowledge of whether a message has been processed.

That doesn't seem to be characteristic of Microservices at all. It's more about small bounded contexts and independent deployment of the service. All of which is completely possible over the "smart pipes" that the message queues I mentioned will give you.

Insisting that this wheel must be re-invented or worked around to make it a microservice seems very odd.


> That doesn't seem to be characteristic of Microservices at all.

Many have their own "unique" definition, which is why "microservices" is bordering on no longer meaning anything. Some people say it and mean SOA—the characteristics you mention are pretty much characteristics of SOA when done right, though independent deployment isn't a necessity.

Martin Fowler attempted to codify a definition, and this is in line with the definition that sbellware is referring to. He (and others) refer specifically to smart endpoints and dumb pipes:

https://martinfowler.com/articles/microservices.html#SmartEn...


Indeed. And having been around in both the SOA period and the Microservices period, and having witnessed the transition, I'm comfortable maintaining the assertion that "Microservices" as a term was specifically introduced to demarcate the boundary between the period characterized by big-vendor smart message transports and the period characterized by the dumb transports we settled on, reflecting all that we'd learned by trying to rely on messaging "magic".

It's a very similar transition to the shift from EJB to Hibernate (or EJB to Rails) or from SOAP to REST.

In the meantime, a lot of folks who don't have that background and perspective picked up on Microservices and made a lot of presumptions based on a lot of experience with web apps and web APIs. It's this that made "Microservices" a largely meaningless, muddied term. This is ironic, because "Microservices" was introduced as a means to disambiguate the many competing meanings of "Service-Oriented Architecture" that had come into existence, as many vendors wanted to be seen as being involved with it without actually having been involved with it.

So, there are two meanings of "Microservices": one that comes from a background of service architectures and one that comes from a background of web development. My background is in both of them, but I hew to the meaning of "Microservices" which is closer to "SOA without the big-vendor smart pipes of the mid-to-late 2000s" than "the same old HTTP APIs we see the world through as web developers".

That's largely a lost battle now, just as SOA was lost to the competing interests of message tech vendors.

The term "Autonomous Services" is a much more precise and unambiguous in its intent, and conveys more specifically what's intended by "SOA done right", a.k.a.: "Microservices". And even more specifically, "Evented Autonomous Services" does an even better job of conveying the intent and the implications or the architecture than what "Microservices" can do now.

Or as Adrian Cockcroft originally put it: "Loosely-coupled service oriented architecture with bounded contexts".

Knowing what the implications of all those words are, it's quite impossible to look at what is commonly asserted about the meaning of "Microservices" in 2019 and see much left of the great value of what was originally intended. Much of it still remains unlearned.


> you're putting a key piece of smarts into the queue: Tracking whether a message has been processed.

yes, AWS SQS does that. e.g. https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQS...

Are you saying that this is bad? It doesn't seem so.

> you're losing the ability to add a second listener without either affecting what messages are being seen by (and therefore the behavior of) the existing listener(s). But, if you're doing microservices, you're probably trying to keep things more flexible than that, and want to allow them to evolve more independently.

I'm not following. Both AWS SNS/SQS and Azure Event Hubs were able to handle that scenario just fine, by design.

In AWS you attach multiple queues to the same SNS topic and have a pool of subscribers on each queue; in Azure you declare a "consumer group" and a pool of subscribers in it. Or declare a new consumer group if need be.

We regularly scaled up in AWS from 3 listeners to 30 depending on load, and back down again, or replaced instances one by one, and still it guaranteed at-least-once delivery (in practice, exactly once in the vast majority of cases) across the subscribers. We _never_ "lost the ability to add another listener", either scaling up in the same pool or creating a new pool.
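
For reference, a sketch of that fan-out wiring with boto3 (topic and queue names invented; a real setup also needs a queue policy allowing the SNS topic to deliver to each queue):

    import boto3

    sns = boto3.client("sns")
    sqs = boto3.client("sqs")

    # One topic fanned out to multiple queues: every queue sees every
    # message, while workers within a queue's pool compete for messages.
    topic_arn = sns.create_topic(Name="orders")["TopicArn"]

    for name in ("billing", "shipping"):
        queue_url = sqs.create_queue(QueueName=name)["QueueUrl"]
        queue_arn = sqs.get_queue_attributes(
            QueueUrl=queue_url, AttributeNames=["QueueArn"]
        )["Attributes"]["QueueArn"]
        sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)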


> Are you saying that this is bad? It doesn't seem so.

It's not "bad" per se, but it's not what it seems on the surface.

ACKing a message doesn't mean that the message will not be received more than one time. As long as the smarts for recognizing recycled messages are embedded in the application logic, all will be fine.

Here's a good explanation written about SQS, but it applies to all message technologies that work based on ACKs: https://sookocheff.com/post/messaging/dissecting-sqs-fifo-qu...

So, while it's possible to add more consumers, depending on the implementation of the technology's internals for tracking consumer state, you may lose messages or, more typically, receive messages more than once or receive them out of order.

This often happens without the developers' and operators' knowledge. Without foreknowledge of the causes and effects of these things, developers and operators may know that there's some kind of strange intermittent glitch, but won't presume that it's because message processors are processing messages that had already previously been processed (or are not processing messages that had been skipped).

What you can't do is add a new consumer that is interested in processing historical messages beyond the retention window of typical cloud-hosted message transports.

But that's ok because message queues are typically just transports, not message stores, and they serve the needs of moving messages from one place to another. A message store can do that as well, but has different semantics and serves other needs.

So, as long as ACK-based message processors work within the retention window, everything's good. It's when that's not the case that the problems arise. Lost messages and recycled messages are pretty much the only guarantee over the lifetime of a messaging app. As long as that's within tolerances, then it's fine.

Specifically, like all "dumb pipe" technology that came about in the post-SOA age, a message store doesn't use a protocol based on ACKs. Instead, a consumer tracks its own state and does not defer this responsibility to the transport.

And inevitably, if any consumer of any technology wants to guarantee that it never processes a recycled message, this tracking logic also has to be implemented in the application logic of even SQS, RabbitMQ, etc. consumers.

There's no such thing as a messaging technology that can guarantee only-once delivery, as you alluded to. There's a good examination of this aspect of messaging and distributed systems here: https://bravenewgeek.com/you-cannot-have-exactly-once-delive...

The best guarantee from message transports that we can hope for amounts to a "maybe". And that's ok. The conditions and countermeasures are well-known. As long as we're not blind-sided and haven't taken "only once" literally, things will be fine.

If we don't realize that application logic is always responsible for making the ultimate decision as to whether to reject a recycled message, we're going to have problems that can be difficult-to-impossible to detect and correct.
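
A bare-bones sketch of that application-level responsibility (the in-memory set is a stand-in for a durable store, and a real system would make the check atomic with the side effects):

    # The handler, not the transport, decides whether a delivery is a
    # recycled duplicate. `processed` stands in for a durable store of
    # already-handled message IDs.
    processed = set()

    def apply_side_effects(message):
        print("handling", message["id"])  # hypothetical business logic

    def handle(message):
        if message["id"] in processed:
            return  # recycled redelivery; reject it
        apply_side_effects(message)
        processed.add(message["id"])  # ideally atomic with the side effects

    handle({"id": "m-1"})
    handle({"id": "m-1"})  # second delivery of the same message is a no-op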

Message stores and event stores aren't alternatives to message queues. They're technologies that support architectures that are themselves alternatives to each other.


> Here's a good explanation written about SQS, but it applies to all message technologies that work based on ACKs

Deals with FIFO and ordering. You also mention "historical messages".

Sure, if those are things that you need, then you almost certainly want a different tech than SQS, likely Kafka. I won't disagree with that, except to object to making it the focus of the criticism of SQS and similar message queues. SQS isn't meant to be that thing.


Message DB happens to be implemented using an RDBMS, but the streams in it end up being very clear partition points. Some thought would be required to move data to a different database, but an event-sourced model isn't the same as coupling through a traditional RDBMS schema.

Edit: Fixed a typo


The technologies you listed are message brokers, while Message DB is a message store. The former transport messages, and the latter is a database specialized for storing message data and sourcing system state from those messages. Kafka can, for example, move a lot of events around, but it isn't suitable for event sourcing for 2 key reasons.

The first is that one generally has a separate stream for each entity in an event-sourced system. Streams are sort of like topics in Kafka, but it would be quite challenging to, say, make a topic per user in Kafka. The second is Kafka's lack of optimistic concurrency support (see https://issues.apache.org/jira/browse/KAFKA-2260). The decision to not support expected offsets makes perfect sense for what Kafka is, but it does make it unsuitable for event sourcing.
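
To make the second point concrete, here's a sketch of an expected-version write. The call shape follows Message DB's write_message stored function, though treat the details here as an approximation:

    import json
    import uuid

    # Optimistic concurrency: the write fails if another writer has advanced
    # the stream past expected_version, so two concurrent writers can't both
    # act on the same stale view of the entity.
    def write_with_expected_version(cursor, stream, msg_type, data,
                                    expected_version):
        cursor.execute(
            "SELECT write_message(%s, %s, %s, %s, NULL, %s);",
            (str(uuid.uuid4()), stream, msg_type, json.dumps(data),
             expected_version),
        )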

Being built on top of Postgres, Message DB gives you access to event sourcing semantics using a familiar database technology.

We use Message DB in our production systems, and I'd be happy to talk more about it if you have other questions. We've found it very reliable.

As a disclaimer, I'm listed as a contributor at the Eventide Project, the project Message DB was extracted from, though I did not write any of the code behind Message DB.


I think the biggest difference is that it’s a Message Store, as highlighted by the preceding comment.

Some clarity by example: in financial systems, it’s extremely important to keep track of all the transactions between your microservices (if your architecture is based on that). You could potentially lose a message delivered to you via a broker if your service fails to write it to persistent storage. If the producer of that message never stored it, or produced it on the wire and transmitted it, there is no way to recover it anymore. A system designed around a Message Store can mitigate such problems. You can build a similar architecture with brokers as well, but for each of your applications you will have to implement something analogous to a Message Store to handle idempotency and things like that.


Incorrect. Kafka is a message _store_. The server can be seen as a distributed, persistent, append-only log. All broker logic is encoded in the client. Source: maintainer of one of the client implementations.


Is this using NOTIFY/LISTEN to stream messages or some other way to get new messages as they arrive?


Good question. Consumers poll for updates. One of the stored functions in Message DB is designed for this very query.

Polling sounds very crude, but for the systems Message DB is designed for, it's a virtue. No back pressure problems, for example.
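
A sketch of such a consumer loop (fetch_batch and handle are injected stand-ins; the get_category_messages call named in the comment reflects Message DB's documented function, but treat the argument shape as an assumption):

    import time

    # The consumer persists and owns its read position; the store just
    # answers queries, e.g.:
    #   SELECT * FROM get_category_messages('account', position + 1, 100);
    def poll_category(fetch_batch, handle, start=0, idle_delay=0.1):
        position = start
        while True:
            messages = fetch_batch(position + 1, 100)  # (position, batch size)
            for message in messages:
                handle(message)
                position = message["global_position"]
            if not messages:
                time.sleep(idle_delay)  # idle back-off: the consumer sets its
                                        # own pace, hence no back pressure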


Imagine the beauty of it! Before making an in-app purchase, you'd get to fill out as much paperwork as you do for a mortgage. Disclosures, acknowledgements.

We could make Candy Crush as exciting as opening a bank account!

