Man, I love the idea of Scuttlebutt but I hate the developer UX. I'm writing some apps that I wanted to put on SSB but have all but given up on the idea. Something about SSB, as a dev, leaves me with a lot of questions and no idea where to even get answers from.
So I'll write my app outside of SSB, hopefully in a way that's mostly compatible, and possibly with future integration.
I may also toy with an SSB-like protocol myself, as the fundamentals of SSB are a work of art imo. I really enjoy what Gossip brings to the table, and how SSB focuses on human-to-human relationships to make P2P work.
> On the other hand I tried introducing Rust for a small part of a larger ecosystem and the cold compile times were so bad we rewrote the functionality in C. It shaved minutes off our CI build times, which costs actual money.
Yea, we have this issue (as a shop now using Rust for all of our backend).
I have a couple of hacks in place to cache the majority of the build, thankfully, so only our own source code needs to compile unless a dependency changes. When our build cache works our builds take ~60s. When it doesn't, ~15m.
Oh yea, I wasn't picking on Rust. If anything I tend to defend Rust haha.
As far as what we did to cache, nothing fancy - using Docker build layers. I add my Cargo files (lock/toml), include a stub source lib.rs or main.rs to make it build with a fake source, and then build the project.
This builds all the dependencies. It also builds a fake binary/lib for your project, so you need to strip that from the target directory. Something like `rm -rf target/your?project?name*` (I use ? to wildcard _ and -)
If you do that in one layer, your dependencies will be cached with that docker image. In the next layer you can add your source like normal, compile it, and you'll be set.
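Roughly, the Dockerfile ends up looking something like the sketch below. It's just an illustration of the layering trick: the crate name `myapp` and the base image tag are placeholders, and it assumes a plain single-crate project (no workspace paths in Cargo.toml):

```dockerfile
FROM rust:1.70
WORKDIR /app

# Layer 1: copy only the manifests plus a stub main, then build.
# This layer stays cached until Cargo.toml/Cargo.lock change.
COPY Cargo.toml Cargo.lock ./
RUN mkdir src && echo "fn main() {}" > src/main.rs
RUN cargo build --release
# Strip the stub's artifacts so the real binary gets rebuilt later.
RUN rm -rf target/release/myapp* target/release/deps/myapp*

# Layer 2: add the real source and build. Only this layer re-runs
# when your code changes; the dependency layer above is reused.
COPY src ./src
RUN cargo build --release
```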
We lose our cache frequently though because we're not taking special care to centralize or persist the layer cache. We should, for sanity.
Nope, but that shouldn't be required I'd think? It's not locally, for example - as long as each layer's resulting hash has not changed, the cache is reused. Likewise on CI: if I build repeatedly, the cache never misses and it works great. It's more a problem of sometimes, after a day or a week, the layers seem to be gone. But never in between pushes to a PR, for example - those always seem to stay.
I think what's happening is our garbage collection on old Docker layers is being too aggressive. But because it works so well between commits, pushes to CI, etc., I don't worry about it. The majority of the time I want the cache to work, it works. So it's been a low-priority thing for me to fix haha.
edit: Oh, and I forgot, we may have CI jobs running on different machines. Which of course would also miss caches, since we're not persisting the layers on our registry. I'm not positive on this one though, since like I said it never seems to fail between commit pushes _(say to a PR during review, dev, etc)_. /shrug
Indeed, but unless you are doing some crazy meta-programming, you can keep reusing the same binary libraries, as inside the same company most projects tend to have similar build flags anyway.
Alternatively it is also quite common to use dynamic libraries, or stuff like COM, XPC.
C++ build times only get high if you start messing with the type system and asking for it all to be recompiled again and again.
Rust is slower overall, which is why people tend to complain. And if you start messing around like in C++, then you get even crazier times.
But in neither case is it a dealbreaker compared to other languages. Go proponents claim compilation speed is everything, which is suspicious. I do not need to run my code immediately if I am programming in C++ or Rust. And if I am really doing something that requires interactivity, I should be doing it another way, not recompiling every time...
I think it's fairly naive to say build times get long "only if you mess with the type system", whatever that means. It's pretty easy to tank your compile/edit/debug cycle just by adding things to or removing things from a header file. Maybe modules will improve things.
I've worked in C++ code bases of just a few hundred thousand lines where one starts architecting the software to avoid long compile times. Think about how insane that is: you choose to write and structure code differently as punishment for the sin of writing new code, not to improve the software's performance or add features.
The worst example of this is the pimpl pattern. You make the explicit choice to trade runtime cost for compile times, hiding everything behind a pointer dereference that is almost guaranteed to be opaque to the compiler, even after LTO, so the only "inlining" you may see is from the branch predictor on a user's machine. That's bonkers!
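For anyone who hasn't run into it, a minimal pimpl sketch looks something like this (class and member names are made up for illustration): the implementation details live only in the .cpp file, so editing them doesn't force a recompile of everything that includes the header, at the cost of the extra indirection described above.

```cpp
// widget.h - only a forward declaration leaks out, so edits to Impl
// don't trigger recompiles of files that include this header.
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();
    void draw();
private:
    struct Impl;                  // defined in widget.cpp
    std::unique_ptr<Impl> pimpl;  // every call pays a pointer dereference
};

// widget.cpp - the real members and logic, hidden from the header.
struct Widget::Impl {
    int frames_drawn = 0;
    void draw() { ++frames_drawn; /* ... actual drawing ... */ }
};

Widget::Widget() : pimpl(std::make_unique<Impl>()) {}
Widget::~Widget() = default;  // must live here, where Impl is complete
void Widget::draw() { pimpl->draw(); }
```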
Of course compile times increase with code size, that is not what people are talking about when they say "X language compiles slowly". They are talking about time per code size.
Messing with the type system is using it for things you really should not in any reasonable project. For instance, some of the Boost libs with their overgeneralizations that 99% of users do not need.
Run the code (with some debug logging statements, very likely), find a small mistake, make a few characters' worth of corrections, and press your IDE's keyboard shortcut to recompile and re-run.
If the last step is slow you can lose momentum and motivation to iterate quickly on the problem at hand.
Sure it doesn't apply to a lot of projects, that's true. But it's not charitable to claim it only applies when learning.
It depends on the problem domain. When working on, say, a desktop app with graphical UI features, it can be very useful to be able to change and experiment quickly.
With an n-tier web application you won't often be able to do that anyway.
Does your article address strategies for mitigating this bias? I, as a politically left leaning person, often recognize my biases but I have difficulty absorbing the "other side" of data, notably because it feels like 90% of the internet is either lies, stretches, or opinion pieces. So my strategy to mitigate my bias is to look up counter information, but I struggle to find anything of real value.
Hell, I even struggle to find real value supporting my biases. The internet is just a mess.
So in light of the over saturation of purposefully biased content on the internet, how might a person mitigate their own personal biases?
While I addressed the issue in general, and the impact it is causing, I can tell you a couple of things you can do to try to mitigate your biases:
First of all, do your research manually. Don't rely on Twitter as your only source of news. Having multiple sources will help you establish the truth. The best part? The extension mentioned by the OP can actually be very helpful!
Secondly, remember that it is all about perspective. The way you see things is NOT necessarily the same as how I see things.
There is a difference between data and information, because not all data is informational.
> you're better off getting it to generate a 50+ character random password (including uppercase, lowercase, numbers and symbols).
Lol I wish. Almost all of the important sites I use, like various bills and loans, use terrible password schemes. One even, until recently, enforced an 8 character limit! I think they raised it to 16 iirc. Oy.
Hell, even the company I work for frequently rejects my password generator's output. The site's arbitrary character requirements would sometimes not be satisfied by my ~20 character generated passwords when I was creating accounts in our dev system, etc. Which is annoying as hell, but I can't convince management otherwise, because our (older) users tend to use some of the worst passwords out there.. so I can understand where it's coming from.
I'd imagine the risk is pretty great, for the same reason I wouldn't paste a password here that I use. Sure, you may not know who I am, but if you were the site itself you get a ton of information on me.. which is more than I'd like you to know if you _also_ know one of my passwords.
I agree, the risk is minimal. Nevertheless.. security, heh.
Will you be inspecting the code every single time it loads to ensure that it has not changed? You are receiving this code from an untrusted 3rd party every time you visit. It is one thing to trust a known entity like LastPass, 1Password, etc., all of whom are still vulnerable to supply chain attacks. It is another thing entirely to trust a random website on Hacker News.
Payload decoders and password generators are some of the biggest honeypots out there. Combine this with an attack taking advantage of hidden form autofill, and you could gain quite a bit of information to go along with that password.
If I were to use it, I'd probably just pull my own copy off github.
If I were to recommend it to novices, I'd tell them to use a password manager locally, and to use something like that only to generate a secure password for getting into their own machine or their password manager - which mitigates most of the risk if it turns rogue.
I'm not sure what counts as fast or slow. I think on average I spend 1-10s compiling, e.g. when I run my tests or run a server. That's my normal workflow, i.e. the thing I do 500x a day.
If I was to nuke my target directory _(the compile cache)_, it would take minutes. Not sure how many honestly, because when I do it, I rarely wait for it.
So the compiles are definitely slow, but for me it's only bad when dependencies need to be compiled. Which is rare.
If you don't cache your dependencies well, like in a poorly written CI pipeline, things will suck though. 20 minute builds at work (slow build machines) are common for us due to some poorly done caching.
Any comments on optimal lighting for a home office? Or, what does your setup look like?
A nice bright working area sounds nice. Bonus if it's good light, and not something that will feel unnatural.
Sidenote: I've got a nice big window that I face, and I often want it open, but my room is so dark I have to keep it mostly closed as it overpowers my monitor, eyes, etc.
I have several compact fluorescent bulbs, each outputting 8,600 lumens. Since the bulbs measure about 10" long, the only fixture I found to work is to suspend a bare socket and attach a giant paper lantern.
When selecting light, I pay attention to the temperature of the light (I like 5000K for daylight) and the Color Rendering Index (CRI), which is a measure of the quality of light. Technically you can make "white" with only three wavelengths; however, the broader the spectrum of light (many wavelengths), the better it shows on reflected surfaces. It's not highly relevant to the topic of eye strain with screens, but it's a nice touch that makes the area more comfortable.
Recently, I started exploring LED lights and there are some really good options appearing on the market. I tested some spot lights with a CRI of 92 which look even better than halogens. So when it comes time for me to replace my CFLs, I might end up with a better option.
My home office is the only place I really feel the benefit of Philips Hue bulbs. Being able to fine-tune and change the white color temperature throughout the day (and night) is fantastic.
Personally, I prefer indirect light. So I have light strips which illuminate the wall behind my monitor, and a few other bulbs directed at the ceiling which reflect light back down.
What's the advantage? While a common protocol at first glance sounds nice, I'd imagine the devil, and the validity of any test, is in the actual implementation. So TAP, as far as I can tell, is just a BDD spec.. which basically amounts to some lines of text for someone to write BDD against.
Am I missing something? I was actually looking into BDD-like tools recently, so I feel like I'm a potential user for TAP, but I just can't grok the value add offhand.
edit: oh, maybe TAP is the inverse of what I was thinking. It's about the output of test suites, for common post-test tooling? Aka to give statuses about the state of tests?
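For context, that reading matches my understanding: TAP (the Test Anything Protocol) is a plain-text format that test runners emit so harnesses and reporting tools can consume results regardless of language. A minimal stream looks roughly like this (the test names here are invented):

```
TAP version 13
1..3
ok 1 - parses config file
not ok 2 - handles missing file
  ---
  message: expected error code 2, got 0
  ...
ok 3 - writes output # SKIP not implemented on CI
```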
Can anyone recommend a resource for learning how to diagram?
Notably, I don't want to free-design directional relationships that are counterintuitive to what people expect. E.g., if Service A calls Service B, does the arrow go to Service B? What about the response? I imagine there are a ton of little questions like this, and I'd like to learn to draw what people expect to see.
There are standards like UML, but I’ve never found those to be as useful to start with. It’s best to treat this as a “design problem”: first, answer the question: “what am I communicating”.
Then, maybe look into conventions - but often you’ll find less need for one “master UML diagram” and instead use many, smaller diagrams embedded within reference docs.