I find myself caught in the y/t/f <enter> loop all the time. I use a site blocker, but if I do want to visit a site, I end up turning off the blocker and forgetting to re-enable it. This seems like a much better solution: it allows me access to the sites I need, but ensures I don't get lost in them, and it tracks my usage.
I am seriously so pumped about this, funny that I needed to visit one of my blacklist sites (hacker news) in order to find this. Please keep us posted with updates (settings page of the app?), if I continue to use this I'd be happy to pay for it in the future.
One thing I'd love to be able to do right now is whitelist certain times in the middle of the day. For example, lunch. Punching in 30 mins on a site is an easy fix, but it'd be nice if that were built in.
Thanks so much! There's an in-product mechanism that shows messages on major updates, but I might create a mailing list / Slack channel as a more dedicated space for feedback and updates. If I do, I'll add it to the next in-product message.
Once I get through the incoming bugs, I'll spend some time thinking about what options to provide in Intention while making sure that the product stays simple. Appreciate the kind words!
BOM looks like <$10 (there's barely anything inside). Maybe $15 with those small batteries. I doubt it's economical to recycle. What's fascinating is that people are willing to pay so much over the BOM. But I guess that's Apple's specialty: getting people to do it.
I was also wondering this. I use IdeaVim for IntelliJ as my primary editor. It's awesome: it reads your vimrc and lets you set up mappings for any editor action.
I used IntelliJ for about 7 years. I loved it. When I learned Vim I used it with IdeaVim. There were minor things that bugged me, but I learned to live with them. But one day I decided to try Emacs with Evil mode. It took me a long time to get my config right, but at some point I realized there were way too many small things in IntelliJ that annoyed me, and beyond submitting feature requests on YouTrack there was little I could do. When I discovered Spacemacs I switched to that. I realized that Emacs can vim better than IdeaVim; in fact it probably vims better than Vim itself.
It's perfect for a bedside table. Instead of needing to fumble around for a cable when I'm tired, I just place it on the mat. It's handy to have several pads around the house and office so your phone can always be juiced up. My phone rarely goes below 50% anymore, and I basically only bother with cables when I'm traveling. It's awesome.
You are able to go without tests because you're writing entire apps on your own. You know what each file, class, and module is supposed to do, and how to test it for regressions manually.
If you're working on a large team with a large suite of applications, this isn't the case. Very often I'll need to make changes to code that I don't explicitly own, or that I've never seen before today. If this code doesn't have tests, this is a recipe for disaster. I don't really understand what this code is supposed to do. How do I know that my change didn't break any existing functionality? Is it obvious how or even possible to test the functionality in this file manually? Maybe it's a background job that processes a file off of an SFTP server, how long will it take me to set up a manual test for that?
It's also about iteration time. Automated tests allow you to check the correctness of every single change nearly instantaneously. I don't want to have to switch to a browser and wait for a form to submit just to check that my code still works after a minor refactor, or to ensure that a typo was fixed successfully. Tests mean that code is much more likely to be refactored often, which leads to much cleaner, easier-to-maintain codebases.
This argument, like most of the others at this same level, has a glaring error. It only justifies integration tests. It does not justify unit tests (in nearly all cases).
Unit tests wouldn't catch the examples you gave:
> How do I know that my change didn't break any existing functionality?
You don't. And with many unit tests ... you still don't. A software engineer with a few years' experience will know this.
> Maybe it's a background job that processes a file off of an SFTP server, how long will it take me to set up a manual test for that?
This especially, no unit test is ever going to catch. This behavior would depend on so many things: encryption, networking, scheduling, starvation, graceful error handling, retry logic ... Not something any amount of unit tests could reasonably verify.
So this must be an argument against unit tests? And yet, I don't think so.
In practice, you see the opposite. These arguments are designed to force people to write unit tests, and are actually used against integration tests (which are complex tests that require tweaking in intelligent ways every time you make a serious change). Of course, integration tests and "instantaneous" are usually opposites.
So I don't understand this argument. It only makes sense on the surface. It does not actually work on real, large code projects. So I don't understand why you make this argument.
And I especially do not understand that nearly every large company follows advice like this. I mean, I can come up with cynical reasons: "more pointless work with zero verifiable results, but nice "quantifiable" (if meaningless) numbers! Of course we're in favor!". But really, I cannot come up with a coherent explanation for the behavior seen in developers at a large company.
Most statistics about unit tests make no sense. Coverage, for instance, says nothing about how many edge cases you handle, and nothing about many other guarantees (like bounded and reasonable allocation for "weird" inputs, the same for CPU usage, or that the code still works together with the OS and other daemons on the system it'll be installed on). Since tests serve no "real" purpose in the shipped software (they're code that doesn't run in production), the number of tests is meaningless too. Number of lines of test code ... that's beyond useless.
The bigger a piece of software, the more useless unit tests become and the more critical realistic integration tests become ... and the less likely you are to actually find them.
And frankly, for small pieces of code, I find that specialized ways to "test" them that provide actual guarantees of correctness (like Coq) actually work. Unit tests ... never catch edge cases, not even with the best of developers. Of course, even Coq sort of only makes sure you KNOW about every edge case and requires you to verify its correctness before letting you pass, so bugs in your understanding of the problem will still be bugs.
But at least you have a theoretical guarantee that a developer actually thought about every edge case. Also, it's pretty hard to use. The language could be clearer, but that's not really the problem. The problem is that Coq WILL show you that no matter how well you think you understand a problem, your understanding is not complete. Often in surprising, interesting ways.
The SFTP file processor is actually a quintessential example of how unit tests can help new developers edit existing code.
A unit test for the processor would mock out all of the SFTP fetching, or move that to a different class, and focus on only the core business logic of processing the file. The core logic could easily be unit tested, and changes could be made to the file processor without needing to replicate the entire SFTP environment in order to determine if there were regressions in the core business logic.
The alternative is needing to spin up a test SFTP environment, or somehow do that mocking in my manual test, just in order to do something as simple as refactor the class or make a small business logic change. A unit test empowers any developer to make those changes, without needing much knowledge of the environment the code runs in.
> without needing to replicate the entire SFTP environment in order to determine if there were regressions
Yep. And then it doesn't actually work. Because it tries to read every file at the same time. Because the strings it passes in to resolve the server don't work. Because it never schedules the thread reading from the socket. Because it uses a 100-byte buffer (plenty for tests, after all). Because ...
And even then, what you're describing is not a unit test. A unit test tests a single unit of code. Just one function, nothing more, and mocks out everything else.
So you can never test a refactor of code with a unit test. Because a refactor changes how different pieces of code work together, and unit tests by definition explicitly DON'T test that (all interactions should be mocked out). So a unit test would be useless.
The fact that something might go wrong in the integration test doesn't mean unit tests for the core logic aren't helpful. Besides, you're probably going to be using an external library for the SFTP connect, so it's very likely to go just fine.
And you can totally use unit tests for what I'm describing. Two classes
- SFTPHandler. Connects via SFTP, downloads latest files off the server, passes contents as a string to the processor class `FileProcessor.process(downloaded_file)`
- FileProcessor. Has one public function, process, which processes the file - doing whatever it needs to. This function can then very easily be unit tested, just passing strings for test files into the function. You can also refactor the `process` function as much as you like, not needing to worry about the SFTP connection at all. The `process` function probably calls a bunch of private functions within that class, but your unit tests don't need to worry about that.
I've used a setup like this in production, it works just fine, and allowed us to improve the performance of the file processing logic and make changes to it very easily and often - without worrying about regressions to the core business logic.
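A minimal sketch of that two-class split (the names beyond `FileProcessor.process` and the line-based file format are invented for illustration, not taken from any real codebase):

```python
class FileProcessor:
    """Core business logic: parse a downloaded file's contents.
    Knows nothing about SFTP, sockets, or scheduling."""

    def process(self, contents: str) -> list:
        records = []
        for line in contents.splitlines():
            line = line.strip()
            if not line:
                continue
            name, amount = line.split(",")
            records.append({"name": name, "amount": int(amount)})
        return records


class SFTPHandler:
    """Thin I/O shell: fetches file contents and hands them to the
    processor. The actual fetch is injected, so tests can stub it."""

    def __init__(self, fetch, processor):
        self._fetch = fetch          # callable returning file contents
        self._processor = processor

    def run(self):
        return self._processor.process(self._fetch())


# The core logic can be unit tested with nothing but a string:
processor = FileProcessor()
result = processor.process("alice,10\nbob,20\n")
assert result == [{"name": "alice", "amount": 10},
                  {"name": "bob", "amount": 20}]
```

The tests for `process` never touch the network; connection-level failures still need an integration test, but a refactor of the business logic can be checked in milliseconds.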
In my experience if there's one thing absolutely guaranteed it's that unit tests decrease the performance of whatever they're testing (Because it eventually leads to local wins that are big losses for the program as a whole, because this encourages doing idiotic stuff like allocating large buffers and keeping enormous non-shared caches and maps)
Now in the example given, performance does not matter, so I do wonder why you'd mention it at all.
How about you just answer me this question: did you still see significant bug volumes after implementing the unit tests for the FileProcessor?
Obviously I believe the answer to be "yes". I feel like your statement that changes were made "very easily and often" sort of implies that yes, there were many bugs.
Note that testing based on past bugs is not called unit testing. That is, as you might guess, regression testing (and has the important distinction that it's a VERY good practice to go back through your regression tests once a year, and throw out the ones that don't make sense anymore, which should be about half of them)
Besides, I've very rarely seen tests actually catch bugs. Bugs come from pieces of code not doing what developers expect them to do in 3 ways:
1) outright lack of understanding what the code does (this can also mean that they understand the code, but not the problem it's trying to solve, and so code and tests ... are simply both wrong)
2) lack of consideration for edge cases
3) lack of consideration for the environment the code runs in (e.g. scaling issues. Optimizing business logic that processes a 10M file then executing it on 50G of data)
None of these has a good chance of getting caught by unit tests in my experience.
But developers seem to mostly hate integration tests. Tests that start up the whole system, or even multiple copies of it, and then rapidly run past input through the whole system. When one fails, it takes a while to find out why. It may fail, despite all components, potentially written by different people, being "correctly written", just not taking each other into account. It may fail because of memory, CPU starvation, or filesystem setup. It may fail occasionally because the backend database decided to VACUUM and the app is not backing off. It may fail after a firmware upgrade on equipment it uses.
The problem I have with these "issues" is simple: they represent reality. They will occur in production.
And in some ways I feel like this is a fair description: unit tests are about "proving you're right", even, and perhaps especially, if you're wrong. "You see, it isn't my code ! Not my fault !".
It still seems like the core of your argument is "Some things are hard to test, so you might as well not test anything at all" - which I really don't buy.
> How about you just answer me this question: Did you still see significant bug volumes after implementing the unit tests for the FileProcessor?
Kinda tricky to answer this, since there were tests for this class from the start. But overall the answer is "no" - we did not see significant bugs in that class. Occasionally we'd get a bug because the SFTP connection failed, or execution took too long and the connection closed - the type of bug that to you seems to negate the value of unit testing the FileProcessor. But without the unit tests for the FileProcessor, I'd have those bugs plus more bugs/bad behavior in the business logic. How is that better, exactly?
The idea that tests reduce performance is ridiculous. Improving performance requires making changes to a system while ensuring its behavior hasn't changed. This is exactly what testing provides. Without tests, you can't optimize the code at all without fear of introducing regressions.
> It still seems like the core of your argument is "Some things are hard to test, so you might as well not test anything at all"
Nope. My argument is twofold:
1) unit tests don't actually provide the guarantees that people keep putting forward as reasons to write them
2) this makes them dangerous, as they provide "reassurance" that isn't actually backed by reality
> But, without the unit tests for the FileProcessor, I'd have those bugs plus more bugs/bad behavior in the business logic. How is that better, exactly?
So the tests failed to catch problems with the program's behavior. Maybe it's just me, but I call that a problem.
Testing the business logic as a whole is not a unit test, except in the marginal cases where all your business logic fits neatly in a single function, or at least a single class. If it's actually a few algorithms, a few files, and a bunch of functions, and you test everything together, that's an integration test, not a unit test.
If you use actual (or slightly changed) data to test that business logic, as opposed to artificially crafted data, that again makes it not a unit test.
> The idea that tests reduce performance is ridiculous
If you use tests to optimize code, you're optimizing a piece of code in isolation, without taking the rest of the system into account. That works for trivially simple initial optimization, but falls completely on its face when you're actually writing programs that stress the system.
> Improving performance requires making changes to a system while ensuring it's behavior hasn't changed.
The system as a whole, sort of. Individual units? Absolutely not.
Besides, tests merely provide FALSE assurance behavior hasn't changed. Time and time again I've had to fix "I've optimized it" bugs. TLDR is always the same. Unit tests pass so the code "must be right" (meaning they don't run the code outside of unit tests, and the unit tests only directly test the code, not blackbox). Then in the actual run edge cases were hit.
And "edge cases" makes it sound like you hardly ever hit this. Just yesterday we had a big issue. What happened? Well, we had a method that disables an inbound phone line (e.g. for maintenance, or changes). All unit tests passed. Unfortunately, we really should have tested that it does NOT disable anything else (the method was essentially "go through the list, if it matches, disable" - needless to say, it disabled every line). Regression testing added.
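For what it's worth, the regression test described there could look roughly like this (a hypothetical reconstruction - the data model and names are invented, not the actual code). The key is asserting the negative: not just that the targeted line went down, but that nothing else did.

```python
def disable_line(lines, line_id):
    """Disable the line whose id matches; leave every other line alone."""
    for line in lines:
        if line["id"] == line_id:
            line["enabled"] = False


# Regression test: the original bug disabled *every* line, so we assert
# both the positive and the negative.
lines = [{"id": 1, "enabled": True}, {"id": 2, "enabled": True}]
disable_line(lines, 1)

assert lines[0]["enabled"] is False                       # target is off
assert all(l["enabled"] for l in lines if l["id"] != 1)   # nothing else touched
```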
We had someone optimize the dial plan handling. He didn't realize that his "cache" was in fact recreated at the very beginning of every request, when the code evaluated a dial plan; in other words, it was a serious performance regression rather than an improvement. It really looked like an improvement, though. Unit tests ... passed. Of course, "the behavior hadn't changed". About 2 calls got through for ~6 minutes (normally thousands). Now we have a test that spins up the whole system, with the actual production dial plan, and then puts through 10000 calls. If it takes more than 1 minute, the test fails. Developers hate it, but it's non-negotiable at this point.
I can go on for quite a while enumerating problems like this. Billing files erased (forgot to append). Entire dial plan erased on startup (essentially the same problem). Lots of ways to trigger CPU starvation. Memory exhaustion. Memory leaks (the system is mostly Java). Connecting unrelated calls together (that was pretty fun for the first 2-3 minutes). Ignoring manager signals (one server having to do all calls ... which is not happening).
This is a repeating pattern in our failures. Unit tests ... essentially always pass. And time and time again someone tries to make the point that this must mean the code is correct.
OK, I get that you don't like unit tests. You also seem to have a very strict and unhelpful definition of what a unit test is. I don't really care for the academia of it; I just want to know that my code works. So, whether you call it a unit test or an integration test or a functional test or a behavioral test - whatever. The important thing is that it allows me to get some idea of whether or not the code is doing what it's supposed to do.
What do you propose instead of testing? Manually QAing every single piece of the system for every change? Not a lot of companies have the headcount for the fleet of QA people that would require.
For what it's worth I think you both make great points. Whomever one agrees with perhaps mostly hinges on whether unit tests can get in the way of integration tests.
I'm inclined to believe that some of the exhaustive unit-testing culture can provide a false sense of security, and that the work involved in writing those tests can get in the way of writing proper integration tests (or 'degrees of' integration, if you see it as a spectrum).
Provided that proper test coverage includes both integration tests and unit tests, it probably won't hurt to do both. I like unit tests as part of my process (the red-green TDD process can be quite addictive and useful), but under pressure I'd prioritize integration tests, knowing that the more time I spend unit testing, the less I'll spend integration testing.
Splitwise will continue to work and run. The API is not being shut down, they are simply not authorizing new API applications. The article is wrong and massively misleading.
The article is massively wrong and misleading. The API is NOT being shut down. Current applications can still work, but new ones are no longer being accepted.
I love that they're trying to pin this on Venmo, like it's somehow their problem and not a completely misleading article built on misinformation.
> Venmo seems to be confused. According to statement below from Venmo and the message on the developer site, the API is no longer available. But now Venmo tells me the API is still available to existing developers, but not to new ones, which can be a sign of an impending shut down. We’ll update with more info soon once Venmo figures itself out. Parts of this article have been changed for accuracy.