I'm a note-taking power nerd who has used all the buzzy apps on Macs. Obsidian is by far the best — it's become my personal journal, my knowledge base, a quick-and-dirty blog, and a place to keep loose notes.
The development velocity of the (TWO PERSON!) team behind this app is ridiculous. They're constantly pushing updates, and seem to handle all facets of app development with aplomb.
>The development velocity of the (TWO PERSON!) team behind this app is ridiculous
This! I just checked their page for a "careers" button to see if they got VC funding and are raising a team. Nope, still just 2 people. It's not that they are able to move fast, it's that they are moving fast while making a product that looks good, does the job, and is snappy. Kudos to the (2 person) team!
I'll echo this as well. Not only is the development pace ridiculous for such a small team, they're also very responsive to support requests via email and Discord.
I had a small problem with the app once. I contacted them via email and it was resolved in a couple of hours and they were kind enough to offer different solutions -- solutions that didn't fill their pockets. (Needless to say, I'm sticking with their services.)
The fact that they can manage all that is almost a testament to how useful the app they're creating must be for them.
I too am just a happy user/customer and wish them nothing but success.
Agreed - and I love how transparent the way Obsidian stores its files is. Just a folder on my local drive. I've tried Craft, Notion, Bear, etc., but always had concerns about data portability once I scaled up to thousands of notes and manual exports became impractical.
Me too. Obsidian hasn't really clicked with me yet, but I love the idea of everything being simple plain-text files in a folder.
Last time I checked out Logseq or any of the other outliners, none of them had a note section underneath each bullet point, which is something I use a ton in Dynalist.
Lob | YC S13, YC Continuity | Senior Software Engineer, Engineering Manager, Senior Frontend Engineer | Full Time, ONSITE | San Francisco, CA
Lob exists to create APIs that help developers automate things in the offline world. Our first product was our Print and Mail API (programmatically send letters, postcards, checks). Our second is address verification, CASS-certified by the USPS.
I'm Lob's head of engineering. We are building an open, collaborative, experimental, evidence-driven culture because we think the best innovations will come from everywhere in the company—especially from engineers. Read more about our engineering team here: https://lob.com/blog/category/engineering
We're currently looking for:
- experienced software engineers who can lead entire projects
- a frontend engineer who can take ownership of our frontend ecosystem
- engineering managers who are technical, great at hiring, and have a track record of coaching strong problem-solvers
We hate contrived interviews, so our process rewards practical problem solving (based on real problems we've faced) and excellent communication.
Lob | YC S13, YC Continuity | Senior Software Engineer | Full Time, ONSITE | San Francisco, CA
Our first API was to programmatically send physical mail. Our second, announced 2 months ago (https://venturebeat.com/2017/05/31/ycs-continuity-fund-leads...), is CASS-certified address verification. Our long-term goal is to provide the building blocks for developers to automate the offline world through APIs.
I'm the head of engineering at Lob. In between my last job and this one, I spoke to 42 organizations before I found what I was looking for in Lob: an exceptional team at the beginning of its growth phase, and also a company with a track record of being deliberate about its culture and which is intentionally building a good place to work.
We are a small and mighty engineering team with a ton of product and infrastructure problems to solve as we keep pace with rapid growth. So, we're currently looking for experienced software engineers who can take ownership of entire projects. We hate contrived interviews, so our process rewards practical problem solving (based on real problems we've faced) and excellent communication.
Thanks for sharing. Cathy O'Neil is one of the most credible voices in the room when it comes to understanding the unintended consequences of applied big data algorithms.
The original post is sensationalistic and massively overstates what's currently possible when it comes to micro-targeting.
The USDS is working on vets.gov, not 18F. Apparently this effort involves taking hundreds of microsites that all look and behave differently and combining them into one unified effort. It's not surprising that this would take a long time, and my understanding is that this is one of USDS's top priorities.
I think it's easy to underestimate how hard it is to ship something on the scale of vets.gov from the outside.
Source: had early talks with the VA to join this project. I didn't, but came away with a lot of respect for the team working on it.
Right. I am a veteran who uses government portals for certain services, and you are absolutely right. There are so many moving parts behind the "single" portal the government wants veterans to use for access to services and information. I have been using those services for many years and haven't even seen them all, let alone used them all.
I'll give vets.gov credit for mobilizing content with responsive design.
But as expected, "uber-portal" strategies that re-purpose content from other sites quickly suffer from entropy and information rot. The portal just becomes a cut-n-paste curation exercise.
So the follow-up paper that assesses the impact is here [1]
TL;DR is that developers just didn't find it useful. Sometimes they knew the code was a hot spot, sometimes they didn't. But knowing that the code was a hot spot didn't give them any means of effecting change for the better. Imagine a compiler that just said "Hey, I think this code you just wrote is probably buggy" but didn't tell you where, and, even if you found and fixed the bug, would keep flagging the code because it had been buggy recently. That's essentially what TWR does. That became understandably frustrating, and we have many other signals that developers can act on (e.g. FindBugs); we risked drowning out those useful signals with this one.
Some teams did find it useful for getting individual team reports so they could focus on places for refactoring efforts, but from a global perspective, it just seemed to frustrate, so it was turned down.
From an academic perspective, I consider the paper one of my most impactful contributions, because it highlights to the bug prediction community some harsh realities that need to be overcome for bug prediction to be useful to humans. So I think the whole project was quite successful... Note that the Rahman algorithm that TWR was based on did pretty well in developer reviews at finding bad code, so it's possible it could be used for automated tools effectively, e.g. test case prioritization so you can find failures earlier in the test suite. I think automated uses are probably the most fruitful area for bug prediction efforts to focus on in the near-to-mid future.
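For anyone curious what "TWR" looks like concretely: it's a time-weighted risk score over a file's bug-fixing commit history, where recent fixes count heavily and old ones decay toward zero. Here's a minimal sketch in Python; the logistic weighting with constant 12 matches the published Google bug-prediction description as I remember it, but treat the exact constants and the function/parameter names as assumptions, not the team's actual implementation.

```python
import math

def twr_score(bugfix_times, t_start, t_now):
    """Time-weighted risk for one file.

    bugfix_times: timestamps of bug-fixing commits touching the file.
    t_start, t_now: first and latest commit timestamps in the repo,
    used to normalize each commit's time into [0, 1].
    """
    score = 0.0
    for t in bugfix_times:
        # Normalize commit time: 0 = start of history, 1 = now.
        x = (t - t_start) / (t_now - t_start)
        # Logistic decay: a fix made just now contributes close to 1,
        # a fix from long ago contributes close to 0.
        score += 1.0 / (1.0 + math.exp(-12.0 * x + 12.0))
    return score

# Rank files by descending score to get the "hot spot" list.
```

Ranking files by this score is what produced the lists developers were asked to review; as noted above, the ranking itself was reasonably accurate, which is why automated consumers (like test prioritization) may be a better fit than human-facing warnings.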
I was one of the interviewees for the study (or at least, I remember ranking those three lists as described in the experimental design).
My impression was that the results of the algorithm were pretty accurate, but they were not very actionable. Very often, the files identified were ones the team already knew to be buggy, and there were good reasons they were buggy: e.g. the problem the code was solving was complex, that area of the code was undergoing heavy churn because the problem it solved was a high priority, or the code was ugly but another system was being developed to replace it and it wasn't worth fixing something that was going to be thrown away anyway. In some cases, proposals to fix or refactor the code had been nixed repeatedly by executives.
Basically - not all bugs are created equal. Oftentimes code is buggy because it's important, and the priority is on satisfying user needs rather than fixing bugs.
I work in software reliability (bug finding through dynamic program analysis), a domain related to this research.
Most of these machine-learning-based software engineering research tools are built on unrealistic scenarios: full of over-promises, with very little delivered in real life.