Hacker News | Raphomet's comments

You are living my dream. Happy to hear it's going well.


I'm a note-taking power nerd who has used all the buzzy apps on Macs. Obsidian is by far the best — it's become my personal journal, my knowledge base, a quick and dirty blog, a place to keep loose notes.

The development velocity of the (TWO PERSON!) team behind this app is ridiculous. They're constantly pushing updates, and seem to handle all facets of app development with aplomb.

No affiliation, just a happy user!


>The development velocity of the (TWO PERSON!) team behind this app is ridiculous

This! I just checked their page for a "careers" button to see if they got VC funding and are raising a team. Nope, still just 2 people. It's not that they are able to move fast, it's that they are moving fast while making a product that looks good, does the job, and is snappy. Kudos to the (2 person) team!


The product velocity is probably because it's only two people, not despite their team size.


It might also be because they're avid users of Obsidian itself.


Smaller teams move faster. This has always been true.

Scaling up teams is to deal with scope, not speed, and usually leads to much slower progress as processes and procedures are layered on top.


I'll echo this as well. Not only is the development pace ridiculous for such a small team, but they're also very responsive to support requests via email and Discord.

I had a small problem with the app once. I contacted them via email and it was resolved in a couple of hours and they were kind enough to offer different solutions -- solutions that didn't fill their pockets. (Needless to say, I'm sticking with their services.)

The fact that they can manage all that is almost a testament to how useful the app they're creating must be for them.

I too am just a happy user/customer and wish them nothing but success.


Agreed - and I love how transparent the way Obsidian stores its files is: just a folder on my local drive. I've tried Craft, Notion, Bear, etc., but always had concerns about data portability once I scaled up to thousands of notes and manual exports became impractical.


> The development velocity of the (TWO PERSON!)

Well, in general, isn't velocity inversely correlated with the number of people on a team?



Especially if your problem domain is determined by those two people.


And isn't this a side project? They are also the dev team behind Dynalist?


Yes, they develop Dynalist as well, but as far as I know it is on "hold" and they are focusing on Obsidian at the moment.


I would love a fusion between the two - an MD backed infinite outliner.

Logseq is the closest thing, but it feels a little clunkier.


Me too. Obsidian hasn't really clicked with me yet, but I love the idea of everything being simple plain-text files in a folder.

Last time I checked out Logseq or any of the other outliners, none of them had a note section underneath each bullet point, which is something I use a ton in Dynalist.


That actually scares me a bit. I liked Dynalist but stopped using it because of random UI issues on mobile; it just didn't feel worth it.

Hope they do better with Obsidian.


Don’t forget their cats!


Anyone know what tool the author used to make those nice animated SVGs?


Had the same question. Apparently he created them manually! https://twitter.com/hassenchaieb/status/1240346726822752262?...


I found this on his Github: https://github.com/hassenc/svgAnimator


I'm wondering the same thing!


Lob | YC S13, YC Continuity | Senior Software Engineer, Engineering Manager, Senior Frontend Engineer | Full Time, ONSITE | San Francisco, CA

Lob exists to create APIs that help developers automate things in the offline world. Our first product was our Print and Mail API (programmatically send letters, postcards, checks). Our second is address verification, CASS-certified by the USPS.

I'm Lob's head of engineering. We are building an open, collaborative, experimental, evidence-driven culture because we think the best innovations will come from everywhere in the company—especially from engineers. Read more about our engineering team here: https://lob.com/blog/category/engineering

We're currently looking for:

- experienced software engineers who can lead entire projects

- a frontend engineer who can take ownership of our frontend ecosystem

- engineering managers who are technical, great at hiring, and have a track record of coaching strong problem-solvers

We hate contrived interviews, so our process rewards practical problem solving (based on real problems we've faced) and excellent communication.

Apply at https://lob.com/careers if this intrigues you!


Lob | YC S13, YC Continuity | Senior Software Engineer | Full Time, ONSITE | San Francisco, CA

Our first API was to programmatically send physical mail. Our second, announced 2 months ago, (https://venturebeat.com/2017/05/31/ycs-continuity-fund-leads...), is CASS-certified address verification. Our long-term goal is to provide the building blocks for developers to automate the offline world through APIs.

I'm the head of engineering at Lob. In between my last job and this one, I spoke to 42 organizations before I found what I was looking for in Lob: an exceptional team at the beginning of its growth phase, and also a company with a track record of being deliberate about its culture and which is intentionally building a good place to work.

We are a small and mighty engineering team with a ton of product and infrastructure problems to solve as we keep pace with rapid growth. So, we're currently looking for experienced software engineers who can take ownership of entire projects. We hate contrived interviews, so our process rewards practical problem solving (based on real problems we've faced) and excellent communication.

Apply at https://lob.com/careers if this intrigues you!




Thanks for sharing. Cathy O'Neil is one of the most credible voices in the room when it comes to understanding the unintended consequences of applied big data algorithms.

The original post is sensationalistic and massively overstates what's currently possible when it comes to micro-targeting.


The USDS is working on vets.gov, not 18F. Apparently this effort involves taking hundreds of microsites that all look and behave differently and combining them into one unified effort. It's not surprising that this would take a long time, and my understanding is that this is one of USDS's top priorities.

I think it's easy to underestimate how hard it is to ship something on the scale of vets.gov from the outside.

Source: had early talks with the VA to join this project. I didn't, but came away with a lot of respect for the team working on it.


Right. I am a veteran who uses government portals for certain services, and you are absolutely right. There are so many moving parts behind the "single" portal the government wants veterans to use for access to services and information. I have been using those services for many years and haven't even seen them all, let alone used them all.


18F is/was most definitely contributing to vets.gov.

https://www.vets.gov/playbook/platform/

I'll give vets.gov credit for mobilizing content with responsive design.

But as expected, "uber-portal" strategies that re-purpose content from other sites quickly suffer from entropy and information rot. The portal just becomes a cut-n-paste curation exercise.


Hi! Just curious, why did you stop running the code? Seems like a useful thing.


So the follow-up paper that assesses the impact is here [1]

TL;DR is that developers just didn't find it useful. Sometimes they knew the code was a hot spot, sometimes they didn't. But knowing that the code was a hot spot didn't give them any means of effecting change for the better. Imagine a compiler that just said "Hey, I think this code you just wrote is probably buggy" but didn't tell you where, and that would keep flagging the code even after you fixed it, because it had been buggy recently. That's essentially what TWR does. That became understandably frustrating, and we have many other signals that developers can act on (e.g. FindBugs); we risked drowning out those useful signals with this one.

Some teams did find it useful for getting individual team reports so they could focus on places for refactoring efforts, but from a global perspective, it just seemed to frustrate, so it was turned down.

From an academic perspective, I consider the paper one of my most impactful contributions, because it highlights to the bug prediction community some harsh realities that need to be overcome for bug prediction to be useful to humans. So I think the whole project was quite successful... Note that the Rahman algorithm that TWR was based on did pretty well in developer reviews at finding bad code, so it's possible it could be used for automated tools effectively, e.g. test case prioritization so you can find failures earlier in the test suite. I think automated uses are probably the most fruitful area for bug prediction efforts to focus on in the near-to-mid future.

[1] http://www.cflewis.com/publications/google.pdf?attredirects=...
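For context, a time-weighted risk score of the kind described above can be sketched roughly as follows. This is an illustrative approximation, not the exact formula or parameters from the paper: each bug-fixing commit to a file contributes a weight, with a logistic decay (the decay constants here are hypothetical) so that recently buggy files score highest.

```python
import math
from datetime import datetime, timedelta

def twr_score(bugfix_times, now, horizon_days=365.0):
    """Illustrative time-weighted risk: each bug-fixing commit
    contributes close to its full weight if recent and almost
    nothing if old, via a logistic decay. The constants (12.0,
    one-year horizon) are assumptions for illustration."""
    score = 0.0
    for t in bugfix_times:
        # Normalize commit age into [0, 1]: 1.0 = just now,
        # 0.0 = at (or beyond) the horizon.
        age_days = (now - t).total_seconds() / 86400.0
        ti = max(0.0, 1.0 - age_days / horizon_days)
        # Logistic weighting steeply favors recent fixes.
        score += 1.0 / (1.0 + math.exp(-12.0 * ti + 12.0))
    return score

now = datetime(2013, 1, 1)
recent = [now - timedelta(days=d) for d in (5, 10, 20)]
old = [now - timedelta(days=d) for d in (300, 320, 340)]
# A file with three recent bug fixes outranks one with three old fixes.
print(twr_score(recent, now) > twr_score(old, now))  # True
```

This also illustrates the frustration described above: the score only decays with time, so a file keeps ranking high for a while even after its bugs are fixed.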


I was one of the interviewees for the study (or at least, I remember ranking those three lists as described in the experimental design).

My impression was that the results of the algorithm were pretty accurate, but they were not very actionable. Very often, the files identified were ones the team already knew to be buggy, and there were good reasons they were buggy: e.g., the problem the code was solving was complex, that area of the code was undergoing heavy churn because the problem it solved was a high priority, or the code was ugly but another system was being developed to replace it and it wasn't worth fixing something that was going to be thrown away anyway. In some cases, proposals to fix or refactor the code had been nixed repeatedly by executives.

Basically - not all bugs are created equal. Oftentimes code is buggy because it's important, and the priority is on satisfying user needs rather than fixing bugs.


This seems worth a follow-up post to mention that the idea didn't pan out.


I work in software reliability (bug finding through dynamic program analysis), a domain related to this research.

Most of these machine-learning-based software engineering research tools are based on unrealistic scenarios: full of over-promises, with very little delivered in real life.

