Hacker News | l5t's comments

Good point. This is why we fine-tune Whisper on our own medical dataset to improve performance.


Putting notes into your medical records takes about 40% of a doctor's time. We have been testing with doctors to reduce this time drastically so that they can spend more quality time with you. We also interviewed many patients who take notes during consultations so they can remember what was discussed. So it could benefit the patient as well.


Your response completely ignores the concerns raised in the comment you are replying to. "It saves your doctor 40% of their time" is not a reasonable response to "I don't want my private medical details divulged to a third party".


Perhaps not commenting on patient privacy concerns at all is a better answer than any evasive answer could ever be.

Their website's privacy policy doesn't say anything about their product. It's also incorrect (they seem to have switched tracking providers without updating their privacy policy). I would quote the sections that are incorrect, but their terms and conditions forbid copying any content from their site. Their T&C also forbid me from linking to their page or terms.

I very much doubt that using this product as advertised is even legal under the GDPR. The company is French and health data, even pseudonymized, is strictly regulated. The "demo" doesn't feature any explicit consent at the very least.

It looks like they're making all privacy risks and concerns the doctors' problems. After all, they're the ones violating the law when they use this product.


At least in the US, this company would be responsible for violating the law, not only the physician.

I honestly have no idea how these products are even developed. Does nobody think about any laws at all? This is something I really hate about the current state of "hustle" development.


Thanks for looking into our legal documents. The ones related to the Nabla Copilot product are different from the website T&C and privacy policy, and can be found here: https://www.nabla.com/legal-documents/

Our product is GDPR compliant.


I read the French legal documents and your notices regarding GDPR.

What I don't understand from those, and statements made by your team in this thread, is how some claims can be compatible.

- That the product is GDPR compliant.

- That you don't store the PII or health data.

- Yet all data is stored at Google servers.

- Also, you reserve the right to re-use said data. (Which, since this is for R&D purposes, should probably qualify you for the need to ask the CNIL for an authorization as "health data repository"? [0])

- That none of the data is sent outside the EU or to additional 3rd parties.

- Yet it uses a fine-tuned "GPT-3" (a term that to the best of my knowledge exclusively refers to Microsoft/OpenAI's US-based API service, not to on-prem GPT-like LLMs like GPT-J or GPT-NeoX).

All in all, I can feel the enthusiasm but it does feel like this thread would have been so much more reassuring with some proactive comments about the privacy/health data issues, rather than have everyone voice the obvious concern with no prepared answers.

[0] https://www.cnil.fr/fr/la-cnil-adopte-un-referentiel-sur-les...


You're absolutely right, we should have been much more upfront about the privacy/security aspects of the product and added this link to the post: https://www.nabla.com/blog/privacy-security/

I hope this link will clarify our position.

Here are additional answers related to your points:

- We do use Google Cloud to host our backend in the EU or in the US, as well as the data for the Care Platform product. For the Copilot product, we don't host any data; it is stored locally in the practitioner's browser.

- Our T&C reserve the right to re-use data in the event that we store data in future versions of Nabla Copilot. In any case, the re-use of data, even health data, is allowed under the GDPR for the improvement of the service provided, if the data controller (the practitioner) authorizes us and has informed the patient.

- We did not say that "none of the data is sent outside the EU". Actually, we say the opposite in the Copilot APD Annexe 1. We specifically mention Google and OpenAI, and we comply with the GDPR through data protection agreements with both of these companies.


> that they can spend more quality time with you

Maybe it's just the doctor's offices I've been to, but I think this would actually just increase patient churn in a hospital. There's no way a doctor is going to increase their per-patient time by the 40% saved if the hospital can toss them in front of another patient instead.


It's definitely not just you... Though that may still be a plus considering the shortage of doctors in many countries. But the privacy concerns remain.


I can absolutely see the benefits of this, that's not the question. The question is whether or not the benefit justifies the risk.


We currently support English and French, and we will be supporting more languages soon.


Scalability. There is more context on the move in the 2017 post: https://medium.com/wit-ai/open-sourcing-our-new-duckling-47f...


We will be releasing an import/export feature in the next two weeks.


Wonderful! Thanks!


We understand the concern here at Wit. It's free because we no longer need to generate revenue. Wit developers spend time validating data that is critical to improving the models for the Wit community. When we joined Facebook in January 2015, we updated our Terms of Service to state that you can retrieve your training data if Wit closes at some point. FYI, the training data (expressions, intents, entities) is the critical component. The plan is to continue supporting and growing the product; the release of the Bot Engine Beta a few months ago is proof of that.


The fact that you're saying "it's free because we no longer need to generate revenue" is exactly the problem. It means you didn't even think this through. If you had, your answer would be different.

It's just like Twitter saying "We don't need to generate revenue directly from our API and that's why we're opening it all up because what's good for the community is what's good for us!", except that this isn't 2006 and everyone knows better.


It's free because Facebook wants the training data. Pretty obvious I would have thought.

If you're ok with helping Facebook improve its AI then I see no reason to worry about the fact that it is free.


Our vision is to make it easy for developers to build apps or devices that can leverage AI/NLP/speech. As you probably know, the key is the amount of data you can collect: the more data, the better your models, and the better your API. When you are a startup, at some point you need to generate revenue to continue to grow and hire, even if your objective is to gather more data.


Thanks for reporting. It will be fixed soon.


lovely


haha


Synopsis: Presentation on developing voice interfaces for children.

Oren Jacob shared clips of children talking to the ToyTalk apps and anecdotes of his research with children to demonstrate the challenges of designing voice applications for kids. He also touched on the regulatory issues surrounding recording children and how ToyTalk approaches these challenges.


Curious to know what would happen if two or more bulbs broke at the same time.


In engineering, there is no such thing as "at the same time" unless it's a feature of the design.

