Hacker News | parentheses's comments

This tool is not just used for safety. ;)

You can spoof or disappear a mashed file. You can trigger vulnerabilities by breaking internal assumptions of a program.


I just downloaded it and was presented with a seemingly required Google login.

I really appreciate that this is free, but I do feel like the privacy-first approach is incompatible with requiring a Google login.

edit: FWIW, I bought MacWhisper and would buy this if it didn't require the Google login.


Yes, I can remove the login. I only use your email for communication such as news, updates, and bug fixes (I'm not collecting any additional data).

I'm curious: if you already have MacWhisper, what makes you interested in Scripta? What would motivate you to buy it? (Am I right that we're talking about a lifetime payment, or would you be open to a monthly subscription?)


I am only trying it out for now. But what I'd love is for things to be quite automatic: I join a Zoom call and get a prompt asking what I want (record, record+transcribe, or record+transcribe+summarize, or use some custom prompt).


Fully agree with this. I find the cost of cloud providers is mostly driven by architecture. If you're cost conscious, cloud architectures need to be designed up front with this in mind.

Microservices are a cost killer. For each microservice pod you're often running a bunch of sidecars (Datadog, auth, ingress), and you pay massive workload-separation overhead in orchestration, management, monitoring, and of course complexity.
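To make the sidecar overhead concrete, here's a back-of-the-envelope sketch. All the numbers (app CPU request, per-sidecar requests, pod count) are illustrative assumptions, not measurements from any real cluster:

```python
# Hypothetical fleet: each microservice pod carries three sidecars.
app_cpu_m = 250          # millicores requested by the app container (assumed)
sidecars_cpu_m = {       # typical sidecar CPU requests (assumed)
    "datadog-agent": 100,
    "auth-proxy": 50,
    "mesh/ingress-proxy": 100,
}
pods = 200               # pods across all microservices (assumed)

overhead_per_pod_m = sum(sidecars_cpu_m.values())         # 250 millicores
fleet_overhead_cores = overhead_per_pod_m * pods / 1000   # cores spent purely on sidecars
overhead_ratio = overhead_per_pod_m / app_cpu_m           # sidecar CPU relative to the app

print(f"{fleet_overhead_cores} cores of sidecar overhead "
      f"({overhead_ratio:.0%} of app capacity)")
# → 50.0 cores of sidecar overhead (100% of app capacity)
```

Under these assumed numbers the sidecars request as much CPU as the application itself, which is the kind of multiplier that a monolith (or shared node-level agents instead of per-pod sidecars) avoids.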

I am just flabbergasted that this is how we operate as a norm in our industry.


I think this is the kind of investigation that AI can really accelerate. I imagine it did. I would love to see someone walk through a challenging investigation assisted by AI.


No.


What the commenter is saying pertains to the _decision_ to use Android. That is why this is happening. That is NVidia.


Such decisions cannot be reversed on a whim.


I look at this as the equivalent of writing a MUD as you ladder up to greater capabilities. MUDs are a good educational task.

Similarly, AIs are just putzing around right now. As they become more capable, they can be thrown at bigger and bigger problems.


This dynamic would create even more gate-keeping using credentials, which is already a problem with academia.


Totally agree!

I feel like this means that working in any group where individuals compete against each other turns into an AI-vs-AI content-generation competition, with the humans stuck verifying/reviewing.


> Totally agree!

Not a dig on your (very sensible) comment, but now I always do a double take when I see anyone effusively approving of someone else's ideas. AI turned me into a cynical bastard :(


It generally feels a bit dangerous to use an AI product to work on research when (1) it's free and (2) the company hosting it makes money by shipping productized research.


I am not so skeptical about AI usage for paper writing, as the paper will often be public just days later anyway (on pre-print servers such as arXiv).

So yes, you use it to write the paper, but soon it is public knowledge anyway.

I am not sure there is much to learn from the authors' drafts.


I think the goal is to capture high quality training data to eventually create an automated research product. I could see the value of having drafts, comments, and collaboration discussions as a pattern to train the LLMs to emulate.


Why do you think these points would make the usage dangerous?


They have to monetize somehow...


I suppose in that position your head has lower elevation, allowing for better circulation.

