> i would fire any friend who attempted to use an AI as a proxy for talking with me.
Mitra does not replace the calls you already make to friends when you want to talk to them yourself. In fact, I encourage you to talk to your friends more directly.
Mitra is useful for 3 main reasons when it comes to friends:
1.) When you or the other person aren't comfortable saying certain things or having certain tough conversations.
2.) When you or the other person may not have the time for a full-fledged conversation about something.
3.) When you want to call your friends purely for entertainment by having Mitra say funny things.
So if a friend used this on you, it does not necessarily mean they view you in a negative light.
Mitra, just like Snapchat, Twitter DMs, or Instagram DMs, is simply another way to communicate with your friends, one that is more engaging and fun than the others.
- "The Declaratory Ruling limits the use of AI-generated voices in robocalls, but it does not impose a total ban, as some headlines have suggested. Instead, it clarifies that callers who choose to employ AI-generated voices must comply with existing TCPA regulations."
- "The TCPA does not ban robocalls to residential phone lines (i.e., home landlines), even without consent from the called party as long as (1) the calls are not made for a commercial purpose..."
Appreciate the context, that is helpful. Do you notify the other party of recording in two-party/all-party consent states? Or is recording calls the default behavior of the app?
> Eleven (11) states require the consent of everybody involved in a conversation or phone call before the conversation can be recorded. Those states are: California, Delaware, Florida, Illinois, Maryland, Massachusetts, Montana, Nevada, New Hampshire, Pennsylvania and Washington. These laws are sometimes referred to as “two-party” consent laws but, technically, require that all parties to a conversation must give consent before the conversation can be recorded.
Virtual avatar generation models can act as world navigators in complex environments. This is a potential new direction for general purpose robotics as well.
We introduce a novel video model that simulates human movement in rock climbing environments using a virtual avatar. Our diffusion transformer predicts the sample instead of the noise at each diffusion step, and it ingests entire videos to output complete motion sequences. By leveraging a large proprietary dataset, NAV-22M, and substantial computational resources, we showcase a proof of concept for a system that trains general-purpose virtual avatars for complex tasks in robotics, sports, and healthcare.
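For readers unfamiliar with the parameterization mentioned above: in sample-prediction (often called x0-prediction), the network outputs an estimate of the clean sample directly, rather than the noise that was added, and each reverse step blends that estimate with the current noisy input. Here is a minimal NumPy sketch of one such reverse step; the schedule, the stand-in `model`, and all shapes are illustrative assumptions, not details from the paper:

```python
import numpy as np

# Illustrative linear noise schedule (the paper's actual schedule is not specified).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def reverse_step_x0(model, x_t, t, rng):
    """One reverse diffusion step where the model predicts the clean sample x0
    directly, instead of the added noise epsilon."""
    x0_pred = model(x_t, t)  # network outputs an estimate of x0
    a_bar_t = alpha_bars[t]
    a_bar_prev = alpha_bars[t - 1] if t > 0 else 1.0
    beta_t = betas[t]
    # Posterior mean of q(x_{t-1} | x_t, x0): a weighted blend of x0_pred and x_t.
    coef_x0 = np.sqrt(a_bar_prev) * beta_t / (1.0 - a_bar_t)
    coef_xt = np.sqrt(alphas[t]) * (1.0 - a_bar_prev) / (1.0 - a_bar_t)
    mean = coef_x0 * x0_pred + coef_xt * x_t
    if t == 0:
        return mean
    # Posterior variance of the forward process.
    var = beta_t * (1.0 - a_bar_prev) / (1.0 - a_bar_t)
    return mean + np.sqrt(var) * rng.standard_normal(x_t.shape)

# Usage with a dummy "model" that just shrinks its input toward zero.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))  # stand-in for a latent video chunk
for t in reversed(range(T)):
    x = reverse_step_x0(lambda x_t, t: 0.9 * x_t, x, t, rng)
```

In a real system the lambda would be the diffusion transformer conditioned on the whole video; the update math is the same either way.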
Of the full distribution of video qualities a modern phone camera can produce, the vast majority are good enough for the AI to understand fine details. Obviously, if you somehow end up with a really low-quality video, it will not give you what you want.
The same goes for the walls: if you take a video of a really dark wall with really bad holds, it probably won't give you what you want either.
This is an end-to-end system that just takes in video frames. Camera parameters are among the things it predicts. It gives promising results for a wide variety of environments (cliffs, different types of bouldering walls, different outdoor walls, etc.), though it is not always accurate. Path planning is also part of the end-to-end system. Will share more details in the paper.
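To make "end-to-end, frames in, everything else predicted" concrete, here is a hypothetical interface sketch. All of the names, shapes, and the stub body below are my own illustrative assumptions about what such a system might expose, not the paper's actual API:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class NavOutput:
    camera_params: np.ndarray  # (num_frames, param_dim): predicted, not given as input
    motion: np.ndarray         # (num_frames, joint_dim): full avatar motion sequence
    path: np.ndarray           # (num_waypoints, 3): planned route through the environment

def navigate(frames: np.ndarray) -> NavOutput:
    """Hypothetical end-to-end call: raw video frames in, all predictions out.
    A real system would run the video diffusion model here; this stub just
    returns zero tensors with plausible shapes."""
    n = frames.shape[0]
    return NavOutput(
        camera_params=np.zeros((n, 7)),  # e.g., quaternion rotation + translation
        motion=np.zeros((n, 63)),        # e.g., 21 joints x 3D positions
        path=np.zeros((8, 3)),
    )

out = navigate(np.zeros((16, 224, 224, 3)))  # 16 RGB frames
```

The point of the sketch is the shape of the contract: nothing besides the pixels is supplied, so camera parameters and path planning have to come out of the model itself.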