SilverBirch's comments

It's also just very basic police work. We're investigating this company; we think they've committed a crime. OK, why do you think that? Well, they've very publicly and obviously committed a crime. OK, are you going to prosecute them? Probably. Have you gone to their offices and gathered evidence? No thanks.

Of course they're going to raid their offices! They're investigating a crime! It would be quite literally insane if they tried to prosecute them for a crime and showed up to court having not even attempted basic steps to gather evidence!


A company I worked for had a 'when the police raid the office' policy, which was to require they smash down the first door, but then open all other doors for them.

That was so that later in court it could be demonstrated the data hadn't been handed over voluntarily.

They also disconnected and blocked all overseas VPNs in the process, so local law enforcement would only get access to local data.


that's kinda the normalization argument, not the reason behind it

"it is done because it's always done so"


Well, yes, it is actually pretty normal for suspected criminal businesses. What's unusual is that this one has their own publicity engine. Americans are just having trouble coping with the idea of a corporation being held liable for crimes.

More normally it looks like e.g. this in the UK: https://news.sky.com/video/police-raid-hundreds-of-businesse...

CyberGEND more often seem to do small-time copyright infringement enforcement, but there are a number of authorities with the right to conduct raids.


“Americans are just having trouble coping with the idea of a corporation being held liable for crimes.”

I’m sorry but that’s absurd even amidst the cacophony of absurdity that comprises public discourse these days.


I'll bite.

How was TikTok held liable for the crimes it was accused of?


Was it ever actually accused of crimes? Was it raided? Was there a list of charges?

It always seemed to me that TikTok was doing the same things that US based social networks were doing, and the only problem various parties could agree on with this was that it was foreign-owned.


It was force-sold to Oracle.

That had more to do with a wish to control it and steal it than with crimes.

That wasn't a punishment, that was a reward.

American companies held liable for crimes include Bank of America ($87B in penalties), Purdue Pharma (opioid crisis), Pfizer for fraudulent drug marketing, Enron for accounting fraud. Everyone on HN should know about FTX, Theranos, I mean come on.

Corporations routinely get a slap on the wrist (fines worth seconds of profitability), aren't required to admit guilt, or get deferred prosecution agreements.

I'm not sure what you're getting at, physical investigation is the common procedure. You need a reason _not_ to do it, and since "it's all digital" is not a good reason we go back to doing the usual thing.

It's a show of force. "Look we have big strong men with le guns and the neat jack boots, we can send 12 of them in for every one of you." Whether it is actually needed for evidence is immaterial to that.

If law enforcement credibly believes that criminals are conspiring to commit a crime and are actively doing so in a particular location, what is wrong with sending armed people to stop those criminal activities, as well as apprehend the criminals and whatever evidence of their crimes may exist?

If this isn't the entire purpose of law enforcement then what is exactly?


No, a search warrant isn't intended to [directly] apprehend criminals, though an arrest warrant may come later to do that.

But one could reasonably assume that a location that is known to be used for criminal activity, and that likely has evidence of such criminal activity, likely also has people committing crimes.

When police raid a grow-op they often may only have a search warrant, but they end up making several arrests because they find people actively committing crimes when they execute the warrant.


It can be both things at once. It obviously sends a message, but hey, maybe you get lucky, and someone left a memo in the bin by the printer that blows the case wide open.

Or maybe they are storing documents with secrets in a room or even in the bathroom.

Isn't it both necessary and normal if they need more information about why they were generating CSAM? I don't know why the rule of law shouldn't apply to child pornography or why it would be incorrect to normalize the prosecution of CSAM creators.

EU wants to circumvent e2e to fight CSA: "nooo think about my privacy, what happened to just normal police work?"

Police raid offices literally investigating CSA: "nooo police should not physically invade, what happened to good old electronic surveillance?"


I'm sorry I really don't understand this. I have a computer. I put it in a warehouse. You have a computer, you shoot it into space. What problem have you solved?

Is this all an effort to utilize more efficient solar panels? Are solar panels really the limiting factor for data centres?


I think some people like this because it pushes libertarian "no need for environmental or regulatory review" buttons. There's a kernel of a valid argument there, but it seems overstated given that companies seem to have little trouble getting approval for big datacenter buildouts here on earth.

There's also the coolness factor, I guess.

But why does no one talk about launching RackSpace servers into space? I think the whole thing is tied to AI because of the hype, not because sending servers into space has any kind of material advantages.


Not to mention the fact that you trade one source of pollution for another. You think giant rockets lifting tons of equipment into space are good for the environment?

A lot of people are getting very irate over this, but I think this is more or less fine. Everyone in these companies knows perfectly well they're just part of the Elon Musk extended universe; he hasn't behaved like they're separate companies for quite some time.

Was it a financially wise move for xAI to buy Twitter? Well, no, but everyone in Twitter was quite happy they got a price that matched the inflated original purchase price, and everyone involved in xAI is on the Elon train, so they've got nothing to complain about. And if you squint, the rationale of training data and distribution kind of makes sense.

With SpaceX, again, it's a private company. If they want to go along with Elon Musk's non-economic moves because they want to be onboard the Elon train? Sure, why not; they've followed him this far, and he genuinely has delivered an incredibly ambitious feat with SpaceX.

I think the only exception is Tesla where it's a public company and Musk has an obligation to the shareholders, but honestly, this seems like it would be SpaceX bailing out Tesla & xAI, not the other way around.

I guess what I'm saying is "Who is actually invested in any of these Elon Musk ventures who isn't happy to give Musk a blank cheque to operate however he wants?"


I mean... this article is just gobbledegook. Gold went up because of political instability; a deal was announced and gold went down because of political stability. Oh sorry, no, it "executed a deterministic fallback". No one is impressed just because you name-check ISO standards.

Even the conclusion feels unnecessarily contrived:

> The lesson for 2026 is clear: Do not trade the headlines. Trade the Anchor. And right now, the geopolitical anchor is in Türkiye.


Buy rugs?

Makes sense after a rug pull I guess? /s

We saw this before with Twitter/X: network effects are crazy, so even if 30% of your users jump to another platform, that network effect is still going to be grinding down that other platform, and in reality it's probably 0.003% of users. The continued (reduced) existence of Twitter is a true testament to network effects. I see no reason why TikTok, a bigger, more successful, less controversial platform, is going to suffer from this.

Twitter is dead. What signs of life remain are the scavengers that consume every carcass. TikTok will go the same way. Twitter is basically just moltbook at the moment; tiktok will become clawtok. It's inevitable. The reason tiktok became such a hit was that it lacked the bullshit of Meta. Now it has the bullshit of Meta, it will simply lose users.

To be honest I think the true story here is:

> the fleet has traveled approximately 500,000 miles

Let's say they average 10mph, and say they operate 10 hours a day, that's 5,000 car-days of travel, or to put it another way about 30 cars over 6 months.

That's tiny! That's a robotaxi company that is literally smaller than a lot of taxi companies.

One crash in this context is going to just completely blow out their statistics, so it's kind of dumb to even talk about the statistics today. The real takeaway is that the robotaxis don't really exist; they're in an experimental phase, and we're not going to get real statistics until they're doing 1,000x that mileage, and that won't happen until they've built something that actually works, which may never happen.
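The back-of-envelope arithmetic above can be sketched in a few lines (a rough estimate only; the 10 mph average speed, 10 operating hours per day, and ~180-day window are the commenter's guesses, not figures from Tesla):

```python
# Rough fleet-size estimate from total fleet mileage.
# All inputs below are assumptions, not official figures.
total_miles = 500_000
avg_speed_mph = 10        # guessed average speed
hours_per_day = 10        # guessed daily operating window
days_in_service = 180     # roughly six months

miles_per_car_per_day = avg_speed_mph * hours_per_day   # 100 miles
car_days = total_miles / miles_per_car_per_day          # 5,000 car-days
implied_fleet_size = car_days / days_in_service         # ~28 cars

print(f"{car_days:.0f} car-days, ~{implied_fleet_size:.0f} cars")
# → 5000 car-days, ~28 cars
```

Which lands right around the "about 30 cars over 6 months" figure above.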


The more I think about your comment on statistics, the more I change my mind.

At first, I think you’re right - these are (thankfully) rare events. And because of this, the accident rate is Poisson distributed. At this low of a rate, it’s really hard to know what the true average is, so we do really need more time/miles to know how good/bad the Teslas are performing. I also suspect they are getting safer over time, but again… more data required. But, we do have the statistical models to work with these rare events.

But then I think about your comment about it only being 30 cars operating over 6 months. Which, makes sense, except for the fact that it’s not like having a fleet of individual drivers. These robotaxis should all be running the same software, so it’s statistically more like one person driving 500,000 miles. This is a lot of miles! I’ve been driving for over 30 years and I don’t think I’ve driven that many miles. This should be enough data for a comparison.

If we are comparing the Tesla accident rate to people in a consistent manner (accident classification), it’s a valid comparison. So, I think the way this works out is: given an accident rate of 1/500000, we could expect a human to have 9 accidents over the same miles with a probability of ~ 1 x 10^-6. (Never do live math on the internet, but I think this is about right).

Hopefully they will get better.
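The tail probability mentioned above can be sanity-checked directly (a sketch, assuming the human baseline really is 1 accident per 500,000 miles, so the accident count over that exposure is Poisson with lambda = 1):

```python
import math

# P(X >= 9) for a Poisson variable with lambda = 1, i.e. the chance
# that a driver with a baseline rate of 1 accident per 500,000 miles
# racks up 9 or more accidents over that same mileage.
lam = 1.0
threshold = 9

# Complement of the CDF at 8: 1 - sum_{k=0}^{8} e^{-lam} lam^k / k!
p_at_most_8 = sum(math.exp(-lam) * lam**k / math.factorial(k)
                  for k in range(threshold))
p_tail = 1 - p_at_most_8

print(f"P(X >= {threshold}) ~= {p_tail:.1e}")  # → P(X >= 9) ~= 1.1e-06
```

So the "~1 x 10^-6" figure in the comment above checks out.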


500,000 / 30 years is ~16,667 mi/yr. While it's a bit above the US average, it's not incredibly so. Tons of normal commuters will have driven more than that many miles in 30 years.

That’s not quite the point. I’m a bit of an outlier, I don’t drive much daily, but make long trips fairly often. The point with focusing on 500,000 miles is that that should be enough of an observation period to be able to make some comparisons. The parent comment was making it seem like that was too low. Putting it into context of how much I’ve driven makes me think that 500,000 miles is enough to make a valid comparison.

But that's the thing: in many ways it is a pretty low number. It's less than the number of miles a single average US commuter will have driven in their working years. So in some ways it's like trying to draw lifetime crash statistics while only looking at a single person in your study.

It's also kind of telling that despite supposedly having this tech ready to go for years, they've only bothered rolling out a few cars, which are still supervised. If this tech were really ready for prime time, wouldn't they have driven more than 500,000 mi in six months? If they were really confident in the safety of their systems, wouldn't they have expanded this greatly?

I mean, FFS, they don't even trust their own cars to be unsupervised in the Las Vegas Loop. An enclosed, well-lit, single-lane, private access loop and they can't even automate that reliably enough.

Waymo is already doing over 250,000 weekly trips.[0] The trips average ~4mi each. With those numbers, Waymo is doing 1 million miles a week. Every week, Waymo is doing twice as many miles unsupervised than Tesla's robotaxi has done supervised in six months.

[0] https://waymo.com/sustainability/
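The Waymo comparison above, worked through (the ~4 mi average trip length is the commenter's figure):

```python
# Waymo vs. Tesla robotaxi weekly mileage, using the figures above.
waymo_weekly_trips = 250_000
avg_trip_miles = 4                                        # commenter's estimate
waymo_weekly_miles = waymo_weekly_trips * avg_trip_miles  # 1,000,000 mi/week

tesla_total_miles = 500_000  # Tesla's reported six-month total
ratio = waymo_weekly_miles / tesla_total_miles

print(ratio)  # → 2.0: Waymo drives twice Tesla's six-month total every week
```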


Wait, so your argument is there's only 9 crashes so we should wait until there's possibly 9,000 crashes to make an assessment? That's crazy dangerous.

At least 3 of them sound dangerous already, and it's on Tesla to convince us they're safe. It could be a statistical anomaly so far, but hovering at 9x the alternative doesn't provide confidence.


No, my argument is you shouldn't draw a statistical conclusion with this data. That's all. I'm kind of pushing in the direction you were pointing in the second part - it's not enough data to make statistical inferences. We should examine each incident, identify the root cause and come to a conclusion as to whether that means the system is not fit for purpose. I just don't think the statistics are useful.

> The real take away is that the Robotaxis don't really exist

More accurately, the real takeaway is that Tesla's robo-taxis don't really exist.


Because it is fraud meant to inflate Tesla's stock price.

The real term is “marketing puffery.” It’s a fun, legally specific way to describe a company bullshitting to hype its product.

Puffery is like saying this is the best canned chili ever made. Selling a can of chili but delivering just an empty can, never the actual chili, is fraud.

Puffery should really be limited to subjective things like flavor and not self-driving cars.

The Robotaxi service might be puffery, selling "full self driving" is just fraud.

A robotaxi rollout where each car needs a safety driver is fraud.

What's even more unbelievable is that a significant number of people are still falling for it

We've known for a long time now that their "robotaxi" fleet in Austin is about 30-50 vehicles. It started off much lower and has grown to about 50 today. There's actually a community project to track individual vehicles that has more exact figures.

Currently it's at 58 unique vehicles (based on license plates), with about 22 that haven't been seen in over a month.

https://robotaxitracker.com/


No, they exist, but they are called Waymo

But deep learning is also about statistics.

So if the crash statistics are insufficient, then we cannot trust the deep learning.


I suspect Tesla claims they do the deep learning on sensor data from their entire fleet of cars sold, not just the robotaxis.

>One crash in this context is going to just completely blow out their statistics.

One crash in 500,000 miles would merely put them on par with a human driver.

One crash every 50,000 miles would be more like having my sister behind the wheel.

I’ll be sure to tell the next insurer that she’s not a bad driver - she’s just one person operating an itty bitty fleet consisting of one vehicle!

If the cybertaxi were a human driver accruing double points 7 months into its probationary license, it would never have made it to 9 accidents: its license would have been revoked and suspended after the first two or three accidents in her state, and then it would have been thrown in JAIL as a “scofflaw” if it continued driving.


> One crash in 500,000 miles would merely put them on par with a human driver.

> One crash every 50,000 miles would be more like having my sister behind the wheel.

I'm not sure if that leads to the conclusion that you want it to.


From the tone, it seems that the poster's sister is a particularly bad driver (or at least they believe her to be). While having an autonomous car that can drive as well as even a bad human driver is definitely a major accomplishment technologically, we all know that threshold was passed a long time ago. However, if Tesla's robotaxis (with human monitors on board, let's not forget - these are not fully autonomous cars like Waymo's!) are at best as good as some of the worse human drivers, then they have no business being allowed on public roads. Remember that human drivers can also lose their license if [caught] driving too poorly.

> Remember that human drivers can also lose their license if [caught] driving too poorly.

Thank you, yes, I could have said that better. But yeah, as a new human driver, if I'm too sloppy and get into too many incidents, the penalty is harsh. I'd say that “none of the autonomous companies are held to the same standard,” but that's not 100% true: we do have cities and states refusing to play ball or issue permits here.

But that’s exactly right. my opinion was/is that there should be a probationary period for the first year of new autonomous technology — or major deviations from existing and proven technologies — too. And if it causes too many accidents or violations, then, it should be held to the same standard I am


It does. She just ran over a bus shelter, like she was vibe driving a Tesla on autopilot or something.


They might have forgotten how to share an anecdote and their sister might just be a regular awful driver

Are they effective? Do you have data on the number of people they've correctly identified vs. false positives? In fact, do you have any evidence they're even trying to limit false positives?

The reason they are able to very efficiently send a dozen ICE agents to a random person's home to hold them at gunpoint until they can prove their immigration status is that the goal is to send ICE agents around holding people at gunpoint, and they're happy if they happen to also get it right sometimes.


If I understand correctly, you're saying that in a majority of cases (or something approaching that) the targets of these raids are not subject to lawful deportation?

I would be curious to have data / information showing that.


I'm saying we have absolutely no concrete statistical data, and in the press we have many cases where law enforcement has been deliberately negligent in order to deport people who were here legally. We can actually see them deliberately trying to avoid doing the things you would do if you wanted to establish the people you were trying to deport were here illegally. So it's fair to say, until we have some evidence that these people were here illegally the sensible thing to do is to assume they are innocent.

It's also kind of a problem to say "Oh well, we've got no concrete data, let's continue to let them deport whoever they like and shoot anyone who gets in the way".


If I'm reading this correctly, they're just straight up violating the law. They're sharing information with ICE under an obligation to share information on aliens, but they're actually sharing everyone's information in an effort to identify aliens. That seems like a pretty open-and-shut case, if there were any mechanism to investigate and prosecute it.


It has become quite clear in recent months that the rule of law will not be enforced on the federal government or their allies.


I heard a law professor on NPR a few nights back saying how, at the executive level, the rule of law is dead and has been for some time. They cited Jan 6 but recognised how politically divisive that example was, so also gave the failure to enforce the TikTok ban as a less partisan example.

If you take your hands off the wheel you can go a surprisingly long time before you crash. This hands-free period will have to come to an end at some point.


I remember a lot of stuff Bush did in the aftermath of 9/11 that was illegal. Anyone remember Snowden? And Obama did a drone strike on a US citizen. This has been going on a long time, but maybe we used to play pretend better.


> This has been going on a long time

This has been going on forever, everywhere.

Laws have always applied selectively, particularly when it comes to whatever group is responsible for enforcing them.


There are very clear differences, so I find your argument disingenuous at best. While the legality can be doubted for the examples you gave, those administrations released their legal rationale.

The TikTok rationale essentially came down to ‘we want Gen Z voters’.


I agree.

> This hands-free period will have to come to an end at some point

What would that mean? Do you expect the government to put their hands back on the wheel, does the US "crash" and become a dictatorship and/or does it lead to WW3?


It might take some time to end, though; executive power without laws is very close to dictatorship, and some dictatorships take a long time to dissolve (if they dissolve at all). They might not even have an end. As an example, look at Russia: from an empire to a dictatorship to an oligarchy. It never seemed like a full democracy, and there's no hope of that changing in the next decade. There's a lot of speculation about what will happen at the end of Trump's presidency.


If we are to learn from the brutal Soviet sanctioned forced deportations of the Baltic nations following world war 2, then justice will come but it will take time.

Once the Baltic nations gained independence, they tried everyone involved in the administration of those orders, which took place without trial or oversight and often resulted in replacement families being deported if the actual targets could not be found.

Of course, Stalin and the power brokers of the time were long dead, so instead it was a parade of lower-level admin workers, all by then elderly, in their 80s or 90s, who had been young at the time, simply doing the bidding of their employers.

The lesson: don't be a bag holder for people who will die before you leaving you to hold the responsibility for their crimes.


Selectively ... Conservatism consists of exactly one proposition, to wit: There must be in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect.


It's been pretty clear for decades. When exactly did some higher-up in the US government end up in jail for ordering, e.g., mass killings abroad, or colluding with others who engaged in mass crimes like initiating wars and conflicts?

The US will not lock up a single asshole who helps kill thousands of people abroad (not even inconvenience them with a simple court appearance to have to justify themselves), but it sure can lock up thousands on the flimsiest justifications, like FTA (failure to appear) in court because of whatever, or technical parole violations, or driving on a suspended license, basically for failures to navigate bureaucracy while poor.

I'll believe in rule of law when at least shits who materially support mass killings of children will start getting locked up. But alas, no. No such thing.

Until then it's all just bullshit that normal people have to submit to, and that the ruling class gets to excuse itself from with endless lawyering, exceptions, and nonsense, while it's clear they're still just scum psychos doing scum psycho things.


it's been quite clear for about 50 years now


Maybe, but there was also clearly an inflection point just over 12 months ago, and another 8 years prior.


For me it was when Eric Holder, the Attorney General under President Obama, straight-up ignored a Congressional subpoena. Maybe the actual event happened earlier than that, but in that moment I marked "rule of law" as a dead letter.


2000 election decided by the Supreme Court here. Will never forget the phrase "hanging chads."


Yeah, power to execute laws is given to the executive branch. Power of the executive is bestowed upon... one person.

From https://archive.is/E6zXj :

> But, as Chayes studied the graft of the Karzai government, she concluded that it was anything but benign. Many in the political élite were not merely stealing reconstruction money but expropriating farmland from other Afghans. Warlords could hoodwink U.S. special forces into dispatching their adversaries by feeding the Americans intelligence tips about supposed Taliban ties. Many of those who made money from the largesse of the international community enjoyed a sideline in the drug trade. Afghanistan is often described as a “failed state,” but, in light of the outright thievery on display, Chayes began to reassess the problem. This wasn’t a situation in which the Afghan government was earnestly trying, but failing, to serve its people. The government was actually succeeding, albeit at “another objective altogether”—the enrichment of its own members.


> Power of the executive is bestowed upon... one person

This is the unitary executive theory. It’s a novel Constitutional theory that even this SCOTUS seems reluctant to honestly embrace.


They are not reluctant


> They are not reluctant

Read the Fed case transcripts.


> If I'm reading this correctly, they're just straight up violating the law

HHS says “under the Immigration and Nationality Act, ‘any information in any records kept by any department or agency of the government as to the identity and location of aliens in the US shall be made available to’ immigration authorities.” If that’s true, they’re following the law.


Key part of what you wrote: "as to the identity and location of aliens" - so whatever claim they have to access health information applies to aliens. The big question is: are they harvesting citizens' health records illegally as part of this effort, and if so, when do those responsible see jail time?


> are they harvesting citizens' health records illegally as part of this effort, and if so, when do those responsible see jail time?

I’m honestly curious if this would be a Privacy Act or HIPAA violation. The article seems to be unsure on this.


They're unsure because a lot depends on the legal status of children born to non-citizen parents in the US after an executive order tried to revoke birthright citizenship: https://www.bmj.com/content/390/bmj.r1538

If that EO was legal, then sharing the data is, too. If it wasn't, then it's probably a privacy violation, but the CMS isn't allowed to make that call themselves, they have to rely on court decisions for it. And challenging EOs is not trivial.


I’m also unsure, but I haven’t understood HIPAA to constrain governmental actions. It’s a short law so I will review it (not a lawyer all the same).


I thought undocumented migrants weren’t allowed to use Medicare or Medicaid. How is that data useful to track them down, then?


HHS is broader than CMS. Someone who was formerly legal could now be illegal. But more prominently, Miller and Noem have focused on illegally deporting pending asylum cases to juice their numbers. Those folks may show up in HHS (and IRS) data.


I’m against using health data to benefit ICE but what you’re saying doesn’t make sense. There needs to be a critical mass of data for it to be useful to Palantir. If they are passing Medicare and Medicaid data, does that mean that undocumented migrants are getting Medicare and Medicaid?


> If they are passing Medicare and Medicaid data

It’s not. Palantir “receives peoples’ addresses from the Department of Health and Human Services (HHS)” [1]. That’s broader than Medicare or Medicaid.

If you’re on a legal visa and have to get a prescription filled, I think you’ll wind up in those data. (Same if you are legally on Medicare with a spouse who overstayed their visa.)

> does that mean that undocumented migrants are getting Medicare and Medicaid?

Not necessarily. As I said, these data are broader than CMS. And the targets of the current ICE are not undocumented migrants. (I live in Wyoming, near the Idaho border. The farm workers are fine.)

[1] https://www.404media.co/elite-the-palantir-app-ice-uses-to-f...


I think it's pretty clear they are using Medicare and Medicaid data. They both fall under the umbrella of HHS so it would technically be correct to say they got data from HHS, but it seems like it's specifically Medicare and Medicaid data.

“Several federal laws authorise the Centers for Medicare and Medicaid Services (CMS) to make certain information available to the Department of Homeland Security (DHS),”

> > does that mean that undocumented migrants are getting Medicare and Medicaid?

> Not necessarily.

Not necessarily, but probably. I was told explicitly that undocumented migrants weren't getting Medicare and Medicaid services, but at this point, I don't know who to believe.


If you go to an emergency room at a hospital which accepts Medicare (so, essentially all of them), you will be screened, and if in danger, medically stabilized (modulo difficult pregnancies in some states with anti-abortion laws, unfortunately).

I assume if you then fill paperwork out, they’d have your data - though I’m not sure why you’d agree to fill it out if you know you can’t pay, and that you’re just going to be discharged.


Great question. I thought that only citizens could access public healthcare benefits.


I'm open to either conclusion, but what law / right do you think is being violated?

As a general rule, the first amendment protects the right to say, e.g. "John Doe lives at 123 Main St." John may not like that people know that, but that doesn't generally limit other peoples' right to speak freely.


It's right there in the article, there are specific federal laws authorizing them to make specific information available - for example, they can make any record kept about the identity or location of aliens available. Right, that's a specific limitation on what they can share, even the HHS spokesperson made clear they don't share information on US citizens and permanent lawful residents. But then the article goes on to reveal that ICE has all the personal data of every person receiving Medicaid.

If the law says you can share aliens' information but not Americans', and then you do share Americans' information, I think you're probably breaking the law, and at the very least there should be a process to find out what the basis is for you doing it. Normally these things would be decided by a court.


"If a cop follows you for 500 miles, you're going to get a ticket". - Warren Buffett

'Show me the man, I’ll find you the crime'. - Lavrentiy Beria (Stalin secret police)


> if there were any mechanism to investigate and prosecute it.

If only there were an independent judiciary or something, idk...


Where did you read they're sharing everyone's information?


You are not reading this right.

> There is no data sharing agreement between CMS and DHS on “US citizens and lawful permanent residents,” they added.


Which doesn’t quite say those data weren’t shared.


Please stop using the word alien to refer to humans.

It's dehumanizing and it leads to a path where you can justify humiliating, torturing, and murdering other humans. Which is already happening with ICE.


> Please stop using the word alien to refer to humans

It’s the legally-correct term.

For what it’s worth, I’m a naturalized American. When I was doing my citizenship paperwork, I found the term fun. The word doesn’t dehumanize. Murdering people does.


Genocide starts with separating people into them and us, and this process starts with words.


> Genocide starts with separating people into them and us

This is an unsubstantiated slippery slope. We can categorize people, even sort them by desirability for some purpose, without resorting to dehumanization much less genocide. (Citizenship and immigration necessitate an us-them delineation. So do team sports, families and like club memberships. Us and them are fine. Us versus them is dangerous.)


It's not enough by itself of course, but in the US it's very much a "us versus them" if you haven't noticed.


And to add to that, it's not a neutral environment. If 1% of scenarios are handled incorrectly, people will figure out they haven't been billed for something, figure out why, and then tell their friends. Before you know it, every teenager is walking into Amazon Fresh standing on one foot, taking a bag of Doritos, hopping over to the Coca-Cola stand, putting the Doritos down, spinning 3 times, picking them up again and walking out of the store, safe in the knowledge that the AI system has annotated the entire event as a seagull getting into the shop.


It's pretty common for crypto wallets that have been linked to illegal activity to get blacklisted. So by sending a bit of crypto to the guy that figured out who he is, if/when the government investigates and freezes accounts, the guy who busted him will get his account frozen too.


Aka "dusting"

