Similar thing happened to me. I built an online Japanese-English dictionary and used AdSense to monetize it. One day I got an email saying that my domain had been permabanned because my website appeared to be promoting rape and pedophilia. As examples of the many offending pages on my domain, they sent me URLs for the definitions and translations of the words "pedophilia" and "rape" in Japanese.
Of course none of my competitors using the exact same data set had any such problems.
I tried for YEARS to appeal it. There are simply no humans working at Google and nobody reads your emails.
Edit: Actually, I did get a response a couple of times, but it was obviously automated. They just said to remove the ads from the pages where such words are displayed. So I added a simple rule and a column in the database to hide ads for those keywords. That just triggered the bot to move down the list of their "obscene" language. Next it was the names of various sexual positions, acts and fetishes (Japanese does have a very rich vocabulary in that area), then manga slang, even silly-sounding onomatopoeias that, when explained in plain English, are "vulgar", etc. It seems once your website is flagged there is simply no way to get clean.
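For readers wondering what that kind of rule might look like, here is a minimal sketch (not the poster's actual code; the table, column, and function names are invented) of a per-keyword flag consulted before the ad tag is emitted:

```typescript
// Hypothetical sketch: a "hide_ads" column on the dictionary entry decides
// whether the AdSense snippet is rendered for that page.
import { Pool } from "pg";

const pool = new Pool(); // connection details come from the usual PG* env vars

// True if ads may be shown on the entry page for this headword.
async function adsAllowedFor(headword: string): Promise<boolean> {
  const { rows } = await pool.query(
    "SELECT hide_ads FROM dictionary_entries WHERE headword = $1",
    [headword]
  );
  // Unknown words default to showing ads; flagged words suppress them.
  return rows.length === 0 || rows[0].hide_ads === false;
}

// In the page template: only emit the ad slot when allowed.
async function renderAdSlot(headword: string): Promise<string> {
  return (await adsAllowedFor(headword))
    ? '<ins class="adsbygoogle"></ins>' // stand-in for the real AdSense tag
    : "";
}
```

As the poster notes, the catch is that the flag list has to keep chasing whatever the reviewing bot objects to next.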
The crazy thing is that Google even took this action despite it also being against their interests. Sure, your business specifically is relatively inconsequential to them, but they must've made hundreds of thousands of similar mistakes. That's a fair amount of ad inventory to miss out on, surely?
Google accounts have enough worth and history associated with them that Google should be able to create some kind of appeal process whereby, if you jump through the right hoops proving identity and such, you could eventually reach a human who can intervene.
It feels like they're religious about the idea of having an algorithm decide everything. Works pretty well for some things, but they sure do burn some customers/clients pretty badly along the way for other things.
One possible problem is that there are always more people willing to step in. I assume that I can still find a Japanese to English dictionary by searching Google. If one, or N, such websites get taken out by algorithmic flaws or bad actors reporting competitors, then others will simply rise up and take their place (or Google will subsume their content into the instant answers section). In this way, Google may not really be losing anything even if they are constantly burning their partners - there are always new partners coming up.
Yup. And since almost everyone is using AdSense it doesn't make much difference to them. The funny thing is that my AdSense ban didn't affect the SERP position at all. I was still the second result for "japanese dictionary" on Google and had a steady million pageviews a month for years.
> since almost everyone is using AdSense it doesn't make much difference to them
You’ve hit on the actual problem. This is why so many believe that addressing the fundamental problems our culture is facing with tech companies starts with antitrust enforcement.
Yeah, see how in the article the guy is planning to 'spend his next 1-2 weeks fighting this case' instead of giving Amazon the ultimatum that if they don't fix this quickly, he's going to take his business elsewhere!
But then they have fewer ad impressions to sell. That said, since their system is an auction, some level of scarcity does potentially increase their revenue. Maybe they've done that calculation and figured out that they wanted to dump some of their inventory.
To pay for the 15 minutes it would take a person to look at this issue and fix it they’d need a lot of impressions. Far more than they lose by just blocking the site.
It raises the question: seeing as they've written a bot to determine what's offensive, why don't they hook it directly into the ad servers and auto-disable ads on the offending pages only?
I detest Google, so I absolutely have to say this is an absurd way to look at the world. Google doesn't care if its actions are pro- or anti-consumer; they care about money and power and go about pursuing them in a way that tends to be anti-consumer.
Neither are the ads they serve; furthermore the amount of profanity detection required scales linearly with the number of pages on which ads are served, so this really shouldn't be a barrier.
It's that Data Scientists don't realize most of their models still simply fit a conditional expectation - and given all their power, it's going to be us, not Amazon or Google, who have to adapt to the distribution so as not to repeatedly get hit as some sort of outlier.
It's a dystopia where AI works because of us trying to conform to it, because otherwise we are out of luck. At some point, we self select into Amazon-humans, Google-humans etc.
> It's a dystopia where AI works because of us trying to conform to it, because otherwise we are out of luck
But all technology is like that. Cars work because we conformed our environment to them. Email works because we conform to checking our mailboxes regularly. Telephones work because we pick them up when they ring.
If you are interested in knowing more, the whole work of the French philosopher Jacques Ellul is based around this idea.
AI is not going to be any different in this respect. It is going to be us meeting it halfway. And that is going to be dystopic in the same way as it has been with other types of technology.
Wow, not sure why you're getting all the down-votes. I don't necessarily agree with you, but it's an interesting perspective regardless.
You also touch on the Amish approach to new technology. Each new thing is assessed for impact on life/tradition/etc. and adoption is limited accordingly. For example, phones might be considered useful enough to have one per town, but distracting enough to be kept out of individual homes. Or, a computer might be allowed in the back office of a wood shop for the purpose of managing online sales, but nowhere else in the community.
The difference is we have a choice in those matters. E.g. my telephone does not work in that sense, because in most cases I will not pick up for an unknown number. And when I do, there's a human on the other end I can communicate with.
With "AI" (machine learning / neural nets), we're letting opaque boxes arbitrarily control and ban accounts at will. It's like forcing people to use cars that randomly decide that it won't drive that day for no good reason.
"AI is not going to be any different in this respect"
The other examples all include accountable humans; "AI" here does not. We're already seeing issues of machine learning applying sexist biases, for example, because of datasets and poor training. What are you going to do, hold Google, or any other faceless corporation, accountable for its bad translations? Good luck.
It's only going to get worse in the future as other stratified aspects of society get cemented by "AI".
So, I think I explained myself badly. I fully agree on the problems of AI you point out. I agree with you on the reduction of freedom AI will bring. But then again, every relevant piece of technology took away such a piece of freedom.
I think one might underestimate the freedom that has been given up when other technologies came in, even though those technologies feel established already. You can choose not to pick up on the unknown number when the phone rings, but the moment of connection with a real human being you might have had at that point was still interrupted by the phone ringing.
Society expects us to be reachable at any time, but that came with trading in the freedom to not be arbitrarily disrupted by default.
The point I am trying to make is this: all technology requires and has required humans to adapt to it, and consequently to hand over a bit of freedom. I want to point out that AI is no different in the fact that it does so, even though, like every other piece of technology, it will do so in a unique way.
Is that dystopian? Perhaps. Many philosophers on technology have thought so, and many have disagreed.
> The other examples all include accountable humans
But when those faceless corporations were being sexist because of bureaucratic reasons in the sixties, how were they being held accountable? Did anything really change?
Widely adopted technology is something that lies outside of the humans that make it up, like an ant colony is different from the ants it is made of. You cannot hold any individual ant accountable for the way the colony is organised. Technology is an entity that lies outside of those ants, and it has its own desires and goals.
>At some point, we self select into Amazon-humans, Google-humans etc.
That's one worst-case scenario. I've been thinking that once humans start deselecting Google (no more Gmail, YouTube Red, etc.), there's more room to also start blocking Google's ad network. I know Google's dark pattern to force compliance is to pretend they don't know you and put you on a captcha treadmill, but that too will crumble if their AdSense network yields less and less value as users and website operators move on from being plugged into the googleverse.
At least today you can operate outside the googleverse but you're punished for it. We should be holding on to that and trying to expand it rather than end up in the future you described where we can do nothing but comply.
Ha! I’m sorry but it’s already screwed. Compared to where we stood when people were actually arguing for a free and open internet and against centralization and corporatization we have arrived at almost all of their dire predictions and the last few seem to be well on their way. No one is fighting it anymore, they’re just trying to grab the scraps that fall off the table that the big five sit at and hoping they don’t get stomped into oblivion by the giants.
A lot of the free and open Internet arguments were about copyright and open source, both of which turn out to be almost irrelevant in the face of automated Kafka-esque monopolistic bureaucracies.
Turns out being able to pirate MP3s is a poor consolation prize when you can have your startup permabanned from AdSense or Amazon for no good reason, and there's nothing you can do about it.
> copyright and open source, both of which turn out to be almost irrelevant in the face of automated Kafka-esque monopolistic bureaucracies.
What do you think the free software movement was all about, if it wasn't about avoiding Kafka-esque monopolistic bureaucracies? Read Stallman's "The Right To Read", from 1997.
Do you mean the consumption of open source software has been embraced, i.e. they love using free software but are not big fans of the backside of the model?
Do you have a good resource for how web 3 would work? All I read is hyperbole and "could be"s but I am looking for actual concepts of how it would work. Any ideas?
Web3 means a lot of things to a lot of people. In this context the decentralization enabled by blockchain or federation is probably what the GP is talking about. With a decentralized app you can't be banned in the same way (though this presents other problems).
While blockchain-based technologies are often cited as an example of Web3, and some of the worst hype offenders are pushing this, it's not exclusively blockchain.
In some ways Web3 is a return to earlier internet where there were open protocols rather than everything being controlled by companies. Redecentralize[0] has a great set of founder interviews talking about some of the projects and challenges.
To be clear it's not necessarily anti-company, just a recognition that a handful of companies controlling everything is problematic. Coupled with a recognition that running at the scale of the Big 5 is prohibitive (e.g. server costs, preventing malicious actors, people to handle problems).
The UX around these things is hard, though getting better. Mastodon[1] is a reasonable alternative to twitter now (doubly so if you cross-post).
Gitcoin[2] is a good example of a Web3 app (all the core functions are smart contracts on the Ethereum blockchain).
There are plenty of examples of Web3 apps (dapps) on Webby[3] and DApp.com[4] (though with the latter you have to wade through all the Decentralized Finance stuff).
That's actually very interesting. One problem I have started to run into recently, however, is that you can't mention the word "blockchain" any more without being laughed out of the room (in some circles), and in many cases rightly so. The word has just been too much vaporware-ified and misused because VCs and other decision-makers were eating it up like crazy.
However, it seems like blockchain-based technologies are to be an integral part of web3, or are there alternatives to it? Is there a way to talk about the concept without mentioning blockchain?
Blockchain(s) are certainly one of the building blocks, but it should be possible to talk about what you're building without mentioning the underlying technology. Indeed, it should be somewhat irrelevant unless people want to dig into the detail.
That's why efforts like redecentralize are so important. It's more talking about objectives like privacy, censorship resistance, etc.
There is a subset of people who go "let's build with 'the blockchain'" for the hype, and another subset that recognizes that blockchains are one of a number of tools in the arsenal for solving a problem.
Efforts like Matrix, SSB, OpenBazaar, and IPFS are all 'Web3' technologies without being on the blockchain.
If you do need to talk blockchains, two terms that likely aren't so loaded are 'smart contracts' and 'dapps' (decentralized apps).
Blockchain is a very heavy technology that solves the Consensus Problem. That's amazing, but almost no real-world problems except cryptocurrency have a Consensus Problem.
For almost everything else, there are much simpler, easier-to-understand and _far_ computationally lighter cryptographic solutions that aren't blockchain.
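As a hedged illustration of that point: if all you need is tamper evidence and verifiable authorship rather than global consensus, an ordinary digital signature already does the job. The sketch below is purely illustrative and only uses Node's built-in crypto module:

```typescript
// Illustrative only: an Ed25519 signature gives tamper-evidence and verifiable
// authorship with no mining, no consensus, and negligible compute.
import { generateKeyPairSync, sign, verify } from "node:crypto";

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const record = Buffer.from(JSON.stringify({ user: "alice", balance: 42 }));

// The service signs the record once; anyone holding the public key can check it.
const signature = sign(null, record, privateKey);      // null: Ed25519 picks its own digest
const ok = verify(null, record, publicKey, signature); // false if the record was altered

console.log(ok); // true
```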
When the blockchain / dapp hype got huge for a minute in 2017, I asked a “crypto enthusiast” friend to pitch me on why I should use his dapp (some sort of gambling game).
He went on and on about double spending and consensus. I asked him why anybody, even his "ICO investors", would care about any of that just to play a (slow) game.
He went back into double spending and Byzantine generals and using Brave for tokenized payments.
This pattern repeated with every single dapp I came across.
Definitely sounds like an inability to pitch correctly. In the case of a gambling game there are a number of things that you could pitch in a better way:
* It's possible to gamble even in jurisdictions that make it illegal.
* The rules are encoded and visible, the house might always win, but at least they're not cheating.
Neither of those require a fleet of GPUs powered by a hydroelectric dam to balance the accounting.
The former point almost certainly makes small disposable servers in a variety of jurisdictions a requirement. There might be many millions of crypto users around the world, but there are very few running their own nodes, and fewer still actively mining. Block generation is practically as monopolised as the banking system it supposedly replaces.
To the latter, I worked at a cryptocurrency casino and the provably fair algorithms were implemented using the core Node.js crypto libraries, completely independent from all the blockchain nodes. The sheer frequency of bets by users completely rules out any chance of leveraging a smart contract or block data as a base for the RNG.
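For context, the usual "provably fair" construction is a commit-reveal scheme over plain hashes. A minimal sketch, assuming only Node's core crypto module (this is not the poster's code, and seed rotation and modulo bias are ignored for brevity):

```typescript
// Commit-reveal "provably fair" roll: no blockchain involved.
import { createHash, createHmac, randomBytes } from "node:crypto";

const serverSeed = randomBytes(32).toString("hex");
// Published before play, so the house can't swap the seed after seeing bets.
const serverSeedCommitment = createHash("sha256").update(serverSeed).digest("hex");

// Each bet mixes the player's own seed and an incrementing nonce into the roll.
function roll(clientSeed: string, nonce: number): number {
  const digest = createHmac("sha256", serverSeed)
    .update(`${clientSeed}:${nonce}`)
    .digest();
  return digest.readUInt32BE(0) % 10000; // map the first 4 bytes to 0..9999
}

// After the session the server reveals serverSeed; the player hashes it, checks
// it against the commitment, and recomputes every roll locally.
console.log(serverSeedCommitment, roll("player-chosen-seed", 1));
```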
The real problem is this. How do we determine which speech to punish in 2021? The algorithm is: if one person in a non-dominant group reports that the words made them feel bad, after the fact, then the words caused actual harm and must be punished. How do we know ahead of time? I don't know. But I do know that it leads us to strange situations where certain words cannot even be quoted, and even the banning itself cannot be discussed. And the outlawing of discussing the ban cannot be discussed. And so on.
web3 as a whole is very early and still being built and developed, that's why there is not one definition of web3. The biggest commonality is that the solutions are focused on decentralization and censorship-resistance, as those are two of the main issues plaguing the web today.
Imagine the web in the 90s, some people saw promise while others asked themselves what it actually is. Fast forward 5/10/20 years and what web3 is will be a lot more clear as we'll have mainstream web3 applications deployed and used then.
Are those the main issues plaguing the web? I would lean toward saying the foremost problem is radicalization, and that decentralization and censorship resistance will make the internet an even better tool for radicalization than it is today.
> I realize it's complicated, but there are things that make me worry the future of the internet might not be so great.
The solution isn't easy, but it's simple: stop using Amazon (especially AWS), Google (especially AdSense), etc. Once they lose half their annual revenue, they'll start taking things seriously.
It's like they didn't hear you say "the solution isn't easy", and so they had to say what you already said, and probably feel like they are saying something smart and worth hearing as they do.
Happened to me with Amazon. Customer for 20+ years, wanted to sell something, account closed. No appeal, no explanation; the last mail said "DONT SEND US MORE MAILS".
The question is: How do you enforce that or even complain about it? Say your Amazon account gets locked for whatever reason due to an automated decision (probably all of these decisions are automated). There is no way to contact Amazon without an account.
It's not like they have a link on their website that says "Request human appeal under GDPR" (I haven't checked, tbh, but I'm 99.9% sure they haven't).
I wonder what happens if you file e.g. in small claims court against a company like Amazon? They'd probably never get the message, and even if you win due to them not showing up and making their case, good luck enforcing the judgement.
Short of hiring a major law firm whose letterhead might get someone's attention and/or making major waves, I don't see how Joe Sixpack can force a human appeal without major monetary outlay.
I think their strategy of burying their head in the sand and just ghosting you works probably pretty well for the large majority of cases where people simply won't bother (or be able to bother). The cost for the one or two cases that have the energy and commitment to fight is comparably minor and quickly resolved once it hits the front pages somewhere.
> I wonder what happens if you file e.g. in small claims court against a company like Amazon? They'd probably never get the message, and even if you win due to them not showing up and making their case, good luck enforcing the judgement.
This is ultimately their weakness. Whether it's the binding arbitration exploit that Uber had to deal with or small claims court default judgments, these organizations are highly susceptible to coordinated and distributed actions in the real world.
You need to view this as asymmetric warfare where you're using your opponent's advantages against them. If they're bigger, then you swarm them with small entities. If they can avoid dealing with the public by using AI intermediaries, find venues where they simply can't and repeatedly pressure them there.
"Don't struggle only within the ground rules that the people you're struggling against have laid down."
It's amazing how much quicker a company responds when they hear something from their legal department rather than some form that they're dumping into the void.
To that last quote, you can often intuit when someone has taken this strategy as the Goliath in the situation will start using terms like "proper channels" and "cowards".
BoA had it happen a couple of times after they screwed over home owners and tried to ignore the courts. It seems like there should be a significant consequence to forcing the aggrieved party to form a posse with the sheriff in order to collect the local branch's office furniture.
A court letter, even from small claims, is something you don't ignore. Last I checked, FANG do show up to small claims court, because losing there is enforceable by law, which can get expensive to ignore if the accuser is willing to let things escalate.
In GDPR cases, you don't go to court, you go to your local data protection agency (pray you don't live in Ireland). They will contact Amazon and ask for a statement. Ignoring them isn't something you can do if you value your revenue, because unlike a single customer, the agency will pursue a problem for years; ghosting doesn't work, and they can issue legally binding fines you can't ignore.
Enforcement of either a court order or fines is easy once you have an enforceable title against them. If they continue to ignore you, you can probably get an attachment order, i.e., you plus a court official plus some police officers arrive at the nearest Amazon HQ and demand the fine be paid, or they start taking company property to be auctioned off to pay for the fine.
Alternatively, you can have a letter delivered by court service, which is also hard to ignore because the court service will require someone at Amazon to be read a cover letter, followed by a signature and a rather official stamp. After that, your letter is considered to be proven in content and delivered. That is something that raises a lot of red flags in legal departments.
Seems we need an automated way to do the filings, like the apps that automatically file against your bogus parking tickets, etc.
It both enables the aggrieved small biz and, by making it easy, creates a potentially massive problem for the Amazons & Googles who make ignoring the toxic effects of their systems a way of doing business.
Happened to me in the exact same way with eBay. User (relatively inactive) for 13+ years, go to sell something (boots lol) with original pics, full description, everything set up correctly, and banned within 5 minutes. No way to appeal, and the last email, after they said they'd refund my $1 ad post charge, was that my account was being deleted.
I was using the YouTube API for personal access, only related to my personal account and information.
Had my API key revoked because it hadn't been used for 90 days. They provided a link to reclaim it, yielding a LONG form where most questions were only applicable to businesses and service providers.
I answered to the best of my abilities, requesting that they give me access again, and also exhausted the other available communication options on the matter.
That was about 6 months ago, all I got so far was automated emails telling me it's in review.
I recently got policy violation notices citing shocking content for articles about land usage in West Virginia and uneven distribution of GDP around the world. Their AI is definitely not good.
Dystopian like the movie Brazil, isn't it? Arbitrary, autonomous machinations lacking human supervision, with anonymous accusations believed automatically, conviction handed down instantly, and banishment forever.
This is the AI tyranny they warned you about. Let's say the future is fully automated AI locking you out of things for social credit score violations. There's no due process or appeals, there's no one working in customer service. They don't really care about you individually because you're just taking up space and not generating much revenue anyway.
One of the problems with very large corporations like Google is the marginal benefit of each additional customer is negligible. They have so much money that they don't really need extra business and would rather pursue other objectives. Look at all the stuff getting cancelled lately for whatever reason. The whole thing takes place extra-judicially and is not in the monetary interest of these companies. Stakeholder capitalism if you will. I think the old profit driven system was better because at least you knew what the rules were. Now, the way these big companies make decisions is largely opaque and based on secret rules and arbitrary decisions often based on nothing but whim, or worse, a "good enough" algorithm.
Arguably, this has nothing to do with AI, but an imbalance of power. From a business perspective, it makes perfect sense to do whatever is just good enough to turn a profit with minimal effort.
As long as there is healthy competition, there is enough incentive to do better, and a customer can switch services without experiencing significant hardship. However, with a lot of the big tech companies, that is no longer the case, and once you run into a problem there is neither free-market nor regulatory mechanisms that can help any individual solve their problem.
Currently, the big tech companies can eschew responsibility by claiming the rights of private business -- having very strong autonomy over how and with whom they do business -- and despite them being de facto public utilities at this point, the claims are made that regulation is not necessary / not allowed / not productive / harmful. Why this is accepted is puzzling to me, as it is very commonly known macroeconomic theory that the usually claimed self-regulating free-market mechanisms no longer work when large imbalances, e.g. monopolies, exist. We have the government explicitly for this reason, to create a counter-balance against forces so large that no individual can deal with them on their own.
And while it is true that we can expect regulatory intervention to be difficult, especially as the political process cannot possibly keep pace with technology, it is indeed very strongly preferable to giving free rein to private entities with explicitly anti-consumer interests (e.g. a corporation has to make its shareholders money, not protect its customers or the environment).
So what's the solution? Maybe regulatory agencies need to be given more teeth, including funding and updated charters. Yesterday's Cartel is today's Big Tech.
> this has nothing to do with AI, but an imbalance of power
An imbalance of power because of employing AI with suboptimal results, but not caring about the negative externalities because, well, they're external.
Is this AI, really? And isn't it immaterial if it is in the first place? If it's a dumb algorithm or a smart one (or a random one, at that) locking out people without recourse, the result seems much the same.
This is the next logical step in inverted totalitarianism: society run and policed by commercial AI, without recourse or due process because it "isn't government." This occurs as corporations take over the functions of previously public commons and forums, commonwealth/community infrastructure, and manage public-private "partnerships" (PPPs), effectively cannibalizing, ruining, and gaming government and public goods for profit.
1984 wasn't conceived from a vantage point that exists now; the future from here has the potential to be far, far worse. These threads need to be made into a novel and a movie to infotain people about the dystopian concerns that become tangible through unrestrained or under-regulated deployment of the technologies on the horizon. China is already doing pre-crime and panopticon surveillance.
No, it's a question about why we're tracking what people are reading, and whether that's a good thing. It's why librarians were the only significant contemporary dissenters to the Patriot Act. They've created ethical standards around what they do, unlike tech professionals (outside of the FSF).
I hate ads and am a privacy advocate. However, I'm not going to stretch the facts. Yes it's technically possible to hand back the searched words to ad providers. But as of writing, you have no proof that OP did that. Using adsense doesn't mean you're plugging in that data back into Google. What reason does the app developer have to do that, unless they were paid or something to do so?
"Why would a dictionary need ads" well, because someone made the app. And maintains it. And provides it for free in exchange for a tracking cookie. Since the datasets are most likely free, they not possess the ability to even sell it. Who knows, you certainly don't unless you're the author or did the research.
Privacy and security are not absolute. It's a model. It's a series of tradeoffs. To parrot "but m'privacy!" is only doing harm and isn't solving privacy related problems.
> Why would every website's business model be based on ads and tracking?
Certainly there are for-pay services as well, but the vast majority of people prefer to pay with their privacy rather than with their cash, to the point where starting a for-pay service of that type today without a known brand name behind it is not going to pay your hosting bills, let alone put food on your table.
AdSense is Google, and Google is an American company, so you have to be very careful about the words in use. Usually this means a massive blocklist of words; when they're found, the ads aren't displayed on that page.
Even bigger newspapers sometimes don't have ads on the more morbid type of news pages.
I think Google's problem is that they aren't moral enough. They have no dedication to an ideal of actually caring about people or empathy, or any number of other things. They are greedy, mechanistic, and arrogant. Moral people aren't like that. They don't even uphold American values of liberty, free market, or freedom of speech. If they _were_ moral, a lot of those things would be solved.
I think that, in this specific case, this is pretty clear: If there's a culture that does not allow a dictionary to define rape, there's something wrong with this culture.
I'm certain most Americans agree with me here, which makes me assume that this specific problem is not one of culture, but probably of scale and an absence of responsibility at Google.
The more general question is interesting though, because it could go several ways. For example:
You could argue that people should anticipate that the platforms they rely on are not under their control (and should maybe act on that).
Or one could argue that the platforms should anticipate the diversity of cultural standards they are catering to by easing their moral rigidity. (For example, through a more diverse/decentralized company structure, etc.)
Here in Europe, some approach a somewhat similar question with some form of data nationalism, for better or worse. It plays into the same realization that there is an unresolved cultural difference between global platforms and local standards and intends to politically support local initiatives, corporations, etc. That, I think, doesn't solve the problem, but shifts the level of granularity.
But that's not the case at all. The dictionary that defines rape is still up and still ranking in Google search results. There's just no advertising allowed on that page. And you could argue that Google is being overly-puritanical, but you could also argue that most of their advertisers don't want to be associated with such words, even in a neutral context.
I think this is two arguments. The first one goes along the lines of: Defunding isn't problematic since it is not literally the only way to earn money.
For me that's equivalent to the idea that deplatforming isn't problematic, because people can still publish elsewhere or, worst case, still talk to other people.
Key to both ideas is to reject the social significance of operational scale as well as the power dimension of gradual influence.
Practically there is quite a lot of power hidden in the leeway and the bigger a company gets the more problematic their influence is for society as a whole.
The second argument, I think, is that there is nothing problematic about content demonetization because we can always trivially construct a plausible advertising interest against any unfashionable content; hence it's not primarily seen as a chilling effect but as something innocent that just, by accident, ends up continuously narrowing the conversation towards the presentable and trivial.
I think this argument isn't great. Just because there are innocent intentions at play, it neither shows that there are only innocent intentions at play, nor that the overall venture does not, in the end, have bad consequences for society.
If our ad ecosystem allowed advertisers to nudge a TV station towards which news it shows, it would be a bad ecosystem for society, even if it's understandable that someone does not want to show their brand next to real talk.
I'll go along with you that defunding is a form of ipso facto deplatforming and therefore bad. I think it's trumped in this case by the advertisers' right to free association (and Google's desire to attract advertisers). But if (and I think this is your real objection) that defunding/deplatforming were aimed at a protected class or political identity, then that concern would rule.
That's not the case here, however. And I don't think we should be so concerned about a slippery slope that we can't allow any discrimination on the part of advertisers or Google.
By the way, I think I sense an undercurrent of "but that's just stupid" in regards to the objection to the extremely neutral use of dictionary definitions. You haven't made that argument explicitly, but for what it's worth, I'd agree with you on that personally. But that's not my call to make, or yours. (And if I'm imagining that undercurrent, then my apologies.)
Why should it matter? It's a dictionary; words are in it. What they will find objectionable is utterly random and should not be a factor... what people find objectionable enough to hardcode exceptions for can be rather idiosyncratic; see https://twitter.com/techdrgn/status/1359221506165805060?s=21 for example.
More importantly, the issue is that there's no recourse in these cases. It's downright stupid that you can report a dictionary for this and get them permanently banned. If the rule is "don't run ads on naughty-word pages", then Google should make this list public and stop ruining businesses by practicing "I'll know it when I see it" style moderation by algorithmic bots without human oversight.
15 or so years ago, eBay appeared to be buying adverts for all noun searches on Google. Certainly when I searched for “plutonium” and “antimatter” and a few other ridiculous keywords, I saw ads telling me I could “buy it cheap on eBay”. I tried this experiment in response to news stories criticising eBay for the same with the nouns “women” and “slaves”.
Did you think they were actually selling plutonium?
This is the same problem in a different skin: words do not equal intent. When we only judge by words, we restrict good-faith information and promote bad-faith euphemisms that do real harm.
It's not the machines, it's the advertisers. If you're paying for publicity, you don't want it next to things anyone might find unsavoury. You don't want your products to be associated with X.
In that case the ad networks themselves should provide tools that automatically detect forbidden words and disable ads, instead of arbitrarily enforcing their policy on random small players.
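A rough sketch of what such a tool could look like (purely hypothetical; the blocklist and function names are invented), disabling the ad slot per page instead of banning the domain:

```typescript
// Hypothetical page-level check: skip the ad auction when the page text
// contains a term from the network's published blocklist.
const blocklist = new Set(["rape", "pedophilia"]); // stand-in for the real list

function pageIsAdSafe(pageText: string): boolean {
  const words = pageText.toLowerCase().match(/[a-z']+/g) ?? [];
  return !words.some((word) => blocklist.has(word));
}

function maybeServeAd(pageText: string): string {
  // Ads vanish from this page only; the rest of the site keeps monetizing.
  return pageIsAdSafe(pageText) ? "<!-- run ad auction here -->" : "";
}

console.log(maybeServeAd("definition of a harmless word")); // serves the ad
```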
There are completely legitimate reasons which still cause trouble [0], like descriptions of broadcast TV episodes with (gasp) murders and killers.
Why am I seeing the words "Hugging" and "Cuddled" in summaries?
The TV Calendar is monetised using Google AdSense. As part of Google's drive to be more advertiser friendly (you may know this as the Adpocalypse - it didn't just affect YouTubers) their system is now telling me the Calendar has SHOCKING CONTENT on it, and that ads will be removed from such pages.
Associated with words defined in a dictionary? How pedantic do you have to be to say "I don't want to advertise with Britannica because their dictionary has the word rape"?
I think it’s more that you don’t want your ad to show up on a page about rape, but you’re fine with it showing up on a page about joy. Coca-Cola works hard to associate their brand with happy experiences. They’d probably take out ads on the positive words in a dictionary but not the negative ones. If this site was making no distinction, it would lose Coca-Cola as a sponsor. Not really defending Google, but the puritanical explanation isn’t the best one.
I mean, in some respects I agree in spirit with your point, but your point is off topic.
It may be questionable to have your stuff advertised on a page about the history of Germany...
But let's be honest... that's not why these things are being taken down. They aren't being removed because advertisers are complaining. Please show me where anything of that sort is mentioned - either in the OP of this thread or specifically in the article about "buckle-less belts"...
In the example of this thread... they are being removed because someone (presumably a competitor) reported their app for SUPPORTING "rape and pedos". A dictionary with a definition of rape getting removed for "supporting" rape by defining it - while competing dictionaries aren't removed for also defining the same words? Why aren't they getting removed for having a definition of rape that advertisers don't want to be listed on?
Please... show me where I'm wrong that the dictionary application was removed for 'advertiser complaints'... or the original article has its stuff removed for advertisers complaining.
Isn't it obvious that the same people who are actually looking for a translation of some word would not have any issue* seeing that same word in an ad?
* - at least not bigger issue than seeing any ads at all, but those would run ad-blocker anyway.
He is specifically talking about pages with ads on them. Corporations don't want to be associated with said terms, so they wouldn't want to be advertised in that context.
No, you're missing the point completely. His competitors were using the same data source. The problem here is not ads appearing next to "bad" words, or what and who gets to decide what "bad" words even are, that's a whole other discussion.
The issue here is that Google, Amazon or $bigcorp allows its "blocking" algorithm to be misused, to be weaponized by competitors, and that it allows no possibility of human review - that you can't get any human via phone or email to review a completely arbitrary (and often enough plain wrong) decision a random algo has made (through human instigation or on its own rampage, as happens often enough). That nobody has to account for and take responsibility for algorithmic decisions ("Sorry, computer says no. Good bye.").
I have become so frustrated with this issue (we seem to discuss it every other day on HN) that I've come to think that you should not be allowed to offer services using automation without a clear and working process of escalation to a human in case of trouble. If we allow this behavior to continue and to spread (and look, I get it, doing business this way is very attractive and scales beautifully), things will only get worse.
We'll get to the point where it's "sorry, your car was deactivated because $passenger reported that you used a bad word in his presence. Do not contact us again". Maybe this will wake people up and show them that this way of offering services is broken.
Or "sorry, our algorithm decided you get the death penalty based on 100 reports of people that you did $badthing. chops head off Sorry, don't contact us again." Too much hyperbole? We'll see. Algorithmic determination of prison sentences already exists.
This doesn't explain the fact it was only applied to one business. If it was a simple keyword matcher banning sites that use words on the naughty list then at least it would be fair and equitable stupidity. Google applying it seemingly at random so whether or not you can make an ad-driven living is mostly down to luck makes it much worse.
And this is a passed-down requirement, coming from the major advertisers. Google is simply enacting their will to never advertise on anything remotely related to a laundry list of taboos. I have a vague memory of something about this being in the ToS (which I read ages ago, before joining the company), and I think we got smarter about this (preventing display on particular content; think demonetising particular videos on YT).
Source: I work at Google. Not in Ads, but this topic has been discussed at TGIF at some point.
I think what we're learning from many debacles on YouTube, Amazon, and Google is this:
Algorithms are not even close to substituting for human customer service, human moderation, and human judgment.
Maybe this means web-scale sites like Amazon's marketplace aren't viable. I'm fine with that implication. Stores like Target and Walmart seem to have the same prices with human-curated inventories, and I don't really miss the millions of extra products Amazon has.
Comcast has lots of humans and they suck worse than Google.
I don’t think the answer is humans or algorithms, the answer is organizations incentivized to solve customer problems. Google doesn’t care about customers, if they did, they would fix this. Customers are locked in because where else will you go.
Just like Comcast doesn’t care because there’s only one cable company, Google doesn’t really have any AdWords or Adsense competition.
I think the solution is more competition, so use DDG. Once customers have mobility, Google will work to retain them.
> Comcast has lots of humans and they suck worse than Google.
It's harder to step on a landmine with Google, but Google's landmine is far more devastating.
We've already seen lots of news stories about algorithms shutting down people's Gmail accounts, developer accounts, YouTube channels, etc.
I personally went through 6 months of hell when my brother's Gmail account was locked due to "inauthentic behavior". He had become disabled and unable to log in, and when I logged in from my machine, Google's algorithm flagged it.
I went through an infuriating loop of trying to unlock it and hitting the same algorithm. There was no number to call. I would've paid $10,000 to get back into that account, which would've required 10 min with a customer support rep.
I tweeted at Google and they told me I had to log in to Twitter with his account in order to get help (obviously impossible for me). It was the most angry I have ever been at any company. It locked me out of important medical documents, family messages, social media -- everything.
I finally fixed it by asking a friend who was a marketing exec at Google to help. He said there's an internal form you can use for this situation -- only available to Google employees.
Other people have had similar experiences and had livelihoods destroyed.
Whatever you think about Comcast, they will never ignore you and let your entire digital life get locked away without any kind of recourse.
Look, it's not fascism if they went to college and pay your salary. And even if it is fascism, they're in the Cabinet now, so technically that makes them the government, doesn't it?
In a sense, Comcast actually has very few humans. Most of their humans are 100% policy-constrained. They are not empowered to make decisions. Their only advantage over a machine is the ability to understand human language. For all practical purposes, they are an automated system.
> Stores like Target and Walmart seem to have the same prices with human-curated inventories, and I don't really miss the millions of extra products Amazon has.
I'm sure most people are generally happy to wait 2 days for something to arrive as well. Honestly, it almost feels like Amazon is trying to become an Alibaba and potentially look to take them on with the level of product quality that is now available there.
Exactly, it's a shit pit of affiliate marketers pumping anything for that <2% referral fee they get. As a side hustle they probably drop ship everything they are selling via Amazon affiliate links too.
With two major differences - 1-2 day delivery (vs 1-2 weeks) and an almost guaranteed no-questions-asked returns policy (I've never, in 15 years of using Amazon, been denied a refund).
I've ordered non-prime items on the local Amazon, where I'd first receive a "sorry, this is taking longer than expected, please hang with us" and then the package would come _way_ later straight from China.
That sucks - I'm in the UK so my experience might be different, but with the exception of something going wrong with delivery (Which I'd estimate is ~5-10% of the time, but that's not exclusive to Amazon), packages are here within 48 hours.
The cynic in me says that you have identified why big tech companies spend so much money funding GPT and similar research:
Soon, your appeal will be replied to by an AI, too. That way, most people will have the impression that they were reconsidered by a human and found at fault, which will likely make a large percentage of them give up. We're stonewalling real humans by building fake humans :(
And that means GPT could possibly reduce support costs for Amazon.
> The cynic in me says that you have identified why big tech companies spend so much money funding GPT and similar research
I don't think this is cynical. It's discussed openly. They want to provide the tools to enable all companies to automate customer service, which is where Facebook's big chatbot push came from.
Software jobs are being automated away, but not by AI. If you think about it, any time-saving library, service, or tool is going to reduce headcount at an efficient organization.
For example, I currently have a company that I technically manage. It has one employee who does maintenance on the infrastructure (AWS Elastic Beanstalk), database (AWS RDS), and code for five different products. The only other people who touch the code are security auditors.
That just wasn't possible 10 years ago. Managing servers alone for those products (~50 EC2 instances, ~10 load balancers, 10 database instances) would've been a full-time job.
Yep, this came up at a previous company (with 700 people at the time) that had existed before the cloud did. As AWS became more and more prevalent in 2010/11 they had to decide whether they wanted to be in the business of running physical servers and racks in data centres or whether they wanted to move to cloud hosting providers.
It was a non-trivial exercise, and to the best of my knowledge they still have some physical servers, but slowly but surely almost everything moved to Amazon.
Those physical server operations employees slowly left for other jobs.
The company kept growing and employing tech people though. Just tech people doing other things.
Some companies would be happy to pay that AI to develop their BS software, because once you're in a position to abuse your customers, why not abuse them at lower cost?
AI won’t be replacing software engineers anytime soon but something to keep in mind:
While people talk up Google's AI (at least they once did), the truth is that while they may put out some good research now and again, most of the stuff they deploy is crap by design - they can't afford to spend a dollar per day per customer to run a complex algorithm on your data, they can only spend fractions of a penny.
This severely limits the quality of the AI deployed by large consumer-oriented megacorps.
On the other hand, look at some of the AI being deployed in the B2B space, or anywhere there is a real exchange of money, and you'll see some pretty expensive and interesting AI research and deployments.
> Maybe this means web-scale sites like Amazon's marketplace aren't viable
But the market is driving more and more money/resources to those companies/services.
Not to mention that Amazon/YouTube/Google are very different businesses. You can say Google has non-existent customer support, but at least for consumers, so far my experience with Amazon is more positive than negative.
> my experience with Amazon is more positive than negative
I don't dispute that. But a lot of other people have issues with dropshippers, fake products, radioactive products[1], unreliable reviews, etc.
When I stopped using Amazon, I realized I was returning ~90% of all my purchases just because the product was fake, described wrong, low quality, or just not durable enough to last even a month.
Yes, it's great to be able to return things, but it's a lot nicer to feel reasonably confident that the person who sold a product to me actually believes it's a good fit for what I want as a consumer.
Human-curated? Both Walmart and Target have unfortunately switched over to the same online seller marketplace business model that Amazon has. The introduction of this on Target is fairly new, but soon I believe Amazon sellers will be going multichannel and searches for products on any of those sites will return pages and pages of white-label crap.
> Human-curated? Both Walmart and Target have unfortunately switched over to the same online seller marketplace business model that Amazon has.
On both of those sites, you can change the filter to show only products that are available in stores.
Those in-store products still have whatever guarantees they did 20 years ago (selected by a human working for Target, probably going to be pulled off shelves if they result in too many returns).
> Algorithms are not even close to substituting for human customer service, human moderation, and human judgment.
Algorithms are human judgement.
In a strict code sense an algorithm is a way of encoding a set of assumptions that you decide on when you're developing the code to run it. Even in unsupervised machine learning an algorithm is the result of human judgement because someone decided to delegate the decision making process to a machine.
Every time a computer makes a judgement call there is a human who is responsible, and should be accountable, for that process existing.
I think you're making a semantic argument that redefines "human judgment" to such a broad definition that this entire discussion becomes meaningless.
To pull it back a bit, "human judgment" in my usage meant "input processed by human using a brain" and algorithm meant "input processed by machine".
Given those premises, the argument that "algorithms are human judgment" is untrue both by definition and by observation.
For example, when I ask Google/Siri/Alexa a question like, "What is the best way to set up my router?", I will get a wildly different answer than if I ask a GeekSquad employee.
I guess they don't have to. Cell phones and IP phones have terrible voice quality compared to a land-line; median quality is far worse than the worst long-distance connection I ever got in the 90s.
People use them because they are cheap and convenient though. It's the same with algorithms. If you charge margins high enough for good customer service you will get out-competed by those that don't on price.
An important point to make, imo, is that algorithms will not be close to human decision making until we hit AGI. No matter how complex they get, until that point you can always cheat them in absolutely ridiculous and, more importantly, reproducible ways, meaning scams scale and can do way more damage than a human error.
> Maybe this means web-scale sites like Amazon's marketplace aren't viable. I'm fine with that implication. Stores like Target and Walmart seem to have the same prices with human-curated inventories
Not only are they viable, they are hugely more profitable without the human expenses. This inhuman model - yes, I think that is exactly what it is - will displace everything else that simply can't compete in runaway capitalism.
>Not only are they viable, they are hugely more profitable without the human expenses.
I don't believe this. The reputation loss alone will cost you a lot of potential money. Stadia is basically dead because the Google brand is completely untrustworthy. B2B customers want to avoid Google as much as possible because it's just a liability that threatens to ruin their business.
I think this is key. When Amazon becomes widely known as untrustworthy, then it will fail and fail quickly. There is already a _lot_ of counterfeiting on Amazon, and a lot of people avoid it for anything safety critical or areas where it is most prevalent. If that becomes more widely known and widely practiced, it would make a dent. Small places naturally verify their stock. Amazon doesn't.
In a non-dystopian world, though, the algorithms can make life way better. We could have filters that catch more scams and abuse than typical humans do... and then reasonable humans behind that are responsive and fix the carnage for where they go wrong.
Human-equivalent AGI is not sufficient for doing better than human moderation, despite all the problems with current generation AI content moderation.
The reason it isn’t sufficient to be human-level is because individual humans have all kinds of biases and blindspots which scammers design their scams around.
I’m kinda thinking it may be a necessary prerequisite, as scams could involve anything a human wants, but it isn’t enough.
It doesn’t have to displace everything else, it simply has to be regulated by law. A landlord cannot come to your physical store and remove a few products. The same logic must apply here, with the platform providing sufficient means for interaction with the local authorities in the place where the business is registered.
> Not only are they viable, they are hugely more profitable without the human expenses
I agree with you, but what I meant by "viable" was more from a consumer perspective.
> This inhuman model - yes, I think that is exactly what it is - will displace everything else that simply can't compete in runaway capitalism.
While I think you're mostly right, Amazon is eventually going to burn enough buyers that they go back to the more trustworthy brick-and-mortar stores.
Most people I personally know have already done this. They may search for something on Amazon, but they will go and buy it somewhere else.
I'm obviously part of a very specific subset, but it's going to hurt Amazon eventually, especially if the US takes a more consumer-friendly regulatory approach than it has for the past few years (which it will).
Nowadays when I want to buy anything on Amazon it takes so much effort to work out which listing is the sponsored/ad-based listing vs organic in search. Once that hurdle is crossed, it comes to going through the reviews to see which reviews can be trusted and which can not. And even after that hurdle, once I place the order, I am not sure whether the item I received is counterfeit or genuine. Too much work and definitely not a customer-focused experience!
I guess going to Target is much easier to get the necessary shopping done.
Amazon became almost useless for many types of articles. For any type of item, there are 10-20 resellers selling the exact same cheap item, just with different color and branding. It's essentially become an expensive dollar store.
I've abandoned Amazon because of that, until they finally add a "no third party sellers" option. And also a "don't show items we don't actually send to your country" option.
I was trying to buy a Nintendo Switch yesterday here in the UK, and it said 'Sold by Amazon' and 'Dispatched by Amazon' for the one I chose. So then I clicked that item, went through to the checkout and it said "Sold by Mila AG".
I'm a technically proficient developer who's very careful about where I spend money online. If you're not very internet savvy, it'd have been really easy to miss that. In the end I've ordered from a national chain retailer which are doing click and collect nearby.
I too have recently surprised myself by shopping at Target.com after being unable to find a trustable product on Amazon (latex pillow). The experience was worse but I trust their merchandising more.
The most infuriating part is that the search barely works. It will happily ignore keywords, so you get hundreds or thousands of results, only 10 of which have all of the keywords you mentioned (and they aren't the first 10). There is also no support for negative matches.
I use a combo of AdBlock Plus and uBlock Origin and kill all the carousels (have done for many years now). Since then my shopping on Amazon is a breeze. They do make changes every now and then and I have to block 'this new carousel' that just appeared, but overall my eyes get polluted less.
Check out FakeSpot[0]. They do meta analysis on reviews to try and remove/filter inorganic reviews. They have a browser extension which really helps, imo.
Doesn't exactly remove counterfeit products, but goes a decent way towards correcting the fake review problem. Once you've crossed that bridge, you can focus more on the top results without paid bias skewing them.
I hate that I have to even ask this question of what seems like a super helpful extension, but how does this company make money? They are a business that's hiring right now with an office on Wall St in NYC but doesn't appear to have any products for sale. I'm hoping it's just cynicism and paranoia speaking on my part here.
ooof... well, thanks for asking, cause I hadn't stopped to think about it before.
Searched around online and found it mentioned in a few places, but ultimately it looks like they collect sales and order info and resell it... Damn. I'd be happier to pay a small fee instead.
FYI, on a related note - I was downvoted a couple of days back for a similar comment about the ABP + uBO combo.
https://news.ycombinator.com/item?id=26058260
It seems ABP is shady in that they let some businesses bypass their blocker.
Maybe it depends on the Amazon (I've never gotten a counterfeit or seen what appear to be counterfeit products on .fr). In any case, you know you can return the item and report it if it's indeed a counterfeit.
> i've never gotten a counterfeit or seen what appear to be counterfeit products on .fr
That you know of. There's really no way to know. Counterfeit products could even be manufactured at the official facility, just with no (or even failed) quality control.
In the US you certainly can't for some items. We ordered a food product from amazon and what arrived was not what was described. Amazon said they won't take returns of food items. That finally convinced my wife to de-amazon our purchases (I'd been pushing in that direction for a while).
Yup, I'm increasingly straying away from Amazon. If I need a new set of cooking utensils, do I wade through the hundreds of Chinese no-name entries on Amazon with hundreds of questionable reviews all across the spectrum, or do I just pay a little extra and get the quality ones from Williams-Sonoma?
Just another example of how a virtual monopoly can get away with terrible, shameful, and nonexistent customer support because they can. It's monopolistic behavior, but a passive form of it. Instead of actively engaging in monopolistic activity, they remove essential customer support because they have no competition. This really needs to be regulated quickly. Amazon, Google, and Facebook all commit the same behavior by hiding behind bots and algorithms with no customer support, and there's nothing we can do because they are so dominant.
Amazon and other FAANG companies have to be broken up under antitrust rules.
The existing antitrust rules are enough to push these monopolies to stop their anti-competitive activities. Just like antitrust rules were used against Microsoft many years ago, which opened the door to online competitors like Amazon, Google, Apple, etc.
I have zero faith that regulation here will realistically have any effect other than inching Amazon a bit closer to being a quasi-state service, and I also have zero faith in that improving customer service at all.
I should have added: regulate them to increase competition. When the long distance telephone market was deregulated in the 1990s, ending the monopoly of the Bell Operating Companies, prices crashed rapidly and service quality improved - albeit for a time.
Similarly, when you “don’t have a choice” except to work with, say, Google, there is no incentive for them to provide good service. Find a way to introduce competition and everything magically transforms.
If Amazon removes a product based on claims that turn out to be wrong, Amazon pays the merchant N times the damages. A court, quick judgement, low or no expenses for the merchant. Apply this in every jurisdiction Amazon operates in.
Then Amazon would be wary of automatic bans.
Of course this must apply to every site like Amazon.
EU (and UK) citizens have the right that automated decision making that affects individuals must:
* be documented, and individuals must be informed about it.
* include simple ways to request human intervention or challenge decisions.
* be regularly checked to ensure it's working as intended.
This applies to anything with a "legal or similarly significant effect on individuals". As examples, the ICO gives "automatic refusal of an online credit application" and "e-recruiting practices without human intervention".
IANAL, but I imagine this would cover the OP's situation if they were EU-based, and most similar complaints (Apple unilaterally pulled my app down incorrectly and won't answer my emails, Google wrongly closed my Gmail account). If lawsuits & penalties start appearing under that umbrella, it might help change the monopolies' positions.
Doesn't help in the US of course, but hopefully they'll follow suit in the coming years, which would apply quite a bit more pressure here.
Amazon's automation and lack of accountability towards sellers is maddening. I have a popular product on Amazon US that had a QC issue in 2017. Despite it having been resolved later that year, I am still banned from selling my product in multiple marketplaces (AU, EU, etc.). (Which makes little sense, since at the time of the QC issue I was only in the US anyway.) I've had other marketplaces (JP, IN, etc.) accept the product, and whenever I can get support people in those locales to answer my emails, they promptly enable my account, citing that it had been blocked due to a technical issue.
But other important Amazon marketplaces just won't respond, or don't follow up. The UK told me that if I got a VAT number they would enable my account; I did the months of work required to get VAT registered, told UK support I was ready to go, and have had all of my emails (~1 a month) ignored for over a year now. Why tell me to get a VAT number and then just pretend I don't exist? They just don't care. Even worse, Amazon will INVITE me to join certain marketplaces, which I have to pay to join, only for me to later find that I'm randomly blocked in that marketplace as well. Oh yeah, no refunds either.
I've escalated to jeff@amazon and had multiple people there tell me they're in the middle of unblocking the account everywhere, then weeks go by, nothing happens, and they too just stop responding to my emails. At this point jeff@amazon doesn't respond anymore either, even though I made certain not to repeat my request more frequently than once a month. It is hair-rippingly frustrating, and it is the biggest stress point for my business currently. The worst part is that this is my main source of income, so I'm just relying on their support people, who are wholly unaccountable.
I wish I had other options, but Amazon is such a behemoth now that most of the customers who were purchasing through my own e-commerce store a few years ago now demand to purchase on Amazon or not buy at all. I'm losing a ton of money because of this and have absolutely no way to get help. I can't even find words to convey how frustrating it has been.
The countries you listed have very strong consumer laws that might make it a higher risk for Amazon. And while you are likely very honest and truthful, there are probably a hundred less scrupulous people for every one of you.
Some people seem to think having no web marketplace at all would be better and we should go back to brick-and-mortar mega-corps like Target and Walmart, but good luck getting shelf space there.
I wouldn't have a problem with that, but the messaging that they're sending me should be consistent. They should tell me that my account is banned in these marketplaces for the QC issue years past and leave it at that. Instead what they do is string me along, have me pay fees, tell me to submit paperwork, and then I hear nothing. This leads me to believe that there's nothing really wrong with my account, they are just disorganized. What a waste of so many days of my life I've spent trying to resolve it.
Having even a quite high false positive rate would be totally acceptable, if the process involved humans on the Large Faceless Company side.
You'd get an email, "Hello, there might be a problem, here are the details" and if you made some good faith attempt to reply, be able to talk to an actual person, who could potentially escalate your issue.
I agree with you in this instance, you make a good point. 99% accuracy sounds great on paper, but that's a very painful 1% remainder - and in this case, there's no way a rational human would have acted in the same disastrous way the AI did.
That said, in many other applications of AI, a 1% error rate is probably a huge success. AI does not need to be perfect to be considered successful; it only needs to do better than a human. In this example, AI failed miserably at that, but there are plenty of applications where AI errs far less frequently than humans do at the same task.
The problem here is that you need two algorithms. Their current algorithm might be 99% sensitive at detecting fraud, but they also need a second algorithm that is 99.99999999% specific - one that almost never flags non-fraud. By running the first algorithm's positives through the second one, they can catch the false positives and lessen these episodes.
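To put rough numbers on that (all of them assumed, purely to illustrate the point), here is a minimal Python sketch of the two-stage idea - a sensitive first pass, with a highly specific second check run only on the first pass's positives:

    def expected_outcomes(n_listings, fraud_rate, sens1, spec1, sens2, spec2):
        fraud = n_listings * fraud_rate
        honest = n_listings - fraud

        # Stage 1: cheap, highly sensitive detector flags suspects.
        caught_stage1 = fraud * sens1        # fraud correctly flagged
        false_stage1 = honest * (1 - spec1)  # honest listings wrongly flagged

        # Stage 2: slower, highly specific "is this really fraud?" check,
        # applied only to stage-1 positives.
        caught = caught_stage1 * sens2
        wrongly_banned = false_stage1 * (1 - spec2)
        return caught, wrongly_banned

    # 10M listings, 1% actually fraudulent -- all numbers assumed for illustration.
    caught, wrongly_banned = expected_outcomes(
        10_000_000, 0.01,
        sens1=0.99, spec1=0.99,    # stage 1: sensitive but blunt
        sens2=0.95, spec2=0.9999,  # stage 2: near-perfect specificity
    )
    print(f"fraud caught: {caught:,.0f}  honest sellers banned: {wrongly_banned:,.0f}")

With those made-up rates, the second stage cuts the wrongly banned honest sellers from tens of thousands down to a handful while still catching most of the fraud.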
I work at a medical simulator training device company - we're quite small, just ~20 people making these devices in-house. We've been grappling with Amazon's Seller Central for quite a while now. Because our simulators look medical in use, Amazon keeps taking them down and banning them. It's like playing whack-a-mole, and it's been basically impossible to reach a real live person to explain the issue.
As someone who used to work in the same industry (2011-2015 as a programmer at Surgical Science) I'm curious which company you're working for.
I know our salespeople had constant trouble with U.S. customs because we always had to declare whether our devices were medical equipment or not, and different TSA agents had different opinions on how the form should be filled in. They often got upset regardless of how it was filled in.
Legally speaking it is not a medical device, but some agents disputed that because it was used in the medical field at hospitals. Other agents were upset for wasting their time if you did declare it as medical equipment.
In my experience, most of the time when "TSA agents" get annoyed, it's because they're part of the $10/hr contractor service. As long as it's not the gun they use for their regular tests, they don't appear to care one bit.
The real TSA agents, however, do appear to care. And I'd argue they're also quite competent. But Snowden was indeed correct that 95% of the people you deal with at the "federal government" are contractors - of varying quality.
Oh wow yeah Surgical Science - their LapSim mentor simulators for laparoscopy are certainly our direct competitors. I’m at Laparo Medical Simulators - we specialize just in laparoscopy and we’re working on delivering an enterprise VR simulator (no goggles), though our main market currently is simpler box trainers which undercut the competition on price.
What you're describing is something we run into with the US specifically quite often. It's kind of crazy, but the US has really become a stagnant market: economically nationalistic, difficult to do business with unless someone gets to take a big slice of the pie, and very conservative when it comes to innovation.
I would really like to talk more if you’re up for it.
Despite the six-figure listing fees he pays to Amazon, he still has to blog about it to get attention. Meanwhile, my far less valuable AWS account gets responses from actual humans in under 24 hours.
Because unlike Amazon's marketplace, AWS does not have a competitive moat: their product/service is a commodity offered by other competitors and can be trivially brought in-house if needed. They know they are lucky just to get the business in the first place and will do whatever it takes to retain it.
You can tell who Amazon values. It sees its sellers as easily replaceable. If you are no longer paying six-digit listing fees, someone else will quickly take your place. With AWS, your business is harder to replace and more valuable.
Or rather: you can tell who earns the money for Amazon. Current revenue is pretty much 50/50 for AWS and advertising - retail is barely breaking even. Source: this week's Economist article on Bezos stepping down.
Well, I proactively quit both my Amazon and Ebay accounts.
The trigger was trouble with Chinese dealers, where neither Amazon nor Ebay was helpful at all, but the main reason is that I've simply had enough of soulless mega-corporations that try to build monopolies.
This forces me to use smaller (often local) alternatives, and honestly I don't miss anything.
So my 2ct: Stop complaining, instead do something.
Stop using Amazon, Google, Ebay, Facebook and the likes.
YOU are the one that makes them unavoidable, so just stop doing so.
I wish people would just stop using Amazon. Online stores are everywhere now and Amazon is worse than Walmart now when compared to just shopping directly from the product website or smaller online boutiques. Even the shipping times are nothing special anymore.
I agree, but Amazon still has the best return policy. Everyone claims they have a no-quibble return policy, but only Amazon actually does. Must suck for the sellers.
I dropped Amazon because their return policy has lots of quibbles nowadays.
I ordered a book, and they sent the wrong book. They argued over whether it was the right book (the ISBNs were different), they made me buy an envelope to ship it back, and they only refunded my money once they'd received it and cleared it.
So I was out the price of the book for three weeks.
Old Amazon wouldn’t have screwed up, would have mailed out a replacement book immediately, etc.
They seem to handle exchanges stupidly now, and they've gotten cheap.
> a competitor is obviously trying to take down our listings
This doesn't seem obvious at all. It could be any number of other malicious actors. Off the top of my head: an unsatisfied customer, a troll, a disgruntled former employee, etc...
The profit motive is next-level motivation. It affords a level of continuous effort, whereas passion-driven motives wear out over time.
Put another way: if a single unsatisfied customer is able to take down a product then there must be dozens or hundreds of profit motivated take downs. The article is not naming an exact competitor, but dismissing the accusation as "it could be an entirely different random person" does not change the issue's substance: Amazon is broken.
> if a single unsatisfied customer is able to take down a product then there must be dozens or hundreds of profit motivated take downs
Not sure what the intent of this observation is - the same is just as true: 'if a single competitor is able to take down a product then there must be dozens or hundreds of profit motivated take downs'.
> The article is not naming an exact competitor, but dismissing the accusation as "it could be an entirely different random person" does not change the issue's substance: Amazon is broken.
Considering there are only so many no-buckle belt vendors, stating that it is obviously a competitor casts aspersions on these other vendors without any evidence. But no, it indeed does not change the substantive critique of Amazon.
It seems like big tech companies live in a bubble where they think society is in fact an algocracy and everything is an engineering problem that can be solved with an algorithm.
Or they work on a team that's got a KPI saying "reduce the support contact rate by 3%" and there's nobody higher up who realises this is a terrible idea.
Big tech companies are full of people acting rationally, seeking to maximise metrics that are easy to track and sound good, but have negative externalities that are hard to see or to measure.
It may be years before it happens, but I think this sort of thing will end up being the reason these global companies ultimately won't be able to compete against the more focused online stores.
There are already categories of items I don't ever try to buy on Amazon: farm equipment, fencing supplies, pet food.
And I'm currently looking into email solutions that won't leave me completely screwed when Google arbitrarily decides it's time to delete my gmail account. There's just too much associated with my email address for it to be in the hands of someone who won't answer the phone.
IMO, these companies are winning because they're better than what came before them, but they're far from the perfect solution.
Absolutely. These Google horror stories have me ready to pay for an email SaaS for the first time, and starting to consider that I may need to move all my personal content off Google just to be safe.
> Why does Amazon not have any decent phone support or a dedicated representative that could help with issues like this? If you call, you’re going to talk to someone overseas who has no authority to help in any way, a complete waste of time.
This sounds an awful lot like the experience of the Terraria developer banned from Google earlier this week.
This is probably the only recourse in this new age of AI bots deciding human affairs:
- Seller is banned by an AI agent for arbitrary reasons. There is no reasonable phone recourse
- Seller tries to make a _human_ point in a blog post hoping it goes viral
- The blog post gets noticed on different platforms and the buzz about AI wrongdoing is amplified to a certain level
- If some arbitrary threshold of _human_ reaction is reached on social media, the company in the wrong will probably do something about its AI bot going haywire
Thus, the bot's common-sense holes get covered by the collective wisdom of the crowd (which maybe, just maybe, will be integrated into the bot's training loop).
Why not add a government body where, upon a submission, a fine ticker starts and forces the company to reply within a set period of time?
If no human reply is given, the fine piles up exponentially until the issue is solved.
They shouldn't be able to let people build their livelihoods on their platforms and then suddenly cut them off without even giving proper support. They should pay for this - innocent people go through an immense amount of stress and get stripped of their money and the product of their work.
If the answer is "they're too big to have humans manage this", then you know what the reply is - cut them into pieces, halves, fourths, who cares. Slap a user account cap on that shit with a number that a human workforce can handle.
I feel no sympathy for the growth problems of these massive companies. I really hope the European Union starts to crack down on this shit, and hard.
I could think of several reasons why regulation would not be a good idea. Cutting into pieces? Maybe, but how would you cut FB or Amazon into pieces in a way that credibly resolves the problem? Remember that lobbying for antitrust splits would have to be pursued by someone with enough political ambition to offset the pressure in the long run. What would be their benefit?
For now I think that voting with your feet is still a recourse, complemented by vocal messages on social media to maximize hit-back.
I really liked what the EU did with GDPR to secure EU citizens' data protection, with regulators that everyone can access. It seems to be working well (it could work better, but it's still a relatively new thing).
It makes companies put in the effort to comply - or refuse to comply, exit the market, and stop providing products/services to EU citizens, and that's fine!
That's why I believe regulation would do wonders here. Imagine if YouTube or Amazon had to make up for the lost revenue of someone they mismanaged and refused customer support to. Sprinkle that with fines, and it would start to shift the whole "AI for customer support" approach, which simply doesn't work.
Regarding the cutting into pieces: do it so that each user base ends up at a manageable size. Amazon has several user bases: AWS users, Amazon Buyers, Amazon Sellers, etc.
In this case, Amazon Sellers are treated like shit. So maybe the Amazon Seller platform should be spun off and given an independent structure dedicated to it. Because currently there is counterfeiting, fraud, and a lot of other illegal activity happening while they give no proper recourse to the damaged parties - unless you're Nike or Nintendo.
>Remember that lobbying for antitrust splits will be pursued by someone with enough political ambition to offset the pressure in a long run. What would be their benefit?
In this case, the benefit is that for the whole European Union project to start to walk the talk, they have to start making visible changes. Otherwise the credibility of the union will start to crumble.
In the UK we have something called the "small claims court". The maximum compensation you can claim at the small claims court is £5000, but the important thing is: even if you lose, the company you're suing can't claim their legal fees back. All you can lose is your court fee (somewhere from £35 to £150 depending on the claim amount).
The idea of the small claims court is that you don't use a lawyer, so there are no complicated forms to fill in, and if it goes to court you just present your evidence in front of a magistrate in plain English with no legal mumbo jumbo.
Starting a case in the small claims court is usually a splendid way to force your way past the army of bots and AI customer-service shields and get your complaint seen by a real person. It usually gets things resolved very quickly. But not always... here's a good story:
Last year I was overcharged £6 by Opodo on the credit card surcharge for a flight booking. I spent two hours going round in circles with their Facebook Messenger automated customer service bot. There were no other contact methods given. So I took them to court, claiming:
Claim: £6.23
My time interacting with chat bot (2hr 22min) @ £100/hr: £236.67
My time preparing claim (1hr) @ £100/hr: £100.00
Court fee: £35
Total: £377.90
... which I think is extremely reasonable of me. They didn't file a defense, so I won the case by default. Even after winning the case, I still had no method of contacting them, and no payment was forthcoming, so I went to the next step which is to file for a warrant.
This cost an extra £95 but it included a bailiff visiting their office, who would be able to take goods to the value of my claim. Bring it on, I say! So I apply for my warrant and wait.
Nothing happens for 6 weeks (this is in the depths of the first lockdown so I wasn't surprised by the radio silence).
Then I get a call from a very helpful chap who tells me he's the bailiff. He says I probably shouldn't hold my breath waiting for this money because the registered office is just a plaque on the wall with no staff or office... and he has FIFTEEN warrants outstanding for Opodo.
The next step would be a warrant against Opodo's bank, which would allow me to freeze their account... but there was no easy way to do this and it would probably require a lot of paperwork. So I gave up, my belief in the UK civil justice system shattered.
Then about a month later I got an email from Opodo:
"We are contacting you from Opodo after receiving the Judgement for Claimant. In order to proceed with your refund of £377,90, ..."
... and within a couple of weeks they had paid me in full! I'd highly recommend the small claims court - it takes literally five minutes to file a claim here... check out: https://www.moneyclaim.gov.uk/
And what's always shocking: every time I need to contact a large company and a fast response is required, it's never available. Phone calls are passed around from department to department, none of which can deal with significant situations, email support takes far too long, and live chat rarely exists.
It's mind-blowing how such large companies don't have these services, but at the end of the day it's a numbers game for them.
There are just more people who don't complain, and their bottom line is still fine.
Really, really frustrating, as there aren't even recommended customer-service requirements for companies. I'm sure that in the above case money was lost because of the competitor's action. Will the OP be reimbursed? Of course not.
Those companies have grown way too big. Their actions (and inactions) can and do affect people's lives way beyond what is reasonable. I think they should be regulated, at least in how they handle dispute resolution. One of the things governments could do is tell them to shove their EULAs and allow people to sue them for refusing a proper dispute resolution process, especially in small claims courts. I think that would straighten them right out.
I don’t think a competitor did that. I think it might be due to a mislabeled BOM or some other issue.
Can this person do the same to a competitor's item?
Can a regular person do it for any item on Amazon?
If not, then it's probably due to some other issue and not a competitor.
It's happened to us multiple times, and it's always a competitor trying to get our listings taken down. They look for any data fields on a listing that haven't been filled out by the seller (most likely because they didn't seem relevant to the product) and then add words to those fields that the algorithms will pick up, causing the listing to be suspended. You can try to prevent it by making sure you are Brand Registered and uploading a flat file containing a value for every single field. Amazon will eventually reactivate listings that are attacked in this way, but you can lose a lot of time and money waiting.
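Not Amazon-specific tooling, but as a rough sketch of that defensive sweep: assuming you keep a CSV export of your listings (the filename and the "sku" column below are made up for illustration), a few lines of Python can flag every attribute field you've left blank so you can fill them before someone else does:

    import csv

    # Minimal sketch: scan a listing flat-file export and report which
    # attribute fields are still empty. The filename and "sku" column are
    # assumptions; use whatever your own export actually contains.
    def empty_fields_by_listing(path):
        gaps = {}
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                sku = row.get("sku", "<unknown>")
                empty = [col for col, val in row.items() if not (val or "").strip()]
                if empty:
                    gaps[sku] = empty
        return gaps

    for sku, cols in empty_fields_by_listing("listings_flat_file.csv").items():
        print(f"{sku}: fill in {', '.join(cols)}")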
The three different substances that Amazon said the item contained were
* methoxetamine, a synthetic hallucinogen
* jimson weed, a literal weed from the Americas that causes severe hallucinations and paralysis
* coca leaves, a traditional Andean herbal stimulant that are the natural source of cocaine
None of these items are related or similar at all, and none of them would be used in making belts. Seems like a difficult error to make through mislabeling!
After my latest Amazon order got me a completely empty, unsealed shipping envelope, I tried to get my money back. AFAICT, there is nothing in the system that even allows reporting "package was empty". Thankfully the item was less than $5.
I never got an empty package, but I got a counterfeit before. I would have liked a button for that, too. It just shows Amazon doesn't care about fighting counterfeits.
I ordered a couple of DVDs a couple of months ago, where only one turned up in the package and it was entirely feasible for the other to have fallen out. I managed to report it as "item missing" or "order incomplete" and they just shipped me another with minimal fuss. Having to go through the Amazon shopping app was a bit annoying, but the overall experience was pretty slick.
I canceled the return and looked at the Amazon app again. There is no option resembling "incomplete delivery" in my case. It looks like if you ordered only one item they assume you got something and they want it back before issuing a refund or replacement. I may just send back their defective envelope with its missing glue strip and hope for the best. I'll be dropping my Prime subscription at the end of the month.
Sure there is - 'item missing' or 'order incomplete' both properly describe your situation, and I have had the same thing happen to me with amazon fixing the problem immediately.
Also, every time I've had an issue with an amazon order that I've needed a human to help with, I've been able to get one on the phone or via chat.
Wasn't it just 1 or 2 days ago that people here were praising Amazon's customer support over Google's, in a thread about a developer who had his Google account blocked?
At this point, I believe that if you get unlucky they are all equally shitty. Just yesterday a friend of mine lost access to his Microsoft account, including his OneDrive with all his documents (all work related, by the way), due to an unspecified account violation.
I suppose there are some Amazon sellers who have specific items that (contrary to the public listing) actually contain drugs. In that case, people could order them as a way of buying drugs that would look like some other kind of retail business from the point of view of Amazon, shippers, and tax agencies. The seller could try to make the listed product extremely unappealing or uninteresting to the general public, or have some kind of code or out-of-band contact method that people could use to indicate that they wanted and expected specific orders to contain the drugs instead of the listed item.
Also, it looks like the "people lying in order to harm, steal from, or extort others" issue hasn't been solved for any side of a two-sided market or review system. :-(
Edit: I was first thinking of saying that this is a way that drug prohibition really empowers random people to hurt each other with false accusations. But this is probably true for any kind of high-stakes accusation. As long as there's any kind of contraband or dangerous item that Amazon cares quite a bit about banning from its site, and as long as there's a significant market in that item, people could make spurious accusations that any seller was secretly trading in it.
Amazon was facilitating the sale of objects that literally had radioactive thorium dust in them, and it took action by the NRC to clean them up; Amazon has continued to sell products of this sort listed since then.
Exceptional edge cases, like flagging inappropriate content, are open to abuse. This is especially true for highly automated systems like those run by Amazon and Google but I think it occurs on community sites like HN as well.
It is unfortunate that these trust based systems are being abused.
I have seen every tool used to enforce ‘laws’ also abused to deplatform competition.
Most tech companies are run with as few employees as possible, that is why SV FAANG engineers make so much.
We can look to China to understand a bit more about the statistics and personnel required to effectively moderate a system, e.g. the number of censors per number of users.
This suggests that companies like Facebook have 2-3 orders of magnitude fewer employees than would be needed to moderate their systems, i.e. they are effectively unmoderated. (This is why it was no surprise that Facebook was used to plan the Capitol riots, even though FAANG benefited by deplatforming Parler for political reasons.)
We are in an interesting situation where there are probably similar levels of legitimate and illegitimate takedowns but nothing will change because the people with power and money have what they need.