Hacker News | salty_biscuits's comments

Yes, my experience has been that it is great if you need to do something particularly weird, but less smooth to do something ordinary.


I'm sure there is a way of interpreting a ReLU as a sparsity prior on the layer.
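Not a formal prior argument, just the mechanical intuition in code (my own illustration): ReLU zeroes out every negative pre-activation, so a roughly zero-mean layer comes out roughly half zeros, i.e. sparse.

```javascript
// ReLU: max(0, x). Applied elementwise, it maps every negative
// pre-activation to exactly zero, so the output vector is sparse.
const relu = (x) => Math.max(0, x);

// A made-up zero-mean "pre-activation" vector; about half is negative.
const preActivations = [-1.3, 0.4, -0.2, 2.1, -0.9, 0.7, -1.8, 0.05];
const activations = preActivations.map(relu);

// Count exact zeros: these are the entries ReLU "pruned".
const zeros = activations.filter((a) => a === 0).length;
console.log(activations); // [0, 0.4, 0, 2.1, 0, 0.7, 0, 0.05]
console.log(`sparsity: ${zeros}/${activations.length}`); // sparsity: 4/8
```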


It is fun if you ever find yourself in this situation, because you can play the Uno reverse card on the interviewer and ask them to clarify with impenetrable jargon, then look for rising panic (can I assume the graph contains a Hamiltonian circuit? etc., etc.)


Nah, when you do that, Murphy's law says that the interviewer will be the only person in the world working on extending nonstandard analysis to spectral hypergraph theory, and your attempt to snow them will reveal that you only have a surface-level understanding of the jargon you just emitted.


That, or they're the super egotistical/arrogant type but too dumb to realize you know more than them, and they assume somehow that you're the one who doesn't know what they're talking about.

Although in that case bullet dodged.


Reminds me of the (apocryphal) joke about James Gosling (who invented Java in 1994):

interviewer in 1999: we are looking for someone with 10 years of java experience.

james: I invented java 5 years ago.


well that guy's not getting hired!


You are joking but this may be true.

I once was interviewing for an interesting job and the topic was general knowledge of a system made of A, B and C (all were similar). The interviewer did not know much but insisted on some very deep details about B. I told him more than was available in the docs already and at some point I said I would need to peek in the code to say more.

He told me that this was too difficult for me, because only people who were part of the team that designed it would understand.

I told him that I wrote the A part almost by myself, so it wouldn't be too difficult to catch up with B.

I did not get the job (ultimately a happy ending) and was told that I did not know enough about A (the part I wrote).


I am joking, but not really. Basically it's my belief that any place asking for 10 years of experience in a 5-year-old technology is going to be really sensitive about anyone with an "attitude".


Fun fact: I once interviewed at a place where the tech lead interviewing me had the terms "pass by reference" and "pass by value" swapped. That is to say, he understood that in JavaScript objects are passed by reference, and what that means for the practical effects of assigning object a to b, but he thought the technical term for this was "pass by value", and that the technical term for things passed by value was "pass by reference" (so, according to him, strings were passed by reference and objects were passed by value). No explanation of what a reference is, how pass by reference works, or why it makes sense to call it pass by reference could penetrate.


just a pedantic detail, strings are passed in javascript by reference, they are just immutable


I just went down the rabbit hole of reading this post and the entire thread. As someone who has been looking for a junior job, it's probably the most depressing thing I've ever read. I've been on the market for over 6 months, I've sent countless resumes out and tried various techniques, but I'm not even getting a nibble.


I guess technically it's the reference to the string that's passed, right? So if I say a = "stringA", a reference to "stringA" is created and assigned to a; if I then say a = "stringAA", another reference is created for "stringAA" and assigned to a, while "stringA" sits around somewhere waiting to be garbage collected a few milliseconds later. That's complicated to think about, and I'm not sure I haven't messed it up.

Easier to just say pass by value and forget about it. Or make all your variables consts and then it doesn't matter.


No, that's not correct. Value and reference assignment behave the same way for = (well, reference is hiding the fact that it's not the literal string/object but a reference to it, a number is just the number).

Where it matters is in passing arguments to a function call. If you pass 42, it's not mutable, so incrementing it, or doing anything to it, will not modify the original variable you passed in. For a reference, using = will assign a new value (not change the original) but modifying the referenced object, like, say, a.b = 5, WILL change the original object.

It’s not really “pass by reference” that a C/C++ developer would understand but it seems to be the term that has stuck.


>= (well, reference is hiding the fact that it’s not the literal string/object but a reference to it, a number is just the number).

>For a reference, using = will assign a new value (not change the original)

what I wrote was regarding only strings, so I'm not understanding - it seems you are saying the same thing I said? But maybe I'm wrong about how the actual strings are stored.


Sorry to get a bit nerdy here, but in JS, neither pass by value nor pass by reference make sense as it’s not defined by the spec and much less followed by the implementations. Strings can be pointers to buffers or ropes, numbers can be values (NaN-boxed or otherwise) or pointers depending on a number of conditions, it all depends. However, from what’s observable in the language, all variables are pass by value. There’s no way to pass anything at all by reference, primitive or not, i.e. you can modify a field of an object you were passed but you can’t change the object.


A hashtable of all strings made in the program (i.e. string interning).


So the naming is super confusing in these cases, and the best way to get out of it is to say "the references are passed by value", but... technically he was right. In JS everything's passed by value. It doesn't matter that those values are references. Pass by ref would mean that "function foo(a) {a='123'}; b=''; foo(b)" would change the value of `b`.

Every popular language that allows pass-by-reference makes those places very explicit (like `ref` in C#, or reference parameters in C++).
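The snippet above, made runnable (a sketch; in a true pass-by-reference language the callee really could rebind the caller's variable):

```javascript
function foo(a) {
  a = '123'; // rebinds the local parameter; the caller never sees this
}

let b = '';
foo(b);
console.log(b === ''); // true: b is unchanged, so JS is pass by value

// The same holds for the thread's string example: passing a string
// never lets the callee swap out the caller's variable.
let s = 'stringA';
foo(s);
console.log(s); // 'stringA'
```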


>but... technically he was right. In JS everything's passed by value. It doesn't matter that those values are references.

Yes, technically I know this, but even so he was not technically right, because he still said there was pass by reference and pass by value in JavaScript; it's just that his description of what happens in "pass by value" is what is normally described as pass by reference, and his description of what happens in "pass by reference" is what is normally described as pass by value.

I think we can agree that given that he used both terms and mixed up their meanings that he was not "technically right"

On edit: meaning, if he had said "we pass everything by value in JS, but some values are references, so what happens then?" he would have been right. But he said we pass objects by value and primitives by reference, and asked what those two terms mean. He accepted the description of what happens to an object when you pass the reference as correct, but insisted that was called pass by value. He likewise accepted the description of what happens when a string is assigned to a variable, that variable is assigned to another variable, and the first variable is changed (including that changing the value of variable A does not thereby change the value of variable B), but insisted that this process is called pass by reference. From that conversation I intuited that he was unfortunately not "technically correct".


Should've been more clear, it was only a response to the objects passed by value part as correct. Yeah, he was obviously confused by other parts.


In that case you won't do well in the interview because "bad attitude" or "lack of soft skills".


Well, that's just it. Too many interviewers use the interview as a platform to flex how awesome they are. The proper response is to ask, at the end, a few probing questions about where they get to apply such skills day to day.


Unrelated: have you perhaps done anything with nonstandard analysis on graphs (or in spectral hypergraph theory? most uses of NSA on graphs require infinite graphs; how does that work when the spectrum might not be defined?)


Hahaha! I only crammed some dense jargon into a sentence to give the air of expertise... it's a bit of a trick, finding a combination of math terms that doesn't refer to an actual field of study.


Hahaha, I know what you mean.

It's been a while since I've looked at NSA on graphs, but it's an interesting field of study. For something of a taste, an alternative proof of Kőnig's lemma might look like:

- Start with a locally finite, connected, infinite graph G.

- Choose any nonstandard extension G* of G.

- By the transfer principle (basically just logical compactness), there exist hyperpaths [0] of unbounded (hypernatural) length in G*. Pick one, P*.

- Restricting P* to G you obtain some path P, which is the infinite path you're looking for.

I settled into industry instead, but that's the sort of thing I'd like to study if I ever go back for a PhD, hence the interest in those sorts of ideas applying to spectral theory.

[0] The "transfer" of a path isn't actually necessarily a connected path in the usual sense, but it's indexed by the hypernaturals, and each well-ordered countable segment is connected. I'm skipping the entire intro that makes those operations make sense.


Dammit, I had hoped I'd nerdsniped you... but the nerdsniper is you and now I'm curious!


Well, you did nerdsnipe me too :) I haven't looked at this in a while, and my curiosity is re-ignited.

The most basic style of proof in a lot of nonstandard analysis is (1) lift your problem to a nonstandard space, (2) prove something interesting in that space, (3) hopefully project something interesting back down to the problem you actually care about.

E.g., in nonstandard real analysis you can look at a real-valued function like f(x) = x^2, pick any epsilon z, and compute the hyperreal analogue (f(x+z)-f(x))/z = 2x + z. This is within some infinitesimal of 2x, so you use some machinery you've built up to conclude the derivative of the real-valued function is 2x.
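As a toy analogue of that computation (ordinary floats with a small finite z, not actual hyperreals; my own illustration): for f(x) = x², the quotient (f(x+z) - f(x))/z is algebraically exactly 2x + z, so shrinking z pushes it within "an infinitesimal" of 2x.

```javascript
const f = (x) => x * x;

// Difference quotient (f(x+z) - f(x)) / z; for f(x) = x^2 this
// simplifies to exactly 2x + z, the hyperreal computation above.
const quotient = (x, z) => (f(x + z) - f(x)) / z;

const x = 3;
for (const z of [1e-1, 1e-4, 1e-7]) {
  console.log(quotient(x, z)); // approaches 2x = 6 as z shrinks
}
```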

The graph lemma above had a similar flow. Create G*, find something interesting, project it back down to G, finish the proof.

That's certainly not the only proof style. Nonstandard topology combines basically all the normal compaction theorems into one, for example, and that's a bit more intricate.

Even such crude techniques can bear fruit quickly though. Menger's theorem was proven in the early 1900s, and only extended to infinite graphs in the late 1900s. That 3-step proof process with nonstandard graphs makes it a bog-standard freshman exercise for locally finite infinite graphs, and only a bit more involved for the full generality originally proven by Erdos and friends.

I don't have any deep insights beyond that. The Springer GTM series has a nice intro to nonstandard analysis (not actually graduate-level IMO, a mid/advanced undergrad could use it, which is a nice level for that sort of book), building it up in a way that you could probably work with nonstandard extensions of other infinite structures (like graphs) without needing many/any other resources, especially if you've done much with building models of other theories via set structures.


> (3) hopefully project something interesting back down to the problem you actually care about.

Indeed, and this is the step that standard mathematicians tend to balk at.


Murphy's law is about things going wrong. But nothing can go wrong when encountering someone who knows more about something than you. You only stand to gain.


You gotta be careful. Some interviewers, especially the ones who are going to be peers, or worse, a peer of the hiring manager, might have mixed incentives to avoid hiring someone who could show them up.

I feel that happened to me once when I interviewed for a Java job at a stodgy health insurer. The interviewer tried to test my Java, and it quickly became obvious that he was very much a Java beginner and I could run circles around him, correcting his misconceptions. I was polite about it, but naive; he was clearly offended and gave inaccurate feedback.

Another job, one of my rounds was with a peer of the hiring manager, and he did not ask me anything really beyond introductions, and then he lied and claimed he had asked me several technical questions and I'd failed them, which did not happen. I got that job anyway and accepted the offer, which was a mistake.

So actually, you probably don't have to be careful, because this is a good way to avoid a bad job. Unless you're desperate and need to feed the kids or something. Then feel out the interviewer, and do well, but not _too_ well. Don't make the interviewer feel stupid. Save that for after you've been working with them a while and have built up social capital in the company.


> can I assume the graph contains a Hamiltonian circuit?

Many interviewers will likely ask you: what is a Hamiltonian circuit and can you think of a solution that doesn’t contain a Hamiltonian circuit?


OP should start interviewing just to record this exact scenario - then share it here for the sweet, sweet schadenfreude.


Sadly people with power are immune to shame.


I suspect that is one of the reasons they are in power. Shame is what keeps us plebeians in place.



I love this book, but what is your point? :)

Exploiting shame is a valid strategy, and defencelessness is a weakness? People are feeling shame because they're inexperienced in power relations? Power relations are fun if you view them as games?


> Shame is what keeps us plebeians in place.

>> The first step in becoming a top player is the realization that playing to win means doing whatever most increases your chances of winning. That is true by definition of playing to win. The game knows no rules of “honor” or of “cheapness.” The game only knows winning and losing.


There are more of us. Shame we didn't organize/unionize when the time was right.


We did, but then they managed to convince us that the unions weren't on our side. But there's not much stopping us from organising ourselves again.


But what would push us forward? Maybe fear?


historically, hunger and fear.


An African swallow or a European swallow?


Those who don't get the reference should immediately turn in their "I'm a nerd" tee-shirts.


People who quote Monty Python aren’t nerds, they’re just old.


Maybe so, but at least my kids would get that reference.


At least you have kids… We only have an empty hole in the ground covered by a sheet of tarpaulin, but it’s a house to us!


A hole!! You were lucky, we slept naked in the middle of Death Valley and died twice a day before going to school.


I'd dispute that in the UK at least, I think most people of a nerdy disposition tend to at least be aware of the films.


So true! Nerds would quote Adams or Gibson.


nobody expects the spanish inquisition...


One has to know these things when you're a king, you know?


Not if you’re the Burger King :)


Darn, I just bought it too.

Unfortunately (after "cheating" and looking it up), Monty Python was a bit before my time. Or at least outside of my community circle.


or they're just younger than you


Or they are one of today’s lucky 10,000.

https://xkcd.com/1053/


bro it's just a reference to a not-so-funny Monty Python skit. not a big deal by any stretch.


He’s not your “bro” little mister, he’s clearly your elder!


sorry sis but i don't care if he's old or not. monty python is boring and if you gatekeep people using "jokes" from it, you should reconsider your life.


Reminds me of the time an interviewer tried to get me to walk through an efficient solution to elevators, so I just proved it was equivalent to travelling salesman.


The answer to this metaShibboleth is only in an Adams space. There are 42 of them, but they must be specified.


The error budget in the pseudorange to the satellites has various factors due to relativity, but they are just lumped into the errors in the least squares problem that you solve to get the position estimate (as per the article). So relativity is important, but you don't need to know much about it to solve for your position. Various flavours of long baseline/network RTK will need more sophisticated modelling though.
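A toy sketch of that last step (entirely my own construction: 2D, three known anchors, no clock bias, no error modelling): linearize the range equations around a guess and iterate a small least-squares solve for position. A real receiver adds a clock-bias unknown and folds the relativistic and atmospheric terms into this same measurement model.

```javascript
// Known "satellite" positions (2D toy; real GPS is 3D plus clock bias).
const sats = [[0, 100], [100, 100], [50, 0]];

const dist = ([x1, y1], [x2, y2]) => Math.hypot(x1 - x2, y1 - y2);

// Gauss-Newton: at each step build the 2x2 normal equations
// (J^T J) d = J^T r from the range residuals and solve in closed form.
function solvePosition(ranges, guess = [10, 10], iters = 25) {
  let [x, y] = guess;
  for (let k = 0; k < iters; k++) {
    let a11 = 0, a12 = 0, a22 = 0, b1 = 0, b2 = 0;
    sats.forEach((s, i) => {
      const d = dist([x, y], s);
      const jx = (x - s[0]) / d; // d(range)/dx
      const jy = (y - s[1]) / d; // d(range)/dy
      const r = ranges[i] - d;   // measured minus predicted range
      a11 += jx * jx; a12 += jx * jy; a22 += jy * jy;
      b1 += jx * r; b2 += jy * r;
    });
    const det = a11 * a22 - a12 * a12;
    const dx = (a22 * b1 - a12 * b2) / det;
    const dy = (a11 * b2 - a12 * b1) / det;
    x += dx; y += dy;
    if (Math.hypot(dx, dy) < 1e-12) break; // converged
  }
  return [x, y];
}

// Simulate clean ranges from a true position, then recover it.
const truth = [30, 40];
const ranges = sats.map((s) => dist(truth, s));
console.log(solvePosition(ranges)); // ≈ [30, 40]
```

With real pseudoranges the residual vector also absorbs the modelled error terms, which is the "lumped into the least squares problem" part.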


Whooping cough does not come from pigs. Pertussis is a bacterium that is a human only thing.


> Pertussis is a bacterium that is a human only thing

B. pertussis “evolved from Bordetella bronchiseptica or a B. bronchiseptica-like ancestor… B. bronchiseptica infects a broad range of mammals, including humans, and although it can cause overt disease such as kennel cough in dogs and atrophic rhinitis in pig” [1].

[1] https://www.nature.com/articles/nrmicro3235#Sec2


In some instances, it comes from feisty bureaucracies:

    The San Francisco Chronicle, December 17, 1979, p. 5 reported a claim by the Church of Scientology that the CIA conducted an open-air biological warfare experiment in 1955 near Tampa, Florida, and elsewhere in Florida with whooping cough bacteria. It was alleged that the experiment tripled the whooping cough infections in Florida to over one-thousand cases and caused whooping cough deaths in the state to increase from one to 12 over the previous year. This claim has been cited in a number of later sources, although these added no further supporting evidence.[52][53] 
https://en.m.wikipedia.org/wiki/Unethical_human_experimentat...


Basically kernel methods right?


Yes, it is there. There might be more.


I'd say a fairly large percentage would be disappointed that we let a citizen get treated like that and we did nothing as a country to assist, independent of anything else. Maybe I am out of touch though.


Here in Australia the coldest state (Tas) uses more energy (8600) than anywhere else (e.g. Qld 5500). They have basically 100% hydro power and use reverse-cycle AC for heating. Not sure what else is going on there.


You can look at a medieval bestiary to see how people thought animals might look based on descriptions alone. Like these lovely elephants

https://britishlibrary.typepad.co.uk/digitisedmanuscripts/20...


Honestly, these are some of my favorite stories, and I think more ML people need to learn about mythology (I say this as an ML researcher, btw). Once you go down this path you start to understand how "rhino" == "unicorn". You have to really think about how to explain things when you're working with a limited language. Yeah, we have the word "rhino" now, but how would you describe one to someone who has no concept of it? Maybe a cow with one big horn? Is "like a big fat tough-skinned horse with a big horn coming out of its head" accurate? Then apply your classic game of telephone[0]. It is also how you get things like the Chinese word for giraffe being "long neck deer"[1] (that doesn't work for all things in Chinese, and there's another game of telephone there too; lol, maybe I was too harsh on the British in [0]; and well... you can imagine things get warped like crazy).

There's so many rabbit holes to go down when trying to understand language, vision, reasoning, and all that stuff.

[0] (Jesus England... this is what you call this game?!) https://en.wikipedia.org/wiki/Chinese_whispers

[1] https://translate.google.com/?sl=en&tl=zh-CN&text=giraffe&op... ----> https://translate.google.com/?sl=zh-CN&tl=en&text=%E9%95%BF%...


I'd say it's not new. Take fluid dynamics as an example: the Navier-Stokes equations predict the motion of fluids very well, but you need to solve them approximately on a computer to get useful predictions for most setups. I guess the difference is that the equation is compact and the derivation from continuum mechanics is easy enough to follow. People still rely on heuristics to answer "how does a wing produce lift?". These heuristic models are completely useless at "how much lift will this particular wing produce under these conditions?". Seems like the same kind of situation. Maybe progress forward will look like producing compact models, or tooling to reason about why a particular thing happened.

