In the past, having to work with contractors from firms like Cognizant or HCL was pretty much torture. Instead of working with 2-3 in-house engineers, you get stuck coordinating 10-20 contractors who don't know the first thing about the work.
Thankfully the company recently nuked those contracts and brought everything back onshore because of what a mess dealing with those firms was lol. Literally tens of millions of dollars wasted.
I'm half convinced that's their entire business model: lure these mega companies with promises of "skilled labor" at lower rates, sign $XX-XXXM contracts, and 2-3 years later the client finally sees how bad it is and cancels. HCL and Cognizant still get paid regardless.
I have seen this myself. I have also experienced more than a few Indian colleagues who were far more critical of Indians in management than the rest of us were. I feel like there is an extra layer of dynamics that just isn't apparent if you are not accustomed to seeing it.
In a way, the harshest critics of Indian work culture are Indians themselves :) I mean it good-naturedly, as self-critique and a welcome embrace of ideas outside of the Indian mold.
Pytype is used heavily inside Google, so they likely bear the penalty more than you do. Besides, Python typing tooling is a fast-changing landscape, so this isn't out of the norm. Not everything is an abandoned project, and if anything Google tends to retire projects well after the winners and losers are apparent, e.g. TensorFlow.
"I’m teaching myself LLM internals by re-implementing the stack from first principles." - curious what resources you're using? Any books or courses, or just building it straight up? Great work!
Modal's GPU glossary is a good overview of how GPUs work [0]. Karpathy's LLM overview is a good high-level introduction to LLMs [1]. 3b1b's video on transformers (and the subsequent videos) was excellent at helping me understand the math at a high level [2]. This matrix-multiplication optimization worklog helped me write better CUDA (not a beginner intro, though) [3].
During this process I also asked ChatGPT a lot of questions.
I'm definitely open to suggestions about "how to learn" with all the new tools we have. I felt this has not been straightforward to figure out.
Q. Do any of you know models that do well at deleting objects from an image, i.e. inpainting with a mask where the intent is to replace the masked region with background? Everything I've tried so far leaves a smudge (e.g. LaMa).
You can build a pipeline where you use:
GroundingDino (text description to object detection) -> SAM (segmentation) -> Stable Diffusion model (inpainting; I mainly work with real photos, so I like to start with realisticVisionV60B1_v51HyperVAE-inpainting and swap it out for special use cases)
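One step that matters for quality in a pipeline like this is pasting the diffusion output back into the original photo: compositing through a feathered version of the SAM mask avoids a hard seam around the edit. A minimal sketch with Pillow and numpy, assuming you already have the original image, the inpainted result, and a binary mask; `composite_inpaint` and `feather_px` are my own names, not part of any library API:

```python
import numpy as np
from PIL import Image, ImageFilter

def composite_inpaint(original: Image.Image, inpainted: Image.Image,
                      mask: Image.Image, feather_px: int = 8) -> Image.Image:
    """Blend the inpainted region back over the original image.

    The binary mask (255 = region to replace) is softened with a
    Gaussian blur so the diffusion output fades into the untouched
    pixels instead of leaving a visible seam.
    """
    soft = mask.convert("L").filter(ImageFilter.GaussianBlur(feather_px))
    # Per-pixel blend weight in [0, 1], broadcast over RGB channels.
    alpha = np.asarray(soft, dtype=np.float32)[..., None] / 255.0
    orig = np.asarray(original.convert("RGB"), dtype=np.float32)
    fill = np.asarray(inpainted.convert("RGB"), dtype=np.float32)
    out = orig * (1.0 - alpha) + fill * alpha
    return Image.fromarray(out.clip(0, 255).astype(np.uint8))
```

This also means you can run the diffusion step on a cropped region around the mask and composite it back, which tends to preserve detail in the rest of the photo.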
For higher quality, at the cost of more VRAM, you can also use Flux.1 Fill for inpainting.
Lastly, Flux.1 Kontext [dev] is going to be released soon and it promises to replace the entire flow (and with better prompt understanding). HN thread here: https://news.ycombinator.com/item?id=44128322
My question wasn't about inpainting in general but about eraser inpainting models. Most inpainting models replace objects instead of erasing them, even when the prompt makes the intent to delete clear.
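One trick that sometimes reduces the leftover smudge, whatever eraser model you use: dilate the mask a few pixels before passing it in, so the model never sees the object's boundary pixels and has less to "reconstruct" from. A minimal numpy sketch of the dilation itself; in practice you'd reach for cv2.dilate or scipy.ndimage.binary_dilation, and the helper name and radius here are my own:

```python
import numpy as np

def dilate_mask(mask: np.ndarray, radius: int = 4) -> np.ndarray:
    """Grow a binary (H, W) mask by `radius` pixels using a square
    structuring element. Purely illustrative; O(radius^2) shifts,
    fine for small radii.
    """
    h, w = mask.shape
    padded = np.pad(mask.astype(bool), radius)  # zero-pad all sides
    out = np.zeros((h, w), dtype=bool)
    # OR together every shifted copy of the mask within the radius.
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out |= padded[radius + dy : radius + dy + h,
                          radius + dx : radius + dx + w]
    return out
```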
+1 on this, I've been pushing my team to measure impact more. Otherwise we remain a team whose high value goes unrealized and therefore underfunded, which hurts us all.
Very cool!! I'm unfortunately far from an AI expert. Can you share how you built this? Did you train your own model, or did you fine-tune an open-source one for climbing?