New job has no way of coding locally? (reddit.com)
50 points by thunderbong on Nov 7, 2023 | hide | past | favorite | 76 comments


On a slightly different note: in my consulting days I saw my fair share of projects that had grown so complex in their infrastructure, or were so deeply coupled to a specific version of some technology, that it sometimes took me weeks to get them working. At worst, they became impossible to set up on dev machines at all.

In my current company, I spent some time maintaining a docker-compose file just to ensure we have reproducible infrastructure and that all of it comes up with a single command.
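As a rough sketch of what that kind of file looks like (every service name, image tag, and port here is made up for illustration, not taken from any real project):

```yaml
# Hypothetical docker-compose.yml sketch; names, images, and ports are illustrative.
services:
  db:
    image: postgres:15        # pin the version so every dev gets the same one
    environment:
      POSTGRES_PASSWORD: dev
    ports:
      - "5432:5432"
  cache:
    image: redis:7
    ports:
      - "6379:6379"
  app:
    build: .                  # the application itself, built from the repo
    depends_on: [db, cache]
    ports:
      - "8080:8080"
```

With something like this checked in, `docker compose up` becomes the single command that brings the whole stack up.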

I firmly believe that ease of development has a major impact on the overall quality of the product.


Oh my god, this reminds me of one of my worst consulting gigs, where the customer had some ancient Windows NT server or something that you could only access via a screen-sharing application, and that production environment was the only place that could actually run the thing correctly. They also didn't know of any way to roll things back if anything broke. It seemed like all the devs who knew what the thing was had left 10 years ago.


I worked on a project that, to be compiled, required years-old versions of esoteric tools, long-abandoned libs, unreadable scripts, and a lot of guesswork (the docs were written by someone with very limited English). It took weeks to re-create the environment locally, and it still wasn't quite the same as the (single) build machine and occasionally produced different errors. Interestingly, this situation was to some extent deliberate: for some reason middle management hated the project but couldn't get permission from the top to sunset it, so they were slowly asphyxiating it by not allocating time for fixes, upgrades, and automation.


Holy Moses does this sound familiar. I've yet to deal with any aspect of ERP that isn't (among other things) horribly, horribly coupled to the installation's target environment.

"But of COURSE you should edit this system DLL to make a bog-simple content management system function..."

No, no, really I shouldn't. The hell planet you from?

"It's INDUSTRY STANDARD!"[1]

Oh for the love of Christ, not this again. Go sit in the corner with the other fifty-seven "industry standards".

[1] I'm going to blow a gasket the next time someone gives me this without a numbered citation or CDRL item.


A friend of mine worked on firmware that shipped on hardware made by a household name and installed en masse through the entire US. Fairly essential infrastructure.

I almost asked if you’re that friend. The difference is that with what he was working on, it was a beloved money maker that was implausible to materially change because of recertification costs.

The last I heard, they were buying second-hand IDE licenses on eBay because the IDE package had been EOL’d ~10-15 years prior.


100% agree. Any slow iteration cycle completely destroys productivity.

I'm (annoyingly) regularly preaching to my team the value of quick turnarounds on pull request reviews and manual testing, so the team completes their changes with as much feedback as possible while their heads are still in the same space. Apparently pair programming is the ultimate form of this, but I've never tried it.


> Apparently pair programming is the ultimate form of this, but I've never tried it.

I've practiced pair programming at two companies, and I really love it. Working with a colleague massively improves my focus when I'm the one coding, and when I'm not the one coding I'm really happy to be able to give instant feedback instead of waiting for PR review. (I've done it once with another senior dev, and the second time with a junior, and pair programming with a junior is probably the most efficient way of working with a junior I've ever encountered.)

There's a small drawback though: it's mentally draining, and you shouldn't expect to work as many hours as when you're solo, at least not for long periods.


IntelliJ has a Dev Containers feature; sounds like exactly that.


For a long time at Etsy, every developer had their own remotely-running VM in the datacenter, with its own FQDN. All of the dev services and so forth ran on that. You'd either edit files in a local repo and then SCP them up, or you'd just SSH in and work in Vim or whatever in a screen session.

It was actually glorious. It was very easy for devops to roll out upgrades and fixes to the VM fleet, and anyone could SSH into your VM to help diagnose weirdness. Not to mention you could just give people a link to your VM hostname in order for them to test out your development branch in their browser.

I left some years ago, so I'm not sure if that's how it still works there.


This makes sense and isn’t like the shitshow OP is talking about though. I initially assumed the same thing as you, but after reading the OP, the title is the least scary part of their situation by far.

At google it has been kinda similar to what you describe for the most part. You got a beefy (depending on your situation, you could end up lucky and snag a powerful 256GB RAM one with bajillion cores) cloud machine into which you SSH or use VSCode Remote with (or the google in-browser code editor running Monaco aka same engine as VSCode that is compatible with most usual VSCode extensions). I was apprehensive at first, but the convenience of “it just werks” is difficult to beat, especially when all of it is integrated well and is the default way of doing things.

With that, it takes me less than an hour to go from a brand new machine to building my code and running it like usual (just gotta set the dotfiles for the terminal emulator and the shell + some .zshrc tweaks). No code ever lives locally, so it takes a lot of pressure off. The only downside is being screwed with no internet access, but it has not been an issue so far in my years working there.

If I left my laptop at home or something happened to it? No problem: grab a loaner laptop, log in, set it up in under an hour, and it works just as well for the basic work tasks regardless of whether it is macOS or ChromeOS or Linux. The whole pain of setting up the environment is straight up gone, and it lets devs use whichever desktop environment they prefer (the cloud machines run a flavor of Linux anyway, so your laptop OS becomes entirely a personal preference).


Yeah Etsy also had different tiers. The top tier machines were the ones earmarked for the Search devs, because they had to run an Elasticsearch cluster or something. So the trick was to make a couple contributions to Search to justify getting upgraded to a Search VM : ]


To be fair though, even the most baseline tier of a cloud machine at GOOG is pretty nice. 64GB RAM + 32 cores (I think it might be more cores now, but I'm not certain) is good enough for pretty much anything, aside from niche specialist tasks (for which you will get a more than beefy enough machine anyway).


I worked at two other places that did this. It was expensive as all get out but provided a great dev experience. One place adopted Kubernetes super duper early (2016) to containerize these environments and save money. I wish I got on that train; my colleague who did made out much better than me! Not sure about the other one.


I have worked at places that have dev environments like this and it is my preference. You could run the stack on your local machine but in practice few did, except for small parts of it.


If memory serves, they had a push to maximize developer happiness, and then changed tack and decided that doing this was too expensive.


When you set up your editor and container system to work on a remote dev environment, I've generally found it superior to coding locally, with a few edge-case exceptions.

You can't code when offline. But I generally find it difficult to code without internet access these days anyway, and the scenarios without internet access are now vanishingly rare. Internet access whilst flying is becoming standard, and is only likely to improve.

The advantages are numerous. Firstly, the ability to work from even the most basic machine, anywhere with internet, is liberating. I can be perfectly productive with only my tablet whilst travelling.

Secondly, keeping a consistent and controlled environment that you never have to maintain or worry about breaking due to a system update, or move to new hardware.

If you use multiple devices (e.g. desktop, laptop, tablet), or perhaps have personal and business machines, not having to repeat a setup and update process is extremely convenient. As is not having to worry about your hardware: I can code from any machine capable of running a text editor, SSH and a web browser.

The key thing here is the development flow needs to be extremely convenient for the developer - no frustrating processes interrupting your code-debug loops.

But when it's executed with convenience in mind, I've found it superior in every case, and in a future filled with ubiquitous internet, I wouldn't be surprised if this becomes the norm.


... Nah, screw network dependency.

Local or bust.

Given the shitty Orwellian potential of technology though, I wouldn't be surprised if we end up streaming OSes from the cloud on mobile dumb terminals. But that's shitty.


> Given the shitty Orwellian potential of technology though, I wouldn't be surprised if we end up streaming OSes from the cloud on mobile dumb terminals. But that's shitty.

I've been doing this for years with my tablet. The tablet honestly becomes 'less dumb' in the process though.

I agree some stuff moves towards this Orwellian centralisation of power - I use Geforce Now for games + Amazon Workspaces for work.

But I've also run VNC on servers I own myself for side projects; you can own the dumb client and smart server if you need to.


Salesforce has this model of development and it's horrible. They won't let you run the code locally since the environment is proprietary. Even to run the tests, the code has to be uploaded and run on their servers.


Same in the semiconductor sector. You run everything remotely off a centralized on-prem mainframe, and your laptop is just a thin client to an X11 session on the server. At least that's how it was 8 or so years ago. Maybe by now they've started moving to the cloud.


But that's understandable. You need powerful machines to run simulations and stuff. And we could collaborate with colleagues over phone, with a shared VNC session, etc.


Ah yes, the joys of developing in Salesforce "Apex". At a previous company, we had some crazy "build scripts" that would sync a local filesystem into SF, but I recall it being pretty fragile. Also, there were annoying namespacing issues...


This was a long time ago, but I had a similar experience with Blackbaud (A Salesforce competitor). The suite of APIs required was only installed on a server so you could not run the code locally. You could either perform code edits locally and copy the files before building, or simply do all development while remoted into the dev server.


With all the time/effort Salesforce has put in to SFDX and the VSCode plugins releasing an APEX compiler must just be completely off the table. I do a lot of Salesforce work and introducing new developers to the SF way of doing things results in a lot of "wut." facial expressions.


In a previous company, we'd have people making direct edits in Salesforce. We'd then export the edits with the Salesforce CLI "force" utility, bring it into a local git repo, run some rather buggy scripts that would change the object namespaces/prefixes to a generic prefix. Then you could finally do a diff and a PR in a normal way.

We then had other scripts that would take the "generic" code and change the namespaces/prefixes for upload into QA or prod environments. It was quite painful.
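A rough sketch of what one of those prefix-rewrite scripts might look like; the `AcmeNS__` and `GENERIC__` prefixes are invented here purely for illustration (real Salesforce namespace prefixes follow the same `Name__` shape), and the demo directory is throwaway:

```shell
#!/bin/sh
# Hypothetical sketch: normalize an org-specific Salesforce namespace prefix
# to a generic one so exported code can be diffed in git. Prefixes are made up.
set -eu
FROM_PREFIX="AcmeNS__"
TO_PREFIX="GENERIC__"

rewrite_prefixes() {
  dir="$1"
  # Rewrite the prefix in every exported class/trigger file.
  find "$dir" \( -name '*.cls' -o -name '*.trigger' \) -type f \
    -exec sed -i.bak "s/${FROM_PREFIX}/${TO_PREFIX}/g" {} +
  # Drop sed's backup files once the rewrite has succeeded.
  find "$dir" -name '*.bak' -type f -delete
}

# Demo on a throwaway directory containing one exported class.
mkdir -p /tmp/sf_demo
printf 'public class Foo { AcmeNS__Invoice__c inv; }\n' > /tmp/sf_demo/Foo.cls
rewrite_prefixes /tmp/sf_demo
cat /tmp/sf_demo/Foo.cls
```

A mirror-image script (swapping the two prefix variables) would handle the upload direction described above.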


A good friend of mine is currently in a project that has quite the remote setup:

1. RDP to the consultancy's HQ.

2. From within that environment RDP to the client's machines.

All via VPN, crossing the Atlantic at one point. You can't have a meeting, because every 10 minutes or so some part of this chain breaks and you have to reconnect.

Needless to say it's not easy to get anything done this way, so the project is already delayed and losing money.

The hilarious part is that his rate is now somewhat higher than mine, but I would have to be in a precarious financial situation to join that project (in reaction to this situation, management... started looking into expanding the team).


I can top that: For one of our customers we have to

1. Enter a Windows VM on our native Linux Laptop to use the Citrix client

2. Use the Citrix client to access customer's terminal server

3. Use RDP to access a special Windows laptop

4. Enter a Linux VM on the Windows laptop

Coming back from lunch I have to enter credentials four separate times to unlock everything.


At this level I suspect it disconnects when you look at it funny. Hope the rate makes it worth it.


Years ago, I worked in support at AWS. Because of the rather peculiar way AWS accounts are linked, any company with access to AWS GovCloud had all of their accounts flagged as GovCloud accounts -- and all of their support requests were routed to my team, a team full of US citizens (most of whom either had security clearance or were pursuing it - but that's another story).

Anyway, this meant we got a really, uh, interesting collection of customer support requests -- and, more important, architectures. My favorite is what I decided to dub "the InterNAT."

The InterNAT involved a VPC subnet which had its routing rules configured to route all traffic via an EC2 NAT instance (these were the days before NAT gateway). All well and good... except that this VPC also had a VPN configured, which routed their traffic to an EC2 instance in a VPC in another of their accounts. That EC2 instance then routed the connections out of a NAT instance in its own VPC, before eventually connections reached the Internet.

What's especially interesting about this is that EC2's network stack is designed to detect and mitigate tromboning[0], but by creating this particular architecture, they had unwittingly managed to bypass those protections. So, not only were they routing through multiple NAT instances across multiple VPCs just to reach the Internet, but they were managing to route outside of the EC2 network fabric in the process, substantially increasing network latency.

Coda: Our bathrooms in the support office in Seattle had stark white walls in the stalls, which actually worked okay as whiteboards. People would doodle while taking a shit. So, of course, I diagrammed the InterNAT in a stall a few days after discovering it. Probably two years later, I saw a picture of it pop up in my Facebook feed, and had to cop to having drawn it.

Bonus: My second-favorite story (albeit much shorter) was the curious case of SSL onloading. You may have heard of SSL offloading -- using a reverse proxy or load balancer to handle SSL termination, rather than doing it on-host, in order to save resources on your actual webservers. Well, I had a customer -- also in the (in)famous GovCloud queue -- who had decided to implement the reverse. Plaintext HTTP into the load balancer, but HTTPS into the webserver. I guess this is trying to protect against the case where the call is coming from inside the house?

0. https://www.catonetworks.com/blog/the-sound-of-the-trombone/


The companies I’ve seen do this do so for one of two reasons (sometimes both): the dependencies are just too complex or security (IP loss prevention). The former reason can usually be addressed by having simulators for those dependencies. Sure, you run the risk of partly reimplementing your dependencies, but the point is to get fast, cheap feedback—not 100% faithfulness. Once fast and cheap feedback has been obtained, slow and accurate feedback is more likely to succeed.

The security concerns always miss the tradeoff: the productivity of your entire dev team now has a SPOF. When I’ve worked with “cloud dev machines,” they’re great until the cloud environment dies. Then you basically have to send your devs home for the day because there is literally nothing they can do. At least I can fix my local dev machine and not impact others when it breaks.


It’s very common in big tech, but it obviously doesn’t work like the guy on Reddit is describing.


I'm surprised more folks haven't encountered this sort of thing. All the cloud-based projects I have worked on have been basically impossible to run locally - one because it went all-in on a giant cluster of microservices, one because it went all-in on serverless, and the last because the monolith was too big to run on a local machine.

Not to mention time spent working in embedded, where your code only runs on the other side of a serial->USB adapter...


Depending on why the code cannot be run locally, this is a pretty big sign that the organization is a big mess. If you like that sort of thing, then there's a lot of fun and room for growth if you can start untangling the code.

There's always going to be some grumpy old man in the corner who likes things to stay the way they are, but many colleagues will definitely appreciate you making their lives easier, and if you're good at it, it's an easy productivity sell to management (unless management is completely fubar as well).

Sadly, it's the way at least parts of the industry are moving. You either get to work on remote VMs, in container hell, or live in production. The whole container stack gave a large number of businesses a way out of the limits VMs put on them, but now developers have mini-Kubernetes clusters running on their laptops, and they either develop directly in containers or constantly rebuild. For some things there's just no way around it, you need the infrastructure to test your work, but for others everything just starts in a container.

I partly blame complex build systems, bad package managers and poor coding practices that lock you to extremely specific versions of libraries or other software packages.


Honest question: what does "local" even mean today? I think it means different things to different organizations. Most of my work involves using a local VM to RDP to a remote VM where I edit code that then runs on a dev VM.


To me, local means on my machine which I can take offline and still get useful work done.


That makes sense. I've not had that in at least a decade.


I've never not had this since 1997. Many different companies, domains, tech stacks. Even in an embedded system we had emulators we could run on our own machine.


I'm not saying that it's not possible nor desirable, and most vendors support local dev via emulators. I'm saying that I don't bother with that anymore.


To me, the files are in the filesystem of the OS on my computer. I can edit, build, and debug with a toolchain that is running on my computer.

I may or may not have dependencies in a development environment that I call over the network, but the thing I’m looking to change is running on my machine.


Yeah, your scenario is unfortunately fairly common. When your code has to access a database or system inside the firewall, and the security folks don't want developers' local machines to access anything inside the firewall, they end up setting up these personal "local/development" VMs inside the firewall for development work, not to be confused with the development server.


For people reading this and wondering how to deal with ARCON/ARCOS-based access: just install code-server on the server and use VSCode for file management, IDE and shell access, all in a single tab.

No more session/SFTP expiry.


Oh yes, it happened to me. It was with Palantir software, an interesting experience, really, but it is a good thing it was a relatively short mission.

It was some kind of big data processing, probably called that because the entire database doesn't fit on a single floppy disk. So for some reason, they had to use an overly complex distributed architecture. Absolutely terrible: it took tens of minutes to get a result, because it was a queued job and the queue was filled with other jobs so poorly optimized that they took hours instead of seconds. I don't really blame the authors of those jobs, since there is no profiling of any kind, so you can't optimize properly; you don't even know how long your job actually took to run, because it may be stuck waiting for a resource and that still counts as runtime.

In the end, they were essentially SQL queries that any decent database (e.g. Postgres) could support, and our biggest tables were on the order of a few GB, which would be absolutely no trouble for it.

For dev work that went beyond your typical JOIN query, I ended up getting the open-source software the framework was built on, downloading the data as CSV files, and trying my stuff locally; when I was done, I copy-pasted the result into the editor. I also made a few tools using the REST API to work around some front-end problems.

The most "fun" part was when my manager explained to me what I had to do. He imported the data into Excel, and in a few minutes of Excel magic he did everything, just to show me. It took me 3 days to implement it in the framework. In other words, my manager was 100x more productive with Excel than I was with the framework, and I was rather "efficient" among my coworkers. My manager was really good with Excel, but still...

I think I got a glimpse of how it felt to work with punch cards.


I did a project for a bank in Luxembourg that worked like this. We were building a reporting system external to their core system and so needed some test data. Because they didn't have a test system they would dump live data then edit it in Notepad to keep their customer records private. Needless to say half the problems we had were related to the Notepad editing part.


We put developer machines in Kubernetes, and devs SSH in and code remotely. The VS Code Remote SSH plugin makes this really painless. And the devs get the benefit of the Kubernetes DNS to access the internal systems they need to program against.

To add a developer machine, we just push a yaml file to a git repo, which automatically spins up the virtual dev machine.
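A sketch of what one of those YAML files might amount to: little more than a Deployment running an image with sshd and the toolchain baked in. The names, image, PVC, and sizes below are all hypothetical:

```yaml
# Hypothetical dev-machine manifest; names, image, and resource sizes are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: devbox-jdoe
spec:
  replicas: 1
  selector:
    matchLabels: { app: devbox-jdoe }
  template:
    metadata:
      labels: { app: devbox-jdoe }
    spec:
      containers:
        - name: devbox
          image: registry.example.com/devbox:latest  # sshd + toolchain image
          ports:
            - containerPort: 22
          resources:
            requests: { cpu: "4", memory: 16Gi }
          volumeMounts:
            - name: home
              mountPath: /home/jdoe
      volumes:
        - name: home
          persistentVolumeClaim:
            claimName: devbox-jdoe-home   # keeps the home dir across restarts
```

Once pushed to the repo and applied by the automation, the developer connects with VS Code Remote SSH as usual.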


I trialled Telepresence[0] for my company 2 or 3 years ago, that does this sort of thing very slickly. It didn't quite work for us back then, I forget why, but I imagine it's come along a way since then.

[0] https://www.telepresence.io


Until the first CrashLoopBackOff.


This is a real problem with "cloud-native" software: half of your dependencies are proprietary online services that are impossible to run locally. LocalStack is messy and incomplete. So you might end up building two implementations, one that works locally and another that works in the cloud, and hope the behavior is close enough.


Yeah, I am still kind of baffled that there are no great solutions for this yet. The feedback loop at work is insanely slow because we manually deploy changes to a test environment. Not sure how others solve it.


When your toolchain includes large chunks of manually interfaced cloud vendor software not under your control, this becomes the reality. I have this problem! It's software that only works inside a certain vendor's software and has to talk to many services provided by that same vendor.


I have worked at places where local development was more painful than cloud development.

Some of the factors:

- Local dev required spinning up a number of resources, each of which needed to be regularly updated manually, while cloud dev environments had everything maintained automatically

- Much of the development involved IaC, which was easier to test in the cloud

- There was a need for collaborative development, and it was easier to do that on cloud dev environments

I currently work at a place where local dev is the norm, and it's fine. That said, it is a little funny to me that so many people think EVERY situation should be suitable for local dev. If I'm developing a sufficiently complex application or suite of applications, why should I expect everything to fit on my laptop?


I have never figured out how you're supposed to do local development if you use AWS serverless tools like SQS, SNS, Lambda, etc.

There are tools like LocalStack out there trying to emulate AWS, but when I tried it, it was such a hassle to use.


In my first job, our dev environment was on NFS-mounted servers - and I wouldn't be surprised if that company still follows the same model. In my last one, we started with local environments, but started moving to cloud environments (using something like repl.it) as the UX - especially running tests - was much faster on the remote dev environments.

YMMV of course, but it's horses for courses if you ask me.


Super common problem in big enterprise. Falls into three categories:

1. You can't code locally (because of super locked down computers) and have to use a VDI

2. You can code locally (i.e. your machine has an IDE), but tests and builds happen in a dev environment, or

3. You can code and test locally, but your build process is so environment-dependent, doing so is pointless.


This is my feeling exactly with any modern web product. I'm used to the entire system being run on MY processor. One language, one process. I can break and inspect the program at any point and then just resume executing.

Even just the fact that half the app might run in a browser is jarring. Desktop dev is comfy.


I worked on a system that couldn't be run locally. But in my situation everyone knew and agreed it was terrible and after many incremental changes we fixed it so you could run it offline. There were other big dev problems too like a test suite that took close to an hour. We fixed that too.


> They then have to perform a super outdated, time-consuming, and arduous method of uploading the code into a testing environment to see what their code did.

I'd bet money that multiple developers have this automated and use that time to do other things instead. Relevant XKCD https://xkcd.com/303/


My man, that's the best kind of job. Every task will take ages, and once you figure out how it all works you can still let it take ages, as you'll have a valid reason. I'd set up a local dev env and not raise awareness. Milk it until the end of time. You'll be like those folks maintaining COBOL software that no one else really understands, and who are highly paid.


Environments like that are soul-destroying. I wouldn't last 5 minutes.


There are 2 kinds of software engineers. Parent comment and Grandparent comment.


Three, there’s also the software engineer who thinks this is absolutely normal and who has no idea things can work differently (or better) and has little desire to find out or do anything other than what they’re told.


And they take it to the extreme. If the wireframe says the button is labeled "Sign Uo", you can bet that's what it'll end up labeled in production.


You run out of fucks eventually. One day everything will just be an inconvenience before the next pay day anyway.


And that’s what people call experience.

Guaranteed, once one gains it, what OP describes is a walk in the park.


At least they are not asking you to punch holes in cards.


Just like my old penetration testing days


> The way that they code is to make changes to the code by guessing

Really real computer science, because there is no other way to really know what the code may do than experimentally testing and recording empirically gathered data... /sarcasm


Any sufficiently advanced guessing is indistinguishable from programming.


I work with WordPress, and I've been working like this practically my entire career. I use programs like WinSCP to edit code directly in production over FTP or SCP. If the site goes down, well, boo hoo.

A way of avoiding problems, or hiding new features that are currently in development, is wrapping the code in: if server remote addr == my ip... do this... else do nothing. Works fine!
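A sketch of the gate being described, in PHP; the IP address is a documentation placeholder and the included file is hypothetical. Note that `REMOTE_ADDR` can be spoofed or rewritten behind proxies, so this is a convenience gate for hiding work in progress, not a security boundary:

```php
<?php
// Sketch of the "only I see it" gate. 203.0.113.7 is a placeholder (RFC 5737);
// swap in your own IP. Everything behind the if stays invisible to visitors.
$dev_ip = '203.0.113.7';

if (($_SERVER['REMOTE_ADDR'] ?? '') === $dev_ip) {
    // In-progress feature: only requests from the dev's own IP ever reach it.
    include __DIR__ . '/new-feature.php';   // hypothetical work-in-progress file
}
// Everyone else gets the unchanged page below.
```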


That’s terrifying, and I think WordPress can be stored in Git and deployed by CI/CD today.

https://spinupwp.com/advanced-wordpress-deployments-with-bud...

If all you’re ever doing is WinSCP and editing on a production server, at what point does the lack of experience with Git, and with how software is built elsewhere, become a liability in career terms, given that interviewing examples are hard to come by?


Many PHP hosts have allowed git syncing for a few years now. One issue is co-workers/clients who change things directly via FTP. Having a local copy with Docker (maybe VS Code containers) works very well for testing. I assume OP has a lot of clients who don't have the budget for test environments, which happens a lot. The SFTP workflow does work quite well; there are plugins for Sublime Text / VS Code.


At my last company, the senior dev had all 5 WordPress employees do this: FTP in, edit the files directly. He simply did not know how to use git, and did not care to learn.

To be fair, it did work. Nothing ever blew up. We had hourly backups for all the major clients. Onboarding or switching between clients had zero startup cost; it's literally just FTP and PHP files.

Happy I left though.


For modern WordPress, the roots.io ecosystem is really great (https://roots.io/), notably Bedrock for things like this:

https://roots.io/bedrock/


While not ideal, if websites aren't transactional or on the critical path of business, short, occasional downtime can be tolerable. Facebook used to push fresh PHP code straight to production multiple times per day often with obvious side effects. Who cares? It's just Facebook.


You can use an "Apache server in a box", i.e. something like XAMPP or WAMP or whatever it's called nowadays, and run an Apache web server with PHP locally, even if you're on Windows. There's no excuse to be a cowboy like that and edit on prod.


When you have to edit something different on a different site every day, you can't do that; you'd waste most of your time replicating the site locally.


I am not sure what the OP means by "The way that they code is to make changes to the code by guessing."

It looks to me like this person is not a developer, but someone who is used to copy-pasting random stuff from Stack Overflow and seeing the change live in the dev tools console of their browser. I don't even know how they got that job to begin with.



