
It's not 3 years old; we were already exploiting this when we were 14 years old, trying to find servers to host warez content, and it has nothing to do with the plugin itself: it's all about Apache's mod_php configuration. Does it allow execution of PHP files in the directory where users upload their avatars? If yes, then they can try to upload a PHP script and execute it on the server.
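
As a sketch of the usual mitigation, assuming a typical mod_php setup and a hypothetical /var/www/html/uploads avatar directory (the path and directives here are illustrative, not from the plugin):

```apache
# In the vhost or main server config: refuse to execute PHP
# anywhere under the user-writable uploads directory.
<Directory "/var/www/html/uploads">
    # Turn the PHP engine off for this subtree (mod_php).
    php_admin_flag engine off
    # Belt and braces: ensure .php files are never handled as
    # scripts, only served (or denied) as plain data.
    RemoveHandler .php .phtml
    RemoveType .php .phtml
</Directory>
```

With this in place, an uploaded `shell.php` is just an inert file instead of remote code execution.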


maybe they mean social rights


I thought the minimum was the equivalent of a month of gross salary


Another Apache disaster; blueimp's plugin has nothing to do with it. It's common for script kiddies to try to upload PHP executables to PHP sites, and sometimes it works.


I do think that my project is responsible and not Apache, since I provided sample code that was not secure by default when used as-is with a default Apache configuration.

However, I wish Apache would change their default config in a way that signals an error if an .htaccess file is present but not applied.

Something that HN user fulafel also pointed out here: https://news.ycombinator.com/item?id=18272407
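
For context, here is a sketch (not the actual shipped file) of the kind of protective .htaccess a project's sample upload directory might carry, which Apache silently ignores unless overrides are enabled:

```apache
# .htaccess in the upload directory -- only honored if the
# enclosing <Directory> block permits it (e.g. "AllowOverride All").
# With "AllowOverride None", a common default, this whole file is
# ignored WITHOUT any warning, which is the failure mode above.
SetHandler default-handler
```

`SetHandler default-handler` forces files to be served as static content rather than executed, but only when Apache actually reads the file.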


I wish <my favorite language> also had this. However, containers work. Not sure why they're doing this for Java though; I thought Java has been happy with JARs so far.


If anybody is thinking about Java, check Graal out. I was able to successfully create fast, completely static ELF binaries of real JVM applications from fat JARs (not in the fake self-executable archive way).

It's still early days in terms of what's available in the runtime (e.g. the AWT subsystem is missing), but it's pretty damn impressive anyway.


JARs are fine until you need to get someone to install a JRE. Not a problem for server apps, but definitely not painless for client distributed apps.


You can bundle everything together, or use a commercial JDK with AOT compilation support.


Neither of those options are "painless".


Is it? Often you would distribute an installer anyway, right?


We're struggling using containers for our Python and JavaScript applications. I think we're using venvs and node_modules when we shouldn't (or at least we shouldn't keep them in our project directory, where they sometimes, but not always, get overwritten by source code volume mounts).


It can be a little tricky to do things right here.

There are a few things of note:

1. The .dockerignore file can be used to prevent use of your `node_modules` folder during `docker build`, even if you have something like `COPY . .` in your Dockerfile. This lets you create a fresh `node_modules` folder from your lockfile as part of the docker build process when creating an image for testing/deployment.

2. You can maintain separate Dockerfiles and such for development and for release, so e.g. for development you might use volumes and not copy things in, but for release you wouldn't use volumes for source code.

I can't tell from what you said, but it seems like one of those two tips might be relevant.

It's okay to have a development setup which doesn't use containers and then use containers for deployment (so long as you have an integ or staging environment that uses containers). It's quite reasonable to have a venv during development but not use a venv at all inside the docker image, since things will already be reasonably isolated inside the container's fs.
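
A minimal sketch of tip 1, assuming a Node project with a package-lock.json and a one-line `.dockerignore` containing just `node_modules` (image tag and entrypoint are hypothetical):

```dockerfile
FROM node:18-alpine
WORKDIR /app
# Install from the lockfile first so this layer caches well
# across code-only changes.
COPY package.json package-lock.json ./
RUN npm ci
# The host's node_modules is excluded by .dockerignore, so this
# copy brings in source code only; the freshly built
# node_modules from the previous layer is what ships.
COPY . .
CMD ["node", "server.js"]
```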


Thanks for the suggestions! I had to check, but we do both of those things. To make sure that node_modules aren't overwritten, we mount an empty named volume to the node_modules directory. I guess mounting the named volume directory somehow causes the node_modules directory from the base image to appear inside of the bind-mounted directory. We do the same thing for venvs as well.

We use venvs inside of the Docker container because we use pipenv and apparently pipenv's support for installing to the system is buggy and/or idiosyncratic.
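
The named-volume trick described above looks roughly like this in docker-compose terms (service name and paths are hypothetical):

```yaml
# docker-compose.yml (sketch)
services:
  web:
    build: .
    volumes:
      # Bind-mount the source tree for live editing...
      - .:/app
      # ...but shadow /app/node_modules with a named volume, so the
      # host's (possibly empty or mismatched) node_modules doesn't
      # clobber the one baked into the image. On first use, Docker
      # populates the empty named volume from the image's contents,
      # which is the behavior noted above.
      - node_modules:/app/node_modules
volumes:
  node_modules:
```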


Named volumes are hacky to no end and you shouldn't be using them to make sure it's empty; you should be using `.dockerignore` or being careful about what you copy in.

The pipenv bit is probably more reasonable, and I'm afraid I haven't used it enough to be sure what rough edges are likely there.


I've been using shiv recently for my django deploys.

You might wanna check it out.

Here is a small readme I put together...

https://github.com/devxpy/shiv/blob/8d8298d21380dcf0b1970856...



Sorry if I'm a little dense; that doesn't seem like it solves any problems. It just says "use virtualenv and/or docker". I guess I was hoping for "how to manage dependencies in a localdev-friendly way for a Python/Docker app" or something.


Not sure if that helps, but here is what I do:

We just use a requirements.txt [1] for each service and run pip with the -r flag in the Dockerfile. No virtualenv in the container.

Most of the time we run these containers with docker-compose locally, but sometimes we want to run the service outside the container. For that we create a virtualenv outside the folder where we keep the source for the service. The reason is that we don't want to accidentally copy the virtualenv into the container. It wouldn't do much there, but it would increase the container size.

You can do this easily with just "python3 -m venv", but there are some tools that help with that as well. I personally use pyenv-virtualenv [2] which just keeps all virtualenvs in ~/.pyenv/versions/<env>, but there is also conda [3] and virtualenvwrapper [4] which also store the virtualenvs in a central directory.
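
A sketch of the Dockerfile side of this setup (base image, filenames, and entrypoint are hypothetical):

```dockerfile
# Install straight into the image's system site-packages;
# no virtualenv needed inside the container, since the
# container filesystem is already isolated.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "main.py"]
```

For running outside the container, the matching venv would live elsewhere, e.g. created with `python3 -m venv ../venvs/myservice` so it never sits inside the build context.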

I am not sure if there really is any more to it.

[1]: https://pip.pypa.io/en/stable/user_guide/#requirements-files

[2]: https://github.com/pyenv/pyenv-virtualenv

[3]: https://conda.io/docs/user-guide/getting-started.html#managi...

[4]: https://virtualenvwrapper.readthedocs.io/en/latest/


I'm not sure if I understand correctly. Are you saying that you run venv commands as part of the build step for your containers (i.e. a RUN line in your Dockerfile) and then pip-install modules in another build step?


No, we use pipenv instead of pip, and pipenv manages a venv. There are `--system` flags, but I guess support for installing things to the system is a little buggy or idiosyncratic or something so we use the venv behavior. I'm not sure why we tell pipenv to install the venv in the project directory though. We do the pipenv install as a RUN line in our Dockerfile. Does this answer your question?


Yeah, I think I have an understanding now. We use tagged Docker base images with the app dependencies baked in, so you can just use a FROM line in your Dockerfile and know you have the dependencies in the image; then you just add and run your app code.
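
That pattern, with entirely hypothetical registry and tag names, looks something like:

```dockerfile
# The base image is built and pushed separately with all app
# dependencies baked in, e.g. via a Dockerfile.base that runs
# the pip/npm installs.
FROM registry.example.com/myapp-base:2024-05-01
WORKDIR /app
# Only the app code changes per build, so app builds are fast
# and don't re-resolve dependencies.
COPY . .
CMD ["python", "main.py"]
```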


Yeah, we do the same thing. This seems to work pretty well, except that right now we build these images out of band of our normal build process: if you change these base images, you're expected to kick off another CI job that builds/pushes new versions and updates the versions in the FROM statements in our production Dockerfiles (and our docker-compose.ymls). I'm not sure why we're doing this, since the build cache is actually pretty good at avoiding unnecessary base image builds.


That's how we do it also. You need a process, triggered on base image rebuild, that redeploys your containers with the new base image (which has security fixes, etc.). This pipeline needs to start in dev and be applied in all your environments, so that you know you are promoting a good base image by the time you get to production.

We use Kubernetes to trigger rolling restarts with new images when we release a change to the base image, and so far it's been painless, but a lot of work went into it. We use Gitlab, but any CI/CD should allow you to do it.
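
A very rough sketch of what such a pipeline can look like in GitLab CI (stage names, image names, and the deployment name are all hypothetical; `docker build/push` and `kubectl set image` are real commands):

```yaml
# .gitlab-ci.yml fragment (sketch)
stages: [build, deploy]

build-base:
  stage: build
  script:
    # Rebuild and push the base image whenever its inputs change.
    - docker build -t $CI_REGISTRY_IMAGE/base:$CI_COMMIT_SHA -f Dockerfile.base .
    - docker push $CI_REGISTRY_IMAGE/base:$CI_COMMIT_SHA

deploy-dev:
  stage: deploy
  script:
    # Kubernetes performs a rolling restart onto the new image;
    # the same step is then promoted through staging and prod.
    - kubectl set image deployment/myapp app=$CI_REGISTRY_IMAGE/base:$CI_COMMIT_SHA
```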


Plop a requirements.txt file down in each repo. Use direnv or a similar tool to automatically manage virtualenvs on the local dev machines. In your container, don't use virtualenv; just add requirements.txt to the container and run pip install -r requirements.txt

See the linked code at the bottom of the page
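
The direnv side of this is tiny; a per-repo `.envrc` like the following (using direnv's built-in `layout` helper) auto-creates and activates a virtualenv when you cd into the directory:

```shell
# .envrc -- direnv creates a virtualenv under .direnv/ and
# activates it automatically on entering this directory
# (after a one-time "direnv allow").
layout python3
```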


There are so many places where this will never work. For example, if you're using OpenSSL and your system is missing the header files.


Which is why I run and debug locally with an interactive container; installing the packages locally is just for my editor autocompletion.


Have you tried pipenv? It creates a lock file that hashes all your dependencies, so it makes sure you get the right ones when you deploy. And it automatically creates the virtualenv if it's not there when you install them.


We do use pipenv. I like the idea of it, but I think we ran into issues with where the venv lives. We used to use `--system`, but that seemed buggy and/or idiosyncratic, so we started using a venv, and then we started using PIPENV_VENV_IN_PROJECT=true (for some reason), and now that's causing other issues. It seems unclear how pipenv is supposed to be used with Docker containers, but maybe we're just overcomplicating things?
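
For what it's worth, one common shape for pipenv inside a container looks like this sketch (base image and paths are hypothetical; `--deploy` and `--system` are real pipenv flags, with the caveat that `--system` is exactly the mode reported as flaky above):

```dockerfile
FROM python:3.11-slim
RUN pip install pipenv
WORKDIR /app
# Copy only the manifests first so dependency layers cache well.
COPY Pipfile Pipfile.lock ./
# --deploy fails the build if Pipfile.lock is out of date with
# Pipfile; --system skips the venv entirely, which is usually
# fine inside a container since the fs is already isolated.
RUN pipenv install --deploy --system
COPY . .
```

If `--system` misbehaves, the alternative is dropping it and running the app via `pipenv run`, accepting the in-container venv.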


JARs are okay, but they still require you to have the java runtime installed in order for them to execute.


You can bundle everything together, or use a commercial JDK with AOT compilation support.


I feel so much compassion for the developers of Roslyn, who were told to "re-implement the bugs from the proprietary version in the open source version".


Can you provide a source for that? I'm interested in learning what kind of bugs would need to be re-implemented.



Even something you create on the weekend belongs to Google when you are under contract with them?


Some good discussion here: https://news.ycombinator.com/item?id=2208056

Bottom line is that at least in California, employees theoretically have the right to IP developed in their own time on their own hardware, but in practice, there are numerous caveats, and litigation can be extremely risky.

One of the caveats is that the invention should not overlap with a company's "existing or anticipated line of business", and with the FAANGs being such behemoths, that can cover almost any form of software development.


Something that comes up in these discussions is the concept of “company time”.

Company resources is clear cut, but what is company time for a salaried employee? How does that change with being a remote employee?

I’ve moved to a strategic product role and honestly work less than an hour a day on average and seldom go into the office. If I decide to work on a side project does that restrict me from git commits between 8-5?


> Company resources is clear cut, but what is company time for a salaried employee?

Usually a salary comes with an expectation of hours per week/month, and usually also with software where you write down your hours, to track overtime, time off, etc. So all the time you write down in that system counts as company time.


You've had this at tech jobs? I've definitely never had to track time at any of my tech jobs (nor have I ever been eligible for overtime).


Yes, Switzerland requires time tracking, including a requirement of certain break times if your shift is long enough.


Interesting, I haven’t had a timesheet since I was an intern during college almost 20 years ago.

I didn’t realize that timesheets were still a common part of being a salaried employee in tech.


It depends what you do - I've always had to timesheet, but that's because I was working for clients, so we had to at the very least make sure we were estimating correctly, or, at my current job, that we're billing the hours I actually work.

It's not too onerous, since I only work for one client at a time, and time spent not at work is uncommon and usually in big chunks (e.g. an hour or two for a doctor's visit, days or weeks for vacation).


Like many companies, yes.

However, unlike many companies, they actually have a process where you can submit your "weekend projects" for Google to review and "gain back" explicit ownership of them.

Basically Google just checks to make sure it's not competing with anything Google's already doing, and then contractually assigns it back to you. (And anecdotally, if it does compete, you may be offered the opportunity to join said team, since it shows you're passionate about it.)


The assignment agreement took away all the excitement I had coming to work for Google. It is so shitty. The two things I submitted to that process were rejected. I've kept on building stuff in my spare time (can't stop, won't stop); if they want to come after a side project that I made in my time, on my resources, with my ideas, and my code, that's their PR nightmare.

I cannot wait until I'm in a position to work on my own stuff full time.


It's also pretty easy to contribute to FOSS projects on your own time, as long as you're OK with Google being the owner of your contributions (I personally don't care since it's open source licensed anyway). https://opensource.google.com/docs/patching/ is an almost fully public version of the process Googlers have to follow to contribute stuff to open source projects.

Disclaimer: I work at Google and maintain some FOSS code in my own time, both under my own copyright for some projects and under Google's copyright for others.


Have you encountered any project maintainers uneasy with the idea of Google "owning" that part of the source code? In practice it doesn't matter, but that part kind of confuses me.


This still kills the possibility of quickly collaborating with someone on a project in your free time. Any time you want to even start working on something together, you have to wait a couple of days for approval.

This simply doesn't work if you want to be active in a hackerspace or any other community where you iterate on projects quickly.


>This still kills the possibility of quickly collaborating with someone on a project in your free time. Any time you want to even start working on something together, you have to wait a couple of days for approval.

This depends on the project. If you and the project maintainer are ok with Google maintaining copyright over the code, there's a self-approval process that takes ~2 minutes.

And of course, that's only necessary if I'm working on the side project using company resources. If I'm not, I literally can't do the self-approval. There's only an issue if you want to retain copyright, e.g. should you want to monetize your project. Most hobby projects won't encounter those issues.


Yes. At least according to their contract in Ireland (and other locations, from what I've asked).


Has that ever been tested in court? Sounds a bit too draconian to me (but then IANAL).


Not from what I researched. But then again, would you like to go to court against Google lawyers? I wouldn't.


If you care about free software, why not run a free OS? Just wondering...


I'm really sorry for the proprietary OS users; it's really a shame that they have to give their email to Docker Inc. now, if they haven't already (Docker Hub, anyone?), every time they install Docker on a machine (once per install?). All this crying makes me wanna cry :'(


I think contributors are going to compile it from source in order to contribute?

