
One pain point I've really felt recently with Python is in the deploy step. pip installing dependencies with a requirements.txt file seems to be the recommended way, but it's far from easy. Many dependencies (such as PyTables) don't install their own dependencies automatically, so you are left with a fragile one-by-one process for getting library code in place.

It was all fine once I got it worked out, but it would be so much nicer to provide a requirements.txt file and have pip figure out the ordering and dependency resolution. That, and being able to install binary packages in a simple way from pip, would make me much happier with Python (no more waiting 10 minutes for numpy or whatever to compile).

As far as actual language features go, however, I still find Python a joy to work in.



If a package doesn't get all its dependencies installed via pip, it's because of missing information in the package itself. That's not a flaw of pip or Python; it will cause problems for any package manager.

I find the combination of virtual environments and pip very convenient to work with. When I run into trouble with missing dependencies I often find the project on GitHub and can send a pull request.

Regarding Numpy and the scientific Python stack, check out Anaconda https://store.continuum.io/cshop/anaconda/ it makes managing environments where you need these packages a lot less painful.


Not necessarily. pip cannot resolve all dependencies. For example, if a package specifies both numpy and pandas as requirements, installation will fail. This is because pandas in turn requires numpy, and pip does not resolve the dependencies in a single step; you need to install numpy first and then go on with pandas.
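The usual workaround is to split the requirements and install in two passes, making the ordering explicit (package versions here are only illustrative):

```shell
# Build-time dependency goes in its own file, installed first:
printf 'numpy==1.8.1\n'   > requirements-first.txt
# Everything that needs numpy to build goes in the second file:
printf 'pandas==0.14.0\n' > requirements-rest.txt

# Two separate pip runs instead of one:
#
#   pip install -r requirements-first.txt
#   pip install -r requirements-rest.txt

cat requirements-first.txt requirements-rest.txt
```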


In that case, couldn't you just specify Pandas as a dependency and Numpy would automatically be installed?


I don't think so; the problem is that numpy is a build-time dependency, and pip wants to resolve all dependencies before installation begins.

See https://github.com/pypa/pip/issues/1386 (oddly, the issue is closed, while the problem is acknowledged).


I see, thanks for pointing to that issue. The other issue it references, https://github.com/pypa/pip/issues/988, is still open, so I guess they're working on sorting this out.


I recently tried to deploy a desktop app written in Python. It was a nightmare. I recently taught scientific Python to prospective switchers. The installation step was a nightmare.

We really need pip wheels or conda to become mainstream. Pip alone doesn't cut it on platforms without compilers (Windows / OSX). Standalone installers are fine, but they don't resolve dependencies and they are only available for Windows.

I totally agree that the installation and deployment story should be a top priority. Once done, it would be another very compelling point on Python 3's feature list.


Wheels are pretty mainstream and are getting better fast. Forking the package management ecosystem again, just after we got over the last round of headaches there, is only going to make things worse.


One of the reasons I've come to do most of my early-stage prototyping in node is that managing npm packages has thus far proven much easier than finagling with pip or gem.

Ruby installs are especially difficult to manage. Despite the numerous tutorials out there, I still don't know what the canonical best way to install Ruby and the necessary gems is, if there even is one. RVM? Install through Homebrew? Add a path to ~/.bashrc?

I say this as someone who likes using the command line so much that I've written Caskfiles to automate my deployment to fresh OS X machines.


> Despite the numerous tutorials out there, I still don't know what, if there even is one, the canonically best way to install ruby and necessary gems is.

Wait, what? On production, install the exact ruby you need from your favorite package manager. On your dev box, install any rubies you need through rbenv [1]. Put all your gem dependencies into a Gemfile [2]. On either end, bundle install [--deployment] and call it a day.

[1]: https://github.com/sstephenson/rbenv

[2]: http://bundler.io/v1.6/gemfile.html


I install rvm on my production boxes as explained at https://rvm.io/ then I put .ruby-version and .ruby-gemset files in the application directory to select the interpreter and the gemset (I might need to have different applications running - especially on staging machines). Finally I use bundle and a Gemfile. It's pretty easy. You got another answer suggesting rbenv which is also fine.


At my company we build with Jenkins, tarball the result, and deploy from that. I haven't tried it another way, but I think it ends up being more efficient and less error-prone than doing actual pip installs during deployment.


That's basically what the company I used to work for did as well with our large python application. Build using a build script, pulling in dependencies from local build server, test, package up the result and deploy. Pip was used for pulling in libraries to the build machine and never used during deployment.
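A minimal sketch of that build-then-ship flow (paths and file names are invented):

```shell
# Build step: assemble the app and its dependencies into one tree,
# then tar the tested result.
mkdir -p build/myapp
printf 'print("hello")\n' > build/myapp/app.py   # stand-in for the real build
tar -czf myapp.tar.gz -C build myapp

# Deploy step: no pip involved, just unpack the artifact that was tested.
mkdir -p deploy
tar -xzf myapp.tar.gz -C deploy
ls deploy/myapp
```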


You should avoid using normal requirements.txt files in production. If you want to use pip, it's better to "pip freeze" your test environment, and use that requirements file to specify the production environment.

Otherwise you are just asking for nasty surprises when packages upgrade.
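i.e. something like this (the lock-file name is just a convention, and the pinned packages are examples):

```shell
# In the tested environment, capture the exact versions in use:
#
#   pip freeze > requirements-lock.txt
#
# then in production, install only those:
#
#   pip install -r requirements-lock.txt
#
# A frozen file pins every package exactly, e.g.:
printf '%s\n' 'Django==1.6.5' 'psycopg2==2.5.3' > requirements-lock.txt
grep -c '==' requirements-lock.txt   # every line is pinned
```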


Or just specify versions in your requirements.txt file to begin with.

If you want to keep up to date with security and bug fixes (but aren't yet ready for the next big feature/backwards incompatible release), you can specify the lines as 'package>=1.1,<1.2' to get 1.1.x fix releases.

`pip list --outdated` is helpful, too.


Just a note so people don't get confused, while specifying packages with >=1.1,<1.2 seems to be similar to the tilde in npm, in practice it isn't.

Basically, when you use >=1.1,<1.2 it will install the best version that matches at the time of first install, and then that version will never be upgraded because it will always satisfy the requirements. So you don't actually get 1.1.x release updates unless you install them manually.

We do, however, use this syntax in development when testing new versions, to make sure any subsequent runs of pip don't obliterate the new versions of modules we are testing.

I'd love official pip support for ~1.1 type declarations.
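So with range specifiers the upgrade has to be explicit, something like ("package" and the versions are placeholders):

```shell
# A range line in requirements.txt:
echo 'package>=1.1,<1.2' > requirements.txt

# The first run resolves the range once (say, to 1.1.3); later runs of
#
#   pip install -r requirements.txt
#
# are no-ops, because 1.1.3 still satisfies the range. To actually pick
# up a newer 1.1.x fix release, you have to ask for the upgrade:
#
#   pip install --upgrade -r requirements.txt
```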


Long compile times can be fixed with pre-compiled wheels: http://wheel.readthedocs.org/en/latest/ — I shaved 4 minutes off our build time for numpy/pandas with them.
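The flow is roughly as follows (the directory name is arbitrary):

```shell
mkdir -p wheelhouse

# On a build machine (once per platform), compile the wheels:
#
#   pip wheel --wheel-dir=wheelhouse numpy pandas
#
# On deploy, install the pre-built binaries instead of compiling:
#
#   pip install --no-index --find-links=wheelhouse numpy pandas

ls -d wheelhouse
```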


(dormant) PyTables developer here; we do have a requirements.txt file, but I understand our setup.py needs an update. Please open an issue on GitHub and tell us about your experience and how we can improve it.


I will soon, thanks!


Check out devpi (http://doc.devpi.net/latest/). We build our projects as wheels with pip using devpi as a custom index, i.e.:

BUILD_DIR="/tmp/wheelhouse/$PROJECT" pip wheel --wheel-dir=$BUILD_DIR --find-links=$BUILD_DIR --index=http://$DEVPI_HOST:$DEVPI_PORT/${DEVPI_USER}/${DEVPI_INDEX} --extra-index-url=https://pypi.python.org/simple/ ../$PROJECT

This will try to get packages from your devpi index, but will fall back to pypi if you don't have them.

Then upload this to devpi. It can later be installed by pip with:

pip install --use-wheel --index=http://$DEVPI_HOST:$DEVPI_PORT/${DEVPI_USER}/${DEVPI_INDEX} $PROJECT

This makes deployments super fast because you're deploying pre-built wheels instead of downloading and compiling from pypi. It also gives you resilience by storing copies of the dependencies you need in devpi, so if they vanish from pypi (or it's unavailable) you can still deploy your software with all the dependencies you developed against.


There is work going on in the python packaging SIG to make installing better. One of the big additions for pip is the ability to install pre-built wheels for those "hard to build" packages. There is also work happening on a new package metadata standard.

Unfortunately, that's all I've learned from lurking on the mailing list for a couple weeks.


As several people have mentioned already, wheels (http://wheel.readthedocs.org/en/latest/) make deploying much easier (and faster), without needing to get anything from pypi.


There's nothing like a good old deb (or rpm) package. Learn fpm[1] and bundle your dependencies instead of hoping that they get there.

[1] https://github.com/jordansissel/fpm
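e.g. something along these lines (app name, version, and paths are made up):

```shell
# Stage the app plus its dependencies into a directory tree...
mkdir -p pkgroot/opt/myapp
printf 'placeholder\n' > pkgroot/opt/myapp/app.py

# ...then let fpm turn that tree into a deb:
#
#   fpm -s dir -t deb -n myapp -v 1.0.0 -C pkgroot opt/myapp
#
# dpkg/apt then handles install, upgrade, and removal like any other package.

ls pkgroot/opt/myapp
```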


In my experience, you can bundle the dependencies and then add their path to your search path.

Not the best approach, but it does work.
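i.e. ship a vendor directory alongside the app and point Python at it (the layout here is invented):

```shell
# Vendored dependency next to the app:
mkdir -p vendor
printf 'GREETING = "hi"\n' > vendor/mylib.py

# The app can find it either via the environment...
#
#   PYTHONPATH=./vendor python app.py
#
# ...or by prepending the path in code before importing:
#
#   import sys, os
#   sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'vendor'))

ls vendor
```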


`pip freeze`



