[RFC] Binary Dependencies on Linked Repos #427
The second solution seems the best to me, but sudo is a problem, because Travis would then not run in a container and would thus be slower. Maybe we can pass packages to Travis some other way than the packages section of the .travis.yml file?
Yeah, the second solution is definitely less of a square-peg-through-a-round-hole solution, for sure. I did some preliminary research when making this RFC, and I was not able to find a Travis-supported apt install mechanism outside of the config file.
From a theoretical perspective, it may be possible to edit an environment variable in an earlier step. Does anyone know offhand the order that the config keys are run in, specifically the install-related ones?
@moylop260 can maybe help.
That would fail with `src/pyodbc.h:56:17: fatal error: sql.h: No such file or directory`. See https://odoo-community.org/groups/contributors-15/contributors-49577 and OCA/maintainer-quality-tools#427
I'm thinking that the problem is that the Python library pyodbc is in requirements.txt while the required headers are in .travis.yml. If you just put the library installation in .travis.yml, we won't have all these problems.
There is no way to install ODBC support without a binary package. The headers (`sql.h`) are required in order to build pyodbc via pip.
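For reference, Travis can pull in those headers via its apt addon; a minimal sketch of the relevant `.travis.yml` section (assuming `unixodbc-dev` is the Debian package providing `sql.h`, per the error above):

```yaml
# .travis.yml - sketch: install the ODBC headers so pip can compile pyodbc
addons:
  apt:
    packages:
      - unixodbc-dev
```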
Ohhhh, I see your point after rereading, @pedrobaeza - yeah, I think this would totally work. I'll submit a PR. Technically this just masks the problem, though, so this solution is closer to my original point 1 than our preferred point 2. It will fix the build while we get our ducks in a row, though!
Yeah, it's more like point 1, but it avoids changing the .travis.yml files of all repos while we work out solution 2. There's only one problem: this can screw up runbot via Docker. Let's see...
We'll be good with Runbot - T2D honors the install steps of the Travis file. The only thing it will screw up, I think, is if a repo becomes interdependent with one of these database modules. The assumption would be that the library is installed, but that would not be the case with the recursion.
I'm coming in late here - was the issue fixed?
Only partially - we still need a solution for recursive binary installation. Do you happen to know if there's another way to install these?
We have fixed some similar cases by downloading and adding the binary directly to the build environment. I don't know if it's the same case for unixodbc-dev.
Dang - and it looks like environment variables aren't honored in the apt packages section. OK, so then maybe back to the original point 1, which would be to simply ignore pip install failures for recursive repos. The one catch would be somehow still installing the non-failing libraries from the requirements file. Too bad pip doesn't have an ignore-errors flag or something.
If we ignore pip failures, we could install the requirements one at a time, so the non-failing ones still get installed.
Hmmm, so that would also mean parsing the requirements file, instead of doing a straight `pip install -r`. Sounds like a decent enough plan to me; while not the best solution, it would provide a more scalable fix and keeps all the Python packages in the requirements file instead of moving some to the Travis file. Assuming this sounds plausible to you as well, @moylop260, I'll look into the implementation.
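A per-line install loop along those lines could look like this - a minimal sketch, assuming a flat requirements file (no `-r` includes); `parse_requirements` and `install_leniently` are hypothetical names, not actual MQT code:

```python
import subprocess
import sys


def parse_requirements(text):
    """Extract requirement specifiers, skipping comments, blanks, and pip options."""
    reqs = []
    for line in text.splitlines():
        req = line.split("#", 1)[0].strip()  # drop inline comments
        if not req or req.startswith("-"):   # skip options like -r / -e
            continue
        reqs.append(req)
    return reqs


def install_leniently(path):
    """Install each requirement on its own; collect the ones that fail to build."""
    failed = []
    with open(path) as f:
        reqs = parse_requirements(f.read())
    for req in reqs:
        # A non-zero exit (e.g. missing sql.h) only skips this one package.
        if subprocess.call([sys.executable, "-m", "pip", "install", req]) != 0:
            failed.append(req)
    return failed
```

The returned `failed` list could then be used to blacklist the modules that depend on those packages, instead of failing the whole build.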
I have not performed this work for MQT. We did, however, implement dependency management in Doodba: |
And that's the main reason for all the runbot discussion: not pip vs. YAML vs. whatever. It's about having a Docker base image that handles all of these dependencies. I haven't had much time to write about this on the issue, especially as it became very polarized with the flame war, but this is the most important thing to achieve (not having MQT installable via pip) for having good CI/staging.
I actually think the key point here is that we need to move it out of Docker. These scripts are built in Python and should benefit the whole community, not just those who use Docker. In this case, the library itself is abstract, and new package managers can easily be added with our implementation. The real question, IMO, is where do we put it?

Really, it's just dependency management for any project, based on a defined file hierarchy. It's not Odoo-specific, nor is it Linux/Docker-specific - it just requires Python and a user with proper permissions.

I think we're all just being a bit too specific in our definitions, really. We're all talking the same language, just a different dialect. Once we find our common tongue, we can begin to strip out the slang.
There's even stuff like BinDep.
But we can't abstract libraries away from the OS host and so on, so I think the solution is to use Docker as an abstraction layer. Anyway, I understand this is a bit off-topic for the specific issue, but I needed to throw out the whole Docker question, although not in the proper place 😜
You're totally right that there is a point where we must become technology-aware. In this case, though, we're just executing arbitrary commands - only the implementation needs to be aware. The most we need to know at the library level is which package managers we support, which is currently apt and pip. With that, we cover basically everything Debian-based - with or without Docker. I would probably add others as the need arises.
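That abstraction could be sketched as follows - hypothetical class names, not the actual MQT or Doodba implementation; only the subclasses know anything about the underlying technology:

```python
import shutil
import subprocess


class PackageManager:
    """Minimal abstraction over a system package manager (hypothetical API)."""

    binary = None       # executable to look for on PATH
    install_cmd = None  # argv prefix used to install packages

    @classmethod
    def available(cls):
        """True when the manager's executable exists on this host."""
        return shutil.which(cls.binary) is not None

    @classmethod
    def install(cls, packages):
        """Run the manager's install command for the given package names."""
        subprocess.check_call(cls.install_cmd + list(packages))


class Apt(PackageManager):
    binary = "apt-get"
    install_cmd = ["sudo", "apt-get", "install", "-y"]


class Pip(PackageManager):
    binary = "pip"
    install_cmd = ["pip", "install"]


def supported_managers():
    """Return the package managers present on this host."""
    return [m for m in (Apt, Pip) if m.available()]
```

Adding a new package manager is then just another subclass with its own `binary` and `install_cmd` - no changes to the library-level code.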
Closing this without a clear solution, but the scope is very limited, so we will continue manually adding the needed binary dependencies to the repos that need them.
We have an issue that's about to start popping up in repos due to OCA/server-tools#659 & OCA/server-tools#642. In these PRs, I had to add some binary dependencies in order to properly test the implementations. I didn't think about the implications of this, but one of my builds in product-attribute just failed against the merged 10.0 one, because pip doesn't have the SQL headers.

For the moment, I just added the binary dependencies in the Travis file to 🍏 my build. I might be able to come up with a solution that completely mocks out the external Python dependencies, but that seems kind of hacky and really just covers up a problem we're going to start seeing more of as repos become interdependent.
As I see it, we have two options:

1. Ignore pip install failures coming from the interdependent repos
2. Define the binary dependencies in an apt_dependencies.txt & install them with a script

Honestly, I think the former is the best approach. But if we do that, we also need a way to block off the modules that depend on the packages that will not be there, in order to not fail the build.
The latter approach would work, but I'm pretty sure it would require `sudo: true` in all our Travis files in order for a script to be able to install binaries. I haven't tested, though, so maybe I'm wrong.

Anyone have thoughts?
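The script-based option could start as a small collector that walks the checked-out repos for their apt_dependencies.txt files and installs the union - a hypothetical helper, assuming one flat package name per line with optional `#` comments:

```python
import glob
import os


def collect_apt_dependencies(root):
    """Gather the union of package names from every apt_dependencies.txt under root."""
    packages = set()
    pattern = os.path.join(root, "**", "apt_dependencies.txt")
    for path in glob.glob(pattern, recursive=True):
        with open(path) as f:
            for line in f:
                pkg = line.split("#", 1)[0].strip()  # drop comments and blanks
                if pkg:
                    packages.add(pkg)
    return sorted(packages)
```

The resulting list would then be handed to a single `apt-get install` invocation, which is where the `sudo: true` requirement above would come in.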