Remove 'package_revision_mode' #30
Conversation
Will the `recipe_revision_mode` remain? We use that in some of our recipes. Do we need to update to something else?
Yes, it will remain. A different discussion is if we want to have the `recipe_revision_mode` as the default mode.
Hm. Also, when only uploading a recipe, and for the same version of the package there are several revisions of this recipe, how do we ensure something is reproducible? If a later revision adds a new dependency, how could a recipe at rev1 be transferred to a developer who needs to test something with this dependency build?
You can get the same package_id of the current recipe being modified. But consumers of this one that use `recipe_revision_mode` will get a new package_id.
That is a different discussion, not related to the removal of `package_revision_mode`.
Regarding the process, there are some things that can help to reduce unexpected upgrades, for example to a new, not fully tested revision. The most typical is putting new things in a temporary server "build" repo. That new recipe revision, for example, can be used from there in builds, CI, etc., but until it is promoted (copied) from the "build" repo to the "production/develop" repo, it will not become the latest revision.
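A minimal sketch of such a promotion, assuming remotes named `build` and `production` are already configured (the reference `mylib/1.0@` is only illustrative):

```python
import subprocess

# Download the candidate revision (recipe + binaries) from the "build" repo
# into the local cache, then re-upload it to the "production" repo.
ref = "mylib/1.0@"  # hypothetical reference
subprocess.run(["conan", "download", ref, "-r", "build"], check=True)
subprocess.run(["conan", "upload", ref, "-r", "production",
                "--all", "--confirm"], check=True)
```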
Thanks for the explanation, @memsharded.
Yes, this is what we did, having the lockfile in the git repo. If a developer goes back to version1 of a product and makes version1.1, they should get, unless explicitly requested otherwise, the exact same dependency package recipes as they had before, even if those have changed in the meantime (for the same version of a dependency). Will this still be possible in the future?
Yes, sure, this proposal doesn't affect that (as long as you were not using `package_revision_mode`).
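For illustration, a minimal sketch of that workflow with Conan 1.x lockfile commands (the file names here are just illustrative):

```python
import subprocess

# Capture the exact resolved dependency graph (including revisions)
# in a lockfile, which can then be committed to the git repo.
subprocess.run(["conan", "lock", "create", "conanfile.py",
                "--lockfile-out=conan.lock"], check=True)

# Later (e.g. branching version1 -> version1.1), install against the lock
# to reproduce exactly the same dependencies as before.
subprocess.run(["conan", "install", ".", "--lockfile=conan.lock"], check=True)
```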
Ah, ok, now I see, sorry for my confusion.
This is exactly what we are trying to remove. That feature of `package_revision_mode` was supposed (if working correctly) to address that use case, but the original fallacy is that the same binary shouldn't be built by 2 different machines in the first place: only one of the built artifacts (with a given package-revision) is correct, and the other artifact (with another package-revision) is incorrect.
I think this is reasonable at the dependency graph level. The main use where I see package_revision having value is the aftermath of a compiler code-generation bug: Conan's settings track MSVC/GCC only to major versions, so if there were such a bug, it might be necessary to force rebuilds with a fixed compiler. These would come from the exact same recipe and source, so there would be no recipe_revision nor package_id change, and yet the old binaries might be badly broken. Identifying affected things that need to be rebuilt isn't going to be automatic, since this is just outside of Conan's model. But it seems pretty important that, once the new builds hit Artifactory (or conan-center), consumers actually pick up the fixed binaries.
This is exactly what we are referring to as a lack of model, and for example why we added the `msvc` compiler to the settings:

```yaml
msvc:
    version: [190, 191, 192, 193]
    update: [None, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    runtime: [static, dynamic]
    runtime_type: [Debug, Release]
    cppstd: [14, 17, 20, 23]
```

That is one option: to model exactly which compiler version you are using, if you want Conan to be able to track the compiler minor versions. The other approach, if you knew that happened and you have no configuration model that identifies it, would be to force a rebuild of everything. Everything will get a new re-build and a new package revision, and then you can move again cleanly from there. What we want to avoid is the case where one package gets a forced re-build because this case was identified, and then relying on Conan to know that it should re-build consumers of that package because the package got a new package revision.
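As a sketch, forcing that full rebuild from source could look like this (in Conan 1.x, a bare `--build` flag rebuilds all requirements):

```python
import subprocess

# Rebuild every package in the graph from source; each rebuilt binary
# gets a new package revision, giving a clean state to move on from.
subprocess.run(["conan", "install", ".", "--build"], check=True)
```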
Yes, exactly, the package revisions still exist; they will be stored and tracked, and they will be resolved to latest by default. The proposed change is that such a package_revision cannot be used as a factor to compute the consumers' package_id.
A package that produces completely different builds with no changes to recipe/sources is certainly possible (and such packages have caused some headaches for me in the past; looking at you, OpenCV). So I'd also be interested in whether it would still be possible to have the package revisions in the end product's lockfiles, so that if a dependency got rebuilt for some reason (maybe caches got cleaned) and produced a different package, the product build would not pass. We were guarding against that with the `package_revision_mode`.
If it is only the lockfiles, that is a different story, not really related to `package_revision_mode`.
This is what we describe in the "motivation" section as something that should not happen, and if it happens it is a model or process error. First, if a rebuild with the same conditions (same source code, same recipe, same configuration, etc.) is done without a reason, the binary should be the same (not down to the bit level, because C++ native binaries are not deterministic in the general case due to timestamps and other things), and it should be perfectly binary compatible, not requiring a re-build of consumers. And if something changed that affected the binary, so the binary is different and requires rebuilding consumers, that is something that should be modeled somewhere; it shouldn't happen without control.
Ok, yes, let's keep working on the 2.0 lockfiles to provide for this case correctly.
To add some more detail about a scenario I had that might paint a clearer picture: after one app rebuild we noticed the CPU usage went to 100%. After investigating, it turned out that the app was using a rebuilt OpenCV binary which was built using the same source and recipe (I don't recall why it was rebuilt; I think we were still early into Conan usage and some caches really might have been cleaned). However, this new binary was using OpenMP as the threading backend (which, as I understood, uses busy waiting in "idling" threads) instead of Intel's TBB. This happened because OpenCV's CMake scripts had a fallback mechanism for threading backends where they would try to fetch one and, if that one's not available, fetch another one. Between the builds, Intel changed their project organization to oneAPI, which in turn broke the link OpenCV was trying to fetch. 😃 Now the solution here would probably have been to use TBB directly as a Conan package, but the team was tight on resources, so we were OK with letting some packages manage their own dependencies if they were the only users of those dependencies (which was the case for TBB). But I'd prefer a build to not go through in these situations rather than rely on other packages getting their processes right (which I feel is often not the case). Again, I'm guessing that if package revisions were fixed in an end product's lockfile, a build break would be the outcome if it came to that.
I think we agree on this 😄 I am not saying that these errors won't happen; they happen from time to time, in this case an error in the OpenCV CMakeLists + package recipe that produces a completely different binary because something changed in the environment, and suddenly it is linking against a different threading backend. What the proposal is suggesting is that when this kind of error happens, something like forcing a rebuild of the affected packages would solve it. Doing a forced rebuild (for example `conan install ... --build=opencv`) creates a new, correct binary with a new package revision, which becomes the latest.
I think we are also now in a better place: with so many recipes in ConanCenter, some of them heavily iterated and mature, providing good support for many scenarios, it is easier to apply the right solution, like using TBB from a Conan package; and maybe the OpenCV recipe has improved and avoids this possibility via explicit options (https://github.com/conan-io/conan-center-index/blob/2486b313b8c92ac1c1f02e2cc46986b43002c2dd/recipes/opencv/4.x/conanfile.py#L25). Hopefully, putting effort into implementing the right approaches eliminates the need for overly protective mechanisms that shouldn't be necessary otherwise.
It is still not fully clear how a lockfile would fully solve it by locking a previous package revision. If at some point you have a defective package revision, you eventually need to remove it, or to create an extra new package revision that solves the defect and becomes latest. In the case you are describing, it seems that you don't want to rebuild all the consumers using the other backend; instead you want to fix the OpenCV package to keep using the original backend you intended to use in the first place. That can only be fixed by patching the OpenCV recipe (which will create a new recipe-revision, problem solved), or by fixing the environment and creating a new package revision that becomes latest. In any case, let's try to make sure that the 2.0 lockfiles manage package-revisions too.
I would like to elaborate a bit further on the idea that there are better, explicit mechanisms to deal with the potential problems of problematic setups. With this proposal we don't pretend to say that there are no complicated situations in which some package binary might be wrong and something should be done to fix it. What we are saying is that `package_revision_mode` is not the right tool for that. Let's say that we are facing a challenging, fragile and problematic setup of packages, which could happen for different reasons, like very old and unmaintained build scripts, or uncontrolled build configurations. Of course the right solution would be to fix those original problems, but in case that is not possible due to time restrictions or the like, an explicit model can be added to Conan to deal with it. We should at least assume that usually (most of the time) a sane build is happening, and then something like `package_revision_mode` shouldn't be necessary by default. Introducing an explicit, managed solution to this can be done in a few minutes with two steps, as sketched below.
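One possible shape for such an explicit mechanism (this is an illustration, not literal steps from the proposal; the setting name `env_fix` is hypothetical): first, append a custom setting to `settings.yml`, e.g. `env_fix: [None, "1", "2"]` at the top level; second, declare it in the fragile recipe so that bumping it in a profile produces a new `package_id`:

```python
from conans import ConanFile

class OpenCVConan(ConanFile):
    name = "opencv"
    # Step 2: declare the custom setting (added to settings.yml in step 1)
    # so it becomes part of this package's package_id. Bumping it in a
    # profile, e.g. "opencv:env_fix=1" under [settings], forces a new
    # binary to be built and resolved, without touching sources or the
    # rest of the configuration model.
    settings = "os", "compiler", "build_type", "arch", "env_fix"
```

With the default value `None`, nothing changes for sane builds; the setting only takes effect when a profile explicitly bumps it for an affected package.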
And that's it. With those 2 steps, which can be done in minutes (assuming that you are distributing your configuration with `conan config install`), the problematic scenario is explicitly modeled and under control.
I was thinking of a feature where I'm able to say "if I'm not sure whether the dependencies of a package are EXACTLY the same as the last time it was built (which I feel can only be true if their package_ids are the same), then someone should look into what changed and why". This seems like something that lockfiles might offer. All in all, I am now in favor of removing `package_revision_mode`.
Thanks very much all for the feedback. This proposal got positive support. Besides the feedback in this thread, we have had several conversations with tribe members, and everything is looking good, so merging this as accepted.
Conan 2.0 will remove the `package_revision_mode` from the `default_package_id_mode` configuration and from the `package_id()` method definitions. The package revision will never be part of the "package_id" of consumers.

The current 1.X functionality that allows taking the `package_revision` of dependencies into account in the computation of consumers' `package_id` seems to be serving an ill-formed scenario in which package binaries are unnecessarily re-built without any changes in source code, configuration or dependencies, which should never happen according to the `package_id` logic.

This mode also introduces uncertainty, as it inhibits the computation of downstream consumers' "package_id" until the upstream packages have been built, and forces the definition of `"Package_ID_Unknown"` identifiers, which cannot be known until the dependencies are re-built from source; the action (build, download, etc.) is then also unknown when computing the dependency graph, which is very undesirable for advanced use cases like Continuous Integration, as it requires the constant re-evaluation of the dependency graph.

The complexity that this feature produces in the codebase is also problematic and can affect the robustness of other features, such as binary compatibility definitions.

So this PR proposes to remove this case, and rely on modes like `recipe_revision_mode`, which can provide "exact" dependency re-building for any potential change in source code or configuration that requires re-building.
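For reference, a minimal sketch of how a recipe can opt into `recipe_revision_mode` for a dependency in Conan 1.x (the package names are illustrative):

```python
from conans import ConanFile

class ConsumerConan(ConanFile):
    name = "consumer"
    version = "1.0"
    requires = "mylib/1.0"

    def package_id(self):
        # Tie this package's binary to the exact recipe revision of "mylib":
        # any change to mylib's recipe or sources (a new recipe revision)
        # yields a new package_id here, so consumers are rebuilt explicitly,
        # without depending on package revisions.
        self.info.requires["mylib"].recipe_revision_mode()
```

The mode can also be set globally in Conan 1.x with `conan config set general.default_package_id_mode=recipe_revision_mode`.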