[llama-cpp] add new port (and its ggml dependency) #43925
base: master
Conversation
A review comment on the port's `portfile.cmake`:

```cmake
vcpkg_cmake_install()
vcpkg_cmake_config_fixup(PACKAGE_NAME llama CONFIG_PATH "lib/cmake/llama")
```
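For orientation, here is a minimal sketch of how these two calls typically sit in a full portfile. Everything besides those two lines (repository reference, SHA512, option names) is a placeholder or an assumption, not the actual contents of this PR:

```cmake
# Hypothetical skeleton of the llama-cpp portfile; values are placeholders.
vcpkg_from_github(
    OUT_SOURCE_PATH SOURCE_PATH
    REPO ggerganov/llama.cpp
    REF "b${VERSION}"   # assumption: upstream tags releases as bNNNN
    SHA512 0            # placeholder; the real hash must be filled in
)

vcpkg_cmake_configure(
    SOURCE_PATH "${SOURCE_PATH}"
    OPTIONS
        -DLLAMA_BUILD_TESTS=OFF      # assumption: upstream exposes these toggles
        -DLLAMA_BUILD_EXAMPLES=OFF
)

vcpkg_cmake_install()
vcpkg_cmake_config_fixup(PACKAGE_NAME llama CONFIG_PATH "lib/cmake/llama")

vcpkg_install_copyright(FILE_LIST "${SOURCE_PATH}/LICENSE")
```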
I had additionally:

```cmake
vcpkg_cmake_config_fixup(CONFIG_PATH lib/cmake/Llama PACKAGE_NAME llama)
vcpkg_replace_string("${CURRENT_PACKAGES_DIR}/share/llama/LlamaConfig.cmake" "/../../../" "/../../")
vcpkg_replace_string("${CURRENT_PACKAGES_DIR}/share/llama/LlamaConfig.cmake" "${CURRENT_INSTALLED_DIR}" "\${PACKAGE_PREFIX_DIR}")
```

I think it has hardcoded paths to the CUDA libs somewhere?
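The two `vcpkg_replace_string` calls compensate for `vcpkg_cmake_config_fixup` moving the config from `lib/cmake/Llama` (three levels below the install prefix) to `share/llama` (two levels), and for absolute `${CURRENT_INSTALLED_DIR}` paths baked into `LlamaConfig.cmake`. A minimal consumer sketch, assuming upstream exports a plain `llama` target (an assumption, not verified from this PR):

```cmake
# Hypothetical consumer CMakeLists.txt against the vcpkg-installed port.
cmake_minimum_required(VERSION 3.16)
project(demo CXX)

find_package(Llama CONFIG REQUIRED)   # resolves share/llama/LlamaConfig.cmake

add_executable(main main.cpp)
target_link_libraries(main PRIVATE llama)   # assumed exported target name
```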
@Neumann-A thanks for the suggestion. This is, in fact, just quick scaffolding. Why didn't you push the port here yourself? I was sure I'd find it here already, from you or someone else; it's such a common library nowadays 😄
In any case, I'll work on the port over the next few days, aiming for a green light everywhere, unless you already have a working version and are ready to upstream it.
Force-pushed from b99d183 to f0ae0d4.
Commits:

- exclude arm+windows, unsupported
- exclude uwp, unsupported
- exclude android, unsupported
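If those exclusions are expressed as a platform expression, the port's `vcpkg.json` would carry something like the following "supports" clause; this is a sketch of the field, not a quote from the PR:

```json
{
  "name": "llama-cpp",
  "supports": "!(arm & windows) & !uwp & !android"
}
```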
Force-pushed from 9c3a432 to f00b565.
New port checklist:

- All `find_package` calls are `REQUIRED`, are satisfied by `vcpkg.json`'s declared dependencies, or disabled with `CMAKE_DISABLE_FIND_PACKAGE_Xxx` (see the sketch after this list).
- The versioning scheme in `vcpkg.json` matches what upstream says.
- The license declaration in `vcpkg.json` matches what upstream says.
- The version database is fixed by rerunning `./vcpkg x-add-version --all` and committing the result.
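To illustrate the `CMAKE_DISABLE_FIND_PACKAGE_Xxx` escape hatch from the first item: an optional dependency that is deliberately not declared in `vcpkg.json` can be hard-disabled at configure time. `CURL` is used here only as a plausible example of an optional `find_package` in llama.cpp, not a confirmed one:

```cmake
# Hypothetical: force an optional find_package() to report NOT FOUND
# instead of silently picking up a system copy.
vcpkg_cmake_configure(
    SOURCE_PATH "${SOURCE_PATH}"
    OPTIONS
        -DCMAKE_DISABLE_FIND_PACKAGE_CURL=ON
)
```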