
[llama-cpp] add new port (and its ggml dependency) #43925

Draft · wants to merge 2 commits into master
Conversation

Contributor

@cenit cenit commented Feb 19, 2025

  • Changes comply with the maintainer guide.
  • The name of the port matches an existing name for this component on https://repology.org/ if possible, and/or is strongly associated with that component on search engines.
  • Optional dependencies are resolved in exactly one way. For example, if the component is built with CMake, all find_package calls are REQUIRED, are satisfied by vcpkg.json's declared dependencies, or disabled with CMAKE_DISABLE_FIND_PACKAGE_Xxx.
  • The versioning scheme in vcpkg.json matches what upstream says.
  • The license declaration in vcpkg.json matches what upstream says.
  • The file installed as the "copyright" file matches what upstream says.
  • The source code of the component installed comes from an authoritative source.
  • The generated "usage text" is accurate. See adding-usage for context.
  • The version database is fixed by rerunning ./vcpkg x-add-version --all and committing the result.
  • Only one version is in the new port's versions file.
  • Only one version is added to each modified port's versions file.
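To illustrate the optional-dependency rule in the checklist above, a portfile can resolve an optional `find_package` call in exactly one way by disabling it outright. The sketch below is illustrative only; the `LLAMA_CURL` option name is an assumption and is not taken from this PR's diff:

```cmake
# Hypothetical fragment: make sure the optional CURL dependency
# is resolved in exactly one way (here: always off).
vcpkg_cmake_configure(
    SOURCE_PATH "${SOURCE_PATH}"
    OPTIONS
        -DLLAMA_CURL=OFF                      # do not build the feature...
        -DCMAKE_DISABLE_FIND_PACKAGE_CURL=ON  # ...and ensure find_package(CURL) can never succeed
)
```

The alternative resolution would be to declare the dependency in vcpkg.json and mark the `find_package` call `REQUIRED`; either way, the result must not depend on what happens to be installed.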

@cenit cenit marked this pull request as draft February 19, 2025 23:17
)

vcpkg_cmake_install()
vcpkg_cmake_config_fixup(PACKAGE_NAME llama CONFIG_PATH "lib/cmake/llama")
Contributor

I had additionally:

vcpkg_cmake_config_fixup(CONFIG_PATH lib/cmake/Llama PACKAGE_NAME llama)
vcpkg_replace_string("${CURRENT_PACKAGES_DIR}/share/llama/LlamaConfig.cmake" "/../../../" "/../../")
vcpkg_replace_string("${CURRENT_PACKAGES_DIR}/share/llama/LlamaConfig.cmake" "${CURRENT_INSTALLED_DIR}" "\${PACKAGE_PREFIX_DIR}")

I think it has hardcoded paths to the CUDA libs somewhere?
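Schematically, the two `vcpkg_replace_string` calls above rewrite the installed `LlamaConfig.cmake` so that it is relocatable rather than tied to the build-time install prefix. The excerpt below is a hypothetical illustration; the real file contents are not shown in this thread:

```cmake
# Hypothetical line in LlamaConfig.cmake before the fixup:
#   set_and_check(LLAMA_LIB_DIR "<CURRENT_INSTALLED_DIR>/lib")
# After replacing "${CURRENT_INSTALLED_DIR}" with "\${PACKAGE_PREFIX_DIR}":
#   set_and_check(LLAMA_LIB_DIR "${PACKAGE_PREFIX_DIR}/lib")
# The "/../../../" -> "/../../" replacement compensates for the config file
# moving from lib/cmake/Llama (three levels deep) to share/llama (two levels
# deep), so the computed PACKAGE_PREFIX_DIR still points at the package root.
```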

Contributor Author

cenit commented Feb 19, 2025

@Neumann-A thanks for the suggestion. This is, in fact, just quick scaffolding. Why didn't you push the port here? I was sure I would find it here already (from you or someone else; it's such a common library nowadays 😄)

Contributor Author

cenit commented Feb 19, 2025

In any case, I will work on the port over the next few days, with the goal of getting a green light everywhere, unless you already have a working version and are ready to upstream it.

@Cheney-W Cheney-W added the category:new-port The issue is requesting a new library to be added; consider making a PR! label Feb 20, 2025
@Cheney-W Cheney-W changed the title [llama.cpp] add new port (and its ggml dependency) [llama-cpp/ggml] add new port (and its ggml dependency) Feb 20, 2025
@cenit cenit force-pushed the dev/cenit/llamacpp branch 2 times, most recently from b99d183 to f0ae0d4 Compare February 20, 2025 21:54
exclude arm+windows, unsupported
exclude uwp, unsupported
exclude android, unsupported
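Platform exclusions like the ones in the commit messages above are typically expressed through a `supports` field in the port's vcpkg.json. A minimal sketch, assuming the exact expression used in this PR (which is not visible in this thread) follows the commit messages literally:

```json
{
  "name": "llama-cpp",
  "supports": "!(arm & windows) & !uwp & !android"
}
```

vcpkg evaluates this platform expression per triplet, so CI skips the port on the unsupported targets instead of reporting a build failure.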
@cenit cenit force-pushed the dev/cenit/llamacpp branch from 9c3a432 to f00b565 Compare February 21, 2025 12:45
@cenit cenit changed the title [llama-cpp/ggml] add new port (and its ggml dependency) [llama-cpp] add new port (and its ggml dependency) Feb 22, 2025