
Binding to an existing resource #2789

Closed
danielepolencic opened this issue Mar 25, 2020 · 8 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@danielepolencic

This is not a bug report, but a request for clarification.

It's common to have several environments such as dev, test, and prod.
In this case, prod has a dedicated database, but dev and test share a single database.

I can use the Service Catalog to provision a database in prod and dev.
But how can I use the Service Catalog to bind the test environment to an existing database?

I tried to look into the docs, but I couldn't find anything related to it.
Is there a property or object designed to bind to an existing resource in the cloud provider?

@teddyking
Contributor

I think what you're describing is a "user-provided" service. User-provided services are a concept that originated in the Cloud Foundry world and essentially allow a user to bind to an "external" service instance (i.e. an instance that has not been explicitly provisioned through the marketplace / service-catalog).

For the case you're describing, you would provision a database for the prod and dev environments through service-catalog, then create a user-provided service so that the test environment can access the dev database.

AFAIK there's no built-in way to do this in service-catalog, but the repo does ship a ups-broker. In theory you could deploy the ups-broker alongside service-catalog and use it to provide the functionality you're asking for.

For example:

# install and register the broker
kubectl create ns ups-broker
helm install ups-broker ./charts/ups-broker -n ups-broker
svcat register ups-broker --url http://ups-broker-ups-broker.ups-broker.svc.cluster.local --scope cluster

# create a user-provided service in test env, specifying the credentials for the dev database
svcat provision my-user-provided-service --class user-provided-service --plan default --params-json '{"credentials":{"host":"192.0.2.100","username":"admin","password":"password"}}'

# create a binding to the user-provided service instance in test env
svcat bind my-user-provided-service

# credentials are returned in the secret created by the binding
kubectl get secret my-user-provided-service -o yaml
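
In case it's useful, here is roughly the same provision/bind expressed as declarative Service Catalog resources instead of svcat commands. This is only a sketch: the class, plan, and credentials mirror the commands above, the "test" namespace name is an assumption, and the fields come from the servicecatalog.k8s.io/v1beta1 API.

# ServiceInstance: rough equivalent of `svcat provision` against the user-provided-service class
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-user-provided-service
  namespace: test   # assumption: the test environment lives in a "test" namespace
spec:
  clusterServiceClassExternalName: user-provided-service
  clusterServicePlanExternalName: default
  parameters:
    credentials:
      host: 192.0.2.100
      username: admin
      password: password
---
# ServiceBinding: rough equivalent of `svcat bind`; credentials land in the named secret
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: my-user-provided-service
  namespace: test
spec:
  instanceRef:
    name: my-user-provided-service
  secretName: my-user-provided-service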

However, it seems the ups-broker is only used for testing inside the service-catalog repo, so I'm not sure it would be suitable for "real world" deployments.

@mszostok
Contributor

Hi

There was an idea to replace the ups-broker with built-in functionality in Service Catalog: #2189

Unfortunately it wasn't implemented, but if you're interested, a pull request is more than welcome 👍

AFAIK currently the ups-broker is the only way, am I right @jberkhahn?

@danielepolencic
Author

if you're interested, a pull request is more than welcome 👍

I wish I had the skills! My Go is very poor.

Is there any documentation or code for the ups-broker? I can't seem to find it.

@jberkhahn
Contributor

The docs for the ups-broker are in the chart dir: https://github.com/kubernetes-sigs/service-catalog/tree/master/charts/ups-broker

The code for the ups-broker is in the contrib dir: https://github.com/kubernetes-sigs/service-catalog/tree/master/contrib
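
If you'd rather register the broker declaratively instead of via svcat register, something like the following should work (a sketch; the URL is the in-cluster address used in the example above):

# cluster-scoped broker registration, rough equivalent of `svcat register ... --scope cluster`
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ClusterServiceBroker
metadata:
  name: ups-broker
spec:
  url: http://ups-broker-ups-broker.ups-broker.svc.cluster.local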

Note that the ups-broker in its current state is pretty much a toy; it's useful for testing purposes, but I would not recommend using it in a production environment.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 25, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 25, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
