Dependent resources determined at runtime? #1508
Comments
Closing this as duplicate of #1182. Please re-open if you believe this is incorrect…
Ahh, sorry
Ok, looking at the code changes in the linked issue, that's primarily aimed at build-your-own replica-set? A variable number of pods which are intended to be interchangeable, but which need to vary slightly, e.g. passing an identifier as an environment variable? If so, that would probably be sufficient for my needs... it certainly addresses the primary issue, i.e. the inability to scale the application. Being able to dynamically define multiple groups — e.g. 3 copies of pod A, 4 copies of pod B — would be handy, but it's not something I actually need... it's more about anticipating uses...
The idea here is to create a variable number of resources based on the primary: any kind of resource, even external ones, not just Kubernetes-scoped. If you only need pods, it is probably better to use a ReplicaSet.
Your use case is analogous: it's not based purely on the primary, but it is still dynamic in the sense that it would otherwise require code changes (in terms of dependent resources), so bulk dependent resources are the way to go here.
Each group would in this case be a separate bulk dependent resource. Note that this feature is not released yet and might undergo API changes.
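To make the bulk idea concrete: rather than wiring a fixed set of dependent resources into the reconciler's constructor, the set of desired resources is computed from the primary's spec on every reconcile, keyed by a stable identifier. This is a minimal self-contained sketch of that shape only — `AppSpec` and `desiredResources` are hypothetical names, not the actual (then-unreleased) java-operator-sdk API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the bulk-dependent-resource idea: the set of desired
// resources is derived from the primary's spec at reconcile time, keyed by a
// stable identifier, instead of being fixed in the reconciler's constructor.
public class BulkSketch {
    // Stand-in for the relevant part of the primary resource's spec.
    record AppSpec(String name, int replicas) {}

    // One desired "pod" description per index; the map key doubles as identity,
    // so each pod can be told apart even though they are otherwise alike.
    static Map<String, String> desiredResources(AppSpec spec) {
        Map<String, String> desired = new LinkedHashMap<>();
        for (int i = 0; i < spec.replicas(); i++) {
            desired.put(spec.name() + "-" + i, "pod with SLOT_ID=" + i);
        }
        return desired;
    }

    public static void main(String[] args) {
        Map<String, String> d = desiredResources(new AppSpec("myapp", 3));
        System.out.println(d.keySet()); // [myapp-0, myapp-1, myapp-2]
    }
}
```

Reconciliation would then diff this desired map against the actual resources in the cluster, creating the missing ones and deleting the surplus.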
I'd love to just use a ReplicaSet — it would make things much easier. But the best way to describe my use case is that I have a pool of (e.g.) 5 slots registered in central configuration... so I can start up to 5 pods, and each one must be told which of the 5 slots it should use. The bulk resources seem to be an ideal answer to this, going by how the
Yes, that's fine... I'm still a long way off needing a production version of this, being very much at the research stage. It also belatedly occurs to me that in the short term, I can probably fake it acceptably for testing. While the current SDK doesn't permit arbitrary scaling or dynamic creation of dependent resources, is there anything stopping me adding (e.g.) 5 DR instances to the workflow, and using conditions to allow between 0 and 5 to be reconciled based on the primary resource spec? Obviously it means there's a hard cap on scaling, but that's okay for now — we wouldn't want more than 2 'replicas' for development work, just the minimum for finding problems.
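The hard-cap workaround described above can be sketched as plain logic: register a fixed maximum of dependent resources up front, and let a per-index reconcile condition decide which of them are active for a given spec. This is a self-contained illustration of the decision logic only, not java-operator-sdk workflow/condition API calls; all names are made up.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntPredicate;

// Sketch of the "hard-capped" workaround: a fixed maximum of 5 dependent
// resources is registered in the workflow, and a per-index condition decides
// which of them should actually be reconciled for a given primary spec.
public class CappedWorkflowSketch {
    static final int MAX_REPLICAS = 5; // hard cap baked into the workflow

    // Returns the names of the dependents whose condition holds, i.e. the
    // ones the workflow would reconcile; the rest would be skipped/cleaned up.
    static List<String> activeDependents(int desiredReplicas) {
        IntPredicate condition = i -> i < Math.min(desiredReplicas, MAX_REPLICAS);
        List<String> active = new ArrayList<>();
        for (int i = 0; i < MAX_REPLICAS; i++) {
            if (condition.test(i)) {
                active.add("app-pod-" + i);
            }
        }
        return active;
    }

    public static void main(String[] args) {
        System.out.println(activeDependents(2)); // [app-pod-0, app-pod-1]
    }
}
```

Note the clamping to `MAX_REPLICAS`: asking for more replicas than the workflow was built with silently tops out at the cap, which is exactly the limitation the commenter accepts for development use.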
A brief outline of a problem discussed in Discord — I'm building an operator to assist in porting a legacy application to Kubernetes. For the purposes of this conversation, the key features are a) one pod representing a centralised configuration server, and b) a set number of application pods determined entirely by the custom resource spec.
It's the latter part I'm asking about here. In all of the examples I've seen, the set of dependent resources is hardcoded, or at least, known in the reconciler constructor... one deployment, one service, one ingress, etc. This is not the case for me — the set of pods can only be determined by looking at an instance of the custom resource spec to see what's required. I'm also unable to use replica-sets for scaling, because even if two pods are otherwise identical, they need a unique identity registered in the config server.
Any advice?
(And yes, I know this is ugly architecture, but fixing it is a longer-term effort).
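The constraint that rules out a ReplicaSet here — each pod needing a unique identity from a fixed pool registered in the config server — can be illustrated with a minimal slot-allocation sketch. The pool size, slot names, and `nextFreeSlot` helper are all hypothetical; this only shows why interchangeable replicas don't fit.

```java
import java.util.List;
import java.util.Optional;
import java.util.Set;

// Sketch of the slot-pool constraint from the issue: a fixed pool of slot
// identities lives in the central config server, and each new pod must be
// handed a slot that no other pod is currently using.
public class SlotPoolSketch {
    static final List<String> POOL =
            List.of("slot-1", "slot-2", "slot-3", "slot-4", "slot-5");

    // Pick the first slot not already claimed; empty if the pool is exhausted,
    // meaning no further pod may be started.
    static Optional<String> nextFreeSlot(Set<String> inUse) {
        return POOL.stream().filter(s -> !inUse.contains(s)).findFirst();
    }

    public static void main(String[] args) {
        System.out.println(nextFreeSlot(Set.of("slot-1", "slot-3"))); // Optional[slot-2]
    }
}
```

Because the slot must be chosen per pod and injected (e.g. as an environment variable), the pods are not interchangeable, which is what pushes this toward bulk dependent resources rather than a ReplicaSet.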