Search - "kubectl"
Just need somewhere to let off some steam.
TL;DR: a perfectly fine command-line workflow got replaced by a bad UI system, because it has a UI.
For a while now we have had a development k8s cluster for the dev team. Using Helm as the composition framework, everything worked perfectly via the console. Being able to quickly test new code for existing apps, and even deploy new (and even third-party) apps on a similar-to-production system, was a breeze.
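For context, that console workflow was roughly the following; release, chart and namespace names here are made up for illustration, not taken from the rant:

    # build and push an image, then roll it onto the dev cluster in one go
    helm upgrade --install my-app ./charts/my-app \
      --namespace dev \
      --set image.tag=1.2.3-feature-x
    kubectl -n dev rollout status deployment/my-app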
Introducing Rancher
We are now required to commit every Helm configuration change to a git repository and merge it to master (master is used for both dev and prod) before we can even test the configuration change, as the package is not created until after the merge is completed.
Rolling out new tags now also requires a VCS change, as you have to point to the Docker image version within a file.
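A hedged sketch of what that looks like now (file layout and names are assumptions, not the actual repo): a one-line tag bump that has to be merged to master before it can even be tested.

    # charts/my-app/values.yaml (hypothetical chart)
    image:
      repository: registry.example.com/team/my-app
      tag: "1.2.4"   # rolling out a new version means editing this line and merging to master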
Since we now have this awesome new system, ops didn't see a reason to give us access to kubectl. So the dev team is stuck with a UI, which is supposedly meant to give us more flexibility and independence and let more people on the team roll releases.
Back to reality: since the new system arrived we have hogged more of ops' time than we had in ages, everyone needs to learn a new, unintuitive tool, and, the funny thing, only a few people can actually approve the VCS changes because they impact both dev and prod. So the entire reason this was done, making releases accessible to more people, is out the window.
It's the first time I've heard someone pronounce kubectl as kubecuttle....
How hard is it to say kjub-cee-tee-el?
This Christmas I transitioned into a new job. At the old job I was the only Kubernetes guy, so since they no longer have any developers who are confident working with it, they decided to go with a LAMP stack.
The data from the dev Kubernetes server was backed up by some guy and moved to an offsite server, or so they told me. Turns out he had backed up the kubectl config file, not the databases. Now everything is wiped. Sure glad we still have that config file!
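For what it's worth, a kubeconfig only holds cluster endpoints and credentials, not data; an actual backup would have had to dump the databases themselves, along these lines (assuming a Postgres deployment; all names here are hypothetical):

    # dump the database out of the cluster instead of just saving the kubeconfig
    kubectl -n dev exec deploy/postgres -- \
      pg_dump -U postgres appdb > appdb-backup.sql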
Of course, since that was only our dev server, there was nothing too important on there, except for all the documentation. The only other backup? On my laptop, which I turned in to them, and which is now wiped and used by one of the sales guys.
Now I'm being called in at least twice a day, since I was their go-to guy for almost anything backend-related. Feels great, after they spent a couple of months attempting to rewrite everything in pure PHP (with a strict no-dependency policy, for some reason). Fml.
I don't understand wtf is happening today..
- in project A, terraform suddenly decided to stop working with kubernetes-related providers -- the CA cert mismatch error. I agree it shouldn't be working, because there are 2 kube-api servers behind an LB (a hedged provider sketch follows at the end of this rant). But why now??? Why was it working for the last 2 months, until NOW????
- in project B, terraform suddenly decided to stop working _correctly_ with kubernetes-related providers -- it randomly fails to find resources, even though they exist and I can see them via kubectl get. TF_LOG=DEBUG shows terraform sending correct requests to the kube-api, but the response is a 404. wtf... I can see those resources in another terminal window, just using kubectl. wtf....
- my PR on GitHub got a comment, I wanted to ask a question seconds later, and I'm getting a 502 from GH
wtf... I can't spot a pattern and that drives me freaking crazy.
Is this Friday's curse...? IDK
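Referring back to project A: a hedged sketch of pinning the terraform kubernetes provider to one explicit API endpoint and CA instead of whatever the LB hands out; the variable names are assumptions, not the project's actual config:

    # pin the provider to a single kube-apiserver rather than the load balancer
    provider "kubernetes" {
      host                   = var.kube_api_endpoint       # one apiserver, one consistent CA
      cluster_ca_certificate = base64decode(var.kube_ca_cert)
      token                  = var.kube_token
    }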
Kubernetes question:
So far I've created two pods, mongo & Go
Exposed those pods using services
Their IPs are 10.x.x.x and only accessible from my machine (a virtual LAN known only to the host, I'm guessing), but my machine's network IP is 192.x.x.x, so the pods aren't reachable from the outside world; to fix that I'd need to put nginx in front to receive requests and route them internally.
Is there a way in kubernetes to make it work like nginx in terms of:
Kubernetes listens on port 80 (for example) and routes based on the received URL. As you know, in nginx we define a server block with server domain_name.tld.
Anything similar in Kubernetes? I've checked the ingress-nginx controller, and also saw LoadBalancer, but that requires a cloud provider.
If anyone can also give an example it would be great; so far the examples I checked ended up screwing up my setup and I had to reset kubectl to get things working again.
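A hedged sketch of one way to do this, not a tested answer for this exact setup (host, service names and ports are made up): with an ingress controller such as ingress-nginx installed, which outside a cloud can be exposed as a NodePort or hostNetwork service so no cloud LoadBalancer is needed, an Ingress resource then plays the role of the nginx server block:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-ingress              # hypothetical name
    spec:
      ingressClassName: nginx            # matches the installed ingress-nginx controller
      rules:
      - host: api.example.tld            # the equivalent of "server_name api.example.tld" in nginx
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: go-api-svc         # the ClusterIP service in front of the Go pod
                port:
                  number: 8080

The controller itself listens on 80/443 and fans requests out to the right service by Host header and path. For a quick test without any controller, a NodePort service (kubectl expose ... --type=NodePort) at least makes a pod reachable on the node's 192.x.x.x address.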
Last week I finished a course on k8s, "simply, for developers". Today I decided to fiddle with kubectl to move a project I'm working on to k8s, and it was a complete success (the thing runs on minikube and pulls containers from a private GitLab registry).
I'm happy with the results; there is still room to keep learning k8s and DevOps in general.
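Not from the rant itself, but the one fiddly bit with a private GitLab registry on minikube is usually the pull secret; a hedged sketch with placeholder values:

    # create a pull secret for the private GitLab registry...
    kubectl create secret docker-registry gitlab-regcred \
      --docker-server=registry.gitlab.com \
      --docker-username=<deploy-token-username> \
      --docker-password=<deploy-token>
    # ...and reference it from the pod/deployment spec:
    #   imagePullSecrets:
    #   - name: gitlab-regcred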
Alright, so I'm seeding the DB with a 4 GB SQL dump through a k8s port-forward across the world (across the pond), forwarded through socat (a bastion container).
What are the chances I'll succeed? :D
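Roughly what that looks like, as a hedged sketch (service name, port and database are assumptions); with a 4 GB dump over a long-haul forward, compressing the dump or restoring it from inside the cluster would probably be kinder:

    # forward the DB port through the cluster, then stream the dump into it
    kubectl -n dev port-forward svc/mysql 3306:3306 &
    mysql -h 127.0.0.1 -P 3306 -u root -p app_db < dump.sql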