Kubecon 2023
Day 1
Food is going to be good it seems: breakfast, snack, lunch, snack, and more..
Lots of walking around last night happy to sit for a bit today
IstioCon for me this AM
9AM
Istio
Gateways and controllers are all set up with a mix of community charts curated by the project to get up and running.
With the mesh we will get additional deployment types within the cluster - not via tooling like Harness
Observability
- Inbound and outbound traffic
- debug with Kiali - anomaly detection, flow review
- flow based?
- application traffic tracing and context propagation
- by tracing the full path you gain additional info into the flows (Hopefully not much)
- Track the configuration pushes to proxies; see how happy Envoy is
- resources
- the amount of data is going to kill anything you point it at (Prometheus)
- DISABLE_HOST_HEADER_FALLBACK - this helped get metrics going again by reducing load
Ambient -
Config modes for Envoy
- Send the world (full config pushed to every proxy)
- delta
- some are still very large
- on demand
- Can lazy load things as needed
Ambient mode - no sidecar - Waypoint Proxy + ztunnel
- Waypoint Proxy does the work that would have been in the sidecar Envoy
- ztunnel is running on all nodes
- ztunnel config is not as crazy complex as others since it is specific to ztunnel - specific vs generic like Envoy
- ztunnel is written in Rust and is an attempt to get things working faster
- You can also define a lot of the setup centrally and then the configuration just uses it, e.g. TLS
- Dev sets a single item to use mTLS; the backend would be set up to force TLS 1.3, specific ciphers, etc.
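A minimal sketch of that "dev sets a single item" idea, assuming Istio's PeerAuthentication API (the namespace name is a placeholder):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-app        # hypothetical namespace
spec:
  mtls:
    mode: STRICT           # the platform enforces mTLS; TLS versions/ciphers are handled centrally
```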
Waypoint -
- Envoy based
- gateway to the namespace
- Complex configuration surface
- proxy systems only send to a single namespace
- this is a pattern that might help
- scoped down to the namespace; the waypoint proxy stays in a namespace
- massive reduction of the configuration needed to scale and configure
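That namespace scoping shows up in how a waypoint is declared: it is a Kubernetes Gateway API resource pinned to one namespace. A sketch, assuming Istio ambient's waypoint pattern (names are placeholders):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: waypoint
  namespace: team-a          # the waypoint only serves this namespace
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - name: mesh
    port: 15008              # HBONE port used by ambient mode
    protocol: HBONE
```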
- Building better controllers (same speaker)
Envoy - Platform-Agnostic Security
Lukonde Mwila - Senior Developer, AWS (Containers from the Couch) - SPIFFE/SPIRE
Don't let your teams have 50 security identity models - lock it down and let everyone use the same setup.
We did well on the sidecar setup, but a lot of work is still with the dev.
We use certs but no specific SPIFFE or SPIRE tooling? SDS protocol - SPIRE agent and the SDS server - trust bundle and so on. This gets us out of the "we just check the issuer" setup - now each workload has an individual identity.
SPIRE agent is the SDS API server. Lots of fields/configurations on SDS; the big ones are:
- TlsCertificate - gets the cert
- CertificateValidationContext - trust bundle
ACM can be the root - or SPIRE, it has this built in.
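A sketch of those two SDS hooks in an Envoy listener's TLS config, assuming `spire_agent` is a cluster pointing at the SPIRE agent's Unix socket (the SPIFFE IDs and cluster name are placeholders):

```yaml
transport_socket:
  name: envoy.transport_sockets.tls
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
    common_tls_context:
      # TlsCertificate: the workload's own SVID, fetched from the SPIRE agent over SDS
      tls_certificate_sds_secret_configs:
      - name: "spiffe://example.org/my-workload"    # placeholder SPIFFE ID
        sds_config:
          api_config_source:
            api_type: GRPC
            transport_api_version: V3
            grpc_services:
            - envoy_grpc: { cluster_name: spire_agent }
      # CertificateValidationContext: the trust bundle, also delivered via SDS
      validation_context_sds_secret_config:
        name: "spiffe://example.org"                # placeholder trust domain
        sds_config:
          api_config_source:
            api_type: GRPC
            transport_api_version: V3
            grpc_services:
            - envoy_grpc: { cluster_name: spire_agent }
```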
Mindmap: Service Mesh
- Auth

Mindmap: EKS
- Nodes: Fargate, AMI (RHEL8)
- Control: AWS
- Visibility: CloudWatch vs Splunk/on-prem - does TechOps maintain?

Mindmap: Demo
- OPA: existing, enforce Artifactory, no root, etc. etc. etc.
- Cosign: sign in GL, verify in GL / Harness / K8s
- Semgrep
What attributes are needed? OPA: what's allowed after this ID has been passed … What are your criteria - that could be the use of the attribute
Let's include some info in the demo on things like new tooling
More of an intake process …?
CI/CD things - what do we do?
OPA policy - DSO pipeline; fix API setup of pipeline - DSO
How do we say NO
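One way to "say no" is an OPA admission rule. A hypothetical sketch in Rego (the package name, registry host, and message are assumptions, not anything shown in the talk):

```rego
package kubernetes.admission

# Deny any Pod whose images don't come from the approved Artifactory registry
deny[msg] {
    input.request.kind.kind == "Pod"
    container := input.request.object.spec.containers[_]
    not startswith(container.image, "artifactory.example.com/")
    msg := sprintf("image %q is not from the approved registry", [container.image])
}
```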
Vault + Kubernetes
Talk from Akamai - https://www.codingforentrepreneurs.com/ - presenter also gave us a Kubernetes book and another goodie.
Interesting site, good blog articles. Link to lab files below.
I am also a Linode customer now :) $250 should get me far, let's see :)
.env secrets
- Set up VM
- Install Vault
- Vault CLI - UI
- K8s cluster
- set up Vault agent operator and the injector
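Once the injector is running, the usual pattern is to opt a pod in via annotations. A sketch using the standard vault.hashicorp.com annotation names (the role name and secret path are assumptions):

```yaml
# Pod template metadata consumed by the Vault Agent Injector
metadata:
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "myapp"                              # hypothetical Vault role
    vault.hashicorp.com/agent-inject-secret-config: "secret/data/myapp/config"  # hypothetical KV path
```

The injector's mutating webhook then adds a Vault Agent sidecar that renders the secret to a file in the pod.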
https://github.com/codingforentrepreneurs/kubecon23-chi (cfe.sh/github -> kubecon23-chi) - clone or fork the repo kubecon23-chi
New firewall called vault-firewall: inbound, enable SSH, incoming 8200
VAULT_ADDR=""
Cluster with a bunch of stuff
K8s cluster labeled vault-k8s-kubecon - 1.27, HA n/a, node pool: shared CPU Linode 2GB
Threat model
Deciduous attack-tree visualization tool (deciduous). Link to the report from ControlPlane: https://github.com/argoproj/argoproj/blob/main/docs/end_user_threat_model.pdf
Should we force a full git flow? I.e., you push here and all this happens?
Maybe a demo for someone today; each group would set up this flow, maybe with some help from DevOps, but possibly in a new way each time.
Would be nice to have some end-to-end examples: Dev -> GitLab -> Artifactory -> Harness -> K8s
10 -> 8 -> 4 -> 6 -> 6
The findings from this on Argo are still somewhat applicable to anyone. Keep the same posture - it's mostly around defaults and RBAC limitations.
Lightning talks: hello-rego.rego, Topaz policy/console, Aserto authorizers - @omrig Omri Gazitt
- Giadarnos is better than loumolnoties
Tiny talk on tiny containers - Mirantis, Eric Gregory. Small images are faster, reduce consumption, and have less attack surface; Wasm can be smaller than the smallest scratch image.
I should have a slide that shows Dockerfiles as UBI is adopted, with as many changes as possible.
pex multistage for Python -
Java small? Any custom build runtimes using something like Maven/Gradle? Wasm is just another cluster runtime, like runc and gVisor, etc. - let's get that going.
IaC - developer user experience - "new tool, I don't want to learn it." Usually you trade off: self-service helps many but removes some flexibility.
Observability - o11y what?
o11y
OpenTelemetry - SaaS vs
- Some tooling examples: Maven Build OTel extension, Ansible OTel callback, pytest-otel, otel-cli (pipeline shell scripts)
1. Build: Maven OTel / bash script with otel-cli
2. Test: JUnit (Jupiter plugin)
3. Package: Artifactory, logs via filelog receiver
4. Deploy: Ansible OTel callback - or is this Harness?
- Bash script again - use the otel-cli
Send it all to an OpenTelemetry Collector?
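The "logs via filelog receiver" step from the list above could look roughly like this in a Collector config - a sketch, with the log path made up and a stdout exporter standing in for a real backend:

```yaml
receivers:
  filelog:
    include: [ /var/log/artifactory/*.log ]   # hypothetical log path
exporters:
  logging:                                    # print to stdout; swap for otlp in a real pipeline
    verbosity: detailed
service:
  pipelines:
    logs:
      receivers: [ filelog ]
      exporters: [ logging ]
```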
Wow, Jaeger - that might be used to look at all of this in the same place. o11y o11y o11y
# Day 2
Food is OK but I will need to find a good breakfast place. Will try lunch today.
Security Hub opens today
Nice walk in