
Tableau HTML - le hollandais volant

Le Hollandais Volants -- Liens

There you go.

Yes, it's a tool for lazy people.


— (permalink)

Le nez a évolué pour porter des lunettes - Psycho Évo #3 - YouTube

Le Hollandais Volants -- Liens
If you ask an ethologist why they chose that career, they'll probably tell you about one of those incredible behaviours they saw in a wildlife documentary at the age of 8.

Whereas if you ask a geologist why they chose that career, they'll probably answer that it wasn't a choice.

This constant geologist-bashing xD

They do it non-stop in The Big Bang Theory, but also in NCIS (S10E03): https://lehollandaisvolant.net/files/ncis-geologist-jokes.mp4


— (permalink)

Christ_OFF on pix.diaspodon.fr

Liens SebSauvage
I'm playing Metro Exodus, while Christ_OFF goes for a stroll around Chernobyl 😅
(Permalink)

Audacity's new management hits rewind on telemetry plans following community outrage • The Register

Liens SebSauvage
Backpedaling on the telemetry (Google Analytics) that the new team wanted to put into Audacity.
(Permalink)

Trouver un lien de suivi d’un colis - le hollandais volant

Le Hollandais Volants -- Liens

😁

Not all of them are in there yet, but they're coming.


— (permalink)

Ask an OpenShift Admin Office Hour - Red Hat CoreOS

OpenShift blog

Once an application is deployed, we don’t think about the operating system much - unless it breaks. This is even more true for containerized applications, whether they’re deployed to a single host using Podman or across a Kubernetes cluster. But even though the operating system is mostly ignored by the application team, it plays an important role for administrators and can dramatically affect our experience.

OpenShift 4 introduced a significant shift in how Red Hat deploys and manages the operating system underpinning everything else in OpenShift, changing from Red Hat Enterprise Linux (RHEL), and sometimes RHEL Atomic, to Red Hat Enterprise Linux CoreOS (RHCOS).

In this episode we welcome Mark Russell, product manager, and Derrick Ornelas, product experience engineer, to talk about why and how RHCOS is different, and to discuss some configuration options for manageability, performance, and resiliency.

As always, please see the list below for additional links to specific topics, questions, and supporting materials for the episode!

If you’re interested in more streaming content, please subscribe to the OpenShift.tv streaming calendar to see the upcoming episode topics and to receive any schedule changes. If you have questions or topic suggestions for the Ask an OpenShift Admin Office Hour, please contact us via Discord, Twitter, or come join us live, Wednesdays at 11am EDT / 1500 UTC, on YouTube and Twitch.

Episode 28 recorded stream:

 

Supporting links for today:

  • Use this link to jump directly to where we start talking about today’s topic
  • We were absent the last two weeks as a result of Red Hat Summit, GitOpsCon, OpenShift Commons, and KubeCon EMEA. We have some great summaries of each of those events at the links, so if you’re curious what we were doing be sure to give them a read!
  • Deploying an OpenShift cluster across multiple sites, for example having control plane nodes in three different datacenters, is possible, but there are some important things you should know and prepare for. This KCS provides some guidelines and recommendations if you choose to pursue multi-site OpenShift.
  • Updates / upgrades to OpenShift 4.7 are available now! If you have deployed your OpenShift cluster to vSphere using a non-integrated, platform-agnostic method (a.k.a. bare metal UPI, or simply setting platform to none in the install-config.yaml), then you’ll want to pay special attention to the known issues! Due to a bug between VXLAN offload and the RHEL 8.3 kernel used by RHCOS in OpenShift 4.7, some packet loss can occur. The workaround in the release notes is to use VM hardware version 13, but if you’re using a newer version you can also apply a machine config to disable the offload and work around the issue that way.
  • Did you know that you can change the domain name used for Routes? Setting an appsDomain on the Ingress configuration will cause Routes to use the configured domain instead of the default (a sketch of what that looks like follows this list). FYI, the docs say it’s AWS only, but that’s incorrect (and there’s a BZ to fix it) - the option is supported with any deployment type!
  • Last, but not least, following up from when Katherine Dubé was on the stream to talk about installation methods, there’s now a blog post which shows how to manually add nodes to vSphere IPI clusters.
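For reference, here is a minimal sketch of that Ingress change, assuming placeholder domains (apps.openshift.example.com as the installed default and apps.example.com as the desired Routes domain); you edit the cluster Ingress config and set spec.appsDomain:

oc edit ingresses.config.openshift.io cluster

apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  name: cluster
spec:
  domain: apps.openshift.example.com   # default domain set at install time
  appsDomain: apps.example.com         # new Routes will default to this domain
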

Questions answered during the stream:

Service rendu par La Poste | Banque de France

Le Hollandais Volants -- Liens

If you have a banknote that is torn or damaged to the point that shops won't take it (they're allowed to refuse it if the note's condition is objectively bad, just as you also have the right to refuse one), you can go to the Banque de France to exchange it.

If the total value of the notes to exchange is under €800, they exchange them on the spot. Otherwise, it will be done by bank transfer.

Bring an ID and possibly other documents (the list is in the link).

The link also provides a list (in an .xlsx file) of La Poste branches that offer this service as well.


— (permalink)

GitOps Guide to the Galaxy Episode 14: Exploring CI with Tekton

OpenShift blog

 

Just to catch you up on some of our online TV shows on OpenShift.tv, Christian Hernandez and Chris Short have been covering the exciting realm of GitOps in a series exploring said Galaxy. In Episode 14, the pair take a look at how Tekton fits into a CI/CD workflow. From the video description:

Now that we've looked at other continuous integration (CI) tools, let's see how Tekton stacks up. Tekton is a young yet powerful CI tool for Kubernetes and OpenShift that's optimized for building and deploying cloud-native, microservice-based applications. When it comes to CI, there are lots of options, each with a slightly different set of features, so we'll look at why you'd want to use it and dive into the features Tekton offers.

Mes petits outils en ligne - le hollandais volant

Le Hollandais Volants -- Liens

The dark theme is now live for my tools.
It actually adapts to the browser's preferences: if you configure a dark theme, the tools will switch to dark.

All the tools have been updated to work with it, but some may still contain large white areas, such as the tool for plotting mathematical functions.

Other tools have not been changed (such as the Matrix "code rain", or the Mandelbrot plotting tool).

More tools are on the way, including:
– a page that gives the carriers' tracking links (DHL, UPS, etc.): you enter your number, it outputs the link;
– a tool that generates HTML tables (enter "3x3" and it outputs the full HTML of a 3x3 table; the HTML of a table is tedious to type by hand…).

Still no ads, no trackers, everything in JS (so nothing is sent to the server), which is a change from a lot of sites. And this page is about to turn 8 years old; there too, that's a change from the many super-handy sites and tools that disappear…

I'm thinking about making a dedicated site for these tools.


— (permalink)

DevTools for CSS layouts 2021 edition

Le Hollandais Volants -- Liens

OK, I admit that Firefox's DevTools are clearly more practical and "human"-oriented than Chrome's, which are there just to be there without being practical (Chrome being, above all, merely a tool for consuming the web).

Problem: Vivaldi has the same ones…

Will Vivaldi one day rebuild a tool like Opera Dragonfly was in its day? :D


— (permalink)

Why do some COVID patients get dangerously ill while others don't even notice they contracted the virus

Le Hollandais Volants -- Liens

— (permalink)

Chernobyl - Every Lie We Tell Incurs A Debt To The Truth - Legasov's Speech with Vichnaya Pamyat - YouTube

Le Hollandais Volants -- Liens
Every lie we tell incurs a debt to the truth. Sooner or later, that debt is paid

I'm rewatching the Chernobyl series.
This quote resonates as strongly as ever, now more than ever.

I think of Blanquer saying that schools are not a place of contamination. I think of Castex saying that 400 is a good number to start lifting the lockdown. I think of Macron telling people to go to the theatre, of Sibeth Ndiaye saying that a mask was useless…

The debt?
100,000 people have already started paying it with their lives. That's 2,127 times the official figures for Chernobyl. And many more of us will still pay it with our lives.
Not to mention the hundreds of thousands of long-COVID cases, the people left with lasting after-effects…

All of this because of the lies of guys in suits who only think about 2022.


— (permalink)

SSO Integration for the OpenShift GitOps Operator

OpenShift blog

This is a demo-heavy blog. Readers will get an idea of why SSO is important and how OpenShift handles authN/authZ, along with a step-by-step guide on using Red Hat Single Sign-On (RHSSO) to log in to an Argo CD application.


Why SSO?

Single Sign-On (SSO) is the preferred, if not the only, way of authenticating to most enterprise applications. From the user's perspective, SSO offers speed and convenience: you only need to authenticate once. From the business perspective, the most important consideration is security: SSO reduces the attack surface because users only log in via a single channel, and authorized accounts can be managed in one place.

What are the options for OpenShift AuthN/AuthZ?

OpenShift can be used as an identity provider with custom access permissions configured. The OpenShift Container Platform master includes a built-in OAuth server. Users and applications obtain OAuth access tokens to authenticate themselves to the API.

When a new OAuth token is requested, the OAuth server uses the configured identity provider to determine the identity of the person/app making the request and maps a role binding to that identity. This role binding determines what access that person/app is allowed and an associated access token is returned. Every request for an OAuth token must specify the OAuth client that will receive and use the token. This blog focuses on using OpenShift as an identity provider and RHSSO operator as an identity broker.
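As a quick, optional illustration of this flow (assuming you are already logged in to a cluster with the oc client), you can inspect the OAuth token and identity for your current session:

# print the OAuth access token held by the current oc session
oc whoami -t

# show which user that token is mapped to
oc whoami
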

Hands-On: SSO using RHSSO operator for Argo CD apps

Pre-requisites

  • OpenShift 4.X cluster
  • Installation of oc command-line tool
  • Installation of the following operators
    • Red Hat Single Sign-On (RHSSO) Operator v7.4.6 installed under 'Keycloak' namespace
      • Existing Keycloak realm and access to Keycloak Admin dashboard
    • Red Hat OpenShift GitOps Operator v1.1.0

You can install the RHSSO operator under keycloak namespace and can use all other default settings when installing the above operators.

Steps

Connect to your OpenShift 4.X cluster from the command line so that you can execute oc commands.

  1. Finding your Argo CD server route:
oc get route <instance-name>-server -n <namespace>

For example, for the out-of-the-box (OOTB) Argo CD instance under openshift-gitops namespace, the command to find the Argo CD server route is:

oc get route openshift-gitops-server -n openshift-gitops
  2. Creating a new client in Keycloak:

Log in to your Keycloak server, select the realm you want to use, navigate to the Clients page, and then click the Create button in the upper-right section of the screen. Use the following values

adding-argocd-client

Be sure to change the root URL to your ArgoCD server URL. Once you click Save, configure the client according to the following:

argocd-client-configuration

If you filled out the Root URL earlier, some of the fields will be pre-populated. The important fields to note are Access Type, which is set to confidential, and Base URL, which is set to /applications.

Make sure to click Save. You should now have a new tab called Credentials. You can copy the Secret that will be required in a later step.

  3. Configuring the groups claim

To manage users in Argo CD, you must configure a groups claim that can be included in the authentication token.

To do this, start by creating a new Client Scope called groups and use the settings from the image below.

add-client-scopes

Once you've created the client scope you can now add a Token Mapper which will add the groups claim to the token when the client requests the groups scope. Make sure to set the Name as well as the Token Claim Name to groups, the Mapper Type as Group Membership and Full group path OFF.

create-protocol-mapper

You can now configure the client to provide the groups scope. You can assign the groups scope either to the Assigned Default Client Scopes or to the Assigned Optional Client Scopes. If you put it in the Optional category, you will need to make sure that Argo CD requests the scope in its OIDC configuration. Let's use Assigned Default Client Scopes: navigate to Clients → Client Scopes, select groups from the Available Client Scopes table, and click the Add selected option (the groups scope you created earlier must appear in the Available Client Scopes table for you to select it).

add-default-client-scopes
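Assuming the groups scope and mapper above are in place, the ID token Keycloak issues to Argo CD should then carry a claim roughly like the following (illustrative only; dewan is the example user created later in this post):

{
  "iss": "https://<your Keycloak server URL>/auth/realms/<your realm>",
  "preferred_username": "dewan",
  "groups": ["ArgoCDAdmins"]
}
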

  4. Creating an Admin group

Navigate to Groups and create a group called ArgoCDAdmins

  5. Configuring Argo CD OIDC

To configure Argo CD OpenID Connect (OIDC), you must generate your client secret, encode it, and add it to your custom resource.

  6. Adding current user to ArgoCDAdmins group

Create a group called ArgoCDAdmins and have your current user join the group.

ArgoCDAdmins

  7. Configuring Argo CD OIDC

To configure Argo CD OpenID Connect (OIDC), you must encode your existing client secret (or generate the client secret if you don't have it already), and add it to your custom resource.

a. First you'll need to encode the client secret in base64:

$ echo -n '<secret you copied in step 2>' | base64

Now edit the secret argocd-secret and add the base64 value to an oidc.keycloak.clientSecret key:

oc edit secret argocd-secret -n <namespace>

Example YAML of the secret:

apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
data:
  oidc.keycloak.clientSecret: ODMwODM5NTgtOGVjNi00N2IwLWE0MTEtYThjNTUzODFmYmQy …
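If you prefer a non-interactive alternative to oc edit, a merge patch should achieve the same result (a sketch; substitute your namespace and the base64 value produced in step 7a):

oc patch secret argocd-secret -n <namespace> --type merge \
  -p '{"data":{"oidc.keycloak.clientSecret":"<base64-encoded client secret>"}}'
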

b. Next, edit the Argo CD custom resource and add the OIDC configuration to enable the Keycloak authentication:

oc edit argocd -n <your_namespace>

Example of Argo CD custom resource:

apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  creationTimestamp: null
  name: argocd
  namespace: argocd
spec:
  resourceExclusions: |
    - apiGroups:
      - tekton.dev
      clusters:
      - '*'
      kinds:
      - TaskRun
      - PipelineRun
  oidcConfig: |
    name: OpenShift Single Sign-On
    issuer: https://<your Keycloak server URL>/auth/realms/<your realm>
    clientID: argocd
    clientSecret: $oidc.keycloak.clientSecret
    requestedScopes: ["openid", "profile", "email", "groups"]
  server:
    route:
      enabled: true
  8. Keycloak Identity Brokering with OpenShift

You can configure a Keycloak instance to use OpenShift for authentication through Identity Brokering. This allows for Single Sign-On (SSO) between the OpenShift cluster and the Keycloak instance.

a. You can obtain the OpenShift Container Platform API server URL either from the UI or the CLI.

From the top-right corner of the Admin UI console, click on the ? and you'll see the API server URL like below:

server-url

Alternatively, you can execute:

oc status

server-url-cli
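If you only need the URL itself, oc can also print it directly for the current login context:

# prints just the API server URL that oc is currently talking to
oc whoami --show-server
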

b. In the Keycloak server dashboard, navigate to Identity Providers and select Openshift v4. Specify the following values:

Base Url: <OpenShift API Server URL obtained above>

Client ID: keycloak-broker (this has to match the name of the OAuth Client specified in step 9)

Client Secret: 12345 (this can be any value you choose but has to match the value of the secret specified in step 9)

Display Name: Login with OpenShift

Default Scopes: user:full

  9. Registering an OAuth client

Execute the following YAML to register your OAuth client:

oc create -f <(echo '
kind: OAuthClient
apiVersion: oauth.openshift.io/v1
metadata:
  name: keycloak-broker
secret: "12345"
redirectURIs:
- "https://<your Keycloak server URL>/auth/realms/<your realm>/broker/openshift-v4/endpoint"
grantMethod: prompt
')

If the user has not granted access to this client, the grantMethod determines which action to take when this client requests tokens. Specify auto to automatically approve the grant and retry the request, or prompt to prompt the user to approve or deny the grant.

At this point, you should be seeing a Login with OpenShift button on your Argo CD server UI and be able to use your OpenShift credentials to log in to the Argo CD server UI.

  • Troubleshooting: You might have to use an incognito window to avoid errors related to caching

If you already have an OpenShift user created, you can skip step 10.

  10. Creating an OpenShift user via htpasswd (optional)

a. Create the password 12345 for the user dewan and store this info in the file htpasswd:

htpasswd -c -B -b htpasswd dewan 12345

b. While you're connected to your OpenShift cluster, execute from the terminal:

oc create secret generic htpass-secret --from-file=htpasswd=htpasswd -n openshift-config

c. Create the following YAML (save it as htpasswd-cr.yaml) to add a new OAuth CR:

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret

Execute:

 oc apply -f htpasswd-cr.yaml
  11. Log in to Argo CD using OpenShift

Navigate to your Argo CD server URL (you might need to open it in an incognito window to avoid caching). Once you click LOGIN VIA OPENSHIFT, you'll be taken to a Keycloak page with an OPENSHIFT LOGIN button. Click this button and you'll be redirected to your OpenShift login page, where you can use the dewan/12345 credentials configured via htpasswd (or your existing credentials) to log in.

authorize-access

You will need to authorize access for the first time.

  12. Done? Not quite yet!

Once you log in to the Argo CD server with the dewan (or your own) user, you will fail to create a new Argo CD application, because Argo CD RBAC grants permissions to a user based on which groups they belong to in Keycloak. In the previous steps, you created openshift-v4 as an Identity Provider in Keycloak, but unfortunately Keycloak does not read group claims or group information from OpenShift.

fail-app-create


So you need to go back to the Keycloak server and add the user (dewan) to the appropriate groups (ArgoCDAdmins in this case).

  13. Configure groups and Argo CD RBAC

Role-based access control (RBAC) allows you to provide relevant permissions to users.

a. In the Keycloak dashboard, navigate to Users → <your-user> → Groups. Add the user (dewan) to the Keycloak group ArgoCDAdmins.

b. Ensure that ArgoCDAdmins group has the required permissions in the argocd-rbac config map.

oc edit configmap argocd-rbac-cm -n <namespace>

Example of a config map that defines admin permissions.

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
data:
  policy.csv: |
    g, ArgoCDAdmins, role:admin
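As a hypothetical extension (not part of this walkthrough), the same policy.csv can map additional Keycloak groups to Argo CD's other built-in roles; for example, a made-up ArgoCDViewers group could get read-only access:

policy.csv: |
  g, ArgoCDAdmins, role:admin
  g, ArgoCDViewers, role:readonly
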

That's it! Now your user (dewan) can successfully create Argo CD applications and perform other actions. A table is provided in the appendix section that lists built-in permissions for Argo CD.

Appendix

This section lists the permissions that are granted to Argo CD to manage specific cluster-scoped resources which include platform operators, optional OLM operators, and user management. Note that Argo CD is not granted cluster-admin permissions.

   
Resource groups and what each configures for a user or an administrator:

  • operators.coreos.com: Optional operators managed by OLM
  • user.openshift.io, rbac.authorization.k8s.io: Groups, Users, and their permissions
  • config.openshift.io: Control plane operators managed by CVO, used to configure cluster-wide build configuration, registry configuration, and scheduler policies
  • storage.k8s.io: Storage
  • console.openshift.io: Console customization

It’s Been a Full Year Since we Launched OpenShift TV

OpenShift blog

Josh Wood, Erik Jacobs, and Chris Short beat Prometheus into shape, after first setting up a Sinatra app to monitor, during one of our first live streams last May.

Just over a year ago, we launched OpenShift.TV amid the pandemic, providing unprecedented access and engagement with Red Hat’s experts and leaders to our community, customers and partners. Our charter was for content to be honest, unscripted and accessible, with no user registration required. We wanted to air live programming not to convince the audience that Red Hat has fantastic people and solutions (we think we do), but instead to validate these assertions and help viewers take their ideas further with Red Hat. If you want to learn even more about a subject on a show, then feel free to opt in to our newsletters and lists during a show for follow-up; otherwise, just keep learning and interacting!

In one year, Executive Producer Chris Short and the team have produced over 540 hours of content -- all of which is archived. To put this into perspective, someone would need to sit down for three straight weeks, without sleep, to watch everything the team has produced. It would be like binge-watching all current seasons of Game of Thrones on HBO and Orange Is the New Black on Netflix -- three times.

The content is technical, relevant and varied, presented by the experts and leaders in their fields. This includes favorite recurring shows, like 

  • “In the Clouds,” where viewers can interact with leaders from all over Red Hat.
  • “The Level Up Hour” where traditional RHEL admins learn how to start leveraging containers and OpenShift.
  • “Ask an OpenShift Admin” where viewers learn more about running and managing an OpenShift environment.

Additionally, we have our syndicated shows from the popular DevNation and OpenShift Commons brands. We took the “unprecedented access” mantra even further with our OpenShift product management team live streaming the roadmap for the world every few months -- THAT is open source.

Red Hatters doing what Red Hatters do on a daily basis for an audience is cool, but it’s the unscripted part that has provided some of the most memorable and sincere moments. Chris Short and I chatted recently, and he shared some of his “biggest surprises” with me.

  • When asked about the biggest overall surprise for OpenShift.TV, Chris shared, “Honestly, the guests themselves. Maybe not so much a surprise, but very validating that we have such a great and open culture. From engineers to our executive leadership -- everyone is genuinely nice and engaging. It really made my job easier.”
  • When discussing the “neatest” things we have done, “Wow, there are so many things -- even simulcasting across Twitch, YouTube, Facebook, and Periscope is really cool when you think about it, but overall it is how we are pushing the envelope on engagement with live quizzes and gamification. It’s amazing how many people love Langdon’s “Sweet, sweet internet points,” and it’s always fun when guests beat me at our quizzes.”
  • Biggest surprise? “Rob Szumski, he is one of our OpenShift PMs (Product Managers), and to be honest, I was expecting a lot of the same when we have PMs on, but Rob completely caught me off guard. He showed up with a remote-controlled, away mode CNC machine in the Running a Kubelet without a Control Plane — Controlling the CNC machine and camera episode. Totally safe. We swear. The audience said Szumski was on a Tony Stark level.”

These are just a few of the highlights, as Chris had many. So whether it is a high-level executive that opens up themselves to questions from any and everyone, audience members battling wits with our hosts, or multi-billion dollar superheroes with product management as a day job, OpenShift.TV’s first year delivered. At a time when the world was becoming less connected, we wanted to provide a way to stay connected. Mission accomplished, and I cannot wait to see what is next!
