Local info

🌞 08:17 | 12:46 | 17:14

⛅ +7 days ⇒ -6 min +1 min = -5 min

Marmite norvégienne — Wikipédia

Le Hollandais Volants -- Liens

A simple trick to save gas or electricity, depending on what you use for cooking.

Instead of heating the pot of water (10 minutes), then leaving it to cook on the heat (15 minutes), then throwing away all the hot water, you can just heat the pot (10 minutes) and let it sit off the heat for an hour or more.

Since the water is hot, it keeps cooking the food. With no heat underneath, the water cools down, so cooking takes longer. But if time is not a factor, it works very well.

I do this for potatoes, carrots, endives; in short, foods that need a fairly long cooking time in water.

When I get home in the evening, I bring the water and the vegetables to a boil, then turn off the heat and leave everything as is until it is time to eat (1-2 hours later). The pot is still at 60-70 °C, and the time that has passed is enough to cook it all.

Of course, remember to put a lid on the pot, otherwise it will cool down much faster (besides spreading water vapor everywhere).

The "marmite norvégienne" (haybox cooking) consists of putting the pot in a kind of "coat" that improves heat retention even further.
I have also seen such things for teapots. But in practice, you can also just wrap the pot in a woolen blanket or a sheet.

It does require starting an hour or two ahead, but if you eat a lot of potatoes, it is a simple trick. I have no figures, but since you go from 20-25 minutes of cooking on the heat down to 10, it is not negligible.

~

Another really simple trick that, to my great surprise, nobody uses: when you boil water in a kettle (for tea, for example), there is no point in filling a 2-liter kettle to the brim if you only want one cup.

It's common sense, but I have seen people systematically put in 1 L or 1.5 L for two people. That makes no sense: put in just what you need (or at least the kettle's minimum). It will heat up faster and use far less energy.


— (permalink)

How to Build a Disconnected OpenShift Cluster With Mirror Registries on RHEL CoreOS Using Podman and Systemd

OpenShift blog

Introduction

In OCP 4.5, Red Hat introduced the compact cluster, an OpenShift cluster consisting of three physical machines that have both the supervisor and worker roles applied. With limited hardware, it is now possible to have a highly available platform that can be used for both traditional virtual machines and containerized workloads.

This type of deployment is perfect for remote locations where:

  • space is limited
  • network connectivity to the edge is slow, unreliable, or not always present
  • high availability and autonomy are required
  • hardware cost has a high impact (in case of 100s of edge deployments)

The compact cluster can be either a connected cluster (with access to the Red Hat container registries) or disconnected (air-gapped). In case of a disconnected install, OpenShift’s container images need to be copied to a mirror registry on premises. While we recommend using Red Hat Quay, any OCI compatible container registry can be used for this.

This mirror registry needs to be available and reachable at all times for multiple reasons:

  • During upgrades of the platform, OpenShift needs to pull new image versions of its containerized components.
  • A pod can be (re)started and scheduled to a different host for various reasons (server failure, a reboot during an upgrade, an extra node is added, health check probe failing, etc.). If the container image is not present on this node, it needs to be pulled from the mirror registry.
  • Image garbage collection can remove a cached container image on a node if it is not in use. If the container image is needed in the future, it will need to be pulled again.

In this blog, we describe a way of deploying edge type clusters that occupy a small footprint (compact configuration) and are disconnected, that is, they have no connectivity to the internet. In addition, the network connection to a central location is not guaranteed, which means we need to have a mirror registry in the edge location.

Considering the space, network and cost constraints, this mirror registry is a challenge. If we add one extra physical machine to host the mirror registry, our hardware cost increases by 33%. This mirror registry also becomes a single point of failure if not deployed in a highly available way.

Deploying a highly available registry (for example, Red Hat Quay) on top of the OpenShift cluster and using that as the mirror registry for the underlying cluster itself could be another approach. This works very well during normal operations, but there are chicken-and-egg scenarios to consider when cold-starting the OpenShift cluster. There might not be three different availability zones in the edge location (something you would typically have in public clouds), so a power outage in the edge would result in the whole cluster being shut off. For some use cases, there might be a need to install the cluster in a central location, turn it off, and move the hardware to the remote location. In both scenarios, the cluster needs to be cold-started. When starting the cluster, if a container of the registry or one of its dependencies is scheduled on a node where the image was not pulled before, it will not be able to start because it cannot pull the needed container image. As a consequence, other components of the OpenShift platform might not be able to start because the registry is not running.

We have worked around this challenge by deploying registries on the RHEL CoreOS nodes using systemd and podman. Since podman is not under OpenShift’s control, this system can be treated as “outside of the cluster”, avoiding any chicken-and-egg scenarios. At the same time, it does not require extra hardware and does not have a single point of failure. Systemd makes sure that the container registry starts at boot time and is running before the Kubelet is started. Redundancy is created by installing this registry on multiple RHEL CoreOS nodes and defining multiple registry mirrors in the ImageContentSourcePolicy. The registries will host the container images of OpenShift and its operators. Other container images can be stored in OpenShift’s built-in registry.

As backend storage for this registry, we defined a directory at /var/local-registry; /var persists across reboots and updates of RHEL CoreOS.

In this blog post, we will walk through the steps to set up such a cluster.

Prerequisites

The mirror registries will be hosted on the RHEL CoreOS nodes themselves, so they can only be deployed post-installation. To start, you need a (three node) compact cluster. This could be a connected cluster (container images can be downloaded directly from the Red Hat registries) or a disconnected cluster (installed using a temporary mirror registry). Next, the container images are copied to these mirror registries and an ImageContentSourcePolicy is defined. The network connection to the Red Hat registries (in case of a connected cluster) or the temporary mirror registry (in case of an already disconnected cluster) can be removed at the end.

While you definitely can add more worker nodes to a compact cluster or have different machines for supervisor and worker roles, this is out of scope for this blog post.

To set things up, you need a machine with network connectivity to the OpenShift API and the RHEL CoreOS nodes. This could be a bare-metal machine, a virtual machine, or simply a laptop connected to the cluster. We will refer to this as the workstation. The following tools need to be installed on the workstation: the oc CLI, the opm CLI, podman, and skopeo, together with your pull secret.

The oc CLI, opm CLI, and pull secret can be downloaded from https://console.redhat.com/openshift/downloads.

In the procedure below, we have used variables so it is easier for you to copy and paste. “node1”, “node2” and “node3” represent the IPv4 address of the three RHEL CoreOS nodes, while “nodes” is an array. “quay_user” is an account on quay.io (which you can get for free). “path_to_pull_secret” is the path to the pull secret to access the Red Hat repositories on your workstation:

node1=<ip of node1>
node2=<ip of node2>
node3=<ip of node3>
nodes="$node1 $node2 $node3"
quay_user=<username on quay.io>
path_to_pull_secret=</path/to/pull/secret/on/workstation>

Deployment

Get the container registry

The container registry that we will be using is the CNCF Distribution Registry. As a proof of concept, we used an already existing container image, which can be found at docker://docker.io/library/registry:2.

Note that the CNCF Distribution registry is not shipped nor supported by Red Hat. 

Optionally: To avoid hitting the rate limits of Docker Hub, we copied the registry container image to quay.io and used that one instead. Create a free account on Quay.io with a public repository named “registry”:

$ podman login -u $quay_user quay.io
$ skopeo copy docker://docker.io/library/registry:2 docker://quay.io/${quay_user}/registry:2

Use skopeo to get the digest of the container image:

$ sha=$(skopeo inspect docker://quay.io/${quay_user}/registry:2 --format "{{ .Digest }}")
$ echo $sha
sha256:b0b8dd398630cbb819d9a9c2fbd50561370856874b5d5d935be2e0af07c0ff4c

Using a systemd unit file, we can guarantee that this container starts when RHEL CoreOS boots and that it is started before the Kubelet or CRI-O. We will use a MachineConfig custom resource (and the Machine Config Operator) to define it on the nodes:

cat <<EOF >> machine_config.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-master-local-registry
spec:
  config:
    ignition:
      config: {}
      security:
        tls: {}
      timeouts: {}
      version: 3.1.0
    networkd: {}
    passwd: {}
    systemd:
      units:
        - name: container-registry.service
          enabled: true
          contents: |
            [Unit]
            Description=Local OpenShift Container Registry
            Wants=network.target
            After=network-online.target
            [Service]
            Environment=PODMAN_SYSTEMD_UNIT=%n
            Restart=on-failure
            TimeoutStopSec=70
            ExecStartPre=/usr/bin/mkdir -p /var/local-registry
            ExecStartPre=/bin/rm -f %t/container-registry.pid %t/container-registry.ctr-id
            ExecStart=/usr/bin/podman run --conmon-pidfile %t/container-registry.pid --cidfile %t/container-registry.ctr-id --cgroups=no-conmon --replace -d --net=host --name registry -v /var/local-registry:/var/lib/registry:z quay.io/$quay_user/registry@$sha
            ExecStop=/usr/bin/podman stop --ignore --cidfile %t/container-registry.ctr-id -t 10
            ExecStopPost=/usr/bin/podman rm --ignore -f --cidfile %t/container-registry.ctr-id
            PIDFile=%t/container-registry.pid
            Type=forking
            [Install]
            WantedBy=multi-user.target default.target
EOF

oc apply -f machine_config.yaml

 

The container image is referenced by its digest, since this is a requirement when using repository mirrors. Also note that the directory /var/local-registry (arbitrarily chosen) is automatically created if it does not exist. The contents of /var are preserved during RHEL CoreOS updates.

The Machine Config Operator will apply this configuration to each node and gracefully reboot them one at a time. You can monitor this process:

$ watch oc get nodes

NAME                           STATUS                     ROLES           AGE   VERSION
<node1>                        Ready                      master,worker   44h   v1.22.0-rc.0+75ee307
<node2>                        Ready                      master,worker   44h   v1.22.0-rc.0+75ee307
<node3>                        NotReady,SchedulingDisabled master,worker   44h   v1.22.0-rc.0+75ee307
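You can also follow the rollout at the MachineConfigPool level; the UPDATED column turns to True once every master node is running the new rendered configuration:

$ oc get machineconfigpool master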

After a while, you can check that the service is running on each node:

$ oc debug node/<node-name>
sh-4.4# chroot /host
sh-4.4# systemctl is-active container-registry
active
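As an extra sanity check (not part of the original procedure), you can query the registry API directly from the debug shell; a freshly started registry returns an empty repository list:

sh-4.4# curl http://localhost:5000/v2/_catalog
{"repositories":[]}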

Define the registries within OpenShift

In an ideal scenario, the registries are using TLS. In our setup, this is not the case, so we will define these registries as insecure. This needs to be done both on your workstation as well as in OpenShift.

To define these registries as insecure on your workstation, edit /etc/containers/registries.conf to set the following:

[registries.insecure]
registries = ['<node1>','<node2>', '<node3>']

The above lines are version 1 syntax of the containers-registries configuration file and might be different for newer versions of the container tools. On our RHEL8 workstation, we have been using the container-tools:3.0 yum module.
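For reference, with the version 2 syntax of the containers-registries configuration file (used by newer container tools, and the same format OpenShift writes to the nodes, as shown further below), the equivalent configuration would look roughly like this:

[[registry]]
location = "<node1>:5000"
insecure = true

[[registry]]
location = "<node2>:5000"
insecure = true

[[registry]]
location = "<node3>:5000"
insecure = true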

To set it in OpenShift, create the following custom resource:

cat <<EOF >> insecure.yaml
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  registrySources:
    insecureRegistries:
    - localhost:5000
    - $node1:5000
    - $node2:5000
    - $node3:5000
EOF

oc apply -f insecure.yaml

Note that we defined “localhost:5000” as well. The reasoning behind this is that we will define multiple mirrors for redundancy, with “localhost:5000” being the first one to try. Since a container registry is running locally on each node, this avoids having all container image pulls land on a single node. In case of clusters with extra worker nodes (without a local registry), you might want to leave this entry out.

Again you can use “oc get nodes” to follow the progress and check the applied configuration:

$ oc debug node/<node>
sh-4.4# chroot /host
sh-4.4# cat /etc/containers/registries.conf
...
[[registry]]
prefix = ""
location = "localhost:5000"
insecure = true
...

Copy the container images of OpenShift

There are three repositories defined in the registries: “ocp” contains the images of the OpenShift components, “registry” contains the image of the CNCF Distribution container registry itself, and “olm-mirror” contains all the content of the OpenShift operators. After transferring the needed container images, you will need to create ImageContentSourcePolicy custom resources. An ImageContentSourcePolicy holds cluster-wide information about which mirrors to try for a particular container image.

We will first copy the OpenShift container images to the newly created registries. Execute the following commands for all three registries:

OCP_RELEASE=<OCP version, example 4.9.0>
LOCAL_REGISTRY='<IP OF REGISTRY>:5000'
LOCAL_REPOSITORY=ocp4/openshift
PRODUCT_REPO='openshift-release-dev'
LOCAL_SECRET_JSON=$path_to_pull_secret
RELEASE_NAME="ocp-release"
ARCHITECTURE=x86_64

oc adm release mirror -a ${LOCAL_SECRET_JSON} --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE} --insecure=true

The above command will only work if your workstation is connected to the cluster as well as the Red Hat registries. In case this is not possible, copy the content to removable media and upload it to the cluster’s registries as described here.
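To verify that the release content actually landed in a node's registry, you can, for example, inspect the mirrored release image with skopeo (reusing the variables defined earlier; --tls-verify=false because our registries are insecure):

skopeo inspect --tls-verify=false docker://${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}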

The mirror command prints out an example of an ImageContentSourcePolicy that can be used. Let’s check the ImageContentSourcePolicy shown after copying the content to the first node:

cat <<EOF >> icsp.yaml
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: example
spec:
  repositoryDigestMirrors:
  - mirrors:
    - $node1:5000/ocp4/openshift
    source: quay.io/openshift-release-dev/ocp-release
  - mirrors:
    - $node1:5000/ocp4/openshift
    source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
EOF

You can define multiple mirrors in an ImageContentSourcePolicy. When a node makes a request for an image from the source repository, it tries each mirrored repository in turn until it finds the requested content. If all mirrors fail, the cluster tries the original source repository. Upon success, the image is pulled to the node.

To get redundancy, we will define multiple mirrors, with the first one being “localhost:5000”. Let’s use awk to generate a new file based on the previous output:

awk '{if (!/'$node1'/) {print ; next} ;  print gensub(/'$node1'/, "localhost", "1") ; print;  print gensub(/'$node1'/, "'$node2'", "1") ; print gensub(/'$node1'/, "'$node3'", "1") } ' icsp.yaml >> icsp_new.yaml

icsp_new.yaml looks as follows:

apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: example
spec:
  repositoryDigestMirrors:
  - mirrors:
    - localhost:5000/ocp4/openshift
    - 10.10.21.174:5000/ocp4/openshift
    - 10.10.21.175:5000/ocp4/openshift
    - 10.10.21.176:5000/ocp4/openshift
    source: quay.io/openshift-release-dev/ocp-release
  - mirrors:
    - localhost:5000/ocp4/openshift
    - 10.10.21.174:5000/ocp4/openshift
    - 10.10.21.175:5000/ocp4/openshift
    - 10.10.21.176:5000/ocp4/openshift
    source: quay.io/openshift-release-dev/ocp-v4.0-art-dev

Let’s create the new ImageContentSourcePolicy:

oc create -f icsp_new.yaml

Copy the container image of the CNCF Distribution registry

We also need to copy the container image of the registry itself. This could be needed when a failed node is replaced. The same repository can also be used to upload new versions of the CNCF Distribution registry.

for node in $nodes;
do
skopeo copy docker://quay.io/$quay_user/registry:2 docker://$node:5000/registry:2
done

For this we need to create an ImageContentSourcePolicy as well:

cat <<EOF >> icsp_registry.yaml
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: registry
spec:
  repositoryDigestMirrors:
  - mirrors:
    - localhost:5000/registry
    - $node1:5000/registry
    - $node2:5000/registry
    - $node3:5000/registry
    source: quay.io/$quay_user/registry
EOF

oc create -f icsp_registry.yaml

Set up the Operator Lifecycle Manager

To copy the containers needed for the OpenShift operators, we will follow the official documentation.

If not already done, disable the sources for the default catalog:

oc patch OperatorHub cluster --type json \
    -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'

On your workstation, prune an index image. You need to define a list of operators you want to copy. For testing, we only used the local-storage-operator, but you will likely need other operators too. For the registry name, we used an arbitrary value, “dummyregistry”:

podman login registry.redhat.io
opm index prune \
  -f registry.redhat.io/redhat/redhat-operator-index:v4.9 \
  -p local-storage-operator \
   -t dummyregistry.io:5000/olm-mirror/redhat-operator-index:v4.9

Copy this image to all the registries.

for node in $nodes;
do
podman push dummyregistry.io:5000/olm-mirror/redhat-operator-index:v4.9 \
$node:5000/olm-mirror/redhat-operator-index:v4.9
done

Next, we will need to get all the content and push it to the registries.

for node in $nodes;
do
oc adm catalog mirror $node:5000/olm-mirror/redhat-operator-index:v4.9 \
$node:5000/olm-mirror --insecure --index-filter-by-os='linux/amd64' \
-a $path_to_pull_secret
done

After executing the mirror command, a subfolder is created containing an ImageContentSourcePolicy and a CatalogSource. To know more about the CatalogSource custom resource, check this page.

For example, the ImageContentSourcePolicy.yaml for the first node looks like this:

apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  labels:
    operators.openshift.org/catalog: "true"
  name: redhat-operator-index-0
spec:
  repositoryDigestMirrors:
  - mirrors:
    - $node1:5000/olm-mirror/openshift4-ose-local-storage-operator-bundle
    source: registry.redhat.io/openshift4/ose-local-storage-operator-bundle
  - mirrors:
    - $node1:5000/olm-mirror/openshift4-ose-local-storage-operator
    source: registry.redhat.io/openshift4/ose-local-storage-operator
  - mirrors:
    - $node1:5000/olm-mirror/openshift4-ose-local-storage-diskmaker
    source: registry.redhat.io/openshift4/ose-local-storage-diskmaker
  - mirrors:
    - $node1:5000/olm-mirror/openshift4-ose-local-storage-static-provisioner
    source: registry.redhat.io/openshift4/ose-local-storage-static-provisioner

This file obviously depends on the operators you defined in the prune command. Again, we need to duplicate lines for each mirror registry, with “localhost:5000” being the first one. We can use the same awk command for this:

awk '{if (!/'$node1'/) {print ; next} ;  print gensub(/'$node1'/, "localhost", "1") ; print; print gensub(/'$node1'/, "'$node2'", "1") ; print gensub(/'$node1'/, "'$node3'", "1") } ' imageContentSourcePolicy.yaml >> imageContentSourcePolicy_new.yaml
oc apply -f imageContentSourcePolicy_new.yaml

A catalogSource.yaml is created as well:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: redhat-operator-index
  namespace: openshift-marketplace
spec:
  image: $node1:5000/olm-mirror/olm-mirror-redhat-operator-index:v4.9
  sourceType: grpc

We will need to edit this file to point to our dummyregistry and use a sha256 hash instead:

sha=$(skopeo inspect docker://$node1:5000/olm-mirror/redhat-operator-index:v4.9 --format="{{ .Digest }}")

cat <<EOF >> catalogsource.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: redhat-operator-index
  namespace: openshift-marketplace
spec:
  image: dummyregistry.io:5000/olm-mirror/redhat-operator-index@$sha
  sourceType: grpc
  publisher: my_org
  updateStrategy:
    registryPoll:
      interval: 30m
EOF

cat <<EOF >> icsp_dummyregistry.yaml
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  labels:
    operators.openshift.org/catalog: "true"
  name: dummyregistry
spec:
  repositoryDigestMirrors:
  - mirrors:
    - localhost:5000/olm-mirror/redhat-operator-index
    - $node1:5000/olm-mirror/redhat-operator-index
    - $node2:5000/olm-mirror/redhat-operator-index
    - $node3:5000/olm-mirror/redhat-operator-index
    source: dummyregistry.io:5000/olm-mirror/redhat-operator-index
EOF

oc apply -f icsp_dummyregistry.yaml

oc apply -f catalogsource.yaml
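To confirm that the catalog is now being served from the mirrors, you can, for instance, check the CatalogSource, its pod, and the package manifests it exposes (the grep pattern assumes the local-storage-operator used above):

oc get catalogsource -n openshift-marketplace
oc get pods -n openshift-marketplace
oc get packagemanifests -n openshift-marketplace | grep local-storage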

That is it. You can now disconnect the clusters or remove the temporary registry you used during the OpenShift installation.

In case of updates, you can copy the new container content and create ImageContentSourcePolicies in a similar way. For more information about updates in disconnected environments, please refer to these docs.

To consider

The containers that are created using systemd are not managed by OpenShift, and as a consequence, the allocation of resources to these containers is unknown to the OpenShift platform. You have to account for the resources the container registry needs and ensure it does not conflict with, or 'starve', other system components. For more information on how to configure this, see this page.
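For instance, one simple option is to cap the registry's consumption directly in the podman invocation of the systemd unit shown earlier; the values below are purely illustrative and need to be sized for your environment:

# illustrative only: limit the registry container to one CPU and 1 GiB of memory
ExecStart=/usr/bin/podman run --cpus=1 --memory=1g --conmon-pidfile %t/container-registry.pid ...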

It is also possible (depending on the configuration) that the deployment of the registry confuses OpenShift into thinking it is using more resources than it is. For example, if the container registry places content in / or /var, then OCP may see this space fill up and try to evict pods to free up space.

Other things to consider could be the usage of HTTPS/certificates and/or pull secrets to connect to the registries.

Surviving a nuclear attack

Liens SebSauvage
Seen on Twitter:
"1970: We are going to design a global network capable of surviving a nuclear attack.
2021: AWS is down and my coffee machine no longer works."
(Permalink)

We are lying to you ...And We'll Do it Again - YouTube

Le Hollandais Volants -- Liens

(Video by Kurzgesagt)

And I agree wholeheartedly.

Popularizing science is not teaching.
Popularizing is explaining, making complicated notions graspable by the uninitiated (but nonetheless curious).

And for that, yes, we have to lie.

We say the Earth is a sphere, when it is actually slightly flattened.
We say atoms are little balls, when they probably are not.
And so on.

In fact, science itself, "Science", is not a source of truth. Science is a way of explaining our universe and everything in it, one that is consistent with what we observe.

Science is a description of the world, but it is not the world.

And popularization is a bit like that too. With Couleur-Science, I have never claimed nor wanted to replace the teacher. At best, I hope to spark curiosity and let those who ask themselves questions build an answer of their own. I do not hand out The Answer, but rather a way to picture it.

That is why I like to use all sorts of examples and analogies from everyday life. It works well, but it has its limits. Entropy does not apply to a deck of cards, for example. But entropy AND the deck of cards both involve the notion of probability, and that is why we illustrate the former with the latter.

But it really is a different mission from pure research or teaching, and therefore comes with its own possibilities.


— (permalink)

Microsoft starts rolling out redesigned Notepad for Windows 11

Liens SebSauvage
2021: Microsoft timidly decides to improve Windows' Notepad text editor.
Everyone for the past 18 years: use Notepad++.
(Permalink)

Migrant crisis: the person who slashed tents has been "fired", says Darmanin

Liens SebSauvage
Darmanin: pays precarious workers to slash the tents of the poor.
Darmanin: "See, it's not the cops!"
Darmanin: gets bad press over it.
Darmanin: "Oh my, yes, slashing tents is unacceptable!"
Darmanin: fires the person who slashed the tents.

"Fools will dare anything..." you know the rest.
(And then they whine: "boo hoo, why have the common people lost trust in the political class?")
(Permalink)

12-YO Dies by Suicide After Bullies Say He’d Go to Hell for Being Gay

Liens SebSauvage
L'homophobie, le harcèlement et la bigoterie TUENT.
Les mots sont des armes. Des armes qui tuent.
(Permalink)

Edmond Bloch, the crusades of a Jew who befriended antisemites

Liens SebSauvage
A reminder:
You can be Jewish and antisemitic.
You can be a foreigner and racist.
You can be homosexual and homophobic.

Zorglub is a foreigner, with parents who were immigrants, but dumb as a post as he is, he is racist and against immigration.

(via https://sammyfisherjr.net/Shaarli/?RH_D7w)
(Permalink)

Digging into Linux namespaces - part 1

Liens SebSauvage
Someday I really should take the time to look at how namespaces work on Linux, because they are a truly powerful mechanism for isolating applications and improving security. They are also one of the basic building blocks of containers.
(Permalink)

YouTube Processes 4 Million Content ID Claims Per Day, Transparency Report Reveals * TorrentFreak

Liens SebSauvage
YouTube's famous "Content ID" system detects copyrighted works and blocks the YouTube videos that use them.
Content ID thus "blocks", automatically and with no human review, 4 million videos PER DAY. Justified or not (remember that simply filming the moon or recording yourself typing on a keyboard is enough to get censored by Content ID: https://sebsauvage.net/links/?o88XmQ https://sebsauvage.net/links/?05mm1w)
And on top of that, only a handful of rights holders are even allowed to use Content ID (the others have to file infringement claims through a web form).
(Permalink)

Dowsing tests: the failure of the Munich experiments | La Théière Cosmique

Liens SebSauvage
Dowsing (a.k.a. "divining rods") = nonsense
(Permalink)

Secure use of (Open)SSH | Agence nationale de la sécurité des systèmes d'information

Liens SebSauvage
ANSSI's guide to hardening OpenSSH.
(Permalink)

Responding to Tor censorship in Russia | The Tor Project

Liens SebSauvage
After banning VPNs (https://sebsauvage.net/links/?nGqAEA) and taking control of the largest Russian social network (https://sebsauvage.net/links/?KBlB4A), the Russian dictatorship is now banning the use of Tor.
(Permalink)

Open Terms Archive on Twitter: "When Google Classroom removes the mention "free" from its Privacy policy 🧐 https://t.co/YgBMnFCUpQ https://t.co/UHGQS352Os" / Twitter

Le Hollandais Volants -- Liens

Hahaha!

The GAFAM cloud? Flee, you fool!


— (permalink)

Did You Know Applications Running on Kubernetes Love Bare Metal?

OpenShift blog

Red Hat is excited about bare metal and the potential benefits it can bring to both public and on-premises computing. Once upon a time, bare metal was declared a dirty word in the datacenter (whisper). It was something that needed to be virtualized with a hypervisor (ESXi, KVM, Hyper-V, Xen, and others) to offer its true potential. Something dramatic and evolutionary would need to happen to change this operational and economic position—a position that forced many people to pay to get hypervisors on their hardware. Luckily for the world, not one but two innovations hit within a 5-year span to change history.

The first thing that opened the door to bare-metal computing coming back into popularity in the modern cloud was Linux containerization and the ability to have portable workloads packaged in an open container initiative (OCI) format. Linux containers transformed modern workloads and led to standardizations in orchestration. Those innovations allowed developers to attach their application services to an infrastructure at a higher and more powerful API level. The second thing that really made bare metal move came from intelligent application services. These include big data analytics, high-performance computing, machine learning (ML), deep learning, and artificial intelligence (AI). These high-bandwidth and low-latency workloads, and the distributed toolsets or development models that leverage them, typically require large compute resources commonly found in bare metal. Connecting the results and inference back to traditional applications, in ways that let data proximity shine through to a powerful end-user experience made better through analytics, also excelled on bare metal. These specific use cases exploit the underlying hardware in a much more efficient and performant way as compared to passing instruction sets through a virtualization layer. This combination of Linux containerization and analytics at scale changed how people wanted to interact with bare metal and opened the floodgates of people seeking this competitive advantage for their businesses.

At the same time, specific commercial verticals had innovation spikes that benefited from bare metal. A good example is 5G in the telecommunications space. As providers and suppliers revamped their solution stacks for 5G, they modernized and incorporated the agility of containerization. This drove the need for IPv6, SR-IOV, Container Network Functions (CNFs), NUMA topologies, and other innovations in containerized applications on bare metal. Another good example? The media and entertainment vertical driving new human interactions with data at the edge. Cheap, simple, and lightweight devices, which need to run containers to process large amounts of data, are cost prohibitive to bring back into the datacenter. All that requires containerization and its framework to be able to leverage unique, bare-metal solutions that cross CPU chip types (that is, ARM).

Red Hat, being a leader in Linux operating systems, OpenStack, containerization, and Kubernetes, saw an opportunity to do something special through the combination of these technologies, which would benefit customers looking to create a competitive advantage with bare metal. We've been thinking about this for a while. By laying RHEL down on bare metal, as people have been doing for decades, we have everything we need to form a great user experience.

The first thing we created on this journey was an open source community around how to automate server management for Kubernetes clusters. On this front, we were fortunate enough to have been already doing it for years in another community. Leveraging the OpenStack Ironic open source project, we created Metal3. Metal3 allows us to automate the orchestration of servers in racks through such popular interfaces as IPMI and Redfish to alleviate the operational burdens of owning them. After the server is up, we have networking. Ansible offers an exhaustive list of network device automations to line up your racks. We have zero touch provisioning (ZTP) that places all these steps in a global deployment pattern ready for thousands of end points.

We believe there will always be a need to have virtual machines (VMs) in a complete solution, so we created the KubeVirt open source community from our knowledge of KVM and RHV. It's a fresh look at modernizing legacy VMs by treating them as if they were pods on Kubernetes: same network, same scale up or scale down, same YAML or resource objects. The project has ported it all over to native Kubernetes API, all right next to your containers. Adept at storage, GPU, and sophisticated application deployment patterns for databases through operators, Red Hat has invested in numerous open source projects to make sure people can get the most out of running Kubernetes on bare metal where appropriate, such as:

  • RHEL
  • Metal3
  • Zero touch provisioning
  • KubeVirt
  • Ansible networking
  • Assisted installer
  • Rook
  • Ceph
  • GPU
  • Operators
  • Multi-cluster management
  • Container network functions
  • Red Hat Edge
  • Kata Containers
  • Resource Control

Now that you understand the backdrop against which bare metal is important in modern computing, let’s turn our attention to some common misconceptions in the industry. I'll use an example I saw recently on the internet. In a nutshell, some people conflate adding more kubelets with linearly increasing the working capacity of a cluster. What people mistakenly point out during such a conversation is that the open source Kubernetes community limits how many pods the kubelet can have, and that results in a pod limit of around 500 pods per kubelet. They'll take that limit number and compare it to carving up the same node into VMs in an effort to point out that you can get more pods from the same machine by having more kubelets, each with a 500-pod limit, than having a single kubelet with one 500-pod limit.

I'll point out that this pod limit is not a hard one, and it's not really a best practice to think of it this way. You shouldn't think of the number of pods per node as an independent number that is not influenced by other things in the same cluster. Most people who are leveraging bare metal are doing so for larger workloads that would consume most boxes at pod densities of around 250. Or, they are doing so to better access a raw hardware resource from a PCI bus or other location. To date, there's been little need to redesign the kubelet around the issue. Maybe someday in the future it will be needed.

The other thing people should consider when thinking about resource consumption is the workload itself. Now with public clouds, datacenters, VMs, bare metal, and new specialized devices for servers, we have a lot of options for pairing our workloads to the correct target. We no longer need to be forced into a simplistic world of only VMs. Running as many lightweight API endpoints as you can on bare metal is likely not the best use of bare metal. Such a workload’s characteristics belong on a public cloud or on a VM with containers running on top and does not need to run on bare metal. This is why OpenShift offers an ability to leverage OpenShift Virtualization (kubeVirt) so that users can make the right choices for their workloads.

When you look at getting the most out of your Kubernetes clusters, there's more to consider than the pods. Kubernetes has a variety of API resource objects, and they all have a relationship or consequence to each other. This is especially important as you add more container images, more pods, more kubernetes services, more ports, more LoadBalancers, more secrets, more namespaces, more PVCs, and on and on. They can all trigger the use of one another as you deploy more real-world, even lightweight, applications. Eventually, one will normally reach exhaustion in the cluster on one of these other resources before the pod limit. In terms of the business-critical application loads we see, we find very large clusters normally will hit limits on the etcd and service APIs before pod limits are reached.  When thinking about resource object exhaustion, there is more than pods to consider.

The Future

With OpenShift, you can run on multiple public clouds, bare metal, and VMs. You can let the requirements of the application service drive the infrastructure choice. Think about the power of running a mixed KubeVirt, container, and Knative Serverless composite workload in the same cluster. Now close your eyes and see it across multiple clusters. Some clusters you run and some clusters run for you, all of them being the same OpenShift. We at Red Hat cannot wait to see what you turn your bare metal into. Let us know.

Validating Secure Boot Functionality in a SNO for OpenShift 4.9

OpenShift blog

Secure boot is a compromise between the UEFI bootloader and the operating system it hands off to. Explaining the complete details of secure boot in RHEL/RHCOS is out of scope for this document. The following article does an excellent job of explaining it properly.

When using secure boot in a single node deployment of OpenShift, two methods can be used: manually setting secure boot in the system BIOS or using the bare-metal host CR to set the secure boot setting during provisioning.

More information regarding signing drivers and verifying them on RHEL8 is available in the official documentation online.

Verifying Secure Boot Status

A redfish curl command can be used to determine the current state of secure boot against a BMC endpoint. There are two examples below, one for HP and one for Dell:

# HP
curl -sS -k -u Administrator:password -H "Content-Type: application/json" -H "OData-Version: 4.0" https://[fd00:4888:x:y::z]/redfish/v1/Systems/1/SecureBoot | jq '.'
{
"@odata.context": "/redfish/v1/$metadata#SecureBoot.SecureBoot",
"@odata.etag": "W/\"4A4CB737\"",
"@odata.id": "/redfish/v1/Systems/1/SecureBoot",
"@odata.type": "#SecureBoot.v1_0_0.SecureBoot",
"Id": "SecureBoot",
"Actions": {
  "#SecureBoot.ResetKeys": {
    "target": "/redfish/v1/Systems/1/SecureBoot/Actions/SecureBoot.ResetKeys"
  }
},
"Name": "SecureBoot",
"SecureBootCurrentBoot": "Disabled",
"SecureBootEnable": false,
"SecureBootMode": "UserMode"
}
# Dell
curl -sS -k -u administrator:password -H "Content-Type: application/json" -H "OData-Version: 4.0" https://10.19.x.y/redfish/v1/Systems/System.Embedded.1/SecureBoot | jq '.'
{
"@odata.context": "/redfish/v1/$metadata#SecureBoot.SecureBoot",
"@odata.id": "/redfish/v1/Systems/System.Embedded.1/SecureBoot",
"@odata.type": "#SecureBoot.v1_0_6.SecureBoot",
"Actions": {
  "#SecureBoot.ResetKeys": {
    "ResetKeysType@Redfish.AllowableValues": [
      "ResetAllKeysToDefault",
      "DeleteAllKeys",
      "DeletePK",
      "ResetPK",
      "ResetKEK",
      "ResetDB",
      "ResetDBX"
    ],
    "target": "/redfish/v1/Systems/System.Embedded.1/SecureBoot/Actions/SecureBoot.ResetKeys"
  },
  "Oem": {}
},
"Description": "UEFI Secure Boot",
"Id": "SecureBoot",
"Name": "UEFI Secure Boot",
"Oem": {
  "Dell": {
    "@odata.context": "/redfish/v1/$metadata#DellSecureBoot.DellSecureBoot",
    "@odata.type": "#DellSecureBoot.v1_0_0.DellSecureBoot",
    "Certificates": {
      "@odata.id": "/redfish/v1/Systems/System.Embedded.1/SecureBoot/Oem/Dell/Certificates"
    }
  }
},
"SecureBootCurrentBoot": "Enabled",
"SecureBootEnable": true,
"SecureBootMode": "DeployedMode"
}

Enabling Secure Boot

There are two methods for enabling secure boot. The first would be to interact with the hardware in question. Here is a redfish command to enable it on Dell hardware:

curl -i -k -X PATCH -u administrator:password -H "Content-Type: application/json" -d '{"SecureBootEnable":true}' https://10.19.x.y/redfish/v1/Systems/System.Embedded.1/SecureBoot

{"@Message.ExtendedInfo":[{"Message":"Successfully Completed Request","MessageArgs":[],"MessageArgs@odata.count":0,"MessageId":"Base.1.7.Success","RelatedProperties":[],"RelatedProperties@odata.count":0,"Resolution":"None","Severity":"OK"},{"Message":"The operation is successfully completed.","MessageArgs":[],"MessageArgs@odata.count":0,"MessageId":"IDRAC.2.4.SYS430","RelatedProperties":[],"RelatedProperties@odata.count":0,"Resolution":"No response action is required.However, to make them immediately effective, restart the host server.","Severity":"Informational"}]}

curl -i -k -X POST -u administrator:password -H "Content-Type: application/json" -d '{"ResetType":"GracefulRestart"}' https://10.19.x.y/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset

The following is the equivalent command for HPE hardware:

curl -i -k -X PATCH -u administrator:password -H "Content-Type: application/json" -d '{"SecureBootEnable":true}' https://10.19.x.y/redfish/v1/Systems/1/SecureBoot/
HTTP/1.1 200 OK
Cache-Control: no-cache
Content-length: 261
Content-type: application/json; charset=utf-8
Date: Thu, 18 Nov 2021 14:08:35 GMT
ETag: W/"78E0DBAE"
X-Content-Type-Options: nosniff
X-Frame-Options: sameorigin
X-XSS-Protection: 1; mode=block
X_HP-CHRP-Service-Version: 1.0.3
{"Messages":[{"MessageID":"iLO.0.10.SystemResetRequired"}],"Type":"ExtendedError.1.0.0","error":{"@Message.ExtendedInfo":[{"MessageID":"iLO.0.10.SystemResetRequired"}],"code":"iLO.0.10.ExtendedInfo","message":"See @Message.ExtendedInfo for more information."}}
# Validating that the change has been made:
curl -i -k -X GET -u administrator:password https://10.19.x.y/redfish/v1/Systems/1/SecureBoot/
HTTP/1.1 200 OK
Allow: GET, HEAD, PATCH
Cache-Control: no-cache
Content-length: 407
Content-type: application/json; charset=utf-8
Date: Thu, 18 Nov 2021 14:10:19 GMT
ETag: W/"AEA4F742"
Link: </redfish/v1/SchemaStore/en/HpSecureBoot.json/>; rel=describedby
X-Content-Type-Options: nosniff
X-Frame-Options: sameorigin
X-XSS-Protection: 1; mode=block
X_HP-CHRP-Service-Version: 1.0.3
{"@odata.context":"/redfish/v1/$metadata#Systems/Members/1/SecureBoot$entity","@odata.id":"/redfish/v1/Systems/1/SecureBoot/","@odata.type":"#HpSecureBoot.1.0.0.HpSecureBoot","Id":"SecureBoot","Name":"SecureBoot","ResetAllKeys":false,"ResetToDefaultKeys":false,"SecureBootCurrentState":false,"SecureBootEnable":true,"Type":"HpSecureBoot.1.0.0","links":{"self":{"href":"/redfish/v1/Systems/1/SecureBoot/"}}}
curl -i -k -X POST -u administrator:password -H "Content-Type: application/json" -d '{"ResetType":"ForceRestart"}' https://10.19.x.y/redfish/v1/Systems/1/Actions/ComputerSystem.Reset/
{"Messages":[{"MessageID":"Base.0.10.Success"}],"Type":"ExtendedError.1.0.0","error":{"@Message.ExtendedInfo

The custom resource for the bare-metal host can be used to configure secure boot as well. Here is an example of the CR to use for doing an assisted service SNO deployment. For more information regarding assisted service ACM ZTP install, refer to the following unofficial repository.

spec:
  online: true
  automatedCleaningMode: disabled
  bootMACAddress: e4:43:4b:x:y:z
  bootMode: UEFISecureBoot
  bmc:
    address: idrac-virtualmedia+https://10.19.x.y/redfish/v1/Systems/System.Embedded.1
    credentialsName: sno-worker-2-worker
    disableCertificateVerification: true

After the single node is installed, the secure boot state can be verified a couple of different ways. The mokutil command can be used to verify and to add custom certificates for secure boot:

mokutil --sb-state
SecureBoot enabled
mokutil --list-enrolled
[key 1]
SHA1 Fingerprint: e6:f5:06:46:20:69:aa:17:d2:e8:61:05:03:63:5c:20:f3:a9:95:c3
Certificate:
  Data:
      Version: 3 (0x2)
      Serial Number:
          83:73:0d:2b:72:80:d1:5a
      Signature Algorithm: sha256WithRSAEncryption
      Issuer: O=Red Hat, Inc., CN=Red Hat Secure Boot CA 5/emailAddress=secalert@redhat.com
      Validity
          Not Before: Jun  9 08:15:36 2020 GMT
          Not After : Jan 18 08:15:36 2038 GMT
      Subject: O=Red Hat, Inc., CN=Red Hat Secure Boot CA 5/emailAddress=secalert@redhat.com
      Subject Public Key Info:
          Public Key Algorithm: rsaEncryption
              RSA Public-Key: (2048 bit)
..omitted..
dmesg | egrep 'efi|secure'
[    0.000000] efi: EFI v2.70 by Dell Inc.
[    0.000000] efi:  ACPI=0x6fffe000  ACPI 2.0=0x6fffe014  SMBIOS=0x68d36000  SMBIOS 3.0=0x68d34000  MEMATTR=0x6525f020  MOKvar=0x683a9000
[    0.000000] secureboot: Secure boot enabled
..omitted..
efibootmgr -v
BootCurrent: 0008
BootOrder: 0001,0006,0008
Boot0000* Hard drive C:        VenHw(d6c0639f-c705-4eb9-aa4f-5802d8823de6)............................f.........................................................A.....................P.E.R.C. .H.7.3.0.P. .M.X. .(.b.u.s. .1.8. .d.e.v. .0.0.)...
Boot0001  NIC in Mezzanine 1A Port 1 Partition 1        VenHw(3a191845-5f86-4e78-8fce-c4cff59f9daa)
Boot0002* IBA 40G Slot 3B00 v1118        BBS(128,IBA 40G Slot 3B00 v1118,0x0)..;...................................................................................A.....................I.B.A. .4.0.G. .S.l.o.t. .3.B.0.0. .v.1.1.1.8...
Boot0003* IBA 40G Slot 8600 v1118        BBS(128,IBA 40G Slot 8600 v1118,0x0)..................`.......`...`.......................................................A.....................I.B.A. .4.0.G. .S.l.o.t. .8.6.0.0. .v.1.1.1.8...
Boot0006* Red Hat Enterprise Linux        HD(2,GPT,6fe00ee0-1948-4e3c-8254-c5d7702b677f,0x1000,0x3f800)/File(\EFI\redhat\shimx64.efi)
Boot0008* Red Hat Enterprise Linux        HD(2,GPT,1e8869d4-1225-4915-866c-9e18550a9a72,0x1000,0x3f800)/File(\EFI\redhat\shimx64.efi)
MirroredPercentageAbove4G: 0.00
MirrorMemoryBelow4GB: false

Real Time Kernel Interactions

After enabling or installing the RT kernel, some of the native interactions with secure boot change. The first thing to note is that the efivars are missing, so mokutil and efibootmgr do not operate normally. See the following BZ for more details.

uname -r
4.18.0-305.19.1.rt7.91.el8_4.x86_64
mokutil --sb-state
EFI variables are not supported on this system
mokutil -l
EFI variables are not supported on this system
efibootmgr -v
EFI variables are not supported on this system
dmesg | grep -i secure
[    0.000000] secureboot: Secure boot enabled
[    0.000000] Kernel is locked down from EFI secure boot; see man kernel_lockdown.7

Adding the following kernel boot arguments will allow for interaction again:

cat <<EOF | oc create -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-efi-runtime-path-kargs
spec:
  kernelArguments:
  - "efi=runtime"
EOF
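Once the Machine Config Operator has rolled this out and rebooted the node, you can verify, for example, that the argument made it onto the kernel command line:

oc debug node/<node-name>
sh-4.4# chroot /host
sh-4.4# grep -o 'efi=runtime' /proc/cmdline
efi=runtime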

Out-of-Tree Driver Installation and Caveats

When compiling a driver out-of-tree, there are a few options for compiling that include signing and not signing drivers. When adding an unsigned driver from out-of-tree, the following error will occur when loading it:

Nov 16 22:19:15 worker-2 bash[203728]: depmod: WARNING: could not open /lib/modules/4.18.0-305.25.1.rt7.97.el8_4.x86_64/modules.order: No such file or directory
Nov 16 22:19:18 worker-2 bash[203728]: depmod: WARNING: could not open /lib/modules/4.18.0-305.25.1.rt7.97.el8_4.x86_64/modules.builtin: No such file or directory
Nov 16 22:19:18 worker-2 bash[203728]: rmmod: ERROR: Module ice is not currently loaded
Nov 16 22:19:19 worker-2 bash[203728]: modprobe: ERROR: could not insert 'ice': Required key not available

This SRO documentation includes instructions for building an OOT kernel module. The CNF features repo includes some additional assistance to sign them upon creation. It requires the following openssl commands to generate the appropriate certificates:

openssl req -new -x509 -newkey rsa:2048 -sha256 -nodes -days 3650 -subj "/CN=Telco 5G PK Certificate/" -keyout PK.key  -outform PEM -out PK.pem
openssl req -new -x509 -newkey rsa:2048 -sha256 -nodes -days 3650 -subj "/CN=Telco 5G KEK Certificate/" -keyout KEK.key -out KEK.pem
openssl req -new -newkey rsa:2048 -sha256 -nodes -subj "/CN=Telco 5G KEK Certificate/" -keyout KEK.key -out KEK.csr
openssl x509 -req -in KEK.csr -CA PK.pem -CAkey PK.key -set_serial 1 -days 3650 -outform DER -out KEK.der
openssl x509 -inform DER -outform PEM -in KEK.der -out KEK.crt
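The CNF features repo automates the signing step, but for reference, a module can also be signed by hand with the kernel's sign-file script; a minimal sketch, assuming kernel-devel for the target kernel is installed and reusing the ice.ko module and the KEK key pair generated above:

# sign the out-of-tree module with the private key and the DER-encoded certificate
/usr/src/kernels/$(uname -r)/scripts/sign-file sha256 KEK.key KEK.der ice.ko
# confirm the signer field is now present
modinfo ice.ko | grep signer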

After the kernel module is completed and added to the registry in question, import the DER file created using the mokutil command and provide a password. Once added, reboot the server and the mokmanager.efi should start and prompt you to import the public key.

Now, the OOT driver systemd unit files should load, and the OOT drivers should be accessible.

mokutil --import /home/core/KEK.der

# After reboot and importing via mokmanager the certificate will be viewable 

keyctl list %:.platform
7 keys in keyring:
620882315: ---lswrv     0     0 asymmetric: Microsoft Corporation UEFI CA 2011: 13adbf4309bd82709c8cd54f316ed522988a1bd4
359314383: ---lswrv     0     0 asymmetric: Telco 5G KEK Certificate: 46f6d18fe9187744442352bea6f9062b7ab2e2f4

mokutil -l
..omitted..
[key 3]
SHA1 Fingerprint: c2:2b:73:21:5c:aa:dd:de:fe:dd:83:ce:0f:0c:21:cd:d8:45:07:c2
Certificate:
   Data:
       Version: 1 (0x0)
       Serial Number: 1 (0x1)
       Signature Algorithm: sha256WithRSAEncryption
       Issuer: CN=Telco 5G KEK Certificate
       Validity
           Not Before: Nov 16 20:37:56 2021 GMT
           Not After : Nov 14 20:37:56 2031 GMT
       Subject: CN=Telco 5G KEK Certificate
       Subject Public Key Info:
           Public Key Algorithm: rsaEncryption
               RSA Public-Key: (2048 bit)
               Modulus:
..omitted..

modinfo ice.ko  | egrep '^version|^signer'

version:        1.6.4
signer:         Telco 5G PK Certificate

 

Troubleshooting

Intermittently, while provisioning via Assisted Service/ACM CRs, the life cycle controller can get into a state where it cannot set secure boot. The logs resemble the following:

  Normal  ProvisioningStarted     2m27s  metal3-baremetal-controller  Image provisioning started for https://assisted-image-service-open-cluster-management.apps.clus3a.t5g.lab.eng.bos.redhat.com/images/e38d94fd-1e57-4da8-bd6b-0903e856d8a3?api_key=eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbmZyYV9lbnZfaWQiOiJlMzhkOTRmZC0xZTU3LTRkYTgtYmQ2Yi0wOTAzZTg1NmQ4YTMifQ.Y43XSDaUahJdRTmImzSwjKSyJIHy-ryswIcPVcSkGx1sx8qkT5SbjZrq_Zl12DgsGtcDnhPGn-40WWwfD6Rf-g&arch=x86_64&type=minimal-iso&version=4.9
 Normal  ProvisioningError       2m6s   metal3-baremetal-controller  Image provisioning failed: Failed to deploy. Exception: HTTP POST https://10.19.28.55/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia returned code 400. Base.1.7.GeneralError: A general error has occurred. See ExtendedInfo for more information Extended information: [{'Message': 'Unable to mount remote share http://10.19.32.197:6180/redfish/boot-dfa94054-d7a0-4b92-858e-2b76e77ffeff.iso?filename=tmp_uxe1rp4.iso.', 'MessageArgs': ['http://10.19.32.197:6180/redfish/boot-dfa94054-d7a0-4b92-858e-2b76e77ffeff.iso?filename=tmp_uxe1rp4.iso'], 'MessageArgs@odata.count': 1, 'MessageId': 'IDRAC.2.4.RAC0720', 'RelatedProperties': ['#/Image'], 'RelatedProperties@odata.count': 1, 'Resolution': 'Retry the operation.', 'Severity': 'Informational'}]
 Normal  DeprovisioningStarted   47s    metal3-baremetal-controller  Image deprovisioning started
 Normal  DeprovisioningComplete  37s    metal3-baremetal-controller  Image deprovisioning completed
 Normal  ProvisioningStarted     36s    metal3-baremetal-controller  Image provisioning started for https://assisted-image-service-open-cluster-management.apps.clus3a.t5g.lab.eng.bos.redhat.com/images/e38d94fd-1e57-4da8-bd6b-0903e856d8a3?api_key=eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbmZyYV9lbnZfaWQiOiJlMzhkOTRmZC0xZTU3LTRkYTgtYmQ2Yi0wOTAzZTg1NmQ4YTMifQ.Y43XSDaUahJdRTmImzSwjKSyJIHy-ryswIcPVcSkGx1sx8qkT5SbjZrq_Zl12DgsGtcDnhPGn-40WWwfD6Rf-g&arch=x86_64&type=minimal-iso&version=4.9
 Normal  ProvisioningError       26s    metal3-baremetal-controller  Image provisioning failed: Deploy step deploy.deploy failed: Redfish exception occurred. Error: Failed to set secure boot state on node dfa94054-d7a0-4b92-858e-2b76e77ffeff to True: HTTP PATCH https://10.19.28.55/redfish/v1/Systems/System.Embedded.1/SecureBoot returned code 400. Base.1.7.GeneralError: Pending configuration values are already committed, unable to perform another set operation. Extended information: [{'Message': 'Pending configuration values are already committed, unable to perform another set operation.', 'MessageArgs': ['SecureBootEnable'], 'MessageArgs@odata.count': 1, 'MessageId': 'IDRAC.2.4.SYS011', 'RelatedProperties': ['#/SecureBootEnable'], 'RelatedProperties@odata.count': 1, 'Resolution': 'Wait for the scheduled job to complete or delete the configuration jobs before attempting more set attribute operations.', 'Severity':

Note:

This was resolved in 4.10 via an ironic change to clear the life cycle controller. See the following BZ.

Workaround:

  1. Remove the bare-metal host CR.
  2. Clear the life cycle controller job queue (see the example after this list).
  3. Re-add the bare-metal host CR.
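On Dell hardware, one way to clear the job queue in step 2 is through remote racadm; this is an illustration rather than part of the official procedure, so check your iDRAC documentation before running it:

# delete all jobs pending in the iDRAC job queue
racadm -r 10.19.x.y -u administrator -p password jobqueue delete -i JID_CLEARALL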

Summary

The idea of creating a custom certificate and signing a driver can be a daunting task. Using the driver toolkit and the SRO and NFD operators helps to compile an out-of-tree driver for the version of OpenShift of your choosing. Then MokManager and mokutil make importing the certificate a painless administrative process.

Cerveaux Non Disponibles : "Savez-vous qu’en France un casier judiciaire vier…" - La Quadrature du Net - Mastodon - Media Fédéré

Liens SebSauvage
I'll just leave this here: "Did you know that in France a clean criminal record is required for 396 occupations, including cashier? Yet people convicted of fictitious employment, corruption, racial hatred, or sexual assault can become elected representatives of the Republic or ministers?"
(Permalink)

The Popular Family Safety App Life360 Is Selling Precise Location Data on Its Tens of Millions of Users – The Markup

Liens SebSauvage
Imagine: you install an app on all your children's devices to keep them SAFE, and then you learn that the company behind the software SELLS THE GEOLOCATION DATA to at least 12 commercial partners.
(Permalink)

⭐⭐⭐⭐ STEAM KEY GIVEAWAY ⭐⭐⭐⭐

Liens SebSauvage
J'ai acheté un bundle de jeu pour une oeuvre de charité, et à cause de jeux en double ou qui ne m'intéressent pas, je donne des clés Steam.
⚠️ IL FAUT QUE TOUT PARTE ! ⚠️ Ces clés Steam sont à activer avant le 1er janvier 2022 (sinon elles sont perdues).
Demandez-moi sur Mastodon (ou par d'autres moyens: email, DeltaChat, etc), et je vous donne les clés pour les jeux qui vous intéressent.
(Permalink)

Hadopi: 13 million warnings, 517 rulings, the record of the graduated response

Le Hollandais Volants -- Liens

13 million letters for 28 million households in France.
That's almost one household in two…

And even if some households received several, that still leaves roughly 1 household in 5 or 6 that got one. That seems like a lot to me.


— (permalink)

Les Jeunes avec Macron on Twitter: "#CalendrierAventJAM 🎄 - 5⃣ DÉCEMBREPour une école de la confiance, où chaque enfant peut réussir et s'émanciper, peu importe son lieu de naissance, de vie ou les réseaux au sein de la famille. ⤵️ The French Education est le résultat d'une société ➕ juste et ➕ égalitaire. https://t.co/WHsl8tPoSN" / Twitter

Le Hollandais Volants -- Liens

A minister dancing with children on a poster for a series called "French Education", in the presidential party's official advent calendar on Twitter.

I… no, I don't know what to say to that.


— (permalink)

No easter eggs in curl | daniel.haxx.se

Liens SebSauvage
Il n'y a pas d'Easter Egg dans curl. Et je pense qu'il a totalement raison. curl fait partie de ces briques fondamentales qui font tourner internet et toutes les infrastructures qui sont dessus. Un comportement non attendu comme un Easter Egg pourrait avoir des conséquences dramatiques.

(Contexte: Un "Easter Egg" (ou Oeuf de Pâques) est une fonctionnalité cachée dans un logiciel qui se déclenche selon différents critères: soit une date précise, soit suite à un évènement (suite de touches ou clics souris). C'est généralement pour faire une blague. Par exemple VLC change sont icône à l'occasion de Noël ou l'Halloween.)
(Permalink)

How to provide NBDE in OpenShift with the tang-operator

OpenShift blog

You can deploy a tang-operator to automate the deployment of a Tang server in an OpenShift cluster that requires Network Bound Disk Encryption (NBDE) internally, leveraging the tools that OpenShift provides to achieve this automation. You can also configure the Clevis tool to use the deployed TangServer. This can help you avoid the need to enter a password to access encrypted volumes while maintaining high security standards.

NBDE

Network Bound Disk Encryption (NBDE) is a subcategory of Policy-Based Decryption (PBD) that allows encrypted volumes to be bound to a special network server. PBD combines different disk-unlocking methods into a set of rules, called a policy, which enables a volume to be unlocked in several different ways.

Red Hat Enterprise Linux (RHEL) provides an automated decryption policy framework (Clevis) that lets you define, at encryption time, a policy that must be satisfied for the data to be decrypted. Clevis contains a series of plugins, called pins, that enable different unlocking capabilities. The currently available pins are:

  • tpm2 - Clevis provides support for encrypting a key with a Trusted Platform Module 2.0 (TPM2) chip. The cryptographically strong random key used for encryption is encrypted with the TPM2 chip, and at decryption time it is decrypted using the TPM2 so that Clevis can decrypt the secret stored in the JWE.
  • sss - Shamir’s Secret Sharing (SSS) is an algorithm that allows Clevis to mix pins together to create sophisticated unlocking and high-availability policies.
  • tang - Allows volumes to be unlocked using a network server. In particular, Clevis provides support for the Tang network binding server. Tang provides a stateless, lightweight server, where encryption/decryption of the data works via HTTP (a minimal usage sketch follows this list).
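
As an illustration of how a pin is used, here is a minimal sketch of encrypting and decrypting a secret with the tang pin from the command line; the server URL and file name are placeholders, not values from this article:

# Encrypt a secret against a Tang server; clevis asks to trust the server's advertised signing keys
$ echo "top secret" | clevis encrypt tang '{"url":"http://tang.example.com:8081"}' > secret.jwe
# Decryption only succeeds while the Tang server is reachable
$ clevis decrypt < secret.jwe
top secret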

Tang operator

This article focuses on the Tang server and, in particular, on how you can deploy this kind of server on Kubernetes in general and on OpenShift in particular. To provide the deployment of a Tang server, a new Kubernetes operator, named tang-operator, has been implemented. With this operator, you can automate the deployment of Tang servers on OpenShift infrastructure.
To this end, tang-operator defines a new Custom Resource Definition (CRD), named TangServer, which allows deployment of one to many Tang servers, with one entry point for each of them. Each Custom Resource object of type TangServer can be deployed with different parameters, such as the number of replicas per deployment. You can see the different configurable parameters in the following figure:

Figure 1: TangServer form

As you can see, the only mandatory parameters to specify are the name of the TangServer and the number of replicas to launch (1 by default). For each TangServer deployed, a new Kubernetes deployment will be launched, with as many Tang server pods as the number of replicas specified in the previous form. Note that, for the TangServer deployment to work appropriately, a Persistent Volume Claim (PVC) must have been created beforehand; otherwise, the Pod will not be able to launch, because it requires a PVC to hold the keys that will be used for encryption/decryption. This is described in more detail in the section Step-by-step guide to create an NBDE scenario in OpenShift.

The other parameters should normally be left at their default values. They remain configurable only for special cases where they need to be modified.
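
If you prefer the CLI over the web form, the same object can be created with oc. The sketch below is only illustrative: the apiVersion, the field names (replicas and the PVC reference), and the namespace are assumptions, not taken from this article; the form shown in Figure 1 remains the reference:

$ cat <<EOF | oc apply -f -
apiVersion: daemons.redhat.com/v1alpha1   # assumed API group/version
kind: TangServer
metadata:
  name: tangserver-example
  namespace: nbde                          # assumed project name
spec:
  replicas: 1                              # number of Tang server pods (assumed field name)
  persistentVolumeClaim: tangserver-pvc    # PVC holding the Tang keys (assumed field name)
EOF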

TangServer deployment

Creating a TangServer Custom Resource object causes the controller logic to perform, essentially, the following steps:

  1. Launch of a Kubernetes deployment, with the number of Tang server container Pods equal to the number of replicas specified in the creation form. If, for example, a TangServer named “tangserver-multi-replica” with three replicas is created, the deployment will be as follows:
  2. Startup of a network service, to which the Clevis client’s HTTP traffic should be directed. The IP and port of the TangServer service can be consulted in the Networking => Service tab of the OpenShift web console (a quick CLI check of these resources is sketched after Figure 4 below):

Figure 2: TangServer deployment

If all the requirements for the Pods are met, each of them will reach the “Ready” state, which means it can process the HTTP traffic corresponding to encryption/decryption requests performed by a Clevis client on a separate physical or virtual machine with access to the cluster network where the Tang servers are running:


Figure 3: TangServer Pods

Figure 4: TangServer Service
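
A minimal sketch of verifying the same resources from the CLI, assuming the example TangServer “tangserver-multi-replica” was created in the current project (only generic commands are shown; exact object names depend on the deployment):

# Deployment and Pods created by the controller, one Tang pod per replica
$ oc get deployment,pods
# Service acting as the HTTP entry point for Clevis clients
$ oc get service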

To get the details of the Service, you can click the “service-tangserver-multi-replica” service. Details of the service, such as the public IP (external load balancer IP) and port, will be shown:

Figure 5: TangServer Service details

In the “Service routing” section, you can check the load balancer IP and service port to find out the HTTP endpoint to which Tang server traffic should be sent:

Figure 6: TangServer Service routing details

In this particular case, to configure the deployed TangServer, the HTTP URL http://34.134.164.138:8081/adv must be specified.

Note that the previous URL allows retrieving the keys available in this deployment:

$ curl http://34.134.164.138:8081/adv 2>/dev/null | jq 
{
"payload": “eyJrZXlzIjpbeyJhbGciOiJFQ01SIiwiY3J2IjoiUC01MjEiLCJrZXlfb3…”,
"protected": "eyJhbGciOiJFUzUxMiIsImN0eSI6Imp3ay1zZXQranNvbiJ9",
"signature": "AVCUXIBvcXt9H4Dx7X8gA9Tbcd-VZJ6jaoJLyBa7NAwBrrMLh6ljNlJpH1BjYxI…"
}

Step-by-step guide to create an NBDE scenario in OpenShift

Taking the previously described information into account, this guide describes, step by step, how to get an NBDE setup where a Virtual Machine is configured with Clevis, and, in particular, with the tang pin. You will need:

  • An OpenShift cluster, where TangServer will be deployed via tang-operator.
  • A virtual machine with Fedora or Red Hat Enterprise Linux operating system, to configure the Clevis client.
  • The “operator-sdk” tool, which is required to install the tang-operator. Installation steps are defined in the Installation section.
  • OpenShift CLI tool, a.k.a. “oc”, which is required to run some commands against the cluster from the command line.

Once the previous requirements are ready, the steps to follow are:

  1. Assignment of "anyuid" permissions to the cluster. To allow deployment of a Tang Server container as root, the corresponding permission must be granted in the cluster. To do so, the following command must be issued (shown after this list):
  2. Creation of a specific project for this scenario. To do this, select, in the OpenShift Web Console, "Home => Projects => Create Project". Once this is created, a form similar to the one below will appear. Fill in the information appropriately and click on the "Create" button:
  3. Creation of a Persistent Volume Claim. For the Tang Server to be deployed appropriately, a Persistent Volume Claim must be created. To do so, create:
  4. Assign permissions
  5. Persistent Volume: Select "Storage => PersistentVolume => Create PersistentVolume"
  6. Fill in the name of the PersistentVolume to create, then click on the "Create" button:
  7. Persistent Volume Claim: Select "Storage => PersistentVolumeClaim => Create PersistentVolumeClaim"
  8. Fill in the PersistentVolumeClaim name and the size claimed (1 GiB should be enough). Take into consideration the number of keys the Tang Server will handle and adjust the size appropriately. After filling in the required information, click on the "Create" button:
Installation of the tang-operator. Once all the previous requirements are ready, it is time to install the tang-operator. This operator is not included in the Operator Hub, so it must be installed manually with the "operator-sdk" tool (see the command and output below).

After installation, the tang-operator is available in the operators section. To check that it has been installed correctly, select, in the OpenShift Web Console, "Operators => Installed Operators".

When the tang-operator is up and running, new TangServer deployments can be created. In this example, a pair of them will be launched, to achieve an architecture similar to the one shown below. To achieve this deployment, a TangServer with two replicas is required. Follow steps similar to the ones described in the Tang operator section, but specify two replicas. From the "Installed Operators" form, click the "Tang" => "Tang Server" => "Create TangServer" button.

Once the "Create TangServer" button is selected, the TangServer form will launch. Fill in the number of replicas, and make sure the PVC is the one that was created previously. Please note that if it was named "tangserver-pvc", no change to the PVC form entry is required, as that is the default name used.

After checking that the Pods for the TangServer deployment are up and running, find the endpoint information corresponding to this TangServer. To do so, click "Networking" => "Services" in the OpenShift Web Console, then select "service-tangserver".

After the previous configuration, the TangServer is ready to start handling key requests and responses via HTTP. In this step, Clevis will be configured so that it uses the TangServer deployment for encryption and decryption operations. It is assumed that you have a RHEL/Fedora machine with at least one encrypted partition, which can access the previous IP address and retrieve keys from the previous URL.

The next steps will install Clevis and configure it appropriately. First, install clevis. After Clevis is installed, check which partition is encrypted, so that it can be provided to Clevis when configuring the tang pin. To do so, type "lsblk". After identifying the partitions to use, configure Clevis to use the Tang server for each of the partitions. You can check the existing bindings for each partition with the "clevis luks list -d <partition>" command.

Finally, configure dracut to allow Clevis to use networking at boot time. To do so, edit the dracut clevis file with your favourite editor, include kernel_cmdline="rd.neednet=1" in that file, and run "dracut -v -f".

Now, restart the Clevis machine, and note that it is no longer necessary to type a password to unlock the encrypted volumes (it can take several seconds, normally 5 to 10, for each partition to be decrypted). You can also check that, if the deployment is deleted (by removing the operator or just destroying the TangServer deployment), rebooting the Clevis machine will leave it stuck at the password prompt, which means that the automatic unlocking no longer works.
$ oc adm policy add-scc-to-group anyuid system:authenticated

Figure 7: Creation of a Project in OpenShift

Figure 8: Creation of a Persistent Volume


Figure 9: Creation of a Persistent Volume Claim

$ operator-sdk run bundle quay.io/sarroutb/tang-operator-bundle:latest
INFO[0008] Successfully created registry pod: quay-io-sarroutb-tang-operator-bundle-v0-0-16
INFO[0009] Created CatalogSource: tang-operator-catalog
INFO[0009] OperatorGroup "operator-sdk-og" created
INFO[0009] Created Subscription: tang-operator-v0-0-16-sub
INFO[0011] Approved InstallPlan install-lqf9f for the Subscription: tang-operator-v0-0-16-sub
INFO[0011] Waiting for ClusterServiceVersion to reach 'Succeeded' phase
INFO[0012]   Waiting for ClusterServiceVersion "default/tang-operator.v0.0.16"
INFO[0018]   Found ClusterServiceVersion "default/tang-operator.v0.0.16" phase: Pending
INFO[0020]   Found ClusterServiceVersion "default/tang-operator.v0.0.16" phase: InstallReady
INFO[0021]   Found ClusterServiceVersion "default/tang-operator.v0.0.16" phase: Installing
INFO[0031]   Found ClusterServiceVersion "default/tang-operator.v0.0.16" phase: Succeeded
INFO[0031] OLM has successfully installed "tang-operator.v0.0.16"

Figure 10: Correct Installation of Tang operator

Note that Status: “Succeeded” appears for the Tang Operator, which means that the tang-operator has been properly installed.
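
The same check can be done from the command line; a minimal sketch, assuming the operator was installed into the "default" namespace, as in the operator-sdk output above:

# The ClusterServiceVersion should report the "Succeeded" phase
$ oc get csv tang-operator.v0.0.16 -n default -o jsonpath='{.status.phase}'
Succeeded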

Figure 11: Multi Replica Deployment

Figure 12: Tang Server Creation

Figure 13: Tang Server Form

After you have filled in the required information, click the “Create” button, and the deployment will be launched. Verify that the deployment has “2 of 2” Pods running:

Figure 14: Tang Server Deployment

For the correct scenario to be achieved, both pods should be in the “Running” status, and their containers should be Ready (1/1). You can verify this by clicking “Workload => Pods” in the OpenShift Web Console:

Figure 15: Tang Server Pods (Running Status)


Figure 16: Tang Server Service

After the previous service is selected, note down the information in the “Service routing” section. In this particular case, it is IP=34.136.106.78, port=8081:


Figure 17: Tang Server Service Routing details
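
The same endpoint can also be read from the CLI; a minimal sketch, assuming the service is named "service-tangserver" as above:

# External load balancer IP and port exposed by the Tang service
$ oc get service service-tangserver -o jsonpath='{.status.loadBalancer.ingress[0].ip}:{.spec.ports[0].port}'
34.136.106.78:8081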

Now, the URL to configure Clevis with the tang pin is known: http://34.136.106.78:8081/adv
Let’s configure Clevis appropriately so that encrypted volumes can be unlocked using the recently created Tang server.

[clevis@fedora ~]$ ping 34.136.106.78 -c 1;
PING 34.136.106.78 (34.136.106.78) 56(84) bytes of data.
64 bytes from 34.136.106.78: icmp_seq=1 ttl=54 time=113 ms
--- 34.136.106.78 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 112.658/112.768/112.879/0.110 ms
[clevis@fedora ~]$ curl http://34.136.106.78:8081/adv 2> /dev/null | jq
{
"payload": "eyJrZXlzIjpasjdflkñajXAfasqpiu12baskdfjñsafWEja2jñajsdf…...XdEpZOGtWRWEifV19",
"protected": "eyJhbGciOiJFUzUxMiIsImN0eSI6Imp3ay1zZXQranNvbiJ9",
"signature": "SU2vCcPV5kfx6CIKIJm……..oJBrSnO6Z7XL7ItgJrRppuX3ARkPa8zoYo36o0rGmnXKYE"
}


[clevis@fedora ~]$ sudo dnf install -y clevis clevis-dracut
[sudo] password for clevis:
Last metadata expiration check: 1:17:10 ago on Tue 05 Oct 2021 11:12:57 AM CEST.
Dependencies resolved.
… (omitted output)
Installed:
clevis-18-1.fc34.x86_64       clevis-dracut-18-1.fc34.x86_64  clevis-luks-18-1.fc34.x86_64  clevis-pin-tpm2-0.3.0-1.fc34.x86_64  clevis-systemd-18-1.fc34.x86_64  luksmeta-9-10.fc34.x86_64
tpm2-tools-5.0-2.fc34.x86_64
Complete!
[clevis@fedora ~]$



[clevis@fedora ~]$ lsblk
NAME                                          MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
vda                                           252:0    0   12G  0 disk
├─vda1                                        252:1    0 1023M  0 part  /boot
└─vda2                                        252:2    0   11G  0 part
  └─luks-c526e27d-f87c-4e18-9f15-cbc08c9359c8 253:0    0   11G  0 crypt /
vdb                                           252:16   0   12G  0 disk
└─vdb1                                        252:17   0   12G  0 part
  └─luks-2bfa665e-b9ca-474b-8ca8-245a83ea5bd1 253:1    0   12G  0 crypt /home

In this case, both “vda2” and “vdb1” will be configured to use the tang server.

[clevis@fedora ~]$ sudo clevis luks bind -d /dev/vda2 tang '{"url":"http://34.136.106.78:8081"}'
Enter existing LUKS password:
Warning: Value 512 is outside of the allowed entropy range, adjusting it.
The advertisement contains the following signing keys:
vMHbTU1Kzv-8okN-HVOBEWe7ReKj0CyJy28lSm_eAB8
Do you wish to trust these keys? [ynYN] y
[clevis@fedora ~]$ sudo clevis luks bind -d /dev/vdb1 tang '{"url":"http://34.136.106.78:8081"}'
Enter existing LUKS password:
Warning: Value 512 is outside of the allowed entropy range, adjusting it.
The advertisement contains the following signing keys:
vMHbTU1Kzv-8okN-HVOBEWe7ReKj0CyJy28lSm_eAB8
Do you wish to trust these keys? [ynYN] y


[clevis@fedora ~]$ sudo clevis luks list -d /dev/vda2
1: tang '{"url":"http://34.136.106.78:8081"}'
[clevis@fedora ~]$ sudo clevis luks list -d /dev/vdb1
2: tang '{"url":"http://34.136.106.78:8081"}'


[clevis@fedora ~]$ cat /etc/dracut.conf.d/clevis.conf
kernel_cmdline="rd.neednet=1"
[clevis@fedora ~]$ sudo dracut -v -f
dracut: Executing: /usr/bin/dracut -v -f


dracut: *** Including module: clevis ***
dracut: *** Including module: clevis-pin-sss ***
dracut: *** Including module: clevis-pin-tang ***
dracut: *** Including module: clevis-pin-tpm2 ***


dracut: *** Creating initramfs image file '/boot/initramfs-5.11.12-300.fc34.x86_64.img' done ***

Summary

In this blog entry, the tang-operator has been described, so that an OpenShift cluster administrator can deploy a TangServer inside OpenShift and configure a Clevis machine to use this server as a decryption server, avoiding the need to enter a password for encrypted volumes.

A step-by-step guide has also been provided to clarify the different steps that must be accomplished for operator installation, TangServer deployment, and Clevis installation and configuration.


Un yacht virtuel vendu dans le metaverse pour 650 000 dollars sous forme de NFT

Liens SebSauvage
650,000 dollars for ownership of a crappy 3D model of a yacht.
There's no getting around it, the NFT market is really dumb. Well, if a few artists manage to make money with it, why not.
Which shows it's all a matter of timing and marketing: Facebook's Metaverse has nothing original compared to Second Life, released 18 years ago. They just added the "Facebook" brand and NFTs. As long as there are suckers willing to pay...

Amusingly enough, Second Life isn't dead, and it also has its own marketplace. For example, you can buy yourself this handsome-guy body for the modest sum of 1,250 dollars (careful: hands and feet are not included and must be bought separately):
https://marketplace.secondlife.com/p/Slink-Physique-Male-Mesh-Body/7776924?ple=c
But hey, it's not Facebook and it's not NFTs, so the media don't talk about it.
(Wait until the media discover the price of gun decals in CS:GO)
(Permalink)

World Land Trust Bundle by KrisWB and 55 others - itch.io

Liens SebSauvage
And there we go... itch.io releases a bundle of 66 items for $5. It includes the excellent Ynglet. For that game alone, it's worth it.
There's also Botanicula, Fugl, Samorost, Hacknet, Hidden Folks, Cloud Gardens, Spring Falls...
(Permalink)

Les amendes sans contact : une stratégie de harcèlement policier – Technopolice

Liens SebSauvage
And there we go... automated fines.
(Permalink)

A mysterious threat actor is running hundreds of malicious Tor relays - The Record by Recorded Future

Liens SebSauvage
This is downright scary.
(Permalink)

Le réseau social TRUTH de Trump admet qu'il est basé sur Mastodon, qui avait menacé de poursuivre le site pour avoir prétendument violé sa licence open source

Liens SebSauvage
Haha, this is too funny: after getting kicked off Facebook, Twitter, and Google, Trump wanted to set up his own social network, "TRUTH Social". By lifting Mastodon's code and violating the license.
They got called out before the thing even launched ^^
Well, yes, you can't just take Free Software code with impunity. There are obligations.
(Permalink)

SimpleLogin | Protéger votre vie privée avec les alias mail

Liens SebSauvage
Well now, a "disposable" email service that looks a lot like SpamGourmet.com (which I have been using for more than 18 years).
(via https://www.toolinux.com/?depuis-l-arrivee-de-firefox-relay-et-hide-my)
(Permalink)

Interdiction des « thérapies » de conversion : le Sénat adopte le texte en commission - Association STOP HOMOPHOBIE | Information - Prévention - Aide aux victimes

Liens SebSauvage
Good!
(Permalink)

Control Ultimate Edition sur Steam

Liens SebSauvage
I tried CONTROL (for Windows, but it runs on Linux via Proton). (It was on sale for €9.90 at Fanatical a few days ago.)
It's an action shooter with a female protagonist.
*Very* strongly inspired by SCP (https://sebsauvage.net/links/?GezyIA), Control invites you to explore the offices of a secret organization infested by the Hiss, a supernatural phenomenon. As with the SCPs, you come across many reports, memos, and emails that give the story more depth (while keeping it just as strange). I recommend taking the time to read them.
As you explore, you collect resources and supernatural "objects" that give you new abilities (dash, telekinesis, etc.). It is possible to "craft" certain mods or abilities.
Using the different powers is quite enjoyable. There's no ammunition to collect: weapons and powers recharge on their own, which forces you to alternate them judiciously so you aren't caught off guard. Even if you get a bit lost in the place, the gameplay is satisfying, alternating calm phases (exploration) with intense combat phases (and with the fully destructible environment, it's carnage!)
Everything is impressive: the imposing locations you visit, the care put into the textures and lighting, the effects, the sound design, and the music. It's very cinematic: you feel like you're in a movie.
If you like SCP, you'll enjoy losing yourself in the strangeness of this game.
(Permalink)

Le Kremlin rachète VKontakte, le « Facebook russe »

Liens SebSauvage
You probably don't know VKontakte, but it's an extremely popular Russian social network (100 million active users). Through various subsidiaries, the Russian government has just bought it out. I'll let you imagine what comes next.
(Permalink)

Nitter.net is down · Issue #473 · zedeus/nitter · GitHub

Liens SebSauvage
Nitter.net (which simply lets you view what is published on Twitter.com without going through Twitter's official site) has been blocked since November 30 because of a DMCA complaint. It's an abusive complaint, of course, but it's enough to cause chaos.
Hence the value, as Framasoft advocates, of decentralizing by multiplying instances.
(Fortunately, there are already quite a few Nitter instances out there.)
(Permalink)