From be6bf841793aa8ee4ee3a4d295142138860b4174 Mon Sep 17 00:00:00 2001 From: Forresst Date: Wed, 13 Jan 2021 14:21:19 +0100 Subject: [PATCH] Sync with original text (#13) --- .operations/writing-guidelines.french.md | 2 +- README.french.md | 555 +++++++++++++++--- .../docker/avoid-build-time-secrets.french.md | 93 +++ .../docker/bootstrap-using-node.french.md | 85 +++ sections/docker/clean-cache.french.md | 27 + sections/docker/docker-ignore.french.md | 49 ++ sections/docker/generic-tips.french.md | 29 + sections/docker/graceful-shutdown.french.md | 84 +++ sections/docker/image-tags.french.md | 27 + .../docker/install-for-production.french.md | 86 +++ sections/docker/lint-dockerfile.french.md | 25 + sections/docker/memory-limit.french.md | 70 +++ sections/docker/multi_stage_builds.french.md | 116 ++++ .../restart-and-replicate-processes.french.md | 44 ++ sections/docker/scan-images.french.md | 30 + sections/docker/smaller_base_images.french.md | 13 + ...use-cache-for-shorter-build-time.french.md | 115 ++++ sections/errorhandling/apmproducts.french.md | 4 +- .../documentingusingswagger.french.md | 2 +- .../errorhandling/returningpromises.french.md | 285 +++++++++ .../errorhandling/usematurelogger.french.md | 55 +- .../useonlythebuiltinerror.french.md | 6 +- sections/performance/block-loop.french.md | 2 +- sections/performance/nativeoverutil.french.md | 2 +- .../production/assigntransactionid.french.md | 10 +- .../installpackageswithnpmci.french.md | 30 + .../production/lockdependencies.french.md | 2 +- sections/production/smartlogging.french.md | 2 +- .../breakintcomponents.french.md | 4 +- .../projectstructre/configguide.french.md | 2 +- .../projectstructre/createlayers.french.md | 6 +- .../projectstructre/separateexpress.french.md | 2 + .../projectstructre/thincomponents.french.md | 4 +- .../projectstructre/wraputilities.french.md | 2 +- sections/security/bcryptpasswords.french.md | 32 - .../commonsecuritybestpractices.french.md | 36 +- .../security/dependencysecurity.french.md | 2 +- sections/security/expirejwt.french.md | 6 +- sections/security/regex.french.md | 2 +- sections/security/userpasswords.french.md | 129 ++++ sections/security/validation.french.md | 4 +- .../3-parts-in-name.french.md | 4 +- sections/testingandquality/citools.french.md | 4 +- .../testingandquality/refactoring.french.md | 6 +- .../test-middlewares.french.md | 30 + 45 files changed, 1945 insertions(+), 180 deletions(-) create mode 100644 sections/docker/avoid-build-time-secrets.french.md create mode 100644 sections/docker/bootstrap-using-node.french.md create mode 100644 sections/docker/clean-cache.french.md create mode 100644 sections/docker/docker-ignore.french.md create mode 100644 sections/docker/generic-tips.french.md create mode 100644 sections/docker/graceful-shutdown.french.md create mode 100644 sections/docker/image-tags.french.md create mode 100644 sections/docker/install-for-production.french.md create mode 100644 sections/docker/lint-dockerfile.french.md create mode 100644 sections/docker/memory-limit.french.md create mode 100644 sections/docker/multi_stage_builds.french.md create mode 100644 sections/docker/restart-and-replicate-processes.french.md create mode 100644 sections/docker/scan-images.french.md create mode 100644 sections/docker/smaller_base_images.french.md create mode 100644 sections/docker/use-cache-for-shorter-build-time.french.md create mode 100644 sections/errorhandling/returningpromises.french.md create mode 100644 sections/production/installpackageswithnpmci.french.md delete mode 
100644 sections/security/bcryptpasswords.french.md create mode 100644 sections/security/userpasswords.french.md create mode 100644 sections/testingandquality/test-middlewares.french.md diff --git a/.operations/writing-guidelines.french.md b/.operations/writing-guidelines.french.md index 98ffd5137..96ff82941 100644 --- a/.operations/writing-guidelines.french.md +++ b/.operations/writing-guidelines.french.md @@ -16,7 +16,7 @@ En plus d'être d'une grande fiabilité et d'une grande qualité rédactionnelle ## 4. Formatage cohérent -Le contenu est présenté à l'aide de modèles prédéfinis. Tout contenu futur doit être conforme au même modèle. Si vous souhaitez ajouter de nouveaux points, copiez le format d'un point existant et complétez-le selon vos besoins. Pour plus d'informations, veuillez consulter [ce modèle](https://github.com/i0natan/nodebestpractices/blob/master/sections/template.md). +Le contenu est présenté à l'aide de modèles prédéfinis. Tout contenu futur doit être conforme au même modèle. Si vous souhaitez ajouter de nouveaux points, copiez le format d'un point existant et complétez-le selon vos besoins. Pour plus d'informations, veuillez consulter [ce modèle](https://github.com/goldbergyoni/nodebestpractices/blob/master/sections/template.md). ## 5. C'est à propos de Node.js diff --git a/README.french.md b/README.french.md index 090e9dc3b..b43f5665c 100644 --- a/README.french.md +++ b/README.french.md @@ -9,16 +9,17 @@
- 85 items Dernière mise à jour : 12 oct. 2019 Mis à jour pour node 12.12.0 + 102 items Dernière mise à jour : Novembre 2020 Mis à jour pour Node 14.0.0

[![nodepractices](/assets/images/twitter-s.png)](https://twitter.com/nodepractices/) **Suivez nous sur Twitter !** [**@nodepractices**](https://twitter.com/nodepractices/) +
-Lire dans une autre langue : [![CN](/assets/flags/CN.png)**CN**](/README.chinese.md), [![BR](/assets/flags/BR.png)**BR**](/README.brazilian-portuguese.md), [![RU](/assets/flags/RU.png)**RU**](/README.russian.md) [(![ES](/assets/flags/ES.png)**ES**, ![FR](/assets/flags/FR.png)**FR**, ![HE](/assets/flags/HE.png)**HE**, ![KR](/assets/flags/KR.png)**KR** et ![TR](/assets/flags/TR.png)**TR** en cours !)](#traductions) +Lire dans une autre langue : [![CN](/assets/flags/CN.png)**CN**](/README.chinese.md), [![BR](/assets/flags/BR.png)**BR**](/README.brazilian-portuguese.md), [![RU](/assets/flags/RU.png)**RU**](/README.russian.md), [![PL](/assets/flags/PL.png)**PL**](/README.polish.md) [(![ES](/assets/flags/ES.png)**ES**, ![FR](/assets/flags/FR.png)**FR**, ![HE](/assets/flags/HE.png)**HE**, ![KR](/assets/flags/KR.png)**KR** et ![TR](/assets/flags/TR.png)**TR** en cours !)](#traductions)
@@ -26,21 +27,22 @@ Lire dans une autre langue : [![CN](/assets/flags/CN.png)**CN**](/README.chines # Dernières bonnes pratiques et nouveautés -- **✅ Nouvelle bonne pratique :** 7.1: [Ne bloquez pas la boucle d'événements](#7-brouillon--performance) par Keith Holliday +- **✅ Nouvelle bonne pratique :** Le point 2.12 de [Alexsey](https://github.com/Alexsey) montre comment le retour de promesses sans les attendre (await) dans les fonctions async conduit à des traces de pile partielles. Cela peut devenir un problème important lors du dépannage des exceptions en production qui ne disposent pas de certaines trames d'exécution + +- **✅ Nouvelle bonne pratique :** Le point 6.8 de Josh Hemphill recommande de "protéger les mots de passe/secrets des utilisateurs en utilisant Bcrypt ou Scrypt". Il contient une explication approfondie sur le moment et les raisons pour lesquelles chaque option convient à un projet spécifique. Ne manquez pas ce petit guide avec un bref aperçu des différentes options de hachage -- **🇷🇺 Traduction russe :** L'incroyable Alex Ivanov vient de publier une [traduction russe](/README.russian.md) +- **:whale: Node.js + Bonnes pratiques Docker** : Nous venons de publier la section Docker avec Node.js qui comprend 15 points sur les meilleures techniques de codage avec Docker -- **Nous recherchons des contributeurs de Typescript :** vous voulez aider à fournir des exemples TypeScript ? Contactez-nous en ouvrant une issue

# Bienvenue ! 3 Choses à savoir avant tout -**1. Vous êtes, en fait, en train de lire un regroupement des meilleurs articles sur Node.js. -** ce référentiel est un résumé et il conserve le contenu le mieux classé sur les bonnes pratiques Node.js, ainsi que du contenu écrit ici par des collaborateurs +**1. Vous êtes en train de lire un regroupement des meilleurs articles sur Node.js. -** ce référentiel est un résumé et il conserve le contenu le mieux classé sur les bonnes pratiques Node.js, ainsi que du contenu écrit ici par des collaborateurs **2. Il s'agit du plus grand assemblage d'articles et il s'agrandit chaque semaine -** actuellement, plus de 80 bonnes pratiques, guides de style et astuces d'architecture sont présentés. Nous serions ravis de vous voir contribuer ici, qu'il s'agisse de corriger des erreurs de code, d'aider aux traductions ou de suggérer de nouvelles idées brillantes. Consultez nos [recommandations d'écriture](/.operations/writing-guidelines.french.md) -**3. La plupart des bonnes pratiques contiennent des informations supplémentaires -** la plupart des points ont un lien **🔗Plus d'infos** qui développe la bonne pratique avec des exemples de code, des citations venant de pages sélectionnées et plus encore. +**3. Les bonnes pratiques contiennent des informations supplémentaires -** la plupart des points ont un lien **🔗Plus d'infos** qui développe la bonne pratique avec des exemples de code, des citations venant de pages sélectionnées et plus encore.

@@ -49,10 +51,11 @@ Lire dans une autre langue : [![CN](/assets/flags/CN.png)**CN**](/README.chines 1. [Structure de projet (5)](#1-structure-de-projet) 2. [Gestion des erreurs (11) ](#2-gestion-des-erreurs) 3. [Style du code (12) ](#3-style-du-code) -4. [Tests et pratiques générales de qualité (12) ](#4-tests-et-pratiques-générales-de-qualité) -5. [Mise en production (18) ](#5-mise-en-production) +4. [Tests et pratiques générales de qualité (13) ](#4-tests-et-pratiques-générales-de-qualité) +5. [Mise en production (19) ](#5-mise-en-production) 6. [Sécurité (25)](#6-sécurité) 7. [Performance (2) (Travail en cours ✍️)](#7-brouillon--performance) +8. [Pratiques de Docker (15)](#8-bonnes-pratiques-de-docker)

@@ -60,7 +63,7 @@ Lire dans une autre langue : [![CN](/assets/flags/CN.png)**CN**](/README.chines ## ![✔] 1.1 Organisez votre projet en composants -**TL;PL :** Le pire obstacle des énormes applications est la maintenance d'une base de code immense contenant des centaines de dépendances - un tel monolithe ralentit les développeurs tentant d'ajouter de nouvelles fonctionnalités. Pour éviter cela, répartissez votre code en composants, chacun dans son propre dossier avec son code dédié, et assurez vous que chaque unité soit courte et simple. Visitez le lien 'Plus d'infos' plus bas pour voir des exemples de structure de projet correcte. +**TL;PL :** Le pire obstacle des énormes applications est la maintenance d'une base de code immense contenant des centaines de dépendances - un tel monolithe ralentit les développeurs tentant d'ajouter de nouvelles fonctionnalités. Pour éviter cela, répartissez votre code en composants, chacun dans son dossier avec son code dédié, et assurez vous que chaque unité soit courte et simple. Visitez le lien 'Plus d'infos' plus bas pour voir des exemples de structure de projet correcte. **Autrement :** Lorsque les développeurs qui codent de nouvelles fonctionnalités ont du mal à réaliser l'impact de leur changement et craignent de casser d'autres composants dépendants - les déploiements deviennent plus lents et plus risqués. Il est aussi considéré plus difficile d'élargir un modèle d'application quand les unités opérationnelles ne sont pas séparées. @@ -68,11 +71,11 @@ Lire dans une autre langue : [![CN](/assets/flags/CN.png)**CN**](/README.chines

-## ![✔] 1.2 Organisez vos composants en strates, gardez Express à l'intérieur de son périmètre +## ![✔] 1.2 Organisez vos composants en strates, gardez la couche web à l'intérieur de son périmètre -**TL;PL :** Chaque composant devrait contenir des « strates » - un objet dédié pour le web, un pour la logique et un pour le code d'accès aux données. Cela permet non seulement de séparer clairement les responsabilités mais permet aussi de simuler et de tester le système de manière plus simple. Bien qu'il s'agisse d'un modèle très courant, les développeurs d'API ont tendance à mélanger les strates en passant l'objet dédié au web (Express req, res) à la logique opérationnelle et aux strates de données - cela rend l'application dépendante et accessible seulement par Express. +**TL;PL :** Chaque composant devrait contenir des « strates » - un objet dédié pour le web, un pour la logique et un pour le code d'accès aux données. Cela permet non seulement de séparer clairement les responsabilités mais permet aussi de simuler et de tester le système de manière plus simple. Bien qu'il s'agisse d'un modèle très courant, les développeurs d'API ont tendance à mélanger les strates en passant l'objet dédié au web (Par exemple Express req, res) à la logique opérationnelle et aux strates de données - cela rend l'application dépendante et accessible seulement par les frameworks web spécifiques. -**Autrement :** Les tests, les jobs CRON et les autres middlewares non-Express ne peuvent pas accéder à une application qui mélange les objets web avec les autres strates. +**Autrement :** Les tests, les jobs CRON, les déclencheurs des files d'attente de messages et etc ne peuvent pas accéder à une application qui mélange les objets web avec les autres strates. 🔗 [**Plus d'infos : organisez en strates votre app**](/sections/projectstructre/createlayers.french.md) @@ -100,9 +103,9 @@ Lire dans une autre langue : [![CN](/assets/flags/CN.png)**CN**](/README.chines ## ![✔] 1.5 Utilisez une configuration respectueuse de l'environnement, sécurisée et hiérarchique -**TL;PL :** La mise en place d'une configuration parfaite et sans faille doit garantir que (a) les clés peuvent être lues depuis un fichier ET à partir de la variable d'environnement (b) les secrets sont conservés hors du code source (c) la configuration est hiérarchique pour une recherche plus simple. Certains paquets peuvent gérer la plupart de ces points comme [rc](https://www.npmjs.com/package/rc), [nconf](https://www.npmjs.com/package/nconf) et [config](https://www.npmjs.com/package/config). +**TL;PL :** La mise en place d'une configuration parfaite et sans faille doit garantir que (a) les clés peuvent être lues depuis un fichier ET à partir de la variable d'environnement (b) les secrets sont conservés hors du code source (c) la configuration est hiérarchique pour une recherche plus simple. Certains paquets peuvent gérer la plupart de ces points comme [rc](https://www.npmjs.com/package/rc), [nconf](https://www.npmjs.com/package/nconf), [config](https://www.npmjs.com/package/config) et [convict](https://www.npmjs.com/package/convict). -**Autrement :** Ne pas se soucier de ces exigences de configuration ne fera que ralentir l'équipe de développement ou l'équipe de devops. Probablement les deux. +**Autrement :** Ne pas se soucier de ces exigences de configuration ne fera que ralentir l'équipe de développement ou l'équipe de DevOps. Probablement les deux. 
🔗 [**Plus d'infos : bonnes pratiques de configuration**](/sections/projectstructre/configguide.french.md) @@ -124,7 +127,7 @@ Lire dans une autre langue : [![CN](/assets/flags/CN.png)**CN**](/README.chines ## ![✔] 2.2 Utilisez uniquement l'objet intégré Error -**TL;PL :** Beaucoup lèvent des erreurs sous forme de chaîne ou de type personnalisé - cela complique la logique de gestion des erreurs et l'interopérabilité entre les modules. Que vous rejetiez une promesse, leviez une exception ou émettiez une erreur - l'utilisation uniquement de l'objet intégré Error augmentera l'uniformité et empêchera la perte d'informations. +**TL;PL :** Beaucoup lèvent des erreurs sous forme de chaîne ou de type personnalisé - cela complique la logique de gestion des erreurs et l'interopérabilité entre les modules. Que vous rejetiez une promesse, leviez une exception ou émettiez une erreur - l'utilisation uniquement de l'objet intégré Error (ou un objet qui étend l'objet Error) augmentera l'uniformité et empêchera la perte d'informations. **Autrement :** Lorsque vous appelez un composant, le type d'erreurs en retour étant incertain - cela rend la gestion des erreurs beaucoup plus difficile. Pire encore, l'utilisation de types personnalisés pour décrire des erreurs peut entraîner la perte d'informations d'erreurs critiques comme la trace de la pile ! @@ -142,7 +145,7 @@ Lire dans une autre langue : [![CN](/assets/flags/CN.png)**CN**](/README.chines

-## ![✔] 2.4 Gérez les erreurs de manière centralisée, pas dans un middleware Express +## ![✔] 2.4 Gérez les erreurs de manière centralisée, pas dans un middleware **TL;PL :** Les logiques de gestion des erreurs telles que le mail à l'administrateur et la journalisation doivent être encapsulées dans un objet dédié et centralisé, pour que tous les points de terminaison (par exemple, middleware Express, tâches cron, tests unitaires) l'appellent lorsqu'une erreur survient. @@ -174,7 +177,7 @@ Lire dans une autre langue : [![CN](/assets/flags/CN.png)**CN**](/README.chines ## ![✔] 2.7 Utilisez un outil de journalisation mature pour augmenter la visibilité des erreurs -**TL;PL :** Un ensemble d'outils de journalisation matures comme [Winston](https://www.npmjs.com/package/winston), [Bunyan](https://github.com/trentm/node-bunyan), [Log4js](http://stritti.github.io/log4js/) ou [Pino](https://github.com/pinojs/pino), accélérera la découverte et la compréhension des erreurs. Alors oubliez console.log. +**TL;PL :** Un ensemble d'outils de journalisation matures comme [Pino](https://github.com/pinojs/pino) ou [Log4js](http://stritti.github.io/log4js/), accélérera la découverte et la compréhension des erreurs. Alors oubliez console.log. **Autrement :** En parcourant les console.logs ou manuellement par le biais d'un fichier texte désordonné sans outils d'interrogation ou d'une visionneuse de journaux décente, vous pourriez être occupé au travail jusqu'à tard dans la nuit. @@ -214,12 +217,26 @@ Lire dans une autre langue : [![CN](/assets/flags/CN.png)**CN**](/README.chines ## ![✔] 2.11 Échouez rapidement, valider les arguments à l'aide d'une bibliothèque dédiée -**TL;PL :** Cela devrait faire partie de vos bonnes pratiques avec Express - Contrôlez les arguments de l'API pour éviter les bugs désagréables qui sont beaucoup plus difficiles à suivre plus tard. Le code de validation est généralement fastidieux, sauf si vous utilisez une bibliothèque d'aide très cool comme Joi. +**TL;PL :** Contrôlez les arguments de l'API pour éviter les bugs désagréables qui sont beaucoup plus difficiles à suivre plus tard. Le code de validation est généralement fastidieux, sauf si vous utilisez une bibliothèque d'aide très cool comme [ajv](https://www.npmjs.com/package/ajv) et [Joi](https://www.npmjs.com/package/joi). **Autrement :** Considérez ceci - votre fonction attend un argument numérique « Discount » que l'appelant oublie de passer, plus loin dans le code, il vérifie si Discount!= 0 (le montant de la remise autorisée est supérieur à zéro), ensuite le code permet à l'utilisateur de profiter d'un remise. OMG, quel méchant bug. Le vois-tu ? 🔗 [**Plus d'infos : échec rapide**](/sections/errorhandling/failfast.french.md) +
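À titre d'illustration uniquement (schéma et noms hypothétiques), voici une esquisse minimale de validation avec [Joi](https://www.npmjs.com/package/joi) qui échoue rapidement quand l'argument « discount » est absent ou invalide :

```javascript
const Joi = require("joi");

// Schéma hypothétique : « discount » doit être un nombre supérieur ou égal à zéro
const schemaAdhesion = Joi.object({
  nom: Joi.string().min(2).required(),
  discount: Joi.number().min(0).required()
});

function ajouterAdhesion(donnees) {
  const { error, value } = schemaAdhesion.validate(donnees);
  if (error) {
    // Échec rapide : on lève immédiatement au lieu de propager un argument invalide
    throw new Error(`Argument invalide : ${error.message}`);
  }
  return value;
}
```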

+ +## ![✔] 2.12 Attendez toujours les promesses avant de retourner afin d'éviter des traces de pile partielles + +**TL;PL :** Faites toujours `return await` lorsque vous retournez une promesse afin de bénéficier d'une trace de pile complète. Si une +fonction retourne une promesse, cette fonction doit être déclarée comme fonction `async` et explicitement +attendre (`await`) la promesse avant de la retourner. + +**Autrement :** La fonction qui retourne une promesse sans attendre n'apparaîtra pas dans la trace de la pile. +De telles trames manquantes compliqueraient probablement la compréhension du flux qui conduit à l'erreur, +surtout si la cause du comportement anormal se situe à l'intérieur de la fonction manquante + +🔗 [**Plus d'infos : le retour des promesses**](/sections/errorhandling/returningpromises.french.md) +
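Esquisse minimale (noms de fonctions hypothétiques) : avec `return await`, la trame de `verifierAcces` est conservée dans la trace de pile ; sans `await`, elle en est absente.

```javascript
async function interrogerLaBase() {
  throw new Error("accès refusé");
}

// À éviter : la promesse est retournée sans être attendue,
// cette fonction n'apparaîtra pas dans la trace de pile
async function verifierAccesSansAwait() {
  return interrogerLaBase();
}

// À faire : await explicite avant le retour, la trame est conservée
async function verifierAcces() {
  return await interrogerLaBase();
}

verifierAcces().catch((erreur) => console.log(erreur.stack));
```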


⬆ Retourner en haut de la page

@@ -240,7 +257,7 @@ Lire dans une autre langue : [![CN](/assets/flags/CN.png)**CN**](/README.chines **TL;PL :** En plus des règles standard ESLint couvrant JavaScript vanilla, ajoutez des plugins spécifiques à Node.js comme [eslint-plugin-node](https://www.npmjs.com/package/eslint-plugin-node), [eslint-plugin-mocha](https://www.npmjs.com/package/eslint-plugin-mocha) et [eslint-plugin-node-security](https://www.npmjs.com/package/eslint-plugin-security). -**Autrement :** De nombreux modèles de code Node.js défectueux peuvent s'échapper des radars. Par exemple, les développeurs pourrait exiger des fichiers avec une variable donnée comme chemin d'accès (`require(variableCommeChemin)`) qui permet aux attaquants d'exécuter n'importe quel script JS. Les linters de Node.js peuvent détecter de tels modèles et se plaindre en amont. +**Autrement :** De nombreux modèles de code Node.js défectueux peuvent s'échapper des radars. Par exemple, les développeurs pourrait exiger des fichiers avec une variable donnée comme un chemin d'accès (`require(variableCommeChemin)`) qui permet aux attaquants d'exécuter n'importe quel script JS. Les linters de Node.js peuvent détecter de tels modèles et se plaindre en amont.

@@ -257,7 +274,7 @@ function someFunction() { } // À éviter -function someFunction() +function someFunction { // bloc de code } @@ -335,11 +352,11 @@ class SomeClassExample {} // pour les noms de constantes, nous utilisons le mot-clé const et lowerCamelCase const config = { - key: 'value' + key: "value" }; // pour les noms de variables et de fonctions, nous utilisons lowerCamelCase -let someVariableExample = 'value'; +let someVariableExample = "value"; function doSomething() {} ``` @@ -373,12 +390,12 @@ function doSomething() {} ```javascript // À faire -module.exports.SMSProvider = require('./SMSProvider'); -module.exports.SMSNumberResolver = require('./SMSNumberResolver'); +module.exports.SMSProvider = require("./SMSProvider"); +module.exports.SMSNumberResolver = require("./SMSNumberResolver"); // À éviter -module.exports.SMSProvider = require('./SMSProvider/SMSProvider.js'); -module.exports.SMSNumberResolver = require('./SMSNumberResolver/SMSNumberResolver.js'); +module.exports.SMSProvider = require("./SMSProvider/SMSProvider.js"); +module.exports.SMSNumberResolver = require("./SMSNumberResolver/SMSNumberResolver.js"); ```

@@ -392,18 +409,18 @@ module.exports.SMSNumberResolver = require('./SMSNumberResolver/SMSNumberResolve ### 3.10 Exemple de code ```javascript -'' == '0' // false -0 == '' // true -0 == '0' // true +"" == "0"; // false +0 == ""; // true +0 == "0"; // true -false == 'false' // false -false == '0' // true +false == "false"; // false +false == "0"; // true -false == undefined // false -false == null // false -null == undefined // true +false == undefined; // false +false == null; // false +null == undefined; // true -' \t\r\n ' == 0 // true +" \t\r\n " == 0; // true ``` Toutes les déclarations ci-dessus renverront false si elles sont utilisées avec `===` @@ -416,7 +433,7 @@ Toutes les déclarations ci-dessus renverront false si elles sont utilisées ave **Autrement :** La gestion des erreurs asynchrones dans le style des fonctions de rappel est probablement le chemin le plus rapide vers l'enfer - ce style oblige de vérifier les erreurs partout, à gérer les imbrications de code gênantes et rend difficile la compréhension du flux du code. -🔗[**Plus d'infos :** guide pour async await 1.0](https://github.com/yortus/asyncawait) +🔗[**Plus d'infos :** guide pour async-await 1.0](https://github.com/yortus/asyncawait)

@@ -462,7 +479,6 @@ Toutes les déclarations ci-dessus renverront false si elles sont utilisées ave

- ## ![✔] 4.4 Détectez les problèmes de code avec un linter **TL;PL :** Utilisez un linter de code pour vérifier la qualité et détecter les antipatterns au plus tôt. Exécutez-le avant les tests et ajoutez-le en tant que git-hook de pré-commit pour diminuer le temps nécessaire pour examiner et corriger tout problème. Vérifiez également la [section 3](#3-style-du-code) sur les pratiques de style de code. @@ -473,7 +489,7 @@ Toutes les déclarations ci-dessus renverront false si elles sont utilisées ave ## ![✔] 4.5 Évitez les tests globaux, ajoutez des données pour chaque test -**TL;PL :** Pour éviter le chevauchement des tests et expliquer facilement le déroulement du test, chaque test doit ajouter et agir sur son propre ensemble d'enregistrement de la base de données. Chaque fois qu'un test a besoin de récupérer ou de présumer l'existence de certaines données de la BD - il doit explicitement ajouter ces données et éviter de modifier tout autre enregistrement. +**TL;PL :** Pour éviter le chevauchement de test et expliquer facilement le déroulement du test, chaque test doit ajouter et agir sur son propre ensemble d'enregistrement de la base de données. Chaque fois qu'un test a besoin de récupérer ou de présumer l'existence de certaines données de la BD - il doit explicitement ajouter ces données et éviter de modifier tout autre enregistrement. ****Autrement :** Considérez un scénario où le déploiement est interrompu à cause de l'échec des tests, l'équipe va maintenant passer un temps d'investigation précieux qui se terminera par une triste conclusion : le système fonctionne bien, les tests interfèrent cependant les uns avec les autres et interrompent la construction. @@ -491,10 +507,8 @@ Toutes les déclarations ci-dessus renverront false si elles sont utilisées ave ## ![✔] 4.7 Étiquetez vos tests -**TL;PL :** Différents tests doivent s'exécuter selon différents scénarios : test d'intégrité, sans IO, les tests doivent s'exécuter lorsqu'un développeur enregistre ou commit un fichier, les tests complets de bout en bout s'exécutent généralement lorsqu'une nouvelle « pull request » est soumise, etc. Cela peut être réalisé en étiquetant les tests avec des mots clés comme #IO #api #integrite afin que vous puissiez utiliser votre harnais de test et invoquer le sous-ensemble souhaité. Par exemple, voici comment vous invoqueriez uniquement le groupe de test d'intégrité avec [Mocha](https://mochajs.org/) : -``` -mocha --grep "sanity" -``` +**TL;PL :** Différents tests doivent s'exécuter selon différents scénarios : test d'intégrité, sans IO, les tests doivent s'exécuter lorsqu'un développeur enregistre ou commit un fichier, les tests complets de bout en bout s'exécutent généralement lorsqu'une nouvelle « pull request » est soumise, etc. Cela peut être réalisé en étiquetant les tests avec des mots clés comme #IO #api #integrite afin que vous puissiez utiliser votre harnais de test et invoquer le sous-ensemble souhaité. Par exemple, voici comment vous invoqueriez uniquement le groupe de test d'intégrité avec [Mocha](https://mochajs.org/) : mocha --grep 'sanity' + **Autrement :** Exécutez tous les tests, y compris les tests qui effectuent des dizaines de requêtes sur la base de données, chaque fois qu'un développeur apporte un petit changement, cela peut être extrêmement lent et souvent les développeurs s'abstiennent de faire des tests.

@@ -517,13 +531,12 @@ mocha --grep "sanity" ## ![✔] 4.10 Utilisez pour les tests e2e un environnement proche de la production -**TL;PL :** Les tests de bout en bout (e2e) qui comprennent l'utilisation de données en direct sont les maillons les plus faibles du processus du CI car ils dépendent de plusieurs services complexes comme la base de données. Utilisez un environnement de test continue aussi proche que possible de votre production actuelle. +**TL;PL :** Les tests de bout en bout (e2e) qui comprennent l'utilisation de données en direct sont les maillons les plus faibles du processus du CI car ils dépendent de plusieurs services complexes comme la base de données. Utilisez un environnement de test continue aussi proche que possible de votre production actuelle. (Un oubli pour continue ici. A en juger par la clause **Autrement**, cela devrait mentionner docker-compose) **Autrement :** Sans docker-compose, les équipes doivent maintenir une base de données de test pour chaque environnement de test, y compris les machines des développeurs, garder toutes ces bases de données synchronisées afin que les résultats des tests ne varient pas d'un environnement à l'autre.

- ## ![✔] 4.11 Refactorisez régulièrement à l'aide d'outils d'analyse statique **TL;PL :** L'utilisation d'outils d'analyse statique vous aide en donnant des moyens concrets d'améliorer la qualité du code et permet de maintenir votre code. Vous pouvez ajouter des outils d'analyse statique à votre CI pour échouer lorsqu'il trouve du code incorrect. Ses principaux arguments de vente par rapport au contrôle ordinaire de code sont la capacité d'inspecter la qualité dans le contexte de plusieurs fichiers (par exemple, détecter les doublons), d'effectuer une analyse avancée (par exemple la complexité du code) et de suivre l'historique et la progression des problèmes de code. Deux exemples d'outils que vous pouvez utiliser sont [Sonarqube](https://www.sonarqube.org/) (+ de 2 600 [étoiles](https://github.com/SonarSource/sonarqube)) et [Code Climate](https://codeclimate.com/) (+ de 1 500 [étoiles](https://github.com/codeclimate/codeclimate)). @@ -542,6 +555,14 @@ mocha --grep "sanity" 🔗[**Plus d'infos : Choisissez soigneusement votre plateforme CI**](/sections/testingandquality/citools.french.md) +## ![✔] 4.13 Testez vos middlewares de manière isolée + +**TL;PL :** Lorsqu'un middleware contient une logique immense qui couvre de nombreuses requêtes, cela vaut la peine de le tester de manière isolée sans pour autant éveiller tout le framework du web. Cela peut être facilement réalisé en espionnant les objets {req, res, next}. + +**Autrement :** Un bogue dans le middleware Express === un bogue dans toutes ou la plupart des requêtes + +🔗 [**Plus d'infos : Testez vos middlewares de manière isolée**](/sections/testingandquality/test-middlewares.french.md) +
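Esquisse minimale (middleware hypothétique, testé ici avec Jest à titre d'exemple) : le middleware est appelé directement avec des objets `{ req, res, next }` factices, sans démarrer le framework web.

```javascript
// Middleware hypothétique à tester de manière isolée
const exigeAuthentification = (req, res, next) => {
  if (!req.headers.authorization) {
    res.statusCode = 403;
    return res.end("Accès refusé");
  }
  next();
};

test("renvoie 403 lorsque l'en-tête authorization est absent", () => {
  const req = { headers: {} };
  const res = { statusCode: 0, end: jest.fn() };
  const next = jest.fn();

  exigeAuthentification(req, res, next);

  expect(res.statusCode).toBe(403);
  expect(next).not.toHaveBeenCalled();
});
```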


⬆ Retourner en haut de la page

@@ -724,6 +745,16 @@ mocha --grep "sanity" 🔗 [**Read More: Log Routing**](/sections/production/logrouting.md) +

+ +## ![✔] 5.19. Install your packages with `npm ci` + +**TL;DR:** You have to be sure that production code uses the exact version of the packages you have tested it with. Run `npm ci` to do a clean install of your dependencies matching package.json and package-lock.json. + +**Otherwise:** QA will thoroughly test the code and approve a version that will behave differently in production. Even worse, different servers in the same production cluster might run different code + +🔗 [**Read More: Use npm ci**](/sections/production/installpackageswithnpmci.md) +


⬆ Return to top

@@ -814,15 +845,15 @@ mocha --grep "sanity"

-## ![✔] 6.8. Avoid using the Node.js crypto library for handling passwords, use Bcrypt +## ![✔] 6.8. Protect Users' Passwords/Secrets using bcrypt or scrypt -**TL;DR:** Passwords or secrets (API keys) should be stored using a secure hash + salt function like `bcrypt`, that should be a preferred choice over its JavaScript implementation due to performance and security reasons. +**TL;DR:** Passwords or secrets (e.g. API keys) should be stored using a secure hash + salt function like `bcrypt`, `scrypt`, or worst case `pbkdf2`. -**Otherwise:** Passwords or secrets that are persisted without using a secure function are vulnerable to brute forcing and dictionary attacks that will lead to their disclosure eventually. +**Otherwise:** Passwords and secrets that are stored without using a secure function are vulnerable to brute forcing and dictionary attacks that will lead to their disclosure eventually. -🔗 [**Read More: Use Bcrypt**](/sections/security/bcryptpasswords.md) +🔗 [**Read More: User Passwords**](/sections/security/userpasswords.md)
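As a minimal illustration only (assuming the `bcrypt` npm package and a hypothetical persistence layer), hashing on signup and verifying on login could look like this:

```javascript
const bcrypt = require("bcrypt");

// Hash the password before persisting the user - never store the plain text
async function registerUser(username, plainPassword) {
  const passwordHash = await bcrypt.hash(plainPassword, 12); // 12 salt rounds
  return { username, passwordHash }; // persist this object with your own data layer
}

// Compare a login attempt against the stored hash
async function isPasswordValid(plainPassword, storedHash) {
  return bcrypt.compare(plainPassword, storedHash);
}
```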

@@ -931,7 +962,7 @@ mocha --grep "sanity" **TL;DR:** Avoid requiring/importing another file with a path that was given as parameter due to the concern that it could have originated from user input. This rule can be extended for accessing files in general (i.e. `fs.readFile()`) or other sensitive resource access with dynamic variables originating from user input. [Eslint-plugin-security](https://www.npmjs.com/package/eslint-plugin-security) linter can catch such patterns and warn early enough -**Otherwise:** Malicious user input could find its way to a parameter that is used to require tampered files, for example, a previously uploaded file on the filesystem, or access already existing system files. +**Otherwise:** Malicious user input could find its way to a parameter that is used to require tampered files, for example, a previously uploaded file on the file system, or access already existing system files. 🔗 [**Read More: Safe module loading**](/sections/security/safemoduleloading.md) @@ -979,7 +1010,7 @@ mocha --grep "sanity" **TL;DR:** Any step in the development chain should be protected with MFA (multi-factor authentication), npm/Yarn are a sweet opportunity for attackers who can get their hands on some developer's password. Using developer credentials, attackers can inject malicious code into libraries that are widely installed across projects and services. Maybe even across the web if published in public. Enabling 2-factor-authentication in npm leaves almost zero chances for attackers to alter your package code. -**Otherwise:** [Have you heard about the eslint developer who's password was hijacked?](https://medium.com/@oprearocks/eslint-backdoor-what-it-is-and-how-to-fix-the-issue-221f58f1a8c8) +**Otherwise:** [Have you heard about the eslint developer whose password was hijacked?](https://medium.com/@oprearocks/eslint-backdoor-what-it-is-and-how-to-fix-the-issue-221f58f1a8c8)

@@ -1032,7 +1063,7 @@ mocha --grep "sanity" # `7. Draft: Performance Best Practices` -## Our contributors are working on this section. [Would you like to join?](https://github.com/i0natan/nodebestpractices/issues/256) +## Our contributors are working on this section. [Would you like to join?](https://github.com/goldbergyoni/nodebestpractices/issues/256)

@@ -1046,11 +1077,10 @@ mocha --grep "sanity"


- ## ![✔] 7.2. Prefer native JS methods over user-land utils like Lodash - **TL;DR:** It's often more penalising to use utility libraries like `lodash` and `underscore` over native methods as it leads to unneeded dependencies and slower performance. - Bear in mind that with the introduction of the new V8 engine alongside the new ES standards, native methods were improved in such a way that it's now about 50% more performant than utility libraries. +**TL;DR:** It's often more penalising to use utility libraries like `lodash` and `underscore` over native methods as it leads to unneeded dependencies and slower performance. +Bear in mind that with the introduction of the new V8 engine alongside the new ES standards, native methods were improved in such a way that it's now about 50% more performant than utility libraries. **Otherwise:** You'll have to maintain less performant projects where you could have simply used what was **already** available or dealt with a few more lines in exchange of a few more files. @@ -1058,10 +1088,189 @@ mocha --grep "sanity"


+

⬆ Return to top

+ +# `8. Docker Best Practices` + +🏅 Many thanks to [Bret Fisher](https://github.com/BretFisher) from whom we learned many of the following practices + +

+ +## ![✔] 8.1 Use multi-stage builds for leaner and more secure Docker images + +**TL;DR:** Use multi-stage build to copy only necessary production artifacts. A lot of build-time dependencies and files are not needed for running your application. With multi-stage builds these resources can be used during build while the runtime environment contains only what's necessary. Multi-stage builds are an easy way to get rid of excess weight and security threats. + +**Otherwise:** Larger images will take longer to build and ship, build-only tools might contain vulnerabilities and secrets only meant for the build phase might be leaked. + +### Example Dockerfile for multi-stage builds + +```dockerfile +FROM node:14.4.0 AS build + +COPY . . +RUN npm ci && npm run build + +FROM node:14.4.0-slim + +USER node +EXPOSE 8080 + +COPY --from=build /home/node/app/dist /home/node/app/package.json /home/node/app/package-lock.json ./ +RUN npm ci --production + +CMD [ "node", "dist/app.js" ] +``` + +🔗 [**Read More: Use multi-stage builds**](/sections/docker/multi_stage_builds.md) + +


+ +## ![✔] 8.2. Bootstrap using 'node' command, avoid npm start + +**TL;DR:** Use `CMD ["node", "server.js"]` to start your app, avoid using npm scripts which don't pass OS signals to the code. This prevents problems with child-processes, signal handling, graceful shutdown and having zombie processes. + +**Otherwise:** When no signals are passed, your code will never be notified about shutdowns. Without this, it will lose its chance to close properly, possibly losing current requests and/or data. + +[**Read More: Bootstrap container using node command, avoid npm start**](/sections/docker/bootstrap-using-node.md) + +


+ +## ![✔] 8.3. Let the Docker runtime handle replication and uptime + +**TL;DR:** When using a Docker run time orchestrator (e.g., Kubernetes), invoke the Node.js process directly without intermediate process managers or custom code that replicate the process (e.g. PM2, Cluster module). The runtime platform has the highest amount of data and visibility for making placement decision - It knows best how many processes are needed, how to spread them and what to do in case of crashes + +**Otherwise:** Container keeps crashing due to lack of resources will get restarted indefinitely by the process manager. Should Kubernetes be aware of that, it could relocate it to a different roomy instance + +🔗 [**Read More: Let the Docker orchestrator restart and replicate processes**](/sections/docker/restart-and-replicate-processes.md) + +


+ +## ![✔] 8.4. Use .dockerignore to prevent leaking secrets + +**TL;DR**: Include a `.dockerignore` file that filters out common secret files and development artifacts. By doing so, you might prevent secrets from leaking into the image. As a bonus the build time will significantly decrease. Also, ensure not to copy all files recursively rather explicitly choose what should be copied to Docker + +**Otherwise**: Common personal secret files like `.env`, `.aws` and `.npmrc` will be shared with anybody with access to the image (e.g. Docker repository) + +🔗 [**Read More: Use .dockerignore**](/sections/docker/docker-ignore.md) + +


+ +## ![✔] 8.5. Clean-up dependencies before production + +**TL;DR:** Although Dev-Dependencies are sometimes needed during the build and test life-cycle, eventually the image that is shipped to production should be minimal and clean from development dependencies. Doing so guarantees that only necessary code is shipped and the amount of potential attacks (i.e. attack surface) is minimized. When using multi-stage build (see dedicated bullet) this can be achieved by installing all dependencies first and finally running `npm ci --production` + +**Otherwise:** Many of the infamous npm security breaches were found within development packages (e.g. [eslint-scope](https://eslint.org/blog/2018/07/postmortem-for-malicious-package-publishes)) + +🔗 Read More: [Remove development dependencies](/sections/docker/install-for-production.md) + +


+ +## ![✔] 8.6. Shutdown smartly and gracefully + +**TL;DR:** Handle the process SIGTERM event and clean-up all existing connections and resources. This should be done while responding to ongoing requests. In Dockerized runtimes shutting down containers is not a rare event, rather a frequent occurrence that happens as part of routine work. Achieving this demands some thoughtful code to orchestrate several moving parts: The load balancer, keep-alive connections, the HTTP server and other resources + +**Otherwise:** Dying immediately means not responding to thousands of disappointed users + +🔗 [**Read More: Graceful shutdown**](/sections/docker/graceful-shutdown.md) + +
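A minimal sketch (hypothetical app code, not a complete shutdown sequence): listen for SIGTERM, stop accepting new connections and let in-flight requests finish before exiting.

```javascript
const http = require("http");

const server = http.createServer((req, res) => res.end("ok"));
server.listen(8080);

process.on("SIGTERM", () => {
  // Stop accepting new connections; ongoing requests are allowed to complete
  server.close(() => {
    // Release other resources here (database pools, message-queue consumers, ...)
    process.exit(0);
  });
});
```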


+ +## ![✔] 8.7. Set memory limits using both Docker and v8 + +**TL;DR:** Always configure a memory limit using both Docker and the JavaScript runtime flags. The Docker limit is needed to make thoughtful container placement decisions, the v8 flag `--max-old-space-size` is needed to kick off the GC on time and prevent under utilization of memory. Practically, set the v8's old space memory to be just a bit less than the container limit + +**Otherwise:** The docker definition is needed to perform thoughtful scaling decisions and prevent starving other citizens. Without also defining the v8's limits, it will under utilize the container resources - Without explicit instructions it crashes when utilizing ~50-60% of its host resources + +🔗 [**Read More: Set memory limits using Docker only**](/sections/docker/memory-limit.md) + +


+ +## ![✔] 8.8. Plan for efficient caching + +**TL;DR:** Rebuilding a whole docker image from cache can be nearly instantaneous if done correctly. The less updated instructions should be at the top of your Dockerfile and the ones constantly changing (like app code) should be at the bottom. + +**Otherwise:** Docker build will be very long and consume lot of resources even when making tiny changes + +🔗 [**Read More: Leverage caching to reduce build times**](/sections/docker/use-cache-for-shorter-build-time.md) + +


+ +## ![✔] 8.9. Use explicit image reference, avoid `latest` tag + +**TL;DR:** Specify an explicit image digest or versioned label, never refer to `latest`. Developers are often led to believe that specifying the `latest` tag will provide them with the most recent image in the repository however this is not the case. Using a digest guarantees that every instance of the service is running exactly the same code. + +In addition, referring to an image tag means that the base image is subject to change, as image tags cannot be relied upon for a deterministic install. Instead, if a deterministic install is expected, a SHA256 digest can be used to reference an exact image. + +**Otherwise:** A new version of a base image could be deployed into production with breaking changes, causing unintended application behaviour. + +🔗 [**Read More: Understand image tags and use the "latest" tag with caution**](/sections/docker/image-tags.md) + +


+ +## ![✔] 8.10. Prefer smaller Docker base images + +**TL;DR:** Large images lead to higher exposure to vulnerabilities and increased resource consumption. Using leaner Docker images, such as Slim and Alpine Linux variants, mitigates this issue. + +**Otherwise:** Building, pushing, and pulling images will take longer, unknown attack vectors can be used by malicious actors and more resources are consumed. + +🔗 [**Read More: Prefer smaller images**](/sections/docker/smaller_base_images.md) + +


+ +## ![✔] 8.11. Clean-out build-time secrets, avoid secrets in args + +**TL;DR:** Avoid secrets leaking from the Docker build environment. A Docker image is typically shared in multiple environments like CI and a registry that are not as sanitized as production. A typical example is an npm token which is usually passed to a Dockerfile as an argument. This token stays within the image long after it is needed and allows the attacker indefinite access to a private npm registry. This can be avoided by copying a secret file like `.npmrc` and then removing it using multi-stage build (beware, build history should be deleted as well) or by using Docker build-kit secret feature which leaves zero traces + +**Otherwise:** Everyone with access to the CI and docker registry will also get access to some precious organization secrets as a bonus + +🔗 [**Read More: Clean-out build-time secrets**](/sections/docker/avoid-build-time-secrets.md) + +


+ +## ![✔] 8.12. Scan images for multi layers of vulnerabilities + +**TL;DR:** Besides checking code dependencies vulnerabilities, also scan the final image that is shipped to production. Docker image scanners check the code dependencies but also the OS binaries. This E2E security scan covers more ground and verifies that no bad guy injected bad things during the build. Consequently, it is recommended to run this as the last step before deployment. There are a handful of free and commercial scanners that also provide CI/CD plugins + +**Otherwise:** Your code might be entirely free from vulnerabilities. However it might still get hacked due to vulnerable versions of OS-level binaries (e.g. OpenSSL, TarBall) that are commonly being used by applications + +🔗 [**Read More: Scan images for vulnerabilities**](/sections/docker/scan-images.md) + +


+ +## ![✔] 8.13 Clean NODE_MODULE cache + +**TL;DR:** After installing dependencies in a container remove the local cache. It doesn't make any sense to duplicate the dependencies for faster future installs since there won't be any further installs - A Docker image is immutable. Using a single line of code tens of MB (typically 10-50% of the image size) are shaved off + +**Otherwise:** The image that will get shipped to production will weigh 30% more due to files that will never get used + +🔗 [**Read More: Clean NODE_MODULE cache**](/sections/docker/clean-cache.md) + +


+ +## ![✔] 8.14. Generic Docker practices + +**TL;DR:** This is a collection of Docker advice that is not related directly to Node.js - the Node implementation is not much different than any other language. Click read more to skim through. + +🔗 [**Read More: Generic Docker practices**](/sections/docker/generic-tips.md) + +


+ + +## ![✔] 8.15. Lint your Dockerfile + +**TL;DR:** Linting your Dockerfile is an important step to identify issues in your Dockerfile which differ from best practices. By checking for potential flaws using a specialised Docker linter, performance and security improvements can be easily identified, saving countless hours of wasted time or security issues in production code. + +**Otherwise:** Mistakenly the Dockerfile creator left Root as the production user, and also used an image from an unknown source repository. This could be avoided with just a simple linter. + +🔗 [**Read More: Lint your Dockerfile**](/sections/docker/lint-dockerfile.md) + +


+ +

⬆ Return to top

# Milestones -To maintain this guide and keep it up to date, we are constantly updating and improving the guidelines and best practices with the help of the community. You can follow our [milestones](https://github.com/i0natan/nodebestpractices/milestones) and join the working groups if you want to contribute to this project +To maintain this guide and keep it up to date, we are constantly updating and improving the guidelines and best practices with the help of the community. You can follow our [milestones](https://github.com/goldbergyoni/nodebestpractices/milestones) and join the working groups if you want to contribute to this project
@@ -1074,24 +1283,25 @@ All translations are contributed by the community. We will be happy to get any h - ![BR](/assets/flags/BR.png) [Brazilian Portuguese](./README.brazilian-portuguese.md) - Courtesy of [Marcelo Melo](https://github.com/marcelosdm) - ![CN](/assets/flags/CN.png) [Chinese](./README.chinese.md) - Courtesy of [Matt Jin](https://github.com/mattjin) - ![RU](/assets/flags/RU.png) [Russian](./README.russian.md) - Courtesy of [Alex Ivanov](https://github.com/contributorpw) +- ![PL](/assets/flags/PL.png) [Polish](./README.polish.md) - Courtesy of [Michal Biesiada](https://github.com/mbiesiad) ### Translations in progress -- ![FR](/assets/flags/FR.png) [French](https://github.com/gaspaonrocks/nodebestpractices/blob/french-translation/README.french.md) ([Discussion](https://github.com/i0natan/nodebestpractices/issues/129)) -- ![HE](/assets/flags/HE.png) Hebrew ([Discussion](https://github.com/i0natan/nodebestpractices/issues/156)) -- ![KR](/assets/flags/KR.png) [Korean](README.korean.md) - Courtesy of [Sangbeom Han](https://github.com/uronly14me) ([Discussion](https://github.com/i0natan/nodebestpractices/issues/94)) -- ![ES](/assets/flags/ES.png) [Spanish](https://github.com/i0natan/nodebestpractices/blob/spanish-translation/README.spanish.md) ([Discussion](https://github.com/i0natan/nodebestpractices/issues/95)) -- ![TR](/assets/flags/TR.png) Turkish ([Discussion](https://github.com/i0natan/nodebestpractices/issues/139)) +- ![FR](/assets/flags/FR.png) [French](https://github.com/gaspaonrocks/nodebestpractices/blob/french-translation/README.french.md) ([Discussion](https://github.com/goldbergyoni/nodebestpractices/issues/129)) +- ![HE](/assets/flags/HE.png) Hebrew ([Discussion](https://github.com/goldbergyoni/nodebestpractices/issues/156)) +- ![KR](/assets/flags/KR.png) [Korean](README.korean.md) - Courtesy of [Sangbeom Han](https://github.com/uronly14me) ([Discussion](https://github.com/goldbergyoni/nodebestpractices/issues/94)) +- ![ES](/assets/flags/ES.png) [Spanish](https://github.com/goldbergyoni/nodebestpractices/blob/spanish-translation/README.spanish.md) ([Discussion](https://github.com/goldbergyoni/nodebestpractices/issues/95)) +- ![TR](/assets/flags/TR.png) Turkish ([Discussion](https://github.com/goldbergyoni/nodebestpractices/issues/139))

## Steering Committee -Meet the steering committee members - the people who work together to provide guidance and future direction to the project. In addition, each member of the committee leads a project tracked under our [Github projects](https://github.com/i0natan/nodebestpractices/projects). +Meet the steering committee members - the people who work together to provide guidance and future direction to the project. In addition, each member of the committee leads a project tracked under our [Github projects](https://github.com/goldbergyoni/nodebestpractices/projects). -[Yoni Goldberg](https://github.com/i0natan) +[Yoni Goldberg](https://github.com/goldbergyoni) @@ -1118,14 +1328,25 @@ Full Stack Developer & Site Reliability Engineer based in New Zealand, intereste
+ + +[Kevyn Bruyere](https://github.com/kevynb) + + +Independent full-stack developer with a taste for Ops and automation. + +
+ +### Steering Committee Emeriti + [Sagir Khan](https://github.com/sagirk) - + -Deep specialist in JavaScript and its ecosystem — React, Node.js, MongoDB, pretty much anything that involves using JavaScript/JSON in any layer of the system — building products using the web platform for the world’s most recognized brands. Individual Member of the Node.js Foundation, collaborating on the Community Committee's Website Redesign Initiative. +Deep specialist in JavaScript and its ecosystem — React, Node.js, TypeScript, GraphQL, MongoDB, pretty much anything that involves JS/JSON in any layer of the system — building products using the web platform for the world’s most recognized brands. Individual Member of the Node.js Foundation.
@@ -1135,20 +1356,202 @@ Thank you to all our collaborators! 🙏 Our collaborators are members who are contributing to the repository on a regular basis, through suggesting new best practices, triaging issues, reviewing pull requests and more. If you are interested in helping us guide thousands of people to craft better Node.js applications, please read our [contributor guidelines](/.operations/CONTRIBUTING.md) 🎉 -| | | -| :--: | :--: | -| [Ido Richter (Founder)](https://github.com/idori) | [Keith Holliday](https://github.com/TheHollidayInn) | +| | |target="_blank"> | +| :---------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------------------: | +| [Ido Richter (Founder)](https://github.com/idori) | [Keith Holliday](https://github.com/TheHollidayInn) | -### Past collaborators +### Collaborator Emeriti | | -| :--: | -| [Refael Ackermann](https://github.com/refack) | +| :-------------------------------------------------------------------------------------------------------------------------: | +| [Refael Ackermann](https://github.com/refack) |
-## Thank You Notes - -We appreciate any contribution, from a single word fix to a new best practice. View our contributors and [contributing documentation here!](CONTRIBUTORS.md) - -


+## Contributing +If you've ever wanted to contribute to open source, now is your chance! See the [contributing docs](.operations/CONTRIBUTING.md) for more information. + +## Contributors ✨ + +Thanks goes to these wonderful people who have contributed to this repository! + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Kevin Rambaud

🖋

Michael Fine

🖋

Shreya Dahal

🖋

Matheus Cruz Rocha

🖋

Yog Mehta

🖋

Kudakwashe Paradzayi

🖋

t1st3

🖋

mulijordan1976

🖋

Matan Kushner

🖋

Fabio Hiroki

🖋

James Sumners

🖋

Dan Gamble

🖋

PJ Trainor

🖋

Remek Ambroziak

🖋

Yoni Jah

🖋

Misha Khokhlov

🖋

Evgeny Orekhov

🖋

-

🖋

Isaac Halvorson

🖋

Vedran Karačić

🖋

lallenlowe

🖋

Nathan Wells

🖋

Paulo Reis

🖋

syzer

🖋

David Sancho

🖋

Robert Manolea

🖋

Xavier Ho

🖋

Aaron

🖋

Jan Charles Maghirang Adona

🖋

Allen

🖋

Leonardo Villela

🖋

Michał Załęcki

🖋

Chris Nicola

🖋

Alejandro Corredor

🖋

cwar

🖋

Yuwei

🖋

Utkarsh Bhatt

🖋

Duarte Mendes

🖋

Jason Kim

🖋

Mitja O.

🖋

Sandro Miguel Marques

🖋

Gabe

🖋

Ron Gross

🖋

Valeri Karpov

🖋

Sergio Bernal

🖋

Nikola Telkedzhiev

🖋

Vitor Godoy

🖋

Manish Saraan

🖋

Sangbeom Han

🖋

blackmatch

🖋

Joe Reeve

🖋

Ryan Busby

🖋

Iman Mohamadi

🖋

Sergii Paryzhskyi

🖋

Kapil Patel

🖋

迷渡

🖋

Hozefa

🖋

Ethan

🖋

Sam

🖋

Arlind

🖋

Teddy Toussaint

🖋

Lewis

🖋

Gabriel Lidenor

🖋

Roman

🖋

Francozeira

🖋

Invvard

🖋

Rômulo Garofalo

🖋

Tho Q Luong

🖋

Burak Shen

🖋

Martin Muzatko

🖋

Jared Collier

🖋

Hilton Meyer

🖋

ChangJoo Park(박창주)

🖋

Masahiro Sakaguchi

🖋

Keith Holliday

🖋

coreyc

🖋

Maximilian Berkmann

🖋

Douglas Mariano Valero

🖋

Marcelo Melo

🖋

Mehmet Perk

🖋

ryan ouyang

🖋

Shabeer

🖋

Eduard Kyvenko

🖋

Deyvison Rocha

🖋

George Mamer

🖋

Konstantinos Leimonis

🖋

Oliver Lluberes

🌍

Tien Do

🖋

Ranvir Singh

🖋

Vadim Nicolaev

🖋 🌍

German Gamboa Gonzalez

🖋

Hafez

🖋

Chandiran

🖋

VinayaSathyanarayana

🖋

Kim Kern

🖋

Kenneth Freitas

🖋

songe

🖋

Kirill Shekhovtsov

🖋

Serge

🖋

keyrwinz

🖋

Dmitry Nikitenko

🖋

bushuai

👀 🖋

Benjamin Gruenbaum

🖋

Ezequiel

🌍

Juan José Rodríguez

🌍

Or Bin

🖋

Andreo Vieira

🖋

Michael Solomon

🖋

Jimmy Callin

🖋

Siddharth

🖋

Ryan Smith

🖋

Tom Boettger

🖋

Joaquín Ormaechea

🌍

dfrzuz

🌍

Victor Homyakov

🖋

Josh

🖋 🛡️

Alec Francis

🖋

arjun6610

🖋

Jan Osch

🖋

Thiago Rotondo Sampaio

🌍

Alexsey

🖋

Luis A. Acurero

🌍

Lucas Romano

🌍

Denise Case

🖋

Nick Ribal

🖋

0xflotus

🖋

Jonathan Chen

🖋

Dilan Srilal

🖋

vladthelittleone

🌍

Nik Osvalds

🖋

Daniel Kiss

📖

Forresst

🖋
+ + + + + \ No newline at end of file diff --git a/sections/docker/avoid-build-time-secrets.french.md b/sections/docker/avoid-build-time-secrets.french.md new file mode 100644 index 000000000..884206419 --- /dev/null +++ b/sections/docker/avoid-build-time-secrets.french.md @@ -0,0 +1,93 @@ +# Clean build-time secrets, avoid secrets as args + +

+ +### One Paragraph Explainer + + +A Docker image isn't just a bunch of files but rather multiple layers revealing what happened during build-time. In a very common scenario, developers need the npm token during build time (mostly for private registries) - this is falsely achieved by passing the token as a build time args. It might seem innocent and safe, however this token can now be fetched from the developer's machine Docker history, from the Docker registry and the CI. An attacker who gets access to that token is now capable of writing into the organization private npm registry. There are two more secured alternatives: The flawless one is using Docker --secret feature (experimental as of July 2020) which allows mounting a file during build time only. The second approach is using multi-stage build with args, building and then copying only the necessary files to production. The last technique will not ship the secrets with the images but will appear in the local Docker history - This is typically considered as secured enough for most organizations. + +

+ +### Code Example – Using Docker mounted secrets (experimental but stable) + +
+ +Dockerfile + +``` +# syntax = docker/dockerfile:1.0-experimental + +FROM node:12-slim +WORKDIR /usr/src/app +COPY package.json package-lock.json ./ +RUN --mount=type=secret,id=npm,target=/root/.npmrc npm ci + +# The rest comes here +``` + +
+ +

+ +### Code Example – Building securely using multi-stage build + +
+ +Dockerfile + +``` + +FROM node:12-slim AS build +ARG NPM_TOKEN +WORKDIR /usr/src/app +COPY . /dist +RUN echo "//registry.npmjs.org/:\_authToken=\$NPM_TOKEN" > .npmrc && \ + npm ci --production && \ + rm -f .npmrc + +FROM build as prod +COPY --from=build /dist /dist +CMD ["node","index.js"] + +# The ARG and .npmrc won't appear in the final image but can be found in the Docker daemon un-tagged images list - make sure to delete those +``` + +
+ +

+ +### Code Example Anti Pattern – Using build time args + +
+ +Dockerfile + +``` + +FROM node:12-slim +ARG NPM_TOKEN +WORKDIR /usr/src/app +COPY . /dist +RUN echo "//registry.npmjs.org/:\_authToken=\$NPM_TOKEN" > .npmrc && \ + npm ci --production && \ + rm -f .npmrc + +# Deleting the .npmrc within the same copy command will not save it inside the layer, however it can be found in image history + +CMD ["node","index.js"] +``` + +
+ +

+
+### Blog Quote: "These secrets aren’t saved in the final Docker"
+
+From the blog, [Alexandra Ulsh](https://www.alexandraulsh.com/2019/02/24/docker-build-secrets-and-npmrc/?fbclid=IwAR0EAr1nr4_QiGzlNQcQKkd9rem19an9atJRO_8-n7oOZXwprToFQ53Y0KQ)
+
+> In November 2018 Docker 18.09 introduced a new --secret flag for docker build. This allows us to pass secrets from a file to our Docker builds. These secrets aren’t saved in the final Docker image, any intermediate images, or the image commit history. With build secrets, you can now securely build Docker images with private npm packages without build arguments and multi-stage builds.
+
+```
+
+```
\ No newline at end of file
diff --git a/sections/docker/bootstrap-using-node.french.md b/sections/docker/bootstrap-using-node.french.md
new file mode 100644
index 000000000..09114c16a
--- /dev/null
+++ b/sections/docker/bootstrap-using-node.french.md
@@ -0,0 +1,85 @@
+# Bootstrap container using node command instead of npm
+
+## One paragraph explainer
+
+We are used to seeing code examples where folks start their app using `CMD 'npm start'`. This is a bad practice. The `npm` binary will not forward signals to your app, which prevents graceful shutdown (see [graceful shutdown](/sections/docker/graceful-shutdown.md)). If you are using child processes, they won’t be cleaned up correctly in case of unexpected shutdown, leaving zombie processes on your host. `npm start` also results in having an extra process for no benefit. To start your app, use `CMD ["node","server.js"]`. If your app spawns child processes, also use `TINI` as an entrypoint.
+
+### Code example - Bootstrapping using Node
+
+```dockerfile
+
+FROM node:12-slim AS build
+
+
+WORKDIR /usr/src/app
+COPY package.json package-lock.json ./
+RUN npm ci --production && npm cache clean --force
+
+CMD ["node", "server.js"]
+```
+
+
+### Code example - Using Tini as entrypoint
+
+```dockerfile
+
+FROM node:12-slim AS build
+
+# Add Tini if using child-processes
+ENV TINI_VERSION v0.19.0
+ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
+RUN chmod +x /tini
+
+WORKDIR /usr/src/app
+COPY package.json package-lock.json ./
+RUN npm ci --production && npm cache clean --force
+
+ENTRYPOINT ["/tini", "--"]
+
+CMD ["node", "server.js"]
+```
+
+### Antipatterns
+
+Using npm start
+```dockerfile
+
+FROM node:12-slim AS build
+WORKDIR /usr/src/app
+COPY package.json package-lock.json ./
+RUN npm ci --production && npm cache clean --force
+
+# don’t do that!
+CMD "npm start"
+```
+
+Using node in a single string will start a bash/ash shell process to execute your command. That is almost the same as using `npm`
+
+```dockerfile
+
+FROM node:12-slim AS build
+WORKDIR /usr/src/app
+COPY package.json package-lock.json ./
+RUN npm ci --production && npm cache clean --force
+
+# don’t do that, it will start bash
+CMD "node server.js"
+```
+
+Starting with npm, here’s the process tree:
+```
+$ ps falx
+  UID   PID  PPID   COMMAND
+    0     1     0   npm
+    0    16     1   sh -c node server.js
+    0    17    16   \_ node server.js
+```
+There is no advantage to those two extra processes.
+
+Sources:
+
+
+https://maximorlov.com/process-signals-inside-docker-containers/
+
+
+https://github.com/nodejs/docker-node/blob/master/docs/BestPractices.md#handling-kernel-signals
\ No newline at end of file
diff --git a/sections/docker/clean-cache.french.md b/sections/docker/clean-cache.french.md
new file mode 100644
index 000000000..3126564f1
--- /dev/null
+++ b/sections/docker/clean-cache.french.md
@@ -0,0 +1,27 @@
+# Clean NODE_MODULE cache
+

+
+### One Paragraph Explainer
+
+Node package managers, npm & Yarn, cache the installed packages locally so that future projects which need the same libraries won't need to fetch from a remote repository. Although this duplicates the packages and consumes more storage - it pays off in a local development environment that typically keeps installing the same packages. In a Docker container this storage increase has no value since the dependencies are installed only once. By removing this cache, using a single line of code, tens of MB are shaved from the image. While doing so, ensure that it doesn't exit with a non-zero code and fail the CI build because of caching issues - this can be avoided by including the --force flag.
+
+*Please note that this is not relevant if you are using a multi-stage build, as long as you don't install new packages in the last stage*
+

+ +### Code Example – Clean cache + +
+Dockerfile + +``` +FROM node:12-slim AS build +WORKDIR /usr/src/app +COPY package.json package-lock.json ./ +RUN npm ci --production && npm cache clean --force + +# The rest comes here +``` + +
\ No newline at end of file diff --git a/sections/docker/docker-ignore.french.md b/sections/docker/docker-ignore.french.md new file mode 100644 index 000000000..72f5823f1 --- /dev/null +++ b/sections/docker/docker-ignore.french.md @@ -0,0 +1,49 @@ +# Use .dockerignore to prevent leaking secrets + +

+ +### One Paragraph Explainer + +The Docker build command copies the local files into the build context environment over a virtual network. Be careful - development and CI folders contain secrets like .npmrc, .aws, .env files and other sensitive files. Consequently, Docker images might hold secrets and expose them in unsafe territories (e.g. Docker repository, partners servers). In a better world the Dockerfile should be explicit about what is being copied. On top of this include a .dockerignore file that acts as the last safety net that filters out unnecessary folders and potential secrets. Doing so also boosts the build speed - By leaving out common development folders that have no use in production (e.g. .git, test results, IDE configuration), the builder can better utilize the cache and achieve better performance + +

+ +### Code Example – A good default .dockerignore for Node.js + +
+.dockerignore + +``` +**/node_modules/ +**/.git +**/README.md +**/LICENSE +**/.vscode +**/npm-debug.log +**/coverage +**/.env +**/.editorconfig +**/.aws +**/dist +``` + +
+ +

+ +### Code Example Anti-Pattern – Recursive copy of all files + +
+Dockerfile + +``` +FROM node:12-slim AS build +WORKDIR /usr/src/app +# The next line copies everything +COPY . . + +# The rest comes here + +``` + +
\ No newline at end of file diff --git a/sections/docker/generic-tips.french.md b/sections/docker/generic-tips.french.md new file mode 100644 index 000000000..5e4fba34c --- /dev/null +++ b/sections/docker/generic-tips.french.md @@ -0,0 +1,29 @@ +[✔]: ../../assets/images/checkbox-small-blue.png + +# Common Node.js Docker best practices + +This common Docker guidelines section contains best practices that are standardized among all programming languages and have no special Node.js interpretation + +## ![✔] Prefer COPY over ADD command + +**TL;DR:** COPY is safer as it copies local files only while ADD supports fancier fetches like downloading binaries from remote sites + +## ![✔] Avoid updating the base OS + +**TL;DR:** Updating the local binaries during build (e.g. apt-get update) creates inconsistent images every time it runs and also demands elevated privileges. Instead use base images that are updated frequently + +## ![✔] Classify images using labels + +**TL;DR:** Providing metadata for each image might help Ops professionals treat it adequately. For example, include the maintainer name, build date and other information that might prove useful when someone needs to reason about an image + +## ![✔] Use unprivileged containers + +**TL;DR:** Privileged container have the same permissions and capabilities as the root user over the host machine. This is rarely needed and as a rule of thumb one should use the 'node' user that is created within official Node images + +## ![✔] Inspect and verify the final result + +**TL;DR:** Sometimes it's easy to overlook side effects in the build process like leaked secrets or unnecessary files. Inspecting the produced image using tools like [Dive](https://github.com/wagoodman/dive) can easily help to identify such issues + +## ![✔] Perform integrity check + +**TL;DR:** While pulling base or final images, the network might be mislead and redirected to download malicious images. Nothing in the standard Docker protocol prevents this unless signing and verifying the content. [Docker Notary](https://docs.docker.com/notary/getting_started/) is one of the tools to achieve this \ No newline at end of file diff --git a/sections/docker/graceful-shutdown.french.md b/sections/docker/graceful-shutdown.french.md new file mode 100644 index 000000000..4bf4c1128 --- /dev/null +++ b/sections/docker/graceful-shutdown.french.md @@ -0,0 +1,84 @@ +# Shutdown gracefully + +

+
+### One Paragraph Explainer
+
+In a Dockerized runtime like Kubernetes, containers are born and die frequently. This happens not only when errors are thrown but also for good reasons like relocating containers, replacing them with a newer version and more. It's achieved by sending a notice (SIGTERM signal) to the process with a 30 second grace period. This challenges the developer to ensure the app handles ongoing requests and cleans up resources in a timely fashion; otherwise thousands of sad users will not get a response. Implementation-wise, the shutdown code should wait until all ongoing requests are flushed out and then clean up resources. Easier said than done: practically it demands orchestrating several parts - tell the LoadBalancer that the app is not ready to serve more requests (via health check), wait for existing requests to be done, avoid handling new requests, clean up resources and finally log some useful information before dying. If Keep-Alive connections are being used, the clients must also be notified that a new connection should be established - a library like [Stoppable](https://github.com/hunterloftis/stoppable) can greatly help achieve this.
+
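The Dockerfile examples below cover signal delivery; on the application side, a minimal sketch of the sequence described above could look like this (plain Node.js `http` server; port, grace period and clean-up steps are illustrative, and keep-alive handling is simplified, which is exactly where a library like Stoppable helps):

```javascript
const http = require('http');

const server = http.createServer((req, res) => {
  res.end('Hello');
});
server.listen(8080);

let isShuttingDown = false;

function gracefulShutdown() {
  if (isShuttingDown) return;
  isShuttingDown = true;
  // At this point a readiness/health-check route should start reporting "not ready"
  // so the LoadBalancer stops routing new requests to this container

  // Stop accepting new connections and wait for in-flight requests to finish
  server.close(() => {
    // Clean up resources here (DB connections, queue consumers, etc.), then log and exit
    console.log('Graceful shutdown completed');
    process.exit(0);
  });

  // Safety net: force exit if requests are not flushed within the grace period
  setTimeout(() => process.exit(1), 25000).unref();
}

process.on('SIGTERM', gracefulShutdown);
process.on('SIGINT', gracefulShutdown);
```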

+ + +### Code Example – Placing Node.js as the root process allows passing signals to the code (see [bootstrap using node](/sections/docker/bootstrap-using-node.md)) + +
+ +Dockerfile + +``` + +FROM node:12-slim + +# Build logic comes here + +CMD ["node", "index.js"] +#This line above will make Node.js the root process (PID1) + +``` + +
+ +

+
+### Code Example – Using the Tini process manager to forward signals to Node
+
+ +Dockerfile + +``` + +FROM node:12-slim + +# Build logic comes here + +ENV TINI_VERSION v0.19.0 +ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini +RUN chmod +x /tini +ENTRYPOINT ["/tini", "--"] + +CMD ["node", "index.js"] +#Now Node will run a sub-process of TINI which acts as PID1 + +``` + +
+ +

+ +### Code Example Anti Pattern – Using npm scripts to initialize the process + +
+ +Dockerfile + +``` + +FROM node:12-slim + +# Build logic comes here + +CMD ["npm", "start"] +#Now Node will run a sub-process of npm and won't receive signals + +``` + +
+ +

+ +### Example - The shutdown phases + +From the blog, [Rising Stack](https://blog.risingstack.com/graceful-shutdown-node-js-kubernetes/) + +![alt text](/assets/images/Kubernetes-graceful-shutdown-flowchart.png "The shutdown phases") \ No newline at end of file diff --git a/sections/docker/image-tags.french.md b/sections/docker/image-tags.french.md new file mode 100644 index 000000000..f770243da --- /dev/null +++ b/sections/docker/image-tags.french.md @@ -0,0 +1,27 @@ +# Understand image tags vs digests and use the `:latest` tag with caution + +### One Paragraph Explainer + +If this is a production situation and security and stability are important then just "convenience" is likely not the best deciding factor. In addition the `:latest` tag is Docker's default tag. This means that a developer who forgets to add an explicit tag will accidentally push a new version of an image as `latest`, which might end in very unintended results if the `latest` tag is being relied upon as the latest production image. + +### Code example: + +```bash +$ docker build -t company/image_name:0.1 . +# :latest image is not updated +$ docker build -t company/image_name +# :latest image is updated +$ docker build -t company/image_name:0.2 . +# :latest image is not updated +$ docker build -t company/image_name:latest . +# :latest image is updated +``` + +### What Other Bloggers Say +From the blog by [Vladislav Supalov](https://vsupalov.com/docker-latest-tag/): +> Some people expect that :latest always points to the most-recently-pushed version of an image. That’s not true. + +From the [Docker success center](https://success.docker.com/article/images-tagging-vs-digests) +> + +
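As a sketch of the safer alternatives, an explicit tag or an immutable digest can be used instead of relying on `:latest` (the digest below is a placeholder, not a real value):

```bash
# Build and push with an explicit, meaningful tag instead of relying on :latest
docker build -t company/image_name:0.3 .
docker push company/image_name:0.3

# Deploy by immutable digest once the exact image has been verified
# (placeholder digest - substitute the real sha256 reported by your registry)
docker pull company/image_name@sha256:<digest-of-the-verified-image>
```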
\ No newline at end of file diff --git a/sections/docker/install-for-production.french.md b/sections/docker/install-for-production.french.md new file mode 100644 index 000000000..5a7572537 --- /dev/null +++ b/sections/docker/install-for-production.french.md @@ -0,0 +1,86 @@ +# Remove development dependencies + +

+
+### One Paragraph Explainer
+
+Dev dependencies greatly increase the container attack surface (i.e. potential security weakness) and the container size. As an example, some of the most impactful npm security breaches originated from devDependencies like [eslint-scope](https://eslint.org/blog/2018/07/postmortem-for-malicious-package-publishes) or affected dev packages like [event-stream that was used by nodemon](https://snyk.io/blog/a-post-mortem-of-the-malicious-event-stream-backdoor/). For those reasons the image that is finally shipped to production should be safe and minimal. Running npm install with the `--production` flag is a great start, however it is even safer to run `npm ci`, which ensures a fresh install and the existence of a lock file. Removing the local cache can shave off additional tens of MB. Often there is a need to test or debug within a container using devDependencies - in that case, [multi-stage builds](/sections/docker/multi_stage_builds.md) can help by installing different sets of dependencies and finally shipping only those needed for production.
+

+ +### Code Example – Installing for production + +
+
+Dockerfile
+
+```
+FROM node:12-slim AS build
+WORKDIR /usr/src/app
+COPY package.json package-lock.json ./
+RUN npm ci --production && npm cache clean --force
+
+# The rest comes here
+```
+
+ +

+ +### Code Example – Installing for production with multi-stage build + +
+ +Dockerfile + +``` +FROM node:14.8.0-alpine AS build +COPY --chown=node:node package.json package-lock.json ./ +# ✅ Safe install +RUN npm ci +COPY --chown=node:node src ./src +RUN npm run build + +# Run-time stage +FROM node:14.8.0-alpine +COPY --chown=node:node --from=build package.json package-lock.json ./ +COPY --chown=node:node --from=build node_modules ./node_modules +COPY --chown=node:node --from=build dist ./dist + +# ✅ Clean dev packages +RUN npm prune --production + +CMD [ "node", "dist/app.js" ] +``` + +
+ + +

+ +### Code Example Anti-Pattern – Installing all dependencies in a single stage dockerfile + +
+ +Dockerfile + +``` + +FROM node:12-slim AS build +WORKDIR /usr/src/app +COPY package.json package-lock.json ./ +# Two mistakes below: Installing dev dependencies, not deleting the cache after npm install +RUN npm install + +# The rest comes here +``` + +
+ +

+ +### Blog Quote: "npm ci is also more strict than a regular install" + +From [npm documentation](https://docs.npmjs.com/cli/ci.html) + +> This command is similar to npm-install, except it’s meant to be used in automated environments such as test platforms, continuous integration, and deployment – or any situation where you want to make sure you’re doing a clean install of your dependencies. It can be significantly faster than a regular npm install by skipping certain user-oriented features. It is also more strict than a regular install, which can help catch errors or inconsistencies caused by the incrementally-installed local environments of most npm users. \ No newline at end of file diff --git a/sections/docker/lint-dockerfile.french.md b/sections/docker/lint-dockerfile.french.md new file mode 100644 index 000000000..64ff2df01 --- /dev/null +++ b/sections/docker/lint-dockerfile.french.md @@ -0,0 +1,25 @@ +# Lint your Dockerfile + +### One Paragraph Explainer + +As our core application code is linted to conform to best practices and eliminate issues and bugs before it could become a problem, so too should our Dockerfiles. Linting the Dockerfile means increasing the chances of catching production issues on time with very light effort. For example, it can ensure that there aren’t any structural problems with the logic and instructions specified in your Dockerfiles like trying to copy from non-existing stage, copying from unknown online repository, running the app with power user (SUDO) and many more. The Open Source Dockerfile linter [Hadolint](https://github.com/hadolint/hadolint) can be used manually or as part of a CI process to lint your Dockerfile/s. Hadolint is a specialized Dockerfile linter that aims to embrace the [Docker best practices.](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/) + + +
+ +### Code example: Inspecting a Dockerfile using hadolint + +```bash +hadolint production.Dockerfile +hadolint --ignore DL3003 --ignore DL3006 # exclude specific rules +hadolint --trusted-registry my-company.com:500 # Warn when using untrusted FROM images +``` + +### What Other Bloggers Say + +From the blog by [Josh Reichardt](https://thepracticalsysadmin.com/lint-your-dockerfiles-with-hadolint/): +> If you haven’t already gotten in to the habit of linting your Dockerfiles you should. Code linting is a common practice in software development which helps find, identify and eliminate issues and bugs before they are ever able to become a problem. One of the main benefits of linting your code is that it helps identify and eliminate nasty little bugs before they ever have a chance to become a problem. + +From the blog by [Jamie Phillips](https://www.phillipsj.net/posts/hadolint-linting-your-dockerfile/) +> Linters are commonly used in development to help teams detect programmatic and stylistic errors. Hadolint is a linter created for Dockerfiles using Haskell. This tool validates against the best practices outlined by Docker and takes a neat approach to parse the Dockerfile that you should checkout. It supports all major platforms, and this tutorial will be leveraging the container to perform the linting on an example Dockerfile. +
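When hadolint is not installed locally, the same check can be run through its container image, which is a common way to wire it into a CI step (a sketch; the default image tag is assumed):

```bash
# Lint a Dockerfile by piping it into the hadolint container image
docker run --rm -i hadolint/hadolint < Dockerfile
```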
\ No newline at end of file diff --git a/sections/docker/memory-limit.french.md b/sections/docker/memory-limit.french.md new file mode 100644 index 000000000..81ae9eff3 --- /dev/null +++ b/sections/docker/memory-limit.french.md @@ -0,0 +1,70 @@ +# Set memory limits using both Docker and v8 + +

+ +### One Paragraph Explainer + +A memory limit tells the process/container the maximum allowed memory usage - a request or usage beyond this number will kill the process (OOMKill). Applying this is a great practice to ensure one citizen doesn't drink all the juice alone and leaves other components to starve. Memory limits also allow the runtime to place a container in the right instance - placing a container that consumes 500MB in an instance with 300MB memory available will lead to failures. Two different options allow configuring this limit: V8 flags (--max-old-space-size) and the Docker runtime, both are absolutely needed. Ensure to always configure the Docker runtime limits as it has a much wider perspective for making the right health decisions: Given this limit, the runtime knows how to scale and create more resources. It can also make a thoughtful decision on when to crash - if a container has a short burst in memory request and the hosting instance is capable of supporting this, Docker will let the container stay alive. Last, with Docker the Ops experts can set various production memory configurations that can be taken into account like memory swap. This by itself won't be enough - Without setting v8's --max-old-space-size, the JavaScript runtime won't push the garbage collection when getting close to the limits and will also crash when utilizing only 50-60% of the host environment. Consequently, set v8's limit to be 75-100% of Docker's memory limit. + +

+ +### Code Example – Memory limit with Docker + +
+Bash + +``` +docker run --memory 512m my-node-app +``` + +
+ +
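Because both limits are needed, a sketch combining them on plain Docker could look like this (values are illustrative; the v8 limit is kept at roughly 75% of the container limit):

```bash
# Container limit (512MB) plus a v8 old-space limit of ~75% of it, passed via NODE_OPTIONS
docker run --memory 512m -e NODE_OPTIONS="--max-old-space-size=384" my-node-app
```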

+ +### Code Example – Memory limit with Kubernetes and v8 + +
+Kubernetes deployment yaml
+
+```
+apiVersion: v1
+kind: Pod
+metadata:
+  name: my-node-app
+spec:
+  containers:
+    - name: my-node-app
+      image: my-node-app
+      resources:
+        requests:
+          memory: "400Mi"
+        limits:
+          memory: "500Mi"
+      command: ["node", "--max-old-space-size=350", "index.js"]
+```
+
+ +

+ +### Kubernetes documentation: "If you do not specify a memory limit" + +From [K8S documentation](https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/) + +> The Container has no upper bound on the amount of memory it uses. The Container could use all of the memory available on the Node where it is running which in turn could invoke the OOM Killer. Further, in case of an OOM Kill, a container with no resource limits will have a greater chance of being killed. + +

+ +### Docker documentation: "it throws an OOME and starts killing processes " + +From [Docker official docs](https://docs.docker.com/config/containers/resource_constraints/) + +> It is important not to allow a running container to consume too much of the host machine’s memory. On Linux hosts, if the kernel detects that there is not enough memory to perform important system functions, it throws an OOME, or Out Of Memory Exception, and starts killing processes to free up memory. + +

+ +### Node.js documentation: "V8 will spend more time on garbage collection" + +From [Node.js official docs](https://nodejs.org/api/cli.html#cli_max_old_space_size_size_in_megabytes) + +> Sets the max memory size of V8's old memory section. As memory consumption approaches the limit, V8 will spend more time on garbage collection in an effort to free unused memory. On a machine with 2GB of memory, consider setting this to 1536 (1.5GB) to leave some memory for other uses and avoid swapping. \ No newline at end of file diff --git a/sections/docker/multi_stage_builds.french.md b/sections/docker/multi_stage_builds.french.md new file mode 100644 index 000000000..1a9131676 --- /dev/null +++ b/sections/docker/multi_stage_builds.french.md @@ -0,0 +1,116 @@ +# Use multi-stage builds + +### One Paragraph Explainer + +Multi-stage builds allow to separate build- and runtime-specific environment details, such as available binaries, exposed environment variables, and even the underlying operating system. Splitting up your Dockerfiles into multiple stages will help to reduce final image and container size as you'll only ship what you really need to run your application. Sometimes you'll need to include tools that are only needed during the build phase, for example development dependencies such as the TypeScript CLI. You can install it during the build stage and only use the final output in the run stage. This also means your image will shrink as some dependencies won't get copied over. You might also have to expose environment variables during build that should not be present at runtime (see [avoid build time secrets](/sections/docker/avoid-build-time-secrets.md)), such as API Keys and secrets used for communicating with specific services. In the final stage, you can copy in pre-built resources such as your build folder, or production-only dependencies (which you can also fetch in a subsequent step). + +### Example + +Let's imagine the following directory structure + +``` +- Dockerfile +- src/ + - index.ts +- package.json +- yarn.lock +- .dockerignore +- docs/ + - README.md +``` + +Your [.dockerignore](/sections/docker/dockerignore.md) will already filter out files that won't be needed for building and running your application. + +``` +# Don't copy in existing node_modules, we'll fetch our own +node_modules + +# Docs are large, we don't need them in our Docker image +docs +``` + +#### Dockerfile with multiple stages + +Since Docker is often used in continuous integration environments it is recommended to use the `npm ci` command (instead of `npm install`). It is faster, stricter and reduces inconsistencies by using only the versions specified in the package-lock.json file. See [here](https://docs.npmjs.com/cli/ci.html#description) for more info. This example uses yarn as package manager for which the equivalent to `npm ci` is the `yarn install --frozen-lockfile` [command](https://classic.yarnpkg.com/en/docs/cli/install/). + +```dockerfile +FROM node:14.4.0 AS build + +COPY --chown=node:node . . +RUN yarn install --frozen-lockfile && yarn build + +FROM node:14.4.0 + +USER node +EXPOSE 8080 + +# Copy results from previous stage +COPY --chown=node:node --from=build /home/node/app/dist /home/node/app/package.json /home/node/app/yarn.lock ./ +RUN yarn install --frozen-lockfile --production + +CMD [ "node", "dist/app.js" ] +``` + +#### Dockerfile with multiple stages and different base images + +```dockerfile +FROM node:14.4.0 AS build + +COPY --chown=node:node . . 
+RUN yarn install --frozen-lockfile && yarn build + +# This will use a minimal base image for the runtime +FROM node:14.4.0-alpine + +USER node +EXPOSE 8080 + +# Copy results from previous stage +COPY --chown=node:node --from=build /home/node/app/dist /home/node/app/package.json /home/node/app/yarn.lock ./ +RUN yarn install --frozen-lockfile --production + +CMD [ "node", "dist/app.js" ] +``` + +#### Full Dockerfile with multiple stages and different base images + +Our Dockerfile will contain two phases: One for building the application using the fully-featured Node.js Docker image, +and a second phase for running the application, based on the minimal Alpine image. We'll only copy over the built files to our second stage, +and then install production dependencies. + +```dockerfile +# Start with fully-featured Node.js base image +FROM node:14.4.0 AS build + +USER node +WORKDIR /home/node/app + +# Copy dependency information and install all dependencies +COPY --chown=node:node package.json yarn.lock ./ + +RUN yarn install --frozen-lockfile + +# Copy source code (and all other relevant files) +COPY --chown=node:node src ./src + +# Build code +RUN yarn build + +# Run-time stage +FROM node:14.4.0-alpine + +# Set non-root user and expose port 8080 +USER node +EXPOSE 8080 + +WORKDIR /home/node/app + +# Copy dependency information and install production-only dependencies +COPY --chown=node:node package.json yarn.lock ./ +RUN yarn install --frozen-lockfile --production + +# Copy results from previous stage +COPY --chown=node:node --from=build /home/node/app/dist ./dist + +CMD [ "node", "dist/app.js" ] +``` \ No newline at end of file diff --git a/sections/docker/restart-and-replicate-processes.french.md b/sections/docker/restart-and-replicate-processes.french.md new file mode 100644 index 000000000..ac2a78f74 --- /dev/null +++ b/sections/docker/restart-and-replicate-processes.french.md @@ -0,0 +1,44 @@ +# Let the Docker orchestrator restart and replicate processes + +

+
+### One Paragraph Explainer
+
+Docker runtime orchestrators like Kubernetes are really good at making container health and placement decisions: they will take care to maximize the number of containers, balance them across zones, and take into account many cluster factors while making these decisions. It goes without saying that they identify failing processes (i.e., containers) and restart them in the right place. Despite that, some may be tempted to use custom code or tools to replicate the Node process for CPU utilization or to restart the process upon failure (e.g., Cluster module, PM2). These local tools don't have the perspective and the data that are available at the cluster level. For example, when an instance's resources can host 3 containers and there are 2 regions or zones, Kubernetes will take care to spread the containers across the zones. This way, in case of a zonal or regional failure, the app will stay alive. Conversely, when using local tools for restarting the process, the Docker orchestrator is not aware of the errors and cannot make thoughtful decisions like relocating the container to a new instance or zone.
+

+ +### Code Example – Invoking Node.js directly without intermediate tools + +
+ +Dockerfile + +``` + +FROM node:12-slim + +# The build logic comes here + +CMD ["node", "index.js"] +``` + +
+ +
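For contrast with in-process tools, here is a sketch of how replication and restarts are declared at the orchestrator level rather than inside the Node.js process (Kubernetes Deployment; names and replica count are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-node-app
spec:
  replicas: 3            # the orchestrator, not the Node process, handles replication
  selector:
    matchLabels:
      app: my-node-app
  template:
    metadata:
      labels:
        app: my-node-app
    spec:
      containers:
        - name: my-node-app
          image: my-node-app
          # failing containers are restarted and rescheduled by Kubernetes
```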

+ +### Code Example Anti Pattern – Using a process manager + +
+
+Dockerfile
+
+```
+FROM node:12-slim
+
+# The build logic comes here
+
+CMD ["pm2-runtime", "index.js"]
+```
+
\ No newline at end of file diff --git a/sections/docker/scan-images.french.md b/sections/docker/scan-images.french.md new file mode 100644 index 000000000..ac1732a99 --- /dev/null +++ b/sections/docker/scan-images.french.md @@ -0,0 +1,30 @@ +# Scan the entire image before production + +

+
+### One Paragraph Explainer
+
+Scanning the code for vulnerabilities is a valuable act but it doesn't cover all the potential threats. Why? Because vulnerabilities also exist at the OS level and the app might execute OS binaries like Shell, Tarball, OpenSSL. Also, vulnerable dependencies might be injected after the code scan (i.e. supply chain attacks) - hence scanning the final image just before production is in order. This idea resembles E2E tests - after testing the various pieces in isolation, it's valuable to finally check the assembled deliverable. There are 3 main scanner families: local/CI binaries with a cached vulnerabilities DB, scanners as a service in the cloud, and a niche of tools which scan during the docker build itself. The first group is the most popular and usually the fastest - tools like [Trivy](https://github.com/aquasecurity/trivy), [Anchore](https://github.com/anchore/anchore) and [Snyk](https://support.snyk.io/hc/en-us/articles/360003946897-Container-security-overview) are worth exploring. Most CI vendors provide a local plugin that facilitates the interaction with these scanners. It should be noted that these scanners cover a lot of ground and will therefore show findings in almost every scan - consider setting a high threshold bar to avoid getting overwhelmed
+

+
+### Code Example – Scanning with Trivy
+
+ +Bash + +``` +sudo apt-get install rpm +$ wget https://github.com/aquasecurity/trivy/releases/download/{TRIVY_VERSION}/trivy_{TRIVY_VERSION}_Linux-64bit.deb +$ sudo dpkg -i trivy_{TRIVY_VERSION}_Linux-64bit.deb +trivy image [YOUR_IMAGE_NAME] +``` + +
+ +
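Building on the advice to set a high threshold bar, Trivy can be told to fail the build only on the most severe findings (a sketch; the image name is illustrative):

```bash
# Fail the CI step (non-zero exit code) only for HIGH and CRITICAL vulnerabilities
trivy image --exit-code 1 --severity HIGH,CRITICAL my-node-app:latest
```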

+ +### Report Example – Docker scan results (By Anchore) + +![Report examples](/assets/images/anchore-report.png "Docker scan report") \ No newline at end of file diff --git a/sections/docker/smaller_base_images.french.md b/sections/docker/smaller_base_images.french.md new file mode 100644 index 000000000..64a96b490 --- /dev/null +++ b/sections/docker/smaller_base_images.french.md @@ -0,0 +1,13 @@ +# Prefer smaller Docker base images + +Large Docker images can lead to higher exposure to vulnerabilities and increased resource consumption. Often you don't need certain packages installed at runtime that are needed for building. +Pulling and storing larger images will become more expensive at scale, when dealing with larger images. By design minimal images may not come with common libraries needed for building native modules or packages useful for debugging (e.g. curl) pre-installed. +Using the Alpine Linux variants of images can lead to a reduced footprint in terms of resources used and the amount of attack vectors present in fully-featured systems. The Node.js v14.4.0 Docker image is ~345MB in size versus ~39MB for the Alpine version, which is almost 10x smaller. +A Slim variant based on Debian, which is only 38MB in size and contains the minimal packages needed to run Node.js, is also a great choice. + +### Blog Quote: "If you want to shrink your Docker images, have your services start faster and be more secure then try Alpine out." + +From [Nick Janetakis' blog](https://nickjanetakis.com/blog/the-3-biggest-wins-when-using-alpine-as-a-base-docker-image) + +> It’s no secret by now that Docker is heavily using Alpine as a base image for official Docker images. This movement started near the beginning of 2016. [...] + When pulling down new Docker images onto a fresh server, you can expect the initial pull to be quite a bit faster on Alpine. The slower your network is, the bigger the difference it will be. [...] Another perk of being much smaller in size is that the surface area to be attacked is much less. When there’s not a lot of packages and libraries on your system, there’s very little that can go wrong. \ No newline at end of file diff --git a/sections/docker/use-cache-for-shorter-build-time.french.md b/sections/docker/use-cache-for-shorter-build-time.french.md new file mode 100644 index 000000000..44b0de74f --- /dev/null +++ b/sections/docker/use-cache-for-shorter-build-time.french.md @@ -0,0 +1,115 @@ +# Leverage caching to reduce build times + +## One paragraph explainer + +Docker images are a combination of layers, each instruction in your Dockerfile creates a layer. The docker daemon can reuse those layers between builds if the instructions are identical or in the case of a `COPY` or `ADD` files used are identical. ⚠️ If the cache can't be used for a particular layer all the subsequent layers will be invalidated too. That's why order is important. It is crucial to layout your Dockerfile correctly to reduce the number of moving parts in your build; the less updated instructions should be at the top and the ones constantly changing (like app code) should be at the bottom. It's also important to think that instructions that trigger long operation should be close to the top to ensure they happen only when really necessary (unless it changes every time you build your docker image). Rebuilding a whole docker image from cache can be nearly instantaneous if done correctly. 
+ +![Docker layers](/assets/images/docker_layers_schema.png) + +* Image taken from [Digging into Docker layers](https://medium.com/@jessgreb01/digging-into-docker-layers-c22f948ed612) by jessgreb01* + +### Rules + +#### Avoid LABEL that change all the time + +If you have a label containing the build number at the top of your Dockerfile, the cache will be invalidated at every build + +```Dockerfile +#Beginning of the file +FROM node:10.22.0-alpine3.11 as builder + +# Don't do that here! +LABEL build_number="483" + +#... Rest of the Dockerfile +``` + +#### Have a good .dockerignore file + +[**See: On the importance of docker ignore**](/sections/docker/docker-ignore.md) + +The docker ignore avoids copying files that could bust our cache logic, like tests results reports, logs or temporary files. + +#### Install "system" packages first + +It is recommended to create a base docker image that has all the system packages you use. If you **really** need to install packages using `apt`,`yum`,`apk` or the likes, this should be one of the first instructions. You don't want to reinstall make,gcc or g++ every time you build your node app. +**Do not install package only for convenience, this is a production app.** + +#### First, only ADD your package.json and your lockfile + +```Dockerfile +COPY "package.json" "package-lock.json" "./" +RUN npm ci +``` + +The lockfile and the package.json change less often. Copying them first will keep the `npm install` step in the cache, this saves precious time. + +### Then copy your files and run build step (if needed) + +```Dockerfile +COPY . . +RUN npm run build +``` + +## Examples + +### Basic Example with node_modules needing OS dependencies +```Dockerfile +#Create node image version alias +FROM node:10.22.0-alpine3.11 as builder + +RUN apk add --no-cache \ + build-base \ + gcc \ + g++ \ + make + +USER node +WORKDIR /app +COPY "package.json" "package-lock.json" "./" +RUN npm ci --production +COPY . "./" + +FROM node as app +USER node +WORKDIR /app +COPY --from=builder /app/ "./" +RUN npm prune --production + +CMD ["node", "dist/server.js"] +``` + + +### Example with a build step (when using typescript for example) +```Dockerfile +#Create node image version alias +FROM node:10.22.0-alpine3.11 as builder + +RUN apk add --no-cache \ + build-base \ + gcc \ + g++ \ + make + +USER node +WORKDIR /app +COPY "package.json" "package-lock.json" "./" +RUN npm ci +COPY . . +RUN npm run build + +FROM node as app +USER node +WORKDIR /app +# Only copying the files that we need +COPY --from=builder /app/node_modules node_modules +COPY --from=builder /app/package.json . 
+COPY --from=builder /app/dist dist +RUN npm prune --production + +CMD ["node", "dist/server.js"] +``` + +## Useful links + +Docker docks: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#leverage-build-cache \ No newline at end of file diff --git a/sections/errorhandling/apmproducts.french.md b/sections/errorhandling/apmproducts.french.md index 39663e090..1d4f206fd 100644 --- a/sections/errorhandling/apmproducts.french.md +++ b/sections/errorhandling/apmproducts.french.md @@ -22,7 +22,7 @@ Les produits APM regroupent 3 pôles principaux : ### Exemple : UpTimeRobot.Com - Tableau de bord de surveillance de site Web -![alt text](https://github.com/i0natan/nodebestpractices/blob/master/assets/images/uptimerobot.jpg "Tableau de bord de surveillance de site Web") +![alt text](https://github.com/goldbergyoni/nodebestpractices/blob/master/assets/images/uptimerobot.jpg "Tableau de bord de surveillance de site Web") ### Exemple : AppDynamics.Com - Surveillance de bout en bout combinée à une instrumentation de code -![alt text](https://github.com/i0natan/nodebestpractices/blob/master/assets/images/app-dynamics-dashboard.png "Surveillance de bout en bout combinée à une instrumentation de code") +![alt text](https://github.com/goldbergyoni/nodebestpractices/blob/master/assets/images/app-dynamics-dashboard.png "Surveillance de bout en bout combinée à une instrumentation de code") diff --git a/sections/errorhandling/documentingusingswagger.french.md b/sections/errorhandling/documentingusingswagger.french.md index d28324a60..13f407951 100644 --- a/sections/errorhandling/documentingusingswagger.french.md +++ b/sections/errorhandling/documentingusingswagger.french.md @@ -49,4 +49,4 @@ Extrait du blog de Joyent classé en 1ere position pour les mots clés “Node.j ### Outil utile : créateur de documentation en ligne Swagger -![Schéma d'API Swagger](https://github.com/i0natan/nodebestpractices/blob/master/assets/images/swaggerDoc.png "Gestion des erreurs de l'API") +![Schéma d'API Swagger](https://github.com/goldbergyoni/nodebestpractices/blob/master/assets/images/swaggerDoc.png "Gestion des erreurs de l'API") diff --git a/sections/errorhandling/returningpromises.french.md b/sections/errorhandling/returningpromises.french.md new file mode 100644 index 000000000..3650d503a --- /dev/null +++ b/sections/errorhandling/returningpromises.french.md @@ -0,0 +1,285 @@ +# Returning promises + +
+
+### One Paragraph Explainer
+
+When an error occurs, whether from a synchronous or asynchronous flow, it's imperative to have a full stacktrace of the error flow. Surprisingly, if an async function returns a promise (e.g., calls another async function) without awaiting it, should an error occur then the caller function won't appear in the stacktrace. This will leave the person who diagnoses the error with partial information - all the more if the error cause lies within that caller function. There is a V8 feature called "zero-cost async stacktraces" that allows stacktraces not to be cut at the most recent `await`. But due to non-trivial implementation details, it will not work if the return value of a function (sync or async) is a promise. So, to avoid holes in stacktraces when returned promises would be rejected, we must always explicitly resolve promises with `await` before returning them from functions
+
+ +### Code example Anti-Pattern: Calling async function without awaiting + +
Javascript +

+ +```javascript +async function throwAsync(msg) { + await null // need to await at least something to be truly async (see note #2) + throw Error(msg) +} + +async function returnWithoutAwait () { + return throwAsync('missing returnWithoutAwait in the stacktrace') +} + +// 👎 will NOT have returnWithoutAwait in the stacktrace +returnWithoutAwait().catch(console.log) +``` + +would log + +``` +Error: missing returnWithoutAwait in the stacktrace + at throwAsync ([...]) +``` +

+
+ +### Code example: Calling and awaiting as appropriate + +
Javascript +

+ +```javascript +async function throwAsync(msg) { + await null // need to await at least something to be truly async (see note #2) + throw Error(msg) +} + +async function returnWithAwait() { + return await throwAsync('with all frames present') +} + +// 👍 will have returnWithAwait in the stacktrace +returnWithAwait().catch(console.log) +``` + +would log + +``` +Error: with all frames present + at throwAsync ([...]) + at async returnWithAwait ([...]) +``` + +

+
+ +
+ +### Code example Anti-Pattern: Returning a promise without tagging the function as async + +
Javascript +

+ +```javascript +async function throwAsync () { + await null // need to await at least something to be truly async (see note #2) + throw Error('missing syncFn in the stacktrace') +} + +function syncFn () { + return throwAsync() +} + +async function asyncFn () { + return await syncFn() +} + +// 👎 syncFn would be missing in the stacktrace because it returns a promise while been sync +asyncFn().catch(console.log) +``` + +would log + +``` +Error: missing syncFn in the stacktrace + at throwAsync ([...]) + at async asyncFn ([...]) +``` + +

+
+ +### Code example: Tagging the function that returns a promise as async + +
Javascript +

+ +```javascript +async function throwAsync () { + await null // need to await at least something to be truly async (see note #2) + throw Error('with all frames present') +} + +async function changedFromSyncToAsyncFn () { + return await throwAsync() +} + +async function asyncFn () { + return await changedFromSyncToAsyncFn() +} + +// 👍 now changedFromSyncToAsyncFn would present in the stacktrace +asyncFn().catch(console.log) +``` + +would log + +``` +Error: with all frames present + at throwAsync ([...]) + at changedFromSyncToAsyncFn ([...]) + at async asyncFn ([...]) +``` + +

+
+ +
+ +### Code Example Anti-pattern #3: direct usage of async callback where sync callback is expected + +
Javascript +

+ +```javascript +async function getUser (id) { + await null + if (!id) throw Error('stacktrace is missing the place where getUser has been called') + return {id} +} + +const userIds = [1, 2, 0, 3] + +// 👎 the stacktrace would include getUser function but would give no clue on where it has been called +Promise.all(userIds.map(getUser)).catch(console.log) +``` + +would log + +``` +Error: stacktrace is missing the place where getUser has been called + at getUser ([...]) + at async Promise.all (index 2) +``` + +*Side-note*: it may looks like `Promise.all (index 2)` can help understanding the place where `getUser` has been called, +but due to a [completely different bug in v8](https://bugs.chromium.org/p/v8/issues/detail?id=9023), `(index 2)` is +a line from internals of v8 + +

+
+ +### Code example: wrap async callback in a dummy async function before passing it as a sync callback + +
Javascript +

+ +*Note 1*: in case if you control the code of the function that would call the callback - just change that function to +async and add `await` before the callback call. Below I assume that you are not in charge of the code that is calling +the callback (or it's change is unacceptable for example because of backward compatibility) + +*Note 2*: quite often usage of async callback in places where sync one is expected would not work at all. This is not about +how to fix the code that is not working - it's about how to fix stacktrace in case if code is already working as +expected + +```javascript +async function getUser (id) { + await null + if (!id) throw Error('with all frames present') + return {id} +} + +const userIds = [1, 2, 0, 3] + +// 👍 now the line below is in the stacktrace +Promise.all(userIds.map(async id => await getUser(id))).catch(console.log) +``` + +would log + +``` +Error: with all frames present + at getUser ([...]) + at async ([...]) + at async Promise.all (index 2) +``` + +where thanks to explicit `await` in `map`, the end of the line `at async ([...])` would point to the exact place where +`getUser` has been called + +*Side-note*: if async function that wrap `getUser` would miss `await` before return (anti-pattern #1 + anti-pattern #3) +then only one frame would left in the stacktrace: + +```javascript +[...] + +// 👎 anti-pattern 1 + anti-pattern 3 - only one frame left in stacktrace +Promise.all(userIds.map(async id => getUser(id))).catch(console.log) +``` + +would log + +``` +Error: [...] + at getUser ([...]) +``` + +

+
+ +
+ +## Advanced explanation + +The mechanisms behind sync functions stacktraces and async functions stacktraces in v8 implementation are quite different: +sync stacktrace is based on **stack** provided by operating system Node.js is running on (just like in most programming +languages). When an async function is executing, the **stack** of operating system is popping it out as soon as the +function is getting to it's first `await`. So async stacktrace is a mix of operating system **stack** and a rejected +**promise resolution chain**. Zero-cost async stacktraces implementation is extending the **promise resolution chain** +only when the promise is getting `awaited` [¹](#1). Because only `async` functions may `await`, +sync function would always be missed in async stacktrace if any async operation has been performed after the function +has been called [²](#2) + +### The tradeoff + +Every `await` creates a new microtask in the event loop, so adding more `await`s to the code would +introduce some performance penalty. Nevertheless, the performance penalty introduced by network or +database is [tremendously larger](https://colin-scott.github.io/personal_website/research/interactive_latency.html) +so additional `await`s penalty is not something that should be considered during web servers or CLI +development unless for a very hot code per request or command. So removing `await`s in +`return await`s should be one of the last places to search for noticeable performance boost and +definitely should never be done up-front + + +### Why return await was considered as anti-pattern in the past + +There is a number of [excellent articles](https://jakearchibald.com/2017/await-vs-return-vs-return-await/) explained +why `return await` should never be used outside of `try` block and even an +[ESLint rule](https://eslint.org/docs/rules/no-return-await) that disallows it. The reason for that is the fact that +since async/await become available with transpilers in Node.js 0.10 (and got native support in Node.js 7.6) and until +"zero-cost async stacktraces" was introduced in Node.js 10 and unflagged in Node.js 12, `return await` was absolutely +equivalent to `return` for any code outside of `try` block. It may still be the same for some other ES engines. This +is why resolving promises before returning them is the best practice for Node.js and not for the EcmaScript in general + +### Notes: + +1. One another reason why async stacktrace has such tricky implementation is the limitation that stacktrace +must always be built synchronously, on the same tick of event loop [¹](#1) +2. Without `await` in `throwAsync` the code would be executed in the same phase of event loop. This is a +degenerated case when OS **stack** would not get empty and stacktrace be full even without explicitly +awaiting the function result. Usually usage of promises include some async operations and so parts of +the stacktrace would get lost +3. Zero-cost async stacktraces still would not work for complicated promise usages e.g. single promise +awaited many times in different places + +### References: + 1. [Blog post on zero-cost async stacktraces in v8](https://v8.dev/blog/fast-async) +
+ + 2. [Document on zero-cost async stacktraces with mentioned here implementation details]( + https://docs.google.com/document/d/13Sy_kBIJGP0XT34V1CV3nkWya4TwYx9L3Yv45LdGB6Q/edit + ) +
\ No newline at end of file diff --git a/sections/errorhandling/usematurelogger.french.md b/sections/errorhandling/usematurelogger.french.md index 4273ed51f..e9fb1fc87 100644 --- a/sections/errorhandling/usematurelogger.french.md +++ b/sections/errorhandling/usematurelogger.french.md @@ -2,49 +2,40 @@ ### Un paragraphe d'explication -Nous aimons tous console.log mais de toute évidence, un outil de journalisation réputé et persistant comme [Winston][winston] (très populaire) ou [Pino][pino] (le nouveau gamin de la ville qui se concentre sur la performance) est obligatoire pour les projets sérieux. Un ensemble de pratiques et d'outils aidera à comprendre les erreurs beaucoup plus rapidement - (1) journalisez fréquemment en utilisant différents niveaux (débogage, informations, erreur), (2) lors de la journalisation, fournissez des informations contextuelles sous forme d'objets JSON, voir l'exemple ci-dessous. (3) Regardez et filtrez les journaux à l'aide d'une API d'interrogation des journaux (intégrée dans la plupart des enregistreurs) ou d'un logiciel de visualisation des journaux. (4) Exposer et conserver le journal de bord pour l'équipe d'exploitation à l'aide d'outils d'intelligence opérationnelle comme Splunk. +Nous adorons console.log mais un logger réputé et persistant comme [Pino][pino] (une option plus récente axée sur les performances) est obligatoire pour les projets sérieux. +Des outils de journalisation très performants permettent d'identifier les erreurs et les problèmes éventuels. Les recommandations en matière de journalisation sont : + +1. Enregistrer fréquemment en utilisant différents niveaux (débogage, info, erreur). +2. Lors de la journalisation, fournir des informations contextuelles sous forme d'objets JSON. +3. Surveiller et filtrer les journaux à l'aide d'une API d'interrogation des journaux (intégrée à de nombreux enregistreurs) ou d'un logiciel de visualisation des journaux. +4. Exposer et conserver les déclarations de journal avec des outils de renseignement opérationnel tels que [Splunk][splunk]. -[winston]: https://www.npmjs.com/package/winston [pino]: https://www.npmjs.com/package/pino +[splunk]: https://www.splunk.com/ -### Exemple de code - l'outil de journalisation Winston en action +### Exemple de code ```javascript +const pino = require('pino'); + // votre objet de journalisation centralisé -const logger = new winston.Logger({ - level: 'info', - transports: [ - new (winston.transports.Console)() - ] -}); +const logger = pino(); // code personnalisé quelque part à l'aide de l'outil de journalisation -logger.log('info', 'Message du journal de test avec un paramètre %s', 'un paramètre', { anything: 'Ce sont des métadonnées' }); -``` - -### Exemple de code - Interrogation du répertoire du journal (recherche d'entrées) - -```javascript -const options = { - from: Date.now() - 24 * 60 * 60 * 1000, - until: new Date(), - limit: 10, - start: 0, - order: 'desc', - fields: ['message'] -}; - -// Recherchez les éléments enregistrés entre aujourd'hui et hier. -winston.query(options, (err, results) => { - // exécute la fonction de rappel avec results -}); +logger.info({ anything: 'This is metadata' }, 'Test Log Message with some parameter %s', 'some parameter'); ``` ### Citation de blog : « Exigences d'un outil de journalisation » - Extrait du blog de Strong Loop + Extrait du blog de Strong Loop ("Comparing Winston and Bunyan Node.js Logging" par Alex Corbatchev, 24 juin 2014) : > Permet d'identifier quelques exigences (pour un outil de journalisation) : -1. 
Chaque ligne du journal est horodatée. Celle-ci est assez explicite - vous devriez pouvoir dire quand chaque entrée du journal s'est produite. -2. Le format d'enregistrement doit être facilement assimilable par les humains ainsi que par les machines. -3. Permet plusieurs flux de destination configurables. Par exemple, vous pouvez écrire des journaux de trace dans un fichier, mais lorsqu'une erreur se produit, cela écrit dans le même fichier, puis dans le fichier d'erreur et envoi un e-mail en même temps… +> 1. Chaque ligne du journal est horodatée. Celle-ci est assez explicite - vous devriez pouvoir dire quand chaque entrée du journal s'est produite. +> 2. Le format d'enregistrement doit être facilement assimilable par les humains ainsi que par les machines. +> 3. Permet plusieurs flux de destination configurables. Par exemple, vous pouvez écrire des journaux de trace dans un fichier, mais lorsqu'une erreur se produit, cela écrit dans le même fichier, puis dans le fichier d'erreur et envoi un e-mail en même temps… + +### Où est Winston ? + +Pour plus d'informations sur les raisons pour lesquelles les favoris traditionnels (par exemple, Winston) peuvent ne pas être inclus dans la liste actuelle des meilleures pratiques recommandées, veuillez consulter [#684][#684]. + +[#684]: https://github.com/goldbergyoni/nodebestpractices/issues/684 diff --git a/sections/errorhandling/useonlythebuiltinerror.french.md b/sections/errorhandling/useonlythebuiltinerror.french.md index 2ffc5d90e..a8e2e7d56 100644 --- a/sections/errorhandling/useonlythebuiltinerror.french.md +++ b/sections/errorhandling/useonlythebuiltinerror.french.md @@ -2,7 +2,7 @@ ### Un paragraphe d'explication -La nature permissive de JavaScript ainsi que sa variété d'options de flux de code (par exemple, EventEmitter, fonction de rappel, promesses, etc.) peut faire varier considérablement la façon dont les développeurs génèrent des erreurs - certains utilisent des chaînes, d'autres définissent leurs propres types personnalisés. L'utilisation de l'objet Error intégré de Node.js aide à maintenir l'uniformité dans votre code et avec les bibliothèques tierces, il préserve également des informations importantes comme la StackTrace. Lors de la levée de l'exception, il est généralement recommandé de la remplir avec des propriétés contextuelles supplémentaires telles que le nom de l'erreur et le code d'erreur HTTP associé. Pour atteindre cette uniformité et ces pratiques, envisagez d'étendre l'objet Error avec des propriétés supplémentaires, voir l'exemple de code ci-dessous. +La nature permissive de JavaScript ainsi que sa variété d'options de flux de code (par exemple, EventEmitter, fonction de rappel, promesses, etc.) peut faire varier considérablement la façon dont les développeurs génèrent des erreurs - certains utilisent des chaînes, d'autres définissent leurs propres types personnalisés. L'utilisation de l'objet Error intégré de Node.js aide à maintenir l'uniformité dans votre code et avec les bibliothèques tierces, il préserve également des informations importantes comme la StackTrace. Lors de la levée de l'exception, il est généralement recommandé de la remplir avec des propriétés contextuelles supplémentaires telles que le nom de l'erreur et le code d'erreur HTTP associé. Pour atteindre cette uniformité et ces pratiques, envisagez d'étendre l'objet Error avec des propriétés supplémentaires, mais attention à ne pas en faire trop. 
Il est généralement judicieux d'étendre l'objet Error une seule fois avec un AppError pour toutes les erreurs au niveau de l'application, et de passer en argument toutes les données dont vous avez besoin pour différencier les différents types d'erreurs. Il n'est pas nécessaire d'étendre l'objet Error plusieurs fois (une fois pour chaque cas d'erreur, comme DbError, HttpError). Consulter l'exemple de code ci-dessous. ### Exemple de code - la bonne méthode @@ -96,7 +96,7 @@ if(user == null) Extrait du blog de Ben Nadel classé en 5ème position pour les mots clés “Node.js error object” ->… Personnellement, je ne vois pas l'intérêt d'avoir beaucoup de types d'objets d'erreur différents - JavaScript, en tant que langage, ne semble pas répondre à la capture d'erreurs basée sur le constructeur. En tant que tel, la différenciation sur une propriété d'objet semble beaucoup plus facile que la différenciation sur un type de constructeur… +>… Personnellement, je ne vois pas l'intérêt d'avoir beaucoup de types d'objets d'erreur différents [comparé à l'extension d'une seule fois de AppError] - JavaScript, en tant que langage, ne semble pas répondre à la capture d'erreurs basée sur le constructeur. En tant que tel, la différenciation sur une propriété d'objet semble beaucoup plus facile que la différenciation sur un type de constructeur… ### Citation de blog : « Une chaîne n'est pas une erreur » @@ -108,7 +108,7 @@ Extrait du blog de devthought.com classé en 6ème position pour les mots clés Extrait du blog de machadogj -> …Un problème que j'ai avec la classe Error est qu'il n'est pas si simple à étendre. Bien sûr, vous pouvez hériter de la classe et créer vos propres classes d'erreur comme HttpError, DbError, etc. Cependant, cela prend du temps et n'ajoute pas trop de valeur à moins que vous ne fassiez quelque chose avec des types. Parfois, vous voulez simplement ajouter un message et conserver l'erreur interne, et parfois vous voudrez peut-être étendre l'erreur avec des paramètres, etc.… +> …Un problème que j'ai avec la classe Error est qu'il n'est pas si simple à étendre. Bien sûr, vous pouvez hériter de la classe et créer vos propres classes d'erreur comme HttpError, DbError, etc. Cependant, cela prend du temps et n'ajoute pas trop de valeur [que de l'étendre une seule fois pour une AppError] à moins que vous ne fassiez quelque chose avec des types. Parfois, vous voulez simplement ajouter un message et conserver l'erreur interne, et parfois vous voudrez peut-être étendre l'erreur avec des paramètres, etc.… ### Citation de blog : « Toutes les erreurs JavaScript et Système levées par Node.js héritent de Error » diff --git a/sections/performance/block-loop.french.md b/sections/performance/block-loop.french.md index 531de7fef..16d7e192b 100644 --- a/sections/performance/block-loop.french.md +++ b/sections/performance/block-loop.french.md @@ -28,7 +28,7 @@ while loop. 
### The results ``` -─────────┬────────┬────────┬────────┬────────┬───────────┬──────────┬───────────┐ +┌─────────┬────────┬────────┬────────┬────────┬───────────┬──────────┬───────────┐ │ Stat │ 2.5% │ 50% │ 97.5% │ 99% │ Avg │ Stdev │ Max │ ├─────────┼────────┼────────┼────────┼────────┼───────────┼──────────┼───────────┤ │ Latency │ 270 ms │ 300 ms │ 328 ms │ 331 ms │ 300.56 ms │ 38.55 ms │ 577.05 ms │ diff --git a/sections/performance/nativeoverutil.french.md b/sections/performance/nativeoverutil.french.md index 09059cece..e4c9f2a2c 100644 --- a/sections/performance/nativeoverutil.french.md +++ b/sections/performance/nativeoverutil.french.md @@ -5,7 +5,7 @@ ### One Paragraph Explainer -Sometimes, using native methods is better than requiring `lodash` or `underscore` because it will not lead in a performance boost and use more space than necessary. +Sometimes, using native methods is better than requiring _lodash_ or _underscore_ because those libraries can lead to performance loss or take up more space than needed. The performance using native methods result in an [overall ~50% gain](https://github.com/Berkmann18/NativeVsUtils/blob/master/analysis.xlsx) which includes the following methods: `Array.concat`, `Array.fill`, `Array.filter`, `Array.map`, `(Array|String).indexOf`, `Object.find`, ... diff --git a/sections/production/assigntransactionid.french.md b/sections/production/assigntransactionid.french.md index b398ed291..f37fd4b55 100644 --- a/sections/production/assigntransactionid.french.md +++ b/sections/production/assigntransactionid.french.md @@ -4,14 +4,14 @@ ### One Paragraph Explainer -A typical log is a warehouse of entries from all components and requests. Upon detection of some suspicious line or error, it becomes hairy to match other lines that belong to the same specific flow (e.g. the user “John” tried to buy something). This becomes even more critical and challenging in a microservice environment when a request/transaction might span across multiple computers. Address this by assigning a unique transaction identifier value to all the entries from the same request so when detecting one line one can copy the id and search for every line that has similar transaction Id. However, achieving this In Node is not straightforward as a single thread is used to serve all requests –consider using a library that that can group data on the request level – see code example on the next slide. When calling other microservice, pass the transaction Id using an HTTP header like “x-transaction-id” to keep the same context. +A typical log is a warehouse of entries from all components and requests. Upon detection of some suspicious line or error, it becomes hairy to match other lines that belong to the same specific flow (e.g. the user “John” tried to buy something). This becomes even more critical and challenging in a microservice environment when a request/transaction might span across multiple computers. Address this by assigning a unique transaction identifier value to all the entries from the same request so when detecting one line one can copy the id and search for every line that has similar transaction id. However, achieving this In Node is not straightforward as a single thread is used to serve all requests –consider using a library that that can group data on the request level – see code example on the next slide. When calling other microservices, pass the transaction id using an HTTP header like “x-transaction-id” to keep the same context.

### Code example: typical Express configuration ```javascript -// when receiving a new request, start a new isolated context and set a transaction Id. The following example is using the npm library continuation-local-storage to isolate requests +// when receiving a new request, start a new isolated context and set a transaction id. The following example is using the npm library continuation-local-storage to isolate requests const { createNamespace } = require('continuation-local-storage'); const session = createNamespace('my session'); @@ -19,18 +19,18 @@ const session = createNamespace('my session'); router.get('/:id', (req, res, next) => { session.set('transactionId', 'some unique GUID'); someService.getById(req.params.id); - logger.info('Starting now to get something by Id'); + logger.info('Starting now to get something by id'); }); // Now any other service or components can have access to the contextual, per-request, data class someService { getById(id) { - logger.info('Starting to get something by Id'); + logger.info('Starting to get something by id'); // other logic comes here } } -// The logger can now append the transaction-id to each entry so that entries from the same request will have the same value +// The logger can now append the transaction id to each entry so that entries from the same request will have the same value class logger { info (message) { console.log(`${message} ${session.get('transactionId')}`); diff --git a/sections/production/installpackageswithnpmci.french.md b/sections/production/installpackageswithnpmci.french.md new file mode 100644 index 000000000..42e6a85a4 --- /dev/null +++ b/sections/production/installpackageswithnpmci.french.md @@ -0,0 +1,30 @@ +# Install packages with npm ci in production + +

+
+### One Paragraph Explainer
+
+You locked your dependencies following [**Lock dependencies**](/sections/production/lockdependencies.md) but you now need to make sure those exact package versions are used in production.
+
+Using `npm ci` to install packages will do exactly that and more.
+* It will fail if your `package.json` and your `package-lock.json` do not match (they should) or if you don't have a lock file
+* If a `node_modules` folder is present it will automatically remove it before installing
+* It is faster! Nearly twice as fast according to [the release blog post](https://blog.npmjs.org/post/171556855892/introducing-npm-ci-for-faster-more-reliable)
+
+### When can this be useful?
+You are guaranteed that your CI environment or QA will test your software with exactly the same package versions as the ones you will later ship to production.
+Also, if someone manually changes package.json by editing it directly rather than through a CLI command, a gap between package.json and package-lock.json is created and an error will be thrown.
+
+### What npm says
+
+From the [npm ci documentation](https://docs.npmjs.com/cli/ci.html)
+> This command is similar to npm-install, except it’s meant to be used in automated environments such as test platforms, continuous integration, and deployment – or any situation where you want to make sure you’re doing a clean install of your dependencies.
+
+[Blog post announcing the release of the `ci` command](https://blog.npmjs.org/post/171556855892/introducing-npm-ci-for-faster-more-reliable)
+> The command offers massive improvements to both the performance and reliability of builds for continuous integration / continuous deployment processes, providing a consistent and fast experience for developers using CI/CD in their workflow.
+
+[npmjs: dependencies and devDependencies](https://docs.npmjs.com/specifying-dependencies-and-devdependencies-in-a-package-json-file)
+> "dependencies": Packages required by your application in production.
+> "devDependencies": Packages that are only needed for local development and testing.
+

\ No newline at end of file diff --git a/sections/production/lockdependencies.french.md b/sections/production/lockdependencies.french.md index ac21668c9..6702b633d 100644 --- a/sections/production/lockdependencies.french.md +++ b/sections/production/lockdependencies.french.md @@ -39,7 +39,7 @@ save-exact:true

-### Exemple de code : fichier de verrouillage des dépendances npm 5 - package.json +### Exemple de code : fichier de verrouillage des dépendances npm 5 - package-lock.json ```json { diff --git a/sections/production/smartlogging.french.md b/sections/production/smartlogging.french.md index dd73f6419..f40d569b2 100644 --- a/sections/production/smartlogging.french.md +++ b/sections/production/smartlogging.french.md @@ -8,7 +8,7 @@ Puisque vous produisez de toute façon des relevés de log et que vous avez mani **1. enregistrement intelligent** – au minimum, vous devez utiliser une bibliothèque de journalisation réputée comme [Winston](https://github.com/winstonjs/winston), [Bunyan](https://github.com/trentm/node-bunyan) et écrire des informations significatives à chaque début et fin de transaction. Pensez également à formater les relevés du journal en JSON et à fournir toutes les propriétés contextuelles (par exemple, l'ID utilisateur, le type d'opération, etc.) afin que l'équipe d'exploitation puisse agir sur ces champs. Incluez également un ID de transaction unique sur chaque ligne de journal, pour plus d'informations, reportez-vous à l'un des points suivants « Attribuez un ID de transaction à chaque relevé du journal ». Un dernier point à considérer, c'est également d'inclure un agent qui enregistre les ressources système de la mémoire et du processeur comme Elastic Beat. -**2. agrégation intelligente** – une fois que vous avez des informations complètes sur le système de fichiers de vos serveurs, il est temps de les transférer périodiquement vers un système qui agrège, installe et visualise ces données. Elastic stack, par exemple, est un choix populaire et gratuit qui offre tous les composants pour agréger et visualiser les données. De nombreux produits commerciaux offrent des fonctionnalités similaires, mais ils réduisent considérablement le temps d'installation et ne nécessitent pas d'hébergement. +**2. agrégation intelligente** – une fois que vous disposez d'informations complètes sur le système de fichiers de votre serveur, il est temps de les pousser périodiquement vers un système qui agrège, facilite et visualise ces données. Elastic stack, par exemple, est un choix populaire et gratuit qui offre tous les composants pour agréger et visualiser les données. De nombreux produits commerciaux offrent des fonctionnalités similaires, mais ils réduisent considérablement le temps d'installation et ne nécessitent pas d'hébergement. **3. visualisation intelligente** – maintenant que l'information est agrégée et consultable, on ne peut être que satisfait de la puissance d'une recherche facile dans les logs mais cela peut aller beaucoup plus loin sans codage ni effort. Nous pouvons maintenant afficher d'importantes mesures opérationnelles comme le taux d'erreur, le CPU moyen au cours de la journée, le nombre de nouveaux utilisateurs qui se sont inscrits au cours de la dernière heure et toute autre mesure qui aide à gérer et à améliorer notre application. diff --git a/sections/projectstructre/breakintcomponents.french.md b/sections/projectstructre/breakintcomponents.french.md index b16644e58..77e6ab0cf 100644 --- a/sections/projectstructre/breakintcomponents.french.md +++ b/sections/projectstructre/breakintcomponents.french.md @@ -28,10 +28,10 @@ Alors, est-ce que est l'architecture de votre application parle d'elle-même ? 
### Bon : Organisez votre solution avec des composants autonomes
-![alt text](https://github.com/i0natan/nodebestpractices/blob/master/assets/images/structurebycomponents.PNG "Solution d'organisation par composants")
+![alt text](https://github.com/goldbergyoni/nodebestpractices/blob/master/assets/images/structurebycomponents.PNG "Solution d'organisation par composants")

### Mauvais : Regroupez vos fichiers selon leur rôle technique -![alt text](https://github.com/i0natan/nodebestpractices/blob/master/assets/images/structurebyroles.PNG "Solution d'organisation par rôles techniques") +![alt text](https://github.com/goldbergyoni/nodebestpractices/blob/master/assets/images/structurebyroles.PNG "Solution d'organisation par rôles techniques") diff --git a/sections/projectstructre/configguide.french.md b/sections/projectstructre/configguide.french.md index 22daf6b09..8cf42d94d 100644 --- a/sections/projectstructre/configguide.french.md +++ b/sections/projectstructre/configguide.french.md @@ -16,7 +16,7 @@ Lorsqu'il s'agit de données de configuration, beaucoup de choses peuvent tout s 5. l'application doit échouer le plus rapidement possible et fournir un retour immédiat si les variables d'environnement requises ne sont pas présentes au démarrage, ceci peut être réalisé en utilisant [convict](https://www.npmjs.com/package/convict) pour valider la configuration. -Certaines bibliothèques de configuration peuvent fournir gratuitement la plupart de ces fonctionnalités, jetez un œil aux bibliothèques npm comme [rc](https://www.npmjs.com/package/rc), [nconf](https://www.npmjs.com/package/nconf) et [config](https://www.npmjs.com/package/config) qui traitent un bon nombre de ces exigences. +Certaines bibliothèques de configuration peuvent fournir gratuitement la plupart de ces fonctionnalités, jetez un œil aux bibliothèques npm comme [rc](https://www.npmjs.com/package/rc), [nconf](https://www.npmjs.com/package/nconf), [config](https://www.npmjs.com/package/config) et [convict](https://www.npmjs.com/package/convict) qui traitent un bon nombre de ces exigences.
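
À titre d'illustration, voici une esquisse minimale utilisant [convict](https://www.npmjs.com/package/convict) : la configuration est hiérarchique, peut être lue depuis les variables d'environnement et est validée au démarrage afin d'échouer au plus tôt (les clés `env` et `port`, ainsi que les variables `NODE_ENV` et `PORT`, ne sont ici que des exemples) :

```javascript
const convict = require('convict');

// Schéma hiérarchique : chaque entrée documente sa valeur par défaut et sa variable d'environnement
const config = convict({
  env: {
    doc: "L'environnement d'exécution de l'application",
    format: ['production', 'development', 'test'],
    default: 'development',
    env: 'NODE_ENV'
  },
  port: {
    doc: "Le port d'écoute du serveur HTTP",
    format: 'port',
    default: 3000,
    env: 'PORT'
  }
});

// Échoue immédiatement au démarrage si une valeur est absente ou invalide
config.validate({ allowed: 'strict' });

module.exports = config;
```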

diff --git a/sections/projectstructre/createlayers.french.md b/sections/projectstructre/createlayers.french.md index 9f3c6365d..f514e4405 100644 --- a/sections/projectstructre/createlayers.french.md +++ b/sections/projectstructre/createlayers.french.md @@ -2,12 +2,12 @@

- ### Séparez le code des composants en strates : web, services et DAL (couche d'accès aux données) + ### Séparez le code des composants en strates : web, services et couche d'accès aux données (DAL) -![alt text](https://github.com/i0natan/nodebestpractices/blob/master/assets/images/structurebycomponents.PNG "Séparez le code des composants en strates") +![alt text](https://github.com/goldbergyoni/nodebestpractices/blob/master/assets/images/structurebycomponents.PNG "Séparez le code des composants en strates")

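À titre d'illustration, voici une esquisse volontairement simplifiée (les noms `userDAL`, `userService` et la route `/user/:id` sont hypothétiques) : la couche web est la seule à manipuler `req` et `res`, la couche service ne contient que la logique métier et la DAL est la seule à accéder aux données.

```javascript
const express = require('express');
const app = express();

// Couche d'accès aux données (DAL) : seule strate qui parle au stockage
// (ici un simple tableau en mémoire pour garder l'exemple exécutable)
const users = [{ id: '42', name: 'Jane' }];
const userDAL = {
  async findById(id) {
    return users.find((user) => user.id === id) || null;
  }
};

// Couche service : logique métier, aucun objet req/res ici
const userService = {
  async getUser(id) {
    const user = await userDAL.findById(id);
    if (!user) {
      throw new Error(`L'utilisateur ${id} est introuvable`);
    }
    return user;
  }
};

// Couche web (Express) : seule strate qui connaît req et res
app.get('/user/:id', async (req, res, next) => {
  try {
    const user = await userService.getUser(req.params.id);
    res.json(user);
  } catch (error) {
    next(error);
  }
});

app.listen(3000);
```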
### Explication en 1 min : l'inconvénient de mélanger les strates -![alt text](https://github.com/i0natan/nodebestpractices/blob/master/assets/images/keepexpressinweb.gif "L'inconvénient de mélanger les strates") +![alt text](https://github.com/goldbergyoni/nodebestpractices/blob/master/assets/images/keepexpressinweb.gif "L'inconvénient de mélanger les strates") diff --git a/sections/projectstructre/separateexpress.french.md b/sections/projectstructre/separateexpress.french.md index 462e35adb..dea000f63 100644 --- a/sections/projectstructre/separateexpress.french.md +++ b/sections/projectstructre/separateexpress.french.md @@ -57,6 +57,7 @@ const server = http.createServer(app); Javascript ```javascript +const request = require('supertest'); const app = express(); app.get('/user', (req, res) => { @@ -79,6 +80,7 @@ request(app) Typescript ```typescript +import * as request from "supertest"; const app = express(); app.get('/user', (req: Request, res: Response) => { diff --git a/sections/projectstructre/thincomponents.french.md b/sections/projectstructre/thincomponents.french.md index 7bf35665c..eb3abde05 100644 --- a/sections/projectstructre/thincomponents.french.md +++ b/sections/projectstructre/thincomponents.french.md @@ -18,10 +18,10 @@ Pour les applications de taille moyenne et supérieure, les monolithes sont vrai ### Bon : Organisez votre solution avec des composants autonomes -![alt text](https://github.com/i0natan/nodebestpractices/blob/master/assets/images/structurebycomponents.PNG "Solution d'organisation par composants") +![alt text](https://github.com/goldbergyoni/nodebestpractices/blob/master/assets/images/structurebycomponents.PNG "Solution d'organisation par composants")

### Mauvais : Regroupez vos fichiers selon leur rôle technique -![alt text](https://github.com/i0natan/nodebestpractices/blob/master/assets/images/structurebyroles.PNG "Solution d'organisation par rôles techniques") +![alt text](https://github.com/goldbergyoni/nodebestpractices/blob/master/assets/images/structurebyroles.PNG "Solution d'organisation par rôles techniques") diff --git a/sections/projectstructre/wraputilities.french.md b/sections/projectstructre/wraputilities.french.md index 2eac0b452..661d1b84a 100644 --- a/sections/projectstructre/wraputilities.french.md +++ b/sections/projectstructre/wraputilities.french.md @@ -10,4 +10,4 @@ Une fois que vous commencez à vous développer et que vous avez différents com ### Partage de vos propres utilitaires communs entre les environnements et les composants -![alt text](https://github.com/i0natan/nodebestpractices/blob/master/assets/images/Privatenpm.png "Solution d'organisation par composants") +![alt text](https://github.com/goldbergyoni/nodebestpractices/blob/master/assets/images/Privatenpm.png "Solution d'organisation par composants") diff --git a/sections/security/bcryptpasswords.french.md b/sections/security/bcryptpasswords.french.md deleted file mode 100644 index ef8ebd7c5..000000000 --- a/sections/security/bcryptpasswords.french.md +++ /dev/null @@ -1,32 +0,0 @@ -# Évitez d'utiliser la librairie Crypto de Node.js pour les mots de passe, utilisez Bcrypt - -### Un paragraphe d'explication - -Quand on stocke les mots de passe des utilisateurs, l'utilisation d'un algorithme de hachage adaptatif comme bcrypt, offert par le [module npm bcrypt](https://www.npmjs.com/package/bcrypt), est recommandée plutôt que d'utiliser le module crypto natif de Node.js. `Math.random()` ne devrait aussi jamais être utilisée dans la génération de mot de passe ou de token du fait de sa prévisibilité. - -Le module `bcrypt` ou similaire devrait être utilisé par opposition à l'implémentation JavaScript, quand on utilise `bcrypt`, un nombre de « tours » peut être spécifié afin de fournir un hash sécurisé. Cela défini le facteur travail ou le nombre de « tours » pour lesquels les données sont traitées, et plus de tours de hachage mène à un hash plus sécurisé (bien que cela coûte plus de temps du CPU). L'introduction des tours de hachage signifie que le facteur de [force brute](https://fr.wikipedia.org/wiki/Attaque_par_force_brute) est significativement réduit, car les craqueurs de mot de passe sont réduits, ce qui augmente le temps requis pour une tentative. - -### Exemple de code - -```javascript -try { -// Génère de manière asynchrone un mot de passe sécurisé en utilisant 10 tours de hachage - const hash = await bcrypt.hash('myPassword', 10); - // Conserve le hash sécurité au niveau de l'utilisateur - - // Compare le mot de passe entré avec le hash enregistré - const match = await bcrypt.compare('somePassword', hash); - if (match) { - // Les mots de passe correspondent - } else { - // Les mots de passe ne correspondent pas - } -} catch { - logger.error('could not hash password.') -} -``` - -### Ce que disent les autres blogueurs - -Extrait du blog de [Max McCarty](https://dzone.com/articles/nodejs-and-password-storage-with-bcrypt): -> ... il ne faut pas seulement utiliser le bon algorithme de hachage. J'ai beaucoup parlé de comment le bon outil inclus aussi l'ingrédient nécessaire de « temps » comme une partie de l'algorithme de hachage de mot de passe et de ce que cela signifie pour l'attaquant qui essaie de craquer les mots de passe par force brute. 
\ No newline at end of file diff --git a/sections/security/commonsecuritybestpractices.french.md b/sections/security/commonsecuritybestpractices.french.md index 1f44271a2..167f54ce4 100644 --- a/sections/security/commonsecuritybestpractices.french.md +++ b/sections/security/commonsecuritybestpractices.french.md @@ -24,7 +24,7 @@ The common security guidelines section contains best practices that are standard ## ![✔] Generating random strings using Node.js -**TL;DR:** Using a custom-built function generating pseudo-random strings for tokens and other security-sensitive use cases might actually not be as random as you think, rendering your application vulnerable to cryptographic attacks. When you have to generate secure random strings, use the [`crypto.RandomBytes(size, [callback])`](https://nodejs.org/dist/latest-v9.x/docs/api/crypto.html#crypto_crypto_randombytes_size_callback) function using available entropy provided by the system. +**TL;DR:** Using a custom-built function generating pseudo-random strings for tokens and other security-sensitive use cases might actually not be as random as you think, rendering your application vulnerable to cryptographic attacks. When you have to generate secure random strings, use the [`crypto.randomBytes(size, [callback])`](https://nodejs.org/api/crypto.html#crypto_crypto_randombytes_size_callback) function using available entropy provided by the system. **Otherwise:** When generating pseudo-random strings without cryptographically secure methods, attackers might predict and reproduce the generated results, rendering your application insecure @@ -75,7 +75,7 @@ Going on, below we've listed some important bits of advice from the OWASP projec ## ![✔] OWASP A9: Using Components With Known Security Vulneraibilities -- Scan docker images for known vulnerabilities (using Docker's and other vendors offer scanning services) +- Scan docker images for known vulnerabilities (using Docker's and other vendors' scanning services) - Enable automatic instance (machine) patching and upgrades to avoid running old OS versions that lack security patches - Provide the user with both 'id', 'access' and 'refresh' token so the access token is short-lived and renewed with the refresh token - Log and audit each API call to cloud and management services (e.g who deleted the S3 bucket?) using services like AWS CloudTrail @@ -95,5 +95,37 @@ Going on, below we've listed some important bits of advice from the OWASP projec - Applying context-sensitive encoding when modifying the browser document on the client-side acts against DOM XSS - Enabling a Content-Security Policy (CSP) as a defense-in-depth mitigating control against XSS +## ![✔] Protect Personally Identifyable Information (PII Data) + +- Personally identifiable information (PII) is any data that can be used to identify a specific individual +- Protect Personally Identifyable Information in the Applications by encrypting them +- Follow the data privacy laws of the land + + +- Reference laws: + +- European Union: GDPR - https://ec.europa.eu/info/law/law-topic/data-protection_en +- India: https://meity.gov.in/writereaddata/files/Personal_Data_Protection_Bill,2018.pdf +- Singapore: https://www.pdpc.gov.sg/Legislation-and-Guidelines/Personal-Data-Protection-Act-Overview + +## ![✔] Have a security.txt File [PRODUCTION] + +**TL;DR:** Have a text file called ```security.txt``` under ```/.well-known``` directory (/.well-known/security.txt) or in the root directory (/security.txt) of your website or your web application in production. 
The ```security.txt``` file should contain the details security researchers need in order to report vulnerabilities, along with the contact details of the responsible person/group (email address and/or phone numbers) to whom the reports should be sent.
+
+**Otherwise:** You may not be notified about vulnerabilities, and you will miss the opportunity to act on them in time.
+
+🔗 [**Read More: security.txt**](https://securitytxt.org/)
+
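
A minimal sketch of serving such a file from an Express app (the contact address and expiry date below are placeholders, and in practice you would more likely serve a static file maintained by your security contact):

```javascript
const express = require('express');
const app = express();

// Expose the security policy at the standard well-known location
app.get('/.well-known/security.txt', (req, res) => {
  res.type('text/plain').send(
    [
      'Contact: mailto:security@example.com', // placeholder address
      'Expires: 2030-01-01T00:00:00.000Z',    // placeholder expiry date
      'Preferred-Languages: en, fr'
    ].join('\n')
  );
});

app.listen(3000);
```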


+ +## ![✔] Have a SECURITY.md File [OPEN SOURCE] + +**TL;DR:** To give people instructions for responsibly reporting security vulnerabilities in your project, you can add a SECURITY.md file to your repository's root, docs, or .github folder. SECURITY.md file should contain details using which security researchers can report vulnerabilities and also the contact details of the responsible person/group (email id and/or phone numbers) to whom the reports have to be sent. + +**Otherwise:** You may not be notified about the vulnerabilities. You will miss the opportunity to act on the vulnerabilities in time. + +🔗 [**Read More: SECURITY.md**](https://help.github.com/en/github/managing-security-vulnerabilities/adding-a-security-policy-to-your-repository) + +


+


diff --git a/sections/security/dependencysecurity.french.md b/sections/security/dependencysecurity.french.md index 38abb514b..33d52ad93 100644 --- a/sections/security/dependencysecurity.french.md +++ b/sections/security/dependencysecurity.french.md @@ -37,7 +37,7 @@ An example of the output of the Synk GitHub integration automatically created pu ### Greenkeeper -Greenkeeper is a service which offers real-time dependency updates, which keeps an application more secure by always using the most update to date and patched dependency versions. +Greenkeeper is a service which offers real-time dependency updates, which keeps an application more secure by always using the most up to date and patched dependency versions. Greenkeeper watches the npm dependencies specified in a repository's `package.json` file, and automatically creates a working branch with each dependency update. The repository CI suite is then run to reveal any breaking changes for the updated dependency version in the application. If CI fails due to the dependency update, a clear and concise issue is created in the repository to be auctioned, outlining the current and updated package versions, along with information and commit history of the updated version. diff --git a/sections/security/expirejwt.french.md b/sections/security/expirejwt.french.md index 4948f4829..76ac96887 100644 --- a/sections/security/expirejwt.french.md +++ b/sections/security/expirejwt.french.md @@ -7,7 +7,7 @@ Due to this, when using JWT authentication, an application should manage a black ### `express-jwt-blacklist` example -An example of running `express-jwt-blacklist` on a Node.js project using the `express-jwt`. Note that it is important to not use the default store settings(in-memory) cache of `express-jwt-blacklist`, but to use an external store such as Redis to revoke tokens across many Node.js processes. +An example of running `express-jwt-blacklist` on a Node.js project using the `express-jwt`. Note that it is important to not use the default store settings (in-memory) cache of `express-jwt-blacklist`, but to use an external store such as Redis to revoke tokens across many Node.js processes. ```javascript const jwt = require('express-jwt'); @@ -26,12 +26,12 @@ blacklist.configure({ } } }); - + app.use(jwt({ secret: 'my-secret', isRevoked: blacklist.isRevoked })); - + app.get('/logout', (req, res) => { blacklist.revoke(req.user) res.sendStatus(200); diff --git a/sections/security/regex.french.md b/sections/security/regex.french.md index 1a0bac846..e5198bd70 100644 --- a/sections/security/regex.french.md +++ b/sections/security/regex.french.md @@ -11,7 +11,7 @@ Some [OWASP examples](https://www.owasp.org/index.php/Regular_expression_Denial_

-### Code Example – Enabling SSL/TLS using the Express framework +### Code Example – Validating exponential time RegEx and using validators instead of RegEx ```javascript const saferegex = require('safe-regex'); diff --git a/sections/security/userpasswords.french.md b/sections/security/userpasswords.french.md new file mode 100644 index 000000000..08f5d8b6b --- /dev/null +++ b/sections/security/userpasswords.french.md @@ -0,0 +1,129 @@ +# Secure Your Users' Passwords + +### One Paragraph Explainer + +**Always** hash users' passwords as opposed to storing them as text; there are three options that depend on your use case for hashing user passwords. All the below functions need to be implemented properly to provide any security. (Reference the minimums or see the [IETF's recommendations](https://tools.ietf.org/id/draft-whited-kitten-password-storage-00.html#name-kdf-recommendations) for what parameters to use for each) You should use the recommended properties as a minimum, using higher parameters and a combination that's unique to your program can help mitigate some of the damage if someone ever runs off with your password hashes. Also, always add a [salt](https://auth0.com/blog/adding-salt-to-hashing-a-better-way-to-store-passwords/) (reproducible data, unique to the user and your system) to your passwords before you hash. + + - For the majority of use cases, the popular library [`bcrypt`](https://www.npmjs.com/package/bcrypt) can be used. (minimum: `cost:12`, password lengths must be <64) + - For a slightly harder native solution, or for unlimited size passwords, use the [`scrypt`](https://nodejs.org/dist/latest-v14.x/docs/api/crypto.html#crypto_crypto_scrypt_password_salt_keylen_options_callback) function. (minimums: `N:32768, r:8, p:1`) + - For FIPS/Government compliance use the older [`PBKDF2`](https://nodejs.org/dist/latest-v14.x/docs/api/crypto.html#crypto_crypto_pbkdf2_password_salt_iterations_keylen_digest_callback) function included in the native crypto module. (minimums: `iterations: 10000, length:{salt: 16, password: 32}`) + +(NOTE: `Math.random()` should **never** be used as part of any password or token generation due to its predictability. 
See [the advanced section](#randomness) for more info) + +### Code example - Bcrypt + +```javascript +const iterations = 12; +try { +// asynchronously generate a secure password + const hash = await bcrypt.hash('myPassword', iterations); + // Store secure hash in user record + + // compare a provided password input with saved hash + const match = await bcrypt.compare('somePassword', hash); + if (match) { + // Passwords match + } else { + // Passwords don't match + } +} catch { + logger.error('could not hash password.') +} +``` + +### Code example - SCrypt + +```javascript + const outSize = 64; + const hash = crypto.scryptSync('myUnlimitedPassword','someUniqueUserValueForSalt',outSize).toString('hex'); + + // Store secure hash in user record + + // compare a provided password input with saved hash + const match = hash === crypto.scryptSync('someUnlimitedPassword','derivedSalt',outSize).toString('hex'); + + if (match) { + // Passwords match + } else { + // Passwords don't match + } +``` + +### Code example - PBKDF2 (Password-Based Key Derivation Function, Crypto Spec v2.1) + +```javascript +try { + const outSize = 64; + const digest = 'blake2b512'; + const iterations = 12; + const hash = crypto.pbkdf2Sync('myPassword','someUniqueUserValueForSalt', iterations * 1000, digest, outSize).toString('hex'); + + // Store secure hash in user record + + // compare a provided password input with saved hash + const match = hash === crypto.pbkdf2Sync('somePassword','derivedSalt', iterations * 1000, digest, outSize).toString('hex'); + + if (match) { + // Passwords match + } else { + // Passwords don't match + } +} catch { + logger.error('could not hash password.') +} +``` + +### What other bloggers say + +From the blog by [Max McCarty](https://dzone.com/articles/nodejs-and-password-storage-with-bcrypt): +> ... it’s not just using the right hashing algorithm. I’ve talked extensively about how the right tool also includes the necessary ingredient of “time” as part of the password hashing algorithm and what it means for the attacker who’s trying to crack passwords through brute-force. + +From the blog [Troy Hunt - Creator of HaveIBeenPwned.com](https://www.troyhunt.com/we-didnt-encrypt-your-password-we-hashed-it-heres-what-that-means/): +> Saying that passwords are “encrypted” over and over again doesn’t make it so. They’re bcrypt hashes so good job there, but the fact they’re suggesting everyone changes their password illustrates that even good hashing has its risks. + +### Advanced & References + +#### Algorithms + +When storing user passwords, there are a few options to consider based on what the priority is. + +All of the below algorithms/functions need to be implemented properly to provide any security. Please see the [IETF's recommendations](https://tools.ietf.org/id/draft-whited-kitten-password-storage-00.html#name-kdf-recommendations) for what parameters to use for each. You should use the recommended properties as a minimum, using higher parameters and a combination that's unique to your program can differentiate the hash of the exact same password and salt from someone elses implementation to yours, mitigating some of the risk if someone ever runs off with your password hashes. + +The external dependency, [`bcrypt`](https://www.npmjs.com/package/bcrypt) is the most widely supported and should be used when possible, as when using `bcrypt`, a number of 'rounds' can be specified in order to provide a secure hash. 
This sets the work factor or the number of 'rounds' the data is processed for, and more hashing rounds lead to a more secure hash (although this comes at the cost of CPU time). The introduction of hashing rounds means that the brute force factor is significantly reduced, as password crackers are slowed down, increasing the time required to generate one attempt.
+
+The [`scrypt`](https://nodejs.org/dist/latest-v14.x/docs/api/crypto.html#crypto_crypto_scrypt_password_salt_keylen_options_callback) function included in the native crypto module can be used as it is a slight improvement over `bcrypt`, allowing for unlimited-length passwords, and does not add a dependency, though it needs more configuration and is newer and thus less scrutinized. `scrypt` uses cost (to increase CPU/memory cost), blockSize (to increase memory cost), and parallelization (to increase the cost of breaking it up into separate operations) together to define how secure it is, how long it will take, and what it's most secure against.
+
+If FIPS or other compliance is absolutely necessary, the older [`PBKDF2`](https://nodejs.org/dist/latest-v14.x/docs/api/crypto.html#crypto_crypto_pbkdf2_password_salt_iterations_keylen_digest_callback) function included in the native crypto module should be used. `PBKDF2` has a similar API to `bcrypt` in that it uses an iteration count to define the strength and the time to spend.
+
+On track to be added sometime in 2021 (through its addition to OpenSSL), the `Argon2` function is the winner of the [Password Hashing Competition](https://password-hashing.net/) and the top recommendation of [OWASP](https://github.com/OWASP/CheatSheetSeries/blob/master/cheatsheets/Password_Storage_Cheat_Sheet.md#modern-algorithms) and the [IETF](https://tools.ietf.org/id/draft-whited-kitten-password-storage-00.html#name-kdf-recommendations). Once added to the native crypto module, `Argon2` should be stable and take precedence.
+
+#### Salting
+
+No matter what the algorithm/function, always include some string that's unique to your system and the specific user. Some examples might be a combination of username/userID and your app name, or the user's email and your business email. However, this should also be considered when choosing your hashing algorithm/function, since BCrypt limits you to 64 characters, whereas the more complicated SCrypt lets you use as much salt and password as you want.
+
+##### Why use salt?
+
+Adding salt changes the hash and thus makes it different from a hash of the same password in someone else's system. If someone uses the same password for multiple sites, and a hash of their password is obtained from someone else's data breach, they won't be able to match it to the hash in your database. When everyone uses hashes, it becomes nearly impossible for attackers to identify patterns of password reuse.
+
+#### Password Length
+
+If your password plus salt has to come in under a limit, and if you use a good salt, your users' passwords will be even more limited. One way around this, which can also be good in general, is to pre-hash the passwords. It can create an administrative burden, but if you can commit to consistently using a single, simple hash on the front-end, you can set the pre-hash to generate hex strings of an exact length from the password before transmitting it to your server. This means you can have strong checks on your API; for example, only allowing hex characters of exactly 256 characters in length, but still giving users the ability to use passwords of any length with any characters.
(You still need to use a good enough hashing that you don't accidentally create collisions where two different passwords create the same hash, it doesn't hurt to use more secure hashing for this, it just takes longer) + +Example Browser code would be `const hash = crypto.subtle.digest('sha-256', password)`; + +#### Randomness + +Whenever possible, leave the generation of randomness to the algorithms you choose. Notice, no mention of an alternative to `Math.random()` was provided, that's because you should even avoid using `crypto.random()` yourself, as *Randomness* is a special kind of limited resource on a computer, and abusing it can cause problems for your program and even other programs on the machine. + +#### How BCrypt/SCrypt Work + +Both BCrypt/SCrypt use iterations since their premise is that if it takes you X amount of time and resources to hash once, and the attacker X^some-big-number to break the hash with brute force, then if you hash the hash of a hash etc. then you're greatly increasing the magnitude of resources an attacker would have to expend to break your hash. SCrypt also has parameters for block/chunk size and parallelism to try and add "hardness" for attackers that try to assume they only need a certain amount of RAM or CPU cores, though the effectiveness of these parameters is hotly debated. + +#### References + + - IETF - Password Storage Reccomendations: https://tools.ietf.org/id/draft-whited-kitten-password-storage-00.html + - OWASP - Password Storage CheatSheet: https://github.com/OWASP/CheatSheetSeries/blob/master/cheatsheets/Password_Storage_Cheat_Sheet.md + - auth0 - What is Password Salt: https://auth0.com/blog/adding-salt-to-hashing-a-better-way-to-store-passwords/ + - Troy Hunt - What's the difference between Hashing and Encryption: https://www.troyhunt.com/we-didnt-encrypt-your-password-we-hashed-it-heres-what-that-means/ + - Password Hashing Competition: https://password-hashing.net/ diff --git a/sections/security/validation.french.md b/sections/security/validation.french.md index 4dbd65621..c7185a6dd 100644 --- a/sections/security/validation.french.md +++ b/sections/security/validation.french.md @@ -2,7 +2,7 @@ ### One Paragraph Explainer -Validation is about being very explicit on what payload our app is willing to accept and failing fast should the input deviates from the expectations. This minimizes an attackers surface who can no longer try out payloads with a different structure, values and length. Practically it prevents attacks like DDOS (code is unlikely to fail when the input is well defined) and Insecure Deserialization (JSON contain no surprises). Though validation can be coded or rely upon classes and types (TypeScript, ES6 classes) the community seems to increasingly like JSON-based schemas as these allow declaring complex rules without coding and share the expectations with the frontend. JSON-schema is an emerging standard that is supported by many npm libraries and tools (e.g. [jsonschema](https://www.npmjs.com/package/jsonschema), [Postman](http://blog.getpostman.com/2017/07/28/api-testing-tips-from-a-postman-professional/)), [joi](https://www.npmjs.com/package/joi) is also highly popular with sweet syntax. Typically JSON syntax can't cover all validation scenario and custom code or pre-baked validation frameworks like [validator.js](https://github.com/chriso/validator.js/) come in handy. 
Regardless of the chosen syntax, ensure to run the validation as early as possible - For example, by using Express middleware that validates the request body before the request is passed to the route handler +Validation is about being very explicit on what payload our app is willing to accept and failing fast should the input deviate from the expectations. This minimizes the attacker's surface who can no longer try out payloads with a different structure, values and length. Practically it prevents attacks like DDOS (code is unlikely to fail when the input is well defined) and Insecure Deserialization (JSON contain no surprises). Though validation can be coded or relied upon classes and types (TypeScript, ES6 classes) the community seems to increasingly like JSON-based schemas as these allow declaring complex rules without coding and share the expectations with the frontend. JSON-schema is an emerging standard that is supported by many npm libraries and tools (e.g. [jsonschema](https://www.npmjs.com/package/jsonschema), [Postman](http://blog.getpostman.com/2017/07/28/api-testing-tips-from-a-postman-professional/)), [joi](https://www.npmjs.com/package/@hapi/joi) is also highly popular with sweet syntax. Typically JSON syntax can't cover all validation scenario and custom code or pre-baked validation frameworks like [validator.js](https://github.com/chriso/validator.js/) come in handy. Regardless of the chosen syntax, ensure to run the validation as early as possible - For example, by using Express middleware that validates the request body before the request is passed to the route handler ### Example - JSON-Schema validation rules @@ -33,7 +33,7 @@ Validation is about being very explicit on what payload our app is willing to ac const JSONValidator = require('jsonschema').Validator; class Product { - + validate() { const v = new JSONValidator(); diff --git a/sections/testingandquality/3-parts-in-name.french.md b/sections/testingandquality/3-parts-in-name.french.md index d28fcdba5..76b45bdf5 100644 --- a/sections/testingandquality/3-parts-in-name.french.md +++ b/sections/testingandquality/3-parts-in-name.french.md @@ -12,6 +12,8 @@ Un rapport de test devrait indiquer si la révision actuelle de l'application sa (3) Quel est le résultat attendu ? Par exemple, le nouveau produit n'est pas approuvé

+

+ ### Exemple de code : un nom de test qui comprend 3 parties ```javascript //1. unité testée @@ -47,6 +49,6 @@ describe('Service Produits', () => { [Extrait du blog de « 30 bonnes pratiques de test avec Node.js » par Yoni Goldberg](https://medium.com/@me_37286/yoni-goldberg-javascript-nodejs-testing-best-practices-2b98924c9347) - ![Un exemple de rapport de test](https://github.com/i0natan/nodebestpractices/blob/master/assets/images/test-report-like-requirements.jpeg "Un exemple de rapport de test") + ![Un exemple de rapport de test](https://github.com/goldbergyoni/nodebestpractices/blob/master/assets/images/test-report-like-requirements.jpeg "Un exemple de rapport de test")

\ No newline at end of file diff --git a/sections/testingandquality/citools.french.md b/sections/testingandquality/citools.french.md index fc4a311e5..9e27ea763 100644 --- a/sections/testingandquality/citools.french.md +++ b/sections/testingandquality/citools.french.md @@ -42,10 +42,10 @@ jobs: ### Circle CI - CI du cloud avec une configuration presque nul -![alt text](https://github.com/i0natan/nodebestpractices/blob/master/assets/images/circleci.png "Gestion des erreurs API") +![alt text](https://github.com/goldbergyoni/nodebestpractices/blob/master/assets/images/circleci.png "Gestion des erreurs API") ### Jenkins - CI sophistiqué et robuste -![alt text](https://github.com/i0natan/nodebestpractices/blob/master/assets/images/jenkins_dashboard.png "Gestion des erreurs API") +![alt text](https://github.com/goldbergyoni/nodebestpractices/blob/master/assets/images/jenkins_dashboard.png "Gestion des erreurs API")

diff --git a/sections/testingandquality/refactoring.french.md b/sections/testingandquality/refactoring.french.md index 2ce6c18fb..985d39741 100644 --- a/sections/testingandquality/refactoring.french.md +++ b/sections/testingandquality/refactoring.french.md @@ -29,15 +29,15 @@ seront toujours un problème si la qualité implicite de votre JavaScript est ma ### Exemple : analyse de méthodes complexes avec CodeClimate (commercial) -![alt text](https://github.com/i0natan/nodebestpractices/blob/master/assets/images/codeanalysis-climate-complex-methods.PNG "Analyse de méthodes complexes") +![alt text](https://github.com/goldbergyoni/nodebestpractices/blob/master/assets/images/codeanalysis-climate-complex-methods.PNG "Analyse de méthodes complexes") ### Exemple : tendances et historique de l'analyse de code avec CodeClimate (commercial) -![alt text](https://github.com/i0natan/nodebestpractices/blob/master/assets/images/codeanalysis-climate-history.PNG "Historique d'analyse de code") +![alt text](https://github.com/goldbergyoni/nodebestpractices/blob/master/assets/images/codeanalysis-climate-history.PNG "Historique d'analyse de code") ### Exemple : résumé et tendances de l'analyse de code avec SonarQube (commercial) -![alt text](https://github.com/i0natan/nodebestpractices/blob/master/assets/images/codeanalysis-sonarqube-dashboard.PNG "Historique d'analyse de code") +![alt text](https://github.com/goldbergyoni/nodebestpractices/blob/master/assets/images/codeanalysis-sonarqube-dashboard.PNG "Historique d'analyse de code")

diff --git a/sections/testingandquality/test-middlewares.french.md b/sections/testingandquality/test-middlewares.french.md new file mode 100644 index 000000000..a550e9845 --- /dev/null +++ b/sections/testingandquality/test-middlewares.french.md @@ -0,0 +1,30 @@ +# Testez vos middlewares de manière isolée + +

+

### Un paragraphe d'explication

Beaucoup évitent les tests de Middleware parce qu'ils représentent une petite partie du système et nécessitent un serveur Express en ligne. Ces deux raisons sont fausses - les Middlewares sont petits mais affectent toutes les requêtes ou la plupart d'entre elles, et peuvent être testés facilement comme de pures fonctions qui reçoivent des objets JS `{req, res}`. Pour tester une fonction d'un Middleware, il suffit de l'appeler et d'espionner ([en utilisant Sinon par exemple](https://www.npmjs.com/package/sinon)) l'interaction avec les objets {req, res} pour s'assurer que la fonction a effectué la bonne action. La bibliothèque [node-mocks-http](https://www.npmjs.com/package/node-mocks-http) va encore plus loin : elle fabrique les objets {req, res} tout en espionnant leur comportement. Par exemple, elle peut vérifier que le statut http défini sur l'objet res correspond à celui attendu (voir l'exemple ci-dessous).

+

### Exemple de code : tester le middleware de manière isolée

```javascript
+// le middleware que nous voulons tester
+const unitUnderTest = require("./middleware");
+const httpMocks = require("node-mocks-http");
+// Syntaxe Jest, équivalente à describe() & it() dans Mocha
+test("Une requête sans en-tête d'authentification doit renvoyer le statut http 403", () => {
+  const request = httpMocks.createRequest({
+    method: "GET",
+    url: "/user/42",
+    headers: {
+      authentication: ""
+    }
+  });
+  const response = httpMocks.createResponse();
+  unitUnderTest(request, response);
+  expect(response.statusCode).toBe(403);
+});
+```
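
À titre d'illustration (esquisse hypothétique : ce fichier `middleware.js` ne provient pas du dépôt), le middleware testé ci-dessus pourrait ressembler à ceci :

```javascript
// middleware.js - esquisse hypothétique du middleware vérifié par le test ci-dessus
module.exports = function verifierAuthentification(req, res, next) {
  const enTeteAuthentification = req.headers.authentication;
  if (!enTeteAuthentification) {
    // Pas d'en-tête d'authentification : la requête est refusée avec le statut 403
    return res.status(403).end();
  }
  // Ici, on validerait normalement le jeton avant de poursuivre la chaîne des middlewares
  next();
};
```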